Posts posted by rukiftw

  1. Any way to get access to br0 dockers?

    With dsmith44's Tailscale docker version and host access to custom networks enabled, it can reach br0 dockers. So far I have been unable to get this plugin to do the same. Any ideas?

  2. 7 hours ago, guy.davis said:

     

    Hi, not quite sure what is being measured here.  However, here are the log locations if you want to truncate them.  Most are auto-rotated.  Hope this helps.

     

    The image shows that the chia and chives forks are both writing to the docker image. The other forks do not.

     

    The log locations provided are outside of the docker image. Any ideas what could be writing to it?
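
    If it helps narrow it down, docker diff lists everything added or changed in a container's writable layer since it was created, which is exactly what grows the docker image. A rough sketch, assuming the container is named machinaris (substitute whatever docker ps shows):

    # A = added, C = changed, D = deleted inside the container's writable layer
    docker diff machinaris | grep -i log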

  3. On 11/23/2021 at 7:04 AM, Helmonder said:

    how to sell my Chia ? 

     

    I use gate.io. No KYC. I transfer my XCH (Chia) to them, sell it for USDT, and then plan what to do with it. I was trading it to MATIC and investing on Polygon for most of this year, but recently started playing with the Fantom and Avalanche networks.

     

    I really like that it supports cross-chain transfers with short waits (minutes) and low fees.

     

    It is not a US site, so it may seem sketchy, but it is quite popular in Asian countries.

  4. 57 minutes ago, guy.davis said:

     

    No problem.  I'm always on the lookout for ways to improve Machinaris and blockchain farming.

    Fyi, I'm running ten blockchains on a single Unraid system with 48 GB total.  This Unraid system is doing a lot more than just Machinaris, including Plex etc.  

     

    [screenshot]

     

    Still have lots of headroom: 

    [screenshot]

     

     

     

    Cryptodoge is new and not listed on CA. Is it still in testing?

  5. My search-fu failed me. Questions:

    1. The 'check plots' function has been removed from the farmers tab. I am unable to find it. Where was it moved to?
    2. How do I disable certain alerts? Silicoin is generating 2213 alerts per day for being 'offline' while it syncs.
    3. Setting up alt coin harvesters (on separate machines) doesn't function in any way.
      1. They do not communicate.
      2. Size is not reported.
      3. Unable to access logs.
      4. Machinaris does see them, at times, but then replaces them with another alt coin harvester. 
        1. This is similar to the DNS issues a few revisions back.
        2. Assigning manual IPs does not resolve it.
  6. 13 hours ago, Mr_4braham said:

    Can I harvest my existing plots with the new chives, hddcoin and nchain as well? With the new separate flax container, does that mean flax will be removed from the original machinaris container? Because it looks like I can still harvest it from the main one.

    I started experimenting with this last night. So far there is no way to interact with them.

     

    To get them to sync and report in, the following is required:

    1. Its own IP. The default bridge mode will not work, as the worker's name is linked to its IP address.
    2. Adding the following to extra parameters:
      1. -h "<dockername>" -e TZ=<local> -e worker_address=<Worker IP> -e controller_host=<Machinaris IP> -e controller_api_port=8927
        1. EX: -h "chives" -e TZ=America/Los_Angeles -e worker_address=192.168.1.24 -e controller_host=192.168.1.2 -e controller_api_port=8927

    I also have not been able to figure out what the worker_api_port is for. None of them are requested by docker, so the port isn't actually opened. When manually opening it via -p 8931:8931 (for chives), nothing changes.
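
    For reference, here is a rough sketch of how those pieces come together as a full docker run for a chives worker. Treat it as a guess at a working layout, not official instructions: the image name, appdata path, and plots volume are assumptions from my own setup, and the IPs are the example values above.

    docker run -d -t --name chives -h "chives" -v "/mnt/user/appdata/chives/:/root/.chia" -v "/mnt/user/chia/:/plots" -e TZ=America/Los_Angeles -e worker_address=192.168.1.24 -e controller_host=192.168.1.2 -e controller_api_port=8927 -e plots_dir=/plots ghcr.io/guydavis/machinaris-chives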

     

    It appears the built-in flax and the add-on flax are different.

     

    I was unable to find documentation on any of this, so everything done was a guess.

  7. On 8/26/2021 at 7:07 PM, guy.davis said:

     

    Hi, please add a Worker IP Address to the Wizard that opens from New Worker button.  Right now, the Machinaris fullnode/controller is trying to resolve a hostname on your LAN of "chiaharvest" which may not be resolving to a local IP.  By setting an IP for the worker, then the controller will use that IP without any DNS lookup.  Hope this helps!

     

    I swear I already tried that, but it's working now. Your help is highly appreciated.

     

    For the next person: add the following to the template from the CA app, under Extra Parameters:

     

    -t -e TZ=America/Los_Angeles -e worker_address=10.0.0.3 -e farmer_address=10.0.0.2 -e farmer_port=8447 -e flax_farmer_port=6888 -e farmer_pk=* -e pool_contract_address=* -e controller_host=10.0.0.2 -e controller_api_port=8927

     

    Then fill in the rest of the template as normal.
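
    One sanity check that seems to work for me once the worker is up (run from the fullnode container's console; "machinaris" is just what my container happens to be named, and the farmer's debug log usually mentions connected harvesters by IP):

    docker exec -it machinaris chia farm summary
    # and/or look for the worker's IP in the farmer log
    docker exec -it machinaris grep 10.0.0.3 /root/.chia/mainnet/log/debug.log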

  8. On 8/24/2021 at 7:02 AM, rik3k said:

     

     

    I have been running into this same issue but ultimately I am still bottlenecked by the transfer process. I plot in approx 45 minutes but the transfer process takes a bit longer (around 20MB/s). When I complete a 2nd plot, I now have Unraid transferring two plots on the same 20MB/s which slows everything down again.

     

    The ideal solution would be for Unraid to select a 2nd drive for the 2nd plot. Currently it always seems to pick the same drive for both file transfers. Any guidance on how to get Unraid to automatically pick another drive?

    I use two drives in a RAID 1 for my transfer cache. They read/write at 160 MB/s. Then mover runs at 11 PM. The array with dual parity reads at 150 MB/s, but write is only 35 MB/s. Reconstruct mode pushes it to 55 MB/s. My dockers are on a RAID 1 SATA SSD cache pool, and the plotter is on an NVMe drive with high endurance. Using this setup, a plot is created every 36 minutes, start to finish.

  9. Any suggestions on getting the harvester+plotter working? 

    Error: Connection Refused

     

    Certs are in both the farmer_ca and \mainnet\config\ssl\ca folders.

    IPs are used instead of hostnames.

     

    Here are the docker commands:

    Draft 1: generated on the full node, using the New Worker button.

    docker run -d -t --name machinaris-plotter -h chiaharvest -v /mnt/user/appdata/machinaris/:/root/.chia -v "/mnt/user/chia/:/plots" -v "/mnt/ssdnvme/:/plotting" -e TZ=America/Los_Angeles -e farmer_address=10.0.0.2 -e farmer_port=8447 -e flax_farmer_port=6888 -e farmer_pk=* -e pool_pk=* -e pool_contract_address=* -e mode=harvester,plotter -e controller_host=10.0.0.2 -e controller_api_port=8927 -e plots_dir=/plots -e blockchains=chia,flax -p 8926:8926 -p 8927:8927 ghcr.io/guydavis/machinaris

    Draft 2: (https://github.com/guydavis/machinaris/wiki/Workers)

    docker run -d -t --name "machinaris" -h "chiaharvester" -v "/mnt/user/appdata/machinaris/:/root/.chia" -v "/mnt/user/chia/:/plots" -v "/mnt/ssdnvme/:/plotting" -e TZ="America/Los_Angeles" -e farmer_address="10.0.0.2" -e farmer_pk="*" -e pool_pk="*" -e pool_contract_address="*" -e mode="harvester+plotter" -e controller_host="10.0.0.2" -e plots_dir=/plots -e blockchains=chia -p 8926:8926 -p 8927:8927 ghcr.io/guydavis/machinaris

     

    There is no firewall present that could be blocking communications.

     

    [screenshot]
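
    For anyone hitting the same "Connection Refused", a quick reachability test from the worker container's console (pure bash, so no extra tools are needed, assuming the console shell is bash; the IPs and ports are the ones from the commands above):

    (echo > /dev/tcp/10.0.0.2/8447) && echo "farmer port reachable" || echo "farmer port refused"
    (echo > /dev/tcp/10.0.0.2/8927) && echo "controller API reachable" || echo "controller API refused"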

  10. For me, multiple arrays would be more useful.

    1. You can already obtain ZFS with the plugin or by passing a controller card to a VM.
      1. Plenty of guides out there for both
    2. Large drives in today's market are insanely expensive.
      1. 8 TB for $150, ouch.
    3. Allows use of already obtained hardware.
      1. I have around 50x 2 TB SAS drives that are sitting unused.
    4. Helps with the slow write performance by splitting data types to their respective arrays.
      1. Dual parity drops array transfers to 40 MB/s.

    As future features, I would prefer:

    1. Parity reworked.
      1. No idea if it is even possible, but have a single parity drive, and then put that drive in a mirror set with one or more drives. This removes the write penalty of having two parity drives, but actually gives you as many parity drives as you want to mirror. There are likely problems with this setup that I do not see.
    2. Tiered storage.
      1. Have a live file system: t0 on NVMe for high transfer rate items (10 Gbit); t1 on SSDs, where VMs live; t2 is a RAID 10 array, used for most transfers; t3 is the parity array. Data is moved between the four as necessary based upon frequency of access. There is a manual way of setting this up now with multiple cache pools, but I would prefer an automatic method.
  11. 5 hours ago, HATEME said:

    @guy.davis I installed the test version, but can't figure out how to turn on madmax. There's only one option in the plotter drop-down menu, and even after changing the config to suit a madmax plotter, with pool/farm key, it's not showing a log file, but I can tell it's only using 2 threads vs. the full CPU.

    EDIT: the config file is saved just like your wiki, but plotting never starts and goes straight to stopped. The log from the drop-down menu says:

    nohup: ignoring input
    Namespace(cmd='kill', idprefix=['efb07080'])
    Pausing PID 1556, plot id efb07080bad55792cdca8e0813f977d528a2515adfc55864e8e210bcffe7dc6d
    Will kill pid 1556, plot id efb07080bad55792cdca8e0813f977d528a2515adfc55864e8e210bcffe7dc6d
    Will delete 211 temp files
    Are you sure? ("y" to confirm):
    killing...
    cleaning up temp files...

     

    I followed this guide: https://github.com/guydavis/machinaris/wiki/Plotman#plotting

    and

    https://github.com/guydavis/machinaris/wiki/MadMax

     

    It does not work. Machinaris doesn't know what the madmax type is.

     

    Error:

    Traceback (most recent call last):
      File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 40, in get_validated_configs
        loaded = schema.load(config_objects)
      File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 714, in load
        return self._do_load(
      File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 896, in _do_load
        raise exc
    marshmallow.exceptions.ValidationError: {'plotting': {'e': ['Missing data for required field.'], 'k': ['Missing data for required field.'], 'job_buffer': ['Missing data for required field.'], 'type': ['Unknown field.']}}

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/chia-blockchain/venv/bin/plotman", line 8, in <module>
        sys.exit(main())
      File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/plotman.py", line 137, in main
        cfg = configuration.get_validated_configs(config_text, config_path)
      File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 42, in get_validated_configs
        raise ConfigurationException(
    plotman.configuration.ConfigurationException: Config file at: '/root/.chia/plotman/plotman.yaml' is malformed
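
    Going by that validation error, the bundled plotman schema doesn't recognize the "type" key yet and still wants the old chia-style fields in the plotting section. For comparison, a minimal sketch of a plotting block that this older schema should accept; k, e and job_buffer are the fields the error calls required, and the values shown are only illustrative defaults, not a madmax config:

    plotting:
            k: 32
            e: False
            job_buffer: 3389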

     

  12. On 3/1/2021 at 9:33 AM, CarlM007 said:

    Hi,

     

    I created my certificates for my subdomains around 3 months ago and all worked fine; upon renewing them I'm getting the below message.

     

    [screenshot of the error message]

     

    I seem to have this problem constantly, and generally reinstalling Nginx seems to fix it, but this time it hasn't. Any ideas why this problem is occurring?

     

    I've updated Let's Encrypt to SWAG and also have DuckDNS and NGINX Proxy Manager installed (the latter is where I'm seeing the error in the logs).

     

    Any help would be greatly appreciated.

     

    Thanks in advance.

     

    Same problem. The command to renew is no longer working. You can roll back to the version released before 02/08 or create a new docker with the same configuration. Both worked for me. I was unsuccessful in using my previous appdata folder; a brand-new install was required.
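
    For anyone else needing the rollback: the generic way to do it is to stop using the :latest tag and pin the container to an explicit older tag in the Unraid template's Repository field. The command line equivalent is just the following, where the repository and tag are placeholders for whatever your template shows and the release you want:

    docker pull <repository>:<older-tag>
    # then edit the container in Unraid and set Repository to the same <repository>:<older-tag>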

  13. The pihole docker does not display the console. It does respond to ping.

    The docker log says there is a lighttpd error; however, in the error log it points to, nothing is present.

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] 01-resolver-resolv: applying...
    [fix-attrs.d] 01-resolver-resolv: exited 0.
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 20-start.sh: executing...
    ::: Starting docker specific checks & setup for docker pihole/pihole
    [i] Existing PHP installation detected : PHP version 7.3.19-1~deb10u1
    
    
    [i] Installing configs from /etc/.pihole...
    [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
    chown: cannot access '': No such file or directory
    chmod: cannot access '': No such file or directory
    chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
    Setting password: admin
    + pihole -a -p admin ******
    [✓] New password set
    Using custom DNS servers: 127.1.1.1#5153 & 127.1.1.1#5153
    DNSMasq binding to default interface: eth0
    Added ENV to php:
    "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
    
    "ServerIP" => "192.168.1.3",
    "VIRTUAL_HOST" => "192.168.1.3",
    Using IPv4
    ::: setup_blocklists now setting default blocklists up:
    ::: TIP: Use a docker volume for /etc/pihole/adlists.list if you want to customize for first boot
    ::: Blocklists (/etc/pihole/adlists.list) now set to:
    https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
    https://mirror1.malwaredomains.com/files/justdomains
    ::: Testing pihole-FTL DNS: FTL started!
    ::: Testing lighttpd config: Syntax OK
    ::: All config checks passed, cleared for startup ...
    ::: Docker start setup complete
    [i] Creating new gravity database
    [i] Migrating content of /etc/pihole/adlists.list into new database
    [✗] DNS resolution is currently unavailable
    [cont-init.d] 20-start.sh: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] waiting for services.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
    
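
    The "[✗] DNS resolution is currently unavailable" line right before the exit 1 looks like the actual failure; the lighttpd part never gets a chance to matter. One guess worth trying (an assumption on my part, based on the container needing a working resolver during its first gravity build, not anything from the pihole template) is giving the container explicit DNS servers via Extra Parameters:

    --dns 127.0.0.1 --dns 1.1.1.1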

     

  14. 1 hour ago, tola5 said:

    368 DHT nodes

    And for peers not on the list, it shows 0.

    And trackers: some work, others don't.

    It started to upload now, but really badly, only around 150 KB/s. Could be unlucky, but normally it's a few MB/s.

     

    Please note that in order for uploading to work:

    1. You have to have port forwarding through your VPN provider and set up the client to use those ports (see the sketch after this list).
    2. Upload speed will be based on your VPN's location and your speed to it.
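
    To make point 1 concrete, here is a sketch of what I mean by "use those ports": take the port your VPN provider forwards (54321 below is just a made-up example) and set it as qBittorrent's incoming connection port, either in the WebUI connection settings or in qBittorrent.conf before the container starts. The exact key name can differ between qBittorrent versions, so treat this as an assumption:

    [Preferences]
    Connection\PortRangeMin=54321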
    On 8/15/2020 at 7:07 PM, Dyon said:

    Welcome to my fourth Docker Container that I've ever created. qbittorrentvpn, a fork of MarkusMcNugen's qBittorrentvpn, but with WireGuard support!

     


     

    Overview: Docker container which runs the latest qBittorrent-nox client while connecting to WireGuard (experimental) or OpenVPN with an iptables killswitch to prevent IP leakage when the tunnel goes down. This project is based on my DyonR/jackettvpn, which is based on MarkusMcNugen's qBittorrentvpn.

     

    Base: Debian 10.5-slim

    Automated Build: Not yet

    Application: https://github.com/qBittorrent/qBittorrent

    Docker Hub: https://hub.docker.com/r/dyonr/qbittorrentvpn/

    GitHub: https://github.com/DyonR/docker-qbittorrentvpn

     

    Because this project is quite similar to MarkusMcNugen's Project, I asked permission of him beforehand.

    When WireGuard is active, local access is blocked. This includes the WebUI. I have tried:

    1. changing the webui port from 8080 to another port 7979/8181
    2. editing qBittorrent.conf to change the webui port
    3. manually passing the port as an Extra parameter 
    4. giving the container its own ip
    5. adding my local network to the wg conf
      1. When doing this, WireGuard does not connect. The error message is "RTNETLINK answers: File exists". See the attached image.

    These are the same issues I was unable to overcome in my post here:

     

    When WireGuard is off or OpenVPN is used, there are no issues.

     

    wg-error.png

  15. The IP-tables "mark" rules only work when WireGuard routes everything through the tunnel, as it creates and array to block all ips, but 0.0.0.0/0.

     

    However, I think the error message is related to something else, like the VPN adapter not being found or a character issue from copy/paste.
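
    To make the first part concrete: the killswitch behaviour appears to assume a full-tunnel peer config like the sketch below (keys, addresses and endpoint are placeholders). With AllowedIPs = 0.0.0.0/0, wg-quick installs the fwmark catch-all routing that the mark rules rely on; appending extra subnets (such as a LAN range) makes it add ordinary per-subnet routes instead, and a route that already exists for that prefix is one common source of the "RTNETLINK answers: File exists" message.

    [Interface]
    PrivateKey = <client private key>
    Address = 10.64.0.2/32
    DNS = 10.64.0.1

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0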

     
