rukiftw

Everything posted by rukiftw

  1. Any way to get access to br0 dockers? With dsmith44's Tailscale docker version and docker host network access enabled, it can access br0 dockers. So far I have been unable to get this plugin to do the same. Any ideas?
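     For reference, my reading of that working setup: the Tailscale container runs on the host network and Unraid's host access to custom networks is enabled, something like the sketch below (image name and paths are placeholders, not the exact template):
       docker run -d --name=tailscale \
         --net=host \                                   # host networking is what lets it reach br0 containers
         --cap-add=NET_ADMIN \
         --device /dev/net/tun \                        # tun device for the tunnel
         -v /mnt/user/appdata/tailscale:/var/lib/tailscale \
         <tailscale-image>                              # placeholder, whatever dsmith44's template points at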
  2. The image shows that the Chia and Chives forks are both writing to the docker image. The other forks do not. The log locations provided are outside of the docker image. Any ideas what could be writing to it?
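     For anyone digging into the same thing, the writable layer of each container can be checked with standard docker commands to see which paths are growing (the container name below is just an example):
       docker ps -s              # the SIZE column shows each container's writable-layer usage
       docker diff machinaris    # lists files added/changed inside that container's layer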
  3. Am I blind? I am unable to find the setup guide. Can someone point me in the right direction?
  4. I use gate.io. No KYC. I transfer my XCH (Chia) to them, sell it for USDT, and then plan what to do with it. I was trading it to MATIC and investing on Polygon for most of this year, but recently started playing with the Fantom and Avalanche networks. I really like that it supports cross transfers with minor waits (minutes) and low fees. It is not a USA site, so it may seem sketchy, but it is quite popular in Asian countries.
  5. My search-fu failed me. Questions:
     The 'check plots' function has been removed from the farmers tab and I am unable to find it. Where was it moved to?
     How do I disable certain alerts? Silicoin is generating 2213 alerts per day for being 'offline' while it syncs.
     Setting up alt coin harvesters (on separate machines) doesn't function in any way: they do not communicate, size is not reported, and I am unable to access logs. Machinaris does see them at times, but then replaces them with another alt coin harvester. This is similar to the DNS issues a few revisions back. Assigning manual IPs does not resolve it.
  6. I started experimenting with this last night. So far there is no way to interact with them. To get them to sync and report in, the following is required:
     Own IP. The default bridge mode will not work, as the worker's name is linked to its IP address.
     Adding the following to Extra Parameters: -h "<dockername>" -e TZ=<local> -e worker_address=<worker IP> -e controller_host=<Machinaris IP> -e controller_api_port=8927
     Example: -h "chives" -e TZ=America/Los_Angeles -e worker_address=192.168.1.24 -e controller_host=192.168.1.2 -e controller_api_port=8927
     I also have not been able to figure out what the worker_api_port is for. None of the ports are requested by docker, so the port isn't actually opened. When manually opening it via -p 8931:8931 (for Chives), nothing changes. It appears the built-in Flax and the add-on Flax are different. I was unable to find documentation on any of this, so everything done was a guess.
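     Putting those pieces together, a full command for a Chives worker would look roughly like this (the image name and mounts are my assumptions based on the main Machinaris container; the IPs match the example above):
       docker run -d -t --name chives-worker -h "chives" \
         --network br0 --ip 192.168.1.24 \            # own IP so the worker name maps to the right address
         -v /mnt/user/appdata/machinaris-chives/:/root/.chia \
         -v /mnt/user/chia/:/plots \
         -e TZ=America/Los_Angeles \
         -e worker_address=192.168.1.24 \
         -e controller_host=192.168.1.2 \
         -e controller_api_port=8927 \
         ghcr.io/guydavis/machinaris-chives           # assumed image name, check the CA template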
  7. I swear I already tried that, but it's working now. Your help is highly appreciated. For the next person: you add the following to the template from the CA app, under Extra Parameters: -t -e TZ=America/Los_Angeles -e worker_address=10.0.0.3 -e farmer_address=10.0.0.2 -e farmer_port=8447 -e flax_farmer_port=6888 -e farmer_pk=* -e pool_contract_address=* -e controller_host=10.0.0.2 -e controller_api_port=8927, then fill in the rest of the template as normal.
  8. I use two drives in a RAID 1 for my transfer cache. They read/write at 160 MB/s. The mover then runs at 11pm. The array with dual parity reads at 150 MB/s, but write is only 35 MB/s; reconstruct write mode pushes it to 55 MB/s. My dockers are on a RAID 1 SATA SSD cache pool, and the plotter is on an NVMe drive with high endurance. Using this setup, a plot is created every 36 minutes, start to finish.
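     For reference, that pace works out to 24 × 60 / 36 = 40 plots per day, which at roughly 101.4 GiB per k32 plot is about 4 TiB of new plots per day.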
  9. Any suggestions on getting the harvester+plotter working? Error: Connection Refused. Certs are in both the farmer_ca and \mainnet\config\ssl\ca folders. IPs are used instead of hostnames. Here are the docker commands:
     Draft 1 (generated on the full node, using the new worker button): docker run -d -t --name machinaris-plotter -h chiaharvest -v /mnt/user/appdata/machinaris/:/root/.chia -v "/mnt/user/chia/:/plots" -v "/mnt/ssdnvme/:/plotting" -e TZ=America/Los_Angeles -e farmer_address=10.0.0.2 -e farmer_port=8447 -e flax_farmer_port=6888 -e farmer_pk=* -e pool_pk=* -e pool_contract_address=* -e mode=harvester,plotter -e controller_host=10.0.0.2 -e controller_api_port=8927 -e plots_dir=/plots -e blockchains=chia,flax -p 8926:8926 -p 8927:8927 ghcr.io/guydavis/machinaris
     Draft 2 (https://github.com/guydavis/machinaris/wiki/Workers): docker run -d -t --name "machinaris" -h "chiaharvester" -v "/mnt/user/appdata/machinaris/:/root/.chia" -v "/mnt/user/chia/:/plots" -v "/mnt/ssdnvme/:/plotting" -e TZ="America/Los_Angeles" -e farmer_address="10.0.0.2" -e farmer_pk="*" -e pool_pk="*" -e pool_contract_address="*" -e mode="harvester+plotter" -e controller_host="10.0.0.2" -e plots_dir=/plots -e blockchains=chia -p 8926:8926 -p 8927:8927 ghcr.io/guydavis/machinaris
     There is no firewall present to be blocking communications.
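     One thing that might help narrow down the Connection Refused: a quick port check from inside the worker container against the full node (nc may need installing in the container first; the ports are the ones from the commands above):
       docker exec -it machinaris-plotter bash
       nc -zv 10.0.0.2 8447    # chia farmer port
       nc -zv 10.0.0.2 6888    # flax farmer port
       nc -zv 10.0.0.2 8927    # machinaris controller API port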
  10. For me, multiple arrays would be more useful.
      You can already obtain ZFS with the plugin or by passing a controller card to a VM; plenty of guides out there for both.
      Large drives in today's market are insanely expensive: 8TB for $150, ouch. Multiple arrays allow use of already obtained hardware; I have around 50x 2TB SAS drives that are sitting unused.
      It helps with the slow write performance by splitting data types to their respective arrays. Dual parity drops array transfers to 40 MB/s.
      As future features, I would prefer:
      Parity reworked. No idea if it is even possible, but have one parity drive, and put that drive in a mirror set with other drive(s). This removes the write penalty of having two parity drives, but gives you as many parity drives as you want to mirror. There are likely problems with this setup that I do not see.
      Tiered storage. Have a live file system: t0 on NVMe, for high-transfer-rate items (10gbit); t1 on SSDs, where VMs live; t2 is an r10 array, used for most transfers; t3 is the parity array. Data is moved between the four as necessary based upon frequency of access. There is a manual way of setting this up now with the multiple cache pools, but I would prefer an automatic method.
  11. I followed this guide: https://github.com/guydavis/machinaris/wiki/Plotman#plotting and https://github.com/guydavis/machinaris/wiki/MadMax. It does not work; Machinaris doesn't know what the madmax type is. Error:
      Traceback (most recent call last):
        File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 40, in get_validated_configs
          loaded = schema.load(config_objects)
        File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 714, in load
          return self._do_load(
        File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 896, in _do_load
          raise exc
      marshmallow.exceptions.ValidationError: {'plotting': {'e': ['Missing data for required field.'], 'k': ['Missing data for required field.'], 'job_buffer': ['Missing data for required field.'], 'type': ['Unknown field.']}}
      The above exception was the direct cause of the following exception:
      Traceback (most recent call last):
        File "/chia-blockchain/venv/bin/plotman", line 8, in <module>
          sys.exit(main())
        File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/plotman.py", line 137, in main
          cfg = configuration.get_validated_configs(config_text, config_path)
        File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 42, in get_validated_configs
          raise ConfigurationException(
      plotman.configuration.ConfigurationException: Config file at: '/root/.chia/plotman/plotman.yaml' is malformed
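      For what it's worth, the 'type': ['Unknown field.'] part suggests the bundled plotman is older than the madmax support the wiki describes. Newer plotman releases take a plotting section roughly like the sketch below (field names are from plotman's sample config, so double-check against the version actually installed):
        plotting:
          farmer_pk: <farmer public key>
          pool_contract_address: <pool contract address>
          type: madmax           # the field the older build rejects; it wants k/e/job_buffer instead
          madmax:
            n_threads: 4         # madmax plotter threads
            n_buckets: 256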
  12. Same problem. The command to renew is no longer working. You can roll back to the version released before 02/08 or create a new docker with the same configuration. Both worked for me. I was unsuccessful in using my previous appdata folder; a brand-new install was required.
  13. I also use Mullvad, and adding the IPv6 workaround worked for me, i.e. adding pull-filter ignore 'route-ipv6' and pull-filter ignore 'ifconfig-ipv6' to the OpenVPN config.
  14. The pihole docker does not display the console (web UI). It does respond to ping. The docker log points to a lighttpd error log, but nothing is present in it.
      [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
      [s6-init] ensuring user provided files have correct perms...exited 0.
      [fix-attrs.d] applying ownership & permissions fixes...
      [fix-attrs.d] 01-resolver-resolv: applying...
      [fix-attrs.d] 01-resolver-resolv: exited 0.
      [fix-attrs.d] done.
      [cont-init.d] executing container initialization scripts...
      [cont-init.d] 20-start.sh: executing...
      ::: Starting docker specific checks & setup for docker pihole/pihole
      [i] Existing PHP installation detected : PHP version 7.3.19-1~deb10u1
      [i] Installing configs from /etc/.pihole...
      [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
      chown: cannot access '': No such file or directory
      chmod: cannot access '': No such file or directory
      chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
      Setting password: admin
      + pihole -a -p admin ******
      [✓] New password set
      Using custom DNS servers: 127.1.1.1#5153 & 127.1.1.1#5153
      DNSMasq binding to default interface: eth0
      Added ENV to php: "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log", "ServerIP" => "192.168.1.3", "VIRTUAL_HOST" => "192.168.1.3",
      Using IPv4
      ::: setup_blocklists now setting default blocklists up:
      ::: TIP: Use a docker volume for /etc/pihole/adlists.list if you want to customize for first boot
      ::: Blocklists (/etc/pihole/adlists.list) now set to:
      https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
      https://mirror1.malwaredomains.com/files/justdomains
      ::: Testing pihole-FTL DNS: FTL started!
      ::: Testing lighttpd config: Syntax OK
      ::: All config checks passed, cleared for startup ...
      ::: Docker start setup complete
      [i] Creating new gravity database
      [i] Migrating content of /etc/pihole/adlists.list into new database
      [✗] DNS resolution is currently unavailable
      [cont-init.d] 20-start.sh: exited 1.
      [cont-finish.d] executing container finish scripts...
      [cont-finish.d] done.
      [s6-finish] waiting for services.
      [s6-finish] sending all processes the TERM signal.
      [s6-finish] sending all processes the KILL signal and exiting.
  15. rutorrent plus plus doesn't boot. Error: "Error: error in making qname". Attached is the log. Any ideas? log.txt
  16. Please note that in order for uploading to work: you have to have port forwarding through your VPN provider and set up the client to use those ports, and upload speed will be based on your VPN's location and your speed to it.
      When wg is active, local access is blocked. This includes the web UI. I have tried:
      changing the web UI port from 8080 to another port (7979/8181)
      editing qBittorrent.conf to change the web UI port manually
      passing the port as an Extra Parameter
      giving the container its own IP
      adding my local network to the wg conf; when doing this wg does not connect, and the error message is "RTNETLINK answers: File exists". See the attached image.
      These are the same issues I was unable to overcome in my post here. When wg is off or OpenVPN is used: no issues.
  17. As xl3b4n0nx put it, this is what I am hoping for. I imagine the tiered caching setup as follows:
      Array 1: JBOD SSD (Hot Pool). Redundancy unimportant. Data is moved when idle, on a timer (mover), or by policy (last time accessed, i.e. hot data is never moved). For: appdata, dockers, important VMs.
      Array 2: r10/z2 BTRFS/ZFS HDD (Living Pool). Redundancy possible. Data is moved on a timer (mover), or by policy (last time accessed, i.e. hot data is moved up). For: downloads, testing VMs.
      Array 3: Unraid HDD (Cold Pool). Redundancy paramount. Data is moved by policy (last time accessed, i.e. hot data is moved up). For: keeping a redundant copy of everything.
      In addition, proper SAS support would be a welcome addition. I have never had SATA products last past a few years.
  18. The iptables "mark" rules only work when WireGuard routes everything through the tunnel, i.e. when AllowedIPs is the full 0.0.0.0/0 range. However, I think the error message is related to something else, like the VPN adapter not being found or a character issue from copy/paste.
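      Roughly what wg-quick sets up behind the scenes when AllowedIPs is 0.0.0.0/0 (the table/mark number is whatever it picks; 51820 is just the usual default):
        wg set wg0 fwmark 51820                              # mark packets generated by the tunnel itself
        ip -4 route add 0.0.0.0/0 dev wg0 table 51820        # everything else is routed into the tunnel's table
        ip -4 rule add not fwmark 51820 table 51820          # unmarked packets get sent to that table
        ip -4 rule add table main suppress_prefixlength 0    # keep directly connected routes working
      If one of those routes or rules already exists, say from a previous attempt that didn't clean up, adding it again is what produces "RTNETLINK answers: File exists".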