Everything posted by Gnomuz

  1. Thanks for the feedback, I had noticed the "not-available" exit code but hadn't paid attention to the fact that it was a "Warning" log entry. So I understand the problem would be in the scan phase. If I may, the warning line you point out relates to the first file, the one that unBALANCE moves happily:

     I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(XXXX/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
     W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
     I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.83 GB)
     I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(XXXX/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
     W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
     I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.84 GB)
     I: 2021/09/16 22:15:26 planner.go:110: scatterPlan:items(2)

     The warnings are the same on both files, but in the end the result of the planning phase is that the first file is transferred and the second is not. It's the same if I select 3 of them or more: only the first one is selected for transfer as expected, the next ones "will NOT be transferred", with the same warning on all files, including the first one, in the scan phase. Permissions and timestamps seem fine:

     -rw-rw-rw- 1 nobody users 102G Jul 26 15:36 plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot
     -rw-rw-rw- 1 nobody users 102G Jul 26 16:33 plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot

     As for file corruption, these are chia plots, as you have seen. There are chia-specific tools which check the integrity of plots, and I'm positively sure these are not corrupted at the application level, a fortiori at the block level. Other similar transfers worked just fine with unBALANCE on smaller files. I also manually transferred a few chia plots with the CLI from disk2 to disk3, using the exact same rsync command as the one generated by unBALANCE, and it worked like a charm. So I really suspect the unitary size of the files somehow raises an exception in the code behind the "scan" phase, which prevents the transfer of more than one file at a time. Should you need any further information from me, do not hesitate.
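     In case it helps rule out corruption on your side too, here is how I re-check a plot's integrity; a minimal sketch, assuming the chia CLI is available (e.g. in the container console after activating the venv) and that the 'chia plots check' flags behave as in chia 1.1.x:

        # -g keeps only plots whose path contains the given string, -n sets the number of challenges
        chia plots check -n 30 -g plot-k32-2021-07-26-14-12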
  2. Thanks for answering, but sorry, I didn't solve it in the meantime. I tried to set the reserved space to the minimum, with no effect. Here are the logs when I plan a transfer of two ~109 GB files to a drive (disk3) with 5.2 TB free:

     I: 2021/09/16 22:15:26 planner.go:70: Running scatter planner ...
     I: 2021/09/16 22:15:26 planner.go:84: scatterPlan:source:(/mnt/disk2)
     I: 2021/09/16 22:15:26 planner.go:86: scatterPlan:dest:(/mnt/disk3)
     I: 2021/09/16 22:15:26 planner.go:520: planner:array(8 disks):blockSize(4096)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk1):fs(xfs):size(3998833471488):free(2304975196160):blocksTotal(976277703):blocksFree(562738085)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk2):fs(xfs):size(3998833471488):free(509188435968):blocksTotal(976277703):blocksFree(124313583)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk3):fs(xfs):size(13998382592000):free(5195027685376):blocksTotal(3417573875):blocksFree(1268317306)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk4):fs(xfs):size(13998382592000):free(30423252992):blocksTotal(3417573875):blocksFree(7427552)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk5):fs(xfs):size(13998382592000):free(30410973184):blocksTotal(3417573875):blocksFree(7424554)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk6):fs(xfs):size(13998382592000):free(30273011712):blocksTotal(3417573875):blocksFree(7390872)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/disk7):fs(xfs):size(13998382592000):free(29930565632):blocksTotal(3417573875):blocksFree(7307267)
     I: 2021/09/16 22:15:26 planner.go:522: disk(/mnt/cache):fs(xfs):size(511859089408):free(246631960576):blocksTotal(124965598):blocksFree(60212881)
     I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
     W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
     I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.83 GB)
     I: 2021/09/16 22:15:26 planner.go:351: scanning:disk(/mnt/disk2):folder(Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
     W: 2021/09/16 22:15:26 planner.go:362: issues:not-available:(exit status 1)
     I: 2021/09/16 22:15:26 planner.go:380: items:count(1):size(108.84 GB)
     I: 2021/09/16 22:15:26 planner.go:110: scatterPlan:items(2)
     I: 2021/09/16 22:15:26 planner.go:113: scatterPlan:found(/mnt/disk2/Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot):size(108828032623)
     I: 2021/09/16 22:15:26 planner.go:113: scatterPlan:found(/mnt/disk2/Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot):size(108835523381)
     I: 2021/09/16 22:15:26 planner.go:120: scatterPlan:issues:owner(0),group(0),folder(0),file(0)
     I: 2021/09/16 22:15:26 planner.go:129: scatterPlan:Trying to allocate items to disk3 ...
     I: 2021/09/16 22:15:26 planner.go:134: scatterPlan:ItemsLeft(2):ReservedSpace(536870912)
     I: 2021/09/16 22:15:26 planner.go:463: scatterPlan:1 items will be transferred.
     I: 2021/09/16 22:15:26 planner.go:465: scatterPlan:willBeTransferred(Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot)
     I: 2021/09/16 22:15:26 planner.go:473: scatterPlan:1 items will NOT be transferred.
     I: 2021/09/16 22:15:26 planner.go:479: scatterPlan:notTransferred(Chia/portables/1f7n0em0/plot-k32-2021-07-26-15-08-b9330f6ab9e903b464cc97adfecb410f678c90d51b702dbde33c9b58d36de453.plot)
     I: 2021/09/16 22:15:26 planner.go:488: scatterPlan:ItemsLeft(1)
     I: 2021/09/16 22:15:26 planner.go:489: scatterPlan:Listing (8) disks ...
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk1):no-items:currentFree(2.30 TB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk2):no-items:currentFree(509.19 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:492: =========================================================
     I: 2021/09/16 22:15:26 planner.go:493: disk(/mnt/disk3):items(1)-(108.83 GB):currentFree(5.20 TB)-plannedFree(5.09 TB)
     I: 2021/09/16 22:15:26 planner.go:494: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:497: [108.83 GB] Chia/portables/1f7n0em0/plot-k32-2021-07-26-14-12-92fa0e550e27c898b01b7d7b839da2a60d7b2ad6f55a9ecf86f1a78627a62b4e.plot
     I: 2021/09/16 22:15:26 planner.go:500: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:501:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk4):no-items:currentFree(30.42 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk5):no-items:currentFree(30.41 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk6):no-items:currentFree(30.27 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/disk7):no-items:currentFree(29.93 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:503: =========================================================
     I: 2021/09/16 22:15:26 planner.go:504: disk(/mnt/cache):no-items:currentFree(246.63 GB)
     I: 2021/09/16 22:15:26 planner.go:505: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:506: ---------------------------------------------------------
     I: 2021/09/16 22:15:26 planner.go:507:
     I: 2021/09/16 22:15:26 planner.go:511: =========================================================
     I: 2021/09/16 22:15:26 planner.go:512: Bytes To Transfer: 108.83 GB
     I: 2021/09/16 22:15:26 planner.go:513: ---------------------------------------------------------

     It clearly states that the second file will NOT be transferred, without any further obvious explanation. I suspect the problem is due to the huge size of the files, as I managed to transfer multiple smaller files from disk2 to disk3 without any issue. Maybe some kind of overflow in an intermediate variable? Thanks in advance for your support.
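     Just to rule out a plain lack of space, the planner's own numbers can be checked by hand; a quick sketch using the free(), found…size() and ReservedSpace values from the log above (plain shell arithmetic, nothing unBALANCE-specific):

        # free space on disk3, the two plot sizes and the reserved space, all in bytes
        free=5195027685376
        plot1=108828032623
        plot2=108835523381
        reserved=536870912
        echo $(( free - reserved - plot1 - plot2 ))   # ~4976827258460 bytes (~4.98 TB) left over, so both files fit easily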
  3. Hi, I've been using unBALANCE for quite some time now, it's really a great tool to reorganize data among disks. I recently upgraded some of my disks to a higher capacity and hence want to reallocate their usage. I now have an issue with some big files of ~100 GiB to transfer, in this example from disk2 to disk3. disk2 has 509 GB free, and disk3 has 5.2 TB free. Here's the output of the "Plan" phase when I try to transfer more than one file:

     PLANNING: Found /mnt/disk2/<sharename>/file1 (108.83 GB)
     PLANNING: Found /mnt/disk2/<sharename>/file2 (108.84 GB)
     PLANNING: Trying to allocate items to disk3 ...
     PLANNING: Ended: Sep 9, 2021 22:41:06
     PLANNING: Elapsed: 0s
     PLANNING: The following items will not be transferred, because there's not enough space in the target disks:
     PLANNING: <sharename>/file2
     PLANNING: Planning Finished

     I can transfer the files one at a time, but if I select two or more files, unBALANCE refuses to move anything after the first, "because there's not enough space in the target disks". No disk is indicated. Below is a snapshot of the array: the share is set to include disk2 to disk7. As can be seen, disk4 to disk7 don't have 100 GB free, but of course I only target disk3 in the move operation, as the log shows. Also, I've never used unBALANCE before on such big files, so maybe that's another possible cause. Any help or advice would be appreciated.
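     In the meantime, moving the plots one at a time by hand does work; a minimal sketch of that workaround (plain rsync with the same <sharename> placeholder as above, not the exact command unBALANCE generates, so adjust paths and flags to taste):

        # copy each plot from disk2 to disk3, deleting the source only after a successful copy
        for f in /mnt/disk2/<sharename>/*.plot; do
            rsync -avh --remove-source-files "$f" /mnt/disk3/<sharename>/ || break
        done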
  4. Hi, I think a new version is available, as my logs have been mentioning unsuccessful upgrade attempts for ~18 hours.
  5. I just updated to 2021.06.07.1554. Everything seems fine, but I get the following error message in the dropdown :
  6. I don't see any way to dynamically change the final destination directory for a running plotting process. But if the final copy fails, for instance because the destination drive is full, the process doesn't delete the finished plot in your temp directory, so you can just copy it manually to a destination with free space and delete it from the temp directory once done (see the sketch below).
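     A minimal sketch of that manual recovery, assuming the leftover plot sits in /mnt/plot_tmp and a share with enough free space is mounted at /mnt/user/chia (both paths are made up for the example):

        # check free space on the new destination first
        df -h /mnt/user/chia
        # copy the finished plot, then remove it from the temp drive only once the copy has succeeded
        rsync -avh --progress /mnt/plot_tmp/plot-k32-*.plot /mnt/user/chia/ && rm /mnt/plot_tmp/plot-k32-*.plot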
  7. I can't see what's wrong then, but it's strange that your prompt is "#", like in the unRAID terminal, whereas mine is "root@57ad7c0830ad:/chia-blockchain#" in the container console, as you can see. Sorry, I can't help further; it must be something obvious, but it's a bit late here!
  8. You have to be in the chia-blockchain directory, which happens to be a root directory in this container, for all that to work:

     cd /chia-blockchain
     source ./activate

     Here's the result in my container:

     root@57ad7c0830ad:/chia-blockchain# cd /chia-blockchain
     root@57ad7c0830ad:/chia-blockchain# source ./activate
     (venv) root@57ad7c0830ad:/chia-blockchain# chia version
     1.1.7.dev0
     (venv) root@57ad7c0830ad:/chia-blockchain#

     Hey, btw, these commands must be entered in the container console, NOT in the unRAID terminal!
  9. You tried ALMOST everything! The right command is ". ./activate", i.e. dot, space, dot, slash, activate; you can also type "source ./activate", it's the same.
  10. In fact it's a general issue of crappy versioning and/or repo management. On a bare-metal Ubuntu, when trying to upgrade from 1.1.5 to 1.1.6 following the official wiki how-to, it also built 1.1.7.dev0. And deleting the .json file has no effect on building the main stuff, maybe only on the Linux GUI, which I don't use. I think the docker image build process simply suffers from the same problem. The only solution so far to build a "real" (?) 1.1.6 on Ubuntu is to delete the ~/chia-blockchain directory and make a fresh install. Not a problem, as all persistent data are stored in ~/.chia/mainnet, but still irritating. They will improve over time, hopefully ...
  11. Beware, the build process from the chia dev team definitely needs improvement: I pulled both the (new) '1.1.6' tag as well as 'latest' (which is the default if you don't explicitly mention a tag). In both cases, the output of 'chia version' in the container console is 1.1.7.dev0! Check your own containers. It doesn't seem to cause issues for the moment, but when the pool protocol and the corresponding chia version are launched, it may lead to disasters, especially in setups with several machines (full node, wallet, harvester(s), ...). I had already created an issue on GitHub, and I've just added this new incident to it.
  12. Hi, I used the Host network type, as I didn't see any advantage in having a specific private IP address for the container. So far, the X470D4U2-2T matches my needs. Let's say the BIOS support from ASRock Rack could be improved ... support for Ryzen 5000 has been in beta for 3 months, with no sign of life for a stable release. As for expansion, well, mATX is not the ideal form factor: if I were ever to add a second HBA, I would have to sacrifice the GPU, so no more hardware decoding for Plex. But it's more a form factor issue than something specific to the MB. In terms of reliability, not a single issue with the NAS running 24/7 for a bit more than a year now (fingers crossed), even if in summer the basement gets too hot (I live in the south of France). I had a look at the X570D4U-2L2T. For both MBs, I think the choice of two 10Gb RJ45 interfaces is questionable; I would really prefer one or two SFP+ cages, which would make the upgrade to 10Gb much more affordable with a DAC cable. And the X570 clearly privileges the M.2 slots in its PCIe lane allocation, if I understood it correctly; that wouldn't be my first choice, but it highly depends on your target use ... I hope that helps.
  13. No problem, PM me if you find issues in setting the architecture up. For the Pi installation, I chose Ubuntu Server 20.04.2 LTS. You may choose Raspbian as well, that's your choice. If you go for Ubuntu, this tutorial is just fine: https://jamesachambers.com/raspberry-pi-4-ubuntu-20-04-usb-mass-storage-boot-guide/ . Note that apart from the enclosure, it's highly recommended to buy the matching power supply by Argon with 3.5 A output. The default RPi4 PSU is 3.1 A, which is fine with the OS on an SD card, but adding a SATA M.2 SSD draws more current, and 3.5 A avoids any risk of instability. The CanaKit 3.5 A power adapter is fine also, but not available in Europe for me. The general steps are:
     - flash an SD card with Raspbian
     - boot the RPi from the SD card and update the bootloader (I'm on the "stable" channel with firmware 1607685317 of 2020/12/11, no issue)
     - for the moment, just let the RPi run Raspbian from the SD card
     - install your SATA SSD in the base of the enclosure (which is a USB to M.2 SATA adapter), and connect it to your PC
     - flash Ubuntu on the SSD from the PC with Raspberry Pi Imager (or Etcher, Rufus, ...)
     - connect the SSD base to the rest of the enclosure (USB-A to USB-A), and follow the tutorial from "Modifying Ubuntu for USB booting"
     - shut down the RPi, remove the SD card, and now you should boot from the SSD.
     By default, the fan is always on, which is useless, as the Argon One M.2 will passively cool the RPi just fine. You can install either the fan control solution proposed by the manufacturer (see the doc inside the box), though I'm not sure it works with Ubuntu, or a Raspberry Pi community package which is great and installs on all OSes, including Ubuntu, see https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=275713&sid=cae3689f214c6bcd7ba2786504c6d017&start=250 . The install is a piece of cake, and the default params work well.
     Once the RPi is set up and the fan management of the case installed, just install chia, and then try to copy the whole ~/.chia directory from the container (the mainnet directory in the appdata/chia share) onto the RPi. Remove the plots directory from config.yaml, as the local harvester on the RPi will be useless anyway. That should preserve your whole configuration and keys, and above all your sync will be much faster, as you won't start from the very beginning of the chain. Run 'chia start farmer' and check the logs. Connect the Unraid harvester from the container as already explained, and check the connection in the container logs. At that stage, you should be farming, from the RPi, the plots stored in the Unraid array through the harvester in the container.
     To see your keys, just type 'chia keys show' on the RPi; it shows your farmer public key (fk) and pool public key (pk). With these two keys, you can create a plot on any machine, even outside your LAN. Just run 'chia plots create <usual parameters> -f fk -p pk'. It signs the plot with your keys, and only a farmer with these keys can farm it. Once the final plot is copied into your array, it will be seen by the harvester, and that's all (a short sketch of those commands follows below).
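     A minimal sketch of that last part, assuming the keys are read on the RPi and the plot is created on another machine (the temp and destination paths are just examples to adapt):

        # on the RPi: display the farmer public key (fk) and pool public key (pk)
        chia keys show
        # on any plotting machine, even one without your private keys: create a k-32 plot signed with those keys
        chia plots create -k 32 -t /mnt/plot_tmp -d /mnt/plots_out -f <farmer public key> -p <pool public key>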
  14. Btw, the pull request is mine 😂. You could approve it, maybe it will draw attention from the repo maintainer ...
  15. If you ever want to create something other than k-32 plots, the CLI tool also accepts the '-k 33' parameter, for instance. If '-k' is not specified, it defaults to '-k 32', which is the current minimal plot size. All options are documented here: https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference , and 'chia plots create -h' is a good reference also 😉 (quick example below).
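     For example, a hedged sketch of a k-33 plot creation (the paths and thread count are placeholders; keep in mind a k-33 plot needs roughly twice the temporary and final space of a k-32):

        # create a single k-33 plot; -t/-d are the temp and destination directories, -r the number of threads
        chia plots create -k 33 -n 1 -r 4 -t /mnt/plot_tmp -d /mnt/plots_out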
  16. Hi, thanks for your interest in my post. I'll try to answer your numerous questions, but I think you already got the general idea of the architecture pretty well 😉
     - The RPi doesn't have an M.2 slot out of the box. By default, the OS runs from an SD card. The drawback is that SD cards don't last long when constantly written to, e.g. by the OS and the chia processes writing logs or populating the chain database; they were not designed for this kind of purpose. And you certainly don't want to lose your chia full node because of a failing 10$ SD card, while the patiently created plots still occupy space but no longer participate in any challenge... So I decided that running the RPi OS from a small SATA SSD was a much more resilient solution, and bought an Argon One M.2 case for the RPi. I had an old 120 GB SATA SSD which used to host the OS of my laptop before I upgraded it to a 512 GB SSD, so I used it. This case is great because it's full-metal and there are thermal pads between the CPU and RAM and the case. So it's essentially a passive heat dissipation setup. There's a fan if temps go too high, but it never starts, unless you live in a desert I suppose. There are many other enclosures for RPis, but this one is my favorite so far.
     - The RPi is started with the 'chia start farmer' command, which runs the following processes: chia_daemon, chia_full_node (blockchain syncing), chia_farmer (challenge management and communication with the harvester(s)) and chia_wallet. chia_harvester is also launched locally, but it is totally useless, as no plots are stored on storage accessible to the RPi. To get a view of this kind of distributed setup, have a look at https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines, that was my starting point. You can also use 'chia start farmer-no-wallet' and sync your wallet on another machine; I may do that in the future, as I don't like having it on the machine exposed to the internet.
     - The plotting rig doesn't need any chia service running on it; the plotting process can run offline. You just need to install chia on it, and you don't even need to have your private keys stored on it. You just run 'chia plots create (usual params) -f <your farmer public key> -p <your pool public key>', and that's all. The created plots will be farmed by the remote farmer once copied into the destination directory.
     - I decided to store the plots on the xfs array with one parity drive. I know the general consensus is to store plots on non-protected storage, considering you can replot them. But I hate the idea of losing a bunch of plots. You store them on high-density storage, let's say 12 TB drives, which can hold circa 120 plots each. Elite plotting rigs with enterprise-grade temporary SSDs create a plot in 4 hours or less, so recreating 120 plots is circa 500 hours, or 20 days. When you see the current netspace growth rate of 15% a week or more, that's a big penalty I think. If you have 10 disks, "wasting" one as a parity drive to protect the other 9 sounds like a reasonable trade-off, provided you have a spare drive around to rebuild the array in case of a faulty drive. To sum up, two extra drives (1 parity + 1 spare) reasonably guarantee the continuity of your farming process and prevent the loss of existing plots, whatever the size of your array. Of course with a single parity drive you are not protected against two drives failing together, but as usual it's a trade-off between available size, resiliency and cost, nothing specific to chia ... And the strength of Unraid is that you won't lose the plots on the healthy drives, unlike other raid5 solutions.
     - As for the container, it runs only the harvester process ('chia start harvester'), which must be set up as per the link above, nothing difficult (a quick way to check which processes run where is sketched below). From the container console, you can also optionally run a plotting process, if your Unraid server has a temporary unassigned SSD available (you can also use your cache SSD, but beware of space ...). You run it just like on your plotting rig: 'chia plots create (relevant params) -f <farmer key> -p <pool key>'. The advantage is that the final copy from the temp dir to the dest dir is much quicker, as it's a local copy on the server from an attached SSD to the Unraid share (a 10-minute copy vs 20/30 minutes over the network for me).
     - So yes, you could run your plotting process from a container on the Unraid server if you don't have a separate plotting rig. But then I wouldn't use this early container, and would rather wait for a more mature one which integrates a plotting manager (plotman or Swar), because managing all that manually is a nightmare in the long run, unless you are a script maestro and have a lot of time to spend on it 😉
     Happy farming !
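     A quick, hedged way to double-check the role split described above (the process names are those listed in this post; run the command on each machine):

        # on the RPi you should see chia_daemon, chia_full_node, chia_farmer, chia_wallet (and an idle chia_harvester)
        # in the container you should see only chia_daemon and chia_harvester
        ps -ef | grep [c]hia_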
  17. Without port 8444 forwarded to the full node, you'll have sync issues, as you will only have access to the peers which have their own 8444 port accessible, and they are obviously a minority of the network... Generally speaking, opening a port to a container is not such a big security risk, as any malicious code will only have access to the resources available to the container itself. But in the case of chia, the container must have access to your stored plots to harvest them. An attacker may thus be able to delete or alter your patiently created plots from within the container, which would be a disaster if you have hundreds or thousands of them! And one of the ways to maximise one's netspace ownership could be to delete others' plots ... That's one of the reasons why I decided to have a dedicated RPi running the full node, with the firewall allowing only the strictly required ports, both on the WAN and LAN sides. There's no zero-risk architecture, but I think such a setup is much more robust than a fully centralised one where a single machine supports the full set of functions.
  18. My understanding of the official wiki is that your system clock must not be off by more than 5 minutes, whatever your timezone is, bearing in mind that the clocks (both software and hardware) are managed as UTC by default in Linux. Anyway, I see that many have problems syncing when running the container as a full node. Personally I went the Raspberry way and everything has been working like a charm so far. The RPi4 4GB boots from an old 120GB SATA M.2 SSD under Ubuntu 20.04.2 (in an Argon One M.2 case, which is great btw) and runs the full node. At first, I ran the harvester on the RPi and it accessed the plots stored on the Unraid server through an SMB share. Performance was horrible: the average response time was 5-7 seconds, with peaks at more than 30 seconds! I tried NFS, which was a bit better, but still too slow, often > 3 seconds. With this container running only the harvester, and connected to the farmer on the RPi, response times are under 0.1 s at farmer level (see below how to check them in the logs), it has a negligible footprint on the server, and it's rock solid. I also create plots in the container as a bonus, to help my main plotting rig. If others want to replicate this architecture, I posted a step-by-step a few posts above on how to set up the container as harvester-only and connect it to a remote farmer/full node. An RPi is not a big investment compared with the storage cost, and as chia was designed as a distributed and decentralised project, I think it's the way to go for a resilient setup.
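     For anyone wanting to measure those harvester response times themselves, a small sketch; it assumes the chia log level is set to INFO and that harvester lookups are logged as "... plots were eligible for farming ... Time: x.xxxxx s", as in chia 1.1.x:

        # show the latest harvester lookup times from the chia debug log
        grep "eligible for farming" ~/.chia/mainnet/log/debug.log | tail -n 20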
  19. I raised that issue a few posts above, but the problem is that tzdata is not included in the official docker image, which is out of reach for the author of this template ... So the TZ variable is just ignored. I only run a harvester in the docker, and it communicates properly with my full node/farmer, even though the container is on UTC time and the full node on "Europe/Paris", 2 hours apart just like you. There's a manual workaround to get the correct time, but you have to redo it each time you restart the container (the exact commands are sketched below):
     - enter the container console
     - stop all chia processes: 'chia stop -d farmer'
     - install tzdata: 'apt-get update' and 'apt-get install tzdata'; there you can set the timezone
     - restart chia: 'chia start farmer'
     * change 'farmer' in the commands above according to the set of processes you run in the container. For me, it's 'harvester' to run 'chia_daemon' and 'chia_harvester' only; 'farmer' is fine for a full node, farmer, harvester and wallet.
     Hope that can help, although I doubt your sync issues are related to the timezone. I'm almost sure everything is managed as UTC time at network level, and the timezone is only used to display timestamps in the logs. If you don't open and forward port 8444 to the container (in practice, to the IP address of your Unraid server) on your router, you will have difficulty connecting to peers on the network, and thus the sync will take ages. It's exactly the same as in Bittorrent and many other P2P protocols. The oldest among us may remember the "Low Id" in eMule 😉
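     Put together, the manual workaround looks like this, to be run in the container console after each container restart ('harvester' matches my harvester-only setup, use 'farmer' for a full node; the venv activation line assumes the container layout described earlier in this thread):

        cd /chia-blockchain && . ./activate        # enter the chia venv if it isn't already active
        chia stop -d harvester                     # stop the chia processes
        apt-get update && apt-get install tzdata   # the installer prompts for the timezone
        dpkg-reconfigure tzdata                    # re-run this if you need to change the timezone later
        chia start harvester                       # restart the processes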
  20. Hi, review the link I gave in my OP (https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines) to understand the underlying objectives. Step by step:
     - copy the mainnet/config/ssl/ca directory of your full node to a path accessible from the container, e.g. appdata/chia
     - edit the container settings:
       farmer_address: IP address of your farmer/full node machine
       farmer_port: 8447
       harvester_only: true
     - start the container
     - enter the container console
     - enter "top"; you should have only two chia processes, "chia_daemon" and "chia_harvester"
     - enter the venv: ". ./activate"
     - stop the running processes: "chia stop -d all"
     - check that the processes stopped properly with "top"
     - create the keys for the container harvester, signed by the farmer node: "chia init -c /root/.chia/ca" (if you copied the farmer's ca directory as suggested above; adjust if needed)
     - restart the harvester: "chia start harvester"
     - check both logs (container and farmer nodes); you will see successful connections between the two nodes if your logs are set at INFO level
     - if everything is fine, delete the appdata/chia/ca directory as a safety measure, so that you don't have multiple copies of your keys around your network
     (the console commands are gathered in the sketch below)
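     For convenience, here is the container-console part of the list above as a single sequence (same commands as in the steps; the ca path assumes appdata/chia is mapped to /root/.chia as in this template, adjust if your mapping differs):

        cd /chia-blockchain && . ./activate       # enter the chia venv
        chia stop -d all                          # stop the processes started by the container
        chia init -c /root/.chia/ca               # import the certificates copied from the full node
        chia start harvester                      # restart only the harvester
        tail -f /root/.chia/mainnet/log/debug.log # watch for the connection to the farmer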
  21. Hi, thanks for the template, I was waiting for it. My full node is on a Raspberry Pi 4 and everything works fine, but harvesting from the Pi on an Unraid Samba share was just awful from a performance standpoint, especially while a new plot was being copied to the array (I saw response times beyond 30 seconds!). Now I run only a harvester in the container, and response times are more in the millisecond range! By tweaking config.yaml it's easy to connect the container harvester to the full node, after having generated keys for the harvester with chia init -c /path/to/full/node/certs (see https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines for more details). I just have a silly issue: the container seems to default to a generic "Europe" timezone, and timestamps are off by two hours from my local time in France. I added a "TZ" variable set to "Europe/Paris" to the template, but that changed nothing. I installed tzdata inside the container and ran dpkg-reconfigure tzdata to set Europe/Paris; it's fine now, but obviously it's not a persistent solution. I would really prefer to have consistent log timestamps between the various machines (harvester container, full node, plotting machines). Thanks in advance for any guidance. PS: of course, my Unraid host is in my timezone.
  22. Hi, I have exactly the same issue as @kyis and will try to elaborate a bit further. I live in an area of France where ADSL connections are awfully slow (5 Mbps DL / 0.7 Mbps UL). So I have a 4G router (in bridge mode) with an external antenna as the main internet connection, with very decent speeds (150 Mbps DL / 45 Mbps UL); the ADSL ISP box is only used as a failover. An internal router manages the failover and the routing for all internal devices. The issue is that port forwarding is not available on the 4G connection, as is often the case. The workaround I've found is a Wireguard tunnel over the 4G connection to a VPN provider which accepts port forwarding on a limited number of ports. That works great and I can access my LAN from outside without any problem, except for Remote Access by Unraid. I understand you use the IP from which your mothership server was contacted, in my case the public 4G address, which is not reachable from outside. Of course I could, as you suggest, implement policy-based routing rules to force-route the outgoing traffic from my Unraid server (a rough sketch of such rules is below):
     - through the ADSL connection, but then the speeds would be awful and the remote access practically unusable, although available
     - through the Wireguard tunnel, but then I would route all the traffic of the Unraid server through the VPN provider, and most of it (e.g. docker and plugin updates) doesn't really "deserve" it.
     The best solution, imho, would be a setting in the plugin to indicate a fully qualified name (or a fixed public IP) for the remote access entry point, for specific cases like @kyis's or mine. Otherwise, the workaround would be to get a detailed view of the exchanges between the Unraid server and your server(s), so that I can try to implement a reasonable routing policy, limiting the use of the WG tunnel to the communication between the plugin and your server(s). Thanks in advance for your attention.
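     For reference, a rough sketch of the second option above, i.e. forcing everything coming from the Unraid server through the WG tunnel (Linux iproute2 syntax on the internal router; 192.168.1.10, table 100 and wg0 are made-up examples):

        # send all traffic originating from the Unraid server through the Wireguard tunnel
        ip rule add from 192.168.1.10 lookup 100
        ip route add default dev wg0 table 100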
  23. @i-B4se, according to http://apcupsd.org/manual/manual.html#connecting-apcupsd-to-a-snmp-ups , you may try to populate the "Device" setting with "IPAddr:161:APC", in case the autodetection of the "vendor" qualifier fails with your UPS. Another option seems to be switching from SNMP to the PCNET protocol, see http://apcupsd.org/manual/manual.html#powerchute-network-shutdown-driver-pcnet , if your UPS/network management card supports it. But it's pure guesswork, I have no practical experience with network card / SNMP APC UPSes. Good luck; from my painful personal experience, getting apcupsd to work properly with an APC UPS over USB can be a long story ...
  24. Thanks @ich777, quick and efficient, as usual! I have a strong preference for the Production branch rather than the New Features one on my server ...
  25. I agree the USB-RS232 adapters with genuine FTDI chips are fine, and this product seems serious. The problem is that it costs around 50€ here in France, so the add-on card would be cheaper for @tetrapod if he has a free PCIe slot.