jsebright

Members
  • Content Count: 40
  • Joined

  • Last visited

Community Reputation

8 Neutral

About jsebright

  • Rank
    Newbie


  1. ing Chia...
     ey at path: /root/.chia/mnemonic.txt
     ey at path: /root/.chia/mnemonic2.txt
     ot directory "/plots".
     ot directory "/plots1".
     ot directory "/plots2".
     ot directory "/plots3".
     ot started yet
     daemon
     vester: started
     mer: started
     l_node: started
     let: started
     annot create directory '/root/.chia/flax': File exists
     ing Flax...
     ey at path: /root/.chia/mnemonic.txt
     ey at path: /root/.chia/mnemonic2.txt
     ot directory "/plots".
     ot directory "/plots1".
     ot directory "/plots2".
     ot directory "/plots3".
     ot started yet
     daemon
     vester: started
     mer: started
     l_node: started
     let: started
     ing Plotman...
     ing Ch
  2. Sorry, that's something that took me a while to get right, and I couldn't see what was wrong with it. I'd suggest you try reverting to defaults and keeping it as simple as possible. Check that the drive paths are mapping to where they should be. Can't offer any more help as I don't know enough...
  3. @localh0rst I had the internal server error. Logged into the docker and ran "flax init"; that seemed to start it for me (there's a short command sketch after this list). May or may not work for you...
  4. Ah yes. Why didn't I think of that? Has the advantage that it doesn't get killed if the docker restarts. Watching a few together in just one window is pretty good though, and a bit less to keep track of.
  5. Nice. Haven't seen that before. I couldn't seem to open two windows of the unraid docker console, but I did find I could just use the command "watch -n10 du -sh /plotting /plotting2" and it watches both folders in one window.
  6. Thanks for your reply. Seems like the runs are crashing sometimes - the temp dirs have leftover files from different runs in them. I've just checked my config against the wiki sample (was doing that as I saw your previous post). I've got max jobs set to 1, but had the staggers set differently - that might have been causing the issue. I've got the threads set to 4 - I've only got 5 pairs of cores available (Ryzen 2600 with a pair pinned in case a VM wants a look-in) and it does tend to run them at about 90%. Perhaps this needs taking down from the default? I'll see how it goes with
  7. It also took me some time to get MadMax plotting to work. I also think there's an issue where it's not cleaning the temp folders up. I've got two SSDs that I'm using and have one of them as the primary temp and one as tmp2. They are slowly filling up until plotting stops. After it's all ground to a halt - see /plotting and /plotting2 (a cleanup sketch follows after this list):
     root@Tower:/# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     /dev/nvme0n1p1  932G  673G  259G  73% /
     tmpfs            64M     0   64M   0% /dev
     tmpfs            32G     0   32G   0% /sys/fs/cgroup
     shm              64M   60K   64M   1% /de
  8. Just did a "check for updates" and it's available. Just waiting for some plots to finish. Looking forward to the latest version.
  9. What's the best way to upgrade the unraid docker if it doesn't show "upgrade available"? Will an edit of the settings pull the new images? PS: many thanks to @guy.davis for making and supporting this. Good luck with your farming.
  10. This problem occurred again, and then I think I worked out what was going on. The device was "disappearing" when I started a VM, but only a certain one. I had had to fiddle with it a day or so ago as it wouldn't start. Something must have got messed up, meaning the VM was trying to take control of the NVMe drive. I could spot the device in the xml, but am not confident enough to edit it (a quick way to check the XML is sketched after this list). Just saving the VM settings from the forms view didn't clear the device, but selecting all the possible USB devices and the one PCIe device, saving, then clearing them all and saving seems to have sorted it out.
  11. Am up to date on Bios - a reasonably new one that's been in for a few weeks before this issue. Have added the script. Will fix the other issues and see how it goes. Thanks both.
  12. Ah, thanks. Just cancelled it and rebooted. This is an unassigned drive - not part of the array. So it looks like /dev/sdh1 is OK for this. The BTRFS cache issue was the primary error (and still probably is). It's just that @JorgeB spotted another issue to do with this other drive that needs fixing. One problem turns into two...
  13. Hi @JorgeB Many thanks for your support on this - really appreciate it. Have now rebooted (switched off auto start of dockers & VMs before doing this) and scrubbed the disk again to fix the errors. Will double check the error count and zero them. Trying to fix the UD URBACKUP disk I get the following (a repair sketch against the partition follows after this list):
      root@Tower:~# xfs_repair /dev/sdh
      Phase 1 - find and verify superblock...
      bad primary superblock - bad magic number !!!
      attempting to find secondary superblock...
      .found candidate secondary superblock...
      unable to verify superblock, continuing...
      .found candidate se
  14. The errors make sense - as I think you put in the FAQ, some of this is not as obvious as it could be. Diagnostics from after the cache fixing are attached - but with the drive "missing", although it was showing in UD. tower-diagnostics-20210425-1050.zip
  15. Checking btrfs dev stats showed lots of errors (the commands are sketched after this list). I ran scrub first without the "Repair corrupted blocks" option and it showed the following (which I first thought meant no errors, but presumably it doesn't):
      UUID:             e8b8d9ec-0ad2-4867-b3cf-87b43a0d9d15
      Scrub started:    Sun Apr 25 07:16:27 2021
      Status:           finished
      Duration:         0:03:07
      Total to scrub:   1.10TiB
      Rate:             6.02GiB/s
      Error summary:    verify=1438 csum=167501
        Corrected:      0
        Uncorrectable:  0
        Unverified:     0
      I then ran it with the "Repair corrupted blocks" option anyway, and
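
For the "flax init" fix above: a minimal sketch of running it from the unRAID host rather than the container console. The container name "machinaris" is an assumption - substitute whatever your Docker container is actually called.

    docker ps --format '{{.Names}}'          # find the container's name
    docker exec -it machinaris flax init     # re-run the Flax init inside the container

This does the same thing as opening the container's console from the unRAID Docker page and running "flax init" there.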
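
For the MadMax temp-folder posts above: a hedged sketch for watching the temp dirs and clearing leftovers from crashed runs. It assumes the temp dirs are /plotting and /plotting2 (as in the posts) and that stale MadMax files end in ".tmp"; only delete while no plot job is running.

    watch -n10 du -sh /plotting /plotting2            # watch both temp dirs in one window
    find /plotting /plotting2 -name '*.tmp' -ls       # list leftover temp files from crashed runs
    find /plotting /plotting2 -name '*.tmp' -delete   # remove them once you're sure nothing is plotting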
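
For the VM/NVMe post above: a rough way to see whether a VM definition is still claiming the NVMe as a passed-through device, without hand-editing the XML. The VM name "Windows 10" is just a placeholder.

    virsh list --all                                     # list the defined VMs
    virsh dumpxml "Windows 10" | grep -A 6 '<hostdev'    # show any passed-through host devices in that VM's XML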
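
For the xfs_repair post above: the "bad magic number" usually means the repair was pointed at the whole disk rather than the XFS partition. Since /dev/sdh1 looks like the right target here, a sketch (disk unmounted, dry run first):

    xfs_repair -n /dev/sdh1    # read-only check, reports problems without changing anything
    xfs_repair /dev/sdh1       # actual repair if the dry run looks sane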
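
For the btrfs cache posts above: a sketch of checking the error counters, scrubbing, and zeroing them afterwards. /mnt/cache is the usual unRAID cache mount point - adjust if the pool lives elsewhere.

    btrfs dev stats /mnt/cache       # show per-device error counters
    btrfs scrub start /mnt/cache     # scrub; corrects what it can on a redundant pool
    btrfs scrub status /mnt/cache    # check progress and the error summary
    btrfs dev stats -z /mnt/cache    # zero the counters once the errors are dealt with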