  1. I'll take a look, but somehow when I restarted the docker for a fifth time (after four consecutive crashes) it just...started working right again. I changed absolutely nothing, but it's been humming along for a few days now, so I'm just going to let it do its thing until the disks are full and then I'll look into it more. Thanks for responding!
  2. So I'm having a weird plotting issue running on unRAID. Using previous versions I had plotted about 35TB or so without issue, but as I upgraded a couple of drives in my array I decided to use the old ones to add to my Chia farm. I am running the plotter the exact same way (settings below) using Madmax, a 110GB ramdisk, an SSD RAID0, and the same drive pool I was using before (I moved the old plots to the new disks and am repopulating the originals now), and the plotter just randomly stops. Sometimes on the second plot, sometimes on the tenth, it just stops plotting. The docker keeps running and farming; the plotter just sits idle with nothing new in the queue, abandoning temp files in multiple locations (including a full ramdisk that I usually have to empty via CLI). Did something change in the latest update? My settings are: unRAID v6.9.2, Machinaris v0.7.1, Plotter: Madmax, k: 32, threads: 16, buckets: 512, buckets3: 128, rmulti2: 1, running on a 3950X (hence the threads) with 128GB RAM. The plotter runs at night, so the only other active docker or service is Pi-hole. Edit to add: the main screen lists plotting as idle, the plotting screen says it is active but nothing is being plotted, the queue is empty, and there is no disk activity. Emptying the ramdisk and pausing then resuming plotting starts it up again.
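For anyone comparing against these settings: Machinaris drives the plotter itself, but the equivalent standalone Madmax invocation can be useful for testing outside the docker. A sketch only — the ramdisk mount point, scratch/destination paths, and key placeholders are assumptions, while the flags (-r threads, -u buckets, -v buckets3, -K rmulti2) come from the Madmax chia_plot CLI:

```shell
# Create a 110G ramdisk for the second temp dir (assumed mount point)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=110G tmpfs /mnt/ramdisk

# Madmax with the settings from the post above (k32 is the default)
chia_plot -n 1 -r 16 -u 512 -v 128 -K 1 \
  -t /mnt/ssd_scratch/ \
  -2 /mnt/ramdisk/ \
  -d /mnt/user/plots/ \
  -f <farmer_key> -p <pool_key>
```

If a run dies mid-plot, the leftover `*.tmp` files in both temp dirs have to be cleaned out by hand, which matches the full-ramdisk symptom described above.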
  3. Thank you. If you have the time at some point, the directions would be great (especially for someone else who might stumble on this thread with a similar problem). Since I already tried to add the drive back into the pool, I'll likely not bother trying anything other than solving the hardware problem and rebuilding the plots, but it would definitely be helpful for future reference. I don't quite get why btrfs non-redundant pools aren't like JBODs; are they basically RAID0 without the added speed? EDIT: leaving a note for myself to avoid bumping the thread: if unRAID assigns a new ID to one of the drives, remove all drives from the pool, reboot, check to make sure the ID is back to normal, reassign, then start the array. It will take a couple of minutes but the pool will restore.
  4. You can access array drives outside the array; that's kind of the point of the whole unRAID setup. Connect directly to each drive from any Linux box until you find the drive that has the most recent backup.
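Checking each drive as described above is just a plain read-only mount. A sketch, assuming the array disk is XFS-formatted (the unRAID default) and unencrypted; the device name and mount point are examples:

```shell
# Mount the data partition read-only so nothing on the disk is altered
mkdir -p /mnt/check
mount -t xfs -o ro /dev/sdb1 /mnt/check

# Inspect for the most recent backup, then detach
ls -lt /mnt/check
umount /mnt/check
```

Mounting read-only matters here: writing to an array disk outside unRAID would invalidate parity.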
  5. I have three pools that I use for shares for Machinaris/Chia mining, each made up of 5 old laptop drives. Two of them work fine; however, one of them has a drive that, for whatever reason, will drop out of the pool after a day or two of use. The drive itself is fine; this is most likely an issue with the hardware I am using to connect it, and I am working on that. However, I have a bigger problem: unRAID can add the drive back into the pool without a problem, but when I start the array up again that pool reads as "Unmountable: No file system". I tried the CLI suggestions in the FAQ and none of them worked; the first time I figured it may have just been some kind of fluke, so I just reformatted. Everything worked fine until the drive dropped out again, and then the same thing happened. Before erasing all the work on it I wanted to see if anyone has had this problem before. The drives are set up as JBOD (or whatever btrfs calls it, can't remember offhand), so theoretically even if that drive failed the data on the other four SHOULD be fine, right? Is there a way I can access it?
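One note on the JBOD assumption above: with the btrfs "single" data profile, metadata can still be spread across all members, which is why losing one device can leave the whole pool unmountable even though much of the data may be intact. A recovery sketch using standard btrfs tooling (device name and paths are examples); it sometimes works when a member has dropped out:

```shell
# Let the kernel rescan for btrfs members after reattaching drives
btrfs device scan

# Try a read-only degraded mount of the surviving members
mkdir -p /mnt/recover
mount -t btrfs -o degraded,ro /dev/sdc1 /mnt/recover

# If the mount still fails, btrfs restore can copy files out
# of an unmountable filesystem
btrfs restore -v /dev/sdc1 /mnt/somewhere_safe/
```

Files whose extents lived entirely on the surviving drives can usually be pulled out this way; anything that touched the missing member will come back incomplete.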
  6. Sorry, I missed this reply. Planning to overhaul my server this next weekend so will try it out then and get back to you!
  7. Will this include LTO support?! If so that would be amazing, it's the only thing I keep a Windows VM for...
  8. I came across his videos before; they're good and informative but not very...concise. Excellent for a start-to-finish initial setup, but a lot of scanning for someone familiar with unRAID. The biggest difference seems to be, as you suggested, RAM, specifically setting up a ramdisk (instructions for which I came across on page 10 of this thread). That is a complete game changer; it knocked phase one down to 25% of previous plots...
  9. Brave is my go-to. Honestly, while Apple's hardware and OS keep getting better and better, their apps have gone to total crap the past couple of years. I don't think there are any that I still use on most of my machines (my laptop still uses Safari for basic tasks); they are all increasingly buggy and unreliable: Server, Mail, iTunes/Music, Safari, Numbers...all get worse with every update. It's a shame, really; 5 or so years ago they were so close to having a completely on-brand work environment and now they're just an OS again.
  10. Is there a guide to optimizing plotting? New to Chia, I've been running it for a few days now and have only plotted 18 plots (averaging around 4 hours per plot). I mean, it's just running in the background and not using much in the way of resources, so no big deal, but I'd like to get it as efficient as possible... So far I've played with thread counts and haven't noticed much of a difference (tested 12, 15, 16, 24, and 32 so far). The only real difference I noticed was that my first plot took only two hours, the second 3, and the rest have been just under 4; I assume this is an SSD trim thing? Running on a 3950X w/ 128GB RAM; temp is a 2TB SATA SSD cache, plots are saved on a btrfs JBOD cache of 5x5TB drives, using Madmax.
  11. THANK YOU. This was driving me crazy. I set that so long ago I totally forgot...
  12. I took my server down for some basic maintenance (HDD stuff), and ever since bringing it back online everything is working fine, but there is some weirdness. My local network no longer resolves the server's hostname, though the IP address works just fine. Also, while the mover used to be scheduled daily and I reset it to once a week, still at 2am EVERY night all my docker apps stop. And when I brought it back online most recently, one of the cache drives stopped working with an error saying it was encrypted (which is weird, since literally every other cache drive is also encrypted and works fine). I removed it from the pool, copied the files over to the array, replaced and blanked it, and then copied them back, and that seemed to work, but yeah, just a bunch of weird things happening all at once... Diagnostics attached. Thanks!
  13. Did Monterey (MacOS 12) break this app/docker? After updating my MBair I can no longer access my daapd server. EDIT: I'm certain it's a MacOS thing, not a DAAPD thing, but if there's a way to re-enable the library in Monterey I'm all ears...
  14. Quick question: is it no longer necessary to set up a multi-disk "single" array manually, or will the GUI approach still default to RAID1? The FAQ entry talks about releases 6.2 and 6.3...
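For anyone landing on this question later: the manual approach from the older FAQ era boils down to a btrfs balance that converts the pool's data profile. A sketch with an example mount point — check whether your unRAID release already exposes this in the pool's settings before reaching for the CLI:

```shell
# Convert an existing pool's data to 'single' (JBOD-style),
# keeping metadata mirrored across devices
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

# Verify the resulting data/metadata profiles
btrfs filesystem df /mnt/cache
```

The `-mconvert=raid1` part is the usual compromise: data fills each disk in turn, but a single failed member is less likely to take the filesystem metadata down with it.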
  15. I haven’t tried so I couldn’t say for sure, but it would be a fun experiment (though zero doubt someone has already tried it). I may give it a go once I figure out my crashing issues. My thought is this: if you assign cores to the vm I see no reason why the vm wouldn’t see that as the physical limit. If you didn’t assign cores then I’m not sure, you might have issues since they would be shared. Again, a fun thing to play around with once I have the time…