greyday

Everything posted by greyday

  1. I'm marking that as "solution" since the drive passed preclear without a hitch. I'll update the original post if it fails when being entered back into parity but I think it was probably just a glitch...
  2. Had a new thing happen to me yesterday; I bought 2x20TB Seagate Exos drives on Cyber Monday and swapped them in for my 14TB parity drives (2 years old). Here's the order I did everything in:
     • Stopped the server
     • Added the new drives into vacant slots in my DAS
     • Removed parity drive 1, added the first 20TB
     • Started the server, rebuilt parity
     • Repeated the process for parity 2
     • Removed a 4TB drive, added the 14TB, did a data rebuild
     All of the above went off without a hitch, zero errors (note that I did not preclear the new drives, as I figured they would get the same workout from the rebuilds). Everything was working fine, so I swapped in the final drive. The rebuild ran fine to 90%, then it and the second parity drive started throwing errors. Like 500+. The rebuild was frozen and, assuming it was the old parity/replacement drive (14TB) being rebuilt that had failed, I tried to replace the old data drive and rebuild the errored 20TB parity drive, but it wasn't showing up in the interface at all, either as an option for the parity slot or in unassigned devices. I stopped the server, physically pulled the parity drive and put it back in, and it showed up again. I am currently almost finished rebuilding the 14TB data drive again (with single parity this time) and am using the preclear plugin to thoroughly test the 20TB. I also tried to find any errors with SMART tests and smartctl, and neither produced any logged errors on either drive. It is entirely possible this was a mechanical fault on the DAS side, since just removing the sled and replacing it got the system to identify it, and, aside from the drives themselves (at this point), my entire server is built from second-hand and used parts. So my question is this: if it preclears without a problem (and the data drive throws zero errors during the rebuild), should I try adding it back in as second parity, or should I bite the bullet and just return it? I've never used preclear before, so I don't know how reliable it is, but it seems like both drives are otherwise fine based on every test and piece of info I can find...
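     (For anyone checking the same way: the SMART checks I mean are roughly the following, with /dev/sdX standing in for the actual device; drives behind a USB or SAS DAS may need smartctl's -d option to be reachable.)
         smartctl -a /dev/sdX           # dump SMART attributes and the drive's error log
         smartctl -t long /dev/sdX      # start an extended self-test
         smartctl -l selftest /dev/sdX  # review the self-test results once it finishes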
  3. Since I hate it when these things go unanswered (especially when I had the same problem and searching led me to a thread with no solution), here's what worked for me: I deleted the preferences file from appdata and then restarted the docker image. I had to "set up" the docker again, but this took like ten seconds, and all my library data was still there, no need to rescan. Steps:
     • stop the docker
     • open the terminal in-browser (or ssh in if you prefer)
     • navigate to mnt/user/appdata/Plex-Media-Server/Library/Application\ Support/Plex\ Media\ Server
     • rm Preferences.xml
     • start the docker
     • open the webui
     That was it. I ended up just deleting the file, as trying to copy it gave me the same "not found" error, and trying to view or edit it in nano resulted in an empty document. I have not found any issues thus far with this method, but note that I only use Plex for local access; other settings MAY be affected by this. ALSO--if, like me, the reason for your hard shutdown was the UPS shutting off without sending a kill signal (thanks to the standard UPS settings not recognizing my model and the previous build of NUT being deprecated), I HIGHLY recommend Rysz's current rebuild of NUT; it's in the app tab if you search for NUT, then choose "Network UPS Tools (NUT) for UNRAID". It's working flawlessly so far...
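     (The same steps as a rough shell sketch, assuming the default appdata path used above; the folder name depends on your docker template.)
         cd /mnt/user/appdata/Plex-Media-Server/Library/Application\ Support/Plex\ Media\ Server
         rm Preferences.xml
     Then start the docker again and open the webui to click through the short setup.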
  4. I had a docker that wouldn't close and was using a single thread at 100%, so I tried to stop the array, which didn't work. I was able to kill the main stuck process using the tip below, allowing the docker to close (according to the dashboard); however, now the array won't fully stop. The docker page and dashboard say it is stopped, and the docker and VMs are gone from the dashboard (like I assumed they would be), but the main page lists all the drives as up (though the bottom just hangs with "Stopping" and the only other options being reboot or shutdown). Both top and htop say nothing is using the thread, but the dashboard still reports it at 100%. There are two processes associated with the docker (machinaris) that can't be killed, no matter what I try (through htop, manually, manually with -9, pkill, etc.), though neither of them is using any CPU according to htop. Unraid v6.9.2.
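     (A rough sketch of the checks I'm describing, with the container name and PID as examples only.)
         docker ps -a | grep -i machinaris                    # is the container actually stopped?
         docker inspect --format '{{.State.Pid}}' machinaris  # main PID, if it is still running
         ps -o pid,stat,wchan,cmd -p 12345                    # a "D" state (uninterruptible sleep) means the
                                                              # process is stuck on I/O and ignores every signal,
                                                              # including kill -9, until the device responds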
  5. I'll take a look, but somehow when I restarted the docker for a fifth time (after four consecutive crashes) it just...started working right again. I changed absolutely nothing, but it's been humming along for a few days now, so I'm just going to let it do its thing until the disks are full and then I'll look into it more. Thanks for responding!
  6. So I'm having a weird plotting issue running on unRAID. I had, using previous versions, plotted about 35TB or so without issue, but as I upgraded a couple drives in my array I decided to use the old ones to add to my chia farm. I am running the plotter the exact same way (settings below) using Madmax, a 110GB ram disk, an SSD R0, and the same drive pool I was using before (I moved the old plots to the new disks and am repopulating the originals now), and the plotter just randomly stops. Sometimes on the second plot, sometimes on the tenth, it just stops plotting. The docker keeps running and farming, but the plotter just sits idle with nothing new in the queue, leaving abandoned temp files in multiple locations (including a full ram disk that I usually have to empty via cli). Did something change in the latest update? My settings are:
     • unRAID v6.9.2
     • Machinaris v0.7.1
     • Plotter: Madmax
     • k: 32
     • threads: 16
     • buckets: 512
     • buckets3: 128
     • rmulti2: 1
     running on a 3950x (hence the threads) with 128GB RAM. The plotter runs at night, so the only other active docker or service is pihole. Edit to add: the main screen lists plotting as idle, the plotting screen says it is active but nothing is being plotted, the queue is empty, and there is no disk activity. Emptying the ramdisk and pausing then resuming plotting starts it up again.
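     (For reference, emptying the ram disk via cli is just the following, assuming it's a tmpfs mounted at /mnt/ramdisk; adjust the path to wherever the docker maps it.)
         df -h /mnt/ramdisk      # see how full it is
         rm -rf /mnt/ramdisk/*   # remove the abandoned temp files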
  7. Thank you. If you have the time at some point the directions would be great (especially for someone else who might stumble on this thread with a similar problem). Since I already tried to add the drive back into the pool I’ll likely not bother trying anything other than solving the hardware problem and rebuilding the plots, but it would definitely be helpful for future reference. I don’t quite get why btrfs non-redundant pools aren’t like JBODs, so they’re basically raid0 without the added speed? EDIT: leaving a note for myself to avoid bumping the thread: if unRAID assigns a new ID to one of the drives, remove all drives from pool, reboot, then check to make sure the ID is back to normal and reassign, then start array. It will take a couple minutes but the pool will restore.
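     (A sketch of the ID check from the edit above, with the device letter as an example only.)
         ls -l /dev/disk/by-id/ | grep sdg   # confirm the drive is back under its usual serial-based identifier
         btrfs filesystem show               # confirm btrfs still sees every member of the pool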
  8. You can access array drives outside the array; that's kind of the point of the whole unRAID setup. Connect directly to each drive from any Linux box until you find the drive that has the most recent backup.
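     (A minimal sketch of what "connect directly" looks like in practice, assuming an XFS-formatted array disk that shows up as /dev/sdb on the rescue box; older array disks may be ReiserFS instead.)
         mkdir -p /mnt/recovery
         mount -o ro /dev/sdb1 /mnt/recovery
         ls /mnt/recovery     # browse for the most recent backup
         umount /mnt/recovery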
  9. I have three pools that I use for shares for Machinaris/Chia mining, each made up of 5 old laptop drives. Two of them work fine; however, one of them has a drive that, for whatever reason, will drop out of the pool after a day or two of use. The drive is fine; this is most likely an issue with the hardware I am using to connect it, and I am working on that. However, I have a bigger problem: Unraid can add the drive back into the pool without a problem, but when I start the array up again that pool reads as "Unmountable: No file system". I tried the cli suggestions in the FAQ and none of them worked; the first time, I figured it may have just been some kind of fluke, so I just reformatted. Everything worked fine until the drive dropped out again, and then the same thing happened again. Before erasing all the work on it I wanted to see if anyone has had this problem before? The drives are set up as JBOD (or whatever BTRFS calls it, can't remember offhand), so theoretically even if that drive failed the data on the other four SHOULD be fine, right? Is there a way I can access it?
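     (Leaving myself a note on what to try before reformatting again; the device and mount point are examples only.)
         btrfs filesystem show                              # does btrfs still list all five members?
         mkdir -p /mnt/chia-rescue
         mount -o degraded,ro /dev/sdc1 /mnt/chia-rescue    # attempt a read-only mount with the flaky drive missing
     With the btrfs "single" data profile, only files that happened to live on the dropped drive should be lost, so if the degraded mount works the rest can be copied off.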
  10. Sorry, I missed this reply. Planning to overhaul my server this next weekend so will try it out then and get back to you!
  11. Will this include LTO support?! If so that would be amazing, it's the only thing I keep a Windows VM for...
  12. I came across his videos before; they're good and informative but not very...concise. Excellent for a start-to-finish initial setup, but a lot of scanning for someone already familiar with Unraid. The biggest difference seems to be, as you suggested, RAM--specifically setting up a ramdisk (instructions for which I came across on page 10 of this thread). That is a complete game changer; it knocked phase one down to 25% of its previous time...
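     (The ramdisk itself is just a tmpfs mount; a minimal sketch, with the size and path as assumptions--roughly 110G covers madmax's second temp directory for a k32 plot.)
         mkdir -p /mnt/ramdisk
         mount -t tmpfs -o size=110G tmpfs /mnt/ramdisk
     Then point the plotter's second temp directory at it in the Machinaris settings.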
  13. Brave is my goto. Honestly, while Apple's hardware and OS keep getting better and better, their apps have gone to total crap the past couple of years. I don't think there are any that I still use on most of my machines (my laptop still uses Safari for basic tasks); they are all increasingly buggy and unreliable: Server, Mail, iTunes/Music, Safari, Numbers...all get worse with every update. It's a shame, really; 5 or so years ago they were so close to having a completely on-brand work environment, and now they're just an OS again.
  14. Is there a guide to optimizing plotting? New to chia and been running it for a few days now and have only plotted 18 plots (averaging around 4 hrs per plot). I mean, it's just running in the background and not using much in the way of resources so no big deal, but I'd like to get it as efficient as possible... So far I've played with thread counts and haven't noticed much of a difference (tested out 12, 15, 16, 24, and 32 so far). Only real difference I noticed was my first plot only took two hours, second 3, and then the rest have been just under 4; I assume this is an ssd trim thing? Running on a 3950x w/128gb ram, temp is a 2tb SATA ssd cache, plots are saved on a btrfs jbod cache of 5x5tb drives, using Mad Max.
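     (If it does turn out to be a trim thing, a manual trim of the temp SSD between plots is a quick test; the mount point assumes a pool named "cache", so adjust to yours.)
         fstrim -v /mnt/cache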
  15. THANK YOU. This was driving me crazy. I set that so long ago I totally forgot...
  16. I took my server down for some basic maintenance (hdd stuff), and ever since bringing it back online everything is working fine, but there is some weirdness. My local network no longer recognizes the server's hostname, though the IP address works just fine. Also, while the mover used to be scheduled daily, I reset it to once a week, yet still at 2am EVERY night all my docker apps stop. And when I brought it back online most recently, one of the cache drives stopped working, with an error saying it was encrypted (which is weird, since literally every other cache drive is also encrypted and works fine). I removed it from the pool, copied the files over to the array, replaced and blanked it, and then copied them back, and that seemed to work, but yeah, just a bunch of weird things happening all at once...
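     (For the hostname part, a quick way to narrow down where resolution fails, run from another machine on the LAN; "tower" is just an example server name.)
         ping tower        # plain hostname, via your router/DNS
         ping tower.local  # mDNS, if your clients rely on Bonjour/Avahi
         nslookup tower    # see what the local DNS server actually returns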
  17. Did Monterey (MacOS 12) break this app/docker? After updating my MBair I can no longer access my daapd server. EDIT: I'm certain it's a MacOS thing, not a DAAPD thing, but if there's a way to re-enable the library in Monterey I'm all ears...
  18. Quick question--is it no longer necessary to set up a multi-disk "single" array manually, or will the gui approach still default to R1? The FAQ entry talks about releases 6.2 and 6.3...
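     (In case the GUI still builds the pool as RAID1, the manual route is a balance with a convert filter; a sketch only, with "cache" as the example pool name, and metadata left on RAID1.)
         btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
         btrfs filesystem df /mnt/cache   # confirm Data now shows "single"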
  19. I haven’t tried so I couldn’t say for sure, but it would be a fun experiment (though zero doubt someone has already tried it). I may give it a go once I figure out my crashing issues. My thought is this: if you assign cores to the vm I see no reason why the vm wouldn’t see that as the physical limit. If you didn’t assign cores then I’m not sure, you might have issues since they would be shared. Again, a fun thing to play around with once I have the time…
  20. Have you tried running the second Unraid license as a VM? You'd still probably need a separate card and/or DAS for the array, but you wouldn't need the other physical server...
  21. I'm going to assume that fixed it and go ahead and mark this solved, but for posterity: it seems that disabling the UPS Monitor isn't enough if you want to use NUT with a USB connection; you also have to set the input in the UPS Monitor settings to anything but USB...
  22. Not reproducible in the sense that I can make it happen on demand, but recurring: it happens every couple of days. I tried switching the connection setting from USB to custom to see if maybe it was still detecting the connection even though it was disabled; I will post back if that solved the problem...
  23. I recently upgraded my UPS to a Bxterra, so I had to disable the default UPS monitor and install the NUT plugin, which worked fine. But now I'm getting this weird issue where after a few days the UPS monitor turns itself back on, dropping the NUT plugin from the dashboard/replacing it with the original with no info, and sending me a "lost contact with UPS" error. I checked the settings and the monitor is still set to disabled and the NUT plugin is still enabled, the data is still there on the plugin page, and the NUT page footer is still active and displaying the percentage and load. Is there something extra that needs to be done to disable the default UPS monitor? Or a way to uninstall it entirely perhaps?
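     (To check whether the stock monitor is being restarted behind NUT's back, this is what I've been watching, assuming the built-in monitor is apcupsd as on stock Unraid.)
         ps aux | grep [a]pcupsd      # is the old daemon running again?
         /etc/rc.d/rc.apcupsd stop    # stop it by hand if it is; adjust the path if the script lives elsewhere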
  24. This worked like a charm. Well, like a slightly tarnished charm, half the readings aren't there and it says the runtime is at 0 (it isn't), but it's a start at least. Thanks!
  25. Doesn't seem to pick it up, but there's also a NUT docker image now, I tried that and got a 401 error on trying to view it, may have to investigate that one further...