CorserMoon

Members
  • Posts: 57
Everything posted by CorserMoon

  1. OK, so for whatever reason, a valid music directory is required (I never had one set up in the past). I suppose I could have just removed that parameter as well.
  2. EDIT: SOLVED. I just upgraded from 6.12.4 (I think?) to 6.12.8. Upon reboot, the Plex docker is gone. Attempting to reinstall it via Previous Apps results in this error: Also, maybe unrelated, after clicking DONE, I get this error: Any ideas?
  3. WOO! Got it working by unassigning both cache drives, starting the array, stopping the array, reassigning the cache drives, and starting the array again. Thanks to THIS POST.
  4. @JorgeB OK, I reseated the nvme drives and the missing one is back. I reassigned it to its original spot but I am unable to start the array due to the above error (Wrong Pool State). Here is what the GUI looks like: Not sure how to overcome that error. Any help appreciated. Sorry to bug you.
  5. Now getting this error when trying to start the array. Latest diags attached. diagnostics-20231025-1554.zip
  6. So I manually deleted many gigs of data off the drive, but free space according to the GUI didn't change: still 279GB free. I tried running Mover, but it didn't seem to start, even though there is still data sitting on the cache drive that is configured to move onto the array when Mover is invoked. I then rebooted the server; the free space still didn't change, and the files that I deleted are back. I am stuck and don't know what I am doing wrong.

     EDIT: At this point it seems to make sense to reformat the pool (since I have the backup from the Backup/Restore Appdata plugin). Is there a guide on how to do this? I also still have the issue of the missing cache drive, so I'm not sure how to knock the cache pool back down to 1 drive (it won't let me change the number of devices from 2 back to 1). Or maybe it's a better idea to just pop in a replacement SSD so I'm back up to 2 drives first and then reformat the pool?

     Additional weird observations: As stated in my OP, I was also trying to add new drives to the array. I added them at the time but paused the disk-clear when I noticed issues. I've since removed the new disks, returning those array slots to "unassigned", but now every time I reboot the server, all those drives are back and disk-clear starts! I tried using one of the aforementioned HDDs to replace the missing cache drive and provide additional space so that btrfs would hopefully be able to balance, but the cache pool still mounts read-only and I received a new error (see the balance sketch below): Unraid Status: Warning - pool BTRFS too many profiles (You can ignore this warning when a pool balance operation is in progress)
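     The "too many profiles" warning means the pool has chunks allocated with more than one btrfs profile (e.g. leftover "single" chunks next to "raid1" ones after a failed device change). A minimal sketch of how that is usually inspected and converged, assuming the pool is mounted at /mnt/super_cache (path guessed from the pool name):

         # List how space is allocated per profile; mixed profiles
         # (e.g. both "single" and "RAID1") trigger the warning.
         btrfs filesystem usage /mnt/super_cache

         # Convert data and metadata back to one profile (raid1 here,
         # matching the original mirror); the warning should clear
         # once the balance completes.
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/super_cache

     Note that a balance can only run once the pool mounts read-write again, which is its own problem here.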
  7. Thanks so much for your help. Last questions for now: would it make sense that one of the cache drives dying led to this full-allocation issue? Could it be resolved by just replacing that one dead drive? I'm just trying to figure out whether I have one issue or multiple different issues.
  8. So what is the difference between allocation and free space? What would cause allocation to fill up, and is there a way to monitor for that (see the sketch below)? It's just weird that all this started happening after one of the cache drives just disappeared. Would full allocation cause this? I also just noticed that when the array is stopped and I am assigning/un-assigning disks, this error sporadically pops up briefly and then disappears: EDIT: I tried to start the Mover process to move any extraneous data off the cache drive, but Mover doesn't appear to be starting.
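     Background on the allocation side of the question: btrfs reserves space in large chunks, so a pool can run out of unallocated chunk space (and flip read-only) while the GUI still reports free bytes. A hedged way to monitor it, again assuming the /mnt/super_cache mount point:

         # "Device unallocated" is the number to watch; if it reaches
         # zero, btrfs cannot create new chunks even though "Free"
         # may still show hundreds of GB.
         btrfs filesystem usage /mnt/super_cache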
  9. I don't think it actually is full, though. The "Super_Cache" pool has two 1TB drives (super_cache and super_cache 2). One disappeared (aka missing), but everything was working fine after I acknowledged that it was missing, since the drives were mirrored (1TB actual space). I was having no issues with docker until this morning. I monitor that capacity closely and the pool was ~70% full before all this happened. The GUI currently shows the remaining drive (super_cache 2) with 279GB free. Strangely, du -sh super_cache/ shows a total size of 476GB. But regardless, it shouldn't be full. Side note: that link throws this error: You do not have permission to view this topic.
  10. I recently dismantled a secondary, non-parity-protected pool of several HDDs. Two of these drives are to replace the existing single parity drive of the array, with the remainder to be added to array storage. I have run into a lot of cascading issues, which has resulted in the docker service not starting. Here is the general timeline:

      • Stopped the array in order to swap a single 12TB parity drive for 2x14TB parity drives.
      • As soon as the array stopped, one of my 2 cache drives (2x1TB NVMe, mirrored) disappeared. It shows missing and is not in the disk dropdowns. My first thought is that it died.
      • Immediately restarted the array (without swapping the parity drives) and performed a backup of the cache pool to the array via the Backup/Restore Appdata plugin. Completed successfully. Everything, including docker, working normally. Ordered new NVMe drives to replace both.
      • Stopped the array and successfully swapped the parity drives as outlined earlier. Parity rebuilt successfully.
      • Stopped the array to add the remaining HDDs to array storage. Added them, started the array, and disk-clear started automatically as expected.
      • Got the notification "Unable to write to super_cache" (super_cache is the cache pool). Paused disk-clear and rebooted the server. Same error upon reboot.
      • In the interest of troubleshooting, I increased the docker image size to see if that was the issue, but the service still wouldn't start.

      I AM able to see/read files on the cache drive but can't write to it; a simple mkdir in the appdata share errors out saying it's a read-only file system (a few checks are sketched below). My best guess is that both NVMe drives failed? Or maybe the PCIe adapter they are in failed? Any thoughts or clues from the attached diagnostics as I wait for the replacement drives to arrive? diagnostics-20231025-1118.zip
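      For reference, a few hedged checks for a btrfs pool that has flipped to read-only, assuming it is mounted at /mnt/super_cache (hypothetical path):

          # Look for "ro" in the mount flags
          mount | grep super_cache

          # btrfs normally logs why it forced the filesystem read-only
          dmesg | grep -i btrfs

          # Per-device error counters (write/read/flush/corruption)
          btrfs device stats /mnt/super_cache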
  11. Thanks to help and recommendations from @JorgeB, I've learned that my cache pool (2 NVMe drives set to mirror) has some uncorrectable errors (based on scrub results). THIS older thread recommends backing the cache pool files up onto the array, wiping/reformatting the drives, and moving the files back onto the cache pool. What is the best practice for moving 600GB from these onto the array (a sketch is below)? Rsync via the webUI terminal? Krusader? Something else? And for the "wiping/reformatting" portion, is this the proper command? blkdiscard /dev/nvmeX
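      A minimal sketch of the rsync route, assuming the pool mounts at /mnt/super_cache and a backup share such as /mnt/user/cache_backup exists (both paths are placeholders):

          # Archive mode preserves permissions/ownership/timestamps;
          # the trailing slash on the source copies its contents.
          rsync -avh --progress /mnt/super_cache/ /mnt/user/cache_backup/

          # After verifying the copy, discard the whole NVMe namespace.
          # The block device is normally /dev/nvmeXn1 rather than
          # /dev/nvmeX, and this irreversibly erases everything on it.
          blkdiscard /dev/nvmeXn1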
  12. My Unraid server was non-responsive, so I had to force a reboot via IPMI. Upon reboot, I am getting the following error, and the docker tab is showing no docker containers installed: BTRFS: error (device nvme1n1p1) in btrfs_replay_log:2500: errno=-5 IO failure (Failed to recover log tree) I came across THIS post, which seems relevant, but their error was slightly different. Thoughts on how to proceed? (diags attached; one candidate fix is sketched below) EDIT: Here is another clue: the cache pool on which docker.img lives is showing as unmountable: corsermoon-diagnostics-20230615-1340.zip
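      From what I've read, the commonly suggested recovery for this specific "failed to recover log tree" error is to zero the btrfs log on the unmounted device, at the cost of only the last few seconds of writes. A hedged sketch against the device named in the error:

          # Run only while the pool is NOT mounted; this discards the
          # journal so the next mount skips the failing log replay.
          btrfs rescue zero-log /dev/nvme1n1p1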
  13. OK, thanks for the insight. Bad storms last night, and despite everything being plugged into UPSs, it could have been a flaky power issue.
  14. Hi all. Woke up this morning to Organizr not working (throwing a "not writeable" error) as well as many other dockers not operating as expected. Next step was checking the log file, which is 100% full. All disks/pools/shares are green and readable, though. The log is filled with BTRFS and rsyslog write errors (I am using a syslog server). Before I reboot to clear the log file, wanted your expert eyes on it (a quick check for what's filling the log is sketched below). executor-diagnostics-20220714-1053.zip
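      For anyone else chasing a full log: on Unraid, /var/log is a small tmpfs, so a single noisy error can fill it quickly. A quick hedged way to see what's eating it:

          # How full the log filesystem is
          df -h /var/log

          # Which file is actually consuming the space
          du -sh /var/log/* | sort -h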
  15. I ended up just sending a Power Off command via IPMI, which essentially forced power off. After rebooting, the NIC came back up, but I can't find in the logs what was holding up the shutdown. I have a syslog server running as well, but the only entries I see for today are from when I powered it back on. I don't see the powerdown command.
  16. So earlier today I suddenly lost connection to my unraid box. After troubleshooting, I determined that the NIC is dead (Mellanox ConnectX-2). So I IPMI'd into the motherboard and used the iKVM console to log into unraid via the CLI and issued the command 'powerdown'. Problem is that it has been sitting at 'Shutdown Nginx gracefully...' for 30 minutes. Do I have any options besides power cycling it (a couple of checks are sketched below)? Really trying to avoid that and the 30-hour parity check.
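      A couple of hedged things to try from the iKVM console before pulling power, assuming a second login is still possible:

          # Is nginx actually the thing still running?
          ps aux | grep -i nginx

          # Open files on user shares often keep the array from stopping
          lsof /mnt/user 2>/dev/null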
  17. I'm thinking it is either weirdness with my gateway (ATT fiber gateway) or corruption/conflicts in the unraid routing table. I may try resetting the unraid network settings to see if that helps. I'm also in the process of building a pfsense box and bypassing the gateway. Hopefully one of those fixes the issue.
  18. With only my router IP as the DNS, I can only access unraid (192.168.1.107) but have no internet (http://www.google.com, for example) and can't reach other devices on my LAN such as 192.168.1.254 (router), 192.168.1.111 (managed switch), or 192.168.1.201 (Hubitat), etc. If I add 8.8.8.8 to the DNS record (so it's then 192.168.1.254,8.8.8.8), I can access unraid (192.168.1.107) and the internet (Google, etc.), but still no other LAN IPs. Right now I'm at my in-laws' on their network, which is 192.168.68.x, so that shouldn't be a conflict. (A couple of routing checks are sketched below.)
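      Worth noting when reading the above: reaching a bare LAN IP doesn't involve DNS at all, so the LAN part of this smells like routing rather than name resolution. Hedged checks from the unraid console:

          # The default route and the 192.168.1.0/24 on-link route
          # should both be present
          ip route

          # Raw reachability to the router by IP, bypassing DNS entirely
          ping -c 3 192.168.1.254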
  19. Yea, similar issue to me (though I don't use pihole). I can only access unraid when I have the DNS set to my router, but then no internet and no LAN. If I add a public DNS like 8.8.8.8, I can then access the internet, but still no LAN. I've read through dozens of threads and reddit posts and still haven't been able to get local LAN access to work.
  20. Yeah, that's why I originally went with an open rack. Will have to figure out proper ventilation without all the noise.
  21. Thanks for the help. All disks green and parity check started. I think I may be ok.