I have Hotio's Duplicacy container running but I am unable to get SMTP emails working. The container config is the same as other containers that have working SMTP emails (bridge network, SMTP server address, username, password, TLS port 465), but when testing the email in the web UI I get the error: Failed to send the email: read tcp 172.x.x.x:37342->198.x.x.x:465: i/o timeout, where 172.x.x.x is the container IP and 198.x.x.x is the SMTP server IP. There are no container logs, and the Duplicacy logs in appdata say the same thing without any additional info. I've confirmed that I can ping and telnet to the SMTP server from within the Duplicacy container. I just can't figure out why it's not working. Any other ideas?
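For anyone chasing the same symptom, a minimal check sketch, assuming openssl is available inside the container image and the container is named duplicacy; smtp.example.com stands in for the real SMTP server. This tests whether the TLS handshake on port 465 actually completes, not just whether the port accepts a TCP connection:

docker exec -it duplicacy sh
openssl s_client -connect smtp.example.com:465

If s_client times out the same way, the problem is at the network/firewall level rather than in Duplicacy's SMTP settings.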
-
Came here to say that this fixed it for me.
-
So I manually deleted many gigs of data off the drive, but free space according to the GUI didn't change, still 279GB free. I tried running Mover, but it didn't seem to start: data that is configured to move onto the array when Mover is invoked is still sitting on the cache drive. I then rebooted the server; the free space still didn't change, and the files that I deleted are back. I am stuck and don't know what I am doing wrong.

EDIT: At this point it seems to make sense to reformat the pool (since I have the backup from the Backup/Restore Appdata plugin). Is there a guide on how to do this? I also have the issue of the missing cache drive, so I'm not sure how to knock the cache pool back down to 1 drive (it won't let me change the number of devices from 2 back to 1). Or would it be a better idea to pop in a replacement SSD so I'm back up to 2 drives first and then reformat the pool?

Additional weird observations: As stated in my OP, I was also trying to add new drives to the array. At that time I added them but paused the disk-clear when I noticed issues. I've since removed the new disks, returning those array slots to "unassigned", but now every time I reboot the server, all those drives are back and disk-clear starts! I also tried using one of the aforementioned HDDs to replace the missing cache drive and provide additional space, hoping btrfs would be able to balance, but the cache pool is still mounting read-only and I received a new error: Unraid Status: Warning - pool BTRFS too many profiles (You can ignore this warning when a pool balance operation is in progress)
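For what it's worth, the "too many profiles" warning generally means the pool now contains block groups in more than one profile (for example some raid1 and some single) after devices were added or removed. Once the pool can be mounted writable again, a balance with convert filters is the usual way to consolidate them. A sketch only, assuming the pool is mounted at /mnt/super_cache and raid1 is the intended profile:

# convert any data/metadata chunks not already raid1 ("soft" skips ones that are)
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/super_cache

# check progress
btrfs balance status /mnt/super_cache

While the pool is stuck read-only, the balance won't run, so that has to be resolved first.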
-
So what is the difference between allocation and free space? What would cause allocation to fill up, and is there a way to monitor for that? It's just weird that all this started happening after one of the cache drives just disappeared. Would full allocation cause this? I also just noticed that when the array is stopped and I am assigning/un-assigning disks, this error sporadically pops up briefly and then disappears:

EDIT: I tried to start the Mover process to move any extraneous data off the cache drive, but Mover doesn't appear to be starting.
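On btrfs, "allocated" is the space the filesystem has already carved out into data/metadata chunks; that can reach 100% of the device even while the GUI still reports free space inside those chunks, and once it does, writes that need a new metadata chunk start failing. A rough way to keep an eye on it, assuming the pool is mounted at /mnt/super_cache:

btrfs filesystem usage /mnt/super_cache

The lines to watch are "Device allocated" vs "Device unallocated" and "Free (estimated)"; if unallocated drops to or near zero, a balance is usually what reclaims it.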
-
I don't think it actually is full though. The "Super_Cache" pool has two 1TB drives (super_cache and super_cache 2). One disappeared (aka missing), but everything was working fine after I acknowledged that it was missing, since the drives were mirrored (1TB actual space). I was having no issues with docker until this morning. I monitor that capacity closely and they were ~70% full before all this happened. The GUI currently shows the remaining drive (super_cache 2) with 279GB free space. Strangely, du -sh super_cache/ shows a total size of 476GB. But regardless, it shouldn't be full.

Side note: that link throws this error: You do not have permission to view this topic.
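Part of the discrepancy may just be how the numbers are counted: du -sh sums logical file sizes, while the pool's free space reflects how much mirrored (raid1) chunk space is left on the devices btrfs can still see. To compare like with like, the per-profile totals can be checked with something like this, assuming the pool is mounted at /mnt/super_cache:

btrfs filesystem df /mnt/super_cache

That prints lines such as "Data, RAID1: total=... used=..." and makes it clearer whether the pool is out of raw space or only out of allocatable chunks on the surviving device.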
-
I recently dismantled a secondary, non-parity-protected pool of several HDDs. Two of these drives are to replace the existing single parity drive of the array, and the rest are to be added to array storage. I have run into a lot of cascading issues, which has resulted in the docker service not starting. Here is the general timeline:

1. Stopped the array in order to swap a single 12TB parity drive for 2x14TB parity drives. As soon as the array stopped, one of my 2 cache drives (2x1TB NVMe, mirrored) disappeared. It shows as missing and is not in the disk dropdowns. My first thought is that it died.
2. Immediately restarted the array (without swapping the parity drives) and performed a backup of the cache pool to the array via the Backup/Restore Appdata plugin. Completed successfully. Everything, including docker, working normally. Ordered new NVMe drives to replace both.
3. Stopped the array and successfully swapped the parity drive as outlined earlier. Parity rebuilt successfully.
4. Stopped the array to add the remaining HDDs to array storage. Added them, started the array, and disk-clear started automatically as expected.
5. Got the notification "Unable to write to super_cache" (super_cache is the cache pool). Paused disk-clear and rebooted the server. Same error upon reboot.

In the interest of troubleshooting, I increased the docker image size to see if that was the issue, but the service still wouldn't start. I AM able to see/read files on the cache drive but can't write to it. A simple mkdir command in the appdata share errors out saying it's a read-only file system. My best guess is that both NVMe drives failed? Or maybe the PCIe adapter they are in failed? Any thoughts or clues from the attached diagnostics as I wait for the replacement drives to arrive?

diagnostics-20231025-1118.zip
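When btrfs hits write errors it will often force the whole filesystem read-only rather than risk corrupting it, which would explain why reads still work. Before assuming both NVMe drives are dead, a couple of quick checks can narrow it down; a sketch, assuming the pool is mounted at /mnt/super_cache and the surviving drive is /dev/nvme0 (adjust to the actual device):

# per-device btrfs error counters (write/read/flush/corruption/generation)
btrfs device stats /mnt/super_cache

# kernel messages from around the time the pool went read-only
dmesg | grep -iE 'btrfs|nvme'

# NVMe health/SMART data for the surviving drive
smartctl -a /dev/nvme0

If the errors point at only one device, or at both dropping out at once (which would implicate the PCIe adapter), that changes what actually needs replacing.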
-
Thanks to help and recommendations from @JorgeB, I've learned that my cache pool (2 NVMe drives set to mirror) has some uncorrectable errors (based on scrub results). THIS older thread recommends backing the cache pool files up onto the array, wiping/reformatting the drives, and moving the files back onto the cache pool. What is the best practice for moving 600GB from these onto the array? Rsync via the web UI terminal? Krusader? Something else? And for the "wiping/reformatting" portion, is this the proper command? blkdiscard /dev/nvmeX
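In case it helps, a minimal rsync sketch for the copy step, assuming the pool is mounted at /mnt/super_cache and /mnt/user0/cache_backup is an array-only share created for the backup (both paths are examples, not anything from the thread):

# dry run first to see what would be copied
rsync -avhn /mnt/super_cache/ /mnt/user0/cache_backup/

# actual copy, with progress
rsync -avh --progress /mnt/super_cache/ /mnt/user0/cache_backup/

The trailing slashes matter (copy the contents, not the directory itself). As for blkdiscard, it discards every block on the target device, so it is worth confirming the device name with lsblk before running it against the real /dev/nvmeX.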