tomwhi Posted February 14, 2023

Hi guys, I'm really struggling - I lost my dad yesterday so my brain is a little foggy, but I've woken up to find my Unraid server broken: it has lost all its shares, and one array disk seems to be missing data / showing the wrong data. My shares aren't showing on the SHARES tab, and none of my dockers are starting - I assume because something with Disk1 is messed up.

My setup is:
HP Microserver Gen10
3x array disks (no parity) (sdb, sdc, sdd)
Cache drive on 50GB SSD (sde)
Flash on 8GB USB stick (sda)

When I click on sdb (Disk 1 in the array), I only see system files and none of my data. I would expect to see a list of share folders in there, not a Linux root folder layout. When I click on Disk2 and Disk3 I see all the data I expect to see.

"Fix Common Problems" suggests that Disk 1 is read-only or full. But it's not full - the "Used" space shown is what I expect for this disk, as Disk 3 is where all my new data is being written now. And I can't see it marked read-only anywhere; the data just looks wrong when I click "View".

I've attached diagnostics in case that helps anyone. I apologise for not giving more information, but I wasn't expecting this fault to happen so soon after a family tragedy. I do have backups of the data, so all is not lost; it just means I'd need to rebuild all my apps again if Unraid is really that broken, and the restore process would take ages, so I'd like to avoid it. Please try not to judge my setup too harshly - it works for me, and we can circle back to what I could have done better later.

Thank you so much for understanding, and for any help you're able to offer!

Tom

tomnas-diagnostics-20230214-0843.zip
JorgeB Posted February 14, 2023

Check filesystem on disk1.
tomwhi Posted February 14, 2023

25 minutes ago, JorgeB said:
Check filesystem on disk1.

Thank you - attached is the output of the filesystem check against Disk1. It looks like something didn't finish with a success message, so I tried to run the same command without the "-n" flag. However, when I run the following (with the array still up in maintenance mode) I get this error:

root@TomNAS:/dev# xfs_repair -v /dev/sdb1
Phase 1 - find and verify superblock...
        - block cache size set to 709656 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 859423 tail block 859419
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

Do you think I need to stop the array to carry out the fix?

checkdisk.txt
itimpi Posted February 14, 2023 (Solution)

That message is quite normal, as described in the online documentation on repairing an XFS file system. You need to run without -n and add -L.
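For anyone finding this thread later, the overall sequence can be sketched roughly as below. This is only an illustration: the script builds (and prints) each command rather than running it, the device name is an example, and on Unraid the repair is normally pointed at the md device (e.g. /dev/md1) from Maintenance mode so parity stays in sync (with no parity disk, the raw sdX1 partition also works).

```shell
#!/bin/sh
# Sketch only: build (not run) the xfs_repair invocation for each stage.
# DEV is a placeholder for this example.
DEV=/dev/md1

DRY_RUN="xfs_repair -n $DEV"    # 1. read-only check, changes nothing
REPAIR="xfs_repair -v $DEV"     # 2. real repair; refuses if the log is dirty
ZERO_LOG="xfs_repair -vL $DEV"  # 3. last resort: zero the log first
                                #    (only after a mount/unmount attempt fails)
printf '%s\n%s\n%s\n' "$DRY_RUN" "$REPAIR" "$ZERO_LOG"
```

The key point from the error message in the post above: try mounting and cleanly unmounting the filesystem first (that replays the log), and only fall back to -L if the mount itself fails.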
tomwhi Posted February 14, 2023

1 hour ago, itimpi said:
That message is quite normal as described in the online documentation on repairing an XFS file system. You need to run without -n and add -L.

Thank you! I'm normally way better at this, but my brain is all foggy. Is there any risk that -L removes or wipes data off the disk, other than what's already corrupted?
JorgeB Posted February 14, 2023

There's always some risk of data loss when repairing a filesystem, but it's basically the only option, other than formatting the disk and restoring from a backup.
itimpi Posted February 14, 2023

18 minutes ago, tomwhi said:
Is there any risk that -L removes or wipes data off the disk, other than what's already corrupted?

The worst the -L option will do (if anything) is remove the last changes made to the drive.
tomwhi Posted February 14, 2023

Thank you so much! The array is back online and all my shares are there! However the dockers aren't - maybe the docker.img was corrupted? Any advice on where to start with that? I've uploaded fresh diagnostics, but I can't see anything obvious in the logs as to why the dockers aren't defined any more…

tomnas-diagnostics-20230214-1730.zip
trurl Posted February 14, 2023

Some of your system share is on disk1. Possibly it was all recreated on cache while disk1 wasn't working. You want it all on cache anyway. Did you have any VMs? You can probably just reinstall your containers and delete those system folders from disk1.

https://wiki.unraid.net/Manual/Docker_Management#Re-Installing_Docker_Applications
JorgeB Posted February 14, 2023

The Docker image is mounting, but based on the transid it's new - it was possibly on disk1 before the new one was created on cache. You can point Docker at the old one, or just re-add all your containers using the Previous Apps option in CA.
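In case it helps anyone else hitting this: one way to see whether a stale docker.img is still sitting on an array disk is simply to search for it. A hypothetical sketch - on a real Unraid box you'd point it at /mnt/disk1, /mnt/cache, etc. (the usual Unraid mount points); here a temp directory stands in for them so the snippet is safe to run anywhere:

```shell
#!/bin/sh
# find_docker_imgs: list any docker.img files under a given root.
# On Unraid you'd call it with /mnt/disk1, /mnt/cache, and so on.
find_docker_imgs() {
    find "$1" -maxdepth 4 -type f -name docker.img 2>/dev/null
}

# Demo against a temp dir standing in for an array disk:
tmp=$(mktemp -d)
mkdir -p "$tmp/system/docker"
: > "$tmp/system/docker/docker.img"
find_docker_imgs "$tmp"
rm -rf "$tmp"
```

If the search turns up a copy on an array disk as well as on cache, the larger/older one is usually the original image JorgeB refers to.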
tomwhi Posted February 15, 2023

Oh gang, this is perfect. Thank you so much for supporting me during this mega-stressful time. I'll try to work out what happened another day, but for now I'm mostly back up and running. Thank you again!