JohnnyT Posted November 2, 2016
Last night I got an alert that my array was down. I saw that one of the new drives I added a few days ago was unmountable. I tried stopping the array and rebooting, but the drive was still unmountable. I then went into maintenance mode and tried the fix-drive-errors check, which did not help. It pointed me to run xfs_repair -v /dev/sdk, which told me a superblock was missing, and that in turn pointed me to run reiserfsck --rebuild-sb /dev/sdk to rebuild the superblock. That failed and told me to run --rebuild-tree, which I am running on that drive now. So... yeah, did I lose all my data on that drive? It is currently rebuilding or something, but I did not have a parity drive yet; the 6TB drive I meant to use for parity did not pass its test and I was waiting for a replacement. I just set up unRAID on this machine a few weeks ago. Any help would be great, as I am in deeper than I understand.
JorgeB Posted November 2, 2016
You should have asked for help earlier. You can't use just sdX; you need to add a 1 for the partition. And even though that would work, it would put your parity out of sync: on unRAID you always need to use the mdX identifier, replacing X with the disk number. Also, xfs_repair is for XFS-formatted disks, and reiserfsck is only for ReiserFS-formatted disks. If the disk is XFS, abort the current check and run (with the array in maintenance mode):

xfs_repair -v /dev/mdX

X = disk number. Maybe you can still salvage something.
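[For reference, a quick way to confirm which filesystem a disk actually uses before picking a repair tool. A minimal sketch; the partition and disk numbers below are assumed as examples:

# Print the filesystem type of the data partition (partition 1 assumed)
blkid /dev/sdk1
# With the array started in maintenance mode, repair through the md device
# so parity stays in sync; replace X with the disk number
xfs_repair -v /dev/mdX
]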
JohnnyT Posted November 2, 2016
Yeah, I know. I thought I was following the guide and doing OK, then I realized I had run the wrong test and was in too deep. I will kill it when I get home and run the xfs_repair check.
JohnnyT Posted November 2, 2016
I was able to SSH in and stop the current check. Any ideas?

root@davidFlix:~# xfs_repair -v /dev/md7
Phase 1 - find and verify superblock...
        - block cache size set to 1434064 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 121553 tail block 121539
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
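[As the error message suggests, mounting the filesystem first gives the journal a chance to replay before resorting to -L. A minimal sketch, assuming disk 7 and a temporary mount point:

# Try to mount so the journal is replayed, then unmount and re-run the repair
mkdir -p /mnt/test
mount /dev/md7 /mnt/test
umount /mnt/test
xfs_repair -v /dev/md7
]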
JorgeB Posted November 2, 2016
Run:

xfs_repair -v -L /dev/md7
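[For anyone following along: -L zeroes the journal and can discard recent metadata updates, so a read-only dry run first is a reasonable precaution. Same disk number assumed:

# -n checks the filesystem without modifying anything
xfs_repair -n /dev/md7
# Then zero the log and repair for real
xfs_repair -v -L /dev/md7
]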
JohnnyT Posted November 2, 2016
That seemed to fix it; not sure if any data is messed up or not. All my dockers started and are working, and all my data looks to be there. Is there a good way I can check my data? Does the read check do that? Thank you for the help.
JorgeB Posted November 2, 2016
Unless you're using the checksum plugin it's not easy to verify. Check the lost+found folder; any incomplete files should have been moved there.
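[Without pre-existing checksums there is no way to prove file integrity after the fact, but generating them now guards against the next incident. A minimal sketch using plain md5sum rather than the plugin; the share path is assumed:

# Build a checksum manifest for an existing share
find /mnt/user/Movies -type f -exec md5sum {} + > /boot/movies.md5
# Verify against it later
md5sum -c /boot/movies.md5
]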
JohnnyT Posted November 2, 2016
I don't, but I will take a look at that for the future. I just started using unRAID a few weeks ago, so I still don't know all the best practices.

root@davidFlix:/mnt/user/lost+found# ls
/bin/ls: reading directory '.': Input/output error

I am guessing this means the directory is empty, or is there more to it than that?
JorgeB Posted November 2, 2016
Try /mnt/disk7/lost+found
JohnnyT Posted November 2, 2016
There is one file in there, 0 bytes, called 3314897. Not sure if that means anything.
JorgeB Posted November 2, 2016
If it's 0 bytes there's nothing you can do about it; most likely it was nothing important.
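[For tidying up afterwards, zero-byte stubs in lost+found can be listed and, once you're sure nothing is recoverable, removed. Disk 7 path assumed:

# List zero-byte files left behind by the repair
find /mnt/disk7/lost+found -type f -size 0 -ls
# Remove them once you're confident they hold nothing recoverable
# find /mnt/disk7/lost+found -type f -size 0 -delete
]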