mr_lego Posted February 16, 2017 Hi, I have been having an issue for the last 3 days with one of my hard drives, which reports "Unmountable disk present" (please see the attached screenshot and diagnostics file). I had been running version 6.2.4 for the last month, but yesterday I reverted Unraid OS to version 6.1.9 and this made no difference. I currently have two 6TB WD Red hard drives and one 500GB SSD. One of the 6TB WD Red drives is reporting the issue and its status is "Unmountable". I have checked parity and run a full SMART test on Disk 1 (the one showing "unmountable"), which took about 11 hours, but I couldn't find any errors. I also can no longer see my shares. I have attached the full diagnostics but I don't know how to resolve this. Could someone please advise what the next step should be? Thank you very much in advance. T. blue-diagnostics-20170216-0028.zip
trurl Posted February 16, 2017 Wiki: Check Disk Filesystems
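If it helps to see what that wiki page amounts to in practice: with the array started in Maintenance mode, the check is just xfs_repair run against the disk's md device. A minimal sketch, assuming the unmountable drive is Disk 1 and therefore /dev/md1 (adjust the number for a different slot):

xfs_repair -n /dev/md1   (read-only check; reports problems without changing anything)
xfs_repair -v /dev/md1   (verbose repair)

The -n pass is optional, but it shows what a repair would touch before anything is written to the disk.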
John_M Posted February 16, 2017 Completely unrelated to your problem but worth mentioning anyway: assigning an SSD to an array slot is not advised. While it isn't forbidden and the system does allow it, it isn't supported and can cause problems in the future. Furthermore, its speed advantage is largely wasted as you have a mechanical parity disk. A better use of an SSD is as a cache disk, where it can be used to store your docker image and any VMs. There's a discussion thread here.
mr_lego Posted February 16, 2017 Hi, thank you for your prompt response. Sorry, I forgot to mention that I use the SSD only for running a VM and it's not part of the array. I have just run the xfs_repair command and got the following output:

root@Blue:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
- block cache size set to 2991032 entries
Phase 2 - using internal log
- zero log...
zero_log: head block 337074 tail block 337067
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
root@Blue:~#

Could you please confirm whether I should run this command with the "-L" option? Thank you. T.
mr_lego Posted February 16, 2017 Hi, I have just tried stopping the array, unassigning Disk 1, starting the array, stopping it, reassigning Disk 1 and starting it again, and I can now see from the GUI that a data rebuild has started. Will let you know how it goes. T.
gubbgnutten Posted February 16, 2017 Rebuilding will not fix filesystem corruption.
John_M Posted February 16, 2017 Yes, you need to use the -L option. Best to wait until the unnecessary rebuild completes. The SSD is assigned as Disk 5, which is part of the array.
mr_lego Posted February 16, 2017 I will wait until the rebuild is done and then run xfs_repair with the -L option. Thank you for your comments and prompt response. Will let you know how it goes, probably tomorrow. T.
John_M Posted February 16, 2017 When the rebuild completes the disk will still show as unmountable. The warning about data loss when using the -L option is perhaps a little misleading. It simply discards the journal, so any uncommitted transactions are lost, which in practice tend to be minimal. You'll need to have the array started in Maintenance mode, of course.
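To put the whole sequence together once the rebuild finishes, a rough outline, assuming Disk 1 is still the affected slot so the device is /dev/md1:

1. Stop the array.
2. Start it again in Maintenance mode.
3. From a terminal run: xfs_repair -v -L /dev/md1
4. Stop the array once more and start it normally; the disk should then mount.

Running the repair against the md device rather than the raw /dev/sdX1 partition keeps parity consistent with the changes the repair writes to the disk.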
trurl Posted February 17, 2017 Just thought I would note that you waited only 6 minutes for a reply before starting the unnecessary rebuild. Sometimes a little patience can actually save time.
mr_lego Posted February 17, 2017 Hi guys, this morning I ran the following command, which fixed my issue:

xfs_repair -v -L /dev/md1

Could someone please advise why this happened in the first place and what could cause it? Thank you. T.
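One small follow-up worth doing after a repair that zeroed the log (a suggestion, assuming the standard /mnt/disk1 mount point for Disk 1): once the array is started normally, look for a lost+found directory at the root of the repaired disk, which is where xfs_repair places any files it could not reattach to their original locations:

ls /mnt/disk1/lost+found

If the directory is missing or empty, no files were orphaned by the repair.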
trurl Posted February 17, 2017 "Could someone please advise why this happened in the first place and what could cause it?" Have you ever powered off by some means other than using the webUI?
mr_lego Posted February 17, 2017 Hi. I don't remember using the command line to shut down or reboot Unraid. About a month ago I upgraded Unraid OS from version 6.1.9 to 6.2.4 and performed a reboot afterwards. Thank you very much for your support, guys. It was very helpful.
John_M Posted February 17, 2017 Incorrect shutdown, perhaps from a loss of power or because the system crashed and had to be forced to power down, is the most common cause of file system corruption.
mr_lego Posted February 17, 2017 Thank you for the explanation and for your assistance. Much appreciated.