One disk failed, Unraid says it is emulated, but no data from that drive is accessible!


Solved by Kacper


Hi,

I am terrified because I have one failed drive.

I have shut down Unraid and removed the failed drive.

Now I can start the array; it says Disk 1 is emulated, but there is no data available from this drive. Does that mean my data is gone? That parity didn't protect my data?

Or maybe I need to put my failed drive back into the system, which would be quite nonsensical?

I do not want to buy a new drive and rebuild, because it is too expensive. I want to move my data to the second drive, as I have plenty of free space, and then recreate the array.

 

I have attached a screenshot of my array. When it is started I do not see /mnt/disk1 as I should. Is there anything I can do, or is this Unraid parity complete garbage and I have lost my data? Help!

2024-07-20_16h26_43.png

Edited by Kacper

You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog.

 

Handling of unmountable disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. 
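As far as I know, the same diagnostics zip can also be generated from a root console with the command below; it saves the zip to the logs folder on the flash drive, though the exact location may vary slightly by Unraid version:

diagnostics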

Posted (edited)

OK, I am reading it; however, it does not provide any new information about what is going on. My disk cannot be browsed even though it claims to be emulated.

 

2024-07-20_17h18_46.png

 

I was even able to run a check on Disk 1 in maintenance mode, but the data is still not accessible. I am wondering: if I add a 2TB drive, will it rebuild and show all my data? The thing is, I do not have a spare drive and I do not want to spend money on one, as I do not need that much capacity :/

nas-diagnostics-20240720-1732.zip

Edited by Kacper
Posted (edited)

I ran a check on Disk 1 in maintenance mode, then I stopped the array and started it in normal mode, but Disk 1 does not mount and I cannot see any data in the shares that were on this drive.

 

Now, with the array in normal mode, I clicked "Check" and it will be checking for the next 5 hours.

 

So the timeline here is as follows:

1. I remember that the first time I logged in to Unraid and saw Disk 1 with a red cross, it was being emulated and I could see the data.

2. Then I rebooted my Unraid server and Disk 1 was still failed.

3. Then I shut it down and removed the failed drive.

4. When I booted it again, there was no data from Disk 1.

5. Next I ran a check on Disk 1 in maintenance mode - still no data, disk1 not mounted.

6. Now I am running a check of the whole array in normal mode.

 

What has actually happened? Did parity get messed up when I unplugged the failed drive, or what could this be?

 

 

Edited by Kacper
2 hours ago, Kacper said:

5. Next I ran a check on Disk 1 in maintenance mode - still no data, disk1 not mounted.

Did you also do a repair (i.e. run without the -n option)? This is required to actually fix the file system on the emulated drive.

 

2 hours ago, Kacper said:

6. Now I am running a check of the whole array in normal mode

That may not be doing what you think! It is checking that the other drives can be read OK at the physical sector level. It is not checking that there is no corruption at the file system level on any drive.
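To illustrate the difference with a rough sketch (assuming Disk 1 is XFS, the array is started in maintenance mode, and md1p1 is the emulated device - these names are examples and may differ on your system): a file system check inspects the metadata of the emulated disk, e.g.

xfs_repair -n /dev/md1p1    # -n = inspect and report problems only, change nothing

whereas the parity/read check only verifies that the physical disks and parity can be read consistently. The file system check offered in the GUI runs something equivalent to the command above for you.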

Posted (edited)
45 minutes ago, itimpi said:

Did you also do a repair (i.e. run without the -n option)? This is required to actually fix the file system on the emulated drive.

I clicked Check in the Unraid GUI - I don't know what command runs underneath. How should I run this command? If the disk is emulated, what would be the device name, and therefore the command?

 

45 minutes ago, itimpi said:

Did you also do a repair (i.e. run without the -n option)? This is required to actually fix the file system on the emulated drive.

 

That may not be doing what you think! It is checking that the other drives can be read OK at the physical sector level. It is not checking that there is no corruption at the file system level on any drive.

I am not sure what it is doing. On one hand I think it is running some kind of file system check, but on the other hand the GUI says "Parity operation is running", so maybe it is recalculating the emulated drive from parity?

 

That is the thing - the Unraid GUI is not very intuitive, and I use it rarely, so it is even more difficult to remember how to do things.

 

I have found this information in the logs:


[  976.358462] XFS (md1p1): Mounting V5 Filesystem
[  977.592064] XFS (md1p1): Corruption warning: Metadata has LSN (179:604764) ahead of current LSN (179:604745). Please unmount and run xfs_repair (>= v4.3) to resolve.
[  977.592082] XFS (md1p1): log mount/recovery failed: error -22
[  977.592264] XFS (md1p1): log mount failed

 

I think this is the broken Disk 1, which might mean I have to run xfs_repair /dev/md1p1? I have no idea if this thinking is correct.
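For anyone reading later, a sketch of what that would look like from a root console - assuming the array is started in maintenance mode so that md1p1 really is the emulated Disk 1 (double-check the device name on your own system first):

xfs_repair -n /dev/md1p1    # dry run: report problems, modify nothing
xfs_repair /dev/md1p1       # actual repair, only once the dry-run output looks sane

If xfs_repair refuses because of a dirty log and suggests mounting the filesystem, there is a -L option that zeroes the log, but that can discard the most recent metadata changes, so treat it as a last resort.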

Edited by Kacper
  • Solution
Posted (edited)

OK guys, huge thanks to "itimpi" - you gave me the tip on how to do this and I have figured it out!!!

 

Your key answer was here:

"Did you also do a repair (I.e. run without the -n option).   This is required to actually fix the file system on the emulated drive."

 

So my attempt to run a check while the array was started in normal mode did nothing. It just said 0 errors.

To conclude: when one drive fails, Unraid says "contents emulated" and the disk is marked with a red cross, but the content of the disk is not necessarily accessible (you cannot see a browse button next to the Disk 1 name - see my first post). In that case Unraid shows "Unmountable: blabla". In my case the filesystem on Drive 1 was corrupted.
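A quick way to confirm it is file system corruption rather than a dead emulated disk is to look for XFS errors in the syslog - a sketch, assuming Disk 1 is XFS and emulated as md1p1 (the device name may differ):

grep -i 'XFS (md1p1)' /var/log/syslog

Corruption warnings or "log mount failed" lines like the ones I posted earlier point at the file system rather than at parity.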

So what you need to do is:

1. Stop the array.

2. Start the array in maintenance mode.

3. Click on the removed or faulty disk and run a check for that one disk. However - !!! MOST IMPORTANT !!! - by default the check adds the -n option, which makes it a read-only check that does not fix anything. Remove "-n" and then start the check.

4. Stop the array.

5. Start the array in normal mode. Now you should be able to see and browse the data from the emulated drive (a quick console check is sketched below).
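A quick console sanity check after step 5, to confirm the emulated disk is mounted again (the paths are just examples):

df -h /mnt/disk1    # should show the disk mounted with sensible used/free numbers
ls /mnt/disk1       # your top-level folders/shares should be visible again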

 

2024-07-20_17h18_46.png

 

See the screenshot of my array before and after the check.

The solution is so simple, yet it was impossible for me to figure it out from the Unraid docs. The docs are quite counterintuitive to me.

The community is what makes this product really cool, and my faith in Unraid parity has been restored. Thanks again, Mr "itimpi" - I will get you a beer if you ever come to my city, Wrocław, Poland! ;)

Edited by Kacper

Oh hi,

OK, so to finish telling my story :)

After Disk 1 was mounted properly and I could see my data, I did the following steps:

 

1. Installed the Nerd plugin so I could install screen.

2. In a root console I ran screen and then mc (Midnight Commander).

3. I moved all data from /mnt/disk1 to /mnt/disk2 using the mc F6 (move) function. It took a whole day :) (An alternative console approach is sketched after this list.)

4. Then I confirmed that Disk 1 no longer contained any data.

5. I stopped the array.

6. I took a screenshot of the Main tab :)

7. I went to Tools -> New Config and created a new, empty array configuration, without copying any of the old configuration. Just a completely new array.

8. Then I went to the Main tab and assigned the hard drives to the new array, paying super careful attention that the parity drive and the cache drive were the same drives as before. I checked the disk IDs against the screenshot (point 6) ten times :D I added a new, smaller drive in the slot where Disk 1 was, as a replacement for the failed drive (the original drive was 2TB).

9. Then I started the array.

10. Then I went to the Docker tab to stop all Docker containers, as they will degrade the parity rebuild performance a lot! I also stopped all VMs.

11. After parity was rebuilt and the array was protected again, I moved some of the most important data from Disk 2 to the new Disk 1. In my case this is appdata, because ownCloud is the main reason I am using a self-hosted NAS and Unraid. All the other Docker containers are my playground, but ownCloud I use in my job to sync data between PCs, so it is feeding my family - it is priority no. 1! :)
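As mentioned in step 3, here is a rough console alternative to mc for the big move - just a sketch, assuming the data goes from /mnt/disk1 to /mnt/disk2 and that you run it inside screen so it survives a dropped SSH session (rsync is already included in Unraid, as far as I know):

screen -S diskmove                              # start a named screen session
rsync -avh /mnt/disk1/ /mnt/disk2/              # copy everything, preserving attributes
rsync -avhc --dry-run /mnt/disk1/ /mnt/disk2/   # checksum pass: should report nothing left to transfer
# detach with Ctrl-A then d, reattach later with: screen -r diskmove

Only delete the data from disk1 once the checksum pass comes back clean.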

 

Of course, if I had wanted to afford a new 2TB hard drive to replace the broken one (it must be of equal size or bigger!), then after fixing the filesystem on Drive 1 I could have simply rebuilt my array from parity. However, I didn't want to do it the simple way, for these reasons:

  • I didn't want to spend cash on a 2TB 2.5-inch HDD.
  • My space usage was low enough that I didn't need that much space.
  • I had a server-grade Intel SSD in my drawer, but it was only 0.5 TB. That is sufficient to keep my ownCloud data and maybe even the whole appdata folder. Also, my server runs quite hot, as it is a laptop in my garage, so I preferred to replace the HDD with an SSD, since an SSD is more tolerant of temperatures around 50-60 degrees Celsius.

Therefore I followed the procedure above to do it the way I wanted. The result is shown in the screenshot:

2024-07-22_17h36_53.png

 

So it is rebuilding parity now. Point 11 I am actually just about to do, roughly 30 minutes after writing this post :)

 

I hope that my summary will help anyone whose array gets degraded. Remember: do not format and do not erase anything before you are 100% sure what it will do. Better to wait and ask on the forum, wait patiently for an answer, and not panic or rush things, because your data might still be there!

That was my case. Cheers!

 

p.s.

I didn't mention that I had not rebooted my server for almost two years, so when my Disk 1 drive failed, I rebooted and it booted correctly, but the second time I rebooted it didn't boot. It turned out that at the same time my flash drive had a faulty USB connection, so it kept disappearing and reappearing. Obviously my flash drive backup was one year old :D So I had to deal with two failures at the same time, but when it comes to the flash drive, just:

  1. Put the old flash drive in any PC and copy the whole contents to a folder on your PC (if this does not work, you can only fall back to a backup if you have one, or maybe partially recover configuration files, e.g. share configurations, etc.). I didn't have an up-to-date backup ;) but luckily I was able to copy all the files from the failing flash drive (a console sketch of this copy follows below).
  2. Then use the Unraid tool to format the new flash drive and do a fresh install on it.
  3. Then put the new flash drive into the PC, delete all files from the FAT drive, and copy over all the files from the folder where you stored the contents of the old/failing flash drive.
  4. Safely remove the flash drive.
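A minimal sketch of the copy in steps 1 and 3, assuming you do it from a Linux PC and the flash drive mounts somewhere like /media/UNRAID (the mount point and backup folder are just examples):

cp -a /media/UNRAID/. ~/unraid-flash-backup/    # step 1: back up everything from the old flash
cp -a ~/unraid-flash-backup/. /media/UNRAID/    # step 3: copy it back onto the freshly prepared flash

On Windows a plain drag-and-drop copy of the whole drive does the same job.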

 

 

