2 disks failed - how to proceed?

I am in a specific situation on my unRaid server with 4 data disks + 1 parity + 1 cache:

One of my disks went offline > the reason was a failure of the add-on card for additional SATA ports.

I replaced the add-on card and the SATA cable, and the disk is now visible in the array as "new" (although it is the same disk as before).

 

Before I could put it back into the array (several hours later), a different disk went offline with the following error: "Unmountable: No file system".

So, at the moment, I have:

- 2 disks in normal mode

- 1 disk marked as "new"

- 1 disk with error: "Unmountable: No file system"

 


 

 

The array starts automatically upon system boot, but part of the data is missing.

I stopped the array, started it in maintenance mode, tried to repair disk3 with xfs_repair, and got:

Phase 1 - find and verify superblock...

superblock read failed,

offset 562633482240, size 131072, ag 9, rval -1

fatal error -- Input/output error

I rebooted several times, and each time this 2 TB disk (disk3) behaves differently > sometimes the array starts, sometimes the disk is marked as offline, sometimes I cannot start the array in maintenance mode...

I suppose that this 2 TB disk is "dead".
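Whether a disk is really dead can usually be judged from its SMART report. A minimal sketch of the checks (here /dev/sdX is just a placeholder for disk3's device, and the `run` wrapper only prints the commands instead of executing them):

```shell
#!/bin/sh
# Sketch: quick SMART health check for a suspect disk.
# /dev/sdX is a placeholder device; `run` only prints each command --
# remove the wrapper to execute them for real.
DEV=/dev/sdX
run() { echo "+ $*"; }

run smartctl -H "$DEV"        # overall health verdict (PASSED/FAILED)
run smartctl -A "$DEV"        # attributes: watch Reallocated_Sector_Ct,
                              # Current_Pending_Sector, UDMA_CRC_Error_Count
run smartctl -t short "$DEV"  # queue a short self-test
```

A rising UDMA_CRC_Error_Count usually points at cabling rather than the disk itself, which would fit the SATA-cable history here.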

 

I have only 1 parity drive, so part of the data is lost at the moment.

Any suggestions on how to recover data using the dropped 3 TB disk (marked as "new") and the existing array? Or to rephrase the question: how can I return this disk to the existing array without a rebuild (since a different disk failed)?

Any other ideas are welcome.

 

Tower diagnostics file and SMART report for disk3 attached.

 

Thanks in advance.

tower-diagnostics-20201014-1556.zip

tower-smart-20201014-1631.zip


I switched cables again, and I am now able to access disk3 > it still reports read errors, but at least I can run the xfs_repair command.

attached:

- SMART report for disk3

- new diagnostics file

- output of the xfs_repair -n command

 

Should I run "xfs_repair -vL /dev/md3" (in maintenance mode) or something different to fix this error: "Unmountable: No file system"?
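For reference, the usual order of operations with xfs_repair is to check read-only first, then repair, and zero the log only as a last resort. A sketch (assuming disk3 maps to /dev/md3 while the array is in maintenance mode; the `run` wrapper only prints the commands):

```shell
#!/bin/sh
# Sketch of the usual xfs_repair order on an Unraid md device.
# DEV=/dev/md3 is an assumption for disk3 in maintenance mode;
# `run` only prints each command -- remove the wrapper to execute.
DEV=/dev/md3
run() { echo "+ $*"; }

run xfs_repair -n "$DEV"   # 1) read-only check: reports problems, writes nothing
run xfs_repair -v "$DEV"   # 2) actual repair, verbose
run xfs_repair -vL "$DEV"  # 3) last resort only: -L zeroes a dirty log and can
                           #    discard metadata updates that were in flight
```

Running against the md device (rather than the raw sdX device) keeps parity in sync with the repairs.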

 

 

 

 

tower-diagnostics-20201014-2202.zip tower-smart-20201014-2208.zip xfs_repair disk3.txt


I have finished ddrescue to a different disk (from the 2 TB disk to a 3 TB disk).

I ran the xfs_repair command on the new 3 TB disk, but no luck:

xfs_repair -vL /dev/sdb
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
........................................  (and this goes forever)
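One common cause of a "bad magic number" here is pointing xfs_repair at the whole-disk device instead of the partition: since ddrescue clones the whole disk, the partition table comes with it, and the XFS superblock lives on the first partition (e.g. /dev/sdb1), not on /dev/sdb. A sketch of the clone-then-repair sequence (device names and the map-file path are placeholders; the `run` wrapper only prints the commands):

```shell
#!/bin/sh
# Sketch: clone the failing disk with GNU ddrescue, then repair the
# *partition* on the clone. /dev/sdX (source), /dev/sdY (destination)
# and rescue.map are placeholders; `run` only prints each command.
SRC=/dev/sdX; DST=/dev/sdY; MAP=/root/rescue.map
run() { echo "+ $*"; }

run ddrescue -f -n "$SRC" "$DST" "$MAP"   # first pass: copy the easy areas, no scraping
run ddrescue -f -r3 "$SRC" "$DST" "$MAP"  # retry the bad areas up to 3 times
run xfs_repair -v "${DST}1"               # repair the partition (e.g. /dev/sdY1),
                                          # not the whole-disk device
```

The map file lets ddrescue resume and retry only the unread areas, so the two passes can be run repeatedly without redoing the good sectors.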

 

Any idea how to recover the data?

 

 


Thanks!

The command has now completed > I can access the disk data.

 

The funny thing is that xfs_repair also succeeded on the old, problematic disk.

Previously I tried the same thing several times with no success.

The only difference is that this time I did it via the terminal, not via the GUI as before (and I ran the ddrescue command in the meantime, with no system restart or other change).

 

The only issue I have at the moment is that I cannot mount the new 3 TB disk via the UD plugin, while I can mount the old problematic one.

Currently I am copying the data to the array, so it is no big issue for me.

 

Thanks again!

 

 

 

27 minutes ago, positive.bgw said:

The funny thing is that xfs_repair also succeeded on the old, problematic disk.

Possibly bad sector(s) got remapped, or it is just working for now; these failures can be intermittent.

 

27 minutes ago, positive.bgw said:

The only issue I have at the moment is that I cannot mount the new 3 TB disk via the UD plugin, while I can mount the old problematic one.

It should work, but note that only one of them can be mounted at any time, since they will have a duplicate UUID.
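If both copies ever need to be mounted at the same time, one option (a sketch only, not verified on this system; /dev/sdY1 is a placeholder for the cloned disk's partition) is to give the clone a fresh filesystem UUID with xfs_admin:

```shell
#!/bin/sh
# Sketch: give a cloned XFS partition a new UUID so it no longer collides
# with the original. /dev/sdY1 is a placeholder; `run` only prints each
# command -- remove the wrapper to execute. The filesystem must be unmounted.
PART=/dev/sdY1
run() { echo "+ $*"; }

run xfs_admin -u "$PART"           # show the current UUID
run xfs_admin -U generate "$PART"  # write a new randomly generated UUID
```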

Link to comment

OK, all clear now.

I tried to mount both at the same time > that was the reason.

 

So, currently I am copying the data from the disk to an external disk.

The next step for me is to pre-clear the 3 TB disk, add it to the array, and return the data to the array.

 

Then I will move files inside the array (via mc or unBALANCE) > to somehow group the data on the disks; currently each share is spread across almost all disks.
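For the copy steps above, something like rsync is convenient because it preserves attributes and can be interrupted and re-run safely. A sketch (the source and destination paths are placeholders; the `run` wrapper only prints the command):

```shell
#!/bin/sh
# Sketch: copy recovered data back to an array share with rsync.
# Both paths are placeholders; `run` only prints the command --
# remove the wrapper to execute it for real.
SRC=/mnt/disks/external/recovered/
DST=/mnt/user/share/
run() { echo "+ $*"; }

run rsync -avh --progress "$SRC" "$DST"  # -a preserves permissions/times,
                                         # -v verbose, -h human-readable sizes
```

The trailing slash on the source path makes rsync copy the directory's contents rather than the directory itself.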

 

 
