
unRaid - Disk 4 Unmountable - No file system (32)


phuketmymac


Hi,

 

I am a technician trying to help a customer with his unRAID system.

I am a Debian user myself, but I have never worked with the unRAID distro and haven't touched ReiserFS yet, so excuse my ignorance.

 

The customer came in with a custom PC holding 5 (3TB) drives, 2 of which were unmountable: one with an XFS partition (MD2) and the other with ReiserFS (MD4).

I could see the system at boot trying to locate an XFS partition on MD4 and a ReiserFS partition on MD2.

My conclusion was that the system changed the drive order (or the customer did).

 

Anyway, I changed MD2 to XFS and MD4 to ReiserFS in the main menu of the web interface and restarted.

 

An XFS repair was necessary, and it apparently completed successfully.

However, for the ReiserFS partition, I am now getting "Unmountable - No file system (32)" on MD4.

 

I tried running reiserfsck --check /dev/md4 but it returned the error: "Failed to open the device /dev/md4 : no such file or directory"

 

The dmesg command shows that MD4 has been detected.

 

Before going further and making any mistakes, can someone please advise me on what to do?

I am attaching the zip file from the diagnostics menu...
Thanks.

h3mediaserver-diagnostics-20180603-0130.zip

Link to comment

Changing the filesystem on a disk in unRAID will format it. Also, unRAID tracks disks by their serial number. So it would be very unusual for a user to accidentally reorder the disks, or for the user to accidentally change the filesystem on a disk.
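
As an aside, you can double-check which physical drive is which from the console; the by-id names embed the model and serial, and listing them is read-only, so it is safe to run:

```shell
# List drives by model and serial; the grep drops the per-partition
# entries so each physical drive shows up once. This only reads
# udev's symlinks -- nothing on the disks is touched.
[ -d /dev/disk/by-id ] && ls -l /dev/disk/by-id/ | grep -v -- '-part' || true
```

Compare those serials against what the Main page shows for each slot.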

 

And these diagnostics were taken immediately after a reboot, so we don't have any information about what happened before that, especially including any of the changes you made. It would have been a lot better if you had asked for help before doing anything.

 

So did the XFS repair actually successfully recover the files on that disk?

 

Are you doing the repairs in Maintenance Mode?

 

 

Link to comment

Hi,

 

Thanks for your replies, both.

 

The XFS filesystem was repaired successfully and we got the data back.

The original problem may have been that the ReiserFS partition on disk4 failed and the customer tried moving the disk to a different SATA port...

 

@trurl, it's scary that the system would format the disk without notifying you when you have to change the FS type in the dashboard... (though it did not do that for disk2 with XFS)

 

Anyway, disk2 is back online with its data.

I just set disk4 to auto and it is still unmountable.

 

Diag attached.

h3mediaserver-diagnostics-20180603-1814.zip

Link to comment

Here is the part of the log where it fails to mount disk4:

 

shcmd (112): set -o pipefail ; mount -t auto -o noatime,nodiratime /dev/md4 /mnt/disk4 |& logger
Jun  3 18:11:22 H3MediaServer logger: mount: wrong fs type, bad option, bad superblock on /dev/md4,
Jun  3 18:11:22 H3MediaServer logger:        missing codepage or helper program, or other error
Jun  3 18:11:22 H3MediaServer logger:        In some cases useful info is found in syslog - try
Jun  3 18:11:22 H3MediaServer logger:        dmesg | tail  or so
Jun  3 18:11:22 H3MediaServer logger:
Jun  3 18:11:22 H3MediaServer kernel: XFS (md4): Filesystem has duplicate UUID cb851111-1e4f-44c0-afaf-e1c853b5a19e - can't mount
Jun  3 18:11:22 H3MediaServer emhttp: shcmd: shcmd (112): exit status: 32
Jun  3 18:11:22 H3MediaServer emhttp: mount error: No file system (32)

Should I understand that the original FS on disk4 was XFS, or that the ReiserFS partition was altered by an XFS command and the system now thinks the FS type is XFS?
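
In case it is useful: from what I've read, XFS has a nouuid mount option that skips the duplicate-UUID check, so a read-only rescue mount might be possible if the filesystem is otherwise intact. A sketch of what I mean (the mount point is just an example I made up):

```shell
MNT=/mnt/rescue   # example mount point of my own choosing, not an unRAID path
# mkdir -p "$MNT"
# Read-only plus nouuid: look at the data without writing anything
# and without tripping over the duplicate UUID.
# mount -t xfs -o ro,nouuid /dev/md4 "$MNT"
true
```

That wouldn't fix anything, but it might let me copy data off first.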

Link to comment
2 hours ago, phuketmymac said:

it's scary that the system would format the disk without notifying you

It doesn't; you need to click the format button.

 

As for disk4, the only logical way for a disk to end up with a duplicate UUID is if it was previously rebuilt or cloned from an existing disk, but you can easily change it with:

 

xfs_admin -U generate /dev/sdX1

 

Replace X with the correct letter for disk4; it should still be d, as in the diags posted.
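
If you want to be careful, check the UUID before and after; blkid only reads it. The exact device name below is an assumption taken from your diags:

```shell
DEV=/dev/sdd1   # assumed disk4 partition -- verify against your diagnostics

# Read-only: print the filesystem type and the current (duplicate) UUID.
[ -b "$DEV" ] && blkid "$DEV"

# Then write a fresh random UUID and confirm it changed:
# xfs_admin -U generate "$DEV" && blkid "$DEV"
true
```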

 

 

Link to comment

Thanks for replying.

 

Here is what I got after executing the command:

Metadata CRC error detected at block 0x575428d9/0x200
xfs_admin: cannot init perag data (74). Continuing anyway.
xfs_admin: only 'rewrite' supported on V5 fs

 

Trying to remount the drives failed in the same way.

Could it be a bad sector on the disk?

 

In the meantime, I will run the badblocks command just in case...
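
For the record, here is what I plan to run, in the default read-only mode and against the whole drive rather than the partition (drive letter assumed from the diags):

```shell
DISK=/dev/sdd   # disk4 according to the diagnostics -- double-check first

# Read-only surface scan: -s shows progress, -v reports errors as found.
# Expect this to take many hours on a 3 TB drive.
# badblocks -sv "$DISK"

# The drive's SMART attributes are a much quicker first look:
# smartctl -a "$DISK"
true
```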

Link to comment

Here is some more info...

mdcmd (2): import 1 8,0 2930266532 WDC_WD30EZRX-00MMMB0_WD-WCAWZ1961784
md: import disk1: [8,0] (sda) WDC_WD30EZRX-00MMMB0_WD-WCAWZ1961784 size: 2930266532
mdcmd (3): import 2 8,16 2930266532 WDC_WD30EFRX-68AX9N0_WD-WCC1T1271449
md: import disk2: [8,16] (sdb) WDC_WD30EFRX-68AX9N0_WD-WCC1T1271449 size: 2930266532
mdcmd (4): import 3 8,32 2930266532 WDC_WD30EZRX-00MMMB0_WD-WCAWZ1906209
md: import disk3: [8,32] (sdc) WDC_WD30EZRX-00MMMB0_WD-WCAWZ1906209 size: 2930266532
mdcmd (5): import 4 8,48 2930266532 WDC_WD30EZRX-00D8PB0_WD-WMC4N2119373
md: import disk4: [8,48] (sdd) WDC_WD30EZRX-00D8PB0_WD-WMC4N2119373 size: 2930266532
mdcmd (6): import 5 8,64 2930266532 WDC_WD30EZRX-00D8PB0_WD-WMC4N1669933
md: import disk5: [8,64] (sde) WDC_WD30EZRX-00D8PB0_WD-WMC4N1669933 size: 2930266532

 

root@H3MediaServer:~# file -s /dev/md1  
/dev/md1: ReiserFS V3.6
root@H3MediaServer:~# file -s /dev/md2
/dev/md2: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
root@H3MediaServer:~# file -s /dev/md3
/dev/md3: ReiserFS V3.6
root@H3MediaServer:~# file -s /dev/md4
/dev/md4: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
root@H3MediaServer:~# file -s /dev/md5
/dev/md5: ReiserFS V3.6

 

root@H3MediaServer:~# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jun  4 03:22 9380a7d5-1b40-41eb-a4c1-18311f0ba216 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jun  4 03:22 F6AD-31D4 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jun  4 03:22 a6df5898-abc8-42da-822f-9967372f3f8c -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun  4 03:22 cb851111-1e4f-44c0-afaf-e1c853b5a19e -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jun  4 03:22 e0dcc176-1987-45bd-9ae3-ca2018868040 -> ../../sdc1

 

That last part confuses me...

/dev/md2 (aka /dev/sdb) is supposed to be mounted, but it doesn't show up here in the list...
However, the kernel says the UUIDs of md2 and md4 are the same.
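
If I understand udev correctly, that listing is keyed by UUID, so when two partitions carry the same UUID only one of them gets a symlink there; that would explain why sdb1 is missing while sdd1 shows up. blkid reads the UUID off each partition directly:

```shell
# Read-only: print type and UUID for each array partition.
command -v blkid >/dev/null && blkid /dev/sd[a-e]1 || true
```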

 

If I unmount md2 and then try to mount md4, I get this:

 

XFS (md2): Unmounting Filesystem
XFS (md4): Mounting V5 Filesystem
XFS (md4): Log inconsistent (didn't find previous header)
XFS (md4): failed to find log head
XFS (md4): log mount/recovery failed: error -5
XFS (md4): log mount failed

 

I already ran xfs_repair, but it gave me the following error along with a lot of ".........":

couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!

attempting to find secondary superblock...

 

Anyway, I will run xfs_repair again to get the full details of the result, plus a badblocks check.

Link to comment
3 hours ago, phuketmymac said:

Sorry, could not find valid secondary superblock

Is that for md4? What command are you using? Since there's no parity you can run xfs_repair on the disk without starting the array in maintenance mode, like so:

 

xfs_repair -v /dev/sdX1

 

Replace X with the correct letter; note the 1 at the end.
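
If you want to be cautious, the usual sequence is a dry run first, then the real repair, with -L only as a last resort (device name assumed from the diags, as before):

```shell
DEV=/dev/sdd1   # assumed disk4 partition -- verify before running

# 1) Dry run: report problems without writing anything.
# xfs_repair -n "$DEV"

# 2) Actual repair, verbose output.
# xfs_repair -v "$DEV"

# 3) Only if repair refuses to start because of a dirty log:
#    -L zeroes the log, losing the most recent metadata updates.
# xfs_repair -L "$DEV"
true
```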

Link to comment

Hi,

 

Yes, that's MD4, aka sdd1.

I've already run that command and it couldn't find a primary or secondary superblock.

 

The customer told me he thought the disk had a ReiserFS partition because "the system showed a reiserfs" in the dashboard.

He said he then ran a ReiserFS repair on the disk, which could be why the partition doesn't contain any superblock. Could that be it?

 

I am backing up what I can at the moment and will update the OS as soon as the other drives are backed up.

 

I am thinking of running "testdisk" to repair the partition, if possible.

What do you think about it?

If that's not possible, then photorec...
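
Concretely, the plan would look something like this; both tools are interactive, and the drive letter is assumed from the diags:

```shell
DISK=/dev/sdd   # disk4 per the diagnostics

# testdisk: scan the whole drive for lost partitions; review what it
# finds before letting it write a corrected partition table.
# testdisk "$DISK"

# photorec: if the filesystem is beyond repair, carve recoverable
# files out by signature instead (contents survive, names/paths don't).
# photorec "$DISK"
true
```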

 

Thanks for your help so far.

Link to comment
53 minutes ago, phuketmymac said:

He said he then ran a ReiserFS repair on the disk, which could be why the partition doesn't contain any superblock. Could that be it?

It's possible, if he used the --rebuild-sb option.

 

54 minutes ago, phuketmymac said:

I am thinking of running "testdisk" to repair the partition, if possible.

What do you think about it?

Worth trying.

Link to comment

Archived

This topic is now archived and is closed to further replies.
