Unmountable - No File System



Hi,

 

My Unraid server has been working magically for 7+ years. Recently I tried to copy some files from my server to a friend's (also on Unraid, a higher version); however, his machine was woefully slow.

 

I thought it would be easier to remove the hard drives from my server, plug them into his machine and mount them there, copy the files, then put them back into mine.

 

When I plugged the HDDs back into my server, they all came up as 'Unmountable - No File System' on the Main page.

After reading through the forum, I tried stopping the array, starting it in maintenance mode, and running a disk check (reiserfsck) from the hard drive page, but I am getting this error:

"Failed to open the device '/dev/md1': Unknown code er3k 127"

 

All the hard drives are ReiserFS.
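
For reference, as I understand it the GUI disk check in maintenance mode is roughly equivalent to running this from the console (md1 is just the first data disk; the number changes per disk):

# Array started in maintenance mode, so the /dev/mdX devices exist.
# --check is read-only and makes no changes to the disk:
reiserfsck --check /dev/md1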

 

I am running server version 6.5.0, Pro.

 

Is there any way to salvage the disks without doing a full re-create?

 

Thanks in advance.

1 hour ago, Ostrich79 said:

I thought it would be easier to remove the hard drives from my server, plug them into his machine and mount them there, copy the files, then put them back into mine.

Unless the disks were mounted read-only, this will put your parity out of sync.
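
For next time, a read-only mount from the console looks something like this (device name and mount point are just examples):

# Mount the partition read-only so nothing on the disk changes:
mkdir -p /mnt/temp
mount -o ro -t reiserfs /dev/sdc1 /mnt/temp
# ... copy files off ...
umount /mnt/temp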

 

1 hour ago, Ostrich79 said:

When I plugged the HDDs back into my server, they all came up as 'Unmountable - No File System' on the Main page.

Please post the diagnostics: Tools -> Diagnostics
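
If the webGUI is ever unavailable, I believe recent 6.x releases can also generate the same archive from the console, writing the zip to the logs folder on the flash drive:

diagnostics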


Thanks for the prompt response. Diagnostics attached.

 

I am comfortable the data is still on the drives; I just used some recovery software which identified a lot of files. Worst case is a few days/weeks of copying all data off the server and reloading everything as XFS, but if I can fix the ReiserFS issue in the meantime it will save a lot of stress.

 

Edit: I briefly updated to the latest Unraid; however, it was still showing the same error when running reiserfsck, so I reverted to the backup copy of the USB I took just prior to the upgrade.

 

Kind Regards.

tower-diagnostics-20200402-2122.zip


Are you absolutely sure all disks were ReiserFS? A ReiserFS superblock is missing from most disks; this either means they are not ReiserFS, or something destroyed or moved the original superblock, e.g. using the disks on some RAID controllers.

 

A superblock is found for disks 2 and 4, and running reiserfsck should fix them, though there are some older releases that include a buggy reiserfsck (I don't remember now which ones), so it's best to update to the latest Unraid first.
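
With the array in maintenance mode, that would be along these lines (using disk 2 as the example):

# Read-only check first:
reiserfsck --check /dev/md2
# Only if --check reports fixable corruptions:
reiserfsck --fix-fixable /dev/md2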


Attached is a screenshot I took before taking out the drives. They were attached to the other machine via standard SATA ports on the motherboard, no controllers. On MY machine, I use a controller just to add the extra disks (i.e. this style of thing, can't recall the exact model:

https://www.techbuyer.com/au/l3-25121-70a-lsi-logic-9260-8i-pci-e-x8-sas-raid-controller-122210/?gclid=Cj0KCQjwmpb0BRCBARIsAG7y4zbH6Aw59B6k_rwcLa2h54tCicl1cbh5oNoAYODsx0flKjTXG4sZQtUaAt8QEALw_wcB

), but not in RAID, just single disks.

 

Edit: Currently running the ReiserFS check on drive 2 now. It got further than drive 1.

Edit 2: I merely took each drive out of my machine and either mounted it via UD, or SSHed into the machine and ran: mount -t /dev/sdc1 'target'

then rsynced the target with his share drive.

I unmounted the drive from the Unraid menu (UD), or via umount, took the drive off SATA, and plugged in the next one.
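
For what it's worth, mount -t normally expects a filesystem type before the device, so the exact command I used may have differed slightly. The read-only version of that sequence would look roughly like this (device, mount point and share path are just examples):

# Mount read-only so the disk contents, and therefore parity, are untouched:
mkdir -p /mnt/copy
mount -t reiserfs -o ro /dev/sdc1 /mnt/copy
# Copy everything across to the destination share, preserving attributes:
rsync -avh --progress /mnt/copy/ /mnt/user/share/
umount /mnt/copy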

 

Might also be relevant, but his machine won't recognise one of his own disks (it's showing an unable-to-mount issue on HIS machine for his own drive). I haven't been able to format it via UD or the command line; even a Windows NTFS format, then putting it into Unraid and letting it reformat, didn't enable the drive, which is on the same SATA port I was using.
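
One thing I may try on that disk (assuming it really is expendable, and that wipefs is available on his build) is wiping the old filesystem signatures from the console before letting Unraid format it:

# WARNING: destroys all filesystem signatures on the disk, data is lost:
wipefs -a /dev/sdX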

 

IMG_0677.JPG

9 minutes ago, Ostrich79 said:

Edit 2: I merely took each drive out of my machine and either mounted it via UD, or SSHed into the machine and ran: mount -t /dev/sdc1 'target'

then rsynced the target with his share drive.

I unmounted the drive from the Unraid menu (UD), or via umount, took the drive off SATA, and plugged in the next one.

 

Merely doing this on a regular SATA controller would be perfectly safe (apart from your parity getting out of sync, as mentioned), so something is missing here. But the only solution I see now (except for disks 2 and 4, if reiserfsck works) would be to rebuild the ReiserFS superblock; each disk will then likely need reiserfsck --rebuild-tree.
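
That would be something along these lines, one disk at a time (mdX per data disk; --rebuild-tree is intrusive, so run it only after the superblock step, ideally with the data backed up):

# Rebuild the superblock (prompts for filesystem details):
reiserfsck --rebuild-sb /dev/md1
# Then rebuild the internal tree from the leaf nodes found on disk:
reiserfsck --rebuild-tree /dev/md1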


The ReiserFS check came back with...

 

reiserfsck 3.6.27

Will read-only check consistency of the filesystem on /dev/md2
Will put log info to 'stdout'

###########
reiserfsck --check started at Thu Apr 2 22:28:38 2020
###########
reiserfs_open_journal: journal parameters from the superblock does not match to the journal headers ones. It looks like that you created your fs with old reiserfsprogs. Journal header is fixed.
Replaying journal: Replaying journal: Done.
Reiserfs journal '/dev/md2' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
    Leaves 455117
    Internal nodes 2723
    Directories 90
    Other files 372
    Data block pointers 460532586 (0 of them are zero)
    Safe links 0
###########
reiserfsck finished at Thu Apr 2 22:33:06 2020
###########

 

Do I just run the following now?

reiserfsck --fix-fixable /dev/md2


Yep, rebooted and disks 2 & 4 are back online. The other disks are getting the "Unknown code er3k 127" error when I run --check.

 

Thinking of starting fresh with the array, so I can format to XFS.

I figure the following steps:

1. Create a new config via Unraid Tools (remove the existing one). This removes all drives from allocation.

2. Add the new 8 TB drive (format XFS). Leave parity empty.

3. Copy data from the good drives onto the XFS drive (see the sketch after this list). Once a drive is copied, format it to XFS and add it to the array.

4. Recover data from the bad drives & copy it to the server, then format them to XFS & add to the array.

5. Once all complete, add the parity drive and recalc (it's been 2 1/2 years since the last parity check).
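
For step 3, the disk-to-disk copy from the console could look roughly like this (disk numbers are just examples for my layout):

# Copy the contents of old disk1 onto the new XFS disk, keeping attributes:
rsync -avh --progress /mnt/disk1/ /mnt/disk5/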

 

Anything I'm missing? Is there a better option than XFS?

2 minutes ago, Ostrich79 said:

The other disks are getting the "Unknown code er3k 127" error when I run --check.

Do you still get an error if you do:

reiserfsck --rebuild-sb /dev/mdX

 

 

4 minutes ago, Ostrich79 said:

Anything I'm missing? Is there a better option than xfs?

Looks fine to me. XFS is probably the best option for most users.

