Ostrich79 Posted April 2, 2020

Hi, my Unraid server has been working magically for 7+ years. Recently I tried to copy some files from my server to a friend's (also on Unraid, a higher version), but his computer was woefully slow. I thought it would be easier to remove the hard drives from my server, plug them into his machine and mount them, copy the files, then put them back into mine. When I plugged the HDDs back into my server, they all come up as 'unmountable - no file system' on the Main page. After reading through the forum, I tried stopping the array, starting it in maintenance mode, and running a disk check (reiserfsck) from the hard drive page, but I get the error:

"Failed to open the device '/dev/md1': Unknown code er3k 127"

All the hard drives are reiserfs. I am running server version 6.5.0, Pro. Is there any way to salvage the disks without doing a full re-create? Thanks in advance.
JorgeB Posted April 2, 2020

1 hour ago, Ostrich79 said:
"thought it would be easier to remove the hard drives from my server, plug in and mount in his, copy files then replace into mine."

Unless the disks were mounted read-only, this will make your parity out of sync.

1 hour ago, Ostrich79 said:
"When i plugged HDDs back into my server, they all are coming up as 'unmountable - no file system' on main page."

Please post the diagnostics: Tools -> Diagnostics
Ostrich79 Posted April 2, 2020 Author

Thanks for the prompt response. Diagnostics attached. I am comfortable the data is still on the drives; I just used some recovery software which identified a lot of files. Worst case is a few days/weeks of copying all the data off the server and reloading everything as xfs, but if I can fix the reiser issue in the meantime it will save a lot of stress.

Edit: Briefly updated to the latest Unraid, however it still showed the same error when running reiserfsck, so I reverted back to the saved copy I took of the USB just prior to the upgrade.

Kind Regards.

tower-diagnostics-20200402-2122.zip
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 You are absolutely sure disks all disks were reiserfs? A reseirfs superblock is missing from most disks, this either means they are not reiser or something destroyed or moved the original superblock, e.g. using the disks on some RAID controllers. A superblock is being found for disks 2 and 4 and ruining reiserfsck should fix them, though there some are some older releases that include a buggy reiserfsck, don't remember now which ones, so best to update to latest Unraid first. Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

Yes, 100% all were ReiserFS (I took a screenshot before I played with the server, just in case). Most drives were loaded on Unraid 4/5 and reiser was the default, so I stayed with it. The only one that isn't is one of the 5TB drives, which is xfs.
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 On the other server were they used on a RAID controller? Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

Attached is a screenshot I took before taking out the drives. They were attached to the other machine via standard SATA ports on the motherboard, no controllers. On MY machine, I use a controller just to add the extra disks (i.e. this style of thing, can't recall the exact model: https://www.techbuyer.com/au/l3-25121-70a-lsi-logic-9260-8i-pci-e-x8-sas-raid-controller-122210/?gclid=Cj0KCQjwmpb0BRCBARIsAG7y4zbH6Aw59B6k_rwcLa2h54tCicl1cbh5oNoAYODsx0flKjTXG4sZQtUaAt8QEALw_wcB ), but not in RAID (just single disks).

Edit: Currently running the reiser check on drive 2 now. Got further than drive 1.

Edit2: I merely took a drive out of my machine, either mounted it via UD, or SSHed into his machine and ran: mount -t /dev/sdc1 'target'. Then rsynced the target with his share drive. Unmounted the drive from the Unraid menu (UD), or via umount. Took the drive off SATA and plugged in the next one.

Might also be relevant, but his machine won't recognise one of his own disks (it's showing the unable-to-mount issue on HIS machine for his own drive). I haven't been able to format it via UD or the command line, and even a Windows NTFS format (then putting it into Unraid and letting it reformat) didn't enable the drive, which is on the same SATA port I was using.
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 9 minutes ago, Ostrich79 said: Edit2: I merely took drive out of my machine, either mounted via UD, or SSH into machine and; mount -t /dev/sdc1 'target'. then; rsync the target with his share drive. Unmounted drive from unraid menu (UD), or via umount. Took drive off sata and plugged in next one. Merely doing this on a regular SATA controller would be perfectly safe (except your parity getting out of sync like mentioned), so something is missing here, but the only solution I see now (except for disks 2 and 4 if reiserfsck works) would be to rebuild the reiserfs superblock, then each disk will likely need reiserfsck --rebuild-tree. Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

Yep, happy to do that. How do you start it, though? I tried with --rebuild-sb, however it still failed (error 127, I believe).
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 Update to latest Unraid, some earlier release have a bug in the reiser tools, latest reiserfsck is 3.6.27 Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

OK, I'll run that once disk 2 and disk 4 are back online. Should I try to use recovery software first to copy off the contents before I try the SB/rebuild-tree (i.e. could it cause unrecoverable changes to the drive)?
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 3 minutes ago, Ostrich79 said: Should i try and use recovery software first to copy off the contents before i try the SB/rebuild tree (i.e. could it cause unrecoverable changes to drive). It's an option, you could also make a clone with dd to another disk before starting. Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

The reiserfs check came back with:

reiserfsck 3.6.27
Will read-only check consistency of the filesystem on /dev/md2
Will put log info to 'stdout'
###########
reiserfsck --check started at Thu Apr 2 22:28:38 2020
###########
reiserfs_open_journal: journal parameters from the superblock does not match to the journal headers ones. It looks like that you created your fs with old reiserfsprogs. Journal header is fixed.
Replaying journal: Done.
Reiserfs journal '/dev/md2' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree: finished
No corruptions found
There are on the filesystem:
    Leaves 455117
    Internal nodes 2723
    Directories 90
    Other files 372
    Data block pointers 460532586 (0 of them are zero)
    Safe links 0
###########
reiserfsck finished at Thu Apr 2 22:33:06 2020
###########

Do I just run the following now: reiserfsck --fix-fixable /dev/md2
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 Should already be fixed, it was a journal problem. Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

Yep, rebooted and disks 2 & 4 are back online. The other disks still get the "Unknown code er3k 127" error when I run --check. Thinking of starting fresh with the array so I can format to xfs. I figure the following steps:

1. Create a new config via Unraid Tools (remove existing). This removes all drives from allocation.
2. Add the new 8TB (format xfs). Leave parity empty.
3. Copy data from the good drives onto the xfs drive. Once copied, format each HDD to xfs and add it to the array.
4. Recover data from the bad drives & copy to the server, format to xfs & add to the array.
5. Once all complete, add the parity drive and recalc (it's been 2 1/2 years since the last parity check).

Anything I'm missing? Is there a better option than xfs?
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 2 minutes ago, Ostrich79 said: Other disks getting the "Unknown code er3k 127" when i run --check. Do you still get an error if you do: reiserfsck --rebuild-sb /dev/mdX 4 minutes ago, Ostrich79 said: Anything I'm missing? Is there a better option than xfs? Looks fine to me. Xfs is probably the best option for most users. Quote Link to comment
Ostrich79 Posted April 2, 2020 Author

Yes, same error on the rebuild.
JorgeB Posted April 2, 2020 Share Posted April 2, 2020 Then only a reiserfs maintainer could help, best bet is probably using a data recovery software, all data should be recoverable. Quote Link to comment