crakhed Posted October 5, 2019

So I'm not sure if it was caused by a power outage (we had one a few days before I noticed any problems), but suddenly disk1 wouldn't let me write to it. I checked the webUI and saw that disk1 was being emulated (I should've gotten the data off it and replaced it with a spare while it was 'accessible', but I didn't). After poking around other forum suggestions and trying the filesystem check from the webUI in maintenance mode (which returned "Failed to open the device '/dev/md1': Unknown code er3k 127"), I rebooted the server (the old off-and-on-again trope). The disk now shows "unmountable - no file system" with no emulation or Samba access.

This is the original first disk I installed when I started using Unraid 7 years ago, and it has some pretty important stuff on it. It's old, but the SMART diagnostics seem fine (no errors, but then a lot of it is Greek to me). I have a few working spares from when I upgraded a few of the newer, smaller disks; is there any way to get it at least back to 'emulated' so I can retrieve the data and then just flush or replace the disk?

I'm assuming that since the array is starting, the data is at least backed up in the parity, but I also gleaned from reading that just swapping in a new disk will copy the corruption as-is to the new disk. Again, the physical disk is of little consequence and will definitely be replaced after all this, but I really need the data if there's any way to get at it. Thanks in advance, gurus.

babel-diagnostics-20191005-0905.zip
itimpi Posted October 5, 2019

Best way forward will be to:

- stop the array
- remove the physical drive
- start the array, and the drive should be emulated
- if it still shows as unmountable, click on the drive on the Main tab and select the option to check the file system
- report back on the results of the check, as the output will help with suggesting the best way forward

Keep the physical drive safe in case you later want/need to try and recover some data off it.
crakhed Posted October 5, 2019

Unplugged the physical drive, started in normal mode: no emulation. Stopped and restarted in maintenance mode: still no emulation, same error on the filesystem check from the webUI.
crakhed Posted October 5, 2019 Author Share Posted October 5, 2019 oops forgot diag babel-diagnostics-20191005-0930.zip Quote Link to comment
crakhed Posted October 5, 2019

Another thought I just remembered while reading other related posts: my motherboard DOES have that copy-BIOS-to-disk backup feature. I carefully made sure to disable it when I migrated everything to this new setup and haven't had a problem since, but it's a headless box, and I'm wondering if I should grab a monitor/keyboard and boot into the BIOS to make sure the setting didn't 'reset' when the power originally went out. Would that setting suddenly being enabled cause this kind of behavior from the drive, if the board modified it out of the array with its BIOS backup? Just a thought.
Frank1940 Posted October 5, 2019

25 minutes ago, crakhed said:
Another thought I just remembered while reading other related posts, my motherboard DOES have that copy-bios-to-disk backup feature, but I carefully made sure to disable that when I migrated everything to this new setup and haven't had a prob, but it's a headless box and I'm wondering if I should grab a monitor/keyboard and boot to bios to make sure the setting didn't 'reset' when the power originally went out.

What could it hurt at this point? If it did, I would be looking at the BIOS/MB battery...
crakhed Posted October 5, 2019

Checked the BIOS; it's still configured properly. I'm glad, because I know undoing that specific 'corruption' is a pain. One thing I'm noticing: with the physical disk unplugged, the webUI still shows 'Unmountable disk present: Disk 1 • ()' under Array Operation, with the checkbox and the option to format. Is that normal?
Frank1940 Posted October 5, 2019 Do not format anything until one of the gurus says to do it!!!!
itimpi Posted October 5, 2019

44 minutes ago, crakhed said:
One thing I'm noticing is that with the physical disk unplugged, the webUI still has 'Unmountable disk present: Disk 1 • ()' under array operation with the checkbox and option to format. Is that normal?

The fact the disk is still showing should mean it is being emulated. Click on the disk and select the option to run a file system check.
crakhed Posted October 5, 2019

I tried, but it returns:

reiserfsck 3.6.27
Will read-only check consistency of the filesystem on /dev/md1
Will put log info to 'stdout'
Failed to open the device '/dev/md1': Unknown code er3k 127

And if it's being emulated, it's still not appearing in Explorer.
Squid Posted October 5, 2019

41 minutes ago, crakhed said:
and if it's being emulated, it's still not appearing in explorer.

It is being emulated, but since it has corruption and is currently unmountable, you won't see its contents.

42 minutes ago, crakhed said:
Failed to open the device '/dev/md1': Unknown code er3k 127

Did you stop the array and restart in maintenance mode?
crakhed Posted October 5, 2019

Yes, I did it through the webUI from the disk1 settings page. Should I maybe do it manually through the terminal HTML window, or even PuTTY? It is currently running in maintenance mode, as in the image. The rest of the array, when mounted, IS accessible, and all shares appear properly, except that disk1's shares/disk share are absent. Doesn't it usually say "Emulated" next to the warning/disk on the Main tab?
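[Editor's note: for anyone landing here from a search, running the same read-only check by hand from an SSH/PuTTY session would look roughly like the sketch below. This is not from the thread; it assumes the array has been started in maintenance mode, which is the only time Unraid's /dev/md1 device (disk1) exists.]

```shell
# Manual read-only reiserfs check of disk1 (array must be started
# in maintenance mode, otherwise /dev/md1 does not exist).
DEV=/dev/md1
if [ -b "$DEV" ]; then
    # --check is read-only; do NOT run --rebuild-tree or
    # --rebuild-sb without expert advice -- those modify the disk.
    reiserfsck --check "$DEV"
else
    echo "skipping: $DEV not present (array stopped, or not an Unraid box)"
fi
```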
JorgeB Posted October 6, 2019

13 hours ago, Squid said:
Unknown code er3k 127

This looks like a reiserfsck problem, likely a bug; it happened to another user recently, and since you're on the latest reiserfs tools there's not much to do. You can look for help on the reiser mailing list. Also, if any other disks are still reiserfs, consider converting them to a different filesystem.
crakhed Posted October 6, 2019

So what are the best options for recovering the data? I have almost no experience with Linux, if that's required. Are there any Windows apps capable of repairing/accessing it? There is irreplaceable stuff on there, like family photo archives and whatnot. And of course, I was planning to add another drive and start migrating each drive to XFS, but it looks like I didn't start soon enough.
JorgeB Posted October 7, 2019

If the old disk is still working, try mounting it with UD (Unassigned Devices); if not, you'll need help from a reiser maintainer, or use a file recovery utility like UFS Explorer.

15 hours ago, crakhed said:
There is irreplaceable stuff on there like family photo archives and whatnot.

Unraid is not a backup; you should have a backup of any irreplaceable data.
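[Editor's note: if the UD plugin isn't installed, test-mounting the old physical disk read-only from the console would look roughly like this sketch. /dev/sdX1 is a placeholder for the old drive's data partition, not a real device name from this thread; identify the real one with lsblk first.]

```shell
# Hypothetical manual read-only mount of the old reiserfs disk.
# /dev/sdX1 is a placeholder -- find the real partition first,
# e.g. with: lsblk -o NAME,SIZE,FSTYPE
PART=${PART:-/dev/sdX1}
MNT=${MNT:-/mnt/recovery}
if [ -b "$PART" ]; then
    mkdir -p "$MNT"
    # Mount read-only so a damaged filesystem can't be made worse.
    mount -t reiserfs -o ro "$PART" "$MNT" && ls "$MNT"
else
    echo "skipping: $PART not present"
fi
```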
Frank1940 Posted October 7, 2019

For irreplaceable data, there has long been The 3-2-1 Backup Rule, which can basically be stated as follows:

Quote
The 3-2-1 backup rule is an easy-to-remember acronym for a common approach to keeping your data safe in almost any failure scenario. The rule is: keep at least three (3) copies of your data, and store two (2) backup copies on different storage media, with one (1) of them located offsite.

Your Unraid server can be one leg of this rule. (My offsite copy is on a 1TB external HD in a safety deposit box.) I realize that it may be too late for you, but perhaps your experience can be a warning for others.

One thing you can do immediately is check the other seven disks, which should still be readable, to see if any of that irreplaceable data is on one (or more) of them, and make two more copies of it ASAP! (Remember, Unraid stores your data in a standard Linux file format, so each disk is readable by any machine that can read the reiserfs format. I seem to recall there was even a reiserfs driver for Windows.)

There are also companies that can often recover data from failed disks. (Warning: it is expensive...) But before I went too far down one of these recovery paths, I would first try to use the server to see what is left on those other seven data disks by using the 'New Configuration' tool. (@johnnie.black can probably point to instructions for doing this, as I recall he has done so for others in the past.) (One thing I do know: if you get a prompt to reformat one or more disks, DON'T!!!) I personally would be copying off any critical data before wasting time on a parity rebuild. Your loss may be less than you think at this point.
crakhed Posted October 8, 2019

Well, it's a (Halloween?) miracle, and I have no idea why, but I got it to work. I switched SATA cables with the parity drive on a whim, just to rule out a bad cable/controller. Then I reconnected the physical disk1 and booted the server. Of course, the array had forgotten the disk and now sees it as a replacement. In maintenance mode, under Unassigned Devices, there is now a mount button and a share toggle (maybe these only show for 'replacement' drives?). I totally expected to slam into a brick wall of failure, but being a glutton for punishment, I clicked mount. It ...mounts? Then I toggled share, opened Explorer, and there it is: a network share named after the drive serial. All data accessible. All 1.6TB is now backed up to this desktop, and I'm about to flush the failed disk1 from the array, add a brand new 8TB, and start migrating the other drives to XFS. Thanks for your help, folks.

Lastly, is there any quicker method of fs conversion besides my plan to:

- add the new 8TB as an empty XFS disk1
- dump the rfs disk2 to disk1 (prolly through Midnight Commander w/ PuTTY)
- format the now-empty disk2 to XFS
- repeat for the rest of the array?
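[Editor's note: the dump step in the plan above is usually done with rsync rather than a file-manager copy, since rsync preserves permissions, timestamps, and extended attributes and can be resumed. A hedged sketch of one leg of the migration, assuming the stock /mnt/diskN disk shares and an SSH/PuTTY session:]

```shell
# One leg of the rfs -> xfs migration: mirror the old disk's
# contents onto the freshly formatted xfs disk, then sanity-check
# before formatting the source.
SRC=${SRC:-/mnt/disk2}   # old reiserfs disk
DST=${DST:-/mnt/disk1}   # new, empty xfs disk
if [ -d "$SRC" ] && [ -d "$DST" ]; then
    # -a archive (perms/times/owners), -v verbose,
    # -P progress + resumable partial transfers, -X extended attrs
    rsync -avPX "$SRC"/ "$DST"/
    # Compare file counts before wiping the source disk.
    echo "src: $(find "$SRC" -type f | wc -l) files, dst: $(find "$DST" -type f | wc -l) files"
else
    echo "skipping: $SRC or $DST not mounted"
fi
```

Since both paths are array disk shares, parity should stay in sync as the copy writes, which is also why a disk-to-disk copy like this is slow.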
JorgeB Posted October 8, 2019

3 minutes ago, crakhed said:
Well it's a (Halloween?) miracle, and I have no idea why, but I got it to work.

Not really a miracle; it just means the emulated disk has some corruption, hence why I posted:

On 10/7/2019 at 8:27 AM, johnnie.black said:
If the old disk is still working try mounting with UD,
Frank1940 Posted October 8, 2019 Share Posted October 8, 2019 (edited) 1 hour ago, crakhed said: Lastly, is there any quicker method of fs conversion besides my plan to: Not going to approve (or disapprove any conversion procedure), but here is the link to the Bible (I used the Mirror each disk with rsync, preserving parity method when I did it.): https://forums.unraid.net/topic/54769-format-xfs-on-replacement-drive-convert-from-rfs-to-xfs/ EDIT: if the emulated disk has some corruption that would probably mean that there is a problem somewhere else on your array. It would probably be best if the problem were on the parity disk. Edited October 8, 2019 by Frank1940 Quote Link to comment