In2Photos Posted June 17, 2022 I have a small server running unRAID 6.0-rc4, with a 3TB WD parity drive and 3 data drives. This morning I tried to save a file to the server but couldn't access it. I went to the webgui, where parity was shown as invalid. I shut the system down to check cables and such. When I brought it back up it took much longer than usual to boot, and I saw errors for one of my data drives in the boot messages. When the webgui finally appeared, it showed the data drive as unmounted and parity still invalid. I shut the system down again and reseated all the drive cables; the next boot showed similar errors and the same long boot time. I then checked the disk in maintenance mode, and the filesystem check reports "superblock read failed" and "fatal error -- Input/output error". Parity is still invalid as well. So what should I do next? Is there any other info I need to provide? I have attached a screenshot of the Main tab.
JorgeB Posted June 17, 2022 Please post the diagnostics.
In2Photos Posted June 17, 2022 OK, here you go. mytower-diagnostics-20220617-0530.zip
JorgeB Posted June 17, 2022 You should really update Unraid; you're running an ancient release, which makes the diags much more difficult to read. Both parity and disk1 are failing, so some data loss may be inevitable. I suggest cloning disk1 to a new disk with ddrescue to recover as much data as possible and adding a new parity drive. Alternatively, use a new disk1 and a new parity drive to rebuild the array, then see what you can copy from the old disk1 by mounting it with the Unassigned Devices (UD) plugin.
In2Photos Posted June 17, 2022 OK, so what's the best way to go about this? Fix the drive issue first, then upgrade Unraid? Or update Unraid first? What is involved in upgrading from such an old version? I'm also thinking about upgrading the hardware on this server. Should I get the system back up and running before moving to newer hardware?
JorgeB Posted June 17, 2022 Fix the problems first, then upgrade.
In2Photos Posted June 17, 2022 OK. I ordered 2 new drives that should be here later today. But now I have another problem: I went to the plugins page to check for updates, and now the webgui is completely blank except for the tabs. No matter what I click on, I get nothing below the tabs. I tried rebooting, but the same thing occurs.
trurl Posted June 17, 2022 Did any disks ever get mounted in those diagnostics? Nothing to go on really except the syslog, which was full of disk errors. Just wondering what filesystem is being used on that very old version. 37 minutes ago, In2Photos said: What is involved in upgrading from such an old version? Probably best to just do a new install, keeping your license .key file and super.dat (disk assignments).
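trurl's suggestion comes down to copying two files off the flash drive before re-imaging it. A minimal sketch, assuming the flash is mounted at Unraid's usual /boot; the backup destination is an arbitrary example, not a required path:

```shell
# Sketch: back up the license key and disk assignments before a fresh install.
# /boot is Unraid's usual flash mount point; BACKUP is an example destination.
BACKUP=/mnt/disk2/unraid-backup
mkdir -p "$BACKUP"
cp /boot/config/*.key "$BACKUP"/      # license key file(s)
cp /boot/config/super.dat "$BACKUP"/  # disk assignments
```

After re-imaging the flash with a current release, copy the .key file back into /boot/config on the new install.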
trurl Posted June 17, 2022 1 minute ago, In2Photos said: now I have another problem Post new diagnostics.
In2Photos Posted June 17, 2022 I tried that from the command line and it says "command not found". I'm logged in as root; is that not correct? Wow, this has gone from bad to worse quickly. XFS is the filesystem; see the screenshot from earlier.
JorgeB Posted June 17, 2022 11 minutes ago, trurl said: Probably best to just do a new install, keeping your license .key file and super.dat (disk assignments). Suggest you do this, though there's no need for the assignments, since you need two new disks and to rebuild the array anyway.
In2Photos Posted June 17, 2022 OK. Thanks for the help. I'll do some reading.
trurl Posted June 17, 2022 9 minutes ago, JorgeB said: no need for the assignments Just make sure you don't assign any data disk to any parity slot.
In2Photos Posted June 17, 2022 23 minutes ago, trurl said: Just make sure you don't assign any data disk to any parity slot. Got it. I also think I will remove the 500GB drive from the array, as I don't need it anymore with the replacement drive (1.5TB bad drive and 500GB good drive replaced by a new 3TB drive). I also only have 4 SATA ports on this old machine, so in order to try and recover the files from the 1.5TB drive I need a free port. Here's the way I'm thinking of doing this:
1. Install the 2 new drives and the 2 good drives (three 3TB and one 500GB)
2. Assign 1 new drive as parity and the other as a data drive
3. Rebuild the array and parity
4. Remove the 500GB drive using the method described here (clear drive, then remove drive): https://wiki.unraid.net/Shrink_array
5. Install the 1.5TB bad drive and try to recover files using ddrescue
6. Remove the 1.5TB drive
In2Photos Posted June 17, 2022 After reading about ddrescue I see that my plan won't work, since I need to clone the disk before it goes into the array. Instead I think I need to do the following:
1. Install 1 new drive in place of the parity drive
2. Clone the 1.5TB drive to this new drive
3. Install the other new drive, assign it as parity and the cloned drive as a data drive
4. Rebuild the array and parity
5. Remove the 500GB drive using the method described here (clear drive, then remove drive): https://wiki.unraid.net/Shrink_array
trurl Posted June 17, 2022 2 hours ago, In2Photos said: Rebuild the array and parity Remove the 500GB drive using the method described here (clear drive then remove drive): https://wiki.unraid.net/Shrink_array I usually recommend the other method of shrinking the array (rebuild parity without the disk). It is simpler and more reliable. And since you are going to rebuild parity anyway, just remove that drive before rebuilding parity.
JonathanM Posted June 17, 2022 5 minutes ago, In2Photos said: Rebuild the array and parity Remove the 500GB drive using the method described here (clear drive then remove drive): https://wiki.unraid.net/Shrink_array Those two steps sort of cancel each other out. If the 500GB drive is already empty (a prerequisite of the clear-drive method), then you should just rebuild parity without the 500GB drive.
JonathanM Posted June 17, 2022 I had another thought. Are you POSITIVE the drives were formatted as XFS? With such an old install and such small drives, I would have expected them to be ReiserFS. If Unraid got reset somehow and was trying to mount ReiserFS drives as XFS, that would explain some of the weirdness.
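JonathanM's XFS-vs-ReiserFS question can be checked from the console independently of what the GUI reports. A sketch; the device name is a placeholder (take the real one from the Main page or lsblk):

```shell
# Sketch: read the filesystem signature directly off the partition.
# /dev/sdX1 is a placeholder -- substitute your actual data partition.
blkid /dev/sdX1      # prints e.g. TYPE="xfs" or TYPE="reiserfs"
file -s /dev/sdX1    # second opinion, probes the device contents itself
```

If these disagree with what the GUI shows, that would support the reset theory.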
JonathanM Posted June 17, 2022 I had a read through your post history, and this one stuck out to me. As I commented back then, a move from disk to disk will NOT be instant; it will take a significant amount of time. I'm wondering if your disks ever really got converted.
In2Photos Posted June 17, 2022 Well, how would I know if they truly got converted, other than unRAID reporting them as XFS?
trurl Posted June 17, 2022 The screenshot shows them mounted as XFS. Diagnostics from that old version are inconclusive as to whether they were mountable.
trurl Posted June 17, 2022 On second thought, maybe the screenshot doesn't show them mounted; there is no used or free space shown for any of them. Another disadvantage of running such an old version: nobody remembers what it is supposed to look like.
trurl Posted June 17, 2022 And it shows reading from all data disks and writing to parity, just as you would expect from a parity rebuild. But of course it would do that regardless of whether they were mountable, since parity doesn't know about filesystems.
In2Photos Posted June 17, 2022 2 minutes ago, trurl said: Screenshot shows them mounted as XFS. Diagnostics from that old version inconclusive regarding whether they were mountable. The server had been working fine since 2015, when I did the conversion, until today.
JonathanM Posted June 17, 2022 The drives won't even attempt to mount in maintenance mode.