Unmountable: Wrong or no file system


Solved by itimpi

I've had an unused cable in my case close to a fan for a while and when something shifts, it makes an awful noise. Decided to fix my cable management, so I did a clean shutdown through the UI, moved cables around, and put everything back together - no SATA or power cables were swapped. When I powered back on, I noticed that my newest drive, a 14 TB WD, said "Unmountable: Wrong or no file system." Cue panic mode. Fortunately, I don't think I did anything stupid this time, having learned my lesson before (I hope!). I powered down, made sure cables were solid, and tried again. Same error. I knew it wouldn't help, but some of the SATA cables are old so I replaced this one with a newer locking cable. Unsurprisingly, it made no difference.

 

If someone knows where to look through the attached logs and is willing to help, I'd greatly appreciate it. I haven't done a parity check, so I think that if worst comes to worst I can remove the drive from the array, format it, and have it rebuild parity. I'm on unraid 6.11.5. Thanks for taking a look.

 

Edit: Looking back, I should've had the presence of mind to pull diagnostics before shutting down. I'll know better next time. I also realize there are a few posts with this same error; having looked through them, it appears the cause is corruption due to an unclean shutdown or something like a failed drive. There's also a recent post for issues after upgrading to 6.12, which doesn't apply to me. While this new drive could have failed now instead of during my preclear, I think it's unlikely. Rather than hijacking another thread, I figured it more polite to make a new one.

 

Edit 2: While it seems there are many causes for what I'm experiencing, it seems the response always starts with "Check Filesystem Status." Since I'm not some special snowflake, I figured this would make sense for me too. It says "Phase 1 - find and verify superblock... bad primary superblock - bad magic number !!!" and has been searching for a secondary superblock for about four hours. I'll update once that finishes.
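For anyone finding this later: from what I understand, the webUI's "Check Filesystem Status" runs xfs_repair in no-modify mode under the hood. The equivalent console check would look something like this (a sketch only; /dev/md1 is an assumption for disk1, and the array must be started in maintenance mode — don't run a real repair blindly):

```shell
# No-modify check of disk1 (device name assumed; array in maintenance mode).
# -n reports problems but changes nothing on disk.
xfs_repair -n /dev/md1

# A real repair attempt drops the -n; if the primary superblock is bad it
# will scan the whole disk for a secondary superblock, which can take hours.
xfs_repair /dev/md1
```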

 

nas-diagnostics-20230619-1139.zip

Edited by Crlaozwyn
12 hours ago, Crlaozwyn said:

It says "Phase 1 - find and verify superblock... bad primary superblock - bad magic number !!!" and has been searching for a secondary superblock for about four hours. I'll update once that finishes.

Let it finish, but if a secondary superblock is not found in the first hour it's a bad sign.

5 hours ago, JorgeB said:

Let it finish, but if a secondary superblock is not found in the first hour it's a bad sign.

Thanks. It's been about 20 hours now, so I assume I'm SOL. Is it time to restore from parity? If so, I assume the process would be

1) Remove the drive from the array

2) Format the drive in unraid

3) Return the drive to the array

4) Wait for days as parity rebuilds

 

Edit: Sorry, you clearly said, "Let it finish" so I'll do that and hope. I'll respond here with the results but, assuming the news isn't good, is the process above correct?

Edited by Crlaozwyn
51 minutes ago, Crlaozwyn said:

Is it time to restore from parity?

Parity usually cannot help with filesystem corruption; it will only produce a different result if it's not 100% in sync. You can try unassigning the disk and seeing if the emulated disk gives a different result, but note that doing this will leave the array unprotected.


It finished with "exiting now," which seems to be the computer equivalent of "I quit." So, when I get home I'll be searching for how to emulate the disk to see if I've lost everything? Any idea why a three-month-old drive that passed preclear would do this on what appeared to be a routine clean shutdown and restart?

2 minutes ago, Crlaozwyn said:

how to emulate the disk

Stop the array, unassign that disk, and start the array. If it doesn't mount, and it likely won't, check the filesystem on the emulated disk to see if the result is any different.

3 minutes ago, Crlaozwyn said:

Any idea why a three month old drive that passed preclear would do this

This is not a disk problem, it's filesystem corruption.


Much appreciated. This stuff is stressful! Want to take time off for the rest of the day to go home and try now, but apparently I’m supposed to be an adult. I’ll report back what I find, but am thankful for the guidance. 

 

Edit:

So, turns out I can WFH today after all. Sweet. Followed the steps you outlined. The array did start, but unsurprisingly disk1 was missing and wasn't emulated by parity. I've started in maintenance mode and it's looking for the disk 1 superblock again.

 

Not going to lie, I want to cry.

 

Edit 2:

So, over an hour has passed since attempting to find secondary superblock with the drive unassigned, which I assume means I'm SOL. I believe using UFS Explorer is going to be as "simple" as removing the 14TB drive from the unraid server, plugging it into my desktop along with another 14TB drive, and getting what I can off of it. But what happens after that? Do I put the drive back into the unraid server and format it, before copying everything back over, or is there another way? When I'm faced with data loss, my brain shuts down and doesn't process things correctly, so my apologies for needing so much help.

 

Edit 3: Looks like it'd be the process at https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/#redoing-a-drive-formatted-with-xfs unless that's out of date.

Edited by Crlaozwyn

Drive is ordered for the data recovery. HDDSuperClone is burned to USB and ready. I'm thinking I probably haven't asked the most important question, one I haven't been able to find an answer to in my searches so far:

 

how does this happen and is there anything I can do to prevent it?

3 hours ago, Crlaozwyn said:

how does this happen

Unclear; it could be a hardware/firmware issue, or just an XFS problem.

 

3 hours ago, Crlaozwyn said:

is there anything I can do to prevent it?

You should have backups of anything important; that doesn't mean the whole array, but at least any irreplaceable data.


That's absolutely terrifying. Is there any way to diagnose or have early detection?

 

I have cloud backup for family photos and the like, but even if I don't lose data in this situation, I'm estimating it's going to be at least another week before my unraid server is up and running again (~3 days to clone drive based on preclear times, probably similar to copy the data, which I'll have to do twice). I just want to make sure I've done what I reasonably can on my end. I've been using unraid for around 15 years and so I was probably overdue for some kind of catastrophic failure, but I'd prefer to have another 15 before it happens again. The rig has been stable for three years without hardware changes other than drive upgrades.


Follow up question about procedure as UFS Explorer is completing its scan and I'm preparing to move data back to the drive.

 

I'm obviously going to have to rebuild parity since it includes the corruption. After reformatting the drive in unraid to prepare it for the array, does it make more sense to:

  1. Add it as a blank drive and use some rsync-based solution like DFM to copy everything over the network or
  2. Plug the formatted drive into a separate system, boot up UFS Explorer again, and copy the files over SATA

Everything I'm seeing online seems to assume #1, but wouldn't that significantly slow down the transfer speed as parity tries to keep up? I guess I'm trying to see if there's a reason not to go with #2 because it seems that I'd have a protected array with all of my data much faster that way.

Edited by Crlaozwyn
9 hours ago, Crlaozwyn said:

Follow up question about procedure as UFS Explorer is completing its scan and I'm preparing to move data back to the drive.

Not quite sure what you have done with UFS Explorer? Did the UFS Explorer scan find anything? If so, have you copied the files elsewhere, if you want to put the drive back into the array?

4 hours ago, JorgeB said:

Re-formatting (and re-writing) the disk will update parity, nothing else you need to do.

Sorry for not asking my question clearly. I know parity will automatically be updated when I make changes to the array; what I'm trying to figure out is if I need to add data to the reformatted drive while it's in the array or if I can format it in the array, shut down the array and remove the drive, add the data back outside the array where the speed will be much higher, and reintroduce the drive to unraid. 

 

I don't have a cache drive (limited to 6 drives on my board) so write speeds plummet after a few GB of transfers (from >100 MB/sec to ~20 MB/sec). If I'm extending that to transferring 10TB onto this 14TB drive over the network with the drive in the array, that's going to take about six days. If I do it over SATA on my workbench computer, it's going to take about a day. Building the parity will take probably three, but I'd have access to my content during that time.
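The back-of-the-envelope math behind those estimates, taking 1 TB as 10^12 bytes and the sustained rates I've been seeing:

```shell
# Rough transfer-time estimates for moving 10 TB of data.
bytes=$((10 * 10**12))
echo "$(( bytes / (20 * 10**6) / 86400 )) days over the network at ~20 MB/s"   # 5 days
echo "$(( bytes / (120 * 10**6) / 3600 )) hours over SATA at ~120 MB/s"        # 23 hours
```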

 

4 hours ago, itimpi said:

Not quite sure what you have done with UFS Explorer? Did the UFS Explorer scan find anything? If so, have you copied the files elsewhere, if you want to put the drive back into the array?

It seems that UFS Explorer's involvement was unnecessary. It found a bunch of files that had been moved to other disks, but the content I need was readily available. The disk had been formatted as XFS but was showing as ReiserFS. No clue how that happened, but that's probably what confused unraid. If it were possible to change the file system without wiping the contents, I could probably just put the drive back in the array as-is, but I'm not aware of a way to do that. Before checking the drive with UFS Explorer, I cloned the entire drive using HDDSuperClone, so when unraid's formatting wipes out the drive contents they'll still exist on the cloned drive. What I was asking above is if I can put that freshly formatted drive back on my Linux workbench and copy the contents of the cloned drive directly to the reformatted drive (not clone, as I know that would wipe out unraid's flags and the file system) or if that transfer process has to happen while the reformatted drive is in the unraid array.

  • Solution
37 minutes ago, Crlaozwyn said:

Sorry for not asking my question clearly. I know parity will automatically be updated when I make changes to the array; what I'm trying to figure out is if I need to add data to the reformatted drive while it's in the array or if I can format it in the array, shut down the array and remove the drive, add the data back outside the array where the speed will be much higher, and reintroduce the drive to unraid. 

 

I don't have a cache drive (limited to 6 drives on my board) so write speeds plummet after a few GB of transfers (from >100 MB/sec to ~20 MB/sec). If I'm extending that to transferring 10TB onto this 14TB drive over the network with the drive in the array, that's going to take about six days. If I do it over SATA on my workbench computer, it's going to take about a day. Building the parity will take probably three, but I'd have access to my content during that time.

 

It seems that UFS Explorer's involvement was unnecessary. It found a bunch of files that had been moved to other disks, but the content I need was readily available. The disk had been formatted as XFS but was showing as ReiserFS. No clue how that happened, but that's probably what confused unraid. If it were possible to change the file system without wiping the contents, I could probably just put the drive back in the array as-is, but I'm not aware of a way to do that. Before checking the drive with UFS Explorer, I cloned the entire drive using HDDSuperClone, so when unraid's formatting wipes out the drive contents they'll still exist on the cloned drive. What I was asking above is if I can put that freshly formatted drive back on my Linux workbench and copy the contents of the cloned drive directly to the reformatted drive (not clone, as I know that would wipe out unraid's flags and the file system) or if that transfer process has to happen while the reformatted drive is in the unraid array.


You cannot copy data to the drive outside Unraid without invalidating parity. However, as long as the drive was formatted by Unraid, you can copy data to it outside Unraid as long as you later use the New Config tool to reintroduce the drive and then rebuild parity.

 

If, as you say, there may have been some confusion at the Unraid level about what file system was on the drive, then there are a couple of things I would suggest trying:

  • Stop the array.
  • Unassign the drive (if it is currently assigned).
  • Click on the emulated drive and explicitly set it to XFS rather than Auto.
  • Start the array to see if any data is now visible.

The other thing to try while the drive is not in the array is to see if it can be mounted in the server using the Unassigned Devices plugin. If it can, then you could do any copying locally within the server. You could either speed up the copying by removing the parity drive until the copying has finished, or (safer) keep the parity drive assigned and live with the slower copying to stay protected against any other drive having problems.
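(For reference, what the Unassigned Devices plugin does when it mounts a drive outside the array is roughly equivalent to a manual mount like the following, shown read-only here for safety; sdX1 is a placeholder for the drive's actual partition:)

```shell
# Rough manual equivalent of mounting the drive outside the array
# (sdX1 is a placeholder; -o ro keeps the mount read-only while copying off).
mkdir -p /mnt/recovery
mount -t xfs -o ro /dev/sdX1 /mnt/recovery
```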


I'm having a similar issue to this, although thankfully it's not an error with an existing drive. I've added 3x 4TB WD Reds. They cleared just fine, but now when I go to format the drives I get the error "Unmountable: Unsupported or no file system". My logs show an issue with an invalid superblock number. At present the array is up and running just fine, with no issues other than the 3 drives sitting there in that status. These 3 drives were part of the array a long time ago and were upgraded to bigger drives due to a lack of drive bays, but since I added a JBOD I wanted to try to add them back. Since they get fully wiped during the clear, I doubt this is the issue? Is this likely a sign of a bigger issue? Diagnostics included. @JorgeB @itimpi Any help would be greatly appreciated. Also, these are the first drives I've tried to add since the update to 6.12.x. I did have some difficulty following the update, which appeared to be resolved before I tried this.

server-diagnostics-20230624-2103.zip


Hey tazire, if you'd like an answer to your question, your best bet is to make a thread for your issue. While you have the same error, it's really a unique situation with different symptoms, concerns, and probably solutions.

 

11 hours ago, itimpi said:


You cannot copy data to the drive outside Unraid without invalidating parity. However, as long as the drive was formatted by Unraid, you can copy data to it outside Unraid as long as you later use the New Config tool to reintroduce the drive and then rebuild parity.

 

If, as you say, there may have been some confusion at the Unraid level about what file system was on the drive, then there are a couple of things I would suggest trying:

  • Stop the array.
  • Unassign the drive (if it is currently assigned).
  • Click on the emulated drive and explicitly set it to XFS rather than Auto.
  • Start the array to see if any data is now visible.

The other thing to try while the drive is not in the array is to see if it can be mounted in the server using the Unassigned Devices plugin. If it can, then you could do any copying locally within the server. You could either speed up the copying by removing the parity drive until the copying has finished, or (safer) keep the parity drive assigned and live with the slower copying to stay protected against any other drive having problems.

Marking your response as the answer, though I do have more info to share in case others find this thread in the future. The reason no one talks about copying drives over SATA is that it's not feasible. The unraid file system can't be read in Linux. While UFS Explorer can read it fine, it requires a traditional file path to SAVE, which means that even though it COULD read from one drive and write to another, this wasn't available in the current release (I believe it's 9.1.3). So, guess what I'm doing? Yeah, a network transfer. Since my parity was shot from my attempts to get a file system readable in Linux anyway, I'm going the "New Config" route. For now, only the target 14TB drive is in the unprotected array, which means I get full network write speeds and the other drives have less opportunity to eat dirt. Once everything is copied over, I'll add the other drives to the array and build parity. I'll report back when it's complete, which will probably be in a few days.

 

Oh, forgot to add - I did try swapping the file system - it was already set to XFS. Tried ReiserFS too, since that's how UFS Explorer identified the drive. In both cases, I got no love.

Edited by Crlaozwyn

Ok, I'm mostly there and very thankful for all the help that's brought me here. Parity is rebuilding and it looks like my data is intact. One thing is very strange and I'm sure it's a setting I missed: my /mnt/user directory has been moved (renamed?) to /mnt/user0. Though all my user shares were originally present, they disappeared after about 20 minutes of the server running. From a brief search, it looks like user0 is similar to user but excludes the cache drive. I've never had a cache drive enabled. 

 

I can deal with the "user0" path if I have to, though I'd prefer the cleaner "user", but is there any way to restore the missing user shares? I assume they disappeared because the user path no longer exists.

9 minutes ago, JorgeB said:

You should always have both; /mnt/user0 just does not include the pools. A reboot should bring /mnt/user back; if it doesn't, post new diags.

Parity will be rebuilding for the next 20 hours or so. OK to let that finish before rebooting or will it cause issues to have parity without that folder? I think it should be OK because it's system generated, but I'm obviously in over my head. Thanks for sticking with me through this!

