ZFS Unmountable: Unsupported or no file system // help please



Hi,

In my Unraid array, I have five ZFS single-disk stripes, labeled Disk 1 through Disk 5, along with one parity drive. The five ZFS disks have been formatted through the Unraid GUI.

 

After encountering an issue with Disk 2, I ran a parity check to completion, stopped the array, removed the disk, and began clearing it. After a few minutes I canceled the clearing process, reinserted the disk into the array, and restarted it, assuming Unraid would recognize Disk 2 as failed and initiate a rebuild.

 

Upon restarting the array, Disk 2 was flagged as "Unmountable: Unsupported partition layout," which was somewhat expected. However, what surprised me was that Disk 5 was flagged as "Unmountable: Unsupported or no filesystem," and Unraid suggested rebuilding Disk 5, which I accepted.

 

Unfortunately, Disk 5 remains unmountable even after the rebuild has finished.

 

What also puzzles me is that the "zpool import" command shows disk sdi (Disk 5) carrying the pool name "disk4."

 

root@unraid:~# zpool import
  pool: disk4
    id: 7004989206944992445
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

       disk4       UNAVAIL  insufficient replicas
         sdi1      UNAVAIL  invalid label
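
For reference, the on-disk label that "zpool import" is complaining about can be inspected directly with zdb; a minimal sketch, using the same device as in the output above (zdb -l only reads the label, so it should be safe to run):

root@unraid:~# zdb -l /dev/sdi1

A healthy ZFS partition prints up to four copies of the label, including the pool name, pool GUID, and txg; "failed to unpack label" on all four copies would match the "invalid label" state shown above.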


 

syslog:

Mar 23 19:42:20 unraid emhttpd: mounting /mnt/disk4
Mar 23 19:42:20 unraid emhttpd: shcmd (58): mkdir -p /mnt/disk4
Mar 23 19:42:20 unraid emhttpd: /usr/sbin/zpool import -f -d /dev/md4p1 2>&1
Mar 23 19:42:23 unraid emhttpd:    pool: disk4
Mar 23 19:42:23 unraid emhttpd:      id: 12000237853854133759
Mar 23 19:42:23 unraid emhttpd: shcmd (59): /usr/sbin/zpool import -f -N -o autoexpand=on  -d /dev/md4p1 12000237853854133759 disk4
Mar 23 19:42:27 unraid emhttpd: shcmd (60): /usr/sbin/zpool online -e disk4 /dev/md4p1
Mar 23 19:42:27 unraid emhttpd: /usr/sbin/zpool status -PL disk4 2>&1
Mar 23 19:42:27 unraid emhttpd:   pool: disk4
Mar 23 19:42:27 unraid emhttpd:  state: ONLINE
Mar 23 19:42:27 unraid emhttpd: config:
Mar 23 19:42:27 unraid emhttpd:  NAME          STATE     READ WRITE CKSUM
Mar 23 19:42:27 unraid emhttpd:  disk4         ONLINE       0     0     0
Mar 23 19:42:27 unraid emhttpd:    /dev/md4p1  ONLINE       0     0     0
Mar 23 19:42:27 unraid emhttpd: errors: No known data errors
Mar 23 19:42:27 unraid emhttpd: shcmd (61): /usr/sbin/zfs set mountpoint=/mnt/disk4 disk4
Mar 23 19:42:28 unraid emhttpd: shcmd (62): /usr/sbin/zfs set atime=off disk4
Mar 23 19:42:28 unraid emhttpd: shcmd (63): /usr/sbin/zfs mount disk4
Mar 23 19:42:28 unraid emhttpd: shcmd (64): /usr/sbin/zpool set autotrim=off disk4
Mar 23 19:42:29 unraid emhttpd: shcmd (65): /usr/sbin/zfs set compression=off disk4
Mar 23 19:42:29 unraid emhttpd: mounting /mnt/disk5
Mar 23 19:42:29 unraid emhttpd: shcmd (66): mkdir -p /mnt/disk5
Mar 23 19:42:29 unraid emhttpd: /usr/sbin/zpool import -f -d /dev/md5p1 2>&1
Mar 23 19:42:32 unraid emhttpd:    pool: disk4
Mar 23 19:42:32 unraid emhttpd:      id: 7004989206944992445
Mar 23 19:42:32 unraid emhttpd: shcmd (68): /usr/sbin/zpool import -f -N -o autoexpand=on  -d /dev/md5p1 7004989206944992445 disk5
Mar 23 19:42:35 unraid root: cannot import 'disk4' as 'disk5': one or more devices is currently unavailable
Mar 23 19:42:35 unraid emhttpd: shcmd (68): exit status: 1
Mar 23 19:42:35 unraid emhttpd: shcmd (69): /usr/sbin/zpool online -e disk5 /dev/md5p1
Mar 23 19:42:35 unraid root: cannot open 'disk5': no such pool
Mar 23 19:42:35 unraid emhttpd: shcmd (69): exit status: 1
Mar 23 19:42:35 unraid emhttpd: disk5: import error
Mar 23 19:42:35 unraid emhttpd: /mnt/disk5 mount error: Unsupported or no file system
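
For reference, a manual import by the pool id from the log is one thing that can be attempted here before rebuilding; a minimal sketch using standard OpenZFS recovery options (readonly=on avoids any further writes, and -F with -n is a dry run that only reports whether rewinding to an earlier transaction would make the pool importable; whether either helps depends on how badly the label is damaged):

root@unraid:~# zpool import -d /dev/md5p1 -o readonly=on 7004989206944992445
root@unraid:~# zpool import -d /dev/md5p1 -F -n 7004989206944992445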

 

Is there anything I can try to restore the ZFS filesystem on Disk 5 so that Unraid can mount it?

 

Thank you for your help :)

 

Screenshot_20240324_152106.png

unraid-diagnostics-20240324-1349.zip

3 hours ago, jehan said:

After encountering an issue with Disk 2, I ran a parity check to completion, stopped the array, removed the disk, and began clearing it. After a few minutes I canceled the clearing process, reinserted the disk into the array, and restarted it, assuming Unraid would recognize Disk 2 as failed and initiate a rebuild.

Not sure I follow: was disk2 disabled at any point, or did you just unassign it, clear it, and re-assign it without it ever being disabled? That would make parity invalid.

 

And was disk5 disabled or just unmountable?


What I did was: stop the array, remove Disk 2, start the array - Disk 2 was emulated.

Then I started to clear Disk 2.

Then I stopped the array, plugged Disk 2 back in, and started the array.

At this point Disk 5 became unavailable.

 

I never actively disabled Disk 5 at any point.

 

I hope I answered your question.

2 hours ago, jehan said:

Then I stopped the array, plugged Disk 2 back in, and started the array.

That would begin a rebuild, and it could never show

 

8 hours ago, jehan said:

"Unmountable: Unsupported partition layout,"

Something is missing, and the diags don't go back that far, so we can't see what happened.

 

I do see that, besides the other issues, ZFS detected data corruption on disks 1 and 3, suggesting you have a serious hardware issue:

 

Mar 23 19:42:11 unraid emhttpd: status: One or more devices has experienced an error resulting in data
Mar 23 19:42:11 unraid emhttpd:  corruption.  Applications may be affected.


Start by running memtest.
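
For reference, the affected files can be listed per pool with the -v flag; a minimal sketch, using this thread's pool naming for disks 1 and 3:

zpool status -v disk1
zpool status -v disk3

With -v, zpool status prints the list of known permanent errors, including the paths of the affected files.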

 

 

 


Yes, I'm aware of the detected data corruption.
Previously, I had some problems with newly attached hardware which forced me to hard-reset the system several times, causing the data corruption. However, this is not a significant issue since I have the corrupted data backed up.

 

I previously conducted a memtest, which showed no problems with the memory.

 

After I plugged Disk 2 back into the array and rebooted the Unraid machine, Disk 5 became unavailable. Consequently, I had two disks that could not be mounted. Surprisingly (to me), Unraid offered the option to rebuild Disk 5, which I accepted.

 

What do you think? Do I have any chance of rescuing the data on Disk 5?

3 minutes ago, jehan said:

Previously, I had some problems with newly attached hardware which forced me to hard-reset the system several times, causing the data corruption.

A hard reset should never cause data corruption. It can cause some data to be lost if you were in the middle of writing to the pool, but in those cases the data should be either complete or absent, never corrupt.

 

Also note that using USB for the array/pool is strongly discouraged; it can cause several issues, possibly related to at least some of those you are seeing:

 

Mar 23 19:42:35 unraid root: cannot import 'disk4' as 'disk5': one or more devices is currently unavailable

 

This suggests disk5 was disk4 at some point in the past. Unfortunately the diags don't cover what happened before, so please post the output of

 

zpool import

 

 

 

 

8 minutes ago, JorgeB said:

A hard reset should never cause data corruption. It can cause some data to be lost if you were in the middle of writing to the pool, but in those cases the data should be either complete or absent, never corrupt.

Okay, good to know.

 

9 minutes ago, JorgeB said:

Also note that using USB for the array/pool is strongly discouraged; it can cause several issues, possibly related to at least some of those you are seeing

 

Yes, I'm a bit limited in my options here.

 

zpool import

root@unraid:~# zpool import
  pool: disk4
    id: 7004989206944992445
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

       disk4       UNAVAIL  insufficient replicas
         sdh1      UNAVAIL  invalid label
root@unraid:~#


 


Looks like another corrupt pool, though in this case the label/metadata is corrupt, so it cannot even be imported; and as mentioned, this pool was disk4 at some point in the past.

 

Unfortunately, and AFAIK, there's nothing that can be done for this error other than destroying the pool and restoring from a backup.
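
For reference, if you do end up destroying the pool, the stale ZFS label can be wiped so the disk can be cleanly re-formatted in Unraid; a minimal sketch, using the device path from your output above. This is destructive, and the device letter can change between boots, so double-check the path first:

zpool labelclear -f /dev/sdh1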
