Can I mount Zpools created in Unraid in other OSes?



Just about when I was flipping my little server into prod, a problem that had already happened once appeared again. The first time I just figured it was my fault, since it was a testing system.

 

Not this time. I had recreated my servers from ZFS snapshot clones, reducing about half a terabyte (OS data only) of very high-IO data into less than 100GB of super lean, de-duplicated, compressed datasets. It was meticulous, it was thought through.

 

The original vDisks still exist, but I kinda want to rescue my work on them.

 

Is the ZFS filesystem implemented on Unraid the standard ZFS (on Linux)? Would it mount (assuming it's OK) in Fedora, FreeBSD or anywhere else?
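
 

Concretely, what I'd be trying on the other box is roughly this -- the pool name is from my setup, the alternate root is just an example path, and the read-only import is me being cautious rather than knowing it's required:

# list pools visible on the attached disks without importing anything yet
zpool import -d /dev/disk/by-id

# import read-only under an alternate root so nothing on the pool gets modified
zpool import -o readonly=on -R /mnt/rescue alpha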

 

Thanks.

 


Well, it is:

[Screenshot from 2023-11-20 17:19:02 attached]

 

...
  pool: alpha
    id: 1551723972850019203
 state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
	software, or recreate the pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-A5
config:

	alpha                                                              UNAVAIL  newer version
	  raidz1-0                                                         ONLINE
	    disk/by-id/ata-KINGSTON_SV300S3...
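
 

If I'm reading that right, an "incompatible version" message from a current OpenZFS usually means the pool uses feature flags that this particular build doesn't support. Two things I figure I can check on the Fedora side (just a sketch; the device name is only an example):

# list the pool versions and feature flags this OpenZFS build supports
zpool upgrade -v

# dump the on-disk label from one of the members; it should show the features the pool needs for reading
zdb -l /dev/sdb1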


 

Oh, wait, do you mean as long as the guest OS expects the partition to be the first one--and not the second, as is the case with TrueNAS?
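
 

For what it's worth, checking which partition actually carries the ZFS member is quick enough -- a sketch, with sdb standing in for one of the pool disks:

# show each partition and its detected filesystem
lsblk -o NAME,SIZE,FSTYPE /dev/sdb

# the partition holding the pool reports TYPE="zfs_member"
blkid /dev/sdb1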

 

That makes a little more sense. I couldn't help myself, however, and sort of already tried importing the pool on another system, Fedora.

 

It seems it comes with outdated ZFS. Well, not "comes", since it's an after-install... but you get the idea. 🫤 I'll update to Fedora 39 (this is 38) to see if it's a compatibility thing and gets sorted out on its own. If it doesn't, I'm going deep on FreeBSD, maybe even OpenIndiana -- or not, that's kind of a lot -- just FreeBSD then. 😃
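
 

Before blaming the Fedora release, I can at least confirm which OpenZFS the box actually runs (a sketch):

# userland and kernel-module versions of OpenZFS
zfs version

# the loaded kernel module reports its version as well
modinfo zfs | grep -iw version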

 

For what it's worth though, if Fedora can identify the filesystem and reassemble the Zpool -- even though it didn't occur to me to mark the disks until way after I had taken them out of the caddies -- it gives me a little hope. That, and the fact that ZFS is not quite there yet in Unraid, and the fact that the unresponsive Zpool came with the cutest little kernel panics in the log, which I kinda forgot to mention earlier. I saved the text somewhere; I'll come back and paste it as soon as I find it, maybe it helps the devs.
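
 

(About never marking the disks: as I understand it, ZFS identifies pool members by labels written on the disks themselves, so the physical order shouldn't matter. Mapping a pulled disk back to a serial number is something like this -- a sketch:)

# model and serial for every disk, so unlabeled drives can be matched to the pool's by-id names
lsblk -d -o NAME,SIZE,MODEL,SERIAL

# the by-id symlinks encode vendor/model/serial, the same naming the import listing uses
ls -l /dev/disk/by-id/ | grep -v -- -part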

 

Nevertheless, I don't know why I'm rambling on and on instead of just saying thank you, because that's what I logged in for. Thanks.


Huh. I thought it was going to take longer to find it.

 

 

Nov 19 19:48:46 zx3 monitor: Stop running nchan processes
Nov 19 19:48:47 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:48:50 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:48:50 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:48:53 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:48:56 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:48:58 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Nov 19 19:49:52 zx3 kernel: mdcmd (31): set md_num_stripes 1280
Nov 19 19:49:52 zx3 kernel: mdcmd (32): set md_queue_limit 80
Nov 19 19:49:52 zx3 kernel: mdcmd (33): set md_sync_limit 5
Nov 19 19:49:52 zx3 kernel: mdcmd (34): set md_write_method
Nov 19 19:49:52 zx3 kernel: mdcmd (35): start STOPPED
Nov 19 19:49:52 zx3 kernel: unraid: allocating 15750K for 1280 stripes (3 disks)
Nov 19 19:49:52 zx3 kernel: md1p1: running, size: 71687336 blocks
Nov 19 19:49:52 zx3 emhttpd: shcmd (205): udevadm settle
Nov 19 19:49:53 zx3 emhttpd: Opening encrypted volumes...
Nov 19 19:49:53 zx3 emhttpd: shcmd (206): touch /boot/config/forcesync
Nov 19 19:49:53 zx3 emhttpd: Mounting disks...
Nov 19 19:49:53 zx3 emhttpd: mounting /mnt/disk1
Nov 19 19:49:53 zx3 emhttpd: shcmd (207): mkdir -p /mnt/disk1
Nov 19 19:49:53 zx3 emhttpd: /usr/sbin/zpool import -d /dev/md1p1 2>&1
Nov 19 19:49:56 zx3 emhttpd:    pool: disk1
Nov 19 19:49:56 zx3 emhttpd:      id: 9807385397724693529
Nov 19 19:49:56 zx3 emhttpd: shcmd (209): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/md1p1 9807385397724693529 disk1
Nov 19 19:50:01 zx3 emhttpd: shcmd (210): /usr/sbin/zpool online -e disk1 /dev/md1p1
Nov 19 19:50:01 zx3 emhttpd: /usr/sbin/zpool status -PL disk1 2>&1
Nov 19 19:50:01 zx3 emhttpd:   pool: disk1
Nov 19 19:50:01 zx3 emhttpd:  state: ONLINE
Nov 19 19:50:01 zx3 emhttpd:   scan: scrub repaired 0B in 00:00:01 with 0 errors on Tue Nov 14 00:00:02 2023
Nov 19 19:50:01 zx3 emhttpd: config:
Nov 19 19:50:01 zx3 emhttpd:  NAME          STATE     READ WRITE CKSUM
Nov 19 19:50:01 zx3 emhttpd:  disk1         ONLINE       0     0     0
Nov 19 19:50:01 zx3 emhttpd:    /dev/md1p1  ONLINE       0     0     0
Nov 19 19:50:01 zx3 emhttpd: errors: No known data errors
Nov 19 19:50:01 zx3 emhttpd: shcmd (211): /usr/sbin/zfs set mountpoint=/mnt/disk1 disk1
Nov 19 19:50:02 zx3 emhttpd: shcmd (212): /usr/sbin/zfs set atime=off disk1
Nov 19 19:50:02 zx3 emhttpd: shcmd (213): /usr/sbin/zfs mount disk1
Nov 19 19:50:02 zx3 emhttpd: shcmd (214): /usr/sbin/zpool set autotrim=off disk1
Nov 19 19:50:02 zx3 emhttpd: shcmd (215): /usr/sbin/zfs set compression=on disk1
Nov 19 19:50:03 zx3 emhttpd: mounting /mnt/alpha
Nov 19 19:50:03 zx3 emhttpd: shcmd (216): mkdir -p /mnt/alpha
Nov 19 19:50:03 zx3 emhttpd: shcmd (217): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdb1 -d /dev/sdc1 -d /dev/sdd1 1551723972850019203 alpha
Nov 19 19:50:29 zx3 kernel: VERIFY3(size <= rt->rt_space) failed (281442912784384 <= 2054406144)
Nov 19 19:50:29 zx3 kernel: PANIC at range_tree.c:436:range_tree_remove_impl()
Nov 19 19:50:29 zx3 kernel: Showing stack for process 25971
Nov 19 19:50:29 zx3 kernel: CPU: 3 PID: 25971 Comm: z_wr_iss Tainted: P          IO       6.1.49-Unraid #1

 

 

Then comes the trace:

[ look at me, saying things authoritatively as if I knew what they mean 😆 ]

Nov 19 19:50:29 zx3 kernel: Call Trace:
Nov 19 19:50:29 zx3 kernel: <TASK>
Nov 19 19:50:29 zx3 kernel: dump_stack_lvl+0x44/0x5c
Nov 19 19:50:29 zx3 kernel: spl_panic+0xd0/0xe8 [spl]
Nov 19 19:50:29 zx3 kernel: ? memcg_slab_free_hook+0x20/0xcf
Nov 19 19:50:29 zx3 kernel: ? zfs_btree_insert_into_leaf+0x2ae/0x47d [zfs]
Nov 19 19:50:29 zx3 kernel: ? slab_free_freelist_hook.constprop.0+0x3b/0xaf
Nov 19 19:50:29 zx3 kernel: ? bt_grow_leaf+0xc3/0xd6 [zfs]
Nov 19 19:50:29 zx3 kernel: ? bt_grow_leaf+0xc3/0xd6 [zfs]
Nov 19 19:50:29 zx3 kernel: ? zfs_btree_find_in_buf+0x4c/0x94 [zfs]
Nov 19 19:50:29 zx3 kernel: ? zfs_btree_find+0x16d/0x1b0 [zfs]
Nov 19 19:50:29 zx3 kernel: ? rs_get_start+0xc/0x1d [zfs]
Nov 19 19:50:29 zx3 kernel: range_tree_remove_impl+0x77/0x406 [zfs]
Nov 19 19:50:29 zx3 kernel: ? range_tree_remove_impl+0x3fb/0x406 [zfs]
Nov 19 19:50:29 zx3 kernel: space_map_load_callback+0x70/0x79 [zfs]
Nov 19 19:50:29 zx3 kernel: space_map_iterate+0x2d3/0x324 [zfs]
Nov 19 19:50:29 zx3 kernel: ? spa_stats_destroy+0x16c/0x16c [zfs]
Nov 19 19:50:29 zx3 kernel: space_map_load_length+0x93/0xcb [zfs]
Nov 19 19:50:29 zx3 kernel: metaslab_load+0x33b/0x6e3 [zfs]
Nov 19 19:50:29 zx3 kernel: ? slab_post_alloc_hook+0x4d/0x15e
Nov 19 19:50:29 zx3 kernel: ? __slab_free+0x83/0x229
Nov 19 19:50:29 zx3 kernel: ? spl_kmem_alloc_impl+0xc1/0xf2 [spl]
Nov 19 19:50:29 zx3 kernel: ? __kmem_cache_alloc_node+0x118/0x147
Nov 19 19:50:29 zx3 kernel: metaslab_activate+0x36/0x1f1 [zfs]
Nov 19 19:50:29 zx3 kernel: metaslab_alloc_dva+0x8bc/0xfce [zfs]
Nov 19 19:50:29 zx3 kernel: ? preempt_latency_start+0x2b/0x46
Nov 19 19:50:29 zx3 kernel: metaslab_alloc+0x107/0x1fd [zfs]
Nov 19 19:50:29 zx3 kernel: zio_dva_allocate+0xee/0x73f [zfs]
Nov 19 19:50:29 zx3 kernel: ? kmem_cache_free+0xc9/0x154
Nov 19 19:50:29 zx3 kernel: ? spl_kmem_cache_free+0x3a/0x1a5 [spl]
Nov 19 19:50:29 zx3 kernel: ? preempt_latency_start+0x2b/0x46
Nov 19 19:50:29 zx3 kernel: ? _raw_spin_lock+0x13/0x1c
Nov 19 19:50:29 zx3 kernel: ? _raw_spin_unlock+0x14/0x29
Nov 19 19:50:29 zx3 kernel: ? tsd_hash_search+0x70/0x7d [spl]
Nov 19 19:50:29 zx3 kernel: zio_execute+0xb1/0xdf [zfs]
Nov 19 19:50:29 zx3 kernel: taskq_thread+0x266/0x38a [spl]
Nov 19 19:50:29 zx3 kernel: ? wake_up_q+0x44/0x44
Nov 19 19:50:29 zx3 kernel: ? zio_subblock+0x22/0x22 [zfs]
Nov 19 19:50:29 zx3 kernel: ? taskq_dispatch_delay+0x106/0x106 [spl]
Nov 19 19:50:29 zx3 kernel: kthread+0xe4/0xef
Nov 19 19:50:29 zx3 kernel: ? kthread_complete_and_exit+0x1b/0x1b
Nov 19 19:50:29 zx3 kernel: ret_from_fork+0x1f/0x30
Nov 19 19:50:29 zx3 kernel: </TASK>

 

And it hann..nnngs…

[Screenshot from 2023-11-19 at 23:24:56 attached]
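
In case it helps anyone landing here with the same range_tree panic: the plan on the rescue box is a read-only import, possibly with a rewind, so nothing tries to allocate blocks out of the damaged space map. This is only a sketch of suggestions I've seen around, not something I've verified:

# read-only import: no writes, so the allocation path (zio_dva_allocate above) should never run
zpool import -o readonly=on -R /mnt/rescue alpha

# if that fails, -F asks for a rewind to an earlier transaction group, still read-only
zpool import -F -o readonly=on -R /mnt/rescue alpha

# last-resort knob sometimes suggested for space-map corruption; whether it helps with this exact VERIFY3, I honestly don't know
echo 1 > /sys/module/zfs/parameters/zfs_recover
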

I guess that's it. :)

