• ZFS Pool Import Not Complete


    AgentXXL
    • Closed Minor

    I've tried to upgrade to 6.12 RC2 from 6.11.5 twice now. The Update Assistant reports no issues and the upgrade appears to go smoothly, except for the import of my ZFS pool.

     

    My main unRAID system has been running a zfs pool under the plugin for releases below 6.12. The pool has one dataset called Media, and under that there should be a folder named TV. Alas, after upgrading from 6.11.5 to 6.12 RC2 and creating the new pool, the TV folder is not visible. Space allocation looks correct on the Main tab, and in the terminal:

     

    [screenshots attached: BugReport6.12-ZFS1.jpg, BugReport6.12-ZFS2.jpg]

     

    I did notice that my dashboard was empty; going through the forums I saw that the ZFS Companion plugin is not compatible with 6.12, so I uninstalled it and my dashboard was visible again. But back to the real issue: no access to the data that's on my pool. If I manually try to change to /mnt/animzfs/Media/TV, the folder is not found.

     

    I decided to try and stop the array, but now the system is stuck on 'Retry unmounting disk share(s)' and it's the zfs pool:

     

    Quote

    Apr  3 20:19:22 AnimNAS emhttpd: Retry unmounting disk share(s)...
    Apr  3 20:19:27 AnimNAS emhttpd: Unmounting disks...
    Apr  3 20:19:27 AnimNAS emhttpd: shcmd (9582): /usr/sbin/zpool export animzfs
    Apr  3 20:19:27 AnimNAS root: cannot unmount '/mnt/animzfs/Media': unmount failed
    Apr  3 20:19:27 AnimNAS emhttpd: shcmd (9582): exit status: 1

     

    So far I've been unable to force an unmount. I also accidentally clicked the 'SCRUB' button in the ZFS Master section of the Main tab, which started a scrub. I've paused it with `zpool scrub -p animzfs`, but the pool still won't unmount. Any suggestions for a way to cleanly unmount?
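    A few commands can help diagnose a stuck unmount before resorting to a reboot. This is only a sketch using the pool/dataset names from this report; `fuser` (from psmisc) may or may not be present on a given Unraid install:

```shell
# Confirm the scrub state (a paused scrub persists until it is
# resumed and finishes, or is cancelled with 'zpool scrub -s'):
zpool status animzfs

# List processes holding the mount open; a busy mount is the usual
# reason 'unmount failed' is reported:
fuser -vm /mnt/animzfs/Media

# As a last resort, ZFS supports forcing the unmount:
zfs unmount -f animzfs/Media
```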

     

    I also can't seem to grab diagnostics - it just sits there gathering data into the zip archive but never completes the download. Clearing cookies and cache and trying another browser didn't help either. I did manage to find the zfs section in my syslog, which is mirrored to my other unRAID server:

     

    Quote

    Apr  3 19:18:13 AnimNAS emhttpd: mounting /mnt/animzfs
    Apr  3 19:18:13 AnimNAS emhttpd: shcmd (640): mkdir -p /mnt/animzfs
    Apr  3 19:18:13 AnimNAS emhttpd: /sbin/btrfs filesystem show /dev/sdy1 2>&1
    Apr  3 19:18:13 AnimNAS emhttpd: ERROR: no btrfs on /dev/sdy1
    Apr  3 19:18:13 AnimNAS emhttpd: shcmd (641): modprobe zfs
    Apr  3 19:18:13 AnimNAS emhttpd: /usr/sbin/zpool import -d /dev/sdy1 2>&1
    Apr  3 19:18:14 AnimNAS emhttpd:    pool: animzfs
    Apr  3 19:18:14 AnimNAS emhttpd:      id: 2464160279060078275
    Apr  3 19:18:14 AnimNAS emhttpd: shcmd (642): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    Apr  3 19:18:25 AnimNAS emhttpd: /usr/sbin/zpool status -LP animzfs 2>&1
    Apr  3 19:18:25 AnimNAS emhttpd:   pool: animzfs
    Apr  3 19:18:25 AnimNAS emhttpd:  state: ONLINE
    Apr  3 19:18:25 AnimNAS emhttpd:   scan: scrub repaired 0B in 16:34:07 with 0 errors on Mon Mar 27 09:24:28 2023
    Apr  3 19:18:25 AnimNAS emhttpd: config:
    Apr  3 19:18:25 AnimNAS emhttpd:  NAME           STATE     READ WRITE CKSUM
    Apr  3 19:18:25 AnimNAS emhttpd:  animzfs        ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:    /r..1/...-0     ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdu1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdv1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdw1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdx1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdy1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd:      /dev/sdz1  ONLINE       0     0     0
    Apr  3 19:18:25 AnimNAS emhttpd: errors: No known data errors
    Apr  3 19:18:25 AnimNAS emhttpd: shcmd (643): /usr/sbin/zfs set mountpoint=/mnt/animzfs animzfs
    Apr  3 19:18:25 AnimNAS emhttpd: shcmd (644): /usr/sbin/zfs mount -o noatime animzfs
    Apr  3 19:18:26 AnimNAS emhttpd: shcmd (645): /usr/sbin/zpool set autotrim=off animzfs
    Apr  3 19:18:26 AnimNAS emhttpd: shcmd (646): /usr/sbin/zfs set compression=off animzfs
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root profile: /r..1/...
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root groups: 1
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root width: 6
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root ok: 6
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root new: 0
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root wrong:0
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root missing: 0
    Apr  3 19:18:26 AnimNAS emhttpd: /mnt/animzfs root missing already: 0
    Apr  3 19:18:26 AnimNAS emhttpd: shcmd (647): /usr/sbin/zfs mount animzfs/Media
    Apr  3 19:18:26 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted
    Apr  3 19:18:26 AnimNAS emhttpd: shcmd (647): exit status: 1

     

    Any assistance appreciated - thanks!

     

    EDIT: I just found a diagnostics zip file in my browser's download folder and it appears to be OK, so I've attached it. It contains the same syslog portion shown above.

     

    animnas-diagnostics-20230403-2006.zip




    User Feedback

    Recommended Comments

    This appears to be the problem:

     

    Apr  3 19:18:26 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted

     

    Try typing

    zfs unmount animzfs/Media

    to see if the array can then stop.

    Link to comment
    11 hours ago, JorgeB said:

    This appears to be the problem:

     

    Apr  3 19:18:26 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted

     

    Try typing

    zfs unmount animzfs/Media

    to see if the array can then stop.

     

    Tried that yesterday, and it didn't work.

     

    root@AnimNAS:/# zfs unmount animzfs/Media
    cannot unmount '/mnt/animzfs/Media': unmount failed

     

    I ended up issuing a reboot command, which of course led to an unclean shutdown. As no data in the array was touched, I rebooted again to clear the unclean status and prevent a parity check. Once I get the zfs pool import figured out, I'll manually restart the scrub of the zfs pool and then do a parity check.

     

    Any other thoughts? I've been researching it and wondering if it might be a permissions issue:

     

    root@AnimNAS:~# cd /mnt/animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# ls -alh
    total 40K
    drwxrwxrwx  3 nobody users   3 Apr  4 04:40 ./
    drwxr-xr-x 40 root   root  800 Apr  4 00:01 ../
    drwxr-xr-x  2 root   root    2 Mar 28  2022 Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls -alh
    total 40K
    drwxr-xr-x 2 root   root  2 Mar 28  2022 ./
    drwxrwxrwx 3 nobody users 3 Apr  4 04:40 ../

     

    Thanks!

     

    Link to comment

    The strange thing here is why animzfs/Media was already mounted. I've never seen this before, a dataset being mounted before Unraid tries to mount it. Check if the mount point exists after a reboot (before array start). Assuming it doesn't, try the commands Unraid uses one by one (adjust the sdX devices if needed) and check whether the dataset becomes mounted after the first mount command:

     

    mkdir -p /mnt/animzfs
    zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    zfs set mountpoint=/mnt/animzfs animzfs
    zfs mount -o noatime animzfs

     

     

    Link to comment
    32 minutes ago, JorgeB said:

    The strange thing here is why animzfs/Media was already mounted. I've never seen this before, a dataset being mounted before Unraid tries to mount it. Check if the mount point exists after a reboot (before array start). Assuming it doesn't, try the commands Unraid uses one by one (adjust the sdX devices if needed) and check whether the dataset becomes mounted after the first mount command:

     

    OK, after a reboot with the array still stopped, there is no animzfs folder under /mnt. I tried the commands you listed and now get this:

     

    root@AnimNAS:/# mkdir -p /mnt/animzfs
    root@AnimNAS:/# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/# zfs set mountpoint=/mnt/animzfs animzfs
    root@AnimNAS:/# zfs mount -o noatime animzfs
    root@AnimNAS:/# cd /mnt
    root@AnimNAS:/mnt# ls -alh
    total 39K
    drwxr-xr-x  7 root   root  140 Apr  4 13:01 ./
    drwxr-xr-x 19 root   root  420 Mar 29 09:05 ../
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 addons/
    drwxrwxrwx  2 nobody users   2 Apr  4 12:08 animzfs/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 disks/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 remotes/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls -alh
    total 39K
    drwxrwxrwx 2 nobody users   2 Apr  4 12:08 ./
    drwxr-xr-x 7 root   root  140 Apr  4 13:01 ../
    root@AnimNAS:/mnt/animzfs# cd Media
    bash: cd: Media: No such file or directory

     

    So now even the Media dataset isn't appearing. I did make a full backup of the data on the pool, so worst case I can always destroy the pool, re-create it under 6.12, and then restore the ~36TB of data. But I really would prefer to figure out what's happening. Any more ideas? Thanks again!

     

     

    Link to comment

    Also note that zfs list still shows the Media dataset.

     

    root@AnimNAS:/mnt/animzfs# zfs list
    NAME            USED  AVAIL     REFER  MOUNTPOINT
    animzfs        33.0T  10.5T      204K  /mnt/animzfs
    animzfs/Media  33.0T  10.5T     33.0T  /mnt/animzfs/Media

     

    Link to comment

    One more note: I did a rollback to 6.11.5, and interestingly, under it, even with the array stopped, the ZFS pool is mounted and the Media dataset contains the TV folder, with everything accessible. Something is obviously messed up with the way the pool is mounted, as I didn't think it would mount until I start the array.

    Edited by AgentXXL
    Link to comment
    14 hours ago, AgentXXL said:

    So now even the Media dataset isn't appearing.

    That is behaving as expected so far. Now, after all the other commands, try mounting that dataset:

     

    zfs mount animzfs/Media

     

    Link to comment
    12 hours ago, JorgeB said:

    That is behaving as expected so far. Now, after all the other commands, try mounting that dataset:

     

    zfs mount animzfs/Media

     

     

    As mentioned I rolled back to 6.11.5 as some of my 'snowbirding' relatives were bugging me about access to their shows. I'll give it another try tomorrow. In the meantime, is there anything I should look at on the 6.11.5 version that might help pinpoint the cause?

     

    As also mentioned, the ZFS pool is being mounted and the dataset/subfolders/files are fully accessible before the array is started. I was under the impression that it didn't get mounted until the array started, so if that's what's supposed to happen, I need to figure out how it's being mounted before the array starts.

     

    Link to comment
    8 hours ago, AgentXXL said:

    the ZFS pool is being mounted and the dataset/subfolders/files are fully accessible before the array is started

    With v6.11 that is normal since the pools are imported by the plugin, with v6.12 pools must be imported during array start.
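    One thing that can make a pool appear before Unraid imports it is a leftover ZFS cachefile from the plugin era, which can be used to re-import pools at boot. A hedged way to check (both properties and the default path are standard OpenZFS, not Unraid-specific):

```shell
# A '-' or 'none' here means nothing should auto-import the pool;
# a path (typically /etc/zfs/zpool.cache) means an init script could:
zpool get cachefile animzfs

# Check whether a stale cachefile exists at the default location:
ls -l /etc/zfs/zpool.cache

# Stop the pool from being recorded in the cachefile:
zpool set cachefile=none animzfs
```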

    • Thanks 1
    Link to comment
    18 hours ago, JorgeB said:

    With v6.11 that is normal since the pools are imported by the plugin, with v6.12 pools must be imported during array start.

     

    Thanks for that clarification. @Synd on the unRAID Discord server mentioned that it may be related to how the pool was created. We've noticed that the pool is imported using the sdX identifiers, whereas my pool was created under the old plugin using by-id identifiers.

     

    So far I haven't come across any reason why the pool won't import correctly, but I'm at the stage where I'm about ready to upgrade to 6.12 RC2, destroy the old pool, create a new pool, and then restore the data from my backups. I'll hold off for a while in case you or others can comment on the differences between using sdX vs by-id identifiers when creating the pool. It would save a lot of time if I could get it to import with my data accessible.
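    For reference, the by-id vs sdX distinction mainly affects how the vdevs are displayed and recorded; ZFS identifies member disks by their on-disk labels. A manual import can use the stable names by scanning the by-id directory instead of passing individual sdX partitions (a sketch, not what Unraid itself does):

```shell
# Scan /dev/disk/by-id so the config records stable identifiers:
zpool import -d /dev/disk/by-id animzfs

# The vdevs should now be listed by their by-id names:
zpool status animzfs
```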

     

    Edited by AgentXXL
    • Like 1
    Link to comment

    If you can, try the manual mount above again with the last step:

    zfs mount animzfs/Media

    Before this, confirm the dataset is not mounted.

    Link to comment
    13 hours ago, JorgeB said:

    If you can, try the manual mount above again with the last step:

    zfs mount animzfs/Media

    Before this, confirm the dataset is not mounted.

     

    Ok, gave it another go and this seems to work:

     

    root@AnimNAS:~# zfs list
    no datasets available
    root@AnimNAS:~# cd /mnt
    root@AnimNAS:/mnt# ls
    addons/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# mkdir -p /mnt/animzfs
    root@AnimNAS:/mnt# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/mnt# zfs list
    NAME            USED  AVAIL     REFER  MOUNTPOINT
    animzfs        33.8T  9.67T      204K  /mnt/animzfs
    animzfs/Media  33.8T  9.67T     33.8T  /mnt/animzfs/Media
    root@AnimNAS:/mnt# zfs mount animzfs/Media
    root@AnimNAS:/mnt# ls
    addons/  animzfs/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls
    TV/

     

    The TV folder is now visible, and I can play media from it. Alas, when I then start the array, all 6 disks in the ZFS pool show as unmountable and the pool contents are no longer accessible.

     

    [screenshot attached: ZFSMountFail.jpg]

     

    After a reboot the pool is still created, but with no messages, just the filesystem set to auto. `zfs list` reports no datasets. When I start the unRAID array, the pool looks like it mounts OK, but again I now have an empty folder under animzfs/Media. Checking through the syslog, I get this:

     

    Apr  7 14:02:31 AnimNAS emhttpd: mounting /mnt/animzfs
    Apr  7 14:02:31 AnimNAS emhttpd: shcmd (174): mkdir -p /mnt/animzfs
    Apr  7 14:02:31 AnimNAS emhttpd: /sbin/btrfs filesystem show /dev/sdu1 2>&1
    Apr  7 14:02:31 AnimNAS emhttpd: ERROR: no btrfs on /dev/sdu1
    Apr  7 14:02:31 AnimNAS emhttpd: shcmd (175): modprobe zfs
    Apr  7 14:02:31 AnimNAS emhttpd: /usr/sbin/zpool import -d /dev/sdu1 2>&1
    Apr  7 14:02:32 AnimNAS emhttpd:    pool: animzfs
    Apr  7 14:02:32 AnimNAS emhttpd:      id: 2464160279060078275
    Apr  7 14:02:32 AnimNAS emhttpd: shcmd (176): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdu1 -d /dev/sdv1 -d /dev/sdw1 -d /dev/sdx1 -d /dev/sdy1 -d /dev/sdz1 2464160279060078275 animzfs
    Apr  7 14:02:43 AnimNAS emhttpd: /usr/sbin/zpool status -LP animzfs 2>&1
    Apr  7 14:02:43 AnimNAS emhttpd:   pool: animzfs
    Apr  7 14:02:43 AnimNAS emhttpd:  state: ONLINE
    Apr  7 14:02:43 AnimNAS emhttpd:   scan: scrub repaired 0B in 1 days 13:34:19 with 0 errors on Wed Apr  5 09:29:43 2023
    Apr  7 14:02:43 AnimNAS emhttpd: config:
    Apr  7 14:02:43 AnimNAS emhttpd:  NAME           STATE     READ WRITE CKSUM
    Apr  7 14:02:43 AnimNAS emhttpd:  animzfs        ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:    raidz1-0     ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdu1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdv1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdw1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdx1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdy1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdz1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd: errors: No known data errors
    Apr  7 14:02:43 AnimNAS emhttpd: shcmd (177): /usr/sbin/zfs set mountpoint=/mnt/animzfs animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (178): /usr/sbin/zfs mount -o noatime animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (179): /usr/sbin/zpool set autotrim=off animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (180): /usr/sbin/zfs set compression=off animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root profile: raidz1
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root groups: 1
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root width: 6
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root ok: 6
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root new: 0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root wrong:0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root missing: 0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root missing already: 0
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (181): /usr/sbin/zfs mount animzfs/Media
    Apr  7 14:02:44 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (181): exit status: 1

     

    Any more suggestions? I assume the BTRFS items on lines 3-4 are part of unRAID determining the filesystem, with the modprobe zfs identifying it as a ZFS pool. And then on the 2nd to last line above, it tries to mount the Media dataset and errors out stating the filesystem is already mounted.

     

    I'll likely have to rollback again as it's Friday and some of my Plex users will scream foul if they can't access my server, but I'll try and give you some time to respond before I do the rollback. Thanks again for the assistance!

     

    Edited by AgentXXL
    grammar
    Link to comment

    @JorgeB I just remembered that I had created a user script to set zfs_arc_max when using the plugin under 6.10.x/6.11.x. I deleted the user script and tried another reboot. There are no datasets or pools mounted before I start the array. Once I start the array, the animzfs pool is mounted, but the Media dataset is empty (even though drive space usage appears correct). Here's the relevant section from the syslog:

     

    Apr  7 15:10:32 AnimNAS emhttpd: mounting /mnt/animzfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (282): mkdir -p /mnt/animzfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (283): modprobe zfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (284): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdu1 -d /dev/sdv1 -d /dev/sdw1 -d /dev/sdx1 -d /dev/sdy1 -d /dev/sdz1 2464160279060078275 animzfs
    Apr  7 15:10:42 AnimNAS emhttpd: shcmd (285): /usr/sbin/zfs set mountpoint=/mnt/animzfs animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (286): /usr/sbin/zfs mount -o noatime animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (287): /usr/sbin/zpool set autotrim=off animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (288): /usr/sbin/zfs set compression=off animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root profile: raidz1
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root groups: 1
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root width: 6
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root ok: 6
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root new: 0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root wrong:0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root missing: 0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root missing already: 0
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (289): /usr/sbin/zfs mount animzfs/Media
    Apr  7 15:10:43 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (289): exit status: 1

     

    When I got it to work manually, I only did 3 steps: mkdir, zpool import and mount animzfs/Media. The syslog shows `zfs set mountpoint...` and `zfs mount -o noatime animzfs`, followed by the trim and compression settings as shown above. It looks like it's this first mount command (6th line down in the paste above) which does the `zfs mount -o noatime animzfs`.

     

    This is obviously not the same as the `zfs mount animzfs/Media` command that worked when tried manually, so it's something in the way unRAID handles the mountpoint. The second `zfs mount` command (3rd line from the bottom) is the one that worked manually, but it fails now because of the previous mount command (6th line from the top).
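    When the two mount paths disagree like this, it can help to compare what ZFS believes is mounted with what the kernel actually has mounted. Both commands below are standard; if the `mounted` property says yes for animzfs/Media while findmnt shows nothing at that path, ZFS's state is stale, which would explain "already mounted" over an empty directory:

```shell
# ZFS's view: the per-dataset 'mounted' property
zfs get -r mounted,mountpoint animzfs

# The kernel's view: everything actually mounted under /mnt/animzfs
findmnt -R /mnt/animzfs
```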

     

    Thoughts or suggestions? Thanks again!

    Link to comment
    10 hours ago, AgentXXL said:
    root@AnimNAS:/mnt# mkdir -p /mnt/animzfs
    root@AnimNAS:/mnt# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/mnt# zfs list
    NAME            USED  AVAIL     REFER  MOUNTPOINT
    animzfs        33.8T  9.67T      204K  /mnt/animzfs
    animzfs/Media  33.8T  9.67T     33.8T  /mnt/animzfs/Media
    root@AnimNAS:/mnt# zfs mount animzfs/Media

     

    You skipped a command here. Unraid initially mounts the root pool only, then any datasets; that appears to be the problem with this pool, so please try this:

     

    On 4/4/2023 at 8:09 PM, AgentXXL said:
    root@AnimNAS:/# mkdir -p /mnt/animzfs
    root@AnimNAS:/# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/# zfs set mountpoint=/mnt/animzfs animzfs
    root@AnimNAS:/# zfs mount -o noatime animzfs

    Only after that try to mount the dataset:

    zfs mount animzfs/Media

     

    I expect the same result as when Unraid does it, but just to confirm.

    Link to comment
    On 4/8/2023 at 1:05 AM, JorgeB said:

    Only after that try to mount the dataset:

    zfs mount animzfs/Media

     

    I expect the same result as when Unraid does it, but just to confirm.

     

    As expected, the same result:

     

    root@AnimNAS:~# zfs list
    no datasets available
    root@AnimNAS:~# zpool status
    no pools available
    root@AnimNAS:~# mkdir -p /mnt/animzfs
    root@AnimNAS:~# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:~# zfs set mountpoint=/mnt/animzfs animzfs
    root@AnimNAS:~# zfs mount -o noatime animzfs
    root@AnimNAS:~# zfs mount animzfs/Media
    cannot mount 'animzfs/Media': filesystem already mounted
    root@AnimNAS:~# cd /mnt/animzfs/Media
    root@AnimNAS:/mnt/animzfs/Media# ls -l
    total 0

     

    As mentioned in the RC2 release thread, I'd really like to see the output of `zfs get all` from someone who was able to successfully import a pool that was created using the old plugin.

     

    One more thing to bring up: I already have a share called `Media` that uses both the array and a cache pool. Since I'm using `Media` as the dataset name, is it possible that the pre-existing share name is causing the conflict? The reason I used the same name is that I want media residing on the ZFS pool to be grouped along with the media on the array/cache pool. So for example, the array and cache pool have this structure:

     

    /mnt/user/Media/Movies

    /mnt/user/Media/TV

     

    And the ZFS pool is this:

     

    /mnt/animzfs/Media/TV

     

    Any other thoughts?

     

     

    Link to comment
    10 minutes ago, AgentXXL said:

    One more thing to bring up: I already have a share called `Media` that uses both the array and a cache pool. Since I'm using `Media` as the dataset name, is it possible that the pre-existing share name is causing the conflict?

    This should not be a problem.

     

    Do you remember how the dataset was created? The zfs plugin was mainly there to install zfs; pools and datasets were created manually. Or did you use another plugin?

     

    You can also post `zfs get all` for that dataset, in case there's something obviously different from usual.

    Link to comment

    Another thing you can try is renaming the dataset to see if it makes any difference. Import the pool manually, then

    zfs rename animzfs/Media animzfs/New_name

    Then export the pool and let Unraid try to import it again; this will rule out any name-related issues.
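    Put together, with the device paths from the earlier logs, the rename test would look something like this:

```shell
# Import without mounting, rename the dataset, then export again:
zpool import -N -d /dev/sdu1 -d /dev/sdv1 -d /dev/sdw1 -d /dev/sdx1 -d /dev/sdy1 -d /dev/sdz1 animzfs
zfs rename animzfs/Media animzfs/New_name
zpool export animzfs
# Then start the array and let Unraid import the pool itself.
```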

    Link to comment
    31 minutes ago, JorgeB said:

    This should not be a problem.

     

    Do you remember how the dataset was created? The zfs plugin was mainly there to install zfs; pools and datasets were created manually. Or did you use another plugin?

     

    You can also post `zfs get all` for that dataset, in case there's something obviously different from usual.

     

    I followed the SpaceinvaderOne video 'Setting up a Native ZFS Pool on Unraid'. I added the commands I used to my unRAID journal, but since then the pool has been moved to /mnt/disks/animzfs, and now, since trying the upgrade to 6.12, it's at /mnt/animzfs.

     

    zpool create -m /mnt/zfs AnimZFS01 raidz1 ata-ST10000DM0004-2GR11L_ZJV037AG ata-ST10000DM0004-2GR11L_ZJV610GT ata-ST10000DM0004-2GR11L_ZJV67B9L ata-ST10000DM0004-2GR11L_ZJV6BW2S ata-ST10000DM0004-2GR11L_ZJV6BYCF ata-ST10000DM0004-2GR11L_ZJV6BZRG
    
    zfs create AnimZFS01/Media

     

    20 minutes ago, JorgeB said:

    Another thing you can try is renaming the dataset to see if it makes any difference. Import the pool manually, then

    zfs rename animzfs/Media animzfs/New_name

    Then export the pool and let Unraid try to import it again; this will rule out any name-related issues.

     

    It won't let me rename the dataset:

     

    root@AnimNAS:/# zfs rename animzfs/Media animzfs/MyMedia
    cannot unmount '/mnt/animzfs/Media': unmount failed

     

    animzfs.txt

    Edited by AgentXXL
    Attached txt file with output from `zfs get all`.
    Link to comment

    OK, this is interesting. I tried using umount manually. Here's the result:

     

    root@AnimNAS:/# umount -R /mnt/animzfs
    root@AnimNAS:/# cd /mnt
    root@AnimNAS:/mnt# ls
    addons/  animzfs/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls
    TV/

     

    So after manually issuing the command I got no error messages, but it didn't unmount; instead it now shows the TV folder, and the contents are fully accessible. I tried using the umount command again, and it now says this:

     

    root@AnimNAS:/# umount -vR /mnt/animzfs/Media
    umount: /mnt/animzfs/Media (animzfs/Media) unmounted


    And now when I check, /mnt/animzfs/Media still exists, but the TV folder is again missing. Something's certainly fubar'ed with this zfs pool. I was thinking I should try removing any mountpoint with this:

     

    zfs set mountpoint=none animzfs

     

    Will that just remove the mountpoint without destroying the dataset? From what I've just read on the man page, it should just remove the mountpoint. I do have all that data backed up, but I'm still hoping I can get the pool to mount properly and not have to restore ~35TB of data.
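    To confirm: `mountpoint` is only a dataset property. Changing it never touches the data, and it can be set back at any time; note that setting it will itself try to unmount the dataset first, so it can fail with the same "unmount failed" error:

```shell
# Unmount the hierarchy and stop it from being mounted anywhere:
zfs set mountpoint=none animzfs

# Later, restore the original value and verify:
zfs set mountpoint=/mnt/animzfs animzfs
zfs get -r mountpoint animzfs
```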

     

     

     

    Edited by AgentXXL
    Read some man pages for zfs
    Link to comment
    On 4/12/2023 at 12:22 PM, JorgeB said:

    Another thing you can try is renaming the dataset to see if it makes any difference. Import the pool manually, then

    zfs rename animzfs/Media animzfs/New_name

    Then export the pool and let Unraid try to import it again; this will rule out any name-related issues.

     

    I took another stab at it and confirmed that renaming the dataset didn't fix it. I've even tried making a new dataset and moving the data to it, but with the way zfs works at the block level, a move between datasets results in a copy and then a delete. Not as quick as moving folder to folder on a non-zfs filesystem.

     

    I'm now at the stage where I no longer have enough space to backup that pool, so it's time to start from scratch. I'll be destroying the pool and then creating a new one. Then I'll create a dataset called Media and start copying everything back to the pool from the backup disks.

     

    Thanks for the assistance regardless. I never ran into issues with datasets back in my FreeNAS days but sh*t happens. I'll mark the issue as 'CLOSED', with the resolution being to destroy, recreate and restore from backups.

    Edited by AgentXXL
    Link to comment

    One last comment: I rebooted my server prior to destroying the pool/dataset just so it was 'clean'. Alas I couldn't destroy the dataset as the path wasn't found. So then I tried this:

     

    root@AnimNAS:~# umount -vR /mnt/animzfs
    umount: /mnt/animzfs (animzfs) unmounted

     

    And here's the kicker: issuing that umount command somehow remounts the pool and dataset and the full path and files/folders are now accessible. WTF?!?

     

    In any case, I'm not going to trust it as it stands, so now I'll delete the pool from unRAID and then clear the partitions on all the disks using UD. Then I'll recreate the pool and dataset, and then start the lengthy restore process.

     

    Edited by AgentXXL
    Link to comment

    Just had the same problem: the zfs dataset was mounted but there was no access to the folders with the actual data in them.

     

    I ran this:

    umount -vR /mnt/

     

    Then everything was mounted. Going to reboot and test again.

     

    Very odd.

    Link to comment



