AgentXXL

Comments posted by AgentXXL

  1. One last comment: I rebooted my server prior to destroying the pool/dataset just so it was 'clean'. Alas I couldn't destroy the dataset as the path wasn't found. So then I tried this:

     

    root@AnimNAS:~# umount -vR /mnt/animzfs
    umount: /mnt/animzfs (animzfs) unmounted

     

    And here's the kicker: issuing that umount command somehow remounts the pool and dataset, and the full path and its files/folders are now accessible. WTF?!?

     

    In any case, I'm not going to trust it as it stands, so now I'll delete the pool from unRAID and then clear the partitions on all disks while they're attached under UD (Unassigned Devices). Then I'll recreate the pool and dataset, and then start the lengthy restore process.
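
    For my own reference, the command-line equivalent of what I'll be doing through the GUI and UD looks roughly like this - the device names are placeholders for my actual by-id names, so treat it as an outline rather than the exact commands:

    # with the pool removed from unRAID, destroy it and clear leftover ZFS labels from each member disk
    zpool destroy animzfs
    zpool labelclear -f /dev/disk/by-id/ata-EXAMPLE-DISK-part1
    wipefs -a /dev/disk/by-id/ata-EXAMPLE-DISK

    # then recreate the pool and the dataset
    zpool create -m /mnt/animzfs animzfs raidz1 ata-DISK1 ata-DISK2 ata-DISK3 ata-DISK4 ata-DISK5 ata-DISK6
    zfs create animzfs/Media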

     

  2. On 4/12/2023 at 12:22 PM, JorgeB said:

    Another thing you can try is renaming the dataset to see if it makes any difference. Import the pool manually, then:

    zfs rename animzfs/Media animzfs/New_name

    Then export the pool and let Unraid try to import it again, this will rule out any name related issues.

     

    I took another stab at it and confirmed that renaming the dataset didn't fix it. I've even tried making a new dataset and moving the data to it, but since each dataset is a separate filesystem, a move between them results in a copy and then a delete. Not as quick as moving folder to folder on a non-zfs filesystem.
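
    For the record, the new-dataset attempt looked roughly like this (Media2 is just an illustrative name, not what I actually called it):

    # create a second dataset in the same pool
    zfs create animzfs/Media2

    # each dataset is its own filesystem, so a 'move' is really a copy followed by a delete
    rsync -a --info=progress2 /mnt/animzfs/Media/ /mnt/animzfs/Media2/

    # only after verifying the copy would the original dataset be destroyed
    # zfs destroy -r animzfs/Media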

     

    I'm now at the stage where I no longer have enough space to backup that pool, so it's time to start from scratch. I'll be destroying the pool and then creating a new one. Then I'll create a dataset called Media and start copying everything back to the pool from the backup disks.

     

    Thanks for the assistance regardless. I never ran into issues with datasets back in my FreeNAS days but sh*t happens. I'll mark the issue as 'CLOSED', with the resolution being to destroy, recreate and restore from backups.

  3. OK, this is interesting. I tried using umount manually. Here's the result:

     

    root@AnimNAS:/# umount -R /mnt/animzfs
    root@AnimNAS:/# cd /mnt
    root@AnimNAS:/mnt# ls
    addons/  animzfs/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls
    TV/

     

    So after manually issuing the command, I got no error messages, but it didn't unmount - instead it now shows the TV folder. And the contents are fully accessible. I tried using the umount command again, and it now says this:

     

    root@AnimNAS:/# umount -vR /mnt/animzfs/Media
    umount: /mnt/animzfs/Media (animzfs/Media) unmounted


    And now when I check, /mnt/animzfs/Media still exists, but the TV folder is again missing. Something's certainly fubar'ed with this zfs pool. I was thinking I should try removing any mountpoint with this:

     

    zfs set mountpoint=none animzfs

     

    Will that just remove the mountpoint without destroying the dataset? From what I've just read on the man page, it should just remove the mountpoint. I do have all that data backed up, but I'm still hoping I can get the pool to properly mount and not have to restore ~35TB of data.
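
    To be clear about what I'm planning to try (and how I'd undo it), roughly this - based on my reading of the man page, setting mountpoint only changes where the dataset gets mounted and doesn't touch the data:

    # unset the mountpoint on the root dataset (nothing is destroyed, it's just no longer mounted anywhere)
    zfs set mountpoint=none animzfs

    # and to put it back later if needed
    zfs set mountpoint=/mnt/animzfs animzfs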

     

     

     

  4. 31 minutes ago, JorgeB said:

    This should not be a problem.

     

    Do you remember how the dataset was created? The zfs plugin was mainly there to install zfs; pools and datasets were created manually. Or did you use another plugin?

     

    You can also post 'zfs get all' for that dataset, in case there's something obviously different from usual.

     

    I followed the SpaceinvaderOne video Setting up a Native ZFS Pool on Unraid. I added the commands I used to my unRAID journal, but since then the pool has been moved to /mnt/disks/animzfs and now since trying the upgrade to 6.12, it's at /mnt/animzfs.

     

    zpool create -m /mnt/zfs AnimZFS01 raidz1 ata-ST10000DM0004-2GR11L_ZJV037AG ata-ST10000DM0004-2GR11L_ZJV610GT ata-ST10000DM0004-2GR11L_ZJV67B9L ata-ST10000DM0004-2GR11L_ZJV6BW2S ata-ST10000DM0004-2GR11L_ZJV6BYCF ata-ST10000DM0004-2GR11L_ZJV6BZRG
    
    zfs create AnimZFS01/Media
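
    One thing I notice from my journal is that the pool was created with -m /mnt/zfs but now lives at /mnt/animzfs, so the mountpoint property has been changed at least once since creation. A quick way to see which properties have been explicitly set rather than left at defaults (just a diagnostic idea on my part):

    # show only properties with a local (explicitly set) source
    zfs get -s local all animzfs animzfs/Media

    # and check where the current mountpoint value comes from
    zfs get -o name,property,value,source mountpoint animzfs animzfs/Media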

     

    20 minutes ago, JorgeB said:

    Another thing you can try is renaming the dataset to see if it makes any difference. Import the pool manually, then:

    zfs rename animzfs/Media animzfs/New_name

    Then export the pool and let Unraid try to import it again, this will rule out any name related issues.

     

    It won't let me rename the dataset:

     

    root@AnimNAS:/# zfs rename animzfs/Media animzfs/MyMedia
    cannot unmount '/mnt/animzfs/Media': unmount failed
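
    Since the rename only fails because the unmount fails, something may be holding that path open. A couple of standard checks I can run (assuming lsof and fuser are available, which I believe they are on stock unRAID):

    # list any processes with files open under the dataset's mountpoint
    lsof +D /mnt/animzfs/Media

    # or show which processes are using the mount
    fuser -vm /mnt/animzfs/Media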

     

    [Attachment: animzfs.txt]

  5. 1 hour ago, JorgeB said:

    Because Unraid first sets the mountpoint for the root dataset (in case the pool comes with a different mountpoint), then mounts the root dataset only, and only then mounts all the other datasets. Whatever the problem with your pool is, it's not just because it was created with the plugin, since other users don't have issues. And I would still like to see that output as asked, because if the dataset wasn't mounted before, and Unraid only mounts the root dataset first, it doesn't make sense that it then complains it was already mounted.

     

    Updated with the results in the bug report thread, and as expected, I got the same result. I would still like to compare my pool structure with someone who was successful, so I'd still appreciate a look at their zfs get all output.

     

  6. On 4/8/2023 at 1:05 AM, JorgeB said:

    Only after that try to mount the dataset:

    zfs mount animzfs/Media

     

    I expect the same result as when Unraid does it, but just to confirm.

     

    As expected, the same result:

     

    root@AnimNAS:~# zfs list
    no datasets available
    root@AnimNAS:~# zpool status
    no pools available
    root@AnimNAS:~# mkdir -p /mnt/animzfs
    root@AnimNAS:~# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:~# zfs set mountpoint=/mnt/animzfs animzfs
    root@AnimNAS:~# zfs mount -o noatime animzfs
    root@AnimNAS:~# zfs mount animzfs/Media
    cannot mount 'animzfs/Media': filesystem already mounted
    root@AnimNAS:~# cd /mnt/animzfs/Media
    root@AnimNAS:/mnt/animzfs/Media# ls -l
    total 0

     

    As mentioned in the RC2 release thread, I'd really like to see the output of zfs get all from someone who was able to successfully import a pool that was created using the old plugin.

     

    One more thing to bring up: I already have a share called `Media` that uses both the array and a cache pool. Since I'm using `Media` as the dataset name, is it possible that the pre-existing sharename is causing the conflict? The reason I used the same name is that I want media residing on the ZFS pool to be grouped along with the media on the array/cache pool. So for example, the array and cache pool have this structure:

     

    /mnt/user/Media/Movies

    /mnt/user/Media/TV

     

    And the ZFS pool is this:

     

    /mnt/animzfs/Media/TV

     

    Any other thoughts?
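
    If the pre-existing Media share were colliding with the dataset, I'd expect to see overlapping mounts; a rough way I could check (just my own idea):

    # list current mounts and look for anything Media-related
    grep -i media /proc/mounts

    # and confirm what ZFS itself thinks is mounted
    zfs mount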

     

     

  7. 11 minutes ago, JorgeB said:

    You didn't reply to my last post in the bug report thread.

     

    I haven't tried again because it's pretty apparent it will give me the same result. Plus I've had other more important issues to work on, like replacing the air conditioner for the server room.

     

    As both of us expect that manually mounting with the same commands that unRAID uses will fail, I'd like to find out what's set differently so that a single mount command works. That's why I want to compare the output of `zfs get all` with someone who's been able to successfully import a pool created with the old plugin. I tried importing it under Ubuntu a few months ago and it had no problems with seeing the data.

     

    Why does unRAID do two mount commands? When I mounted it under Ubuntu it was a single command, and when I tried the same thing under unRAID 6.12 with a single mount command, it would also see the TV folder and all sub-folders and files.

     

    Anyhow, I'm now ready to try it again under 6.12 RC2, but I'd first like to do that comparison to see if my pool is organized differently.
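
    For the comparison itself, my plan is simply to dump the properties to a file and diff it against whatever someone posts - the filenames here are only examples:

    # capture all properties for the pool and dataset
    zfs get all animzfs animzfs/Media > /boot/animzfs_props.txt

    # then compare against another user's output
    diff /boot/animzfs_props.txt /boot/other_user_props.txt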

  8. I have a ZFS pool (6 x 10TB RAIDZ1) that was created on 6.10.x using the old plugin. It has one dataset. Pool name animzfs, dataset Media. On 6.11.5, the pool mounts fine and the data is all accessible. I've tried upgrading to 6.12 RC2 three times now and my pool won't import properly.

     

    It creates the mountpoint and seems to mount the dataset, but no files/folders are visible when I browse to it at /mnt/animzfs/Media. There is an error during the mount that reports the pool/dataset is already mounted. What's odd is that the drive usage appears to be OK, but I can't see any of the folders or files.

     

    On 6.11.5 it's seen properly and the data is all visible - just a single 'TV' folder in the root of the dataset, with lots of subfolders and files. Permissions are listed as 99:100 for all folders/files when checked under 6.11.5. I've created a bug report and @JorgeB has thankfully been assisting me. The URL is here:

     

     

    I do have all the data on the pool backed up, so worst case is I do the upgrade to 6.12 RC2 and then destroy the old pool, re-create it and then restore my data from my backups. Alas that's about 35TB of data so it'll take a couple of days. I'd like to try and figure it out rather than destroy and re-create the pool, but I'm not making much headway.

     

    If someone using 6.12 RC2 has successfully imported a pool from the old plugin, could you please share the results of the command:

     

    zfs get all

     

    I'm trying to figure out if there's a way to re-organize the pool under 6.11.5 so that it will import properly under 6.12 RC2 or later. If anyone can supply me with the output of that command, hopefully it may reveal the issue that's preventing me from being able to see the data.

     

    Thanks in advance!

     

     

  9. @JorgeB I just remembered that I had created a user script to set zfs_arc_max when using the plugin under 6.10.x/6.11.x. I deleted the user script and tried another reboot. There are no datasets or pools mounted before I start the array. Once I start the array, the animzfs pool is mounted, but the Media dataset is empty (even though drive space usage appears correct). Here's the relevant section from the syslog:

     

    Apr  7 15:10:32 AnimNAS emhttpd: mounting /mnt/animzfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (282): mkdir -p /mnt/animzfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (283): modprobe zfs
    Apr  7 15:10:32 AnimNAS emhttpd: shcmd (284): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdu1 -d /dev/sdv1 -d /dev/sdw1 -d /dev/sdx1 -d /dev/sdy1 -d /dev/sdz1 2464160279060078275 animzfs
    Apr  7 15:10:42 AnimNAS emhttpd: shcmd (285): /usr/sbin/zfs set mountpoint=/mnt/animzfs animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (286): /usr/sbin/zfs mount -o noatime animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (287): /usr/sbin/zpool set autotrim=off animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (288): /usr/sbin/zfs set compression=off animzfs
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root profile: raidz1
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root groups: 1
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root width: 6
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root ok: 6
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root new: 0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root wrong:0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root missing: 0
    Apr  7 15:10:43 AnimNAS emhttpd: /mnt/animzfs root missing already: 0
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (289): /usr/sbin/zfs mount animzfs/Media
    Apr  7 15:10:43 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted
    Apr  7 15:10:43 AnimNAS emhttpd: shcmd (289): exit status: 1

     

    When I got it to work manually, I only did 3 steps: mkdir, zpool import and zfs mount animzfs/Media. The syslog shows `zfs set mountpoint...` and `zfs mount -o noatime animzfs`, followed by the trim and compression settings as shown above. So it's the 1st mount command (6th line down in the paste above) that does the `zfs mount -o noatime animzfs`.

    This is obviously not the same as the `zfs mount animzfs/Media` command that worked when tried manually. So it's something in the way unRAID is parsing the mountpoint. The 2nd `zfs mount...` command (3rd line from the bottom) is the one that worked when tried manually, but it fails now because of the previous mount command (6th line from the top).
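
    Next time it errors I'll try to capture exactly what is mounted at that moment - something like this should show whether it's really the Media dataset at that path or just the root dataset covering it (just my diagnostic plan):

    # list every ZFS filesystem currently mounted
    zfs mount

    # ask ZFS directly whether each dataset thinks it is mounted
    zfs get mounted animzfs animzfs/Media

    # and check what the kernel has mounted under the pool's path
    grep animzfs /proc/mounts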

     

    Thoughts or suggestions? Thanks again!

  10. 13 hours ago, JorgeB said:

    If you can try the manual mount above again with the last step:

    zfs mount animzfs/Media

    Before this, confirm the dataset is not mounted.

     

    Ok, gave it another go and this seems to work:

     

    root@AnimNAS:~# zfs list
    no datasets available
    root@AnimNAS:~# cd /mnt
    root@AnimNAS:/mnt# ls
    addons/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# mkdir -p /mnt/animzfs
    root@AnimNAS:/mnt# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/mnt# zfs list
    NAME            USED  AVAIL     REFER  MOUNTPOINT
    animzfs        33.8T  9.67T      204K  /mnt/animzfs
    animzfs/Media  33.8T  9.67T     33.8T  /mnt/animzfs/Media
    root@AnimNAS:/mnt# zfs mount animzfs/Media
    root@AnimNAS:/mnt# ls
    addons/  animzfs/  disks/  remotes/  rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls
    TV/

     

    The TV folder is now visible, and I can play media from it. Alas, when I then start the array, all 6 disks in the ZFS pool show as unmountable and the pool contents are no longer accessible.

     

    [Screenshot attached: ZFSMountFail.jpg]

     

    After a reboot the pool is still shown, but with no messages - just the filesystem set to auto. zfs list reports no datasets. When I start the unRAID array, the pool looks like it mounts OK, but again I now have an empty folder under animzfs/Media. Checking through syslog, I get this:

     

    Apr  7 14:02:31 AnimNAS emhttpd: mounting /mnt/animzfs
    Apr  7 14:02:31 AnimNAS emhttpd: shcmd (174): mkdir -p /mnt/animzfs
    Apr  7 14:02:31 AnimNAS emhttpd: /sbin/btrfs filesystem show /dev/sdu1 2>&1
    Apr  7 14:02:31 AnimNAS emhttpd: ERROR: no btrfs on /dev/sdu1
    Apr  7 14:02:31 AnimNAS emhttpd: shcmd (175): modprobe zfs
    Apr  7 14:02:31 AnimNAS emhttpd: /usr/sbin/zpool import -d /dev/sdu1 2>&1
    Apr  7 14:02:32 AnimNAS emhttpd:    pool: animzfs
    Apr  7 14:02:32 AnimNAS emhttpd:      id: 2464160279060078275
    Apr  7 14:02:32 AnimNAS emhttpd: shcmd (176): /usr/sbin/zpool import -N -o autoexpand=on  -d /dev/sdu1 -d /dev/sdv1 -d /dev/sdw1 -d /dev/sdx1 -d /dev/sdy1 -d /dev/sdz1 2464160279060078275 animzfs
    Apr  7 14:02:43 AnimNAS emhttpd: /usr/sbin/zpool status -LP animzfs 2>&1
    Apr  7 14:02:43 AnimNAS emhttpd:   pool: animzfs
    Apr  7 14:02:43 AnimNAS emhttpd:  state: ONLINE
    Apr  7 14:02:43 AnimNAS emhttpd:   scan: scrub repaired 0B in 1 days 13:34:19 with 0 errors on Wed Apr  5 09:29:43 2023
    Apr  7 14:02:43 AnimNAS emhttpd: config:
    Apr  7 14:02:43 AnimNAS emhttpd:  NAME           STATE     READ WRITE CKSUM
    Apr  7 14:02:43 AnimNAS emhttpd:  animzfs        ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:    raidz1-0     ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdu1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdv1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdw1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdx1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdy1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd:      /dev/sdz1  ONLINE       0     0     0
    Apr  7 14:02:43 AnimNAS emhttpd: errors: No known data errors
    Apr  7 14:02:43 AnimNAS emhttpd: shcmd (177): /usr/sbin/zfs set mountpoint=/mnt/animzfs animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (178): /usr/sbin/zfs mount -o noatime animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (179): /usr/sbin/zpool set autotrim=off animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (180): /usr/sbin/zfs set compression=off animzfs
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root profile: raidz1
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root groups: 1
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root width: 6
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root ok: 6
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root new: 0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root wrong:0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root missing: 0
    Apr  7 14:02:44 AnimNAS emhttpd: /mnt/animzfs root missing already: 0
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (181): /usr/sbin/zfs mount animzfs/Media
    Apr  7 14:02:44 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted
    Apr  7 14:02:44 AnimNAS emhttpd: shcmd (181): exit status: 1

     

    Any more suggestions? I assume the BTRFS items on lines 3-4 are part of unRAID determining the filesystem, with the modprobe zfs identifying it as a ZFS pool. And then on the 2nd to last line above, it tries to mount the Media dataset and errors out stating the filesystem is already mounted.
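
    If it helps narrow down what unRAID's probe is seeing, I can also check the on-disk signatures directly (device names taken from the log above; blkid/lsblk are standard tools, so I assume they're available):

    # show the filesystem signature on one of the member partitions
    blkid /dev/sdu1

    # or an overview of all disks and detected filesystems
    lsblk -f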

     

    I'll likely have to roll back again as it's Friday and some of my Plex users will scream foul if they can't access my server, but I'll try to give you some time to respond before I do the rollback. Thanks again for the assistance!

     

  11. 18 hours ago, JorgeB said:

    With v6.11 that is normal since the pools are imported by the plugin, with v6.12 pools must be imported during array start.

     

    Thanks for that clarification. @Synd on the unRAID Discord server mentioned that it may be related to how the pool was created. We've noticed that the pool is imported using sdX device names, whereas my pool was created under the old plugin using by-id names.

     

    So far I haven't come across any reason why the pool won't import correctly, but I'm at the stage where I'm about ready to upgrade to 6.12 RC2, destroy the old pool, create a new pool and then restore the data from my backups. I'll hold off for a while just in case you or others can comment on the differences when using sdx vs by-id for creation of the pool. It would save a lot of time if I could get it to import and my data was accessible.
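
    If the sdX vs by-id theory holds any water, one non-destructive way to test it might be to re-import the pool using the by-id search path - just my guess at a test, not something anyone has confirmed will help, and I'd only try it with the array stopped:

    # export the pool (assumes nothing is using it), then re-import searching /dev/disk/by-id
    zpool export animzfs
    zpool import -d /dev/disk/by-id animzfs

    # confirm which device paths the pool now records
    zpool status -P animzfs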

     

  12. 12 hours ago, JorgeB said:

    That is behaving as expected for now. Now, after all the other commands, try mounting that dataset:

     

    zfs mount animzfs/Media

     

     

    As mentioned I rolled back to 6.11.5 as some of my 'snowbirding' relatives were bugging me about access to their shows. I'll give it another try tomorrow. In the meantime, is there anything I should look at on the 6.11.5 version that might help pinpoint the cause?

     

    As mentioned also, the ZFS pool is being mounted and the dataset/subfolders/files are fully accessible before the array is started. I was under the impression that it didn't get mounted until the array started, so if that's what's supposed to happen, I need to figure out how it's being mounted before the array starts.

     

  13. One more note: I did a rollback to 6.11.5 and it's interesting to note that under it, even with the array stopped, the ZFS pool is mounted and the Media dataset contains the TV folder and everything is accessible. Something is obviously messed up with the way the pool is mounted, as I didn't think it would mount until I start the array.
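
    A couple of things I plan to check to see what is importing/mounting the pool before array start on 6.11.5 (my own guesses - the old plugin auto-imported pools, possibly via a leftover cachefile):

    # was the pool imported from a cachefile the plugin left behind?
    zpool get cachefile animzfs

    # which datasets are mounted, and are they allowed to auto-mount?
    zfs get mounted,canmount animzfs animzfs/Media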

  14. 32 minutes ago, JorgeB said:

    The strange thing here is why animzfs/Media was already mounted - I've never seen this before, a dataset being mounted before Unraid tries to mount it. Check if the mount point exists after a reboot (before array start); assuming it doesn't, try the commands Unraid uses one by one (adjust sdX devices if needed) and check if the dataset becomes mounted after the first mount command:

     

    OK, after a reboot with the array still stopped, there is no animzfs folder under /mnt. I tried the commands you listed and now get this:

     

    root@AnimNAS:/# mkdir -p /mnt/animzfs
    root@AnimNAS:/# zpool import -N -o autoexpand=on  -d /dev/sdy1 -d /dev/sdw1 -d /dev/sdz1 -d /dev/sdu1 -d /dev/sdx1 -d /dev/sdv1 2464160279060078275 animzfs
    root@AnimNAS:/# zfs set mountpoint=/mnt/animzfs animzfs
    root@AnimNAS:/# zfs mount -o noatime animzfs
    root@AnimNAS:/# cd /mnt
    root@AnimNAS:/mnt# ls -alh
    total 39K
    drwxr-xr-x  7 root   root  140 Apr  4 13:01 ./
    drwxr-xr-x 19 root   root  420 Mar 29 09:05 ../
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 addons/
    drwxrwxrwx  2 nobody users   2 Apr  4 12:08 animzfs/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 disks/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 remotes/
    drwxrwxrwt  2 nobody users  40 Apr  4 12:52 rootshare/
    root@AnimNAS:/mnt# cd animzfs
    root@AnimNAS:/mnt/animzfs# ls -alh
    total 39K
    drwxrwxrwx 2 nobody users   2 Apr  4 12:08 ./
    drwxr-xr-x 7 root   root  140 Apr  4 13:01 ../
    root@AnimNAS:/mnt/animzfs# cd Media
    bash: cd: Media: No such file or directory

     

    So now even the Media dataset isn't appearing. I did make a full backup of the data on the pool, so worst case I can always destroy the pool and then re-create it under 6.12, and then restore the ~36TB of data. But I really would prefer to figure out what's happening. Any more ideas? Thanks again!

     

     

  15. 11 hours ago, JorgeB said:

    This appears to be the problem:

     

    Apr  3 19:18:26 AnimNAS root: cannot mount 'animzfs/Media': filesystem already mounted

     

    Try typing

    zfs unmount animzfs/Media

    To see if the array can then stop.

     

    Tried that yesterday, and it didn't work.

     

    root@AnimNAS:/# zfs unmount animzfs/Media
    cannot unmount '/mnt/animzfs/Media': unmount failed

     

    I ended up issuing a reboot command, which of course led to an unclean shutdown. As no data in the array was touched, I rebooted again to clear the unclean status and prevent a parity check, but after I get the zfs pool import figured out, I'll manually restart the scrub of the zfs pool and then do a parity check.

     

    Any other thoughts? I've been researching it and wondering if it might be a permissions issue:

     

    root@AnimNAS:~# cd /mnt/animzfs
    root@AnimNAS:/mnt/animzfs# ls
    Media/
    root@AnimNAS:/mnt/animzfs# ls -alh
    total 40K
    drwxrwxrwx  3 nobody users   3 Apr  4 04:40 ./
    drwxr-xr-x 40 root   root  800 Apr  4 00:01 ../
    drwxr-xr-x  2 root   root    2 Mar 28  2022 Media/
    root@AnimNAS:/mnt/animzfs# cd Media
    root@AnimNAS:/mnt/animzfs/Media# ls -alh
    total 40K
    drwxr-xr-x 2 root   root  2 Mar 28  2022 ./
    drwxrwxrwx 3 nobody users 3 Apr  4 04:40 ../

     

    Thanks!

     

    As @Synd reported, I started seeing very slow writes with parity recently. I usually saw them in the 80 - 110MB/s range, but with recent shuffling of data drives and upgrading parity to 20TB drives, it's now in the 30 - 50MB/s range. It was occasionally dropping to less than 10MB/s - I suspect this happened when data was being written to the array.

     

    As I was trying to figure it out, I tried moving the parity drives to the motherboard SATA. I also tried moving my ZFS pool to motherboard SATA instead of using the HBA. Alas with all the changes, my USB boot drive started throwing read errors, and I managed to lose the diagnostics I was grabbing throughout my changes. I had copied them to the USB and forgot to copy them off before I tried formatting the USB key to see if it was OK. It did re-format with no errors (full format, not quick).

     

    Yesterday (Dec 20th, 2022) I decided to pro-actively replace the USB key. I changed it to another of the Eluteng USB 3 to mSATA SSD adapters like I've been using on my 2nd unRAID system for the last 5-6 months. I decided to do a 'clean' rebuild of my main unRAID so I didn't restore my backup immediately. Instead I manually re-installed all needed plugins and when required, I copied the config/support files for each plugin/container from the backup.

     

    Doing this returned my parity build speed (on the new 20TB drives) to ~100MB/s when using single parity, and ~70MB/s when doing dual parity. Also of note: the 2nd unRAID system got upgraded with new 16TB parity and data drives, but its parity build was in the normal 100MB/s range.

     

    The main unRAID is using a LSI 9305-24i to connect to a Supermicro CSE-847 36 bay enclosure that I've converted to use as a DAS shelf. It's been using this new HBA for about 5 months. The DAS conversion of the CSE-847 has been in use for over 2 years using two HBAs, one internal and one external, with those both being replaced by the single 9305-24i. The 2nd unRAID is using a LSI 9201-16i in a Supermicro CSE-846 24 bay enclosure. Specs of both systems are in my signature.

     

    One thing I did notice on the main unRAID is that the parity build seems to be single-threaded and is maxing out that single thread at 100%. Multi-threading would likely make little difference as only one thread at a time can be reading the sectors from all drives. I did not notice this behavior on the 2nd unRAID system.

     

    I have currently stopped/cancelled the single disk parity build as I realized I had some more data to move between servers and the writes are much faster when no parity calculation is involved. Once this data move is complete I will re-add a single 20TB parity and let it build. If any additional info is required, let me know.

     

    EDIT: I also have my disk config set to Reconstruct Write.

    I'm also using Firefox and occasionally seeing the Resend requestor in the browser when starting the array on both of my servers. Both are updated to 6.10.3 stable. The array starts fine, but when the page reloads at the end of the startup procedure, I get the requestor in the attached pic. If I click Resend, it tries to start the array again - sometimes it locks up the webgui, sometimes it continues normally. If I click Cancel, it has always started fine. I've cleared browser cache and cookies and tried a private tab in Firefox, but it doesn't always happen so it's been hard to validate. Any thoughts on other ways to diagnose the cause and apply a fix? I've been reminded to take a look at the browser console next time it occurs.

    [Screenshot attached: ArrayStartRequestor.jpg]

  18. 18 minutes ago, MiguelBazil said:

    [Screenshot attached: image.png]

    Is the clear from this menu enough? Because I've nuked it, and it persists. Not sure what could be causing it still. If you know of a different way, I'd like to try it.

     

    That will clear everything for EVERY site you've visited. Under Firefox Settings, I go to Privacy & Security, then under Cookies and Site Data, choose Manage Data. There you can search for 'unraid.net' and your server's IP address and clear the cache and cookies for just those URLs/addresses.

     

    [Screenshot attached: CacheCookiesunRAID.jpg]

    When you remove cache/cookies, make sure you do it for any unraid URLs, including the IP address(es) of your server(s). I found that some settings were cached under the IP address of my server, with others under unraid.net. Clearing both reset any oddities I was seeing, but that was done before I upgraded to 6.10 stable.

     

    And like @bonienl, my Firefox v100.0.2 is working fine with no page irregularities.

     

    FYI - I have been seeing this issue (the disappearing user shares) since upgrading to 6.10 RC4. I am using NFS for my shares as the majority of my systems are Linux or Mac. Macs in particular have speed issues with SMB shares, but NFS works great.

     

    The gotcha is that I don't use tdarr... in fact I don't use any of the *arr apps. I've grabbed diagnostics just now as it just happened again. I will send them via PM if anyone else wants to look at them, but I prefer not to post them here. Although I use the anonymize option, going through the diagnostics reveals they still contain information that I consider private.

     

    I'll be taking my own stab at the diagnostics shortly, but I've disabled hard links as suggested and will see if that helps.

     

  21. 15 minutes ago, Squid said:

    So long as the plugin was kept up to date.  Older versions that didn't have the max entity listed will still install and probably cause problems.  Diagnostics however will show that an incompatible plugin has been installed (pluginList.txt)

     

    I don't see anything called 'max entity', but I also recently removed all my Wireguard configs to troubleshoot a remote connection issue. So to be safe, we should uninstall the Dynamix Wireguard plugin before upgrading? And be sure to make another Flash backup after removal.

  22. 40 minutes ago, Squid said:

    Try installing the mover tuning plugin.  I just added an option to it today to set the Priority of the mover process.  See if setting it to low/very low helps things out.

     

    If it doesn't, then there's another option I can add to it to further tune it.

    I have the Mover tuning plugin already, but I'll go update it and let you know what happens. Thanks again! Hope my donation made it to you!

  23. Sorry to hijack the thread, but just one more question: when using the Krusader docker to move data to the array, I've been copying from mountpoints I added to the Krusader config for my UD attached devices. For example, I created a new path in the Krusader config for my UD attached drive called MoviesA. I added the container path as /MoviesA and the host path as /mnt/disks/MoviesA.

     

    But when copying from /MoviesA (in the left panel of Krusader), I've used /media/General/MoviesA/ as the path in the right panel, where General is the name of my main array share. What is the difference between using the /media mountpoints and the /mnt/user0 mountpoints? I haven't found an explanation for the /media mountpoints in my searches.

     

    Thanks!