  • Unraid OS Version 7.0.0-beta.2 available


    ljm42

    Thanks for your feedback! Unraid 7.0.0-beta.2 is now available; details are in the release notes. Everyone running -beta.1 is encouraged to upgrade.
     

    This is BETA software. Please use on test servers only.

     

    This announce post is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic in the Prereleases board. Be sure to include your diagnostics.zip.

     

    Upgrade steps for this release

    1. Read the release notes.
    2. As always, prior to upgrading, create a backup of your USB flash device: go to "Main/Flash/Flash Device Settings" and click "Flash Backup".
    3. Update all of your plugins. This is critical for the Connect, NVIDIA and Realtek plugins in particular.
    4. If the system is currently running 6.12.0 - 6.12.6, we recommend stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type (a verification sketch follows this list):
      umount /var/lib/docker

      The array should now stop successfully.

    5. If you have a recent release or Unraid Connect installed:
      • Open the dropdown in the top-right of the Unraid webgui and click Check for Update, then press More options and switch to the Next branch. You may be prompted to sign in to access the Next branch. Select the appropriate version and choose "View changelog to Start Update". More details are in this blog post.
    6. If you don't have the Check for Update option in the upper right corner:
      • Either install the Unraid Connect plugin or upgrade to 6.12.10 first. Then check for updates as described above.
    7. Wait for the update to download and install.
    8. If you have any plugins that install 3rd-party drivers (NVIDIA, Realtek, etc.), wait for the notification that the new version of the driver has been downloaded.
    9. Reboot
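
    A minimal sketch for step 4, assuming the Docker image is mounted at the default /var/lib/docker location; run it from a web terminal to confirm the loop device is actually unmounted before stopping the array:

        # if this prints a mount line, the docker loop image is still mounted
        mount | grep /var/lib/docker

        # unmount it manually, then check again (no output means it is gone)
        umount /var/lib/docker
        mount | grep /var/lib/docker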

     





    User Feedback

    Recommended Comments



    12 minutes ago, trurl said:

    The user you quoted determined it was a disk needing filesystem repair.

     

    Thank you so much, I missed his later post while perusing the comments. :)

    Link to comment
    1 hour ago, pkoci said:

    It seems that OpenZFS now supports Linux kernels up to 6.9. Can we expect a new beta/rc?

    The problem is that kernel 6.9 is now also EOL, and 6.10 is still not officially supported by OpenZFS. I think LT is considering downgrading to kernel 6.6 LTS; otherwise they may be constantly limited by OpenZFS and the kernels it supports.

    Link to comment

    What ZFS version does Unraid 7.0.0-beta2 have?

    I have a task of moving data between datasets, and apparently ZFS 2.2 has reflink support, which enables cross-dataset block cloning on the same pool unless encryption is involved (https://github.com/openzfs/zfs/discussions/15447). In Unraid, --reflink works like this:

    When --reflink[=always] is specified, perform a lightweight copy, where the
    data blocks are copied only when modified.  If this is not possible the copy
    fails, or if --reflink=auto is specified, fall back to a standard copy.
    Use --reflink=never to ensure a standard copy is performed.

    The question is: does the current OpenZFS implementation support cross-dataset block cloning? It's not a frequent or even regular task; still, moving TBs of data is excruciatingly slow.
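
    A rough way to test this directly, assuming OpenZFS 2.2 or newer and a hypothetical pool named "tank" with two datasets; the bclone pool counters should only increase if the copy was actually block-cloned:

        # confirm the loaded OpenZFS version first
        zfs version

        # attempt a clone-aware copy between two datasets on the same pool
        cp --reflink=always /mnt/tank/dataset1/bigfile /mnt/tank/dataset2/bigfile

        # if the copy was block-cloned, these pool-wide counters increase
        zpool get bcloneused,bclonesaved,bcloneratio tank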

    Link to comment
    5 minutes ago, ChatNoir said:

    The release notes state: zfs: version 2.2.4

    Okay, then does --reflink=auto actually work on beta2? For me it didn't; it behaved like a regular "cp", with a long file transfer.

    Link to comment
    1 hour ago, YujiTFD said:

    have a task of moving data between datasets, and apparently ZFS 2.2 has reflink support

    It has, but it has been disabled by default by OpenZFS because there were some issues with it initially. I think it should be safe to use now, and I use it myself. You can override the default by adding:

    options zfs zfs_bclone_enabled=1

    to /boot/config/modprobe.d/zfs.conf, then reboot
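
    A minimal sketch of that change, assuming the stock Unraid flash layout; the sysfs check only reflects the new value after the module is reloaded (i.e. after the reboot):

        # append the override to the flash-backed modprobe config (created if missing)
        mkdir -p /boot/config/modprobe.d
        echo "options zfs zfs_bclone_enabled=1" >> /boot/config/modprobe.d/zfs.conf

        # after rebooting, confirm the parameter took effect (should print 1)
        cat /sys/module/zfs/parameters/zfs_bclone_enabled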

    Link to comment
    On 8/9/2024 at 12:26 PM, JorgeB said:

    The problem is that kernel 6.9 is now also EOL, and 6.10 is still not officially supported by OpenZFS. I think LT is considering downgrading to kernel 6.6 LTS; otherwise they may be constantly limited by OpenZFS and the kernels it supports.

    Will this impact Intel Arc support?

    Link to comment
    On 8/9/2024 at 1:26 PM, JorgeB said:

    The problem is that kernel 6.9 is now also EOL, and 6.10 is still not officially supported by OpenZFS. I think LT is considering downgrading to kernel 6.6 LTS; otherwise they may be constantly limited by OpenZFS and the kernels it supports.

     

    TBH I keep waiting for a 6.9 Unraid kernel just for the FUSE passthrough.

    Link to comment
    37 minutes ago, mikeyosm said:

    Will this impact Intel Arc support?

    Nope, that's been added back in 6.2

    Link to comment
    3 hours ago, hot22shot said:

    TBH I keep waiting for a 6.9 Unraid kernel just for the FUSE passthrough.

    Me too. There may be another beta with kernel 6.9, but probably not an rc with an EOL kernel, at least I don't think so. Maybe OpenZFS gets official 6.10 kernel support soon and they can release the rc with that one.

    Link to comment

    I set up a ZFS encrypted pool with 3x 4TB disks. All drives had passed preclear earlier today (on 6.12.10). I must have started the array without entering a passphrase somehow. When I tried formatting I got a message saying the first device in the pool had failed, with SMART errors. The device does have some errors logged, but those are years old and believed to be caused by a faulty SATA cable at some point.

     

    I stopped the array, deleted the whole pool and set it up again. This time I entered a passphrase and was able to format the pool without any issues. I don't know what caused the incorrect message saying the device had failed. Maybe a combination of not having a passphrase set and old SMART errors.
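
    For what it's worth, a quick way to check whether those old errors look cable-related rather than a failing disk, using a hypothetical device name; CRC errors from a bad SATA cable normally show up as attribute 199 rather than reallocated or pending sectors:

        # full SMART report for the drive (replace sdb with the actual device)
        smartctl -a /dev/sdb

        # UDMA CRC errors (attribute 199) typically point at cabling, not the disk itself
        smartctl -A /dev/sdb | grep -i crc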

     

    Also, with the array being optional and more focus on pools, maybe rename "array operation"? Instead of "Stop will take the array off-line.", change it to "Stop will take all pools/array off-line"?

    Link to comment
    14 hours ago, Ademar said:

    I must have started the array without entering a passphrase somehow.

    This is possible with the current beta if you don't select the filesystem before array start and then change it to an encrypted filesystem; it has been fixed for the next release.

     

    14 hours ago, Ademar said:

    Also, with the array being optional and more focus on pools, maybe rename "array operation"? Instead of "Stop will take the array off-line.", change it to "Stop will take all pools/array off-line"?

    That's a good suggestion.

    Link to comment

    I recently upgraded my server hardware and decided to test out a fresh install of 7.0.0-beta2. The transition has been mostly smooth. However, I'm noticing that the mover isn't doing what I would expect. I added a pool of two SSDs in ZFS 'mirror' mode and created a share that uses that pool as primary storage with the array as secondary storage, with the mover action set to transfer from the SSDs to the array. That is currently the only share set up to use the SSD pool. I've transferred a lot of files to this share, and I see them filling up the SSDs, but it seems that the mover does not reduce the utilization of the SSD pool. Am I misunderstanding what should happen when the mover activates?
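
    One way to sanity-check this while waiting for a reply, assuming a hypothetical pool named "cache_ssd" and share named "media"; only the per-device paths show where the data physically lives, since the user share view merges both locations:

        # how much of the share still sits on the pool vs. the array disks
        du -sh /mnt/cache_ssd/media /mnt/disk*/media 2>/dev/null

        # the merged user share view should stay the same either way
        du -sh /mnt/user/media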

    Link to comment
    15 hours ago, Parsnip said:

    Am I misunderstanding what should happen when the mover activates?

    Please create a new report for that, add the diagnostics and the name of the share(s) you expect to be moved.
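
    For reference, diagnostics can also be generated from a terminal; on recent Unraid releases the diagnostics command writes the zip to the logs folder on the flash drive:

        # generate a fresh diagnostics archive from a web terminal or SSH session
        diagnostics

        # the resulting zip lands in the logs folder on the flash device
        ls -lt /boot/logs/ | head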

    Link to comment

    Just had a weird semi-lockup issue:

     

    Went to update a single docker container from the webgui, navigated away from the update screen back to the Dashboard, and the webgui is frozen in time. I SSHed in, ran the nginx restart and reload commands, and it's still frozen. Tried to kill the dockerd and containerd processes but it refuses to kill them.

     

    Even tried to force a reboot and shutdown from the terminal; no dice, it says it's going down for reboot but it just doesn't.

     

    I'll have to manually force it when I get home.
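
    Not an official recommendation, but as a last resort when a clean reboot hangs, the kernel's magic SysRq interface can sync, remount read-only, and force an immediate reboot. This is still an unclean shutdown, so expect a parity check afterwards, and kernel.sysrq must be enabled:

        # last resort only: anything not yet synced will be lost
        echo 1 > /proc/sys/kernel/sysrq
        echo s > /proc/sysrq-trigger   # sync filesystems
        echo u > /proc/sysrq-trigger   # remount filesystems read-only
        echo b > /proc/sysrq-trigger   # immediate reboot, no clean shutdown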

     

     

     

     

    diagnostics-20240820-1301.zip

    Link to comment
    3 hours ago, MowMdown said:

    Just had a weird semi-lockup issue: [...]

    Diag shows a segfault, which probably means bad RAM... I would run a memtest and prepare to replace a bad stick.

    Link to comment

    OpenZFS has increased its maximum supported kernel to 6.10 on GitHub, so the next release is likely just around the corner.

     

    That's a good step; perhaps we could see an updated beta to test that kernel out.
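
    The supported range is declared in the META file at the root of the OpenZFS repo, so it can be checked without digging through commits (URL correct at the time of writing):

        # show the declared minimum/maximum supported Linux kernel versions
        curl -s https://raw.githubusercontent.com/openzfs/zfs/master/META | grep -i linux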

    Link to comment
    On 8/20/2024 at 4:17 PM, Jclendineng said:

    Diag shows a segfault, which probably means bad RAM... I would run a memtest and prepare to replace a bad stick.

    Nah, RAM is good, I ran memtest for almost a day.

     

    I believe the issue is that I'm running a docker directory on a ZFS pool and didn't read the update notes closely enough:

     

    Quote

    There is a conflict with recent releases of Docker, ZFS, and the Linux Kernel. On Settings > Docker, we recommend that you use a Docker image rather than a Docker directory. If you choose to use a directory, avoid placing it on a ZFS pool (XFS or BTRFS are fine). If you have any of these symptoms, you'll want to delete your Docker directory and recreate in an image:

    Call traces

    Containers hanging

    Extremely slow load times of the Docker page

    Inability to update containers
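
    For anyone checking their own setup, a quick way to see which mode Docker is using, assuming the setting still lives in /boot/config/docker.cfg as it does on 6.12 (key names may vary between releases):

        # shows whether Docker is configured as a directory or a vdisk image, and where it lives
        grep -i "DOCKER_IMAGE" /boot/config/docker.cfg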

     

    Edited by MowMdown
    Link to comment
    1 hour ago, MowMdown said:

    I believe the issue is that I'm running a docker directory on a ZFS pool

    Yep, syslog has the typical call trace for that issue.
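
    For anyone hitting the same thing, the trace is easy to spot in the syslog included in the diagnostics; the path below is the live log on a running server:

        # show the kernel call trace and the lines that follow it
        grep -iA 15 "call trace" /var/log/syslog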

    Link to comment

    New forum user here, have been running Unraid for a couple of years now. Wanted to upgrade to 7 in order to explore running a protected ZFS pool instead of a regular Unraid array. After following the steps to upgrade, it seems like the upgrade is just... stuck?

    It's showing a screen that says:

    [screenshot attached]

     

    But nothing's happened for a good while. Is that normal? Should I let it sit for a few hours on that screen, or...?

     

    EDIT: I dared to just reboot the server since nothing seemed to be happening on the USB drive anyway, and top showed pretty much the same process over and over again. I tried again and now it seems to show the proper dialogs.

    Edited by daPhie79
    Link to comment




