Unraid OS Version 7.0.0-beta.3 available


    SpencerJ

    Thanks for your feedback! Unraid 7.0.0-beta.3 is now available on the Next branch. Everyone running -beta.2 is encouraged to upgrade
     

    This is BETA software. Please use on test servers only.

     

    This announce post is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic in the Prereleases board. Be sure to include your diagnostics.zip.

     

    Upgrade steps for this release

1. Read the release notes via Update OS -> Next -> Unraid 7.0.0-beta.3 Changelog.

[Screenshot: the Update OS page showing the Next branch changelog]

     

2. As always, prior to upgrading, create a backup of your USB flash device: go to "Main/Flash/Flash Device Settings" and click "Flash Backup".

    3. Update all of your plugins. This is critical for the Connect, NVIDIA and Realtek plugins in particular.

4. If the system is currently running 6.12.0 - 6.12.6, we suggest stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:

    umount /var/lib/docker

The array should now stop successfully. (If it still won't, see the check sketched after these steps.)

    5. If you have a recent release or Unraid Connect installed:

Open the dropdown in the top-right of the Unraid webgui and click Check for Update, then press More options and switch to the Next branch. You may be prompted to sign in to access the Next branch. Select the appropriate version and choose "View changelog to Start Update". More details are in this blog post.

    6. If you don't have the Check for Update option in the upper right corner:

    Either install the Unraid Connect plugin or upgrade to 6.12.10 first. Then check for updates as described above.

7. Wait for the update to download and install.

8. If you have any plugins that install third-party drivers (NVIDIA, Realtek, etc.), wait for the notification that the new version of the driver has been downloaded.

    9. Reboot

     




    User Feedback

    Recommended Comments



    25 minutes ago, Revan335 said:

I mean this; sorry, the path was /mnt/user/docker.

Still not sure why you are asking, but you can use any path you like; the default only applies to new installs.


    Successful update from beta2, all seems working, including docker with custom networks.

For some reason the reboot ends up with "unclean shutdown detected" and a new parity check, which is annoying (~11 hours).

    Array and cache pool are both encrypted in my case.

    1 hour ago, JorgeB said:

    Still not sure why you are asking, but you can use any path you like, default is for new installs.

The release notes contain this entry:

     

    https://docs.unraid.net/unraid-os/release-notes/7.0.0/#predefined-shares-handling

     

    "The Unraid OS Docker Manager is configured by default to use these predefined shares:

    system - used to store Docker image layers in a loopback image stored in system/docker."

     

This is a different statement from the points in the 6.12 release notes/thread that I posted earlier.

    10 hours ago, ross232 said:

I'm having major problems with VMs.

     

First I ran into the previous issue I had, whereby the system would not boot until I removed my VFIO bindings and recreated them manually.


Then, after fixing that, my VM would not start, giving a virtiofsd error (picture attached). After removing virtiofs from the config the VM will now start, but I get no display in the VM at all - like it gets stuck.

    Unraid Error.png

     

Updated from beta2 to beta3 without much issue either, just the same error as in the picture for my main Windows 11 VM. Had to re-create it, and now it's working fine.

    49 minutes ago, Revan335 said:

    "The Unraid OS Docker Manager is configured by default to use these predefined shares:

    system - used to store Docker image layers in a loopback image stored in system/docker."

Yes, and that is correct; that's the default share, as mentioned above: /mnt/user/system/docker

    2 hours ago, asychev said:

    For some reason reboot ends up with "unclean shutdown detected"

Do you mean the reboot after the update? That would still be from beta.2. In any case, I would recommend creating a new post with the diagnostics to try to see what is preventing the clean shutdown.


Did the update from beta2 to beta3, but the reboot did not work. It got stuck at rebooting with the counter at 3000+ seconds. After waiting that long, I powered the system off by holding the power button for a few seconds, then started it again. The OS is now on beta3 and everything seems to work. No unclean shutdown, and the logs are also fine. Not sure what went wrong during the reboot, or whether there is still a problem.

    1 hour ago, JorgeB said:

    Yes, and that is correct, that's the default share as mentioned above /mnt/user/system/docker

OK, then it has been changed back in 7. So we now know that from 7 it is /mnt/user/system/docker again, and no longer /mnt/user/docker as it was from 6.12.x. Thanks for the clarification!

    7 minutes ago, Revan335 said:

OK, then it has been changed back in 7. So we now know that from 7 it is /mnt/user/system/docker again, and no longer /mnt/user/docker as it was from 6.12.x. Thanks for the clarification!

As far as I know it has always been /mnt/user/system/docker, even for the 6.12.x releases. I suspect there may have been an error in the release notes you quoted.

    16 hours ago, Revan335 said:

Is the system share back as the Docker path? Is that what the documentation in the release notes says? Wasn't /mnt/docker the new default path in 6.x, with the ZFS availability?

     

I think you are confused by the release notes you linked. Neither /mnt/docker nor /mnt/user/docker has ever been the default. The default has always been under the system share. Nothing has changed in version 7 in this regard.

     

The notes you linked make suggestions for setting your own custom Docker path, specifically for when you use a directory path instead of a Docker image, and when using ZFS. In that case, it's suggested to use a new path so that the Docker directory is not under the system share, which is a dataset when using ZFS.

     

    It's written just like this:

    Quote

    here is our recommendation for setting up Docker using directory:

     

    As has been mentioned, you can make that path anything you want and put it anywhere you want.

     

But an additional recommendation, which supersedes that one, is in the current release notes, where it's recommended NOT to use a directory path for Docker at this time, and instead to use an image (which is the default).

    Edited by Espressomatic
    • Featured Comment

For users updating from 6.12: there's an issue with this beta auto-importing single-device xfs pools after the first array start. The pool(s) will appear as "not installed" and have the device unassigned. To resolve the issue:

     

- stop the array

- set the pool(s) slots to 0; this will remove the pool

- add the pool(s) again, with the same name(s) as before

- reassign the device to the pool(s)

- start the array, and the pool(s) should now import correctly

     

P.S. still testing, but confirmed: there is also a similar but unrelated issue importing raid1 6.12 btrfs pools (no problem if upgrading from v6.11 or earlier). For those, the array won't start, giving an "invalid pool config" error. The solution is the same as above: just delete and recreate the pool and let Unraid re-import it. Note that for these you won't be able to set the slots to 0 to remove the pool; instead, click on the first pool device, then "remove pool", then add the pool back and re-assign the devices.

     

Re-importing a pool won't delete its data; it just re-imports the existing pool. This is true for any kind of pool.

     

     

    15 minutes ago, JorgeB said:

There's an issue with this beta auto importing single device xfs pools after first array start [...] P.S. still testing, but there could also be a similar but unrelated issue importing some multi device btrfs pools, depending on how they were created, for those the array won't start with an "invalid pool config" error, solution is the same as above, just delete and recreate the pool and let Unraid re-import it [...]

For btrfs pools, will removing and recreating the pool result in data loss, or can it import the existing data? I'd have it backed up anyway, but I want a general idea of the time commitment involved in recreating the pool.

     

    Thank you @JorgeB

    6 minutes ago, rabidfibersquirrel said:

    will removing and recreating the pool result in data loss

No, re-importing a pool just imports the previously existing pool, and this is the same for any kind of pool.

  PID USER       PRI  NI  VIRT   RES   SHR S  CPU% MEM%   TIME+  Command
16353 root        20   0  273M  5256  4232 S   1.3  0.0  6:36.58 /usr/libexec/unraid/emhttpd

     

     

    # uptime
     10:29:54 up 13:35,  1 user,  load average: 0.86, 0.58, 0.51
    

     

Is there any chance to reduce the CPU usage of the web server?

     

On my server, which runs a couple of Docker containers, this is the top process (along with /usr/libexec/unraid/shfs) using my CPU.

On average it is using 1% of my CPU power.

     

    Version 7.0.0-beta.3 2024-10-04

     

Intel Corporation NUC12WSBi7, Version M46422-303
    Intel Corp., Version WSADL357.0088.2023.0505.1623
    BIOS dated: Friday, 05-05-2023

     

    12th Gen Intel® Core™ i7-1260P @ 3465 MHz

     

    ---

Besides that:

I'm using GNU screen and have some bash sessions open.

If I then want to stop the array, these sessions prevent the stop or reboot process (initiated via the web UI) from unmounting the drives. Could you add a kill command for these open shells to your unmount script, to prevent this behavior?

Thanks.

     

    2 hours ago, Unpack5920 said:

    Could you add a kill-command for these open shells to your unmount script, to prevent this behavior?

This has been requested before. Until/unless that gets added, you can use the Dynamix Stop Shell plugin to achieve what you want.

    Quote

    In a future release we will be adding a mechanism to upgrade your pools.

     

    Is there a way to do this manually? 

    4 minutes ago, jb350 said:

    Is there a way to do this manually? 

    You can do it by typing:

     

    zpool upgrade pool_name

    or 

    zpool upgrade -a 

     

    to upgrade them all, but note that if you downgrade back to v6.12 the pools won't mount.


Updated from Beta 2 to Beta 3; no issues with VMs, Dockers, or anything else.

Odd things that are happening / requests:

1. I don't seem to be able to delete snapshots for VMs.

   Going to the VMs tab, clicking on the snapshot, clicking 'Remove Snapshot', then Proceed, I get "Image currently active for this domain."
   1.a Somehow, when I created a new VM, the VM's folder was created in the base directory, outside of the 'domains' folder, and it was made as a share. Not sure how that happened, as the actual VM was still in the right location. I recall someone mentioning something like this earlier in this topic.

2. Not specific to Beta 3, but my Docker containers write about 1 MB every 10 seconds of log data to the drive where they are stored. Can we please get an option to write logs to RAM, which then flushes to disk? Essentially, can this be added as a toggleable option in the Docker settings? (A per-container stopgap is sketched after this list.)

3. Are we running SMB3 as default yet? I thought we were getting SMB hardening options back in v6.

    15 minutes ago, JustOverride said:

Updated from Beta 2 to Beta 3; no issues with VMs, Dockers, or anything else.

Odd things that are happening / requests: [...]

    Can you post diagnostics?

    51 minutes ago, jcofer555 said:

    default shares not being created

    You have to enable the Docker and/or VM service for the respective shares to be created.

    50 minutes ago, JorgeB said:

    You have to enable the Docker and/or VM service for the respective shares to be created.

     

I've got the same problem as @jcofer555: after a fresh install of Beta3 on a completely new NAS (new hardware), no user shares are created. Under "Global Share Settings", "Enable User Shares" is enabled.

     

    I can't enable Docker because it complains that these directories do not exist:

     

[Screenshot: Docker settings error listing the missing directories]

     

    I've also attached my diagnostics.

    videte-diagnostics-20241007-1057.zip





