• Unraid OS version 6.12.0-rc7 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc7 2023-06-05

    Changes vs. 6.12.0-rc6

    Share "exclusive mode":

    • Added "Settings/Global Share Settings/Permit exclusive shares" [Yes/No] default: No.
    • Fix issue marking a share exclusive when changing Primary storage to a pool where the share does not exist yet.
    • Make exclusive share symlinks relative (illustrated below).
    • Disable exclusive share mode if the share is NFS-exported.
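    As an illustration only (not taken from the release notes): an exclusive share is exposed as a symlink from /mnt/user/<share> straight to the pool path, and after this change the link target is relative rather than absolute. Assuming a share named "appdata" stored exclusively on a pool named "cache", it might look like this:

    # hypothetical example; the share and pool names are placeholders
    ls -l /mnt/user/appdata
    # lrwxrwxrwx 1 root root 16 Jun  5 10:00 /mnt/user/appdata -> ../cache/appdata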

    Networking:

    • Fix issue where /etc/resolv.conf can get deleted when switching DNS Server between auto/static.
    • Support custom interfaces (e.g. Tailscale VPN tunnel or ZeroTier L2 tunnel).

    Web Terminal:

    • Change renderer from webgl to canvas to mitigate issue with latest Chrome update.
    • For better readability, changed background color on directory listings where 'w+o' is set.

    Docker:

    • Fix issue detecting proper shutdown of docker.
    • rc.docker: Fix multiple fixed IPs

    VM Manager:

    • Fix issues with VM page loads if users have removed vcpu pinning.
    • ovmf-stable: version 202305 (build 3)

    Other:

    • Fix issue mounting emulated encrypted unRAID array devices.
    • Fix NTP drift file save/restore from persistent USB flash 'config' directory.
    • Remove extraneous /root/.config/remmina file.
    • Misc. changes to accommodate webGui repo reorganization.

    webGUI:

    • Fixed regression error in disk critical / warning coloring & monitoring

    Linux kernel:

    • version 6.1.32
    • CONFIG_FANOTIFY: Filesystem wide access notification



    User Feedback

    Recommended Comments



    Creating a new share is, for some reason, also creating a new ZFS dataset with the share name, despite the share being set to use only my cache drive.

     

    i.e. create share "scanner" and set it to use the cache pool only. It creates a ZFS dataset "scanner". Delete the ZFS dataset "scanner" and the "scanner" share vanishes as well. Very strange.

     

    The share is set to use only the cache pool; however, when I add some data to it, the data actually goes into the ZFS dataset that it created.

     

    It only seems to be the "scanner" share that causes the issue.  If I create a new share with a different name, a ZFS dataset is not created and it seems fine.  Maybe a reboot is in order...

     

    Edit: deleted everything from the share, deleted the share, re-created it, and now no mystery ZFS dataset is created. Maybe there was some sort of mystery symlink or something hanging around? Strange. Continuing to test.

    Edited by Nogami
    49 minutes ago, Nogami said:

    Creating a new share is, for some reason, also creating a new ZFS dataset with the share name, despite the share being set to use only my cache drive.

    This is normal and expected behavior if the pool is using ZFS; it then allows you to snapshot, send/receive, etc.
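    As a rough sketch of what per-share datasets make possible (standard ZFS commands; the pool and share names below are placeholders, not taken from this thread):

    zfs snapshot cache/scanner@before-cleanup                               # point-in-time snapshot of the share's dataset
    zfs send cache/scanner@before-cleanup | zfs receive backuppool/scanner  # replicate it to another pool
    zfs rollback cache/scanner@before-cleanup                               # or roll the share back to that snapshot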


    @JorgeB

    For now the massive create errors are gone. (Before, the errors were spammed every minute when something was wrong.)

    So it looks like it helped.

     

    But now I get some isolated errors in the syslog over the last few hours, though they are not shown directly as a red error in the log.

     

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger

    Some hours later:

    shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger

     

    No more information is found in the syslog.

    A dataset named share1 now exists.

     

    I checked my datasets with zfs list just now and I don't have a share1 dataset, after I received the above destroy error.

    Maybe it was destroyed on a second try, but I don't see anything in the log.

    14 minutes ago, unr41dus3r said:

    But now I get some isolated errors in the syslog over the last few hours, though they are not shown directly as a red error in the log.

    Those are not errors; a dataset is destroyed after the mover runs (if it's empty) and recreated when new data is written to that share.
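    To illustrate the lifecycle described here, these are roughly the commands visible in the syslog lines above (a sketch based on this comment, not the actual shfs code):

    zfs create 'cache-mirror/share1'      # issued when data is first written to the share on this pool
    # ... the mover later empties the share from the pool ...
    zfs destroy -r 'cache-mirror/share1'  # issued after the mover runs, because the dataset is now empty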

    2 hours ago, JorgeB said:

    Those are not errors; a dataset is destroyed after the mover runs (if it's empty) and recreated when new data is written to that share.

     

    I thought so; still, it appears as an error in the notification area :)

     

    [attached screenshot of the notification]

     

    Edit:

    In the syslog itself it is a white message, and I receive an email that it is an alert.

    Edited by unr41dus3r

    Dear devs,

     

    I just upgraded to rc7 from rc6, and what I noticed is that ALL Unraid settings were reset.

    I had to go through every setting to change it back to the desired value.

    I don't know if this has been reported yet. This is the only issue I have encountered so far with the minimal setup I have.

     

    Thanks again for the hard work and for the AMAZING product Unraid will be when 6.12 hits the stable channel.

    5 minutes ago, Mik3 said:

    I just upgraded to rc7 from rc6, and what I noticed is that ALL Unraid settings were reset.

    I had to go through every setting to change it back to the desired value.

    As far as I know, you are the first person who has experienced this, so there is probably something else going on.


     

    It might help if you attach your system’s diagnostics zip file to your next post in this thread.

     


    After updating to rc7, my server's web UI becomes unresponsive approximately every two hours, but SSH and Docker containers are still functioning properly. What could be causing this?

     

    [attached screenshot from an SSH session]

    On 6/8/2023 at 12:51 AM, JorgeB said:

    This is a known issue, mostly with Ivy Bridge or older CPUs, when auto-importing a btrfs pool. You can get around it by clicking on the pool and setting the fs to btrfs, or, if you are creating a new pool as it looked, you can also just erase the devices.

    A complete power-down and reboot fixed this issue. Just rebooting didn't work.

    9 hours ago, unr41dus3r said:

    I thought so; still, it appears as an error in the notification area :)

    That looks more like a plugin or custom setting reacting to the 'fail' part of pipefail. Update to rc8; that no longer shows up in the log.

     

    No more issues with the zfs datasets so far?

    6 hours ago, duckey77 said:

    A complete power-down and reboot fixed this issue. Just rebooting didn't work.

    You can also just set the pool fs to anything other than auto. Unlike what my post mentioned, it can also happen with new devices (without a filesystem), as I found after posting that.

    8 hours ago, 九年吃菜粥 said:

    What could be causing this?

    Any chance you are running the Cloudflare docker container?

    2 hours ago, JorgeB said:

    Any chance you are running the Cloudflare docker container?

    That can't be it. In subsequent tests, I shut down all Docker containers, but the web UI still cannot be accessed.

    6 hours ago, 九年吃菜粥 said:

    That can't be it. In subsequent tests, I shut down all Docker containers, but the web UI still cannot be accessed.

    I suggest you upgrade to rc8 and boot in safe mode with the Docker service disabled. If the issue persists, please create a new bug report and post the complete diagnostics saved after doing that.

    9 hours ago, JorgeB said:

    That looks more like a plugin or custom setting reacting to the 'fail' part of pipefail. Update to rc8; that no longer shows up in the log.

     

    No more issues with the zfs datasets so far?

     

    You are completely right! It was a script I had. Thanks!

     

    It looks like the dataset errors are also gone! So your idea of recreating the dataset with RC7 should have fixed the problem.

    I will update to RC8 next.

    Edited by unr41dus3r

    @JorgeB

    Sadly, during the night I received the old error again.

     

    rserver shfs: /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    rserver root: cannot create 'cache-mirror/share1': dataset already exists
    rserver shfs: command failed: 1

     

    I am on RC8 now, but the dataset was probably created with RC7

     

    I can see it tried to delete the dataset but couldn't, because it was busy. Before this, a forced mover run was in progress.

     

    Log of the failed destroy (maybe a second try after some time could be implemented? A sketch of such a retry follows the log below):

     

    tower shfs: /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger
    tower root: cannot destroy 'cache-mirror/share1': dataset is busy
    tower shfs: error: retval 1 attempting 'zfs destroy'
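    A sketch of the retry suggested above (an illustration only, not how Unraid implements the destroy; the dataset name is taken from this log):

    #!/bin/bash
    # Retry 'zfs destroy' a few times, since the dataset may only be busy temporarily.
    for attempt in 1 2 3; do
        if /usr/sbin/zfs destroy -r 'cache-mirror/share1'; then
            break                       # destroyed successfully
        fi
        sleep 60                        # wait before the next attempt
    done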

     

    Some hours after this error, I got the message that the new dataset cache-mirror/share1 could not be created.

     

    I did the following (the commands are consolidated in the sketch after this list):

    • In /mnt/cache-mirror/ the share1 folder is missing.
    • With "zfs list" I can still see the share1 dataset.
    • "zfs mount -a" mounted the dataset correctly at /mnt/cache-mirror/share1.
    • After a mover run, the folder and the share1 dataset were correctly removed.
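    Consolidated, the commands from the list above (pool and share names as in this report):

    ls /mnt/cache-mirror/        # the share1 folder is missing from the pool mount point
    zfs list                     # ...yet the share1 dataset still exists
    zfs mount -a                 # remounts it at /mnt/cache-mirror/share1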

     

    As I wrote above, I am on RC8 now, BUT the dataset was created with RC7 as far as I remember.

    I will report back in the RC8 thread or create a new bug report if the error occurs again.

    Edited by unr41dus3r
    11 minutes ago, unr41dus3r said:

    I am on RC8 now, but the dataset was probably created with RC7

    Thanks for the report; it does suggest the issue is still present. At least it's good to see why it was not deleted (dataset busy); earlier releases didn't show that.

    6 hours ago, JorgeB said:

    Thanks for the report; it does suggest the issue is still present. At least it's good to see why it was not deleted (dataset busy); earlier releases didn't show that.

     

    Sorry, I am hijacking this now, but I think the comments about rc7 are over ;)

     

    I think I found the problem.

    I tried to shut down the server and had a problem unmounting cache-mirror.

     

    I found out a snapshot is stuck and busy and I can't destroy it.

     

    cannot destroy snapshot cache-mirror@backup1: dataset is busy

     

    I created this snapshot with "zfs snapshot cache-mirror@backup1" for my backup Docker container and use the command "zfs destroy cache-mirror@backup1" to destroy it, but then I receive the above error.

    At the moment I don't know why this happens. I use this snapshot to back up my appdata folder.

     

    NAME                                                                                           USED  AVAIL     REFER  MOUNTPOINT
    cache-mirror@backup1                                                                        8.26G      -      167G  -

     

    I will debug it next; maybe you have an idea.
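    A few standard ZFS checks that might show what is keeping the snapshot busy (a sketch, not a guaranteed fix; the pool and snapshot names are taken from this report):

    zfs holds cache-mirror@backup1       # user holds on the snapshot would block 'zfs destroy'
    zfs get clones cache-mirror@backup1  # a clone based on the snapshot also keeps it busy
    # if the snapshot is reachable via the hidden .zfs directory, something may still have it open:
    fuser -vm /mnt/cache-mirror/.zfs/snapshot/backup1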

     

    Edit:

    It is possible this is a snapshot from an older RC. I started with RC5 and it could be from that version.

    The docker container is of course disabled.

    Edited by unr41dus3r
    49 minutes ago, unr41dus3r said:

    It is possible this is a snapshot from an older RC.

    We now think older datasets are not the problem; another user also still had issues. There are some planned changes for the next release: it will try to unmount the dataset first, and if that fails it won't attempt to destroy it. Please re-test once it's available.


    This will be my first time doing an Unraid update.

     

    Is there any due diligence I need to do to ensure the plugins I use are compatible? If something is not compatible, does it simply get disabled, or could it have unintended side effects on my system?

    2 hours ago, gustyScanner said:

    This will be my first time doing an Unraid update.

     

    Is there any due diligence I need to do to ensure the plugins I use are compatible? If something is not compatible, does it simply get disabled, or could it have unintended side effects on my system?

    Make sure all plugins are up to date, and if you have Fix Common Problems installed, go to Tools > Update Assistant, which will let you know if anything should be uninstalled.

    On 6/6/2023 at 11:44 AM, bonienl said:

    Included and Excluded listening interfaces need to be reactivated each time the server reboots or the array is restarted.

    To automate this process, you can add the following code to the "go" file (place it before the line that starts the emhttpd daemon):

     

    # reload services after starting docker with 20 seconds grace period to allow starting up containers
    event=/usr/local/emhttp/webGui/event/docker_started
    mkdir -p $event
    cat <<- 'EOF' >$event/reload_services
    #!/bin/bash
    echo '/usr/local/emhttp/webGui/scripts/reload_services' | at -M -t $(date +%Y%m%d%H%M.%S -d '+20 sec') 2>/dev/null
    EOF
    chmod +x $event/reload_services

     

    With this code in place and autostart of containers enabled, the listening interfaces will be automatically updated after a system reboot or array restart.
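    For context (an assumption about the standard Unraid layout, not something stated in the quoted post): the "go" file lives at /boot/config/go on the USB flash, and it typically ends by starting the management daemon, so the snippet would sit just above that line:

    #!/bin/bash
    # /boot/config/go (sketch of placement)

    # ... place the reload_services snippet from the quoted post here ...

    # Start the Management Utility
    /usr/local/sbin/emhttp &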

     


    Sorry, where should I put these instructions?


    Good Afternoon,

     

    I have completed the instructions to allow Tailscale VPN to pass through to my Unraid server. I am able to SSH and use the GUI through it. However, when I restart my server or stop/start the array, Tailscale's IP address to the server stops working unless I go into Settings > Network Settings > Include listening interfaces and remove "tailscale0" and re-add it. If I try to SMB to the IP address via Tailscale, none of my devices can connect to it. Is anyone having this issue? Is this a known bug?

     

    Thank you! 

    Edited by DC_Interstellar
    7 hours ago, DC_Interstellar said:

    Good Afternoon,

     

    I have completed the instructions to allow Tailscale VPN to pass through to my Unraid server. I am able to SSH and use the GUI through it. However, when I restart my server or stop/start the array, Tailscale's IP address to the server stops working unless I go into Settings > Network Settings > Include listening interfaces and remove "tailscale0" and re-add it. If I try to SMB to the IP address via Tailscale, none of my devices can connect to it. Is anyone having this issue? Is this a known bug?

     

    Thank you! 

    Are you still on rc7? Have you tried the stable release, as I think this may have been updated in rc8+?





