• Unraid OS version 6.12.0-rc7 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc7 2023-06-05

    Changes vs. 6.12.0-rc6

    Share "exclusive mode":

    • Added "Settings/Global Share Settings/Permit exclusive shares" [Yes/No] default: No.
• Fix issue marking a share exclusive when changing Primary storage to a pool where the share does not exist yet.
• Make exclusive share symlinks relative (see the sketch below).
• Disable exclusive share mode if the share is NFS-exported.
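
For context, an exclusive share bypasses shfs by turning /mnt/user/<sharename> into a symlink that points directly at the pool. A rough sketch of what the relative form looks like, with example share and pool names:

ls -l /mnt/user/appdata
lrwxrwxrwx 1 root root 16 Jun  5 12:00 /mnt/user/appdata -> ../cache/appdata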

    Networking:

• Fix issue where /etc/resolv.conf can get deleted when switching DNS Server between auto/static.
• Support custom interfaces (e.g. Tailscale VPN tunnel or ZeroTier L2 tunnel)

    Web Terminal:

    • Change renderer from webgl to canvas to mitigate issue with latest Chrome update.
    • For better readability, changed background color on directory listings where 'w+o' is set.

    Docker:

    • Fix issue detecting proper shutdown of docker.
    • rc.docker: Fix multiple fixed IPs

    VM Manager:

    • Fix issues with VM page loads if users have removed vcpu pinning.
    • ovmf-stable: version 202305 (build 3)

    Other:

    • Fix issue mounting emulated encrypted unRAID array devices.
    • Fix ntp drift file save/restore from persistent USB flash 'config' directory.
    • Remove extraneous /root/.config/remmina file
• Misc. changes to accommodate webGui repo reorganization.

    webGUI:

    • Fixed regression error in disk critical / warning coloring & monitoring

Linux kernel:

    • version 6.1.32
    • CONFIG_FANOTIFY: Filesystem wide access notification



    User Feedback

    Recommended Comments



Could there be an option to use Docker with a directory without Docker using the built-in ZFS "driver"?

Listing and working with datasets gets so convoluted with all the image datasets from Docker.
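
For reference, and only as a generic Docker-level illustration rather than anything Unraid currently exposes in its GUI, the storage driver Docker auto-selects can in principle be overridden in daemon.json:

{
  "storage-driver": "overlay2"
}

With that set, Docker would not create per-layer ZFS datasets; whether overlay2 works on top of a ZFS-backed directory, and whether Unraid allows such an override, is exactly the open question here.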

     

     

    Edited by Niklas
    1 hour ago, interwebtech said:

    The false notifications/emails regarding disk usage have returned with 6.12.0-rc7.

     

What have you set as the default warning and critical levels?

    See Settings -> Disk Settings

     

    Made a fix


    I see the following in rc6 and rc7 on a fresh setup.

     

    - configure a cache pool with 2 slots

- assign an SSD to each slot

    - start the array

     

It stays at "Array Starting" forever and shows *Mounting* for "Cache".

    In the logs the following is visible.

     

    Jun  7 16:48:55 Tower emhttpd: mounting /mnt/cache
    Jun  7 16:48:55 Tower emhttpd: shcmd (334): mkdir -p /mnt/cache
    Jun  7 16:48:55 Tower emhttpd: /sbin/btrfs filesystem show /dev/sdb1 2>&1
    Jun  7 16:48:55 Tower emhttpd: ERROR: not a valid btrfs filesystem: /dev/sdb1
    Jun  7 16:48:55 Tower emhttpd: cache: invalid config: total_devices 0 num_misplaced 0 num_missing 0
    Jun  7 16:48:55 Tower kernel: emhttpd[11562]: segfault at 107 ip 0000147f142f658a sp 0000147f139f5be0 error 4 in libc-2.37.so[147f1427e000+169000] likely on CPU 0 (core 0, socket 0)
    Jun  7 16:48:55 Tower kernel: Code: 48 8d 3d 91 27 11 00 e8 c4 ba ff ff 0f 1f 40 00 48 85 ff 0f 84 bf 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 6e d8 14 00 <48> 8b 47 f8 64 8b 2b a8 02 75 5b 48 8b 15 fc d7 14 00 64 48 83 3a

     

    At that point, I can only do a reboot. 

The way to get it working is to assign only 1 disk to the cache pool and, once configured, add a second one.

It seems to be consistently reproducible.

     


    Updated to rc7. My array is up and running, but this is what my Main tab looks like.
[screenshots of the Main tab]

     

All Docker containers, VMs, and shares are up and available.

/mnt/user shows all shares and /mnt/ shows all disks and zfs arrays as available.

     

    df
    /dev/md1p1      19529738188  18066278920  1463459268  93% /mnt/disk1
    /dev/md2p1      19529738188  18301980328  1227757860  94% /mnt/disk2
    /dev/md3p1      19529738188  15864198860  3665539328  82% /mnt/disk3
    /dev/md4p1      19529738188  14267824124  5261914064  74% /mnt/disk4
    /dev/md5p1      19529738188  19163167360   366570828  99% /mnt/disk5
    /dev/md6p1      19529738188  16932774200  2596963988  87% /mnt/disk6
    /dev/md7p1       5858435620    998073692  4860361928  18% /mnt/disk7
    /dev/md8p1       5858435620   2495625016  3362810604  43% /mnt/disk8
    /dev/md9p1       5858435620    895198144  4963237476  16% /mnt/disk9
    /dev/md10p1      5858435620   5307366288   551069332  91% /mnt/disk10
    /dev/md11p1      5858435620   4979550840   878884780  85% /mnt/disk11
    /dev/md12p1      5858435620   2167139756  3691295864  37% /mnt/disk12
    /dev/md13p1      5858435620   4581822780  1276612840  79% /mnt/disk13
    /dev/md14p1      5858435620   2417777396  3440658224  42% /mnt/disk14
    /dev/md15p1      5858435620   1626761232  4231674388  28% /mnt/disk15
    /dev/md16p1      5858435620   3145986208  2712449412  54% /mnt/disk16
    /dev/md17p1      5858435620   5609630520   248805100  96% /mnt/disk17
    /dev/md18p1      5858435620   4218253240  1640182380  73% /mnt/disk18
    /dev/md19p1      5858435620   3790304928  2068130692  65% /mnt/disk19
    /dev/md22p1     15623792588   2210644188 13413148400  15% /mnt/disk22
    /dev/md23p1     15623792588    108964700 15514827888   1% /mnt/disk23
    /dev/nvme0n1p1    976761560    651471796   323605596  67% /mnt/fast_cache
    hyperspin       27942883072     15291136 27927591936   1% /mnt/hyperspin
    aio_hyperspin   27943138560  24981325824  2961812736  90% /mnt/aio_hyperspin
    shfs           224585677364 147149322720 77436354644  66% /mnt/user0
    shfs           224585677364 147149322720 77436354644  66% /mnt/user
    /dev/loop2         20971520         7768    20845544   1% /etc/libvirt
    root@MediaMac:~#

     

     

    Edited by duckey77

    @duckey77

If Jorge's advice below does not fix your issue, then:

1. Do not format
2. Please create a new thread in the prerelease board, where it is easier to assist than in the release thread
3. Attach your diagnostics in the new thread
    7 hours ago, TheMachine said:

    I see the following in rc6 and rc7 on a fresh setup.

This is a known issue, mostly with Ivy Bridge or older CPUs, when auto-importing a btrfs pool. You can work around it by clicking on the pool and setting the filesystem to btrfs, or, since it looks like you are creating a new pool, you can also just erase the devices.

    On 6/6/2023 at 10:07 AM, dlandon said:

NFS cannot deal with a symlink. You can use the actual storage location instead. For example, if you have Syslogs as a cache-only share, use the /mnt/cache/Syslog reference rather than /mnt/user/Syslog. This also avoids shfs, which /mnt/user/Syslog would go through.

     

    Similar problem with 9p (which I use for a VM) in that it's not transparent.

     

Still, I'd appreciate some way to override it: I have a share that was exclusive in rc6, and now, because of one subdirectory mounted over NFS (which worked fine), it no longer is.

     

Sure, I can hardcode paths, but the biggest problem is hard to work around: I access the share from my Mac over SMB, where every access risks triggering an shfs bug that takes down the array.

     

    Worst case I'll disable NFS and mount it some other way.
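
For what it's worth, dlandon's suggestion amounts to pointing whatever mounts or references the share at the pool path instead of the user share. A minimal sketch of the idea on a client, assuming the server actually exports that pool path (hostname and paths are only examples):

# /etc/fstab on an NFS client (illustrative only)
tower:/mnt/cache/Syslog  /mnt/syslog  nfs  defaults  0  0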

    9 hours ago, JorgeB said:

This is a known issue, mostly with Ivy Bridge or older CPUs, when auto-importing a btrfs pool. You can work around it by clicking on the pool and setting the filesystem to btrfs, or, since it looks like you are creating a new pool, you can also just erase the devices.

     

    Hello @JorgeB, thanks for the info that it is a known issue.


Yes, I created a new pool, but all the disks were empty without any partitions. If I set it to btrfs like you described, it works with empty disks as well.

     

The CPU is a Celeron N5105, which is not that old; I think it's Tremont/Jasper Lake.
The issue does not exist with stable 6.11.

    48 minutes ago, TheMachine said:

Yes, I created a new pool, but all the disks were empty without any partitions.

OK, that means it still happens while the pool is set to auto and scanning for a btrfs filesystem on an empty device. I noticed the issue while importing an existing btrfs pool and never tested with empty devices.

     

    49 minutes ago, TheMachine said:

The CPU is a Celeron N5105, which is not that old; I think it's Tremont/Jasper Lake.

Also known to be affected; there's another user with the same CPU who had this issue. Among normal desktop CPUs, only Ivy Bridge and older are affected, with no issues on Haswell, Skylake, or newer, so it is possibly architecture or instruction-set related.

     

    50 minutes ago, TheMachine said:

The issue does not exist with stable 6.11.

    Correct, it only affects v6.12.


I noticed similar behavior in rc6, but it seems as if the stats are not tracking properly for 'Read'/'Received' data on the Stats page.

     

[screenshot of the Stats page]

     

The Transmit and Write stats look right and show activity, but the Read/Receive stats look completely off.


FYI: the Tailscale setup instructions are not required for anyone using the Tailscale plugin; there is an option in the plugin that handles things automatically:

     

[screenshot of the Tailscale plugin settings]


    I have a question.

In version 6.11.x of Unraid, I was able to use the Tailscale hostname to access SMB file sharing, such as "\\unraid\data", but after upgrading to version 6.12.0-rc7, I can no longer use the Tailscale hostname to access SMB shares.

Is it a bug?
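
A quick way to narrow down whether this is name resolution or SMB itself (hostname is only an example; smbclient would need to be run from a Linux machine on the tailnet):

ping unraid                # does the Tailscale/MagicDNS name resolve at all?
smbclient -L unraid -N     # does the server answer an anonymous share listing?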


      

    Hey everyone,

     

I switched from a btrfs cache pool to a ZFS cache pool.

Nothing changed in the config; I only created the new pool, copied the files, deleted the old pool, and renamed the new one.

     

I am now receiving this error in the log from time to time:

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    root: cannot create 'cache-mirror/share1': dataset already exists
    shfs: command failed: 1

     

When I run the command "mkdir /mnt/cache-mirror/share1", the folder is created AND I can see that there are files in this folder.

    The problem is then fixed for some time.

     

I also had this problem with rc6.

Is anybody else experiencing this error?

    2 minutes ago, unr41dus3r said:

When I run the command "mkdir /mnt/cache-mirror/share1", the folder is created AND I can see that there are files in this folder.

That won't create a dataset, it will create a regular folder. Try deleting that folder (move any data off it first) and let Unraid create the dataset automatically when new data is written to that share.
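
A quick way to check what actually exists at that path (pool and share names as in the example above); a plain folder created with mkdir will not show up in the dataset listing:

zfs list -r -o name,mountpoint cache-mirror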


The problem is that this error occurs when there is no folder, and it happens with all shares.

     

After I created the folder, the error doesn't appear again for the moment.

I think after a mover action the error appears again. Then the "share folder" is missing until I create the folder manually.

     

Edit:

I received this destroy error in the syslog about an hour before the create error occurred:

     

    shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-/m..r/.../share1' |& logger
    root: cannot destroy 'cache-/m..r/.../share1': dataset is busy
    shfs: error: retval 1 attempting 'zfs destroy'

     

     

    Edited by unr41dus3r
    5 minutes ago, unr41dus3r said:

I think after a mover action the error appears again. Then the "share folder" is missing until I create the folder manually.

The mover will delete the dataset after it finishes, and a new one should be automatically created for any new writes. Please create a bug report: reboot the server to clear the logs, run the mover, write something to a share that has the cache set as primary storage, then grab and post the diags.
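
If it helps, the "write something" step can be as simple as the following, with share and pool names as examples; the dataset should reappear in the listing after the write:

touch /mnt/user/share1/testfile
zfs list -r cache-mirror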


Not sure if it's rc6 or rc7 specific, but I just noticed the Docker page doesn't let me reorder the containers.
Has the function moved, or have I found a bug? 😛

    2 minutes ago, boomam said:

Not sure if it's rc6 or rc7 specific, but I just noticed the Docker page doesn't let me reorder the containers.
Has the function moved, or have I found a bug? 😛

You need to unlock it with the padlock on the menu bar on the right. Red is unlocked.

    3 hours ago, JorgeB said:

The mover will delete the dataset after it finishes, and a new one should be automatically created for any new writes. Please create a bug report: reboot the server to clear the logs, run the mover, write something to a share that has the cache set as primary storage, then grab and post the diags.

     

     

I will do this later, but I found these 2 bug reports and it looks like they are the same as mine.

     

     

    When the above error

     

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    root: cannot create 'cache-mirror/share1': dataset already exists
    shfs: command failed: 1

     

occurs, I can't write anything to "mnt/user/share1/testfolder".

It does not matter whether I try to create files from my PC over the SMB share or use the web GUI File Browser to create a folder in "mnt/user/share1/".

     

share1 has cache-mirror set as its cache drive.

     

After creating the folder "mnt/cache-mirror/share1", I can copy files and folders into "mnt/user/share1" normally.

    52 minutes ago, unr41dus3r said:

    it looks like they are the same as mine.

They are not exactly the same: those users have issues removing the datasets, not creating them, but it's likely related, because the end result is the same.

     

    52 minutes ago, unr41dus3r said:

After creating the folder "mnt/cache-mirror/share1", I can copy files and folders into "mnt/user/share1" normally.

Instead of creating a new folder (which won't be a dataset), try typing:

    zfs mount -a

If that works, you should move everything from that dataset to the array, then see if you can delete it manually (make sure it's empty first) with:

    zfs destroy cache-mirror/share1

Then copy new data to that share; a new dataset should now be created. See if the issue happens again, as there's a theory that creating the datasets fresh with rc7 may fix the problem.
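
One way to do the "move everything off" step by hand, purely as an illustration with example paths (writing to /mnt/user0 lands the data on the array rather than back on the pool):

rsync -a --remove-source-files /mnt/cache-mirror/share1/ /mnt/user0/share1/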

    3 hours ago, SimonF said:

You need to unlock it with the padlock on the menu bar on the right. Red is unlocked.

    Thanks, that solved it.

    It must have changed in rc6/rc7 (coming from 6.11). 🙂

    1 minute ago, boomam said:

    Thanks, that solved it.

    It must have changed in rc6/rc7 (coming from 6.11). 🙂

It was added in 6.12 to support touch screens, but the colour was switched in rc6; I think green used to be unlocked.

    3 hours ago, JorgeB said:

They are not exactly the same: those users have issues removing the datasets, not creating them, but it's likely related, because the end result is the same.

     

Instead of creating a new folder (which won't be a dataset), try typing:

    zfs mount -a

If that works, you should move everything from that dataset to the array, then see if you can delete it manually (make sure it's empty first) with:

    zfs destroy cache-mirror/share1

Then copy new data to that share; a new dataset should now be created. See if the issue happens again, as there's a theory that creating the datasets fresh with rc7 may fix the problem.

     

The status now: I ran "zfs mount -a" and after that I saw that all the folders/datasets (zfs list) were on the cache drive.

I started the Mover, and now the folders and the datasets are removed (checked with zfs list again).

     

It was possible to copy something onto the drive and the dataset was created correctly, so the error is gone for the moment.

I will report back if it happens again.





