• Unraid OS version 6.9.0-beta24 available


    limetech

    6.9.0-beta24 vs. -beta22 Summary:

    • fixed several bugs
    • added some out-of-tree drivers
    • added the ability to use xfs-formatted loopbacks, or no loopback at all, for docker image layers.  Refer to the Docker section below for more details
    • (-beta23 was an internal release)

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
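    The migration above can be sketched in a few lines of shell.  This is a hedged illustration run against a scratch directory: the key names and values below are made up, and the real upgrade logic may differ.

```shell
# Simulate the 6.9 upgrade's cache-pool config migration on a scratch copy.
cfg=$(mktemp -d)/config
mkdir -p "$cfg/pools"
# A made-up pre-6.9 disk.cfg with cache assignment keys mixed in:
printf 'startArray="yes"\ncacheId="Samsung_SSD"\ncacheNumDevices="2"\n' > "$cfg/disk.cfg"

cp "$cfg/disk.cfg" "$cfg/disk.cfg.bak"                   # backup, as described
grep '^cache' "$cfg/disk.cfg.bak" > "$cfg/pools/cache.cfg"   # cache keys move out...
grep -v '^cache' "$cfg/disk.cfg.bak" > "$cfg/disk.cfg"       # ...and disk.cfg keeps the rest
```

Reverting to pre-6.9 would read only disk.cfg, which no longer contains the cache keys; hence the need to re-assign cache devices by hand after a downgrade.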

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
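    For illustration, GNU sort -V uses essentially the same natural ordering as strverscmp(), so you can preview how hypothetical pool names would be merged (the pool names below are invented):

```shell
# Hypothetical pool names ordered the way strverscmp() would sort them;
# GNU "sort -V" implements the same natural/version comparison.
printf '%s\n' pool10 pool2 cache pool1 | sort -V
```

Note that "pool10" sorts after "pool2" under this comparison, unlike plain lexicographic order.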

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: let's say you have a 2-device btrfs pool. This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks". Well, this is mostly true, in that the same data exists on both disks, but not necessarily at the block level.  Now let's say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool.  Now you have two 1-device btrfs pools.  Upon array Start a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.

     

    Language Translation

    A huge amount of work and effort by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

    Each language pack exists in a public Unraid-organization GitHub repo.  Interested users are encouraged to clone it and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.

     

    Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.
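    The suffix logic can be sketched like this (a hedged illustration, not the actual 'mount_image' script; the helper name is invented):

```shell
# pick_mkfs <path>: choose the filesystem tool from the loopback name.
# Order matters: "-xfs.img" must be tested before the generic ".img".
pick_mkfs() {
  case "$1" in
    *-xfs.img) echo mkfs.xfs ;;    # e.g. docker-xfs.img
    *.img)     echo mkfs.btrfs ;;  # e.g. docker.img
    *)         echo bind-mount ;;  # a directory name: no image at all
  esac
}
pick_mkfs docker.img
pick_mkfs docker-xfs.img
pick_mkfs /mnt/user/system/docker/docker
```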


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

    For example, given "/mnt/user/system/docker/docker", we first create the directory "/mnt/user/system/docker/docker" if necessary.  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this.
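    The dereference step might look roughly like this (a hedged sketch, not the actual script; the helper name and the scratch layout in the usage note are invented for illustration):

```shell
# deref <mount-root> <share-relative-path>: print the first backing
# volume that actually holds the directory. On a real server the root
# would be /mnt, scanning /mnt/cache, /mnt/disk1, /mnt/disk2, ...
deref() {
  root=$1; rel=$2
  for top in "$root"/*; do
    if [ -d "$top/$rel" ]; then
      echo "$top/$rel"    # e.g. /mnt/disk1/system/docker/docker
      return 0
    fi
  done
  return 1
}
```

On Unraid, `deref /mnt system/docker/docker` would then print the disk path to bind-mount onto /var/lib/docker.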

     

    In this release however, you must edit the 'config/docker.cfg' file directly to specify a directory, for example:

    DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker"

     

    Finally, it's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when first loading the Dashboard or Docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
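    For scripted edits, the model attribute can also be switched with sed.  Hedged: the transform is shown here on a one-line snippet; in practice you would run it on the output of "virsh dumpxml <vm>" (back it up first) and then "virsh define" the result.  "MyVM" would be a placeholder domain name.

```shell
# Switch the interface model from virtio to virtio-net in a dumped XML line.
xml="<model type='virtio'/>"
fixed=$(printf '%s' "$xml" | sed "s/type='virtio'/type='virtio-net'/")
echo "$fixed"
```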

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta24 2020-07-08

     

    Bug fixes:

    • fix emhttpd crash expanding number of slots for an existing pool
    • fix share protected/not protected status
    • fix btrfs free space reporting
    • fix pool spinning state incorrect

     

    Base distro:

    • curl: version 7.71.0
    • fuse3: version 3.9.2
    • file: version 5.39
    • gnutls: version 3.6.14
    • harfbuzz: version 2.6.8
    • haveged: version 1.9.12
    • kernel-firmware: version 20200619_3890db3
    • libarchive: version 3.4.3
    • libjpeg-turbo: version 2.0.5
    • lcms2: version 2.11
    • libzip: version 1.7.1
    • nginx: version 1.19.0 (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516)
    • ntp: version 4.2.8p15
    • openssh: version 8.3p1
    • pam: version 1.4.0
    • rsync: version 3.2.1
    • samba: version 4.12.5 (CVE-2020-10730, CVE-2020-10745, CVE-2020-10760, CVE-2020-14303)
    • shadow: version 4.8.1
    • sqlite: version 3.32.3
    • sudo: version 1.9.1
    • sysvinit-scripts: version 2.1
    • ttyd: version 20200624
    • util-linux: version 2.35.2
    • xinit: version 1.4.1
    • zstd: version 1.4.5

     

    Linux kernel:

    • version 5.7.7
    • out-of-tree driver: QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • out-of-tree driver: RealTek r8125: version 9.003.05
    • out-of-tree driver: HighPoint rr272x_1x: version v1.10.6-19_12_05

     

    Management:

    • cleanup passwd, shadow
    • docker: support both btrfs and xfs backing filesystems
    • loopbacks: permit xfs or btrfs based on filename
    • mount_image: support bind-mount
    • mount all btrfs volumes using 'space_cache=v2' option
    • mount loopbacks with 'noatime' option; enable 'direct-io'
    • non-rotational device partitions aligned on 1MiB boundary by default
    • ssh: require passwords, disable non-root tunneling
    • web terminal: inhibit warning pop-up when closing window
    • webgui: Add log viewer for vfio-pci
    • webgui: Allow different image types to upload with 512K max
    • webgui: other misc. improvements
    • webgui: vm manager: Preserve VNC port settings


    User Feedback

    Recommended Comments



    2 hours ago, dlandon said:

    Now I see this:

    Version 2020.07.10b is available


    Cache was set as an unsecured network share; had to change it back to no export, private.

    Edited by NickF

    After reading that the privacy extension setting should now work, I wondered why I am getting random IPv6 interface identifiers even after updating.

    Turns out Unraid uses dhcpcd to get the addresses, and there 'slaac' is set to 'private' instead of 'hwaddr'. Maybe changing the default to 'hwaddr' should be considered, because 'private' makes it very hard to configure a firewall properly to allow traffic to the Unraid system (or at least provide an option to do so).

     

    Uncommenting slaac hwaddr instead of slaac private in

    /etc/dhcpcd.conf

    and restarting networking with

    php -q /usr/local/emhttp/plugins/dynamix/scripts/netconfig eth0

    does the trick.

     

    See my feature request:

     

    Edited by fxp555
    Added command to restart networking

    Had to roll back to beta22. The array wouldn't decrypt some drives on start (got an incorrect passkey error). Odd thing is that each time I stopped the array and tried again, it would struggle to mount different disks.

    Needed to get the array back up asap so it slipped my mind to take logs.

    Is this a known issue?

    1 hour ago, limetech said:

    no

    Re-updated and it behaved this time. I'll grab logs if it does it again.

    On 7/8/2020 at 7:22 AM, dlandon said:

    Recycle Bin plugin is broken with this version.  For the moment don't depend on it for recovering deleted files.  Samba has been updated and either something has changed or they broke the vfs_recycle module.  I'll spend some time trying to figure out what is happening.

     

    Got it.  Also fixed the excessive logging.

    Can you explain what you did to fix the Recycle Bin issues?


    After I updated to beta24, br0 disappeared and I do not understand how to get it back; I can't run any of the VMs due to it missing.

    What I have done so far is to delete and recreate the docker.img (found this in another thread), but it didn't help.

     

    What to do next?

    unraid-diagnostics-20200712-0908.zip

    20 minutes ago, Koenig said:

    After I updated to beta24, br0 disappeared and I do not understand how to get it back; I can't run any of the VMs due to it missing.

    What I have done so far is to delete and recreate the docker.img (found this in another thread), but it didn't help.

     

    What to do next?

    unraid-diagnostics-20200712-0908.zip

    I think this is a known issue: Go to Settings -> Docker -> Disable Docker -> Apply -> Enable Docker -> Apply.

    Edit: Now that I see you already recreated docker.img, I am not so sure. You could try disabling and re-enabling VMs as well.

     

    Edited by fxp555
    19 hours ago, fxp555 said:

    Uncommenting slaac hwaddr instead of slaac private in

    The next Unraid version will have "slaac hwaddr" enabled by default.

     

    1 hour ago, fxp555 said:

    I think this is a known issue: Go to Settings -> Docker -> Disable Docker -> Apply -> Enable Docker -> Apply.

    Edit: Now that I realized that you already recreated docker.img I am not that sure. You could try disabling and enabling VMs as well.

     

    Yes, it doesn't help.

     

    I also have another issue, or perhaps it is related: when I try to add a custom network to Docker (192.168.8.0/24, gateway: 192.168.8.1, DHCP: not set) I lose all network connectivity to Unraid.

    If I reboot the machine via the console, after the reboot the console says "ipv4 not set", but if I log in and run "ifconfig eth0" I can see it gets the correct IP (I have set it to always get the same IP via the DHCP server on my LAN). Yet I still cannot reach the web GUI.

    EDIT: A change to the network settings seems to have solved it. Not that any real change was made; I just toggled a random setting back and forth so the Update button became enabled, pressed Update, and then it was solved...

    Edited by Koenig
    9 hours ago, sunbear said:

    Can you explain what you did to fix the Recycle Bin issues?

    Had to change some global log settings for the recycle bin after Unraid changed the samba global log setting.

    On 7/9/2020 at 2:08 AM, limetech said:

    Hey looking at source of 'df' command, looks like they use 'statvfs()' instead of 'statfs()' - I'll give that a go.

    Sorry to reply to this locked thread but this was just posted on the btrfs mailing list, you can probably also see it in the df source code but just in case it helps:

    Quote

    statvfs (the 'df' syscall) does not report a "used" number, only total
    and available btrfs data blocks (no metadata blocks are counted).
    'df' computes "used" by subtracting f_blocks - f_bavail.


    https://lore.kernel.org/linux-btrfs/20200723045106.GL10769@hungrycats.org/T/#t
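    The quoted computation is easy to reproduce from the raw fields.  Hedged: /tmp stands in below for a btrfs pool mount point; on most filesystems free and avail differ only by reserved blocks, while on btrfs avail also reflects the raid profile.

```shell
# Fetch total (%b), free (%f) and available (%a) block counts, then
# derive "used" the way the quote says df does: f_blocks - f_bavail.
set -- $(stat -f -c '%b %f %a' /tmp)
blocks=$1; bfree=$2; bavail=$3
used=$(( blocks - bavail ))
echo "blocks=$blocks bfree=$bfree bavail=$bavail used=$used"
```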

     

    Forgot to mention: as it currently is, we already knew used space on the GUI is not correct for any raid5/6 pool, but it was recently reported in the forum that a 3-device raid1 pool also reports used space incorrectly (possibly the same for any odd-number raid1 pool), while df shows it correctly.  Both report free space incorrectly, but that part I would think is likely to be fixed by btrfs in the future.



    The 'statfs()' call, or 'stat -f' command returns 3 pieces of info for a volume:

     

    fsblkcnt_t f_blocks;  /* Total data blocks in filesystem */
    fsblkcnt_t f_bfree;   /* Free blocks in filesystem */
    fsblkcnt_t f_bavail;  /* Free blocks available to unprivileged user */

    Prior to the report of raid5/6 being reported wrong, we just used f_blocks for the total number of usable blocks in the file system and f_bfree for total free blocks.  These numbers take into account the btrfs 'raid' level and work for single, raid0, and raid1 of any number of devices.  Then, to fix raid5/6 reporting, I tried using f_bavail instead; that seems to work for free space, but f_blocks for raid5/6 is still wrong.

     

    In next beta I have switched back to using f_bfree.  The raid5/6 numbers are just plain wrong and this is a BUG that btrfs developers are too stubborn to fix.  raid5/6 is still "experimental" and so I am done wasting time on this.  If you use raid5/6 then beware these numbers are wrong.

    9 hours ago, limetech said:

    so I am done wasting time on this

    I understand the desire to do that, but allow me to make a case for not going back:

     

    -yes, raid5/6 is mostly experimental and not in use by many, but with the old way of doing it Unraid also reports the wrong free space for raid1 with different-size devices, e.g., a 120GB + 240GB pool will report 180GB usable, and this keeps coming up in the forums when users run out of space despite what the GUI shows.

     

    -if you leave it as is these issues will never get fixed.

     

    -if you make the change and start reporting the same way df does:

    • both used and free space stats would be correctly reported for raid1 pools with different size devices
    • both used and free space stats would be correctly reported for raid5/6 pools
    • df is widely used for scripts and such with btrfs so any future bugs should be quickly found and fixed

     

    AFAIK there's currently only one situation where df doesn't report the correct stats, a raid1 pool with an odd number of devices, I already reported this to the btrfs mailing list and it was already confirmed by another user, so I expect it will be fixed soon, I mean in the near future :)

     

     

    6 hours ago, johnnie.black said:

    -if you make the change and start reporting the same way df does:

    Have you tested lately?  In my tests 'df' also reports things wrong, eg, raid5/6 usable size.  It would be useful to compare output of 'stat -f' with 'df' output.

    26 minutes ago, limetech said:

    Have you tested lately?  In my tests 'df' also reports things wrong, eg, raid5/6 usable size.

    Yep, note that I mentioned correct used and free space.  For total size, df includes everything: non-usable space for a raid1 pool with different-size devices, and parity for raid5/6.  Whether total size should include that is, I think, debatable, but IMHO the most important stats are used and free space, and those are always reported correctly by df (except free space for the above-mentioned scenario, an odd number of devices in a raid1 pool).

     

    Empty 250 + 500GB raid1 pool:

     

    Unraid pre-beta25 - used is correct, free is wrong


     

    Unraid beta25 - used is wrong, free is correct


     

     

    df -hH - both used and free are correct:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdg1       376G  3.6M  249G   1% /mnt/cache

    stat -f

      File: "/mnt/cache"
        ID: 4270a881f170f3d Namelen: 255     Type: btrfs
    Block size: 4096       Fundamental block size: 4096
    Blocks: Total: 91573138   Free: 91572270   Available: 60779040
    Inodes: Total: 0          Free: 0

     

    Empty 5 x 500GB raid5 pool:

     

    Unraid pre-beta25 - used is correct, free is wrong


     

    Unraid beta25 - used is wrong, free is correct


     

    df -hH - both used and free are correct

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdd1       2.6T  3.8M  2.0T   1% /mnt/cache

    stat -f

      File: "/mnt/cache"
        ID: 64ae90de57db7af6 Namelen: 255     Type: btrfs
    Block size: 4096       Fundamental block size: 4096
    Blocks: Total: 610483190  Free: 610482286  Available: 487844800
    Inodes: Total: 0          Free: 0

     

     

    10 hours ago, johnnie.black said:

    AFAIK there's currently only one situation where df doesn't report the correct stats, a raid1 pool with an odd number of devices, I already reported this to the btrfs mailing list and it was already confirmed by another user, so I expect it will be fixed soon, I mean in the near future

    That is the exact config I was trying to get right, because on my development workstation I have a 3-device btrfs raid-1.

     

    What you get from statvfs() is:

    blocks - total blocks

    free - unused blocks

    avail - blocks available to be assigned

     

    Normally free == avail, but in the case of btrfs, avail takes the raid organization into account.

     

    We only care about size and free - where size is the number of blocks available to hold user data, and free is how much of that total is available.

     

    For next release I'm doing this:

     

    size = total-free + avail

    free = avail

     

    Using this I think everything's correct except for an odd number of devices in a raid-1, which is unfortunate.
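    That computation, spelled out against the stat -f fields (a hedged sketch; /tmp stands in for a pool mount point):

```shell
# size = total - free + avail ; free = avail   (as described above)
set -- $(stat -f -c '%b %f %a' /tmp)
total=$1; free=$2; avail=$3
size=$(( total - free + avail ))   # blocks that can hold user data
echo "size=$size free=$avail"
```

Intuitively: (total - free) is the data already stored, and avail is what can still be stored, so their sum is the effective capacity under the current raid profile.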

     

    3 hours ago, limetech said:

    For next release I'm doing this:

    Thanks for giving it another shot.  I appreciate that getting btrfs stats to work correctly gets frustrating; hopefully once it's good it stays good.

    3 hours ago, limetech said:

    except for odd number of devices in a raid-1, which is unfortunate.

    Including this configuration, which should start working correctly once it gets fixed in a future kernel release.




    This is now closed for further comments
