• Unraid OS version 6.9.0-beta24 available


    limetech

    6.9.0-beta24 vs. -beta22 Summary:

    • fixed several bugs
    • added some out-of-tree drivers
    • added ability to use xfs-formatted loopbacks or not use loopback at all for docker image layers.  Refer to Docker section below for more details
    • (-beta23 was an internal release)

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices/pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
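    The merge order above can be sketched as follows.  The pool names here are made up for illustration, and GNU `sort -V` is used only as an approximation of strverscmp() ordering:

    ```shell
    # Suppose the share is assigned to a pool named "fast", exists on disk1 and
    # disk2, and also on pools "pool10" and "pool2" (hypothetical names):
    echo "fast"                           # 1) pool assigned to the share
    echo "disk1"; echo "disk2"            # 2) array disks, in disk1..disk28 order
    printf '%s\n' pool10 pool2 | sort -V  # 3) remaining pools, ~strverscmp() order
    ```

    Note that strverscmp() compares digit runs numerically, so "pool2" sorts before "pool10", unlike plain lexicographic order.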

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: Let's say you have a 2-device btrfs pool. This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks". Well, this is mostly true, in that the same data exists on both disks, but not necessarily at the block level.  Now let's say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool.  Now you have two 1-device btrfs pools.  Upon array Start a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This, of course, effectively deletes all the data on the moved device.

     

    Language Translation

    A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

    Each language pack exists in public Unraid organization github repos.  Interested users are encouraged to clone and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.
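    The detection can be sketched like this; the driver mapping for xfs is an assumption (btrfs presumably maps to Docker's btrfs storage driver, while xfs would use an overlay-style driver — the release notes don't name it):

    ```shell
    # Minimal sketch of the filesystem detection 'rc.docker' performs.
    # Falls back to / so the sketch runs on systems without /var/lib/docker.
    backing=$(stat -f -c %T /var/lib/docker 2>/dev/null || stat -f -c %T /)
    case "$backing" in
        btrfs) driver=btrfs ;;      # Docker's native btrfs storage driver
        xfs)   driver=overlay2 ;;   # assumed mapping, not stated in the release notes
        *)     driver=unknown ;;
    esac
    echo "storage driver: $driver"
    ```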

     

    Next, 'mount_image' is modified to support loopbacks formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  If the file name ends with ".img", as in "docker.img", we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", we use mkfs.xfs.
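    The suffix rule can be sketched as a simple case match (note the more specific "-xfs.img" pattern must be tested before the generic ".img" pattern):

    ```shell
    # Pick the mkfs tool from the loopback file name, per the rule described above.
    pick_fs() {
        case "$1" in
            *-xfs.img) echo xfs   ;;  # e.g. docker-xfs.img -> mkfs.xfs
            *.img)     echo btrfs ;;  # e.g. docker.img     -> mkfs.btrfs
            *)         echo none  ;;  # not a loopback name; treated as a directory
        esac
    }
    pick_fs docker.img
    pick_fs docker-xfs.img
    ```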


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", the code assumes it is the name of a directory (presumably on a share), which is bind-mounted onto /var/lib/docker.

     

    For example, given "/mnt/user/system/docker/docker", we first create the directory "/mnt/user/system/docker/docker" if necessary.  If this path is on a user share, we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory; the script does not check this.
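    A sketch of that dereference; "disk1" is a stand-in for whatever disk the share path actually resolves to on a real system:

    ```shell
    cfg="/mnt/user/system/docker/docker"      # configured path (the OP's example)
    disk="disk1"                              # assumed result of resolving the user share
    src="/mnt/$disk${cfg#/mnt/user}"          # -> /mnt/disk1/system/docker/docker
    echo "mount --bind $src /var/lib/docker"  # the bind-mount that would be issued
    ```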

     

    In this release however, you must edit the 'config/docker.cfg' file directly to specify a directory, for example:

    DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker"

     

    Finally, it's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta24 2020-07-08

     

    Bug fixes:

    • fix emhttpd crash expanding number of slots for an existing pool
    • fix share protected/not protected status
    • fix btrfs free space reporting
    • fix pool spinning state incorrect

     

    Base distro:

    • curl: version 7.71.0
    • fuse3: version 3.9.2
    • file: version 5.39
    • gnutls: version 3.6.14
    • harfbuzz: version 2.6.8
    • haveged: version 1.9.12
    • kernel-firmware: version 20200619_3890db3
    • libarchive: version 3.4.3
    • libjpeg-turbo: version 2.0.5
    • lcms2: version 2.11
    • libzip: version 1.7.1
    • nginx: version 1.19.0 (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516)
    • ntp: version 4.2.8p15
    • openssh: version 8.3p1
    • pam: version 1.4.0
    • rsync: version 3.2.1
    • samba: version 4.12.5 (CVE-2020-10730, CVE-2020-10745, CVE-2020-10760, CVE-2020-14303)
    • shadow: version 4.8.1
    • sqlite: version 3.32.3
    • sudo: version 1.9.1
    • sysvinit-scripts: version 2.1
    • ttyd: version 20200624
    • util-linux: version 2.35.2
    • xinit: version 1.4.1
    • zstd: version 1.4.5

     

    Linux kernel:

    • version 5.7.7
    • out-of-tree driver: QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • out-of-tree driver: RealTek r8125: version 9.003.05
    • out-of-tree driver: HighPoint rr272x_1x: version v1.10.6-19_12_05

     

    Management:

    • cleanup passwd, shadow
    • docker: support both btrfs and xfs backing filesystems
    • loopbacks: permit xfs or btrfs based on filename
    • mount_image: support bind-mount
    • mount all btrfs volumes using 'space_cache=v2' option
    • mount loopbacks with 'noatime' option; enable 'direct-io'
    • non-rotational device partitions aligned on 1MiB boundary by default
    • ssh: require passwords, disable non-root tunneling
    • web terminal: inhibit warning pop-up when closing window
    • webgui: Add log viewer for vfio-pci
    • webgui: Allow different image types to upload with 512K max
    • webgui: other misc. improvements
    • webgui: vm manager: Preserve VNC port settings


    User Feedback




    Quote

    non-rotational device partitions aligned on 1MiB boundary by default

    Hmm:

     

    Jul 8 09:38:54 Test emhttpd: writing MBR on disk (sdd) with partition 1 offset 64, erased: 0
    fdisk -l /dev/sdd
    Disk /dev/sdd: 111.81 GiB, 120034123776 bytes, 234441648 sectors
    Disk model: TS120GSSD220S   
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000
    
    Device     Boot Start       End   Sectors   Size Id Type
    /dev/sdd1          64 234441647 234441584 111.8G 83 Linux
    root@Test:~# cat /sys/block/sdd/queue/rotational
    0

     


    Just a few comments on the ability to use a folder / share for docker

     

    If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the cache drive.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS formatted cache drive for a while now, and don't see any problems at all)

     

    I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).

     

    My reasoning for this is that

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.
    2. All of the folders contained within the docker folder are not compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share will also allow you to not export it without impacting the other shares' exporting.  (And there are no "user-modifiable" files in there anyway.  If you do need to modify a file within that folder (i.e., a config file for a container, and that config isn't available within appdata), you should do it by going to the container's shell.)
    3. You definitely want the share to be cache-only (although cache-prefer should probably be OK).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

    On this beta (until the GUI properly supports this new feature), you also cannot use Settings - Docker to stop / start the service if you've made the change to the .cfg file to utilize this feature.  (You can stop the service, but in order to restart it you have to enable it via the config file and then stop / start the array)

     

    I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated (once the GUI properly supports these changes) to let you know of any problems that it detects with how you've configured the folder.


    I see the following in my log now:

    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 32
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 37
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 32)
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 32
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 32)
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 32
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 32)
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 32
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 32)
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 37)
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: close fd 37
    Jul 8 05:59:08 BackupServer smbd_audit[21566]: open unassigned.devices/unassigned.devices.emhttp (fd 37)

    The recycle bin plugin is using smb audit to track deleted files and seems to be causing these log entries.  They occur when browsing smb shares.  I'll have to track down how to turn the log entries off.


    Upgraded from beta1 (skipped beta22); now I can't get to the GUI. Below is the strange behavior I found.

     

    - The Unraid boot menu does not wait for input, so I can't select options such as safe mode, GUI mode, etc. (confirmed the keyboard works)

    - When pressing Ctrl-Alt-Delete, the system tries to restart, but:

    • Nginx is not running
    • a "Shutting down php-fpm" warning appears: no pid file found
    • by the way, the system force-restarts once the force-restart timer is reached (the disks appear to unmount successfully)
    11 minutes ago, Benson said:

    The Unraid boot menu does not wait for input,

    You might want to confirm that in /syslinux/syslinux.cfg on the flash drive that the timeout is set to something like 50 (5 seconds)
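    For reference, the top of a stock syslinux.cfg looks roughly like this (a sketch; your menu entries may differ):

    ```
    default menu.c32
    menu title Lime Technology, Inc.
    prompt 0
    timeout 50
    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot
    ```

    The timeout is in tenths of a second, so `timeout 50` waits 5 seconds, and `timeout 0` waits indefinitely in theory but in practice some setups boot straight through.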

    14 minutes ago, Squid said:

    You might want to confirm that in /syslinux/syslinux.cfg on the flash drive that the timeout is set to something like 50 (5 seconds)

     

    After I manually edited disk.cfg to stop the array from auto-starting, I could boot into the GUI, but once I start the array the problem happens again.

     

    As for the boot menu not waiting: that was my fault; I had set the timeout to zero earlier.

     

    [screenshot attached]

     

    Also, if I start in maintenance mode and try to check the filesystem (read-only), nothing shows or executes.

     

    4 minutes ago, Benson said:

    problem happen again.

    emhttpd segfaulting is likely an Unraid issue.  Tom might be able to tell the reason from the error, but if you can get the diagnostics it's best to add them.


    - I have changed the boot menu timeout setting; this does not show in the attached diagnostics.

    - Some disks show "Unsupported partition layout"; this is by design.

     

    ** It seems falling back is the solution **

     

    21 hours ago, johnnie.black said:

    emhttpd segfaulting is likely an Unraid issue.  Tom might be able to tell the reason from the error, but if you can get the diagnostics it's best to add them.

    Yes, attached.

     

    21 hours ago, bonienl said:

    What happens when you start in safe mode?

    Same result; the GUI stays at "starting ... "

     

     

    [screenshot attached]

     


    Recycle Bin plugin is broken with this version.  For the moment, don't depend on it for recovering deleted files.  Samba has been updated, and either something has changed or they broke the vfs_recycle module.  I'll spend some time trying to figure out what is happening.

     

    Got it.  Also fixed the excessive logging.

    Quote

    fix btrfs free space reporting

    This appears to be fixed, but the used space is now wrong; maybe there's another way to get it, like whatever df uses.  This is an empty raid5 pool with 4 x 32GB devices.  Free space now correctly accounts for parity, unlike before, but note the used space:

     

    [screenshots of the pool's reported Size/Used/Free attached]


    Can someone ELi5 the Docker changes in this version?

     

    How does the docker image get formatted with its own filesystem? Wouldn't it inherit whatever filesystem of the drive it's living on?

    What sort of differences/impact might we expect from the bind mount vs loopback?

     

    Just curious.

    5 minutes ago, -Daedalus said:

    wouldn't it inherit whatever filesystem of the drive it's living on

    If you use the "folder" method as described in the OP, then yes

    5 minutes ago, -Daedalus said:

    How does the docker image get formatted with its own filesystem

    Before this release, the docker.img was always BTRFS, regardless of whether the drive it sat on was BTRFS, XFS, or ReiserFS.  To make the image xfs, you change the filename in Settings - Docker to be docker-xfs.img instead of docker.img

     

    5 minutes ago, -Daedalus said:

    What sort of differences/impact might we expect from the bind mount vs loopback?

     

    My post detailing some items in the folder option gives one huge advantage: if you've constantly struggled with the docker.img filling up, this is the fix.  Performance-wise, you won't see any significant difference between the options now available (but the folder method will be faster, if only synthetically, because of the below).

     

    The main reason for these changes, however, is to lessen the excess writes to the cache drive.  The new way of mounting the image should produce fewer writes.  The absolute fewest writes, however, will come via the folder method.  But the GUI doesn't natively support it yet without the change itemized in the OP.


    I haven't played around with the beta myself, but is it/will it be possible to rename existing storage pools? If so, how do shares set to those pools handle the change?

    9 minutes ago, SelfSD said:

    I haven't played around with the beta myself, but is it/will it be possible to rename existing storage pools? If so, how do shares set to those pools handle the change?

    I believe it's not currently possible, at least not with the GUI; maybe manually.  But it doesn't affect the shares, since those will remain the same.  You'll need to correct any internal paths, though.

     

    "/mnt/pool/share" is shared as "\\tower\share"

     

    if you e.g. change the pool name

     

    "/mnt/new_pool_name/share" is still shared as "\\tower\share"

    17 minutes ago, johnnie.black said:

    I believe it's not currently possible, at least not with the GUI,

    You can rename a pool by stopping the array, then clicking on the pool name on the Main page, which brings up the pool settings.  There, clicking on the name opens a window to rename the pool.

     

    Renaming a pool does not change any internal references. For example if the path of your docker image contains a direct reference to the pool name, e.g. /mnt/cache/system/docker.img, you will need to update this reference manually.

     

    3 hours ago, Squid said:

    If you use the "folder" method as described in the OP, then yes

    Before this release, the docker.img was always BTRFS, regardless of whether the drive it sat on was BTRFS, XFS, or ReiserFS. [ snip ] The main reason for these changes, however, is to lessen the excess writes to the cache drive.  The new way of mounting the image should produce fewer writes.  The absolute fewest writes, however, will come via the folder method.  But the GUI doesn't natively support it yet without the change itemized in the OP.

    Thank you! I figured it was mostly to address the excess writes (I moved back to a single XFS drive from a pool because of this), just wasn't sure if there were any other effects as well.

    12 hours ago, johnnie.black said:

    Hmm:

    I need tests for my tests ... fixed in next release.

    9 hours ago, Benson said:

     

    After I manually edited disk.cfg to stop the array from auto-starting, I could boot into the GUI, but once I start the array the problem happens again.

     

    As for the boot menu not waiting: that was my fault; I had set the timeout to zero earlier.

     


     

    Also, if I start in maintenance mode and try to check the filesystem (read-only), nothing shows or executes.

     

    This bug is fixed in the next release.  You can disable NFS as a workaround for now.  If that's not desirable, then it's best for you to downgrade, but first:

     

    1. Download a flash backup just to be safe: Main/Flash/Flash Backup

     

    2. Then from Terminal:

    mv /boot/config/disk.cfg.bak /boot/config/disk.cfg
    rm -r /boot/config/pools
    2 minutes ago, limetech said:

    You can disable NFS as a workaround for now.

    Note, I will try that first, thanks.

    8 hours ago, johnnie.black said:

    This appears to be fixed, but the used space is now wrong; maybe there's another way to get it, like whatever df uses.  This is an empty raid5 pool with 4 x 32GB devices.  Free space now correctly accounts for parity, unlike before, but note the used space:

     

    [ snip ]

    The webGui is simply subtracting Free from Size to get Used.

     

    What is output of

    stat -f /mnt/cache
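    As an aside, the subtraction described above can be reproduced directly from statfs fields (a sketch; the path falls back to / so it runs anywhere):

    ```shell
    # Used = Size - Free, exactly as described above (in filesystem blocks).
    path=/mnt/cache
    [ -d "$path" ] || path=/            # fall back so the sketch runs on any system
    set -- $(stat -f -c '%b %f' "$path")  # total blocks, then free blocks
    echo "used blocks: $(( $1 - $2 ))"
    ```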

     

     

    2 hours ago, bonienl said:

    Renaming a pool does not change any internal references. For example if the path of your docker image contains a direct reference to the pool name, e.g. /mnt/cache/system/docker.img, you will need to update this reference manually.

    Which is why it's best to specify a /mnt/user/... path.  The actual loopback-mount or bind-mount will first determine what physical disk the file/directory is on and specify that as the source, so there's no performance degradation.

    1 hour ago, limetech said:

    What is output of

    
    stat -f /mnt/cache
    root@Test:~# stat -f /mnt/cache
      File: "/mnt/cache"
        ID: 98929008f93c43e2 Namelen: 255     Type: btrfs
    Block size: 4096       Fundamental block size: 4096
    Blocks: Total: 31266616   Free: 31207938   Available: 22859018
    Inodes: Total: 0          Free: 0

     




