• Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
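
    As a purely illustrative example (the share and pool names here are hypothetical), if a share named "media" is assigned to a pool named "cache", also has files on disk1, and has files on a second pool named "backup", then the listing you see under /mnt/user is the union of the underlying volumes, merged in the order above:

        # Hypothetical share "media": these are the volumes that get merged,
        # in order, to produce the user-share view.
        ls /mnt/cache/media     # 1) the pool assigned to the share
        ls /mnt/disk1/media     # 2) array disks, disk1 .. disk28
        ls /mnt/backup/media    # 3) remaining pools, in strverscmp() order
        # the combined result is what appears at:
        ls /mnt/user/media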

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: Suppose you have a 2-device btrfs pool.  This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool - you now have two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.
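
    If you are unsure whether a device still carries an old pool's btrfs signature before you move it, a standard btrfs command will show which filesystem it currently belongs to (the device name below is hypothetical, and the command only inspects, it changes nothing):

        # Show the btrfs filesystem a partition belongs to (inspection only).
        # /dev/sdc1 is a hypothetical example; run with the array Stopped.
        btrfs filesystem show /dev/sdc1
        # If the device is still listed as a member of the old multi-device pool,
        # assigning it to a new pool and starting the array will wipe it as
        # described above.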

     

    1 MiB Partition Alignment

    We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary.  That is, for devices which present 512-byte sectors, partition 1 will start at sector 2048; for devices with 4096-byte sectors, at sector 256.  This partition layout is now used for all non-rotational storage (and only non-rotational storage).
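
    The sector numbers follow directly from the math: 1 MiB is 1,048,576 bytes, so 1,048,576 / 512 = 2048 and 1,048,576 / 4096 = 256.  If you want to check how an existing device is laid out, standard tools will report the logical sector size and the starting sector of partition 1 (the device name below is a hypothetical example):

        # Report the logical sector size and partition start sector of a device
        # (/dev/sdb is a placeholder; these commands only read information).
        cat /sys/block/sdb/queue/logical_block_size
        fdisk -l /dev/sdb    # the "Start" column shows the first sector of each partition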

     

    It is not clear what benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others, perhaps a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).

     

    To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

    blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!
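
    One way to double-check the identifier before running the command above (this is a read-only listing; cross-check the model and serial against what the Main page shows):

        # List block devices with size, model and serial; makes no changes.
        lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT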

     

    Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.

     

    Language Translation

    A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka, webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

    Each language pack exists in a public repo under the Unraid organization on GitHub.  Interested users are encouraged to clone these repos and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.

     

    Next, 'mount_image' is modified to support a loopback file formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

    For example, if "/mnt/user/system/docker/docker" then we first create, if necessary the directory "/mnt/user/system/docker/docker".  If this path is on a user share we then "dereference" the path to get the disk path which is then bind-mounted onto /var/lib/docker.  For exmaple, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this.

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

    If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS formatted cache drive for a while now, and don't see any problems at all)

     

    I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).

     

    My reasoning for this is that:

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.

    2. None of the folders contained within the docker folder are compatible with being exported over SMB, so you cannot gain access to them that way.  Using a separate share will also allow you to leave it unexported without impacting the other shares' exporting.  (And there are no "user-modifiable" files in there anyways.  If you do need to modify a file within that folder (ie: a config file for a container, and that config isn't available within appdata), you should be doing it via the container's shell)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be ok).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

    I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.
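
    After isolating a device this way and rebooting, you can confirm from the command line that no GPU driver grabbed it (the PCI address below is a hypothetical example; an isolated device should show "Kernel driver in use: vfio-pci"):

        # Show the kernel driver bound to a specific PCI device (read-only).
        # 01:00.0 is a placeholder address; find yours on the System Devices page.
        lspci -nnk -s 01:00.0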

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

     

    SSH Improvements

    There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions) - see the illustrative snippet after this list:

    • only the root user is permitted to log in via ssh (remember: no traditional users in Unraid OS - just 'root')
    • non-null password is now required
    • non-root tunneling is disabled
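
    For reference, restrictions like these are normally expressed with standard OpenSSH directives along the lines shown below.  This is an illustration only - the exact contents of the sshd_config that Unraid ships may differ:

        # Illustrative sshd_config directives (standard OpenSSH options);
        # not a copy of Unraid's actual /etc/ssh/sshd_config.
        PermitRootLogin yes          # root login stays enabled...
        AllowUsers root              # ...and only root may log in
        PermitEmptyPasswords no      # a non-null password is required

        Match User *,!root           # any non-root user (none exist by default)
            AllowTcpForwarding no    # disable non-root tunneling/forwarding
            PermitTunnel no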

     

    In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device, and we have set up a symlink from /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.
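
    For example, to keep an ssh public key across reboots you can simply drop it into the usual location; because of the symlink it actually lives on the flash device (the key file name below is a placeholder):

        # /root/.ssh is symlinked to /boot/config/ssh/root, so this persists.
        mkdir -p /root/.ssh
        cat my_key.pub >> /root/.ssh/authorized_keys    # my_key.pub is a placeholder
        chmod 600 /root/.ssh/authorized_keys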

     

    Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirs).  The purpose is to restore the host ssh keys; however, this mechanism can be used to define custom ssh_config and sshd_config files (not recommended).

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
    • docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface



    User Feedback

    Recommended Comments



    9 minutes ago, Velodo said:

    I'm excited to see the possibility of zfs! Assuming this does get added, is it likely to be added in this release or a future release? I know giving exact timelines is impossible, but I've been planning an upgrade and part of that will include a smaller zfs pool that will be always spun up with everything except large media files, then a large unraid pool with the bulk of my media which will keep the drives spun down when possible to save on electricity. If official implementation is a ways out yet, I might set it up with the zfs plugin and try to deal with importing it later, but starting off with native support would probably be better.

    While things may change, I really don't expect LT to implement ZFS in 6.9.0 due to a few factors:

    • Has the question surrounding zfs licensing been answered? It's less of a legal concern for an enthusiastic user to compile zfs with the Unraid kernel and share it. Most businesses need to get proper (and expensive) legal advice to assess this sort of stuff.
    • ZFS would count as a new filesystem, and I could be wrong but I vaguely remember the last time a new filesystem was implemented was from 5.x to 6.x, with XFS replacing ReiserFS. So it wasn't just a major release but a new version number altogether.
    • At the very least, the 6.9.0 beta has gone far enough along that adding ZFS would risk destabilising and delaying the release (which is kinda already overdue anyway, as kernel 5.x was supposed to be out with Unraid 6.8 - so overdue that LT has made the unprecedented move of doing a public beta instead of only releasing RCs)

     

    So TL;DR: you are better off with the ZFS plugin (or custom-built Unraid kernel with zfs baked in) if you need ZFS now.

     

    Other than the minor annoyance of needing to use the CLI to monitor my pool free space and health, there isn't really any particular issue that I have seen so far, including when I attempted a mock failure-and-recovery event (the "magic" of just unplugging the SSD 😅)

     


    All very good points, thanks for taking the time to answer! I can't help but wonder how many people will make the switch to Unraid from Freenas if zfs gets implemented. When I tried out Freenas years ago before purchasing Unraid it was certainly decent, but Unraid was way ahead when it comes to all the plugins, VMs, and Docker. Really glad I made the switch before it was too late, but as my data has grown, my need for many TB of faster storage than my Unraid pool can muster has also grown. I'm really hoping zfs will be the answer to that.

    4 hours ago, Velodo said:

    All very good points, thanks for taking the time to answer! I can't help but wonder how many people will make the switch to Unraid from Freenas if zfs gets implemented. When I tried out Freenas years ago before purchasing Unraid it was certainly decent, but Unraid was way ahead when it comes to all the plugins, VMs, and Docker. Really glad I made the switch before it was too late, but as my data has grown, my need for many TB of faster storage than my Unraid pool can muster has also grown. I'm really hoping zfs will be the answer to that.

     

    If you just want a fast pool, you don't quite need ZFS. 6.9.0 + btrfs cache pool work just as well.

    You might be mixing up the Unraid array with the (cache) pool. The pool runs RAID and has no performance limitation.

     

    4 hours ago, testdasi said:

     

    If you just want a fast pool, you don't quite need ZFS. 6.9.0 + btrfs cache pool work just as well.

    You might be mixing up the Unraid array with the (cache) pool. The pool runs RAID and has no performance limitation.

     

    I've been using 3 SSDs in my cache pool, which is set to run the equivalent of RAID5, so I'm aware it can be fast. However, with all the issues surrounding btrfs I'm not even really comfortable with just my appdata and VMs on there, but I haven't had issues so far. I really wouldn't trust my important data on a ~60TB btrfs pool at this point in time... maybe once btrfs matures some more. Hence why I want zfs. I basically need 60-80TB of fast, reliable storage right now for non-media + 200TB of whatever storage for media. ZFS snapshots and scrubbing will just be a nice bonus.

    On 7/26/2020 at 11:19 PM, testdasi said:

    Has the question surrounding zfs licensing been answered?

    OpenZFS, as used by FreeNAS, is open source.

     

    EDIT: Oh, I see. A clash of two different licences.


    I updated to the beta and noticed 1 minor issue.

     

    The "new config" option does not have the option to preserve cache pools. If you select all it still preserves it but the individual option is missing.


    Another minor bug I noticed.

     

    It won't let me change the number of slots on the 1st cache pool once I have any drives assigned to it. If I unassign all the drives I can change the slots.

    4 minutes ago, TexasUnraid said:

    Another minor bug I noticed.

     

    It won't let me change the number of slots on the 1st cache pool once I have any drives assigned to it. If I unassign all the drives I can change the slots.

    Not a bug, but by design to prevent misconfiguration.

    You need to set the number of slots first before assigning devices.

    You can not change the slot number if you assign devices first.

    1 hour ago, TexasUnraid said:

    I updated to the beta and noticed 1 minor issue.

     

    The "new config" option does not have the option to preserve cache pools. If you select all it still preserves it but the individual option is missing.

    Not a bug, but by design. The preserve options allow only all data devices or all pool devices.

    4 minutes ago, bonienl said:

    Not a bug, but by design to prevent misconfiguration.

    You need to set the number of slots first before assigning devices.

    You can not change the slot number if you assign devices first.

    So you can't dynamically add drives to a cache pool anymore? Seems like a strange choice?

     

    What is the reason behind not being able to expand the cache pools at a later date anymore?

    3 minutes ago, bonienl said:

    Not a bug, but by design. The preserve options allow only all data devices or all pool devices.

    In my case it did not list cache at all, only array and parity?

     

    I clicked both of them, which checked the "All" option, and this seemed to also preserve the cache as well, but there was no option to preserve the cache listed.

    2 minutes ago, TexasUnraid said:

    So you can't dynamically add drives to a cache pool anymore

    No, there is no restriction. You can add up to 30 devices in a pool

     

    3 minutes ago, TexasUnraid said:

    What is the reason behind not being able to expand the cache pools at a later date anymore?

    You can still expand a pool any time you want.

    There is a safety measure in place to prevent a wrong profile selection (there is a report about it, but I can't find it that quickly). This forces the user to work in a predefined sequence: first set the slot number (if this needs to be changed) and then assign devices.

     

    6 minutes ago, TexasUnraid said:

    In my case it did not list cache at all, only array and parity?

    (screenshot of the New Config preserve options, which include "Pools")


    I feel silly now, I am so used to seeing the parity in the new config I didn't even register that the P word was different lol. My bad.

     

    Overall I really really like the new pools options. I would prefer being able to change the number of slots on the fly; guess I will just leave them all set to like 10 - annoying to work around, but easier than unassigning and reassigning all the drives later.

     

    Just moving files around so that I could reformat the cache was made way easier with the cache pools and lets me change the cache setup to better fit my needs.

     

    It's also nice to know that the option is there to move to a more classic raid setup down the road if the need ever arises.

     

    Sorry if I seem to be complaining. I am a bit of a perfectionist, so I notice small bugs, and anything not working as well as it could really annoys me.

     

    Not to worry, I am almost done setting up the server and then I will move on to other things. lol 😉

    6 minutes ago, TexasUnraid said:

    will just leave them all set to like 10 - annoying to work around, but easier than unassigning and reassigning all the drives later.

    It is not as bad as you think, and there is no need to use a fixed preset of slots.

    When you stop the array to make a change, start by changing the number of slots as desired (or leave as is), then assign the (new) devices.

    Next, start the array and it will use the updated assignments.

    Whenever you stop the array again you can repeat the steps above.

     


    Something I talked about in another thread, I have determined that the best setup for my needs right now is for me to manually split up data across disks on the array.

     

    It allows me to group data together to minimize disk spin-ups, along with ensuring that simultaneously accessed data is split across different disks, increasing bandwidth. Things unraid can't possibly know.

     

    The issue is that turbo write gets disabled when I copy between disks in the array, which cuts the speed to almost 1/3 along with thrashing the disks for no reason. When moving hundreds of GBs of data on a regular basis in this fashion, it really slows things down, to the point that I'm not sure how to deal with it.

     

    I read that in 6.7 and earlier turbo write was able to be enabled during disk-to-disk transfers. Is it possible this will ever be re-enabled, or be given a user setting to enable it? Or to disable the "multi-stream" option, I think it was called?

     

    With turbo write enabled it is working perfectly for my needs, and some others also mentioned that they would prefer this in the other thread.

    1 minute ago, bonienl said:

    It is not as bad as you think, and there is no need to use a fixed preset of slots.

    When you stop the array to make a change, start by changing the number of slots as desired (or leave as is), then assign the (new) devices.

    Next, start the array and it will use the updated assignments.

    Whenever you stop the array again you can repeat the steps above.

     

    Ok, this time I know it was a bug lol.

     

    Yeah, I had the array stopped when I was making the changes earlier (was reformatting all the cache drives to get the new alignment).

     

    With the array stopped, and I think this was after clearing the config (I was removing a drive from the cache pool at the same time), the option to change the number of slots was visible but grayed out.

     

    I just stopped the array and checked again and sure enough, works just like it did in 6.8, allowing me to change the number of slots.

    2 minutes ago, TexasUnraid said:

    I just stopped the array and checked again and sure enough, works just like it did in 6.8,

    The behavior in 6.8 and 6.9 is exactly the same; the only difference is that in 6.9 it applies to multiple pools instead of one.

     

    Just now, bonienl said:

    The behavior in 6.8 and 6.9 is exactly the same; the only difference is that in 6.9 it applies to multiple pools instead of one.

     

    Yeah, after stopping the array again it did. The first time must have been a bug, as I 100% know that the slots field was grayed out until I unassigned all the drives from the pool; then it allowed me to change the slots and I could re-add the drives to the pool.

     

    It seems to be working now though, so no idea why that was the case.

    5 minutes ago, TexasUnraid said:

    It seems to be working now though, so no idea why that was the case.

    I did a quick test by doing a New Config and starting fresh.

    All is working as expected and I can't reproduce your issue.

    If/when you encounter this issue again, please include the exact steps to reproduce it. Thx

     


    Also, just wanted to say good work on fixing the pool free space not reporting correctly. Kinda funny to see 128gb used on a clean pool but hey it works!👍

    12 hours ago, TexasUnraid said:

    Kinda funny to see 128gb used on a clean pool but hey it works!

    That anomaly is discussed here - the important thing is that the free space is reported correctly: 

     

    13 minutes ago, John_M said:

    That anomaly is discussed here - the important thing is that the free space is reported correctly: 

     

    Yeah, I am not knocking it; I meant exactly what I said: it is kinda funny to see the space used on a clean format, but it works just fine and is a major improvement over not knowing the correct free space. Ran into some free space issues last week due to the mismatch, actually. 😉


    Hopefully not crying wolf again, but I noticed another small bug in the shares menu. I was trying to update several shares to use the new cache-only setting, but the "write into" option does not seem to update the cache drive to use when applying it to other shares.


    While re-doing my cache anyways, I decided to connect the drives to the onboard SATA controller and run the trim command.

     

    It will not trim them though; even if I try it manually it just says "discard is not supported on this drive".

     

    After some research I am pretty sure it is because I am using an encrypted cache and the luks mount has to be mounted with the discard option to enable trim support.

     

    It would be cool if this could either be made the default, or exposed as an option on cache pools.

     

    If that is not an option, is it possible to use some kind of remount command like we did with the space_cache=v2 to enable trim on encrypted cache drives?
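
    For reference, outside of Unraid's own scripts the usual cryptsetup mechanism looks like the sketch below; whether and how Unraid should expose this is exactly the question above (the mapping name is a placeholder):

        # Standard cryptsetup/fstrim mechanism for allowing TRIM through LUKS;
        # "mycache" is a hypothetical mapping name, shown only to illustrate.
        cryptsetup refresh --allow-discards mycache   # re-activate the mapping with discards allowed (cryptsetup 2.x)
        fstrim -v /mnt/cache                          # then trim the mounted filesystem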




