Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

This feature permits you to define up to 35 named pools, each containing up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
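The move can be pictured with a small scratch-directory sketch; the key names and grep-based split here are hypothetical stand-ins, not the actual upgrade code:

```shell
# Scratch-directory sketch of the config migration on upgrade.
# Key names are hypothetical; the real disk.cfg syntax may differ.
cfg=$(mktemp -d)
printf 'cacheId="SSD_1"\ndiskId.1="HDD_1"\n' > "$cfg/disk.cfg"

cp "$cfg/disk.cfg" "$cfg/disk.cfg.bak"                        # backup, as on upgrade
mkdir -p "$cfg/pools"
grep    '^cache' "$cfg/disk.cfg.bak" > "$cfg/pools/cache.cfg" # cache settings move here...
grep -v '^cache' "$cfg/disk.cfg.bak" > "$cfg/disk.cfg"        # ...and out of disk.cfg
```

After this, disk.cfg no longer carries the cache assignments, which is why reverting to a pre-6.9 release loses them.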

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

    • pool assigned to the share
    • disk1
        …
    • disk28
    • all the other pools, in strverscmp() order.
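strverscmp() orders names "naturally" with respect to embedded numbers; GNU `sort -V` applies essentially the same ordering, so you can preview how extra pools will be merged (pool names below are made up):

```shell
# Preview the merge order of extra pools: sort -V approximates strverscmp() ordering
printf '%s\n' pool10 pool2 cache pool1 | sort -V
```

With these names the order comes out as cache, pool1, pool2, pool10.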

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

Something else to be aware of: suppose you have a 2-device btrfs pool.  This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool, unassign one of the devices from the existing 2-device btrfs pool, and assign it to the new pool; you now have two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool.  This effectively deletes all the data on the moved device.

     

    1 MiB Partition Alignment

We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary.  That is, for devices which present 512-byte sectors, partition 1 will start at sector 2048; for devices with 4096-byte sectors, at sector 256.  This partition layout is now used for all non-rotational storage (only).
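Both layouts place the start of partition 1 at the same byte offset; a quick sanity check of the arithmetic:

```shell
# Both layouts start partition 1 at byte offset 1 MiB = 1048576
echo $(( 2048 * 512 ))    # 512-byte sectors, partition starts at sector 2048
echo $(( 256 * 4096 ))    # 4096-byte sectors, partition starts at sector 256
```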

     

It is not clear what benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others, there may be a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).

     

To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!
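If you script this, a small guard can make the destructive step harder to trigger by accident.  This is a hypothetical wrapper, not an Unraid tool; 'blkdiscard' only runs when the caller explicitly confirms:

```shell
# Hypothetical guard around blkdiscard: refuse unless the caller passes --yes
confirm_discard() {
  dev=$1
  [ "$2" = "--yes" ] || { echo "refusing: re-run as: confirm_discard $dev --yes"; return 1; }
  blkdiscard "$dev"
}
# usage:  confirm_discard /dev/sdX --yes
```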

     

Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.

     

    Language Translation

A huge amount of work and effort by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

Each language pack exists in public Unraid organization github repos.  Interested users are encouraged to clone and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  While this happens, expect a delay when initially loading either the dashboard or the docker tab before the containers show up.

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.
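Detecting the filesystem type of a path can be done with GNU `stat`; this sketch uses /tmp so it runs anywhere, whereas the text describes the check being made against /var/lib/docker:

```shell
# Print the filesystem type backing a path (rc.docker is described as doing this
# for /var/lib/docker); /tmp is used here only so the example runs anywhere
fstype=$(stat -f -c %T /tmp)
echo "$fstype"
```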

     

Next, 'mount_image' is modified to support a loopback file formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.


We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

For example, given "/mnt/user/system/docker/docker", we first create, if necessary, the directory "/mnt/user/system/docker/docker".  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory; the script does not check this.
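The naming rules above can be sketched as a case statement (the function name is made up; this is not the actual 'mount_image' code):

```shell
# Sketch of the name-based selection described above (not the real mount_image code)
choose_docker_backing() {
  case "$1" in
    *-xfs.img) echo "loopback: mkfs.xfs"   ;;   # checked before the generic .img rule
    *.img)     echo "loopback: mkfs.btrfs" ;;
    *)         echo "bind-mount directory" ;;   # bind-mounted onto /var/lib/docker
  esac
}

choose_docker_backing docker.img                      # loopback: mkfs.btrfs
choose_docker_backing docker-xfs.img                  # loopback: mkfs.xfs
choose_docker_backing /mnt/user/system/docker/docker  # bind-mount directory
```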

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS formatted cache drive for a while now, and don't see any problems at all)

     

I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).  

     

    My reasoning for this is that:

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.

    2. All of the folders contained within the docker folder are not compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share will also allow you to not export it without impacting the other shares' exporting.  (And there are no "user-modifiable" files in there anyways.  If you do need to modify a file within that folder, (ie: a config file for a container and that config isn't available within appdata), you should be doing it via going to the container's shell)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be ok).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

Note: If you had the VFIO-PCI Config plugin installed, you should remove it as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
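One way to find domains still using the plain 'virtio' model is to grep their XML.  The helper below is hypothetical; the virsh loop is shown as a comment since it needs a live libvirt:

```shell
# Matches <model type='virtio'/> but not 'virtio-net' (note the trailing /)
needs_virtio_net_fix() {
  grep -q "model type='virtio'/"
}
# e.g.:  for vm in $(virsh list --all --name); do
#          virsh dumpxml "$vm" | needs_virtio_net_fix && echo "$vm needs editing"
#        done
```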

     

    SSH Improvements

There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):

    • only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')
    • non-null password is now required
    • non-root tunneling is disabled

     

In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device, and we have set up a symlink from /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.
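The persistence mechanism is just a symlink onto flash-backed storage.  The same idea can be demonstrated in a scratch directory, with temp paths standing in for /boot/config/ssh/root and /root/.ssh:

```shell
# Stand-in demo: files written through the symlink land on the "flash" side
flash=$(mktemp -d)     # stands in for /boot/config/ssh/root
link=$(mktemp -u)      # stands in for /root/.ssh
ln -s "$flash" "$link"
echo "ssh-ed25519 AAAA... user@host" > "$link/authorized_keys"
ls "$flash"            # → authorized_keys
```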

     

Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirs).  The purpose is to restore the host ssh keys; however, this mechanism can be used to define custom ssh_config and sshd_config files (not recommended).
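That boot-time copy can be sketched as follows (a stand-in for illustration, not the actual rc.sshd code; note that subdirectories are skipped):

```shell
# Copy regular files only, skipping subdirectories, as rc.sshd is described to do
restore_ssh_files() {
  src=$1; dst=$2
  for f in "$src"/*; do
    if [ -f "$f" ]; then cp "$f" "$dst"/; fi
  done
  return 0
}
# e.g.:  restore_ssh_files /boot/config/ssh /etc/ssh
```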

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
• docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface


    User Feedback




    Has anyone noticed a problem with USB passthrough to Docker? It was working ok in 6.8 but not since updating to 6.9 betas. Is there something different that I've missed?

    Cheers,

    Tim

    16 minutes ago, itimpi said:

    I have no problem using the Unraid interface on my iPad.

    It's only the docker page, everything else is fine! It's the same in Safari and Chrome.

    12 minutes ago, MothyTim said:

    It's only the docker page, everything else is fine! It's the same in Safari and Chrome.

    As i said it is working fine for me.   I have had problems in the past but it is OK now.   It may be relevant that I am using the iOS14 beta so possibly a web engine (Which is used by both those browsers)  problem has been fixed.

    26 minutes ago, itimpi said:

    As i said it is working fine for me.   I have had problems in the past but it is OK now.   It may be relevant that I am using the iOS14 beta so possibly a web engine (Which is used by both those browsers)  problem has been fixed.

    Ah OK good to know!


Updated to the latest beta and I have problems with docker network settings; for some reason custom br0 is not accessible. I struggled for an hour yesterday with this problem. I decided to re-install unRAID on my usb stick, again with the latest beta, but the problem still exists. Does anyone know if this is a bug in the latest beta?


    Freezing daily, requiring power cycle.  Can't ping or access web interface.

     

    Is there a path back to 6.8.3 for troubleshooting?  

     

    Diagnostics attached, appreciate it.

     

    tower-diagnostics-20200813-1057.zip

    6 minutes ago, jbear said:

    Is there a path back to 6.8.3 for troubleshooting?  

    If the update was done using the GUI then you can go to Tools -> Update OS -> Unraid OS (previous) -> Restore

    2 minutes ago, johnnie.black said:

    If the update was done using the GUI then you can go to Tools -> Update OS -> Unraid OS (previous) -> Restore

    It was, but only BETA 24 is available for restore.  I've run every BETA in the 6.9 release cycle.

     

    Thanks.

    1 minute ago, jbear said:

    It was, but only BETA 24 is available for restore

    And it also crashed with beta24? You can always downgrade by manually copying the bz* files from the v6.8.3 zip overwriting the existing ones.

    5 minutes ago, johnnie.black said:

    And it also crashed with beta24? You can always downgrade by manually copying the bz* files from the v6.8.3 zip overwriting the existing ones.

    I'm familiar with this process.  Thanks.

     

    Yes it did freeze under BETA 24 also.


    I was able to downgrade to 6.8.3, I will troubleshoot further from here.  Thanks for your help.

     

    Just not 100% certain it's a software issues, prefer  to troubleshoot hardware on the stable branch :)


    Multiple pool usage question - and apologies if it's been asked before, maybe on a prior beta thread (tried searching but a bit tricky to do across all the posts).

     

With the new pools, will it be possible to do a "raid 0"/btrfs striped pool?

     

    Context - I've got a standard pool (parity + 2 spinners), and a cache pool (2 SSDs). When setting up the cache pool I decided to go for the standard duplication vs. striping, for the data security. With the new multiple Unraid pool feature, would I be able to add a second cache-like pool, but instead of a raid 1 style pool, get a speed boost from having a raid 0 style pool setup? Or would that only be a "cache" pool feature, limited to the special cache pool?

     

    Let me know if that doesn't make sense.

     

    Thanks!

    50 minutes ago, atl-far-east said:

will it be possible to do a "raid 0"/btrfs striped pool?

    Yes, pools can use all the available btrfs raid profiles (for the number of disks used).


    I currently have 3 pools and thinking about adding another:

     

    BTRFS raid 5 with 5 drives

     

    BTRFS single drive

     

    XFS single drive

     

    You can mix and match them at will.

     

    You can also change the BTRFS setup at any time, convert to raid 0/5 etc.
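For reference, profile conversion on a mounted btrfs pool is done with 'btrfs balance'.  Since that needs real hardware, this sketch just assembles and prints the command (mount point and profiles are illustrative):

```shell
# Build (and here, just print) a btrfs profile-conversion command; run the
# printed command against a mounted btrfs pool to actually convert it
balance_convert_cmd() {
  echo "btrfs balance start -dconvert=$1 -mconvert=$2 $3"
}
balance_convert_cmd raid0 raid1 /mnt/mypool
```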

    On 8/8/2020 at 2:37 PM, itimpi said:

    As i said it is working fine for me.   I have had problems in the past but it is OK now.   It may be relevant that I am using the iOS14 beta so possibly a web engine (Which is used by both those browsers)  problem has been fixed.

Depends on how many dockers you have listed. If they don't fit the screen, the interface gets stuck. It won't scroll. Display font size also plays a role: the bigger the font, the fewer dockers you can list, and the gui gets stuck and won't scroll.

But then again, this is not specific to this version; it's been an issue for as long as there have been dockers in unraid. It won't get fixed for some reason.

     

    4 minutes ago, jowi said:

    Depends on how many dockers you have listed. If they don’t fit the screen, the interface gets stuck. It won’t scroll. Display font size also plays a role, the bigger the font, the less dockers you can list, and the gui gets stuck and wont scroll.

     

    But then again, this is not specific for this version, its an issue for as long as there are dockers in unraid. It wont get fixed for some reason.

     

    I have far more Dockers than fit on the screen and they scroll OK for me on my iPad.

     

I think the root cause has to be some sort of bug at the WebKit level, which can therefore affect all iOS/iPadOS browsers as Apple mandates they have to use WebKit for rendering.  I would be interested to know if anyone using Safari on macOS ever experiences such problems.

    2 hours ago, itimpi said:

    I have far more Dockers than fit on the screen and they scroll OK for me on my iPad.

     

    i think the root cause has to be some sort of bug at the web kit level which can therefore affect all iOS/iPadOS browsers as Apple mandates they have to use WebKit for rendering.  I would be interested to know if anyone using Safari on MacOS ever experience such problems.

In my experience on iOS, and specifically iPadOS, when trying to scroll the docker page the scrolling action is highly inconsistent, but only when touching the docker list itself. The sides of the page outside the docker list always seem to work fine, if I can manage to touch them, since they're often small and close to the edge. My guess for why this is: the docker list can be rearranged by clicking and dragging on desktop, so when you use the iPad it simultaneously tries to drag the docker in the list and scroll the page, which leads to neither happening. This has largely improved for me as the iPadOS browser has gotten more full featured; however, I'm confident that I could still get it to do it even now. If I can figure out how to record it with screen taps visible, I will post it here to make the experience clearer.

    1 hour ago, Unraid Newbie said:

    This just happened a day ago, didn't change anything to the VM xml. please help.

    Did you just update to 6.9.0-beta25 "a day ago" i.e. it was all working fine before the update and then you updated and it immediately stopped working right after?

    This topic is specific to 6.9.0-beta25 related issues to help with bug fixing. If you have an issue that cannot be specifically identified as being a problem with 6.9.0-beta25, you have a much better chance of getting help posting your issue in the main general help forum.


    no, I have been running beta 25 for a month now.

     

    I meant cpu problem started yesterday. didn't make any new changes. 

     

    honestly I can't tell if this is a beta bug or just a bug in general. but I don't see anyone post about it recently.

     

    sorry if i posted incorrectly.


    Got a small problem with beta 25. My log file gets to 100% in a few days. This never happened before on 6.8.3. Just thought i'd say something, other than that, I don't have any problems with this version.

    28 minutes ago, Unraid Newbie said:

    no, I have been running beta 25 for a month now.

     

    I meant cpu problem started yesterday. didn't make any new changes. 

     

    honestly I can't tell if this is a beta bug or just a bug in general. but I don't see anyone post about it recently.

     

    sorry if i posted incorrectly.

Then it's likely nothing to do with beta25. You probably have another issue happening, so perhaps post in the General forum instead. Don't forget to attach diagnostics, preferably right after you have experienced the problem without rebooting Unraid.

     

    5 minutes ago, Cobragt88 said:

    Got a small problem with beta 25. My log file gets to 100% in a few days. This never happened before on 6.8.3. Just thought i'd say something, other than that, I don't have any problems with this version.

    That suggests you have an issue. Logs don't fill up unless being bombarded with entries. Next time it happens, extract diagnostics (Tools -> Diagnostics -> attach full zip file to your next post).

    23 hours ago, jowi said:

    Depends on how many dockers you have listed. If they don’t fit the screen, the interface gets stuck. It won’t scroll. Display font size also plays a role, the bigger the font, the less dockers you can list, and the gui gets stuck and wont scroll.

     

    But then again, this is not specific for this version, its an issue for as long as there are dockers in unraid. It wont get fixed for some reason.

     

    Yes, this is precisely what I see and why I’ve given up trying to use the Docker Unraid Tab on my iPad Pro. The limit seems to be about 26 dockers. I have about 50 and the page freezes. Wait long enough and the next few will show up. Seems to be a limit (memory?) in the browser that tops out at around 26 dockers at any one time. Eventually you can scroll to the bottom of the list, but then you can’t scroll back up again! 
     

    This is in addition to the 'am I scrolling or am I dragging and dropping a docker' issue also mentioned. 
     

    The workaround for many, but not all, functions is to use the small list on the Dashboard page. 
     

    Unraid has always had this problem. It isn’t new. 

    On 7/20/2020 at 11:29 PM, 5hurb said:

Not sure if it's been raised, but I'm still having to do this mod to the xml to get my Windows 10 VM to boot without a BSOD kernel error. Found a random thread with a solution to the BSOD.

I don't know what this does or why it's needed. Seems like it might be a Zen 2 issue? It has been there since beta22.

     

    From This

     <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' dies='1' cores='4' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>

     

     

    To This

     <cpu mode='host-model' check='none'>
        <topology sockets='1' dies='1' cores='4' threads='2'/>
        <feature policy='require' name='topoext'/>
      </cpu>

     

OMG you're a saint.  My Windows 10 VM would not start.  The VM log was just reporting the below.  Is this a bug?  Wonder what the change did that allowed the VM to boot.  Those devices below were each in their own IOMMU grouping and not bound to the new VFIO-PCI plugin integration.

     

     

    2020-08-19T18:56:34.641854Z qemu-system-x86_64: vfio: Cannot reset device 0000:02:00.0, depends on group 19 which is not owned.
    2020-08-19T18:56:34.651819Z qemu-system-x86_64: vfio: Cannot reset device 0000:0f:00.1, depends on group 40 which is not owned.
    2020-08-19T18:56:34.656829Z qemu-system-x86_64: vfio: Cannot reset device 0000:11:00.4, depends on group 43 which is not owned.
    2020-08-19T18:56:34.770052Z qemu-system-x86_64: vfio: Cannot reset device 0000:02:00.0, depends on group 19 which is not owned.
    2020-08-19T18:56:34.770192Z qemu-system-x86_64: vfio: Cannot reset device 0000:11:00.4, depends on group 43 which is not owned.

     

    Edit:  Adding I was on latest stable build before updating to Beta 25.  The above were the main SATA controller for my motherboard that the array is on, a USB device that's part of my NVIDIA 2060 and some random Non-Essential Instrumentation.




