• Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, each with up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and the cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you re-assign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
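    The "strverscmp() order" above is glibc's natural version-sort order.  GNU coreutils' 'sort -V' implements a closely related comparison, so you can preview how a set of pool names will be ordered (the pool names here are just examples):

```shell
# Preview version-sort ordering of hypothetical pool names.
# GNU 'sort -V' uses a strverscmp()-style comparison, so "pool2"
# sorts before "pool10" (runs of digits compare as numbers).
printf '%s\n' cache pool10 pool2 backup | sort -V
```

    This prints backup, cache, pool2, pool10; a plain lexicographic sort would instead put "pool10" before "pool2".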

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: suppose you have a 2-device btrfs pool. This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks". That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool by unassigning one of the devices from the existing 2-device btrfs pool and assigning it to the new pool, leaving two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However, this is not the case. When Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it runs 'wipefs' on that device so that, upon mount, it will not be included in the old pool.  This effectively deletes all the data on the moved device.

     

    1 MiB Partition Alignment

    We have added another partition layout in which partition 1 is aligned on a 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 starts at sector 2048; for devices with 4096-byte sectors, at sector 256.  This partition layout is now used for non-rotational storage only.
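    The sector numbers follow directly from dividing the 1 MiB offset by the device's logical sector size:

```shell
# 1 MiB = 1048576 bytes, expressed in sectors for the two common
# logical sector sizes.
echo $((1024 * 1024 / 512))    # 2048: start sector on 512-byte-sector devices
echo $((1024 * 1024 / 4096))   # 256:  start sector on 4096-byte-sector devices
```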

     

    It is not clear how much benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others it may make a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD unless you have a compelling reason to do so (or your OCD just won't let it be).

     

    To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

    blkdiscard /dev/xxx  # for example /dev/sdb, /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!

     

    Upon the next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create a fresh one.

     

    Language Translation

    A huge amount of work by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka the webGUI.  Several language packs are now available, with several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.  See also here.

     

    Each language pack exists in a public Unraid organization GitHub repo.  Interested users are encouraged to clone them and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7 (5.7.8 in this release).

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay the first time you load the Dashboard or the Docker tab while this happens, before the containers show up.

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' detects the filesystem type of /var/lib/docker.  We now support either btrfs or xfs, and the docker storage driver is set accordingly.

     

    Next, 'mount_image' has been modified to support a loopback file formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  If the file name ends with ".img", as in "docker.img", we use mkfs.btrfs; if it ends with "-xfs.img", as in "docker-xfs.img", we use mkfs.xfs.


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

    For example, given "/mnt/user/system/docker/docker", we first create the directory "/mnt/user/system/docker/docker" if necessary.  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory; the script does not check this.
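    The naming rules above can be summarized with a small sketch (this is an illustration only; 'docker_storage_mode' is a made-up helper, not the actual 'mount_image' code):

```shell
# Decide how /var/lib/docker is provisioned from the configured path,
# per the naming rules described above.
docker_storage_mode() {
    case "$1" in
        *-xfs.img) echo "loopback-xfs"   ;;  # loopback file formatted with mkfs.xfs
        *.img)     echo "loopback-btrfs" ;;  # loopback file formatted with mkfs.btrfs
        *)         echo "bind-mount"     ;;  # directory bind-mounted onto /var/lib/docker
    esac
}

docker_storage_mode "/mnt/user/system/docker/docker-xfs.img"   # loopback-xfs
docker_storage_mode "/mnt/user/system/docker/docker.img"       # loopback-btrfs
docker_storage_mode "/mnt/user/system/docker/docker"           # bind-mount
```

    Note that the order of the patterns matters: the more specific "-xfs.img" suffix must be tested before the general ".img" suffix.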

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

    If you're one of those users who continually has problems with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware that this new feature is technically experimental.  (I have however been running this on an XFS-formatted cache drive for a while now, and don't see any problems at all.)

     

    I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).  

     

    My reasoning for this is that:

    1. If you ever need to run the New Permissions tool against the share into which you've placed the docker folder, that tool will leave the entire docker system unable to run.  The folder will have to be removed (via the command line) and then recreated.

    2. The folders contained within the docker folder are not compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share also lets you leave it unexported without affecting the other shares' exports.  (There are no "user-modifiable" files in there anyway.  If you do need to modify a file within that folder, e.g. a config file for a container that isn't available within appdata, you should do it via the container's shell.)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be OK).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array.

     

    I did have some "weirdness" using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, we integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VMs.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPUs to VMs are encouraged to set this up now.

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configurations it may be necessary to edit the XML directly.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

     

    SSH Improvements

    There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):

    • only the root user is permitted to log in via ssh (remember: there are no traditional users in Unraid OS, just 'root')
    • a non-null password is now required
    • non-root tunneling is disabled

     

    In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device, and we have set up a symlink from /root/.ssh to this directory.  This means any files you put into /root/.ssh will persist across reboots.
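    For example, appending a public key through the symlink stores it on the flash device, so key-based login survives a reboot.  The sketch below simulates the symlink in a temporary directory so it is safe to run anywhere; the paths and the key string are placeholders (on a real server /root/.ssh already points at config/ssh/root):

```shell
# /root/.ssh -> /boot/config/ssh/root on a real server; simulate that
# symlink in temp dirs so this sketch is safe to run anywhere.
flash=$(mktemp -d)                  # stands in for /boot/config/ssh/root
root_home=$(mktemp -d)              # stands in for /root
ln -s "$flash" "$root_home/.ssh"

# Appending a key through the symlink writes it to the flash copy,
# so it would survive a reboot.
echo 'ssh-ed25519 AAAA... user@host' >> "$root_home/.ssh/authorized_keys"
chmod 600 "$root_home/.ssh/authorized_keys"

cat "$flash/authorized_keys"        # the key landed on "flash"
```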

     

    Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), you'll see that upon boot all files from the 'config/ssh' directory (but not subdirectories) are copied to /etc/ssh.  The purpose is to restore the host ssh keys; however, this mechanism can also be used to define custom ssh_config and sshd_config files (not recommended).

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
    • docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface



    User Feedback




    On 8/24/2020 at 3:10 AM, Dava2k7 said:

    Hey all I’m in same position only way I can get the VM to boot is via vnc and using a command line. I get no signal through hdmi when trying to boot Vm driving me crazy!!!!!

    This happened to mine the other day, using it one night, shut it down, woke up in the morning and black screen.  I ended up just recreating the whole vm and installing windows from scratch.  I really shouldn't have had to do that, but had tried so many things that I actually thought my GPU had failed.

    20 hours ago, sheiy said:

    when will the 6.9.0 release

    Hahaha, I have the same question!

    6 minutes ago, Moka said:
    21 hours ago, sheiy said:

    when will the 6.9.0 release

    Hahaha, I have the same question!

    Soon™

     

    Seriously, there is no official timeline, asking isn't going to make one appear. When it's done it will be released, no sooner, no later.

    14 hours ago, jonathanm said:

    Soon™

     

    Seriously, there is no official timeline, asking isn't going to make one appear. When it's done it will be released, no sooner, no later.

     

    But if it was later, how could we really know?

    2 minutes ago, 1812 said:

     

    But if it was later, how could we really know?

    "Time is an illusion. Lunchtime doubly so."


    While I'm not going to worry about when it's released (because this is a common response across open source software development).  A pillar of agile is openness and sharing of progress so that anyone can see what is being attempted to be completed by when and what is being aimed for.  It doesn't say when it will be done or if it will be accepted however, just that it would be attempted within a sprint (e.g. a 1-4 week timeframe). 

     

    Sadly, most don't share this information.

     

    Of course, perhaps lime tech would be one of the few not using Agile, instead using something like waterfall - which would mean they would definitely have a deadline to share.

    Or they could be using nothing, which would be quite enjoyable and would explain why there is nothing to share.  My bet is this last one.  Because there's not really any commitment to provide anything specific and I think that's fine in this environment.  The team is probably distributed and they probably have other responsibilities which make things complicated.

     

    I do wish though that we could see into a Scrum board or something at a read only level to satisfy curiosity.  Or they would pick random customers to participate in each sprint to help or something.  That'd be cool.  (Putting my hand up if anyone from lime tech reads this).

     

    But this note is really just to say, I read this 'there's no official timeline' thing a lot.  And while that's typically true for a software development process, it doesn't mean there's no process or aim that can be shared. 

     

    Hope that doesn't offend anyone, especially at limetech, I just like to help educate on agile sometimes (Certified Agile Scrum Master) among other things. :)

    55 minutes ago, Marshalleq said:

    I read this 'there's no official timeline' thing a lot.

    Ok, let me be a little more clear. There is no publicly accessible official timeline. What limetech does with their internal development is kept private, for many reasons.

     

    My speculation of the main reason is that the wrath of users over no timeline is tiny compared to multiple missed deadlines. In the distant past there were loose timelines issued, and the flak that ensued was rather spectacular, IIRC. Rather than getting beaten up over progress reports, it's easier for the team to stay focused internally and release when ready, rather than try to justify delays.

     

    When you have a very small team, every man hour is precious. Keeping the masses up to date with every little setback doesn't move the project forward, it just demoralizes with all the negative comments. Even "constructive" requests for updates take time to answer, and it's not up to us to say "well, it's only a small amount of time, surely you can spare it".

     

    The team makes choices on time management, it's best just to accept that and be happy when the updates come.

    On 8/22/2020 at 5:17 AM, DZMM said:

    New beta user.  I've finished making the switch, which was time-consuming as I've created new pools for my appdata (mainly Plex) and VMs, which meant moving a lot of data around, formatting drives, etc. 

     

    It all went fairly smoothly, but there's one new addition that really didn't help me.  I passthrough my primary GPU to a VM AND also run a pfSense VM.  If my array doesn't start, I have to stop it, unplug the USB, undo the VFIO-bind edits so that I have a screen, and then reboot to be able to access the server, as I can't use another computer to access the server because of the VM.

     

    In 6.9.0 it looks like an unclean shutdown turns off the disk auto-start option - I'm sure this is a new feature.  This means I have to go through the steps above every time, which is a pain.  Is it possible to make this optional or to remove it?  It's a real pain for people with headless servers AND a pfSense VM who can't get into the server over the LAN.

    A possible solution... (Though I'm not running the Beta) I'm in a very similar boat... I have a gaming VM that has control over my NVIDIA GPU and while I can plug a spare monitor into the integrated Intel GPU I prefer keeping things simple.

     

    I've assigned a static address to my Unraid server, and when my pfSense VM fails for any reason I run the following script in Admin mode (Windows) to temporarily set my laptop's IP to a static address as well.

     

    netsh interface ipv4 set address name="Wi-Fi 2" static 10.40.70.251 255.255.255.0 10.40.70.1

    netsh interface ipv4 set dns name="Wi-Fi 2" static 10.40.70.1 8.8.2.2

     

    Then I run a second script to change back to Dynamic after the router is back up and running:

     

    netsh interface ipv4 set address name="Wi-Fi 2" source=dhcp

     

    This quickly lets me diagnose any problems and get up and running asap.

    4 hours ago, Arbadacarba said:

    A possible solution... (Though I'm not running the Beta) I'm in a very similar boat... I have a gaming VM that has control over my NVIDIA GPU and while I can plug a spare monitor into the integrated Intel GPU I prefer keeping things simple.

     

    I've assigned a static address to my Unraid server, and when my pfSense VM fails for any reason I run the following script in Admin mode (Windows) to temporarily set my laptop's IP to a static address as well.

     

    netsh interface ipv4 set address name="Wi-Fi 2" static 10.40.70.251 255.255.255.0 10.40.70.1

    netsh interface ipv4 set dns name="Wi-Fi 2" static 10.40.70.1 8.8.2.2

     

    Then I run a second script to change back to Dynamic after the router is back up and running:

     

    netsh interface ipv4 set address name="Wi-Fi 2" source=dhcp

     

    This quickly lets me diagnose any problems and get up and running asap.


    I did stumble onto a similar workaround, where I set a static IP up on my laptop that I bought a few months ago, in order to access the server.

     

    I'm loving the new pools and they are a great addition.  It's particularly nice to be able to see the disk activity for drives that were previously UDs.

    [screenshot: Main page showing pool disk activity]

     

    • So I'm just registering my disks are no longer spinning down.  I can spin them down manually, but at some point they will spin up again and don't go to sleep.  I've attached logs.  
    • Also I received an out of memory error (below) on my 96G memory system, which I assume is to do with the windows / syslog issue, but don't know, haven't checked yet.  Will also be in logs, not likely to be RAM anyway.
    • My logs seem to be going straight into a folder and I have to manually compress them.  Perhaps my zip is automatically unzipping them, but thought I'd mention in case anyone else has the same.
    • Finally, the idle temperature of my Threadripper 1950X on Asus X399 Prime-A board reports incorrectly since the beta was introduced.  It reports idle temps as about 90 degrees C.  Clearly not correct.

    [screenshot: out-of-memory error]

    obi-wan-diagnostics-20200828-0850.zip

    2 hours ago, Marshalleq said:

    the idle temperature of my Threadripper 1950X on Asus X399 Prime-A board reports incorrectly since the beta was introduced.

    Unraid doesn't report CPU temperatures. You must be using a plugin for that.


    Hmmm, I thought the upgraded kernel took care of that now (whereas yes, I agree, previously you needed a plugin for the driver).  My assumption was the current plugin now reads whatever the current kernel is sending.  And to that end, I do note with the same plugin on the two different kernel versions that this kernel has a lot more plugins to choose from, which does seem to indicate I'm on the right track there.

     

    So plugin to display, kernel to send temps right?

     

    If so, I still say kernel is sending wrong temps, or the plugin needs to be updated for AMD's crazy temp +27 degrees or whatever they do.


    Dude this Spindown problem for Beta 25 seems to be normal, i had them spinning down the first 2 Weeks...

    After that they stopped spinning down on my HP-Server also....:-(

    Hardware, Container and VM wise nothing was changed 🙂

     

    Hopefully that would get fixed in the next Beta/RC.

    45 minutes ago, DarkMan83 said:

    this Spindown problem for Beta 25 seems to be normal

    Not that I know of, please try after booting in safe mode, if it still happens create a bug report, don't forget the diagnostics.


    I can't really boot into safe mode without a lot of effort since I run a ZFS plugin with all my dockers and vms on it.  However I do have all my disks still spun up caused by something.  I could compile a custom kernel with ZFS in it, but then people would probably point at that.  Only other option would be to format / move my zfs volumes.  Probably easier to let someone else do it in this case.  @DarkMan83 want to compare plugins or something to help rule them out?

     


    Just wanted to add that the annoying GSO error actually happens when a Linux VM is running a Q35 machine type older than 5.0.  I haven't noticed anyone else report that specifically for Linux here before.


    I began testing this build on my HP ML350 Gen8 in hopes the temperature values would be fixed in the GUI with the smartctl changes I noticed in the code.  Unfortunately this hasn't done anything to resolve the problem of the default "Automatic" setting not pulling the SMART data (incorrect syntax error), and when set manually I am still not getting the temperature data on the Main or Dashboard tabs.  I have also noticed a new problem in this build that wasn't in 6.8.x: if I go to an individual disk and set the SMART controller manually, after clicking "Apply" and reloading the page, the SMART data will update and reflect the change, but the GUI still shows "default".  Just playing around with these settings, I have noticed they can be somewhat buggy as well.  I have hit Apply and refreshed the page on two occasions now only to have the settings revert back to the default.  Perhaps someone could try to reproduce this error.

    1 hour ago, greg_gorrell said:

    I began testing this build on my HP ML350 Gen8 in hopes the temperature values would be fixed in the GUI with the smartctl changes I noticed in the code.  Unfortunately this hasn't done anything to resolve the problem of the default "Automatic" setting not pulling the SMART data (incorrect syntax error), and when set manually I am still not getting the temperature data on the Main or Dashboard tabs.  I have also noticed a new problem in this build that wasn't in 6.8.x: if I go to an individual disk and set the SMART controller manually, after clicking "Apply" and reloading the page, the SMART data will update and reflect the change, but the GUI still shows "default".  Just playing around with these settings, I have noticed they can be somewhat buggy as well.  I have hit Apply and refreshed the page on two occasions now only to have the settings revert back to the default.  Perhaps someone could try to reproduce this error.

    Just read the: 6.9.0-beta25 vs. -beta24 Summary

    It's clearly stated that it will be fixed in the next build -> 6.9.0-beta26 or RC... dunno what's next.


    I actually also discovered that my disk spin-down timeout had been reset to none in the disk settings, applied to all disks.  So while CrashPlan was activating the disks (as it should), it was actually that they weren't set to spin down.  I never looked there because I never go into those disk settings and had forgotten there was even a setting for it.  I'd suggest double-checking that setting in case it really is resetting for some people as a result of the upgrade.  It seems unlikely, but who knows.

    7 hours ago, DarkMan83 said:

    Just read the: 6.9.0-beta25 vs. -beta24 Summary

    It's clearly stated, that it will be fixed in next Build! -> 6.9.0-beta26 or RC...dunno whats next.

    What a moron, I skimmed the whole thread before posting and I still missed that.  Sorry guys!

     

    Edit: I wouldn't say it is "ignored," just not reflected in the GUI in my case.




