• Unraid OS version 6.9.0-beta35 available


    limetech

    New in this release:


    GPU Driver Integration

    Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD).  These drivers are blacklisted by default via 'conf' files in /etc/modprobe.d:

    /etc/modprobe.d/ast.conf
    /etc/modprobe.d/amdgpu.conf
    /etc/modprobe.d/i915.conf
    /etc/modprobe.d/radeon.conf

    Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel.
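
    For example, /etc/modprobe.d/amdgpu.conf consists of the single standard modprobe directive:

    blacklist amdgpu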

     

    However, it is possible to override the settings in these files by creating the directory 'config/modprobe.d' on your USB flash boot device and then creating a file of the same name in that directory.  For example, to unblacklist amdgpu, type these commands in a Terminal session:

    mkdir /boot/config/modprobe.d
    touch /boot/config/modprobe.d/amdgpu.conf

    When Unraid OS boots, before the Linux kernel executes device discovery, we copy any files from /boot/config/modprobe.d to /etc/modprobe.d.  Since amdgpu.conf on the flash is an empty file, it effectively cancels the blacklisting of the driver.

     

    This technique can be used to set boot-time options for any driver as well.
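
    For example (the option value here is illustrative), to pass a module option to i915 you could put an 'options' line in the flash copy of its conf file; note that replacing i915.conf this way also cancels the blacklist for that driver:

    mkdir -p /boot/config/modprobe.d
    echo "options i915 enable_guc=2" > /boot/config/modprobe.d/i915.conf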

     

    Better Support for Third Party Drivers

    Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware.  We now set up an overlayfs on each of these mount points, making it possible to install 3rd party modules at boot time, provided those modules are built against the same kernel version.  This technique may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware and bzroot files.
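
    For the curious, the effect is roughly equivalent to the following mount commands (the upperdir/workdir locations are illustrative; the paths Unraid actually uses may differ):

    mkdir -p /var/tmp/modules-overlay/upper /var/tmp/modules-overlay/work
    mount -t overlay overlay -o lowerdir=/lib/modules,upperdir=/var/tmp/modules-overlay/upper,workdir=/var/tmp/modules-overlay/work /lib/modules

    Any module copied into /lib/modules after that lands in the writable upper layer, leaving the underlying squashfs untouched.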

     

    To go along with the other GPU drivers included in this release, we have created a separate installable Nvidia driver package.  Since each new kernel version requires drivers to be rebuilt, we have set up a feed that enumerates each driver available with each kernel.

     

    The easiest way to install the Nvidia driver, if you require it, is to use a plugin provided by Community member @ich777.  This plugin uses the feed to install the correct driver for the currently running kernel.  A big thank you to @ich777 for providing assistance and coding up the plugin.

     

    Linux Kernel

    This release includes Linux kernel 5.8.18.  We realize the 5.8 kernel has reached EOL and we are currently busy upgrading to 5.9.

     


     

    Version 6.9.0-beta35 2020-11-12 (vs -beta30)

    Base distro:

    • aaa_elflibs: version 15.0 build 25
    • brotli: version 1.0.9 build 2
    • btrfs-progs: version 5.9
    • ca-certificates: version 20201016
    • curl: version 7.73.0
    • dmidecode: version 3.3
    • ethtool: version 5.9
    • freetype: version 2.10.4
    • fuse3: version 3.10.0
    • git: version 2.29.1
    • glib2: version 2.66.2
    • glibc-solibs: version 2.30 build 2
    • glibc-zoneinfo: version 2020d
    • glibc: version 2.30 build 2
    • iproute2: version 5.9.0
    • jasper: version 2.0.22
    • less: version 563
    • libcap-ng: version 0.8 build 2
    • libevdev: version 1.10.0
    • libgcrypt: version 1.8.7
    • libnftnl: version 1.1.8
    • librsvg: version 2.50.1
    • libwebp: version 1.1.0 build 3
    • libxml2: version 2.9.10 build 3
    • lmdb: version 0.9.27
    • nano: version 5.3
    • ncurses: version 6.2_20201024
    • nginx: version 1.19.4
    • ntp: version 4.2.8p15 build 3
    • openssh: version 8.4p1 build 2
    • pam: version 1.4.0 build 2
    • rpcbind: version 1.2.5 build 2
    • samba: version 4.12.9 (CVE-2020-14318 CVE-2020-14323 CVE-2020-14383)
    • talloc: version 2.3.1 build 4
    • tcp_wrappers: version 7.6 build 3
    • tdb: version 1.4.3 build 4
    • tevent: version 0.10.2 build 4
    • usbutils: version 013
    • util-linux: version 2.36 build 2
    • vsftpd: version 3.0.3 build 7
    • xfsprogs: version 5.9.0
    • xkeyboard-config: version 2.31
    • xterm: version 361

    Linux kernel:

    • version 5.8.18
    • added GPU drivers:
      • CONFIG_DRM_RADEON: ATI Radeon
      • CONFIG_DRM_RADEON_USERPTR: Always enable userptr support
      • CONFIG_DRM_AMDGPU: AMD GPU
      • CONFIG_DRM_AMDGPU_SI: Enable amdgpu support for SI parts
      • CONFIG_DRM_AMDGPU_CIK: Enable amdgpu support for CIK parts
      • CONFIG_DRM_AMDGPU_USERPTR: Always enable userptr write support
      • CONFIG_HSA_AMD: HSA kernel driver for AMD GPU devices
    • kernel-firmware: version 20201005_58d41d0
    • md/unraid: version 2.9.16: correction recording disk info with array Stopped; remove 'superblock dirty' handling
    • oot (out-of-tree): Realtek r8152: version 2.14.0

    Management:

    • emhttpd: fix 'auto' setting where pools enabled for user shares should not be exported
    • emhttpd: permit Erase of 'DISK_DSBL_NEW' replacement devices
    • emhttpd: track clean/unclean shutdown using file 'config/forcesync'
    • emhttpd: avoid unnecessarily removing mover.cron file
    • modprobe: blacklist GPU drivers by default, config/modprobe.d/* can override at boot
    • samba: disable aio by default
    • startup: setup an overlayfs for /lib/modules and /lib/firmware
    • webgui: pools not enabled for user shares should not be selectable for cache
    • webgui: Add pools information to diagnostics
    • webgui: vnc: add browser cache busting
    • webgui: Multilanguage: Fix unable to delete / edit users
    • webgui: Prevent "Add" reverting to English when adding a new user with an invalid username
    • webgui: Fix Azure / Gray Switch Language being cut-off
    • webgui: Fix unable to use top right icons if notifications present
    • webgui: Changed: Consistency between dashboard and docker on accessing logs
    • webgui: correct login form wrong default case icon displayed
    • webgui: set 'mid-tower' default case icon
    • webgui: fix: jGrowl covering buttons
    • webgui: New Perms: Support multi-cache pools
    • webgui: Remove WG from Dashboard if no tunnels defined
    • webgui: dockerMan: Allow readmore in advanced view
    • webgui: dockerMan: Only allow name compatible with docker




    User Feedback

    Recommended Comments



     

     

    14 hours ago, trurl said:

     

    Thank you for the link!

     

     

    13 hours ago, trurl said:

    Or go to Settings - Display Settings and change "Show Dashboard apps" to "Docker only". Then Dockers on Dashboard work but you will have to go to VMS page to work with your VMs.

     

    Thank you very much!! This works for now as a good workaround! Saves me a little time clicking between tabs!

     

    Link to comment
    1 hour ago, S1dney said:

    I know haha, but I'm waiting for the upgraded kernel ;)

    I'm still on the latest build that had the 5+ kernel included.

    I was answering somebody who wants to run NVIDIA drivers on stable Unraid releases only. The latest stable release of Unraid is 6.8.3, so I pointed him to the correct way to add current NVIDIA drivers to that release.

     

    Link to comment

    As per the Nvidia plugin install instructions, I disabled Docker, but when I tried to re-enable it, the "Default appdata storage location:" setting turns red and the change does not apply. The path exists. Rebooting does not allow it to enable.

     

    Bug post created.

    Link to comment

    Just a little feedback on upgrading from Unraid Nvidia beta30 to beta35 with Nvidia drivers plugin.

     

    The process was smooth and I see no stability or performance issues after 48 hours, following these steps:

    - Disable auto-start on "nvidia aware" containers (Plex and F@H for me)

    - Stop all containers

    - Disable Docker engine

    - Stop all VMs (none of them had a GPU passthrough)

    - Disable VM Manager

    - Remove Unraid-Nvidia plugin

    - Upgrade to 6.9.0-beta35 with Tools>Update OS

    - Reboot

    - Install Nvidia Drivers plugin from CA (be patient and wait for the "Done" button)

    - Check the driver installation (Settings>Nvidia Drivers, should be 455.38; run nvidia-smi under CLI, see the commands after this list). Verify the GPU ID is unchanged, which was the case for me; otherwise the "NVIDIA_VISIBLE_DEVICES" variable should be changed accordingly for the relevant containers

    - Reboot

    - Enable VM Manager, restart VMs and check them

    - Enable Docker engine, start Plex and F@H

    - Re-enable autostart for Plex and F@H
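
    For reference, the driver check above can be done with standard nvidia-smi commands (output details vary by GPU):

    nvidia-smi      # should report driver version 455.38
    nvidia-smi -L   # lists each GPU with the UUID used by NVIDIA_VISIBLE_DEVICES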

     

    All this is perhaps a bit over-cautious, but at least I can confirm I got the expected result, i.e. an upgraded server with all functionalities up and running under 6.9.0-beta35!

    Link to comment
    8 hours ago, TechGeek01 said:

    Sounds like a weird edge case or something.

    As mentioned above, VMs will only auto-start if array auto-start is enabled; diags confirm it isn't.

     

    Link to comment
    30 minutes ago, JorgeB said:

    As mentioned above, VMs will only auto-start if array auto-start is enabled; diags confirm it isn't.

    Totally missed that statement above. So then with autostart on my array disabled, the programmatically intended behavior is that VMs don't autostart, correct?

     

    Now, Docker containers also have an autostart option, and when I manually start the array on boot because array auto-start is disabled, the Docker containers autostart themselves once the array is running. Surely the expected and proper behavior is that VMs should also follow that pattern?

     

    The array has to be started to even see a list of VMs or Docker containers, so there's no way to even manually start them before the array is already running, meaning that whether the array is started manually or automatically should be entirely irrelevant to both of those autostarts. Can a change be made so that even when starting the array manually, both Docker and VMs respect the chosen autostart options?

    Link to comment
    5 minutes ago, TechGeek01 said:

    Surely the expected and proper behavior is that VMs should also follow that pattern?

    IIRC this was done on purpose. For example, sometimes users pass through the wrong device to a VM, like a disk controller, and starting the VM crashes Unraid; this way it's easy to disable array autostart (and consequently VM autostart) so the user can fix the problem.

    Link to comment
    6 hours ago, TechGeek01 said:

    Can a change be made so that even when starting the array manually, both Docker and VMs respect the chosen autostart options?

    Personally I don't rely on Unraid's built-in VM autostart, as I have some external conditions that need to be met before some of my VMs come up. Scripting VM startup is very easy; virsh commands are well documented.

     

    Since you have a use case for starting your VM regardless of array autostart, I suggest using a simple script to start the VM. However, as JorgeB noted, I would recommend a conditional in the script to allow you to easily disable the autostart if needed for troubleshooting. It's very frustrating to get into a loop that requires you to manually edit files on Unraid's USB to recover.
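
    A minimal sketch of such a script (the VM name and flag-file path are hypothetical; adapt to your setup):

    #!/bin/bash
    # Start a VM unless a flag file on the flash drive disables autostart.
    VM_NAME="my-vm"                          # hypothetical VM name
    FLAG=/boot/config/disable-vm-autostart   # hypothetical kill switch
    if [ ! -f "$FLAG" ]; then
        virsh start "$VM_NAME"
    fi

    Creating the flag file over the flash share then disables the autostart without editing the script.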

    Link to comment
    8 minutes ago, jonathanm said:

    It's very frustrating to get into a loop that requires you to manually edit files on Unraid's USB to recover.

    Worse than that, AFAIK there's no easy way of manually stopping a VM from autostarting, hence the no-autostart with manual array start, so that it can be edited if needed.

    Link to comment
    Just now, JorgeB said:

    Worse than that, AFAIK there's no easy way of manually stopping a VM from autostarting, hence the no-autostart with manual array start, so that it can be edited if needed.

    Yeah, that's why I recommend rolling your own autostart script with easily edited conditionals.

     

    The brute-force autostart that is built in has severe limitations IMHO. Squid's plugin's container autostart, with network and timing conditionals, should be the model for Unraid's built-in autostart for both VMs and containers. It seems like we took a step back when order and timing were added to Unraid, prompting the deprecation of Squid's plugin.

    Link to comment
    On 11/19/2020 at 8:06 AM, jonathanm said:

    prompting the deprecation of Squid's plugin.

    The integration of timing for containers was much nicer than my hacked system, even though it didn't have the conditionals.  I have no regrets about deprecating that plugin.

    Link to comment

    Something wonky is going on with the first boot of a new 6.9 beta; the following has happened before on 6.9, but I thought it was just me.

     

    That is: update everything (plugins, Dockers, OPNsense), reboot, server OK for a few hours, then an absolute hard lock: no IPMI view, no ping, no Dockers or VMs. Only a full power off and on brings the server back.

     

    Then it's fine... (uptime for beta30 was 40+ days); uptime for beta35 is now 10 hours.

     

    Is Unraid doing something funky on the very first boot of a new version? Microcode updates?

     

    If not, I'm absolutely baffled why a totally stable server now, almost on demand, falls over within a few hours after the first boot of a new version.

     

    I had syslog-to-flash turned off, as the server hadn't crashed after a week on beta30; now I've turned it on again, I bet it's going to be absolutely fine!

     

    Edit: As much as I'd like to turn everything off (boot native), the house and internet run off this server, so I have to boot with some VMs/Dockers!

    Link to comment
    On 11/19/2020 at 8:43 PM, Gnomuz said:

    All this is perhaps a bit over-cautious, but at least I can confirm I got the expected result, i.e. an upgraded server with all functionalities up and running under 6.9.0-beta35!

     

    I recently upgraded from 6.8.3 to 6.9.0-beta35.  While I didn't follow all of your steps, everything is working as expected for me as well.  The NVIDIA driver is in and working in Docker containers.

    Link to comment

    Upgraded from 6.8.3 without issue!  Using the Nvidia drivers as well ... tested working as expected with a number of dockers and Windows Q35 VMs.

     

    Quote

    Linux Kernel

    This release includes Linux kernel 5.8.18.  We realize the 5.8 kernel has reached EOL and we are currently busy upgrading to 5.9.

    Looking forward to this 5.9 kernel release!🙏  There is a patch to hwmon I've been waiting to get my hands on!

    Link to comment

    Anyone else having issues with Samba since upgrading?

     

    Quote

    root@UNRAID:~# net join -U *REDACTED*
    Enter *REDACTED*'s password:
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    kerberos_kinit_password_ext: kerberos init context failed (Included profile file could not be read)
    kerberos_kinit_password *REDACTED*@*REDACTED* failed: Included profile file could not be read
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    secrets_domain_info_kerberos_keys: kerberos init context failed (Included profile file could not be read)
    secrets_store_JoinCtx: secrets_domain_info_password_create(pw) failed for *REDACTED* - NT_STATUS_UNSUCCESSFUL
    libnet_join_joindomain_store_secrets: secrets_store_JoinCtx() failed NT_STATUS_UNSUCCESSFUL
    Failed to join domain: This machine is not currently joined to a domain.
    ADS join did not work, falling back to RPC...
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)

     

    Also about 100000000000 errors in event log

     

    Quote

    Nov 22 21:34:38 UNRAID winbindd[28140]: gse_context_init: kerberos init context failed (Included profile file could not be read)
    Nov 22 21:34:38 UNRAID winbindd[28138]: [2020/11/22 21:34:38.860320, 0] ../../lib/krb5_wrap/krb5_samba.c:3549(smb_krb5_init_context_common)
    Nov 22 21:34:38 UNRAID winbindd[28138]: smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
    Nov 22 21:34:38 UNRAID winbindd[28138]: [2020/11/22 21:34:38.860356, 0] ../../source3/libads/kerberos.c:139(kerberos_kinit_password_ext)
    Nov 22 21:34:38 UNRAID winbindd[28138]: kerberos_kinit_password_ext: kerberos init context failed (Included profile file could not be read)

     

    Link to comment
    1 hour ago, Darren Cook said:

    Also about 100000000000 errors in event log

    That's all? :)  Haven't fired up our AD server in a while, will have to do that ...

     

    Also: would be helpful to open separate bug report for this.

    Link to comment
    12 minutes ago, limetech said:

    That's all? :)  Haven't fired up our AD server in a while, will have to do that ...

     

    Also: would be helpful to open separate bug report for this.

    I thought, with it being beta 35, it had to be reported here? Samba seems to keep trying to find said files, fails, then retries, until I kill the Samba service over SSH.

     

    It also seems to have a knock-on effect of eating RAM (not "Linux ate my RAM" style).

    Link to comment

    I know this has been an issue for a while. I'm not sure if it's a hardware thing, an Unraid thing, or a bit of both, but it won't boot UEFI. I made a test USB with the trial of beta 35 and booted an R510. Booting in BIOS mode works fine, but despite having made the USB with the "allow UEFI boot" option checked, when it gets to the splash screen to select GUI mode or CLI mode at boot, whichever boot option I select, I get a "bad file number" error, and it keeps trying and failing every couple of seconds.

     

    UEFI was enabled on the R510, and it should in theory be supported, given that the USB was made with that in mind, but all options at the Unraid boot screen still do this. Would be awesome if it could be fixed, but what exactly is the underlying cause for this?

    Link to comment




