  • Unraid OS version 6.9.0-beta35 available


    limetech

    New in this release:


    GPU Driver Integration

    Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD).  These drivers are blacklisted by default via 'conf' files in /etc/modprobe.d:

    /etc/modprobe.d/ast.conf
    /etc/modprobe.d/amdgpu.conf
    /etc/modprobe.d/i915.conf
    /etc/modprobe.d/radeon.conf

    Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel.
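    For reference, each file uses the standard modprobe.d blacklist syntax; the amdgpu one, for example, contains a single line like:

```shell
# /etc/modprobe.d/amdgpu.conf -- prevents the kernel from auto-loading amdgpu
blacklist amdgpu
```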

     

    However, it is possible to override the settings in these files by creating the directory 'config/modprobe.d' on your USB flash boot device and then creating a file of the same name in that directory.  For example, to un-blacklist amdgpu, type these commands in a Terminal session:

    mkdir /boot/config/modprobe.d
    touch /boot/config/modprobe.d/amdgpu.conf

    When Unraid OS boots, before the Linux kernel performs device discovery, we copy any files from /boot/config/modprobe.d to /etc/modprobe.d.  Since amdgpu.conf on the flash is an empty file, it effectively cancels the blacklisting of the driver.

     

    This technique can be used to set boot-time options for any driver as well.
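    As a sketch of that, suppose you wanted to pass an option to the i915 driver at boot. The option value below is purely illustrative (check `modinfo i915` for real options), and the commands simulate the flash layout in a temp directory; on a real server you would use /boot directly:

```shell
# Simulate the flash drive layout in a temp dir; on a real server BOOT=/boot.
BOOT=$(mktemp -d)
mkdir -p "$BOOT/config/modprobe.d"
# A non-empty file both un-blacklists the driver and passes it options.
# 'enable_guc=2' is an illustrative value, not a recommendation.
echo "options i915 enable_guc=2" > "$BOOT/config/modprobe.d/i915.conf"
cat "$BOOT/config/modprobe.d/i915.conf"
```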

     

    Better Support for Third Party Drivers

    Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware.  We now set up an overlayfs on each of these mount points, making it possible to install 3rd party modules at boot time, provided those modules are built against the same kernel version.  This technique may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware and bzroot files.
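    In outline, the overlay setup looks something like this (a simplified sketch; Unraid's actual startup paths and mount options may differ):

```shell
# Sketch: make the read-only squashfs mount at /lib/modules writable
# via overlayfs. Upper/work dir paths here are illustrative.
mkdir -p /run/overlay/modules-upper /run/overlay/modules-work
mount -t overlay overlay \
  -o lowerdir=/lib/modules,upperdir=/run/overlay/modules-upper,workdir=/run/overlay/modules-work \
  /lib/modules
# Third-party packages built against the running kernel can now drop
# .ko files under /lib/modules/$(uname -r)/ and run 'depmod -a'.
```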

     

    To go along with the other GPU drivers included in this release, we have created a separate installable Nvidia driver package.  Since each new kernel version requires drivers to be rebuilt, we have set up a feed that enumerates each driver available with each kernel.

     

    The easiest way to install the Nvidia driver, if you require it, is to use a plugin provided by Community member @ich777. This plugin uses the feed to install the correct driver for the currently running kernel.  A big thank you to @ich777 for providing assistance and coding up the plugin!

     

    Linux Kernel

    This release includes Linux kernel 5.8.18.  We realize the 5.8 kernel has reached EOL and we are currently busy upgrading to 5.9.

     


     

    Version 6.9.0-beta35 2020-11-12 (vs -beta30)

    Base distro:

    • aaa_elflibs: version 15.0 build 25
    • brotli: version 1.0.9 build 2
    • btrfs-progs: version 5.9
    • ca-certificates: version 20201016
    • curl: version 7.73.0
    • dmidecode: version 3.3
    • ethtool: version 5.9
    • freetype: version 2.10.4
    • fuse3: version 3.10.0
    • git: version 2.29.1
    • glib2: version 2.66.2
    • glibc-solibs: version 2.30 build 2
    • glibc-zoneinfo: version 2020d
    • glibc: version 2.30 build 2
    • iproute2: version 5.9.0
    • jasper: version 2.0.22
    • less: version 563
    • libcap-ng: version 0.8 build 2
    • libevdev: version 1.10.0
    • libgcrypt: version 1.8.7
    • libnftnl: version 1.1.8
    • librsvg: version 2.50.1
    • libwebp: version 1.1.0 build 3
    • libxml2: version 2.9.10 build 3
    • lmdb: version 0.9.27
    • nano: version 5.3
    • ncurses: version 6.2_20201024
    • nginx: version 1.19.4
    • ntp: version 4.2.8p15 build 3
    • openssh: version 8.4p1 build 2
    • pam: version 1.4.0 build 2
    • rpcbind: version 1.2.5 build 2
    • samba: version 4.12.9 (CVE-2020-14318)
    • talloc: version 2.3.1 build 4
    • tcp_wrappers: version 7.6 build 3
    • tdb: version 1.4.3 build 4
    • tevent: version 0.10.2 build 4
    • usbutils: version 013
    • util-linux: version 2.36 build 2
    • vsftpd: version 3.0.3 build 7
    • xfsprogs: version 5.9.0
    • xkeyboard-config: version 2.31
    • xterm: version 361

    Linux kernel:

    • version 5.8.18
    • added GPU drivers:
      • CONFIG_DRM_RADEON: ATI Radeon
      • CONFIG_DRM_RADEON_USERPTR: Always enable userptr support
      • CONFIG_DRM_AMDGPU: AMD GPU
      • CONFIG_DRM_AMDGPU_SI: Enable amdgpu support for SI parts
      • CONFIG_DRM_AMDGPU_CIK: Enable amdgpu support for CIK parts
      • CONFIG_DRM_AMDGPU_USERPTR: Always enable userptr write support
      • CONFIG_HSA_AMD: HSA kernel driver for AMD GPU devices
    • kernel-firmware: version 20201005_58d41d0
    • md/unraid: version 2.9.16: correction recording disk info with array Stopped; remove 'superblock dirty' handling
    • oot: Realtek r8152: version 2.14.0

    Management:

    • emhttpd: fix 'auto' setting where pools enabled for user shares should not be exported
    • emhttpd: permit Erase of 'DISK_DSBL_NEW' replacement devices
    • emhttpd: track clean/unclean shutdown using file 'config/forcesync'
    • emhttpd: avoid unnecessarily removing mover.cron file
    • modprobe: blacklist GPU drivers by default, config/modprobe.d/* can override at boot
    • samba: disable aio by default
    • startup: setup an overlayfs for /lib/modules and /lib/firmware
    • webgui: pools not enabled for user shares should not be selectable for cache
    • webgui: Add pools information to diagnostics
    • webgui: vnc: add browser cache busting
    • webgui: Multilanguage: Fix unable to delete / edit users
    • webgui: Prevent "Add" reverting to English when adding a new user with an invalid username
    • webgui: Fix Azure / Gray Switch Language being cut-off
    • webgui: Fix unable to use top right icons if notifications present
    • webgui: Changed: Consistency between dashboard and docker on accessing logs
    • webgui: correct login form wrong default case icon displayed
    • webgui: set 'mid-tower' default case icon
    • webgui: fix: jGrowl covering buttons
    • webgui: New Perms: Support multi-cache pools
    • webgui: Remove WG from Dashboard if no tunnels defined
    • webgui: dockerMan: Allow readmore in advanced view
    • webgui: dockerMan: Only allow name compatible with docker



    User Feedback

    Recommended Comments



    7 minutes ago, ich777 said:

    It's enough if you run the command 'nvidia-smi' from a console, since it will tell you whether the command is found or whether something is missing.

    But keep in mind, @Scroopy Noopers told me that he also had problems with the other Nvidia plugin and couldn't get it working. I'm currently in a private conversation with him; we will try to solve the problem and report back what the issue was.

    I'm a loyal user. I've been using it since you released the unraid kernel helper. It's very easy to use and feels great.


    On 11/14/2020 at 7:55 AM, ich777 said:

    Install new beta35 -> Reboot -> Download the 'Nvidia-Driver' from the CA App -> then I would recommend to reboot again -> then add the '--runtime=nvidia' back to the container and everything should work as expected again.

     

    If you remove '--runtime=nvidia' the container can't access the Nvidia driver.

    Working for Plex. Thank you. 

     

    Cannot get it to work for Emby. I get the following error:

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='emby' --net='host' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'TCP_PORT_8096'='8096' -e 'TCP_PORT_8920'='8920' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_7359'='7359' -e 'NVIDIA_VISIBLE_DEVICES'=' GPU-38ac4a82-0a7c-5e11-2f29-67386c69021c' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/TV Shows/':'/tv':'rw' -v '/mnt/user/Other/':'/other':'rw' -v '':'/music':'rw' -v '/mnt/user/appdata/emby':'/config':'rw' --runtime=nvidia 'linuxserver/emby:latest'

    e5a11bb71c56d3b0114fcb06ee9e5539cbcca17711554edf6a00a9b221366b8b
    docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-38ac4a82-0a7c-5e11-2f29-67386c69021c: unknown device\\n\""": unknown.


    Would be great to know if someone gets transcoding working with AMD and Emby. It seems that Emby is still not recognizing any GPU. I also did the modprobe thing mentioned in the OP.


    @ich777, what we need is a tutorial with complete instructions and guidance as to how to use your new plugin with both Dockers and VM's.  It may well be that everything that one needs to know is in this thread but it is spread out over some five pages at present and that page count will likely increase daily for the next several days.  I would prefer that you also provide a PDF version as I (personally) prefer to have printed hard copy that I can mark up with my notes. 


    This is just sad, very sad.

     

    unRAID is a unique product, but it's the community that really makes it shine. As an ordinary user, what really makes me feel safe is not that I'm running a perfect piece of software (it's not; no software ever will be), but having a reliable community that always has my back when I'm in trouble and is constantly making things better.

     

    I'm not in a place to judge, but I do see some utterly poor communication. This could have been a happy day, yet we are seeing the beginning of a crack.

     

    Guess who gets hurt? LOYAL USERS!

     

    Guess who gets hurt after users are hurt?

     

    Please, look at the bigger picture and fix it before it's too late. Don't bury what you have achieved together over a miscommunication.

     

    It's not worth it.


    @ich777 Hi! I Installed the NVIDIA Driver from CA, but It can't find my GTX 1060 6GB.  I was previously running your Kernel-Helper Pre-compiled Beta30.  I updated to Official Beta35, Uninstalled the Unraid-Kernel-Helper plugin, Rebooted, installed the NVidia Driver Plugin, rebooted, re-enabled the Docker, but still, the Nvidia Driver page in Settings doesn't list my card.  Any idea?


    I agree with @Frank1940 that we really need some sort of visual instructions set up in its own thread. A lot of users are having issues getting all the steps correct and getting their GPU to work in Plex.
     

    This is especially true since the Nvidia plugin is now gone. Is the only way to get the drivers now by downloading the beta version?

     

    I’m completely onboard with having the drivers now integrated officially but it looks like a lot of kinks still need to be worked out. Guess that’s why it’s in beta.

    2 hours ago, scott45333 said:

    'NVIDIA_VISIBLE_DEVICES'=' GPU-38ac4a82-0a7c-5e11-2f29-67386c69021c'

     

    docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-38ac4a82-0a7c-5e11-2f29-67386c69021c: unknown device\\n\""": unknown.

    These lines tell you what's wrong: there is actually a stray '\n' (newline) in front of the UUID (it looks like a space in front of the NVIDIA_VISIBLE_DEVICES value).
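    (For illustration, the corrected flag would have the UUID directly after the opening quote, with no leading whitespace:)

```shell
# Corrected container flag: no space/newline before the GPU UUID.
-e 'NVIDIA_VISIBLE_DEVICES'='GPU-38ac4a82-0a7c-5e11-2f29-67386c69021c'
```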

    4 hours ago, Frank1940 said:

    @ich777, what we need is a tutorial with complete instructions and guidance as to how to use your new plugin with both Dockers and VM's.  It may well be that everything that one needs to know is in this thread but it is spread out over some five pages at present and that page count will likely increase daily for the next several days.  I would prefer that you also provide a PDF version as I (personally) prefer to have printed hard copy that I can mark up with my notes. 

    I will look into this and make a thread/manual for that ASAP; give me a little time, I got home a few minutes ago. ;)

     

    Thread now live:

     

    31 minutes ago, Pducharme said:

    @ich777 Hi! I Installed the NVIDIA Driver from CA, but It can't find my GTX 1060 6GB.  I was previously running your Kernel-Helper Pre-compiled Beta30.  I updated to Official Beta35, Uninstalled the Unraid-Kernel-Helper plugin, Rebooted, installed the NVidia Driver Plugin, rebooted, re-enabled the Docker, but still, the Nvidia Driver page in Settings doesn't list my card.  Any idea?

     

    UPDATE:  I fixed it by re-installing the Nvidia driver.  I had closed the window before it finished. It said to wait, but it was confusing because the line just before said the plugin had finished downloading.

    1 minute ago, Pducharme said:

     

    UPDATE:  I fixed it by re-installing the Nvidia driver.  I had closed the window before it finished. It said to wait, but it was confusing because the line just before said the plugin had finished downloading.

    Will update the plugin and add a warning at the top not to close the window with the 'X' and to wait for the 'Done' button.


    Apologies if I missed it. I saw it kind of referenced a few comments back. So others don't rack their brains looking:

     

    You cannot see the new Nvidia-Driver plugin in the CA App Store until you are on at least beta 35. 

     

    Maybe add that to the post if not there already. 


    Support thread for the Nvidia-Driver Plugin is now live:

     

     


    The VM options page is a bit borked:

    [screenshot: m_20201115-1okm-21kb.png]

    and when I change the view:

    [screenshot: m_20201115-fggh-19kb.png]


    excuse the noob question but....

     

    Am I able to assign a card to a docker container and a VM?  I don't mean simultaneously, but let's say plex is using it for transcoding then a VM takes it over when it starts.

     

     

    6 minutes ago, bigmac5753 said:

    excuse the noob question but....

     

    Am I able to assign a card to a docker container and a VM?  I don't mean simultaneously, but let's say plex is using it for transcoding then a VM takes it over when it starts.

     

     

    Only if you stop the array, change the settings, and reboot.

    16 minutes ago, jonathanm said:

    Only if you stop the array, change the settings, and reboot.

    Are you 100% sure @jonathanm ?

     

    So far, with the so-called unofficial kernel, you weren't obliged to mess with settings and reboot to share (cautiously) a GPU between VMs and containers.

    You could pass through a GPU to a VM and start it, provided the GPU wasn't currently in use by a container; otherwise the system hung and you had to go through an unclean shutdown, I agree. But if no container was actively using the GPU, there was no issue.

    And once the VM was started, the containers aware of the GPU ('--runtime=nvidia') would fall back to CPU transcoding for Plex, CPU-only computing for F@h, etc., but would not crash, and neither would the system, of course.

    At least that was my understanding from the now-defunct Nvidia-Unraid support thread.

     

    Correct me if I am wrong. And if the new official solution has major new limitations, then please let the whole community know clearly.

    55 minutes ago, bigmac5753 said:

    excuse the noob question but....

     

    Am I able to assign a card to a docker container and a VM?  I don't mean simultaneously, but let's say plex is using it for transcoding then a VM takes it over when it starts.

     

     

    Please read the support thread that I've linked a few posts above...

    The answer is no...

    You can use one card for more than one container (but only if the card is capable of that), not for a VM and Docker at the same time.

    49 minutes ago, Gnomuz said:

    Correct me if I am wrong. And If the new official solution has new major limitations, then please let the whole community know clearly.

    It's the same as with the old solution, but I would never recommend passing through a GPU to both a VM and a Docker container.

    Like you've said, you could theoretically start a VM when no Docker container is using the graphics card, but then bad things can happen if something wants to use the graphics card while the VM is running (they don't have to happen, just saying).

     

    I think @jonathanm means that with the new betas you can bind your card to VFIO, and if you do that the card is not visible to, say, Docker containers, only to VMs, if I recall correctly; and if you bind or unbind a hardware device to VFIO you have to reboot.

    12 minutes ago, ich777 said:

    It's the same as with the old solution, but I would never recommend passing through a GPU to both a VM and a Docker container.

    Like you've said, you could theoretically start a VM when no Docker container is using the graphics card, but then bad things can happen if something wants to use the graphics card while the VM is running (they don't have to happen, just saying).

     

    I think @jonathanm means that with the new betas you can bind your card to VFIO, and if you do that the card is not visible to, say, Docker containers, only to VMs, if I recall correctly; and if you bind or unbind a hardware device to VFIO you have to reboot.

    Thanks for the quick answer.

    I was just trying to understand whether there was any major underlying difference between the two implementations. I do agree that trying to "share" a GPU between VMs and containers is risky and will in many cases lead to an unclean shutdown if you are not highly cautious. I've experienced it myself; I don't need convincing!

    And of course, if the GPU is bound to VFIO for a VM passthrough, when you want to use it back in container(s) you have to unbind it and reboot, nothing new.

    So now all my doubts are cleared up, and I think I am ready to upgrade to the brand-new setup proposed by 6.9.0-beta35.


    I am a fairly new (non-technical) Unraid user, a year or so in. I really hope the issues between Limetech and part of the community can be solved, because we need both.

     

    Cheers,

     

     

    Frode


    How does this work with DVB drivers?  If we aren't using customized bz* files then are you going to be including DVB drivers directly?  Or will one of the community devs be building a plug-in for those too?

    6 hours ago, ich777 said:

    Like you've said, you could theoretically start a VM when no Docker container is using the graphics card, but then bad things can happen if something wants to use the graphics card while the VM is running (they don't have to happen, just saying).

    Just an FYI: my server would hard-lock with no Dockers running if I tried to pass the GPU to a VM while the Nvidia driver was loaded, i.e. using the Nvidia Unraid custom build.

    I think if I stopped the service it was possible, but the real solution was to bind the device to VFIO so it was exclusive to VMs.

    For those of us running GPUs in both Docker and VM land, this is handy to know.
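    (For reference, binding a device to vfio-pci outside the webgui boils down to a sysfs sequence roughly like the one below. The PCI address is illustrative, and in the 6.9 betas this is normally done from Tools > System Devices instead:)

```shell
# Illustrative only: bind the GPU at PCI address 0000:01:00.0 to vfio-pci
# by hand. Requires root; on Unraid, prefer the webgui's System Devices page.
modprobe vfio-pci
echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "vfio-pci"     > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo "0000:01:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
```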


    I can confirm that the new Nvidia drivers work for me with a Quadro P400.  I followed the previous guidance and:

    • Stopped Docker services
    • Installed unRAID 6.9.0-beta35 (upgrading from 6.8.3)
    • Rebooted
    • Removed the unraid-nvidia plugin
    • Installed the Nvidia-Driver app from Community Applications
      • Waited until it was completely downloaded and installed.  It took about 3 minutes for me, and I have 1 Gb/s internet service.  Make sure you WAIT and read the status window carefully.
    • Rebooted
    • Restarted the Docker services

    In my case I did not need to edit my Plex Docker container as it already had all of the necessary settings in place.

    2 hours ago, mkfelidae said:

    How does this work with DVB drivers?  If we aren't using customized bz* files then are you going to be including DVB drivers directly?  Or will one of the community devs be building a plug-in for those too?

    I also created a plugin for DVB; you can download it through the CA App (you have to be on at least Unraid version 6.9.0-beta35 to see it in the CA App).

    Currently the LibreELEC and DigitalDevices drivers are supported.

     

    If you have any further questions, feel free to ask.

     

    EDIT: Please note that this is currently community-driven and I have to rebuild the modules and tools every time an update of Unraid from @limetech is rolled out.

    If you are using the plugin, don't update instantly to the new version; I will create a support post for the plugin as soon as I get home from work and note which versions of Unraid the drivers are available for.





