• Unraid OS version 6.9.0-beta35 available


    limetech

    New in this release:


    GPU Driver Integration

    Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD).  These drivers are blacklisted by default via 'conf' files in /etc/modprobe.d:

    /etc/modprobe.d/ast.conf
    /etc/modprobe.d/amdgpu.conf
    /etc/modprobe.d/i915.conf
    /etc/modprobe.d/radeon.conf

    Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel.
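
    The blacklist line itself uses standard modprobe syntax; for example, amdgpu.conf contains:

    blacklist amdgpu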

     

    However, it is possible to override the settings in these files by creating the directory 'config/modprobe.d' on your USB flash boot device and then creating a file of the same name in that directory.  For example, to un-blacklist amdgpu, type these commands in a Terminal session:

    mkdir /boot/config/modprobe.d
    touch /boot/config/modprobe.d/amdgpu.conf

    When Unraid OS boots, before the Linux kernel executes device discovery, we copy any files from /boot/config/modprobe.d to /etc/modprobe.d.  Since amdgpu.conf on the flash is an empty file, it will effectively cancel the driver from being blacklisted.
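
    After rebooting, a quick way to verify that the driver actually loaded is to check for the module and for the DRM device nodes it creates (output depends on your hardware):

    lsmod | grep amdgpu
    ls -l /dev/dri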

     

    This technique can be used to set boot-time options for any driver as well.
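
    For example, creating /boot/config/modprobe.d/i915.conf with the following content both un-blacklists i915 (since the file replaces the default conf) and passes a load-time option (enable_guc is a real i915 module parameter, shown here purely as an illustration; whether you want it depends on your hardware):

    options i915 enable_guc=2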

     

    Better Support for Third Party Drivers

    Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware.  We now set up an overlayfs on each of these mount points, making it possible to install 3rd party modules at boot time, provided those modules are built against the same kernel version.  This technique may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware and bzroot files.
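
    As a rough sketch of what the overlay enables (the module name here is hypothetical and the packaging details are up to each developer), a boot-time script can now simply copy a pre-built module into place and load it:

    install -D mydriver.ko /lib/modules/$(uname -r)/extra/mydriver.ko
    depmod -a
    modprobe mydriver

    Without the overlayfs, the copy would fail because the underlying squashfs mount is read-only.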

     

    To go along with the other GPU drivers included in this release, we have created a separate installable Nvidia driver package.  Since each new kernel version requires drivers to be rebuilt, we have set up a feed that enumerates each driver available with each kernel.
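
    Conceptually, an installer then only has to match the running kernel against that feed; something along these lines (the URL and feed format here are purely hypothetical):

    KERNEL=$(uname -r)
    curl -s https://example.com/nvidia-driver-feed | grep "$KERNEL"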

     

    The easiest way to install the Nvidia driver, if you require it, is to make use of a plugin provided by Community member @ich777.  This plugin uses the feed to install the correct driver for the currently running kernel.  A big thank you to @ich777 for providing assistance and coding up the plugin.

     

    Linux Kernel

    This release includes Linux kernel 5.8.18.  We realize the 5.8 kernel has reached EOL and we are currently busy upgrading to 5.9.

     


     

    Version 6.9.0-beta35 2020-11-12 (vs -beta30)

    Base distro:

    • aaa_elflibs: version 15.0 build 25
    • brotli: version 1.0.9 build 2
    • btrfs-progs: version 5.9
    • ca-certificates: version 20201016
    • curl: version 7.73.0
    • dmidecode: version 3.3
    • ethtool: version 5.9
    • freetype: version 2.10.4
    • fuse3: version 3.10.0
    • git: version 2.29.1
    • glib2: version 2.66.2
    • glibc-solibs: version 2.30 build 2
    • glibc-zoneinfo: version 2020d
    • glibc: version 2.30 build 2
    • iproute2: version 5.9.0
    • jasper: version 2.0.22
    • less: version 563
    • libcap-ng: version 0.8 build 2
    • libevdev: version 1.10.0
    • libgcrypt: version 1.8.7
    • libnftnl: version 1.1.8
    • librsvg: version 2.50.1
    • libwebp: version 1.1.0 build 3
    • libxml2: version 2.9.10 build 3
    • lmdb: version 0.9.27
    • nano: version 5.3
    • ncurses: version 6.2_20201024
    • nginx: version 1.19.4
    • ntp: version 4.2.8p15 build 3
    • openssh: version 8.4p1 build 2
    • pam: version 1.4.0 build 2
    • rpcbind: version 1.2.5 build 2
    • samba: version 4.12.9 (CVE-2020-14318 CVE-2020-14323 CVE-2020-14383)
    • talloc: version 2.3.1 build 4
    • tcp_wrappers: version 7.6 build 3
    • tdb: version 1.4.3 build 4
    • tevent: version 0.10.2 build 4
    • usbutils: version 013
    • util-linux: version 2.36 build 2
    • vsftpd: version 3.0.3 build 7
    • xfsprogs: version 5.9.0
    • xkeyboard-config: version 2.31
    • xterm: version 361

    Linux kernel:

    • version 5.8.18
    • added GPU drivers:
      • CONFIG_DRM_RADEON: ATI Radeon
      • CONFIG_DRM_RADEON_USERPTR: Always enable userptr support
      • CONFIG_DRM_AMDGPU: AMD GPU
      • CONFIG_DRM_AMDGPU_SI: Enable amdgpu support for SI parts
      • CONFIG_DRM_AMDGPU_CIK: Enable amdgpu support for CIK parts
      • CONFIG_DRM_AMDGPU_USERPTR: Always enable userptr write support
      • CONFIG_HSA_AMD: HSA kernel driver for AMD GPU devices
    • kernel-firmware: version 20201005_58d41d0
    • md/unraid: version 2.9.16: correct recording of disk info with array Stopped; remove 'superblock dirty' handling
    • oot (out-of-tree): Realtek r8152: version 2.14.0

    Management:

    • emhttpd: fix 'auto' setting where pools enabled for user shares should not be exported
    • emhttpd: permit Erase of 'DISK_DSBL_NEW' replacement devices
    • emhttpd: track clean/unclean shutdown using file 'config/forcesync'
    • emhttpd: avoid unnecessarily removing mover.cron file
    • modprobe: blacklist GPU drivers by default, config/modprobe.d/* can override at boot
    • samba: disable aio by default
    • startup: setup an overlayfs for /lib/modules and /lib/firmware
    • webgui: pools not enabled for user shares should not be selectable for cache
    • webgui: Add pools information to diagnostics
    • webgui: vnc: add browser cache busting
    • webgui: Multilanguage: Fix unable to delete / edit users
    • webgui: Prevent "Add" reverting to English when adding a new user with an invalid username
    • webgui: Fix Azure / Gray Switch Language being cut-off
    • webgui: Fix unable to use top right icons if notifications present
    • webgui: Changed: Consistency between dashboard and docker on accessing logs
    • webgui: correct login form wrong default case icon displayed
    • webgui: set 'mid-tower' default case icon
    • webgui: fix: jGrowl covering buttons
    • webgui: New Perms: Support multi-cache pools
    • webgui: Remove WG from Dashboard if no tunnels defined
    • webgui: dockerMan: Allow readmore in advanced view
    • webgui: dockerMan: Only allow name compatible with docker




    User Feedback

    Recommended Comments



    2 hours ago, rallos_hoo said:

    May I ask what's the difference between AMDGPU and RADEONGPU drivers? 

    Which should I use for a Renoir 4750G?

    I believe radeon supports older cards and amdgpu provides better support for newer ones, so for the Ryzen 4750G's Vega graphics I'd try the amdgpu driver.


    So I currently have "Unraid NVIDIA" installed in 6.8.3. I found this thread after the original support thread got closed.

    If I understand correctly, due to some community drama, that plugin is no more, and apparently resources the plugin uses are gone too.

     

    I currently get a bunch of errors on the page where I could have reverted to the stock build (see screenshot).

    Is there any other way to revert? When the next stable release comes out, can I still safely upgrade?

     

    ...or are all Unraid NVIDIA users now considered collateral damage?

    2020-11-14 21_31_45-Taranis_Unraid-Nvidia - Chromium.png


    Just wanted to share a quick success story. Previously (and for the past few releases now) I was using @ich777's Kernel Docker container to compile with the latest Nvidia driver. Excited to see this brought in natively; it worked out of the box for me.

     

    I use the regular Plex docker container for HW transcoding (adding --runtime=nvidia in the extra parameters and properly setting the two container variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES).
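
    For anyone doing this by hand, the equivalent docker run flags look roughly like this (the image name and variable values are just examples; NVIDIA_VISIBLE_DEVICES can also be set to a specific GPU UUID):

    docker run -d --name=plex \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      plexinc/pms-docker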

    To prepare for this upgrade, while still on beta 30:

     

    - disabled docker service

    - upgraded to beta 35

    - uninstalled Kernel Helper Plugin

    - uninstalled Kernel Helper Docker

    - rebooted

    - installed Nvidia Drivers from CA

    - rebooted

    - reenabled docker

     

    So far all is working fine on my Quadro P2200

     

    Install Nvidia Drivers


     

    Docker Settings:


     


     


    Validate in Settings (P2200 and drivers detected fine)


     

    HW transcoding working fine


     

    18 minutes ago, BiteyBasilisk said:

    ...or are all Unraid NVIDIA users now considered collateral damage?

     

    Unfortunately, there is no current way to install the nVidia drivers on Unraid 6.8.3 except for the nVidia Kernel Builder container from @ich777.  IMHO, there are no problems with installing 6.9.0-beta33+ and then utilizing the official method of getting the nVidia drivers onto your system (I've been running the beta series on all my servers for months now), but it's all up to your own personal comfort level.

     

    18 minutes ago, BiteyBasilisk said:

    can I still safely upgrade?

    Yes.  The normal upgrade procedure will automatically upgrade the OS and revert the changes the plugin made to the boot files.  

     

    If you really want to revert back to the stock 6.8.3, then it's easiest to simply download the zip file from https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.8.3-x86_64.zip, and then overwrite the bz* files on the flash drive with the ones in the zip (10 files total)
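
    For example, assuming you've copied the zip onto the flash drive (the paths here are just one way to do it):

    cd /tmp
    unzip -o /boot/unRAIDServer-6.8.3-x86_64.zip -d unraid-683
    cp unraid-683/bz* /boot/
    reboot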


    So we don't need to use the following anymore for plex ?
    NVIDIA_VISIBLE_DEVICES:
    NVIDIA_DRIVER_CAPABILITIES:

    and only need to keep using: --runtime=nvidia


    I got the update to beta 35, the nvidia-plugin installed, patch installed, but doesn't seem to be working in plex docker.  I added --runtime=nvidia to the extras line in docker... is that it?

    2 hours ago, TRaSH said:

    So we don't need to use the following anymore for plex ?
    NVIDIA_VISIBLE_DEVICES:
    NVIDIA_DRIVER_CAPABILITIES:

    and only need to keep using: --runtime=nvidia

    Those are still needed. (I added some more screenshots above)

    13 hours ago, -Daedalus said:

    Most people are installing unRAID on 16-32GB sticks these days.

    Still running unRAID on my 1GB Kingston DataTraveler from 2008.


    I’ve been around a little while. I always follow the boards even though I have very little life time to give to being active in the community anymore. 
     

    I felt the need to post to say I can completely appreciate how the guys at @linuxserver.io feel. 
     

    I was lucky enough to be a part of the @linuxserver.io team for a short while and I can personally attest to how much personal time and effort they put into developing, stress testing and supporting their work.

     

    While @limetech has developed a great base product, I think it's right to acknowledge that much of the popularity and success of the product is down as much to community development and support (which is head and shoulders above by comparison) as it is to the work of the company. 
     

    As a now outsider looking in, my personal observation is that the use of unRAID exploded due to the availability of stable, regularly updated media apps like Plex (the officially supported one was just left to rot) and then exploded again with the emergence of the @linuxserver.io nVidia build and the support that came with it. 


    Given that the efforts of the community and groups like @linuxserver.io are even used in unRAID marketing, I feel this is a show of poor form. 
     

    I feel frustrated at Tom's "I didn't know I needed permission ...." comment, as it isn't about that. It's about respect and communication. A quick "call" to the @linuxserver.io team to let them know of the plan (yes, I know the official team don't like sharing plans at the risk of setting expectations they then won't meet) and to (even privately) acknowledge the work that has contributed (and continues to contribute) to the success of unRAID, letting them be a part of it, would have cost nothing but would have been worth so much. I know the guys would have been supportive too. 
     

    I hope the two teams can work it out, and that @limetech don't forget what (and who) helped them get to where they are, and perhaps look at other companies who have alienated their communities through poor decisions and communication. Don't make this the start of a slippery slide. 


    Sorry, GPU passthrough newbie here.

     

    If I were to enable the AMDGPU driver via the method detailed in the OP, am I to expect the GPU to just be available for transcoding in official docker containers from Emby and Plex?

     

    If not, are there any adjustments that I or official maintainers are expected to make to these docker containers to enable GPU transcoding?

     

    Thanks in advance


    I am a little confused, I am not seeing "Nvidia Drivers" in Community applications?  I am afraid to upgrade from beta 25 as I obviously now can not easily roll back without the Nvidia build plugin functioning anymore :(

    5 hours ago, deadsoulz said:

    I got the update to beta 35, the nvidia-plugin installed, patch installed, but doesn't seem to be working in plex docker.

    What have you done exactly and what does not work?

    Installed beta35, downloaded the Nvidia driver from the CA App, rebooted and/or restarted the Docker service?

    Can you send a screenshot from the Plugin itself?

     

    10 minutes ago, sittingmongoose said:

    I am a little confused, I am not seeing "Nvidia Drivers" in Community applications?  I am afraid to upgrade from beta 25 as I obviously now can not easily roll back without the Nvidia build plugin functioning anymore :(

    You have to install the new beta first and then install the drivers from the CA App; after that I recommend rebooting, since the Docker service has to be restarted.

    15 minutes ago, ich777 said:

    What have you done exactly and what does not work?

    Installed beta35, downloaded the Nvidia driver from the CA App, rebooted and/or restarted the Docker service?

    Can you send a screenshot from the Plugin itself?

     

    You have to install the new beta first and then install the drivers from the CA App; after that I recommend rebooting, since the Docker service has to be restarted.

    Ah, I only have beta 25 installed.  That’s why I can’t see that plug-in in CA, gotcha.  Thanks!

    36 minutes ago, sittingmongoose said:

    I am a little confused, I am not seeing "Nvidia Drivers" in Community applications?  I am afraid to upgrade from beta 25 as I obviously now can not easily roll back without the Nvidia build plugin functioning anymore :(

    Following @CHBMB's request to have the support thread locked (and that request being actioned), along with his comment that all development and support for it has now ceased following @limetech's announcement, it wouldn't surprise me if that app has been removed from CA altogether. CA is also a community app, and the developer AFAIK still has a close relationship with the @linuxserver.io team. 
     

    It appears, therefore, that to use Nvidia drivers with any future release of unRAID you must use the stock build (which now has them in, of course). How to configure your dockers to use those stock drivers is another thing, and something I haven't researched yet. 


    As a user of Unraid I am very scared about the current trends.
    Unraid as a base is a very good server operating system, but what makes it special are the community applications.

    I would be very sad if this breaks apart because of possibly wrong or misunderstood communication.

    I hope that everyone will come together again.
    For us users, that would be a real pleasure.

     

    I have 33 docker containers and 8 VMs running on my system, and I hope that my system will continue to be as usable as before. I have many containers from linuxserver.io.

     

    I am grateful for the support from limetech & the whole community and hope it will be continued.

     

    Sorry for my English; I hope you could understand me.

     


    Hi! Am I blind, or where can I download the beta version of unRAID 6.9? I want to test whether version 6.9 fixes the unsolved bugs I had with version 6.8.x.

    Sorry for the basic question, but until last year the beta releases could be downloaded directly from the website and/or the USB creator application. Thanks in advance!

    11 minutes ago, cap089 said:

    Am I blind or where can I download the beta version of unRAID 6.9?

    In Unraid itself go to Tools > Update, choose 'Next', then click update and reboot your server.

    16 hours ago, Scroopy Noopers said:

    nvidia-smi reports no device found, however here is the screenshot of devices

     


     

    As far as the driver number goes, I'm not sure why it isn't reporting.

    So if the driver number isn't reporting and the device isn't bound to VFIO, does that mean something is up with the kernel, and if so, how do I fix it?

    15 minutes ago, Scroopy Noopers said:

    So if the driver number isn't reporting and the device isn't bound to VFIO, does that mean something is up with the kernel, and if so, how do I fix it?

    Can you send me a PM so we can try to solve that there instead of spamming this thread, and then post the solution? :)

    Can you send me a screenshot of the output of the command 'nvidia-smi'?

    22 minutes ago, Scroopy Noopers said:

    So if the driver number isn't reporting and the device isn't bound to VFIO, does that mean something is up with the kernel, and if so, how do I fix it?

    I am a Chinese user; I can only communicate in the community using translation software, so there may be inaccuracies. Please understand.

    After the NVIDIA driver plug-in is installed, restart your Unraid server. If it is still unavailable, enter cd /usr/lib64 && ls -al |grep "nvidia" to see if there is an NVIDIA-related driver. If not, the installation was not successful.

    17 minutes ago, stl88083365 said:

    I am a Chinese user; I can only communicate in the community using translation software, so there may be inaccuracies. Please understand.

    After the NVIDIA driver plug-in is installed, restart your Unraid server. If it is still unavailable, enter cd /usr/lib64 && ls -al |grep "nvidia" to see if there is an NVIDIA-related driver. If not, the installation was not successful.

    I'm seeing the following text, though from my understanding of it, it looks like there is an Nvidia related driver installed.

    root@PowerEdgeR510:~# cd /usr/lib64 && ls -al |grep "nvidia"
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libEGL_nvidia.so.0 -> libEGL_nvidia.so.455.38*
    -rwxr-xr-x  1 root root   1346224 Nov  2 11:27 libEGL_nvidia.so.455.38*
    lrwxrwxrwx  1 root root        29 Nov 15 07:25 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.455.38*
    -rwxr-xr-x  1 root root     63784 Nov  2 11:27 libGLESv1_CM_nvidia.so.455.38*
    lrwxrwxrwx  1 root root        26 Nov 15 07:25 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.455.38*
    -rwxr-xr-x  1 root root    112344 Nov  2 11:27 libGLESv2_nvidia.so.455.38*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libGLX_indirect.so.0 -> libGLX_nvidia.so.455.38*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libGLX_nvidia.so.0 -> libGLX_nvidia.so.455.38*
    -rwxr-xr-x  1 root root   1123872 Nov  2 11:27 libGLX_nvidia.so.455.38*
    lrwxrwxrwx  1 root root        24 Nov 15 07:25 libnvidia-allocator.so -> libnvidia-allocator.so.1*
    lrwxrwxrwx  1 root root        29 Nov 15 07:25 libnvidia-allocator.so.1 -> libnvidia-allocator.so.455.38*
    -rwxr-xr-x  1 root root     82280 Nov  2 11:27 libnvidia-allocator.so.455.38*
    -rwxr-xr-x  1 root root    730176 Nov  2 11:27 libnvidia-cbl.so.455.38*
    lrwxrwxrwx  1 root root        18 Nov 15 07:25 libnvidia-cfg.so -> libnvidia-cfg.so.1*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libnvidia-cfg.so.1 -> libnvidia-cfg.so.455.38*
    -rwxr-xr-x  1 root root    206160 Nov  2 11:27 libnvidia-cfg.so.455.38*
    -rwxr-xr-x  1 root root  50277400 Nov  2 11:27 libnvidia-compiler.so.455.38*
    lrwxrwxrwx  1 root root        30 Nov 15 07:25 libnvidia-egl-wayland.so.1 -> libnvidia-egl-wayland.so.1.1.5*
    -rwxr-xr-x  1 root root     37760 Nov  2 11:27 libnvidia-egl-wayland.so.1.1.5*
    -rwxr-xr-x  1 root root  32335784 Nov  2 11:27 libnvidia-eglcore.so.455.38*
    lrwxrwxrwx  1 root root        21 Nov 15 07:25 libnvidia-encode.so -> libnvidia-encode.so.1*
    lrwxrwxrwx  1 root root        26 Nov 15 07:25 libnvidia-encode.so.1 -> libnvidia-encode.so.455.38*
    -rwxr-xr-x  1 root root    104896 Nov  2 11:27 libnvidia-encode.so.455.38*
    lrwxrwxrwx  1 root root        18 Nov 15 07:25 libnvidia-fbc.so -> libnvidia-fbc.so.1*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libnvidia-fbc.so.1 -> libnvidia-fbc.so.455.38*
    -rwxr-xr-x  1 root root    127336 Nov  2 11:27 libnvidia-fbc.so.455.38*
    -rwxr-xr-x  1 root root  34290928 Nov  2 11:27 libnvidia-glcore.so.455.38*
    -rwxr-xr-x  1 root root    627144 Nov  2 11:27 libnvidia-glsi.so.455.38*
    -rwxr-xr-x  1 root root  12211864 Nov  2 11:27 libnvidia-glvkspirv.so.455.38*
    -rwxr-xr-x  1 root root   1359312 Nov  2 11:28 libnvidia-gtk2.so.455.38*
    -rwxr-xr-x  1 root root   1368016 Nov  2 11:28 libnvidia-gtk3.so.455.38*
    lrwxrwxrwx  1 root root        18 Nov 15 07:25 libnvidia-ifr.so -> libnvidia-ifr.so.1*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libnvidia-ifr.so.1 -> libnvidia-ifr.so.455.38*
    -rwxr-xr-x  1 root root    207064 Nov  2 11:27 libnvidia-ifr.so.455.38*
    lrwxrwxrwx  1 root root        17 Nov 15 07:25 libnvidia-ml.so -> libnvidia-ml.so.1*
    lrwxrwxrwx  1 root root        22 Nov 15 07:25 libnvidia-ml.so.1 -> libnvidia-ml.so.455.38*
    -rwxr-xr-x  1 root root   1922232 Nov  2 11:27 libnvidia-ml.so.455.38*
    lrwxrwxrwx  1 root root        23 Nov 15 07:25 libnvidia-ngx.so.1 -> libnvidia-ngx.so.455.38*
    -rwxr-xr-x  1 root root   3046120 Nov  2 11:27 libnvidia-ngx.so.455.38*
    lrwxrwxrwx  1 root root        26 Nov 15 07:25 libnvidia-opencl.so.1 -> libnvidia-opencl.so.455.38*
    -rwxr-xr-x  1 root root  38154880 Nov  2 11:27 libnvidia-opencl.so.455.38*
    lrwxrwxrwx  1 root root        26 Nov 15 07:25 libnvidia-opticalflow.so -> libnvidia-opticalflow.so.1*
    lrwxrwxrwx  1 root root        31 Nov 15 07:25 libnvidia-opticalflow.so.1 -> libnvidia-opticalflow.so.455.38*
    -rwxr-xr-x  1 root root     42592 Nov  2 11:27 libnvidia-opticalflow.so.455.38*
    lrwxrwxrwx  1 root root        29 Nov 15 07:25 libnvidia-ptxjitcompiler.so -> libnvidia-ptxjitcompiler.so.1*
    lrwxrwxrwx  1 root root        34 Nov 15 07:25 libnvidia-ptxjitcompiler.so.1 -> libnvidia-ptxjitcompiler.so.455.38*
    -rwxr-xr-x  1 root root  10475688 Nov  2 11:27 libnvidia-ptxjitcompiler.so.455.38*
    -rwxr-xr-x  1 root root  58569432 Nov  2 11:27 libnvidia-rtcore.so.455.38*
    -rwxr-xr-x  1 root root     14480 Nov  2 11:27 libnvidia-tls.so.455.38*
    lrwxrwxrwx  1 root root        31 Nov 15 07:25 libvdpau_nvidia.so -> vdpau/libvdpau_nvidia.so.455.38*

     

    Just now, Scroopy Noopers said:

    I'm seeing the following text, though from my understanding of it, it looks like there is an Nvidia related driver installed.

    

     

    In China, some of us enthusiasts study and use this together. In our spare time, we also use remote tools (such as TeamViewer or QQ) to help new friends, or friends with problems, work out solutions together. Would you like me to use a remote tool to help you?

    23 minutes ago, stl88083365 said:

    I am a Chinese user; I can only communicate in the community using translation software, so there may be inaccuracies. Please understand.

    After the NVIDIA driver plug-in is installed, restart your Unraid server. If it is still unavailable, enter cd /usr/lib64 && ls -al |grep "nvidia" to see if there is an NVIDIA-related driver. If not, the installation was not successful.

    It should be enough if you run the command 'nvidia-smi' from the Unraid console, since it will tell you if the command is found or not, or if anything is missing.

     

    But keep in mind @Scroopy Noopers told me that he also had problems with the other Nvidia plugin and could not get it to work. I'm currently in a private conversation with him; we are trying to solve this and will report back what the issue was.

    2 minutes ago, stl88083365 said:

    In China, some of us enthusiasts study and use this together. In our spare time, we also use remote tools (such as TeamViewer or QQ) to help new friends, or friends with problems, work out solutions together. Would you like me to use a remote tool to help you?

    After all, customs differ between countries, and I don't know if this would infringe on your privacy; if so, I'm very sorry. You can run nvidia-smi from the command console to see if it outputs the information about the graphics card correctly.





