
ich777

Community Developer
Everything posted by ich777

  1. You can actually install my Unraid-Kernel-Helper-Plugin from the CA App. Thanks!
  2. As I said, there is a way to do that, but these are only workarounds. Have you installed the Nvidia-Driver package already? This would be a workaround, but I'm not sure if it's possible on Unraid because the X libraries are not installed: Click. As far as I know, the fan speed is managed by the BIOS and set by the manufacturer when no X application is using the card. My Zotac is also at 46% but I can't even hear it at that percentage...
  3. Available versions: The build process is now automated and should be finished about 1 hour after a new Unraid version is released, provided everything went well and no compile error occurred. You can check here which kernel versions the drivers are available for: Click
  4. DVB-Driver (only Unraid 6.9.0beta35 and up)

     This plugin adds DVB drivers to Unraid. Please note that this plugin is community driven: whenever a newer version of Unraid is released, the drivers/modules have to be updated (please make a short post here or check the second post to see whether the drivers/modules have already been updated; if you update to a newer Unraid version before the new drivers/modules are built, this could break your DVB support in Unraid)!

     Installation of the plugin (this is only necessary for the first installation of the plugin):
     1. Go to the Community Applications app, search for 'DVB-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-dvb-driver/master/dvb-driver.plg
     2. Wait for the plugin to install successfully (don't close the window with the 'X', wait for the 'DONE' button to appear; the installation can take some time depending on your internet connection, since the plugin downloads a custom bzimage with the necessary DVB kernel modules plus the DVB driver itself and then installs them on your Unraid server).
     3. Click on 'DONE', read the alert message that appears in the top right corner and close it with the 'X'.
     4. You can skip this step if you want to use the LibreELEC driver package (selected by default). If you want to choose another driver package, go to the plugin itself (PLUGINS -> DVB-Driver), choose the version you want to install and click on 'UPDATE' (currently LibreELEC, TBS-OpenSource, DigitalDevices and Xbox One USB DVB Adapter drivers are available).
     5. Reboot your server (MAIN -> REBOOT).
     6. After the reboot, go back to the plugin page (PLUGINS -> DVB-Driver) and check if the card(s) are properly recognized (if your card(s) aren't recognized, please see the Troubleshooting section or make a post in this thread, but be sure to read the Reporting Problems section in this post first).

     Utilize the DVB card(s) in a Docker container:
     To use your DVB card(s) in a Docker container, in this example Tvheadend, add '--device=/dev/dvb/' to the 'Extra Parameters' in your Docker template (you have to enable 'Advanced view' in the template to see this option). Now you should see the card(s) in the Docker container.

     IMPORTANT: If you switch between driver packages, a reboot is always necessary!

     DigitalDevices Notes: (this applies only if you selected the DigitalDevices drivers in the plugin)
     If you are experiencing I²C timeouts in your syslog, please append 'ddbridge.msi=0' to your syslinux configuration (example below). You can also switch the operating modes for the Max S8/SX8/SX8 Basic with the following options:
     'ddbridge.fmode=0' - 4-tuner mode (internal multiswitch deactivated)
     'ddbridge.fmode=1' - Quad-LNB / normal outputs of the multiswitch
     'ddbridge.fmode=2' - Quattro-LNB / cascade outputs of the multiswitch
     'ddbridge.fmode=3' - Unicable or JESS LNB / Unicable output of the multiswitch
     Link to source
     You can also combine 'ddbridge.msi=0' (but you don't have to if you don't experience I²C timeouts) with, for example, 'ddbridge.fmode=0'. Here is a short how-to:
     1. Go to the 'Main' tab and click on the blue text 'Flash'.
     2. Scroll down a little and append the commands mentioned above to the syslinux configuration (as stated above, you don't need to append 'ddbridge.msi=0' if you don't experience I²C timeouts); a sketch of the resulting boot entry follows at the end of this post.
     3. Click on 'Apply' at the bottom and reboot your server!
     TBS-OpenSource Notes:
     You can also switch the operating modes of the TBS cards, in this example the TBS-6909 or TBS-6903-x, by appending one of the following commands to your syslinux configuration (how-to above, sketch below):
     'mxl58x.mode=0' - Mode 0 -> see picture below
     'mxl58x.mode=1' - Mode 1 -> see picture below
     'mxl58x.mode=2' - Mode 2 -> see picture below
     Modes: Link to source

     Troubleshooting: (this section will be updated as soon as someone reports a common issue and will grow over time)

     Reporting Problems:
     If you have a problem, please always include a screenshot of the plugin page, a text file or pastebin link with the output of 'lspci -v' or 'lsusb -v' - depending on whether your card is PCIe or USB (simply open an Unraid terminal with the button in the top right of Unraid and type in one of the two commands without quotes) - and also the output of 'dmesg' as a text file or pastebin link (simply to not spam the thread with the output).
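     To make the syslinux edits from the DigitalDevices and TBS notes above more concrete, here is a minimal sketch of what the boot entry can look like after editing it on the Flash page. The surrounding lines are assumptions based on a stock Unraid syslinux configuration - keep whatever is already on your own append line:

        label Unraid OS
          menu default
          kernel /bzimage
          # DigitalDevices example: disable MSI (only needed if you see I²C timeouts) and use 4-tuner mode
          append ddbridge.msi=0 ddbridge.fmode=0 initrd=/bzroot

        # TBS example (TBS-6909 / TBS-6903-x): select operating mode 1 instead
        #   append mxl58x.mode=1 initrd=/bzroot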
  5. The fan speed should be managed by the card itself and should have nothing to do with the driver... Anyway, I think there are workarounds out there, but I can't recommend them since you would be setting a fixed value, and if the card gets too hot the fan wouldn't ramp up.
  6. The hard limit in the kernel itself is set to 8 by default and I never changed that, since nobody in my Unraid-Kernel-Helper thread ever requested a higher number of tuners. I can only tell you that I'm using 2 dual Digital Devices DVB-C cards (one physically attached to the PCIe slot and an add-on card that is attached to the 'main' card in the PCIe slot), so I have a total of 4 tuners and, depending on the channel (HD or SD), can watch up to 7 simultaneous streams if I recall correctly. It's only the bzimage file that you have to replace if you want a higher limit, and replacing it is one of the things my plugin does to activate the DVB capabilities of Unraid.
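     For reference, here is a minimal sketch of where that limit would be raised when building a custom bzimage, assuming the limit in question is the kernel's DVB_MAX_ADAPTERS option (the value 16 is just an example):

        # in the kernel .config used to build the custom bzimage
        CONFIG_DVB_MAX_ADAPTERS=16
        # rebuild the kernel and replace the bzimage on the Unraid USB flash drive afterwards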
  7. I'm not very familiar with Intel transcoding, but I think you should NOT remove that.
  8. Please look at this post; I've already made a plugin for that. For stable builds you can use the Kernel-Helper for now, or a prebuilt image.
  9. My apologies for that, I should have been a little clearer; I think this was solved in a PM afterwards. You have to download the files manually and place them in the directory that is quoted down below: Then everything should work as expected. I've never used the console directly; try to use RCON instead (there are many apps out there for connecting directly to RCON from your mobile phone - iOS & Android - and there are also tools for Windows). I would have to build in screen or something like that, but I want to avoid that if possible. Please tell me if my posted solution is suitable for you...
  10. I also created a plugin for DVB and you can download it through the CA App (you have to be at least on Unraid version 6.9.0beta35 to see it in the CA App). Currently LibreELEC and DigitalDevices drivers are supported. If you have any further questions, feel free to ask. EDIT: Please note that this is currently community driven and I have to rebuild the modules and tools every time an update of Unraid from @limetech is rolled out. If you are using the plugin, don't update instantly to the new version. I will create a support post for the plugin as soon as I get home from work and note which versions of Unraid the drivers are available for.
  11. It's the same as the old solution, but I would never recommend passing through one GPU to both a VM and a Docker container. Like you've said, you could theoretically start a VM when no Docker container is using the graphics card, but even then bad things can happen if something wants to use the graphics card while the VM is running (they don't have to happen, just saying). I think @jonathanm means that with the new betas you can bind your card to VFIO, and if you do that the card is not visible to, let's say, Docker containers, only to VMs, if I recall correctly; and if you bind or unbind a hardware device to VFIO you have to reboot.
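      As a rough sketch, the older syslinux-based way of binding a card to VFIO looks like the following (the vendor:device ID 10de:1b81 is just a placeholder - look up your own with 'lspci -nn'; on the newer betas the TOOLS -> System devices page does this for you):

        label Unraid OS
          menu default
          kernel /bzimage
          # bind every device with this vendor:device ID to vfio-pci at boot,
          # so it is only available for VM passthrough and not visible to Docker containers
          append vfio-pci.ids=10de:1b81 initrd=/bzroot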
  12. Please read the support thread that I've linked a few posts above... The answer is no... You can use one card for more than one container (but only if the card is capable of that), but not for a VM and Docker at the same time.
  13. Support thread for the Nvidia-Driver Plugin is now live:
  14. To utilize your Nvidia graphics card in your Docker container(s), the basic steps are:
      1. Add '--runtime=nvidia' to the 'Extra Parameters' in your Docker template (you have to enable 'Advanced view' in the template to see this option).
      2. Add a variable to your Docker template with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value your GPU UUID (like 'GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd').
      3. Add a variable to your Docker template with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all'.
      4. Make sure to enable hardware transcoding in the application/container itself.
      See the detailed instructions below for Emby, Jellyfin & Plex (alphabetical order).

      UUID: You can get the UUID of your graphics card in the Nvidia-Driver plugin itself (PLUGINS -> Nvidia-Driver). Please make sure there is no leading space!

      NOTE: You can use one card for more than one container at the same time - depending on the capabilities of your card.

      Emby:
      Note: To enable hardware encoding you need a valid Premium subscription, otherwise hardware encoding will not work!
      Add '--runtime=nvidia' to the 'Extra Parameters'. Add a variable to your Docker template with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID'. Add a variable to your Docker template with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all'. Make sure to enable hardware transcoding in the application/container itself. After starting the container and playing a movie that needs to be transcoded and that your graphics card can handle, you should see that you are now transcoding with your Nvidia graphics card (the text NVENC/DEC indicates exactly that).

      Jellyfin:
      Add '--runtime=nvidia' to the 'Extra Parameters'. Add a variable to your Docker template with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID'. Add a variable to your Docker template with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all'. Make sure to enable hardware transcoding in the application/container itself. After starting the container and playing a movie that needs to be transcoded and that your graphics card can handle, you should be transcoding with your Nvidia graphics card (at the time of writing Jellyfin doesn't display whether it's actually transcoding with the graphics card, but you can open an Unraid terminal and type 'watch nvidia-smi' and you will see at the bottom that Jellyfin is using your card).

      Plex: (thanks to @cybrnook & @satchafunkilus who granted permission to use their screenshots)
      Note: To enable hardware encoding you need a valid Plex Pass, otherwise hardware encoding will not work!
      Add '--runtime=nvidia' to the 'Extra Parameters'. Add a variable to your Docker template with the Key 'NVIDIA_VISIBLE_DEVICES' and as Value 'YOURGPUUUID'. Add a variable to your Docker template with the Key 'NVIDIA_DRIVER_CAPABILITIES' and as Value 'all'. Make sure to enable hardware transcoding in the application/container itself. After starting the container and playing a movie that needs to be transcoded and that your graphics card can handle, you should see that you are now transcoding with your Nvidia graphics card (the text '(hw)' next to Video indicates exactly that).
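      For reference, the three template settings above map to something like the following plain docker run command (the image and container names are just placeholders, and the UUID is the example one from above):

        docker run -d --name=jellyfin \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          jellyfin/jellyfin

      You can then verify the card is actually being used by running 'watch nvidia-smi' in an Unraid terminal while a transcode is playing.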
  15. Nvidia-Driver (only Unraid 6.9.0beta35 and up)

      This plugin is only necessary if you plan to make use of your Nvidia graphics card inside Docker containers. If you only want to use your Nvidia graphics card for a VM, then don't install this plugin!

      Discussions about modifications and/or patches that violate the EULA of the driver are not supported by me or anyone here; this could also lead to a takedown of the plugin itself! Please remember that this also violates the forum rules and will be removed!

      Installation of the Nvidia drivers (this is only necessary for the first installation of the plugin):
      1. Go to the Community Applications app, search for 'Nvidia-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg
      2. Wait for the plugin to install successfully (don't close the window with the 'X', wait for the 'DONE' button to appear; the installation can take some time depending on your internet connection, since the plugin downloads the Nvidia driver package (~150MB) and installs it on your Unraid server).
      3. Click on 'DONE' and continue with step 4 (don't close this window for now; if you already closed it, don't worry and continue reading).
      4. Check if everything is installed and recognized correctly: go to the plugin itself (PLUGINS -> Nvidia-Driver) and see if everything shows up (if you don't see a driver version at 'Nvidia Driver Version', or you get another error, please scroll down to the Troubleshooting section).
      5. If everything shows up correctly, click on the red alert notification from step 3 (not on the 'X'); this will bring you to the Docker settings (if you already closed this window, go to Settings -> Docker). On the Docker page change 'Enable Docker' from 'Yes' to 'No' and hit 'Apply' (you can now close the message from step 2).
      6. Then change 'Enable Docker' back from 'No' to 'Yes' and hit 'Apply' again (this step is only necessary for the first plugin installation, and you can skip it if you are going to reboot the server - the background is that the Nvidia driver package also installs a file that interacts directly with the Docker daemon, and the Docker daemon needs to be reloaded in order to load that file).
      After that you should be able to utilize your Nvidia graphics card in your Docker containers; for how to do that, see post 2 in this thread.

      IMPORTANT: If you don't plan or want to use acceleration within Docker containers through your Nvidia graphics card, then don't install this plugin! Please be sure to never use one card for a VM and in Docker containers at the same time (your server will hard lock if the card is used in a VM and then something wants to use it in a container). You can use one card for more than one container at the same time - depending on the capabilities of your card.

      Troubleshooting: (this section will be updated as soon as someone reports an issue and will grow over time)

      'NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.':
      This means that the installed driver can't find a supported Nvidia graphics card in your server (it may also be that there is a problem with your hardware - riser cables, ...). Check if you accidentally bound all your cards to VFIO; you need at least one card that is supported by the installed driver (you can find a list of all drivers here; click on the corresponding driver at 'Linux x86_64/AMD64/EM64T' and on the next page click on 'Supported products', where you will find all cards that are supported by the driver). If you accidentally bound all cards to VFIO, unbind the card you want to use for the Docker container(s) and reboot the server (TOOLS -> System devices -> unselect the card -> BIND SELECTED TO VFIO AT BOOT -> restart your server).

      'docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd: unknown device\\n\""": unknown.':
      Please check the 'NVIDIA_VISIBLE_DEVICES' variable in your Docker template; it may be that you accidentally have what looks like a space at the end or in front of your UUID, like ' GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' (it's hard to see in this example, but it's there).

      If your card isn't recognized by 'nvidia-smi', please also check your 'Syslinux configuration' to see whether you earlier prevented Unraid from using the card during the boot process: Click

      Reporting Problems:
      If you have a problem, please always include a screenshot of the plugin page, a screenshot of the output of the command 'nvidia-smi' (simply open an Unraid terminal with the button in the top right of Unraid and type in 'nvidia-smi' without quotes), and the error from the startup of the container/app if there is any.
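      A quick way to check from an Unraid terminal that the driver is loaded and to grab the UUID(s) for the Docker templates (the '-L' flag isn't mentioned above, but it's a standard nvidia-smi option that lists each card with its UUID):

        # list all GPUs the driver can see, including their UUIDs
        nvidia-smi -L
        # show driver version, utilization and the processes currently using the card
        nvidia-smi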
  16. Will update the plugin and add a warning at the top not to close the window with the 'X' and to wait for the 'DONE' button.
  17. I will look into this and make a thread/manual for that ASAP; give me a little time, I got home a few minutes ago. Thread now live:
  18. These lines tell you what's wrong: you actually have a \n (newline) in front of the UUID (it looks like a space in front of the NVIDIA_VISIBLE_DEVICES value).
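      A minimal sketch for spotting such hidden characters, assuming the container is named 'plex' (in the JSON output of docker inspect a stray newline shows up as a literal \n inside the value):

        docker inspect plex | grep NVIDIA_VISIBLE_DEVICES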
  19. Then we are in the same timezone. I'm located in Austria; can you write me a quick PM so we can talk there and post the solution here after we've solved it?
  20. It should be enough if you run the command 'nvidia-smi' from the Unraid console, since it will tell you whether the command is found and whether anything is missing. But keep in mind @Scroopy Noopers told me that he also had problems with the other Nvidia plugin and could not get it to work. I'm currently in a private conversation with him, we are trying to solve this and will report back what the issue was.
  21. Yes, there may be some bugs in the GUI, but if you configure it from the command line it will work and is rock solid (I only marked it as beta because the GUI isn't finished and there may be bugs in it).
  22. Thank you for the quick answer. When installing beta35 on my server I had a similar problem. I have a Cine C/T v6.5 with a Cine C/T v6.5 add-on card, and only the first Cine C/T showed up (2 tuners instead of 4). I had to shut down the server completely, wait for a minute and then turn it back on. After that it picked up all 4 tuners. It's a little weird, but not the first time I've had such a problem. ddbridge isn't actually the driver itself; can you give me the output of 'lsmod'? The drivers for the card's tuners should be updated to the version number that you see on GitHub; I also build with the latest driver version. Can I contact you a little later about this? I'm currently not at home. Which timezone are you in?
  23. Can you send me a PM so we can try to solve that there instead of spamming this thread, and then post the solution? Can you send me a screenshot of the output of the command 'nvidia-smi'?
  24. Yes, keep in mind that the configuration may be lost if you reboot, since it's in beta now, but I will update the plugin in the next few weeks with the help of @SimonF, who is currently developing a completely new interface. I recommend using the command line to save everything and make special configurations. See the Unraid-Kernel-Helper thread for more information.
  25. Can you tell me which package shows up on the DVB plugin page, LibreELEC or DigitalDevices? Have you rebooted after selecting the DigitalDevices package? The DigitalDevices package is compiled with the latest drivers available from DigitalDevices' GitHub.
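      To double-check from an Unraid terminal which DVB modules were actually loaded after the reboot, something like this is usually enough (ddbridge is the DigitalDevices bridge module; the grep patterns are just examples):

        # list the loaded DVB-related kernel modules
        lsmod | grep -i -e dvb -e ddbridge
        # check the kernel log for messages from the DigitalDevices driver
        dmesg | grep -i ddbridge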