[Plugin] Linuxserver.io - Unraid Nvidia


Recommended Posts

5 minutes ago, ramblinreck47 said:

Essentially, this alpha version of the Plex transcoder allows for official NVDEC (hardware decoding) support in Linux. If you are using a decoder script, you'd want to disable it when using this new Plex version. Mind you, this is an alpha version and hasn't been added to the Linuxserver.io Docker yet. It could be added, but as far as I know it hasn't been. A beta version will probably be out relatively soon, though.

I think that's where my confusion lies - I thought we already had NVDEC support and were waiting for NVENC (which we could patch via scripting if we wanted). I have just come back off holiday, though, so I could be getting everything backwards.

Link to comment
On 8/27/2019 at 3:01 AM, Xaero said:

You'll want to post the diagnostics zip. Most likely you are stubbing a PCI-E bus address that is in the same IOMMU group as the other card. This will stub both cards, even though you only intended to stub one. This can be worked around, but I'll leave that advice to people more experienced with these things than I am.

Indeed, they're on the same bus. Both 02:00.0 and 01:00.0 are stubbed with the vfio-pci driver, even though I only intended to stub 02:00.0 (the 2080).

 

lspci:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] [10de:1380] (rev a2)
	Subsystem: Elitegroup Computer Systems GM107 [GeForce GTX 750 Ti] [1019:1028]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidia_drm, nvidia
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] [10de:1e87] (rev a1)
	Subsystem: Gigabyte Technology Co., Ltd Device [1458:37a7]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidia_drm, nvidia

 

IOMMU groups: (note: vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9)

IOMMU group 0:	[8086:3e30] 00:00.0 Host bridge: Intel Corporation 8th Gen Core 8-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S] (rev 0a)
IOMMU group 1:	[8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 0a)
[8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 0a)
[10de:1380] 01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)
[10de:0fbc] 01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)
[10de:1e87] 02:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] (rev a1)
[10de:10f8] 02:00.1 Audio device: NVIDIA Corporation Device 10f8 (rev a1)
[10de:1ad8] 02:00.2 USB controller: NVIDIA Corporation Device 1ad8 (rev a1)
[10de:1ad9] 02:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1ad9 (rev a1)
IOMMU group 2:	[8086:a379] 00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
IOMMU group 3:	[8086:a36d] 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
[8086:a36f] 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
IOMMU group 4:	[8086:a360] 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
IOMMU group 5:	[8086:a352] 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
IOMMU group 6:	[8086:a340] 00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
IOMMU group 7:	[8086:a338] 00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
IOMMU group 8:	[8086:a330] 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
IOMMU group 9:	[8086:a305] 00:1f.0 ISA bridge: Intel Corporation Z390 Chipset LPC/eSPI Controller (rev 10)
[8086:a348] 00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
[8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
[8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
[8086:15bc] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-V (rev 10)
IOMMU group 10:	[1cc1:8201] 05:00.0 Non-Volatile memory controller: Device 1cc1:8201 (rev 03)
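
For reference, a listing like the one above can be generated with a small loop over sysfs (a generic sketch, nothing Unraid-specific; any recent kernel with the IOMMU enabled exposes /sys/kernel/iommu_groups):

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done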

Please find my diagnostics zip attached.


Thanks in advance,

Tomas

unraid-diagnostics-20190828-1303.zip

Link to comment
1 hour ago, teumaauss said:

Indeed, they're on the same bus. Both 02:00.0 and 01:00.0 are stubbed with the vfio-pci driver, even though I only intended to stub 02:00.0 (the 2080).

 


You can't stub both of your cards if you want to use one of them with the Nvidia driver.

And it doesn't look like you can use the new method either.

Not sure if you have enabled the ACS override, but if you haven't, try that and see if the cards end up in their own groups. Then you can use the new stubbing method to stub only the VM card.

 

Link to comment
1 hour ago, saarg said:

You can't stub both of your cards if you want to use one of them with the Nvidia driver.

And it doesn't look like you can use the new method either.

Not sure if you have enabled the ACS override, but if you haven't, try that and see if the cards end up in their own groups. Then you can use the new stubbing method to stub only the VM card.

 

After your comment and (re)watching the YouTube video below, I got it working using pcie_acs_override=downstream,multifunction and modprobe.blacklist=nouveau. Thanks a lot! I can now boot the VM with my 2080 AND transcode videos on my 750 Ti. Thanks again!
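
For anyone following along, the relevant syslinux.cfg entry ends up looking something like this (just a sketch; the vfio-pci.ids values here are the RTX 2080's device IDs from the listing earlier in the thread and will be different on other hardware):

label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction modprobe.blacklist=nouveau vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9 initrd=/bzroot

With the ACS override splitting the cards into separate groups, only the 2080 and its audio/USB/serial functions stay bound to vfio-pci, leaving the 750 Ti free for the Nvidia driver.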

 

 

Schermafbeelding 2019-08-28 om 17.38.20.png

Link to comment

Question... I installed a new 1660 Ti for playing games in a VM (I know it will cause issues if I launch it while transcodes are going). However, to get the VM to boot I had to use vfio-pci.ids= in my syslinux config, as the card apparently has a USB/serial controller built in and the VM wouldn't launch since the IOMMU group contained the Nvidia GPU, the Nvidia sound, the Nvidia USB and the Nvidia serial device.

 

Anyways, I used vfio-pci.ids= to resolve that, but based on my syslog it seems to be keeping the Nvidia driver from this plugin from attaching properly to the card:

 

Sep  3 18:54:59 unRAID kernel: nvidia: loading out-of-tree module taints kernel.
Sep  3 18:54:59 unRAID kernel: nvidia: module license 'NVIDIA' taints kernel.
Sep  3 18:54:59 unRAID kernel: Disabling lock debugging due to kernel taint
Sep  3 18:54:59 unRAID kernel: sd 10:0:2:0: [sdn] Attached SCSI disk
Sep  3 18:54:59 unRAID kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 247
Sep  3 18:54:59 unRAID kernel: NVRM: The NVIDIA probe routine was not called for 1 device(s).
Sep  3 18:54:59 unRAID kernel: NVRM: This can occur when a driver such as: 
Sep  3 18:54:59 unRAID kernel: NVRM: nouveau, rivafb, nvidiafb or rivatv 
Sep  3 18:54:59 unRAID kernel: NVRM: was loaded and obtained ownership of the NVIDIA device(s).
Sep  3 18:54:59 unRAID kernel: NVRM: Try unloading the conflicting kernel module (and/or
Sep  3 18:54:59 unRAID kernel: NVRM: reconfigure your kernel without the conflicting
Sep  3 18:54:59 unRAID kernel: NVRM: driver(s)), then try loading the NVIDIA kernel module
Sep  3 18:54:59 unRAID kernel: NVRM: again.
Sep  3 18:54:59 unRAID kernel: NVRM: No NVIDIA devices probed.
Sep  3 18:54:59 unRAID kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 247
Sep  3 18:54:59 unRAID kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 247
Sep  3 18:54:59 unRAID kernel: NVRM: The NVIDIA probe routine was not called for 1 device(s).
Sep  3 18:54:59 unRAID kernel: NVRM: This can occur when a driver such as: 
Sep  3 18:54:59 unRAID kernel: NVRM: nouveau, rivafb, nvidiafb or rivatv 
Sep  3 18:54:59 unRAID kernel: NVRM: was loaded and obtained ownership of the NVIDIA device(s).
Sep  3 18:54:59 unRAID kernel: NVRM: Try unloading the conflicting kernel module (and/or
Sep  3 18:54:59 unRAID kernel: NVRM: reconfigure your kernel without the conflicting
Sep  3 18:54:59 unRAID kernel: NVRM: driver(s)), then try loading the NVIDIA kernel module
Sep  3 18:54:59 unRAID kernel: NVRM: again.
Sep  3 18:54:59 unRAID kernel: NVRM: No NVIDIA devices probed.
Sep  3 18:54:59 unRAID kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 247

 

Has anyone had this issue and worked around it?

 

Link to comment

Replying to my own issue... I can get the plugin to see the card if I remove the vfio-pci.ids= entry (the IDs of the Nvidia GPU, Nvidia sound, Nvidia USB and Nvidia serial devices), but this breaks my ability to connect to the VM. I tried pcie_acs_override=downstream but still no go... I guess the next try is to add multifunction and see if that does anything.

 

Added the multifunction option; still a no go.

 

So does anyone with these newer cards (1660 Ti and above) have the ability to use the card with the Nvidia plugin and KVM? KVM doesn't seem to like my IOMMU group due to those dang USB/serial controllers on the Nvidia card... hence I had to stub them to allow me to launch the VM.

 

I really hope I can do both; otherwise I may just have to return the card for another model (a 1060 or the like).

 

 

Side note, does anyone else have a 1660 Ti? Do they all have this stupid Nvidia USB/serial controller on them? It seems to be what's screwing with me.

Link to comment
1 hour ago, dnoyeb said:

Replying to my own issue... I can get the plugin to see the card if I remove the vfio-pci.ids= entry (the IDs of the Nvidia GPU, Nvidia sound, Nvidia USB and Nvidia serial devices), but this breaks my ability to connect to the VM. I tried pcie_acs_override=downstream but still no go... I guess the next try is to add multifunction and see if that does anything.

Added the multifunction option; still a no go.

So does anyone with these newer cards (1660 Ti and above) have the ability to use the card with the Nvidia plugin and KVM? KVM doesn't seem to like my IOMMU group due to those dang USB/serial controllers on the Nvidia card... hence I had to stub them to allow me to launch the VM.

I really hope I can do both; otherwise I may just have to return the card for another model (a 1060 or the like).

Side note, does anyone else have a 1660 Ti? Do they all have this stupid Nvidia USB/serial controller on them? It seems to be what's screwing with me.

I had this issue also. The workaround (although it's not really a "workaround" as much as it is just doing it a little differently) is to, instead of stubbing the NVIDIA USB and serial controllers, add them as two hostdev blocks to the VM XML for the two devices. This will have the same result with the two devices appearing in the GUI so you can check them to pass to the VM, but they will be returned to the host when the VM shuts down. This way, all four NVIDIA devices in the IOMMU group go to the VM together and are released together. Spaceinvader One has a good video with copy/paste XML on doing this with an NVMe drive, but the same principle applies here as well. 

Link to comment
6 hours ago, dnoyeb said:

Question... I installed a new 1660 Ti for playing games in a VM (I know it will cause issues if I launch it while transcodes are going). However, to get the VM to boot I had to use vfio-pci.ids= in my syslinux config, as the card apparently has a USB/serial controller built in and the VM wouldn't launch since the IOMMU group contained the Nvidia GPU, the Nvidia sound, the Nvidia USB and the Nvidia serial device.

 

Anyways, I used vfio-pci.ids= to resolve that, but based on my syslog it seems to be keeping the Nvidia driver from this plugin from attaching properly to the card:

 



 

Has anyone had this issue and worked around it?

 

As far as I know, there is no way around that.

Link to comment

OK, big thanks to JasonM! Got the VM side working while using the Nvidia build, without stubbing anything.

 

A few things were needed. First:

Step 1 (the initial instructions JasonM shared, which probably would have been enough if I had already been running OVMF):

Quote

After the very last hostdev tag, a few lines from the bottom, add two more that will look like this:

 

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
  </source>
</hostdev>

 

Domain stays as is. Bus, slot, and function, you will need to change. Next to the GPU drop down in the VM settings, in parentheses beside the card name will be a number in the format (00:00.0). The first two digits are bus, next two slot, last is function. On my card, functions 0 and 1 are the GPU and the sound controller; functions 2 and 3 are the USB and serial controllers. Yours may be different. You can also verify these numbers in Tools > System Devices. After you've added the two missing functions to the XML, save changes, then flip back to GUI mode and the devices should be listed, just like they were when you were stubbing the card.

 

However, this alone didn't get it working; it turned out that my Windows 10 VM was running on SeaBIOS. I used the directions from alturismo to prepare my SeaBIOS-backed Windows 10 install for OVMF:

Step 2 (prepare the vdisk for OVMF):

Quote

I used PowerShell in admin mode:

 

mbr2gpt /validate /allowFullOS  <-- if ok then

mbr2gpt /convert /disk:0 /allowFullOS

 

Now your Win10 VM disk is prepared for EFI boot. Shut the VM down.

 

Create a new Win10 VM with the same settings (except use OVMF instead of SeaBIOS), pointing to your existing, now-converted vdisk1.img (or whatever name it has).

 

Step 3: Edit the newly created VM template, pin the CPUs, manually add the vdisk, check the boxes for the keyboard/mouse I was attaching, and save without starting.

Step 4: Edit the VM again, this time in XML mode, and add the hostdev code you built in Step 1 above, pasting it under the last hostdev entry (see the sketch after these steps).

Step 5: Save and edit one last time in GUI mode. Add the Nvidia GPU and sound. Save.

Step 6: Verify you don't have any transcodes going on, boot up the sucker, and go play some Doom.
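
To make Step 4 concrete, the two extra hostdev blocks for the GPU's USB (function 2) and serial (function 3) controllers would look roughly like this. The bus number 0x02 is only an example taken from earlier posts in the thread; check Tools > System Devices (or lspci) for the real bus/slot/function values on your board:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x2'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x3'/>
  </source>
</hostdev>

Because managed='yes', libvirt detaches these functions from the host when the VM starts and hands them back when it shuts down, which is what makes this work without stubbing.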

 

Just figured I'd share in case anyone comes across my issue and wonders how it got solved.

 

Link to comment

Thanks for the shout out. Helping each other is what this community is all about.

 

Also, I see above that you're passing your keyboard and mouse from the host to the VM. Since you're handing off the USB controller from your GPU, you can connect a USB hub to the USB-C port on the back of the GPU, and then plug the input devices into that. This way, you don't have to pass them through at all, and you also get true USB plug-and-play support in Windows. You can connect flash drives and the like without unRAID even knowing they're there. I do this with a dedicated PCI USB card, but it works with that port on the back of the GPU just as well.

Link to comment

And swing and a miss number 2: I traded up from a GTX 760 to a GTX 960, but I get the same issue, "no CUDA enabled devices found", as well as "rm-init-adapter failed". Does this mean these cards cannot do transcoding? I've passed the card through to the VM successfully (the VM was off during testing, though).

Link to comment
15 hours ago, Fiservedpi said:

And swing and a miss number 2: I traded up from a GTX 760 to a GTX 960, but I get the same issue, "no CUDA enabled devices found", as well as "rm-init-adapter failed". Does this mean these cards cannot do transcoding? I've passed the card through to the VM successfully (the VM was off during testing, though).

A GTX 960 should work, so there might be something else at play.

Link to comment

Love this plugin! It made me go out and buy a new 1660 ti so I can transcode 4K movies to my devices.

 

I have no issues getting the 1660 Ti passed through to Emby; everything works great on that front. The issue I am having is that my VMs no longer see the video card I pass through to them. I have an Nvidia GT 710 that I use for my Windows 10 VM, and though Unraid shows it as an option when setting up the VM, when I pass it through and boot the VM, Unraid claims it is running but my screen stays black. If I remove the video card, I can access the VM through VNC without issue.

 

If I revert to  vanilla UnRaid, the VM boots up and displays without issue.

 

I have tried stubbing the video card, but that doesn't seem to help. I have also shifted it to every possible slot on my motherboard, to no avail.

Has anyone seen this before, or have an idea what could be the issue? I've also tried with a Radeon 460 with the same results.

Link to comment
16 hours ago, DoeBoye said:

Love this plugin! It made me go out and buy a new 1660 ti so I can transcode 4K movies to my devices.

 

I have no issues getting the 1660 Ti passed through to Emby; everything works great on that front. The issue I am having is that my VMs no longer see the video card I pass through to them. I have an Nvidia GT 710 that I use for my Windows 10 VM, and though Unraid shows it as an option when setting up the VM, when I pass it through and boot the VM, Unraid claims it is running but my screen stays black. If I remove the video card, I can access the VM through VNC without issue.

 

If I revert to  vanilla UnRaid, the VM boots up and displays without issue.

 

I have tried stubbing the video card, but that doesn't seem to help. I have also shifted it to every possible slot on my motherboard, to no avail.

Has anyone seen this before, or have an idea what could be the issue? I've also tried with a Radeon 460 with the same results.

That's strange. I have a GTX 1060 and a GT 710. Similarly, the 1060 is used for transcoding in Emby and the 710 is used for a Windows VM. They both work fine. I did not stub the cards; the Emby config defines the device ID for the 1060, so it leaves the 710 alone.

 

Are you starting Unraid in GUI mode? I run headless, and it uses the mobo's built-in Matrox GPU. That could be the main difference between our servers.
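
In case it helps anyone reading later, "defining the ID" happens in the container template: the GPU UUID (shown by nvidia-smi -L, or on the plugin's settings page) goes into the NVIDIA_VISIBLE_DEVICES variable, together with --runtime=nvidia as an extra parameter. As a plain docker run sketch (the UUID and host paths below are placeholders):

docker run -d \
  --name=emby \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/emby:/config \
  -v /mnt/user/media:/data \
  -p 8096:8096 \
  linuxserver/emby

Only the card whose UUID is listed gets exposed to the container, so the other card is left alone for the VM.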

Link to comment
4 hours ago, aptalca said:

That's strange. I have a GTX 1060 and a GT 710. Similarly, the 1060 is used for transcoding in Emby and the 710 is used for a Windows VM. They both work fine. I did not stub the cards; the Emby config defines the device ID for the 1060, so it leaves the 710 alone.

 

Are you starting Unraid in GUI mode? I run headless, and it uses the mobo's built-in Matrox GPU. That could be the main difference between our servers.

No GUI mode, and I have a basic onboard VGA chip that Unraid usually uses. It was never an issue with vanilla Unraid, but something about the drivers inside this build is making things more difficult.

 

I tried again with my Radeon 460 and the Unraid Nvidia plugin, and it seems to be fine (last time I tried, it caused the same issue, but it was late and I might have done something else wrong), so it's definitely something to do with the Nvidia card itself. I even tried an old 9500 that I dug up, but no luck with that either.

Link to comment

Plex Media Server 1.17.0.1709 is available in the Beta update channel

NEW:

(Transcoder) Update to current upstream ffmpeg
(Transcoder) Support for hardware transcoding on Linux with Intel 9th-gen processors
(Transcoder) Support for VC-1 hardware decoding on supported platforms
(Transcoder) Support for hardware decoding on Linux with Nvidia GPUs
(Transcoder) Support for zero-copy hardware transcoding on Linux with Nvidia GPUs
(Transcoder) Support for zero-copy hardware transcoding of interlaced media

https://forums.plex.tv/t/plex-media-server/30447/286

Link to comment

Hi guys. Not sure if this is the best place to post this question, but I can't find the answer anywhere. I am running Nvidia build 6.7.0. I want to upgrade to build 6.7.2, but that includes Nvidia driver 430, which my GTX 770 does not appear to work with. As soon as I upgrade, Plex is no longer able to play videos (it gives some network error). Is it possible to upgrade the Nvidia build but retain the old driver?

Link to comment
12 hours ago, isaacery said:

Hi guys. Not sure if this is the best place to post this question, but I can't find the answer anywhere. I am running Nvidia build 6.7.0. I want to upgrade to build 6.7.2, but that includes Nvidia driver 430, which my GTX 770 does not appear to work with. As soon as I upgrade, Plex is no longer able to play videos (it gives some network error). Is it possible to upgrade the Nvidia build but retain the old driver?

There is no way to keep the driver version. It pulls in the latest driver available at the time of the build.

Link to comment
13 hours ago, isaacery said:

Hi guys. Not sure if this is the best place to post this question, but I can't find the answer anywhere. I am running Nvidia build 6.7.0. I want to upgrade to build 6.7.2, but that includes Nvidia driver 430, which my GTX 770 does not appear to work with. As soon as I upgrade, Plex is no longer able to play videos (it gives some network error). Is it possible to upgrade the Nvidia build but retain the old driver?

No, we build each version of Unraid with the latest drivers available. (This is all explained in the first post.)
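
If you're ever unsure which driver a given build ended up with, nvidia-smi on the Unraid console will tell you (just an example of the kind of check; the output depends on the build you installed):

nvidia-smi --query-gpu=driver_version,name --format=csv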

Link to comment
  • trurl locked this topic
This topic is now closed to further replies.