dnoyeb

Members
  • Content Count

    113
  • Joined

  • Last visited

Community Reputation

8 Neutral

About dnoyeb

  • Rank
    Advanced Member


  1. Holy smokes, the new alpha build on transcoding via the Nvidia card is freaking unreal... my CPU levels went to practically zero. Huge improvement having decode and encode working by default.
  2. Ok, big thanks to JasonM! Got the VM side working while using the Nvidia build without pinning anything. A few things were needed.

     Step 1: the initial instructions JasonM shared (probably would have been enough if I was already running OVMF). This alone didn't get it working; it turned out my Windows 10 VM was running on SeaBIOS, so I used the directions from alturismo to prepare my SeaBIOS-backed Windows 10 install for OVMF.

     Step 2: prepare the vdisk for OVMF.

     Step 3: edit the newly created VM template, pin the CPUs, manually add the vdisk, check the boxes for the keyboard/mouse I was attaching, and save without starting.

     Step 4: edit the VM again, this time in XML mode; take the hostdev code you built in Step 1 above and paste it under the last hostdev entry.

     Step 5: save, then edit one last time in GUI mode. Add the Nvidia GPU and sound. Save.

     Step 6: verify you don't have any transcodes going on, boot up the sucker, and go play some Doom.

     Just figured I'd share in case anyone came across my issue and wondered how it got solved.
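For anyone following along, the hostdev entry pasted in Step 4 looks roughly like the sketch below. This is a hypothetical example: the PCI domain/bus/slot/function values are placeholders and have to match your own card's address from the Unraid System Devices page.

```shell
# Hypothetical <hostdev> block of the kind pasted into the VM XML in Step 4.
# The PCI address values are placeholders -- substitute your own card's
# bus/slot numbers from Tools > System Devices before using anything like it.
cat <<'EOF' > /tmp/hostdev-example.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
cat /tmp/hostdev-example.xml
```

One such block is added per passed-through function, pasted after the last existing hostdev entry so libvirt keeps them grouped.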
  3. Replying to my issue... I get the plugin to see the card if I remove the vfio-pci.ids= entry (the IDs of the Nvidia GPU, Nvidia sound, Nvidia USB, and Nvidia serial), but that breaks my ability to connect to a VM. Tried pcie_acs_override=downstream but still no go... next I added the multifunction option, and that was still a no go.

     So does anyone with these newer cards (1660 Ti and above) have the ability to use the card with the Nvidia plugin and KVM? KVM doesn't seem to like my IOMMU group due to those dang USB/serial controllers on the Nvidia card... hence I had to stub them to allow me to launch the VM. Really hope I can do both; otherwise I may just have to return the card for another model (a 1060 or the like).

     Side note: does anyone else have a 1660 Ti? Do they all have this stupid Nvidia USB/serial on them? It seems to be what's screwing with me.
  4. Question... Installed a new 1660 Ti for playing games in a VM (I know it will cause issues if I launch it while transcodes are going). However, to get the VMs to boot I had to use vfio-pci.ids= in my syslinux config, as the card apparently has a USB/serial controller built in and the VMs wouldn't launch since the IOMMU group had the Nvidia GPU, the Nvidia sound, the Nvidia USB, and the Nvidia serial. Anyways, I used vfio-pci.ids= to resolve that; but based on my syslog, it seems to be keeping the kernel module from this plugin from attaching properly to the card:

     Sep 3 18:54:59 unRAID kernel: nvidia: loading out-of-tree module taints kernel.
     Sep 3 18:54:59 unRAID kernel: nvidia: module license 'NVIDIA' taints kernel.
     Sep 3 18:54:59 unRAID kernel: Disabling lock debugging due to kernel taint
     Sep 3 18:54:59 unRAID kernel: sd 10:0:2:0: [sdn] Attached SCSI disk
     Sep 3 18:54:59 unRAID kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 247
     Sep 3 18:54:59 unRAID kernel: NVRM: The NVIDIA probe routine was not called for 1 device(s).
     Sep 3 18:54:59 unRAID kernel: NVRM: This can occur when a driver such as:
     Sep 3 18:54:59 unRAID kernel: NVRM: nouveau, rivafb, nvidiafb or rivatv
     Sep 3 18:54:59 unRAID kernel: NVRM: was loaded and obtained ownership of the NVIDIA device(s).
     Sep 3 18:54:59 unRAID kernel: NVRM: Try unloading the conflicting kernel module (and/or
     Sep 3 18:54:59 unRAID kernel: NVRM: reconfigure your kernel without the conflicting
     Sep 3 18:54:59 unRAID kernel: NVRM: driver(s)), then try loading the NVIDIA kernel module
     Sep 3 18:54:59 unRAID kernel: NVRM: again.
     Sep 3 18:54:59 unRAID kernel: NVRM: No NVIDIA devices probed.
     Sep 3 18:54:59 unRAID kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 247
     Sep 3 18:54:59 unRAID kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 247
     Sep 3 18:54:59 unRAID kernel: NVRM: The NVIDIA probe routine was not called for 1 device(s).
     Sep 3 18:54:59 unRAID kernel: NVRM: This can occur when a driver such as:
     Sep 3 18:54:59 unRAID kernel: NVRM: nouveau, rivafb, nvidiafb or rivatv
     Sep 3 18:54:59 unRAID kernel: NVRM: was loaded and obtained ownership of the NVIDIA device(s).
     Sep 3 18:54:59 unRAID kernel: NVRM: Try unloading the conflicting kernel module (and/or
     Sep 3 18:54:59 unRAID kernel: NVRM: reconfigure your kernel without the conflicting
     Sep 3 18:54:59 unRAID kernel: NVRM: driver(s)), then try loading the NVIDIA kernel module
     Sep 3 18:54:59 unRAID kernel: NVRM: again.
     Sep 3 18:54:59 unRAID kernel: NVRM: No NVIDIA devices probed.
     Sep 3 18:54:59 unRAID kernel: nvidia-nvlink: Unregistered the Nvlink Core, major device number 247

     Anyone had this issue and worked around it?
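For reference, the syslinux append line involved looks roughly like the sketch below. The four vendor:device IDs are placeholders for the card's GPU, HDMI audio, USB, and serial functions (the real ones come from `lspci -nn` or the Unraid System Devices page), and pcie_acs_override is the flag tried in the posts above.

```shell
# Sketch of a syslinux.cfg append line combining vfio-pci stubbing with the
# ACS override. All four vendor:device IDs below are placeholders -- pull the
# real ones from `lspci -nn` or Tools > System Devices before using this.
cat <<'EOF' > /tmp/syslinux-append-example.txt
append vfio-pci.ids=10de:aaaa,10de:bbbb,10de:cccc,10de:dddd pcie_acs_override=downstream,multifunction initrd=/bzroot
EOF
cat /tmp/syslinux-append-example.txt
```

Note the trade-off described in the posts: devices stubbed via vfio-pci.ids are claimed by vfio-pci at boot, so the Nvidia driver from the plugin can no longer bind to them.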
  5. Looking over on the main Plex page I see folks running it after doing a manual upgrade. Go into the console for that docker and do the following:

     wget <paste link to Ubuntu version of 1597 Plex>

     Wait for it to download, then:

     dpkg -i

     Restart your Plex docker and you're done. I haven't had a chance to try it just yet.
  6. Sweet, thanks. Anyone tried out the new transcoder for hardware encoding/decoding yet?
  7. Quick question: I am guessing the "latest" version is only pulling from beta. Any way to get the docker to update to 1.16.7.1597 instead? Curious to try out the new transcoder.
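The usual approach (an assumption about this particular template, so check the image's tag list on Docker Hub first) is to pin the docker's Repository field to an explicit version tag instead of "latest":

```shell
# Hypothetical: pin the docker template's Repository field to an explicit
# version tag rather than "latest". Both the image name and the tag below
# are guesses -- confirm the exact tag on the image's Docker Hub page.
repo="plexinc/pms-docker"   # placeholder image name
tag="1.16.7.1597"           # placeholder tag
echo "${repo}:${tag}"
```

With a pinned tag the container stops tracking channel updates until the tag is changed back.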
  8. Those of you running this: how's the overall quality of the hardware transcode? I've read a few times now that offloading the transcoding may result in poor quality... are there different quality levels depending on which card you're using?
  9. https://lime-technology.com/wp/pricing/ This is answered on the pricing page.
  10. Not sure why, but when I log in it says 20% off? Is it 30 for you?
  11. I too agree it would be nice to fork (not completely redirect) the Unraid log to a secondary log server... I haven't ever seen a way to do that, though.
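A minimal sketch of how such a fork could look, assuming rsyslog (which Unraid's syslog runs on); the target IP and port are placeholders. A plain forwarding action sends a copy of every message to the remote server, and because there is no stop directive afterwards, local logging continues unchanged:

```shell
# Minimal rsyslog forwarding sketch: one action line sends a copy of all
# messages to a remote server over UDP (@ = UDP, @@ = TCP). No "stop"
# directive follows, so local log files keep being written as before.
# The IP and port are placeholders.
cat <<'EOF' > /tmp/forward-example.conf
*.* @192.168.1.50:514
EOF
cat /tmp/forward-example.conf
```

On a stock rsyslog setup a fragment like this would go in a conf file under /etc/rsyslog.d/ followed by a daemon restart; where it belongs on Unraid specifically is not something the posts here establish.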
  12. I completely removed the Samsung and went with a SanDisk in the end, since I saw issues writing to the Samsung via Unassigned Devices with the wrong starting block. I will try to pull the SanDisk and swap the Samsung back in this weekend as a test... One other item of note is that I have 256 GB of RAM; some speculated that was introducing another factor into things.
  13. @limetech, just for reference, I was able to reproduce the same issue with multiple Samsung SSDs, and there is a thread where others had the same issue back in 2017 when using Samsungs in their cache pools. Since removing my Samsung from the pool and swapping to XFS, my machine is working like a boss with no issues whatsoever. Details showing the Samsung write issues are in that other thread: if you start reading at the post linked above and continue down a few more posts, you'll see where I ran tests and have graphs etc. showing the write-performance differences on the Samsung device when going from a starting block of 64 up to 2048.
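For anyone wanting to reproduce that kind of comparison, a rough sequential-write check can be done with dd (this is a sketch, not the exact test behind the graphs referenced above); the output path is a placeholder and should point at a mount on the SSD under test, and the partition's start sector (64 vs 2048) can be read from `fdisk -l`.

```shell
# Rough sequential-write throughput check. conv=fdatasync forces a flush
# before dd exits, so the reported rate includes the time to commit data
# to the device. The /tmp path is a placeholder -- point "of=" at a file
# on the SSD you actually want to measure.
dd if=/dev/zero of=/tmp/ssd-write-test.bin bs=1M count=256 conv=fdatasync
rm /tmp/ssd-write-test.bin
```

Running the same command against partitions created with different starting blocks makes the alignment penalty visible as a throughput difference.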
  14. Yea, they've changed Docker up a bit, so I'm guessing my old directions are no longer valid. I have since moved on to running Splunk as a VM, since I find that I don't come close to the 500 MB-of-data-a-day limit for free use.