dnoyeb

Members
  • Content Count: 121
  • Joined

  • Last visited

Community Reputation

8 Neutral

About dnoyeb

  • Rank: Member


  1. Retested on beta 29; the issue is still there. Based on further testing and seeing Radek's comment, it occurs in 6.8.3 as well. I tried both cards that were in the box, same issue at each address. Here's the current error that pops up (diagnostics attached):

     Execution error
     internal error: qemu unexpectedly closed the monitor: 2020-09-29T17:39:26.418978Z qemu-system-x86_64: -device vfio-pci,host=0000:03:00.0,id=hostdev0,bus=pci.4,addr=0x0: vfio 0000:03:00.0: Failed to set up TRIGGER eventfd signaling for interrupt INTX-0: VFIO_DEVICE_SET_IRQS failure: Devic
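     The "Failed to set up TRIGGER eventfd signaling for interrupt INTX-0" error usually means the card's legacy INTx line is shared with another host driver that vfio-pci cannot coexist with. A quick check from the Unraid console (a generic sketch; 03:00.0 is the address from this post):

         # Which IRQ does the card sit on, and who else is registered there?
         lspci -v -s 03:00.0 | grep IRQ
         grep -E 'vfio|ehci|usb' /proc/interrupts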
  2. Oh, I see one I'm going to test tomorrow: "webgui: better handling of multiple nics with vfio-pci". Do you guys prefer that we update the testing in our bug report thread for record-keeping purposes?
  3. Anonymized version attached; if you need the other version, let me know and I'll DM it. Other testing done last night: I tried disabling the USB 2.0 ports and had the same issue, then tried disabling the USB 3.0 ports and moved the key over to the 2.0 ports, just to make sure it wasn't something funny like that. Neither helped. Thanks for any guidance. tower-diagnostics-20200804-0854.zip
  4. A bit more info: I see these lines in the logs; hopefully this helps the troubleshooting:

     Aug 3 19:04:09 Tower kernel: vfio-pci 0000:03:00.0: enabling device (0100 -> 0102)
     Aug 3 19:04:10 Tower kernel: vfio-pci 0000:03:00.0: vfio_ecap_init: hiding ecap 0x19@0x18c
     Aug 3 19:04:10 Tower kernel: genirq: Flags mismatch irq 16. 00000000 (vfio-intx(0000:03:00.0)) vs. 00000080 (ehci_hcd:usb1)
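     The genirq line is the root cause: vfio-pci requested IRQ 16 exclusively (flags 00000000) while ehci_hcd already holds it as shared (00000080), so the request is refused and the INTX-0 error above follows. Beyond disabling the USB controller in the BIOS, one thing to try (an assumption, not a confirmed fix for this card) is unbinding the conflicting EHCI controller from the console so the IRQ line is free before the VM starts:

         # Find the EHCI controller's PCI address (00:1a.0 below is hypothetical)
         lspci | grep -i ehci
         # Unbind it so IRQ 16 is no longer claimed by ehci_hcd
         echo -n '0000:00:1a.0' > /sys/bus/pci/drivers/ehci-pci/unbind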
  5. Trying to get a Mellanox ConnectX-3 card passed through to a VM and having some trouble. To set the stage: I have two of these cards in my Unraid server, and I am using one of them for the OS. I used the Tools / System Devices / Bind Selected to VFIO at Boot method and have verified that the card is added:

     cat vfio-pci.cfg
     BIND=0000:03:00.0|15b3:1003

     Here is the log showing it was successfully bound at boot of Unraid:

     Loading config from /boot/config/vfio-pci.cfg
     BIND=0000:03:00.0|15b3:1003
     ---
     Processing 0000:03:00.0 15b3:1003
     Vendor:Device 15b3:1003 found at 0000:03:00.
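     A quick way to double-check the bind from a console after boot (a generic sketch using the address from this post) is to ask which kernel driver owns the device; it should read vfio-pci rather than the Mellanox mlx4_core driver:

         # "Kernel driver in use:" should say vfio-pci
         lspci -nnk -s 03:00.0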
  6. Wow, I'm literally in the exact same boat... I have been debating upgrading to some 2696 v2s in the same motherboard or moving to an AMD platform, since they seem to be extremely strong. The things I'm concerned with are the PCIe x16 slots: I have three M1015 cards now (for about 20 drives), and I still need space for a 10Gb NIC and onboard VGA, plus at least one more slot for my Nvidia 1660 for the gaming VM / Plex transcoding. Have you made any headway on the motherboard you'd pick?
  7. Holy smokes, the new alpha build's transcoding via the Nvidia card is freaking unreal... my CPU levels went to practically zero. Huge improvement having decode and encode working by default.
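     If you want to verify that both the decoder and encoder are engaged during a transcode (a generic check, not specific to this build), nvidia-smi can sample per-engine utilization:

         # enc/dec columns should be non-zero while a hardware transcode runs
         nvidia-smi dmon -s u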
  8. Ok, big thanks to JasonM! Got the VM side working while using the Nvidia build without pinning anything. A few things were needed. Step 1 (the initial instructions JasonM shared, which probably would have been enough if I were already running OVMF). However, this didn't get it working; it turned out my Windows 10 VM was running on SeaBIOS. I used the directions from alturismo to prepare my SeaBIOS-backed Windows 10 install for OVMF. Step 2 (prepare the vdisk for OVMF). Step 3: Edit the newly created VM template, pin the CPUs,
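     For reference, the conversion step the post elides typically comes down to converting the guest's disk from MBR to GPT so it can boot under OVMF/UEFI; Microsoft's mbr2gpt tool does this in place from inside the running Windows 10 guest (a general sketch, not necessarily the exact directions alturismo gave):

         REM From an elevated command prompt inside the Windows 10 guest
         mbr2gpt /validate /allowFullOS
         mbr2gpt /convert /allowFullOS

     Once the convert succeeds, shut the VM down and boot the vdisk from a new OVMF template (as in Step 3 above).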
  9. Replying to my own issue... I can get the plugin to see the card if I remove the vfio-pci.ids= entry (the IDs of the Nvidia GPU, Nvidia sound, Nvidia USB, and Nvidia serial), but that breaks my ability to connect to a VM. Tried pcie_acs_override=downstream, but still no go... next try is to add multifunction and see if that does anything. Added the multifunction option, still a no-go. So does anyone with these newer cards (1660 Ti and above) have the ability to use the card with the Nvidia plugin and KVM? KVM doesn't seem to like my IOMMU group due to those dang usb/se
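     One approach that can split the difference (an assumption, not something confirmed in this thread): bind only the card's USB and serial functions to vfio-pci by address, leaving the GPU and audio functions for the Nvidia plugin. When a VM starts, libvirt can detach the GPU/audio itself, and the IOMMU group check passes because the USB/serial functions are already on vfio-pci. Using the per-address vfio-pci.cfg method from the Mellanox post above, with hypothetical addresses and IDs (read yours from lspci -nn):

         # Bind just the GPU's USB and serial (UCSI) functions at boot
         BIND=0000:01:00.2|10de:1aec
         BIND=0000:01:00.3|10de:1aed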
  10. Question... I installed a new 1660 Ti for playing games in a VM (I know it will cause issues if I launch it while transcodes are going). However, to get the VMs to boot I had to use vfio-pci.ids= in my syslinux config, as the card apparently has a USB / serial controller built in, and the VMs wouldn't launch since the IOMMU group had the Nvidia GPU, the Nvidia sound, the Nvidia USB, and the Nvidia serial. Anyway, I used vfio-pci.ids= to resolve that; but based on my syslog, it seems to be keeping the kernel driver from this plugin from attaching properly to the card: Sep 3 18:
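     For context, the vfio-pci.ids= stanza lives on the append line in /boot/syslinux/syslinux.cfg; a sketch with hypothetical 1660 Ti IDs (check yours with lspci -nn) looks like:

         label Unraid OS
           menu default
           kernel /bzimage
           append vfio-pci.ids=10de:2182,10de:1aeb,10de:1aec,10de:1aed initrd=/bzroot

     Because this claims every matching function at boot, the host Nvidia driver never gets a chance to attach, which matches the syslog behavior described here.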
  11. Looking over on the main Plex page, I see folks running it after doing a manual upgrade. Go into the console for that docker and do the following: wget <paste link to Ubuntu version of 1597 Plex>, wait for it to download, then dpkg -i <the downloaded .deb>, and restart your Plex docker; you're done. I haven't had a chance to try it just yet.
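     Spelled out as console commands (the URL and filename below are placeholders; the real link comes from Plex's download page, and the change will not survive a container update):

         # Inside the Plex container's console
         wget https://downloads.plex.tv/<path to the 1.16.7.1597 Ubuntu/Debian .deb>
         dpkg -i <the downloaded .deb>
         # then restart the container from the Unraid Docker tab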
  12. Sweet, thanks. Has anyone tried out the new transcoder for hardware encoding / decoding yet?
  13. Quick question: I am guessing that the "latest" version is only pulling from beta. Any way to get the docker to update to 1.16.7.1597 instead? Curious to try out the new transcoder.
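     If the image publishes version tags (an assumption; check the repository's tag list on Docker Hub), the usual way to land on a specific build instead of latest is to pin the tag, e.g. by editing the container's Repository field in Unraid:

         # Hypothetical tag -- substitute whatever 1.16.7.1597 tag the repo lists
         plexinc/pms-docker:1.16.7.1597-<build>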