Everything posted by scorcho99

  1. Hmmm, I thought the BIND method did use PCI addresses, not device IDs. That's what I entered anyway, since that was the format that a forum post referenced. Maybe that is why it didn't work for me.
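     For reference, this is roughly what I entered, assuming the /boot/config/vfio-pci.cfg file and the BIND= syntax that forum post described (the addresses are just examples from my box, the GPU plus its audio function; substitute your own from lspci):
         BIND=0000:02:00.0 0000:02:00.1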
  2. So I want to get access to dma-buf, and it appears it requires OpenGL support to be enabled in the qemu build. How hard would it be to build one with this feature enabled? I downloaded the source and started the build last night, and noticed it needed a lot of dependencies.
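     In case it helps, this is the rough shape of what I was attempting; the flags come from QEMU's configure help, and the dependency packages (libepoxy, mesa headers, pixman, etc.) will vary by distro, so treat it as a sketch rather than a tested recipe:
         ./configure --target-list=x86_64-softmmu --enable-opengl
         make -j$(nproc)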
  3. I tried this method a few days ago as part of troubleshooting something else and also found that it didn't work at all. vfio-pci-ids works, but I thought it was perhaps binding too late.
  4. Second this, for the same reason as Pauven. It's not as easy as it sounds though, because what happens when you change a file on the cache? How will that link up to the array?
  5. Another component I've found that is missing is that unraid's qemu appears to not be compiled with OpenGL support. This isn't required to make GVT-g function, but there is a dma-buf feature that requires it to link the VM display to the mediated card. You can use a client-installed VNC, RDP... maybe Looking Glass... instead, but it's a real QoL improvement that isn't there. That said, given how unraid is built it's not clear to me if OpenGL would actually work. The i915 driver must be loaded for this to work, but that might not be the only backend piece missing.
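     To show what I mean by the dma-buf piece, this is roughly the qemu invocation it would unlock, sketched from the Intel GVT-g documentation rather than anything I've run on unraid (the UUID is whichever mediated device you created, and display=on is what routes the vGPU output through a local dma-buf):
         -display gtk,gl=on \
         -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>,display=on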
  6. Sorry, stock one. There are no nvidia drivers. I tried a number of things yesterday; OVMF VMs actually do work. I'm probably going to proceed with the dummy VM during i915 load. It's kludgy, but if it's stable I guess I'll be satisfied.
  7. I have a passed-through ROM, although that didn't make a difference.
  8. I don't think that is it, because it's plugged into a monitor the whole time. It also worked reliably until I started this whole i915 adventure. Although maybe there is something going on there.
  9. So this isn't really about Plex, but people doing Quick Sync on unraid have a similar use case to me. I've found that once I modprobe i915, my nvidia card just gives a black screen when passed through. If I don't modprobe after boot it works fine. In my case the iGPU is primary, nvidia is secondary. I found that if I pass the nvidia through first (so it's in use) and then modprobe, things are OK. Based on that I added the nvidia IDs to vfio-pci ids in syslinux.cfg, and confirmed vfio was in use at boot. Then I modprobed, but it still broke nvidia passthrough! This is perplexing to me. i915 shouldn't be able to taint the nvidia card in my mind, since it's not a compatible driver. I assumed all the displays were reinitialized or something when the driver loaded and this was tainting the nvidia card somehow, but if it's bound to vfio I don't see how it should be able to get at it at all. I think I can work around this with scripts and a dummy VM to "protect" the nvidia card from i915, but that seems ridiculous. Anyone else seeing this behavior, or is it just me? I've only tried nvidia on SeaBIOS VMs, so maybe that is a factor.
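     For reference, the syslinux.cfg change I mean looks something like this; the vendor:device pairs below are just examples, use the ones lspci -nn reports for your card and its HDMI audio function:
         append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot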
  10. Yeah, sorry, I just noticed that thread. I was sure googling for the chipset and unraid would have shown that thread, but Google seems worse and worse these days.
  11. So every time I've gone looking for SATA controllers there hasn't been much to find. We've had the ASMedia 1061, the Marvells that don't have a good reputation with unraid, and then there's SAS controllers with flashed firmware that most people settle on. I've seen a newer version of the ASMedia but it's still a 2 port version. I saw this pop up last month: https://www.amazon.com/dp/B07T3RMFFT https://www.amazon.com/dp/B07ST9CPND Someone in the reviews already claims to have tried it with Unraid. If you look the chip up on JMicron's site, this is a PCIe Gen3 x2, 5-port SATA3 controller. Seems like an interesting alternative in a land of currently sparse options. Pretty sure even if you installed these things in a 1x slot you'd have enough bandwidth to service 5 spinning rust drives. The question is whether they are reliable.
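     Rough back-of-envelope numbers behind that claim, assuming ~985 MB/s of usable bandwidth per PCIe Gen3 lane and ~150-250 MB/s sustained per spinning drive:
         Gen3 x2 link: ~1970 MB/s  vs  5 drives x 250 MB/s = 1250 MB/s worst case
         Gen3 x1 link:  ~985 MB/s  vs  5 drives x 200 MB/s = 1000 MB/s typical
     So the x2 link has plenty of headroom, and even a 1x slot is only near breakeven when every drive is streaming flat out at once, which almost never happens in practice.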
  12. Yeah, it doesn't seem like there is a lot of interest, I guess. That's kind of why I set out to try it myself. I actually didn't think I was going to get it to work at all, much less transfer the setup to unraid. I'd given up on the idea since Coffee Lake support was initially not going to happen, and then it finally showed up in the 5.1 kernel. Next step is to mod my BIOS to support increased aperture sizes. That's a large problem for anyone running this on consumer motherboards: the mediated GPUs take their slice from the GPU aperture pie, not from the shared variable memory. While changing the aperture size is a supported option, most motherboards seem to just lock it to 256MB. This means I can only create 2 of the smallest type of virtual GPU at the moment.
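     For anyone following along, this is roughly how the virtual GPUs get created once the GVT-g bits are loaded; the type name here is the smallest profile my iGPU exposes and may differ on other hardware:
         ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
         echo "$(uuidgen)" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create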
  13. I had this problem as well. I don't know if all the links are yanked after new candidates come out or what. I ended up just finding a link to *some* rc candidate somewhere and then changing the values in the link to match the version I wanted. I assume there were links at one point but I couldn't find them!
  14. So, I built a custom unraid 6.8.0-rc7 kernel adding support for these modules, which was a first for me. I ended up relying on the Unraid-DVB scripts to do a lot of the heavy lifting on that front. After struggling with some configuration problems for the VM, I now have GVT-g working on Unraid. At least, it's working as well as it worked on Ubuntu. Lots more to investigate. But as a proof of concept, I think it's fair to say that the kernel options listed above are the only thing really blocking this from working today.
  15. That would be cool, but I really don't think so. Everything I've read says DisplayLink devices are slaves to the primary GPU. I think there is recent support these days to slave them off virtual GPUs in VMs (allowing you to turn a virtual GPU into something that displays on a real monitor), but that isn't what you're after.
  16. Did the combination of downstream and multifunction ACS override methods not separate this out or at least make it better? I've had to use both to get anywhere before.
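     For clarity, by "both" I mean the combined override on the append line in syslinux.cfg, something like this with the rest of your boot options left as-is:
         append pcie_acs_override=downstream,multifunction initrd=/bzroot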
  17. You won't be able to boot unraid from the controller*; typically USB boot requires the controller to be tied into the BIOS, and only onboard controllers do that. Theoretically someone could make a USB controller with a boot ROM on it, but I have never seen or heard of one. Probably your best bet is to just buy another USB card that works better. *If you were really desperate and had enough USB keys lying around, it might be possible to use the Plop boot loader to make this work. Plop lets you boot USB devices from normally unbootable controllers, with the caveats that you must boot Plop itself first somehow and that the USB connection during boot is brutally slow.
  18. I've added a module that doesn't load by itself on boot to the line in syslinux.cfg. But once I'm booted up, if I rmmod the module it says it's not loaded. That tells me I have failed somewhere. I have vfio-pci ids listed and those seem to work, so I'm not sure where I'm going wrong. Does someone have some example syntax where they are doing this successfully? Also confusingly, i915 says it's not loaded either, but lspci -v shows "Kernel driver in use: i915" for my Intel iGPU, which seems contradictory.
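     In case I'm just misreading the tools, these are the checks I'm basing that on (00:02.0 is my iGPU's address; substitute yours, and the module name is whatever you added to the boot line):
         cat /proc/cmdline        # did the option actually make it onto the boot line?
         lsmod | grep i915        # is the module listed as loaded?
         lspci -nnk -s 00:02.0    # prints both "Kernel driver in use" and "Kernel modules"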
  19. @segator , did you ever find decent instructions on a custom kernel for unraid? I'm in a similar kind of situation. I would like to add some modules to test something before making a better request.
  20. For more detail on what is required, here is a thread where it was added to Solus: https://dev.getsol.us/T6812 The kernel options are CONFIG_DRM_I915_GVT, CONFIG_DRM_I915_GVT_KVMGT, CONFIG_VFIO_MDEV, and CONFIG_VFIO_MDEV_DEVICE. And here is a snippet from the developer on how to build: https://github.com/intel/gvt-linux/issues/75#issuecomment-468122607
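     As a sketch, those options end up in the kernel .config looking roughly like this; whether each can be y or m depends on the kernel version and what it selects as dependencies:
         CONFIG_VFIO_MDEV=m
         CONFIG_VFIO_MDEV_DEVICE=m
         CONFIG_DRM_I915_GVT=y
         CONFIG_DRM_I915_GVT_KVMGT=m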
  21. So am I to understand that 6.9-rc1 will be essentially just like 6.8-rc7? Is the GSO bug a concern if you don't use docker at all? It sounds like the qcow2 corruption bug was corrected in 6.8-rc5 or rc6 so that shouldn't be a concern.
  22. Interesting, I always use i440fx. I noticed today that my oldest card, an HD3450, uses ~8 watts before the VM starts. It goes up when Windows is running in a VM using it, but unlike other cards, when I shut the VM down the card continues to use extra power.
  23. Whew, that is a big increase. I guess not all cards/setups work the same way.