scorcho99

Members
  • Content Count: 109
  • Joined

  • Last visited

Community Reputation: 5 Neutral

About scorcho99

  • Rank: Advanced Member

  1. Hmmm, I thought the BIND method did use PCI addresses, not device IDs. That's what I entered anyway, since that was the format the forum post referenced. Maybe that's why it didn't work for me.
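     For reference, the format I entered was the PCI-address style, something like the line below in the BIND file (the address is a placeholder, and the config/vfio-pci.cfg location is from memory, so double-check it on your flash drive):

         # bind by PCI address (domain:bus:device.function), e.g. in config/vfio-pci.cfg
         BIND=0000:02:00.0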
  2. So I want to get access to dma-buf, and it appears it requires OpenGL support to be enabled in the QEMU build. How hard would it be to build one with this feature enabled? I downloaded the source and started the make last night, and noticed it needed a lot of dependencies.
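     For anyone else attempting this, the configure step I'm trying looks roughly like the following; the flags are standard QEMU options, but the dependency list (libepoxy, mesa/EGL dev headers, etc.) is distro-specific and what I'm still working through:

         # from the QEMU source tree
         ./configure --target-list=x86_64-softmmu --enable-opengl
         make -j$(nproc)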
  3. I tried this method a few days ago as part of troubleshooting something else and also found that it didn't work at all. vfio-pci.ids works, but I thought it was perhaps binding too late.
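     To be clear, by vfio-pci.ids I mean the kernel parameter on the append line in syslinux.cfg, something like this (the IDs are placeholders for your card's vendor:device pairs):

         append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot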
  4. Second this, for the same reason as Pauven. It's not as easy as it sounds though, because what happens when you change a file on the cache? How will that link up to the array?
  5. Another component I've found that is missing is that unRAID's QEMU appears to not be compiled with OpenGL support. This isn't required to make GVT-g function, but there is a dmabuf feature that requires it to link the VM display to the mediated card. You can use a VNC or RDP client installed in the guest (or maybe Looking Glass) instead, but it's a real QoL improvement that isn't there. That said, given how unRAID is built, it's not clear to me if OpenGL would actually work. The i915 driver must be loaded for this to work, but that might not be the only backend piece missing.
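     Concretely, the dmabuf path I mean is attaching the mediated device with its display output enabled and pointing QEMU at a GL-capable display, roughly like the snippet below; the mdev UUID is a placeholder, and whether a GL display even works on a headless unRAID box is exactly the open question:

         -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<mdev-uuid>,display=on \
         -display gtk,gl=on        # or -display egl-headless on a box with no desktop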
  6. Sorry, stock one. There are no Nvidia drivers. I tried a number of things yesterday; OVMF VMs actually do work. I'm probably going to proceed with the dummy VM during i915 load. It's kludgy, but if it's stable I guess I'll be satisfied.
  7. I have a passed-through ROM, although that didn't make a difference.
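     (By passed-through ROM I mean supplying a vBIOS dump to the card at the QEMU level, i.e. something along the lines of the snippet below; the address and path are just examples of how that's typically wired up, not my exact config.)

         -device vfio-pci,host=01:00.0,romfile=/mnt/user/isos/vbios.rom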
  8. I don't think that's it, because it's plugged into a monitor the whole time. It also worked reliably until I started this whole i915 adventure. Although maybe there is something going on there.
  9. So this isn't really about Plex, but people doing Quick Sync on unRAID have a similar use case to me. I've found that once I modprobe the i915, my Nvidia card just gives a black screen when passed through. If I don't modprobe after boot, it works fine. In my case the iGPU is primary and the Nvidia card is secondary. I found that if I pass the Nvidia card through (so it's in use) and then modprobe, things are OK. Based on that I added the Nvidia IDs to vfio-pci.ids in syslinux.cfg and confirmed vfio was in use at boot. Then I modprobed, but it still broke Nvidia passthrough! This is perplexing to me. i915 shouldn't be able to taint the Nvidia card in my mind, since it's not a compatible driver. I assumed all the displays were reinitialized or something when the driver loaded and this was tainting the Nvidia card somehow, but if it's bound to vfio-pci I don't see how it should be able to get at it at all. I think I can work around this with scripts and a dummy VM to "protect" the Nvidia card from i915, but that seems ridiculous. Anyone else seeing this behavior, or is it just me? I've only tried Nvidia on SeaBIOS VMs, so maybe that is a factor.
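     The workaround I have in mind is just a user script along these lines; the VM name and sleep are placeholders, and this is a sketch of the idea rather than something I've tested end to end:

         #!/bin/bash
         # start a throwaway VM that holds the Nvidia card so it's "in use" via vfio
         virsh start dummy-vm
         sleep 15
         # only then load the iGPU driver for GVT-g / Quick Sync
         modprobe i915
         # the dummy VM can be shut down once i915 is up
         virsh shutdown dummy-vm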
  10. Yeah, sorry, I just noticed that thread. I was sure googling the chipset and unRAID would have turned up that thread, but Google seems worse and worse these days.
  11. So every time I've gone looking for SATA controllers there hasn't been much to find. We've had the ASMedia 1061, the Marvell chips that don't have a good reputation with unRAID, and then there's the SAS controllers with flashed firmware that most people settle on. I've seen a newer version of the ASMedia, but it's still a 2-port part. I saw this pop up last month: https://www.amazon.com/dp/B07T3RMFFT https://www.amazon.com/dp/B07ST9CPND Someone in the reviews already claims to have tried it with Unraid. If you look the chip up on JMicron's site, it's a PCIe Gen3 x2 SATA3 controller giving you 5 ports. Seems like an interesting alternative in a land of currently sparse options. Pretty sure even if you installed these things in a x1 slot you'd have enough bandwidth to service 5 spinning rust drives. The question is whether they are reliable.
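     Rough math behind that: PCIe 3.0 is about 985 MB/s per lane after encoding overhead, so even a x1 link gives close to 1 GB/s, while five spinning drives at maybe 150-250 MB/s each top out around 750-1250 MB/s and rarely all hit peak sequential at once. On the intended x2 link (~1970 MB/s) there's comfortable headroom.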
  12. Yeah, it doesn't seem like there is a lot of interest, I guess. That's kind of why I set out to try it myself. I actually didn't think I was going to get it to work at all, much less transfer the setup to unRAID. I'd given up on the idea since Coffee Lake support was initially not going to happen, and then it finally showed up in the 5.1 kernel. Next step is to mod my BIOS to support increased aperture sizes. That's a big problem for anyone running this on consumer motherboards: the mediated GPUs take their slice from the GPU aperture pie, not the shared variable memory. While changing the aperture size is a supported option, most motherboards seem to just lock it to 256MB. This means I can only create 2 of the smallest type of virtual GPU at the moment.
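     For anyone curious how that limit shows up, the host i915 exposes the available GVT-g profiles through sysfs, so you can check how many instances the aperture leaves room for; the exact type names vary by platform and kernel, so treat these as examples:

         # list the virtual GPU profiles the host i915 exposes
         ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
         # how many of a given profile can still be created
         cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances
         # create one with a fresh UUID
         echo "$(uuidgen)" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create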
  13. I had this problem as well. I don't know if all the links are yanked after new candidates come out or what. I ended up just finding a link to *some* RC somewhere and then changing the values in the link to match the version I wanted. I assume there were links at one point, but I couldn't find them!