scorcho99

Members

  • Content Count: 81
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About scorcho99

  • Rank: Advanced Member
  • Gender: Undisclosed


  1. So I've been playing around, and it seems 6.6.7 at least already supports GVT-d (direct, single device passthrough). I have it working on my Coffee Lake system and it's been solid through my basic testing phase. My request is that the additional components needed for GVT-g (mediated passthrough, where the virtual device has 'slices' taken off that can be given to multiple different VMs) be included in unraid. I looked around and, although I think Coffee Lake will require a 5-series kernel, I haven't seen anyone using this even with Skylake or Kaby Lake. I believe everyone talking about GVT and Intel iGPU passthrough means the single device mode; if I'm wrong about this, I'd love to hear someone has it working. I wouldn't even request support yet (although when things are more mature that would be nice), just the inclusion of the modules required by the Intel guide: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide Having the components to try would be a good start:
     • kvmgt
     • xengt (probably not needed for unraid)
     • vfio-iommu-type1
     • vfio-mdev
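For reference, once those modules are present, the mediated flow in the Intel guide boils down to a few commands. This is only a sketch based on that guide, not something tested on unraid: the PCI address, type name, and UUID below are illustrative, and GVT also needs `i915.enable_gvt=1` set as a kernel boot parameter.

```shell
# Load the mediation modules (assumes they were built for the unraid kernel)
modprobe kvmgt vfio-iommu-type1 vfio-mdev

# List the virtual GPU "slice" types the iGPU offers; names like
# i915-GVTg_V5_4 come from the Intel guide and vary by hardware
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Carve off one slice as a mediated device (UUID is arbitrary, generate
# your own with uuidgen); the resulting mdev is what gets passed to a VM
echo "a297db4a-f4c2-11e6-90f6-d3b88d6c9525" > \
  /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```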
  2. I don't have the notes I use for this in front of me, but the command at least looks right to me at first glance. And that looks like the right XML line.
  3. I personally usually create the VM in the unraid webUI and then use the command line to convert the disk type. Then I modify the XML to change the type from raw to qcow2. But qcow2 support in the webUI wasn't always available. If you already have a disk it will need to be converted; if you're starting fresh you probably won't need this conversion step.
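The convert-then-edit workflow looks roughly like this. Treat it as a sketch: the "Windows 10" VM name and vdisk path are placeholders, not taken from any particular setup here.

```shell
# Convert the raw image to qcow2 (the original file is left in place,
# so you can delete it only after the VM boots from the new one)
cd "/mnt/user/domains/Windows 10"
qemu-img convert -f raw -O qcow2 vdisk1.img vdisk1.qcow2

# Then edit the VM's XML (virsh edit "Windows 10") and update the disk:
#   <driver name='qemu' type='raw' .../>  ->  type='qcow2'
#   <source file='.../vdisk1.img'/>       ->  point at vdisk1.qcow2
```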
  4. This might help. https://blog.wikichoon.com/2014/03/snapshot-support-in-virt-manager.html It looks like your snapshot manager button is disabled, and you're using raw format disks rather than qcow2. It might be possible to snapshot with raw but I'm not sure how that would work.
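If the virt-manager button stays greyed out, snapshots can also be driven from the command line once the disk is qcow2. A hedged sketch; "Windows 10" and the snapshot name are placeholders:

```shell
# Take an internal snapshot of the (shut down or running) qcow2-backed VM
virsh snapshot-create-as "Windows 10" clean-install "before driver testing"

# List and roll back to it later
virsh snapshot-list "Windows 10"
virsh snapshot-revert "Windows 10" clean-install
```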
  5. It's not in front of me, but IIRC you have to open the virtual machine itself, not the host overview, and it's under one of the menu bar items.
  6. No reason this card won't work. My experience has been that cards starting with the HD 2000 series from ATI/AMD work, but prior to that it's a no-go. I don't think UEFI is required; you can run SeaBIOS VMs. I've actually gotten an FX series nvidia GPU to work, and I know one other user was using a PCI FX5200 as well.
  7. I suppose it's possible. I've had GPU passthrough problems solved by using SeaBIOS, but I thought that was from the vbios being tainted during host boot, and I wouldn't think it would make any difference with USB cards. I'd honestly suspect something else was changed, perhaps indirectly, by the SeaBIOS change.
  8. I had a couple of questions related to this: Does this work with mediated passthrough (multiple virtual GPUs created from one real device), or only GVT-d (single device)? Reading around, it sounds like Coffee Lake support exists, but it's a pretty recent development and there aren't a lot of reports in the wild. Some people claim they have it working on Arch though; I believe it requires a 5.1 or 5.2 Linux kernel. Are there plans to merge this in?
  9. Are you sure it's the 4 identical cards causing the problem and not something else?
  10. Very strange. Have you tried passing it through to a different Linux VM? I have cards with this chipset working in Linux Mint 18.3.
  11. I'm curious if anyone else has numbers on some modern GPUs when used with unraid. I've been testing this out the past couple of days with my limited selection of cards. I've found a couple of weird things, like my nvidia cards seeming to use more power when the VM is shut down than at the desktop, while my AMD cards seem to be at their lowest when they're unloaded, albeit not by much. Any observations are of interest. I'm testing with a Kill A Watt and using subtraction with different cards swapped in. The idle power is a little noisy on this system, but I'm pretty confident in the reading to within ±1 watt. This is with a gold-rated Antec power supply. My GT710 seems to idle at 5 watts, although I'm using it as the host card, so perhaps it would do better with a driver loaded. My R7 250 DDR3 seems to idle at 6 watts. My 1060 3GB seems to idle at 14 watts, which seems high. Has anyone gotten ZeroCore on AMD to work in a satisfactory way? It didn't seem to make much difference to me versus shutting down the VM. It appears to be broken in newer drivers as well; the fan only shut down when I loaded an old 14.4 driver in Windows 7. There are forum posts indicating it doesn't even work in Windows 10.
  12. When I'm stuck I just start trying to isolate things. Sounds like the hardware works, so that's not it. And you had a VM config working before, so it must be the VM configuration or the OS. My thought on trying a different OS is that if you got it to work with the same VM config, you'd know it was the OS configuration that you needed to attack.
  13. Seems like you've tried most of the obvious. Does it work bare metal? And if so, have you tried a different VM OS?
  14. I have an Asus H77 motherboard in my main rig that supports VT-d. IIRC there are a lot of forum posts complaining about it, but I think it came to everything with BIOS updates eventually.
  15. I doubt that iGPU can be passed through; I think only parts from the mainstream platform have support for this. Are you using the m.2 slot on the board? You could get one of those m.2 SATA adapter cards to free up the x1 slot and install a GPU in that.