scorcho99

Everything posted by scorcho99

  1. I've added a module that doesn't load by itself on boot to the append line in syslinux.cfg. But once I'm booted up, if I rmmod the module it says it's not loaded. That tells me I have failed somewhere. I have the vfio-pci IDs listed and those seem to work, so I'm not sure where I'm going wrong. Does someone have some example syntax where they are doing this successfully? Also confusingly, it says i915 isn't loaded either, but lspci -v shows "kernel module in use: i915" for my Intel iGPU, which seems contradictory.
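     For reference, a rough sketch of the kind of setup I mean (the device IDs and module name are just placeholders, and the go-file modprobe is only my guess at the usual way modules get loaded on unraid, which is exactly the part I'm unsure about):

        label Unraid OS
          kernel /bzimage
          append vfio-pci.ids=8086:1912 initrd=/bzroot

        # /boot/config/go (assumption: load extra modules here rather than on the append line)
        modprobe kvmgt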
  2. @segator , did you ever find decent instructions on a custom kernel for unraid? I'm in a similar kind of situation. I would like to add some modules to test something before making a better request.
  3. For more detail on what is required, here is a thread where it was added to Solus: https://dev.getsol.us/T6812 Kernel options: CONFIG_DRM_I915_GVT, CONFIG_DRM_I915_GVT_KVMGT, CONFIG_VFIO_MDEV, CONFIG_VFIO_MDEV_DEVICE. And a snippet from the developer on how to build: https://github.com/intel/gvt-linux/issues/75#issuecomment-468122607
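     As a minimal sketch, those options would look something like this in the kernel .config (whether each piece ends up built-in or as a module is my guess; check the Solus change for the exact choices):

        CONFIG_DRM_I915_GVT=y
        CONFIG_DRM_I915_GVT_KVMGT=m
        CONFIG_VFIO_MDEV=m
        CONFIG_VFIO_MDEV_DEVICE=m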
  4. So am I to understand that 6.9-rc1 will be essentially just like 6.8-rc7? Is the GSO bug a concern if you don't use docker at all? It sounds like the qcow2 corruption bug was corrected in 6.8-rc5 or rc6 so that shouldn't be a concern.
  5. Interesting, I always use i440fx. I noticed today that my oldest card, an HD3450, uses ~8 watts before the VM starts. It goes up when Windows is running in a VM using it, but unlike other cards, when I shut the VM down the card continues to draw the extra power.
  6. Whew, that is a big increase. I guess not all cards/setups work the same way.
  7. For Intel GVT-g (mediated virtual GPUs) I think all that is needed is for these modules to be included: kvmgt, vfio-iommu-type1, vfio-mdev. I wouldn't ask for a whole web GUI; I'd be happy if I could play around with it by editing XML and generating the mediated devices from the command line.
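     Roughly, the command-line side would look like this (a sketch based on the Intel setup guide; the iGPU's PCI address and the i915-GVTg type name vary by system):

        modprobe kvmgt
        # list the virtual GPU 'slice' types the iGPU offers
        ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
        # create a mediated device with a fresh UUID under one of those types
        echo $(uuidgen) > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create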
  8. I'm curious what graphics card you're running. My experience recently has been that GPUs are in a lower power state before my VMs start. I did notice that I could squeeze a single extra watt of savings out of my AMD cards if I used the drivers that turn off the fan and display but it was still only 1 watt.
  9. I have this working on my z370 + i5 8400 machine with Windows 10 and linux VMs. Not sure what I'm doing that others aren't, but I'm using the iGPU as the host GPU (so it loses output when the VM starts), the VMs have to be seabios, and I have vesafb and efifb disabled, though I doubt that is needed. I may have added i915.enable_gvt=1 to syslinux, but I think that was just in an attempt to get mediated passthrough to work. I can't remember if I blacklisted the i915 driver or not. I'm using unraid 6.6.7.
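     For what it's worth, my syslinux append line looks something like this (reconstructed from memory, so treat the exact parameters as a guess):

        append initrd=/bzroot video=vesafb:off video=efifb:off i915.enable_gvt=1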
  10. So I've been playing around and it seems 6.6.7 at least already supports GVT-d (direct, single device passthrough). I have it working on my coffeelake system and it's been solid through my basic testing phase. My request is that the additional components needed for GVT-g (mediated passthrough, where the virtual device has 'slices' taken off that can be given to multiple different VMs) be included in unraid. I looked around and, although I think coffeelake will require a 5-series kernel, I haven't seen anyone using this even with sky or kabylake. I believe everyone talking about GVT and Intel iGPU passthrough is talking about the single device mode. If I'm wrong about this I'd love to hear someone has this working. I wouldn't even request support actually (although when things are more mature with this that would be nice), just include the modules required in the Intel guide: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide Having the components to try would be a good start: kvmgt, xengt (probably not needed for unraid), vfio-iommu-type1, vfio-mdev.
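      Once a mediated device exists, attaching it to a VM would presumably just mean adding a hostdev block like this to the VM's XML (a sketch based on libvirt's mdev syntax; the UUID is whatever was used when the device was created):

        <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
          <source>
            <address uuid='4b20d080-1b54-4048-85b3-a6a62d165c01'/>
          </source>
        </hostdev>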
  11. I don't have my notes that I use for this, but the command at least looks right at first glance to me. And that looks like the XML line.
  12. I personally usually create the VM in unraid webUI and then use the command line to convert the disk type. Then I modify the XML to change the type from raw to qcow2. But qcow2 support in the webUI wasn't always available. If you already have a disk it will need to be converted, if starting fresh you probably wouldn't need to do this conversion.
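      The conversion itself is one qemu-img command (the paths here are just placeholders for wherever the vdisk lives):

        qemu-img convert -p -f raw -O qcow2 /mnt/user/domains/win10/vdisk1.img /mnt/user/domains/win10/vdisk1.qcow2

      Then in the VM's XML, point the disk's <source file=...> at the new file and change the driver line's type='raw' to type='qcow2'.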
  13. This might help. https://blog.wikichoon.com/2014/03/snapshot-support-in-virt-manager.html It looks like your snapshot manager button is disabled, and you're using raw-format disks rather than qcow2. It might be possible to snapshot with raw, but I'm not sure how that would work.
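      If the button stays greyed out, snapshots can also be taken from the command line once the disk is qcow2 (the VM name here is just an example):

        virsh snapshot-create-as "Windows 10" pre-update "state before Windows updates"
        virsh snapshot-list "Windows 10"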
  14. It's not in front of me, but IIRC you have to open the virtual machine itself, not the host overview, and it's under one of the menu bar items.
  15. No reason this card won't work. My experience has been that cards starting with the HD2000 series from ATI/AMD work, but prior to that it's a no-go. I don't think UEFI is required; you can run seabios VMs. I've actually gotten an FX-series nvidia GPU to work, and I know one other user was using a PCI FX5200 as well.
  16. I suppose it's possible. I've had GPU passthrough problems solved by using SeaBios, but I thought that was from the vbios being tainted during host boot, and I wouldn't think it would make any difference with USB cards. I'd honestly suspect something else was changed, perhaps indirectly, by the seabios change.
  17. I had a couple of questions related to this: Does this work with mediated passthrough (multiple virtual GPUs created from one real device), or only GVT-d (single device)? Reading around, it sounds like coffeelake support exists, but it's a pretty recent development and there aren't a lot of reports in the wild. Some people claim they have it working on arch, though; I believe it requires the 5.1 or 5.2 linux kernel. Are there plans to merge this in?
  18. Very strange. Have you tried passing it through to a different linux VM? I have cards with this chipset working in linux mint 18.3.
  19. I'm curious if anyone else has any numbers on some modern GPUs when used with unraid. I've been testing this out the past couple of days with my limited selection of cards. I've found a couple of weird things, like my nvidia cards seem to use more power when the VM is shut down than at the desktop, but my AMD cards seem to be at their lowest when they're unloaded, albeit not by much. Any observations are of interest. I'm testing with a Kill A Watt and using subtraction with different cards swapped in. The idle power is a little noisy on this system, but I'm pretty confident in the reading to +/- 1 watt. This is with a gold-rated Antec power supply. My GT710 seems to idle at 5 watts, although I'm using it as a host card so perhaps it would do better with a driver loaded. My R7 250 DDR3 seems to idle at 6 watts. My 1060 3GB seems to idle at 14 watts, which seems high. Has anyone gotten ZeroCore on AMD to work in a satisfactory way? It didn't seem to make much difference to me versus shutting down the VM. It appears to be broken in newer drivers as well; the fan only shut down when I loaded an old 14.4 driver in Windows 7. There are forum posts indicating it doesn't even work in Windows 10.
  20. When I'm stuck I just start to try isolating. Sounds like the hardware works so that's not it. And you had a VM config working before, so it must be the VM configuration or the OS. My thought on trying a different OS is that if you got it to work with the same VM config you'd know it was the OS configuration that you needed to attack.
  21. Seems like you've tried most of the obvious. Does it work bare metal? And if so, have you tried a different VM OS?
  22. I have an Asus H77 motherboard in my main rig that supports vt-d. IIRC there are a lot of forum posts complaining about it, but I think support eventually came to all of those boards with BIOS updates.
  23. I doubt that iGPU can be passed through, I think only stuff from the mainstream platform has support for this. Are you using the m.2 slot on the board? You could get one of those m.2 sata cards to free up the 1x port and install a GPU in that.
  24. Thanks. I ended up copying over the files created by the makefile and it seemed like it worked. I believe I read somewhere that it's statically linked by default, so that probably explains why it seemed to work.
  25. Believe it or not, I read this whole thread. I think I even retained some of the information I read. Someone asked for manual/scheduled-only hash creation, rather than using inotify to do the hashes when files are created. Was that feature ever added? I'd rather just sweep through nightly and do them then, instead of during the day when the server is busy doing other things. Plus, inotify-tools has issues where it can miss adding watches or even set up erroneous watches when files are moved. I think that might explain some of the issues people described in this thread.
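      Just to illustrate what I mean (not how the plugin actually works): a nightly pass could be as simple as a cron job that hashes whatever changed since the previous day; the path and the list file are made up for the example:

        # hash files modified in the last day and append to a running list
        find /mnt/user/data -type f -mtime -1 -exec sha256sum {} + >> /boot/config/file-hashes.sha256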