Everything posted by scorcho99

  1. Whew, that is a big increase. I guess not all cards/setups work the same way.
  2. For Intel GVT-g (mediated virtual GPUs) I think all that is needed is for these modules to be included: kvmgt, vfio-iommu-type1, and vfio-mdev. I wouldn't ask for a whole web GUI; I'd be happy if I could play around with it by editing XML and generating the mediated devices from the command line.
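     For reference, the command-line side would look roughly like this once the modules are present (a sketch following the Intel setup guide; the iGPU address 0000:00:02.0 and the type name i915-GVTg_V5_4 are examples that vary by system):

     # load the GVT-g pieces
     modprobe kvmgt
     modprobe vfio-mdev
     modprobe vfio-iommu-type1

     # list the virtual GPU 'slice' types the iGPU offers
     ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

     # create a mediated device by writing a UUID into the chosen type's create node
     echo "$(uuidgen)" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create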
  3. I'm curious what graphics card you're running. My experience recently has been that GPUs are in a lower power state before my VMs start. I did notice that I could squeeze a single extra watt of savings out of my AMD cards if I used the drivers that turn off the fan and display but it was still only 1 watt.
  4. I have this working on my Z370 + i5 8400 machine with Windows 10 and Linux VMs. Not sure what I'm doing that others aren't, but I'm using the iGPU as the host GPU (so it loses output when the VM starts), the VMs have to be SeaBIOS, and I have vesafb and efifb disabled, though I doubt that is needed. I may have added i915.enable_gvt=1 to syslinux (see the snippet below), but I think that was just in an attempt to get mediated passthrough to work. I can't remember if I blacklisted the i915 driver or not. I'm using Unraid 6.6.7.
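     For what it's worth, the syslinux append line I have in mind looks something like this (a sketch from memory; as noted, the i915.enable_gvt=1 part may not be needed for plain GVT-d):

     label Unraid OS
       menu default
       kernel /bzimage
       append i915.enable_gvt=1 video=vesafb:off video=efifb:off initrd=/bzroot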
  5. So I've been playing around and it seems 6.6.7 at least already supports GVT-d (direct, single device passthrough). I have it working on my coffeelake system and it's been solid through my basic testing phase. My request is that the additional components needed for GVT-g (mediated passthrough, where the virtual device has 'slices' taken off that can be given to multiple different VMs) be included in Unraid.

     I looked around and although I think coffeelake will require a 5-series kernel, I haven't seen anyone using this even with skylake or kabylake. I believe everyone talking about GVT and Intel iGPU passthrough is talking about the single device mode. If I'm wrong about this I'd love to hear that someone has this working. I wouldn't even request support actually (although when things are more mature with this that would be nice), just include the modules required in the Intel guide: https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide

     Having the components to try would be a good start:
     - kvmgt
     - xengt (probably not needed for Unraid)
     - vfio-iommu-type1
     - vfio-mdev
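     And for the XML side, attaching one of the mediated devices to a VM in libvirt would look something like this (a sketch; the uuid is whatever was written when the slice was created):

     <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
       <source>
         <address uuid='4b20d080-1b54-4048-85b3-a6a62d165c01'/>
       </source>
     </hostdev>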
  6. I don't have my notes that I use for this, but the command at least looks right at first glance to me. And that looks like the XML line.
  7. I usually create the VM in the Unraid webUI and then use the command line to convert the disk type, then modify the XML to change the type from raw to qcow2 (commands below). But qcow2 support in the webUI wasn't always available. If you already have a disk it will need to be converted; if you're starting fresh you probably wouldn't need to do this conversion.
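     The conversion itself goes something like this (a sketch; the VM name and paths are examples):

     # convert the raw vdisk to qcow2 with the VM shut down
     qemu-img convert -f raw -O qcow2 /mnt/user/domains/Win10/vdisk1.img /mnt/user/domains/Win10/vdisk1.qcow2

     Then in the VM's XML, point the <source file='...'/> line at the new .qcow2 file and change the driver line to:

     <driver name='qemu' type='qcow2'/>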
  8. This might help. https://blog.wikichoon.com/2014/03/snapshot-support-in-virt-manager.html It looks like your snapshot manager button is disabled, and you're using raw format disks rather than qcow2. It might be possible to snapshot with raw but I'm not sure how that would work.
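     If the button stays greyed out even after converting to qcow2, virsh can take snapshots from the command line too (a sketch; 'Win10' is an example domain name):

     # create, list, and revert an internal snapshot
     virsh snapshot-create-as Win10 before-update
     virsh snapshot-list Win10
     virsh snapshot-revert Win10 before-update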
  9. It's not in front of me, but IIRC you have to open the virtual machine itself, not the host overview, and it's under one of the menu bar items.
  10. No reason this card won't work. My experience has been that cards starting with the HD2000 series from ATI/AMD work, but prior to that it's a no-go. I don't think UEFI is required; you can run SeaBIOS VMs. I've actually gotten an FX series Nvidia GPU to work, and I know one other user was using a PCI FX5200 as well.
  11. I suppose it's possible. I've had GPU passthrough problems solved by using SeaBIOS, but I thought that was from the vbios being tainted during host boot, and I would not think it would make any difference with USB cards. I'd honestly suspect something else was changed, perhaps indirectly, by the SeaBIOS change.
  12. I had a couple of questions related to this: Does this work with mediated passthrough (multiple virtual GPUs created from one real device), or only GVT-d (single device)? Reading around, it sounds like coffeelake support exists, but it's a pretty recent development and there aren't a lot of reports in the wild. Some people claim they have it working on Arch though; I believe it requires a 5.1 or 5.2 Linux kernel. Are there plans to merge this in?
  13. Are you sure it's the 4 identical cards causing the problem and not something else?
  14. Very strange. Have you tried passing it through to a different linux VM? I have cards with this chipset working in linux mint 18.3.
  15. I'm curious if anyone else has numbers on some modern GPUs when used with Unraid. I've been testing this out the past couple days with my limited selection of cards. I've found a couple of weird things, like my Nvidia cards seeming to use more power when the VM is shut down than at the desktop, while my AMD cards seem to be at their lowest when they're unloaded, albeit not by much. Any observations are of interest.

     I'm testing with a Kill A Watt and using subtraction with different cards swapped in. The idle power is a little noisy on this system, but I'm pretty confident in the readings to +/- 1 watt. This is with a gold Antec power supply.
     - My GT710 seems to idle at 5 watts, although I'm using it as a host card, so perhaps it would do better with a driver loaded.
     - My R7 250 DDR3 seems to idle at 6 watts.
     - My 1060 3GB seems to idle at 14 watts, which seems high.

     Has anyone gotten ZeroCore on AMD to work in a satisfactory way? It didn't seem to make much difference for me versus shutting down the VM. It appears to be broken in newer drivers as well; the fan only shut down when I loaded an old 14.4 driver in Windows 7. There are forum posts indicating it doesn't even work in Windows 10.
  16. When I'm stuck I just start to try isolating. Sounds like the hardware works so that's not it. And you had a VM config working before, so it must be the VM configuration or the OS. My thought on trying a different OS is that if you got it to work with the same VM config you'd know it was the OS configuration that you needed to attack.
  17. Seems like you've tried most of the obvious. Does it work bare metal? And if so, have you tried a different VM OS?
  18. I have an Asus H77 motherboard in my main rig that supports VT-d. IIRC there were a lot of forum posts complaining about it, but I think support came to all of those boards with BIOS updates eventually.
  19. I doubt that iGPU can be passed through, I think only stuff from the mainstream platform has support for this. Are you using the m.2 slot on the board? You could get one of those m.2 sata cards to free up the 1x port and install a GPU in that.
  20. Thanks. I ended up copying over the files created by the makefile and it seemed like it worked. I believe I read somewhere that it's statically linked by default, so that probably explains why it seemed to work.
  21. Believe it or not, I read this whole thread. I think I even retained some of the information I read. Someone asked for manual/scheduled hash creation only, rather than using inotify to do the hashes when files are created. Was that feature ever added? I'd rather just swoop through nightly and do the hashes then, instead of during the day when the server is busy doing other things. Plus, inotify-tools has issues where it can miss adding watches or even perform erroneous watches when files are moved. I think that might explain the issues some people had while I was reading this thread.
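     In the meantime, I figure a scheduled pass could be approximated with a nightly cron job; a minimal sketch, assuming b2sum is available and using /mnt/user/media and /boot/config/hashes.b2 as stand-in paths:

     #!/bin/bash
     # nightly_hash.sh - hash files changed in the last day, appending to a log on the flash drive
     HASHLOG=/boot/config/hashes.b2
     find /mnt/user/media -type f -mtime -1 -print0 | xargs -0 -r b2sum >> "$HASHLOG"

     Scheduled for 3am or so, that keeps the hashing off-hours without any inotify watches.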
  22. So I want to install an alternative to inotify-tools on my Unraid server to use in bash scripts: https://github.com/tinkershack/fluffy/ I tried using the dev pack to build it (it needed make and some other dependencies listed), but I got stuck on a dependency that apparently wasn't on the list and also wasn't in the dev pack. I also had questions about how persistent this would be anyway, since Unraid runs out of a ramdisk. Anyway, can you build something like this elsewhere and copy over the binary? Should I build it on Slackware, or does it even matter? My machines are all different versions of Linux Mint. I can't find the base Slackware version used by Unraid to download so I can do that.
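     What I'm imagining is something like this (a sketch; /boot/extra-bin is just an example folder name):

     # on the build machine: check what the binary links against
     # ("not a dynamic executable" means it's static and should run anywhere)
     ldd ./fluffy

     # copy it to the flash drive so it survives reboots
     mkdir -p /boot/extra-bin
     cp ./fluffy /boot/extra-bin/

     # then in /boot/config/go, install it at boot (the flash is FAT, so restore the exec bit):
     #   cp /boot/extra-bin/fluffy /usr/local/bin/ && chmod +x /usr/local/bin/fluffy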
  23. Someone reported some issues with the script I wrote in another thread. I pulled the one I've been using off the server and I think it corrected the bug. No warranties on this; I wouldn't say I'm a bash expert. Note: LVSAVEDIR and LVSNAPSHOTDIR are probably different on your machine, so change those to what you want.

     #!/bin/bash
     # S00slqemu.sh
     # Stops libvirt and moves the save and snapshot folders,
     # creating symlinks to them in the original locations.

     LVSAVEDIR=/mnt/cache/VMs/qemu_nv/save
     LVSNAPSHOTDIR=/mnt/cache/VMs/qemu_nv/snapshot

     # stop libvirt if it's running
     if [ -f /var/run/libvirt/libvirtd.pid ]; then
         /etc/rc.d/rc.libvirt stop
     fi

     # relocate the save directory and symlink it back into place
     if [ "$LVSAVEDIR" != "/var/lib/libvirt/qemu/save" ]; then
         if [ ! -L /var/lib/libvirt/qemu/save ]; then
             if [ ! -e "$LVSAVEDIR" ]; then
                 if [ ! -e /var/lib/libvirt/qemu/save ]; then
                     mkdir "$LVSAVEDIR"
                 else
                     mv /var/lib/libvirt/qemu/save "$LVSAVEDIR"
                 fi
             else
                 rm -r /var/lib/libvirt/qemu/save
             fi
         fi
         ln -s "$LVSAVEDIR" /var/lib/libvirt/qemu/save
     fi

     # same thing for the snapshot directory
     if [ "$LVSNAPSHOTDIR" != "/var/lib/libvirt/qemu/snapshot" ]; then
         if [ ! -L /var/lib/libvirt/qemu/snapshot ]; then
             if [ ! -e "$LVSNAPSHOTDIR" ]; then
                 if [ ! -e /var/lib/libvirt/qemu/snapshot ]; then
                     mkdir "$LVSNAPSHOTDIR"
                 else
                     mv /var/lib/libvirt/qemu/snapshot "$LVSNAPSHOTDIR"
                 fi
             else
                 rm -r /var/lib/libvirt/qemu/snapshot
             fi
         fi
         ln -s "$LVSNAPSHOTDIR" /var/lib/libvirt/qemu/snapshot
     fi

     /etc/rc.d/rc.libvirt start
  24. I'm using an MSI X470 Gaming Plus. But I was more looking to see if anyone had installed that card on any Ryzen board in their chipset-controlled slot and had it detect, for some peace of mind. I have some old PCIe cards that just will not detect on this board but detect fine on other platforms; they're all PCIe 1.0 devices.
  25. I'm thinking of buying one of these cards for my Unraid box (really limited options for good SATA cards with more than 2 ports, btw) and stumbled across this thread: https://forums.servethehome.com/index.php?threads/anyone-running-a-lsi-9211-8i-in-an-pcie-x4-slot.2942/ For my purposes, I'm going to run this in the x4 PCIe 2.0 slot (x16 physical) on my Ryzen X470 board, since I want to keep the other slots for graphics cards. So that thread gave me pause. There are no Ryzen examples there, so I was wondering if anyone is doing this successfully? I also thought maybe they needed to cover some of the pins on their card, but that seems like a different issue.