Everything posted by scorcho99

  1. I suppose it's possible. I've had GPU passthrough problems solved by using SeaBIOS, but I thought that was from the vBIOS being tainted during host boot, and I wouldn't think it would make any difference with USB cards. I'd honestly suspect something else was changed, perhaps indirectly, by the SeaBIOS change.
  2. I had a couple of questions related to this: Does this work with mediated passthrough (GVT-g, multiple virtual GPUs created from one real device), or only GVT-d (single device)? Reading around, it sounds like Coffee Lake support exists, but it's a pretty recent development and there aren't a lot of reports in the wild. Some people claim they have it working on Arch, though; I believe it requires Linux kernel 5.1 or 5.2. Are there plans to merge this in?
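     For reference, this is roughly how a mediated vGPU gets created on a GVT-g-enabled host. This is just a sketch from my reading, not something unRAID does today; the PCI address, the type name, and the UUID are all examples, and the types actually listed depend on the iGPU and kernel:

        # Host needs GVT-g enabled, e.g. i915.enable_gvt=1 on the kernel
        # command line, plus the kvmgt module loaded.
        # List the vGPU types the iGPU offers:
        ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
        # Create a mediated device (one virtual GPU) with a fresh UUID:
        UUID=$(uuidgen)
        echo "$UUID" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
        # The resulting /sys/bus/mdev/devices/$UUID can then be handed to a
        # VM as a hostdev. GVT-d, by contrast, passes the whole iGPU to one VM.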
  3. Are you sure it's the 4 identical cards causing the problem and not something else?
  4. Very strange. Have you tried passing it through to a different Linux VM? I have cards with this chipset working in Linux Mint 18.3.
  5. I'm curious if anyone else has any numbers on some modern GPUs when used with unRAID. I've been testing this out the past couple of days with my limited selection of cards. I've found a couple of weird things: my Nvidia cards seem to use more power when the VM is shut down than at the desktop, but my AMD cards seem to be at their lowest when they're unloaded, albeit not by much. Any observations are of interest. I'm testing with a Kill A Watt and using subtraction with different cards swapped in. The idle power is a little noisy on this system, but I'm pretty confident in the reading to +/- 1 watt. This is with a gold-rated Antec power supply.
     My GT 710 seems to idle at 5 watts, although I'm using it as a host card, so perhaps it would do better with a driver loaded.
     My R7 250 DDR3 seems to idle at 6 watts.
     My 1060 3GB seems to idle at 14 watts, which seems high.
     Has anyone gotten ZeroCore on AMD to work in a satisfactory way? It didn't seem to make much difference to me versus shutting down the VM. It appears to be broken in newer drivers as well; the fan only shut down when I loaded an old 14.4 driver in Windows 7. There are forum posts indicating it doesn't even work in Windows 10.
  6. When I'm stuck, I just start isolating variables. Sounds like the hardware works, so that's not it. And you had a VM config working before, so it must be the VM configuration or the OS. My thought on trying a different OS is that if you got it to work with the same VM config, you'd know it was the OS configuration that you needed to attack.
  7. Seems like you've tried most of the obvious. Does it work bare metal? And if so, have you tried a different VM OS?
  8. I have an Asus H77 motherboard in my main rig that supports VT-d. IIRC there are a lot of forum posts complaining about it, but I think support came to everything with BIOS updates eventually.
  9. I doubt that iGPU can be passed through; I think only parts from the mainstream platform have support for this. Are you using the M.2 slot on the board? You could get one of those M.2 SATA cards to free up the x1 slot and install a GPU in that.
  10. Thanks. I ended up copying over the files created by the makefile and it seemed like it worked. I believe I read somewhere that it links statically by default, so that probably explains why it seemed to work.
  11. Believe it or not, I read this whole thread. I think I even retained some of the information I read. Someone asked for manual/scheduled hash creation only, rather than using inotify to do the hashes when files are created. Was that feature ever added? I'd rather just sweep through nightly and do them then, rather than during the day when the server is busy doing other things; something like the sketch below is what I have in mind. Plus, inotify-tools has issues where it can miss adding watches or even perform erroneous watches when files are moved. I think that might explain the issues some people have had while I was reading this thread.
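     A rough sketch of the nightly pass; the share path, hash file location, and choice of md5 are just my assumptions, not anything the plugin does:

        #!/bin/bash
        # Nightly hash pass: hash any file that doesn't have a stored hash yet.
        SHARE=/mnt/user/media          # share to cover (example path)
        HASHES=/boot/config/hashes.md5 # accumulated hash list (example path)
        touch "$HASHES"
        # Walk the share, skip files already recorded, append new hashes.
        find "$SHARE" -type f -print0 | while IFS= read -r -d '' f; do
            grep -qF "$f" "$HASHES" || md5sum "$f" >> "$HASHES"
        done

     Kicked off from cron at 3 AM or so, it would stay out of the way during the day.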
  12. So I want to install an alternative to inotify-tools on my unRAID server to use in bash scripts: https://github.com/tinkershack/fluffy/ I tried using the DevPack to build it (it needed make and some other listed dependencies), but I got stuck on a dependency that apparently wasn't on the list and also wasn't in the DevPack. I also had questions about how persistent this would be anyway, since unRAID runs out of a RAM disk. Anyway, can you build something like this elsewhere and copy over the binary? Should I build it on Slackware, or does it even matter? My machines all run different versions of Linux Mint. I can't find the base Slackware version unRAID uses available for download to do that.
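     If anyone else tries this, the quick way to check whether a binary built elsewhere will survive the trip is to look at its dynamic dependencies. A sketch; "fluffy" here is just the binary that repo produces:

        # On the build machine, after make:
        ldd ./fluffy
        #   "not a dynamic executable"  -> statically linked, safe to copy over
        #   a list of .so files         -> those libraries must also exist on unRAID
        # Some projects let you force a static build, along the lines of:
        make LDFLAGS=-static   # depends on the project's makefile supporting it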
  13. Someone reported some issues with the script I wrote in another thread. I pulled the one I've been using off the server and I think it corrects the bug. No warranties on this; I wouldn't say I'm a bash expert. Note: LVSAVEDIR and LVSNAPSHOTDIR are probably different on your machine, so change those to what you want.

        #!/bin/bash
        # S00slqemu.sh
        # Stops libvirt and moves the save and snapshot folders,
        # creating symlinks to them.

        LVSAVEDIR=/mnt/cache/VMs/qemu_nv/save
        LVSNAPSHOTDIR=/mnt/cache/VMs/qemu_nv/snapshot

        # Stop libvirt if it is running.
        if [ -f /var/run/libvirt/libvirtd.pid ]; then
            /etc/rc.d/rc.libvirt stop
        fi

        if [ "$LVSAVEDIR" != "/var/lib/libvirt/qemu/save" ]; then
            if [ ! -L /var/lib/libvirt/qemu/save ]; then
                if [ ! -e "$LVSAVEDIR" ]; then
                    if [ ! -e /var/lib/libvirt/qemu/save ]; then
                        # Neither location exists yet: create the target.
                        mkdir -p "$LVSAVEDIR"
                    else
                        # Move the existing save folder to the target.
                        mv /var/lib/libvirt/qemu/save "$LVSAVEDIR"
                    fi
                else
                    # Target already populated: drop the default folder.
                    rm -r /var/lib/libvirt/qemu/save
                fi
                ln -s "$LVSAVEDIR" /var/lib/libvirt/qemu/save
            fi
        fi

        if [ "$LVSNAPSHOTDIR" != "/var/lib/libvirt/qemu/snapshot" ]; then
            if [ ! -L /var/lib/libvirt/qemu/snapshot ]; then
                if [ ! -e "$LVSNAPSHOTDIR" ]; then
                    if [ ! -e /var/lib/libvirt/qemu/snapshot ]; then
                        mkdir -p "$LVSNAPSHOTDIR"
                    else
                        mv /var/lib/libvirt/qemu/snapshot "$LVSNAPSHOTDIR"
                    fi
                else
                    rm -r /var/lib/libvirt/qemu/snapshot
                fi
                ln -s "$LVSNAPSHOTDIR" /var/lib/libvirt/qemu/snapshot
            fi
        fi

        /etc/rc.d/rc.libvirt start
  14. I'm using an MSI X470 Gaming Plus. But I was more looking to see if anyone had installed that card on any Ryzen board in their chipset-controlled slot and had it detected, for some peace of mind. I have some old PCIe cards that just will not detect on this board but detect fine on other platforms; they're all PCIe 1.0 devices.
  15. I'm thinking of buying one of these cards for my unRAID box (really limited options for good SATA cards with more than 2 ports, btw) and stumbled across this thread: https://forums.servethehome.com/index.php?threads/anyone-running-a-lsi-9211-8i-in-an-pcie-x4-slot.2942/ For my purposes, I'm going to run this in the x4 PCIe 2.0 (x16 physical) slot on my Ryzen X470 board; I want to keep the other slots for graphics cards. So that thread gave me pause. There are no Ryzen examples there, so I was wondering if anyone is doing this successfully? I also thought maybe they needed to cover some of the pins on their card, but that seems like a different issue.
  16. Pretty old post; sorry, I don't check these forums too often. It's possible that script is kind of buggy, and I've updated it, but I believe it at least mostly worked. This has been working well for me for quite a while. I'm not sure about the hibernate button, but through virt-manager, and I believe some scripts I use for backups, I can trigger the save of the VM state to disk. IIRC it's similar to a guest-driven hibernate but not quite the same. I vaguely remember giving up on guest-driven hibernate/save for some reason or other. The guest time will be incorrect upon restore of the VM save state, but it corrects itself within a minute or two, probably due to the NTP time server. One gotcha I recall: snapshots would not work with UEFI-based VMs. There was some bug or limitation in how NVRAM states were stored that prevented it from working, and the last time I checked, no one had fixed it. This didn't bother me; I just run everything with SeaBIOS.
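     For anyone wanting to script the same thing, the state save I'm describing maps onto libvirt's save/restore commands. A sketch; "Win10" and the state path are placeholders for your own domain name and storage:

        # Dump the VM's memory state to disk and stop it ("hibernate"):
        virsh save Win10 /mnt/cache/VMs/states/Win10.sav
        # Bring it back later, exactly where it left off:
        virsh restore /mnt/cache/VMs/states/Win10.sav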
  17. I'll have to check my script at home; I've been running that for a while. I suspect it just had a bug in it that I probably fixed at some point. Edit: I updated the original post with my new script. I think it corrects the error, or at least it's different.
  18. You can try with a SeaBIOS-based VM. The easiest way is to create a Windows 7 VM in the unRAID web console, which will use SeaBIOS by default. I get a black screen on my Intel system if the Nvidia card is primary with UEFI. I believe it's because the host boot "touches" the UEFI vBIOS, while the SeaBIOS path is left intact.
  19. I use virt-manager with SPICE VMs. I don't have my notes in front of me, but I believe I used Space Invader One's YouTube tutorial for the virt-manager setup and then did something similar to modify the VM to use a SPICE display. AFAIK virt-manager is not available for Windows, however.
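     For anyone setting this up, pointing virt-manager on a Linux desktop at unRAID's libvirt is a one-liner; "tower" is a placeholder hostname, and SSH access to the server is assumed:

        virt-manager --connect qemu+ssh://root@tower/system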
  20. I haven't tried figuring out how this works, but one thing that kind of stinks about KVM, and by extension unRAID, is that the virtual display adapters don't support any 3D acceleration. You can use PCI passthrough or Intel GPUs to get something, but that requires hardware for the task. I know VMware allows some 3D work to be offloaded to the host graphics, and while the VMware SVGA device is available, I don't think the host has the necessary hooks to use it. I'm not sure what VirtualBox uses. But there is something in existence, at least for Linux guests: https://virgil3d.github.io/ It would be nice if this could be made to work with unRAID. Maybe the components are already there? I can't find much in the way of tutorials, but I gather it requires a Mesa-supported GPU on the host (so AMD or Intel graphics).
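     For what it's worth, this is roughly what virgl looks like at the raw QEMU level. A sketch only; unRAID's libvirt would need the equivalent XML, and I haven't confirmed its QEMU build ships virgl support:

        # Linux guest with a virtio GPU and an OpenGL-accelerated display:
        qemu-system-x86_64 -enable-kvm -m 4G \
            -device virtio-vga-gl \
            -display gtk,gl=on \
            -drive file=linux-guest.qcow2,if=virtio
        # Older QEMU spells the device: -device virtio-vga,virgl=on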
  21. I'm surprised anyone even attempted the Thunderbolt cable and dock. Sounds completely bass-ackwards to me. In addition to being expensive, I thought Thunderbolt had pretty strict cable length requirements. And after you've brought the video card to the location, with the dock needing power... you might as well have just put a cheap PC there. I use long HDMI and USB cables, but my run isn't too bad (~20 ft to wall outlets). A little tip: you can drill much smaller holes if you use micro/mini HDMI cables, because the plug is so small.
  22. I'd tend to blame Windows, like tr0910 suggested. You can use tools to see if it's using a lot of resources, but you could also install a Linux VM and see if it fares any better. My VMs don't seem to spike power usage much unless they're actually doing something. I also symlinked my VM folder to my cache drive so I can "hibernate" VMs when they're not in use by storing their memory state to disk. By default unRAID uses a RAM drive for this area, so hibernating will still use RAM; it just moves it around in different buckets. It ends up behaving like a suspend-to-RAM sleep state instead of suspend-to-disk.
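     The "hibernate" I mean is libvirt's managed save, which is easy to trigger from a script; "Win10" is a placeholder domain name:

        # Dump guest RAM to libvirt's save directory and power the VM off:
        virsh managedsave Win10
        # With that directory symlinked to the cache drive, the state lands
        # on disk instead of unRAID's RAM disk; "virsh start Win10" resumes it.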
  23. .img is the default (only?) option when creating VMs through unRAID's web manager. You can convert them to qcow2. I would; it's a lot faster than reinstalling Windows. Btrfs snapshots are another level of snapshots, though:
     Btrfs takes a snapshot of the partition the virtual disks reside on.
     qcow2 is a virtual disk format that supports snapshotting, among other things.
     VMs themselves can do snapshots that combine information about the VM with a disk snapshot (like qcow2).
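     The conversion itself is one command; the file names are examples, and the VM should be shut down first:

        # Convert the raw .img unRAID created into qcow2:
        qemu-img convert -p -f raw -O qcow2 vdisk1.img vdisk1.qcow2
        # Then point the VM's XML at the new file and set driver type='qcow2'.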
  24. Your options are the HDMI cable and USB, or possibly the over-Ethernet versions of those, or you could get a Steam Link and use that (it won't be as good as the straight HDMI cable, but some people think it's fine).
  25. I'm bumping this since this is the only thread on this topic I found searching. gnif had a prototype working that did guest-to-guest copy, although I think it had some issues he hadn't worked through. So in theory you could have a Linux VM with a graphics card that can view the output of another Windows VM's graphics card. But I don't think it's there yet. I would definitely like to see this working in unRAID some day. Edit: Actually, it looks further along than I thought. From https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387/701 gnif: