DenizenEvil

Members · 18 posts


  1. Is there a reason the template is locked to 6.23.7 instead of latest? If anyone is trying to run the latest version with this template, it doesn't work by default. You have to update the template by adding another port for 8082.
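For reference, a minimal sketch of what the fixed template's docker command would include. The container name, appdata path, web UI port, and image tag here are assumptions; the point is only the extra 8082 mapping:

```shell
# Hypothetical docker run as generated by an updated template.
# The extra -p 8082:8082 mapping is what the stock template is missing
# when running the latest image.
docker run -d \
  --name='PornVault' \
  -p 3000:3000 \
  -p 8082:8082 \
  -v '/mnt/user/appdata/PornVault':'/config' \
  leadwolf/porn-vault:latest
```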
  2. In that case, I can fork the code and test, but I'd need a copy of the images that are producing errors. Or you can ask the dev for support in the Discord.
  3. The error looks like it's unrelated to Elasticsearch. It looks like PornVault is able to connect to Elasticsearch perfectly fine. It's failing to add some images, but the logging is not very helpful at only "info" level. You can enable debug mode by starting the container with the environment variable PV_LOG_LEVEL set to "debug" or 5. That might give more useful debug messages.
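To make the debug-logging step concrete, the environment variable from above can be set when recreating the container. The container and image names are assumptions; only PV_LOG_LEVEL comes from the post:

```shell
# Recreate the container with verbose logging; "debug" (or the numeric
# level 5) enables the more useful debug messages.
docker rm -f PornVault
docker run -d \
  --name='PornVault' \
  -e PV_LOG_LEVEL=debug \
  leadwolf/porn-vault:latest
```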
  4. You should be seeing something in the logs if there's an error. What does your log file output when starting the container? Is your Elasticsearch server working? Going to the URL in a browser should provide output to verify it's working. Does PornVault come up? Does it crash? What is happening?
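A quick sketch of the browser check mentioned above, done from a shell instead. The IP is a placeholder for your Elasticsearch server; 9200 is the default HTTP port:

```shell
# A healthy Elasticsearch node answers with a JSON banner
# (cluster name, version, tagline).
curl -s http://192.168.1.10:9200

# The cluster health endpoint reports a status of green/yellow/red.
curl -s 'http://192.168.1.10:9200/_cluster/health?pretty'
```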
  5. If something is not working, check the log entries. Labels are correctly being added automatically for me. I did not test actors. Plugin instructions are right on the GitHub: https://github.com/porn-vault/porn-vault/blob/dev/doc/plugins_intro.md
  6. You need to check that your custom br0 network is enabled first. Stop Docker and enable Advanced View in the Settings tab. You should see "IPv4 custom network on interface br0 (optional)". The subnet box should be checked, and you should see your LAN subnet there. Enable the option that says "Host access to custom networks" to enable macvlan. You may also need to ensure bridging is enabled on your NIC in Network Settings and that your connected NICs are members of br0. Then re-enable Docker.

     Note which option you actually need: if you enable macvlan, you do not need to use the custom br0 network. If you use the custom br0 network and want containers on the bridge network to reach containers on br0 (e.g. a reverse proxy serving PornVault), you need to enable macvlan as well. To get PornVault and Elasticsearch talking to each other, either macvlan or br0 needs to be used; for bridge <--> br0 communication, macvlan must be enabled.

     Once Docker is back up and running, if you choose the br0 method, move the Elasticsearch container to the br0 network if it isn't already there, and move PornVault to the br0 network as well. Make sure nothing else on unRAID is listening on ports 9200 and 9300. When you move the containers to the different networks, set static IPs for them. After that, edit the config.json for PornVault and point the search host to http://<ELASTICSEARCHIP>:9200. If you just enable macvlan and don't want to use the br0 network, change the search host to http://elasticsearch:9200 (or whatever the Elasticsearch container name is). Alternatively, you might need to use 127.0.0.1, which should be the default.
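To make the config.json edit concrete, here is a sketch of the "search" block, written to a temp file so the shape can be checked. The IP is an example static br0 address; on unRAID the real file lives under /mnt/user/appdata/PornVault:

```shell
# Write an example "search" block and read the host back out.
# 192.168.69.68 is a placeholder for your Elasticsearch container's static IP.
cat > /tmp/pv-search.json <<'EOF'
{
  "search": {
    "host": "http://192.168.69.68:9200",
    "log": false,
    "version": "7.x",
    "auth": null
  }
}
EOF
grep -o 'http://[0-9.]*:9200' /tmp/pv-search.json
```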
  7. Then you must have a setting wrong with your custom br0 network. Elasticsearch would only fail to start if the parameters sent to Docker are bad. Edit the template and click Save, then check the docker command that's generated by the template and verify that it's good.
  8. Again, this indicates that you didn't read my instructions. The containers cannot communicate because unRAID disables inter-container traffic by default. Either put them on a custom br0 network or enable macvlan. This is the solution that I posted earlier.
  9. You provided no errors, no logs, and no context. My assumption is that you may have Elasticsearch installed, but I bet PornVault can't connect to it.
  10. See my edit on page 1. You need to enable the docker containers to talk to each other with the macvlan option. You may also need to create and use a custom br0 network (maybe not necessary; I did not test).
  11. @C4ArtZ Please see the edit in my prior comment. It shows how to fix the issue with PornVault not being able to connect to an Elasticsearch container. You can revert the change to target the latest PornVault version; reverting to 0.25.0 was only a temporary workaround to figure out the root cause. There was no issue with PornVault itself. It was a communication issue between the containers and the unRAID host due to the macvlan option being disabled by default.

     @boi12321 As far as I know, unRAID unfortunately doesn't officially support docker-compose. The two easier options are either to do what I recommended (enable macvlan and use a custom br0 network) or to bake Elasticsearch into the image used in the unRAID template, the latter of which I believe to be bad practice. I'd go with the macvlan option. If someone wants to use docker-compose, there are apparently some ways to work around it, but it's hacky at best.
  12. Looks like some code changes in version 0.26.0 broke the Elasticsearch integration somehow, at least in all cases I tested. To get this running again: edit the Docker container and change the repository from leadwolf/porn-vault to leadwolf/porn-vault:0.25.0 or leadwolf/porn-vault:0.25.0-alpine (the Alpine build is smaller than the default Debian build and is what I use), then apply and restart the container. If you run into issues, check the logs. It may be complaining about the config.json. If that's the case, go into your /mnt/user/appdata/PornVault directory and delete the config.json and config.merged.json files. Restart the container, and it will regenerate the config without losing your data.

     Edit: I fixed the issue with 0.26.0. In hindsight, I believe allowing the macvlan connections may fix this as well: stop the Docker service, check "Host access to custom networks", and restart the Docker service. I didn't test this because I have it working with the custom br0 network that I typically use for anything that needs its own IP anyway. That option is needed if you have a reverse proxy running on the bridge network that you want to use to serve PornVault. I put both Elasticsearch and PornVault on a custom br0 network and gave each container a static IP on my normal subnet, for example 192.168.69.68 and 192.168.69.69 respectively. I edited the config to point to the Elasticsearch container's new IP, and the issue is resolved. You can remove the tag from the repository and it should start up correctly. You may need to remove and regenerate the config.json. The necessary entry in the JSON is:
       "search": {
         "host": "http://192.168.69.68:9200",
         "log": false,
         "version": "7.x",
         "auth": null
       }
  13. Hi, I posted in another subforum, but I think this is a better place for it, since it deals specifically with PCI passthrough. I have the following equipment:
     • ASUS Z10PE-D16 WS
     • 2x Xeon E5-2620 v3
     • 64GB DDR4-2133 Registered ECC
     • 2x 2TB WD Red
     • 1x 1TB WD Black
     • 2x 960GB SanDisk Ultra II
     • EVGA SuperNOVA 1000W PSU
     Relevant IOMMU groups:
     IOMMU group 43: [10de:1b80] 02:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1); [10de:10f0] 02:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
     IOMMU group 44: [1b73:1100] 03:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
     IOMMU group 47: [144d:a802] 07:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01)
     IOMMU group 52: [12ab:0380] 81:00.0 Non-VGA unclassified device: YUAN High-Tech Development Co., Ltd. Device 0380 (rev ff)
     IOMMU group 53: [10de:1185] 82:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 660 OEM] (rev a1); [10de:0e0a] 82:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
     USB devices:
     Bus 003 Device 002: ID 0781:5151 SanDisk Corp. Cruzer Micro Flash Drive
     Bus 003 Device 003: ID b58e:9e84 Blue Microphones Yeti Stereo Microphone
     Bus 003 Device 004: ID 046d:c539 Logitech, Inc.
     Bus 003 Device 005: ID 04d9:1818 Holtek Semiconductor, Inc.
     I currently have the following guests set up:
     • Windows 10 on SeaBIOS and i440fx using virtio-win-0.1.126-2
     • Windows 10 on OVMF and Q35-2.7
     • Arch Linux on OVMF and Q35-2.7
     I created two Windows 10 guests to see whether SeaBIOS and/or i440fx was causing the issue. I only run one at a time.
     I am passing through the following to the Windows 10 guests:
     • 07:00.0 - NVMe drive
     • 82:00.0 - GTX 760
     • 82:00.1 - GTX 760 HDMI audio
     • 81:00.0 - Elgato HD60 Pro
     • Bus 3, Device 4 - Logitech G900 Wireless
     • Bus 3, Device 5 - Filco Majestouch 2 TKL
     And the following to the Arch Linux machine:
     • 02:00.0 - GTX 1080
     • 02:00.1 - GTX 1080 HDMI audio
     • 03:00.0 - PCI-E to 7-port USB 3.0 card, with: Input Club ErgoDox Infinity, Logitech G900 (wired), Microsoft LifeCam HD, Belkin Easy Transfer Cable
     Normally, the USB card and the 1080 are on the Windows VM for gaming and the 760 is on the Arch VM. I normally do not have the Filco keyboard plugged in, and I use Synergy to share my mouse and keyboard. I swapped them temporarily because I am working on the Arch VM right now. I have attached the XML if you would like to see how the machines are set up right now.
     In the Windows 10 VM, I am passing through the NVMe drive and the Elgato. The NVMe drive is showing up properly (right now), but it tends to run into the same issue that the Elgato does: the Elgato gets stuck in D3 power mode and fails the function-level reset. I also never get the video capture device showing up in the VM (not even on a cold boot). The *sound* capture device does show up, though. My syslinux.cfg simply uses vfio-pci.ids on the NVMe drive. I also tried disabling D3 in the syslinux.cfg, which changed nothing. The logs from booting the Windows 10 VM are attached. I saw someone else fixed this issue by recompiling the kernel, but that was with version 4.4.5, which was 32-bit. The Wiki explicitly states that the recompilation for 6.0.0 and up is different, but has not been updated with further instructions. How can I make the two devices pass through properly to the Windows VM? ArchBox.xml Windows_10_OVMF.xml unRAIDLog VMLog
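For anyone comparing setups, the vfio-pci stub and D3 workaround referenced above would look roughly like this in syslinux.cfg. This is a hedged sketch, not my exact file: the device IDs are the NVMe controller and the Elgato taken from the lspci listing, and vfio-pci.disable_idle_d3=1 is the kernel module option for disabling D3 idle:

```shell
# /boot/syslinux/syslinux.cfg (fragment, not a runnable script).
# IDs from lspci: 144d:a802 = Samsung SM951/PM951 NVMe,
#                 12ab:0380 = YUAN device (Elgato HD60 Pro).
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=144d:a802,12ab:0380 vfio-pci.disable_idle_d3=1 initrd=/bzroot
```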
  14. Okay, I've created another Windows 10 guest on OVMF and Q35 to test. Here is what I have gathered... The NVME drive *only* works on first boot. As soon as the guest is shut down, unRAID can NOT reset the PCI device. Hence the device being stuck in D3 and failing to return from the Function Level Reset (FLR). Thus, subsequent startups of the virtual machine lead to the device not being passed through, so it isn't seen on the guest. Now, the issue is slightly different for the Elgato HD60 Pro capture card. I am able to get the Sound Capture device through, but *not* the Game Capture device. Here are the XML and Logs: <domain type='kvm' id='1'> <name>Windows 10 (OVMF)</name> <uuid>b0f9fe72-5e20-ec8f-3125-809445a65ff0</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>50331648</memory> <currentMemory unit='KiB'>50331648</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>20</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> <vcpupin vcpu='6' cpuset='6'/> <vcpupin vcpu='7' cpuset='7'/> <vcpupin vcpu='8' cpuset='8'/> <vcpupin vcpu='9' cpuset='9'/> <vcpupin vcpu='10' cpuset='12'/> <vcpupin vcpu='11' cpuset='13'/> <vcpupin vcpu='12' cpuset='14'/> <vcpupin vcpu='13' cpuset='15'/> <vcpupin vcpu='14' cpuset='16'/> <vcpupin vcpu='15' cpuset='17'/> <vcpupin vcpu='16' cpuset='18'/> <vcpupin vcpu='17' cpuset='19'/> <vcpupin vcpu='18' cpuset='20'/> <vcpupin vcpu='19' cpuset='21'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-2.7'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/b0f9fe72-5e20-ec8f-3125-809445a65ff0_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> 
<apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough'> <topology sockets='1' cores='10' threads='2'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 10 (OVMF)/vdisk1.img'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/en_windows_10_msdn.iso'/> <backingStore/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <alias name='sata0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.126-2.iso'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='nec-xhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='dmi-to-pci-bridge'> <model name='i82801b11-bridge'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/> 
</controller> <controller type='pci' index='2' model='pci-bridge'> <model name='pci-bridge'/> <target chassisNr='2'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:04:fb:64'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Windows 10 (OVMF)/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> 
</source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </source> <alias name='hostdev3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/> </source> <alias name='hostdev4'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0xb58e'/> <product id='0x9e84'/> <address bus='3' device='3'/> </source> <alias name='hostdev5'/> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x09' function='0x0'/> </memballoon> </devices> <seclabel type='none' model='none'/> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> unRAID Logs: May 23 05:06:10 seventhcircle ntpd[2745]: Listen normally on 4 docker0 172.17.0.1:123 May 23 05:06:10 seventhcircle ntpd[2745]: new interface(s) found: waking up resolver May 23 05:07:56 seventhcircle root: Updating templates... Updating info... Done. 
May 23 05:07:56 seventhcircle emhttp: shcmd (60): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1 |& logger May 23 05:07:56 seventhcircle kernel: BTRFS: device fsid 8c015d41-10f0-450b-91a6-a972ee53a3cd devid 1 transid 215 /dev/loop1 May 23 05:07:56 seventhcircle kernel: BTRFS info (device loop1): disk space caching is enabled May 23 05:07:56 seventhcircle kernel: BTRFS info (device loop1): has skinny extents May 23 05:07:56 seventhcircle root: Resize '/etc/libvirt' of 'max' May 23 05:07:56 seventhcircle kernel: BTRFS info (device loop1): new size for /dev/loop1 is 1073741824 May 23 05:07:56 seventhcircle emhttp: shcmd (64): /etc/rc.d/rc.libvirt start |& logger May 23 05:07:56 seventhcircle root: Starting virtlockd... May 23 05:07:56 seventhcircle root: Starting virtlogd... May 23 05:07:56 seventhcircle root: Starting libvirtd... May 23 05:07:56 seventhcircle kernel: tun: Universal TUN/TAP device driver, 1.6 May 23 05:07:56 seventhcircle kernel: tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com> May 23 05:07:56 seventhcircle emhttp: nothing to sync May 23 05:07:56 seventhcircle kernel: pcieport 0000:80:03.0: AER: Corrected error received: id=8018 May 23 05:07:56 seventhcircle kernel: pcieport 0000:80:03.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=8018(Transmitter ID) May 23 05:07:56 seventhcircle kernel: pcieport 0000:80:03.0: device [8086:2f08] error status/mask=00001000/00002000 May 23 05:07:56 seventhcircle kernel: pcieport 0000:80:03.0: [12] Replay Timer Timeout May 23 05:07:56 seventhcircle kernel: Ebtables v2.0 registered May 23 05:07:57 seventhcircle kernel: virbr0: port 1(virbr0-nic) entered blocking state May 23 05:07:57 seventhcircle kernel: virbr0: port 1(virbr0-nic) entered disabled state May 23 05:07:57 seventhcircle kernel: device virbr0-nic entered promiscuous mode May 23 05:07:57 seventhcircle avahi-daemon[4564]: Joining mDNS multicast group on interface virbr0.IPv4 with 
address 192.168.122.1. May 23 05:07:57 seventhcircle avahi-daemon[4564]: New relevant interface virbr0.IPv4 for mDNS. May 23 05:07:57 seventhcircle avahi-daemon[4564]: Registering new address record for 192.168.122.1 on virbr0.IPv4. May 23 05:07:57 seventhcircle kernel: virbr0: port 1(virbr0-nic) entered blocking state May 23 05:07:57 seventhcircle kernel: virbr0: port 1(virbr0-nic) entered listening state May 23 05:07:57 seventhcircle dnsmasq[9520]: started, version 2.76 cachesize 150 May 23 05:07:57 seventhcircle dnsmasq[9520]: compile time options: IPv6 GNU-getopt no-DBus i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify May 23 05:07:57 seventhcircle dnsmasq-dhcp[9520]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h May 23 05:07:57 seventhcircle dnsmasq-dhcp[9520]: DHCP, sockets bound exclusively to interface virbr0 May 23 05:07:57 seventhcircle dnsmasq[9520]: reading /etc/resolv.conf May 23 05:07:57 seventhcircle dnsmasq[9520]: using nameserver 192.168.2.1#53 May 23 05:07:57 seventhcircle dnsmasq[9520]: read /etc/hosts - 2 addresses May 23 05:07:57 seventhcircle dnsmasq[9520]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses May 23 05:07:57 seventhcircle dnsmasq-dhcp[9520]: read /var/lib/libvirt/dnsmasq/default.hostsfile May 23 05:07:57 seventhcircle kernel: virbr0: port 1(virbr0-nic) entered disabled state May 23 05:09:37 seventhcircle kernel: vgaarb: device changed decodes: PCI:0000:02:00.0,olddecodes=io+mem,decodes=io+mem:owns=none May 23 05:09:37 seventhcircle kernel: br0: port 2(vnet0) entered blocking state May 23 05:09:37 seventhcircle kernel: br0: port 2(vnet0) entered disabled state May 23 05:09:37 seventhcircle kernel: device vnet0 entered promiscuous mode May 23 05:09:37 seventhcircle kernel: br0: port 2(vnet0) entered blocking state May 23 05:09:37 seventhcircle kernel: br0: port 2(vnet0) entered forwarding state May 23 05:09:56 seventhcircle kernel: vfio-pci 0000:02:00.0: enabling 
device (0100 -> 0103) May 23 05:09:56 seventhcircle kernel: vfio_ecap_init: 0000:02:00.0 hiding ecap 0x19@0x900 May 23 05:09:56 seventhcircle kernel: pmd_set_huge: Cannot satisfy [mem 0xb0000000-0xb0200000] with a huge-page mapping due to MTRR override. May 23 05:09:56 seventhcircle kernel: vfio_ecap_init: 0000:07:00.0 hiding ecap 0x19@0x168 May 23 05:09:56 seventhcircle kernel: vfio_ecap_init: 0000:07:00.0 hiding ecap 0x1e@0x190 May 23 05:09:56 seventhcircle acpid: input device has been disconnected, fd 6 May 23 05:10:03 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:03 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:10 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:10 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:10 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:10 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:15 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:16 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd May 23 05:10:16 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd May 23 05:10:21 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:21 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:21 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:22 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:22 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:22 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset 
recovery - restoring bars May 23 05:10:23 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:23 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:23 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:24 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:24 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:24 seventhcircle kernel: vfio_bar_restore: 0000:81:00.0 reset recovery - restoring bars May 23 05:10:24 seventhcircle kernel: kvm: zapping shadow pages for mmio generation wraparound May 23 05:10:24 seventhcircle kernel: kvm: zapping shadow pages for mmio generation wraparound May 23 05:10:42 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd May 23 05:10:42 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd May 23 05:10:43 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd May 23 05:10:43 seventhcircle kernel: usb 3-3: reset full-speed USB device number 3 using xhci_hcd Windows 10 Log: 2017-05-23 10:09:38.063+0000: starting up libvirt version: 2.4.0, qemu version: 2.7.1, hostname: seventhcircle LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'guest=Windows 10 (OVMF),debug-threads=on' -S -object 'secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Windows 10 (OVMF)/master-key.aes' -machine pc-q35-2.7,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/b0f9fe72-5e20-ec8f-3125-809445a65ff0_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 
49152 -realtime mlock=off -smp 20,sockets=1,cores=10,threads=2 -uuid b0f9fe72-5e20-ec8f-3125-809445a65ff0 -display none -no-user-config -nodefaults -chardev 'socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-Windows 10 (OVMF)/monitor.sock,server,nowait' -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot s=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-Windows 10 (OVMF)/org.qemu.guest_agent.0,server,nowait' -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.2,addr=0x4 -device vfio-pci,host=02:00.1,id=hostdev1,bus=pci.2,addr=0x5 -device vfio-pci,host=03:00.0,id=hostdev2,bus=pci.2,addr=0x6 -device vfio-pci,host=07:00.0,id=hostdev3,bus=pci.2,addr=0x7 -device vfio-pci,host=81:00.0,id=hostdev4,bus=pci.2,addr=0x8 -device usb-host,hostbus=3,hostaddr=3,id=hostdev5,bus=usb.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x9 -msg timestamp=on Domain id=1 is tainted: high-privileges Domain id=1 is tainted: host-cpu char device redirected to /dev/pts/0 (label charserial0) 2017-05-23T10:09:56.660704Z qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev2,bus=pci.2,addr=0x6: Failed to mmap 0000:03:00.0 BAR 2. Performance may be slow Also note that the PCI USB device I passed through (0000:03:00.0) is logging a warning of being unable to mmap. Not sure what's causing that. I also get at times the guest running *extremely slowly*. Like, the mouse will skip around, the keyboard works slowly, graphics will be slow, dragging windows is laggy, even audio playback is extremely slow and choppy. It's happening right now, and it persists until reboot as far as I can tell. All of my devices except one are connected to the PCI USB device. Despite the mmap error, sometimes the guest runs just fine, but other times, it's unusably laggy and choppy.
  15. So, I checked my logs, and I'm getting a lot of the following error: May 22 20:37:32 seventhcircle kernel: vfio-pci 0000:81:00.0: Refused to change power state, currently in D3 May 22 20:37:32 seventhcircle kernel: vfio-pci 0000:81:00.0: timed out waiting for pending transaction; performing function level reset anyway May 22 20:37:33 seventhcircle kernel: vfio-pci 0000:81:00.0: Failed to return from FLR I do also get the same error for the NVME drive. Currently, it is not passing through properly to my guest: May 22 20:37:34 seventhcircle kernel: vfio-pci 0000:07:00.0: Refused to change power state, currently in D3 May 22 20:37:34 seventhcircle kernel: vfio-pci 0000:07:00.0: timed out waiting for pending transaction; performing function level reset anyway May 22 20:37:35 seventhcircle kernel: vfio-pci 0000:07:00.0: Failed to return from FLR When I get that error, obviously, the NVME drive does not work in my guest.