gadget069

Members
  • Posts: 89
  • Joined
  • Last visited

Everything posted by gadget069

  1. Being a newb, I have the VM installed on the array. I have since added a VM pool that I want to move my VM to, hopefully to improve performance. What is the process for doing that? Thanks
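The move itself can be sketched like this. This is a minimal simulation using temp directories so it is safe to run anywhere; on a real server the paths would live under /mnt/ and the pool name ("vm-pool" in the comments) is an assumption — adjust to your system. The usual order is: stop the VM, copy the vdisk to the new pool, repoint the VM at the new location, and only then delete the original.

```shell
# Simulated move of a VM vdisk from the array to a faster pool.
# Real Unraid paths would be e.g. /mnt/disk1/domains and /mnt/vm-pool/domains
# (hypothetical pool name); temp dirs stand in so nothing real is touched.
ARRAY=$(mktemp -d)   # stands in for the array's domains location
POOL=$(mktemp -d)    # stands in for the new VM pool
mkdir -p "$ARRAY/Windows 10"

# Create a stand-in vdisk (sparse, like Unraid's raw images)
truncate -s 1M "$ARRAY/Windows 10/vdisk1.img"

# 1. (With the VM stopped) copy the whole VM directory to the new pool
cp -a "$ARRAY/Windows 10" "$POOL/"

# 2. Verify the copy before touching the original
cmp -s "$ARRAY/Windows 10/vdisk1.img" "$POOL/Windows 10/vdisk1.img" && echo "copy OK"

# 3. Remove the original only after the VM boots from the new path
rm -rf "$ARRAY/Windows 10"
ls "$POOL/Windows 10"
```

On the real system the repointing step means editing the `<source file='…'/>` path in the VM's XML, or changing the domains share to use the new pool so the `/mnt/user/...` path resolves there.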
  2. Turns out putting this in for Plex was a workaround: --no-healthcheck
  3. Thanks. I'll see what I can find.
  4. Well, it did it again sometime last night. Diagnostics attached. I also have this popping up. tortuga-diagnostics-20230614-0656.zip
  5. I did a reboot, so I don't think the diagnostics will show anything. I have them attached. tortuga-diagnostics-20230612-0531.zip Sent from my SM-S918U1 using Tapatalk
  6. I haven't changed anything recently on the server. Logged in today and saw the CPU pegged. tortuga-syslog-20230612-0125.zip
  7. Thanks. What about moving the drive from a hot-swap bay? Is Unraid going to freak out if I move it to a hard drive caddy instead?
  8. I just recently added a second cache drive for some redundancy. I noticed my primary cache drive has some CRC errors. They are not increasing, but this was not a new drive when I built the server. Since SSDs are so cheap, I want to replace this drive with a new one; I guess I need to know the process for doing this. First off, it's in a hot-swap bay, and I want to move it out of there and onto an expansion card to free the bay up for another spinning drive. Will this cause any issues? I'm assuming to replace the cache drive: stop the array, start the mover, once done shut down the server, replace the drive, and select the new drive in slot 1 of the cache pool. Thanks
  9. I'm getting this error all of a sudden.
  10. The errors have not increased since the initial install. I do have different cables to try; I had picked up some CableCreation SATA cables from Amazon, and have had no issues with the other 4 I've used in this system. I still think I may replace this drive just in case. Nothing special to do when replacing this drive? It is not the primary cache drive. Thanks
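For reference, the CRC count lives in SMART attribute 199 (UDMA_CRC_Error_Count); its raw value never resets, so a stable number usually means an old cabling event rather than an ongoing problem. A sketch of reading it, with sample `smartctl -A` output embedded so it runs standalone (on a live system you would pipe `smartctl -A /dev/sdX` instead — sdX and the raw value 7 here are invented examples):

```shell
# Parse the UDMA CRC error count out of smartctl -A style output.
# One sample attribute line embedded; on Unraid you would instead run:
#   smartctl -A /dev/sdX | awk '$2 == "UDMA_CRC_Error_Count" { print $NF }'
sample='199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       7'
crc=$(printf '%s\n' "$sample" | awk '$2 == "UDMA_CRC_Error_Count" { print $NF }')
echo "CRC errors: $crc"
```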
  11. Doing some poking around, it seems that adding a second cache drive does not give you parity but a RAID 1 config? I was looking to have some redundancy for the cache drive; the primary cache drive was not new (Samsung EVO 860 SSD). The second cache drive I added (EVO 850) also kicked out a CRC error count. I'm assuming I don't want to keep using this drive since it already has issues. Is there a particular process to remove the bad(?) cache drive I added? I have a new one coming today. Thanks
  12. I'm not running Blue Iris yet. I wanted to do some testing with the 30-day trial before transferring my license over. Just running the VM stock, it will lock up and I have to force stop the VM. I'm trying to get the VM to be reliable first. Not sure if I mentioned, I'm using a 5600G for the processor. Sent from my SM-S908U using Tapatalk
  13. I've allocated 16GB, and the footage is saved to a network share. The problem I'm having, just in testing, is the CPU pegging at 100% and freezing the VM. I've seen multiple threads with this issue. I'm starting to wonder if I'm asking too much of my CPU. Sent from my SM-S908U using Tapatalk
  14. I did an experiment and uninstalled one of the dockers, then re-installed it. That worked, so I did the same with the rest of the dockers. I only had issues with Webmail Lite not connecting to the database; that was resolved by entering the info again. Thanks!
  15. My bad. I was sleep deprived and meant to make the drive 3TB. Thanks
  16. I'm getting inconsistent audio passthrough. I had it working yesterday (and the day before too), and the VM seems stable (Windows 10). Then my VM froze (CPU pegged); after a hard reboot the audio is not working. 5600G and B450 Tomahawk Max mobo. Also getting an error: 2023-05-20T16:36:46.336369Z qemu-system-x86_64: vfio: Cannot reset device 0000:30:00.6, depends on group 11 which is not owned.
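That vfio message usually means some other device in IOMMU group 11 is not bound to vfio-pci, so the group as a whole isn't "owned" and the device can't be reset. The group's members can be listed under /sys/kernel/iommu_groups. The sketch below builds a fake tree so it runs anywhere; the companion address 0000:30:00.1 is a made-up example, and on a real host you would just `ls /sys/kernel/iommu_groups/11/devices`:

```shell
# List devices sharing an IOMMU group. On a real host the tree is
# /sys/kernel/iommu_groups/<n>/devices/; a fake tree is built here so
# the sketch is runnable without passthrough hardware.
ROOT=$(mktemp -d)                       # stands in for /sys/kernel/iommu_groups
mkdir -p "$ROOT/11/devices"
touch "$ROOT/11/devices/0000:30:00.1"   # hypothetical companion function
touch "$ROOT/11/devices/0000:30:00.6"   # the device from the error message

# Every device listed for the group must be bound to vfio-pci (or detached
# from its host driver) before 0000:30:00.6 can be reset/passed through.
ls "$ROOT/11/devices"
```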
  17. After doing a VM backup, I'm getting this error now. I compared this XML to one I had saved previously and they both match. I have lost audio on my VM. **Edit:** after shutting down the Unraid server and rebooting, it seems to have sorted itself out.
  18. After downgrading I'm having issues with some of the dockers not starting, kicking out an "execution/server error". Server log:

May 17 19:01:58 Tortuga kernel: docker0: port 3(veth45434d6) entered disabled state
May 17 19:01:58 Tortuga kernel: device veth45434d6 left promiscuous mode
May 17 19:01:58 Tortuga kernel: docker0: port 3(veth45434d6) entered disabled state
May 17 19:01:58 Tortuga rc.docker: MariaDB-Official: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:01:58 Tortuga rc.docker: Error: failed to start containers: MariaDB-Official
May 17 19:01:58 Tortuga kernel: docker0: port 3(vethc0d698c) entered blocking state
May 17 19:01:58 Tortuga kernel: docker0: port 3(vethc0d698c) entered disabled state
May 17 19:01:58 Tortuga kernel: device vethc0d698c entered promiscuous mode
May 17 19:01:58 Tortuga kernel: docker0: port 3(vethc0d698c) entered blocking state
May 17 19:01:58 Tortuga kernel: docker0: port 3(vethc0d698c) entered forwarding state
May 17 19:01:59 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:01:59 Tortuga kernel: docker0: port 3(vethc0d698c) entered disabled state
May 17 19:01:59 Tortuga kernel: device vethc0d698c left promiscuous mode
May 17 19:01:59 Tortuga kernel: docker0: port 3(vethc0d698c) entered disabled state
May 17 19:01:59 Tortuga rc.docker: nginx: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:01:59 Tortuga rc.docker: Error: failed to start containers: nginx
May 17 19:01:59 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:01:59 Tortuga rc.docker: binhex-readarr: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:01:59 Tortuga rc.docker: Error: failed to start containers: binhex-readarr
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered blocking state
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered disabled state
May 17 19:01:59 Tortuga kernel: device veth29bda9c entered promiscuous mode
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered blocking state
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered forwarding state
May 17 19:01:59 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered disabled state
May 17 19:01:59 Tortuga kernel: device veth29bda9c left promiscuous mode
May 17 19:01:59 Tortuga kernel: docker0: port 3(veth29bda9c) entered disabled state
May 17 19:01:59 Tortuga rc.docker: Webmail-Lite-PHP: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:01:59 Tortuga rc.docker: Error: failed to start containers: Webmail-Lite-PHP
May 17 19:01:59 Tortuga rc.docker: binhex-sabnzbd: started succesfully!
May 17 19:01:59 Tortuga rc.docker: binhex-sabnzbd: wait 30 seconds
May 17 19:02:30 Tortuga rc.docker: binhex-radarr: started succesfully!
May 17 19:02:30 Tortuga rc.docker: binhex-radarr: wait 35 seconds
May 17 19:02:46 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:05 Tortuga rc.docker: binhex-sonarr: started succesfully!
May 17 19:03:05 Tortuga rc.docker: binhex-sonarr: wait 35 seconds
May 17 19:03:14 Tortuga kernel: docker0: port 3(vethb20e4f4) entered blocking state
May 17 19:03:14 Tortuga kernel: docker0: port 3(vethb20e4f4) entered disabled state
May 17 19:03:14 Tortuga kernel: device vethb20e4f4 entered promiscuous mode
May 17 19:03:14 Tortuga kernel: eth0: renamed from veth07d5e74
May 17 19:03:14 Tortuga kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb20e4f4: link becomes ready
May 17 19:03:14 Tortuga kernel: docker0: port 3(vethb20e4f4) entered blocking state
May 17 19:03:14 Tortuga kernel: docker0: port 3(vethb20e4f4) entered forwarding state
May 17 19:03:16 Tortuga avahi-daemon[3600]: Joining mDNS multicast group on interface vethb20e4f4.IPv6 with address fe80::c4a1:70ff:fec3:401b.
May 17 19:03:16 Tortuga avahi-daemon[3600]: New relevant interface vethb20e4f4.IPv6 for mDNS.
May 17 19:03:16 Tortuga avahi-daemon[3600]: Registering new address record for fe80::c4a1:70ff:fec3:401b on vethb20e4f4.*.
May 17 19:03:18 Tortuga kernel: docker0: port 4(vethca31bbe) entered blocking state
May 17 19:03:18 Tortuga kernel: docker0: port 4(vethca31bbe) entered disabled state
May 17 19:03:18 Tortuga kernel: device vethca31bbe entered promiscuous mode
May 17 19:03:18 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:18 Tortuga kernel: docker0: port 4(vethca31bbe) entered disabled state
May 17 19:03:18 Tortuga kernel: device vethca31bbe left promiscuous mode
May 17 19:03:18 Tortuga kernel: docker0: port 4(vethca31bbe) entered disabled state
May 17 19:03:40 Tortuga rc.docker: binhex-urbackup: started succesfully!
May 17 19:03:40 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:40 Tortuga rc.docker: chromium: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:03:40 Tortuga rc.docker: Error: failed to start containers: chromium
May 17 19:03:40 Tortuga kernel: docker0: port 4(veth6fa3aae) entered blocking state
May 17 19:03:40 Tortuga kernel: docker0: port 4(veth6fa3aae) entered disabled state
May 17 19:03:40 Tortuga kernel: device veth6fa3aae entered promiscuous mode
May 17 19:03:40 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:40 Tortuga kernel: docker0: port 4(veth6fa3aae) entered disabled state
May 17 19:03:40 Tortuga kernel: device veth6fa3aae left promiscuous mode
May 17 19:03:40 Tortuga kernel: docker0: port 4(veth6fa3aae) entered disabled state
May 17 19:03:40 Tortuga rc.docker: duckdns: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:03:40 Tortuga rc.docker: Error: failed to start containers: duckdns
May 17 19:03:40 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:41 Tortuga rc.docker: Plex-Media-Server: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "cgroup" to rootfs at "/sys/fs/cgroup": mount cgroup:/sys/fs/cgroup/elogind (via /proc/self/fd/6), flags: 0xf, data: elogind: invalid argument: unknown
May 17 19:03:41 Tortuga rc.docker: Error: failed to start containers: Plex-Media-Server
May 17 19:03:54 Tortuga kernel: docker0: port 4(veth222516d) entered blocking state
May 17 19:03:54 Tortuga kernel: docker0: port 4(veth222516d) entered disabled state
May 17 19:03:54 Tortuga kernel: device veth222516d entered promiscuous mode
May 17 19:03:54 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:54 Tortuga kernel: docker0: port 4(veth222516d) entered disabled state
May 17 19:03:54 Tortuga kernel: device veth222516d left promiscuous mode
May 17 19:03:54 Tortuga kernel: docker0: port 4(veth222516d) entered disabled state
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered blocking state
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered disabled state
May 17 19:03:59 Tortuga kernel: device veth762bde2 entered promiscuous mode
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered blocking state
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered forwarding state
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered disabled state
May 17 19:03:59 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered disabled state
May 17 19:03:59 Tortuga kernel: device veth762bde2 left promiscuous mode
May 17 19:03:59 Tortuga kernel: docker0: port 4(veth762bde2) entered disabled state
May 17 19:04:08 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:04:20 Tortuga kernel: cgroup: Unknown subsys name 'elogind'
May 17 19:05:15 Tortuga kernel: cgroup: Unknown subsys name 'elogind'

tortuga-diagnostics-20230517-1914.zip
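For a log wall like the one above, the useful signal is which containers failed; the names can be pulled out mechanically. A sketch, with a few sample rc.docker lines embedded so it runs standalone (on the server you would feed it /var/log/syslog instead):

```shell
# Pull the names of containers that failed to start out of rc.docker log lines.
# Three sample lines embedded; on Unraid you would grep /var/log/syslog.
log='May 17 19:01:58 Tortuga rc.docker: Error: failed to start containers: MariaDB-Official
May 17 19:01:59 Tortuga rc.docker: Error: failed to start containers: nginx
May 17 19:01:59 Tortuga rc.docker: binhex-sabnzbd: started succesfully!'

# Keep only the "failed to start" lines and strip everything but the name
failed=$(printf '%s\n' "$log" | sed -n 's/.*failed to start containers: //p')
echo "$failed"
```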
  19. For whatever reason I cannot remove the previous install of Windows 10. It now shows as Boot drive (x).
  20. I have allotted 30GB to my VM. I plan on running Blue Iris on it eventually, but after installing Windows 10 Pro, I'm only getting 6GB free out of 30. What am I missing? Still new at Unraid and VMs. Thanks
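Two different sizes are in play here: the free space Windows reports inside the guest (a fresh Windows 10 Pro install plus its page and hibernation files can consume 20GB or more of a 30GB disk), and the space the raw vdisk occupies on the host, which is sparse and grows only as blocks are written. A sketch of the sparse behaviour, assuming GNU coreutils (`truncate`, `stat`, `du`); the throwaway file stands in for vdisk1.img:

```shell
# Unraid's raw vdisks are sparse: apparent size != space actually allocated.
# Demonstrate with a throwaway file (stands in for vdisk1.img).
img=$(mktemp)
truncate -s 1G "$img"               # apparent size: 1 GiB, zero blocks written

apparent=$(stat -c %s "$img")       # bytes the guest would see as disk size
actual=$(du -k "$img" | cut -f1)    # KiB actually allocated on the host
echo "apparent=$apparent bytes, allocated=${actual}K"
```

So the 30GB figure is what the guest sees as total disk size, while "free" inside Windows depends on what Windows itself has written.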
  21. Installed and working. Thanks. Now I need to figure out sharing the VM on the network, if that's possible.
  22. Finally got the ethernet card to show up:

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>79f82b6a-f4a7-55de-11a7-5cbf638f5729</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='9'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/79f82b6a-f4a7-55de-11a7-5cbf638f5729_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <serial>vdisk1</serial>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.225-2.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:4b:ea:1b'/>
      <source bridge='virbr0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
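Since the later posts involve switching the NIC model to e1000, a quick way to confirm which model a domain XML actually carries is to filter for the `<model type=…/>` element inside `<interface>`. The fragment is embedded here so the sketch runs standalone; against the live VM the same filter could be applied to the output of `virsh dumpxml "Windows 10"`:

```shell
# Check which NIC model a libvirt domain XML defines.
# Interface fragment embedded; on the server you could instead run:
#   virsh dumpxml "Windows 10" | grep -A3 '<interface'
xml="<interface type='bridge'>
  <mac address='52:54:00:4b:ea:1b'/>
  <source bridge='virbr0'/>
  <model type='virtio-net'/>
</interface>"
model=$(printf '%s\n' "$xml" | sed -n "s/.*<model type='\([^']*\)'.*/\1/p")
echo "NIC model: $model"
```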
  23. I changed the network model to e1000 when setting up the VM again, and upgraded to v6.12-rc5. The libvirt log is now kicking out an error after the update: 2023-05-09 16:16:34.061+0000: 5373: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
  24. Changing it to e1000 did not work either. Hopefully this is the correct XML; it took some internet searching to find out where it is:

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>86454787-fc2a-023f-65a5-9bf98fed418e</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='9'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/86454787-fc2a-023f-65a5-9bf98fed418e_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.229-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  25. I've tried and I keep running into this