burkasaurusrex

Everything posted by burkasaurusrex

  1. Thanks, I had the exact same problem; adding a boot entry and changing the boot order fixed it!
  2. Just wanted to thank you @VladL for the response ... been trying to find an answer to this for quite some time. After some research I found that I was having a lot of Rx packet drops, which led me to increase the NIC Rx buffer. Just wanted to note that bigger isn't always better - the ideal setting seems to be the smallest buffer at which packets stop dropping (see the sketch below).
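     For reference, a minimal sketch of checking and adjusting the Rx ring with ethtool, assuming the interface is eth0 (substitute your NIC's actual name):

        # show the current and maximum hardware ring sizes
        ethtool -g eth0
        # raise the Rx ring modestly; only go larger if drops persist
        ethtool -G eth0 rx 512
        # watch the drop counters to confirm the change helped
        ethtool -S eth0 | grep -i drop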
  3. fwupd would be amazing in order to upgrade components' firmware without having to use Windows To Go (typical workflow sketched below).
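     A minimal sketch of the usual workflow, assuming this refers to the fwupd/LVFS project:

        # refresh firmware metadata, list devices with pending updates, then apply them
        fwupdmgr refresh
        fwupdmgr get-updates
        fwupdmgr update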
  4. Also having an issue where some containers' update path seems to get corrupted. For example, in the auto update app the repository will be listed as 'library/sha256:...', and on the Docker tab the container says the update is 'not available'.
  5. It looks like certain docker containers aren't being added to the "Docker Auto Update Settings" page, particularly containers I've added directly from Docker Hub through Community Applications. Is there a way to manually add them? Thanks!
  6. Hi, not sure if this is the right spot to post or not. I had been running a Windows 10 VM for a few months without any issue until I tried to upgrade to 6.8.1. After the upgrade, the VM's performance is so slow that it is unusable (moving the mouse is seriously delayed, 2-3 minutes to start Firefox, etc.). I just tried 6.8.3 and had similar issues, so I had to downgrade back to 6.8.0. It looks like libvirt and QEMU were both bumped to newer versions in 6.8.1, so I'm not sure whether something changed in the underlying libraries that means I should adjust my VM XML. Here's the VM in question:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Windows 10</name>
       <uuid>943a0b82-8458-39b1-d5e7-a22e978bb93d</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
         <vcpupin vcpu='1' cpuset='3'/>
         <vcpupin vcpu='2' cpuset='6'/>
         <vcpupin vcpu='3' cpuset='7'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/943a0b82-8458-39b1-d5e7-a22e978bb93d_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='1'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/Domains/Windows 10/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xf'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:bb:2d:2e'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>

     A few things to note about my setup:
     - The domains share is currently on my cache drive (two 1TB NVMe drives). I know it's better to run this on an unassigned drive, but I'm currently out of ports.
     - Passing through an RTX 2060 (haven't had any issues in the past).
     - Passing through a Renesas uPD720202 USB controller (haven't had any issues in the past).

     Would appreciate any help! Thanks, Burke

     htpc-diagnostics-20200331-2122.zip
  7. Awesome, will definitely give this a try. It would be cool to be able to unbind devices and rebind them to vfio-pci without rebooting (see the sketch below). Maybe something like this would help? https://github.com/PassthroughPOST/VFIO-Tools/blob/master/vfioselect/vfioselect
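     For reference, a minimal sketch of the underlying sysfs mechanism for rebinding a single device to vfio-pci at runtime. The PCI address is an example only (use the one from lspci -nn), and a GPU generally can't be detached while its driver is in active use:

        DEV=0000:01:00.0        # example address, substitute your device
        modprobe vfio-pci
        # detach the device from its current driver, if it has one
        echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
        # force vfio-pci for this specific device, then re-probe it
        echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
        echo "$DEV" > /sys/bus/pci/drivers_probe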
  8. I'm having a bit of trouble passing through a GPU, so I wanted to write a libvirt hook script to adjust a few things before guest startup and after guest shutdown. It looks like there are already scripts at both /etc/libvirt/hooks/qemu and /etc/libvirt-/hooks/qemu. So two questions: Which directory is actually used - libvirt or libvirt-? If I edit the qemu script, will the changes persist between boots, or will they be lost like other Unraid changes? Or I'm guessing I could symlink it to somewhere that persists? Separately, the PassthroughPOST/VFIO-Tools repo is pretty helpful. I'll most likely port this qemu script to PHP to easily run new scripts, if anybody else is interested. (Rough shape of a qemu hook sketched below.)
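     For reference, a minimal sketch of what a /etc/libvirt/hooks/qemu script looks like: libvirtd invokes it with the guest name and an operation (prepare, start, started, stopped, release) as arguments and passes the domain XML on stdin. The guest name and the actions in the comments are placeholders:

        #!/bin/bash
        GUEST="$1"        # e.g. "Windows 10"
        OPERATION="$2"    # prepare | start | started | stopped | release

        if [ "$GUEST" = "Windows 10" ]; then
          case "$OPERATION" in
            prepare)
              # runs before the guest starts: stop conflicting services, rebind devices, etc.
              ;;
            release)
              # runs after the guest has fully stopped: undo whatever prepare did
              ;;
          esac
        fi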
  9. Awesome - this worked perfectly for me. Thanks @jedduff! For posterity, open a terminal and run: docker network create container:NAME Then change "Network" to "CUSTOM:container:NAME" in all of the Unraid Docker templates. Probably worth changing the Unraid setting to preserve custom network names as well.
  10. Docker has some functionality where multiple containers can share the same network namespace by using --network='container:<name|id>'. Here's the relevant documentation from the docker run network settings docs:

     --network="bridge" : Connect a container to a network
       'bridge': create a network stack on the default Docker bridge
       'none': no networking
       'container:<name|id>': reuse another container's network stack
       'host': use the Docker host network stack
       '<network-name>|<network-id>': connect to a user-defined network

     I implemented this without a problem in prior versions of Unraid by setting the Docker "Network Type" in the Unraid template to "None", then setting the "Extra Parameters" to "--network='container:NAME'". This would yield a docker run command that had both the "--network='none'" and "--network='container:NAME'" parameters. This worked great until I updated to Unraid 6.8. It looks like the new version of Docker no longer allows you to override the "--network='none'" parameter with the "--network='container:NAME'" parameter and throws this error:

     /usr/bin/docker: Error response from daemon: Container cannot be connected to network endpoints: container:NAME, none. See '/usr/bin/docker run --help'.

     So, I've got two questions to try and sort this out:
     - Is there a way in the Unraid template to not send a "--network" parameter at all? If so, I could just take care of it in the "Extra Parameters". If not, it would be great to add that functionality.
     - Is there a way in the Unraid template to set the network to "container:NAME"? I tried to edit the underlying XML, but it doesn't look like the templating engine is picking up changes. If this is not an option, it would be great to add that functionality.

     I appreciate any help. Thanks! (A standalone docker run sketch of the behaviour I'm after is below.)
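     For reference, a minimal sketch of the namespace-sharing behaviour outside of Unraid's templates; the container and image names here are placeholders, not recommendations:

        # start the container that will own the network stack
        docker run -d --name vpn your/vpn-image
        # attach a second container to vpn's network namespace
        docker run -d --name app --network='container:vpn' your/app-image
        # both containers now share the same interfaces and IP
        # (assuming the image ships the ip utility)
        docker exec app ip addr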