Kronos69

Members
  • Posts: 28
  • Joined
  • Last visited
  1. Hi, I've got 3 Windows VMs at work that need to be fully encrypted (because of GDPR regulations). I set up OS encryption inside the VMs with VeraCrypt. Since the latest W10 update (1903) I get a blue screen at VM startup, because the firmware tries to load the Windows bootloader first, which is obviously encrypted. Changing the boot order inside the UEFI fixes it, but only for that single boot. The 3 VMs differ from each other (some are bare metal), so I don't think one identical UEFI setting can work for all 3 of them either. How can I stop the UEFI from resetting to its default settings every time? Is it because it's set as read-only? Would changing it to readonly='no' break everything? (See my note after the XML below.) Any suggestion appreciated. Thanks.

VM1, 2 and 3:
<?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='2'> <name>RnzOWS1</name> <uuid>30710cf9-a806-7a7b-4a23-5be7b46426c7</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>30932992</memory> <currentMemory unit='KiB'>30932992</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>12</vcpu> <cputune> <vcpupin vcpu='0' cpuset='12'/> <vcpupin vcpu='1' cpuset='13'/> <vcpupin vcpu='2' cpuset='14'/> <vcpupin vcpu='3' cpuset='15'/> <vcpupin vcpu='4' cpuset='16'/> <vcpupin vcpu='5' cpuset='17'/> <vcpupin vcpu='6' cpuset='18'/> <vcpupin vcpu='7' cpuset='19'/> <vcpupin vcpu='8' cpuset='20'/> <vcpupin vcpu='9' cpuset='21'/> <vcpupin vcpu='10' cpuset='22'/> <vcpupin vcpu='11' cpuset='23'/> </cputune> <numatune> <memory mode='preferred' nodeset='1'/> </numatune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-3.0'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/30710cf9-a806-7a7b-4a23-5be7b46426c7_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>EPYC-IBPB</model> <topology sockets='1' cores='6' threads='2'/> <feature policy='require' name='topoext'/> <feature policy='disable' name='monitor'/> <feature policy='require' name='x2apic'/> <feature policy='require' name='hypervisor'/> <feature policy='disable' name='svm'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <source dev='/dev/disk/by-id/ata-CT120BX300SSD1_1743E10578EC'/> <backingStore/> <target dev='hdc' bus='scsi'/> <boot order='1'/> <alias name='scsi0-0-0-2'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/Synology Restore Media OWS1.iso'/> <backingStore/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <alias name='sata0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/>
<alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xa'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0xd'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:f7:dc:4b'/> <source bridge='br0'/> <target dev='vnet1'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/1'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/1'> <source path='/dev/pts/1'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-RnzOWS1/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' 
bus='0x43' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <rom file='/mnt/disk1/drivers/vBIOS/GP104.rom'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x43' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/> </source> <alias name='hostdev3'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='3'> <name>RnzOWS2</name> <uuid>69d4067f-4b70-c8dd-c1e0-5bcd9fdc2e8f</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>22544384</memory> <currentMemory unit='KiB'>22544384</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>6</vcpu> <cputune> <vcpupin vcpu='0' cpuset='6'/> <vcpupin vcpu='1' cpuset='7'/> <vcpupin vcpu='2' cpuset='8'/> <vcpupin vcpu='3' cpuset='9'/> <vcpupin vcpu='4' cpuset='10'/> <vcpupin vcpu='5' cpuset='11'/> </cputune> <numatune> <memory mode='preferred' nodeset='0'/> </numatune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-3.0'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/69d4067f-4b70-c8dd-c1e0-5bcd9fdc2e8f_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='6' threads='1'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <source dev='/dev/disk/by-id/ata-CT120BX300SSD1_1743E1056CCE'/> <backingStore/> <target dev='hdc' bus='scsi'/> <boot order='1'/> <alias name='scsi0-0-0-2'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/Windows.iso'/> <backingStore/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <alias name='sata0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' 
controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xa'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0xd'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0xf'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x10'/> <alias name='pci.9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:ed:48:56'/> <source bridge='br0'/> <target dev='vnet2'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/2'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/2'> <source path='/dev/pts/2'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' 
path='/var/lib/libvirt/qemu/channel/target/domain-3-RnzOWS2/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x046d'/> <product id='0xc52b'/> <address bus='1' device='3'/> </source> <alias name='hostdev3'/> <address type='usb' bus='0' port='1'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x0b05'/> <product id='0x17cb'/> <address bus='1' device='8'/> </source> <alias name='hostdev4'/> <address type='usb' bus='0' port='2'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x258a'/> <product id='0x0016'/> <address bus='7' device='2'/> </source> <alias name='hostdev5'/> <address type='usb' bus='0' port='3'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='1'> <name>RnzOWS3</name> <uuid>ca554543-824c-7f67-343a-247ed792eb92</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='3'/> <vcpupin vcpu='2' cpuset='4'/> <vcpupin vcpu='3' cpuset='5'/> </cputune> <numatune> <memory mode='preferred' nodeset='0'/> </numatune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-3.0'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/ca554543-824c-7f67-343a-247ed792eb92_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='4' threads='1'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' 
discard='unmap'/> <source dev='/dev/disk/by-id/ata-KingDian_S280_120GB_2018080900515'/> <backingStore/> <target dev='hdc' bus='scsi'/> <boot order='1'/> <alias name='scsi0-0-0-2'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/Windows.iso'/> <backingStore/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <alias name='sata0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/> <backingStore/> <target dev='hdb' bus='sata'/> <readonly/> <alias name='sata0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='scsi0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xa'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0xd'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:e6:9d:e5'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model 
name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-RnzOWS3/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x062a'/> <product id='0x4101'/> <address bus='5' device='2'/> </source> <alias name='hostdev2'/> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain>
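Adding a note on my own understanding, in case it helps whoever answers: as far as I know, readonly='yes' applies only to the shared OVMF code image, while each VM's UEFI variables (including the boot entries) live in the writable per-VM VARS file, so I'd expect firmware settings to persist there. That also makes me wary of flipping readonly='no' on a CODE image that all three VMs share. VM1's block, annotated with my (possibly wrong) reading:

<os>
  <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
  <!-- shared firmware code image: read-only on purpose, used by every OVMF VM -->
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <!-- per-VM, writable variable store: UEFI settings/boot entries should live here -->
  <nvram>/etc/libvirt/qemu/nvram/30710cf9-a806-7a7b-4a23-5be7b46426c7_VARS-pure-efi.fd</nvram>
</os>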
  2. Tested on VM1: it does not completely solve the stuttering, but it works wonders, perceivably reducing the latency. Thanks! Let's hope it'll be included in the next release. I used this variation: <cpu mode='custom' match='exact' check='partial'> <model fallback='allow'>EPYC-IBPB</model> <topology sockets='1' cores='6' threads='2'/> <feature policy='require' name='topoext'/> </cpu> because I have 12 cores assigned. To find the root cause, may I ask you to test your system for this very specific problem, if you're willing: run CrystalDiskMark on a network share capable of saturating a gigabit connection, or pointed at an Unraid SMB network share. Do you notice any of the above problems (increased latency, CPU spikes, stuttering)? P.s.: I still haven't changed the CPU mode on the other VMs; I'll update as soon as I have news.
  3. At the moment:
Core 0,1 - free for Unraid to use
Core 2-23 - isolated from Unraid
Core 2-5 - VM3
Core 6-11 - VM2
Core 12-23 - VM1
("-" meaning "to")
Given the topology, the core/SMT-sibling pairing should be 0 and 1, 2 and 3, etc., and it's correctly picked up by recent Unraid versions. Regarding the stuttering, it doesn't differ whether the VMs are used together or alone. I'll try tomorrow and report back. As reported above, the BIOS RAM interleaving setting is set to "channel", and every VM except VM3 had numatune set to "preferred" on its die. Today I installed the last DIMM modules, totaling 64GB, so I gave VM3 the "preferred" setting, too. Before, "strict" wasn't really assigning everything to the correct die; a small part of the other memory banks was always used. I'll try "strict" again (sketched below) with the new configuration to see if something has changed.
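For reference, the two variants differ only in the mode attribute of the numatune block in the VM XML; a minimal sketch (the nodeset value depends on which die the VM's pinned cores sit on):

<numatune>
  <memory mode='strict' nodeset='0'/> <!-- vs. mode='preferred', as in my current XMLs -->
</numatune>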
  4. I was about to throw in the towel last week, accept the failure and replace the TR server with 3 separate physical workstations, but I don't want to "go gentle into that good night", so let's see if putting together all the info I've collected so far gives some expert an idea of how to solve this.

Quick recap: I'm experimenting with Unraid to service 3 nearby workstations at work, repurposing "old" gaming and office components, but the stuttering has been plaguing us for months; we have to locally pause the file sync with the main server while working. When moving/accessing files inside a W10 VM on this Threadripper 1920X build, strong stuttering occurs: bursts of continuous micro/macro freezes, ranging from a fraction of a second to 4-5 seconds, really noticeable because even the pointer gets stuck. Passing a storage device through by-id via SCSI helps a lot, and a "bare metal" NVMe controller passthrough almost solves the problem for local files (latency is still high but with no perceivable stuttering). I still can't for the life of me solve the issue with network file transfers (LAN via ethernet AND local Unraid shares mounted via SMB). The issue is apparently NOT related to writes, only to reads; random reads usually cause more issues than sequential reads. It's apparently NOT an environmental issue (broadcast storms, or something like that), because I can replicate it with the ethernet cable disconnected, simply by running CrystalDiskMark pointed at an Unraid SMB share on the array. While reading from an SMB share with CrystalDiskMark, one logical core almost always hovers around 50% and another hovers around 90% (see attached screenshots), with a peak of "system interrupts" activity inside the Task Manager. Otherwise, they idle normally. The faster the share, the worse the stuttering, apparently. Q35 3.0 seems to help.

Test hardware, updated:
- Mobo: Gigabyte X399 Aorus Gaming 7 rev 1.0
- Processor: Threadripper 1920X
- RAM: 48GB (6x8GB) Kingston KVR24E17S8/8MA, DDR4 2400 MHz, CL17, ECC unbuffered; to be expanded to 64GB (8x8GB) in a few days
- 1 x EVGA 1070 FTW3 - VM1
- 1 x Quadro P400 - VM2
- 1 x Quadro K620 - VM3
- 3 x 120GB SATA SSDs (2x Crucial BX300, 1x KingDian cheapo) - VM OS disks
- 1 x 2.5" 7200rpm 1TB HGST HDD - the only and lonely local array disk
- 1 x 120GB NVMe M.2 PCIe, Intel 600p - VM2 scratch disk
- 1 x 250GB NVMe M.2 PCIe, Samsung 960 Evo - VM1 scratch disk
- 1 x StarTech PEXUSB3S42 4-port PCI Express USB 3.0 card - 1 USB controller, 3 ports (+1 internal) passed through to VM1 or VM2
- 1200W PSU

Environment: 1GbE LAN; nearest router an old Netgear R7000 with dd-wrt custom firmware; 2 unmanaged switches between the Unraid tower and the main office server, a 918+ with 4x WD REDs in RAID 10 + RAID 1 NVMe R/W cache.

Test software, updated:
- MB BIOS F11e - AGESA 1.1.0.1a - latest
- Windows 10 Pro 1809 - latest
- Unraid 6.6.6 - latest
- CrystalDiskMark 6.0.2 - latest
- Office server OS: DSM 6.2.1-23824 Update 2 - latest
- Office server sync software: Syn Drive 1.1.1-10562 - latest
- latest NVIDIA drivers for GTX and Quadros
- virtio-win-0.1.160-1 - latest

Configuration (see attached pictures and diagnostics):
- BIOS memory interleaving setting: channel
- ZenStates ON
- CPU scaling governor: performance (no clear difference from the previous "on demand"), turbo boost enabled
- PCIe ACS: downstream
- NIC flow control and offload: disabled (see the sketch after this post)
- isolcpus: every CPU except the 0,1 pair
- rcu_nocbs removed from the last configuration, because it should already be implemented in newer Unraid versions
- Q35 3.0, OVMF, Hyper-V enabled, USB controller 3.0 qemu XHCI
- VM1 (4K video editing): 19456MB RAM, GTX 1070 (with ROM) + 1 SATA SSD by-id SCSI cache=none discard=unmap + 1 NVMe SSD bare metal + 1 PCIe-USB adapter, vcpupin from 12 to 23, numatune memory mode='preferred' nodeset='1'
- VM2 (RAW picture editing): 15360MB RAM, Quadro P400 + 1 SATA SSD by-id SCSI cache=none discard=unmap + 1 NVMe SSD, vcpupin from 6 to 11, numatune memory mode='preferred' nodeset='0'
- VM3 (light CAD): 8192MB RAM, Quadro K620 + 1 SATA SSD by-id SCSI cache=none discard=unmap, vcpupin from 2 to 5, no numatune atm (waiting to get more RAM)
- the NVMes are not isolated via vfio-pci.ids, only via an XML hostdev addition
- using the MSI interrupts tool I found in one of gridrunner's tutorials, I enabled MSI interrupts on everything listed there

Result: DAMNED STUTTERING.

I've seen in a recent video tutorial that @SpaceInvaderOne has experimented with both the 1950X and 2990WX; are you experiencing anything like this, or do you have an idea of how to circumvent it? To replicate the issue 100% of the time with the above setup, it's sufficient to launch CrystalDiskMark on an Unraid SMB share from a W10 VM.

P.s.: in the green/red scheme below (made by Gigabyte support) that I found somewhere some time ago (maybe here, maybe on the Gigabyte forum), the DIE 0 and DIE 1 denominations are inverted compared to lstopo.

rnzows0-diagnostics-20181214-0553.zip
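For anyone wanting to reproduce the "flow control and offload disabled" setting from the list above: on the host it can be done with ethtool; a sketch, assuming the interface is eth0 (the exact offload keywords your NIC supports may differ):

# disable flow control (pause frames) on the host NIC
ethtool -A eth0 autoneg off rx off tx off
# disable the common segmentation/receive offloads
ethtool -K eth0 tso off gso off gro off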
  5. Update, hoping to be helpful to other Ryzen/Threadripper users. Now I'm pretty sure that the problem with network file transfers and the hiccups is due to system interrupts. The single-core near-100% utilization spikes are correlated with a spike in "system interrupts" resource usage inside the Windows Resource Monitor (it's less than 10% usage, but with 10 cores, that's enough to saturate one core). After upgrading to 6.6.1 and creating a new VM from scratch with Q35 qemu 3.0 as the machine type, enabling Hyper-V, and using the MSI interrupts tool that I downloaded from one of gridrunner's videos to select MSI interrupts for the video card, the sound card (!) and the virtio eth adapter, the problem SEEMS to have vanished:
- on a W10 VM
- with only two cores pinned
- even without an emulator core pinned
- even using a standard vdisk image on the cache drive, and not passing through anything beyond the GPU
I'll test it further; no news = good news. (The registry change the tool makes is sketched below.) I'm now trying to convert at least one of the old VMs, to see if I can avoid reinstalling every VM from scratch. I managed to change the machine type to Q35 without deactivating Windows: I created a new VM in the GUI with Q35, assigned the passthrough NVMe where the VM was installed, and copied the old UUID into the new XML (but left the new UUID in the file path just below it). Windows automatically updated the missing drivers after a while, and after a few reboots it's now working. But the stuttering isn't gone, so I guess the culprit is Hyper-V, which needs to be enabled (I had it turned off because older Unraid versions had problems with Hyper-V enabled and non-Quadro GPU passthrough). I'm not able to enable Hyper-V without running into an endless boot loop of blue screens of defeat. Does anyone know how to do that, or do I have to reinstall all the VMs from scratch?
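For anyone who prefers doing it by hand instead of with the tool: as far as I understand, it just flips the MSISupported value for the device. A sketch (the device instance path here is a made-up example; look up your own device's path under HKLM\SYSTEM\CurrentControlSet\Enum\PCI and keep the rest of the key the same):

Windows Registry Editor Version 5.00

; hypothetical GPU instance path -- substitute your device's own
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1B81&SUBSYS_62783842&REV_A1\4&12345678&0&0008\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001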
  6. Ok, now I've tried: updated to 6.6.0-rc2 and installed the new virtio drivers. Probably better with passed-through SSDs on non-passed-through controllers (passed-through SSDs on passed-through controllers were already OK), but still the same with network shares. I then tried changing the ethernet adapter from virtio to e1000 (sketched below), and flagging/unflagging it inside MSI interrupts: nothing, maybe worse. To easily trigger the stuttering/hangs, I just need to perform a sequential read test with CrystalDiskMark on ANY network folder, even folders that aren't Unraid related (for example, an SMB share on the central NAS). Bypassing emulation by passing through the SSD controller solved the issue with the SSDs, but the problem remains, because I cannot pass through the ethernet adapter: it has to be shared among 3+ VMs and Unraid. Because of this, any time I move files between a VM and the network, I still experience heavy stuttering.
_
Edited the title instead of creating a new topic, because I think the two issues are related.
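For reference, the adapter swap is just the model line inside the VM's interface block (the MAC shown is VM1's, from the XML posted earlier):

<interface type='bridge'>
  <mac address='52:54:00:f7:dc:4b'/>
  <source bridge='br0'/>
  <model type='e1000'/> <!-- was type='virtio' -->
</interface>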
  7. I didn't. Running the update assistant now; I'll try with 6.6.0-rc2.
  8. I never managed to pass through the SATA SSD controller. The stuttering is now gone on the SSD, as written above, but if I try to write from inside the VM towards an Unraid share, the system hangs the same way (if not worse). High single-thread CPU usage shows up during all intensive network usage, and again it's not the emulator pin, but one of the VM cores. I'll open another topic about this problem, but I feel the bugs are somehow related.
  9. I finally had the time to try this, and a few blue screens of despair later, it WORKS! Even without applying any of the optimizations I tried earlier in this topic, the overhead that was causing the stuttering is gone. Thanks!
--
I'll try to describe the process I followed, to help someone else. I first tried to pass through with this method: SSD Passthrough (had to change the ata- prefix to nvme-, everything else the same), but I noticed no real difference because, as you suggested, the entire controller needs to be passed through. So I followed this method: NVME controller passthrough, including not stubbing the controller but using the hostdev XML provided in the video description, with a few differences:
1. I used MiniTool Partition Wizard to migrate the OS, selecting "copy partitions without resize" to avoid the recovery partition being unnecessarily stretched, and immediately afterwards stretched the C partition, leaving 10% overprovisioning.
2. With the most recent Unraid version it seems the modified Clover isn't necessary: you simply stub the controller in the syslinux configuration, or add the hostdev to the VM XML and click update, then specify the boot order by adding <boot order='1'/> after the source, so that it looks something like this: <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/> </source> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </hostdev> The device should then be visible and selectable inside the GUI editor. You then simply select "none" as the primary vdisk location, update again, check that the boot order is still there inside the XML, and boot the VM up. I had to reboot a few times: inside the Windows recovery options that followed the first blue screen telling me "no boot device" or something like that, I selected the "boot recovery" option (dunno if that's the correct name, my interface isn't in English), rebooted twice more, and it worked. I just had to reinstall my NVIDIA drivers again, don't ask me why.
--
With my configuration, since I wanted to pass through the same SSD that held the vdisk, I had to move the vdisk to another disk with Krusader and then select the new location inside the GUI editor. Don't do like I did and make TWO copies on the other drive, one as a backup, because something might simply go wrong and corrupt your vdisk.
--
It works with the NVMe drives, and now I want to try this method with the SATA SSDs, too. The problem is that isolating the SATA controller in its own IOMMU group isn't that easy. With the second-to-last stable BIOS of my X399 Aorus, F3g (F3j was bugged as hell), it simply isn't possible, even with the ACS override patch enabled: the SATA controllers are always grouped with something else. Updating to the latest F10 BIOS with the new AGESA, it seems feasible. The obstacle I'm trying to overcome now is understanding which SATA controller I need to pass through without messing everything up. I installed a plugin from Community Apps to run scripts inside the GUI, then ran an IOMMU script I found on reddit (something along the lines of the sketch below) to try to understand which SATA controller I need to pass through. It seems that every SATA drive is currently under the same SATA controller, but later I'll try changing connectors. I'll keep this topic updated!
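I don't have the exact script handy, but the commonly shared one walks sysfs and prints each device with its group; a sketch of that approach:

#!/bin/bash
# list every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}   # extract the group number from the path
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"                           # describe the device at that PCI address
done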
  10. Another thing I could try: following one of gridrunner's tutorials for Ryzen systems, I have rcu_nocbs=0-23 inside my syslinux as an added measure to avoid system hangs (see the sketch below for where it sits), but I'm not fully aware of what that flag does. I'll try disabling it and see if anything changes.
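For context, the flag lives in the kernel append line on the Unraid flash drive; my boot entry in /boot/syslinux/syslinux.cfg looks roughly like this (label and any other flags from memory, omitted here):

label Unraid OS
  menu default
  kernel /bzimage
  append rcu_nocbs=0-23 initrd=/bzroot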
  11. Any visible spikes of CPU activity during those transfers? May I ask you for your VM XML and Unraid syslinux? And could you run one test with CrystalDiskMark and see if you notice stuttering of the pointer during the random read test (the second one after launching "all" tests)?
  12. @1812 thanks to SSDs it was quick to set up. Unfortunately, the CPU spikes and the stuttering do seem to be there on the Intel platform, too. 1 vCPU for Unraid, one for the emulator pin, two for the VM (the 7700 isn't generous with cores). 8GB RAM to the VM, out of 16GB total. Vdisk mounted via SCSI, discard=unmap, cache=none or writeback, io=native or threads (the driver-line variants are sketched below). I'll do other tests tomorrow morning to be 100% sure, but I think it's not an AMD-related issue. With intensive random I/O on SSDs with CrystalDiskMark, it seems to always happen. The CPU spike starts and stops with the disk access, and it doesn't happen on the bare-metal Windows install on the same machine.
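The permutations I cycled through are all on the <driver> element inside the vdisk's <disk> block in the VM XML; for reference:

<driver name='qemu' type='raw' cache='none' discard='unmap' io='native'/>
<!-- other combinations tried: cache='writeback' and/or io='threads' -->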
  13. In the meantime, it seems to be an issue for other people as well, with the same symptoms. Changing an existing VM to Q35 didn't help me, but maybe installing from scratch with Q35 will; I'll try that, too.
  14. I'll try what's in there, but you've also just given me another idea: my "old" home PC is very similar storage-wise, but Intel based. I've got a 7700K on a Z270 board, a 1070 FTW identical to the one I'm passing through to VM1, an M.2 960 Evo as the system disk, an M.2 970 Evo as a scratch disk, and a 7200rpm HDD as "intermediate" storage between the PC and the very same central NAS (thanks to a Ubiquiti radio link I share my network, and fiber connection, with the office). I'll try a test Unraid installation there with identical settings (minus the AMD-specific ones), using the 970 Evo to host the VM vdisk, to see whether changing platforms changes anything at all. If it doesn't, I'm guessing it must be a misplaced server setting or an Unraid/KVM bug.
  15. Guys, @1812, I'm really, really stuck with this. I also tried the two io combinations (native and threads) without noticing differences. The things that seem to somehow mitigate the issue are pinning the emulator to an isolated CPU (sketch below) and having Hyper-V enabled (did that on a new VM, because I'm not able to enable it on old VMs), but maybe it's placebo. cache=none also seems to help, but it really impacts performance (I've got a huge UPS, so power losses don't scare me that much). I still suffer from bad stuttering and 100% CPU load during disk I/O, especially random I/O. The problem doesn't seem to reside inside the guests, but inside the Unraid host (maybe some misconfiguration of mine). What other steps can I take to help you/anyone isolate what may be causing this issue on every VM I create? Could it be that my cache is btrfs, and that all my vdisks were created on a btrfs filesystem and later transferred onto an xfs-formatted drive? I'm really running out of ideas. I'd really like to use Unraid to manage more WSs inside my office, but I've got to get rid of this problem first. Attaching the updated diagnostics again. rnzows0-diagnostics-20180628-0007.zip
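For clarity, the emulator pin I mentioned is just this element inside <cputune> in the VM XML; the cpuset values here are only an example layout:

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <emulatorpin cpuset='4'/> <!-- example: an isolated host core not used by any vcpupin -->
</cputune>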