bastl

Everything posted by bastl

  1. The only thing you need is the config folder of that old USB thumb drive. Every config of your old installation is inside this folder. Copy that folder over to the new USB key and you should be fine, as long as the files aren't damaged. One thing you have to do is reactivate your Unraid license for the new UUID of that USB key.
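     If both sticks happen to be mounted at the same time, the copy itself is a one-liner; the mount points here are just examples, adjust them to wherever your keys actually show up:

       # copy the whole config folder from the old key to the new one (paths are examples)
       cp -r /mnt/old_usb/config/. /mnt/new_usb/config/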
  2. If the device you wanna pass through isn't in its own IOMMU group, passthrough won't work. If you can't split the group up with ACS Override there isn't really much you can do. Check your BIOS for an IOMMU setting and play around with it. Not all BIOSes have these options. Sometimes a BIOS update can help to get your groupings split.
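     To see your current groupings from the Unraid terminal, the usual one-liner from the VFIO guides should work:

       # list every PCI device together with its IOMMU group number
       for d in /sys/kernel/iommu_groups/*/devices/*; do
         n=${d#*/iommu_groups/*}; n=${n%%/*}
         printf 'IOMMU group %s: ' "$n"
         lspci -nns "${d##*/}"
       done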
  3. @Ernie11 Forget about the idea of playing games on a "virtual" GPU. It's an emulated GPU which maybe exposes feature sets like OpenGL or Vulkan, but without any of the hardware acceleration that a physical GPU provides. If you wanna play games in a VM that need some horsepower, pass through a physical GPU.
  4. I've used the Ubuntu template to create a PopOS VM. I used 19.04 and changed nothing in the template, nor did I install anything. Below you can see that other resolutions like yours are available. The default VNC driver is QXL in this template. Check which one you are using.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='20'>
       <name>Pop</name>
       <uuid>6d039ddb-88c0-1e32-b457-b779ea549448</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
         <vcpupin vcpu='1' cpuset='18'/>
         <vcpupin vcpu='2' cpuset='3'/>
         <vcpupin vcpu='3' cpuset='19'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/6d039ddb-88c0-1e32-b457-b779ea549448_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='1'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/VMs/Pop/vdisk1.img'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Linux/pop-os_19.04_amd64_nvidia_4.iso'/>
           <backingStore/>
           <target dev='hda' bus='sata' tray='open'/>
           <readonly/>
           <boot order='2'/>
           <alias name='sata0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:cd:bd:8c'/>
           <source bridge='br0'/>
           <target dev='vnet3'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/3'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/3'>
           <source path='/dev/pts/3'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-20-Pop/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='de'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <alias name='video0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </memballoon>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
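     To quickly check which emulated video model your own VM uses without opening the XML editor, a virsh query along these lines should do it ("Pop" is just my VM's name here, replace it with yours):

       # print the <video> element of the domain, i.e. which driver (qxl, cirrus, ...) is set
       virsh dumpxml Pop | grep -A1 '<video>'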
  5. Also known as the AMD reset bug. Some people with 5xx or Vega cards have that issue. If you search the forums you will find a couple of reports. The card is only able to initialise once and can't be reset. Only a server reboot fixes this in most cases. Some people reported that not passing through the audio part of the card works, for others using Q35 fixes it, and for some it worked after passing through a GPU BIOS for the card. Another guy reported that booting Unraid in legacy mode instead of UEFI worked for him. Unfortunately there isn't a one-click solution.
  6. Just an idea. Back when I played around with Server 2008 R2, a non-activated install first reminded the user to activate, and after a certain number of days it shut itself down after a couple hours of use. The VM logs showed the VM is shut down, not paused like it would be by a full cache disk or a full disk the vdisk sits on. Just an idea. Edit: Don't know if that only applies to specific server versions or if something has changed since then. I last played around with Windows Server versions 2-3 years ago.
  7. Do you maybe have any user scripts running, for example a backup script that shuts down your VMs?
  8. As you maybe already noticed, it's kinda hard to compare 2 systems if every spec is different. Memory speed and latency are a huge thing on first-gen Ryzen. As already mentioned by testdasi, the chiplet design and the communication between the chips is the next thing you have to factor in. Different cores for a VM can lead to different memory speeds/latencies. The next thing: which slot are you using for the GPU? Some aren't connected directly to the CPU. Limiting the speed of the PCIe lanes by using a slot wired to the chipset can also be an issue. 16 vs 8 lanes shouldn't be an issue, but only using 4 lanes through the chipset, shared with other devices (USB, network, storage), will bottleneck the GPU. You can check the negotiated link from the terminal, see below.
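     Something along these lines should show what the card actually negotiated (0a:00.0 is just an example address, take yours from the first command):

       lspci | grep -i vga                      # find your GPU's PCI address
       lspci -vv -s 0a:00.0 | grep -i lnksta    # LnkSta = negotiated PCIe speed and lane width, e.g. x8 or x16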
  9. ACS Override is not a thing you set to gain performance. Its only use case is to split your IOMMU groupings to separate devices from each other. 30k vs 37k is a huge difference for the graphics score alone. With the overhead of virtualisation, 1k, maybe 2k, is what you can expect. Disk I/O, as an example, shouldn't be the issue; benchmarks and game engines load most of their stuff at the start. Maybe the memory speed is what's causing the difference for you. Are you using the same DIMMs and the same XMP profile for both tests?
  10. @sit_rp You did both tests, bare metal and virtualised with the same monitor/tv?
  11. @sit_rp Stupid question, I know, but do you use the Nvidia driver or the default Windows driver for the card?
  12. Check your Windows power plan. Switch to performance instead of balanced (default). What CPU clocks does Unraid report under load?

      watch "grep 'cpu MHz' /proc/cpuinfo"
  13. Serious question: how often do you change the size of your VMs? 100 times a day? 😂 The majority of Unraid users, let me guess 95% or even more, are using Unraid in their home environment or in small offices. That small portion of "office people" will never use that feature as a "productivity" feature, as you called it. To have that feature available via the GUI for everyone would be great, as long as it is safe. Don't get me wrong, but I have read from a lot of people here in the forums who already broke their VMs by reducing the size of a vdisk.
  14. Go to Settings > VM Manager and enable PCIe ACS override, reboot your server and check if the devices/groupings are split up even further. Without having the device that you wanna pass through in its own group, passthrough won't work.
  15. @Pducharme You have been able to resize a vdisk via the GUI for a long time already, if you hadn't noticed, but only while the VM is turned off. Left-click the name of the VM in the VMs tab and you can see the actual size of the vdisk. Click the number and you can change the value. Last time I tried to reduce the size this way it worked. The issue is that Unraid may know how much is allocated, but it doesn't know where the blocks are, so to say. If your data is at the end of the file, Unraid will cut it off. I know, you can reduce the partition size inside the VM first and then shrink the vdisk, but that's still not an accurate way to do it.
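      If you really have to shrink, the safer order is to shrink the partition inside the VM first and only then the image, roughly like this with the VM off (path and size are just examples, and keep a backup):

        qemu-img info /mnt/user/VMs/Pop/vdisk1.img              # check the current virtual size
        qemu-img resize --shrink /mnt/user/VMs/Pop/vdisk1.img 30G   # cut the image down to 30G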
  16. In most cases you need a physical display plugged into the GPU. There are dummy HDMI plugs available to emulate a connected display. In some circumstances, without a device connected the GPU won't initialize correctly and you can't use it in the VM. If the card works inside the VM you have different options to connect to the VM remotely, such as RDP, VNC or Teamviewer. But for gaming, for example, don't expect smooth frame rates like on a directly connected monitor if you connect remotely.
  17. If you only use a dedicated GPU there is no option for a "console". You have to plug a display into the GPU to see the output.
  18. Does it also work for reducing the size of the vdisk? If so, then that's the easiest way to break your VM. 🤨
  19. As long as you don't break something else with this patch, I think the users will benefit from it. Having a VM in a state where you always have to restart the server to use your GPU again vs. a VM where "maybe" a driver crash can lead to the same issue?! Even if it's not the final solution and only a small improvement, it's still an improvement. As an early adopter of the TR4 platform I know this feeling; every slight improvement in stability is better than none. I'm not actually sure how many people in the Unraid community are still using the Vega cards, but every now and then people report this issue in the forums. I guess there is a demand for this, especially for people with macOS VMs. Nvidia web drivers for Mojave, for example, are still not available and maybe never will be, and AMD cards are kinda the only up-to-date cards that work on newer macOS VMs. I don't know if you guys have some insight into the numbers of how many people use these cards on Unraid, but I guess there are a couple. Btw, is there any timeline for the 6.7.3 release and, more important, for 6.8? 😁
  20. Hey guys, everyone who's trying to pass through a Vega 10 card to a VM will know this issue: restarting the VM isn't possible without resetting the whole server. The card will be stuck in a D3 power state until Unraid is rebooted. The early adopters of TR4 and AM4 might already know this guy from fixing some bugs which made it into the kernel; a lot of people using Unraid benefit from his work. He did it again and fixed the reset bug together with AMD for the mentioned cards. This might be useful information for a couple of users and might also be something @limetech is interested in implementing in the next releases. https://forum.level1techs.com/t/vega-10-and-12-reset-application/145666 Here is a short video from him showing that it is working right now with a macOS Mojave VM.
  21. Remove the following part from the text file. I think you have to restart Unraid for the changes to take effect.

      IFNAME[1]="br1"
      BRNAME[1]="br1"
      BRNICS[1]="eth1"
      BRSTP[1]="no"
      BRFD[1]="0"
      PROTOCOL[1]="ipv4"
      USE_DHCP[1]="no"
      IPADDR[1]="192.168.2.2"
      NETMASK[1]="255.255.255.0"
      GATEWAY[1]="192.168.2.1"
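      For reference, the file I mean should be /boot/config/network.cfg on the flash drive (path from memory, double-check on your box); I'd back it up before editing:

        cp /boot/config/network.cfg /boot/config/network.cfg.bak   # backup first
        nano /boot/config/network.cfg                              # delete the [1] block shown above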
  22. Clicking the trashcan should do the trick. At least it did for me when I had that issue upgrading to 6.7.2. This is the corresponding line: 192.168.2.0/24 br1 1
  23. It is used by the bridge br1. If it isn't used by any Docker container or VM, remove it and you should be able to remove the 192.168.2.1 route, or it gets removed along with the bridge. Can't test it right now. You can also find the network config in the config folder on your Unraid flash drive in one of the text files. Make sure you are removing the right bridge.
  24. You have 2 default routes. There should be only one defined.
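      You can double-check from the terminal; something like this should only print one line:

        ip route | grep default   # more than one line means duplicate default routes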
  25. Please don't make RDP directly accessible from the internet. Please don't!!! There were a couple of flaws in the RDP protocol over the past months, and I have the feeling these won't be the last ones. https://www.bleepingcomputer.com/news/security/microsoft-warns-users-again-to-patch-wormable-bluekeep-flaw/