gray squirrel

Members · Posts: 61
Everything posted by gray squirrel

  1. The RTX 3000 cards supposedly support SR-IOV at the hardware level, but Nvidia would need to enable it (they won’t, as it’s an enterprise feature), and Unraid would also need to support it. Splitting one GPU up would be cool, but it’s a very fringe case. Fit two or three GPUs, or use this guide to share the GPU (not at the same time).
  2. So I did a bit of testing with CSGO, VM vs BM @1080p, and saw similar results. The community benchmark goes from around 400 to 320 average FPS. Playing with bots on one of the maps, the result is the same: around an 80 FPS drop. But this is such an extreme test and so unrealistic. I bet if I made a 4-core (one CCX) VM it would be much closer. But 300 FPS in a game is a bit ludicrous anyway.
  3. Although I have not tested CSGO, these were my results for BM (12 cores) vs a 6-core VM, with a GTX 1080:
     Cinebench R20: BM multi = 6675, VM multi = 3488 (52%); BM single = 511, VM single = 471 (92%)
     Cinebench R15: BM multi = 3010, VM multi = 1457 (49%); BM single = 192, VM single = 190 (99%)
     Time Spy: BM GPU = 7548, VM GPU = 7446 (98%)
     Civ VI: BM turn time = 7.56, VM turn time = 7.63 (99%); BM FPS = 128, VM FPS = 130 (102%)
     F1 2019: BM avg FPS = 136, BM min FPS = 101; VM avg FPS = 124 (91%), VM min FPS = 99 (G-Sync was on for some reason, which probably accounts for most of the difference)
     Mankind Divided: BM avg FPS = 73.7, VM avg FPS = 69.7 (95%)
     RDR2: BM avg FPS = 62.13, VM avg FPS = 63.8 (103%)
     At that point I gave up benchmarking and just started gaming, as I doubt I will ever notice the difference. Edit: I game at 1440p, so I am GPU-bound in most situations; in games I see 99% GPU utilisation.
  4. The NUMA stuff relates to Threadripper, not your 3950X, but the principle is the same. Your 3950X is made up of two CPU dies, and each die has two CCXs of 4 cores. If you are gaming, you want to avoid crossing from die to die, as this adds significant latency because the OS isn’t aware of the CPU’s layout (it’s in a VM). Lowest latency will be a single CCX, but that might not be enough CPU power for you; my approach with my 3900X was to give the VM one whole die. In your first layout you have passed all the hyperthreads of all the cores. This will be good for something like rendering, as long as the host isn’t doing anything, but it will be very bad for latency. In your second layout you have given it two CCXs across two dies, which will be very poor. Try giving it cores 8-15 plus their hyperthreads and your performance should be good (a rough sketch of the pinning is below). Remember to isolate those cores and hyperthreads from the host, and please also ensure you follow the guidance on this.
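     A minimal sketch of what that pinning could look like in the VM’s XML, assuming the second die shows up as cores 8-15 with hyperthread siblings 24-31 (the numbering varies, so check the pairings on Unraid’s CPU pinning page before copying anything):

     <vcpu placement='static'>16</vcpu>
     <cputune>
       <!-- second die: cores 8-15 plus their hyperthread siblings (assumed here to be core+16) -->
       <vcpupin vcpu='0' cpuset='8'/>
       <vcpupin vcpu='1' cpuset='24'/>
       <vcpupin vcpu='2' cpuset='9'/>
       <vcpupin vcpu='3' cpuset='25'/>
       <vcpupin vcpu='4' cpuset='10'/>
       <vcpupin vcpu='5' cpuset='26'/>
       <vcpupin vcpu='6' cpuset='11'/>
       <vcpupin vcpu='7' cpuset='27'/>
       <vcpupin vcpu='8' cpuset='12'/>
       <vcpupin vcpu='9' cpuset='28'/>
       <vcpupin vcpu='10' cpuset='13'/>
       <vcpupin vcpu='11' cpuset='29'/>
       <vcpupin vcpu='12' cpuset='14'/>
       <vcpupin vcpu='13' cpuset='30'/>
       <vcpupin vcpu='14' cpuset='15'/>
       <vcpupin vcpu='15' cpuset='31'/>
       <!-- keep the emulator threads on host cores the VM is not using -->
       <emulatorpin cpuset='0,16'/>
     </cputune>
     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='8' threads='2'/>
       <cache mode='passthrough'/>
     </cpu>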
  5. It’s to do with cross-die and cross-CCX latency, and with setting the VM up for the right application. A lot of this is covered in this thread: https://forums.unraid.net/topic/73509-ryzenthreadripper-psa-core-numberings-and-assignments/#comment-676202 TL;DR: workstation - spread the load evenly across cores / dies / CCXs; gaming - minimise cross-die and cross-CCX interaction. If you are just gaming, you will probably get better performance with 4 cores from one CCX (a rough sketch below). I have a 3900X and I use one die (2 CCXs) for a 6C/12T gaming VM. I can’t tell the difference between BM and VM; benchmark results (even synthetic ones) are within a few per cent.
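     Purely as an illustration, pinning a single CCX for a gaming VM might look like the snippet below, assuming the first CCX is cores 0-3 with hyperthread siblings 16-19 (confirm the actual pairings for your own CPU first):

     <vcpu placement='static'>8</vcpu>
     <cputune>
       <!-- one CCX: cores 0-3 plus their hyperthread siblings -->
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='16'/>
       <vcpupin vcpu='2' cpuset='1'/>
       <vcpupin vcpu='3' cpuset='17'/>
       <vcpupin vcpu='4' cpuset='2'/>
       <vcpupin vcpu='5' cpuset='18'/>
       <vcpupin vcpu='6' cpuset='3'/>
       <vcpupin vcpu='7' cpuset='19'/>
     </cputune>
     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='4' threads='2'/>
     </cpu>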
  6. So I am still suspicious of the VBIOS, as I could never get the GPU-Z method to work. As you’re on an ITX board, you can’t fit two GPUs. There is a new guide that will allow you to dump it directly in Unraid: https://youtu.be/FWn6OCWl63o
  7. As an alternative... I don’t sleep my VMs, I shut them down. I then use the guide below to send a remote WOL request, with shortcuts on my phone that I can ask Siri to activate. It works well, and the VM is booted before I’ve had a chance to sit down in my gaming chair!
  8. Still an interesting topic, given some developers try to block virtual environments. However, it looks like RDR2 was patched today, and now, for whatever reason, it works on a very standard VM setup: 2700X with 4 cores passed through and isolated; GTX 1080 passed through as the primary GPU; 2TB NVMe passed through; Hyper-V on (the features block this maps to is shown below). Performance is nice as well: 50-70 FPS at 1440p ultra!
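     For reference, the “Hyper-V on” toggle in the Unraid template corresponds roughly to a features block like the one below (this is the block from one of my own VM XMLs; your exact enlightenments may differ):

     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <!-- masking the vendor id is commonly used to avoid the Nvidia Code 43 error on passed-through consumer cards -->
         <vendor_id state='on' value='none'/>
       </hyperv>
     </features>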
  9. Short answer: no. Longer answer: this is possible at the enterprise level, but requires some significant licensing to enable. The RTX 3000 cards are supposed to be SR-IOV capable, but Nvidia would have to enable it for us jolly old consumers.
  10. If you have an iGPU that you keep enabled, do you even need to pass a VBIOS? I thought you only needed this if the card was your primary GPU. You will need to set the iGPU as your primary GPU in the BIOS.
  11. Have you dumped your own VBIOS using this guide? I noticed from your XML that you have named your VBIOS “***modded”, so I guess you have either downloaded it, or dumped it via GPU-Z and then edited the header. I could never get that to work with my GPU. You need another GPU for the guide’s method, but for me it worked first time. I would also try using the GPU as the secondary and passing it without a VBIOS first, to check all is good.
  12. I don’t remember editing the VBIOS when I dumped my own; I don’t think you are supposed to do that. Did you follow this guide? Edit: I just re-watched it, and you need the 3090 as the secondary GPU. When I tried dumping via GPU-Z and editing, it still didn’t work for me, so I followed this guide. Do you have another system with two PCIe slots, or somebody you trust to dump the BIOS via this method?
  13. It would be interesting if somebody could make a guide for the not-so-technical. I am dual booting to get around this issue in RDR2. Are there any risks associated with this approach?
  14. As you are passing a VBIOS, I assume you are using this as your primary GPU? Have you tried using it as a secondary and not passing the BIOS to start with? Have you dumped your own VBIOS from the card, or edited one? I had no end of issues with my 1080 until I dumped my own BIOS.
  15. If you want Windows to use the whole NVMe drive, why not pass the controller through to the VM? You will get better performance that way (a rough example below). Your VBIOS can be stored anywhere. Doing it this way also lets you boot natively into Windows for benchmarking, or to run stuff that doesn’t like virtualisation.
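     Passing the controller is just another PCI passthrough entry in the VM’s XML; a minimal sketch is below, where the bus/slot/function address is made up for illustration and should be replaced with whatever Unraid’s System Devices page shows for your NVMe controller:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <!-- hypothetical address: substitute your NVMe controller's PCI address -->
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
     </hostdev>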
  16. If you click on the VM name, it displays the disk info. Just edit the value for the one you want to change.
  17. Fixed: I just used a free partition management tool and moved the recovery partition to the end.
  18. So I am just moving my gaming rig into a VM, and I went to expand my vdisk from 100G to 200G following the Spaceinvader One video. However, the Windows recovery partition is in the wrong place, so I can't extend the C: drive. Is there a way to manage the partitions in a vdisk so I can move it out of the way?
  19. So I switched the RAM out for 4x2GB sticks of ECC I had. After almost 2 months of uptime, I think my RAM was at fault. How do I check the old RAM? I had run extensive passes of memtest with no errors. Or do I just hit eBay and buy another 32GB? Although at that point I might as well swap the platform out for a second-gen Zen platform, to go back to an ATX board and lower power consumption.
  20. I only have 4 sticks of RAM (8GB each). My guess is that it is CPU 0 (the one on the left) and bank 8, the bottom-most slot below that socket.
  21. So I have been doing more troubleshooting, short of changing the RAM or using another PSU, as I don't have either on hand. The system is now on a UPS, and I have also tried reverting to 6.8.2, but with no luck. I had another reboot at around 6am this morning with nothing in the mirrored syslog. However, I am now getting detected hardware errors; might this help explain the issue I have? Edit: looks like a memory issue. Is the log able to tell me if it's one stick or all of it? megatron-diagnostics-20200623-1131.zip
  22. I am evaluating combining my workstation / gaming rig with my NAS, and I am trying to understand the performance hit from moving to a VM. I am losing around 20% of my expected CPU performance in both multi- and single-threaded applications.
     My test system is: Asrock EP2C602, 2x E5-2670, 32GB of RAM, GTX 960 passed through to the VM.
     When I test the system on a clean Windows build (bare metal) I get a Cinebench R15 score of around 1980 (which is as expected). When I move to a VM, assigning only one CPU, I would expect to get around 50% of this, as I plan to use the other CPU for other tasks. Currently I am getting 750-820 in R15 (8 cores / 16 threads), and single-core test performance shows about the same reduction from expected. I have isolated a complete CPU for this testing (the CPU that is directly connected to the GPU); however, I get the same results on a VNC Windows 10 install on the other CPU (without isolation). Results are just as bad if I give it all 16 cores (around 1600), and 4 cores / 8 threads nets 400, so scaling looks constant; I'm just missing 400 points somewhere?
     I have Tips and Tweaks set to performance with boost enabled, and the Win 10 power mode is set to performance. The CPU looks to be boosting to 3.0GHz (which is consistent with my BM testing). I have played around with moving the emulator cores off the VM's CPU to see if removing that overhead helps, but it had zero impact. I get the feeling I am missing something, as most of the videos on Unraid show almost BM performance in most applications.
     XML:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='5'>
       <name>WIN.10.A</name>
       <uuid>73c3fde2-1f62-2524-5410-b8c6087fa87c</uuid>
       <description>Windows 10</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='16'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='18'/>
         <vcpupin vcpu='4' cpuset='4'/>
         <vcpupin vcpu='5' cpuset='20'/>
         <vcpupin vcpu='6' cpuset='6'/>
         <vcpupin vcpu='7' cpuset='22'/>
         <emulatorpin cpuset='8,24'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/73c3fde2-1f62-2524-5410-b8c6087fa87c_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='none'/>
           <source file='/mnt/disks/Unassigned_SSD/WIN.10.A/vdisk1.img' index='2'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/drivers/virtio-win-0.1.160-1.iso' index='1'/>
           <backingStore/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <alias name='ide0-0-1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:01:70:a7'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-5-WIN.10.A/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1a' function='0x0'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
  23. Thanks for the pointers. No pets or children have access to the server, as it's in my office. I even noticed it happen once as the fans spun up, but there was no beep from the motherboard (which I would associate with a restart or shutdown). Memtest wasn't to check the RAM, more to just have something running that would not decide to restart itself; I was concerned about OS updates power cycling it anyway. Is there a better testing methodology? Unfortunately I don't have another PSU to test with; the current one is a Corsair HX unit that is about 8 years old, and it's been running in this system for over a year with no problems.
  24. I am completely stuck on this. I can't see anything in the logs that provides any more indication of the fault, and Asrock's view is that it's the OS causing the issue. I had another random shutdown, and the last item in the log was one of the drives spinning down. I am over 250 hours into running memtest to see if it still happens. Any ideas for troubleshooting, or do I need to build a completely new config?
  25. So Asrock have come back and recommended a fresh install with different HD and RAM. The only thing I think has changed recently is upgrading to 6.8.3.