kakashisensei

Members · Posts: 34

Everything posted by kakashisensei

  1. I am having an issue where, if my unraid server improperly shuts off or resets, the qbittorrent settings revert back to default. Not the docker config, but the in-application settings. I've set the qBittorrent.conf file to read-only for all users/groups, but it still reverts to default. Has anyone experienced this and found a solution? Thanks.
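A workaround I'm considering, as a sketch: snapshot a known-good copy of the settings file and restore it if it ever reverts. This could run from the go file or a User Scripts job; the appdata path is an example for the binhex container and the size comparison is a crude heuristic (a checksum compare would be stricter).

```shell
#!/bin/sh
# Sketch: snapshot a known-good qBittorrent.conf and restore it if it
# shrinks back toward defaults. Path below is an assumption - adjust to
# wherever your container keeps its config.
restore_conf() {
  conf="$1"
  backup="$conf.good"
  if [ ! -f "$backup" ]; then
    cp "$conf" "$backup"    # first run: snapshot current (working) settings
  elif [ "$(wc -c <"$conf")" -lt "$(wc -c <"$backup")" ]; then
    cp "$backup" "$conf"    # conf reverted/shrank: put the snapshot back
    echo "restored qBittorrent.conf from backup"
  fi
}

# example usage (path is an assumption for the binhex container):
# restore_conf "/mnt/user/appdata/binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf"
```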
  2. I had been using the transmission vpn docker for quite a while, but it frequently had issues after docker updates, and it finally stopped working correctly and I was unable to fix it. I tried all the binhex torrent client dockers and this one is the best imo; the others had various issues. The one issue for me with this one is that the move operation from the incomplete to the complete folder seems wasteful for ssds. I have my incomplete folder on an ssd cache-only share. I don't want active torrent downloads keeping my hdds spinning. When complete, downloads move to the complete folder, which is also on ssd cache. The daily mover job then moves completed downloads to the hdd array. With qbittorrent, the move from incomplete to complete seems to copy the entire contents instead of just renaming (changing the pointer to the file location). If the move happens on the same ssd, this basically consumes twice the write endurance. Has anyone noticed this and figured out a solution? Thanks.
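One cause I've seen suggested (an assumption, not confirmed for this docker) is the incomplete and complete folders being mapped as separate volume mounts, which look like different filesystems inside the container, so a "move" has to be a full copy. A quick sketch to check whether two paths actually sit on the same filesystem (where a rename would be cheap); the share paths in the example are assumptions:

```shell
#!/bin/sh
# Compare device ids of two directories. Matching ids mean a move can be
# a cheap rename; differing ids mean mv degrades to copy + delete.
same_fs() {
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# example (paths are assumptions for my setup; run inside the container
# against the container-side paths to see what qbittorrent sees):
# same_fs /mnt/cache/incomplete /mnt/cache/complete && echo "rename possible"
```

Note that paths under /mnt/user go through the fuse user-share layer, which can also report differently than the direct /mnt/cache paths.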
  3. I've been getting this warning in syslog every minute, and eth0 keeps being renamed to some unique string:

Nov 3 14:57:48 Tower kernel: igb 0000:04:00.0 eth0: mixed HW and IP checksum settings.
Nov 3 14:57:49 Tower kernel: igb 0000:04:00.0 eth0: mixed HW and IP checksum settings.
Nov 3 14:57:49 Tower kernel: eth0: renamed from veth3d8769f
Nov 3 14:57:49 Tower kernel: device br0 entered promiscuous mode
Nov 3 14:58:50 Tower kernel: device br0 left promiscuous mode
Nov 3 14:58:50 Tower kernel: veth3d8769f: renamed from eth0

These are the IOMMU groups for the nics:

IOMMU group 28: [8086:1533] 04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 29: [8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

My motherboard has dual intel nics and ipmi. The ipmi shares use of a nic, primarily defaulting to the first one. The ipmi has its own configured ip, so there are two ips on the first nic: one for the ipmi and one for unraid. In order to pass the second nic to a VM, I added a line to the flash boot config that hides that nic from unraid. Any help is greatly appreciated. Thanks.
  4. Nvm, found the answer here: https://forums.unraid.net/topic/73060-same-pci-id-in-multiple-iommu-groups-solved/
  5. My motherboard has dual intel nics, each with its own chip. In the IOMMU groups they have the same hardware id but sit in different groups. This is how they show up:

IOMMU group 28: [8086:1533] 04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 29: [8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

How can I change this command in the flash bootup config so that only one of the controllers can be selected in VM templates? append vfio-pci.ids=XXXX:XXXX Thanks.
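For reference, since `vfio-pci.ids` matches by vendor:device id, it would grab both NICs. The usual workaround is to bind by PCI address instead; a hedged sketch that could run at startup (e.g. from the go file), using the 05:00.0 address from the listing above:

```shell
#!/bin/sh
# Bind only the second I210 NIC (0000:05:00.0) to vfio-pci by address,
# leaving the identically-ID'd first NIC (0000:04:00.0) on the igb driver.
# Requires root; adjust the address to your board.
DEV=0000:05:00.0
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
echo $DEV > /sys/bus/pci/drivers/igb/unbind 2>/dev/null  # release from igb if claimed
echo $DEV > /sys/bus/pci/drivers_probe                   # let vfio-pci pick it up
```

I believe newer unraid builds can also bind by address via a config/vfio-pci.cfg file on the flash drive, which achieves the same thing without a script.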
  6. I've found some configurations that halfway bridge the gap to baremetal. The cpu core assignments that give the best performance are somewhat perplexing.

Passing these hyper-v features improved single thread performance noticeably. Not sure why. Found a blog that mentioned this hyper-v xml config gave him the best results; reading the description for each feature, it's not obvious to me why it performs better. Cpu-z single thread score went up by ~20-30. More importantly, the performance loss while streaming is not as bad with these features on. Before, I'd see 30-40 less cpu-z single thread while streaming the desktop; now it is only ~15 less. Gta5 benchmark results improved a bit as well with these features on.

<vpindex state='on'/>
<synic state='on'/>
<stimer state='on'/>
<frequencies state='on'/>

I've found that passing only the primary core threads 0,1,2,3 to the VM gives the best overall performance and best single thread performance. I also have the emulator pinned on HT 4 and an iothread on HT 5 (haven't noticed a performance improvement with iothread pinning). I get ~320 cpu-z single thread, pretty close to the 330 on bare metal. I get the best gta5 benchmark results with this config. It is about halfway between my baremetal and vm results from earlier. I figure the remaining difference is that baremetal runs 2 threads per core, while this vm config is only 1 thread per core.

Frames Per Second (Higher is better)
Min, Max, Avg
Pass 0, 19.245239, 106.438461, 89.463310
Pass 1, 80.630203, 154.502197, 128.269775
Pass 2, 61.948658, 142.904099, 109.883057
Pass 3, 5.634600, 159.941147, 119.195976
Pass 4, 35.715050, 166.900330, 103.768700

Time in milliseconds (ms). (Lower is better).
Min, Max, Avg
Pass 0, 9.395100, 51.960903, 11.177767
Pass 1, 6.472400, 12.402301, 7.796069
Pass 2, 6.997700, 16.142399, 9.100584
Pass 3, 6.252300, 177.474899, 8.389544
Pass 4, 5.991600, 27.999401, 9.636817

I was under the assumption that passing each primary core thread + its HT pair was optimal, but I am not seeing that. Originally I passed core threads 1,2,3 and their HT pairs 5,6,7, with the emulator pinned to HT 4. That gave much lower single and multi thread performance. It seems that all four cores are critical to getting the best performance: even though that config is 6 threads compared to 4, it yields worse performance because it is only 3 cores. I have also passed all cpu threads 0,1,2,3,4,5,6,7 to the VM, which gives a cpu-z single thread result of ~300 and the best multi thread result of ~1500. But the performance in gta is not as good as passing only the primary core threads 0-3. If I define an emulator pin with this config, I get absolutely atrocious performance, so I didn't define emulator or iothread pins. I don't know why this is; it should give me the closest performance to baremetal, but it doesn't.

So TLDR, to recap, I get the closest to baremetal performance in gta5 and the best cpu-z single thread in the VM with the following:
- add the hyper-v features mentioned above to the xml
- pass only the primary core threads 0,1,2,3 and none of the HT pairs to the VM; HT 4 is the emulator pin
- turn off spectre/meltdown mitigations in both host and vm (if baremetal also had them off)
- pass through a physical NIC; it performs better than the virtual NIC and takes some load off the cpus
  7. I am trying to get more cpu performance out of my win10 VM. I've noticed that in cpu intensive games, performance is quite lackluster, albeit the system is quite old. CPU is an i7 sandybridge mobile 4core/8thread, 3.2ghz all-core turbo, 3.5ghz single core. I have passed 12GB of ram, dual channel ddr3 1600mhz. GPU is a 980ti 6GB w/ the nvidia 446 driver.

On baremetal with spectre mitigations off, I get ~330 cpu-z single thread benchmark score. Passmark v9 cpu mark total score is ~7600. This is the gta5 benchmark result at 1600x900 low settings:

Frames Per Second (Higher is better)
Min, Max, Avg
Pass 0, 62.167404, 119.561455, 102.000427
Pass 1, 94.056564, 165.343918, 139.509125
Pass 2, 77.531998, 155.506470, 125.293236
Pass 3, 89.976601, 162.171799, 136.439087
Pass 4, 48.572926, 200.867737, 125.084503

Time in milliseconds (ms). (Lower is better).
Min, Max, Avg
Pass 0, 8.363899, 16.085600, 9.803881
Pass 1, 6.048000, 10.631900, 7.167990
Pass 2, 6.430601, 12.897901, 7.981277
Pass 3, 6.166300, 11.114000, 7.329278
Pass 4, 4.978400, 20.587601, 7.994596

On the VM I am using Q35-v4.2 OVMF, cpu host/cache passthrough, hyper-v = yes, and spectre mitigations off on both host and VM. I get ~260-270 cpu-z single thread. Interestingly, the passmark cpu mark score only drops to ~7300. These are the gta5 benchmarks at the same settings:

Frames Per Second (Higher is better)
Min, Max, Avg
Pass 0, 16.969919, 86.502945, 72.574181
Pass 1, 46.607010, 125.070351, 101.227242
Pass 2, 47.106037, 136.561646, 94.093735
Pass 3, 64.095169, 130.890060, 99.548531
Pass 4, 35.704082, 161.464798, 88.457596

Time in milliseconds (ms). (Lower is better).
Min, Max, Avg
Pass 0, 11.560300, 58.927799, 13.779005
Pass 1, 7.995500, 21.455999, 9.878764
Pass 2, 7.322701, 21.228701, 10.627700
Pass 3, 7.639999, 15.601800, 10.045352
Pass 4, 6.193300, 28.008001, 11.304852

On the VM, I can only allocate 3 cores and their HT pairs. I have noticed passing all cores to the VM gives quite bad performance.
I keep core 0 for the host and its HT thread for the vm emulator. This is the cpu assignment I found gives the best performance:

<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
  <emulatorpin cpuset='4'/>
</cputune>

Since it is a headless server, I use nvidia gamestream for remote access. This further kills performance; I see cpu-z single thread drop to ~230-240 while streaming the desktop. The gta5 results above were without any streaming. Since online mode is very unoptimized in this game, that can be another 20-50% loss in performance. I see drops to 30fps in game quite often. I don't expect the performance loss to be entirely attributed to the one less core, especially with the huge drop off in cpu-z single thread results.

I have tried all of the following, but nothing significantly bridges the gap between baremetal and VM cpu performance:
1. Changed to cpu model "Sandybridge" instead of cpu host passthrough. Resulted in significantly lower performance.
2. Passed through the 2nd NIC instead of using the virtual NIC. Resulted in slightly more performance.
3. Checked cpu turbo speeds on the host. It does hit 3.2ghz all core in game on the VM.
4. Isolated the cpu cores used by the VM; no noticeable improvement.
5. Changed cpu pinning and emulator pinning, but the above config gives the best performance.
6. Updated kvm and virtio drivers.
7. Changed to i440fx. Resulted in slightly less performance.

I am out of ideas. Does anyone know what else I could try, or have experience with this? Is this the expected performance drop off from baremetal to VM for a sandybridge era cpu?
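For anyone replicating the "mitigations off" part on the host side: on unraid this is a kernel boot parameter, and you can confirm the current state from the console. A sketch (the append line shown is the stock unraid one with the flag added; this is a security tradeoff, only sensible on a trusted, non-exposed box):

```shell
#!/bin/sh
# Show the current Spectre/Meltdown mitigation status on the host.
cat /sys/devices/system/cpu/vulnerabilities/*

# To disable mitigations, add mitigations=off to the append line in
# /boot/syslinux/syslinux.cfg on the flash drive and reboot, e.g.:
#   append mitigations=off initrd=/bzroot
```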
  8. Those errors with the UD disk and the lsi sas2008 don't occur on spin up/down. I can't figure out an exact pattern to when they occur. I put that drive on the intel ich or the jmb585 and it doesn't show that error any more. The disk is a toshiba 2.5" laptop hd. It has a weird behavior where unraid shows it spun down / no green light, but it hasn't actually spun down, and the drive doesn't follow the spin down delay I set. On another note, I am getting errors with my new hitachi hc320 on the jmb585 only. They are exception emask / READ FPDMA QUEUED / hard resetting link. This was only while testing the jmb585 with diskspeed. The benchmark runs fine and there seems to be no problem, other than the errors showing up in the log. The CRC error count also went up from 0. Tried different ports/cables but that didn't resolve it. Haven't used the jmb585 for further testing; no other drives show errors with it. I'll just stick to the lsi card now that it's working with a pcie switch, and run the ssd and that UD disk off the intel ich.
  9. I've been trying to squeeze more performance out of my VM, so I looked into cpu exact model instead of cpu passthrough. I'd never done memory benchmarking on my VM before, but the difference in AIDA64 results was stunning. Here are the results with cpu exact model = sandybridge vs cpu passthrough:

(screenshots: cpu passthrough AIDA64, cpu exact model AIDA64)

But these results don't make sense. If they were right, cpu passthrough should be slow as a snail for me. Tested with passmark v9 memory, and cpu passthrough is faster than cpu exact model (2075 vs 2066 overall). Tested with the GTAV benchmark, and cpu passthrough is noticeably faster, about 10% better frames. Has anyone noticed this as well? Found a thread on reddit with the same behavior. I've seen that ryzen/threadripper users get worse l3 cache performance according to AIDA when using cpu passthrough. Perhaps these wrong AIDA64 results only apply to intel core users? Does AIDA use some algorithm based off detected frequencies? If so, that would seem potentially very inaccurate in some circumstances, even on bare metal.
  10. It might be, but disk spin down/up is erratic due to it being used by a VM. These errors typically appear a few hrs apart, but at no consistent interval. I'll try to set it up so it spins up when I can monitor it.
  11. I got one of these pcie switch cards and can now run both the gpu and the lsi hba off the single pcie slot. Asmedia pcie switch card. I am using this with m.2 to pcie powered riser extensions. Had to use acs override to break up just that one cpu pcie root port, but it works great. I can pass the gpu to a VM while the hba card stays on the host. For graphics benchmarks that aren't bandwidth intensive, I see roughly 1-2% less performance due to the added latency of going through the pcie switch. My gpu was already running an x4 link only, as I have it externally mounted in a separate case with its own power supply. The lower bandwidth doesn't affect performance for what I use it for. Probably not worth the cost, but I plan to reuse this switch card down the road when I move my desktop parts over to upgrade the unraid server. That motherboard is also mini itx, but it supports 2-way x8/x8 bifurcation, so I could run 3 pcie devices off one pcie x16 slot! Might be handy in a couple years when 5/10Gbps networking is available in cheaper switches.

On another note, I started seeing these errors on one of my unassigned devices when connected to the lsi hba. I did not see this before when it was connected to the intel ich or the asm1062. This drive does have a huge and always-increasing dma crc error count that I could never resolve; tried different cables and cleaning the connectors, but that didn't fix it. But this drive has always passed extended smart tests. The lsi hba is flashed to IT mode with the latest P20 firmware. I will try switching controllers to see if the errors go away. Also bought a JMB585 card to test.

kernel: sd 7:0:0:0: [sdf] tag#1505 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
kernel: sd 7:0:0:0: [sdf] tag#1505 CDB: opcode=0x28 28 00 61 b2 a6 98 00 00 88 00
kernel: print_req_error: I/O error, dev sdf, sector 1639098008
  12. If you buy an 8tb elements/easystore/mybook now, you will probably get an air-filled CMR EDAZ drive, not helium. It is a whitelabel hitachi hc320 datacenter drive. It appears to be slightly faster than the helium EMAZ/EZAZ drives, but it runs 10C hotter. It is probably running 7200rpm rather than being spun down to 5400rpm, and uses 1-2W more than the helium drives. The 10TB WD externals also now have a likely chance of containing an air drive. The 12TB are still all helium, I believe. When running preclear on mine while still inside the external case, I saw temps peak at 63C! Room temp was ~26C plus. After shucking it and putting it inside the NAS with a 140mm low rpm fan blowing on it, temps became more reasonable: about 43-47C under load, 40C or less at idle (not spun down) depending on room temp. But that is still 10C higher than my hitachi coolspin drives in the same backplane.
  13. I have a 4 drive array + 240GB ssd cache where one of the data drives recently failed. The array consists of two 3TB drives (one of which is parity) and two 2TB drives (one has failed). I got an 8TB drive I plan to shuck; it is running through preclear now. I want to make the 8TB the new parity drive. Is there a process to rebuild the array and replace the parity drive that doesn't require backing up the data on the failed drive and doing a new config? Thanks.
  14. What are some typical problems that occur with port multipliers?
  15. I am currently using a 6 port Asmedia 1062+1093*2 sata controller for extra ports. The extra ports are for unassigned devices and 1 or 2 array drives. It runs off a mini pcie 2.0 x1 port via a mpcie to pcie riser. The 1062 is a pcie x2 chipset providing 2 sata ports, and the 1093 chipsets are port multipliers. The cache and the majority of array drives use the onboard 4 port intel sata controller. My motherboard is an old mini itx sandy/ivybridge era board with only one pcie x16 slot and that mini pcie x1 slot, and I need to run the gpu off the pcie x16 slot. The Asmedia controller was working fine for ~6 months. Recently one of my array drives died, an 8 yr old WD green 2tb. It was getting many exception emask frozen and failed command: WRITE FPDMA QUEUED errors in the log. It was connected to the Asmedia controller and would often get dropped after a lot of errors; unraid would not detect the drive until a reboot. Sometimes the other drives on the Asmedia would be dropped as well, even though they are fine. Luckily those were unassigned devices and not array drives. Sometimes that drive would seem fine for a while and then spit a lot of these errors again. I suspect the drive dropping is due to the Asmedia controller. I'm looking for a cheap and reliable sata controller, and it seems the JMB585 cards have been reported to work fine. I already have a dell h310 and an ibm m1015, but they simply refuse to work off that mpcie x1 port; all the other non-lsi hba cards I've tried do work off it. I've tried everything I can think of but nothing works. Would the JMB585 be better and more stable than the ASM1062?
  16. Just wanted to share a simple How To for setting up NUT with unraid as the master and windows clients. I am using a Cyberpower UPS, so your settings may differ. Please note that this guide is based on unraid being a physical server. I wanted the UPS to also shut off after the server shuts down, and to power all the devices back on when the power comes back. Because "Turn off UPS after shutdown" in apcupsd doesn't work with Cyberpower UPSes, I need to use NUT. Finding a simple windows client for NUT seemed challenging; I just wanted something with a gui and a little ini editing. The main windows binaries from the NUT team didn't seem straightforward, and other windows clients seemed deprecated or abandoned.

1. Install the NUT plugin from community applications. Turn off the default UPS service in unraid.

2. In Settings -> NUT, configure it as follows:
Start Network UPS Tools service: Yes (when done configuring)
Enable Manual Config Only: No
UPS Mode: Netserver (if you want unraid to be master)
UPS IP Address: 127.0.0.1
UPS Name: (whichever you like)
UPS Monitor Username: (whichever you like)
UPS Monitor Password: (whichever you like)
UPS Slave Username: (whichever you like, perhaps needs to be different from Monitor)
UPS Slave Password: (whichever you like, perhaps needs to be different from Monitor)
UPS Driver: Usbhid-ups (for cyberpower; try different ones if you cannot get it to detect)
UPS Port: auto (if connected to the server via usb)
Shutdown Mode: (whichever you like)
Battery Runtime Left: Battery Level (maybe some users need to use Battery Level Low)
Turn off UPS after shutdown: Yes (if you want the UPS to shut off after the server shuts off, and turn back on when power comes back; if you want the server to turn back on automatically, set the bios power setting to On or Last State)

Apply when done and hopefully it detects the UPS properly. If not, try different UPS driver settings.

3. On the Windows client PC or VM, install the winNUT utility 2.0.0.4a (https://code.google.com/archive/p/winnut/downloads) and follow this guide from step 5 to set it up: https://www.seriouslytrivial.com/2018/09/26/shutdown-windows-computer-and-synology-nas-using-winnut/

Example Monitor command:
MONITOR [enter your ups name]@[your unraid server ip] 1 [monitor username] [monitor password] slave

If it shows access denied in the log file, try using a different UPS monitor username and password. By default I think it shuts down right away once on battery; if you want it to shut down after a certain amount of time on battery, set Shutdown Delay to a time in seconds. Forced will be a shutdown that forces programs to terminate; Normal can show prompts confirming programs to exit on shutdown. For windows 7 and up, to make it run as a service, run the configuration tool as administrator and set it to run as a service. Check in services.msc that the winNUT service is working. I don't know if there is a way to change the shutdown metric to battery level / runtime left on windows clients. If someone knows, please share. Thanks!
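Before touching the windows client, you can sanity-check the Netserver side with NUT's standard upsc client from any linux box (a sketch; the ups name and ip below are placeholders for whatever you configured):

```shell
#!/bin/sh
# List UPS devices exported by the unraid NUT server
upsc -l 192.168.1.100
# Dump all variables for one UPS (battery charge, runtime, status, ...)
upsc myups@192.168.1.100
# Just the status: OL = online, OB = on battery, LB = low battery
upsc myups@192.168.1.100 ups.status
```

If upsc can't connect, the windows client won't be able to either, so fix the server side first.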
  17. I have some experience with this. I'm using a m.2 to pcie riser for an external gpu setup, but I have the m.2 riser on the main gfx pcie x16 slot through a pcie -> m.2 adapter card. The mobo is mini itx and an old 2nd/3rd gen i7, so it doesn't have real m.2 slots. The GPU has its own power supply. When I turn on the egpu while the unraid server is running, it can cause unraid to error and even makes drives in the array drop. Those drives are on a sata hba in a mini pcie x1 slot. I must have the egpu powered on before the unraid server boots up. The riser is powered from the egpu power supply; maybe this plays a role? Or maybe it's a motherboard issue. On the flip side, I can turn off the egpu with unraid on and it's fine, as long as the VM is turned off. I also have a manual script to remove the pci devices after powering off the gpu.
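The removal script boils down to detaching the gpu's pci functions via sysfs. A hedged sketch (the 01:00.x addresses are from my setup; adjust to whatever lspci shows for your gpu, and only run with the VM shut down):

```shell
#!/bin/sh
# Remove the powered-off egpu (audio function first, then the VGA function)
# from the pci tree so the kernel stops touching stale devices.
# Requires root.
echo 1 > /sys/bus/pci/devices/0000:01:00.1/remove   # gpu hdmi audio function
echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove   # gpu itself
```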
  18. Figured out the gta5 hanging was due to it running out of virtual memory. Something with the default paging file wasn't working properly. It's working fine now. My mistake. But going back to the poor performance in general. Is there a big cpu virtualization performance penalty for sandybridge cpus?
  19. I am also seeing a performance reduction with a sandybridge cpu in a VM. Mine is an engineering sample mobile chip, somewhere close to a 2820qm or 2760qm. There's one cpu-z bench I've seen for a 2760qm with scores of 330/1540 single/multi; I only get 280/1440 on my cpu. Perhaps it is an inherent VM performance penalty with the sandybridge architecture? And for cpu intensive games like gtaV, I'm not getting good performance at all. My gpu is a 980ti and it performs fine in non cpu intensive stuff. My avg fps in gta online is ~40-50fps with constant drops to 30fps, and this is at 720p low settings. This is with all 4c/8t passed to the VM, running at ~2.9-3.1ghz in gtaV. For reference, a stock 2600k (3.5ghz all core) with a 980ti maintains 60fps at 1080p high settings. I did notice that I get much better gtaV performance on q35 than on i440; for cpu benchmarks, I don't see a difference between q35 and i440.
  20. Here is the VM xml for the Q35 setup:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Win10_Q35</name>
  <uuid>a0b58003-d5b9-0263-92e4-3d453985f048</uuid>
  <description>remake VM if switching gpu fixes blank screen</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='6'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/a0b58003-d5b9-0263-92e4-3d453985f048_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/Win10_Q35/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-TOSHIBA_MQ01ABD100_15FPT8K0T-part2'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:95:90:7c'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/cache/isos/vbios/my980tidumpkvmedit.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>
  21. I started to use my win10 VM for afk online game farming, like GTA5. I had never used the VM for gaming before, only for gpu machine learning stuff, and the performance for that was fine. But for cpu intensive games, I've noticed the performance is quite poor. For games or benchmarks with low cpu utilization, the gpu performance is fine and comparable to 980 ti review benchmarks.

The system is made from old spare parts: a 4c/8t sandybridge mobile cpu at 3.2ghz, 8gb ddr3, a 980ti 6gb, and an ssd cache for the OS vdisk. The system/cpu does have working vt-d / iommu. Some caveats: the cpu is an engineering sample (roughly equivalent to an intel 2760qm or 2820qm), and the motherboard bios has the proper microcode for the ES cpu. Also, I rigged the 980ti to be external with its own power supply. The gpu connects to the only pcie x16 slot (mini itx mobo) at only pcie 2.0 x4, using a pcie to m.2 adapter card and a m.2 to pcie slot 50cm riser. I made the gpu external so I can power it off when the VMs are off, and because the mini itx case has no space for such a large card. In terms of graphical performance, this has not been an issue, as noted above.

In GTA5, the avg fps struggles to reach even 40 fps at 720p low settings when online. Often it is only 30fps when driving through the city. For reference, a baremetal i7 2600k at stock with the same gpu easily maintains 60fps at 1080p high settings. The 2600k stock is ~3.5ghz all cores; monitoring my cpu frequency, it ranges between 2.9-3.1ghz all cores when playing GTA5. Gpu utilization is only 15-30%. Pcie bus utilization is under 10%, so it's not choking on the limited pcie bandwidth. CPU utilization is usually 50-70%. I have passed all 4c/8t to the VM, and the main cores and ht cores are properly recognized. I've tried passing just the 4 main cores to the VM, but that didn't help. Nothing significant is running on unraid that could be taking up resources. Ram is limited, as I can only allocate 6GB to the VM.
In the end, I figure it could just be poor sandybridge virtualization performance. So next I looked at i440fx vs q35. The original VM was set up as i440fx v3.1 OVMF. I'd had issues in the past getting q35 to even boot, but finally figured it out and spun up a new q35 4.2 VM with a fresh win10 install. Q35 definitely felt snappier, and GTA5 benchmarks were noticeably better than on the i440fx VM. I updated the i440fx machine to 4.2 and updated the virtio drivers, but that didn't improve performance. The OS vdisks are both on the same ssd cache drive, and the game install uses the same vdisk on an unassigned-devices hard drive. Everything is mostly identical between the two VMs, except the i440fx is win10 1909 and the q35 is win10 2004. Here are the benchmark results between the two at 720p low settings:

Q35:
Frames Per Second (Higher is better)
Min, Max, Avg
Pass 0, 6.083806, 92.755768, 70.928749
Pass 1, 38.062321, 123.438507, 89.306267
Pass 2, 37.336281, 135.876953, 86.687500
Pass 3, 36.661327, 114.695999, 84.057823
Pass 4, 21.455866, 142.184814, 80.043892
Frames under 16ms (for 60fps):
Pass 0: 536/656 frames (81.71%)
Pass 1: 811/831 frames (97.59%)
Pass 2: 767/798 frames (96.12%)
Pass 3: 736/777 frames (94.72%)
Pass 4: 6723/8590 frames (78.27%)

i440:
Frames Per Second (Higher is better)
Min, Max, Avg
Pass 0, 14.443707, 88.500267, 64.930435
Pass 1, 39.493382, 128.247879, 84.682068
Pass 2, 19.617767, 114.792107, 80.661552
Pass 3, 28.310246, 109.051247, 78.101311
Pass 4, 14.648345, 139.142059, 69.131287
Frames under 16ms (for 60fps):
Pass 0: 375/600 frames (62.50%)
Pass 1: 734/782 frames (93.86%)
Pass 2: 645/731 frames (88.24%)
Pass 3: 633/719 frames (88.04%)
Pass 4: 4540/7312 frames (62.09%)

Now I would love to transition everything over to the q35 VM, but on that VM I've experienced a few issues. GTA5 would always eventually hang. Sometimes when GTA5 becomes unresponsive, the gpu driver reinitializes; most of the time the GTA5 process would eventually terminate.
The i440 VM is completely stable. The OS page file is set the same for both VMs. I looked at the event viewer and the VM logs but don't see anything indicating what the issue could be. I've tried with hyperV both on and off, and with and without the edited vbios. This is the IOMMU group for the gpu:

IOMMU group 1:
[8086:0101] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 08)
[10de:17c8] 01:00.0 VGA compatible controller: NVIDIA Corporation GM200 [GeForce GTX 980 Ti] (rev a1)
[10de:0fb0] 01:00.1 Audio device: NVIDIA Corporation GM200 High Definition Audio (rev a1)

Is there something I should take a look at? I really appreciate any feedback. Thanks for reading.
  22. Is anyone getting empty folders when they delete a torrent and its data? I am using Transmission remote gui (not the web ui from the docker), and every time I delete a torrent including its data, if it was in a folder, it leaves an empty folder with the download name plus some random text at the end. It only happens when I delete from Transmission remote gui.
  23. I have an egpu setup with my unraid server. I had a spare 980ti that I wanted to use for infrequent rendering tasks in a VM, but I don't want to leave the gpu idling all the time, as it will probably still suck up 20-30W. I only want to power on the egpu when I am about to run some rendering tasks. I rigged up an egpu setup with its own standalone power supply that connects to a pcie slot on the unraid server. However, when I power on the egpu, it can't be initialized by unraid until I reboot the server. Just a minor inconvenience. Is there a console command to reinitialize specific IOMMU groups or pci devices, so I don't need to reboot? I wouldn't want to reinitialize all devices... Thanks, any help would be much appreciated.
  24. Sorry, I never made this because I ended up switching to a motherboard where I don't have cpu speed problems. If you are just suffering from the clock modulation issue, you can follow my posts above to make a startup script; the clock modulation msr key only needs to be set once at startup. However, if you are trying to force a cpu speed, or write some kind of cpu msr that constantly needs to be rewritten, sorry, I can't help you there. I couldn't figure out how to do that.