Scrapz

Members
  • Content Count

    35
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Scrapz

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Melbourne, Australia
  1. I don't really have anything to add, but I found that by playing around with core assignments I was able to get slightly better performance, though still not great. It might just be a case of finding the "sweet spot".
  2. Sounds like you're starving Unraid of processing power. You could try reducing the number of cores you assign to the VM and see if you notice an improvement?
  3. Weird, the file existed for me. And my scaling_governor is set to "ondemand" by default. Different configs for different CPUs? I'll give the "up_threshold" a good run as a test, and see how I go.
  4. Good info. Reading that link, the top comment mentions that because the load is distributed across multiple cores, no single core goes above 95%, so the CPU stays throttled, or at least the scaling kicks in later than it should for persistent loads. Bringing that threshold down to about 50% will make it kick in sooner:
     echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
     Changes are lost after a reboot, so there's no harm in trying it to see if there's a performance boost.
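For anyone wanting to try this, here's a minimal sketch of the up_threshold tweak above, guarded so it's safe to run on systems where the ondemand governor (and its sysfs node) isn't present:

```shell
# Sketch of the ondemand up_threshold tweak (assumes the ondemand governor
# is active; sysfs paths may differ on other kernels/configs).
THRESH=/sys/devices/system/cpu/cpufreq/ondemand/up_threshold
GOV=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Show the current governor for cpu0, if the sysfs node exists.
if [ -r "$GOV" ]; then
    echo "governor: $(cat "$GOV")"
else
    echo "governor: (sysfs node not present)"
fi

# Lower the threshold so scaling kicks in at ~50% per-core load.
# Not persistent: it resets on reboot, so it's safe to experiment with.
if [ -w "$THRESH" ]; then
    echo 50 > "$THRESH"
    echo "up_threshold now: $(cat "$THRESH")"
else
    echo "up_threshold not present or not writable (governor may not be ondemand)"
fi
```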
  5. I managed to get this to work last night, but I don't really know what to do with the results. Unraid is based on Slackware, so you can use the Slackware package manager to install what you need. First, you'll need the netperf package: http://pkgs.org/slackware-14.1/slackonly-x86_64/netperf-2.6.0-x86_64-1_slack.txz.html Then, you'll need the bc package: http://pkgs.org/slackware-14.1/slackware-x86_64/bc-1.06.95-x86_64-2.txz.html Install each package with:
     upgradepkg --install-new {packagename}
     Then you'll need to modify the script so it points to the binaries in "/usr/bin" as opposed to "/usr/local/bin" (for both the NETPERF and NETSERV variables). Let me know what you make of the results. I'd be interested to know how I'm supposed to read them.
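The install steps above can be sketched as follows (package filenames taken from the URLs in the post; fetching the files from pkgs.org is left as a manual step, and upgradepkg is Slackware's package tool):

```shell
# Sketch of the Slackware package install steps described above.
# Filenames come from the pkgs.org links in the post.
NETPERF_PKG=netperf-2.6.0-x86_64-1_slack.txz
BC_PKG=bc-1.06.95-x86_64-2.txz

for pkg in "$NETPERF_PKG" "$BC_PKG"; do
    if [ -f "$pkg" ]; then
        # Installs the package if new, upgrades it if already present.
        upgradepkg --install-new "$pkg"
    else
        echo "download $pkg from pkgs.org first"
    fi
done

# The benchmark script then needs its binary paths updated, since these
# packages install to /usr/bin rather than /usr/local/bin:
#   NETPERF=/usr/bin/netperf
#   NETSERV=/usr/bin/netserver
```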
  6. Could be DPC latency issues. I had similar issues, which were fixed after updating my USB3 drivers. I was also playing around with core assignments last night, after I read a thread about how cores and threads aren't necessarily sequentially paired (i.e. 0-1, 2-3, 4-5, etc.) - https://lime-technology.com/forum/index.php?topic=46664.msg446032#msg446032 - and I noticed a performance boost when choosing different cores. There's no real easy way to work that one out. You can play around with core assignments to see if you notice an improvement. Maybe start at just 1 core+HT, and go from there?
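Rather than guessing at the pairing, Linux actually exposes it in sysfs; this standard (not Unraid-specific) node shows which logical CPUs are hyper-thread siblings of each core:

```shell
# List each logical CPU and its hyper-thread sibling(s), so you can pin
# VM cores to real pairs instead of assuming 0-1, 2-3, ... pairing.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    sib="$cpu/topology/thread_siblings_list"
    # Node exists per-CPU on Linux; guard in case a CPU is offline.
    [ -r "$sib" ] && echo "$(basename "$cpu"): siblings $(cat "$sib")"
done
```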
  7. Sounds like you will need to recreate the VM using OVMF instead of SeaBIOS.
  8. I forget where I saw it, but I recall seeing a post where someone mentioned changing this from "threads=2" to "threads=1" addressed some performance issues they were having. Give that a go?
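If you want to check what the VM currently uses before editing, a quick sketch (the VM name "YourVMName" is a placeholder; substitute your own, and run this on the Unraid host):

```shell
# Show the current CPU <topology> line for a VM. "YourVMName" is a
# placeholder. Guarded in case virsh isn't available where this runs.
if command -v virsh >/dev/null 2>&1; then
    virsh dumpxml YourVMName | grep -i topology
else
    echo "virsh not found; run this on the Unraid host"
fi
# The suggested edit changes the threads attribute, e.g. (example values,
# keeping the same total vCPU count):
#   <topology sockets='1' cores='4' threads='2'/>
# becomes
#   <topology sockets='1' cores='8' threads='1'/>
```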
  9. ACS override just allows you to make use of devices that share a group, when normally you can't. Sometimes it works, sometimes it doesn't. In your case, I'd say you're one of the unlucky ones and your GPU and HDDs are clashing with each other. Provided the 1x slots are in a different group, you could probably trade in the SAS cards for smaller ones. You can trial this with a cheap 1x USB card, and see if it ends up in another group. Maybe search the forums for other Z97 users to see the hoops they jump through.
  10. Looking at the motherboard online, I see there are four 1x slots. You might be able to get away with moving your SAS cards into those? If suitable cards exist?
  11. Edited post just as you replied. Unless someone else has something to say, you might be out of luck, because it looks like all 3 of your PCIe slots are going to be using the same group regardless. Basically, you need to break the GPU away from the group it's in. Somehow.
  12. Your GPU is sharing groups with 2 of your RAID controllers. Try moving the GPU to a different slot. Edit: Actually I just re-read and saw a bit I missed. Unless someone else has something to say, you might be out of luck, because it looks like all 3 of your PCIe slots are going to be using the same group. Basically, you need to break the GPU away from the group it's in.
  13. Well, hardware specs are in the sig. I also run MineOS and Pf-Logstash dockers, along with an OpenELEC VM. I isolate the Win10 VM CPUs in syslinux.cfg with "isolcpus=4-11". XML as follows:

<domain type='kvm' id='1'>
  <name>Scrapz_Windows10</name>
  <uuid>6b1f9934-3dbf-2add-db71-d3d092966b65</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>12582912</memory>
  <currentMemory unit='KiB'>12582912</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Scrapz_Windows10/vdisk1.img'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVdisks/Scrapz_Windows10/vdisk2.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:e4:e2:d7'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Scrapz_Windows10.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  14. Well, it could cause the computer to go into sleep or hibernate. Have you checked the Windows logs? Do you get any "recovered from an error" messages when you boot back up?