Poor performance with stability OR bare metal performance yet crashing unRaid



I had been reading, searching, and learning how to set up my unRaid server for a few months before actually taking the plunge and buying the hardware and software for my home media/gaming/NAS server. Since I purchased everything and started setting it up about a week ago, I have been able to resolve all of my issues using the posts in this awesome forum as my guide. That is, until now. Since my issues span two different versions of unRaid, I thought placing this thread here would be the best bet.

 

My system specs are:

MB- MSI X99A SLI Plus

CPU- Intel Xeon E5 2670 V3

Mem- 2x8GB Kingston DDR4-2133

GPU- (2) MSI GTX960 GAMING 4G

SSD- (1) 250GB SK hynix (cache)

HDD-(2) 3TB Seagate (parity and storage)

 

 

My current issues are (setups listed below):

 

If using unRaid 6.1.9 Seabios Win10Pro64 VMs: it runs completely stable with no crashes over days of use, but when running any application or game the performance grinds to a halt. Windows performance monitor shows CPU0 (displayed as CPU0 by Windows; I keep CPU0 free for unRaid in the VM setup) running at 100% while the other cores (no matter how few or many are assigned) drop to just about idle. If CPU0 affinity is turned off, work is distributed across the other cores, but the application will generally crash. If left alone, applications frequently time out. Even if the application does manage to load successfully, the performance is visually comparable to my 10-year-old 1st-gen i3 laptop.
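A quick way to sanity-check which host core the guest's "CPU0" is actually landing on is libvirt's per-vCPU info from the unRaid console (just a diagnostic sketch; substitute whatever domain name "virsh list" shows for your VM):

virsh list                 # note the VM's name
virsh vcpuinfo Win10Sea    # per vCPU: the physical CPU it is currently running on
virsh vcpupin Win10Sea     # the current pinning (vCPU 0 maps to whichever host core the XML assigns)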

 

If using unRaid 6.1.9 OVMF Win10Pro64 VMs: it is nearly impossible to test properly, as the VM will lock up randomly and intermittently (sometimes during the OS install, sometimes after a reboot, sometimes while just sitting idle), usually dropping all of the passed-through peripheral devices (mouse/keyboard). I have not been able to even install or run a game to see the performance in this mode due to the instability and crashing. unRaid continues to run and the network drives are still accessible; only the VM itself seems to die. Also, restarts will eventually break the VM and make it completely unbootable, even using the console commands.

 

If using unRaid 6.2 beta 20 OVMF or Seabios Win10Pro64 VMs: the performance is OUTSTANDING! I cannot notice any difference visually between this VM’s gaming performance and the bare metal machine. Also, I am able to use OVMF without the issues listed above for 6.19. However, using either bios type, the server/VM are completely unstable and will regularly lock up the entire unRaid server (can’t even power down from console or ssh) seemingly during file transfers between the unRaid shares and the VM’s 2nd vdisk. I have not seen/noticed a lockup outside of transferring/accessing files.

 

I’m not really sure what to do at this point. 6.2 beta 20 obviously fixes whatever the performance issue was, but then introduces severe instability. I’ve tried every combination of CPU cores, OS, and OVMF/Seabios setups on 6.1.9 that I can think of, and the same poor performance issues listed above show up on all of them. Can anyone lend any suggestions on things to try, or does anyone have ideas about what might be causing these issues? I’ll list the things I have done below to give a better idea of where everything stands.

 

 

 

 

Current setup:

 

Using the latest virtio drivers - virtio-win-0.1.113.iso (though the mounted drive shows it as 1.111; is that normal?)
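In case anyone wants to double-check the same thing, the volume label of the ISO can be read straight from the unRaid console (a small sketch; the path assumes the ISO sits in my ISOs share as in the XML further down):

blkid -o value -s LABEL "/mnt/user/ISOs/virtio iso/virtio-win-0.1.113.iso"   # prints the ISO's volume label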

 

As the 2 Nvidia GPUs are in slots 1 & 3, I have dumped the pre-POST vbios from the idle card via the console and inserted the rom into the XML using the Seabios or OVMF tags to correctly pass it through and alleviate the black-screen issue (the TechPowerUp files wouldn’t work for me). I do this on all new test VMs since I sorted this issue out. I have also read that switching from VNC to a dedicated GPU can sometimes cause issues, and doing it this way means I don’t have to deal with changing it later. Hyper-V is also turned off.
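For reference, the dump itself is just the sysfs ROM interface (a sketch of what I mean, assuming the idle card sits at 0000:03:00.0 as in the XML further down; run it while no VM is using that card):

cd /sys/bus/pci/devices/0000:03:00.0
echo 1 > rom               # allow the ROM to be read
cat rom > /boot/vbios.rom  # copy the vbios to the flash drive
echo 0 > rom               # lock it again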

 

To enable MSI interrupts (to fix HDMI audio support) I use the MSI_util linked below. I quickly grew tired of manually adding the registry keys.

http://lime-technology.com/forum/index.php?topic=47089.msg453707#msg453707

 

I have not made any modifications to the XML aside from inserting the vbios rom mentioned above.

 

I have made no hardware modifications since the initial setup, so all tests have been performed on exactly the same hardware.

 

 

Things I have not tried:

I just realized that I have never used “qemu-ga-x64.msi”. Could this cause these sorts of issues?

I have never used the Q35 machine type; I have always used i440fx. Might this help?

Are the stable virtio-win-0.1.102 drivers new? I vaguely remember the version being something with a 9, so it’s possible that I’ve never tried this version of the “stable” drivers.

 

 

 

 

 

 

After having written this all out, I think my next few steps will be…

1) Installing the qemu-ga-x64.msi from now on

2) Trying the stable 0.1.102 drivers. Will these even work with Win10?

3) Trying a VM as a Q35 machine (see the machine-type check sketched after this list)

4) Rinse and repeat all of these options on 6.2 beta 20 again, if nothing works in 6.1.9.
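For step 3, the machine types the bundled QEMU actually supports can be listed from the console before committing to an XML change (a sketch; the emulator path is the one from my VM XML):

/usr/local/sbin/qemu -machine help | grep -Ei 'q35|i440fx'   # list the available Q35/i440fx machine versions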

 

 

I will let you know how my tests go after I get home and try it all out tonight. Until then, does anyone have any other suggestions off the top of their heads? Thanks


So I tried the ideas I had (listed above) for 6.1.9 and the results were the same. Seabios would bang away on CPU0 in Windows with system-crippling performance, and OVMF would just randomly lock up whenever it felt like it. I tried a Q35 machine and it ran, but even worse than i440fx. Late last night I saw Jon's post and went for the update. I'm not going to bother with 6.1.9 from this point on, as no matter what I've tried it just hasn't really worked.

 

Update to beta 21.

 


 

So after upgrading to 6.2 beta 21 here is the current status.

 

I could no longer use the 102 virtio drivers, as the Win10 install now stated that Windows could not be installed on this media after the storage driver was loaded. I used the latest virtio 113 ISO from here on in. I then ran into an issue where both Seabios and OVMF installs would lock up unRaid while trying to transfer files to the second vdisk (for games, etc.). This made me wonder if it was something specific to the share I had set up. For some reason, the share ArrayVdisks (which I had set to "cache-only" in 6.1.9) was set to "cache: no" in this unRaid version. I cleared and deleted the share and remade it as cache-only, which then led to the differing results below. Also note that I am one of the people for whom, for whatever reason, every time I edit the VM via the GUI it dumps the primary vdisk information and I have to re-add it.
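For anyone hitting the same thing, the per-share setting ends up in a small config file on the flash drive, so it's easy to confirm what the upgrade actually left behind (a sketch; I'm assuming the key name is the same across versions):

cat /boot/config/shares/ArrayVdisks.cfg | grep shareUseCache
# shareUseCache="only"   <- what I expected
# shareUseCache="no"     <- what 6.2 had actually set it to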

 

 

Seabios VM on i440fx: Runs, but has the same issues as in 6.1.9 (CPU0 at 100% nearly all of the time); however, programs are now able to run, so this is a marked improvement! Instead of massive machine-crippling pauses, there are now micro-stutters (a tenth of a second to 2 seconds) when that CPU hits 100%. Here are the XML and log files for this VM. I have not done exhaustive testing on this VM regarding file transfers, but I was able to copy over about 30GB to the 2nd vdisk, which was impossible before on OVMF in 6.1.9. At this point the VM appears to be stable, but it is definitely not perfect nor close to bare metal.

 

XML

<domain type='kvm'>
  <name>Win10Sea</name>
  <uuid>292e54e5-9644-cb7f-ae95-b8b3a063fca8</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>7340032</memory>
  <currentMemory unit='KiB'>7340032</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>10</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='4'/>
    <vcpupin vcpu='4' cpuset='5'/>
    <vcpupin vcpu='5' cpuset='13'/>
    <vcpupin vcpu='6' cpuset='14'/>
    <vcpupin vcpu='7' cpuset='15'/>
    <vcpupin vcpu='8' cpuset='16'/>
    <vcpupin vcpu='9' cpuset='17'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='5' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Win10Sea/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVdisks/Win10Sea/vdisk2.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/OS iso/Windows 10 Pro  64bit.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio iso/virtio-win-0.1.113.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:f0:5e:a0'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='connect'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <rom file='/boot/vbios.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x056e'/>
        <product id='0x0035'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1c4f'/>
        <product id='0x0002'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

 

Log

2016-04-09 07:29:50.502+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Beast
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name Win10Sea -S -machine pc-i440fx-2.5,accel=kvm,usb=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -m 7168 -realtime mlock=on -smp 10,sockets=1,cores=5,threads=2 -uuid 292e54e5-9644-cb7f-ae95-b8b3a063fca8 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-Win10Sea/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,id=usb,bus=pci.0,addr=0x7 -device ahci,id=sata0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/mnt/user/vdisks/Win10Sea/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1 -drive file=/mnt/user/ArrayV0,id=hostdev0,x-vga=on,bus=pci.0,addr=0x8,romfile=/boot/vbios.rom -device vfio-pci,host=03:00.1,id=hostdev1,bus=pci.0,addr=0x9 -device usb-host,hostbus=5,hostaddr=3,id=hostdev2 -device usb-host,hostbus=5,hostaddr=8,id=hostdev3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa -msg timestamp=on
Domain id=6 is tainted: high-privileges
Domain id=6 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

 

I will post the OVMF test in the next post as I hit the 20,000-character limit ^^;


OVMF VM on i440fx: This one has been interesting, to say the least. Trying to install with a 2nd vdisk attached crashed the install once, and locked up unRaid more than once while trying to transfer to it. It also had the strange effect of dumping the GPU completely at one point; as in, you couldn't even find it in Device Manager. So I decided to try installing without the 2nd vdisk and was actually able to get it up and running. Here are the XML and log file from before adding the 2nd vdisk.

 

XML

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>a86b24ff-c13e-69ff-3b98-f3b5cf4c4532</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>7340032</memory>
  <currentMemory unit='KiB'>7340032</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>10</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='14'/>
    <vcpupin vcpu='6' cpuset='15'/>
    <vcpupin vcpu='7' cpuset='16'/>
    <vcpupin vcpu='8' cpuset='17'/>
    <vcpupin vcpu='9' cpuset='18'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/a86b24ff-c13e-69ff-3b98-f3b5cf4c4532_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='5' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/OS iso/Windows 10 Pro  64bit.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio iso/virtio-win-0.1.113.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a8:d7:4b'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='connect'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <rom file='/boot/vbios.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x056e'/>
        <product id='0x0035'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1c4f'/>
        <product id='0x0002'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

Log file

2016-04-09 05:17:54.622+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Beast
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'Windows 10' -S -machine pc-i440fx-2.5,accel=kvm,usb=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/a86b24ff-c13e-69ff-3b98-f3b5cf4c4532_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 7168 -realtime mlock=on -smp 10,sockets=1,cores=5,threads=2 -uuid a86b24ff-c13e-69ff-3b98-f3b5cf4c4532 -nographic -no-user-config -nodefaults -chardev 'socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-Windows 10/monitor.sock,server,nowait' -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,id=usb,bus=pci.0,addr=0x7 -device ahci,id=sata0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive '0 -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.0,addr=0x6,romfile=/boot/vbios.rom -device vfio-pci,host=03:00.1,id=hostdev1,bus=pci.0,addr=0x8 -device usb-host,hostbus=5,hostaddr=3,id=hostdev2 -device usb-host,hostbus=5,hostaddr=8,id=hostdev3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 -msg timestamp=on
Domain id=1 is tainted: high-privileges
Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

 

 

I tried attaching a new 2nd vdisk at this point and the VM rebooted itself once and came up without a GPU or audio device listed. I left the VM running while I was typing these messages, and randomly, about 10-15 minutes later, the display auto-switched to 1080p and the display adapter was available again. I clean-installed the Nvidia drivers as the audio was not working, then tried to copy some files over and bam... the VM crashed with an error in the unRaid web GUI, but nothing showed up in the log file.

 

 

Here are the XML and log files after adding and "formatting" the 2nd vdisk, including the crash.

 

XML

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>a86b24ff-c13e-69ff-3b98-f3b5cf4c4532</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>7340032</memory>
  <currentMemory unit='KiB'>7340032</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>10</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='14'/>
    <vcpupin vcpu='6' cpuset='15'/>
    <vcpupin vcpu='7' cpuset='16'/>
    <vcpupin vcpu='8' cpuset='17'/>
    <vcpupin vcpu='9' cpuset='18'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/a86b24ff-c13e-69ff-3b98-f3b5cf4c4532_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='5' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVdisks/Windows 10/vdisk2.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/OS iso/Windows 10 Pro  64bit.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio iso/virtio-win-0.1.113.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a8:d7:4b'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='connect'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <rom file='/boot/vbios.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x056e'/>
        <product id='0x0035'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1c4f'/>
        <product id='0x0002'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

 

Log File (it doesn't even show an error, just that it received a terminating signal)

 

 

2016-04-09T08:03:56.188788Z qemu-system-x86_64: terminating on signal 15 from pid 3347
2016-04-09 08:03:56.462+0000: shutting down
2016-04-09 08:07:08.671+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: Beast
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'Windows 10' -S -machine pc-i440fx-2.5,accel=kvm,usb=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/a86b24ff-c13e-69ff-3b98-f3b5cf4c4532_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 7168 -realtime mlock=on -smp 10,sockets=1,cores=5,threads=2 -uuid a86b24ff-c13e-69ff-3b98-f3b5cf4c4532 -nographic -no-user-config -nodefaults -chardev 'socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-Windows 10/monitor.sock,server,nowait' -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,id=usb,bus=pci.0,addr=0x7 -device ahci,id=sata0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive ',path=/var/lib/libvirt/qemu/channel/target/domain-Windows 10/org.qemu.guest_agent.0,server,nowait' -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.0,addr=0x8,romfile=/boot/vbios.rom -device vfio-pci,host=03:00.1,id=hostdev1,bus=pci.0,addr=0x9 -device usb-host,hostbus=5,hostaddr=3,id=hostdev2 -device usb-host,hostbus=5,hostaddr=8,id=hostdev3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa -msg timestamp=on
Domain id=8 is tainted: high-privileges
Domain id=8 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

 

I'm going to try some file transfers on this VM prior to adding the 2nd vdisk and see if I can make it lock up from there. I also noticed that when the VM is booting, the network connection is always the last thing to come up since it's so slow. I wonder if that is an indication of another issue.


" However, using either bios type, the server/VM are completely unstable and will regularly lock up the entire unRaid server (can’t even power down from console or ssh) seemingly during file transfers between the unRaid shares and the VM’s 2nd vdisk. I have not seen/noticed a lockup outside of transferring/accessing files. "

 

Sounds like the problems I had with 6.2, where any time I tried to transfer files around with Samba, unRaid would stop showing files/folders, so apps etc. would crash. I couldn't even reboot unRaid over SSH.

 

http://lime-technology.com/forum/index.php?topic=47408.375

Yeah, something is very wrong with Samba, I think. Today I was working on my desktop and Windows File History tried to back up files to my Backup share; this worked just fine on 6.1.9, but on 6.2 the unRaid web UI stops responding and the OpenELEC VM crashed.

 

If I type the share into Windows Explorer it causes the window to stop responding. I don't understand it; it's as if Windows sees the shares but something is going very wrong and it can't connect to them.

 

This time I can't even get the diagnostics.

 

And again I can't even get the system to reboot over SSH; I have to manually power it down.

 

EDIT: Some more information.

 

Dockers are running during the problem, BUT it looks like none of them can see the array files. Is this a file system problem?

 

EDIT:

So I SSHed into unRaid to see if I could see the array files:

 

/mnt/user - works, I can see all my share folders

/mnt/user/Tv Shows - works, I can see all my TV show folders

/mnt/user/Tv shows/24 - works, I can see the season folders

/mnt/user/Tv shows/24/Season 1 - nope; when I go into it and type ls it just hangs, nothing happens. What's going on?

 

EDIT: After going to eat and coming back, the unRaid web UI responded :) I managed to get a diagnostics file, then I checked whether the SMB shares were working; they were up and loading, but if I tried to open any file it froze up.

 

 

It looks like unRaid can't see any files/folders after the bug happens.

 

Maybe you can try booting in safe mode with no VMs and test transferring files between two shares to see if that crashes unRaid. I don't know if this is a Samba problem or something else, but maybe it can be triggered in other ways.
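If it helps, a console-only version of that test (no VM and no Samba in the path) could look something like this; the share names are just placeholders:

# create a ~4GB test file on one share, then copy it to another and watch for a hang
dd if=/dev/zero of="/mnt/user/Share1/testfile.bin" bs=1M count=4096
cp -v "/mnt/user/Share1/testfile.bin" "/mnt/user/Share2/"
rm "/mnt/user/Share1/testfile.bin" "/mnt/user/Share2/testfile.bin"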

 

thanks


 

It looks like unRaid can't see any files/folders after the bug happens.

 

Maybe you can try booting in safe mode with no VMs and test transferring files between two shares to see if that crashes unRaid. I don't know if this is a Samba problem or something else, but maybe it can be triggered in other ways.

 

thanks

 

Hmm, I don't think I've run into that issue specifically. But I did try moving a ton of files over the network between the public exported shares; I can move files with no issues even with the VMs running. So it would seem to be specifically related to the VMs and how they are interacting with the shares/file system. I thought that the Seabios VM was working properly, but I decided to do some more testing and randomly pushed a bunch of files around the shares, which ultimately crashed the VM (locked up). I tried to force a shutdown via the web GUI, as unRaid was still running, and got this error: "Failed to terminate process 8853 with SIGKILL: Device or resource busy." After a few more attempts I tried to stop the array, which then locked up unRaid, and I had to hard reboot. Running a VM using OVMF is much worse, in the sense that it will reboot or lock up even with small file transfers, much more frequently.

 

So the current situation running my Win10 VMs are...

6.1.9 Seabios: most stable, but running apps/games is almost impossible due to 100% CPU usage on "CPU0"

6.1.9 OVMF: completely unstable, crashes and reboots regularly

6.2 beta 21 Seabios: mostly stable (can lock up transferring files), still bangs away at "CPU0", but apps and games are "usable"; still not bare metal

6.2 beta 21 OVMF: the OS seems mostly stable, though I have noticed some strange "system has changed" messages; utterly unstable during file transfers


From what I read, it seems we share a lot of issues.

I think it's similar enough to chalk the differences up to human perception or different priorities in testing :)

 

I was posting about the performance/load issues in beta 18/19, which got fixed in beta 20 (big THANKS to Eric ;) ).

While there were no real performance issues in the VM itself, thermal throttling or errors due to an overheating CPU were possible:

A note from the developers

More bug fixes.  In particular, squashed a bug which resulted in Windows 10 VM's running multi-media applications causing host CPU's to peg at near 100%.  This one was a doozy and we had a -beta20 all ready to go which fixed this issue by reverting back to the linux 4.1.x kernel.  (We figured out the issue got introduced by some change in the kernel 4.3 merge window, but kernel 4.2.x is deprecated.)  Not happy with this compromise and not wanting to wait for kvm developers to acknowledge and fix this issue, our own Eric Schultz took the plunge and started "bisecting" the 4.3-rc1 release to find out what patch was the culprit.  It took something like 16 kernel builds to isolate the problem, and the fix turns out to be a truly 1-line change in a configuration file (/etc/modprobe.d/kvm.conf)!  A big Thank You to Eric for his hard work on this!

 

- Add halt_poll_ns=0 to kvm.conf - eliminates high cpu overhead in windows 10 [kudos to Eric S. for this!]
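For anyone curious, the whole fix really is that one line, and it is easy to check whether it is active on a running box (a sketch; the sysfs path is the standard kvm module parameter):

cat /etc/modprobe.d/kvm.conf
# expected to contain: options kvm halt_poll_ns=0
cat /sys/module/kvm/parameters/halt_poll_ns
# should print 0 once the fix is in place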

 

So, since beta 20 I have been investigating the other issue, which, at least for me, seems to be a complete lockup of unRaid as soon as a VM with a vDisk on the array gets some sort of I/O on that disk.

 

 

Even after reading all your posts, I am not sure whether your VMs are running on the array, on a cache drive, or outside of the array.

What are your results if your VM has NO vDisk on any disk that is part of the parity-protected array?

At least for me, moving all vDisks to the cache removes all issues and makes 6.2 as stable and fast as 6.1.9.

 

I posted and summarized my experience and observations in the beta 21 release thread:

 

Still can't use VMs that have at least one vDisk on a physical disk on the array.

Worked until 6.1.9, broken since the first public beta.

 

Happens even in Safe Mode (no plugins) with Docker disabled and a clean "go" file.

 

Problem:

- VMs with at least one vDisk on a physical disk on the array:

    - If the vDisk is the system disk, the VM boots but never gets to the desktop.

    - If the vDisk is a second disk, it boots/works fine until I/O is put on the vDisk, then it becomes unresponsive

- Once the VM becomes unresponsive, it can no longer be shut down or even force-stopped ("resource busy")

    - After trying to force-stop the VM, the unRaid web GUI becomes unresponsive after accessing some pages (VM tab, share details)

- After starting the VM, I have trouble accessing shares

    - Explorer hangs with "no response" after opening an SMB share

    - Even mc through SSH locks up the whole SSH session after trying to access /mnt/user/"share name"

- More details in my earlier posts regarding the issue (I am not the only one with this issue, it seems):

    - http://lime-technology.com/forum/index.php?topic=47744.msg457766#msg457766

    - http://lime-technology.com/forum/index.php?topic=47875.msg459773#msg459773

 

How to reproduce:

- Start a Windows VM with at least one vDisk on a physical disk on the array

- put some I/O on that vDisk (booting/copying files)

 

Diagnostics were taken after I tried to force shut the VM.
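One thing that might help narrow it down the next time a VM refuses to die: check whether the qemu (or unRaid's shfs) process is stuck in uninterruptible I/O sleep, i.e. "D" state (a diagnostic sketch; <qemu-pid> is a placeholder for the actual PID):

ps -eo pid,stat,wchan:30,cmd | grep -E 'qemu|shfs' | grep -v grep
# a "D" in the STAT column means the process is blocked in the kernel waiting on I/O
cat /proc/<qemu-pid>/stack    # kernel stack of the hung process, if the kernel exposes it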

 

I physically removed the NVMe cache (and moved libvirt.img to the array); same issue.

Eric contacted me via PM last weekend to see if he could get any info from me regarding the issue. I took a very simple VM (no passthrough and a very basic installation) that can reproduce the crash, and he asked me to change the path to the vDisk from /mnt/user to /mnt/disk5. I gave it a try, although I had tried it in earlier 6.2 versions.

What I got was a very strange result that I don't fully understand.

So I gave it a shot (although I had tried that back in beta 19/20), changed the path to disk5, and booted into safe mode with Docker disabled.
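To make sure the running definition really picked up the new path, I find it easiest to check from the console (a sketch; "TestVM" is a placeholder for the domain name):

virsh dumpxml TestVM | grep 'source file'
# should now show /mnt/disk5/... instead of /mnt/user/...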

 

TL;DR: Better than before, not really fixed (maybe more than one problem).

Depending on how much time you want to spend on this case, you can read on for a more detailed report of my day.

 

Attempt 1:

The good news is that on the first boot I actually got to the login screen and could log in (I did not really try anything else).

My first thought was, "Wait a minute, that can't be it, I already tried that."

So I shut down the VM and rebooted the server.

 

Attempt 2:

This time I also got to the login screen, but after logging in all I got was "Welcome" with a spinning circle.

"OK," I thought, "it's progress, but not fixed."

 

As I mentioned, in that state I could not access some/many shares, so I tested how that would behave after changing the path.

It seemed that I couldn't access any folder on disk5 except the one with the vDisk image.

I could access a folder/share on disk1 that has an exclude for disk5, but another folder/share on disk1 that has a folder on disk5 did not work.

 

But since everything hangs after accessing a "wrong" share, I needed to restart the server, so I did.

 

Attempt 3:

The third boot looked like the first; I was able to log in and am still confused as to why.

I have an Android tablet and a laptop for this maintenance stuff; the latter has some shares mounted through SMB.

Maybe it has something to do with accessing a share before/after the VM gets started... (some race between Samba/libvirt?)

 

Attempt 4:

Back to the spinning "Welcome" circle after logging in...

 

So while it's definitely better with "/mnt/disk5" instead of "/mnt/user", 50/50 after 4 attempts seems odd.

 

 

In the end I had probably between 15 and 20 reboots...

On a scale from 1-10, the range of success went from:

 

1) Spinning "Welcome" circle

2) Successful login, then "no response" after opening the properties of drive C:\

4) Successfully opening the properties of drive C:\, then hanging after 2-3 seconds of scandisk

5) Successfully running scandisk and even defrag

6) Being able to run a disk benchmark (although with very bad speeds: 10 MB/s sequential, 0.3 MB/s 4K random writes)

7) Hanging after Disk Cleanup

8) "Some time of normal usage" before getting no response

9) Slow but steady VM

10) Normal

 

 

The strange thing is, it "gradually" scaled from the 1-2 range after 4 tries to 4 after that, while the last 5 attempts always got me to 6...

Because a lot of time had gone by today, I thought, "OK, a slow VM is a good start, let's report back."

Rebooted into normal mode (plugins & Docker), and even the VM on the array was "working".

 

While typing this answer (which is probably way too detailed...), the VM became unresponsive while running the native Disk Cleanup tool.

 

Do you know a way to increase the log/debug output of libvirt? The libvirt log in its current state does not seem very helpful.

 

 

To me it seems like libvirt/Samba are somehow trying to get exclusive access to some parts of the array, which eventually leads to a lockup.

SMB 3.0, and therefore the current Samba version, has a lot of new file locks and ACLs to prevent unwanted access. For Microsoft, SMB3 is a way to make it a shared-storage protocol for Hyper-V. Maybe there is an issue with "local" access while Samba needs more exclusive access to create a share.
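If the lock theory is right, it might even show up in Samba's own lock table while a VM is running (just a diagnostic idea, not a fix):

smbstatus -L    # list the SMB locks currently held
smbstatus -S    # list the shares currently in use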


Ok, a few more questions on this:

 

1. Do you have disk shares enabled or disabled? (you can check this from the Tools -> Global Share Settings page)

 

2. Do the shares containing your vdisk(s) have SMB export set to yes?  (you can check this from the Shares page by clicking on the share and checking the SMB export option)

 

If either of these are yes, please try disabling them and reporting back.  This will help us narrow down where the issue may be stemming from.


Ok, a few more questions on this:

 

1. Do you have disk shares enabled or disabled? (you can check this from the Tools -> Global Share Settings page)

 

2. Do the shares containing your vdisk(s) have SMB export set to yes?  (you can check this from the Shares page by clicking on the share and checking the SMB export option)

 

If either of these are yes, please try disabling them and reporting back.  This will help us narrow down where the issue may be stemming from.

My diagnostics can be found in the release threads (the one I just quoted), settings are/were:

1. disk shares are enabled

2. share containing the vms:

    - Split as required

    - include all, exclude none

    - cache only (I guess I should change that?)

    - export no, security private

 

I'll disable the disk shares, disable docker, boot into safe mode, use it until shares/webgui are broken and post a new set of diagnostics tomorrow.

 

Two things:

- What would be more helpful, "/mnt/user/VMs" or "/mnt/disk5/VMs" as the path for the vDisk?

      (Eric asked me to try /mnt/disk5 and it may have reduced the issue, but it could have been a coincidence)

- Any way to increase the logging to be more verbose? The libvirt log and syslog are usually just as empty as when there are no issues.


 

Even after reading all your posts, I am not sure, if your VMs are running on the array, a cache drive or outside of the array.

What are your results, if your vm has NO vDisk on any disk that is part of the parity protected array?

At least for me, moving all vDisks to the cache removes all issues and makes 6.2 as stable and fast as 6.1.9

 

 

Sorry about that. I have been using cache-only for the vdisk share, which is on the SSD (cache-only selected), formatted as btrfs. I intend to expand the pool at some point once all of this is up and running.

 

Ok, a few more questions on this:

 

1. Do you have disk shares enabled or disabled? (you can check this from the Tools -> Global Share Settings page)

 

2. Do the shares containing your vdisk(s) have SMB export set to yes?  (you can check this from the Shares page by clicking on the share and checking the SMB export option)

 

If either of these are yes, please try disabling them and reporting back.  This will help us narrow down where the issue may be stemming from.

 

Disk shares were set to Auto, and I had the vdisk share's SMB export set to yes. After disabling them, I also decided to disable Docker and removed the image and share for it (I'm not using it at the moment anyway). When testing the Seabios VM, it still pushes CPU0 to 100% during games, and I was able to make that VM crash by trying to run a program from a public exported share on the array. Unfortunately, unRaid completely locked up before I could grab the diagnostics from it. Also, somewhere during these changes and tests, my OVMF VM on the same vdisk share died as well. I tried fs0: from the EFI shell, but there are no directories listed after that to try and force a boot. I will put together another OVMF vdisk tomorrow after work and try to grab diagnostics when it crashes.

 

EDIT: Interesting side note...

 

I am also setting up a second unRaid server with roughly similar hardware, though this install originated from 6.2 beta 21 and used the virtio drivers downloaded directly through the unRaid web GUI (112-1, I think). I've had absolutely no issues on this second server (after setting up the vbios.rom and the MSI interrupts). Shares handle I/O without issue, external PCs can see and access the shares without user/password prompts, etc. The Win10 VM runs 100% solid on OVMF and has not crashed once in something like 6 hours of testing. War Thunder still seems to bang away at CPU0, but I'm starting to think that is related to that particular piece of software in a virtual environment, as Adobe Premiere etc. spread the workload across CPUs with no issues. Basically, this second server setup has run ~100% "straight out of the box".

 

I'm wondering if some settings, corruption, etc. might have been carried over from the 6.1.9 upgrade on the 1st server this post was originally made about. I plan on doing a complete clean install and reformat of the drives when I get home tonight to see if this solves the previous issues *fingers crossed*. If not, I will post the diagnostics after a crash.


Ok, a few more questions on this:

 

1. Do you have disk shares enabled or disabled? (you can check this from the Tools -> Global Share Settings page)

 

2. Do the shares containing your vdisk(s) have SMB export set to yes?  (you can check this from the Shares page by clicking on the share and checking the SMB export option)

 

If either of these are yes, please try disabling them and reporting back.  This will help us narrow down where the issue may be stemming from.

Eric asked me to switch the vDisk to IDE, so I did.

 

OK, I would say going to IDE improved stability. All of the Windows disk tools were able to finish.

Even the disk benchmark did not crash the system, although Task Manager did fail to show any values from time to time.

 

Performance, however, was a mixed result: it went from 0.xx KB/s to very short bursts of 100 MB/s and back to 0 KB/s.

So according to my scale it would probably be an 8.5... very slow, but seemingly working...
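To separate what the guest sees from what the array itself can sustain, a plain write test on the host side is a quick sanity check (a sketch; the target file is a placeholder, and conv=fdatasync makes dd wait until the data has actually hit the disk):

dd if=/dev/zero of=/mnt/disk5/testfile.bin bs=1M count=1024 conv=fdatasync
rm /mnt/disk5/testfile.bin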

 

Hoping I might have an improved solution, I rebooted into normal mode.

That one VM was still working as "well" as in safe mode.

 

So I went ahead and started a second VM that has a system vDisk on the cache and an additional vDisk on the array (which I also changed to IDE).

It's a Server 2012 R2 VM and uses the second disk as a backup target.

After ~1GB of data written, the first VM completely freezes: no mouse movement through VNC and the clock stops moving forward.

The Server 2012 R2 VM itself stops writing anything to the backup disk, but is still able to abort the backup and shut down.

 

"Virsh list" however shows both VMs as running and "virsh destroy" still doesn't do anything.

 

I was able to reproduce that three times.

1) normal mode, with disk shares (attached in next post due to file size restrictions)

2) safe mode, with disk shares (attached)

3) safe mode, no disk shares (attached)

 

jonp's suggestion to disable disk shares had no effect that I am aware of.

The system still crashes just as fast, and processes that access a disk in the array get stuck (web GUI, mc, ...).

 

*A new side note:

I had to copy the second disk to the array first, because I had placed it on the cache until we find a solution.

The first VM was already running. Copying the disk through "mc" within an SSH session ran at a constant 60-62 MB/s (cache -> array).

I even started the benchmark in the VM, but it did not drop.

So while everything in the VM is slow, everything else is working fine (until the VM goes off a cliff...).

 

safe_no-disk-share_unraid-diagnostics-20160413-2130.zip

safe_with-disk-share_unraid-diagnostics-20160413-2122.zip


Latest updates..

 

The 2nd machine has been running for over 5 days with no issues. The VM still runs like a champ; I'd roughly estimate (just by feel) 96% of bare-metal performance. The system log shows smooth sailing. I'm extremely happy with it, and I haven't even gotten into the juicy Dockers and plugins yet.

 

The 1st machine (the problem child this post was originally about)...

 

So I pulled off all the files and did a from-scratch reformat and clean install of everything: array disks, cache, and the unRaid flash drive. Everything. I reinstalled directly to beta 21 and unRaid seemed to be doing alright, minus a nagging USB error I had seen since I started building this rig back on 6.1.9. My VMs still had the same issues as before, though I could not get an OVMF Win10 VM to install. I decided to just start playing with the motherboard CMOS settings and ended up getting it locked into UEFI without USB legacy support (whoops), which led me to jumper-clearing the BIOS. After doing a CMOS default-settings reset, I was amazed to see how quickly unRaid booted on this machine (the fastest it's ever gone). I also noticed that the previous USB error message mentioned above was gone.

 

unRaid now seems to run solid on this machine. I'm having zero issues with file transfers into and out of VMs so far, and I haven't had a lockup yet. The only strange thing is that when I try to install a Win10 VM, I still can't use the stable 112-1 virtio drivers (a weird media error for all the files), yet I can "downgrade" to them from inside the VM. Installing Win8.1 first, then using the 112-1 virtio drivers, seems to work, but I have not yet done an upgrade from 8 to Win10 (tonight maybe). I'm also not getting 60 FPS in War Thunder in the VM with a GTX 960, when the other server's VM can do it with a GTX 950 no problem.

 

I'm hesitant to say that my issues are fixed just yet, as this machine still doesn't quite "feel right", but I/we can't exactly diagnose a feeling until an error actually rears its head. That being said, resetting my BIOS to defaults seems to have resolved MANY of the issues I was having.

  • 5 months later...

Hi

 

I am having very similar issues. Basically, transferring from an unRaid share to a second Windows vDisk shows very poor performance and will make the unRaid web UI, Dockers, etc. unreachable.

 

I am really pulling my hair out over this so ANY help would be appreciated.

 

I tried changing the vdisk type to qcow2 instead of raw and got the best results yet, in that most SMB transfers would finish without breaking unRaid; however, things like allocating disk space in Steam would crash.
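In case anyone wants to try the same thing, an existing raw vdisk can be converted with qemu-img (a sketch; the paths are just examples, and the disk's driver type in the VM's XML has to be changed to qcow2 afterwards):

qemu-img convert -p -O qcow2 /mnt/user/ArrayVdisks/Win10/vdisk2.img /mnt/user/ArrayVdisks/Win10/vdisk2.qcow2
qemu-img info /mnt/user/ArrayVdisks/Win10/vdisk2.qcow2    # confirm it now reports format: qcow2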

 

One similarity I noticed with the OP is the X99 platform. Maybe a clue?

 

I have reset the CMOS, wiped the unRaid flash drive and started fresh, and removed all hardware that was not needed. I've also tried all different network configs.

 

Diagnostics attached.

unjosh-diagnostics-20161008-1115.zip
