Posts posted by jayseejc

  1. For a read test we can just use the same files we're creating with the write test, dd'ing them to /dev/null. Here are my most recent tests, run in safe mode with the mover, Docker, and virtual machines disabled. For some reason I'm now getting better speeds all around. I'll try again in safe mode with just direct IO turned off, and update this post with the results.

     

    I should also note that misc is an actual share on my system (the place to put stuff when there's no other place to put it). It's a pretty basic share: use the cache and move when applicable, all the usual defaults.

     

    Script used for testing.

    #!/bin/bash
    for i in disk{1..6} cache user user0;
    do
      echo "$i write:"
      dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000
      # Needed to flush the ram cache and not get ridiculous read speeds
      echo 3 > /proc/sys/vm/drop_caches
      echo "$i read:"
      dd if=/mnt/$i/misc/test-$i of=/dev/null bs=1024 count=10240000
    done

     

    Read and write test. Direct IO enabled:

    disk1 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 83.4614 s, 126 MB/s
    disk1 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 88.6699 s, 118 MB/s
    disk2 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 43.989 s, 238 MB/s
    disk2 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 66.3664 s, 158 MB/s
    disk3 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 36.161 s, 290 MB/s
    disk3 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 64.9237 s, 162 MB/s
    disk4 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 36.0347 s, 291 MB/s
    disk4 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 63.0174 s, 166 MB/s
    disk5 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 51.0516 s, 205 MB/s
    disk5 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 90.1883 s, 116 MB/s
    disk6 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 141.119 s, 74.3 MB/s
    disk6 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 118.275 s, 88.7 MB/s
    cache write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 108.233 s, 96.9 MB/s
    cache read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 23.9585 s, 438 MB/s
    user write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 192.514 s, 54.5 MB/s
    user read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 111.867 s, 93.7 MB/s
    user0 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 159.209 s, 65.9 MB/s
    user0 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 124.317 s, 84.3 MB/s

     

    Read and write test. Direct IO disabled:

    disk1 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 97.0911 s, 108 MB/s
    disk1 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 66.7662 s, 157 MB/s
    disk2 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 36.0981 s, 290 MB/s
    disk2 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 65.3774 s, 160 MB/s
    disk3 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 61.8678 s, 169 MB/s
    disk3 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 54.0103 s, 194 MB/s
    disk4 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 61.2027 s, 171 MB/s
    disk4 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 56.765 s, 185 MB/s
    disk5 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 75.509 s, 139 MB/s
    disk5 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 76.2456 s, 138 MB/s
    disk6 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 151.803 s, 69.1 MB/s
    disk6 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 124.251 s, 84.4 MB/s
    cache write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 79.4928 s, 132 MB/s
    cache read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 37.0443 s, 283 MB/s
    user write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 341.036 s, 30.7 MB/s
    user read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 21.237 s, 494 MB/s
    user0 write:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 342.749 s, 30.6 MB/s
    user0 read:
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 68.6197 s, 153 MB/s

     

    Looking at the results, direct IO seems to have SOME effect, but I'm still seeing about half the write speed vs writing to the disk directly.
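
    Separately from the share-level Direct IO setting, a dd-level O_DIRECT run would take the page cache out of the write path entirely. A minimal cross-check, untested here and assuming shfs and the underlying filesystems accept O_DIRECT:

    # dd-level O_DIRECT (distinct from unraid's Direct IO tunable); 1 MiB
    # blocks keep the request count reasonable. Same 10 GB total.
    dd if=/dev/zero of=/mnt/user/misc/test-direct bs=1M count=10240 oflag=direct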

  2. I'm running into some strange performance issues. Testing via ssh has led me to believe this is some sort of issue in shfs, but I could be wrong. Simply put, writing to the array (a cached share) gives me about 20 MB/s, while writing to any of /mnt/diskX or /mnt/cache directly gives the expected speed. Writing to /mnt/user goes to the cache drive, as expected. I tested with dd to get some hard numbers, but the issue is also observed via smb and via 9p mounts in a VM. Any ideas? Observed on Unraid 6.5.

     

    jon@core:/mnt$ for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done
    disk1
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 125.09 s, 83.8 MB/s
    disk2
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 114.943 s, 91.2 MB/s
    disk3
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 74.9314 s, 140 MB/s
    disk4
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 93.4166 s, 112 MB/s
    disk5
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 109.873 s, 95.4 MB/s
    disk6
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 155.933 s, 67.2 MB/s
    cache
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 107.349 s, 97.7 MB/s
    user
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 387.066 s, 27.1 MB/s
    user0
    10240000+0 records in
    10240000+0 records out
    10485760000 bytes (10 GB, 9.8 GiB) copied, 517.551 s, 20.3 MB/s
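
    Worth noting: with bs=1024 each of these runs is ten million 1 KiB write() calls, which amplifies any per-request overhead in a FUSE layer like shfs. A variant with larger requests (untested, same 10 GB per target) would separate per-call overhead from raw throughput:

    # 1 MiB requests instead of 1 KiB ones, same total size per target
    for i in disk{1..6} cache user user0; do
      echo $i
      dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1M count=10240
    done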
    


  3. I'm having some issues with GPU passthrough to a Linux VM for desktop use. I'm running Unraid 6.4.0_rc21b on a Ryzen system (R7 1700X + Asus Prime X370-Pro), and my GPU is an RX 580 (this one: https://www.newegg.ca/Product/Product.aspx?Item=N82E16814202293 ).

     

    When I pass through the GPU to a Windows 10 VM, I have almost no issues, except that the actual BIOS and boot screen (with the swirly dots) don't show up, and I just get to the login screen a moment later. I can reboot, shut down, etc. with no problem.


    When I pass through to a Linux VM, though (Ubuntu 17.10, for the sake of documenting this issue), things are a little different. I first install the VM with a VNC screen to confirm that the base system installs correctly, selecting updates and third-party drivers as I go, and everything installs fine. I then pass through my graphics card, alongside some other PCI devices like audio and USB controllers. On the first boot, everything works as well as it does with a Windows VM. I install any updates available via apt update and apt dist-upgrade. When I go to reboot or shut down, though, something fails. Instead of shutting down, the VM gets stuck. If I run lspci looking for my graphics card at this point, before the VM has actually shut down (from what I can tell), all I find is this:

     

    0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/580] (rev ff) (prog-if ff)
            !!! Unknown header type 7f
            Kernel driver in use: vfio-pci

     

    Looking in /var/log/syslog, I see that it has been flooded with a somewhat randomly ordered stream of messages like the following:

     

    Jan 11 18:32:43 core kernel: IOTLB_INV_TIMEOUT device=0a:00.0 address=0x00000007fb631070]
    Jan 11 18:32:43 core kernel: AMD-Vi: Completion-Wait loop timed out

     

    If I try to force stop the VM and then start it again, I get the error:

    internal error: Unknown PCI header type '127'

    The only way I can get the graphics card usable again is to reboot the entire system.
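
    From what I've read, the usual sysfs-level recovery attempt is to detach the device from the bus and rescan, though it rarely seems to work once the card reads back as rev ff. A minimal sketch, using the PCI address from the lspci output above:

    # Run as root on the unraid host; often ineffective for this reset bug.
    echo 1 > /sys/bus/pci/devices/0000:0a:00.0/remove
    echo 1 > /sys/bus/pci/rescan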

     

    Now I'm no expert, but from the research I've done this looks very much like the infamous PCI reset bug. It's strange, though, that it only manifests with Linux guests.

     

    I'm curious whether anyone out there has had similar issues. Does anyone know a fix, or at least why Windows works and Linux doesn't?

  4. 34 minutes ago, ars92 said:


    Woah are you saying that currently you have a ryzen system with only one GPU which is the 960 and yet you're passing through that GPU to the windows VM and unraid isn't giving any issue due to not having a GPU of its own?

    this is great!!


    Sent from my iPhone using Tapatalk

    That is exactly what I'm saying. No qemu BIOS shenanigans or anything. The only downside is that there's no physical console on the system to go to if something goes wrong network-wise, but that's really rare and only happens when I'm messing around with stuff.

  5. Actually, there's a really interesting other way I've found to do it, which should in theory work with any GPU (I have a single 960 myself). Assuming you're booting in UEFI mode, which is only available in the beta right now, adding "video=efifb:off" to the kernel boot parameters prevents the Linux kernel/unraid from attaching to any graphics adapter at all. When you boot, the screen will freeze until you start a VM attached to the GPU, but it should work in a more foolproof manner than dealing with the Nvidia BIOS trick.
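
    For reference, the flag goes on the append line of syslinux.cfg on the flash drive. A minimal sketch, assuming the stock layout (your labels may differ):

    label Unraid OS
      menu default
      kernel /bzimage
      append video=efifb:off initrd=/bzroot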

  6. I use a keyfile 1 MB in size. I upgraded to rc9f, input my keyfile, and the server was unable to mount any of my disks, with the following error repeated for each one. Reverting to rc8q allowed me to mount the drives again.

    Sep 28 17:26:33 core emhttpd: Opening encrypted volumes...
    Sep 28 17:26:33 core emhttpd: shcmd (2902): /usr/sbin/cryptsetup luksOpen /dev/md1 md1 --key-file /root/keyfile
    Sep 28 17:26:35 core root: No key available with this passphrase.
    Sep 28 17:26:35 core emhttpd: shcmd (2902): exit status: 2
    Sep 28 17:26:35 core emhttpd: shcmd (2903): /usr/sbin/cryptsetup luksOpen /dev/md2 md2 --key-file /root/keyfile
    Sep 28 17:26:38 core root: No key available with this passphrase.
    Sep 28 17:26:38 core emhttpd: shcmd (2903): exit status: 2
    Sep 28 17:26:38 core emhttpd: shcmd (2904): /usr/sbin/cryptsetup luksOpen /dev/md3 md3 --key-file /root/keyfile
    Sep 28 17:26:40 core root: No key available with this passphrase.
    Sep 28 17:26:40 core emhttpd: shcmd (2904): exit status: 2
    Sep 28 17:26:40 core emhttpd: shcmd (2905): /usr/sbin/cryptsetup luksOpen /dev/md4 md4 --key-file /root/keyfile
    Sep 28 17:26:42 core root: No key available with this passphrase.
    Sep 28 17:26:42 core emhttpd: shcmd (2905): exit status: 2
    Sep 28 17:26:42 core emhttpd: shcmd (2906): /usr/sbin/cryptsetup luksOpen /dev/md5 md5 --key-file /root/keyfile
    Sep 28 17:26:44 core root: No key available with this passphrase.
    Sep 28 17:26:44 core emhttpd: shcmd (2906): exit status: 2
    Sep 28 17:26:44 core emhttpd: shcmd (2907): /usr/sbin/cryptsetup luksOpen /dev/md6 md6 --key-file /root/keyfile
    Sep 28 17:26:47 core root: No key available with this passphrase.
    Sep 28 17:26:47 core emhttpd: shcmd (2907): exit status: 2
    Sep 28 17:26:47 core emhttpd: Failed opening encrypted volumes: Wrong encryption key
    Sep 28 17:26:47 core kernel: mdcmd (43): stop 
    Sep 28 17:26:47 core kernel: md1: stopping
    Sep 28 17:26:47 core kernel: md2: stopping
    Sep 28 17:26:47 core kernel: md3: stopping
    Sep 28 17:26:47 core kernel: md4: stopping
    Sep 28 17:26:47 core kernel: md5: stopping
    Sep 28 17:26:47 core kernel: md6: stopping
    Sep 28 17:26:47 core emhttpd: shcmd (2908): rmmod md-mod
    Sep 28 17:26:47 core kernel: md: unRAID driver removed
    Sep 28 17:26:47 core emhttpd: shcmd (2909): modprobe md-mod super=/boot/config/super.dat
    Sep 28 17:26:47 core kernel: md: unRAID driver 2.9.0 installed
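
    For anyone hitting the same thing, the keyfile can be checked against a disk's LUKS header directly, without mapping anything (device name taken from the log above):

    # Exit status 0 means the keyfile unlocks the header; 2 matches the
    # "No key available with this passphrase" failure in the log.
    cryptsetup luksOpen --test-passphrase /dev/md1 --key-file /root/keyfile && echo OK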
    

     

  7. Hmm, that doesn't seem to help. Civ still crashes, Space Engineers still hangs. I've moved the VM to cores 2,3,6,7 and set isolcpus to the same cores, but there's effectively no difference in performance (benchmarks are identical). Not sure what could be up here...

    I should mention that Civ only crashes when I go to start the game proper (moving from the 2D menu to the 3D game), so maybe it's something to do with drivers there?
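
    For anyone checking their own setup, roughly what that isolation looks like (syslinux append syntax assumed):

    # syslinux.cfg: keep unraid's scheduler off the cores the VM is pinned to
    append isolcpus=2,3,6,7 initrd=/bzroot

    # after a reboot, confirm the parameter actually reached the kernel
    grep -o 'isolcpus=[0-9,-]*' /proc/cmdline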

  8. Hey all! I'm trying to figure out why some of my games run really slowly or flat-out crash, when they ran fine on bare metal before I started trying out unraid. I'm running these games in a Windows 10 VM from an unraid mapped network drive. Some games that crash or otherwise fail to run are Civilization V and Space Engineers, while games like Rocket League and Bioshock Infinite work perfectly. For the games not working correctly, I've tried installing them on the virtual C: drive, with no difference. My CPU is an AMD FX-9590 and my GPU is a GTX 960.

     

    The config for my VM, in case it's useful:

    <domain type='kvm' id='1'>
      <name>Windows 10</name>
      <uuid>630f4fbd-1aaa-75a9-266c-f8fa63d8960b</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>17301504</memory>
      <currentMemory unit='KiB'>17301504</currentMemory>
      <memoryBacking>
        <nosharepages/>
        <locked/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='3'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/630f4fbd-1aaa-75a9-266c-f8fa63d8960b_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor id='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/ssd-vdisks/Fresh Windows/vdisk2.qcow2'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/hdd-vdisks/Misc/starcraft2.qcow2'/>
          <backingStore/>
          <target dev='hdd' bus='virtio'/>
          <alias name='virtio-disk3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/hdd-vdisks/Misc/hearthstone.qcow2'/>
          <backingStore/>
          <target dev='hde' bus='virtio'/>
          <alias name='virtio-disk4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </disk>
        <controller type='usb' index='0' model='nec-xhci'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:96:8d:97'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='52:54:00:9c:4a:8d'/>
          <source bridge='virbr0'/>
          <target dev='vnet1'/>
          <model type='virtio'/>
          <alias name='net1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/2'/>
          <target port='0'/>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/2'>
          <source path='/dev/pts/2'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x14' function='0x2'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x046d'/>
            <product id='0xc52b'/>
            <address bus='4' device='3'/>
          </source>
          <alias name='hostdev2'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0556'/>
            <product id='0x0001'/>
            <address bus='5' device='2'/>
          </source>
          <alias name='hostdev3'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x28de'/>
            <product id='0x1142'/>
            <address bus='4' device='2'/>
          </source>
          <alias name='hostdev4'/>
        </hostdev>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
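
    For reference, the same dump can be pulled on the host with virsh, using the VM name from the XML above:

    virsh dumpxml "Windows 10"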
    
