jayseejc

Members
  • Posts: 19
  • Joined
  • Last visited
  • Gender: Undisclosed


jayseejc's Achievements

Noob (1/14) · 0 Reputation

  1. You could always blacklist the module from the boot options. Something like this should work... https://askubuntu.com/a/110349
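     On Unraid that would mean adding the blacklist to the kernel line in syslinux.cfg on the flash drive. A rough sketch (nouveau is just a placeholder; swap in whichever module you're actually fighting):

       label Unraid OS
         kernel /bzimage
         append modprobe.blacklist=nouveau initrd=/bzroot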
  2. @jonnie.black any chance you could post the results of our little dd test, as a way of verifying that it's actually measuring the correct write speed and not being limited by something else?
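     For reference, the write half of the test is just (with misc being whatever share you test against):

       dd if=/dev/zero of=/mnt/user/misc/test bs=1024 count=10240000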
  3. I will add that enabling direct I/O seems to break a bunch of my Docker containers. Stuff like Plex and InfluxDB just won't start.
  4. For a read test we can just use the same files we create with the write test, dd'ing them to /dev/null. Here are my most recent tests: safe mode, with the mover, Docker, and virtual machines disabled. For some reason I'm now getting better speeds all around. I'll try again in safe mode with direct I/O just turned off, and update this post with the results. I should also note that misc is an actual share on my system (the place to put stuff when there's no other place to put it). It's a pretty basic share: use the cache and move when applicable, all the usual defaults.

     Script used for testing:

       #!/bin/bash
       for i in disk{1..6} cache user user0; do
         echo "$i write:"
         dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000
         # Needed to flush the RAM cache and not get ridiculous read speeds
         echo 3 > /proc/sys/vm/drop_caches
         echo "$i read:"
         dd if=/mnt/$i/misc/test-$i of=/dev/null bs=1024 count=10240000
       done

     Every run copied 10485760000 bytes (10 GB, 9.8 GiB); dd's reported speeds and copy times are summarized below.

     Read and write test, direct I/O enabled:

       Target  Write                 Read
       disk1   126 MB/s   (83.5 s)   118 MB/s   (88.7 s)
       disk2   238 MB/s   (44.0 s)   158 MB/s   (66.4 s)
       disk3   290 MB/s   (36.2 s)   162 MB/s   (64.9 s)
       disk4   291 MB/s   (36.0 s)   166 MB/s   (63.0 s)
       disk5   205 MB/s   (51.1 s)   116 MB/s   (90.2 s)
       disk6   74.3 MB/s  (141.1 s)  88.7 MB/s  (118.3 s)
       cache   96.9 MB/s  (108.2 s)  438 MB/s   (24.0 s)
       user    54.5 MB/s  (192.5 s)  93.7 MB/s  (111.9 s)
       user0   65.9 MB/s  (159.2 s)  84.3 MB/s  (124.3 s)

     Read and write test, direct I/O disabled:

       Target  Write                 Read
       disk1   108 MB/s   (97.1 s)   157 MB/s   (66.8 s)
       disk2   290 MB/s   (36.1 s)   160 MB/s   (65.4 s)
       disk3   169 MB/s   (61.9 s)   194 MB/s   (54.0 s)
       disk4   171 MB/s   (61.2 s)   185 MB/s   (56.8 s)
       disk5   139 MB/s   (75.5 s)   138 MB/s   (76.2 s)
       disk6   69.1 MB/s  (151.8 s)  84.4 MB/s  (124.3 s)
       cache   132 MB/s   (79.5 s)   283 MB/s   (37.0 s)
       user    30.7 MB/s  (341.0 s)  494 MB/s   (21.2 s)
       user0   30.6 MB/s  (342.7 s)  153 MB/s   (68.6 s)

     Looking at the results, direct I/O seems to have SOME effect, but I'm still seeing about half the write speed compared with writing to the disk directly.
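     As an aside, another way to keep the page cache out of these numbers (a sketch I haven't verified on this box) is to have dd open the files with O_DIRECT instead of dropping caches between runs; O_DIRECT with a 1K block size is painfully slow, so the block size is bumped here:

       # Same 10 GB test, bypassing the page cache via O_DIRECT
       dd if=/dev/zero of=/mnt/disk1/misc/test-direct bs=1M count=10000 oflag=direct
       dd if=/mnt/disk1/misc/test-direct of=/dev/null bs=1M iflag=direct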
  5. Tried again changing direct I/O from auto (which allegedly just turns it off) to on, and I get about the same write performance with worse read performance.
  6. Yup... Nearly identical in safe mode. I did see shfs spike occasionally when writing to /mnt/user, though only for a few seconds at a time. I'll post in the Defect reports tomorrow evening.
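     A quick way to watch for those shfs spikes while a dd to /mnt/user runs (any process monitor works; this is just one option):

       top -b -d 1 | grep --line-buffered shfs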
  7. Array is fully encrypted. I don't think that would make much difference though, as writing to the specific disks runs at expected speeds.
  8. Strange performance issues I seem to be encountering. Testing via SSH has led me to believe this is some sort of issue in shfs, but I could be wrong. Simply put, writing to the array gives me about 20 MB/s write speed (cached share), while writing to any of /mnt/diskX or /mnt/cache gives the expected speed. Writing to /mnt/user writes to the cache drive, as expected. Tested with dd to get some hard numbers, but the issue is also observed via SMB and 9p mounting in a VM. Any ideas? Observed in Unraid 6.5.

       jon@core:/mnt$ for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done

     Each run copied 10485760000 bytes (10 GB, 9.8 GiB):

       Target  Write
       disk1   83.8 MB/s  (125.1 s)
       disk2   91.2 MB/s  (114.9 s)
       disk3   140 MB/s   (74.9 s)
       disk4   112 MB/s   (93.4 s)
       disk5   95.4 MB/s  (109.9 s)
       disk6   67.2 MB/s  (155.9 s)
       cache   97.7 MB/s  (107.3 s)
       user    27.1 MB/s  (387.1 s)
       user0   20.3 MB/s  (517.6 s)
  9. I’m having some issues with GPU passthrough to a Linux VM for desktop use. I’m running Unraid 6.4.0_rc21b on a Ryzen system (R7 1700X + Asus Prime X370-Pro), and my GPU is an RX 580 (this one: https://www.newegg.ca/Product/Product.aspx?Item=N82E16814202293 ).

     When I pass the GPU through to a Windows 10 VM, I have almost no issues, except that the actual BIOS and boot screen (with the swirly dots) doesn’t show up, and I just get the login screen a moment later. I can reboot, shut down, etc. no problem.

     When I pass through to a Linux VM though (Ubuntu 17.10, for the sake of documenting this issue), things are a little different. I first install the VM with a VNC screen to confirm that the base system installed correctly, selecting to install updates and third-party drivers as I go, and everything installs fine. I then pass through my graphics card, alongside some other PCI devices like audio and USB controllers. On the first boot, everything works as well as it does with a Windows VM. I install any updates available via apt update and apt dist-upgrade. When I go to reboot or shut down though, something fails. Instead of shutting down, the VM gets stuck. If I run lspci looking for my graphics card at this point, before the VM has actually shut down (from what I can tell), all I find is this:

       0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/580] (rev ff) (prog-if ff)
               !!! Unknown header type 7f
               Kernel driver in use: vfio-pci

     Looking in /var/log/syslog, I see that it has been flooded with a somewhat randomly ordered set of messages like the following:

       Jan 11 18:32:43 core kernel: IOTLB_INV_TIMEOUT device=0a:00.0 address=0x00000007fb631070]
       Jan 11 18:32:43 core kernel: AMD-Vi: Completion-Wait loop timed out

     If I try to force stop the VM and then start it again, I get the error:

       internal error: Unknown PCI header type '127'

     The only way I can get the graphics card usable again is to reboot the entire system. Now I’m no expert, but from some research this looks very much like the infamous PCI reset bug. It’s strange, though, that it only manifests with Linux guests. I’m curious if anyone out there has had similar issues. Anyone know a fix, or at least why Windows works and Linux doesn’t?
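     One thing I've seen suggested for a card wedged like this (just a sketch; with the reset bug proper it often doesn't help, and I haven't confirmed it on this board) is to detach and rescan the device over sysfs instead of rebooting the whole host:

       # Drop the stuck GPU off the PCI bus, then rescan for it
       echo 1 > /sys/bus/pci/devices/0000:0a:00.0/remove
       echo 1 > /sys/bus/pci/rescan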
  10. That is exactly what I'm saying. No QEMU BIOS shenanigans or anything. The only downside is there's no physical console on the system to go to if something goes wrong network-wise, but that's really rare and only when I'm messing around with stuff.
  11. I've been running an R7 1700X since April. It worked fine then, and works great now with the NPT bug fixed.
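     For anyone still on a kernel from before the fix, the usual stopgap (sketched from memory; it costs noticeable guest CPU performance) was turning nested page tables off in kvm_amd:

       # Check whether NPT is currently on
       cat /sys/module/kvm_amd/parameters/npt
       # Reload kvm_amd with NPT disabled (all VMs must be stopped first)
       modprobe -r kvm_amd
       modprobe kvm_amd npt=0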
  12. Actually, there's a really interesting other way I've found to do it, and it should in theory work with any GPU (I have a single 960 myself). Assuming you're booting in UEFI mode, which is only available in the beta right now, adding "video=efifb:off" to the boot options prevents the Linux kernel/Unraid from attaching to any graphics adapter at all. When you boot, the screen will freeze until you start a VM attached to the GPU, but it should work in a more foolproof manner than dealing with the Nvidia BIOS thing, and in theory should work with AMD GPUs too.
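     Concretely, that means editing the append line in syslinux.cfg on the flash drive; a sketch (your append line may already carry other options to keep):

       label Unraid OS
         kernel /bzimage
         append video=efifb:off initrd=/bzroot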
  13. Hmm, doesn't seem to help. Civ still crashes, Space Engineers still hangs. I've moved the VM to cores 2,3,6,7 and set isolcpus to the same cores, but there's effectively no difference in performance (benchmarks are identical); see the pinning sketch below. Not sure what could be up here... I should mention that Civ only crashes when I go to start the game proper (moving from the 2D menu to the 3D game), so maybe it's something to do with drivers there?
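     For reference, the pinning amounts to roughly this (a sketch; isolcpus goes on the kernel append line, and the cputune block in the VM XML mirrors it):

       append isolcpus=2,3,6,7 initrd=/bzroot

       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
         <vcpupin vcpu='1' cpuset='3'/>
         <vcpupin vcpu='2' cpuset='6'/>
         <vcpupin vcpu='3' cpuset='7'/>
       </cputune>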
  14. Hey all! So I'm trying to figure out why some games of mine run really slowly or flat-out crash, when they ran fine on bare metal before I started trying out Unraid. I'm running these games in a Windows 10 VM from an Unraid-mapped network drive. Some games that crash or otherwise fail to run are Civilization V and Space Engineers. Meanwhile, games like Rocket League and Bioshock Infinite work perfectly. For the games not working correctly, I have tried installing them on the virtual C: drive, with no difference. My CPU is an AMD FX-9590 and my GPU is a GTX 960. The config for my VM, if it's useful:

       <domain type='kvm' id='1'>
         <name>Windows 10</name>
         <uuid>630f4fbd-1aaa-75a9-266c-f8fa63d8960b</uuid>
         <metadata>
           <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
         </metadata>
         <memory unit='KiB'>17301504</memory>
         <currentMemory unit='KiB'>17301504</currentMemory>
         <memoryBacking>
           <nosharepages/>
           <locked/>
         </memoryBacking>
         <vcpu placement='static'>4</vcpu>
         <cputune>
           <vcpupin vcpu='0' cpuset='0'/>
           <vcpupin vcpu='1' cpuset='1'/>
           <vcpupin vcpu='2' cpuset='2'/>
           <vcpupin vcpu='3' cpuset='3'/>
         </cputune>
         <resource>
           <partition>/machine</partition>
         </resource>
         <os>
           <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
           <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
           <nvram>/etc/libvirt/qemu/nvram/630f4fbd-1aaa-75a9-266c-f8fa63d8960b_VARS-pure-efi.fd</nvram>
         </os>
         <features>
           <acpi/>
           <apic/>
           <hyperv>
             <relaxed state='on'/>
             <vapic state='on'/>
             <spinlocks state='on' retries='8191'/>
             <vendor id='none'/>
           </hyperv>
         </features>
         <cpu mode='host-passthrough'>
           <topology sockets='1' cores='2' threads='2'/>
         </cpu>
         <clock offset='localtime'>
           <timer name='hypervclock' present='yes'/>
           <timer name='hpet' present='no'/>
         </clock>
         <on_poweroff>destroy</on_poweroff>
         <on_reboot>restart</on_reboot>
         <on_crash>restart</on_crash>
         <devices>
           <emulator>/usr/local/sbin/qemu</emulator>
           <disk type='file' device='disk'>
             <driver name='qemu' type='qcow2' cache='writeback'/>
             <source file='/mnt/user/ssd-vdisks/Fresh Windows/vdisk2.qcow2'/>
             <backingStore/>
             <target dev='hdc' bus='virtio'/>
             <boot order='1'/>
             <alias name='virtio-disk2'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
           </disk>
           <disk type='file' device='disk'>
             <driver name='qemu' type='qcow2' cache='writeback'/>
             <source file='/mnt/user/hdd-vdisks/Misc/starcraft2.qcow2'/>
             <backingStore/>
             <target dev='hdd' bus='virtio'/>
             <alias name='virtio-disk3'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
           </disk>
           <disk type='file' device='disk'>
             <driver name='qemu' type='qcow2' cache='writeback'/>
             <source file='/mnt/user/hdd-vdisks/Misc/hearthstone.qcow2'/>
             <backingStore/>
             <target dev='hde' bus='virtio'/>
             <alias name='virtio-disk4'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
           </disk>
           <controller type='usb' index='0' model='nec-xhci'>
             <alias name='usb'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
           </controller>
           <controller type='pci' index='0' model='pci-root'>
             <alias name='pci.0'/>
           </controller>
           <controller type='virtio-serial' index='0'>
             <alias name='virtio-serial0'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
           </controller>
           <interface type='bridge'>
             <mac address='52:54:00:96:8d:97'/>
             <source bridge='br0'/>
             <target dev='vnet0'/>
             <model type='virtio'/>
             <alias name='net0'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
           </interface>
           <interface type='bridge'>
             <mac address='52:54:00:9c:4a:8d'/>
             <source bridge='virbr0'/>
             <target dev='vnet1'/>
             <model type='virtio'/>
             <alias name='net1'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
           </interface>
           <serial type='pty'>
             <source path='/dev/pts/2'/>
             <target port='0'/>
             <alias name='serial0'/>
           </serial>
           <console type='pty' tty='/dev/pts/2'>
             <source path='/dev/pts/2'/>
             <target type='serial' port='0'/>
             <alias name='serial0'/>
           </console>
           <channel type='unix'>
             <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10/org.qemu.guest_agent.0'/>
             <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
             <alias name='channel0'/>
             <address type='virtio-serial' controller='0' bus='0' port='1'/>
           </channel>
           <hostdev mode='subsystem' type='pci' managed='yes'>
             <driver name='vfio'/>
             <source>
               <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
             </source>
             <alias name='hostdev0'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
           </hostdev>
           <hostdev mode='subsystem' type='pci' managed='yes'>
             <driver name='vfio'/>
             <source>
               <address domain='0x0000' bus='0x00' slot='0x14' function='0x2'/>
             </source>
             <alias name='hostdev1'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
           </hostdev>
           <hostdev mode='subsystem' type='usb' managed='no'>
             <source>
               <vendor id='0x046d'/>
               <product id='0xc52b'/>
               <address bus='4' device='3'/>
             </source>
             <alias name='hostdev2'/>
           </hostdev>
           <hostdev mode='subsystem' type='usb' managed='no'>
             <source>
               <vendor id='0x0556'/>
               <product id='0x0001'/>
               <address bus='5' device='2'/>
             </source>
             <alias name='hostdev3'/>
           </hostdev>
           <hostdev mode='subsystem' type='usb' managed='no'>
             <source>
               <vendor id='0x28de'/>
               <product id='0x1142'/>
               <address bus='4' device='2'/>
             </source>
             <alias name='hostdev4'/>
           </hostdev>
           <memballoon model='virtio'>
             <alias name='balloon0'/>
             <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
           </memballoon>
         </devices>
       </domain>