mikeyosm

  1. Hello
     TR4 2950X, UNRAID 6.6
     10Gb local vNIC br0 (MTU 1500), 1Gb physical NIC
     Win10 VM 1809 (6 cores/12 threads)
     SMB share: array disk 1, NVMe (contains the appdata and downloads shares) - tunable DirectIO set to Yes.

     iperf tests from the VM to the UNRAID host benched in excess of 10Gb/s (no issues there, then).
     W10 VM NVMe drive passed through (benched 3Gb/s); UNRAID SMB share on the NVMe drive (benched 2Gb/s).

     Tests performed:
     1.) Copy a 4GB file from the SMB share to the W10 VM (avg 300Mb/s) - WHY?
     2.) Copy a 4GB file (a different file) from the VM to the SMB share (avg 1Gb/s) - expected.

     Only copy sessions from the SMB share to the VM are 50% slower than the other way around (VM to SMB share). I also ran iperf tests in both directions:

     VM to HOST: FAST
     [  4]   5.00-6.00   sec   261 MBytes  2.18 Gbits/sec
     [  6]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec
     [  8]   5.00-6.00   sec   282 MBytes  2.36 Gbits/sec
     [ 10]   5.00-6.00   sec   248 MBytes  2.08 Gbits/sec
     [ 12]   5.00-6.00   sec   259 MBytes  2.17 Gbits/sec
     [ 14]   5.00-6.00   sec   202 MBytes  1.69 Gbits/sec
     [ 16]   5.00-6.00   sec   257 MBytes  2.15 Gbits/sec
     [ 18]   5.00-6.00   sec   199 MBytes  1.67 Gbits/sec
     [ 20]   5.00-6.00   sec   278 MBytes  2.33 Gbits/sec
     [ 22]   5.00-6.00   sec   242 MBytes  2.03 Gbits/sec
     [SUM]   5.00-6.00   sec  2.37 GBytes  20.4 Gbits/sec

     HOST to VM: 50% SLOWER than VM to HOST
     [ 14]   0.00-10.00  sec  1.15 GBytes   990 Mbits/sec   26   sender
     [ 14]   0.00-10.00  sec  1.15 GBytes   989 Mbits/sec        receiver
     [ 16]   0.00-10.00  sec  1.10 GBytes   944 Mbits/sec   34   sender
     [ 16]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec        receiver
     [ 18]   0.00-10.00  sec  1.14 GBytes   979 Mbits/sec   26   sender
     [ 18]   0.00-10.00  sec  1.14 GBytes   977 Mbits/sec        receiver
     [ 20]   0.00-10.00  sec  1.09 GBytes   936 Mbits/sec   32   sender
     [ 20]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec        receiver
     [ 22]   0.00-10.00  sec  1.12 GBytes   965 Mbits/sec   33   sender
     [ 22]   0.00-10.00  sec  1.12 GBytes   964 Mbits/sec        receiver
     [SUM]   0.00-10.00  sec  11.2 GBytes  9.65 Gbits/sec  322   sender
     [SUM]   0.00-10.00  sec  11.2 GBytes  9.63 Gbits/sec        receiver

     It's as if the file transfer from UNRAID to the VM goes over the 1Gb interface while the transfer from the VM to UNRAID goes over the 10Gb virtual interface br0. It also seems that if I transfer large files from another SSD in the unassigned devices pool, I don't have any speed issues - only when transferring between the array disk and a VM. Any ideas?
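     To rule the network layer in or out, the same asymmetry can be reproduced with iperf3 from a single client by flipping the direction with -R; a minimal sketch, assuming iperf3 on both ends and a placeholder host IP of 192.168.1.10:

         # On the UNRAID host: start the server
         iperf3 -s

         # On the W10 VM: VM -> host (client sends), 10 parallel streams
         iperf3 -c 192.168.1.10 -P 10 -t 10

         # On the W10 VM: host -> VM (server sends back to us via -R)
         iperf3 -c 192.168.1.10 -P 10 -t 10 -R

     If the -R run is also roughly half the speed, the bottleneck is in the virtio/bridge path rather than SMB or the disks.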
  2. mikeyosm

    MSI MysticLight and Unraid

    Curious whether this works for anyone on 6.6.5? I just tried it on my MSI X399 board and I can't even start the Docker container. I set it to privileged etc. but no luck.
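    For anyone comparing setups, the shape of the run command would be something like the sketch below; the image name here is a placeholder (substitute whatever the MysticLight thread specifies), and docker logs usually says why a container refuses to start:

        # Hypothetical image name - substitute the one from the MysticLight thread
        docker run -d --name mysticlight --privileged -v /sys:/sys msi/mysticlight:latest

        # Ask Docker why it won't start
        docker logs mysticlight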
  3. mikeyosm

    Do I need a Cache disk?

    I have no need for parity, and the only option I had at the time for an array was to include my 128GB SSD. All other drives are configured under the unassigned devices plugin. My question is whether this is OK and whether assigning an SSD/NVMe cache drive will actually help.
  4. mikeyosm

    Do I need a Cache disk?

    Yes, just using them as unassigned devices.
  5. mikeyosm

    Do I need a Cache disk?

    My disk configuration is as follows:

    Hardware: 2950X, MSI X399 MEG Creation, 32GB @ 3200MHz
    1 array disk (SSD) used for appdata and Docker
    1 x SSD for VMs
    2 x 3.5" HDDs for archive data
    1 x 512GB NVMe for the Music share
    1 x 1TB NVMe passed through to a W10 VM

    Is there any reason why I should allocate a cache disk? Will it make much difference for file transfers between the W10 VM and the UNRAID host?

    I used the diskspeed Docker to test the speeds of all my drives, and worryingly the 512GB NVMe drive tops out at 700MB/s reads. In Windows it tops out at 2-3GB/s. What could be the issue? Thank you
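    To take the diskspeed Docker out of the equation, the raw device can be benchmarked from the UNRAID console; a minimal sketch, assuming the 512GB drive enumerates as /dev/nvme0n1 (check with lsblk first):

        # Identify the device
        lsblk -d -o NAME,SIZE,MODEL

        # Direct (uncached) sequential read test
        hdparm -t --direct /dev/nvme0n1

        # Cross-check with dd: read 4GB in 1MB blocks, bypassing the page cache
        dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct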
  6. mikeyosm

    help threadripper 1920x gaming

    Any luck with your GPU speed issue? Look at mine... This is a 1070 Ti. I also use 2 x NVMe rated at 3GB/s read. I'm using the MSI X399 MEG Creation - I have not cleared CMOS yet. Should I?

    lspci -s 43:00.0 -vv | grep Lnk
            LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
            LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
            LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
            LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
            LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
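    One caveat before clearing CMOS: many GPUs drop the PCIe link to 2.5GT/s when idle and only renegotiate the full 8GT/s under load, so LnkSta is only meaningful while the card is busy. A quick re-check while a game or benchmark runs in the VM (assuming the GPU stays at 43:00.0):

        # Poll the negotiated link speed once per second while the GPU is under load
        watch -n 1 "lspci -s 43:00.0 -vv | grep LnkSta:"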
  7. I tried strict node 1 (my GPU is on that node) and, despite the VM being very slow to power on, the performance increase was negligible. I think waiting for a patched kernel will be a safer bet; I'm fresh out of ideas on how I can get close to bare-metal memory performance.
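     For anyone replicating this, the host's node layout (which cores and how much memory live on node 1 alongside the GPU) can be dumped before pinning; a minimal check, assuming numactl is present on the host:

         # CPUs, memory size, and free memory per NUMA node, plus inter-node distances
         numactl --hardware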
  8. Spoke too soon. After using the VM for a few hours, it became very unresponsive and sluggish, to the point that it hard-rebooted itself, grrr. I checked UNRAID memory usage and I had none left. Back to square one.
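     To catch where the memory goes next time, host memory can be watched from a console session while the VM runs; a simple sketch:

         # Refresh overall host memory usage every 5 seconds
         watch -n 5 free -m

         # Snapshot the biggest resident-memory processes
         ps aux --sort=-rss | head -n 15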
  9. mikeyosm

    Temperature monitoring Threadripper

    Something's strange and buggy with the Dynamix System Temp plugin for sure. I get a bunch of k10temp and nct7802 entries in the list. If I select k10temp Tdie and click apply, the temp list is empty, and I have to uninstall the plugin and remove sensors.conf to recover.
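    To see what the chips report outside the plugin, lm-sensors can be queried straight from the console; a minimal sketch, assuming the sensors binaries the plugin installs are on the PATH:

        # Rescan for sensor chips and regenerate sensors.conf (answer the prompts)
        sensors-detect

        # Dump every reading from k10temp, nct7802, etc.
        sensors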
  10. OK, mine reports back 8 4 8 16 correctly, and I didn't have to re-activate.
  11. Even better mem stats now, getting close to bare metal 🙂 W10 XML:

      <cputune>
        <vcpupin vcpu='0' cpuset='10'/>
        <vcpupin vcpu='1' cpuset='26'/>
        <vcpupin vcpu='2' cpuset='11'/>
        <vcpupin vcpu='3' cpuset='27'/>
        <vcpupin vcpu='4' cpuset='12'/>
        <vcpupin vcpu='5' cpuset='28'/>
        <vcpupin vcpu='6' cpuset='13'/>
        <vcpupin vcpu='7' cpuset='29'/>
        <vcpupin vcpu='8' cpuset='14'/>
        <vcpupin vcpu='9' cpuset='30'/>
        <vcpupin vcpu='10' cpuset='15'/>
        <vcpupin vcpu='11' cpuset='31'/>
        <emulatorpin cpuset='0,16'/>
      </cputune>
      <numatune>
        <memory mode='interleave' nodeset='0-1'/>
      </numatune>
      <resource>
      <cpu mode='custom' match='exact' check='partial'>
        <model fallback='allow'>EPYC-IBPB</model>
        <topology sockets='1' cores='6' threads='2'/>
        <feature policy='require' name='topoext'/>
      </cpu>
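      After the VM boots, the pinning can be verified from the host; a quick check, assuming the libvirt domain is named "Windows 10" (substitute the real VM name):

          # List which host CPU each vCPU is pinned to
          virsh vcpupin "Windows 10"

          # Show where the emulator thread landed (should be 0,16)
          virsh emulatorpin "Windows 10"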
  12. I changed numatune to 0-1 and I have better read/write/copy memory speeds now. Latency is not too bad. I'll stick with this setting for now and see how I get on. I'll definitely upgrade to 64GB if and when memory prices come down a bit.
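      To confirm the interleave is actually spreading the VM's memory across both dies, the QEMU process can be inspected per node on the host; a minimal sketch, assuming numastat is available:

          # Per-NUMA-node memory breakdown for the VM's QEMU process
          numastat -p qemu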
  13. OK, makes sense. However, before I made any mods to the XML or changed my BIOS memory interleaving setting from auto to channel, memory read/write/copy in AIDA64 was exactly the same (half the bare-metal memory performance). Adding numatune and EPYC has not impacted read/write/copy - it only improved the L3 cache a bit.
  14. Understood, so how do I make use of dual channel with a VM?
  15. What did the cache look like before you created the new XML? And how does it look now? A comparison would be good so I can troubleshoot my performance issues. Thanks.