About mikeyosm

  • Rank: Advanced Member

  • Community Reputation: 8 Neutral

  1. Thought as much. Patience required for X570 and Ryzen 3000.
  2. Can the Intel i9's onboard graphics be passed through?
  3. I was thinking of using the onboard Intel graphics for light gaming. Otherwise I'll have to wait for the new AMD refresh and build something from that - it might be more cost-effective.
  4. Hi people, I'm looking at putting a new ITX build together to run Plex (4K support, 2-3 simultaneous streams) and also 2 or 3 VMs (2 gaming VMs) concurrently. Ideally I want it to be ITX in size, so I'm probably looking at a max of 32/64GB memory, all SSDs/NVMes, and a minimum of 8 cores (16 threads). I was looking at a Z390 ASUS Strix and an i9 plus 64GB double-height DIMMs - might be overkill though. Any recommendations? Thanks.
  5. Hi, is it possible to configure the br0 and br1 interfaces so that they bond as 2x10Gb interfaces, thus making 20Gb? I managed to activate NIC teaming on Windows 10 using this command:

     New-NetSwitchTeam -Name "SwitchTeam01" -TeamMembers "Ethernet","Ethernet 2"

     I now have a NIC team showing 20Gb in my Windows 10 VM. The problem is, I get much worse transfer speeds to Unraid shares than I did with just a single 10Gb NIC configured. Basically, I want to maximise transfers from my W10 VM to a share on the NVMe drive in the array. I figured 600MB/s wasn't enough and thought about bonding/teaming. Is what I'm trying to achieve doable?
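For anyone experimenting with the same thing: the standard NetSwitchTeam cmdlets can confirm the team actually formed, and tear it down if it hurts throughput. This is a generic sketch (the team name matches the command above; nothing here is Unraid-specific):

```powershell
# Confirm the switch team exists and that both vNICs joined it
Get-NetSwitchTeam
Get-NetSwitchTeamMember -Team "SwitchTeam01"

# If teamed throughput is worse than a single NIC, remove the team
# and fall back to one interface
Remove-NetSwitchTeam -Name "SwitchTeam01"
```

One caveat worth noting: when both members are virtio vNICs on the same host bridge (br0/br1), they share the same underlying hardware path, so teaming them may add overhead without adding real bandwidth - which would be consistent with the slowdown described above.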
  6. Nope, I remove the NVMe drive and the flashing stops. I have two other NVMe drives also connected and they're fine. I wrote an email to Corsair to explain my issue and suggested I return the drive. I have to admit it's very odd, but I did notice that the BIOS stop code ends in A5, which is SCSI reset; when I remove the NVMe drive, the stop code is something else. I think my drive controller is up s*it creek.
  7. I'm considering using 2 x NVMe drives in RAID for my VMs, but I want to know if it's actually worth it compared with passing each NVMe drive through to a VM directly. I ran a performance test using W10 Pro 1809 as a RAW image on my MP510 (3GB/s) NVMe, and AS SSD benches approx 3GB/s read and 1.4GB/s write. I then tried copying a 4GB file within the W10 VM using Explorer and only got 350-400MB/s. What gives? I would expect closer to the 1GB/s mark. I only used defaults when creating the VM, so I'm not sure if there are any tweaks to the XML that I need to make to improve virtio SCSI performance. Any tips?
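A common first tweak for this kind of gap between synthetic benchmarks and real file copies is the libvirt disk `<driver>` attributes. This is a hedged sketch, not a confirmed fix - the source path and target device below are placeholders, and the right values depend on the storage backend:

```xml
<!-- Hypothetical libvirt disk stanza; source file and target dev are placeholders. -->
<disk type='file' device='disk'>
  <!-- cache='none' bypasses the host page cache; io='native' uses Linux AIO;
       discard='unmap' lets the guest pass TRIM through to the image -->
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source file='/mnt/user/domains/W10/vdisk1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

With the defaults, writes often land in the host page cache first, which is one reason an in-guest Explorer copy can look very different from an AS SSD run.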
  8. The HDD LED on my case is constantly flashing and I can't isolate why. I shut down all VMs and Dockers and take the array offline, and the light is still flashing like there's constant drive activity. So I reboot Unraid, go into the motherboard BIOS, and the light is still flashing. I suspect it may be something to do with one of my NVMe drives that I'm passing through to the W10 VM, because when I then start Unraid and power on the VM with the NVMe drive passed through, the flashing stops for a few minutes until Windows is fully loaded. Then a few minutes later the constant flashing starts again. It's very strange and I'm really struggling to pinpoint the culprit. The NVMe is the Corsair MP510; all SMART tests pass OK and the drive performs normally. At a complete loss at the moment where to start looking, other than trying a different NVMe drive, which I don't have atm.
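One way to separate "real drive I/O" from "LED wired oddly" is to diff the kernel's per-device I/O counters over a few seconds from the Unraid console. This is a generic Linux sketch, not an Unraid-specific tool:

```shell
# Snapshot per-device I/O counters: field 3 is the device name,
# field 6 is sectors read, field 10 is sectors written.
awk '{print $3, $6, $10}' /proc/diskstats > /tmp/ds_before
sleep 5
awk '{print $3, $6, $10}' /proc/diskstats > /tmp/ds_after

# Any device that shows up in the diff generated real I/O in that window.
diff /tmp/ds_before /tmp/ds_after || true
```

If the diff comes back empty while the LED keeps flashing (as it does here even inside the BIOS), the signal is likely coming from the drive or the motherboard activity header rather than actual I/O.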
  9. Also, is it possible to see the fan speed of the RX 570 GPU? I use iStat, hwmon etc. and can only see the GPU temperature. All other info such as voltage, fans etc. is not there.
  10. I keep getting 'there was an error connecting to the Apple ID server' when clicking Sign In in the App Store. I suspect it might be because Apple doesn't recognise the VM as a real Mac. I have followed everything in the guide, so I'm not sure why I can't access the App Store. App Store login works on other machines and on the High Sierra VM; only Mojave has a problem.
  11. Hello

      TR4 2950x
      UNRAID: 6.6
      10Gb local vNIC br0 (MTU 1500)
      1Gb physical NIC
      Win10 VM 1809 (6 cores/12 threads)
      SMB share: array disk 1, NVMe (contains appdata and downloads shares) - tunable DirectIO set to yes.

      iperf tests from VM to UNRAID host benched in excess of 10Gb/s (no issues there, then).
      W10 VM NVMe drive passed through (benched 3GB/s); UNRAID SMB share on the NVMe drive (benched 2GB/s).

      Tests performed:
      1.) Copy 4GB file from SMB share to W10 VM (avg 300MB/s) WHY?
      2.) Copy 4GB (different file) from VM to SMB share (avg 1GB/s) Expected.

      Only copy sessions from the SMB share to the VM are 50% slower than the other way (VM to SMB share). I also ran iperf tests in both directions:

      VM to HOST: FAST
      [  4]  5.00-6.00 sec   261 MBytes  2.18 Gbits/sec
      [  6]  5.00-6.00 sec   204 MBytes  1.71 Gbits/sec
      [  8]  5.00-6.00 sec   282 MBytes  2.36 Gbits/sec
      [ 10]  5.00-6.00 sec   248 MBytes  2.08 Gbits/sec
      [ 12]  5.00-6.00 sec   259 MBytes  2.17 Gbits/sec
      [ 14]  5.00-6.00 sec   202 MBytes  1.69 Gbits/sec
      [ 16]  5.00-6.00 sec   257 MBytes  2.15 Gbits/sec
      [ 18]  5.00-6.00 sec   199 MBytes  1.67 Gbits/sec
      [ 20]  5.00-6.00 sec   278 MBytes  2.33 Gbits/sec
      [ 22]  5.00-6.00 sec   242 MBytes  2.03 Gbits/sec
      [SUM]  5.00-6.00 sec  2.37 GBytes  20.4 Gbits/sec

      HOST to VM: 50% SLOWER than VM to HOST
      [ 14]  0.00-10.00 sec  1.15 GBytes  990 Mbits/sec  26  sender
      [ 14]  0.00-10.00 sec  1.15 GBytes  989 Mbits/sec      receiver
      [ 16]  0.00-10.00 sec  1.10 GBytes  944 Mbits/sec  34  sender
      [ 16]  0.00-10.00 sec  1.10 GBytes  943 Mbits/sec      receiver
      [ 18]  0.00-10.00 sec  1.14 GBytes  979 Mbits/sec  26  sender
      [ 18]  0.00-10.00 sec  1.14 GBytes  977 Mbits/sec      receiver
      [ 20]  0.00-10.00 sec  1.09 GBytes  936 Mbits/sec  32  sender
      [ 20]  0.00-10.00 sec  1.09 GBytes  935 Mbits/sec      receiver
      [ 22]  0.00-10.00 sec  1.12 GBytes  965 Mbits/sec  33  sender
      [ 22]  0.00-10.00 sec  1.12 GBytes  964 Mbits/sec      receiver
      [SUM]  0.00-10.00 sec  11.2 GBytes  9.65 Gbits/sec 322 sender
      [SUM]  0.00-10.00 sec  11.2 GBytes  9.63 Gbits/sec     receiver

      It's like the file transfer from UNRAID to the VM goes over the 1Gb interface and the transfer from the VM to UNRAID goes over the 10Gb virtual interface br0. It also seems that if I transfer large files from another SSD in the unassigned devices pool, I don't have any speed issues - only when transferring between the array disk and a VM. Any ideas?
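A quick sanity check on the numbers above (plain unit arithmetic, nothing Unraid-specific): a 1 Gbit/s link tops out at 125 MB/s before protocol overhead, so a 300 MB/s SMB copy cannot be riding the physical 1Gb NIC - it is roughly 2.4 Gbit/s:

```python
def mbytes_per_s_to_gbits(mb_per_s: float) -> float:
    """Convert a transfer rate in MB/s (10^6 bytes) to Gbit/s (10^9 bits)."""
    return mb_per_s * 8 / 1000

def link_cap_mbytes(gbits: float) -> float:
    """Theoretical max payload of a link in MB/s, ignoring protocol overhead."""
    return gbits * 1000 / 8

# The ~300 MB/s host-to-VM SMB copy observed above:
print(mbytes_per_s_to_gbits(300))  # 2.4 Gbit/s - already above a 1 Gb link

# Caps of the two interfaces in the post:
print(link_cap_mbytes(1))   # 125.0 MB/s for the 1 Gb physical NIC
print(link_cap_mbytes(10))  # 1250.0 MB/s for the 10 Gb vNIC
```

So the slow direction is not literally capped at 1 Gbit/s; something else (SMB, virtio, or the array disk path) is the bottleneck, which fits the observation that copies from an unassigned-devices SSD are fine.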
  12. Curious whether this works for anyone on 6.6.5? I just tried it on my MSI X399 board and I can't even start the Docker container. I set it to privileged etc. but no luck.
  13. I have no need for parity, and the only option I had at the time for an array was to include my 128GB SSD. All other drives are configured under the Unassigned Devices plugin. My question is whether this is OK and whether assigning an SSD/NVMe cache drive will actually help.
  14. Yes, just using them as unassigned devices.
  15. My disk configuration is as follows:

      Hardware: 2950x, MSI X399 MEG Creation, 32GB @ 3200MHz
      1 array disk (SSD) used for appdata and Docker
      1 x SSD for VMs
      2 x 3.5" HDDs for archive data
      1 x 512GB NVMe for the Music share
      1 x 1TB NVMe passed through to a W10 VM

      Is there any reason why I should allocate a cache disk? Will it make much difference for file transfers between the W10 VM and the Unraid host? I used the diskspeed Docker to test the speeds of all my drives and, worryingly, the 512GB NVMe drive tops out at 700MB/s reads. In Windows it tops out at 2-3GB/s. What could be the issue? Thank you