Zoroeyes Posted January 23, 2021

Hi

My unRaid server doubles as a workstation (a bare-metal workstation which sometimes gets booted as a VM within unRaid) and has a number of NVMe drives in it for use on both sides (3x 1TB Samsung 970 Pro for the bare-metal workstation, 4x 1TB ADATA SX8200 Pro in btrfs RAID 0 for use as the cache pool in unRaid). My intention is to back up my UHD Blu-rays on the bare-metal machine, then copy them across to unRaid once it's running (via the VM). However, I'm not seeing the copy speeds I'd expect (only about 150MB/s). I have the unRaid share set to use the cache, and everything is on the same machine. So, considering we're dealing purely with very fast NVMe drives here, can anyone suggest what else could be preventing the 500+ MB/s copy speeds I'd expect when copying between machines?

Could it be the virtual NIC? It shows a usage of about 1.5Gbps and it's only a 1Gb NIC, but I didn't think this would limit anything as it's not really using the physical network in this instance (is it?). I do also have a 10Gb card in the server that I could pass through, but it's not plugged into anything yet. Would that even work, and if so, should I see improved performance when copying from the VM to the unRaid share?

Cheers
Vr2Io Posted January 23, 2021 (edited)

Just tried it and got the same throughput. I assigned 4 cores to the VM and changed the VirtIO NIC's "Max no. of RSS queues" from 8 (the default) to 2, and then got 10Gbps speeds.

Edited January 24, 2021 by Vr2Io
Zoroeyes Posted January 24, 2021 (Author)

Thanks for coming back, Vr2Io. Could you elaborate a little, please? I have 32 logical cores in my server, with 16 provisioned to the VM and 16 for unRaid. Can you give a little more detail on the settings you changed?

Thanks
Vr2Io Posted January 24, 2021 (edited)

See the reference below. Change RSS (receive side scaling, not TSO) in the VirtIO adapter's advanced settings, starting from 2 or another value.

https://www.zetta.io/en/help/articles-tutorials/fixing-outbound-network-issues-windows-disabling-tso/

Edited January 24, 2021 by Vr2Io
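For anyone following along: the guest-side setting lives in the Windows VM under Device Manager → the Red Hat VirtIO Ethernet adapter → Advanced → "Max no. of RSS queues". There is also a host-side counterpart in the VM's libvirt XML; the sketch below is an assumption for illustration (the bridge name and queue count are examples, not taken from this thread):

```xml
<!-- Hypothetical sketch of the NIC stanza in an Unraid VM's libvirt XML.
     queues='2' enables virtio multiqueue on the host side; the idea is
     to keep it consistent with the "Max no. of RSS queues" value set
     inside the Windows guest. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='2'/>
</interface>
```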
Zoroeyes Posted January 29, 2021 (Author)

Hi, sorry for the delay in coming back. I've changed the RSS queues to 2, but this didn't improve throughput. Not sure if I also need to touch receive side scaling (enabled at the moment) or TSO (maximal at the moment)? Any suggestions? The max throughput I've seen so far is 200MB/s, and this is NVMe to NVMe.
Vr2Io Posted January 29, 2021

31 minutes ago, Zoroeyes said:
Hi, sorry for the delay in coming back. I've changed the RSS queues to 2, but this didn't improve throughput. Not sure if I also need to touch receive side scaling (enabled at the moment) or TSO (maximal at the moment)? Any suggestions? The max throughput I've seen so far is 200MB/s, and this is NVMe to NVMe.

My tests show no need to change TSO, and I don't think TSO is related; the post below also fixed the problem by changing RSS only.
Zoroeyes Posted January 30, 2021 (Author)

Thanks, Vr2Io. I've tried your suggestion and it didn't seem to improve things at all. Very frustrating. I wonder if anyone else has seen this issue?
JorgeB Posted January 31, 2021

Are you using virtio-net or virtio for the VM NIC? Virtio can be faster in some cases, but can also spam the log with unexpected GSO errors.
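In case it helps anyone searching later: switching between the two is a one-line change in the VM's XML view. A sketch under the assumption that both model names are accepted on your Unraid/libvirt version:

```xml
<!-- The default NIC model in recent Unraid releases: -->
<model type='virtio-net'/>

<!-- The alternative, which can be faster for host-to-guest transfers,
     at the cost of possible "unexpected GSO" log spam: -->
<model type='virtio'/>
```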
Zoroeyes Posted January 31, 2021 (Author)

Hi JorgeB, virtio-net at the moment.

Cheers
Zoroeyes Posted February 4, 2021 (Author)

Hi JorgeB

I tried virtio and this did increase the speed a little, with Windows Explorer now showing around 300MB/s. However, this is still no more than half the speed I'd expect given the hardware in use. I also noted that Task Manager was showing roughly 2.5Gbps network usage, so it was better than the previous 1.5Gbps, but nothing like what I'd expect.

Am I better off using a physical 10Gbps connection out of unRaid and back into the VM, via a 10Gbps switch, using a dual-port 10Gbps NIC with only one port shared with the VM? I don't understand why I should have to take this route, but I have £1000 worth of NVMe drives sitting on either side of this virtual connection, all running on the same motherboard, and I'm struggling to transfer 50GB files at anything more than 300MB/s. That seems a terrible waste of potential to me.

I've logged a support ticket regarding this too (some time ago now), but haven't had a reply as of yet.
JorgeB Posted February 5, 2021

11 hours ago, Zoroeyes said:
Am I better off using a physical 10Gbps connection

You can try. I get close to 1GB/s from my VMs using the virtual NIC, but you're not the first one with this issue; it's possibly just down to different hardware or some configuration setting.
jonp Posted February 5, 2021

Have you tested the performance of each of your NVMe drives individually before? I know we have had a lot of performance issues with ADATA devices, and if those are grouped in with your Samsung devices, your cache will only perform at the speed of its slowest device. Can you show us performance tests against the cache that don't involve the network, and perhaps break your cache pool so you can test the individual drives first?
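A quick way to take the network out of the equation, along the lines jonp suggests, is a direct write test from the Unraid console. This is only a rough sketch (the `/mnt/cache` path is the usual Unraid cache mount, and the file size is arbitrary); `fio` would give more rigorous numbers:

```shell
#!/bin/sh
# Rough local write-speed check -- no network or VM involved.
# TARGET defaults to /tmp here; set TARGET=/mnt/cache (or a single
# drive's mount point, after breaking the pool) for the real test.
TARGET="${TARGET:-/tmp}"
# conv=fdatasync forces a flush before dd reports its timing, so the
# figure reflects the device rather than the page cache.
RESULT=$(dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=256 conv=fdatasync 2>&1)
echo "$RESULT"
rm -f "$TARGET/speedtest.bin"
```

Running this against the pool and then against each drive on its own would show whether one device is dragging the btrfs RAID 0 down.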