Data Transfer Speed: VM Access to Array


Grsh


Hi!

I am trying to figure out one bit that I might not quite understand.

I am running a Win10 VM on my Unraid server as a daily driver and work machine, and so far I am quite happy with it.

I have three HDDs in the array and one SSD as the cache. The VM runs straight off an NVMe drive.

 

When I am sitting in front of the VM and copy files onto a share that does not use the cache, I get speeds of around 75-100mb/s.

That is 1Gb Ethernet speed, right? In my head, the speed should be whatever the SATA connection gives me, since the data is only moving internally and not through a physical network.

 

Is the VM going through the SMB connection and therefore only able to reach Ethernet speeds? Is there a way for me to speed this up? I am the only person who needs faster access to data on the array.

 

I might have misunderstood something here and would be thankful for any help. 

Cheers!

 


First and foremost, a lowercase "b" typically stands for bit (e.g. 10Gb = 10 gigabit) while a capital "B" stands for byte. So you have to start by getting your units right.

Is the "mb/s" in your "75-100mb/s" bits per second or bytes per second?

 

Secondly, "Ethernet" is just a network technology. It can run at any speed, so "Ethernet speed" by itself is sort of meaningless.

 

A VM accessing the host via SMB always goes through the virtual network, which is 100Gb, i.e. about 12.5 GB/s of theoretical throughput.

Hence your access to the host via SMB will almost never saturate the virtual network adapter, so that is never the bottleneck to begin with.
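As a quick sanity check on the bit/byte math above (this is just the divide-by-8 conversion, nothing Unraid-specific):

```python
# Network link speeds are quoted in bits/s; file copy dialogs report
# bytes/s. 8 bits = 1 byte, and 1 Gb/s = 1000 Mb/s.

def link_speed_to_mb_per_s(gigabits_per_s: float) -> float:
    """Convert a link speed in Gb/s to its theoretical max in MB/s."""
    return gigabits_per_s * 1000 / 8

print(link_speed_to_mb_per_s(1))    # 1 GbE  -> 125.0 MB/s
print(link_speed_to_mb_per_s(10))   # 10 GbE -> 1250.0 MB/s
print(link_speed_to_mb_per_s(100))  # 100 Gb virtual NIC -> 12500.0 MB/s (12.5 GB/s)
```

So 75-100 of anything per second is nowhere near a 100Gb link's ceiling, whichever unit it turns out to be.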

 

I don't see why you expect "the speed should be the speed the Sata Connection" when accessing stuff on the HDD array. No HDD can saturate a SATA connection (~550MB/s).

 

You also haven't mentioned whether the 75-100mb/s is read or write, whether it's sequential or random (big files or small files), what kind of HDDs you have, whether you have multiple network bridges, what version of Unraid you are running, etc.

Those details matter in determining if your access speed is expected or not.

 

 

 

On 8/8/2020 at 12:14 AM, testdasi said:

First and foremost, a lowercase "b" typically stands for bit (e.g. 10Gb = 10 gigabit) while a capital "B" stands for byte. So you have to start by getting your units right.

Is the "mb/s" in your "75-100mb/s" bits per second or bytes per second?

It was quite late when I tried to tackle this problem.

I have the Gigabyte X570 Pro WiFi motherboard and am using the onboard Ethernet. The system profiler shows the following entry:

eth0: 1000Mb/s, full duplex, mtu 1500

 

When transferring project folders to an SMB share that does not use the cache (containing video files of 300MB-1GB plus smaller After Effects and Photoshop projects of 10-40MB), I am seeing around 50-100MB/s. I used CrystalDiskMark to get a benchmark result:

[Attachment: CrystalDiskMark benchmark screenshot]

 

 

On 8/8/2020 at 12:14 AM, testdasi said:

I don't see why you expect "the speed should be the speed the Sata Connection" when accessing stuff on the HDD array. No HDD can saturate a SATA connection (~550MB/s).

By "speed of the SATA connection" I was referring to the speed I got when I tested the HDD on bare-metal Windows, before the drive was in the array (all before installing and setting up Unraid). I got around 200-230MB/s transfer speeds.

[Attachment: bare-metal disk speed test screenshot]

 

 

On 8/8/2020 at 12:14 AM, testdasi said:

what kind of HDDs you have, whether you have multiple network bridges, what version of Unraid you are running, etc.

Those details matter in determining if your access speed is expected or not.

I have:

1x Seagate ST4000NE001 4TB IronWolf Pro 3.5" SATA3 NAS hard drive - parity drive

1x Seagate ST4000NE001 4TB IronWolf Pro 3.5" SATA3 NAS hard drive - set up as the data drive

1x WDC WD2002FAEX - currently not being used, as there is no data on the drive

 

1x Samsung 860 EVO 1TB as the Cache

 

I am testing on an SMB share that does NOT use the cache.

I am copying files over from my Samsung 970 EVO Plus 1TB NVMe drive, which holds my Win10 VM installation and partly serves as a fast data-access drive.

 

I am running Unraid version 6.8.3.

 

I am using br0 as the network for the VM, but I also set up a Pi-hole Docker container (following Spaceinvader One's YouTube video), so I have another bridge, virbr0, for that. Not sure if this is exactly what you were asking.

 

My current understanding is that my Windows VM has a virtual network adapter that is 1GbE, and my motherboard's Ethernet adapter is also a 1GbE adapter.

Is it possible to create a 10GbE virtual adapter (even though I don't have the hardware for it) so the VM is not limited by the 1GbE speed it thinks it has, and gets closer to what the HDD is capable of in transfer speed?
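For what it's worth, in a typical libvirt domain XML (which Unraid uses for VMs under the hood), the NIC model line determines what the guest sees. A generic sketch, assuming a standard br0 bridge setup; your bridge name and the rest of the XML will differ:

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <!-- virtio is a paravirtualised NIC with no fixed physical link
       speed; emulated models such as e1000 present a 1GbE adapter -->
  <model type='virtio'/>
</interface>
```

Note that a Windows guest needs the virtio network drivers installed before a `virtio` model adapter will work.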

 

Thanks for your time so far!

Grsh

 

 

 

1 hour ago, itimpi said:

It might be worth reading this section of the online documentation to understand why writing to the array is slower than writing to a drive that is not part of the array.

Yeah, I know about the write details, but I went ahead and gave it another read. For my use scenario, read/modify/write works best, as I don't want all my drives spun up.

My question is: why does copying from a VM to the array (which uses a cache SSD; the VM is on a separate SSD) max out as if it were using a 1Gb Ethernet connection? My main system on its 1Gb Ethernet connection makes the same copy at the same max speed (110 MB/s). However, my main system on its 10Gb Ethernet connection copies the same file at 650 MB/s. So again, I'm just wondering why the VM, which is on the same system as Unraid, does not copy to the array at the maximum available speed. Is it because it's using SMB and therefore going through a virtual interface capped at 1Gb? ... Did I just answer my own question?
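For anyone following along, the reason read/modify/write costs so much is that every logical write becomes four disk operations (read old data, read old parity, write new data, write new parity), and single parity is just XOR. A toy sketch of the parity update, with one byte standing in for a sector (not Unraid's actual code):

```python
# Single-parity update as done in read/modify/write mode:
# new parity = old parity XOR old data XOR new data.

def new_parity(old_parity: int, old_data: int, new_data: int) -> int:
    # XOR out the old data's contribution, XOR in the new data's.
    return old_parity ^ old_data ^ new_data

# Toy example: two data "disks" of one byte each.
old_data, other_disk = 0b1010_0000, 0b0000_1111
parity = old_data ^ other_disk               # parity covers all data disks

updated = new_parity(parity, old_data, 0b1111_0000)
assert updated == 0b1111_0000 ^ other_disk   # parity stays consistent
print("parity update OK")
```

Because the old data and old parity must be read back before the two writes can happen, a single data disk plus parity disk can only sustain a fraction of its bare-metal sequential write speed.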

 

"The auto method has been for the potential of the system automatically switching modes depending on current array activity, but this has not happened so far. The problem is knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use."

 

Regarding this part of the documentation: why can't we just keep a running log on Unraid in RAM of the last read/write to each drive? Knowing this, plus knowing the spin-down setting, means we can tell whether a drive is still spinning. Actually, doesn't the spin-down command come from Unraid anyway? Why can't we log when we last sent it for each disk? A combination of both? This way we would be able to use Auto for things like parity rebuilds or heavy write loads.
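The last-I/O log idea could look something like this in principle. A rough sketch only; the names and the 30-minute timeout are made up for illustration, not Unraid internals:

```python
# Infer spin state from an in-RAM log of each disk's last I/O time
# instead of querying the drive (which would itself cost performance).
import time

SPIN_DOWN_DELAY = 30 * 60  # seconds; stand-in for the per-disk spin-down setting

last_io: dict[str, float] = {}  # disk name -> timestamp of last read/write

def record_io(disk: str) -> None:
    """Called on every read/write to a disk."""
    last_io[disk] = time.monotonic()

def probably_spinning(disk: str) -> bool:
    """True if the disk has seen I/O within the spin-down window."""
    t = last_io.get(disk)
    return t is not None and (time.monotonic() - t) < SPIN_DOWN_DELAY

record_io("disk1")
print(probably_spinning("disk1"))  # True: just written
print(probably_spinning("disk2"))  # False: no recorded I/O
```

The catch the documentation hints at is that this only tracks spin-downs Unraid itself initiates; it cannot see drives that spin down on their own firmware timers.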

 

Maybe in the future we can also get the disk write options separated by pools. 

18 hours ago, itimpi said:

It might be worth reading this section of the online documentation to understand why writing to the array is slower than writing to a drive that is not part of the array.

Thanks, that was quite insightful. I understood the theory of the parity drive, but I wasn't aware of the technical aspect of reading the data and writing it back.

Out of curiosity: are the speeds I am hitting with SMB shares without cache considered normal?

