Very slow transfer speeds on 10Gbit


karlpox


Hi,

 

Good day! I'm getting very slow transfers on my point-to-point 10Gbit network. I'm currently on Windows 7 and applied some of the tweaks I found on YouTube. The difference between my gigabit and 10Gbit links is that on gigabit, the transfer starts at 40-50 MB/s and then settles at 30+ MB/s, while on 10Gbit it starts around 300 MB/s and then settles at the same 30+ MB/s. What seems to be the problem here?

 

Thanks,

Karl

sobnology-diagnostics-20171107-1019.zip

  • 2 months later...

Just wanted to update this post in the hope that someone can help.

 

Here are the specs to my server:

Motherboard: ASRock - EP2C602-4L/D16  ( http://www.asrockrack.com/general/productdetail.asp?Model=EP2C602-4L/D16#Specifications )

CPU: 2x Intel Xeon E5-2670

RAM: Assorted modules totaling 36GB

10GbE NIC: Mellanox ConnectX-2 PCIe x8 10GbE SFP+

Expansion cards: LSI 9211-8i SAS/SATA 8-port PCIe HBA (IT mode)

Switch: Quanta LB4M

 

Now I'm getting transfer speeds of under 12 MB/s, sometimes only KB/s, though occasionally it does spike to around 30+ MB/s. Streaming 4K is also painfully slow; it has to buffer every 5-10 seconds.

 

I am also using 3x hot-swap cages ( https://www.amazon.com/iStarUSA-SATA-Hot-Swap-Black-BPU-350SATA/dp/B004RG6XNG/ref=sr_1_8?ie=UTF8&qid=1517322855&sr=8-8&keywords=istarusa+hotswap ). But I did get 100+ MB/s transfer speeds with them while I was using Windows Home Server 2011 & Xpenology (Synology).

 

Is my motherboard causing these slow transfer speeds, or is some other piece of hardware to blame? The sub-12 MB/s transfers are just not acceptable anymore.

 

Thanks,

Karl

  • 2 weeks later...
39 minutes ago, karlpox said:

@johnnie.black I get the same transfer speeds with the built-in gigabit port, the 10GbE card, and a quad-port Intel gigabit card. I'm not using cache on the shares where I test my copy/paste or stream from. It takes forever for the mover to finish, even if it's just a 1GB file.

Writing to a cache with multiple SSDs or an NVMe drive is the only way you will see anything close to 10Gbit speeds. Most likely what you are seeing is the transfer caching to RAM in the beginning and then slowing down once it hits the array. Try iperf3 to check your raw network link speed.
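To rule the network itself in or out, an iperf3 run between the two machines is quick. A minimal sketch, assuming iperf3 is installed on both ends (the server IP below is a placeholder; adjust for your setup):

```shell
# On the Unraid server (iperf3 can be installed via a plugin such as NerdPack):
iperf3 -s

# On the Windows client, point iperf3 at the server's IP
# (192.168.1.100 is a placeholder):
iperf3 -c 192.168.1.100 -t 30 -P 4    # 30-second test with 4 parallel streams
```

A healthy 10GbE link should report somewhere near 9-9.9 Gbits/sec here; if iperf3 itself only shows gigabit-class numbers, the problem is the network link or NIC, not the disks.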

6 hours ago, greg2895 said:

Writing to a cache with multiple SSDs or an NVMe drive is the only way you will see anything close to 10Gbit speeds. Most likely what you are seeing is the transfer caching to RAM in the beginning and then slowing down once it hits the array. Try iperf3 to check your raw network link speed.

3 hours ago, johnnie.black said:

I don't know how you expect to get anything close to 10GbE speeds without a very fast cache device, like an NVMe drive or a RAID0/10 SSD cache pool.

 

Yep, this is how I improved my p2p speeds with the same network card: I installed an NVMe SSD and set up my test share as cache-only. You should see a considerable improvement this way.


@greg2895 @johnnie.black sorry, I was not clear about my goals. What I actually want is just to stream 4K without hiccups. The copy/paste was just to see the current transfer speed. I rarely copy files to or from the server since it's mostly for streaming and downloading, but that one time I did was very annoying.

 

Is it possible to cache just a specific folder within a share? I don't really want to cache everything; I'm afraid it will bug out or something.

1 minute ago, karlpox said:

Is that speed even normal?

No.

 

1 minute ago, karlpox said:

Any chance you can help me figure out what the issue(s) could be?

Not without more info, and you'll need to try to rule some things out. For example, copy a large file from the cache drive on your server to another computer over wired LAN; you should get 100+ MB/s with gigabit and more with 10GbE, though it also depends on the destination computer.
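Before testing over the network, it can help to measure the cache drive locally so you know what the server side can actually deliver. A sketch, assuming the cache is mounted at /mnt/cache (the path and filename are assumptions; the `direct` flags bypass the RAM cache so you measure the device, not memory):

```shell
# Write a 1 GiB test file to the cache drive, bypassing RAM caching:
dd if=/dev/zero of=/mnt/cache/speedtest.bin bs=1M count=1024 oflag=direct

# Read it back the same way; dd prints an MB/s figure when it finishes:
dd if=/mnt/cache/speedtest.bin of=/dev/null bs=1M iflag=direct

# Clean up:
rm /mnt/cache/speedtest.bin
```

If these local numbers are healthy but the network copy is still slow, the bottleneck is the link or SMB rather than the drive.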


The only way a mechanical HDD can fill a normal 1Gbit/s network link (about 110 MB/s) is if there is only a single transfer going on. Older drives will never manage the bandwidth required to fill a 1Gbit/s link, and newer drives may only manage it when the file is stored on the outer tracks of the platters.

 

Anything that constantly causes head seeks will make the disk's transfer rate plummet. Only an SSD can handle multiple concurrent transfers without the total bandwidth free-falling.

 

So if you want to test your 10Gbit transfer rates with actual file transfers, you need to start by verifying that the source disk has no other disk accesses, and the same goes for the target disk on the receiving end.
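One way to confirm the disks really are idle before a test, assuming the sysstat tools are installed on the server:

```shell
# Print extended per-device stats every 5 seconds; %util near 0 and
# near-zero r/s and w/s mean nothing else is touching that disk:
iostat -x 5
```

Let it run for a minute before starting the transfer; any disk showing steady activity is being used by something else (a Docker, the mover, a parity operation) and will skew the result.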


@johnnie.black wow, first time I've seen transfer speeds that fast. From disk2-7 I'm getting 80-90+ MB/s, and 100+ MB/s from the cache. Yeah, just streaming a 200MB TV episode.

 

@pwm I was testing with a single transfer; I am the only one who uses the server at home. The drives are 3TB WD Reds. I also used one of those plugins to monitor the server's resources. What I actually want is to make the most of the 1Gbit/s link from ordinary clients.

 

It's just important to note that if you have a Docker container or similar that performs a download, file conversion, hash computation, etc., that can seriously affect transfer rates to or from the same disk. The same goes for any program doing disk accesses on your client computer; on a normal Windows machine there is constantly a lot going on in the background.


@pwm The Dockers I have running are Plex, Emby, SABnzbd & Sonarr. Do those Dockers affect transfer rates that much? I also have 2 VMs.

 

@johnnie.black I redid the copy/paste test, both with files already on the disk and with new files added to the disk and then copied back to the source.

 

These are all reads; the write speeds are all the same at 80+ MB/s.

 

                 File already on disk     New file
disk2            50-60 MB/s               76 MB/s
disk3            10-20 MB/s               77 MB/s
disk4            40-56 MB/s               78 MB/s
disk5            10-20 MB/s               76 MB/s
disk6            <5 MB/s                  78 MB/s
disk7            10-30 MB/s               78 MB/s

cache            did not test             78 MB/s (started very slow at KB/s, froze for 10-20 sec, then resumed, reached 90 MB/s, and stabilized around 78 MB/s)

 

I'm running a parity check now at under 11 MB/s (5 days to complete), and based on the Dynamix Statistics plugin, storage is reading around 75 MB/s, occasionally spiking to 90 MB/s.
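A quick way to check whether a single drive is dragging the parity check down is a raw sequential read test on each data disk. A sketch, assuming the data disks are sdb through sdg (device names are assumptions; check yours with `lsblk` first, and run this only while the array is otherwise idle):

```shell
# hdparm -t performs a buffered sequential read test on each raw device:
for d in sdb sdc sdd sde sdf sdg; do
  echo "=== /dev/$d ==="
  hdparm -t /dev/$d
done
```

A healthy 3TB WD Red should show somewhere around 100-150 MB/s here; one disk far below its siblings points at that drive, its cable, or its hot-swap cage.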

sobnology-diagnostics-20180213-1021.zip

6 hours ago, karlpox said:

Do those Dockers affect transfer rates that much? I also have 2 VMs.

Dockers and VMs don't much affect the transfer rates in themselves.

 

But if a Docker or VM performs a download to one of your drives and you want to transfer to or from the same drive at the same time, then you will fight for bandwidth. And not only that: the head seeks between the two file streams will cut away a huge amount of the HDD's available transfer time. And if you have two concurrent array write operations, you also get contention on the parity drive.

 

That is a reason why it's good to partition the data layout so that different tasks mostly use different disks. Alternatively, see if it's possible to use time scheduling so background tasks happen while you are asleep or at work. Some people use a separate drive mounted with the UD plugin for download tasks and move the finalized files to the array during the night.
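As a sketch of the scheduling idea, a standard cron entry can push heavy background work into the night (the mover path below is what unRAID typically uses, but verify it on your own system before relying on it):

```shell
# Run the mover at 03:40 every night instead of during viewing hours.
# Add via `crontab -e` or your scheduling plugin of choice.
40 3 * * * /usr/local/sbin/mover
```

The same pattern works for download clients: restrict their active hours in the app itself, or schedule the post-processing step that writes to the array.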

7 hours ago, karlpox said:

I'm running a parity check now at under 11 MB/s (5 days to complete), and based on the Dynamix Statistics plugin, storage is reading around 75 MB/s, occasionally spiking to 90 MB/s.

 

There are writes to disk6 during the parity check. Stop all reads/writes to the array, wait a couple of minutes, then grab and post new diags.


Archived

This topic is now archived and is closed to further replies.
