Win7 - Unraid 6.7.0 - Seagate IronWolf Pro = 65MB/s :(



Hello dearest community,

 

firstly, let me apologize in case I use some technical terms the wrong way or in case my setup is plain dumb.

I greatly appreciate any help, and I am more than willing to learn ;)

I found countless threads in my forum searches and tried to go through everything to find out why my transfer speed is so slow. Many people seem to have very different causes, and none of the solutions I found have helped me yet. So here I am, starting another slow-transfer-speed thread ;(

 

My situation:

Copying files from my Windows 7 workstation to the storage server (no SSD cache) starts at 120MB/s and ends up at 65MB/s.

Writing to the SSD cache only: 180MB/s with a single 4GB file, 136MB/s with 14GB of files (1,350 files).

 

My workstation setup:

Win 7 Ult. 64bit

i7 6700K

64GB DDR4

Mellanox ConnectX-2 10GbE

10Gbit SFP+ DAC

 

My server setup:

Unraid 6.7.0

2x Xeon X5675

48GB ECC DDR3

Mellanox ConnectX-2 10GbE

10Gbit SFP+ DAC

2x 8TB Seagate IronWolf Pro (data)

1x 8TB Seagate IronWolf Pro (parity)

1x 10TB Seagate Exos X10 (parity)

LSI 9211-8i IT Mode

 

Server and workstation are directly attached, and I put them on their own subnet with the IPs 10.10.10.0 and 10.10.10.1 (is this technically the correct way to say it?).
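
(As an aside on the addressing: in a /24 network, 10.10.10.0 is the network address itself and normally cannot be assigned to a host, so a pairing like 10.10.10.1 and 10.10.10.2 would be the usual choice. A minimal sanity check of such a direct link from the Unraid console could look like this; eth1 is only a placeholder for whatever the 10GbE port is called:)

    # Confirm the address and MTU on the 10GbE interface
    ip addr show eth1        # expect something like 10.10.10.1/24 and mtu 9000
    # Confirm the other machine answers on the same subnet
    ping -c 3 10.10.10.2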

My iperf3 result is 6.97Gbit/s, which would be totally enough for me and more than the drives in the server can handle.
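
(For anyone reproducing the measurement, such a test is typically run like this, with 10.10.10.1 standing in for the server's address:)

    # On the Unraid server, start the listener:
    iperf3 -s
    # On the workstation, run a 10-second test against it:
    iperf3 -c 10.10.10.1
    # Several parallel streams often get closer to line rate:
    iperf3 -c 10.10.10.1 -P 4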

I did a drive speed test that was recommended here on the forums, but I forgot its name. All Seagate drives show 200-250MB/s with no dips below that; the SSD showed 500MB/s without dips.
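
(Independent of that tool, raw sequential read speed can be sanity-checked from the Unraid console; /dev/sdX is a placeholder for the drive under test, and both commands only read:)

    # Quick buffered-read benchmark:
    hdparm -t /dev/sdX
    # Or read 1 GiB straight off the device, bypassing the page cache:
    dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct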

 

My question:

What makes me lose so much of the transfer speed that the drives (and the network) are able to handle? There must be something I clearly don't understand yet.

 

I understand that the server's RAM buffers the transfer until it fills up, but are the drives really only capable of 65MB/s on their own? :/
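
(The initial 120MB/s burst is consistent with exactly that: writes land in the Linux page cache and are flushed to disk in the background, and once the dirty-page limit is reached the transfer throttles down to what the array can actually sustain. The kernel tunables governing this can be inspected on the Unraid console:)

    # Percentage of RAM allowed to hold unwritten ("dirty") data before writers are throttled
    sysctl vm.dirty_ratio vm.dirty_background_ratio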

 

What I see in the server's statistics is drive activity of around 190-200MB/s, which would explain the bottleneck, but where do the additional ~140MB/s go?
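
(A plausible back-of-envelope, assuming the statistics sum the write activity of all drives and both parity drives are assigned: every block written to a data disk must also be written to each parity disk, so the payload rate shows up multiplied:)

     65 MB/s  payload written to the data disk
     65 MB/s  the same blocks written to parity drive 1
     65 MB/s  the same blocks written to parity drive 2
    ---------
    ~195 MB/s total write activity, matching the observed 190-200MB/s

(On top of that, the default read/modify/write method first has to read the old data and parity blocks before writing the new ones, which is why drives that benchmark at 200-250MB/s sequentially can sustain far less during parity-protected writes.)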

The network stats sit at around 450Mbit/s. Right now I am using Unraid's "direct I/O", my MTU is set to 9000, and jumbo packets are also set to 9000 on my Win 7 Mellanox card. I don't think it is the network, though, since I can throw files at the SSD share in parallel to a running transfer to the HDDs and still get the same speeds as listed above.
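
(If jumbo frames are ever in doubt, whether 9000-byte frames really pass end-to-end can be checked from the server console, with 10.10.10.2 standing in for the workstation's address:)

    # 8972 bytes of ICMP payload + 28 bytes of headers = 9000; -M do forbids fragmentation
    ping -M do -s 8972 -c 3 10.10.10.2
    # If this fails while a plain ping works, jumbo frames are not active on the whole path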

 

Writing all this, I feel like I already got a little closer to the actual bottleneck. If it's the drives that limit my transfer speeds to 65MB/s for continuous transfers, are there HDDs with better results? I transfer large batches of photos and some big work files, which means I can't afford an SSD-only server. The SSD cache has basically been useless for me so far, since transfer speeds were still only around 120MB/s for large transfers, and afterwards everything has to be moved to the HDDs at the same slow speed anyway. In that case I prefer to just write to the HDDs directly.

 

As I said, I appreciate any information, help, pointers and/or ideas about the topic! Thanks in advance! 

 

If more information is needed to talk about this case, please let me know!

 

Take care,

Dziga

 

 


Hey trurl,

 

thanks for the link. That explains a lot already. I was wondering whether using only one parity drive would reduce the number of disk operations.

Do I understand correctly, though, that I wouldn't really see a benefit from using single parity in terms of write speeds with direct I/O active?

 

Thanks again!

Dziga

7 hours ago, dzigakaiser said:

If it's the drives that limit my transfer speeds to 65MB/s for continuous transfers, are there HDDs with better results?

It won't be the HDDs; some other issue is causing the slowdown. You may try tuning some parameters and disabling the NIC's "pause frame" (flow control) setting on the Windows side.
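
On Windows this setting is usually under Device Manager → network adapter → Properties → Advanced → "Flow Control" (sometimes labeled "Pause Frames"), set to Disabled. If you want to check the Unraid side as well, a rough sketch, with eth1 standing in for the 10GbE port:

    # Show the current pause-frame (flow control) settings
    ethtool -a eth1
    # Disable pause frames in both directions
    ethtool -A eth1 rx off tx off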

 

3 hours ago, dzigakaiser said:

benefit from using single parity in terms of write speeds with direct I/O active?

Negative.

 

7 hours ago, dzigakaiser said:

Server and workstation are directly attached, and I put them on their own subnet with the IPs 10.10.10.0 and 10.10.10.1 (is this technically the correct way to say it?).

If they are directly connected, they can't be in two different subnets; otherwise they couldn't communicate.

 

 

On 8/9/2019 at 7:42 PM, Benson said:

It won't be the HDDs; some other issue is causing the slowdown. You may try tuning some parameters and disabling the NIC's "pause frame" (flow control) setting on the Windows side.

I tried to find out how to do that, but it seems a little too complicated to grasp at my current networking knowledge level. Could you give me a pointer on what to look for? Does this have something to do with how the Rx and Tx settings are set up? ._.

 

About the subnets I wasn't very precise: I didn't mean that those two are on separate subnets, but that they are on one subnet together, while my other gigabit connection with internet access is on a different one :) Maybe that makes more sense.

 

