Hello dearest community,
Firstly, let me apologize in case I use some technical terms the wrong way or in case my setup is plain dumb.
I greatly appreciate any help and I am more than willing to learn ;)
I found countless threads while searching the forums and tried to work through everything to find out why my transfer speed is so slow. Many people seem to have very different causes, and none of the solutions I found has helped me yet. So, here I am, starting another slow transfer speed thread ;(
My situation:
Copying files from my Windows 7 workstation to the storage server (no SSD cache) starts at 120MB/s and settles at 65MB/s.
With the SSD-only share I get 180MB/s for a single 4GB file and 136MB/s for 14GB of files (1,350 files).
My workstation setup:
Win 7 Ult. 64bit
i7 6700K
64GB DDR4
Mellanox ConnectX-2 10GbE
10Gbit SFP+ DAC
My server setup:
Unraid 6.7.0
2x Xeon X5675
48GB ECC DDR3
Mellanox ConnectX-2 10GbE
10Gbit SFP+ DAC
2x 8TB Seagate Ironwolf Pro
1x 8TB Parity Seagate Ironwolf Pro
1x 10TB Parity Seagate Exos X10
LSI 9211-8i IT Mode
Server and workstation are directly attached, and I put them on a separate subnet with the IPs 10.10.10.0 and 10.10.10.1 (is it technically correct to say it like this?).
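Since I wasn't sure about the addressing myself, I sanity-checked it with Python's ipaddress module. Note the /24 netmask is just my guess at a common configuration, not something I've confirmed on my setup:

```python
import ipaddress

# Assuming a /24 netmask on the direct link (my guess, not confirmed)
net = ipaddress.ip_network("10.10.10.0/24")

print(net.network_address)       # 10.10.10.0 -> this is the network address itself
hosts = list(net.hosts())
print(hosts[0], "-", hosts[-1])  # 10.10.10.1 - 10.10.10.254 -> usable host range
print(ipaddress.ip_address("10.10.10.0") in hosts)  # False -> .0 is not a normal host IP
```

If that assumption holds, assigning the .0 address to a machine is unusual; something like 10.10.10.1 and 10.10.10.2 would be the conventional pair.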
My iperf3 performance is 6.97 Gbit/s, which would be totally enough for me and more than the drives in the server can handle.
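To put that in the same units as the drive speeds, here is the quick conversion (using decimal units, as iperf3 does):

```python
iperf_gbits = 6.97                 # measured iperf3 throughput
mb_per_s = iperf_gbits * 1000 / 8  # Gbit/s -> MB/s (decimal units)
print(round(mb_per_s))             # 871 -> well above what the HDDs can write
```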
I did a drive speed test that was recommended here on the forums, but I forgot its name. All Seagate drives show 200-250MB/s with no dips below that; the SSD showed 500MB/s without dips.
My question:
What makes me lose so much of the transfer speed that the drives (and the network) are able to handle? There must be something I clearly don't understand yet.
I understand that the server's RAM buffers the transfer until it fills up, but are the drives really only capable of 65MB/s on their own? :/
What I see in the server's statistics is drive activity of around 190-200MB/s, which would explain the bottleneck, but where are the additional 140MB/s being used?
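My rough back-of-the-envelope, assuming Unraid does read-modify-write parity updates across both parity drives (I'm not sure that's exactly how it works, so treat these numbers as a sketch):

```python
payload = 65       # MB/s of actual file data reaching the data drive
parity_drives = 2  # one 8TB + one 10TB parity drive

# Each written block also updates both parity drives; in read-modify-write
# mode the old data and old parity have to be read back first.
writes = payload * (1 + parity_drives)
reads = payload * (1 + parity_drives)
print(writes)          # 195 -> roughly the 190-200MB/s of drive activity I see
print(reads + writes)  # 390 -> total disk I/O for only 65MB/s of payload
```

If that sketch is even roughly right, the "missing" bandwidth would be going into parity reads and writes rather than disappearing.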
The network stats sit at 450Mbit/s. Right now I am using Unraid's "direct I/O", my MTUs are set to 9000, and jumbo frames are also set to 9000 on my Win 7 Mellanox card. I don't think it is the network, though, since I can also copy files to the SSD share during a parallel transfer to the HDDs and get the same transfer speeds as listed above.
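One way to verify jumbo frames end to end is a ping with the don't-fragment flag set (`ping -f -l 8972 10.10.10.1` on Windows, `ping -M do -s 8972` on Linux); the payload has to leave room for the IP and ICMP headers:

```python
mtu = 9000       # configured on both NICs
ip_header = 20   # bytes, IPv4 without options
icmp_header = 8  # bytes, ICMP echo
max_payload = mtu - ip_header - icmp_header
print(max_payload)  # 8972 -> largest ping payload that fits unfragmented
```

If a ping with that payload size goes through without fragmentation errors, jumbo frames are working on the whole path.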
Writing all this down, I feel like I already got a little closer to the actual bottleneck. If it is the drives that limit my transfer speeds to 65MB/s for continuous transfers, are there HDDs with better results? I transfer large batches of photos and some large work files, which means I can't afford an SSD-only server. The SSD cache has been basically useless for me so far, since transfer speeds were still only around 120MB/s for large transfers, and afterwards I have to move everything to the HDDs at slow speed anyway. In that case I prefer to transfer directly to the HDDs.
As I said, I appreciate any information, help, pointers and/or ideas about the topic! Thanks in advance!
If more information is needed to talk about this case, please let me know!
Take care,
Dziga