Cache Speed Oddity between servers


noacess


Hey guys,

 

I'm hoping to get some ideas about an issue I'm having with cache drive speed. I'm trying to move my array from a Dell R630 (16 cores, 192 GB RAM) to an R620 (12 cores, 16 GB RAM). My HDDs are connected via an LSI SAS9207-8e to a Lenovo SA120. The R630 has an H330 RAID controller for its drive bays; the R620 has an H710p. Both servers have 10 Gb Ethernet cards (Dell 0C63DV / C63DV Intel dual port). For cache drives I'm using two Hitachi SSDs (HUSSL4040ASS600) in hardware RAID 0.

 

When I have this configuration set up on the R630 I can transfer at about 1 GB/s. On the R620 I can only transfer at 450-500 MB/s, and I have no idea why or what to check. I tried both write-through and write-back settings on the H710p, but that didn't seem to make any difference. iperf on the R620 gives me a 9 Gbit/s connection, so networking shouldn't be the issue. I've attached diagnostics.
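
For reference, the iperf check was along these lines (the IP below is just a placeholder for the server's address):

# On the Unraid server:
iperf -s

# On the Windows 10 workstation, pointing at the server:
iperf -c 192.168.1.10 -t 30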

 

Thanks for any help/ideas/insight!

 

 

tower-diagnostics-20181102-1638.zip

 

15 hours ago, johnnie.black said:

By transfer do you mean read, write, or both? From/to where?

 

I'm copying a 5 GB file from a Windows 10 workstation that's also connected via a 10 Gb NIC to a cache-only SMB share. The file on the workstation resides on a PCIe SSD, so the copy speed is what I mean by transfer speed. Hopefully that clarifies a bit. Also note that this is the same machine I did the iperf test from.

 

Thanks!

3 hours ago, johnnie.black said:

You can run the script below to confirm the network isn't the problem, and if it isn't, you can swap some of the hardware around to try to find out what the issue is, e.g., swap the SSDs from one server to the other to rule them out.

 

To run the test, copy the script to the root of your flash drive and then type:


/boot/write_speed_test.sh /mnt/cache/test.dat

 

write_speed_test.sh

 

 

 

Well, it's definitely not network related, as you can see below. I have an H310 RAID controller on order to see if that makes a difference. In the meantime I'm going to break the RAID 0 and test individual drive speed with the script to see what that yields. Thanks for the script!
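
For anyone reading later: judging by its output, the script is essentially a timed dd write. A rough sketch of the idea (my paraphrase; the attached write_speed_test.sh above is the real thing):

#!/bin/bash
# Sketch: write ~10 GB of zeros to the path given as $1, printing dd's
# throughput stats every 5 seconds (SIGUSR1 makes GNU dd emit them).
OUT="$1"
echo "writing 10240000000 bytes to: $OUT"
dd if=/dev/zero of="$OUT" bs=1024 count=10000000 &
DD_PID=$!
sleep 5
while kill -USR1 "$DD_PID" 2>/dev/null; do
    sleep 5
done
echo "write complete, syncing"
sync
rm -v "$OUT"

And here's what it reported on the R620's cache: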

 

 

root@Tower:/boot# ./write_speed_test.sh /mnt/cache/test.dat
writing 10240000000 bytes to: /mnt/cache/test.dat
1211290+0 records in
1211289+0 records out
1240359936 bytes (1.2 GB, 1.2 GiB) copied, 5.00092 s, 248 MB/s
2426523+0 records in
2426522+0 records out
2484758528 bytes (2.5 GB, 2.3 GiB) copied, 10.0036 s, 248 MB/s
3396197+0 records in
3396197+0 records out
3477705728 bytes (3.5 GB, 3.2 GiB) copied, 15.0063 s, 232 MB/s
4489504+0 records in
4489504+0 records out
4597252096 bytes (4.6 GB, 4.3 GiB) copied, 20.0113 s, 230 MB/s
5589321+0 records in
5589321+0 records out
5723464704 bytes (5.7 GB, 5.3 GiB) copied, 25.0165 s, 229 MB/s
6708500+0 records in
6708500+0 records out
6869504000 bytes (6.9 GB, 6.4 GiB) copied, 30.0218 s, 229 MB/s
7930174+0 records in
7930174+0 records out
8120498176 bytes (8.1 GB, 7.6 GiB) copied, 35.0271 s, 232 MB/s
9148055+0 records in
9148054+0 records out
9367607296 bytes (9.4 GB, 8.7 GiB) copied, 40.032 s, 234 MB/s
10000000+0 records in
10000000+0 records out
10240000000 bytes (10 GB, 9.5 GiB) copied, 43.5737 s, 235 MB/s
write complete, syncing
removed '/mnt/cache/test.dat'

 

  • 2 weeks later...

So over the last couple of weeks I've swapped the RAID controller from the H710p to an H310 and replaced the backplane (from a 4-bay to an 8-bay). I've also swapped in a different set of SATA3 SSDs so I can keep my R630 up and running while I do testing on the R620.

 

With the hardware above swapped out, I'm still only able to copy at about 350 MB/s over a 10 Gbit connection to a cache-only SMB share. Today I decided to install Windows Server 2016 and use the same RAID 0 cache drive to see what my transfer speed would be. That yielded an SMB file copy of 800-850 MB/s, which is more in line with what I'd expect. Are there any other Unraid tunables or logs I can look at to figure out why the SMB file copy is capping out at 350 MB/s?

 

Thanks!

16 hours ago, noacess said:

Are there any other Unraid tunables or logs I can look at to figure out why the SMB file copy is capping out at 350 MB/s?

The only things I need to get 1 GB/s are jumbo frames and direct I/O enabled. Does it make any difference if you transfer directly to cache instead of to a share that uses cache?
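
Roughly what that looks like from the console, if it helps (eth0 is a placeholder for your 10 Gb NIC, and the switch plus the Windows side have to allow 9000-byte frames too):

# Set jumbo frames on the server NIC (make it permanent in
# Settings > Network Settings rather than here):
ip link set eth0 mtu 9000

# Confirm the new MTU took effect:
ip link show eth0 | grep -o 'mtu [0-9]*'

Direct I/O is a toggle in the web UI, under Settings > Global Share Settings if I remember right.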
