SOLVED: Slow READ speeds - unRAID 6.6.6 - enterprise hardware - Writes 100+ MB/s, Reads ~65 MB/s - HELP!



 

As title.

HP DL180 G6, 64 GB RAM, dual Xeon X5570

6-disk array of 4 TB WD Reds

One 4 TB WD Red parity disk

One 960 GB PCIe NVMe for cache and VMs

One 120 GB SSD as an unassigned disk

 

Write speeds (to the NVMe cache drive) completely saturate my GbE network at 100+ MB/s

Write speeds to the 120 GB SSD (unassigned disk) are also around 100+ MB/s

Read speeds from the array, the NVMe cache, and the 120 GB SSD (unassigned disk) all max out at ~65 MB/s

 

hdparm test in the terminal. These are the 6 disks (+1 parity) that make up the array, plus the 120 GB SSD and the NVMe cache drive:

root@hpdl180g6:~# hdparm -tT /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/nvme0n1

/dev/sdb:
 Timing cached reads:   19330 MB in  1.99 seconds = 9701.13 MB/sec
 Timing buffered disk reads: 530 MB in  3.00 seconds = 176.38 MB/sec

/dev/sdc:
 Timing cached reads:   19264 MB in  1.99 seconds = 9667.14 MB/sec
 Timing buffered disk reads: 516 MB in  3.00 seconds = 171.74 MB/sec

/dev/sdd:
 Timing cached reads:   18796 MB in  1.99 seconds = 9432.50 MB/sec
 Timing buffered disk reads: 470 MB in  3.01 seconds = 156.05 MB/sec

/dev/sde:
 Timing cached reads:   18840 MB in  1.99 seconds = 9453.48 MB/sec
 Timing buffered disk reads: 478 MB in  3.01 seconds = 158.81 MB/sec

/dev/sdf:
 Timing cached reads:   18816 MB in  1.99 seconds = 9441.74 MB/sec
 Timing buffered disk reads: 434 MB in  3.01 seconds = 144.16 MB/sec

/dev/sdg:
 Timing cached reads:   19058 MB in  1.99 seconds = 9563.10 MB/sec
 Timing buffered disk reads: 448 MB in  3.01 seconds = 148.74 MB/sec

/dev/sdh:
 Timing cached reads:   18868 MB in  1.99 seconds = 9467.43 MB/sec
 Timing buffered disk reads: 518 MB in  3.00 seconds = 172.40 MB/sec

/dev/sdi: (cheap 120GB SSD)
 Timing cached reads:   18418 MB in  1.99 seconds = 9241.06 MB/sec
 Timing buffered disk reads: 578 MB in  3.00 seconds = 192.36 MB/sec

/dev/nvme0n1:
 Timing cached reads:   18958 MB in  1.99 seconds = 9513.47 MB/sec
 Timing buffered disk reads: 4350 MB in  3.00 seconds = 1449.83 MB/sec

The NVMe drive is gen3, but the server only has gen2 PCIe slots - hence the slower (boo hoo, 1500 MB/s) reads.
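
(Back-of-envelope check, assuming the card sits in a x4 slot: PCIe gen2 runs at 5 GT/s per lane with 8b/10b encoding, i.e. roughly 500 MB/s of usable bandwidth per lane, so a gen2 x4 link tops out around 2000 MB/s before protocol overhead - the ~1450 MB/s measured above is in the right ballpark.)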

 

CrystalDiskMark 6 on the 120 GB unassigned disk, mapped as a network drive in Windows 7:

   Sequential Read (Q= 32,T= 1) :   118.354 MB/s
  Sequential Write (Q= 32,T= 1) :   117.362 MB/s
  Random Read 4KiB (Q=  8,T= 8) :    88.570 MB/s [  21623.5 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   110.104 MB/s [  26880.9 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :    83.825 MB/s [  20465.1 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   109.400 MB/s [  26709.0 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :     9.412 MB/s [   2297.9 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :     6.245 MB/s [   1524.7 IOPS]

  Test : 4096 MiB [S: 22.3% (24.9/111.7 GiB)] (x1) <0Fill> [Interval=5 sec]
    
and a second run, with the default random test data:
    

   Sequential Read (Q= 32,T= 1) :   118.345 MB/s
  Sequential Write (Q= 32,T= 1) :   117.305 MB/s
  Random Read 4KiB (Q=  8,T= 8) :    87.685 MB/s [  21407.5 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   108.942 MB/s [  26597.2 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :    83.757 MB/s [  20448.5 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   109.225 MB/s [  26666.3 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :     8.299 MB/s [   2026.1 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :     6.240 MB/s [   1523.4 IOPS]

  Test : 4096 MiB [S: 22.3% (24.9/111.7 GiB)] (x1)  [Interval=5 sec]

 

 

iperf3 reports 112 MB/s in both send and receive.
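
(For anyone wanting to reproduce that number: a plain TCP test along these lines is enough. Note that iperf3 natively reports bits per second - ~940 Mbit/s works out to ~112 MiB/s, i.e. a saturated GbE link. SERVER_IP is a placeholder:)

iperf3 -s                    # on the unRAID box
iperf3 -c SERVER_IP -t 30    # on the client; add -R to test the other direction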

 

 

 

As I said, write speeds over the network to the cache drive are flawless at 100+ MB/s.

Write speeds to the array, not using the cache drive and not using parity, are also in excess of 100 MB/s - i.e. writing to one disk at a time.

I have noticed that after removing the parity drive to do read testing, the parity rebuild runs at ~83 MB/s.

Parity is valid
Last checked on Tuesday, 2019-01-22, 12:06 (four days ago), finding 0 errors.
Duration: 13 hours, 15 minutes, 6 seconds. Average speed: 83.9 MB/sec

ifconfig:

root@hpdl180g6:~# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet myIPaddress  netmask 255.255.255.0  broadcast BROADCAST ADDRESS
        inet6 IPV6-MAC ADDRESS  prefixlen 64  scopeid 0x20<link>
        ether MACADDRESS  txqueuelen 1000  (Ethernet)
        RX packets 8983667  bytes 12223721990 (11.3 GiB)
        RX errors 0  dropped 0  overruns 154  frame 0
        TX packets 9531485  bytes 13138810014 (12.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

What am I doing wrong?

This is a pretty much vanilla install of unRAID with all the tools guys like Space Invader One suggest - monitoring tools and the Unassigned Devices plugin.

 

Any ideas, guys?

5 minutes ago, squirrelslikenuts said:

Upon further testing, FTP transfers FROM the unRAID server are averaging around 90 MB/s

That's about what you would expect, right?

 

I'm thinking there might be two changes affecting this:

  1. Do you have 'Enhanced OS X interoperability' set to Yes?  If so, set it to No and see if the transfer rate increases.
  2. We changed TCP congestion control to BBR for Unraid 6.7.  You can change it back to the previous 'reno' default by typing this command:
    echo reno > /proc/sys/net/ipv4/tcp_congestion_control

    Then see if the transfer rate increases.
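
    (Both of these are standard Linux and show which algorithm is currently active, so you can confirm the change took effect:)

    cat /proc/sys/net/ipv4/tcp_congestion_control
    sysctl net.ipv4.tcp_congestion_control

    (The echo setting does not survive a reboot; to make it stick you would re-apply it at boot, e.g. from your go file.)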

1 minute ago, limetech said:

That's about what you would expect, right?

 

I'm thinking there might be two changes affecting this:

  1. Do you have 'Enhanced OS X interoperability' set to Yes?  If so, set it to No and see if the transfer rate increases.
  2. We changed TCP congestion control to BBR for Unraid 6.7.  You can change it back to the previous 'reno' default by typing this command:
    
    echo reno > /proc/sys/net/ipv4/tcp_congestion_control

    Then see if the transfer rate increases.

100 MB/s is what I'd expect for read speeds. The drives are capable of more than 100 MB/s read, and I can write faster than 100 MB/s over the network, so I know the hardware can handle it.

 

Will try your suggestions and report back.

 

I'm 5 days into my 15-day trial extension, and this is basically the last issue to fix before purchasing the license.

 

 

12 minutes ago, squirrelslikenuts said:

Will try your suggestions and report back.

You could also try the latest stable version, 6.6.6, and see if you get similar results.

 

 

Edit: Oops, I see that's what you're already using? (So used to spending all my time lately in prerelease bug reports.) In that case, maybe try version 6.7.0-rc2.

2 hours ago, squirrelslikenuts said:

I'm 100% convinced it's a Samba issue... or the client system I'm testing from...

 

But the problem is that WRITES are saturating GbE... so it can't be a network issue.

For completeness - how fast are writes to the parity-protected array (for a share not using cache)?

 

 

When you do the write tests, what are you writing? (number of files, total size of data).

 

I would expect all writes over the network to occur at line speed until the RAM buffer on the server is full, and you do have plenty of RAM.
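
(A minimal way to see how much the kernel will buffer - these are standard Linux sysctls, though the values Unraid ships with may differ:)

sysctl vm.dirty_ratio vm.dirty_background_ratio

With 64 GB of RAM and vm.dirty_ratio at the common default of 20, roughly 12-13 GB of incoming writes can land in RAM at full line speed before the kernel starts throttling writers down to disk speed.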

 

1 hour ago, gubbgnutten said:

For completeness - how fast are writes to the parity-protected array (for a share not using cache)?

 

 

When you do the write tests, what are you writing? (number of files, total size of data).

 

I would expect all writes over the network to occur at line speed until the RAM buffer on the server is full, and you do have plenty of RAM.

 

Test: Disk 6 (WD Red 4 TB) - cache disabled - using parity

 

Absolutely steady at 102 MB/s WRITE  for ~25 GB - MKV files

 

Steady at 102 MB/s WRITE for the first 25 GB of a 100 GB vmdk file (VMware disk image)

Around the 25 GB mark, the write slows to ~59 MB/s

 

I will say that I have roughly 20 Mbit of network bandwidth being used by IP cameras, but they push data to a different physical server. I'm not expecting 125 MB/s read and write, but I would like to see 100 MB/s both ways.

 

The cache drive is a 960 GB NVMe - I doubt I'd push more than that to the array at any given time.

 

EVERYONE on the net says "wahhh wahhh my writes are slow" - I seem to be the only one getting 100 MB/s writes out of the box, and here I am complaining that my reads are slow.

 

 

My network is as follows

 

Internet -> pfSense box -> GbE switch -> many computers including the server

Static IPs set on all devices

 

 

 

 


A few things to try:

 

- see if there's any difference reading from a disk share vs. a user share

- toggle Direct IO (Settings -> Global Share settings)

- some time ago, using an older SMB version could make a big difference; not so much recently, but worth a try. For example, to limit to SMB 2, add this to "Samba extra configuration" under Settings -> SMB:

max protocol = SMB2_02
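
(After reconnecting from the client, you can confirm which dialect was actually negotiated from the server console - smbstatus ships with Samba, and recent versions list the protocol version per session:)

smbstatus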

 

1 hour ago, johnnie.black said:

A few things to try:

 

- see if there's any difference reading from a disk share vs. a user share

- toggle Direct IO (Settings -> Global Share settings)

- some time ago, using an older SMB version could make a big difference; not so much recently, but worth a try. For example, to limit to SMB 2, add this to "Samba extra configuration" under Settings -> SMB:


max protocol = SMB2_02

 

NO CACHE - Disk 6 share - empty drive - USES PARITY (not a user share)

Write from Win7 -> unRAID - 48GB MKV

102 MB/s, tapering to ~50-60 MB/s around the 23 GB mark

72 MB/s Average Speed

Average CPU Load is 8-9%

 

iotop reveals 4 instances of the process below, each at ~25 MB/s (while running at full speed)

shfs /mnt/user -disks 127 2048000000 -o noatime,big_writes,allow_other -o direct_io -o remember=0

Direct I/O is/was enabled 
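
(For anyone wanting to reproduce this view, per-process I/O rates like the above can be watched with something like:)

iotop -o -P    # -o shows only processes actually doing I/O, -P groups threads by process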

 

Read speed starts out around 3.5 MB/s (I didn't catch what it maxed at)

20 GB MKV - 46 MB/s average read (saving to an SSD on the Windows machine)

 

 

120 GB SSD (unassigned)

Write speeds to the unassigned 120 GB SSD on the unRAID system (outside the array, not cached)

48 GB MKV

Steady at ~103 MB/s write - dipped to ~65 MB/s within the last 2% of the copy

Average 100 MB/s for the full 48 GB

 

Read speed started off at 500 KB/s, then ramped to 65 MB/s and stayed there until complete

Average 61 MB/s

 

 

I will try the max protocol SMB setting you suggested next. If that fails, I will reduce the network clutter down to just the unRAID box and the client machine connected to the switch. I can't see that helping, though, as the iperf tests and write speeds look flawless.

 

On 1/28/2019 at 10:54 AM, limetech said:

Moving this to a Bug Report so we can keep track of it.

I've just recreated the issue in 6.6.6 on completely different hardware (IBM x3550 M3, dual Xeon, 32 GB RAM, etc.)

 

Exact same symptoms. 101 MB/s write. 65 MB/s read through SMB.

 

~106 MB/s write and ~90-95 MB/s read using the unRAID FTP server.

 

The last thing I will try is isolating the unRAID box and the Windows machine on their own switch.

 

11 hours ago, squirrelslikenuts said:

I've just recreated the issue in 6.6.6 on completely different hardware (ibm x3550 m3 dual xeon 32gb ram etc)

Given that you see the issue on two different builds, I'm inclined to think it's in your environment and not under unRAID's control. Otherwise, huge numbers of other users would be seeing the same thing.


I've isolated 2 unRAID servers on a separate switch with no internet access or other devices. Static IP addresses assigned. Cat6 cable all around.

 

Each server runs dual Xeon X5570s with 32 or 64 GB RAM and onboard Broadcom NICs.

One server is running 6.6.6, the other 6.7.0.

iperf3, run as client and server on each machine, reports 112 MB/s sustained transfers.

 

A Windows VM on one machine (on the NVMe cache drive) writing to the other server's SMB NVMe share (or SSD share) reports over 100 MB/s transfer.

The same Windows VM copying data from the other server's SMB NVMe share maxes out at 65 MB/s. The Windows 10 file transfer graph looks like a rollercoaster (for reading).

 

I'm at a loss.

 

The last 2 things I will try are PCIe Intel NICs and an earlier version of unRAID, like 6.5.

 

Can anyone else try reading data from an unRAID cache drive or SSD over GbE and report back speeds?
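
(To take the client's own disk out of the equation: from any Linux box with the share mounted via CIFS, something like the line below reads a large file over SMB and discards it, so only the network/SMB path is measured. The mount point and file name are placeholders:)

dd if=/mnt/unraid/some_large_file.mkv of=/dev/null bs=1M status=progress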

 

