• Slow READ speeds - unRAID 6.6.6 - enterprise hardware - Writes 100+ MB/s, Reads ~65 MB/s - HELP!


    squirrelslikenuts
    • Minor
    Message added by limetech

    Please be aware that these comments were copied here from another source and that the date and time shown for each comment may not be accurate.

     

    As title.

    HP DL180 G6, 64GB RAM, dual Xeon X5570

    6-disk 4TB WD Red array

    1x 4TB WD Red parity disk

    1x PCIe 960GB NVMe for cache and VMs

    1x 120GB SSD as an unassigned disk

     

    Write speeds (to the NVMe cache drive) completely saturate my GbE network at 100+ MB/s.

    Write speeds to the 120GB SSD (unassigned disk) are also around 100+ MB/s.

    Read speeds from the array, the NVMe cache and the 120GB SSD (unassigned disk) all max out at around ~65 MB/s.

     

    hdparm test in the terminal - these are the 6 disks (+1 parity) that make up the array, plus the 120GB SSD and the NVMe cache drive:

    root@hpdl180g6:~# hdparm -tT /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/nvme0n1
    
    /dev/sdb:
     Timing cached reads:   19330 MB in  1.99 seconds = 9701.13 MB/sec
     Timing buffered disk reads: 530 MB in  3.00 seconds = 176.38 MB/sec
    
    /dev/sdc:
     Timing cached reads:   19264 MB in  1.99 seconds = 9667.14 MB/sec
     Timing buffered disk reads: 516 MB in  3.00 seconds = 171.74 MB/sec
    
    /dev/sdd:
     Timing cached reads:   18796 MB in  1.99 seconds = 9432.50 MB/sec
     Timing buffered disk reads: 470 MB in  3.01 seconds = 156.05 MB/sec
    
    /dev/sde:
     Timing cached reads:   18840 MB in  1.99 seconds = 9453.48 MB/sec
     Timing buffered disk reads: 478 MB in  3.01 seconds = 158.81 MB/sec
    
    /dev/sdf:
     Timing cached reads:   18816 MB in  1.99 seconds = 9441.74 MB/sec
     Timing buffered disk reads: 434 MB in  3.01 seconds = 144.16 MB/sec
    
    /dev/sdg:
     Timing cached reads:   19058 MB in  1.99 seconds = 9563.10 MB/sec
     Timing buffered disk reads: 448 MB in  3.01 seconds = 148.74 MB/sec
    
    /dev/sdh:
     Timing cached reads:   18868 MB in  1.99 seconds = 9467.43 MB/sec
     Timing buffered disk reads: 518 MB in  3.00 seconds = 172.40 MB/sec
    
    /dev/sdi: (shitty 120GB ssd)
     Timing cached reads:   18418 MB in  1.99 seconds = 9241.06 MB/sec
     Timing buffered disk reads: 578 MB in  3.00 seconds = 192.36 MB/sec
    
    /dev/nvme0n1:
     Timing cached reads:   18958 MB in  1.99 seconds = 9513.47 MB/sec
     Timing buffered disk reads: 4350 MB in  3.00 seconds = 1449.83 MB/sec

    The NVMe drive is Gen3 but the server only has a Gen2 PCIe slot - hence the slower (boo hoo, 1500 MB/s) reads.
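    hdparm only measures the raw devices; to include the filesystem in a local test (still leaving SMB and the network out of it), a large file already on the array can be read back with dd - a rough sketch with a hypothetical file path, dropping the page cache first so the read actually hits the disk:

    sync && echo 3 > /proc/sys/vm/drop_caches   # flush and clear the page cache so nothing is served from RAM
    dd if=/mnt/disk1/some_large_file.mkv of=/dev/null bs=1M status=progress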

     

    CrystalDiskMark 6 on the 120GB unassigned disk - mapped as a network drive in Windows 7:

       Sequential Read (Q= 32,T= 1) :   118.354 MB/s
      Sequential Write (Q= 32,T= 1) :   117.362 MB/s
      Random Read 4KiB (Q=  8,T= 8) :    88.570 MB/s [  21623.5 IOPS]
     Random Write 4KiB (Q=  8,T= 8) :   110.104 MB/s [  26880.9 IOPS]
      Random Read 4KiB (Q= 32,T= 1) :    83.825 MB/s [  20465.1 IOPS]
     Random Write 4KiB (Q= 32,T= 1) :   109.400 MB/s [  26709.0 IOPS]
      Random Read 4KiB (Q=  1,T= 1) :     9.412 MB/s [   2297.9 IOPS]
     Random Write 4KiB (Q=  1,T= 1) :     6.245 MB/s [   1524.7 IOPS]
    
      Test : 4096 MiB [S: 22.3% (24.9/111.7 GiB)] (x1) <0Fill> [Interval=5 sec]
        
    and
        
    
       Sequential Read (Q= 32,T= 1) :   118.345 MB/s
      Sequential Write (Q= 32,T= 1) :   117.305 MB/s
      Random Read 4KiB (Q=  8,T= 8) :    87.685 MB/s [  21407.5 IOPS]
     Random Write 4KiB (Q=  8,T= 8) :   108.942 MB/s [  26597.2 IOPS]
      Random Read 4KiB (Q= 32,T= 1) :    83.757 MB/s [  20448.5 IOPS]
     Random Write 4KiB (Q= 32,T= 1) :   109.225 MB/s [  26666.3 IOPS]
      Random Read 4KiB (Q=  1,T= 1) :     8.299 MB/s [   2026.1 IOPS]
     Random Write 4KiB (Q=  1,T= 1) :     6.240 MB/s [   1523.4 IOPS]
    
      Test : 4096 MiB [S: 22.3% (24.9/111.7 GiB)] (x1)  [Interval=5 sec]

     

     

    iperf3 reports 112 MB/s in both send and receive.
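    Since writes and reads cross the wire in opposite directions, it is worth running iperf3 both ways; a minimal sketch (the server address is a placeholder):

    iperf3 -s                     # on the unRAID box
    iperf3 -c 192.168.1.10        # on the Windows client: client -> server, the "write" direction
    iperf3 -c 192.168.1.10 -R     # -R reverses the test: server -> client, the "read" direction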

     

     

     

    As I said, write speeds over the network to the cache drive are flawless at 100+ MB/s.

    Write speeds to the array, not using the cache drive and not using parity, are also in excess of 100 MB/s - i.e. writing to 1 disk at a time.

    I have noticed that after removing the parity drive to do read testing, the rebuild of the parity is operating at ~83 MB/s.

    Parity is valid
    Last checked on Tuesday, 2019-01-22, 12:06 (four days ago), finding 0 errors.
    Duration: 13 hours, 15 minutes, 6 seconds. Average speed: 83.9 MB/sec

    ifconfig

    root@hpdl180g6:~# ifconfig
    bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
            inet myIPaddress  netmask 255.255.255.0  broadcast BROADCAST ADDRESS
            inet6 IPV6-MAC ADDRESS  prefixlen 64  scopeid 0x20<link>
            ether MACADDRESS  txqueuelen 1000  (Ethernet)
            RX packets 8983667  bytes 12223721990 (11.3 GiB)
            RX errors 0  dropped 0  overruns 154  frame 0
            TX packets 9531485  bytes 13138810014 (12.2 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

     

    What am I doing wrong????

    This is a pretty much vanilla install of unRAID with all the tools guys like Space Invader One suggest - monitoring tools and the unassigned devices plugin.

     

    Any ideas, guys?




    User Feedback

    Recommended Comments

    5 minutes ago, squirrelslikenuts said:

    Upon further testing, FTP transfers FROM the unRAID server are averaging around 90 MB/s

    That's about what you would expect, right?

     

    I'm thinking there might be two changes affecting this:

    1. Do you have 'Enhanced OS X interoperability' set to Yes?  If so, set to No and see if transfer rate increases.
    2. We changed TCP congestion control to BBR for Unraid 6.7.  You can change it back to the previous 'reno' default by typing this command:
      echo reno > /proc/sys/net/ipv4/tcp_congestion_control

      Then see if transfer rate increases.

    Link to comment
    1 minute ago, limetech said:

    That's about what you would expect, right?

     

    I'm thinking there might be two changes affecting this:

    1. Do you have 'Enhanced OS X interoperability' set to Yes?  If so, set to No and see if transfer rate increases.
    2. We changed TCP congestion control to BBR for Unraid 6.7.  You can change it back to the previous 'reno' default by typing this command:
      
      echo reno > /proc/sys/net/ipv4/tcp_congestion_control

      Then see if transfer rate increases.

    100 MB/s is what I'd expect for read speeds. The drives are capable of more than 100 MB/s read. And I can write at faster than 100 MB/s over the network, so I know the hardware can handle it.

     

    Will try your suggestions and report back.

     

    I'm 5 days into my 15-day trial extension and basically this is the last issue to fix before purchasing the license.

     

     

    Link to comment
    12 minutes ago, squirrelslikenuts said:

    Will try your suggestions and report back.

    You could also try the latest stable version, 6.6.6, and see if there are similar results.

     

     

    Edit: Oops, I see that's what you're already using? (So used to spending all my time lately in prerelease bug reports.) If that's the case, maybe try version 6.7.0-rc2.

    Link to comment

    'Enhanced OS X interoperability' is/was set to NO on all shares

     

    echo reno > /proc/sys/net/ipv4/tcp_congestion_control

     

    makes no measurable difference - do I need to reboot the server after running it?
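    (Side note on that question: writes to /proc/sys take effect immediately for new connections, so no reboot should be needed - although the change will not survive one. The active and available algorithms can be checked with:)

    cat /proc/sys/net/ipv4/tcp_congestion_control             # currently active default
    cat /proc/sys/net/ipv4/tcp_available_congestion_control   # algorithms the kernel has loaded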

     

    Link to comment

    I'm 100% convinced it's a Samba issue... or the client system I'm testing from...

    But the problem is that WRITES are saturating GbE... so it can't be a network issue.

    Link to comment
    2 hours ago, squirrelslikenuts said:

    I'm 100% convinced it's a Samba issue... or the client system I'm testing from...

    But the problem is that WRITES are saturating GbE... so it can't be a network issue.

    Just to be sure, is this with 6.6.6 or 6.7.0-rc2?

    Link to comment
    2 hours ago, squirrelslikenuts said:

    I'm 100% convinced it's a Samba issue... or the client system I'm testing from...

    But the problem is that WRITES are saturating GbE... so it can't be a network issue.

    For completeness - How fast are writes to the parity protected array (for a share not using cache)?

     

     

    When you do the write tests, what are you writing? (number of files, total size of data).

     

    I would expect all writes over the network to occur at line speed until the RAM buffer on the server is full, and you do have plenty of RAM.
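    If it helps, the kernel thresholds that decide how much of that RAM can fill with dirty (not yet written) data before writers are throttled can be inspected on the server with something like:

    sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_bytes vm.dirty_background_bytes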

     

    Link to comment
    6 hours ago, squirrelslikenuts said:

    the rebuild of the parity is operating at ~83 MB/s.

    Maybe obvious, but are you building parity during any of this testing?

    Link to comment
    18 minutes ago, trurl said:

    Maybe obvious, but are you building parity during any of this testing?

    No sir, parity is sitting doing nothing at all - the array is clean

    Link to comment
    1 hour ago, gubbgnutten said:

    For completeness - How fast are writes to the parity protected array (for a share not using cache)?

     

     

    When you do the write tests, what are you writing? (number of files, total size of data).

     

    I would expect all writes over the network to occur at line speed until the RAM buffer on the server is full, and you do have plenty of RAM.

     

    Test: Disk 6 (WD Red 4TB) - cache disabled - using parity

     

    Absolutely steady at 102 MB/s WRITE for ~25 GB of MKV files

     

    Steady at 102 MB/s WRITE for a 100GB VMDK file (VMware disk image)

    Around the 25GB mark, the write slows to ~59 MB/s

     

    I will say that I have about ~20 Mbit of bandwidth on the network being used by IP cameras, but they are pushing data to a different physical server on the network. I'm not expecting 125 MB/s read and write, but I would like to see 100 MB/s both ways.

     

    The cache drive is a 960GB NVMe - I doubt I'd push more than that to the array at any given time.

     

    EVERYONE on the net says "wahhh wahhh, my writes are slow" - I seem to be the only one able to achieve 100 MB/s writes out of the box but complaining that my reads are slow.

     

     

    My network is as follows

     

    Internet -> pfSense box -> GbE switch -> many computers including the server

    Static IPs are set on all devices

     

     

     

     

    Link to comment

    A few things to try:

     

    - see if there's any difference reading from a disk share vs. a user share

    - toggle Direct IO (Settings -> Global Share settings)

    - some time ago using an older SMB version could make a big difference; not so much recently, but it's worth a try - for example, to limit to SMB2, add this to "Samba extra configuration" on Settings -> SMB:

    max protocol = SMB2_02
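    After reconnecting from the client, the negotiated dialect can be double-checked on the server; on the Samba 4.x builds Unraid ships, smbstatus should list each connection along with the protocol version it negotiated (e.g. SMB2_02 once the limit is in place):

    smbstatus   # check the protocol/dialect column in the connection listing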

     

    Link to comment
    1 hour ago, johnnie.black said:

    A few things to try:

     

    - see if there's any difference reading from a disk share vs. a user share

    - toggle Direct IO (Settings -> Global Share settings)

    - some time ago using an older SMB version could make a big difference; not so much recently, but it's worth a try - for example, to limit to SMB2, add this to "Samba extra configuration" on Settings -> SMB:

    
    max protocol = SMB2_02

     

    NO CACHE - Disk 6 Share - Empty drive - USES PARITY (not user share)

    Write from Win7 -> unRAID - 48GB MKV

    102 MB/s tapering to ~50-60 MB/s around the 23GB mark

    72 MB/s Average Speed

    Average CPU Load is 8-9%

     

    iotop reveals 4 instances of the process below at ~25 MB/s each (while running at full speed):

    shfs /mnt/user -disks 127 2048000000 -o noatime,big_writes,allow_other -o direct_io -o remember=0

    Direct I/O is/was enabled 

     

    Read speed starts out around 3.5 MB/s (I didn't catch what it maxed at)

    20GB MKV - 46 MB/s average read (saving to the SSD on the Windows machine)
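    To follow up on the disk share vs. user share idea, the same file can also be read locally through both paths to see whether the shfs/FUSE layer itself is the bottleneck - a rough sketch with an example path, clearing the page cache before each run:

    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/user/Media/test.mkv of=/dev/null bs=1M status=progress    # through the FUSE user share
    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/disk6/Media/test.mkv of=/dev/null bs=1M status=progress   # straight off the disk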

     

     

    120GB SSD (unassigned)

    Write speeds to the unassigned 120GB SSD on the unRAID system (outside the array and not cached):

    48 GB MKV

    Steady at ~103 MB/s write - Spiked down to ~65 MB/s within the last 2% of the copy

    Average 100 MB/s for the full 48 GB

     

    Read speed started off at 500 KB/s, then ramped to 65 MB/s and stayed there until complete

    Average 61 MB/s

     

     

    I will try the max protocol SMB setting you suggested next. If that fails, I will reduce the network clutter down to just the unRAID box and the client machine connected to the switch. I can't see that helping, though, as the iperf speed tests and write speeds look flawless.

     

    Link to comment


    This is now closed for further comments
