Slow Write Speeds (test data supplied)



See this thread for original discussion:  http://lime-technology.com/forum/index.php?topic=22373.0

 

So, apparently my write speeds are not up to snuff and I never even knew it.  They always seemed adequate to me, but once it was brought to my attention that I should be seeing much better speeds, I decided to run some tests on different betas/RCs.  I was really hoping to see a dramatic increase in speed with some of the older releases, but that just didn't happen.  The chart below shows the speeds that I recorded on 6 different versions of unRAID 5, b10 through RC8a.  I had intended to also test b8d, but it apparently does not have a compatible NIC driver, and my server is now offline while I am at work.  :S

 

Anyway, the command that I used to test write speeds (parity still enabled in each case) is as follows:

 

dd if=/dev/zero of=/mnt/disk2/test.dd bs=64k count=8k conv=fdatasync

 

I did this for disks 1 - 6 and also the cache disk.
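For anyone who wants to repeat the test across all of the drives, something along these lines should work (just a sketch, not the exact script I used; adjust the disk list to match your own array):

for d in disk1 disk2 disk3 disk4 disk5 disk6 cache; do
  echo "=== /mnt/$d ==="
  # dd prints its throughput summary on stderr; keep just that line
  dd if=/dev/zero of=/mnt/$d/test.dd bs=64k count=8k conv=fdatasync 2>&1 | tail -n1
  rm -f /mnt/$d/test.dd
done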

 

speed.jpg

 

As you can see, b12a/b14 seems to be the sweet spot; however, I am still not seeing the speeds that I should (35MB/s – 40MB/s).  The highest speed I achieved in any given test was 31MB/s, and the mean seems to be closer to 24MB/s (excluding the cache drive tests).

 

The specs of my system are as follows:

 

Motherboard:  ASUS P8Z68-V LX (http://www.asus.com/Motherboards/Intel_Socket_1155/P8Z68V_LX/)

 

    CPU Support

    Intel® Socket 1155 for 2nd Generation Core™ i7/Core™ i5/Core™ i3 Processors

    Supports Intel® 32 nm CPU

    Supports Intel® Turbo Boost Technology 2

 

    Chipset

    Intel® Z68

 

    Expansion Slots

    1 x PCIe 2.0 x16 (blue)

    1 x PCIe 2.0 x16 (x4 mode, black)

    2 x PCIe 2.0 x1

    3 x PCI

 

    Storage

    Intel® Z68 chipset :

    2 x SATA 6Gb/s port(s), gray

    4 x SATA 3Gb/s port(s), blue

    Supports RAID 0, 1, 5, 10

    Supports Intel® Smart Response Technology on 2nd generation Intel® Core™ processor family

 

Processor: Intel Core i3 2120 LGA1155 3.30 GHz Boxed Processor

Memory:  Micro Center 4GB (2x 2GB) DDR3-1333 (PC3-10666) CL9 Dual Channel Desktop Memory Kit

Storage Controllers:  2x AOC-SASLP-MV8 (each in a PCI-e 16x slot)

Cables: 1x 3ware CBL-SFF8087OCF-05M 1 Unit of .5M Multi-lane Internal (SFF-8087) Serial ATA Breakout Cable (http://www.newegg.com/Product/Product.aspx?Item=N82E16816116097)

          4x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable – OEM (http://www.newegg.com/Product/Product.aspx?Item=N82E16816133034)

 

I will attach an RC8a syslog as soon as I get home and get the server back online.

 

I find it very odd that I am seeing the same results with 2 VERY different combinations of motherboards/CPUs/memory (specs for configuration A can be found in the thread referenced at the top).  The constants in both configurations are the case, PSU, storage controllers and cables.  Since I experienced the same slow speeds on two different configurations, is it possible that my issue lies with one of the constants (i.e. the cables)?

 

Can you guys take a look and let me know if you see any glaring issues or if there are any tests that you would like me to try?

 

TIA for the help!

 

John

 


I think you may be splitting hairs with your current drive selection and setup.

If you are getting 24-32MB/s, then that is pretty normal without kernel and/or md buffer tuning.

 

On my 4.7 Settings page I have the following adjusted.

You can try this, although I would only recommend doing it if you have 2-4GB of RAM.

Also try moving your parity drive to the onboard controller, and/or put parity and cache together on the same controller, separate from the other drives.

Tunable (md_num_stripes): user-set
Tunable (md_write_limit): user-set
Tunable (md_sync_window): user-set
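If you want to try values from the console instead of the Settings page, something along these lines should apply them (the numbers here are only illustrative, not my actual settings, and I'm assuming your release's mdcmd accepts these tunable names):

/root/mdcmd set md_num_stripes 2560
/root/mdcmd set md_write_limit 768
/root/mdcmd set md_sync_window 1024

As far as I know, these revert at reboot unless you also set them on the Settings page.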


It's interesting how the two EARS drives have very consistent values across all betas, yet one is faster than the other.

It could be a difference between the models in platter count vs. platter density.

 

 

You could also consider a different parity drive, perhaps one of the fastest 3TB drives currently available.

I remember reading that the Hitachi drives were good at random I/O but not sequential I/O, but that could be an older issue.

 

 


As noted above, 24-32MB/s is pretty normal without kernel and/or md buffer tuning, especially since many of your drives are not 7200RPM drives.

 

Yes, platter density and internal buffering can affect I/O speed of a disk.


Fresh syslog attached...

 

BTW, are these entries normal if my PCI-E slots are 16x each (4x electrical on the second one)?

 

Sep 25 14:41:28 unRAID kernel: mvsas 0000:01:00.0: mvsas: PCI-E x4, Bandwidth Usage: 2.5 Gbps

Sep 25 14:41:28 unRAID kernel: ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x18 impl SATA mode

Sep 25 14:41:28 unRAID kernel: ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part ems apst

Sep 25 14:41:28 unRAID kernel: ahci 0000:00:1f.2: setting latency timer to 64

 

Sep 25 14:41:28 unRAID kernel: mvsas 0000:02:00.0: mvsas: driver version 0.8.16

Sep 25 14:41:28 unRAID kernel: mvsas 0000:02:00.0: mvsas: PCI-E x1, Bandwidth Usage: 2.5 Gbps

 

John

syslog.txt


I checked a lot of syslogs containing that 'PCI-E x4' line, and ALL of them report 'Bandwidth Usage: 2.5 Gbps'.

I suspect it is either the bandwidth per lane, or that the cards using mvsas have chipsets limited to a maximum bandwidth of 2.5 Gbps, or that the current driver is limited to it, or that there is an mvsas setting that would enable full bandwidth utilization.
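If anyone wants to check what a given card actually negotiated (assuming lspci is available on the box), the link fields for the controller's PCI address from the syslog should show it:

lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

LnkCap is what the card/slot advertises; LnkSta is the width and speed the link is actually running at.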


This should be reported to Tom, so he can peek at the driver.

However, this is not the issue for the speed question.

According to the chart, the cache drive gets almost 110MB/s.

This means the bus/bandwidth is fine for a single drive.

A PCIe x1 card can handle 2 drives at full speed.

There's something else going on with your setup if you feel you should be getting more speed.

However without software tuning, possible rearrangement of drives and/or a better faster parity drive, it might be pointless.

I think you are getting average speeds for the drives and filesystem fill.

 

Sheesh on my 1TB drive I get 3.8MB/s near 97% capacity.  Filesystem usage can play a role in it.

To really determine speed you need an empty data drive to work with.

 

What is your ending parity sync rate?
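If you are not sure, the ending rate is logged when the sync finishes; assuming the syslog is still at /var/log/syslog, something like this pulls the most recent one:

grep 'md: sync done' /var/log/syslog | tail -n1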


I actually just ran a parity sync last night...60MB/s, almost on the nose.

 

John


That is within reason.

 

As are some of the data rates you are getting.

 

 

Here are mine:

root@atlas /boot/logs #more syslog-mdsync-history

Jul 14 21:22:10 atlas kernel: md: sync done. time=18244sec rate=53538K/sec
Sep  8 03:23:54 atlas kernel: md: sync done. time=32874sec rate=65323K/sec

On Sep 8th I added another 2TB drive. All other drives were max 1.5TB.
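(Those rates are in K/sec, so that works out to roughly 52MB/s for the first sync and 64MB/s for the second.)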

 

 

Have you tried moving the parity to the internal controller?

Keep parity and cache on the internal controller while the data drives are on the external controller.

 

Another choice is an actual PCIe X1 controller on one of the X1 slots for parity.

Or invest in a used ARC-1200 if you want more of a boost.

 

Have you tried the other suggestions on MD buffer tunings?

 


That is exactly how I have it right now: parity and cache on the onboard SATA, 3 data drives on one card and 3 data drives on the other card.

 

I'll look into the tuning.

 

Thanks weebo.

 

John


According to the chart, parity and cache are on SASLP #2, so I would be interested in seeing new benchmarks (just for education).

I'm wondering if there are any differences. Just use the latest version of unRAID or whatever you have loaded.

 

 

I know that when I was doing tests with RAID0 using the Silicon Image chipset and the ARC-1200 controller, the speed increase was pretty small. I did it anyway because the random I/O and burst write speed increased enough.

After re-tuning, my burst writes are 50-60MB/s for smaller files (which is what I needed).

That mattered to me because when you are developing or compiling lots of source over NFS, you really do notice the speed issues.


I noticed the following line in a syslog (of pras1011):

 

mvsas: PCI-E x8, Bandwidth Usage: 5.0 Gbps

 

This x8 card claims double the bandwidth of the x4 card above.  I'm inclined to think, therefore, that it may be a hardware limitation of the current mvsas card family.  But as WeeboTech said, this is unlikely to be related to the speed issues being discussed.


I believe that pras1011 uses SAS2LP cards (PCI-E 2.0, x8 electrical), while the SASLPs are only PCI-E 1.1, x4 electrical.


Honestly, I think you're nitpicking here.

 

 

31MB/s to a PARITY-PROTECTED DRIVE (/mnt/disk1 writes still update parity): I think you're seeing what you should.  Cache has NO parity, which is why the numbers skyrocket.
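Rough math, assuming these green drives manage maybe 60-80MB/s of raw sequential speed: every array write is a read-modify-write (read old data, read old parity, write new data, write new parity), so the data and parity spindles each do a read plus a write with rotational waits in between. Ending up somewhere around a third to a half of the raw drive speed, call it 20-35MB/s, is about what that predicts.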

 

Also, the change in speed from beta to beta and RC to RC is negligible. 25.7MB/s to 25.1MB/s, c'mon, that's nothing.

