Installed SuperMicro AOC-SASLP-MV8, Slow Parity Check


So.... I took a drive to Microcenter....

 

I picked up a ASRock Z77 Extreme4 LGA 1155 Z77 ATX Intel Motherboard, and an i3 3225, for $210 out the door.

Gee, I feel guilty--indirectly responsible for your expenditure/investment. Looking on the bright side, PC hardware is such an incredible bang-for-the-buck these days. [When I started programming, gasoline was $0.35/gallon and computer memory was $1/byte--I earned about $10k/yr--1968-69.]

I had 6GB of DDR 3 left over from another project.  Connected both 3TB drives to 2 of the on board SATA3 connections.  The other drives are connected to the MV8 controller.  I went ahead and am running another parity check and am happy to report a steady 99.4MBps.  I've seen it as high as 115MBps, ...

I would have expected to see a sustained 115-120 for the first 20% of the check. The chip on the MV8 (Marvell 88SE6480) might not have the processing crunch and/or data-handling throughput to saturate the PCIe x4 (v1). You might try moving one more drive from the MV8 to the Z77, and see if that speeds up the check. If so, move another, and repeat. If not, please do run that dskt script. You might have an abnormally slow drive.

 

Enjoy your new toy(s).

 

--UhClem

 

Right, so after it started running last night I checked my drives. The newer Seagates I have are SATA 3 as well, so I may go ahead and pull those off the MV8 and put them on the remaining 2 SATA 3 ports on the motherboard. This would give me 2 x 3TB <1 Parity> and 2 x 2TB on the motherboard, and 4 x 2TB Hitachis and the 1TB cache drive on the MV8.

 

Gotta go train Jiu Jitsu right now but will try this later today =)  And will try the script as well.

 

Thanks!

 

 

I expect that you won't notice the "upper limit" until you exceed 6 drives on the MV8, or try using both of the JMicron ports. I think you can use all 5 of the Intel (real/SB) mobo ports without reaching their "tipping point", but dskt will tell you (I don't have any ICH8 experience). So, allocated optimally, 11 data drives (+ parity) should be able to "parity check" at max (with current hardware).

 

You don't list your drive model #s, but either the Sgt or Hit 2TBs will be the slowest, and will place an inherent limit on the others (during a parity check), so factor that into your "tipping point" decision. I.e., the Sgt 3TB data drives do not need to use all of their max 170+ MB/s, only what the slowest drive's max is.

 

--UhClem

 

Ok, so I've not moved any more drives yet to the onboard.  So we are looking at the 2 3TB drives being onboard SATA 3, the rest of the drives sitting on the MV8.

 

Results:

 

root@Tower:/tmp# ./dsk.sh X b d g h e a i j f

sdb ST3000DM001-9YN166 = 168.61 MB/sec

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdg = 130.40 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdh = 118.46 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sde = 117.73 MB/sec

sda ST3000DM001-9YN166 = 174.99 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdi = 143.45 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdj = 164.56 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdf = 113.85 MB/sec

root@Tower:/tmp#

 

Any clue why I get the HDIO_GET_IDENTITY failure? Can it not figure out the make/model? Especially since it detects one Hitachi, then complains about the rest?

 

Here it is cleaned up: 

 

sdb ST3000DM001-9YN166 = 168.61 MB/sec - PARITY DRIVE Onboard

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec DATA DRIVE

sdg Hitachi HDS722020ALA330 = 130.40 MB/sec DATA DRIVE

sdh Hitachi HDS722020ALA330 = 118.46 MB/sec DATA DRIVE

sde Hitachi HDS722020ALA330 = 117.73 MB/sec DATA DRIVE

sda ST3000DM001-9YN166 = 174.99 MB/sec DATA DRIVE Onboard

sdi ST2000DM001 = 143.45 MB/sec DATA DRIVE

sdj ST2000DM001 = 164.56 MB/sec DATA DRIVE

 

sdf ST31000524AS = 113.85 MB/sec - CACHE DRIVE

 

So the older Hitachis are slower, and that is to be expected. Even if I move the other Seagates to the onboard controller, I'm not sure I will gain a ton of speed, although I could move them and run the same speed test.

 

Now, I just have to decide if I want to keep it this way or...... Load ESXi 5.1, virtualize unRAID, and run 2 unRAID installs.  One for my fast drives, one for my slow drives =)  Granted, I've been thinking about this so I could essentially run two 10 Data + 1 Parity drive configs vs 20 Data + 1 Parity disk.

 

-Marcus

Ok, so I've not moved any more drives yet to the onboard.  So we are looking at the 2 3TB drives being onboard SATA 3, the rest of the drives sitting on the MV8.

 

Results:

 

root@Tower:/tmp# ./dsk.sh X b d g h e a i j f

sdb ST3000DM001-9YN166 = 168.61 MB/sec

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdg = 130.40 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdh = 118.46 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sde = 117.73 MB/sec

sda ST3000DM001-9YN166 = 174.99 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdi = 143.45 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdj = 164.56 MB/sec

HDIO_GET_IDENTITY failed: No message of desired type

sdf = 113.85 MB/sec

root@Tower:/tmp#

 

Any clue why I get the HDIO_GET_IDENTITY failure? Can it not figure out the make/model? Especially since it detects one Hitachi, then complains about the rest?

It seems like only the first drive on the MV8 (sdd) is being "Identified" using the "hdparm -i" mechanism; could be a driver peculiarity (mvsas driver?). I'll try to make a change to the script that uses the -I option instead.
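For reference, the -I change might look roughly like this (a sketch only; the actual dsk.sh isn't posted in this thread, and the function names here are illustrative):

```shell
#!/bin/sh
# parse_model: pull the model string out of "hdparm -I" output, e.g.
#   Model Number:       ST3000DM001-9YN166
parse_model() {
    sed -n 's/.*Model Number:[[:space:]]*//p'
}

# get_model: identify a drive by device letter suffix, e.g. "get_model sdb".
# "hdparm -I" issues a fresh ATA IDENTIFY through the SCSI/SAT path, which
# tends to work behind SAS HBAs (mvsas, mpt2sas) where the legacy
# HDIO_GET_IDENTITY ioctl used by "hdparm -i" often fails.
get_model() {
    hdparm -I "/dev/$1" 2>/dev/null | parse_model
}
```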

Here it is cleaned up: 

 

sdb ST3000DM001-9YN166 = 168.61 MB/sec - PARITY DRIVE Onboard

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec DATA DRIVE

sdg Hitachi HDS722020ALA330 = 130.40 MB/sec DATA DRIVE

sdh Hitachi HDS722020ALA330 = 118.46 MB/sec DATA DRIVE

sde Hitachi HDS722020ALA330 = 117.73 MB/sec DATA DRIVE

sda ST3000DM001-9YN166 = 174.99 MB/sec DATA DRIVE Onboard

sdi ST2000DM001 = 143.45 MB/sec DATA DRIVE

sdj ST2000DM001 = 164.56 MB/sec DATA DRIVE

 

sdf ST31000524AS = 113.85 MB/sec - CACHE DRIVE

 

So the older Hitachis are slower, and that is to be expected. Even if I move the other Seagates to the onboard controller, I'm not sure I will gain a ton of speed, although I could move them and run the same speed test.

Please note that the results (cleaned up) that you got are from the test with the X as the (optional) first argument. Those results are merely to determine the stand-alone speed for each drive, without any contention (for resources) from the other drives; in the X run, each (single drive) speed test runs to completion, and then the next drive's speed is tested. It really doesn't matter which controller the drive is attached to for the X test (except for the obvious mismatch of a fast drive on a SATA 1 connect).

 

The interesting results come from the follow-on tests without the X. In those tests, all specified drive (letters) are speed-tested concurrently which will reveal any limitations that one or more hardware factors are exerting on the total throughput.

 

For example, using your drive letter associations above, the command "./dsk.sh d e f g h i j" would speed-test all 7 drives connected to the MV8 simultaneously! The idea is to push the tested component to its saturation point so that you can make proper capacity planning (from a bandwidth perspective) decisions. If/when you saturate any particular component (in this example, the MV8), you will see it because one, or more, (and often all) the tested drives are underperforming their stand-alone (baseline/nominal) speed (from the initial X test run).

 

If that 7-drive test run saturates the MV8, as I expect it will, you should try it without the cache drive (f). If that also saturates, omit another drive letter ... until all drives in a single test perform very close to their "X" speed.
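For anyone following along, the concurrent test described above can be approximated with plain dd (a rough sketch only; the real dsk.sh isn't shown in this thread, so the details are assumptions):

```shell
#!/bin/sh
# Concurrent read-speed sketch: one background dd per drive letter,
# e.g. "./concurrent.sh d e f g h i j". Reading all the drives at once
# is what exposes controller/bus saturation; reading them one at a time
# (the "X" run) gives each drive's stand-alone baseline instead.
for d in "$@"; do
    (
        # GNU dd reports a rate on stderr, e.g. "... copied, 2.5 s, 113 MB/s"
        rate=$(dd if="/dev/sd$d" of=/dev/null bs=1M count=1024 2>&1 |
               awk '/copied/ {print $(NF-1), $NF}')
        echo "sd$d = $rate"
    ) &
done
wait    # results print as each drive's test finishes
```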

Now, I just have to decide if I want to keep it this way or...... Load ESXi 5.1, virtualize unRAID, and run 2 unRAID installs.  One for my fast drives, one for my slow drives =)  Granted, I've been thinking about this so I could essentially run two 10 Data + 1 Parity drive configs vs 20 Data + 1 Parity disk.

If that will work to get you (2 * (10 + 1)) instead of (20 + 1), that is a worthwhile goal, for risk minimization. Since you'd need to "choose up teams" anyway, you might as well do it by performance. [Don't let the dumb kids hinder the education of the smart ones, right?]

 

But, are you really planning to add so many drives so soon? If not, I wouldn't rush into it. Maybe dabble with ESXi a little. Is it possible to experiment with different unRAID configs, by not doing any WRITEs to any Data drives, using a fresh/test Parity drive, and preserving the (actual) Parity drive [from a "Production" config.]? Just pondering ... (I don't use unRAID.)

 

Well, interestingly, I've just tried:

hdparm -i /dev/sdx

on my system.

 

I get output for the two drives attached to motherboard ports, but I get

 

 HDIO_GET_IDENTITY failed: Invalid argument

 

for every drive attached to my AOC-USAS2-L8i card (using the mpt2sas driver).

 

However, hdparm -I works.

 

Still, it's not causing me any operational problems on unRAID, as far as I'm aware  ... unless this is connected with the problem being experienced with slow write speeds on X9SCM hardware.

 

Well, interestingly, I've just tried:

hdparm -i /dev/sdx

on my system.

 

I get output for the two drives attached to motherboard ports, but I get

 

 HDIO_GET_IDENTITY failed: Invalid argument

 

for every drive attached to my AOC-USAS2-L8i card (using the mpt2sas driver).

Is one of those (every) drives on the controller's first port (#0)? Just curious whether that is the source of the anomaly.

However, hdparm -I works.

Thanks for confirming--I like it when that happens :).

Still, it's not causing me any operational problems on unRAID, as far as I'm aware  ... unless this is connected with the problem being experienced with slow write speeds on X9SCM hardware.

I seriously doubt there is any relationship; at least not a direct one. (I'd consider it an "identity glitch" not an "identity crisis".)

 

Ok, so I ran it without the X option. Some interesting results: with the cache drive included, I saw my two Seagate 2TB SATA3 drives on the MV8 controller slow down considerably. With the cache drive excluded, it got somewhat better for one drive.

 

Tonight I will move those two drives, sdi and sdj, back to the 2 open SATA 3 connections I have on the onboard. From looking at it, speeds top out around 120-130MB/sec max on the MV8 controller, and less once 8 drives are connected. So once I move the 2 Seagate 2TB SATA3 drives over, I expect to see somewhere higher than 120-130MB/sec on those drives. However, since I will now have 4 x SATA3 drives connected to the onboard, I suppose it is possible the overall numbers for the onboard ports will get slower.

 

Do you concur?

 

With X Option

 

root@Tower:/tmp# ./dsk.sh X b d g h e a i j f

sda ST3000DM001-9YN166 = 174.99 MB/sec DATA DRIVE Onboard

sdb ST3000DM001-9YN166 = 168.61 MB/sec - PARITY DRIVE Onboard

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec DATA DRIVE

sde Hitachi HDS722020ALA330 = 117.73 MB/sec DATA DRIVE

sdf ST31000524AS = 113.85 MB/sec - CACHE DRIVE

sdg Hitachi HDS722020ALA330 = 130.40 MB/sec DATA DRIVE

sdh Hitachi HDS722020ALA330 = 118.46 MB/sec DATA DRIVE

sdi ST2000DM001 = 143.45 MB/sec DATA DRIVE

sdj ST2000DM001 = 164.56 MB/sec DATA DRIVE

 

Without X Option

 

root@Tower:/tmp# ./dsk.sh b d g h e a i j f

sda = 174.99 MB/sec

sdb = 166.99 MB/sec

sdd = 117.53 MB/sec

sdf = 114.48 MB/sec

sdi = 64.57 MB/sec

sdg = 128.83 MB/sec

sde = 117.17 MB/sec

sdj = 23.31 MB/sec

sdh = 117.87 MB/sec

root@Tower:/tmp#

 

Without X Option - Ordering my drive label

 

root@Tower:/tmp# ./dsk.sh b d g h e a i j f

sda = 174.99 MB/sec  DATA DRIVE Onboard

sdb = 166.99 MB/sec  PARITY DRIVE Onboard

sdd = 117.53 MB/sec  DATA DRIVE

sde = 117.17 MB/sec  DATA DRIVE

sdf = 114.48 MB/sec  CACHE DRIVE

sdg = 128.83 MB/sec  DATA DRIVE

sdh = 117.87 MB/sec  DATA DRIVE

sdi = 64.57 MB/sec  DATA DRIVE

sdj = 23.31 MB/sec  DATA DRIVE

 

Without X Option, Without Cache Drive

 

root@Tower:/tmp# ./dsk.sh b d g h e a i j

sde = 118.13 MB/sec  DATA DRIVE

sdd = 119.06 MB/sec  DATA DRIVE

sdb = 168.11 MB/sec  PARITY DRIVE Onboard

sdh = 118.35 MB/sec  DATA DRIVE

sdg = 130.13 MB/sec  DATA DRIVE

sdj = 62.54 MB/sec  DATA DRIVE

sda = 175.31 MB/sec  DATA DRIVE Onboard

sdi = 121.61 MB/sec  DATA DRIVE

 

Thanks!

 

-Marcus

Without X Option, Without Cache Drive

 

root@Tower:/tmp# ./dsk.sh b d g h e a i j

sde = 118.13 MB/sec  DATA DRIVE

sdd = 119.06 MB/sec  DATA DRIVE

sdb = 168.11 MB/sec  PARITY DRIVE Onboard

sdh = 118.35 MB/sec  DATA DRIVE

sdg = 130.13 MB/sec  DATA DRIVE

sdj = 62.54 MB/sec  DATA DRIVE

sda = 175.31 MB/sec  DATA DRIVE Onboard

sdi = 121.61 MB/sec  DATA DRIVE

 

Thanks!

 

-Marcus

 

After moving the drives, drive letters changed:

 

root@Tower:/boot# ./chkdsk.sh  d f h i g c b a

sdf = 119.06 MB/sec  Data MV8            Hitachi 2TB Sata 2

sda = 164.39 MB/sec  Data Onboard    Seagate 2TB Sata 3

sdh = 130.52 MB/sec  Data MV8            Hitachi 2TB Sata 2

sdd = 165.03 MB/sec  Parity Onboard  Seagate 3TB Sata 3

sdi = 117.77 MB/sec  Data MV8            Hitachi 2TB Sata 2

sdc = 175.28 MB/sec  Data Onboard    Seagate 3TB Sata 3

sdb = 143.35 MB/sec  Data Onboard    Seagate 2TB Sata 3

sdg = 117.32 MB/sec  Data MV8          Hitachi 2TB Sata 2

 

So overall speeds are better. As you can tell by the numbers, the MV8 is not overtaxed; it looks like the Hitachis are running at top speed. Amazingly, the SATA 3 speeds on the onboard did not change much after adding 2 drives to the SATA 3 onboard ports.

 

Thanks for all the help!

 

-Marcus

Looking at your transfer rates, and the spec of the AOC-SASLP-MV8, I have done a little calculation.

It appears that the SASLP is a four lane PCIe V1 card. Theoretically, this would be able to transfer 4 x 250MB/s = 1000MB/s.

 

If I add up the measured transfer rates (without the X option) of seven drives on the SASLP, I get a result of 680+MB/s.  This is 68% of the theoretical limit for the PCIe bus.  Is it possible that overheads account for the other 32%?  It's certainly not beyond the bounds of possibility that the bus starts to slow down after it reaches two thirds of theoretical capacity.
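Just the arithmetic from the paragraph above, written out as a quick check (the 680 figure is the sum of the seven concurrent drive rates reported earlier in the thread):

```shell
#!/bin/sh
# Back-of-envelope check of the SASLP's PCIe headroom.
lanes=4
per_lane=250                          # PCIe v1: ~250 MB/s per lane, theoretical
theoretical=$((lanes * per_lane))     # 1000 MB/s for an x4 slot
measured=680                          # sum of the 7 concurrent drive rates, MB/s
echo "$((measured * 100 / theoretical))% of theoretical"   # prints "68% of theoretical"
```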

 

This is one of the reasons I opted for an 8 lane V2 card (AOC-USAS2-L8i) - it will have four times the capacity of your SASLP.

 

What experience do others have?

Looking at your transfer rates, and the spec of the AOC-SASLP-MV8, I have done a little calculation.

It appears that the SASLP is a four lane PCIe V1 card. Theoretically, this would be able to transfer 4 x 250MB/s = 1000MB/s.

 

If I add up the measured transfer rates (without the X option) of seven drives on the SASLP, I get a result of 680+MB/s.  This is 68% of the theoretical limit for the PCIe bus.  Is it possible that overheads account for the other 32%?  It's certainly not beyond the bounds of possibility that the bus starts to slow down after it reaches two thirds of theoretical capacity.

 

This is one of the reasons I opted for an 8 lane V2 card (AOC-USAS2-L8i) - it will have four times the capacity of your SASLP.

 

What experience do others have?

 

I didn't do enough research =) I have enough room to expand to more cards, so the next one will be an AOC-USAS-L8i or another V2 card. With 8 ports there and 8 ports on the motherboard, I will only have to use 4 ports on the MV8, which should give me the 20 drives I expect to have at most.

 

 

Ok, so I ran it without the X option. Some interesting results: with the cache drive included, I saw my two Seagate 2TB SATA3 drives on the MV8 controller slow down considerably. With the cache drive excluded, it got somewhat better for one drive.

Yes, this is where it does get interesting--and useful.

 

==> Important note: The purpose of the non-X test is just to determine the saturation point for a particular resource (MV8, in this case). We get that number by adding up the MB/s rates for all drives connected to that resource. The individual rates for the drives (in the non-X [saturation] test) are immaterial (and only of possible interest to the really hardcore). [More below]

...

With X Option

 

root@Tower:/tmp# ./dsk.sh X b d g h e a i j f

sda ST3000DM001-9YN166 = 174.99 MB/sec DATA DRIVE Onboard

sdb ST3000DM001-9YN166 = 168.61 MB/sec - PARITY DRIVE Onboard

sdd Hitachi HDS722020ALA330 = 119.00 MB/sec DATA DRIVE

sde Hitachi HDS722020ALA330 = 117.73 MB/sec DATA DRIVE

sdf ST31000524AS = 113.85 MB/sec - CACHE DRIVE

sdg Hitachi HDS722020ALA330 = 130.40 MB/sec DATA DRIVE

sdh Hitachi HDS722020ALA330 = 118.46 MB/sec DATA DRIVE

sdi ST2000DM001 = 143.45 MB/sec DATA DRIVE

sdj ST2000DM001 = 164.56 MB/sec DATA DRIVE

 

Without X Option - Ordering my drive label

 

root@Tower:/tmp# ./dsk.sh b d g h e a i j f

sda = 174.99 MB/sec  DATA DRIVE Onboard

sdb = 166.99 MB/sec  PARITY DRIVE Onboard

sdd = 117.53 MB/sec  DATA DRIVE

sde = 117.17 MB/sec  DATA DRIVE

sdf = 114.48 MB/sec  CACHE DRIVE

sdg = 128.83 MB/sec  DATA DRIVE

sdh = 117.87 MB/sec  DATA DRIVE

sdi = 64.57 MB/sec  DATA DRIVE

sdj = 23.31 MB/sec  DATA DRIVE

OK. The saturation point (max throughput) of your MV8 is ~680 MB/s. As I stated earlier, the max real-world throughput for a PCIe x4 v1 pathway is 780 MB/s (840 MB/s on better motherboards, in the right config). Hence, it certainly appears that the MV8 and/or its Marvell 88SE6480 chip does not have the processing power/data-handling chops to fully utilize that pathway. (A really meager/ancient CPU+Northbridge could be responsible, but that doesn't apply here.)

 

By the way, it is best to run the non-X test with only drive letters on the tested resource/controller. In a final, full-system, test, you can test all drives on all resources, and make sure that your overall system (CPU+Northbridge+Southbridge) is not bandwidth-saturated.

 

Back to the MV8 ... it can sustain 680 MB/s. Which means that you can comfortably put the 4 Hitachi 2TB Data drives plus one of the Seagate 2TB Data drives plus the Cache drive on the MV8 without ever affecting your real-world results. That is because a Parity-Check is your most demanding task (throughput-wise), and that will be limited by the ~120 MB/s speed of a Hitachi 2TB. Since the cache drive doesn't participate in the Check, those other 5 drives will only use (max) 600 MB/s (5*120). You could even add the other Seagate 2TB with very negligible "penalty"--a Parity Check would then (nominally) max out at 113 MB/s (680/6), instead of 120. That would extend the time frame before you'd need/want to add another controller.
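The arithmetic behind that allocation, spelled out (all figures are the measured ones from this thread):

```shell
#!/bin/sh
# MV8 allocation check for a parity-check workload.
mv8_max=680      # measured MV8 saturation throughput, MB/s
hitachi=120      # approx. stand-alone speed of the slowest (Hitachi 2TB) drive
echo "5 data drives: $((5 * hitachi)) MB/s of $mv8_max available"   # 600, under the cap
echo "6 data drives: $((mv8_max / 6)) MB/s per drive"               # 113, vs 120 stand-alone
```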

 

That takes care of the MV8. The other two disk throughput resources you have are (1) the 6 native SATA ports on your Z77, and (2) the 2 add-on SATA ports on the on-board ASM1061. The ASM1061 ports are limited by a PCIe x1 v2 to ~380 MB/s (or 420) total. I don't know about the 6 native ports, but I expect its limit to be well above 1000. If you have a couple of fast SATA3 SSDs, you could try to saturate that resource. "If you push something hard enough, it will fall over.":)

Thanks!

You're welcome.

 

--UhClem "I think we're all bozos on this bus."

 
