Re: ARC-1200 in SAFE33 mode vs M1015



From my point of view, it would also be nice to have a RAID 1 mode/option for the parity drive(s).

 

The parity drive is the one with the most traffic, so from that point of view this would make sense to me.

 

cu

But it is actually the least important drive.  You can lose it and not lose any data.

 

Agreed, you can lose parity without losing any data; RAID1 on a parity drive is counterproductive.

The performance hit would make unRAID very slow.

 

Also, the parity drive is not the most used drive; it's only the most used drive while you are populating the array. Once the array is populated with your movies and music, only updates are written to the parity drive.

 

What would be better is RAID1 on the cache. Anything placed on the cache drive is at risk of loss for a maximum period of about 24 hours.

The risk of loss is greater if you use it as an application drive.

 

For those who say they would spend more, the answer is an ARC-1200:

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816151031


 

What can be done is a twofold aid to unRAID.

 

Using a SAFE33 arrangement and a pair of sizable drives, you can have a RAID0 parity volume and a RAID1 cache volume.

 

This provides a nice performance boost for writes to the array and a safe area for cache and apps.
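As a concrete capacity sketch, with a pair of 2 TB drives you might carve the RAID set up something like this (the exact split is up to you; the numbers here are only an example):

  RAID0 parity volume: 1.5 TB from each drive, striped   ->  3.0 TB for parity
  RAID1 cache volume:  0.5 TB from each drive, mirrored  ->  0.5 TB usable for cache/apps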

 

I do this and have had great success with it. 

 

Plus you can have that today instead of waiting for limetech to develop it.

 

 

Won't an M1015 in IR mode work as well (and provide more SATA connections for less than the card you mention)?

I've only just started playing with it, but unRAID 5b12a saw the RAID0 virtual drive that I set up on my M1015 in IR mode.

Link to comment

I wish I knew the answer to your question.

I do not have that controller so I cannot answer if it works in raid mode.

If someone has that controller, maybe they can test it for all of us.

 

1.  Does the controller allow assignment of 2 drives to a RAID set?

2.  Does the controller allow the RAID set to have two volumes, of different RAID types?

3.  Does unRAID see each volume as a disk?

 

 

For the ARC-1200 I can answer yes to each of these.

For the 3ware cards, you can create a RAID set that is one volume of one RAID type, RAID0 or RAID1, but not both.  unRAID will see the RAID volume as one disk.

 

 

Link to comment

It will be an interesting comparison.

 

For a PCIe x1 card and two drives, it provides a nice performance bang for the buck.

Since I bought it used on eBay for under a hundy, I've been quite pleased with it.

At the very least, my hybrid RAID1 cache/apps/local/swap setup gives me peace of mind too.

 

Is a RAID1 set on the M1015 seen by unRAID?

Link to comment

I wonder how it would improve your parity operation speed.

I know the Arecas have an onboard RAM cache, which helps.

Probably will not affect it much unless the data drive can rotate faster.

unRAID's write speed is determined by the slower rotational speed of the two disks involved.  If the raid controller can cache an entire stripe of the parity disk, then the data disk will be the slower of the two.
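(If you're unsure which of the two spindles is the slower one, a quick raw-read timing on each drive from the unRAID console will tell you; sdX/sdY below are placeholders for your actual devices.)

hdparm -t /dev/sdX   # parity drive (or the RAID0 parity volume)
hdparm -t /dev/sdY   # data drive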

 

Joe L.

Link to comment

I wonder how it would improve your parity operation speed.

I know the Arecas have an onboard RAM cache, which helps.

Probably will not affect it much unless the data drive can rotate faster.

unRAID's write speed is determined by the slower rotational speed of the two disks involved.  If the raid controller can cache an entire stripe of the parity disk, then the data disk will be the slower of the two.

 

Joe L.

 

This is not entirely true.

If both disks are slow, then the two operations will be slow.

If one of the disks is faster than the other, the operation becomes a little faster, albeit not much.

If I can read and write from one disk faster, then the operation is not held back by the second operation; i.e. the other operation benefits from the first operation completing sooner.

The problem is that it's not much of a saving unless there is a good-sized cache on the parity disk.

 

I've tested this operation in many ways with RAID0 on parity, and it has always shown an improvement in write operations. It just doesn't show enough of an improvement to warrant the cost for most people, unless your controller has a write-back cache.

 

Even with the Silicon Image SteelVine processor without a cache, I saw an improvement of 3MB/s when using RAID0 on the cache.

 

For most people the $100 cost of a SteelVine processor or the $100-$199 cost of the Areca controller doesn't warrant the upgrade, but I've tested it and found that it does increase performance slightly, no matter whether the data disk is 5400 or 7200 RPM.

 

The real savings come when you have constant, all-day updating processes on multiple disks.  A torrent server or a Usenet downloader will see the benefit of RAID0 parity, since the array will be dealing with multiple writes from different locations.

Link to comment

I wonder how it would improve your parity operation speed.

I know the Arecas have an onboard RAM cache, which helps.

Probably will not affect it much unless the data drive can rotate faster.

unRAID's write speed is determined by the slower rotational speed of the two disks involved.  If the raid controller can cache an entire stripe of the parity disk, then the data disk will be the slower of the two.

 

Joe L.

 

This is not entirely true.

If both disks are slow, then the two operations will be slow.

If one of the disks is faster than the other, the operation becomes a little faster, albeit not much.

If I can read and write from one disk faster, then the operation is not held back by the second operation; i.e. the other operation benefits from the first operation completing sooner.

The problem is that it's not much of a saving unless there is a good-sized cache on the parity disk.

 

I've tested this operation in many ways with RAID0 on parity, and it has always shown an improvement in write operations. It just doesn't show enough of an improvement to warrant the cost for most people, unless your controller has a write-back cache.

 

Even with the Silicon Image SteelVine processor without a cache, I saw an improvement of 3MB/s when using RAID0 on the cache.

 

For most people the $100 cost of a SteelVine processor or the $100-$199 cost of the Areca controller doesn't warrant the upgrade, but I've tested it and found that it does increase performance slightly, no matter whether the data disk is 5400 or 7200 RPM.

 

The real savings come when you have constant, all-day updating processes on multiple disks.  A torrent server or a Usenet downloader will see the benefit of RAID0 parity, since the array will be dealing with multiple writes from different locations.

Basically, you are saying exactly the same thing as I did.  I did overlook the multiple "writing" processes though, and in that case the improvement might be more noticeable when the hardware RAID striping happens to access multiple disks.
Link to comment

Well, I've hit a bit of a bump...

I was going to test the performance of my M1015 in IR mode with a RAID0 setup for parity, but it appears that 2x1TB is smaller than 1x2TB :(

 

I have 2 WD 1TB blacks in RAID0

and 1 Seagate 2TB drive I was going to use for data... but it won't let me because I have to use the larger disk for parity :(

 

Any way to work around this?  I don't have another smaller drive right now :(

Link to comment

Found a smaller drive - but hope to use the 2T as a test.

 

hdparm -N fails on the RAID0 drive (likely because the M1015 presents a "virtual" drive built from 2 drives, so the ATA command isn't passed through to the physical disks).

 

I'll post the sizes that unRAID reports when I'm done testing with the smaller drive, and we'll see if we can get the 2TB working (as it's a newer drive and may be faster).

 

Preliminary Results:

 

No surprise to Joe and WeeboTech: no real noticeable difference in speeds when writing directly to the protected array (about 30MB/s-ish - it bounces around more than my "production" box for some reason, but averages about that, likely due to other things going on in the network, etc., as I don't really have a dedicated 'clean' test environment).

 

Now, where I do notice the difference is in the parity check/sync.  It maxes out the slowest drive, and when it got to the point where it was just the parity drive, it flew at 280MB/s :)

 

I'm going to try to do a few more tests just in case I'm wrong, but it's looking like RAID0 for parity isn't worth it (at least on the M1015).

 

I'll also try a RAID1 for parity and see what that does to the performance.

 

Link to comment

I'll also try a RAID1 for parity and see what that does to the performance.

 

This isn't worth your time.  There will be a write penalty: a RAID1 write has to complete on both members, so without a cache on the controller it will hurt every parity operation that does a write.

 

Before you break the RAID0 array, do the writeread10gb test from my Google Code page.

 

Choose a data drive that is freshly formatted.

That will show you the maximum potential.

 

http://code.google.com/p/unraid-weebotech/downloads/detail?name=writeread10gb

 

Log in to unRAID as root, then run:

./writeread10gb /mnt/disk1/test.dd
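If you'd rather not download it, a rough stand-in along the same general lines (a timed ~10 GB sequential write with dd, a sync, then a timed read back; details of the real script may differ) is:

#!/bin/bash
# rough writeread10gb stand-in -- pass it a path on the disk under test
TESTFILE="$1"                                    # e.g. /mnt/disk1/test.dd

echo "writing 10 GB to: $TESTFILE"
dd if=/dev/zero of="$TESTFILE" bs=1024 count=10000000
sync

echo 3 > /proc/sys/vm/drop_caches               # so the read isn't served from RAM
echo "reading back from: $TESTFILE"
dd if="$TESTFILE" of=/dev/null bs=1024

rm -f "$TESTFILE"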

Link to comment

writing 10240000000 bytes to: /mnt/disk1/10gbfile

206300+0 records in

206300+0 records out

211251200 bytes (211 MB) copied, 4.95318 s, 42.6 MB/s

324436+0 records in

324436+0 records out

332222464 bytes (332 MB) copied, 9.82195 s, 33.8 MB/s

445496+0 records in

445496+0 records out

456187904 bytes (456 MB) copied, 14.8556 s, 30.7 MB/s

577960+0 records in

577960+0 records out

591831040 bytes (592 MB) copied, 19.987 s, 29.6 MB/s

696016+0 records in

696016+0 records out

712720384 bytes (713 MB) copied, 24.9593 s, 28.6 MB/s

817408+0 records in

817408+0 records out

837025792 bytes (837 MB) copied, 29.9948 s, 27.9 MB/s

951568+0 records in

951568+0 records out

974405632 bytes (974 MB) copied, 35.0269 s, 27.8 MB/s

1073426+0 records in

1073426+0 records out

......

9451878+0 records in

9451878+0 records out

9678723072 bytes (9.7 GB) copied, 361.396 s, 26.8 MB/s

9584998+0 records in

9584998+0 records out

9815037952 bytes (9.8 GB) copied, 366.4 s, 26.8 MB/s

9831238+0 records in

9831238+0 records out

10067187712 bytes (10 GB) copied, 371.39 s, 27.1 MB/s

9961128+0 records in

9961128+0 records out

10200195072 bytes (10 GB) copied, 376.439 s, 27.1 MB/s

10000000+0 records in

10000000+0 records out

10240000000 bytes (10 GB) copied, 378.492 s, 27.1 MB/s

write complete, syncing

reading from: /mnt/disk1/10gbfile

10000000+0 records in

10000000+0 records out

10240000000 bytes (10 GB) copied, 135.964 s, 75.3 MB/s

removing: /mnt/disk1/10gbfile

removed `/mnt/disk1/10gbfile'

Link to comment
  • 6 months later...

Putting some life in an old thread.

 

I finally got around to experimenting a bit more with an ARC-1200 and perhaps an M1015. I set up 2 WD Green drives of 2 TB (totalling 4 TB). I created a 3 TB RAID0 volume for parity and a 500 GB RAID1 volume for cache. I threw in a 1 TB Samsung for data on the M1015. This is on an X9SCM. I see parity sync/rebuild speeds of about 42 MB/s from 0 to 100%, which seems a bit slow to me. What is your opinion on this? Which settings on the ARC-1200 influence speed the most? Stripe sizes, or more?

Link to comment

The 1200 card is only going to work as fast as the x1 PCIe lane allows, which isn't going to be very fast; probably no more than 500mbps for the slot itself. Then you're limited by whatever that card will do internally.  This also assumes the underlying PCIe fabric has enough lanes free and/or decent performance.  I doubt many motherboards put an emphasis on performance for any x1 slots present.  So getting 42MB/s from it doesn't seem unreasonable.

Link to comment

The 1200 card is only going to work as fast as the x1 PCIe lane allows, which isn't going to be very fast; probably no more than 500mbps for the slot itself. Then you're limited by whatever that card will do internally.  This also assumes the underlying PCIe fabric has enough lanes free and/or decent performance.  I doubt many motherboards put an emphasis on performance for any x1 slots present.  So getting 42MB/s from it doesn't seem unreasonable.

 

Do you use or have experience with the ARC-1200? I know it is a PCIe x1 card. In an X9SCM motherboard it sits in at least a PCIe x4 slot, so one would think there should be enough lanes.

Link to comment

42MB/s is not what I would expect from RAID0 even on x1 lane.
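For reference, the usual back-of-envelope for an x1 link (assuming the 8b/10b encoding used by PCIe 1.x and 2.0; which generation the link actually trains at determines which line applies):

# PCIe x1 per-direction throughput, back of the envelope
#   PCIe 1.x: 2.5 GT/s * 8/10 encoding = 2.0 Gbit/s  ->  ~250 MB/s
#   PCIe 2.0: 5.0 GT/s * 8/10 encoding = 4.0 Gbit/s  ->  ~500 MB/s
echo $(( 2500 * 8 / 10 / 8 ))   # 250 (MB/s), gen-1 x1 link
echo $(( 5000 * 8 / 10 / 8 ))   # 500 (MB/s), gen-2 x1 link

Either way, the link itself should be comfortably above the 42 MB/s being reported.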

 

 

You could try moving it to other slots to see what the deal is.

On my card, I enabled write back cache.

 

 

The best test is to use the writeread10gb script on these raw drives to determine maximum speed before you build the array.

 

 

I.e. with the 3 drives and emhttp down, format each drive with reiserfs manually.

After that  use the writeread10gb script on each drive.

That will be the maximum you could possibly achieve on each drive. From there you can factor in all the other issues.
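In concrete terms, one way to do that (sdX is a placeholder for whichever drive you're testing; killall is just one way to take emhttp down; mkreiserfs will wipe the target, so double-check the device name first):

killall emhttp                            # stop the unRAID web GUI / array management

mkreiserfs /dev/sdX1                      # format the drive's partition with reiserfs (destructive!)
mkdir -p /mnt/speedtest
mount /dev/sdX1 /mnt/speedtest

./writeread10gb /mnt/speedtest/test.dd    # the 10 GB write/read timing test

umount /mnt/speedtest                     # then repeat for the next drive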

I can say that if you are doing a parity operation (read/write/sync, etc.) and writing data to the cache drive at the same time, there will be a performance penalty, and it could be significant. So I do those two operations separately.

 

 

If you have a UPS you can enable the write back cache on the controller in the firmware before doing anything else to see if it helps.

 

 

I have 15 data drives to 1 ARC-1200 in an x1 slot; I get 75-80 MB/s at the start, dropping to around 55 MB/s near the end.

Jul 14 21:22:10 atlas kernel: md: sync done. time=18244sec rate=53538K/sec

 

 

I have 2 Seagate 1.5TB 7200RPM drives with 32MB cache as my RAID0/RAID1 hybrid.

If you are using 5400rpm drives for yours, that could be an issue.

 

 

Link to comment

Thanks Weebo,

 

Should I format the 2 drives on the ARC-1200 separately, so not building any RAID yet, and then do the writeread10gb test on these separate drives? I use WD 5900 RPM drives, but am tempted to switch to some 7200 RPM drives; any recommendations for this? I have 2 x4 slots and an x8 slot on the X9SCM. Could using a different x4 slot make any difference? I have a UPS, and WB cache is enabled somewhere in the RAID settings. I will break the RAID and try to test each drive separately if they show up in unRAID.

 

As always I want to go too fast, assembling before doing any testing. Can't I use writeread10gb on /dev/sd?? I get an error that /dev/sd?/test.dd is not a directory.

So I guess I can only use /mnt/disk?

 

Come to think of it, I had this setup running on an old DFI Socket 775 board and I was getting parity sync speeds in the 70 MB/s range. Maybe the X9SCM motherboard, or its BIOS, is the problem.

Link to comment

42MB/s is not what I would expect from RAID0 even on x1 lane.

 

 

You could try moving it to other slots to see what the deal is.

On my card, I enabled write back cache.

 

 

The best test is to use the writeread10gb script on these raw drives to determine maximum speed before you build the array.

 

 

I.e. with the 3 drives and emhttp down, format each drive with reiserfs manually.

After that  use the writeread10gb script on each drive.

That will be the maximum you could possibly achieve on each drive. From there you can factor in all the other issues.

I can say that if you are doing a parity operation (read/write/sync, etc.) and writing data to the cache drive at the same time, there will be a performance penalty, and it could be significant. So I do those two operations separately.

 

 

If you have a UPS you can enable the write back cache on the controller in the firmware before doing anything else to see if it helps.

 

 

I have 15 data drives to 1 ARC-1200 in an x1 slot; I get 75-80 MB/s at the start, dropping to around 55 MB/s near the end.

Jul 14 21:22:10 atlas kernel: md: sync done. time=18244sec rate=53538K/sec

 

 

I have 2 Seagate 1.5TB 7200RPM drives with 32MB cache as my RAID0/RAID1 hybrid.

If you are using 5400rpm drives for yours, that could be an issue.

 

I'm totally lost here! I want to do a speed test on all disks or volumes using writeread10gb. Can I do this on the 2 volumes I created on the ARC-1200, or on the 2 individual disks? How?

 

And how do I take emhttp down? Is unMENU gone too, then?

 

 

Link to comment
