Parity drive(s) in RAID 0 with hardware?



I am building a new unRAID server and I am wondering if it would be possible to use two hard drives in RAID 0 as the parity drive? 

 

Specifically, the server will first be populated with 5x 4TB 5400 RPM data drives and 6x 3TB 5400 RPM data drives.  I would like to use two 2TB 7200 RPM drives in RAID 0 as the parity drive, creating a 4TB parity drive.  The RAID 0 parity would be set up on the hardware RAID controller, which combines the two drives into 4TB.  This should make the parity drive twice as fast as a single drive, and I am wondering if anyone has tried this before?
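For what it's worth, the software-RAID equivalent of that layout looks like this (the plan above uses a hardware controller, so this is purely an illustration; the device names are hypothetical placeholders):

```shell
# Illustration only: stripe two 2TB disks into one ~4TB block device with
# Linux md RAID 0. /dev/sdx and /dev/sdy are hypothetical placeholders.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdx /dev/sdy
mdadm --detail /dev/md0   # confirm the array reports ~4TB before using it as parity
```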

 

I realize that if either of the parity drives in RAID 0 fails, the parity data would be lost.  But the parity drive could always be rebuilt if that happens.  The tremendous speed increase for writes and parity checks would be fantastic though.  I also do multiple writes simultaneously, so faster parity would be very nice.

 

Thoughts?

 

craigr


When writing to the array you are limited by the speed of the parity drive AND the data drive being written to, so having a RAID 0 parity drive is not going to make any difference in terms of speed. You are just increasing the risk of a parity drive failure. Since you'll be more likely to have parity drive failures while using RAID 0, what do you think happens if a data drive fails while you're rebuilding parity?

 

So, even if you could, why would you want to? There is really no upside to doing this.
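The ceiling being described is literally a min() of two drives; a trivial sketch with made-up speeds:

```shell
# Sequential array writes are bounded by the slower of the parity device and
# the single data drive being written. Speeds below are illustrative, not measured.
parity_mbs=300   # assumed RAID 0 parity pair
data_mbs=120     # assumed 5400 RPM data drive
effective=$(( parity_mbs < data_mbs ? parity_mbs : data_mbs ))
echo "effective single-write ceiling: ${effective} MB/s"
```

With a 120 MB/s data drive, a 300 MB/s parity device buys nothing for a single write.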


I think that unRAID will not allow other hardware RAIDs to work simultaneously. This is why most people flash their RAID cards to a non-RAID BIOS to be able to use them with unRAID.

 

Sorry

 

That is 100% incorrect!

 

I ran an ARC-1210 with two 4TB Hitachis on it for about a year. I configured a 4TB RAID0 for parity and a 1TB RAID1 for cache.  It worked beautifully!  I got great speeds.  I removed the Areca card when I reconfigured my server recently; however, I may go back to this configuration.


When writing to the array you are limited by the speed of the parity drive AND the data drive being written to, so having a RAID 0 parity drive is not going to make any difference in terms of speed. You are just increasing the risk of a parity drive failure. Since you'll be more likely to have parity drive failures while using RAID 0, what do you think happens if a data drive fails while you're rebuilding parity?

 

So, even if you could, why would you want to? There is really no upside to doing this.

 

You are limited to the speed of the parity drive and the speed of a SINGLE data drive. You get improved speed on MULTIPLE simultaneous drive writes with a RAID0 parity. And yes, while the odds of losing a parity drive are higher because of the RAID0, remember that a parity drive is no more important than any other drive in unRAID.

 

For this reason, I also run bonded NICs. I get improvement on multiple reads and writes.

 

 


And yes, while the odds of losing a parity drive are higher because of the RAID0, remember that a parity drive is no more important than any other drive in unRAID.

Yes, I realize that all drives are equally important. My point, which perhaps was not completely clear, is that you'll be rebuilding parity following a failed drive replacement more frequently, and that increases the chances of encountering a drive failure while the array is not protected by parity, which could lead to data loss.


I think that unRAID will not allow other hardware RAIDs to work simultaneously. This is why most people flash their RAID cards to a non-RAID BIOS to be able to use them with unRAID.

 

Sorry

 

That is 100% incorrect!

 

I ran an ARC-1210 with two 4TB Hitachis on it for about a year. I configured a 4TB RAID0 for parity and a 1TB RAID1 for cache.  It worked beautifully!  I got great speeds.  I removed the Areca card when I reconfigured my server recently; however, I may go back to this configuration.

Wait, so are you saying that what I want to do will work and you have done it?  I'm the OP and I want to run two 2TB drives in RAID 0 as the parity drive for a server that has 4TB data drives.

 

I'm confused because it sounds like you are saying that my idea will work, but you quoted Thornwood when he said that he does not think it will work, and it seemed like you were saying he is 100% correct.

 

I don't see how unRAID would know whether I have a single-disk parity or a RAID 0 for parity, so I don't really see how it wouldn't work, but I just want to be sure.  I have used unRAID just for testing, so I have little experience with the OS.  I have been running FreeBSD for years.

 

Thanks,

craigr


And, yes, while the odds of you losing a Parity drive faster because of the RAID0, remember that a parity drive is no more important than any other drive in unRAID.

Yes, I realize that all drives are equally important. My point, which perhaps was not completely clear, is that you'll be rebuilding parity following a failed drive replacement more frequently and that increases the chances for encountering a drive failure while the array is not protected by parity which could lead to data loss.

I understand that if either of the drives in a parity RAID 0 array dies, you will lose your parity data.  I said that I understood that risk in my original post.  That being said, unRAID retains all the data on all the data drives even if the parity drive(s) die.  So all one needs to do is replace the failed drive in the parity array and then rebuild the parity drive if a parity drive failure ever occurs.

 

Of course, if you also happen to be very unlucky and lose a data drive while the parity array is down, then you will lose the data on the dead data drive.  I consider that unlikely, and even if it happens it's no big deal because it's just one drive with BD rips on it that I can always recreate if necessary.  So I guess you could say that I honestly don't value the data all that much, so I am willing to tolerate more risk to increase write speeds and parity checks.
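That trade-off can be put in rough numbers; the 3% annual per-drive failure rate below is an assumption for illustration, not a measured figure:

```shell
# A RAID 0 pair is lost if either member fails, so its failure probability is
# roughly double a single drive's (for small p). p=0.03 is an assumed figure.
pair=$(awk 'BEGIN { p = 0.03; printf "%.4f", 1 - (1 - p) ^ 2 }')
echo "assumed single-drive annual failure: 0.0300; RAID 0 pair: ${pair}"
```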

 

craigr


 

You are limited to the speed of the parity drive and the speed of a SINGLE data drive. You get improved speed on MULTIPLE simultaneous drive writes with a RAID0 parity. And yes, while the odds of losing a parity drive are higher because of the RAID0, remember that a parity drive is no more important than any other drive in unRAID.

 

For this reason, I also run bonded NICs. I get improvement on multiple reads and writes.

But looking at your signature Steven, it seems that you are running a RAID 0 parity drive (or at least you have done so in the past).

 

craigr


Hey Steven,

 

In looking at your signature again you have:

 

2 x 4TB Hitachi 7200rpm (Areca - 4TB RAID0 Parity / 1TB RAID1 Cache)

 

So were you using two 4TB drives for 8TB of parity?  Why didn't you use two 2TB drives? 

 

I would like to use two 2TB drives, as the two drives together in RAID 0 should sum to 4TB, which is the same size as my largest data drive.

 

Thanks for any more info.

 

craigr


You can definitely use a RAID array for parity => several folks have done so.  Some also use RAID for the cache ... typically RAID-1 so they have fault-tolerance with their cache drive.

 

I assume Thornwood will reply and confirm this, but as I read his post, he used 2 4TB drives, but configured two arrays on those drives -- a 4TB RAID-0 (using 2TB from each of the drives) for parity, and a 1TB RAID-1 (using 1TB from each drive, since a mirror's usable size equals one member) for cache.    This would leave 1TB unused on each drive -- but would also enhance the performance, since the drives were "short-stroked" in this configuration and never accessed the slower inner cylinders.

 


I was one of the first people to do this in what I call a hybrid SAFE mode on an Areca ARC-1200 controller.

 

I used 2 7200 RPM drives.

 

I created 2 arrays.

 

The first was a RAID0 array of two drives that was equal to the largest drive available at the time.

The second was a RAID1 array of the leftover chunks that was used as a cache drive.

It was more of an apps drive, since I'm very particular about where things go.

 

While you could possibly realize very high speeds using the RAID0 array, once you bring unRAID into it, you will see that it's only a small improvement.  It is very unlikely you will get faster than 50% of the slowest drive's speed.

 

The improvement was somewhere between 3-7MB/s for the drives I used at the time, yet at that time unRAID performance was somewhere between 20-28MB/s. With the new controller arrangement I was in the 30MB/s and up range.

 

Today's drives will probably be faster if you select the 3TB 7200 RPM Seagates.

 

There was a real benefit to having some of my critical data on the RAID1 array.

 

I would use that also as a /home folder for the whole network and back it up with rsync every night.

Saved my ass.
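A nightly rsync job of that sort is just a cron entry plus one command; the paths below are hypothetical examples, not the actual layout described above:

```shell
# Hypothetical nightly mirror of a /home share to the parity-protected array.
# Cron entry (03:00 every night):
#   0 3 * * * /boot/custom/backup_home.sh
# backup_home.sh: archive mode preserves permissions/times; --delete keeps an exact mirror.
rsync -a --delete /mnt/cache/home/ /mnt/user/backup/home/
```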

 

The performance benefit was good for my use since I usually wrote all over the array while also having a torrent client writing to the array 24x7x365.

 

Multiple writes around the array benefit from the random access speed and caching of the controller.

 

The speed increase is not mind blowing, but it did make it noticeable enough that I could use the array for my scratch pad instead of the SSD's in local laptops.

 

For those who feel you're at risk: if you are doing monthly parity and SMART checks, you should be good.

 

FWIW, the Areca controller will alarm (loudly) if your drive fails.

I believe it also checks the SMART values automatically.

The only downside is you lose direct SMART access from unRAID and power control.

You can adjust the BIOS to automatically spin the drives down after an hour.

 

It worked well for my use. I was very happy with the setup.

 

If you are looking for the speed increase, use the 7200 RPM 3TB drives.

They can get up to 190MB/s on the outer tracks.  It all comes into play.  Just don't expect miracles.

 

BubbaQ experimented with SSD's and at that time it did not show more than 50% of what a drive is capable of (from what I remember anyway).

 

Using certain kernel tunings and extra memory, you can get a perceived speed burst by utilizing the buffer cache to hold some data. This is good for smaller files.

E.g., it's a real benefit to me when I update mp3 tags or import artwork into them.

I get from 40-70MB/s for the smaller files (directly on the array).
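The buffer-cache behavior described here is governed by the vm.dirty_* sysctls; the values below are illustrative assumptions, not tested recommendations:

```shell
# Illustrative write-buffering sysctls (assumed values; tune for your RAM).
# Allowing more dirty pages to accumulate lets small-file bursts land in RAM
# and be flushed to the array in the background.
sysctl -w vm.dirty_ratio=40                # % of RAM dirty before writers block
sysctl -w vm.dirty_background_ratio=10     # % of RAM dirty before background flush starts
sysctl -w vm.dirty_expire_centisecs=3000   # max age of dirty pages before flush (30s)
```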

 

This is with a 3TB 7200 RPM parity drive, a data drive, and some kernel tuning. I do not use the Areca any more since I lost my server.

 

If the small performance increase and SAFE RAID0/RAID1 mode interest you, it will work nicely.

However, if you are not interested in eking out every ounce of performance, you are better off using the 4TB drive for parity and using the 2TB drives for data.


Thanks for all the info.  Do you happen to have any old links to the tests or threads?

 

I already own the 3TB drives and they are installed in my FreeBSD server now.  I need to buy 4x 4TB drives for data so that I can transfer all the data on the FreeBSD server to the unRAID server.  I will then need a 4TB drive for parity.  This could be another 4TB drive, or it could be two 2TB drives in RAID 0.  Either way, I have to buy new drives for parity, so it can be either 2TB drives or 4TB drives.  It sounds like you think the performance increase with 2x 2TB drives in RAID 0 will be nil though.

 

I am thinking that I will probably use a 256GB SSD for cache, so I won't need another array for that.

 

craigr


Thanks for all the info.  Do you happen to have any old links to the tests or threads?

 

Scan for ARC-1200

 

I already own the 3TB drives and they are installed in my FreeBSD server now.  I need to buy 4x 4TB drives for data so that I can transfer all the data on the FreeBSD server to the unRAID server.  I will then need a 4TB drive for parity.  This could be another 4TB drive, or it could be two 2TB drives in RAID 0.  Either way, I have to buy new drives for parity, so it can be either 2TB drives or 4TB drives.  It sounds like you think the performance increase with 2x 2TB drives in RAID 0 will be nil though.

 

I am thinking that I will probably use a 256GB SSD for cache, so I won't need another array for that.

 

craigr

 

Is 3-10MB/s more performance worth the $100 for the controller and using 2 slots for the RAID0 array?

The performance increase with 2x2TB drives will be small (not nil, small) for a single sequential process.

It will not be measurable for random access, but it will improve. Certainly did for me. 

Basic filesystem housekeeping and journaling benefit from the caching nature of the controller.

Parity gen/create speed will be faster.  Check speed will not be any faster.

 

It will be stated that your array will not be faster than your slowest drive.

Improving your parity matters when you are writing to more than one drive.

I noticed it, but that's also because I use the array as a central hub/file server.

 

As far as purchasing new drives for parity, I wouldn't use the 2TB drives; I would go with the 3TB drives.

I have the Seagate 2TB 7200 RPM drives and the 3TB drives, and the 3TB drives have been faster in my tests.

Plus you will have the space to expand up to 6TB later on.

 

Use dd to read the raw drives and benchmark them yourself.

 

N40L MicroServer, bare metal. 

egrep 'rdevName|rdevId' /proc/mdcmd

rdevName.0=sdd
rdevId.0=ST3000DM001-1CH166_W1F1GTFJ
rdevName.1=sdc
rdevId.1=ST3000DM001-1CH166_W1F1H834
rdevName.2=sdb
rdevId.2=Hitachi_HDS5C3030ALA630_MJ1321YNG0GBPA
rdevName.3=sde
rdevId.3=Hitachi_HDS5C3030ALA630_MJ1321YNG0EEXA

root@unRAID:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdd
1048576000 bytes (1.0 GB) copied, 5.3432 s, 196 MB/s

root@unRAID:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdc
1048576000 bytes (1.0 GB) copied, 5.95913 s, 176 MB/s

root@unRAID:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sde
1048576000 bytes (1.0 GB) copied, 8.23319 s, 127 MB/s

root@unRAID:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdb
1048576000 bytes (1.0 GB) copied, 8.40753 s, 125 MB/s


N54L (faster CPU, more RAM) under ESX RDM on an HP MicroServer.

root@unRAID1:~# egrep 'rdevName|rdevId' /proc/mdcmd
rdevName.0=sdb
rdevId.0=ST3000DM001-9YN166_W1F191JR
rdevName.1=sdc
rdevId.1=WDC_WD20EARX-00PASB0_WD-WCAZAJ271733

root@unRAID1:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdb
1048576000 bytes (1.0 GB) copied, 6.61537 s, 159 MB/s

root@unRAID1:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdc
1048576000 bytes (1.0 GB) copied, 8.91761 s, 118 MB/s

Notice how the 3TB drive is slower under ESX RDM. 


A different N54L (faster CPU, more RAM) under ESX RDM on an HP MicroServer,
using 3TB and 4TB drives. 

root@unRAID2:~# egrep 'rdevName|rdevId' /proc/mdcmd
rdevName.0=sdc
rdevId.0=ST4000DM000-1F2168_W3005993
rdevName.1=sdd
rdevId.1=ST4000DM000-1F2168_W3005LMP
rdevName.2=sdf
rdevId.2=ST4000DM000-1F2168_Z300JE0D
rdevName.7=sde
rdevId.7=ST3000DM001-1CH166_Z1F2WFKV

root@unRAID2:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sde
1048576000 bytes (1.0 GB) copied, 6.08666 s, 172 MB/s

root@unRAID2:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sde
1048576000 bytes (1.0 GB) copied, 6.12011 s, 171 MB/s

root@unRAID2:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdc
1048576000 bytes (1.0 GB) copied, 6.93113 s, 151 MB/s

root@unRAID2:~# dd bs=1024 count=1024000 of=/dev/null if=/dev/sdf
1048576000 bytes (1.0 GB) copied, 6.72722 s, 156 MB/s

Again, the speed is about 20MB/s slower under ESX. 

 

I have the Seagate 2TB drives, but I cannot benchmark them; from what I remember they were on par with the 4TB drives.

In any case, benchmark them yourself.  I got over 190MB/s on the 3TB drives frequently in raw read tests.

 

However, as has been said, your array writes are only as fast as your slowest drive.


Makes sense.

 

If speed and access are the priority, I would re-purpose the 3TB drives from the other machine.

Keep in mind, the return on investment is small unless you use your server hard like I do.

 

For a basic media server with an SSD cache, you shouldn't really need the RAID0 on parity.

 

 


Makes sense.

 

If speed and access are the priority, I would re-purpose the 3TB drives from the other machine.

Keep in mind, the return on investment is small unless you use your server hard like I do.

 

For a basic media server with an SSD cache, you shouldn't really need the RAID0 on parity.

I use the server pretty hard which is why I am concerned with unRAID at all really.  With FreeBSD I can easily saturate my network with writes to the server, and the server isn't the bottleneck.

 

I don't think the 3TB drives would be a good choice to repurpose for RAID 0 parity because they are WD Green 5400 RPM drives.  So they would be about the slowest RAID 0 possible ;)

 

craigr


Makes sense.

 

If speed and access are the priority, I would re-purpose the 3TB drives from the other machine.

Keep in mind, the return on investment is small unless you use your server hard like I do.

 

For a basic media server with an SSD cache, you shouldn't really need the RAID0 on parity.

I use the server pretty hard which is why I am concerned with unRAID at all really.  With FreeBSD I can easily saturate my network with writes to the server, and the server isn't the bottleneck.

 

I don't think the 3TB drives would be a good choice to repurpose for RAID 0 parity because they are WD Green 5400 RPM drives.  So they would be about the slowest RAID 0 possible ;)

 

craigr

 

 

I had no idea they were 5400 RPM, so... don't go there!! LOL! You would not be happy with the performance.


Is 3-10MB/s more performance worth the $100 for the controller and using 2 slots for the RAID0 array?

One thing is that I already have two LSI 9211-8I and 8x SATA ports on my MB.  So I'm not too worried about an extra SATA port taken by two parity drives.

 

The performance increase with 2x2TB drives will be small (not nil, small) for a single sequential process.

It will not be measurable for random access, but it will improve. Certainly did for me. 

Basic filesystem housekeeping and journaling benefit from the caching nature of the controller.

Parity gen/create speed will be faster.  Check speed will not be any faster.

Why don't you think parity check will be any faster?  This is one of the primary reasons why I liked the idea.  I was under the impression that unRAID reads from all data drives simultaneously during a parity check.  And as such, having a faster parity drive to compare with would increase speed.  What am I missing?  I would really like to improve parity check times.

 

Thanks again,

craigr


I think you're focusing on the wrong thing.

 

A much-faster parity drive will have very LITTLE impact on the write speeds for the array, as you're still limited by the speed of the array drive you're writing to -- and they're all relatively-slow 5400rpm drives.

 

What you want is a fast CACHE drive.    You could either use a fairly large SSD for that, or, if you want more space and still a fast drive, use a RAID array for the cache unit: RAID-0 if you're looking for improved speed; RAID-1 if you want fault-tolerance on the cache; or RAID-10 for both.
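The cache-side options above can be sketched the same way with Linux md (illustration only; device names are hypothetical placeholders):

```shell
# RAID-1 mirror for a fault-tolerant cache (usable size equals one member):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdx /dev/sdy
# Or RAID-10 across four drives for speed plus fault-tolerance:
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdw /dev/sdx /dev/sdy /dev/sdz
```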

 


Why don't you think parity check will be any faster?  This is one of the primary reasons why I liked the idea.  I was under the impression that unRAID reads from all data drives simultaneously during a parity check.  And as such, having a faster parity drive to compare with would increase speed.  What am I missing?  I would really like to improve parity check times.

 

Parity check speeds are limited by the slowest drive involved -- you could use a 2TB SSD for parity and it would make NO difference in the parity checks  :)

 


Is 3-10MB/s more performance worth the $100 for the controller and using 2 slots for the RAID0 array?

One thing is that I already have two LSI 9211-8I and 8x SATA ports on my MB.  So I'm not too worried about an extra SATA port taken by two parity drives.

 

The performance increase with 2x2TB drives will be small (not nil, small) for a single sequential process.

It will not be measurable for random access, but it will improve. Certainly did for me. 

Basic filesystem housekeeping and journaling benefit from the caching nature of the controller.

Parity gen/create speed will be faster.  Check speed will not be any faster.

Why don't you think parity check will be any faster?  This is one of the primary reasons why I liked the idea.  I was under the impression that unRAID reads from all data drives simultaneously during a parity check.  And as such, having a faster parity drive to compare with would increase speed.  What am I missing?  I would really like to improve parity check times.

 

Thanks again,

craigr

 

It really depends on the server's layout.  I did not see 'much' of a parity 'check' benefit from RAID0 parity.

I saw a big benefit from using the Areca's cache when creating new parity.  Since a write was cached and returned immediately, it would be in the controller's buffer, letting the kernel go on and do other things.

 

While there will be a small benefit from RAID0 parity, it would be small unless you re-tuned the server and its buffering.

It's highly dependent on the server, the controllers, and the slots they are connected to.

 

Even when I used hardware RAID0 the speed improvements were small.

 

Perhaps after tuning with this utility you will see more improvement.

 

unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables

http://lime-technology.com/forum/index.php?topic=29009
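For reference, the md_* tunables that the linked script searches over could also be set by hand on the unRAID releases of that era; the values below are placeholders, not recommendations:

```shell
# Placeholder values; the tester script finds good ones for your hardware.
mdcmd set md_num_stripes 4096
mdcmd set md_sync_window 2048
# These reset on reboot, so persist the winning values via the disk settings page.
```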


... noted your config has mixed 3TB and 4TB drives.

 

Your parity checks would be faster if all the drives were the same size.    As a drive nears the inner cylinders, read/write operations are notably slower than on the outermost cylinders.  When you have mixed size drives in your array, this slowdown occurs more than once.

 

So, in your case, as the parity checks near the inner cylinders of the 3TB drives (especially when you get to 90% or so -- i.e. 2.7TB) the check will run notably slower than it had been up to that point;  then when it crosses 3TB (thus the 3TB drives are no longer involved) it will speed back up, until it gets close to the innermost cylinders of the 4TB units.
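The mixed-size slowdown described above amounts to a two-phase estimate; the speeds below are illustrative averages, not measurements:

```shell
# Rough two-phase parity-check estimate for a mixed 3TB/4TB array.
phase1_tb=3; phase1_mbs=110   # while the slower 3TB drives still participate
phase2_tb=1; phase2_mbs=150   # final 1TB, read from the 4TB drives only
t1=$(( phase1_tb * 1000000 / phase1_mbs ))   # seconds, taking 1TB as 1,000,000 MB
t2=$(( phase2_tb * 1000000 / phase2_mbs ))
echo "estimated check time: $(( (t1 + t2) / 3600 )) hours"
```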

 

The speed of the parity drive, as I noted above, has NO impact on this (unless it happens to be a slower drive than those in the array).

 


Thanks for all the great information and links guys.  I really appreciate it.

 

Since it seems that a faster parity drive will not increase the speed of parity checks, I will likely not pursue the idea of RAID 0 for the parity drive.  I wish there were a way to increase the speed of parity checks.

 

craigr


Thanks for all the great information and links guys.  I really appreciate it.

 

Since it seems that a faster parity drive will not increase the speed of parity checks, I will likely not pursue the idea of RAID 0 for the parity drive.  I wish there were a way to increase the speed of parity checks.

 

craigr

 

 

I gave you a link to re-tune the buffering. Try that out.

Also, if you have the hardware, I would suggest 'trying' it.

 

 

If not, don't invest in it unless you want to increase your current write speed to the array; even then it's a small return on investment.

