
Hardware RAID 6, Software RAID 0


kt6999


Hi, I am planning to move away from the array system and would like to use just the cache pool.

I currently have 8 12TB Seagate Exos drives with 4 of them in use in the standard unRAID array pool.

The issue with the array pool is not enough write speed. 

My Norco 4224 holds 24 drives, and I don't have the funds to fill the chassis and create a standard RAID 60.

My plan is to put my drives in hardware RAID 6 on an Areca 1882ix and use unRAID's cache pool to RAID 0 the multiple RAID 6 arrays together.

I believe my biggest risk is the btrfs file system and its corruption possibilities, so I have switched to workstation components, ECC RAM, and a UPS.

The server will be in production for a year until I move to a 45 Drives Storinator XL60 or Supermicro 60-bay chassis.

 

Any suggestions and/or criticisms are very much welcome.

1 minute ago, jonathanm said:

At the moment I believe you are required to have 1 array drive for it to start, so plan for that. It could be a USB or small SSD if that works for you. Other than that, I must ask, why unraid? You are not utilizing the basic feature that unraid is built on, so why bother?

Because the server is currently on unRAID and I am not sure of the benefits of switching to FreeNAS or WHS. If there are reasons I should switch, I would love to hear them.


Most people have multiple classes of data. The unRAID array is best for seldom-changing data - which is normal for media data.

 

Since you consider not using the array anymore - are all data you store semi-volatile so the write speed is important?

 

It's more common to use mirrored disks (preferably SSD) for the data that requires the highest write speed, while keeping the bulk on the normal unRAID array.

1 hour ago, pwm said:

Most people have multiple classes of data. The unRAID array is best for seldom-changing data - which is normal for media data.

 

Since you consider not using the array anymore - are all data you store semi-volatile so the write speed is important?

 

It's more common to use mirrored disks (preferably SSD) for the data that requires the highest write speed, while keeping the bulk on the normal unRAID array.

I use the server for video editing with 4 editors, two rendering machines, and 8 video servers reading 1080p streams at the same time. I was using 8x 2TB Micron SSDs as cache, but the cache was being filled every couple of hours. I am also having issues with the mover sometimes not being able to move data into the array.

17 minutes ago, kt6999 said:

I use the server for video editing with 4 editors, two rendering machines, and 8 video servers reading 1080p streams at the same time. I was using 8x 2TB Micron SSDs as cache, but the cache was being filled every couple of hours. I am also having issues with the mover sometimes not being able to move data into the array.

 

Then I understand why you don't want to write your data to an unRAID array. The unRAID array is better as backing store for already edited video where data can later be concurrently streamed from multiple disks in the array.


I think unRAID should allow any combination of array layouts to be concurrently in use. I can't use unRAID for the main storage server just because I need multiple arrays optimized for different usage cases and access patterns.

1 hour ago, pwm said:

 

Then I understand why you don't want to write your data to an unRAID array. The unRAID array is better as backing store for already edited video where data can later be concurrently streamed from multiple disks in the array.

My issue with that is I need the data moved immediately; right now it takes about 14 hours for the mover to empty the cache, but we fill it every 2 hours.
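As a rough sketch of the gap between the fill and drain rates described here (assuming the 8x 2TB SSDs ran as a single ~16 TB RAID 0 pool — an assumption, since the pool profile isn't stated in the thread):

```python
# Back-of-envelope bandwidth math for the cache/mover situation above.
# Assumption: the 8x 2 TB SSD cache pool is RAID 0, so ~16 TB usable.

CACHE_TB = 8 * 2      # 16 TB usable (assumed RAID 0 profile)
FILL_HOURS = 2        # cache fills every ~2 hours
DRAIN_HOURS = 14      # mover needs ~14 hours to empty it

# Sustained ingest rate the editing clients generate:
ingest_tb_per_hour = CACHE_TB / FILL_HOURS          # 8 TB/h
ingest_mb_per_s = ingest_tb_per_hour * 1e6 / 3600   # ~2200 MB/s

# Rate at which the mover drains the cache into the array:
drain_tb_per_hour = CACHE_TB / DRAIN_HOURS          # ~1.14 TB/h
drain_mb_per_s = drain_tb_per_hour * 1e6 / 3600     # ~317 MB/s

# The array would need to absorb writes ~7x faster just to break even:
shortfall = ingest_tb_per_hour / drain_tb_per_hour  # 7.0
```

In other words, the array is draining roughly seven times slower than the cache fills, so no mover schedule can keep up.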

9 minutes ago, kt6999 said:

My issue with that is I need the data moved immediately; right now it takes about 14 hours for the mover to empty the cache, but we fill it every 2 hours.

 

Do you turn on turbo write when you move the data? Turbo write is extremely important to get decent write speed to the array. Still not striping, but no longer a read/modify/write operation.
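To illustrate the difference between the two modes, here is a toy sketch of the single-parity XOR math (illustrative only, not unRAID's actual code): both modes end up with the same parity, but the default mode must first read the disk being written and the parity disk, while turbo (reconstruct) write only reads the other data disks.

```python
import functools
import operator

# Four hypothetical data disks, represented as small bit patterns.
data = [0b1010, 0b0110, 0b1100, 0b0001]
parity = functools.reduce(operator.xor, data)  # XOR parity across all disks

new_value, target = 0b0111, 2                  # overwrite disk 2

# Read/modify/write: read old data + old parity, patch parity in place.
parity_rmw = parity ^ data[target] ^ new_value

# Turbo (reconstruct) write: read the *other* data disks and rebuild
# parity from scratch; the target and parity disks are only written,
# never read first, so writes stream instead of seeking back and forth.
others = [d for i, d in enumerate(data) if i != target]
parity_turbo = functools.reduce(operator.xor, others + [new_value])

assert parity_rmw == parity_turbo  # same result, very different IO pattern
```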

On 7/9/2018 at 1:39 PM, pwm said:

 

Do you turn on turbo write when you move the data? Turbo write is extremely important to get decent write speed to the array. Still not striping, but no longer a read/modify/write operation.

I do have Turbo Write enabled.

On 7/9/2018 at 1:48 PM, BRiT said:

You should schedule the Mover task to be run more frequently than just once overnight.

Most of the server bandwidth would be held up by the mover task.

4 hours ago, kt6999 said:

I do have Turbo Write enabled.

Most of the server bandwidth would be held up by the mover task.

 

This would be an example where unRAID should support multiple arrays.

 

The SSD cache has enough bandwidth to concurrently feed multiple transfers.


No question this is a task unRAID is not well suited for.  8TB of new information per hour is impossible for spinning disks to keep up with in the world of unRAID.

 

Furthermore I question the need to create two RAID6 arrays out of 8 drives then stripe them together.  You'd end up with 4 data drives and 4 parity drives (capacity-wise, not minding the actual RAID implementation).  If you're chopping your capacity in half anyway why not do a simple RAID 10 to get the speed and redundancy you're after?  Saves a lot of XOR time and you don't need a fancy hardware RAID controller to get the job done.
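The capacity math above can be sketched as follows (idealized, ignoring the actual RAID implementation and controller overhead):

```python
# Usable-capacity comparison for 8x 12 TB drives (idealized).

N_DRIVES, DRIVE_TB = 8, 12

# Two 4-drive RAID 6 groups (2 data + 2 parity each), striped as RAID 60:
raid60_usable = 2 * (4 - 2) * DRIVE_TB       # 48 TB; survives any 2
                                             # failures per group

# RAID 10: four mirrored pairs, striped together:
raid10_usable = (N_DRIVES // 2) * DRIVE_TB   # 48 TB; survives 1 failure
                                             # per mirror pair
```

Identical usable capacity either way, which is the point: the RAID 60 layout spends the same drives on parity calculation without buying more space.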

 

 

 

5 minutes ago, bman said:

No question this is a task unRAID is not well suited for.  8TB of new information per hour is impossible for spinning disks to keep up with in the world of unRAID.

 

Furthermore I question the need to create two RAID6 arrays out of 8 drives then stripe them together.  You'd end up with 4 data drives and 4 parity drives (capacity-wise, not minding the actual RAID implementation).  If you're chopping your capacity in half anyway why not do a simple RAID 10 to get the speed and redundancy you're after?  Saves a lot of XOR time and you don't need a fancy hardware RAID controller to get the job done.

 

 

 

My apologies. I forgot to mention that it is currently RAID 5. I plan to switch to RAID 6 when we get more drives.

2 minutes ago, kt6999 said:

My apologies. I forgot to mention that it is currently RAID 5. I plan to switch to RAID 6 when we get more drives.

 

Still a little confused.  Does this mean you'll be running RAID60 with 10 or more drives in the end?

 

Problem with RAID60 is it's pretty ridiculous in terms of protection versus cost.  Not just cost of hardware, but rebuild time if there's a failure and the speed impact the rebuild function has on the overall performance of the server.

 

RAID 10 is very nearly as robust in terms of data protection as RAID 60, it's cheaper and many hours (sometimes days) quicker to rebuild in the event of drive failure.  I wouldn't touch RAID 50 or 60 for video editing. Time is money.  Hard drives are cheaper than downtime.
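A ballpark of the rebuild-time difference (both rates here are assumptions for illustration, not measurements: ~200 MB/s sustained streaming per disk, and a much lower effective rate for a parity rebuild under production load):

```python
# Rough rebuild-time comparison for a failed 12 TB drive.

DRIVE_TB = 12
STREAM_MBPS = 200       # assumed sustained per-disk streaming rate

# RAID 10 rebuild: a straight sequential copy from the mirror partner.
raid10_hours = DRIVE_TB * 1e6 / STREAM_MBPS / 3600   # ~16.7 h

# RAID 6 rebuild: every surviving group member must be read in full
# while parity is recomputed, and the array keeps serving clients, so
# effective rebuild speed is typically far below streaming speed.
RAID6_EFFECTIVE_MBPS = 80   # assumed rate under concurrent load
raid6_hours = DRIVE_TB * 1e6 / RAID6_EFFECTIVE_MBPS / 3600  # ~41.7 h
```

Under these assumptions, the mirror rebuild finishes in well under a day while the parity rebuild runs for nearly two, which is the "time is money" argument in numbers.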

 

$0.02


Archived

This topic is now archived and is closed to further replies.
