
Verify...>2TB drives, P+Q Redundancy and HD Limit?



On the verge of embarking on my first real unRAID setup. I've been watching it for quite a while now, waiting for 5 to become final. Three questions...

 

Are >2TB drives fully supported now in both cache and data drives?

 

Has P+Q Redundancy ever been implemented (two drive failure tolerance)?

 

What is the latest hard drive limit (was supposed to be finalized as 24)?

 

 


On the verge of embarking on my first real unRAID setup. I've been watching it for quite a while now, waiting for 5 to become final. Three questions...

 

Are >2TB drives fully supported now in both cache and data drives?

Yes

 

Has P+Q Redundancy ever been implemented (two drive failure tolerance)?

No

 

 

What is the latest hard drive limit (was supposed to be finalized as 24)?

24 data drives, plus parity, plus cache.
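For anyone wondering what P+Q actually means: in the usual RAID-6 style scheme, P is a plain XOR across the data drives and Q is a Reed-Solomon syndrome over GF(2^8), which together let you recover from any two simultaneous drive failures. The sketch below is just that textbook scheme (using the 0x11d reduction polynomial, as in the standard Linux md RAID-6 code), not anything Lime-Tech has published for unRAID:

```python
# Illustrative sketch of RAID-6 style P+Q parity over GF(2^8), using the
# reduction polynomial 0x11d.  This is NOT unRAID's implementation -- unRAID
# has no P+Q today -- just the standard dual-parity math.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with reduction polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D          # low byte of 0x11d
        b >>= 1
    return p

def pq_parity(blocks):
    """blocks: one equal-length bytes object per data drive.
    Returns (P, Q):  P = XOR of all blocks,
                     Q = sum over i of g**i * D_i  with g = 2 in GF(2^8)."""
    n = len(blocks[0])
    P, Q = bytearray(n), bytearray(n)
    for i, block in enumerate(blocks):
        coeff = 1
        for _ in range(i):               # coeff = 2**i in GF(2^8)
            coeff = gf_mul(coeff, 2)
        for j, byte in enumerate(block):
            P[j] ^= byte
            Q[j] ^= gf_mul(coeff, byte)
    return bytes(P), bytes(Q)

# With both P and Q stored, any two failed drives (two data drives, or one
# data drive plus one parity drive) can be solved for and rebuilt.
print(pq_parity([b"\x11\x22", b"\x33\x44", b"\x55\x66"]))
```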

Thank you for the response...

 

Any ideas on when P+Q will be implemented (roadmap is pretty vague)?

 

Is there any hot spare option (to minimize the need for P+Q)?

 

I haven't seen any updates on P+Q in a while. Someone else may know though.

 

As for a hot spare, you could always leave a drive unassigned, and if/when one dies, assign the spare to its position.


Thanks. I remember the discussion about how it would be bad to swap in a drive automatically (in the case of a minor error), and how many things could go wrong if it were done automatically...

 

Just to beat a dead horse a bit more (there have been a lot of past discussions), does anyone else have the latest on P+Q redundancy and its urgency?

With my plans of running a massive array (15-20 x 3TB drives), a second layer of redundancy would be a big comfort. Given how long a new 3TB/4TB drive takes to get online (without using the assignable hot spare possibility), the extra breathing room would really be nice, especially in a small-scale production environment.
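To put rough numbers on how long getting a large drive online takes (the throughput figure below is my assumption, not a measurement):

```python
# Back-of-the-envelope timing for bringing a large drive online.
# The ~120 MB/s average throughput is an assumption; real drives slow down
# toward the inner tracks and vary by model and controller.
def hours_per_pass(capacity_tb, mb_per_sec=120):
    return capacity_tb * 1e12 / (mb_per_sec * 1e6) / 3600

for tb in (3, 4):
    one_pass = hours_per_pass(tb)
    # a full preclear cycle is roughly three passes (pre-read, zero, post-read)
    print(f"{tb} TB: ~{one_pass:.0f} h per pass, ~{3 * one_pass:.0f} h per preclear cycle")
```

So even before any parity rebuild starts, a fresh drive can easily be the better part of a day away from being usable, which is the gap a spare would close.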


I remember those discussions. Frankly, I think if it were configurable it would solve the situation for both sides of the debate.

I would prefer to have a hot spare.  When you have 20 drives, it's easy for something to go wrong.

Without automated testing and alerting, it's easy to miss a situation.
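On the alerting point, even something as simple as a cron'd SMART health check closes a lot of that gap. A minimal sketch, assuming smartmontools is installed; the drive list and the notification method (a plain print here) are placeholders you would adapt to your own setup:

```python
#!/usr/bin/env python3
# Minimal SMART health check -- a sketch, not a polished monitor.
# DRIVES and the notification (a plain print here) are placeholders.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # adjust to your array

def drive_healthy(dev):
    # `smartctl -H` prints the drive's overall SMART self-assessment
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

for dev in DRIVES:
    if not drive_healthy(dev):
        print(f"WARNING: {dev} did not report SMART health PASSED")
```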

 

Granted, the people who say they do not want an automatic rebuild in place have a good argument.

The issue really is that a lot can go wrong when you have such a wide variance of hardware and build techniques.

Many times it could be a simple cable, drive seating, PSU, or other issue that is not drive related.

 

If we had the choice of enabling or disabling this, then people with stable hardware could take advantage of it.

 

The other choice is the whole warm spare thing.

Use a cache drive the size of your parity drive. If you have a failure, sacrifice cache functionality and rebuild onto the cache drive.

Since we usually preclear all drives before installation, your cache drive should be trustworthy enough to use.

All we would need to do is clear the MBR and add the preclear signature, then add it as the replacement drive.

 

It's neither elegant nor automatic, but it is a functional, rapidly deployed warm spare.


All we would need to do is clear the MBR and add the preclear signature, then add it as the replacement drive.

 

How could you just add the preclear signature? Would you not have to go through the whole preclear again?

 

And I would like P+Q redundancy as well, but I'm not sure of the roadmap for this functionality?

You should NOT need to do ANYTHING to the cache drive to assign it as a replacement data drive.

A reconstruction of a failed or missing drive DOES NOT LOOK FOR A PRECLEAR SIGNATURE. ONLY THE ADDITION OF A NEW DRIVE TO AN ALREADY PARITY-PROTECTED ARRAY LOOKS FOR ONE. There is no need to preclear again.

 

All you should need to do is stop the array, assign the cache drive to the failed slot, and start the array.
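The reason the cache drive's old contents don't matter: with single parity, every block of the missing disk is recomputed as the XOR of the parity block and the corresponding blocks of all surviving data disks, then written out, so the replacement gets overwritten end to end regardless of what was on it. A toy illustration (not unRAID code):

```python
# Toy single-parity reconstruction: the missing drive's block is the XOR of
# the parity block and the corresponding blocks of every surviving drive.
def rebuild_block(parity_block, surviving_blocks):
    out = bytearray(parity_block)
    for block in surviving_blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2 = b"\x0f\xf0", b"\x55\xaa"
parity = bytes(a ^ b for a, b in zip(d1, d2))     # parity over the two data drives
assert rebuild_block(parity, [d2]) == d1          # d1 recovered without reading d1
```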

 

There has been absolutely no word from Lime-Tech on P+Q or diagonal parity in a very, very long time. He has had his hands full just trying to get 5.0 out, and that has been going on for several years. Do not hold your breath waiting for P+Q parity; you'll likely grow old and grey first. He has mentioned multiple arrays, each with its own parity disk, but that is not the same thing, and even that has not been mentioned for a while. I'm thinking he'll get the plugin manager in place in 5.1, and possibly a 64-bit kernel release in the 5.X series thereafter, with possibly a different file-system type ("btrfs"???) for the cache drives...

 

(I have no inside information from Tom @ Lime-Tech; I'm just guessing like everyone else based on the few posts he has made.)

 

Joe L.


All we would need to do is clear the MBR and add the preclear signature, then add it as the replacement drive.

 

How could you just add the preclear signature? Would you not have to go through the whole preclear again?

 

And I would like P+Q redundancy as well, but I'm not sure of the roadmap for this functionality?

You should NOT need to do ANYTHING to the cache drive to assign it as a replacement data drive.

A reconstruction of a failed or missing drive DOES NOT LOOK FOR A PRECLEAR SIGNATURE. ONLY THE ADDITION OF A NEW DRIVE TO AN ALREADY PARITY-PROTECTED ARRAY LOOKS FOR ONE. There is no need to preclear again.

 

All you should need to do is stop the array, assign the cache drive to the failed slot, and start the array.

 

 

My mistake. Sorry for the misinformation on that.

