Questions regarding 8TB drives...



Hello, 

 

I come from a background of using RAID prior to Unraid. I know that Unraid is not RAID, but with hard drives getting ever more "dense", RAID 5 is practically obsolete due to the odds of hitting a URE during reconstruction, and many articles I've read suggest that RAID 6 is heading in the same direction.

I do understand that Unraid is not RAID, and that it has a different way of "handling" these issues during a reconstruction if a drive were to catastrophically fail and require a rebuild.

So I guess my question is: outside of the general reliability of the drives, Unraid handles UREs by, let's say, "skipping" over them during the reconstruction, correct? I shouldn't really be worried about the density of these newer drives, since a URE during a rebuild technically would not cause a complete loss, but would still allow recovery/reconstruction of the pool to a protected state, even if some corrupt or unreadable data were found during the reconstruction?

Has anyone here rebuilt from a parity drive or pool of 8TB drives? What was your experience? I'm looking at getting into 8TB drives with dual parity (I'm currently on single parity with 4TB WD Reds), my case is starting to get full and I need to expand, and I would just like some more insight on whether I really should worry so much about encountering a URE during a reconstruction with such dense drives.

Thank you all for any answers you can provide, and taking the time to read this. 

Link to comment

There is no question that the math supports statements like yours about the odds of a URE during reconstruction, but you should also include the stripe width as a factor. Using larger drives for the same size dataset reduces the number of drives. So it is not the drive density alone, but the total amount of data being protected.

 

(20) 4TB drives protected by single parity are riskier than (10) 8TB drives protected by single parity. Either may make you nervous.

 

Changing to dual parity greatly increases data durability. Some will be comforted by (20) 4TB drives protected by dual parity, but even more so by (10) 8TB drives protected by dual parity.

 

As far as experiencing a URE goes, the likelihood during a rebuild is similar to the likelihood during a parity check, and I have not seen many reports of spurious errors during monthly checks.
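To put rough numbers on the density argument, here is a back-of-envelope sketch (illustrative only, not anyone's actual methodology). It assumes the commonly quoted URE rate of 1 per 10^14 bits read and independent errors, both pessimistic simplifications, and compares the two single-parity layouts above:

```python
# Back-of-envelope URE odds for a single-parity rebuild.
# Assumptions (illustrative only): URE rate of 1 per 1e14 bits read,
# independent errors, and a rebuild that reads every surviving drive in full.

URE_RATE = 1e-14  # quoted probability of an unrecoverable read error per bit

def p_at_least_one_ure(drives_read, drive_tb):
    """Probability of at least one URE while reading `drives_read` full drives of `drive_tb` TB."""
    bits_read = drives_read * drive_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits_read

# Rebuilding one failed drive with single parity reads all the *other* drives.
print(f"(20) 4TB, single parity: {p_at_least_one_ure(19, 4):.1%}")
print(f"(10) 8TB, single parity: {p_at_least_one_ure(9, 8):.1%}")
```

With the spec-sheet rate both layouts come out alarmingly high (the 20-drive layout slightly worse), which is part of why clean monthly parity checks are better evidence of real-world URE behaviour than the quoted rate.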

 

Link to comment
1 hour ago, gaintrain said:

So I guess my question is: outside of the general reliability of the drives, Unraid handles UREs by, let's say, "skipping" over them during the reconstruction, correct? I shouldn't really be worried about the density of these newer drives, since a URE during a rebuild technically would not cause a complete loss, but would still allow recovery/reconstruction of the pool to a protected state, even if some corrupt or unreadable data were found during the reconstruction?

Correct. One or more "md: recovery thread: multiple disk errors" entries will be logged; this is unRAID-speak for "there are errors on more disks than the current redundancy can correct; the rebuild/sync will continue, but there will be some (or a lot of) corruption."
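If you want to confirm whether a rebuild logged any of those entries, one simple way is to scan the syslog for that exact message. A minimal sketch, assuming the log lives at the usual /var/log/syslog location:

```python
# Scan the syslog for unRAID's "multiple disk errors" message from a rebuild/sync.
# Assumes the default /var/log/syslog path; adjust if your logs live elsewhere.

LOG = "/var/log/syslog"

with open(LOG, errors="replace") as f:
    hits = [line.rstrip() for line in f
            if "md: recovery thread: multiple disk errors" in line]

if hits:
    print(f"{len(hits)} uncorrectable-error event(s) found:")
    for line in hits:
        print(line)
else:
    print("No 'multiple disk errors' entries found in", LOG)
```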

 

1 hour ago, gaintrain said:

Has anyone here rebuilt from a parity drive or pool of 8TB drives? What was your experience?

Yes, same as rebuilding a small drive, it just took longer ;)

Link to comment
On 2/14/2018 at 11:13 PM, c3 said:

(20) 4TB drives protected by single parity is more risky than (10) 8TB protected by single parity. Either may make you nervous.

 

Changing to dual parity greatly increases the data durability. Some will be comforted by (20) 4TB protected by dual parity, but even more with (10) 8TB protected by dual parity.

 

But a parity check/rebuild will be roughly twice as fast with 4TB vs 8TB drives (assuming the same MB/s performance), and with 8TB drives at 5400 rpm it takes roughly 24 hours to complete a parity check. Not critical, but still a factor I think is worth mentioning.
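For context, the arithmetic behind those durations is just capacity divided by average read speed, since a check has to read every sector of the largest drive. A rough sketch with illustrative speeds (not measured figures for any particular model):

```python
# Rough parity-check duration: the check reads the full capacity of the
# largest (parity) drive, so duration ~= capacity / average sustained speed.
# The speeds below are illustrative, not measurements of any specific drive.

def check_hours(capacity_tb, avg_mb_per_s):
    return capacity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

print(f"4TB @ 130 MB/s avg: {check_hours(4, 130):.1f} h")  # ~8.5 h
print(f"8TB @ 130 MB/s avg: {check_hours(8, 130):.1f} h")  # ~17.1 h
print(f"8TB @  96 MB/s avg: {check_hours(8, 96):.1f} h")   # ~23.1 h, matches the 96 MB/s report below
```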

Link to comment
1 minute ago, johnnie.black said:

Depends on the drives; more like 15 hours with Seagate Archive drives. WD Reds will be a little slower, I would guess around 17-18 hours, unless there are other bottlenecks.

My setup takes:

Last check completed on Monday, 2018-02-19, 22:08 (two days ago), finding 0 errors. 
Duration: 23 hours, 8 minutes, 51 seconds. Average speed: 96.0 MB/sec

with a mix of IronWolf drives from Seagate and Reds from Western Digital.

 

How I wish it only took 18H. 9_9

 

But in the future I plan to replace the Reds with IronWolfs. Then it should speed things up. I may even hit that 18H mark. :D

 

/Alphahelix

 

Link to comment
29 minutes ago, Alphahelix said:

How I wish it only took 18H. 9_9

 

But in the future I plan to replace the Reds with IronWolfs. Then it should speed things up. I may even hit that 18H mark. :D

 

Why do you care so much that it completes in 18 hours?

Just wondering.

I did a parity check yesterday and it took over 23 hours. But I was streaming movies and copying movies to my array at the same time. lol

My largest drives are 10TB, but I still have a bunch of 4TB Reds also.

Link to comment
On 2/15/2018 at 9:46 PM, gaintrain said:

Great, I think I just needed the affirmations telling me: this isn't RAID; relax, it's Unraid! Thank you all for sharing!!!

I would just say that with unRAID we suggest running monthly parity checks. These are similar to drive rebuilds. If UREs do not occur month after month of parity checks, it is extremely unlikely that a drive replacement would generate one. And with dual parity, even one other drive generating a URE during a rebuild would not compromise it.
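For intuition on that last point: each parity drive is an independent equation over the data drives at every position, and each equation can recover one unknown. A toy sketch using XOR for the single-parity case (Unraid's second parity is not a plain XOR; this is only to illustrate the counting argument):

```python
# Toy single-parity (XOR) reconstruction to show the counting argument.
# Real dual parity adds a second, independent equation (not plain XOR),
# which is what lets a rebuild survive one extra unreadable sector.

d1 = bytes([10, 20, 30])
d2 = bytes([40, 50, 60])   # pretend this drive failed
d3 = bytes([70, 80, 90])
p  = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))  # single parity

# Rebuild d2 from parity plus the surviving drives: one equation, one unknown.
rebuilt = bytes(a ^ c ^ q for a, c, q in zip(d1, d3, p))
assert rebuilt == d2

# If d3 also hits a URE at some position during the rebuild, that position has
# two unknowns but still only one equation, so single parity cannot recover it.
# A second parity drive supplies the second equation, which is why dual parity
# tolerates a failed drive plus a stray URE on another drive.
```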

 

RAID 5 and 6 arrays don't typically run these check cycles (at least I don't think they do). So when a failure occurs, the rebuild may be accessing sectors that have not been read in many months or longer.

Link to comment
17 hours ago, johnnie.black said:

Like I said, you have other bottlenecks or a mixed array with smaller disks; my 8TB server takes 15 hours for a parity check, with Seagate 8TB Archive drives only.

According to this site, your disks run at 5900 RPM; that is a tad faster than 5400 RPM, but my logic tells me it should not make more than a 20-30 minute difference, 1 hour tops. It could be that my system is built from discontinued components from an old datacenter (except the disks). I mean, my SATA controllers are only SATA II, yet MB/s-wise I should never hit that controller's speed limit. There may be instructions in SATA III that could give a performance boost, but again, I can't get the math to fit with that big of a time difference. I guess I will see on Monday, when the parity check runs again, whether my cache drive makes a difference (I just installed it today). I know it will not speed up the process itself, but it may spare some writes, and that should help the time a little, but not much.

 

17 hours ago, TUMS said:

 

Why do you care so much that it completes in 18 hours?

Just wondering.

I did a parity check yesterday and it took over 23 hours. But I was streaming movies and copying movies to my array at the same time. lol

My largest drives are 10TB, but I still have a bunch of 4TB Reds also.

I save all my photos (I am a part-time photographer) on this system, with a backup to a local QNAP and also a backup to a long-distance QNAP. I use the parity to prevent bitrot; yes, I know it is not meant for that, but then let us call it "off-label use". ;)

Edited by Alphahelix
Link to comment
According to this site, your disks run at 5900 RPM; that is a tad faster than 5400 RPM.

That, and mostly the fact that they have a higher areal density, makes them faster than the Reds in sequential transfers.

 

I mean, my SATA controllers are only SATA II

That doesn't matter for HDDs; SATA II has enough bandwidth for the fastest disks currently on the market.
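Rough numbers, for anyone curious: SATA II signals at 3 Gbit/s with 8b/10b line coding, so the usable bandwidth is about 300 MB/s, comfortably above what any spinning disk sustains. A quick sketch:

```python
# Why SATA II is not the bottleneck for spinning disks.
# 3 Gbit/s line rate, 8b/10b encoding -> 8/10 of the raw rate is payload.
sata2_payload_mb_s = 3e9 * (8 / 10) / 8 / 1e6   # ~300 MB/s usable
typical_fast_hdd_mb_s = 250                      # rough outer-track sequential rate

print(f"SATA II usable bandwidth: ~{sata2_payload_mb_s:.0f} MB/s")
print(f"Fast HDD sequential rate: ~{typical_fast_hdd_mb_s} MB/s")
```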

 

Looking at your signature, it looks like you have a 6TB disk in the array; that will lower your parity check's average speed and make it last at least 2 or 3 hours longer.

 

 

Link to comment
28 minutes ago, johnnie.black said:

That doesn't matter for HDDs; SATA II has enough bandwidth for the fastest disks currently on the market.

 

I thought so.

 

28 minutes ago, johnnie.black said:

Looking at your signature, it looks like you have a 6TB disk in the array; that will lower your parity check's average speed and make it last at least 2 or 3 hours longer.

 

Well, replacing the Reds with IronWolfs is next on my list. :) But thank you for your input.

Link to comment
17 hours ago, SSD said:

RAID 5 and 6 arrays don't typically run these check cycles (at least I don't think they do). So when a failure occurs, the rebuild may be accessing sectors that have not been read in many months or longer.

This is a reason why classical RAID solutions see a big need to step up the number of parity drives or complement the RAID with checksumming and regular checksum scans.

 

Almost all multi-disk failures in the RAID horror tales are either from long-term issues that accumulate and go undetected until a disk is replaced, or from physical events (temperature/power/...) that kill multiple drives within a short interval.
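A minimal sketch of that kind of checksum scan, assuming a simple hypothetical manifest format (one "<hash>  <path>" entry per line); checksumming filesystems and file-integrity plugins do the same job more conveniently:

```python
# Minimal bitrot check: record SHA-256 hashes of files, then re-verify later.
# The manifest format ("<hash>  <path>" per line) is made up for this sketch.
import hashlib, os, sys

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build(root, manifest):
    with open(manifest, "w") as out:
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                out.write(f"{sha256(path)}  {path}\n")

def verify(manifest):
    problems = 0
    with open(manifest) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if not os.path.exists(path) or sha256(path) != digest:
                problems += 1
                print("MISSING or CHANGED:", path)
    print("Done,", problems, "problem(s) found")

# Usage (paths are examples): python bitrot.py build /mnt/user/photos manifest.txt
#                             python bitrot.py verify manifest.txt
if __name__ == "__main__":
    if sys.argv[1] == "build":
        build(sys.argv[2], sys.argv[3])
    else:
        verify(sys.argv[2])
```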

Link to comment
