BackBlaze Reports


I wonder if Backblaze just buys the drives and sticks them straight into service; that might explain the high failure rates somewhat. Certainly I've had way more disks arrive DOA than I've had fail in use. Excluding misuse :)


My understanding is that the economics of testing drives just aren't there in the business world. Redundancy and backups are what protect the data; testing is expensive.


I wonder if Backblaze just buys the drives and sticks them straight into service; that might explain the high failure rates somewhat. Certainly I've had way more disks arrive DOA than I've had fail in use. Excluding misuse :)

 

So you want us to ignore drives which fail within XX hours? A failure is a failure. But I think you could get most drive manufacturers to join you; anything for lower reported failure rates.


I wonder if Backblaze just buys the drives and sticks them straight into service; that might explain the high failure rates somewhat. Certainly I've had way more disks arrive DOA than I've had fail in use. Excluding misuse :)

 

I'm sure that's exactly what they do. With fault-tolerant systems it's not that big a deal: if a drive fails, new or not, you just replace it. But like you and many others here, I prefer to identify drives with infant-mortality issues BEFORE I actually use them, and my experience mirrors yours: very few drives have failed in use after passing my rigorous initial testing, but quite a few (though not a high percentage) didn't make it through the initial tests I subject them to.
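For what it's worth, that initial testing usually takes the form of a destructive full-surface write/read pass (badblocks, or a preclear script on unRAID). Here is a toy Python sketch of the write-then-verify idea, pointed at a scratch file rather than a raw device; the path and size are illustrative only:

```python
import os

def write_verify_pass(path: str, size: int, pattern: bytes = b"\xaa", chunk: int = 1 << 20) -> bool:
    """Write a fixed byte pattern across the target, then read it back and verify.

    This mimics one pass of a badblocks-style burn-in. Run it against a
    scratch file; pointing it at a raw device would destroy its contents.
    """
    with open(path, "wb") as f:                 # write phase
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(pattern * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())                    # force data out of the page cache
    with open(path, "rb") as f:                 # verify phase
        while True:
            data = f.read(chunk)
            if not data:
                break
            if data.count(pattern[0]) != len(data):
                return False                    # mismatch: suspect media
    return True

# Hypothetical usage against a 1 MiB scratch file:
print(write_verify_pass("/tmp/burnin_scratch.bin", 1024 * 1024))  # True
```

Real badblocks runs four patterns (0xaa, 0x55, 0xff, 0x00); repeating the pass with different patterns catches stuck bits that a single pattern can miss.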


No, they run extensive tests, and have documented what they do.

 

It used to take days, but I think it's faster now, since they switched to faster HBAs.


So you want us to ignore drives which fail within XX hours? A failure is a failure. But I think you could get most drive manufacturers to join you; anything for lower reported failure rates.

 

I'm just not that worried about DOA drives, since they go straight back to the store. A drive that fails after six months, when it's full of data, is much more of a pain to me.


Checked Newegg Canada after reading this and got two 4TB HGST NAS drives for $179.99 each on sale. I needed a new parity drive and, well, I could always use some more storage :D

 

I'll sleep better after seeing the failure rates on the 3TB Reds.


Very interesting data correlating specific SMART attributes to failure.

 

HERE
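For anyone who wants to act on that data: Backblaze's write-ups single out a handful of SMART attributes (reallocated, pending and uncorrectable sector counts among them) as the strongest failure predictors. Below is a minimal Python sketch of that kind of check; the attribute IDs are the conventional ATA ones, and the exact set is my recollection of the report, so treat it as an assumption:

```python
# SMART attributes reported as the strongest failure predictors
# (set recalled from the Backblaze analysis; verify against the report).
FAILURE_INDICATORS = {
    5: "Reallocated_Sector_Ct",
    187: "Reported_Uncorrect",
    188: "Command_Timeout",
    197: "Current_Pending_Sector",
    198: "Offline_Uncorrectable",
}

def at_risk(raw_values: dict) -> list:
    """Given {attribute_id: raw_value} as read from smartctl, return the
    names of failure-indicator attributes with a nonzero raw value."""
    return [name for attr_id, name in FAILURE_INDICATORS.items()
            if raw_values.get(attr_id, 0) > 0]

# Hypothetical raw values for one drive:
print(at_risk({5: 8, 187: 0, 197: 2}))  # ['Reallocated_Sector_Ct', 'Current_Pending_Sector']
```

Any nonzero value on these attributes is worth watching; Backblaze's point was that drives showing them fail at a much higher rate than drives that don't.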


HERE is a mini-Backblaze report on Toshiba 4TB and 5TB drives.

 

They look pretty good, especially at the current pricing.

 

Note that there is a huge jump in price between 5TB and 6TB drives, and most 5TB drives are slower-RPM Seagate SMR (shingled) models. That's not necessarily bad, as unRAID seems to do fine with shingled drives, but I still prefer 7200 RPM PMR drives if the price is the same or very close. Toshiba appears to be the only game in town with these specs, at about the same price as the 5TB SMRs.

 

Toshiba received Hitachi drive tech as part of WD's acquisition of Hitachi GST back in 2012 - so these have good roots!

I still prefer 7200 RPM PMR drives if the price is the same or very close. Toshiba appears to be the only game in town with these specs, at about the same price as the 5TB SMRs.

 

 

HGST are still selling 5TB 7200rpm Deskstar NAS drives.  Possibly 6TB as well?  WD have a 6TB Enterprise drive now as well (AE maybe, I don't remember?)


I still prefer 7200 RPM PMR drives if the price is the same or very close. Toshiba appears to be the only game in town with these specs, at about the same price as the 5TB SMRs.

 

 

HGST are still selling 5TB 7200rpm Deskstar NAS drives.  Possibly 6TB as well? 

I have a N54L unRAID server composed of HGST NAS 6TB drives.


BackBlaze 2014 Report

 

Highlights:

 

HGST (previously Hitachi) 2T-4T excellent (0.7% - 1.4% annualized failure rate)

 

Seagate 4T decent - 3% AFR

 

WD Red 3T bad - 8.8% AFR

 

Seagate 1.5T-3T varies from bad to sucky by model - 6.7% - 24.9% AFR

 

Consumer drives slightly more reliable than enterprise drives, at more than 2x the price.
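For reference, Backblaze's AFR figures are computed from drive-days, not raw drive counts, so models that were only in service part of the year are still comparable. A quick sketch of the calculation (the example counts below are made up, not Backblaze's actual numbers):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Backblaze-style AFR: failures per drive-year of operation, as a percent.

    AFR = failures / (drive_days / 365) * 100
    """
    return failures / (drive_days / 365.0) * 100.0

# Made-up example: 400 drives in service a full year with 12 failures
# gives a 3% AFR, roughly the Seagate 4T figure quoted above.
print(round(annualized_failure_rate(12, 400 * 365), 1))  # 3.0
```

Counting drive-days is what lets one failed drive out of a small, recently deployed pool show up as a scary-looking AFR, which is worth remembering for the low-population models in these reports.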

 

I had the same 1.5TB drive they used, and it ran very hot, over 60C. That could be the problem: the motor was struggling with the five platters, and after a few years the area around the motor housing turned grey. The drive was very fast, though. I also had the 1TB model with three platters, and that was a very good drive: it ran cool with the very same fast transfer speeds. So I wonder if they just couldn't get a motor rated for the load. DC motors can cope if you pump more current through them, but the result is higher heat. The 1TB and 1.5TB were identical drives except for the number of platters, with the same circuit board and housing.

 

The 3TB drives had problems with dust and water damage; I've seen pictures of platters showing dried water on them, among other things, so this is most likely a production problem. Firmware and such were only added burdens. Has anyone dealt with the 2TB drives where those without the deeper notch were 20% faster? Those used three platters, and it depended on which country they were produced in. The two-platter drives were the fastest drives at the time and ran very cool as well. The 3TB was supposedly based on the same design with an extra platter added. But we know both the 2TB and 3TB also had variants which performed very poorly, while those who got the proper versions are still very happy, since they were very fast, cool-running drives. It's only people running RAID setups, where one drive fails and has to be replaced, who have problems if they can't find an identical drive; going by model and part numbers doesn't help, since drives with identical model/part numbers can perform differently. I don't think enterprise drives have this problem.

 

So even if Backblaze had problems with a particular drive, I loved that same drive: at the time it was very fast, and 50% extra storage was worth the extra heat, so I didn't have to buy a very expensive small-capacity SSD. More than a decade ago, the faster the drive, the more heat it generated. When we used Maxtor 512MB drives you couldn't touch them, and they failed in under two years, just after the one-year warranty. We pretty much guessed that from the short warranty period compared to the three and five years for other drives, and yet we bought a ton of them because of the speed. I suppose it's like people buying premium SSDs now instead of standard SSDs: if a drive is fast enough, even lower reliability is still worth it on a lot of occasions.

 



I had the same 1.5TB drive they used and it used to run very hot, over 60C. That could be the problem. [...]

 

Sounds like you had cooling problems. Regardless of manufacturer, drives need cooling; running drives at 60C will shorten their lives. Improved airflow will help.

 

The failures experienced by BackBlaze are NOT temperature related, but due to the well-documented firmware bugs:

http://www.computerworld.com/article/2530543/data-center/complaints-flood-seagate-over-hard-drive-problems.html

http://www.tomshardware.com/news/seagate-hard-drive-firmware-bricked,6889.html

 

 


Thoughts? I use mostly desktop drives in my servers, and lately I've been having disk failures with some regularity. My 3TB and 4TB WD Green/Blue drives especially are failing at a rate of at least one a month; just this weekend I had a double disk failure on one of my servers, on two fairly recent 4TB WD Greens with very low power-on hours. I'm starting to think these drives don't handle vibration well. I also have a lot of Samsung, Toshiba, 2TB WD Green and a few Seagate desktop drives, and those have been fairly reliable.

 

 

Edited by johnnie.black


I have lost a number of WD Blue and Seagate desktop drives over the years - both 2.5" and 3.5" models.

 

I have often used two 2.5" drives in a mirror for the system, two 2.5" drives in a mirror for frequently accessed data, and then a number of 3.5" drives - large at the time of installation - for mass storage.

 

I'm not sure if it's the 24x7 use or if it's vibrations.

 

I have a number of WD Greens, but they have normally only been used together with one SSD or a 2.5" system disk - no failures yet. And still no single-disk 2.5" failure: every 2.5" disk that has failed has been in a mirror pair, and the 2.5" drives have normally been mounted without any rubber grommets.

 

All manufacturers claim only NAS or enterprise drives should be used in multi-disk installations. I have a three-digit number of disks online, but that is still too few to get really good statistics from. Still, I get the feeling that the claims about better vibration handling in the NAS disks really are an important factor - both the improved mounting of the platters and the vibration sensors that make the disk pause writes when it detects vibration.

 

Somehow, it would be good to have a normalized test bench, where we can measure amount of disk vibrations using different disks and using different disk cages and/or cases.


I do agree with the video above in the sense that it is good to have high-volume, long-term data on drives in an environment as close as possible to the one where you will use them. Unfortunately we can't arrange such a study. But even if BackBlaze is not exactly your use case, it does provide a laboratory that exposes all drives to a fairly consistent "average" usage pattern over time. Would you expect enterprise drives to do better? Yes, you might. Desktop drives worse? Yes, you might. But what if you find some desktop drives that perform as well as, or better than, enterprise drives? Would the BackBlaze study help you find those gems in the market? YES. So while you might say Seagate should not be penalized for having desktop drives that are pretty crappy for enterprise use, other manufacturers might be complimented for selling a product that is over-engineered and works well for both.

 

And thinking about BackBlaze's use case - are we really that different? Our media drives tend to fill up rather quickly. Once full, deletes are rare, but they do occur, and drives are occasionally repurposed and refilled. BackBlaze fills drives rather quickly, then applies lower-volume updates. They have client turnover, so deleting one customer's data and replacing it with another's happens at some level. Our disks are often spun down when not being accessed; BackBlaze's data is mostly backups that may sit unaccessed for long periods or forever. We run parity checks that BackBlaze probably doesn't. But overall I really don't think we are so different. For the video above, maybe someone who plans to install Windows on a 3T spinner in a gaming case has a very different use case, but for us unRAIDers I think it's pretty similar.

 

I have had the best luck with Hitachi and HGST drives (and maybe I'll throw in the Toshibas built on the tech acquired from Hitachi).

 

The Seagates during the 2T-4T years were the worst of the worst for me. I lost several and swore off of them.

 

Recent 8T WD RED and Seagate SMR purchases are not old enough to comment. But so far so good.

 

I still think the BackBlaze data is valuable if used properly. It would have you buying HGST and steering clear of Seagates - very consistent with my personal experience. If an idiot savant comes up with the right answer, you have to give him credit, even if you don't agree with or understand his method! :)

 

 

12 hours ago, pwm said:

I'm not sure if it's the 24x7 use or if it's vibrations.

 

In my case it can't be 24x7 use, as these servers are archive only and are only powered on for a few hours every week.

 

I always liked the WD Green/Blue/Red drives, as they are low power and very low noise, and I still have a lot of them without issues: some older 2TB drives, plus 4TB and 6TB drives on servers with fewer disks. But the ones in the biggest servers, especially the 3TB and 4TB drives, are failing at an alarming rate, and the problem always starts the same way: they start getting slow sectors (I notice low performance on transfers, since I always use turbo write), the Raw Read Error Rate SMART attribute starts increasing, and after just a few more hours of use they start having read errors.
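For anyone wanting to catch that same early warning, the raw value of SMART attribute 1 can be scraped from `smartctl -A` output and logged over time; a climbing value is the signal described above. A small sketch - the sample output below is a made-up excerpt, not from a real drive:

```python
import re

def raw_read_error_rate(smartctl_output: str) -> int:
    """Extract the raw value of SMART attribute 1 (Raw_Read_Error_Rate)
    from the text output of `smartctl -A`. Poll this periodically and
    alert when the value trends upward."""
    for line in smartctl_output.splitlines():
        # Attribute rows end with the raw value in the last column.
        m = re.match(r"\s*1\s+Raw_Read_Error_Rate\s+.*\s(\d+)\s*$", line)
        if m:
            return int(m.group(1))
    raise ValueError("attribute 1 not found in smartctl output")

# Hypothetical excerpt of `smartctl -A /dev/sdb` output:
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   199   197   051    Pre-fail  Always       -       143
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
"""
print(raw_read_error_rate(SAMPLE))  # 143
```

One caveat: some vendors (Seagate in particular) encode this attribute's raw value in a way that makes large numbers normal, so it's the trend on a given drive that matters, not the absolute number.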

 

Since I need to replace those last two disks and I've been having good luck with Toshiba disks (they are also very competitively priced), I'm going to do a dual parity swap and upgrade to larger disks. I'll be using one X300 desktop drive and one N300 NAS drive, and when I need to upgrade or replace more disks in this server I'll do them two by two, one of each, so after a few years maybe I'll be able to tell whether one is really better than the other for larger server use.

