Seagate’s first shingled hard drives now shipping: 8TB for just $260



Oh yes thanks.

 

This is great news. How much power do they consume spun down, spun up, during reads/writes, etc.?

 

And aren't you worried about the reliability of Seagate?

 

I haven't done any tests on power consumption - BUT - I am happy to do them if someone shows me how they would like the test to be run and how they want the data to be presented!

 

As for the reliability of Seagate (and please, ALL remember this is my opinion based ONLY on my own observations over the years in Enterprise and personal environments) - I don't really subscribe to the whole "Seagate drives always fail and are generally less reliable" idea. Generally, my observations have led me to conclude that, in the main, those who have had drives fail (Seagate or otherwise) inside the warranty period tend to be those who don't adequately "work out" their drives before deploying them, OR who operate them outside the manufacturer's operating conditions (e.g. temperature)!

 

Even I didn't work out my past drives like I've given these bad boys a workout (that's down to me learning some best practices from the likes of Gaz and Brian). I know that doesn't say anything about their future reliability (albeit these newly learned methods for ensuring a drive passes the infant-mortality period have been applied), BUT then again they come with a 3-year warranty. On top of Unraid's fault tolerance I have a COMPLETE backup of my data. ALL drives WILL fail - it's just a matter of time. If they do, I'll replace them; if they're in warranty I'll RMA them, and if not, I've had my money's worth! My data is safe, so I am good.

 

I'm quite happy. :) 8)

 

I work in the IT field and support/manage hundreds of desktops/laptops. I've been with the same company for over 15 years, and the most-failed hard drive brand has been Seagate, with the WD Blue a very close second. Even the WD Blue 2.5" models fail all the time. I'm not a big Seagate fan because I've seen the real-world results, but as time moves on, so do technology and the way companies make their products. I'm very hesitant to buy anything Seagate because of what I've seen at work. Mind you, these are just workstation drives - low reads/writes, low temps - and they still fail.

 

I guess we will all need to wait and see what happens. One good sign is that Seagate classifies these new drives as Enterprise. Only time will tell. Now to finish reading this thread! I hope it has a good ending. :)

 

 


I don't really subscribe to the whole "Seagate drives always fail and are generally less reliable" idea. Generally, my observations have led me to conclude that, in the main, those who have had drives fail (Seagate or otherwise) inside the warranty period tend to be those who don't adequately "work out" their drives before deploying them, OR who operate them outside the manufacturer's operating conditions (e.g. temperature)!
Everyone's personal experiences are different. I had two of two Seagate 2TB drives fail on me in scary ways (and that was after returning one of the two for replacement due to problems during its preclear). They were the worst of the worst if you look at Backblaze's data. I bought 4 apparently white-listed WD drives at a super price; every one of them failed during preclear. All drives are not equal, and it is smart to do research before investing hundreds or even thousands.

 

In contrast, I have a couple of Seagate 4TB drives in my array that got good marks on Backblaze and were on a good sale, and they have worked quite well so far. My older drives (which are now in my backup array) were mostly 1T WDs and 2T Hitachis, with a few 2T WDs. The 1T WDs are running strong (only 1 failed), all of the Hitachis are running strong, and 2 of the 3 2T WDs failed. All of these drives are well over 5 years old.

 

We tend to value drives based on $/T, but what we may be more interested in is $/T/yr. Say I have Hitachis that last 7 years and cost $40/T, and Seagates that last 4 years and cost $35/T. On an annual basis, the Hitachis cost $5.71/T/yr and the Seagates cost $8.75/T/yr. I know there are no guarantees, but having gone through several drive replacement cycles, I certainly consider longevity!
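
If you want to plug in your own numbers, here's a minimal sketch of that $/T/yr arithmetic in Python (the prices and lifetimes are the hypothetical figures above, not measured data):

def cost_per_tb_year(price_per_tb, expected_life_years):
    # Annualized cost: dollars paid per terabyte per year of service
    return price_per_tb / expected_life_years

print(round(cost_per_tb_year(40, 7), 2))   # Hitachi example above: 5.71
print(round(cost_per_tb_year(35, 4), 2))   # Seagate example above: 8.75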

 

IMO, drive brand AND MODEL should influence buying decisions, and if you want your drives to live a long life, HGST drives are consistently at the top of the reliability scores. The Seagates are looking promising, and I am considering investing, but for now I have enough space and would only jump at a great price.

 

Bottom line, it's a gamble whatever you buy, but the odds are better with some drives than with others.

 

Now I have 3 HGSTs in my array and I love them. Quiet, fast, and reliable, with a decent 3-year warranty. I couldn't keep affording the WD Black's 5-year warranty - too expensive. The HGSTs seem to be great. They have a very bare-bones site - you can't even sign up or anything, but you can RMA a drive if you have to.

 


I work in the IT field and support/manage hundreds of desktops/laptops. I've been with the same company for over 15 years, and the most-failed hard drive brand has been Seagate, with the WD Blue a very close second. Even the WD Blue 2.5" models fail all the time. I'm not a big Seagate fan because I've seen the real-world results, but as time moves on, so do technology and the way companies make their products. I'm very hesitant to buy anything Seagate because of what I've seen at work. Mind you, these are just workstation drives - low reads/writes, low temps - and they still fail.

 

I guess we will all need to wait and see what happens. One good sign is that Seagate classifies these new drives as Enterprise. Only time will tell. Now to finish reading this thread! I hope it has a good ending. :)

 

Do you have data on the total drive population, or are you just looking at the failed drives?

 

In 15 years the list of drive providers has changed, but Seagate has always been there.

 

I've worked in the storage field for several decades. I have stacks of several hundred failed HGST drives in the room with me right now - that data point is meaningless. It's like looking at the old tires out back of the tire store: you'll always find dead Seagate drives.

 

Last year it was over 14,000 failures of a different storage component (not a disk drive). I did prove a manufacturing defect on specific lots and got relief (that manufacturer has since left the business). But just inventorying the failed components is not enough - you need the total population to turn a pile of dead drives into a failure rate.

 


Agree ... I've been buying hard drives at home for over 35 years, and at work for over 45 => and Seagate has been around a LONG time. Independent of their failure rate (which varies by drive and manufacturing facility ... they've had several ups and downs over the years), they've always had very good customer service ... it's very easy to get drives replaced when they do fail. Like virtually all of the other manufacturers, many of their failures are "infant mortality" => i.e. if you thoroughly test the drives when you get them, and they pass the tests, then they'll likely last a long time.

 


I don't know, but this looks like more than infant mortality. This chart rings true - it matches my personal experience.

 

Seagate has improved with their 4T drives (at least), where their failure rates are ONLY double or triple HGST's, instead of 10-12x!

 

[chart: blog-fail-drives-manufactureX.jpg - Backblaze drive failure rates by manufacturer]


Do you have data on the total drive population, or are you just looking at the failed drives?

Sorry, no hard data, since we never kept track. Large companies like mine haven't cared to spend the money on monitoring what goes bad at that level - it's just an overall observation from the past years. It could be, like you said, that Seagate and WD have been around the longest, and that's why I see them having the most failed drives. There really aren't that many major HD manufacturers that Dell would use in their systems; I think 5+ years ago it was really only WD or Seagate anyway. I remember I had a bunch of those IBM Deskstars, but I think that's where HGST came from. Hard to keep up.

 


So I guess maybe my memory serves me correctly on this.

 


You got a lot of 1.5TB drives in desktops?


No, not really. I was meaning my overall opinion could be true - that Seagate may have a higher failure rate than most. After seeing piles of dead 80 thru 200GB Seagate drives, maybe it really is a fact and not just an opinion. I'm sure manufacturers are competing against each other, so reports can be doctored.

 


If that chart included smaller drives, you would see a huge spike for HGST with the 500GB and 750GB models.

 

Backblaze has been called out on the supporting data, as the drives were questionably purchased and handled. Backblaze has since improved their material handling significantly, and the rewards of that effort show.


The interesting question to ask after reviewing these charts ... why didn't Backblaze have a large enough sample of Seagate OR WD 2T drives to make the chart?

 

We know Backblaze uses an economics formula of load speed, cost, and reliability to drive their purchases.

 

Seems both Seagate and WD missed the cut on 2T.

 

This also jibes with my experience. Both my Seagate 2Ts were awful and are long dead (one failed under warranty and was replaced, so that was really 3 deaths). And only one of my 2 WD drives is still in service (and that was after one of them failed and was replaced under warranty - so really 2 of 3 died; I'm not betting the ranch on the last one!). In contrast, I have 8 or 9 HGSTs, and not a sign of failure on any of them. [Note: "died" in this context means they developed bad sectors that grew and grew and were pulled from service - I don't push any drive to complete failure in my servers.]

 

The good news is the Seagate 4Ts are much better. No news yet on 6T reliability, but hopefully it is at least similar to the 4Ts. The 8Ts seem to be doing well in tests here, so I am encouraged. But anyone who believes a drive is a drive is a drive - that they all have similar reliability characteristics - is fooling themselves.

 

Choose wisely!


I am preclearing 6 drives at once. I am at cycle 1 of 3.

 

Currently it's at step 2 of 10; 1.6TB of zeroes has been written, the speed is 56.1MB/s, and it has been 24 hours total time so far.

 

56.1MB/s sounds very low, and at that rate completing step 2 will take a further 31 hours.

 

Is there something wrong here?
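
As a sanity check on that 31-hour figure, a minimal sketch in Python, assuming the 8TB capacity from this thread and the numbers reported above:

remaining_tb = 8.0 - 1.6                # capacity still to zero in step 2
seconds = remaining_tb * 1e6 / 56.1     # TB -> MB, then divide by MB/s
print(round(seconds / 3600, 1))         # ~31.7 hours to finish the zeroing pass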


I've not used these 8TB drives, but my rule of thumb is that 1 cycle takes about 10 hours per TB, so 80 hours for one cycle on an 8TB drive. Maybe there are some definitive results further up the thread.
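
For anyone estimating their own runs, a minimal sketch of that rule of thumb in Python (the 10 hours/TB constant is this rough estimate, not a measured figure):

def preclear_hours(capacity_tb, cycles=1, hours_per_tb=10):
    # Rough total preclear time for the given number of full cycles
    return capacity_tb * hours_per_tb * cycles

print(preclear_hours(8))             # ~80 hours for one cycle on an 8TB drive
print(preclear_hours(8, cycles=3))   # ~240 hours for the full 3-cycle run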

Generally, you can grab the drive's SMART information and look at the following attribute:

Extended self-test routine
recommended polling time:     ( 950) minutes.

 

This is how long the firmware expects a full pass through the drive to take, unencumbered by other operations.

i.e. at least 15:50 hours.

 

We've seen that the post-read takes extra time, so I would expect a pre-read to take 15:50 + overhead, a zeroing pass 15:50 + overhead, and a post-read 15:50 + overhead.

 

Other examples:

== Last Cycle's Pre Read Time  : 19:51:11 (111 MB/s)
== Last Cycle's Zeroing time   : 16:11:00 (137 MB/s)

 

Any other information being transferred on the bus to the controller and to 'other' drives can also have an impact.
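
If you want to pull that attribute programmatically, here's a minimal sketch in Python (assuming smartmontools is installed, you have root privileges, and /dev/sdb is a hypothetical target device):

import re
import subprocess

def extended_test_minutes(device):
    # Run smartctl and scrape the extended self-test polling time, e.g.
    #   Extended self-test routine
    #   recommended polling time:     ( 950) minutes.
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    m = re.search(r"Extended self-test routine\s*\n\s*"
                  r"recommended polling time:\s*\(\s*(\d+)\)", out)
    return int(m.group(1)) if m else None

minutes = extended_test_minutes("/dev/sdb")
if minutes:
    # 950 minutes prints as 15:50 - the expected unencumbered pass time
    print(f"Expected full pass: {minutes // 60}:{minutes % 60:02d}")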


6 drives at once -

Could the way the drives are connected create a bottleneck?


2 of the drives are connected to the SAS2LP and the other 4 are connected to the mobo.

So the SATA side shouldn't be a bottleneck... strange. I did preclear the 8TB Archives via USB 3.0 card, which is supposedly slower, and speeds were ~80 MB/s for pre-read, ~75 MB/s for zeroing, and ~40 MB/s for post-read...

 

Edited: corrected the link and the numbers (previously mistakenly reported the 4TB drive, not 8TB)

 


Your controllers are fine, so this isn't a SATA bandwidth issue. I suspect this is simply due to running 6 at a time => you're either running into memory limits or have a CPU overhead issue. You might want to stop a couple of the pre-clears and see if the others then speed up.

 

 


Getting slower!!

Don't panic :) This is normal - I mean, not the overall speed you're experiencing, just the effect of getting slower while progressing from the outer (fastest) to the inner (slowest) cylinders on the drive.


I will let the preclear run as it's taken this long already. Maybe it will speed up later?

 

It should slow down gradually until it finishes step 1; then speed up as it starts step 2 on the outer cylinders -- again getting a bit slower as it moves inward; and then the same thing will happen with step 3.

 

It's not likely it will speed up to a significantly higher speed => whatever is causing it to run slow (whether it's memory allocation or simply computational overhead) isn't going to change as long as there are 6 simultaneous pre-clears running.
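
For what it's worth, the outer-to-inner falloff is also why the average over a full pass sits well below the headline speed. A minimal sketch in Python, with the 190/90 MB/s endpoints as illustrative assumptions rather than measured figures for these drives:

def full_pass_hours(capacity_tb, outer_mbps, inner_mbps):
    # Assume throughput falls roughly linearly from the outer to the
    # inner cylinders, so the average sits midway between the endpoints
    avg_mbps = (outer_mbps + inner_mbps) / 2
    return capacity_tb * 1e6 / avg_mbps / 3600  # TB -> MB, seconds -> hours

print(round(full_pass_hours(8, 190, 90), 1))  # ~15.9 hours per unencumbered pass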

 

