Seagate’s first shingled hard drives now shipping: 8TB for just $260



Jeez man, you are so combative it is becoming irritating.

 

You are one person who has done one set of tests and produced results (albeit seemingly positive ones). The results look positive and I thank you for doing them and sharing. However, I was referring to this type of drive's long-term suitability as a parity drive, which I don't think is conclusive (how can it be, with just one set of tests as evidence and no one who has had this sort of drive for any considerable time?). That statement wasn't a dig at you or the effort you have put in - it was a reference to a risk I still consider I will be taking. Or do you believe that you have now conclusively proved, with certainty and without risk, that they are suitable as a parity drive in an Unraid environment? No more data or discussion required on the subject - jtown has provided it, all is well!!

 

[yes, sigh]

Link to comment

I certainly agree that one set of results is not enough to really confirm what the long-term results might be with this drive used in the parity role.

 

The tests jtown ran DID show better write speeds than many had anticipated with this technology; but the fact is that no matter how good they are, a non-shingled drive of the same areal density would absolutely be better.  The big unknown is how much the band rewrite requirements might impact the speeds in heavy random write scenarios.

 

I do, however, think that these units would be fine for a backup server - for both the data and parity drives.  For a server like danioj described, which is only going to be turned on once/week or so for a backup, write performance isn't likely to be a significant factor.

 

 

Link to comment

This is going to hit WD hard, as the price point of this new technology blows all WD drives out of the water. Sure, WD Reds etc. are better, but most users don't see this, and to a certain extent neither do I.

 

I expect to see some WD price drops, and I would also expect to see some FUD being spread about these new drives.

Link to comment

This is going to hit WD hard, as the price point of this new technology blows all WD drives out of the water. Sure, WD Reds etc. are better, but most users don't see this, and to a certain extent neither do I.

...

 

Not so sure about that.  Consider current pricing:  At Amazon, the 8TB drive is $315, the 6TB WD Red is $270.

 

That's $45/TB for the WD Reds, $39.38/TB for the Archive units.    I don't think a difference of $6/TB is going to sway a lot of purchasing decisions -- ESPECIALLY if the purchaser understands the difference in the technologies.    And it's certainly not a price difference that will blow anyone out of the water  :)
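For anyone double-checking the math, a quick sketch of the per-TB arithmetic (prices as quoted above):

```python
# Per-TB cost at the prices quoted above.
archive_per_tb = 315 / 8   # 8TB Seagate Archive -> $39.38/TB
wd_red_per_tb  = 270 / 6   # 6TB WD Red -> $45.00/TB
print(f"Archive: ${archive_per_tb:.2f}/TB, WD Red: ${wd_red_per_tb:.2f}/TB, "
      f"difference: ${wd_red_per_tb - archive_per_tb:.2f}/TB")   # ~$5.63/TB
```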

 

While the results of the limited testing that's been done on this thread do indeed show that the write performance isn't as bad as some had thought it might be, it has also shown that it IS bad compared to standard PMR drives.    It's likely "good enough" for the UnRAID case, where writes are significantly throttled by the four-disk-I/Os-per-write requirement;  but for most other applications the significant write penalty isn't likely to be worth a $6/TB savings.
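For reference, the four I/Os per write come from UnRAID's read-modify-write parity update: read old data and old parity, then write new data and new parity. A minimal sketch of the XOR math involved (illustrative only, not UnRAID's actual code):

```python
def updated_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    # P_new = P_old XOR D_old XOR D_new: the two reads feed this computation,
    # then the new data and the new parity are both written back -> 4 disk I/Os.
    return bytes(p ^ d0 ^ d1 for p, d0, d1 in zip(old_parity, old_data, new_data))
```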

 

Will I buy them?  Of course ... my next backup server will almost certainly use 8TB (or larger, depending on when I build it) shingled drives.  NOT because they're $5-6/TB less expensive ... but because they're the largest units I can get so I'll need fewer disks for a given capacity.

 

Link to comment

I expect to see some WD price drops, and I would also expect to see some FUD being spread about these new drives.

 

Backblaze are the kings of FUD, and have pretty much scuppered Seagate already.  WD and HGST are laughing.

 

I'll go for 8TB when HGST/WD start shipping normal 5900rpm units.  I'm still pretty much relying on 4TB drives.

Link to comment

I'm planning to reproduce jtown's test results on my (considerably older) hardware, plus some more tests... but this will take a lot of time.

 

I've got some rotten luck with this "four plus four is eight" little project of mine. First, my first 8TB external Seagate got lost (or stolen) in the mail. NeweggBusiness is sending a replacement, but it won't arrive for a while.

 

I ordered two more, they arrived, I started preclearing - and the server hung during the zeroing stage of the first cycle. I don't know what happened, but I had to press the reset button, since it did not respond at all to IPMI or even to a directly connected keyboard. Then the mandatory parity check... then I started preclearing again, and the drives have just entered the post-read of the first cycle... that's ~55 hours after the start. They are big. And I'm going to do no less than three preclear cycles... lots of time.

Link to comment

... I've got some rotten luck with this "four plus four is eight" little project of mine.

 

What's your "four plus four is eight" project?    Are you using a pair of 4TB drives to create an 8TB RAID-0 parity, coupled with 8TB Archive drives for data?

 

 

... I started preclearing again, and the drives have just entered the post-read of the first cycle... that's ~55 hours after the start. They are big. And I'm going to do no less than three preclear cycles... lots of time.

 

Three cycles for 8TB drives will definitely take a L..O..N..G time.    Let us know how long they take; and also what the times are for the 3 different clearing cycles (the pre- and post-read cycles should be consistent, since they're just reads => but it will be interesting to see if there's any notable variance in the 3 different clearing cycles, which are all writes).

 

 

Link to comment

This is going to hit WD hard, as the price point of this new technology blows all WD drives out of the water. Sure, WD Reds etc. are better, but most users don't see this, and to a certain extent neither do I.

...

 

Not so sure about that.  Consider current pricing:  At Amazon, the 8TB drive is $315, the 6TB WD Red is $270.

 

That's $45/TB for the WD Reds, $39.38/TB for the Archive units.    I don't think a difference of $6/TB is going to sway a lot of purchasing decisions -- ESPECIALLY if the purchaser understands the difference in the technologies.    And it's certainly not a price difference that will blow anyone out of the water  :)

...

Well, one can look at these numbers in a slightly different way: it's a ~15% price difference. In highly competitive markets that's more than enough to blow the competitors out of the water.  :)

 

Besides, NeweggBusiness right now sells the 8TB Seagate externals for $272, which is quite different from $315.  NeweggBusiness has been dancing from $270 all the way up to $300, obviously trying to find the sweet spot, but is now back to $272 - so, apparently, $300 did not fly.

Link to comment

What's your "four plus four is eight" project?    Are you using a pair of 4TB drives to create an 8TB RAID-0 parity, coupled with 8TB Archive drives for data?

Not "using" yet - the 8TB drives are still preclearing, but that's exactly what I'm planning to do.

 

Let us know how long they take; and also what the times are for the 3 different clearing cycles (the pre- and post-read cycles should be consistent, since they're just reads => but it will be interesting to see if there's any notable variance in the 3 different clearing cycles, which are all writes).

Will most definitely do, I'm interested in it myself.

 

Although this - "the pre- and post-read cycles should be consistent, since they're just reads" - I'm not quite sure about. As far as I remember, the post-read is always the longest step; I think it does more than just straightforward reading.

Link to comment

...

Although this - "the pre- and post-read cycles should be consistent, since they're just reads" - I'm not quite sure about. As far as I remember, the post-read is always the longest step; I think it does more than just straightforward reading.

 

I meant they should be consistent across passes (i.e. the pre-reads should all be consistent and the post-reads should be consistent).    The post-reads indeed take FAR longer than any of the other phases (roughly 1/2 of the total time of the pre-clear cycle).

 

Link to comment

This is going to hit WD hard, as the price point of this new technology blows all WD drives out of the water. Sure, WD Reds etc. are better, but most users don't see this, and to a certain extent neither do I.

...

 

Not so sure about that.  Consider current pricing:  At Amazon, the 8TB drive is $315, the 6TB WD Red is $270.

 

That's $45/TB for the WD Reds, $39.38/TB for the Archive units.    I don't think a difference of $6/TB is going to sway a lot of purchasing decisions -- ESPECIALLY if the purchaser understands the difference in the technologies.    And it's certainly not a price difference that will blow anyone out of the water  :)

 

While the results of the limited testing that's been done on this thread do indeed show that the write performance isn't as bad as some had thought it might be, it has also shown that it IS bad compared to standard PMR drives.    It's likely "good enough" for the UnRAID case, where writes are significantly throttled by the four-disk-I/Os-per-write requirement;  but for most other applications the significant write penalty isn't likely to be worth a $6/TB savings.

 

Will I buy them?  Of course ... my next backup server will almost certainly use 8TB (or larger, depending on when I build it) shingled drives.  NOT because they're $5-6/TB less expensive ... but because they're the largest units I can get so I'll need fewer disks for a given capacity.

 

I think most drives are sold to people who look at overall cost and don't really know much about how HDDs work. We are the exception here.

 

All it needs is a Lifehacker-type article saying "80% of the cost vs 90% of the performance, but with confidence", and then all that will matter to most buyers will be cost.

 

As of now, over here, a WD Red 8TB costs 274 (extrapolating from the 6TB price) and an 8TB Seagate is 190. That's a VAST difference.

Link to comment
Three cycles for 8TB drives will definitely take a L..O..N..G time.    Let us know how long they take; and also what the times are for the 3 different clearing cycles (the pre- and post-read cycles should be consistent, since they're just reads => but it will be interesting to see if there's any notable variance in the 3 different clearing cycles, which are all writes).

 

A little over 67 hours per complete preclear operation for me.
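A rough back-of-the-envelope on that figure, assuming each preclear cycle reads or writes the full 8TB three times (pre-read, zero, post-read):

```python
# Implied average throughput for one ~67-hour preclear cycle, assuming
# three full-capacity passes (pre-read, zero, post-read) over an 8TB drive.
drive_bytes = 8e12
passes = 3
hours = 67
mb_per_s = drive_bytes * passes / (hours * 3600) / 1e6
print(f"~{mb_per_s:.0f} MB/s average across the whole cycle")  # ~100 MB/s
```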

Link to comment

...

Although this - "the pre- and post-read cycles should be consistent, since they're just reads" - I'm not quite sure about. As far as I remember, the post-read is always the longest step; I think it does more than just straightforward reading.

 

I meant they should be consistent across passes (i.e. the pre-reads should all be consistent and the post-reads should be consistent).    The post-reads indeed take FAR longer than any of the other phases (roughly 1/2 of the total time of the pre-clear cycle).

 

Not if you use the faster preclear script. The post-read speeds match the pre-read speeds.

Link to comment

...

Although this - "the pre- and post-read cycles should be consistent, since they're just reads" - I'm not quite sure about. As far as I remember, the post-read is always the longest step; I think it does more than just straightforward reading.

 

I meant they should be consistent across passes (i.e. the pre-reads should all be consistent and the post-reads should be consistent).    The post-reads indeed take FAR longer than any of the other phases (roughly 1/2 of the total time of the pre-clear cycle).

 

Ah... sorry about that. I hadn't even had my morning coffee yet at the time, so my brain was kinda... stalled.  :)

 

Now that I've got my coffee, I realize that the most interesting numbers would be the time spent (average speed) in the writing step across the three preclear cycles. Reading speed should not change much from cycle to cycle, but writing speed might degrade. The degree of degradation is what we are after.
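If anyone wants to put a number on it, a tiny helper along these lines would do; the cycle times in the example are made-up placeholders:

```python
# Hypothetical helper: given the zeroing-phase duration (hours) of each
# preclear cycle, report write-speed degradation relative to the first cycle.
DRIVE_BYTES = 8e12  # 8TB

def write_mb_per_s(hours):
    return DRIVE_BYTES / (hours * 3600) / 1e6

def degradation_pct(cycle_hours):
    base = write_mb_per_s(cycle_hours[0])
    return [(base - write_mb_per_s(h)) / base * 100 for h in cycle_hours]

print(degradation_pct([15.0, 15.5, 16.2]))  # -> [0.0, ~3.2, ~7.4] (made-up)
```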

 

Heh... And I also realized that with my little Addonics RAID card I could do not only "four plus four is eight" but also "two by four is eight" - i.e., set up four 2TB drives as a RAID-0 parity with 8TB data drives... one more use for my going-going-almost-gone 2TB drives  ;D

Link to comment
Now that I've got my coffee, I realize that the most interesting numbers would be the time spent (average speed) in the writing step across the three preclear cycles. Reading speed should not change much from cycle to cycle, but writing speed might degrade. The degree of degradation is what we are after.

 

Heh... And I also realized that with my little Addonics RAID card I could do not only "four plus four is eight" but also "two by four is eight" - i.e., set up four 2TB drives as a RAID-0 parity with 8TB data drives... one more use for my going-going-almost-gone 2TB drives  ;D

 

I didn't notice a drop in speed between my virgin write and subsequent writes over the same section of the drive or after preclearing.  It's possible that they set the drive up to always write the entire shingle in order to present a consistent write speed and avoid complaints about the drive slowing over time.  Totally guessing and have no data or info to back it up.

 

As for repurposing old 2TB drives, I do that, but my oldest ones are approaching 5 years of age, so I don't use them for anything critical.  :)

Link to comment

I didn't notice a drop in speed between my virgin write and subsequent writes over the same section of the drive or after preclearing.  It's possible that they set the drive up to always write the entire shingle in order to present a consistent write speed and avoid complaints about the drive slowing over time.  Totally guessing and have no data or info to back it up.

Intuitively, I don't expect to discover noticeable write speed degradation... but we will see.

 

Although... isn't it funny? It looks like we all suspect that Seagate has somehow cheated us, and we are desperately trying to catch them red-handed  :D

 

As for repurposing old 2TB drives, I do that, but my oldest ones are approaching 5 years of age, so I don't use them for anything critical.  :)

Well, I consider the parity drive to be the least "critical". Besides, the server I'm doing all this on is not exactly a "test" server anymore, but everything on it is backed up.

Link to comment

once/year when I cycle them all through a spare machine to check all of the MD5's.

 

 

What happens if they fail the MD5 check? Have you ever come across this case?

 

I've never had one fail, but if it did, I'd simply replace the drive and copy all of the files it had to a new backup drive from my media server.  I have complete directory listings of every backup drive (as well as a complete listing of the server contents), so it'd be very easy to do that.    In fact, if only one (or a few) files failed the MD5, I'd probably just copy everything except those from the "bad" drive to a new one; then copy those that had failed the MD5 from the server -- and then, of course, run a MD5 test on the new backup drive.

 

I run full MD5 verifications on the server more frequently -- about 3 times/year  (just did that a couple weeks ago in fact).
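For anyone who wants to script the same kind of check, here's a minimal sketch that verifies files against an md5sum-style manifest (hash, two spaces, relative path); the function names are my own, not a real tool:

```python
import hashlib
import os

def md5_of(path, bufsize=1 << 20):
    # Stream the file so multi-GB media files don't need to fit in RAM.
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path, root):
    # Returns the files whose current MD5 no longer matches the manifest.
    failed = []
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, rel = line.rstrip("\n").split("  ", 1)
            if md5_of(os.path.join(root, rel)) != expected:
                failed.append(rel)
    return failed
```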

 

Link to comment

... It looks like we all suspect that Seagate has somehow cheated us, and we are desperately trying to catch them red-handed  :D

 

I don't think anyone suspects Seagate has "cheated" at all ... we just want to see how well they've mitigated the write penalties for the shingled drives.    Clearly they do NOT perform as well as non-shingled units ... but there ARE some mitigations that can be implemented in the drive's firmware (these are outlined well in some of the papers I noted much earlier in this thread) => and the actual tests of the units to date have shown that they've done a pretty good job of this.

 

Seagate has made NO promises re: the performance of these units -- in fact, they clearly note these are "Archive Drives" and are NOT designed for primary storage.    And their animations of the technology involved clearly show the bands and outline the need to rewrite them during write operations.  That's hardly "cheating".

 

Link to comment

That's one of the many mitigation techniques I've seen discussed -- and it certainly makes sense.  A 25GB cache should be more than ample for MOST writes to the drive ... and as long as there was ample "catch up" time before the next set of writes this would almost entirely mask the shingled overhead delays.

 

I suspect the firmware also recognizes when an entire band has been written (such as when doing a pre-clear), and can thus avoid the re-read/re-write cycle for those bands.    Coupled with the cache, this likely almost completely eliminates write delays for this specific operation ... which would account for the performance that's been seen in the testing here so far.

 

Sounds like you'd need to do a significant number of random writes (well over 25GB) to determine the actual amount of slow-down that the band rewrites will cause.    But in practice, for most UnRAID uses, the 25GB cache will probably entirely mask this overhead.
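To get a feel for the scale involved, here's a toy model of what happens once that cache is exhausted; every number in it is an assumption for illustration, not a Seagate spec:

```python
# Toy model: random small writes absorb into a persistent cache; once it's
# full, assume each write forces a full band read-modify-write.
CACHE_GB = 25     # reported persistent-cache size
BAND_MB  = 256    # assumed shingled-band size
WRITE_KB = 64     # size of each random write
SEQ_MBPS = 150    # assumed sequential throughput

writes_in_cache = CACHE_GB * 1024 * 1024 // WRITE_KB
secs_per_band_rewrite = 2 * BAND_MB / SEQ_MBPS   # read band + rewrite band
print(f"~{writes_in_cache:,} random {WRITE_KB}KB writes fit in the cache;")
print(f"after that, each write could cost ~{secs_per_band_rewrite:.1f}s")
```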

 

Link to comment

I expect the write performance to keep up with sequential writes, but random writes will dramatically slow down when the write cache fills up. So a heavily fragmented disk may start to show slow performance. It would not make a good cache disk! And parity performance may truly suck when doing multiple writes to different drives simultaneously.

Link to comment
