Seagate’s first shingled hard drives now shipping: 8TB for just $260



Very interesting.  I've just skimmed it but it shows significant write penalties once that persistent cache is full.  Looks like they have a typo regarding the cache, calling it 20 MB instead of 20 GB.  The presentation/paper I linked to earlier says the 8 TB drive has a 25 GB cache.  Since they're both guessing the cache size based on observation, 5 gigs seems like a reasonable margin of error.

Link to comment

One point worth considering is recovery time.

The article mentioned that during recovery of a RAID 1 setup the drive's throughput was in the 30MB/s range.

 

This could have been skewed due to random read load on the source drive, yet it still left me wondering about recovery time in an unRAID array.
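For a rough sense of what that would mean at this capacity, here's a quick back-of-the-envelope estimate in Python (it simply assumes the whole 8TB rebuild held that ~30MB/s rate, which is only a guess):

# hypothetical estimate: full 8TB rebuild at the ~30MB/s observed in the RAID 1 recovery
capacity_bytes = 8e12            # 8 TB, decimal as drives are marketed
rate_bytes_per_sec = 30e6        # ~30 MB/s
hours = capacity_bytes / rate_bytes_per_sec / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days)")   # ~74 hours, ~3.1 days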

Link to comment

Clearly these are NOT designed for significant write activity.  That's why they're "Archive Drives" ... and why Seagate does not recommend them for anything other than archival storage.    Note that Storage Review says that, with regard to using these in a NAS environment, they "... strongly recommend against such usage, as at this time SMR drives are not designed to cope with sustained write behavior."

 

As an example, the Storage Review article did a disk rebuild in a NAS with one of these drives and with an 8TB Hitachi that uses traditional PMR recording.    The result:  the shingled drive averaged 9.6MB/s write speed during the rebuild;  the traditional drive averaged 156.4MB/s  :)

 

While clearly Seagate has done a very good job of mitigating (i.e. "hiding") this write penalty in many use cases -- and this would likely include most cases where they are used as data drives for an UnRAID array -- there's nothing they can do to mitigate it when the entire drive is being written, such as during a drive rebuild.

 

Bottom line:  They're probably fine for a backup array, or even as your primary data drives IF most of your content is static;  you don't write more than 20GB/day (or at least not more than that every few hours, so the drives can empty the persistent cache between writes); AND you don't mind the VERY slow rebuild times you would encounter in the event of a failed drive => The rate noted by Storage Review implies a rebuild time of nearly 10 DAYS !!
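For anyone who wants to check that "10 DAYS" figure, here's the same arithmetic in a couple of lines of Python, using the 9.6 and 156.4 MB/s rebuild averages Storage Review reported:

capacity_bytes = 8e12   # 8 TB drive
for label, rate in [("SMR  @  9.6 MB/s", 9.6e6), ("PMR  @156.4 MB/s", 156.4e6)]:
    hours = capacity_bytes / rate / 3600
    print(f"{label}: ~{hours:.0f} h (~{hours / 24:.1f} days)")
# SMR: ~231 h (~9.6 days)    PMR: ~14 h (~0.6 days)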

 

Link to comment

The slow rebuild time was my point.

A failed drive taking 2 days or more to rebuild would concern me.

For me, a rebuild that fits in less than half a day is adequate.

 

In a backup array scenario, I can see using these drives. In a primary array, I might be apprehensive.

 

Is it possible for someone to post a smartctl -a or smartctl -x of an 8TB SMR drive?

Link to comment

Definitely agree.  These would be okay in a backup array, but I'd be very hesitant to use them in a primary array.  The Storage Review results clearly show the limitations of this technology -- not surprising considering the full-band rewrite requirements.

 

In a backup scenario, where the initial backup would likely do a lot of full-band writes (eliminating the delays from the persistent cache plus band rewrites) and subsequent backups would likely not exceed the persistent cache size (at least not often), they should work fine.    If this were a parity-protected array, you'd still have the excessive drive rebuild times to contend with, but that's not quite as bad as having the issue on a primary array.

 

Link to comment

Well, with drive sizes in this size range, a rebuild is going to take a long time no matter what.  Even the He8 8tb drives that cost a pot of leprechaun gold took over 19 hours to rebuild a mirror.

 

Next time I can get a deal, I'll grab another Seagate SMR 8tb drive and install it as a data drive.  That'll show whether an unRAID rebuild has the same performance hit as a RAID1 rebuild in a Synology box.  But I kinda hope someone tries it before me.  I've got 8tb free right now so it'll be a while before I need the space.  :)

Link to comment

Well a quick gauge of minimum pass time is to look at the smartctl output for the Extended self-test routine time.

 

Device Model:     HGST HDN726060ALE610
...
Extended self-test routine
recommended polling time:        ( 735) minutes.

 

We'll probably never get very far under this time in a full pass.

 

So this is about 12 hours on a 6Tb drive, which is about my limit of comfort.

 

Can you post the recommended polling time on the 8TB units?
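In the meantime, a naive extrapolation from the 6TB figure, assuming the 8TB unit sustains roughly the same sequential rate (an assumption that SMR behavior may well break), would be:

polling_6tb_min = 735                       # HGST 6TB figure from the smartctl output above
est_8tb_min = polling_6tb_min * 8 / 6       # scale linearly with capacity (assumption)
print(f"6TB: {polling_6tb_min / 60:.1f} h   8TB estimate: {est_8tb_min / 60:.1f} h")
# 6TB: 12.2 h   8TB estimate: ~16.3 h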

Link to comment

Well, with drive sizes in this size range, a rebuild is going to take a long time no matter what.  Even the He8 8tb drives that cost a pot of leprechaun gold took over 19 hours to rebuild a mirror.

 

Next time I can get a deal, I'll grab another Seagate SMR 8tb drive and install it as a data drive.  That'll show whether an unRAID rebuild has the same performance hit as a RAID1 rebuild in a Synology box.  But I kinda hope someone tries it before me.  I've got 8tb free right now so it'll be a while before I need the space.  :)

 

19 hours is QUICK, however, when compared to 10 DAYS  :)

 

An UnRAID rebuild will most likely be just as bad.    But it IS possible that this might generate enough sequential writes that the drive recognizes that full bands are always being written -- if that's the case it will skip the persistent cache and the band rewrites ... and in that case the rebuild would be MUCH quicker -- HOURS instead of DAYS  :)

 

That would, of course, be VERY nice  8)

 

... now if one of you guys who has a few of these drives would just experiment with a rebuild so we'll all know  :)

 

pkn: Are you setting up a new array with your 3 drives?    That would be a "perfect" test platform for a rebuild, since there would be no other drives involved to alter any timings.    Once you've set up the array and done a parity check, you'd have to (a) stop the array and un-assign one of the drives;  (b) start the array so it's shown as "missing";  (c) stop the array and reassign the drive; and (d) start the array and let it do the rebuild.      Not sure if you're willing to spend the time to do this, but I can assure you we'd all be VERY interested in the results !!    [Note:  You don't really even need any data on the array to do this -- the rebuild time isn't impacted by how much data is on the disk.]

 

 

 

Link to comment

The slow rebuild time was my point.

A failed drive taking 2 days or more to rebuild would concern me.

For me, a rebuild that fits in less than half a day is adequate.

 

 

That's why single parity is not used for large drives.

Link to comment

The slow rebuild time was my point.

A failed drive taking 2 days or more to rebuild would concern me.

For me, a rebuild that fits in less than half a day is adequate.

 

That's why single parity is not used for large drives.

 

 

There's not much choice at the current time unless a person migrates away from unRAID.

Link to comment

The speed of the rebuild is based on the speed of the replacement drive. So if an 8TB SMR drive went and you wanted a faster recovery, you could use a PMR 8T drive or a RAID0 of 2 4T drives.

 

Other SMR drives in the array would not hold you back as there is no read penalty.

 

The requirement to rebuild in 1/2 day (12 hours?) is somewhat arbitrary. Although I understand that getting the array back to a protected state in a reasonable period of time is certainly desired, for me, less than a day would be sufficient.

 

I agree that dual parity would be very helpful for these large disk arrays for peace of mind when rebuilds are in progress.

 

But also recognize that the cost of protection is on the rise. With a 4T parity, the cost of parity protection is about $150 single or $300 dual (when/if dual parity available). With an 8T protected array you'd be looking at $300 single and $600 dual.

Link to comment

Clearly the longer the rebuild time, the more important dual fault-tolerance is.  And with larger drives, the likelihood of a 2nd failure during a rebuild gets notably higher as well ... yet another reason it would be nice to have dual parity.

 

As for the "cost of protection" => Yes, the larger drives cost more, but they're also "protecting" twice as much data, assuming all drives in the array are the same size.

 

e.g. a 4TB parity drive in an array filled with 4TB drives is "protecting" exactly half as much data as an 8TB parity drive in the same array filled with 8TB drives.  So if the drive costs twice as much, it's exactly the SAME cost/TB of protected space.
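Putting rough numbers on that, using the ~$150 / ~$300 price points mentioned above and a hypothetical 10-data-drive array in both cases:

data_drives = 10                            # hypothetical array size, same for both cases
for size_tb, price in [(4, 150), (8, 300)]:
    protected_tb = size_tb * data_drives    # parity covers one drive's worth across the array
    print(f"{size_tb}TB parity (${price}) over {protected_tb}TB of data: "
          f"${price / protected_tb:.2f} per protected TB")
# both come out to $3.75 per protected TB -- identical when price scales with capacity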

 

 

Link to comment

The requirement to rebuild in 1/2 day (12 hours?) is somewhat arbitrary.

 

This is a matter of personal choice.  I can't babysit a rebuild when it takes more than half a day.

Usually when a drive fails, I'm in an agitated state, so having it rebuilt in what I consider a reasonable time is important.

For others it probably doesn't matter.

 

The speed of the rebuild is based on the speed of the replacement drive. So if an 8TB SMR drive went and you wanted a faster recovery, you could use a PMR 8T drive or a RAID0 of 2 4T drives.

 

Other SMR drives in the array would not hold you back as there is no read penalty.

 

If it takes almost a day to pass through a single 8TB drive, then that's the fastest a rebuild can ever go to completion.

Granted, if a 4TB drive failed and is being rebuilt, you only have to pass through the first 4TB before that drive is safe again.

 

Does the rebuild stop at 4TB, or do the drives read all the way to the end of the parity drive?

That's an interesting point that I've never considered.

Link to comment

Clearly the longer the rebuild time, the more important dual fault-tolerance is.  And with larger drives, the likelihood of a 2nd failure during a rebuild get notably higher as well ... yet another reason it would be nice to have dual parity.

 

And this might be a case for raising the priority of additional parity disks.

 

Using these values (from a prior post) it's sort of a baseline in how long you would be unprotected.

i.e. at least 16-19 hours to write the full 8tb drive.

== Last Cycle's Pre Read Time  : 18:19:52 (121 MB/s)
== Last Cycle's Zeroing time   : 16:58:56 (130 MB/s)
== Last Cycle's Post Read Time : 41:19:33 (53 MB/s)

 

On top of that you have to add drive acquisition time, plus the whole surface analysis/preclear, before doing the rebuild.
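Those cycle times are roughly what a straight capacity-over-rate calculation predicts, which makes them a reasonable yardstick for any full pass over an 8TB drive:

capacity_bytes = 8e12
for label, rate in [("pre-read  @121 MB/s", 121e6),
                    ("zeroing   @130 MB/s", 130e6),
                    ("post-read @ 53 MB/s", 53e6)]:
    print(f"{label}: ~{capacity_bytes / rate / 3600:.1f} h")
# ~18.4 h, ~17.1 h, ~41.9 h -- in line with the 18:19, 16:58 and 41:19 figures above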

Link to comment

Well a quick gauge of minimum pass time is to look at the smartctl output for the Extended self-test routine time.

Device Model:     HGST HDN726060ALE610
...
Extended self-test routine
recommended polling time:        ( 735) minutes.

 

We'll probably never get very far under this time in a full pass.

 

So this is about 12 hours on a 6Tb drive, which is about my limit of comfort.

 

I have pretty modern drives in my system, nothing too slow, all 1TB platters, mixed 7200 RPM/5900 RPM.

 

What I found interesting in my 8-drive array was that the parity check speed was pretty close to that recommended polling time.

Duration: 12 hours, 25 minutes, 52 seconds. Average speed: 134.1 MB/sec

 

 

So that number gives you a best case scenario of passing through the drives.
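Multiplying the reported average speed by the duration gives back the total capacity covered, which is a quick sanity check on those numbers (here it comes out to ~6TB, presumably the size of the parity drive):

duration_s = 12 * 3600 + 25 * 60 + 52    # 12 hr 25 min 52 sec from the parity check report
avg_rate = 134.1e6                       # 134.1 MB/s reported
print(f"~{duration_s * avg_rate / 1e12:.2f} TB covered in one pass")   # ~6.00 TB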

Link to comment

But also recognize that the cost of protection is on the rise. With a 4T parity, the cost of parity protection is about $150 single or $300 dual (when/if dual parity available). With an 8T protected array you'd be looking at $300 single and $600 dual.

 

But is it? The price of storage is falling.

 

In Dec 2011, I purchased 4Tb drives for $199. Now I get them for $100. Yes, those are "sale" prices, but prices for your favorite drive have fallen too.

 

Here (again) is my basic spreadsheet for quickly calculating storage costs for builds of various sizes, which includes the ability to vary the protection level. I've saved it with unRAID-like limits, but you can adjust the parity and see the cost change.

Cost-per-TB.zip
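For anyone who'd rather not open the spreadsheet, here's a minimal sketch of the same kind of calculation (the drive counts and the $260 price are just example inputs, not recommendations):

def cost_per_protected_tb(data_drives, parity_drives, drive_tb, drive_price):
    """Total build cost divided by usable (protected) capacity, all drives the same size."""
    total_cost = (data_drives + parity_drives) * drive_price
    usable_tb = data_drives * drive_tb
    return total_cost / usable_tb

# example: 10 data drives + 1 parity, 8TB drives at the $260 price in the thread title
print(f"${cost_per_protected_tb(10, 1, 8, 260):.2f} per usable TB")   # $35.75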

Link to comment

... ... now if one of you guys who has a few of these drives would just experiment with a rebuild so we'll all know  :)

 

pkn: Are you setting up a new array with your 3 drives?    That would be a "perfect" test platform for a rebuild, since there would be no other drives involved to alter any timings.  ...

You know, this might actually be a good idea... I do have a spare Supermicro H8DME-2 mobo... can't remember 'bout the CPU/RAM, will have to look. I don't need these disks immediately, and I'm kinda put off right now by the long preclearing times, not wanting to have prolonged "do-not-touch-the-server-it's-preclearing" periods. So making a new, separate temporary test server just for playing with the 8TBs might be a go. I'll check what I have and report a bit later. But in any case, it wouldn't be any time soon, I still have to do the preclearing before shucking them.

Link to comment

Clearly the longer the rebuild time, the more important dual fault-tolerance is.  And with larger drives, the likelihood of a 2nd failure during a rebuild gets notably higher as well ... yet another reason it would be nice to have dual parity.

 

As for the "cost of protection" => Yes, the larger drives cost more, but they're also "protecting" twice as much data, assuming all drives in the array are the same size.

 

e.g. a 4TB parity drive in an array filled with 4TB drives is "protecting" exactly half as much data as an 8TB parity drive in the same array filled with 8TB drives.  So if the drive costs twice as much, it's exactly the SAME cost/TB of protected space.

 

Not true. A 4TB parity can protect exactly the same amount of space - or even more. It just depends on how much data you have and how many disks it is spread across.

 

If you have a mix of 2TB, 3TB, and 4TB drives, and add an 8TB parity and an 8TB disk, you are doubling the size and cost of parity, but not doubling the amount of protected space. And if instead you had added a few 4T drives, you'd have avoided the parity upsize, reduced parity check time, and spent considerably less for the same size jump.
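A concrete comparison, using the rough street prices mentioned in this thread ($100 for a 4TB, $260 for an 8TB) and ignoring that the displaced 4TB parity could be reused as a data drive:

# option A: buy an 8TB parity plus an 8TB data drive
cost_a, added_tb_a = 260 + 260, 8
# option B: keep the existing 4TB parity and add two 4TB data drives (uses one more slot)
cost_b, added_tb_b = 100 + 100, 8
print(f"A: ${cost_a} for +{added_tb_a}TB    B: ${cost_b} for +{added_tb_b}TB")
# A: $520 for +8TB    B: $200 for +8TB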

 

The reasons to go for a bigger parity (IMO) are ...

1 - If the larger size is considerably less per T than the current parity size. I remember when 3T drives were much less per T than 2T drives. It made sense to upgrade parity if you needed to add space, as it almost paid for itself and would more than pay for itself in the future.

2 - If you are running low on slots and continuing to invest in the smaller drives is going to put you in an upsize mode. Upsizing small disks for larger ones after the 5 year mark is table stakes, but upgrading a 4T drive to 6T or 8T after a year or two because you have no more rooms at the unRAID hotel - that is bad planning.

 

Link to comment

Using these values (from a prior post) it's sort of a baseline in how long you would be unprotected.

i.e. at least 16-19 hours to write the full 8tb drive.

== Last Cycle's Zeroing time   : 16:58:56 (130 MB/s)

 

The zeroing time could be used as a guide to the time it would take to rebuild the 8TB drive, without other limiters. I'd expect it to be within 125% of this.

Link to comment

The zeroing time could be used as a guide to the time it would take to rebuild the 8TB drive, without other limiters. I'd expect it to be within 125% of this.

 

I'd agree with this IF (and this is a big IF) UnRAID writes the data during a rebuild in a manner that allows the drive to always recognize that full band writes are being done, so it will skip the use of the persistent cache and the band rewrites that would entail.    If that's not the case, the rebuild will be FAR longer than the zeroing time  [Writing zeroes sequentially across the entire disk clearly bypasses the persistent cache -- that's evident from the timings we've already seen posted here].
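The gap between those two outcomes is roughly this (the 9.6 MB/s worst case is just the Storage Review figure; which path the drive actually takes is exactly what the experiment would settle):

zeroing_s = 16 * 3600 + 58 * 60 + 56        # the 16:58:56 zeroing time quoted above
best_case_h = zeroing_s * 1.25 / 3600       # rebuild recognized as full-band sequential writes
worst_case_d = 8e12 / 9.6e6 / 86400         # rebuild bottlenecked by the persistent cache
print(f"best case ~{best_case_h:.0f} h, worst case ~{worst_case_d:.1f} days")
# best case ~21 h, worst case ~9.6 days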

 

Hopefully this will prove to be the case -- I suspect either pkn or jtown will do this experiment one of these days and let us know  :)  [Or anyone else who decides to buy a few of these to experiment with.]

 

Link to comment
