[Out of Stock] Seagate Archive 8 TB - $263.02


mdoom


These are starting to pop up here and there.  Rakuten (buy.com) had some @ $239 but sold out quickly.  I think CompSource has some at $249 as I type this: http://www.compsource.com/pn/ST8000AS0002/Seagate-394/Archive-Hdd-Sata-6gbS-8tb/

 

I had ordered 10 at B&H at the $249 pre-order price in January.  I received them March 10 but they were packaged poorly (just thrown in loose) and 4 were DoA so I'm working out replacements.  In hindsight, I should've just pre-ordered 20 from BLT and sold the excess to the good people on this forum for cost.  :)

 

Link to comment
I had ordered 10 at B&H at the $249 pre-order price in January.  I received them March 10 but they were packaged poorly (just thrown in loose) and 4 were DoA so I'm working out replacements.
That is disappointing news.  It's the first time I've heard about bad shipping from B&H on the forums, and I've had better luck, but I'm only ordering drives in pairs, not in your quantities, so maybe that made a difference for me.
Link to comment

Has anyone used these in an array?

 

Assuming they would be used as a parity drive, it's a bit concerning that they have poor sustained write performance, according to this review: http://www.storagereview.com/seagate_archive_hdd_review_8tb

 

There's been an extensive discussion about these drives here:

http://lime-technology.com/forum/index.php?topic=36749.0

 

The bottom line is that random writes are mitigated nicely by a "persistent cache" (a non-shingled area of the disk that's used to store random writes before they're moved to the shingled area); but if you exceed the size of the persistent cache, writes get VERY slow until the drive "catches up".

 

While we're still waiting for someone to build an all-Archive-drive array for real results, it seems these would work fine as long as the amount of data written to the array in any given session is fairly modest.

 

Note also that if a significant amount of writing is done sequentially, the disk's firmware recognizes the full-band writes and avoids both the persistent cache and the band rewrites ... which also avoids the slowdowns.
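
To make that mechanism concrete, here's a toy Python model of the behavior described above.  It's purely illustrative: the band size, cache size, and speeds are rough assumptions drawn from third-party reviews, not published Seagate specs, and the real firmware logic is proprietary.

```python
# Toy model of an SMR drive's persistent cache (illustration only -- the
# real firmware logic is proprietary; the cache size and speeds below are
# rough figures from third-party testing, not Seagate specs).

CACHE_MB     = 25_000    # ~25GB persistent cache reported by reviewers
FAST_MBPS    = 150       # assumed streaming write speed
REWRITE_MBPS = 8         # assumed effective speed during band rewrites

def write(cache_used_mb, size_mb, sequential_full_band):
    """Return (new_cache_used_mb, effective_MBps) for one write burst."""
    if sequential_full_band:
        # Full-band sequential writes bypass the cache entirely.
        return cache_used_mb, FAST_MBPS
    if cache_used_mb + size_mb <= CACHE_MB:
        # Random writes land in the non-shingled cache at full speed.
        return cache_used_mb + size_mb, FAST_MBPS
    # Cache exhausted: the drive must rewrite whole bands to make room.
    return CACHE_MB, REWRITE_MBPS

cache = 0
for burst in range(4):
    cache, speed = write(cache, 10_000, sequential_full_band=False)
    print(f"burst {burst}: cache {cache/1000:.0f}GB used, ~{speed} MB/s")
```

Running it shows the cliff: the first two 10GB random bursts go at full speed, and everything after the cache fills crawls at band-rewrite speed.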

 

Whether or not these characteristics would make it a good choice for a particular UnRAID array depends entirely on just how that array will be used.    They ARE "archive drives" and not designed for extensive random writes.  If that's how you use your array, they'd be a poor choice -- especially for a parity drive.

 

Link to comment

Thanks for the info.

 

I think it'd be pretty safe to assume that most of these drives (at least the 6TB & 8TB models) would be used as parity drives, as they are the biggest reasonably priced drives on the market now.

I think these would make very poor parity drives, especially if you need to do a parity check or rebuild... it would take forever.

Link to comment

... I think these would make very poor parity drives, especially if you need to do a parity check or rebuild... it would take forever.

 

 

A parity check, or a rebuild of another data drive, should go as fast as the drive can read.

It's yet to be fully determined how a parity generate (sync) and/or a data rebuild on one of these drives is handled.

 

 

From what I saw in a review, it could drop as low as 40MB/s near the end of the drive.  I would imagine the head would be swinging from the outer cache tracks to write to the inner tracks, unless the hard drive's cache algorithm manages it well.
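
For context on that end-of-drive number: part of the drop is just geometry, since at constant RPM sequential throughput scales with track radius.  A quick back-of-envelope check (the radii and outer-track speed are assumptions, not measured specs):

```python
# Why speed falls near the end of ANY drive: at constant RPM, sequential
# throughput scales with track radius.  Radii and the outer-track speed
# are assumptions for illustration, not measured specs.

outer_mm, inner_mm = 46.0, 22.0   # typical 3.5" platter radii (approx.)
outer_mbps = 190                  # approx. outer-track speed from reviews

inner_mbps = outer_mbps * inner_mm / outer_mm
print(f"expected inner-track speed: ~{inner_mbps:.0f} MB/s")
# ~91 MB/s -- so a drop to 40 MB/s implies SMR overhead on top of geometry.
```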

Link to comment

... A parity check, or a rebuild of another data drive, should go as fast as the drive can read.

... It's yet to be fully determined how a parity generate (sync) and/or a data rebuild on one of these drives is handled.

Yup, meant to only type rebuild/sync in there.

Link to comment

... A parity check, or rebuild of another data drive should go as fast as the drive can read.

 

Agree.

 

 

... It's yet to be fully determined how a parity generate (sync) and/or a data rebuild on one of these drives is handled.

 

True.  Some of the early testing outlined in the thread I referenced above shows that writing to the entire drive sequentially works just fine (i.e. the zeroing in a pre-clear) ... and if the parity sync and/or drive rebuilds write in the same order they should as well.    But we need some real-world use of the drives to confirm this.    I'd be more concerned about possible severe slow-downs during typical array usage if there's enough write activity to fill the persistent cache => THAT is when the write speeds get really bad.
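
For anyone who wants to generate that real-world data, here's a minimal sketch of a sequential-vs-random write comparison.  It assumes Linux, root, and an *empty* test drive (it's destructive); /dev/sdX is a placeholder, and the pass sizes are arbitrary:

```python
# Rough sketch of a sequential-vs-random write test.  DESTRUCTIVE: it
# writes raw data, so point it only at an empty test drive.  O_SYNC keeps
# the OS page cache from hiding the drive's own caching behavior.

import os, time, random

DEV   = "/dev/sdX"          # placeholder -- your EMPTY test drive
BLOCK = 1 << 20             # 1 MiB per write
TOTAL = 2 << 30             # 2 GiB per pass
buf   = os.urandom(BLOCK)

def pass_mbps(randomize):
    fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)
    span = 500 << 30        # scatter random writes over the first 500 GiB
    t0 = time.time()
    for i in range(TOTAL // BLOCK):
        off = random.randrange(0, span, BLOCK) if randomize else i * BLOCK
        os.pwrite(fd, buf, off)   # O_SYNC: each write hits the drive
    os.close(fd)
    return (TOTAL >> 20) / (time.time() - t0)

print(f"sequential: {pass_mbps(False):.0f} MB/s")
print(f"random:     {pass_mbps(True):.0f} MB/s")
```

If the persistent-cache theory holds, the random pass should start fast and collapse once roughly 25GB has been written.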

 

 

... From what I saw on a review, it could drop as low as 40MB/s near the end of the drive. I would imagine the head would be swinging from the outer cache tracks to write to the inner tracks unless the hard drive's cache algorithm manages it well.

 

It's actually much worse than that, but has little to do with thrashing the head between the inner & outer cylinders.  The Storage Review article noted "... we see sustained write performance all over the map, including single digit throughput for long periods." => i.e. the speeds were under 10MB/s for extended times.    Given the architecture of the drives, this is clearly happening when the persistent cache is being cleared and full band rewrites are being done in the shingled areas where the data is written.  While head movement between the two locations is certainly a factor, I think the full band rewrites are far more likely the cause of this very slow write speed.

 

HOWEVER ... it's fairly likely that a parity sync, or possibly even a drive rebuild, would be written sequentially.  As long as that's the case, and the writes are close enough together that the drive's firmware recognizes full bands are being written, the persistent cache won't be used and band rewrites won't be required, since the drive will "know" the full band is being written anyway.  If so, the write speeds will be very good ... not quite up to par with a PMR drive, but nevertheless very acceptable.

 

Link to comment

... I think these would make very poor parity drives, especially if you need to do a parity check or rebuild... it would take forever.

 

As I noted above, a parity check or drive rebuild may not be bad at all, depending on whether it's done in a way the drive's firmware recognizes as sequential, full-band writes ... skipping both the persistent cache and band rewrites makes this drive behave much like a traditional PMR drive.

 

The more likely cause of exceptionally poor performance would be random writes that exceed the persistent cache size.  When this happens, write performance becomes abysmal (< 10MB/s).    With the 8TB unit, the persistent cache is 25GB (this is based on testing by 3rd parties - Seagate doesn't publish this spec) ... so that's a fair amount of writing.
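
To put that 25GB in perspective, a quick calculation of how long it would last under sustained random writes (the cache size is the third-party figure above; the write speed is an assumption):

```python
# How long a 25GB persistent cache lasts under sustained random writes.
# 25GB is the third-party figure noted above; 150 MB/s is assumed.

cache_gb, write_mbps = 25, 150
minutes = cache_gb * 1024 / write_mbps / 60
print(f"~{minutes:.0f} min of full-speed random writes before it fills")
# ~3 minutes of *continuous* writing -- but typical array use writes in
# bursts, giving the drive idle time to drain the cache between them.
```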

 

The best way to avoid this performance degradation is not to use these as parity.    You could still fill the cache on data drives, but it seems less likely: large files would most likely be written sequentially, avoiding both caching and band rewrites ... so the cache would only be used for smaller files and would be much less likely to fill up.    Clearly your parity "drive" would also have to be 8TB ... but this could be a RAID-0 array using a pair of 4TB drives.

 

Link to comment

Clearly your parity "drive" would also have to be 8TB ... but this could be a RAID-0 array using a pair of 4TB drives.

 

Does unRAID support taking two 4TB drives and making a RAID-0 for the parity?

 

unRAID's software layer does not do the concatenation or RAID-0.

 

To do this you need a hardware RAID controller such as the Areca.

 

 

Another choice is a hardware module that uses the Silicon Image chipsets capable of turning multiple drives into a RAID volume accessible through one SATA/eSATA port.

 

There are many hardware RAID devices like this.

Link to comment

Clearly your parity "drive" would also have to be 8TB ... but this could be a RAID-0 array using a pair of 4TB drives.

 

Does unRAID support taking two 4TB drives and making a RAID-0 for the parity?

 

Sure ... you need a simple RAID controller to do it, as unRAID doesn't do it natively, but it's easy to do (several folks on this forum already do this).

 

Link to comment

... The Storage Review article noted "... we see sustained write performance all over the map, including single digit throughput for long periods." => i.e. the speeds were under 10MB/s for extended times. ...

 

That review also presented the drives in a RAID-1 configuration on a busy array.

 

I think unRAID'ers may see different results, i.e. no access to the drive being rebuilt, or leaving the array alone until the rebuild is complete.

 

In that case it'll still take at least a day or so.
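
Roughly what "a day or so" works out to, assuming the rebuild writes sequentially at a few plausible average speeds (assumed figures, not measurements):

```python
# Rough rebuild/parity-sync time for an 8TB drive at assumed average
# speeds (sequential, full-band writes -- i.e. the best case).

tb = 8
for mbps in (150, 100, 40):                # assumed average speeds
    hours = tb * 1e6 / mbps / 3600
    print(f"at {mbps:>3} MB/s: ~{hours:.0f} h")
# at 150 MB/s: ~15 h   at 100 MB/s: ~22 h   at 40 MB/s: ~56 h
```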

Link to comment

The only "negative" about it is it uses a floppy-style power connector ... not always conveniently available on modern PSU's  [They generally have one, but it's likely on the end of a molex string that you wouldn't be using otherwise).

 

Fortunately, Monoprice has a very handy adapter for this: http://www.monoprice.com/Product?c_id=102&cp_id=10226&cs_id=1022604&p_id=7642&seq=1&format=2

 

 

Link to comment

The only "negative" about it is it uses a floppy-style power connector ... not always conveniently available on modern PSU's  [They generally have one, but it's likely on the end of a molex string that you wouldn't be using otherwise).

 

Fortunately, Monoprice has a very handy adapter for this: http://www.monoprice.com/Product?c_id=102&cp_id=10226&cs_id=1022604&p_id=7642&seq=1&format=2

 

The user guide PDF shows an eSATA to floppy adapter included.

Link to comment

The user guide PDF shows an eSATA to floppy adapter included.

 

SWEET

http://www.addonics.com/userguides/AD2HDDHP6G.pdf

 

RAID 0

RAID 0 involves “striping,” where the drives carry alternating sections of the overall space.

RAID 0 is designed for high performance but is not fault tolerant.

Failure of either device will result in loss of all data.

RAID 0 requires identical capacity on both drives.

If the two media are different capacities, the lower capacity drive determines “membership size.”

The extra space on the other drive is unused.

 

 

LARGE

LARGE Mode “spans” the drives.

The lowest portion of the overall capacity is contained on the First Target (the drive the HPM is attached to).

The higher portion is contained on the Second Target.

All available space on both media is used, even if the drives are not the same size.

This set is not fault tolerant. If one of the devices fails, some of the data may or may not be recoverable from the other.

 

So this means you can build a 6TB array from a 4TB and a 2TB drive in LARGE mode.
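
A quick sanity check of the capacity rules quoted from the guide (simple arithmetic, nothing drive-specific):

```python
# Usable capacity under the two modes described in the Addonics guide.

def raid0_tb(a, b):   # striping: the smaller drive sets the membership size
    return 2 * min(a, b)

def large_tb(a, b):   # spanning: all space on both drives is used
    return a + b

print(raid0_tb(4, 4))   # 8 -> a pair of 4TB drives as an 8TB parity "drive"
print(large_tb(4, 2))   # 6 -> the 4TB + 2TB example above
print(raid0_tb(4, 2))   # 4 -> striping that same mismatched pair wastes 2TB
```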

Link to comment
