
Seagate launch "NAS HDD" (WD Red Copycat!)



http://www.theregister.co.uk/2013/06/17/seagate_gets_nasty/

 

http://www.anandtech.com/show/7062/seagate-introduces-nas-hdd-wd-red-gets-a-competitor

 

"Seagate Introduces NAS HDD: WD Red Gets a Competitor

by Ganesh T S on June 11, 2013 8:01 AM EST

 

Consumers looking to fill their SOHO / consumer NAS units with hard drives haven't had too many choices. Western Digital recognized early on that the dwindling HDD sales in the PC arena had to be made up for in the fast growing NAS segment. Towards this, they introduced the WD Red series (in 1TB, 2TB and 3TB capacities) last July. Today, Seagate is responding with their aptly named NAS HDD lineup. Just like the WD Red, these HDDs are targeted towards 1- to 5-bay NAS units. WD terms their firmware secret sauce as NASWare and Seagate's is NASWorks. NASWorks supports customized error recovery controls (TLER in other words), power management and vibration tolerance.

 

TLER helps to ensure that drives don't get dropped from the NAS and send the array into a rebuild phase. Seagate also claims that the firmware has an optimal balance for sequential and random performance.

 

Seagate does have a lead over WD in the capacity department. While the WD Red currently tops out at 3TB, Seagate's NAS HDD comes in 2 TB, 3 TB and 4 TB flavors. Seagate hasn't provided any information on the number of platters or spindle speed. Power consumption numbers are available, though. Average operating power is 4.3W for the 2TB  model and 4.8W for the 3 TB and 4 TB ones.

 


 

Pricing is set at $126, $168 and $229 for the 2TB, 3TB and 4TB models respectively.

 

Source: Seagate"

 

Manual: http://www.seagate.com/files/www-content/product-content/nas-fam/nas-hdd/en-us/docs/100724684.pdf

 

http://www.seagate.com/gb/en/internal-hard-drives/nas-drives/nas-hdd/


They are already listed on Amazon UK and Newegg and the prices are reasonable -- similar to the Hitachi price here.

 

However, I cannot find the warranty period and Seagate say to enter the serial number on their web site to find out the warranty -- that's not very helpful, fellas!

 

If they are at a 1 or 2 year warranty, I'll stick with Hitachi and their 3 year warranty.


> They are already listed on Amazon UK and Newegg and the prices are reasonable -- similar to the Hitachi price here.
>
> However, I cannot find the warranty period and Seagate say to enter the serial number on their web site to find out the warranty -- that's not very helpful, fellas!
>
> If they are at a 1 or 2 year warranty, I'll stick with Hitachi and their 3 year warranty.

The AnandTech link says: "Update: Seagate has released an extensive product manual here. The 3TB and 4TB models have four platters each, while the 2TB model has two. The drives have a 3-year warranty."

> They are already listed on Amazon UK and Newegg and the prices are reasonable -- similar to the Hitachi price here.
>
> However, I cannot find the warranty period and Seagate say to enter the serial number on their web site to find out the warranty -- that's not very helpful, fellas!
>
> If they are at a 1 or 2 year warranty, I'll stick with Hitachi and their 3 year warranty.
>
> The AnandTech link says: "Update: Seagate has released an extensive product manual here. The 3TB and 4TB models have four platters each, while the 2TB model has two. The drives have a 3-year warranty."

 

Interesting that the 2TB and 4TB are 1TB/platter drives, but the 3TB are 750GB/platter units !!  Guess they won't sell many 3TB units -- at least not to anyone who's paying attention.

 

The 8760 power-on hours spec is amazing -- ONE year of use !!  (SURELY that's an error in the specs)

 


> Interesting that the 2TB and 4TB are 1TB/platter drives, but the 3TB are 750GB/platter units !!  Guess they won't sell many 3TB units -- at least not to anyone who's paying attention.
>
> The 8760 power-on hours spec is amazing -- ONE year of use !!  (SURELY that's an error in the specs)

 

That's exactly 1 year, so I assume it means you can run them 24/7 -- 8760 hours per year. That does need clarification, though.


> That's exactly 1 year, so I assume it means you can run them 24/7 -- 8760 hours per year. That does need clarification, though.

I hope you are correct!

 

I'm sure that's what the specs MEAN ... but it's not what they SAY  :)

 

... but it really can't be anything else => why sell a NAS drive that's not intended for 24/7 use !!??

 

One interesting thing about Seagate's description of the drives:  They say it's for "1-to-5 bay Network Attached Storage systems"

 

WD doesn't list any such constraint ... they just say "for NAS systems"

 

Clearly the drives themselves don't give a rip how many drives are in the system ... but it's interesting that Seagate tends to discourage large arrays with these drives.

 


> One interesting thing about Seagate's description of the drives:  They say it's for "1-to-5 bay Network Attached Storage systems"
>
> WD doesn't list any such constraint ... they just say "for NAS systems"
>
> Clearly the drives themselves don't give a rip how many drives are in the system ... but it's interesting that Seagate tends to discourage large arrays with these drives.

Actually, WD has the same 1-5 bay limitation listed on their site (http://www.wdc.com/en/products/products.aspx?id=810):

 

Ideal for:

Specifically designed and tested for small office and home office, 1-5 bay NAS systems and PCs with RAID.

 

I am sure this is both companies wanting these drives to be used in home NAS units, not in enterprise servers.


The manual or spec sheet or web site does say they are intended to be used 24/7.

 

I imagine they have tested the drives in a 5-bay NAS -- with regard to how they handle vibrations from the other drives while they are also spinning -- that's a lot of what the NASware firmware is about. However, in many unRAID servers, only one drive is spinning for most of the time. My drives are only all spun up once a month for 12 hours during a parity check.

 

The other part of the 1-5 drive server spec is that (if you look on their web site) they recommend you buy the more expensive Constellation server drives if you want to use more than 6. That's just marketing the "better" drives to the higher-end users/businesses.

 

Given what we do is to use lots of drives in one server, but most of the time most of the drives are idle, I think the closest "business" user would be Backblaze. They use 45 drives per server and do not use NAS drives or "server" drives. They use the most reliable, cheap, consumer drives. They use cheap Hitachi drives because they have an incredibly low failure rate. And so do I.  ;D


> The other part of the 1-5 drive server spec is that (if you look on their web site) they recommend you buy the more expensive Constellation server drives if you want to use more than 6. That's just marketing the "better" drives to the higher-end users/businesses.

 

That's not the only reason they suggest Constellation drives for arrays with large drive counts.  Seagate, WD, Hitachi, and all the major disk manufacturers build their consumer-grade drives (including the WD Reds and Seagate NAS drives) to a nonrecoverable read error specification of 1 per 10^14 bits read.  Their enterprise-grade drives (Constellation et al.) are specified an order of magnitude better (1 per 10^15 bits) !!  So the likelihood of a 2nd drive failure during a rebuild is FAR lower with enterprise class drives.

 

And with the very high capacities of today's drives, the likelihood of errors grows rapidly as you increase the number of drives.    Note that a 4TB drive has 4 x 8 (bits/byte) x 10^12 = 32 x 10^12 = 3.2 x 10^13 bits.  So if you read 4 4TB drives in full, you've already read more bits than the interval at which one unrecoverable error is expected !!  [4 x 3.2 x 10^13 = 12.8 x 10^13 = 1.28 x 10^14, vs. one expected error per 10^14 bits]    With enterprise class drives the error rate is 1/10th that of consumer drives, so you're far less likely to encounter these errors.

 

Nevertheless, virtually all enterprise class storage these days uses RAID levels that tolerate more than one drive failure (e.g. RAID-6)  => not so much because they expect dual failures; but because it lets the system tolerate a 2nd failure during a drive rebuild.    Couple that with enterprise class drives and it's FAR more reliable than a RAID 5 or any other single-fault tolerant system (like UnRAID).    A RAID-6 with a hot-spare and automatic rebuild is a VERY reliable storage system.

 

An UnRAID array with 24 drives is definitely "living on the edge" in terms of reliability.  With very high-capacity drives (e.g. 4TB), if you have a drive failure, then during the rebuild you have to read 23 x 3.2 x 10^13 bits = 7.36 x 10^14 bits on drives that statistically have 1 unrecoverable error every 10^14 bits !!  The good news is that UnRAID drives tend to spend a lot of their "life" spun down, so they don't get the 24/7 wear that drives in enterprise environments do, so MOST of the time you'll "get by" without errors during a rebuild.    Those numbers, however, are why a lot of folks have been asking for a RAID-6-equivalent dual-failure mode protection in UnRAID.
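For anyone who wants to play with those odds, here's a quick sketch. It treats the spec'd error rate as an independent per-bit probability, which is a simplification (real UREs tend to cluster), but it shows why the 10^14 vs 10^15 spec matters so much at rebuild time:

```python
import math

# Back-of-the-envelope odds of hitting at least one unrecoverable read
# error (URE) while rebuilding a failed drive in a 24-drive array of
# 4TB disks, i.e. reading all 23 surviving drives end to end.

BITS_PER_4TB_DRIVE = 4 * 8 * 10**12          # 3.2e13 bits per drive

def rebuild_ure_probability(surviving_drives, bit_error_rate):
    """P(at least one URE) when reading every surviving drive in full."""
    bits_read = surviving_drives * BITS_PER_4TB_DRIVE
    # log1p/expm1 keep the arithmetic accurate for tiny per-bit rates
    return -math.expm1(bits_read * math.log1p(-bit_error_rate))

consumer   = rebuild_ure_probability(23, 1e-14)   # consumer-class spec
enterprise = rebuild_ure_probability(23, 1e-15)   # enterprise-class spec

print(f"consumer drives:   {consumer:.1%}")       # ~99.9%
print(f"enterprise drives: {enterprise:.1%}")     # ~52%
```

By this naive model a full 23 x 4TB rebuild on 1-in-10^14 drives is near-certain to hit at least one URE, versus roughly a coin flip on 1-in-10^15 drives -- which is exactly why people keep asking for dual-failure protection.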

 

 

> Given what we do is to use lots of drives in one server, but most of the time most of the drives are idle, I think the closest "business" user would be Backblaze. They use 45 drives per server and do not use NAS drives or "server" drives. They use the most reliable, cheap, consumer drives. They use cheap Hitachi drives because they have an incredibly low failure rate. And so do I.  ;D

 

If Backblaze is using 45 consumer-grade drives in one server I certainly hope they're (a) configured in multiple arrays; and (b) those arrays are RAID-6 or some other multiple-failure-tolerant RAID level.


Indeed, Backblaze has an extensive write-up about their Storage Pods ... and they use 16TB RAID-6 volumes  :)

 

So they've got very good fault-tolerance even with the consumer-grade drives they're using.  [As I would certainly hope they would].    I suspect they also have very good backups across their data centers ... so even if they DO lose data from a RAID-6 volume they can restore it rather easily.

 

A quote from their overview of the Storage Pods:

 

"... However, ext4 has since matured in both reliability and performance, and we realized that with a little additional effort we could get all the benefits and live within the unfortunate 16 terabyte volume limitation of ext4. One of the required changes to work around ext4’s constraints was to add LVM (Logical Volume Manager) above the RAID 6 but below the file system. In our particular application (which features more writes than reads), ext4’s performance was a clear winner over ext3, JFS, and XFS."

 
