WD Red - new NAS optimized product line



I have 6 WD Red 3TB drives arriving today, and will spend the next several days pre-clearing them.  I'll post back later with my impressions of the drives.  I'm expecting it will take me a week to integrate them into the array.

 

I currently have 18 Samsung drives (F1's, F2's and F3's), some of which are 3.5 years old, most of which are over 2 years old, and none of which have ever failed.  I have at least 4 other Samsung F1's in various computers in the home, and none of those have ever failed.  Never had a DOA either.  I know everyone's experience is unique, and not all have been as fortunate as I have been, but my word that is an amazing track record.

 

I briefly had a 2TB Western Digital RE4-GP as a parity drive, a really expensive high-end 5-year-warranty bugger, but it died within a couple of months.  I was so P/O'd I never even had it replaced (though with that 5-year warranty, I guess it's not too late).  I had no intention of abandoning the Samsung drives, and a very high reluctance to go back to Western Digital, but in the current HD market the new Reds just seemed like a logical choice.  Really hoping I don't get burned again.


Finished a 1 pass pre-clear on 3 of the WD Red 3TB drives with flying colors (everything that matters is still a zero).  Of the remaining 3 drives, 1 may be defective (didn't show on boot) and 2 are still in the wrapper.

 

Here are the preclear reports:

 

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ========================================================================1.13

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == invoked as: ./preclear_disk.sh -A /dev/sdv

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==  WDC WD30EFRX-68AX9N0    WD-WMC1T0077429

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Disk /dev/sdv has been successfully precleared

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == with a starting sector of 1

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Ran 1 cycle

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Using :Read block size = 8225280 Bytes

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Pre Read Time  : 7:55:20 (105 MB/s)

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Zeroing time  : 7:12:13 (115 MB/s)

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Post Read Time : 18:04:48 (46 MB/s)

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Last Cycle's Total Time    : 33:13:22

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Total Elapsed Time 33:13:22

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Disk Start Temperature: 33C

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: == Current Disk Temperature: 30C,

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ==

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ============================================================================

Jul 25 04:03:30 Tower preclear_disk-diff[26320]: ** Changed attributes in files: /tmp/smart_start_sdv  /tmp/smart_finish_sdv

Jul 25 04:03:30 Tower preclear_disk-diff[26320]:                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

Jul 25 04:03:30 Tower preclear_disk-diff[26320]:      Temperature_Celsius =  120    117            0        ok          30

Jul 25 04:03:30 Tower preclear_disk-diff[26320]:  No SMART attributes are FAILING_NOW

 

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ========================================================================1.13

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == invoked as: ./preclear_disk.sh -A /dev/sds

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==  WDC WD30EFRX-68AX9N0    WD-WMC1T0076840

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Disk /dev/sds has been successfully precleared

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == with a starting sector of 1

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Ran 1 cycle

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Using :Read block size = 8225280 Bytes

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Pre Read Time  : 8:09:59 (102 MB/s)

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Zeroing time  : 7:26:46 (111 MB/s)

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Post Read Time : 18:29:42 (45 MB/s)

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Last Cycle's Total Time    : 34:07:29

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Total Elapsed Time 34:07:29

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Disk Start Temperature: 42C

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: == Current Disk Temperature: 34C,

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ==

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ============================================================================

Jul 25 04:58:14 Tower preclear_disk-diff[6222]: ** Changed attributes in files: /tmp/smart_start_sds  /tmp/smart_finish_sds

Jul 25 04:58:14 Tower preclear_disk-diff[6222]:                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

Jul 25 04:58:14 Tower preclear_disk-diff[6222]:      Temperature_Celsius =  116    108            0        ok          34

Jul 25 04:58:14 Tower preclear_disk-diff[6222]:  No SMART attributes are FAILING_NOW

 

 

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ========================================================================1.13

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == invoked as: ./preclear_disk.sh -A /dev/sdt

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==  WDC WD30EFRX-68AX9N0    WD-WMC1T0073984

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Disk /dev/sdt has been successfully precleared

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == with a starting sector of 1

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Ran 1 cycle

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Using :Read block size = 8225280 Bytes

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Pre Read Time  : 8:23:43 (99 MB/s)

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Zeroing time  : 7:40:33 (108 MB/s)

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Post Read Time : 19:01:48 (43 MB/s)

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Last Cycle's Total Time    : 35:07:05

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Total Elapsed Time 35:07:05

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Disk Start Temperature: 41C

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: == Current Disk Temperature: 33C,

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ==

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ============================================================================

Jul 25 06:35:12 Tower preclear_disk-diff[23441]: ** Changed attributes in files: /tmp/smart_start_sdt  /tmp/smart_finish_sdt

Jul 25 06:35:12 Tower preclear_disk-diff[23441]:                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

Jul 25 06:35:12 Tower preclear_disk-diff[23441]:      Temperature_Celsius =  117    109            0        ok          33

Jul 25 06:35:12 Tower preclear_disk-diff[23441]:  No SMART attributes are FAILING_NOW
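For anyone wanting to eyeball several of these reports at once, here's a rough sketch (my own helper, not part of preclear_disk.sh) that pulls the per-phase speeds out of report lines formatted like the ones above:

```python
import re

# Assumed line format, taken from the preclear reports above, e.g.:
#   == Last Cycle's Pre Read Time  : 7:55:20 (105 MB/s)
PHASE_RE = re.compile(
    r"== Last Cycle's (?P<phase>Pre Read|Zeroing|Post Read) [Tt]ime\s*:\s*"
    r"(?P<time>[\d:]+)\s*\((?P<speed>\d+) MB/s\)"
)

def parse_preclear_report(lines):
    """Return {phase: (elapsed, MB/s)} for one preclear report."""
    phases = {}
    for line in lines:
        m = PHASE_RE.search(line)
        if m:
            phases[m.group("phase")] = (m.group("time"), int(m.group("speed")))
    return phases
```

Fed the three reports above, it would show the post-read phase crawling along at 43-46 MB/s, well below the pre-read and zeroing speeds.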

 

 


Update:  Out of 6 WD Red 3TB drives, 1 was DOA, and 1 appears to have died during preclearing.  Of the 4 that survived, one is now parity, one replaced a 1TB data drive, and the other 2 are pending data drive upgrades.  So, for me, a 33% failure rate, as compared to my 0% failure rate for my Samsungs.  Man, I miss Samsung.

 

I did have a power failure near the end of the preclear on the last two drives, apparently long enough to drain my UPS.  The outage seems to have fallen between hour 35 and 36 of the preclear, so I'm thinking the preclear actually finished before the power went out.  The drive that survived has a valid preclear signature; the other drive is the one that is now dead.

 

I'm thinking I will run another preclear on the drive that lived, even though the SMART report is flawless.

 

Helmonder, how did your preclear go?


Both dead drives exhibit the same issue:  they don't show in BIOS, and during startup unRAID sees them and tries to connect to them.  These connection attempts go on for several minutes, delaying the server startup.  Eventually Linux appears to give up on them and unRAID finishes booting.  Inside the GUI, the dead drives are not visible.  So these drives are so dead I can't even get a SMART report.  Here's the last set of error messages for the two drives; earlier messages show the same errors (softreset failed / SRST failed type messages).

 

Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata20: softreset failed (device not ready) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata20: limiting SATA link speed to 1.5 Gbps (Drive related)

Jul 29 16:56:28 Tower kernel: ata20: softreset failed (device not ready) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata20: reset failed, giving up (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata19: softreset failed (1st FIS failed) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata19: limiting SATA link speed to 1.5 Gbps (Drive related)

Jul 29 16:56:28 Tower kernel: ata19: softreset failed (device not ready) (Minor Issues)

Jul 29 16:56:28 Tower kernel: ata19: reset failed, giving up (Minor Issues)

 

And yes, before you ask, I have tried these drives in other slots and on other controllers in the server.  The only thing I haven't done yet is plug them into my Windows machine to see how they behave on another computer.
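For what it's worth, spotting which ATA ports the kernel ultimately abandoned can be scripted.  A quick sketch (a hypothetical helper of mine, keyed to the exact message text shown above):

```python
# Scan kernel log lines for ATA ports that hit "reset failed, giving up",
# i.e. drives the kernel stopped retrying. Assumes the syslog line shape
# seen above: "... kernel: ata19: reset failed, giving up (Minor Issues)".
def gave_up_ports(syslog_lines):
    ports = set()
    for line in syslog_lines:
        if "reset failed, giving up" in line:
            for token in line.split():
                # the port appears as a token like "ata19:"
                if token.startswith("ata") and token.endswith(":"):
                    ports.add(token.rstrip(":"))
    return sorted(ports)
```

Run against the log excerpt above, it would report ata19 and ata20, matching the two drives that never appear in the GUI.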


Helmonder, how did your preclear go?

 

I did three cycles with flying colours; it is in the array rebuilding now.


So out of 7 drives that have been purchased in this thread we have 1 DOA, correct?

 

I haven't included the second drive that @Pauven reported died, as it failed under dirty circumstances (a power failure during preclear).  Even if the preclear likely finished, it's hard to say whether the drive was at fault.  I've never blamed a drive when there's an environmental issue involved; you can't, really, unless the UPS forced a clean shutdown.  Still, unfortunate to hear that @Pauven lost two drives :(


We picked up 6 of the 3TB Reds for a NAS at work.  That NAS currently has 6 Samsung F4s (2TB each) in a RAID6 configuration.

 

One of our Samsung F4s finally died after 2 years of 24x7 use.  Instead of just replacing it, we decided to expand the NAS from 8TB to 12TB.

 

Of the 6 new Reds, 1 was DOA and 1 was filled with SMART errors.

The 4 good ones we configured as a RAID5 and stress tested.  TLER seems to work (we could not use the WD Greens in this NAS at all), and the Reds did as advertised.

 

Once we got the 2 RMA drives back, we rebuilt the array as a 6-drive RAID6 and all was well.  The NAS itself reported the Red drives as slower than the Samsung F4s they replaced, but the array still exceeded the speed of copper Gigabit (that is all we care about for our needs).  They are pretty quiet and seem to do the job as expected.

 

So far we are happy with the results.

 

I might do the same thing myself at home.  If I see the 3TB Reds at a reasonable price, I will upgrade one of my 4-drive Samsung F4 RAID5s to 3TB Reds and move the F4s to my secondary unRAID.

 

The 33% failure rate is bad, but for all we know it is not the fault of WD's drives; the damage could have happened in shipping.

 

I will point out that we overpaid for them.  We had to use CDW as our vendor, and the 3TB flavor was about $225 each with shipping and tax when we ordered.  Since then, the price has gone down.

 


What will unRAID do when a WD Red drive reports back that it cannot read a sector anymore?  (With TLER the drive will not retry for long and will just report the dead sector, I think.)  To unRAID it seems the file is corrupted; will unRAID re-create the missing data on another part of the disk somehow?

 

TLER seems a good option when you run a RAID setup that can correct a broken sector by remapping the data.

If the OS or controller does not do this, you end up with corrupted files.


Well... this is just the way it works with every drive.  The difference with the WD Reds seems to be that the drive will try for a shorter amount of time to fix the issue itself.

 

This means you will know earlier that something might be going wrong.

 

In the end, that means less chance of data loss.

 

You could have a drive that gives no indication of failure until it goes up in flames.  Would you prefer that?

 

I prefer to know as soon as possible that a drive might be starting to fail.  It makes it possible for me to lessen the chance of losing data, and also to give a drive like that an extremely rigorous preclear to make it fail even sooner after the first signs, which makes it more likely that I will be able to fall back on the warranty (the three years itself also helps to that effect).
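To make the trade-off concrete, here's a toy illustration (my own ballpark numbers, not WD specs) of why a short error-recovery cap helps in RAID:

```python
# Toy model: a RAID layer typically drops a drive that stays unresponsive
# past some timeout. A TLER drive gives up on a bad sector quickly and
# reports it, so the array can rebuild that one sector from parity instead
# of losing the whole drive. All numbers here are illustrative assumptions.
def raid_outcome(drive_recovery_s: float, controller_timeout_s: float) -> str:
    if drive_recovery_s <= controller_timeout_s:
        return "sector reported bad; RAID reconstructs it from parity"
    return "drive unresponsive too long; kicked out of the array"
```

On drives that support SCT Error Recovery Control, `smartctl -l scterc /dev/sdX` should show (and can set) the actual read/write recovery limits; whether a given Red exposes this is worth checking rather than assuming.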


I would not go for 7200s... They run hotter, and with parity on, the added speed is not that impressive (the disk is not the bottleneck).  But to each his own!

 

Btw: 2TB drives are a bit over 100 EUR over here.  I would go for new ones; that gets you the warranty.  The price difference is soon gone if one fails within half a year.


I guess, looking at your name, 'here' is the same for me :)

 

http://tweakers.net/pricewatch/cat/333/serial-ata-harde-schijven.html#filter:q1ZKKkrMS_FMKVayilYyUorVUUrLLCouCSjKTE4NycxN9U2sULLKK83JwZDIzINJ5BelpBa5ZabmpChZKZVlppYXK-koFYAUghUZm-ooFSfnF0HMgnNAUgZATmpOanJJakpwQWoy1BnmQO0mZiZKsbUA

 

105 Euro it seems.

 

That might indeed be wiser, but the shop where I bought something else could only deliver 2, and they told me the brand will be phased out.  It will be all WD then, I assume.


Just read through this thread.  So it seems these drives will be good for unRAID, by letting you find out sooner rather than (too) late that a drive is failing?  Everyone still happy with them?  I will probably make my next HDD a Red, but I'm not rushing out to swap my drives for no reason!  Thanks for the info!


So far, so good:

 

 

ID#	ATTRIBUTE_NAME	FLAG	VALUE	WORST	THRESH	TYPE	UPDATED	WHEN_FAILED	RAW_VALUE
1	Raw Read Error Rate	0x002f	200	200	051	Pre-fail	Always	Never	0
3	Spin Up Time	0x0027	179	177	021	Pre-fail	Always	Never	6050
4	Start Stop Count	0x0032	100	100	000	Old age	Always	Never	138
5	Reallocated Sector Ct	0x0033	200	200	140	Pre-fail	Always	Never	0
7	Seek Error Rate	0x002e	200	200	000	Old age	Always	Never	0
9	Power On Hours	0x0032	099	099	000	Old age	Always	Never	853
10	Spin Retry Count	0x0032	100	100	000	Old age	Always	Never	0
11	Calibration Retry Count	0x0032	100	253	000	Old age	Always	Never	0
12	Power Cycle Count	0x0032	100	100	000	Old age	Always	Never	11
192	Power-Off Retract Count	0x0032	200	200	000	Old age	Always	Never	1
193	Load Cycle Count	0x0032	200	200	000	Old age	Always	Never	137
194	Temperature Celsius	0x0022	124	106	000	Old age	Always	Never	26
196	Reallocated Event Count	0x0032	200	200	000	Old age	Always	Never	0
197	Current Pending Sector	0x0032	200	200	000	Old age	Always	Never	0
198	Offline Uncorrectable	0x0030	100	253	000	Old age	Offline	Never	0
199	UDMA CRC Error Count	0x0032	200	200	000	Old age	Always	Never	0
200	Multi Zone Error Rate	0x0008	100	253	000	Old age	Offline	Never	0
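If you check these tables often, the handful of attributes that usually predict trouble can be picked out mechanically.  A small sketch (my own watch-list, not an official one):

```python
# Sketch: flag the SMART attributes most commonly watched on array drives.
# Rows are (id, name, raw_value) tuples transcribed from a smartctl table.
WATCHED = {
    5: "Reallocated Sector Ct",
    196: "Reallocated Event Count",
    197: "Current Pending Sector",
    198: "Offline Uncorrectable",
    199: "UDMA CRC Error Count",
}

def worrying(rows):
    """Return the watched attributes whose raw value is non-zero."""
    return [(attr_id, name) for attr_id, name, raw in rows
            if attr_id in WATCHED and raw > 0]
```

Run against the table above, every watched raw value is 0, which is exactly what "so far, so good" should look like.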

 


Every drive failing is a PITA, but it will eventually happen with any brand, even more so when out of warranty.  WD (4 died last year) has been less painful, with their advance replacement.  Samsung warranty is also quite easy, going through Seagate, but there's no advance replacement.  Nowadays the warranty period has decreased, unfortunately.  Are the WD Reds the only ones with a 3-year warranty?

 

Any Dutch or European sources for WD Reds?

