Seagate 8TB Shingled Drives in UnRAID



So I'm planning to test the Unifi Network Video Recorder Docker.

 

The cameras will be UniFi G3s, which have a 4-megapixel sensor and max out at 1080p @ 30fps. I'll probably experiment with recording motion-only vs. 24/7, and with reduced fps (10?).

 

What if I use an SSD cache (potentially giving it its own 1TB cache drive) and then write to the SMR drive at the usual time overnight? Since I'm not recording constantly, perhaps this would put less strain on the config?
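
Back-of-the-envelope on the write load (the bitrates, duty cycle, and camera count below are my own assumptions for illustration, not UniFi specs):

# Rough NVR write-load estimate -- all numbers here are assumptions.
def daily_writes_gb(cameras, mbit_per_sec, duty_cycle=1.0):
    """GB written per day for a per-camera bitrate and a recording duty cycle."""
    bits_per_day = cameras * mbit_per_sec * 1e6 * 24 * 3600 * duty_cycle
    return bits_per_day / 8 / 1e9

print(daily_writes_gb(4, 4.0))       # 4 cams, 24/7 @ ~4 Mbit/s   -> ~173 GB/day
print(daily_writes_gb(4, 1.5, 0.1))  # motion-only (~10%), 10 fps -> ~6.5 GB/day

Either way, a dedicated 1TB cache would absorb a full day of recording easily, and the overnight mover flush is one large, mostly sequential write, which is exactly the pattern SMR handles well.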

 

I can't imagine playback would be an issue from the SMR drives, as they clearly handle Plex etc. well.

Link to comment

Playback is never an issue => there's NO difference in the read capabilities of shingled vs PMR drives.  It's only a writing issue.

 

If you cache the writes, that would indeed minimize the likelihood of filling the persistent cache, since virtually all of the writes would be contiguous, and those go directly to the shingled area instead of the persistent cache.  As I noted earlier, the only way to know for SURE is to just try it.

 

If you record motion only, that alone will almost certainly reduce the write load enough that this won't be a problem.

 

Link to comment
  • 3 weeks later...

Will these drives hold up to LOTS of reads?

 

I want to use them in a torrent server - expect LOTS of reads from them over time. The files will only ever be written to HDD ONCE.

 

I collect BD-25s and BD-50s, so these are big files. I typically leave them seeding for 6+ months at a time, during which they're read from quite a bit, but once they're written to the drive they never get deleted or moved. I expect to keep the files for the life of the drives, so the only real wear is from reads.

 

Given this use case, how will they hold up?

Link to comment

FWIW, I have 8 of these drives in an unRAID (7D+1P) setup for 56TB protected.

 

They're solid drives, designed for WORM (write once, read many) workloads, which is why I chose them: I do single writes of data and rarely delete.

 

I haven't had any issues with them, and they hold up well with a 500GB SSD cache in front of them.

 

 


Link to comment

Why do these drives have such bad reviews? Roughly 20% of reviews on Amazon claim the buyer bought 2-4 of them and all failed within 6 months. I just don't believe it. I suspect these folks are throwing them into a RAID array, and when the drives drop out (they will), they assume they died. I also see lots of complaints about the drives silently corrupting data. Seagate claimed they'd have 10/12/14TB versions of these drives years ago; it seems they abandoned the technology because it wasn't well received?

 

I have 16 of these in my server, including parity, and they've been running 24/7 for 8 months with zero issues. All were run through preclear twice. Still not a single SMART error on any of them.

 

Just ordered 6 more... maybe I got a good batch before and will regret posting this.  :-X

Link to comment

I've only used a few, but have also had zero issues.  I can only surmise that those who have problems are using them in write-intensive scenarios where they frequently hit the "wall" of a full persistent cache, which makes the drive's performance drop to absolutely abysmal levels until it has enough idle time to "catch up" on writes to the shingled area.  When that happens, the drive effectively "freezes" -- a "failure" in the eyes of the user.
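
To illustrate the mechanism, here's a toy model; the cache size and rates are made-up illustrative numbers, not Seagate's actual figures:

# Toy model of an SMR drive's persistent (media) cache -- numbers are illustrative.
CACHE_MB   = 20 * 1024  # assumed persistent-cache capacity
DRAIN_MBPS = 30         # assumed background destage rate to the shingled zones
STALL_MBPS = 5          # assumed throughput once the cache is completely full

def avg_throughput(host_mbps, seconds):
    """Average MB/s the host sees for a sustained random-write workload."""
    cache_mb, written = 0.0, 0.0
    for _ in range(seconds):
        rate = host_mbps if cache_mb < CACHE_MB else STALL_MBPS
        cache_mb = max(cache_mb + rate - DRAIN_MBPS, 0.0)  # net fill this second
        written += rate
    return written / seconds

print(avg_throughput(150, 600))  # heavy writes: collapses once the cache fills
print(avg_throughput(25, 600))   # light writes: destaging keeps up, full speed

The point is that sustained throughput settles at whatever rate the drive can destage to the shingled zones, which is why idle time "resets" the drive.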

 

Clearly in those cases, they shouldn't be using shingled drives.  But as long as the use case is one where Seagate's mitigations work well, the drives are just fine.

 

One thing that HAS happened since these were released, however, is that manufacturers have been able to increase platter density to as much as 1.33TB/platter ... so they can make 8TB and larger drives using standard PMR technology -- this has likely slowed Seagate's plans for larger SMR drives, at least until they can adapt SMR to the higher-capacity platters.  They're already shipping 10TB IronWolf and BarraCuda units that use traditional PMR technology -- so there's no reason to make an SMR version at this point.

 

 

 

Link to comment

I've read most every page of this thread but can't seem to find anyone who addresses a solution besides pre-clearing and then rebuilding. One of my data disks, an 8TB V2 Seagate, was disabled by unRAID because of 'read errors', but the drive then passed both short and long SMART tests. How do we re-enable the drive without pre-clearing and rebuilding? I'm thinking it was just a loose connection in the hot-swap bay.

 

ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f 114   099   006    Pre-fail Always  Never       70716432
  3 Spin_Up_Time            0x0003 091   091   000    Pre-fail Always  Never       0
  4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  Never       134
  5 Reallocated_Sector_Ct   0x0033 100   100   010    Pre-fail Always  Never       0
  7 Seek_Error_Rate         0x000f 084   060   030    Pre-fail Always  Never       258740560
  9 Power_On_Hours          0x0032 096   096   000    Old_age  Always  Never       4142 (5m, 19d, 14h)
 10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  Never       0
 12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  Never       49
183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  Never       0
184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  Never       0
187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  Never       0
188 Command_Timeout         0x0032 100   100   000    Old_age  Always  Never       0
189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  Never       0
190 Airflow_Temperature_Cel 0x0022 068   053   045    Old_age  Always  Never       32 (Min/Max 28/39)
191 G-Sense_Error_Rate      0x0032 100   100   000    Old_age  Always  Never       0
192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  Never       1646
193 Load_Cycle_Count        0x0032 098   098   000    Old_age  Always  Never       4359
194 Temperature_Celsius     0x0022 032   047   000    Old_age  Always  Never       32 (0 15 0 0 0)
195 Hardware_ECC_Recovered  0x001a 114   099   000    Old_age  Always  Never       70716432
197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  Never       0
198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline Never       0
199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  Never       0
240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline Never       2739 (24 10 0)
241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline Never       62043713366
242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline Never       242636153850

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error      00%       4114          -
# 2  Short offline       Completed without error      00%       4098          -
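
For anyone wanting to spot-check a report like this on their own drives: the raw values that actually matter are attributes 5, 187, 197, 198, and 199 (all zero above); the huge raw numbers on 1, 7, and 195 are normal for Seagates. A quick sketch, assuming smartctl is installed (/dev/sdX is a placeholder):

import subprocess

# RAW_VALUE of these attributes should stay at (or very near) zero.
WATCH = {"5", "187", "197", "198", "199"}

out = subprocess.run(["smartctl", "-A", "/dev/sdX"],  # replace /dev/sdX
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    fields = line.split()
    if len(fields) >= 10 and fields[0] in WATCH:
        print(f"{fields[1]:24s} raw={fields[9]}")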

Link to comment

I've read most every page of this thread but can't seem to find anyone who addresses a solution besides pre-clearing and then rebuilding. One of my data disks, an 8TB V2 Seagate, was disabled by unRAID because of 'read errors', but the drive then passed both short and long SMART tests. How do we re-enable the drive without pre-clearing and rebuilding? I'm thinking it was just a loose connection in the hot-swap bay.

SMART looks OK, and you certainly don't need to preclear, but what makes you think you want to avoid rebuilding?

 

unRAID disables a disk when a write to it fails. That failed write was still used to update parity, though, so the data from the failed write is in the array, as is any write that may have happened after the disk was disabled. The disk should be rebuilt because its on-disk contents are now out of date; the valid contents are in the array and can be restored by rebuilding the disk.
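
To make the "valid contents are in the array" point concrete: single parity is the XOR of each byte position across all data disks, so any one disk can be regenerated from parity plus the survivors. A toy illustration of the principle (not unRAID's actual code):

# Single parity: parity byte = XOR of that byte position across all data disks.
disk1 = bytes([1, 2, 3, 4])
disk2 = bytes([9, 8, 7, 6])
disk3 = bytes([5, 5, 5, 5])
parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

# disk2 is disabled and its contents are stale; rebuild it from parity + the rest.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, disk1, disk3))
assert rebuilt == disk2  # every write that updated parity is recovered

Since unRAID kept updating parity after the disk was disabled, this reconstruction yields the current contents, not the stale ones on the physical disk.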

 

The only other way to re-enable a disabled disk is to rebuild parity instead, so you wouldn't save any time, and you would lose all of those writes, some of which may have been critical for maintaining that disk's filesystem.

 

Rebuild.

Link to comment


Here is the wiki link:

 

What do I do if I get a red X next to a hard disk?

Link to comment
  • 2 weeks later...

I'm still troubleshooting an issue with these drives under unRAID. If I disable parity and transfer ~40GB files from one disk to another, the speed is all over the place: it'll go 200MB/s, then drop to 120MB/s for a bit, then back up to 200MB/s, over and over. If I queue up another transfer using two different disks, both transfers show the same varied sweeps, and they both lose and gain speed at the same time. If I do the same between 4TB WD Red drives, this doesn't happen at all. I see the same thing during a parity check: it will start at 200MB/s, then about a minute in drop to 160MB/s and stay there for a few minutes before returning to 200MB/s. This cycle repeats for the entire parity check, making it take about 3 hours longer than it should.

 

Given it doesn't happen with the Red drives, it makes me think it's the Archive drives. However, the fact that I can queue up multiple transfers from multiple disks and the speeds drop and rise at exactly the same times/speeds makes me point to unRAID, an I/O issue, and/or the SAS2LP controllers.
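
One way to isolate it further would be to read the raw devices directly, bypassing unRAID's md driver and the filesystem, and log throughput per chunk; if the sawtooth shows up there too, it's the drives or the controller. A rough sketch (needs root; /dev/sdX is a placeholder, and the page cache will smooth things slightly):

import time

CHUNK = 64 * 1024 * 1024  # 64 MiB sequential reads

with open("/dev/sdX", "rb", buffering=0) as dev:  # replace /dev/sdX
    while True:
        t0 = time.monotonic()
        data = dev.read(CHUNK)
        if not data:
            break
        print(f"{len(data) / (time.monotonic() - t0) / 1e6:7.1f} MB/s")

Running one instance per drive at the same time would also show whether the dips are synchronized at the device level (pointing at the controller) or independent (pointing at the drives).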

 

I've tried many tunable settings, but here are the current ones:

Tunable (enable NCQ): No

Tunable (nr_requests): 9

Tunable (md_num_stripes): 2560

Tunable (md_sync_window): 1280

Tunable (md_sync_thresh): 640

Tunable (md_write_method): Auto

 

Anyone else experience this?

 

Link to comment
  • 4 weeks later...

I copied 6TB from an 8TB Archive (ST8000AS0002) to an 8TB NAS (ST8000VN002) and saw a consistent 180-200MB/s for the whole transfer (using Intel H170 SATA, running Win10 Pro).  I've never seen inconsistent read performance like that.

 

I have to say, I have no regrets about buying the Archive drives; they're quiet, fast, and run cool.  So far, the pair I have has given flawless performance for well over a year.  They're also the original Archive v1 drives, so they aren't even as good as the ST8000AS0022 Archive v2 units.

Link to comment
  • 3 weeks later...

Using unRAID 6.3.2, I have a Seagate Archive 8TB ST8000AS0002 as my parity drive, attached to an IBM M1015 PCIe SAS/SATA controller.

 

I'm considering swapping in a Western Digital Red 8TB WD80EFZX to be the new parity drive and moving the ST8000AS0002 to be a data drive.

 

Should this present any problem in terms of the controller seeing the WD80EFZX as smaller than the ST8000AS0002?

 

A long time ago I seem to remember there being an issue where some like-sized drives from different manufacturers were seen by the controller as larger or smaller depending on something (cylinder counts, maybe?).

 

Looking at the specs below, I don't see anything to help me here.  Any input appreciated.

 

ST8000AS0002 specs

WD80EFZX specs

 

On a related note, I'd be interested in any comments on the conditions under which the WD80EFZX, at considerable extra expense, might be preferred.  Like many here, I don't believe I've seen any performance issues with the ST8000AS0002, but I'm curious whether the WD80EFZX would be a worthwhile upgrade for whatever reason(s).

Link to comment

No, they are both exactly the same size.   All you would need to do is ...

 

(a)  Do a parity check to confirm all is well before you start.

(b)  Swap the parity drive and wait for the rebuild of the new drive (the WD Red)

(c)  Do a parity check to confirm that went well.

(d)  Now add the old 8TB shingled drive to your array.

 

If you haven't seen any issues with the shingled drive, it's probably not really necessary to make this switch; but it IS true that if you're ever going to hit the performance "wall" with a shingled drive (i.e. a full persistent cache), it's most likely to happen on the parity drive.  The condition that would make this likely is a lot of simultaneous write activity by different users.  If that never happens in your use case, you're probably okay to just leave well enough alone.  I agree, however, that the WD Reds are clearly better drives -- not only because they're not shingled, but also due to the helium-sealed enclosure, which results in lower power draw and lower temps.

 

Link to comment

Thanks for the quick reply and info, greatly appreciated.  I should have added that, based on some recent SMART reports for one of my disks, I foresee the need for a new data drive in the near future to replace an older, failing, smaller one.  When this has happened before, I've used the 'parity swap' procedure to take advantage of buying a larger drive than anything currently in the array, then used the old parity drive to replace the failed one.

 

I was just thinking of getting some improvement in the new drive I'd use for parity.  It sounds like the WD Red might be a good choice for me given that cost is not a barrier, though I would say that a lot of simultaneous write activity by different users would not be a factor in my case.

Link to comment

Seems Backblaze has taken a liking to the Seagate 8TB Archive drives. They're seeing about a 1.5% failure rate. Not as good as the now-venerable 4TB HGST, but pretty darn good! I recently bought two and they're working nicely.

 

I fear they may be discontinued, as they may be hurting sales of the more expensive (and apparently far less reliable) Seagate PMR drives.

 

For now, though, I'd find it hard to recommend any other disk for new unRAID purchases, based on their price and reliability. Good, fast, cheap: pick all three!

Link to comment

Agree the Archive disks are an excellent choice -- although I've been tending towards the 8TB Reds when I need large drives these days.  The helium-sealed technology is probably more of a factor in that choice than the PMR recording, although it's nice to have both.  However, if I were more price-sensitive I'd certainly stay with the Archive units.

Link to comment

The Hard Drives Most (and Least) Likely to Fail, According to Backblaze

 

Actual BackBlaze Report

 

Interesting highlights from first article ...

 

"Perhaps most importantly though, the overall number of failures for the year [2016] is only 1.95%, down from 2015's 2.47% and 2014's 6.39%."

" As for failures by vendor for the year, WDC tops the list, with a 3.88% failure rate ..."

" ranging back all the way to 2013, ... one particular Seagate model [ST1500DL003] had an astounding failure rate of 90.92% "

 

My highlights:

 

Seagate 8TB Archive failure rate - 1.62% - Over 8600 of them are deployed.

Virtually all their 8TB drives are Seagate (except for one pod of 45 HGST helium drives, with 0% failure).

WDC 8TB not rated, but 6TB RED failure rate - 5.49%

HGST failure rate - 0.60%

Drive counts: Seagate: 45K, HGST: 25K, WDC: 1.5K, Toshiba: 0.25K
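
For context on how Backblaze computes those percentages: the annualized failure rate is failures per drive-year, using drive-days rather than raw drive counts, so small pods like the 45-drive HGST one carry big error bars. A quick sanity check (the failure count here is back-solved for illustration, not Backblaze's reported figure):

# Backblaze-style annualized failure rate (AFR), in percent.
def afr_percent(failures, drive_days):
    return 100.0 * failures / (drive_days / 365.0)

# ~8,600 Archive drives over a full year, with an assumed 139 failures:
print(round(afr_percent(139, 8600 * 365), 2))  # -> 1.62 (%)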

Link to comment
1 hour ago, bjp999 said:

WDC 8TB not rated, but 6TB RED failure rate - 5.49%

There's virtually no relationship between the 6TB Reds and the 8TB helium-sealed units, which are effectively just lower-speed (and lower cost) versions of the 8TB HGSTs [which, as you noted, had a 0% failure rate >:( ].    In fact, if you put the 8TB Red next to an 8TB HGST they look absolutely identical except for the names.

 

Link to comment

I can remember suggestions for the 6TB Reds. ;-) Just sayin' ...

 

No data on the 8TB Reds yet. Is helium leaking an issue? My understanding is that HGST and WDC are operating independently. I'd be surprised if they're the same under the hood.

 

8TB Seagates at <$27/TB is pretty appealing. Not sure I'd pay a $100 premium for helium even if it was great. There are some roles in my array where I wouldn't use SMR (parity, cache, and disks with a lot of small files and high data turnover) -- but for large files that are largely WORM, they're a great fit.

Link to comment

The agreement WD has with China's Ministry of Commerce (MOFCOM) required separate assembly lines for 2 years (which expires this year), but did not preclude using the same technologies between the brands.  As I noted above, if you put an 8TB WD Red next to an 8TB HGST UltraStar He8 they are absolutely identical in every way except for the labeling.  They do, I'm sure, have different firmware, and they clearly run at different speeds; but I suspect there's little if any difference in reliability -- in fact, the Reds may be the more reliable of the two, simply due to the lower stresses of running at a lower RPM.

 

Indeed, I suspect the new 8TB WD Gold drive (a 7200 RPM helium-sealed unit) is very likely identical to the HGST UltraStar -- although it has to be made on a different line, since the 2-year separation requirement hasn't yet expired.

 

Reading some articles about the WD 8TBs on various IT sites (StorageReview, etc.), you can find the following tidbits regarding the WD helium-sealed units ...

 

"... WD is using their HelioSeal helium-technology to get the higher capacity much like the HGST Ultrastar Helium Drives. ..."

 

" ... The limited information we have ... indicates that it is nearly a carbon copy of the HGST Ultrastar He8 Series  "

 

"  ... WD ... indicated that it is employing technologies across both brands, which includes, but is not limited to, mechanical components, electronics and firmware."

 

 

1 hour ago, bjp999 said:

No data on the 8TB Reds yet. Is helium leaking an issue?

I very seriously doubt it => I've certainly not read anything to suggest that.   I've installed 7 of these drives -- most in May of last year -- and they're all performing perfectly in 24/7 operation.

 

As I already noted, I'm not saying the archive units aren't excellent drives -- I have a few of them as well; but once the helium-sealed Reds came out, that's all I've been buying, and I'm VERY pleased with their performance.    The cost difference simply doesn't matter to me => but if the extra $75 or so (figure ~ $10/TB) is important to you; then by all means go for the savings.

 

Link to comment

If you're talking about prices, can you compare these Seagates? (These prices are in EUR, in Latvia/Europe):

- Archive HDD ST8000AS0002 - 305 EUR

- SkyHawk Surveillance HDD ST8000VX0022 - 305 EUR

- IronWolf ST8000VN0022 - 310 EUR

- Desktop HDD ST8000DM002 - 330 EUR

 

The Archive and Desktop are 5900 RPM, while the SkyHawk and IronWolf are 7200 RPM.

If you want a low-RPM PMR drive, then the Desktop is the only option.
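
Per-TB those prices work out nearly identical anyway; a trivial comparison:

# EUR per TB for the quoted prices (all 8TB drives).
prices_eur = {
    "Archive ST8000AS0002": 305,
    "SkyHawk ST8000VX0022": 305,
    "IronWolf ST8000VN0022": 310,
    "Desktop ST8000DM002": 330,
}
for model, eur in sorted(prices_eur.items(), key=lambda kv: kv[1]):
    print(f"{model:22s} {eur / 8:5.1f} EUR/TB")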

Link to comment
