Seagate 8TB Shingled Drives in UnRAID



Still, to this day - almost a year on - so happy with these drives. I still don't understand those who are not choosing these as parity drives, or who choose H/W RAID1 (2 x 4TB) to act as parity! They are just perfect for unRAID as a data or parity disk IMHO!

 

Agree => while I still haven't built a system using these drives, it's purely because I simply don't need the extra space (yet). My next system will absolutely use the 8TB SMR units (unless there's a larger one available at ~ the same cost/TB). Your experience has absolutely shown that Seagate's mitigations of the shingled limitations completely eliminate any concerns for the UnRAID use case (at least for the vast majority of users).

 

Link to comment


Not in unRAID, but I managed to kill two of these drives with FlexRAID... Both times they died while acting as the parity drive, and both times during the initial parity creation (i.e. when writing approx 6-7TB continuously to the drive). Not sure I would ever want to risk data on one of these again. Sure, as a data drive IF I was writing to it in smallish chunks, but copy 6-7TB of data to it at max speed and it consistently died... YMMV... I'm back on tried-and-tested traditional drives now.

Link to comment

As I noted above, they work great "... for the vast majority of users ..."  ==>  there are definitely use cases where the usage will "hit the wall" of shingled drive performance, which slows everything down to a C..R..A..W..L  (a VERY slow crawl indeed) and is completely unacceptable.

 

Hitting that "once in a blue moon" isn't catastrophic (as long as you recognize what happened and simply wait it out) ... but clearly any user whose typical usage is going to do that should NOT be using shingled drives.
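For anyone unsure whether their workload is the kind that hits it: a crude baseline check (a sketch only, assuming GNU dd with status=progress and a throw-away file on an otherwise idle array disk mounted at /mnt/disk1) is to watch the sustained rate during a long write:

  # write ~100GB sequentially and watch the reported MB/s; delete the test file afterwards
  dd if=/dev/zero of=/mnt/disk1/smr_test.bin bs=1M count=100000 oflag=direct status=progress
  rm /mnt/disk1/smr_test.bin

Long sequential writes like this normally stay fast on the Archive drives; it's sustained random or rewrite-heavy traffic that fills the drive's persistent cache, and a real workload whose rate collapses to single-digit MB/s and stays there is the "wall" being described.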

 

Link to comment


I can live with CRAWL.. but death and loss of data are more serious than a slowdown ;)  Still, as I said.. YMMV...

Link to comment


Did you exercise the drives with some sort of burn-in and test program before using them in FlexRAID? I know FlexRAID doesn't need to have them precleared, but if you didn't preclear the drives or do something similar they could have just been infant mortality drives.
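For reference, a minimal burn-in along those lines (just a sketch; /dev/sdX is a placeholder, badblocks -w is destructive, and a full run takes days on an 8TB drive) could be:

  smartctl -a /dev/sdX > smart_before.txt    # baseline SMART report
  badblocks -wsv -b 4096 /dev/sdX            # destructive multi-pattern write/verify over the whole disk
  smartctl -a /dev/sdX > smart_after.txt     # compare reallocated/pending sector counts afterwards

A drive that gets through that with the reallocated and pending sector counts still at zero has at least been exercised end to end before it holds real data.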

Link to comment


No, no preclear of any kind. Yes, they could be infant mortality drives (from 2 unrelated batches)... I guess I must just be unlucky...  But Seagate are not fooling me a 3rd time ;)

Link to comment


Perhaps you mentioned what type of failure those two drives had in some other thread...  But just stating that they 'died' in this thread is fairly ambiguous.

 

From my own anecdotal experience (using these drives in unRAID as both parity and data), I've thus far precleared 7 of them (3 cycles each) without obvious issue (1 parity, 5 data, 1 spare).

Link to comment


Going from memory, I think it was the reallocated sector count that shot up, or something like that... many thousands of errors anyway.

 

 

 

BUT, I might be tempted back by the testing done here.

 

So, is it 3 preclear runs and then build the array with parity enabled and start copying data over like that, OR the usual route: build the array without parity, copy the data over, and then enable the parity build?

 

 

Link to comment

So I decided to build an unRAID box and have 3 of these doing preclear right now. The first round took 60 hours to clear; only 120+ more hours to go...

 

I'm using this script: wget http://bit.ly/1G44UhZ -O /boot/config/plugins/preclear.disk/preclear_disk.sh  Hopefully it's the new fast one...

 

========================================================================1.15
==
== Disk /dev/sdb has successfully finished a preclear cycle
==
== Finished Cycle 1 of 3 cycles
==
== Using read block size = 1,000,448 Bytes
== Last Cycle`s Pre Read Time  : 20:09:55 (110 MB/s)
== Last Cycle`s Zeroing time   : 18:12:11 (122 MB/s)
== Last Cycle`s Post Read Time : 21:54:35 (101 MB/s)
== Last Cycle`s Total Time     : 60:28:09
==
== Total Elapsed Time 60:28:09
==
== Disk Start Temperature: 27C
==
== Current Disk Temperature: 27C
==
== Starting next cycle
==
========================================================================1.15
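In case it helps anyone else starting out: kicking off a 3-cycle run is typically something like the following (a sketch; the -l and -c options are what the commonly posted usage examples for preclear_disk.sh show, so double-check against the version you actually downloaded):

  preclear_disk.sh -l               # list disks that are not assigned to the array
  preclear_disk.sh -c 3 /dev/sdb    # run 3 preclear cycles on the chosen disk

Running it inside a screen session is a good idea, since each cycle on an 8TB drive takes the better part of three days.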

Link to comment

So, is it 3 preclear runs and then build the array with parity enabled and start copying data over like that, OR the usual route: build the array without parity, copy the data over, and then enable the parity build?

 

Either way will work; use whichever is most convenient for you.

 

I'd like to know for sure that someone has built a "busy" array by copying data over and then doing parity after... It was the act of creating parity that twice killed these drives for me under Windows and FlexRAID, so I am ultra cautious. I just started experiencing reallocated sectors on my six-week-old 5TB Toshiba drives: one to start with, and I woke this morning to find four of them reporting reallocations. I don't want to jump onto 8TB Seagates and get the same problems!

Link to comment


I did that on mine, and also upgraded one 3TB drive to an 8TB, which is the same “workout” as building parity.

Link to comment

I don't know how the parity drive is calculated, i.e. is the whole drive written to even if the data drives are only 20% full? I'm really freaked out, having had problems with the Seagates before and now finding the Toshibas in pre-fail after 60 days... Sure, they are under warranty, but I need drives to put my data on, so I have to buy new drives and am then left with warranty replacements to sell. Either way it's a big hit financially. Do I buy more Toshibas or Seagates? Decisions, decisions!

Link to comment


The initial parity sync will always write 100% of the disk, even if all data disks are empty.

 

After that, parity is updated incrementally, i.e. if you write 20GB to a data disk, the same amount will be written to parity.
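For anyone wondering what "parity" physically is here: unRAID's single parity is, conceptually, the bitwise XOR of the corresponding bits on every data disk, which is why the initial sync has to touch 100% of the parity disk no matter how empty the data disks are. A toy illustration with one (made-up) byte from each of two data disks, using plain bash arithmetic:

  D1=0xA7; D2=0x3C                                      # one byte from each data disk
  P=$(( D1 ^ D2 ))                                      # parity byte = XOR of all data bytes
  printf 'parity byte       = 0x%02X\n' "$P"            # 0x9B
  printf 'rebuilt data byte = 0x%02X\n' $(( P ^ D2 ))   # XOR parity with the surviving disk recovers 0xA7

Lose D1 and the array can rebuild it bit for bit from parity plus the remaining data disks.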

 

Link to comment


unRAID creates redundancy on a drive-by-drive basis, and to unRAID a drive is always full of 1s and 0s; it has no idea what they mean.

 

Different drives from different manufacturers do have different reliability characteristics. I've had excellent luck with HGST drives (previously Hitachi). Toshiba got some of that technology, and the 5TB drives they made have also been reliable for me. I have no experience with the newer X300 models; I don't think they use the Hitachi technology.

 

Of all the drives I have owned, Seagates have been the most unpredictable: some models have been awful, others good. I have not jumped into the 8TB world, but experience here appears good.

 

The idea that using a drive for its intended purpose is somehow going to make it fail sooner is flawed. If they are going to fail under normal usage, you want that to happen early so you can return them and get new ones. Although there are things you can do to make drives fail early, like running them at very high temps, copying large amounts of data to them is not dangerous at all. Do it, let the chips fall where they may, and then return the bad ones for a factory-new replacement (if possible) or RMA them (if past the return date). Take the lessons learned to your next purchase decision.

 

We tend to think of finding SMART issues as a bad thing, but it is far better to get these errors during preclear and weed out the bad drives before your valuable data is at risk.

 

Good luck!

Link to comment


What he said.

 

You were either unlucky with your previous disks or have some underlying issue, like bad cooling or an inadequate power supply.

Link to comment

Although there are things you can do to make drives fail early, like running them at very high temps, copying large amounts of data to them is not dangerous at all.

 

It's interesting, though, that enterprise-class hard disks have a maximum recommended annual data throughput assigned to them. I haven't done the calculation to find out whether it would be possible to exceed this in practice by constantly reading or writing - like an endless series of preclear cycles - or whether the figure is purely theoretical, but in the absence of throughput figures for consumer-grade disks the assumption has to be that theirs are somewhat lower.
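As a back-of-the-envelope check (using commonly quoted figures rather than anything official): one preclear cycle on an 8TB drive moves roughly 24TB (pre-read + zeroing + post-read), and the log earlier in the thread shows about 60 hours per cycle, so back-to-back cycles would amount to roughly 24 x 8760 / 60, i.e. around 3,500TB of traffic per year. Workload ratings typically quoted are on the order of 550TB/year for enterprise drives and 180TB/year for archive-class drives, so continuous preclearing could exceed the rating on paper, while a typical home media server writing a few TB a year doesn't come anywhere close.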

Link to comment


You did not exceed the drive's rated workload with a few preclear cycles and then filling the drive.

 

Enterprise drives are typically for server applications with very high usage.

 

Our unRAID media servers tend to be very light usage applications.

 

Short response - don't sweat it. :)

Link to comment

We tend to think of finding SMART issues as a bad thing, but it is far better to get these errors during preclear and weed out the bad drives before your valuable data is at risk.

 


Which is exactly the reason we preclear. An error might only show up after several writes, and an unRAID system typically stores data in a write-once-read-many way (mostly large media folders), which means a flaky sector can very easily go unnoticed. Preclearing takes that out of the mix... largely, not completely.

 

I have precleared a lot of drives (20+) and only one has shown quickly rising SMART errors; I returned it 5 days later and got a new drive.

Link to comment
  • 2 weeks later...

some advice needed..

 

Just bought 2 new Seagate 8TB drives.

I have another 4 Seagate 8TB drives in my HP MicroServer running XPEnology, but I would like to build one big pool with all 6 drives in a new unRAID machine.

6 drives in total, 1 drive for parity.

2 drives are new (unformatted disks); the other 4 disks are full of data.

What is the best practice to start building my new unRAID server?

 

1) How many preclear cycles are advised?

2) Can we preclear all the drives at once, or one by one?

3) When do we get better write speeds: on a pool with a parity drive or without one?

4) Is it preferable to move my files over first and then set one drive as parity, or the opposite?

 

thanks

Link to comment

1) As many as you feel comfortable with; I do 3 cycles.

2) You can preclear several drives at the same time.

3) Parity increases protection but heavily impacts write performance, so no parity gives much better write speeds, but a drive failure then means data loss (see the note just below this list for why writes are slower).

4) Up to you. Moving the files first and then adding parity will make the move faster, but should a drive fail during the move you will lose your data. Adding parity first means the move will go slower, but losing a drive will not cause data loss.
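Note on 3): the write penalty comes from the read/modify/write that unRAID performs on every array write in its normal write mode: it reads the old data block and the old parity block, computes new parity = old parity XOR old data XOR new data, and then writes both, so one logical write becomes four disk operations (two reads plus two writes), which is why write speed is often roughly half the raw drive speed or less.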

 

What I would do:

 

3 preclear cycles, two drives at a time.

If you are copying from another system, you still have your data on that old system, so losing one of the new drives will not cause data loss; I would copy first, then add parity on the new system.

Link to comment
