Seagate 8TB Shingled Drives in UnRAID



4 minutes ago, garycase said:

 

The ones I've seen reported were a bit over 15 hours, on arrays with all 8TB Seagate Archive drives.

 

 

Yeah, from what I can gather in this thread, the reported times are around 15.5 hours, give or take half an hour. That's not bad considering how cheap they are, and it's definitely under an entire day.

 

I likely wouldn't run a mixed array if only because of that, but then again I don't know what I'd do with the 5 older 4TB drives.



1 hour ago, BRiT said:

Pardon me for asking a few questions which might already be covered in the 13 pages of this thread, but here goes anyways (while I go back to the first page and start re-reading the entire thread).

 

My current array is 5 drives of 4TB HGST 7200rpm that have parity check times around 8 hr, 43 min, 42 sec. My 4TB drives are a little over 2 years old and looking perfect; however, I'm running out of space. My options are to add another 4TB HGST drive or to expand to larger drive sizes. All the talk of larger drives and solid performance has me considering a large size upgrade. The 8TB Seagate Archive drives are so dirt cheap (~$225 after a 10% off coupon) compared to other options that they're very tempting, but I have a question about how they might impact parity checks. My last upgrade was swapping in 4TB 7200rpm drives for old 2TB 5900rpm ones, where despite doubling the size of the data array, the parity checks took nearly the same time.

 

What are the parity check times of arrays that only run 8TB Seagate Archive drives? How does that compare to non-archival 8TB arrays?

 

 

 

I guess I would ask how many more drives your server could support, and how long 4T would take you to fill. If you've got 4+ more bays, and 4T is going to take you a year+ to fill, I'd stick with the 4Ts and just add one for $145. A year from then you can add another one to last another year. You'd be good for 2 years for $290, and still have ample room to expand. And your parity checks would stay nice and short.

 

But if 4T is only going to last 6 months, and your bays are running short, I'd suggest going to 8T. Two of them would net you 12T, and you'd be set for 18 months. $225 (actually you could buy for $210) for 8T is very inexpensive storage. Then you could buy another 8T Archive, which would likely get you to the 2.5 year mark. That's as far out as I would plan. You'd have spent $630 for 20T of usable storage (plus the 4T contribution to the parity size). You'd lose some time on parity checks, but for most that's not a big deal. The extra storage is more than worth it!
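The arithmetic above can be sanity-checked with a quick sketch, assuming the ~8TB/year fill rate implied by "4T is only going to last 6 months" and the street prices quoted in this thread (these are this post's assumptions, not market data):

```python
# Rough cost/runway estimate for the three-8TB-drive plan described above.
# Prices and the fill rate are assumptions taken from this thread.

def runway_months(usable_tb, fill_tb_per_year):
    """Months of headroom from `usable_tb` of new space at a given fill rate."""
    return usable_tb / fill_tb_per_year * 12

fill_rate = 8.0        # TB consumed per year (implied by "4T lasts 6 months")
price_8tb = 210.0      # street price mentioned above

cost = 3 * price_8tb                  # three 8TB archive drives
months = runway_months(20, fill_rate) # post quotes 20TB usable after upgrade

print(f"${cost:.0f} for {months:.0f} months of headroom "
      f"(~${cost / 20:.2f}/TB usable)")
```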

 

Update: saw your post about jettisoning the 4TB drives. Definitely would not do that! Too much money wrapped up in them; selling won't net you enough to be worth it.

1 hour ago, BRiT said:

 

Yeah, from what I can gather in this thread, the reported times are around 15.5 hours, give or take half an hour. That's not bad considering how cheap they are, and it's definitely under an entire day.

 

I likely wouldn't run a mixed array if only because of that, but then again I don't know what I'd do with the 5 older 4TB drives.

 

 

https://forums.lime-technology.com/topic/37847-seagate-8tb-shingled-drives-in-unraid/?page=10#comment-500321

 

For convenience, here's a link to a post I made earlier with my timings.


I still run my pair of Archive drives, along with a venerable ST4000DM000.   My parity drive has recently been upgraded to a ST8000VN0002 (8TB Seagate NAS, now called IronWolf).  The Archive drives are quiet, quick, and I'm very satisfied with them still.  The 7200rpm NAS drive is also quiet, extremely fast, and does run a bit warmer but that's expected.  

 

With those drives it takes a little under 17 hours to do a parity build/check, that's with an i3-6100 pushing it along, and the server continuing to be in 'production'.


I have a mixed array of 2, 4, and now 8TB drives and thought I would do a little testing.

First off this is my initial build. http://forum.kodi.tv/showthread.php?tid=143172&pid=1229800#pid1229800

Since then I have replaced the CPU with an Intel® Core™ i5-3470, and just recently replaced the cache drive with a Samsung SSD 850 EVO 250GB along with a new Seagate Archive HDD v2 8TB 5900rpm parity drive.

Here is what my array looks like ...just FYI...

 

So here is a pic of my parity history with the 4 TB against the 8 TB.

 

So after reading about how fast everyone's parity checks were going, I decided to run a speed test on each drive. Here is the pic:

 

You can see that the 8TB holds up well and the 4TBs are cruising along, but the bottleneck is the 2TB drives I have.

I have ordered 3 more 8TB drives and am hoping to replace the 2TB drives. Then add a second parity drive.

Just thought I would post this for info.

 

main.jpg

Parity.jpg

chart.jpeg

46 minutes ago, Harro said:

but the bottleneck is the 2TB drives I have

 

If the SASLPs are still in the slots pictured, your main bottleneck is slot PCIE3, then the DMI; if it's still available, use the top PCIe slot (PCIE1):

 

Expansion / Connectivity
Slots - 1 x PCI Express 3.0 x16 slot (PCIE1: x16 mode)
- 2 x PCI Express 2.0 x16 slots (PCIE3: x1 mode; PCIE4: x4 mode)
- 1 x PCI Express 2.0 x1 slot
- Supports AMD Quad CrossFireX™ and CrossFireX™
 
*PCIe Gen3 is supported on 3rd Generation of Intel® Core™ i5 and Core™ i7 CPUs.
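To put rough numbers on why the slot matters: a multi-drive HBA forced into x1 mode shares one lane's bandwidth across every attached drive. A sketch using the standard per-lane PCIe payload rates; the ~20% protocol overhead and the 8-drive count are illustrative assumptions, not measurements of this board:

```python
# Per-drive bandwidth when an HBA sits in a narrow PCIe slot.
# Lane rates are standard PCIe payload figures (MB/s per lane);
# the overhead fraction and drive count are illustrative assumptions.

PCIE_LANE_MBPS = {"1.0": 250, "2.0": 500, "3.0": 985}

def per_drive_mbps(gen, lanes, drives, overhead=0.20):
    usable = PCIE_LANE_MBPS[gen] * lanes * (1 - overhead)
    return usable / drives

# An 8-drive card in the x1-mode slot (PCIE3) vs the x4-mode slot (PCIE4):
print(per_drive_mbps("2.0", 1, 8))  # x1 mode: a hard cap well below disk speed
print(per_drive_mbps("2.0", 4, 8))  # x4 mode: enough for spinning disks
```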
3 hours ago, johnnie.black said:

 

If the SASLPs are still in the slots pictured, your main bottleneck is slot PCIE3, then the DMI; if it's still available, use the top PCIe slot (PCIE1):

 

Expansion / Connectivity
Slots - 1 x PCI Express 3.0 x16 slot (PCIE1: x16 mode)
- 2 x PCI Express 2.0 x16 slots (PCIE3: x1 mode; PCIE4: x4 mode)
- 1 x PCI Express 2.0 x1 slot
- Supports AMD Quad CrossFireX™ and CrossFireX™
 
*PCIe Gen3 is supported on 3rd Generation of Intel® Core™ i5 and Core™ i7 CPUs.

Well, my endgame plan is to reduce my drive count and move 6 drives to the SATA ports on the motherboard. I could then take one SASLP card out and move to the PCIE1 slot for the remaining drives, if I keep this case and config. But I do understand the bottleneck you are describing. At the time of the build I just wasn't expecting to need much more storage. Oops.

9 minutes ago, Harro said:

But I do understand the bottleneck you are describing. At the time of the build I just wasn't expecting to need much more storage. Oops.

 

If you want to improve parity check speed with current setup try this:

 

6 disks onboard (use your fastest disks only, 4 and 8tb)

6 disks on SASLP #1 using PCIE1

5 disks on SASLP #2 using PCIE4

 

Divide the slower 2TB disks evenly between the two SASLPs.

 

With the right tunables this should give a starting speed of around 100MB/s, eventually decreasing a little during the first 2TB but speeding up considerably once past that mark, total parity check time should be well under 24 hours.
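The speed-up past the 2TB mark follows from how a parity check works: at each offset, throughput is bounded by the slowest disk that still spans that offset, so the check accelerates as smaller disks drop out. A toy model of that behaviour (the per-disk speeds here are illustrative guesses, not measurements of this array):

```python
# Toy model: parity check speed at offset x is limited by the slowest disk
# still spanning x; once the 2TB disks finish, speed jumps.
# Per-disk average speeds are illustrative assumptions.

def check_hours(disks):
    """disks: list of (size_tb, avg_mbps) tuples. Returns total check hours."""
    boundaries = sorted({size for size, _ in disks})
    hours, prev = 0.0, 0.0
    for b in boundaries:
        active = [mbps for size, mbps in disks if size >= b]
        span_tb = b - prev
        hours += span_tb * 1e6 / min(active) / 3600  # TB -> MB, MB/s -> hours
        prev = b
    return hours

# 5x 2TB (slow), 5x 4TB, 2x 8TB -- roughly the mix discussed above
array = [(2, 80)] * 5 + [(4, 120)] * 5 + [(8, 150)] * 2
print(f"{check_hours(array):.1f} h")  # comfortably under 24 hours
```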

1 hour ago, johnnie.black said:

 

If you want to improve parity check speed with current setup try this:

 

6 disks onboard (use your fastest disks only, 4 and 8tb)

6 disks on SASLP #1 using PCIE1

5 disks on SASLP #2 using PCIE4

 

Divide the slower 2TB disks evenly between the two SASLPs.

 

With the right tunables this should give a starting speed of around 100MB/s, eventually decreasing a little during the first 2TB but speeding up considerably once past that mark, total parity check time should be well under 24 hours.

I like this idea and may try it this weekend when I have some time.

Thanks for the suggestions

  • 4 weeks later...

I seem to have my array in a state where I can hit the so-called shingle wall. I have 3 of the Seagate 8TB drives: 2 data drives with about 700GB free, with the other as parity. I also have 2 8TB Reds: one a second parity, the other a replacement for a 3TB Green. When the mover kicks in and starts writing the 40GB files I have sitting there, or my 120GB disk image backups, as soon as about 20GB of data has been written the array grinds. I even see read performance issues on other disks. Stats show single-digit to no reads or writes for periods of time, and I get video playback issues.

 

 

4 hours ago, mejutty said:

I seem to have my array in a state where I can hit the so-called shingle wall. I have 3 of the Seagate 8TB drives: 2 data drives with about 700GB free, with the other as parity. I also have 2 8TB Reds: one a second parity, the other a replacement for a 3TB Green. When the mover kicks in and starts writing the 40GB files I have sitting there, or my 120GB disk image backups, as soon as about 20GB of data has been written the array grinds. I even see read performance issues on other disks. Stats show single-digit to no reads or writes for periods of time, and I get video playback issues.

 

 

That's not enough data to analyze why you're "hitting the wall" => but one thing I would suggest with the disk complement you have: move all of the data on your 2nd 8TB WD Red (the one you said replaced a 3TB Green) to a safe place -- perhaps on another system, or on another drive in your array if you have the space; then delete it all from the Red so you have an empty 8TB Red drive. Now do the following ...

(1)  Do a parity check to confirm everything is okay.

(2)  Note all of your drive assignments -- printing a copy of the Main Web GUI page is an easy way to do this.

(3)  Do a New Config, and assign two WD Reds as your parity drives -- the one that's already a parity drive, and the one you just cleared out (be CERTAIN you assign the correct two drives; if either of them has any user data on it, you'll lose it all). And assign the 8TB Seagate that was used as a parity drive as a data drive.

(4)  Start the array and wait for it to do a parity sync. It will show one unformatted drive -- the Seagate that was a parity drive before. You can leave it that way until the parity sync completes and then let it format; or you can let it format while it's doing the parity sync (this will slow both operations down, but the format will still complete relatively quickly and then the parity sync will run at full speed afterwards).

(5)  Now copy the data that was on the 2nd 8TB WD Red back to the array (if you moved it somewhere else).

 

It's not really clear from what you posted that you were "hitting the wall" => but since you have the WD Reds I'd use them as parity anyway ... this makes it FAR less likely you'll ever fill the persistent cache on the shingled drives (which is what's causing you to "hit the wall").
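The "wall" behaviour comes from the drive-managed SMR design: writes land in a persistent cache zone at full speed, and once that cache fills, the drive has to rewrite whole shingled bands, so throughput collapses. A toy model of that effect (the cache size and speeds below are illustrative assumptions, not Seagate specifications):

```python
# Toy model of a drive-managed SMR write: full speed until the persistent
# cache fills, then throttled to band-rewrite speed.
# Cache size and throughput numbers are illustrative assumptions.

def smr_write_seconds(total_gb, cache_gb=20, fast_mbps=150, slow_mbps=10):
    fast_part = min(total_gb, cache_gb)           # absorbed by the cache
    slow_part = max(0.0, total_gb - cache_gb)     # forced band rewrites
    return fast_part * 1000 / fast_mbps + slow_part * 1000 / slow_mbps

# A 40GB mover run: the first ~20GB is quick, the rest crawls.
print(f"{smr_write_seconds(40) / 60:.1f} min")
```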

 

9 hours ago, mejutty said:

8tb drives 2 data with about 700gb free with the other parity.

 

Yes, this limited free space will cause issues. My advice: buy an additional drive so the archive drives are not forced to garbage collect so much.


I am posting this to continue my contribution to this thread. As you all know I am a big supporter of these drives for the typical unRAID use case. My bi-directional transfer speeds are rock solid and the server supports several clients running concurrently 24 hours per day as well as early morning progressive backups.

 

For those who don't want to read up, as of 29th March my Array configuration consisted of:

 

Parity: 1 x Seagate 8TB Shingle

Data: 5 x WD 3TB Red and 2 x WD 3TB Green and 1 x Seagate 8TB Shingle

 

My monthly parity checks are: 

 

2017-03-01, 20:02:50 19 hr, 32 min, 49 sec 113.7 MB/s OK  
2017-02-01, 19:54:16 19 hr, 24 min, 15 sec 114.5 MB/s OK  
2017-01-01, 19:47:18 19 hr, 17 min, 17 sec 115.2 MB/s OK  
2016-12-01, 19:47:09 19 hr, 17 min, 8 sec 115.2 MB/s OK  
2016-11-06, 06:49:45 19 hr, 42 min, 11 sec 112.8 MB/s OK
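As a sanity check, the reported rates are just the parity size divided by the duration, assuming unRAID counts 8 TB as 8×10¹² bytes and reports decimal MB/s:

```python
# Recompute the reported average parity check rate from size and duration.
# Assumes decimal units (8 TB = 8e12 bytes, 1 MB = 1e6 bytes).

def avg_mbps(parity_tb, hours, minutes, seconds):
    return parity_tb * 1e6 / (hours * 3600 + minutes * 60 + seconds)

# The 2017-03-01 entry: 19 hr, 32 min, 49 sec over 8TB parity
print(f"{avg_mbps(8, 19, 32, 49):.1f} MB/s")  # matches the logged 113.7 MB/s
```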
 
         

On the 29th March 2017 I added a second Seagate 8TB Shingle as a second Parity.  The subsequent Sync record was: 

 

2017-03-29, 18:35:46 21 hr, 3 min, 48 sec 105.5 MB/s OK 0

 

On the 30th March 2017 I added a further Seagate 8TB Shingle as a data disk. The subsequent Clear record was:

 

2017-03-30, 09:57:57 14 hr, 59 min, 16 sec 148.3 MB/s OK 0

 

On the 1st April 2017 my monthly parity check ran. The record was:

 

2017-04-01, 21:00:28 20 hr, 30 min, 27 sec 108.4 MB/s OK 0

 

Following the parity check (as I rebooted to update to 6.3.3), one of the WD 3TB Greens failed, so I replaced it with a new Seagate 8TB Shingle. The subsequent rebuild record was:

 

2017-04-02, 19:13:52 22 hr, 54 min, 43 sec 97.0 MB/s OK 0

 

I then went on to replace the second WD 3TB Green in the system (as it was from the same batch as the one that had just failed) with another Seagate 8TB Shingle. The subsequent rebuild record was:

 

2017-04-03, 16:22:17 19 hr, 48 min, 40 sec 112.2 MB/s OK 0

 

For those not keeping score my new configuration is:

 

Parity: 2 x Seagate 8TB Shingle

Data: 5 x WD 3TB Red and 4 x Seagate 8TB Shingle

 

I am satisfied with these figures. I am currently running a parity check on the system to see if my ~ 19 hour average remains. I will post when it does.

Edited by danioj
1 minute ago, johnnie.black said:

 

This can't be a parity check with 8TB + 3TB disks, too fast.

 

You're right. That was the Parity Sync. I didn't realise there was a parity check that occurred in there (obviously as it was the 1st). I am correcting the post now.

8 minutes ago, danioj said:

You're right. That was the Parity Sync

 

Sorry, but it also can't be a sync with those disks; it can be the clearing of an 8TB disk (which is recorded in the history as a parity check), or a parity check/sync with 8TB disks only.


I have noted that syncs tend to be faster than checks -- I suspect it's because there's less rotational latency delay, since the writes to the parity drive can be asynchronous relative to the other drives. And in this case, the last 5TB of the process only involves 8TB drives. It's not likely a "clearing" of the drive, since a parity drive wouldn't be cleared.

 

6 minutes ago, garycase said:

I have noted that syncs tend to be faster than checks -- I suspect it's because there's less rotational latency delay, since the writes to the parity drive can be asynchronous relative to the other drives. And in this case, the last 5TB of the process only involves 8TB drives. It's not likely a "clearing" of the drive, since a parity drive wouldn't be cleared.

 

 

There may be a small difference, like 15 minutes, but not over 4 hours; that's simply impossible.


OK - sorry all, I completely screwed that post up. Please let me assure you, it was all coming from a good place. I didn't realise that clears were recorded in the parity check log as well.

 

NB: I think I will be asking LT for a change to the logging to indicate what it was that was actually run. But I'll get to that later ....

 

So based on my log and the timeline of events (before I edit the post again and get it wrong) my log is this ....

 

2017-04-03, 16:22:17 19 hr, 48 min, 40 sec 112.2 MB/s OK 0
2017-04-02, 19:13:52 22 hr, 54 min, 43 sec 97.0 MB/s OK 0
2017-04-01, 21:00:28 20 hr, 30 min, 27 sec 108.4 MB/s OK 0
2017-03-30, 09:57:57 14 hr, 59 min, 16 sec 148.3 MB/s OK 0
2017-03-29, 18:35:46 21 hr, 3 min, 48 sec 105.5 MB/s OK 0
2017-03-01, 20:02:50 19 hr, 32 min, 49 sec 113.7 MB/s OK 0

 

I am confused: if clears are logged as well, why are there not more entries?

 

I did the following (working from the bottom up):

 

- Added new 8TB parity disk (no clear was required just a sync) - I assume that was entry 2

- Replaced 3TB data disk with another 8TB disk (required a clear)

- Disk rebuild 3TB disk => 8TB disk

- Parity Check ran in that time - I assume that was entry 4

- Replaced 3TB disk with another 8TB disk (required another clear)

- Disk rebuild 3TB disk => 8TB disk

 

Given that list of events, I am finding it difficult to see what that 148.3 MB/s entry is. If it was a clear, then surely I would have a similar one from when I added a second 8TB disk that required a clear. I am confused.

Edited by danioj
8 minutes ago, johnnie.black said:

 

Replacement doesn't require clearing, only adding a disk to the array.

 

Something required a clear .... I remember it!

 

EDIT: Oh crap, I have seriously got brain fog. I did add another 8TB disk too. Sigh. So it goes something like this:

 

Entry 1: Parity Check

Entry 2: Parity Sync as a result of adding another 8TB Parity disk

Entry 3: Clear as a result of additional 8TB disk

Entry 4: Monthly parity check

Entry 5: 3TB => 8TB disk rebuild

Entry 6: 3TB => 8TB disk rebuild

 

God, I feel like I have just spammed this beloved thread! I will make the edits to the original post. What fluff.

Edited by danioj

I always preclear my disks outside the array in another machine, but the one time I did take a drive straight from its antistatic bag and placed it in my array, I believe it did a preclear before adding it to the array.  It was years ago, but if I recall correctly, the array was down for something like 30 hours total, with me banging my head against the wall the entire time (I was new to unraid at the time).

 

Perhaps that's the clear he's talking about?

13 minutes ago, tucansam said:

I always preclear my disks outside the array in another machine, but the one time I did take a drive straight from its antistatic bag and placed it in my array, I believe it did a preclear before adding it to the array.  It was years ago, but if I recall correctly, the array was down for something like 30 hours total, with me banging my head against the wall the entire time (I was new to unraid at the time).

 

Perhaps that's the clear he's talking about?

 

Me too. However ...

 

These days a disk clear (to set that flag on the disk) doesn't take the array down, or (more accurately) make it or the GUI unresponsive. As the disk I was using was from another unRAID server (and had already been through many rigorous tests), I knew it was fine, so I had no need to clear it outside the array (especially as, per my note above, this no longer results in downtime), hence why I just added it.

 

What I was unclear on (no pun intended - happy accident) was what was recorded in that history log, which made my post look nuts.

 

I have cleared up (again - no pun intended - another happy accident) the original post. All makes sense now.

 

Sigh. Sorry folks.

Edited by danioj
