Seagate 8TB Shingled Drives in UnRAID



The shingled technology drives from Seagate have been the subject of a lot of conjecture about just how well they might perform in an UnRAID array.  There are some significant design constraints that could have serious consequences for write performance, since entire bands of data (the "shingled bands") have to be re-read and then re-written just to update a single sector within them.  However, Seagate's design has taken these factors into account, and they've built in some mitigations designed to minimize the impact of these requirements.

 

We now know enough, based on detailed testing by Storage Review that helped determine how Seagate's mitigations function, and on a good bit of testing by fellow UnRAID'ers on this forum, to provide some good information on the performance of these drives in an UnRAID array.

 

The purpose of this "sticky" is to give an overview of that performance.

 

If you don't want to read all the nitty gritty details, the bottom line is simple:  These drives work VERY WELL in an UnRAID environment, both as data drives and as parity.    I still wouldn't recommend them as a cache drive, but it's rather unlikely anyone's going to want a cache drive that large anyway  :)

 

The next post outlines my thoughts based on the reviews and testing that have been done to date ... it's based largely on the work Storage Review did in "sleuthing" the design details (Seagate has not released these), and on the testing that's been done by others on this forum => notably danioj (Daniel), who has done extensive testing with data groupings I suggested to help confirm that the drives are in fact working as well as the earlier testing suggested they would.

 


If you're not familiar with the shingled drive technology, you may want to glance at this brief overview Seagate provides:  http://www.seagate.com/tech-insights/breaking-areal-density-barriers-with-seagate-smr-master-ti/

 

The key thing to note is that writing to a sector within a shingled "band" requires every subsequent sector in that band to be read before the write, and then rewritten afterwards, since writing to a sector within the band disturbs the sectors that follow it.  Clearly this makes writes MUCH slower than writes to a traditional PMR (perpendicular magnetic recording) drive.  However, Seagate's design clearly includes several features to mitigate this problem ... and it's pretty clear that these mitigations work VERY well.

 

Realize that most of the "internals" of the shingled drives are closely held by Seagate, so what we know is based on a lot of testing by 3rd parties (Storage Review has done a very good job of "sleuthing" the details) and on the performance we've seen in various tests.  But I think the following is pretty accurate ...

 

=>  The drives have a memory cache large enough to hold more than a zone's worth of data, so if the firmware recognizes that you're writing enough data for a complete shingled zone, it will simply write the entire zone and avoid the zone re-write that would be necessary if you'd written only a single sector (or fewer sectors than the zone contains).  So if you're writing a lot of sequential data -- regardless of the file sizes -- there will be very few zone rewrites (a major performance killer with these drives).

 

=>  If you write a single sector in a zone, the drive will instead write it to the "persistent cache" ... a non-shingled area on the drive that effectively "buffers" all sectors that would require zone rewrites ... these are later written to the correct sectors -- and the corresponding zone rewrites done then -- at a time when the drive is otherwise not busy.  What I call "hitting the wall" of performance is when the persistent cache gets full and you're still doing a bunch of random writes that can't avoid the zone-rewrite requirement => this can cause a HUGE drop in performance.  Thanks to Storage Review's testing, we know the persistent cache on the 8TB drives is a 25GB area.
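To make the three cases concrete, here's a toy model (in Python) of the write behaviour described above.  This is NOT Seagate's firmware logic -- the band size and the exact decision rules are assumptions, since Seagate hasn't published them; only the ~25GB persistent-cache size comes from Storage Review's testing:

# Toy model of the behaviour described above -- purely illustrative.
BAND_SIZE_MB        = 256          # assumed shingled-band size (not published by Seagate)
PERSISTENT_CACHE_MB = 25 * 1024    # ~25GB persistent cache (per Storage Review)

class ShingledDriveModel:
    def __init__(self):
        self.cache_used_mb = 0

    def write(self, size_mb, sequential):
        # Case 1: enough sequential data to cover whole band(s) -> written directly, fast.
        if sequential and size_mb >= BAND_SIZE_MB:
            return "fast: whole band(s) written directly, no rewrite needed"
        # Case 2: small / random write -> buffered in the persistent cache, still fast.
        if self.cache_used_mb + size_mb <= PERSISTENT_CACHE_MB:
            self.cache_used_mb += size_mb
            return "fast: buffered in persistent cache, band fixed up later when idle"
        # Case 3: cache full and still writing randomly -> "hitting the wall".
        return "SLOW: cache full -- read-modify-write of entire bands"

    def idle(self):
        # When the drive is otherwise not busy it destages the cache and does the band rewrites.
        self.cache_used_mb = 0

drive = ShingledDriveModel()
print(drive.write(4096, sequential=True))    # large media file -> covers whole bands
print(drive.write(0.125, sequential=False))  # a 125KB file -> persistent cache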

 

=>  Based on these mitigations, we can make the following observations:

 

    ==>  If you're writing a large amount of sequential data, you'll end up with very little use of the persistent cache, since the drives will recognize that you're writing all of the sectors in each of the shingled zones.  There may be a few cases where this isn't true - but those will be written to the persistent cache, and it's unlikely you'll ever fill it.

 

    ==>  If you're later writing a few random files, these will be written to the persistent cache, and as long as it isn't filled performance will remain very good.  And if some of those files are large (multiple GB), they'll simply be written directly to any zones where they fill the entire zone ... so the persistent cache is even less likely to get filled.

 

=>  When you initially populate an UnRAID server, you're likely writing a LOT of data ... but it's all going to be stored sequentially, so the likelihood is very good that you'll generally avoid any use of the persistent cache -- and performance will be excellent, just as if it was a PMR drive.

 

=>  If most of your files are large media files, then most of those writes will also avoid the persistent cache (at least for most of the file), since the files are themselves likely larger than the zoned bands.

 

=>  If you never write more than 25GB at a time, you'll never have a problem, since even if all of the writes were small files that needed to be buffered through the persistent cache, you'd never fill it up ... the drive would empty the cache when it was "idle" after your writes had completed.  [so the very slow band rewrites would be "hidden" from you.]
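For a rough sense of scale, here's a back-of-the-envelope calculation in Python.  The ~110MB/s gigabit-LAN figure is an assumption; the ~671KB/s figure comes from Daniel's small-file test later in this thread:

cache_gb = 25                                # persistent cache size per Storage Review

# Even flat-out over gigabit LAN it takes several minutes of non-stop writing to move 25GB:
print(cache_gb * 1024 / 110 / 60)            # -> ~3.9 minutes

# Random small-file writes to a parity-protected array run far slower than that,
# so filling the cache with small files alone would take many hours:
print(cache_gb * 1024 * 1024 / 671 / 3600)   # -> ~10.9 hours at ~671KB/s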

 

=>  A few words on when you might still encounter a dramatic write slowdown due to the shingled rewrites:  It's actually fairly hard to force this to occur -- especially with a new server where most of the writes are sequential.  Given what we know about the technology, and Seagate's mitigations, what you'd have to do to "hit the wall" of write performance is write a LOT of small files (smaller than a zoned band) that were randomly scattered on the disk (so the disk didn't simply collate them into a bunch of sequential writes that avoided band rewrites), and that collectively were well over 25GB (so the persistent cache would fill up).      The likelihood of encountering this in typical UnRAID use seems very small.

 

=>  Note that if your server IS very active, with multiple clients writing a lot of data that consists of small files, and writing it at random locations on different data disks, then the single drive that COULD hit the situation I just described is the parity drive => but several tests I asked Daniel (danioj) to run with relatively small files still didn't cause this issue, so I'm confident it's very unlikely.  The most likely scenario would be if your allocation scheme is set to "most free" (since this causes frequent changes of the drive being written to) and you have a LOT of data drives, so the location on the parity drive would change frequently as you wrote a lot of files.  Daniel only had 2 data drives, so the test was a bit limited in this regard -- but I really don't think it's likely to be a problem at all.

 

I've asked Daniel to provide an overview of the performance he's seen during his data copies (using TeraCopy); and to outline the testing he did (pre-clears, etc.) along with the times it took to do those tests, so that should be added to this thread soon.

 

But I'm convinced enough that my next server will absolutely use these drives, or perhaps their next-generation cousins -- Seagate has already announced a 10TB version, and has indicated they expect a 20TB SMR drive by 2020  :)


Reserved #2 -- just in case I need it  :) -- I'll remove it if not needed.

[We're going to be gone for about 10 days, so I'll leave this here in case I want to add some more "pre-quell" comments to anything that's added while we're out of town -- I'll just remove the post if I don't need it]

 


Hi All,

 

From around November 2014 I was thinking of building a new system suite because my requirements outstripped the system I had. For detailed reference please go here: http://lime-technology.com/forum/index.php?topic=37567.0

 

Part of the consideration for the new system was the selection of new hard disk drives.  I had decided to go with the largest-capacity drive I could get on my budget -- which happened to be WD Reds/Greens -- but then talk of the new Seagate 8TB drives was increasing. The introduction of these drives would get me a larger-capacity drive at a lower price point than the WDs.

 

There was A LOT of speculation as to whether these drives would be suitable in an Unraid setup / environment. It all boiled down to this for me: would there be a significant slowdown in write speed while using these drives, and how often would I experience it? I decided to take a punt on the drives and bought myself 3 initially to form the array in my new Backup Server.

 

Before I deployed the drives I did some testing to determine what I should expect from the drives once deployed. Here is a record of that testing and my findings.

 

 

Note: I have not posted screenshots and other evidence of the tests again, as I did this a lot in this thread:

 

http://lime-technology.com/forum/index.php?topic=36749.0

 

I will, however, if people ask me to.

 

 

I hope this is good enough for all.

 

 

Systems Used (Configuration at the time of testing):

 

Main Server (Source):

 

Case: Antec P183 V3 | Motherboard: ASUSTeK COMPUTER INC. - P8B75-M LX | Memory: G.Skill 1666 Ripjaws 4096 MB | CPU: Intel® Celeron® CPU G550 @ 2.60GHz | Power Supply: Antec Neo Eco 620 | Flash: Kingston DT_101_G2 7.91 GB | Parity: WDC_WD30EFRX 3TB | Data: Disks 1-4: 4 x WDC_WD30EFRX 3TB | Cache: WDC_WD20EARS 2TB | Spare: None | App-Drive: None (Using Cache) Totals: Array Size: 15TB - Available Size: 12TB

 

Backup Server (Destination):

 

Case: SilverStone Black DS380 Hot Swap SFF Chassis | Motherboard: ASRock C2550D4I Mini ITX Motherboard | Memory: Kingston KVR16LE11S8/4I 1666 ECC Unbuffered DDR3L, ValueRAM 4096 MB | CPU: Intel® Atom™ Processor C2550 (2M Cache, 2.40 GHz)| Power Supply: Silverstone ST45SF-G 450W SFX Form Factor | Flash: Kingston DT_101_G2 7.91 GB | Parity: Seagate ST8000AS0002 8TB | Data: Disks 1-2: 2 x Seagate ST8000AS0002 8TB | Cache: None | Spare: None | App-Drive: None (No Apps) Totals: Array Size: 25TB - Available Size: 16TB

 

Transfer Manager (Medium):

 

Host: 27” iMac Core i5 3.2GHz, 8GB RAM, 1TB HDD

Guest: 2 CPU Cores, 2GB RAM, 100GB Hard Disk Space, Bridge Mode (No Audio or Device Passthrough).

 

Network

 

TP-LINK TL-SG1005D 5 Port Gigabit Switch plugged into each machine via quality Cat6a cables.

 

Software:

 

Main Server: Unraid 5.04*

Backup Server: Unraid 6.14b*

*All Plugins and Virtualisation were disabled on both machines.

 

Preclear Script: Bjp999’s Unofficial Faster Preclear script v1.15b

 

Transfer Manager: Windows 10 (December ’14 Developer Preview with all the latest patches installed as of April 2015) Virtual Machine running on the VirtualBox Hypervisor 4.3.26 on OS X Yosemite. TeraCopy v2.3 Stable was installed on the Windows 10 Guest to manage the copy operations.

 

 

Unraid Array Configuration:

 

Hard Disk Drives on each Server were deployed in a parity protected array as follows:

 

Main Server

 

5 x Western Digital WDC_WD30EFRX 3TB

 

Drive 1: Parity

Drive 2 to 5: Data

 

12TB of protected usable space.

 

Backup Server

 

3 x Seagate ST8000AS0002 8TB

 

Drive 1: Parity

Drive 2 to 3: Data

 

16TB of protected usable space.

 

 

Share Configuration:

 

1 user share configured identically on both the Main (Source) Server and the Backup (Destination) Server as follows:

 

Share Name: nas

Allocation Method: Most Free

Minimum Free Space: 40GB

Split Level: Automatically split any directory as required

Excluded disk(s): None

 

Share protocol: SMB

Export: Yes

Security: Public

 

 

Tests

 

Preclear

 

3 preclear cycles run independently (i.e. not using the -c x switch in the preclear script).

 

S.M.A.R.T Test

 

- 1 long S.M.A.R.T Test

 

Small Files

 

- Benchmark test

- 46.56 GB made up of 390,688 files of exactly 125KB

 

Medium Files

 

- Benchmark Test

- 3.7 TB made up of 22,526 files ranging between 400MB and 4GB

 

Large Files

 

- Benchmark Test

- 4.7 TB made up of 1,472 files ranging between 5GB and 40GB

 

 

Method

 

I ran 3 preclear cycles and then a long S.M.A.R.T test per disk (Seagate ST8000AS0002 8TB drives only).

 

- preclear_bjp.sh -f -A -c 3 /dev/sdx

 

EDIT: I have since bought more of these drives and did manage to complete a full 3 cycle preclear using the above command. I have added these results too.

 

**I was unfortunate enough to have a power outage in the middle of cycle 2 so I had to restart the test from cycle 2. Due to the power cut I decided to run both remaining cycles separately. This is reflected clearly in the results.

 

- preclear_bjp.sh -f -A /dev/sdx

- smartctl -t long /dev/sdx

 

Once the drives were cleared I deployed them as per the configuration above.

 

I created a mapped network drive on the Transfer Medium to the 'nas' share on both the Main and Backup Servers.

 

I then configured TeraCopy to point to the Main Server as source and the Backup Server as destination, with the verify option selected.

 

Each test was run independently.

 

 

Results

 

Preclear

 

(Cycle 1, before power interruption)

 

- Initial Pre Read speed at the commencement of the preclear was ~170MB/s for ALL 3 drives.

 

- All three completed their Pre Read with a FINAL speed at 100% of ~80MB/s taking ~20 Hours.

 

- Initial zeroing speed of cycle 1 of 3 was ~200MB/s for ALL 3 drives.

 

- All three completed their Zeroing with a FINAL speed at 100% of ~138 MB/s taking ~16 Hours.

 

- Total Time for cycle 1 at this point was ~36 Hours.

 

- Initial Post Read speed was ~200MB/s for ALL 3 drives.

 

- All three completed their Post Read with a FINAL speed at 100% of ~115 MB/s taking ~16 Hours.

 

Total Time for cycle 1 was ~52 Hours.

 

**Note: between this point and somewhere in the Pre Read of cycle 2 I had a power outage, and as such had no reports to interpret, so I started again as detailed in the Method section above.**

 

(Cycle 2, after power interruption)

 

- Initial Pre Read speed at the commencement of the preclear was ~174MB/s for ALL 3 drives.

 

- All three completed their Pre Read with a FINAL speed at 100% of ~110MB/s taking ~20 Hours

 

- Initial zeroing speed of cycle 2 of 3 was ~200MB/s for ALL 3 drives.

 

- All three completed their Zeroing with a FINAL speed at 100% of ~136 MB/s taking ~16 Hours

 

- Total Time for cycle 2 at this point was ~37 Hours.

 

- Initial Post Read speed was ~200MB/s for ALL 3 drives.

 

- All three completed their Post Read with a FINAL speed at 100% of ~110 MB/s taking ~21 Hours.

 

Total Time for cycle 2 was ~58 Hours.

 

(Cycle 3)

 

- Initial Pre Read speed at the commencement of the preclear was ~170MB/s for ALL 3 drives.

 

- All three completed their Pre Read with a FINAL speed at 100% of ~110MB/s taking ~20 Hours

 

- Initial zeroing speed of cycle 3 of 3 was ~200MB/s for ALL 3 drives.

 

- All three completed their Zeroing with a FINAL speed at 100% of ~136 MB/s taking ~16 Hours

 

- Total Time for cycle 3 at this point was ~36 Hours.

 

- Initial Post Read speed was ~200MB/s for ALL 3 drives.

 

- All three completed their Post Read with a FINAL speed at 100% of ~105 MB/s taking ~21 Hours.

 

Total Time for cycle 3 was ~57 Hours.

 

EDIT: As noted in the Method Section I have since bought more of these drives and did manage to complete a full 3 cycle preclear using the above command. The summary results were:

 

== invoked as: ./preclear_bjp.sh -f -A -c 3 /dev/sde
== ST8000AS0002-1NA17Z   Z8404KRE
== Disk /dev/sde has been successfully precleared
== with a starting sector of 1 
== Ran 3 cycles
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 19:42:48 (112 MB/s)
== Last Cycle's Zeroing time   : 17:07:40 (129 MB/s)
== Last Cycle's Post Read Time : 20:59:15 (105 MB/s)
== Last Cycle's Total Time     : 38:07:58
==
== Total Elapsed Time 133:19:52

 

Long S.M.A.R.T Test

 

The test ran on each disk without error and took

 

~940 Minutes to complete.

 

Small Files

 

Benchmark Test indicated an expected speed of ~1.2MB/s

 

Random Observations from the Test:

13% (813KB/s after 53,505 of 390,688 files totalling 6.38GB of 46.56GB)

30% (563KB/s after 117,051 of 390,688 files totalling 13.95GB of 46.56GB)

42% (500KB/s after 165,674 of 390,688 files totalling 19.75GB of 46.56GB)

58% (438KB/s after 228,366 of 390,688 files totalling 27.22GB of 46.56GB)

85% (875KB/s after 334,593 of 390,688 files totalling 39.88GB of 46.56GB)

89% (575KB/s after 347,650 of 390,688 files totalling 41.44GB of 46.56GB)

100% (938KB/s after 387,723 of 390,688 files totalling 46.22GB of 46.56GB)

 

The test was also monitored at other, unrecorded intervals and there were no noticeable deviations from the speeds reported above.

 

Average Observed Speed was ~671KB/s
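(The ~671KB/s average is consistent with a simple unweighted mean of the spot readings recorded above -- a quick check in Python:)

readings_kb_s = [813, 563, 500, 438, 875, 575, 938]   # the spot readings logged above
print(sum(readings_kb_s) / len(readings_kb_s))        # -> ~671.7 KB/s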

 

Note: I did not take much notice of the Post Copy Verification beyond noting that it sustained ~5.5MB/s from start to finish.

 

Medium Files

 

Benchmark Test indicated an expected speed of ~40MB/s

 

Random Observations from the Test:

 

19% (40MB/s after 4,994 of 22,526 files totalling 846GB of 3.79TB)

41% (41MB/s after 10,442 of 22,526 files totalling 1.56TB of 3.79TB)

60% (41MB/s after 13,251 of 22,526 files totalling 2.3TB of 3.79TB)

80% (40MB/s after 19,000 of 22,526 files totalling 3.05 TB of 3.79TB)

100% (41MB/s after 22,526 of 22,526 files totalling 3.79TB of 3.79TB)

 

The test was also monitored at other, unrecorded intervals and there were no noticeable deviations from the speeds reported above.

 

Average Observed Speed was ~40.6MB/s

 

Note: I did not take much notice of the Post Copy Verification beyond noting that it sustained ~46MB/s from start to finish.

 

Large Files

 

Benchmark Test indicated an expected speed of ~42MB/s

 

25% (38MB/s after 264 of 1,472 files totalling 1.16TB of 4.7TB)

39% (41MB/s after 528 of 1,472 files totalling 1.86TB of 4.7TB)

58% (38MB/s after 786 of 1,472 files totalling 2.68TB of 4.7TB)

82% (40MB/s after 1,023 of 1,472 files totalling 3.76TB of 4.7TB)

100% (38MB/s after 1,472 of 1,472 files totalling 4.7TB of 4.7TB)

 

The test was also monitored at other, unrecorded intervals and there were no noticeable deviations from the speeds reported above.

 

Average Observed Speed was ~39MB/s

 

Note: I did not take much notice of the Post Copy Verification beyond noting that it sustained ~51MB/s from start to finish.

 

 

My Conclusion

 

The speeds I saw from the Seagate drives were exactly what I had hoped for. Personally, I saw no difference between the speeds I obtained with these drives under Unraid and the speeds I have seen with WD Reds under Unraid. This makes me very happy.

 

I think that whatever Seagate has done to mitigate the drawbacks of the SMR technology has been done excellently, and it removes any observable write penalty of the kind we have been discussing and speculating about.

 

All in all -- whether you've got your Unraid array filled with Large (~5GB to ~40GB), Medium (~400MB to ~4GB) or Small (<=125KB) files -- I have observed, and now reasonably expect, this Seagate 8TB SMR drive to behave on par with the WD Red PMR drives I have in my Main Server.

 

Based on my testing and observations I believe these are excellent drives, which I would recommend to anyone using Unraid in the way that I (and, I believe, a lot of others in the community) do. That goes for use as either a Data or a Parity drive.

 

I will certainly be putting these drives in my Main Server as well as my Backup Server now, and will do so without fear.

 

It had been suggested that, to see the impact of a full persistent cache, disabling the drive's write cache with hdparm would be worthwhile, BUT I don't see the need to do that now. What I wanted to see was whether, during real-world use, I would experience any degraded performance compared to my WD Red PMR drives in normal use in an Unraid environment, and I clearly have NOT experienced any of that.

 

Noted Concerns with the drives:

 

Users have reported that the Seagate Archive 8TB drive's lack of center mounting holes means it won't mount in the drive cages of a Fractal Design Node 804 case: http://www.fractal-design.com/home/product/cases/node-series/node-804

 

See here for specific details. http://lime-technology.com/forum/index.php?topic=47061.0;topicseen


Nice summary Daniel.

 

As should be fairly clear from the level of detail Daniel provided, he has done EXTENSIVE testing of these drives (much at my request) that has really helped us understand the limitations of the shingled technology; and has also proven that the mitigations Seagate has implemented work VERY well at making these drives work just fine in an UnRAID environment.

 

There was indeed a LOT of speculation that the shingled bands would result in utterly terrible write performance if they were used as parity drives; but that simply isn't the case.  These drives have an excellent cost/TB and are a very good way to build a high-capacity server in a very modest sized case.

 


Thanks for the work involved here guys. Am building a new server (V6) and have decided on 6TB Reds in it but my old server (V5) will become a backup server and I have (after reading this thread) decided to populate it with these drives. For the time being it will only have 2 (1 x parity and 1 x data) to backup critical stuff off my main server.

Awesome work guys


Thanks for the work involved here guys. Am building a new server (V6) and have decided on 6TB Reds in it but my old server (V5) will become a backup server and I have (after reading this thread) decided to populate it with these drives. For the time being it will only have 2 (1 x parity and 1 x data) to backup critical stuff off my main server.

Awesome work guys

I recently replaced a 6TB WD Red that I had been using for parity with one of the Seagate 8TB drives.  I found that on my system at least I got slightly better performance with the 8TB drive as parity than I had been getting using a 6TB WD Red.  Since these 8TB drives are (currently at least) slightly cheaper than the 6TB Reds any new drives I buy will be these 8TB drives rather than the 6TB Reds I had previously been buying.

Just to throw my results in.  I just did a pre-clear on a new 8T

 

~61 hours to preclear.  I'm currently running a parity sync right now.  It seems to be taking longer than my older 4T drive.  Maybe I'll do a re-sync with the 4T drive for comparison??..

 

Call it 13.5 hours to get to my old 4T mark, where it would have been done.  Now it's just the parity drive left, as I have no other disks bigger than 4T.

 

 

========================================================================1.16
== invoked as: /boot/config/plugins/preclear.disk/preclear_disk.sh -M 4 -o 2 -c 1 -f -J /dev/sdg
== ST8000AS0002-1NA17Z   Z840AB6F
== Disk /dev/sdg has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle`s Pre Read Time  : 19:16:58 (115 MB/s)
== Last Cycle`s Zeroing time   : 18:31:56 (119 MB/s)
== Last Cycle`s Post Read Time : 23:09:17 (95 MB/s)
== Last Cycle`s Total Time     : 60:59:48
==
== Total Elapsed Time 60:59:48


... It seems to be taking longer than my older 4T drive.

 

... Call it 13.5 hours to get to my old 4T mark where it would have been done.

 

Was your old 4TB parity drive a 7200rpm drive?

 

... and how long did it take with the old drive to do a parity check?    [Note that it's not unusual for a sync to take a bit longer than a check ... so it'll be a better comparison to do a check after the sync has completed and note when it gets to the 4TB mark then.  You SHOULD do a check anyway, to confirm the sync was good.]

 


 

Was your old 4TB parity drive a 7200rpm drive?

 

... and how long did it take with the old drive to do a parity check?    [Note that it's not unusual for a sync to take a bit longer than a check ... so it'll be a better comparison to do a check after the sync has completed and note when it gets to the 4TB mark then.  You SHOULD do a check anyway, to confirm the sync was good.]

I think it was a 5900rpm drive, "ST4000VN000".  I'll have to check to see if I have any data from a full parity check.  I'm pretty sure I don't have any from after the nr_requests thing was discovered..

I'll dig through my posts and logs to see.  Maybe the last monthly parity check will be in logs somewhere..

 

 


It's very unlikely then that the 8TB drive will be slower => wait until you do a Parity Check and have a look ... but with the higher areal density (1.33TB/platter) and basically the same rotational speed it's almost certainly faster.

 

However, with the other 2TB & 3TB drives you have in your array, it's likely that THOSE are the current bottlenecks in terms of parity check speed anyway -- at least through the 3TB point.

 

I also noticed you're using the SASLP controller => is the new 8TB Seagate connected to a motherboard port or to that card ??

 


I also noticed you're using the SASLP controller => is the new 8TB Seagate connected to a motherboard port or to that card ??

Motherboard port.

This perceived slower sync ... may all be in my head.  I can't locate any logs that give me any insight into older parity check times.

I need to look at my log retention scheme...


... This perceived slower sync..  May all be in my head.

 

:) :)

 

A sign of ageing  :)

 

Actually, it's unlikely there's much (if any) difference up through the 4TB point.  Your 2 & 3 TB drives are almost certainly the slowest units in the check up through the 3TB point ... and if you have any other 4TB drives (do you?) then they will limit the check speed through the 4TB point => so IF you have 4TB drives still in the system it's unlikely the 8TB unit will make ANY difference in the parity check speed through 4TB.    If there aren't any 4TB drives, then after the 3TB point is passed the last TB SHOULD be a bit quicker than it was with your old parity drive, since the 8TB unit has 1.33TB platters.

 

But remember -- the sync time is likely a bit different than the check time => so wait until you run an actual Parity Check and see when the 4TB point is reached with it.

 


I already posted in another thread, but since this is the official one: for anyone interested in these disks, I've found them to be some of the fastest I've used in terms of parity check/sync speeds.

 

This is from my latest server, which still has one 3TB Toshiba 7200rpm drive; with all 8TB Seagates it would be slightly faster.

 

Subject: Notice [TOWER7] - Parity sync: finished (0 errors)
Description: Duration: 15 hours, 47 minutes, 30 seconds. Average speed: 140.7 MB/sec

 

Subject: Notice [TOWER7] - Parity check finished (0 errors)
Description: Duration: 15 hours, 53 minutes, 3 seconds. Average speed: 139.9 MB/sec

 

The parity check is slightly slower because, as I discovered with these drives, the HP N54L has ~750MB/s of usable bandwidth on the onboard SATA ports; the parity sync was faster because only 3 of the 4 drives were reading (the parity drive was being written), and it appears the A-Link is full duplex.
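A quick back-of-the-envelope check of those numbers in Python.  The ~200MB/s outer-track figure is taken from the preclear speeds earlier in this thread; everything here is approximate:

outer_track_mb_s = 200        # these drives start out around 200MB/s (see preclear results above)
controller_mb_s  = 750        # ~usable bandwidth of the N54L onboard SATA ports (figure quoted above)

print(4 * outer_track_mb_s)   # parity check: 4 drives reading -> ~800MB/s wanted, bumps the ~750 cap
print(3 * outer_track_mb_s)   # parity sync: 3 drives reading -> ~600MB/s, fits (the parity write
                              # doesn't count against it if the A-Link really is full duplex)

# The reported average is also consistent with covering the full 8TB:
secs = 15 * 3600 + 47 * 60 + 30
print(secs * 140.7 / 1e6)     # -> ~8.0 TB for the sync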

 

As for normal-usage write speed, I've found it to be between 45 and 60MB/s for large files and, as expected, slower than normal drives for smaller files (see image 2); as these are a very small percentage of my files, I'm very happy with the performance.

 

[Attached images: 1.png and 2.png (transfer speed graphs)]


and if you have any other 4TB drives (do you?) then they will limit the check speed through the 4TB point => so IF you have 4TB drives still in the system it's unlikely the 8TB unit will make ANY difference in the parity check speed through 4TB.   

I do have some 4T drives.  My whole thought process is that maybe the "shingled" Archive drive might have a negative effect in my case.

My guess is that it doesn't..  But I'd love to prove myself right or wrong by putting my old parity disk back in! :)

I can see in my hourly status reports that the speed slows down every hour until it gets rid of a drive..  Then it briefly speeds up then slows down until the next drive(s) finish..  Damn!  I really wish I had an older parity check speed for comparison!

 


As for normal-usage write speed, I've found it to be between 45 and 60MB/s for large files and, as expected, slower than normal drives for smaller files (see image 2); as these are a very small percentage of my files, I'm very happy with the performance.

What OS is giving you that speed graph during the copy?  Is that win10?

As for normal-usage write speed, I've found it to be between 45 and 60MB/s for large files and, as expected, slower than normal drives for smaller files (see image 2); as these are a very small percentage of my files, I'm very happy with the performance.

What OS is giving you that speed graph during the copy?  Is that win10?

 

Windows Server 2012 R2, but both Win 8 and 10 have the same graph.


... the speed slows down every hour until it gets rid of a drive..  Then it briefly speeds up then slows down until the next drive(s) finish..

 

This is, of course, normal.  As a drive moves to the inner cylinders, the transfer rate drops significantly.  Since a parity check is always limited by the slowest drive currently active in the check, when you have a mixed set of drives, each different size will result in another "inner cylinder slowdown".  So, in your case, it will slow down as it gets close to the 2TB point; then speed up somewhat until the 3TB drives reach their inner cylinders; then slow down again as the 4TB drives reach their inner cylinders; and finally slow down for the last time as the 8TB drive reaches its inner cylinders.  As it passes each of those boundaries, the speed will bump up a bit.

 

In addition to these "geometry-based" slowdowns, the speed is also limited by the areal densities of the drives.  If all of your 2TB, 3TB, and 4TB drives are 1TB/platter drives, that's good ... their speeds won't be all that much worse than the 1.33TB/platter 8TB unit.  But if you have any older lower-density drives (e.g. a 500GB/platter 2TB unit), then THAT will cause things to be even slower until it's out of the picture.
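A minimal sketch (Python) of that behaviour -- the sizes and speeds below are made-up illustrative numbers, not measurements; the point is just that the check runs at the speed of the slowest drive still being read, and each drive slows towards its inner cylinders:

def drive_speed(pos_tb, size_tb, outer_mb_s, inner_mb_s):
    # A drive no longer limits the check once the position is past its size.
    if pos_tb >= size_tb:
        return float("inf")
    # Simple linear outer-to-inner slowdown.
    return outer_mb_s - (outer_mb_s - inner_mb_s) * (pos_tb / size_tb)

# (size TB, outer MB/s, inner MB/s) -- a hypothetical mixed array like the one discussed
drives = [(2, 150, 75), (3, 180, 90), (4, 180, 90), (8, 200, 100)]

for pos in (1.0, 1.9, 2.5, 3.9, 6.0):
    limit = min(drive_speed(pos, *d) for d in drives)
    print(f"at {pos} TB the check runs at ~{limit:.0f} MB/s")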

 


... the speed slows down every hour until it gets rid of a drive..  Then it briefly speeds up then slows down until the next drive(s) finish..

 

This is, of course, normal.  As a drive moves to the inner cylinders, the transfer rate drops significantly.  Since a parity check is always limited by the slowest drive currently active in the check, when you have a mixed set of drives, each different size will result in another "inner cylinder slowdown".  So, in your case, it will slow down as it gets close to the 2TB point; then speed up somewhat until the 3TB drives reach their inner cylinders; then slow down again as the 4TB drives reach their inner cylinders; and finally slow down for the last time as the 8TB drive reaches its inner cylinders.  As it passes each of those boundaries, the speed will bump up a bit.

Yeah..  I'm aware of the inner cylinder slowness..  I should have put an "as expected" at the end of my last post! :)  I'm not quite ready to get rid of my 2T drives yet.  I'm eventually replacing them.  Once they hit the 5 year mark they are put next on the list to be replaced.

 

Jim


Just set up a new UnRAID server with 3 of these drives - 1 parity + 2 data.

 

My preclear times were the same as jbuszkie's. My parity check is way faster though. I'm at ~15.5 hours to complete the initial parity of the 3 drive array. These 3 drives are plugged into my AOC-SASLP-MV8 card.

 

 

Oh, and those clicks are annoying. I have them on all 3 drives though, so it must be normal. The SMART data is all fine and they all passed their preclear with flying colors.

My parity check is way faster though. I'm at ~15.5 hours to complete the initial parity of the 3 drive array.
Initial parity build != parity check. Totally different operation, you need to do a check after the build completes if you want full confidence in the array. Many people see much higher speeds on the build than they do on the check.
