Seagate’s first shingled hard drives now shipping: 8TB for just $260



I have sometimes wondered: if the drive was zeroed but had no preclear signature, could you do a New Config, reassign all the old drives plus the new zeroed drive (with no signature), and do a trust-parity? Would you then have accomplished the same thing without the signature?

 

Interesting thought => You clearly can't just add a disk like this (since the pre-clear signature isn't there); but with a New Config you're not "adding" a drive ... so clearly parity would be just fine (since the disk is all zeroes) ==> the potential "gotcha" is whether UnRAID would "balk" at the lack of a partition structure or just show the disk as unformatted (which a quick format would resolve). I'll have to try that the next time it's convenient [unless you beat me to it :) ].

 

 


Don't want to hijack here, but right now I'm going to add this drive to replace a smaller one. I burned it in the best I could with HD Tune on my Windows box and the drive seems stable. I did several sector-by-sector checks and a slew of read/write tests on it. I'm guessing this special unraid marker/standard is just another way of saying unraid marks this drive as good and can be used, right? There are no negative ramifications if you install a drive without this "special" marker, right? I know a pre-clear is the "standard", but I've really exhausted the drive using other software and feel the drive is good and not a lemon. This will be my very last upgrade to my current box. Next will be some research on a new build.

 

 


... I'm guessing this special unraid marker/standard is just another way of saying unraid marks this drive as good and can be used, right?

 

NO.  The "special marker" (pre-clear flag) doesn't tell UnRAID anything about whether the drive is good or bad ... it simply indicates that it has been cleared (zeroed) and can thus be added to the array without UnRAID needing to zero the drive (a LONG process during which the array is not available).

 

The only implication of adding a drive that hasn't been pre-cleared is that UnRAID will then have to clear the drive.

 

 


... I'm guessing this special unraid marker/standard is just another way of saying unraid marks this drive as good and can be used, right?

 

NO.  The "special marker" (pre-clear flag) doesn't tell UnRAID anything about whether the drive is good or bad ... it simply indicates that it has been cleared (zeroed) and can thus be added to the array without UnRAID needing to zero the drive (a LONG process during which the array is not available).

 

The only implication of adding a drive that hasn't been pre-cleared is that UnRAID will then have to clear the drive.

And the reason an ADDED drive must be clear is so it will match parity.

... And the reason an ADDED drive must be clear is so it will match parity.

 

No -- it's not so the drive will "match parity" => it's so the drive won't have an impact on parity :)  If every bit on the new drive is a zero, it can't affect the binary sum of the bits involved in calculating the parity value ... i.e. parity will be the same. If that weren't the case, adding a drive would require a new parity calculation ... so the system would be "at risk" (no fault tolerance) while the new parity values were calculated.
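A toy illustration of that point: parity here is modeled as the bitwise XOR of the corresponding bytes across all data drives, so an all-zero drive contributes nothing (a sketch only, not UnRAID's actual implementation):

```python
# Toy sketch (not UnRAID's actual code): parity is the bitwise XOR of the
# corresponding bytes across all data drives.
from functools import reduce

def parity(drives):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

d1 = bytes([0b1010, 0b1100])
d2 = bytes([0b0110, 0b0011])
before = parity([d1, d2])

# An all-zero drive XORs to no change, so the existing parity stays valid
# and there is no window without fault tolerance while it is recalculated.
after = parity([d1, d2, bytes(2)])

assert before == after
```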

 

 

 


... And the reason an ADDED drive must be clear is so it will match parity.

 

No -- it's not so the drive will "match parity" => it's so the drive won't have an impact on parity :)  If every bit on the new drive is a zero, it can't affect the binary sum of the bits involved in calculating the parity value ... i.e. parity will be the same. If that weren't the case, adding a drive would require a new parity calculation ... so the system would be "at risk" (no fault tolerance) while the new parity values were calculated.

Perhaps my use of the word match is confusing, but I am not confused. If by match you mean "is exactly the same as" then of course that is wrong. I meant it as "is consistent with".

Thanks guys.

 

Note the first one (Joe L's) seems to be corrupt?  I can't open the zip file on my windows PC with either WinRar or explorer (in order to copy it over to my unraid machine)

 

As far as the faster one - I too would have the same question as the last poster...  Which is really better? :)


Thanks guys.

 

Note the first one (Joe L's) seems to be corrupt?  I can't open the zip file on my windows PC with either WinRar or explorer (in order to copy it over to my unraid machine)

 

As far as the faster one - I too would have the same question as the last poster...  Which is really better? :)

 

Just downloaded Joe L's, no corruption problem for me.

 

I do not think either is presented as "better".

 

Three cycles for me means

preclear -W; badblocks -ws; preclear -W


Thanks guys.

 

Note the first one (Joe L's) seems to be corrupt?  I can't open the zip file on my windows PC with either WinRar or explorer (in order to copy it over to my unraid machine)

 

As far as the faster one - I too would have the same question as the last poster...  Which is really better? :)

 

Just downloaded Joe L's, no corruption problem for me.

 

 

Thanks - LT's website has been behaving strangely with Firefox for me - maybe a plugin problem or something ... sometimes it doesn't load complete pages. It just loads about 70% of the length and then stops.

 

Anyway - I tried multiple times to download it with the same result of a "premature end of headers" or something.  On Chrome though it downloaded fine.  Thank you :)


All,

 

Sorry it has taken me so long to run and post results of the Small Files Test. Work has been crazy and I've been working interstate  >:( Anyway, I'm back and I've had some time this weekend to run the test so here goes.

 

P.S. I have deleted the last post where I put this information together and replaced it with a link to this post!

 

I read a few posts before posting this, so I upped the memory of the VM, decided to run the tests separately instead of simultaneously, and copied the data to the VM itself rather than via a share on the iMac. So in summary: I am running this within a Windows VM on a Core i5 iMac, with 4GB of memory and 1 CPU thread allocated to the VM, using TeraCopy to copy files from the VM to a share on my Main Server (without cache drive, containing 3TB WD Reds) and my Backup Server (without cache drive, containing the new 8TB Seagates).

 

I have generated a test data set of ~400,000 files totalling ~50GB of data across 10 folders, each folder containing files of 128KB each and going several levels deep! I had the files auto-generated randomly as I mentioned earlier, and I stopped the generation when I hit ~50GB.
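For anyone wanting to reproduce a similar data set, here's a rough sketch (hypothetical Python, not the generator actually used above; the layout is simplified to a single level of folders rather than "several levels deep"):

```python
# Hypothetical sketch (not the actual generator used in this test): fill N
# folders with fixed-size files of random data until a target total size
# is reached. 128KB files, 10 folders, matching the test described above.
import os

def make_test_set(root, target_bytes, file_size=128 * 1024, folders=10):
    written = 0
    count = 0
    while written < target_bytes:
        folder = os.path.join(root, f"folder{count % folders:02d}")
        os.makedirs(folder, exist_ok=True)
        path = os.path.join(folder, f"file{count:06d}.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(file_size))  # random, incompressible data
        written += file_size
        count += 1
    return count  # number of files created

# e.g. make_test_set("/mnt/user/testshare", 50 * 1024**3) for the full ~50GB set
```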

 

 

I set a testshare up on both the Main Server and the Backup Server. Both shares without the use of a Cache Drive and both using the Most Free Fill Setting.

 

Main Server & Backup Server Main Pages

 


 

Main Server & Backup Server Share Pages

 


 

Main Server & Backup Server Share Details

 


 

TestFiles Preparation

 


 

TestFiles Details from the VM

 


 

I did a baseline speed test for copying a file between the VM and the servers (a W7 ISO), and as you can see, the speed of the transfer was as fast as in the other tests I have done: ~40MB/s.

 

Baseline Test Main Server

 


 

Baseline Test Back up Server

 


 

I was happy from what I was seeing in the baseline copy that the ability to copy from the VM to the two servers was sound so I started the test.

 

Test Main Server Beginning

 


 

~1MB/s

 

Test Main Server .5%

 


 

After .5% ~500KB/s

 

The variation in speed between those numbers was maintained for another .5% to 1% complete, at which point I decided to stop.

 

Test Backup Server Beginning

 


 

~1MB/s

 

Test Backup Server .5%

 


 

After .5% ~800KB/s

 

Once again, the variation in speed between those numbers was maintained for another .5% to 1%, at which point I decided to stop.

 

 

It seems, based on this test, that the performance of copying files of this size is just as poor irrespective of which server I copy to. I didn't really want to let it run to completion as I didn't see the point; running at this speed is not going to show us anything, I don't think.

 

 

I am not sure what conclusions to draw from this test. I don't feel it has given me any information on the Seagates that I didn't have before. The performance of this test to both servers was the same: shockingly slow.

 

Not sure if anyone wants me to do anything else .....?


I don't see any reason to run these tests further on your main server => what they basically show is (as I had predicted) that with very small files the directory update overhead causes the copies to be MUCH slower than with large files ... and in fact the average is so slow that I suspect even the SMR drives can sustain this -- even if they have to do band rewrites.

 

It would, however, be interesting to confirm that running the entire 50GB copy to your backup server still doesn't hit the "wall" of write performance with only 50GB of data. If it does, the speeds would be even slower!! (It would likely seem nearly "hung" for a couple of hours while all the band rewrites necessary to empty the persistent cache were done.) If you don't mind letting this run overnight to confirm this, it'd be nice to know that for a fact. But after that, I'd definitely say your testing is DONE!!

 

I don't really expect the full 50GB copy of the small files to be a problem (I think the slow average will allow the SMR drives to do enough band rewrites that the persistent cache will never be filled -- what I'm not certain of is whether that activity can happen while there are still ongoing writes ... if it can't, the cache just may get filled up!!).

 

Definitely appreciate the data you've provided on these drives -- you've shown that they work FAR better in UnRAID than their fundamental architecture would have suggested. Clearly Seagate has done an excellent job mitigating the fundamental SMR limitations.

 

 

 

 

 

 


I don't see any reason to run these tests further on your main server => what they basically show is (as I had predicted) that with very small files the directory update overhead causes the copies to be MUCH slower than with large files ... and in fact the average is so slow that I suspect even the SMR drives can sustain this -- even if they have to do band rewrites.

 

It would, however, be interesting to confirm that running the entire 50GB copy to your backup server still doesn't hit the "wall" of write performance with only 50GB of data. If it does, the speeds would be even slower!! (It would likely seem nearly "hung" for a couple of hours while all the band rewrites necessary to empty the persistent cache were done.) If you don't mind letting this run overnight to confirm this, it'd be nice to know that for a fact. But after that, I'd definitely say your testing is DONE!!

 

I don't really expect the full 50GB copy of the small files to be a problem (I think the slow average will allow the SMR drives to do enough band rewrites that the persistent cache will never be filled -- what I'm not certain of is whether that activity can happen while there are still ongoing writes ... if it can't, the cache just may get filled up!!).

 

Definitely appreciate the data you've provided on these drives -- you've shown that they work FAR better in UnRAID than their fundamental architecture would have suggested. Clearly Seagate has done an excellent job mitigating the fundamental SMR limitations.

 

 

As requested, I'm starting the copy of the small files to the Backup Server now. I'll let it run till completion and will check in with progress!

 

0% (1.2MB/s after 30 of 390,688 files)

 


 

13% (813KB/s after 53,505 of 390,688 files totalling 6.38GB of 46.56GB)

 


 

30% (563KB/s after 117,051 of 390,688 files totalling 13.95GB of 46.56GB)

 


 

42% (500KB/s after 165,674 of 390,688 files totalling 19.75GB of 46.56GB)

 


 

58% (438KB/s after 228,366 of 390,688 files totalling 27.22GB of 46.56GB)

 


 

85% (875KB/s after 334,593 of 390,688 files totalling 39.88GB of 46.56GB)

 


 

89% (575KB/s after 347,650 of 390,688 files totalling 41.44GB of 46.56GB)

 


 

100% (938KB/s after 387,723 of 390,688 files totalling 46.22GB of 46.56GB)

 


 

Post Write File Verification Read Speed

 


 

It is "faster" than the write: a sustained ~5.5MB/s.

 


As for power ... you could measure this with a Kill-a-Watt in the US, but similar devices for 220v are harder to find and cost a good bit more.  Not sure what they sell in Australia.

 

However, I think it's fair to trust the manufacturer's specs on these ...

 

Seagate shows a typical operating power requirement of 7.5w, with idle power of 5.0w (this is spinning, but not actively reading/writing).  Standby is < 1.0w

 

By comparison, a 4TB WD Red has an operating power draw of 4.5w, idle of 3.3w, and standby is rated at 0.4w

 

So these drives draw about 50% more power when spinning, and 67% more power when I/O operations are ongoing, than the Reds, which are about as efficient a drive as you can get. HOWEVER ... they hold twice as much data => so the power/TB is actually LESS than with the Reds :)  ... and in a typical UnRAID environment, most of the time drives are spun down, so they'll be drawing less than a watt [again, they likely draw more than the Reds in the same state; but for a given total capacity you'd also only have about half as many drives].

 

Note:  The 6TB WD Reds draw 5.3w in operation, 3.4w when idle ... so these have a slightly better power/TB draw than the Seagates => but the reality is the power is plenty low enough to really not matter  :) :)
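The power-per-TB comparison is easy to verify from the spec-sheet figures quoted above; a quick sketch:

```python
# Quick check of the power-per-TB arithmetic above, using the spec-sheet
# figures quoted in this thread: (capacity in TB, operating W, idle W).
drives = {
    "Seagate Archive 8TB": (8, 7.5, 5.0),
    "WD Red 4TB": (4, 4.5, 3.3),
    "WD Red 6TB": (6, 5.3, 3.4),
}

for name, (tb, op_w, idle_w) in drives.items():
    print(f"{name}: {op_w / tb:.3f} W/TB operating, {idle_w / tb:.3f} W/TB idle")

# The 8TB Seagate comes out around ~0.94 W/TB operating -- better than the
# 4TB Red (~1.13 W/TB) but slightly behind the 6TB Red (~0.88 W/TB),
# matching the note above.
```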

 

... I simply wouldn't let that be a factor in deciding whether or not to use the SMR drives.

 

I'm still reading the thread, so this may have already come up, but when it comes to using drives in low-power boxes (like the HP Microserver with a 150W/200W PSU), the startup current is very important. According to http://www.seagate.com/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf the ST8000AS0002 needs 2.0 Amps to start up.

 

The WD Red 4TB needs 1.75A.

The Seagate ST4000VN000 needs 2.0A

The Hitachi HDS5C4040ALE630 needs 1.5A

The Hitachi HDS724040ALE640 needs 2.0A

The Seagate ST4000DM needs 2.0A

 

So, if you're running "low power" drives, you may need to be a bit careful. In my case, I could replace drives one by one (perhaps preclearing in another machine simultaneously) and as long as I replace the higher power ones first, I should be OK.

 

As more of the 8TB drives go in, I'm also going to reduce the number of drives in that server to reduce the power needed.


Does filesystem matter?

Reiserfs seems to have significant overhead in starting file writes... compared to, say, xfs.

 

On a fragmented reiserfs drive, I've had to wait for more than 30s just for a file copy to start. I'd imagine loads of small file operations are a nightmare.


 

I'm still reading the thread, so this may have already come up, but when it comes to using drives in low-power boxes (like the HP Microserver with a 150W/200W PSU), the startup current is very important. According to http://www.seagate.com/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf the ST8000AS0002 needs 2.0 Amps to start up.

 

The WD Red 4TB needs 1.75A.

The Seagate ST4000VN000 needs 2.0A

The Hitachi HDS5C4040ALE630 needs 1.5A

The Hitachi HDS724040ALE640 needs 2.0A

The Seagate ST4000DM needs 2.0A

 

So, if you're running "low power" drives, you may need to be a bit careful. In my case, I could replace drives one by one (perhaps preclearing in another machine simultaneously) and as long as I replace the higher power ones first, I should be OK.

 

As more of the 8TB drives go in, I'm also going to reduce the number of drives in that server to reduce the power needed.

 

A few things to bear in mind. The spec sheets give the maximum current for startup, not the actual draw. The spinning mass plays a big role in startup power demand, so the number of platters in the highest-capacity drives makes an impact (you can cheat and read the weight spec). Lastly, speed requires power: higher rotation speed means higher power demand (and temperature).


The startup current is all that matters when you are figuring out whether your PSU has enough juice to power your hard drives. That was kind of my point, but I didn't make it very well! :-)

 

That spinup draw is crucial, because if there is not enough power, the drive(s) won't spin up.


The startup current is all that matters when you are figuring out whether your PSU has enough juice to power your hard drives. That was kind of my point, but I didn't make it very well! :-)

 

That spinup draw is crucial, because if there is not enough power, the drive(s) won't spin up.

 

But also remember that configuring the drives for staggered spinup can mitigate that.

 

Take your 150W PSU. Allocate 75W for the CPU/motherboard/memory/etc. (just an example). When you try to spin up four drives at once (24W+24W+24W+24W), you run the risk of overloading the PSU (I know many PSUs can momentarily exceed rated values). But if you stagger those same drives, you get a series of spikes as each drive spins up and then drops to its typical draw. The last spike is only to 120W (75W+7W+7W+7W+24W).
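The arithmetic above can be sketched directly (a toy model using the thread's example figures; real PSUs split load across rails, which this ignores):

```python
# Toy model of the budget above: 150W PSU, ~75W base system load, drives
# spiking to ~24W at spinup (roughly 2A on the 12V rail) and settling
# to ~7W once spinning.
PSU_W, BASE_W = 150, 75
SPINUP_W, SPINNING_W = 24, 7
N_DRIVES = 4

# All four at once: base load plus four simultaneous spinup spikes.
simultaneous_peak = BASE_W + N_DRIVES * SPINUP_W  # 171W -- over the 150W PSU

# Staggered: each step has one drive spiking while earlier ones spin at ~7W.
staggered_peaks = [BASE_W + k * SPINNING_W + SPINUP_W for k in range(N_DRIVES)]

assert simultaneous_peak > PSU_W       # would risk overloading the PSU
assert max(staggered_peaks) <= PSU_W   # worst staggered spike is only 120W
```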

