Seagate’s first shingled hard drives now shipping: 8TB for just $260


Recommended Posts

Cycle 2 has just finished (glad I ran it separately so I can interpret the reports before starting Cycle 3, given I don't have reports for Cycle 1 due to the power cut :-( ) and I have some data. No power cut this time!!!

 

S.M.A.R.T reports at the beginning of cycle 2 are clean. No logged S.M.A.R.T errors and no Current Pending Sectors or Reallocated Sectors.

 

Seems disk 1 is doing ok - but I have S.M.A.R.T errors and sectors were pending re-allocation after pre-read on disks 2 and 3.

 

The question I am asking myself is: are these errors and pending sectors due to the power cut, and the disks are OK (I note that there are no reallocated sectors), OR are they showing signs of a DOA drive?

 

I have started another cycle (cycle 3) on them anyway as a matter of course but can someone help me interpret these cycle 2 reports please? FULL S.M.A.R.T reports attached.

 

That's exactly as I would have expected. After a preclear the drives all look good. Interesting that one of the three survived the power cut unscathed, but the complete write of zeros by the preclear corrected that, and now all three report clean. Hopefully you'll have the same error-free results after cycle 3.

Link to comment


Woohoo cycle 3 has finished. Well, cycle 3.5  ;). It has only taken a week! LOL! All looking good thankfully!

 

No Current Pending Sectors or Reallocated Sectors. No additional S.M.A.R.T errors. As far as I am concerned these are all good! ~200 Hours of constant work is a nice workout!  8) I'm very glad the power loss didn't seem to have an impact on the drives.

 

Next step: configure an array and start backing up my data!  8)

 

Disk 1

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -A /dev/sdb
== ST8000AS0002-1NA17Z   Z8402JP1
== Disk /dev/sdb has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 20:11:14 (110 MB/s)
== Last Cycle's Zeroing time   : 16:17:40 (136 MB/s)
== Last Cycle's Post Read Time : 21:13:08 (104 MB/s)
== Last Cycle's Total Time     : 57:43:07
==
== Total Elapsed Time 57:43:07
==
== Disk Start Temperature: 31C
==
== Current Disk Temperature: 35C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdb  /tmp/smart_finish_sdb
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   118     112            6        ok          200546392
          Seek_Error_Rate =    77      75           30        ok          51612607
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    65      69           45        near_thresh 35
      Temperature_Celsius =    35      31            0        ok          35
   Hardware_ECC_Recovered =   118     112            0        ok          200546392
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change. 
============================================================================

 

Disk 2

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -A /dev/sdc
== ST8000AS0002-1NA17Z   Z8402L5T
== Disk /dev/sdc has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 19:55:01 (111 MB/s)
== Last Cycle's Zeroing time   : 16:06:33 (137 MB/s)
== Last Cycle's Post Read Time : 20:59:31 (105 MB/s)
== Last Cycle's Total Time     : 57:02:30
==
== Total Elapsed Time 57:02:30
==
== Disk Start Temperature: 30C
==
== Current Disk Temperature: 33C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   120     115            6        ok          241352488
          Seek_Error_Rate =    77      75           30        ok          51594284
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    67      70           45        near_thresh 33
      Temperature_Celsius =    33      30            0        ok          33
   Hardware_ECC_Recovered =   120     115            0        ok          241352488
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change. 
============================================================================

 

Disk 3

========================================================================1.15b
== invoked as: ./preclear_bjp.sh -f -A /dev/sdd
== ST8000AS0002-1NA17Z   Z8402RLJ
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1 
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 19:51:02 (111 MB/s)
== Last Cycle's Zeroing time   : 16:27:28 (135 MB/s)
== Last Cycle's Post Read Time : 21:05:08 (105 MB/s)
== Last Cycle's Total Time     : 57:24:42
==
== Total Elapsed Time 57:24:42
==
== Disk Start Temperature: 29C
==
== Current Disk Temperature: 32C, 
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdd  /tmp/smart_finish_sdd
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   109     115            6        ok          23575200
          Seek_Error_Rate =    77      75           30        ok          51389300
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    68      71           45        near_thresh 32
      Temperature_Celsius =    32      29            0        ok          32
   Hardware_ECC_Recovered =   109     115            0        ok          23575200
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change. 
============================================================================

preclear_finish_Z8402JP1_2015-04-10_Disk_1.txt

preclear_finish_Z8402L5T_2015-04-10_Disk_2.txt

preclear_finish_Z8402RLJ_2015-04-10_Disk_3.txt

Link to comment

Woohoo cycle 3 has finished. Well, cycle 3.5. It has only taken a week! LOL! All looking good thankfully!....

~200 Hours of constant work is a nice workout!

....

I'm very glad the power loss didn't seem to have an impact on the drives. Next step: configure an array and start backing up my data!

 

My suggestion would be for a final SMART long test on each drive.

 

You can trigger these simultaneously without issue as long as they are not assigned to the array and are not being controlled by a spin down timer.

 

Yes, it will take another day or so - approx. 17 hours.

Yes, it's similar to the post-read.

However, this puts an entry in the SMART self-test log and provides a final level of confidence.

Personally, I like to put that entry into the SMART logs so that I know when the drive was put into service for data usage.

When I have a question, or on some maintenance period, I can review the test logs and/or run another test, thus putting another line in the log for review.

Link to comment

Woohoo cycle 3 has finished. Well, cycle 3.5. It has only taken a week! LOL! All looking good thankfully!....

~200 Hours of constant work is a nice workout!

....

I'm very glad the power loss didn't seem to have an impact on the drives. Next step: configure an array and start backing up my data!

 

My suggestion would be for a final SMART long test on each drive.

 

 

I'm going to take your suggestion, thank you. If my memory serves me correctly, the command is:

 

smartctl -t long /dev/sdx

Right?

 

EDIT1: Yes I was right. Found the info here: http://lime-technology.com/wiki/index.php/Troubleshooting#Obtaining_a_SMART_report

 

Long tests on all three drives started simultaneously in separate ttys. "Please wait 948 minutes for the test to complete".  LOL!  8)

 

EDIT2: For anyone reading this in the future who is copying what I have done, the command I have written executes the test in the background. To execute the tests in foreground (captive) mode, the "-C" switch must be added to the command like so:

 

smartctl -C -t long /dev/sdx

 

If you do execute the test in the background as I did (without the -C switch), it is nice to know the progress. So I did this for all drives:

 

smartctl -l selftest /dev/sdx

 

Note: I changed sdx to sdb, sdc, and sdd in 3 separate commands to check the status of the 3 drives.
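
For anyone who wants to script this rather than typing the commands in separate ttys, here is a minimal sketch - assuming the three drives are /dev/sdb, /dev/sdc and /dev/sdd as in this setup - that starts the extended test on each drive and then dumps each self-test log so you can watch the "Remaining" percentage:

# start the extended (long) self-test on each drive; the test runs in the background on the drive itself
for d in /dev/sdb /dev/sdc /dev/sdd; do
    smartctl -t long "$d"
done

# later, check progress on each drive (look at the "Remaining" column of the log)
for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "=== $d ==="
    smartctl -l selftest "$d"
done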

 

Example output:

 

=== START OF READ SMART DATA SECTION ===

SMART Self-test log structure revision number 1

Num  Test_Description    Status                                  Remaining  LifeTime(hours)  LBA_of_first_error

# 1  Extended offline    Self-test routine in progress      80%      190        -

 

Link to comment

... "Please wait 948 minutes for the test to complete"

 

Waiting ...  :)

 

Glad the drives survived the power outage without any damage -- that's generally what happens, but you CAN get actual unrecoverable damage if things happen just right [or actually just WRONG  :) ]

 

Well, I'm glad to report that all the long tests completed without error. No additional S.M.A.R.T errors or attributes to worry about, in my opinion. It has taken me a long time (compared to what I am used to) but it's nice to have this level of confidence in these disks. In the future I am definitely going to make "long test, 3 preclears, long test" my disk-preparation routine.

 

Backup time!!  :)8)

post_longtest_Z8402JP1_2015_04_11_Disk_1.txt

post_longtest_Z8402L5T_2015_04_11_Disk_2.txt

post_longtest_Z8402RLJ_2015_04_11_DIsk_3.txt

Link to comment

... Backup time!!  :)8)

 

Finally get to actually USE this nifty new array !!

 

I will when I put my new Noctua NF-12's in the rig later today.

 

Until then I am thinking about what the best strategy is for backing up 12TB utilising the new array containing only these drives (1 x Parity and 2 x Data).

 

Given the discussion earlier, it's clear that to maximise the performance of the drives I need to make the writes as sequential as possible.

 

I plan on setting the backup up in SyncBack (on a W7 VM under VirtualBox on my iMac) from the user share on my Main Server to a corresponding user share on this Backup Server and hitting GO! It's going to be running on a dedicated Gigabit switch, so I don't feel there will be any speed issues related to copying over the network.

 

Back to the disk-related question - I have a "feeling" that if I assign the Backup Server a parity-protected array from the beginning and then execute the backup, those writes to the parity disk will be considered "random" and I may observe speed issues. If I keep the array unprotected while I do the initial backup and then assign the parity disk once that's done, the build of the parity disk will be done more sequentially and be less likely to hit speed issues due to the nature of the drives.

 

Have I understood the issues with these disks correctly? Does what I am suggesting make sense or am I way off?

Link to comment

... Given the discussion earlier, it's clear that to maximise the performance of the drives I need to make the writes as sequential as possible.

 

Agree that sequential writes should indeed provide the best performance with these drives.

 

 

I plan on setting the backup up in SyncBack (on a W7 VM under VirtualBox on my iMac) from the user share on my Main Server to a corresponding user share on this Backup Server and hitting GO!

 

Sounds fine -- that's exactly what I'd recommend.

 

 

I have a "feeling" that if I assign the Backup Server a Parity protected array from the beginning then execute the backup then those writes to the Parity disk will be considered "random" and I may observe speed issues. If I keep the array unprotected while I do the initial backup and then assign the Parity disk once thats done then the build of the Parity disk will be done more sequentially and less likely to be hit with speed issues due to the nature of the drives.

 

I don't think it matters => either way, SyncBack is going to write the files sequentially.    I'd do this WITH parity ... it'll take longer, but the writes will be immediately fault tolerant AND you'll know for sure whether or not the shingled drives cause any significant write speed issues when used as a parity drive.    Given the results to date with all the testing in this thread, I don't expect that to be the case, but copying 12TB of data to them will be an excellent confirmation of that.

 

... the only non-sequential bit will be when UnRAID switches between the two data drives.    If you want to maximize the possibility of an SMR-induced problem [  8) ], you could set your allocation strategy to "most free".    This will cause frequent switches between the drives ... thus making the parity writes non-sequential.    This would be an EXCELLENT test of the likely worst-case performance.    If you'd prefer to minimize the possibility of this, use "fill up" => with this setting there will only be one switch between the drives during the entire copy.    With 12TB of data, "high water" would result in about 3 switches between drives.   

 

Personally, I'd love to see the results with allocation set to "most free" => but if you do this, you may want to just copy a single share that's got a couple TB in it for your initial TeraCopy ... so if it causes writes to degrade too much you can change the allocation for the rest of the copies.    As long as your files are relatively large I don't think this will cause significant problems, but it is by FAR the best test of the shingled drives.

 

As an initial test of the best possible write speed, you may want to copy a single large (several GB) file to the array and see how fast it writes.    I'd anticipate between 35 and 45MB/s, but it may be a bit faster, since the initial writes will be on the fastest outer cylinders and UnRAID will buffer some of the data in its memory.
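
As a complementary check that takes the network and the client out of the picture entirely, one could also measure the raw array write speed from the server console. A rough sketch, with a placeholder share path (adjust it to one of your own shares); dd prints the average throughput when it finishes:

# write an 8GB test file to the array and report the average write speed
# conv=fdatasync makes dd flush to disk before reporting, so the figure is honest
dd if=/dev/zero of=/mnt/user/Backup/speedtest.bin bs=1M count=8192 conv=fdatasync

# clean up the test file afterwards
rm /mnt/user/Backup/speedtest.bin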

 

 

 

Link to comment

Personally, I'd love to see the results with allocation set to "most free" => but if you do this, you may want to just copy a single share that's got a couple TB in it for your initial TeraCopy ... so if it causes writes to degrade too much you can change the allocation for the rest of the copies.    As long as your files are relatively large I don't think this will cause significant problems, but it is by FAR the best test of the shingled drives.

 

Oh go on then  ;) I'll do the whole thing using the "Most Free" allocation method and just let it run till completion!

 

How would you like me to document / log the test?

Link to comment

Basically just note the final stats in TeraCopy => it shows how long the copy took and the average data rate.  It shows the same thing for the validation ("Test") if you do that as well.

 

If you note a few of the instantaneous rates along the way, that would be nice as well -- especially if there's significant variance [I wouldn't expect a lot of variance unless you have a lot of small files ... these are typically much slower than large files => otherwise it should be fairly consistent, with a general slowdown as the drives move towards the inner cylinders.]

 

 

 

 

Link to comment

Basically just note the final stats in TeraCopy => it shows how long the copy took and the average data rate.  It shows the same thing for the validation ("Test") if you do that as well.

 

If you note a few of the instantaneous rates along the way, that would be nice as well -- especially if there's significant variance [I wouldn't expect a lot of variance unless you have a lot of small files ... these are typically much slower than large files => otherwise it should be fairly consistent, with a general slowdown as the drives move towards the inner cylinders.]

 

Sounds reasonable  :) A few questions before I start though:

 

If I run it using TeraCopy, do you know if TeraCopy has reliable copy validation? In this backup I'd like to be sure that the copy has completed successfully. I was intending to use the confirm-copy switch in SyncBack, but as long as TeraCopy has something similar then all is good.

 

In addition, when I switch to SyncBack for my weekly backup run after this initial backup using TeraCopy, do you know if SyncBack will allow me to configure it for incremental mode given it won't have done the initial backup?

Link to comment

On the TeraCopy Menu there's an option "Always test after copy" => this will run a complete validation after the copy completes.

 

SyncBack doesn't "care" how the original copy was made ... the profile will work just fine without recopying the files you've already copied.

 

Roger.  8)

Link to comment

... I'm really looking forward to the results.

 

I suspect everyone following this thread is  :)

 

... and doing the copy with "most free" allocation will provide a worst-case result, since the destination will change constantly back-and-forth between the two data drives (it would change for every file if they were all the same size).    This means the parity writes will definitely NOT be entirely sequential -- so the persistent cache will get a good workout on that drive.  MOST of the writes on parity should still bypass the persistent cache, assuming the files are primarily large media files; but there will be plenty of usage of the cache ... so it'll be very interesting to see if this results in "hitting the wall" of performance that occurs if the persistent cache gets full and the drive has to do a bunch of band rewrites to clear it.    The data drives shouldn't encounter this, since they'll be written sequentially ... although the small time delay between writes (while the other data drive is in use) may have an impact on the logic the drive's firmware uses to determine whether or not a band rewrite can be skipped.

 

This is going to take a LONG time to get the results, however.    Typical UnRAID write speed with parity is ~ 35MB/s.  At that rate, writing 12TB will take ~ 4 days => and that doesn't include the verification phase, which will take another 2-3 days.    So I'd expect roughly a week before we have the results  :)      And if the shingled parity drive hits the performance wall multiple times, it could be a good bit longer than that !!

 

 

 

Link to comment

Daniel =>  If your TeraCopy is underway, it'd be nice to get an occasional update on the status.    The screen always shows the % done and how much has actually been copied so far ... and it also shows the time the operation started.

 

No need to post this a LOT ... but after a day or so of running it'd be interesting to see if it's showing any signs of slowing down due to the shingled parity.

 

Link to comment

Daniel =>  If your TeraCopy is underway, it'd be nice to get an occasional update on the status.    The screen always shows the % done and how much has actually been copied so far ... and it also shows the time the operation started.

 

No need to post this a LOT ... but after a day or so of running it'd be interesting to see if it's showing any signs of slowing down due to the shingled parity.

 

Wilco.

 

I haven't started copying yet. I went out last night, so a bit of a "sleep in"  ;) this morning!

 

I had to parity-sync the array (which completed while I was in bed); it took a sweet 15 hours 10 minutes ::). I have just hit go on the parity check and will start the copy when that has finished.

Link to comment

That is a GREAT parity sync speed => average of over 146MB/s !!

 

Based on all your testing to date I expected that -- i.e. absolutely no band rewrites needed because the entire operation was sequential -- but it's nice to see it confirmed.

 

The parity check should also be quick, since there won't be ANY writes at all (unless there were some undetected errors in the sync).

 

The real test will be the 12TB of data written in a manner that will intentionally "worst case" the parity writes.  Definitely looking forward to those results.

 

Link to comment

That is a GREAT parity sync speed => average of over 146MB/s !!

 

Based on all your testing to date I expected that -- i.e. absolutely no band rewrites needed because the entire operation was sequential -- but it's nice to see it confirmed.

 

The parity check should also be quick, since there won't be ANY writes at all (unless there were some undetected errors in the sync).

 

The real test will be the 12TB of data written in a manner that will intentionally "worst case" the parity writes.  Definitely looking forward to those results.

 

The Parity Check has just finished after 14 Hours 58 Minutes.

 

So, here we go  8)

 

 

Environment

 

Copy Client

 

2014 Apple iMac

 

Main Server

 

Unraid 5.04 Pro

Intel® Celeron® CPU G550 @ 2.60GHz

ASUSTeK COMPUTER INC. - P8B75-M LX

G.Skill 1666 Ripjaws 4096 MB

Antec Neo Eco 620

 

Parity-protected array comprising 4 x WDC_WD30EFRX-68AX9N0 3TB drives (1 x Parity 4 x Data)

 

Backup Server

 

Unraid 6.0-beta14b Pro

ASRock C2550D4I Mini ITX Motherboard

Kingston 4GB (1x4GB), PC-12800 (1600MHz) ECC Unbuffered DDR3L, ValueRAM, CL11, 1.35V, Single Stick

Silverstone ST45SF-G 450W SFX Form Factor Power Supply - ST45SF-G

 

Parity-protected array comprising 3 x Seagate 8TB Archive (SMR) HDD, SATA III, 5900RPM, 128MB cache (1 x Parity, 2 x Data)

 

**Both Unraid Environments have User Shares enabled and will be utilising these shares in the test**

 

Network

 

All clients are running Gigabit NICs

TP-Link 8-Port Gigabit Switch

Cat6 Cable

 

 

Test

 

The test will:

 

Copy ~5TB data (Reduced from 11TB so I don't have to mess with my shares).

Utilise a VM running Windows 7 on OS X Yosemite as the copy client.

Use the TeraCopy Free program to manage the copy between the 2 Unraid Servers.

Use the TeraCopy Free program to perform a CRC check after each copy to verify a successful transfer.

Copy data from a User Share on the Main Server to a user share on the Backup Server using SMB Protocol.

Use the Allocation Method "Most Free" within Unraid on the Backup Server User Share (So data will be written in a manner that is intentionally attempting to demonstrate "worst case" parity writes for the new SMR Drives).

 

And .... hopefully provide us with some good valid data!  :)8)

 

 

Right, I have just benchmarked the transfer rate. I got myself a 10GB test *.mkv file and copied it from one share to the other in the same way the main test will be run. The result was a sustained transfer rate of 40MB/s. The post-transfer CRC check (which ran successfully) was done at a sustained read rate of 41MB/s.

 

Benchmark Copy

SG_8_TB_Shingle_Baseline_Copy.png

 

Benchmark Verify

SG_8_TB_Shingle_Baseline_Verify.png

 

 

Right, now we have our benchmark! I have just hit GO! It is running @ 41MB/s.

 

Test Start

SG_8_TB_Shingle_Test_Start.jpg

 

8) 8)

Link to comment

...Copy data from a User Share on the Main Server to a user share on the Backup Server using SMB Protocol.

Use the Allocation Method "Most Free" within Unraid on the Backup Server User Share (So data will be written in a manner that is intentionally attempting to demonstrate "worst case" parity writes for the new SMR Drives).

...

So, copying will be going unRAID Main==>LAN==>iMac==>LAN==>unRAID Backup...

I think the network will be the most limiting factor in here.

 

I guess a more stressful test would be to NFS-mount unRAID Main on unRAID Backup and then copy unRAID Main==>LAN==>unRAID Backup, using Midnight Commander.
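
For reference, a rough sketch of what that NFS approach might look like from the Backup server's console (the server name and share are placeholders, and NFS export has to be enabled for the share on the Main server):

# mount the Main server's share on the Backup server (names are examples only)
mkdir -p /mnt/remote/Media
mount -t nfs mainserver:/mnt/user/Media /mnt/remote/Media

# then copy with Midnight Commander (mc), or with a plain cp, e.g.:
cp -a /mnt/remote/Media/. /mnt/user/Media/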

Link to comment

...Copy data from a User Share on the Main Server to a user share on the Backup Server using SMB Protocol.

Use the Allocation Method "Most Free" within Unraid on the Backup Server User Share (So data will be written in a manner that is intentionally attempting to demonstrate "worst case" parity writes for the new SMR Drives).

...

So, copying will be going unRAID Main==>LAN==>iMac==>LAN==>unRAID Backup...

I think the network will be the most limiting factor in here.

 

I guess a more stressful test would be to NFS-mount unRAID Main on unRAID Backup and then copy unRAID Main==>LAN==>unRAID Backup, using Midnight Commander.

 

But does Midnight Commander give throughput numbers? Perhaps an rsync command could do the task, but I don't know the exact params to set to view incremental throughput.
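
For what it's worth, rsync can report throughput as it goes. A sketch with placeholder paths, assuming the Main server's share is already NFS-mounted as above and that rsync is version 3.1 or newer for --info=progress2:

# shows a running total, percentage done and overall transfer rate for the whole job
rsync -a --info=progress2 /mnt/remote/Media/ /mnt/user/Media/

# on older rsync versions, --progress shows a per-file rate instead
rsync -a --progress /mnt/remote/Media/ /mnt/user/Media/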

 

Maybe one only has to monitor and graph the network usage to see how the transfer fluctuates, if at all. That might catch any hiccups the drives might exhibit if they have to do full band rewrites.
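
One simple way to watch that on the Backup server without any extra tools - assuming the NIC shows up as eth0 - is to sample the received-bytes counter in /proc/net/dev every few seconds and print the incoming rate:

# prints the received MB/s on eth0 every 5 seconds (the interface name is an assumption)
while true; do
    RX1=$(awk '/eth0:/ {print $2}' /proc/net/dev)
    sleep 5
    RX2=$(awk '/eth0:/ {print $2}' /proc/net/dev)
    echo "$(( (RX2 - RX1) / 5 / 1024 / 1024 )) MB/s"
done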

Link to comment

...Copy data from a User Share on the Main Server to a user share on the Backup Server using SMB Protocol.

Use the Allocation Method "Most Free" within Unraid on the Backup Server User Share (So data will be written in a manner that is intentionally attempting to demonstrate "worst case" parity writes for the new SMR Drives).

...

So, copying will be going unRAID Main==>LAN==>iMac==>LAN==>unRAID Backup...

I think the network will be the most limiting factor in here.

 

I guess a more stressful test would be to NFS-mount unRAID Main on unRAID Backup and then copy unRAID Main==>LAN==>unRAID Backup, using Midnight Commander.

 

**I'll put this at the top here as I might be a little out of my depth so go easy on me**

 

All three are on the same switch and there is nothing else running on it. I might be wrong, but I was under the impression that while it "could" be a limiting factor, there is no real way that it "will". While I think you are right that taking the network (and the intermediary client) out of the equation makes for a better test, I think we can reasonably say that it's NOT going to be much of a factor in the results.

 

I know with my Main Server (which has WD Reds) that I can only write to the array (without cache) at ~40MB/s. However, when I use the Main Server with the cache drive, I am seeing speeds north of ~100MB/s.

 

So in comparison, for me, the speeds that the test is showing initially are what I would expect of writes to a parity-protected array comprising similar 5400 RPM PMR drives. The real test (for me) here is if/when we see the effects of the SMR drives' cache-mitigation technology kicking in, and what impact that has on this test.

Link to comment

I don't agree at all that the network limits anything here.    The network speed is FAR above what UnRAID is capable of for writes to a protected array.    Even a direct copy from attached drives wouldn't be any faster.    The only time it would indeed be faster is if you were copying the data to an unprotected (i.e. no parity drive) array ... in which case a local copy would be at "disk speed" (probably close to 200MB/s on the outer cylinders), whereas an over-the-network copy would be limited to ~ 120MB/s.  And of course that wouldn't provide any useful data vis-à-vis the shingled drive's performance as a parity drive  8)

 

The test you're doing is as good as it gets !!  You're absolutely correct that the REAL question here is whether or not you hit a point where there is significant degradation of the write speeds due to the shingled parity drive.  The only "better" test would be if you had more data drives, so the "randomness" of the parity writes was even more scattered ... but I doubt that would make much of a difference.

 

Bottom line:  If this runs at close to 40MB/s for the entire 5TB you're copying, I think it's safe to say the 8TB shingled drives are just fine for most UnRAID uses.    The exception would be users who have a LOT of very small files instead of large media files, as this would result in a lot more band rewrite situations  [I'm assuming most of your files are relatively large media files ... is that correct?].

 

Looking forward to a status update after a few hours  :)

 

Link to comment

A little update .....

 

I fell asleep on the couch last night (watching the EPL in Australia means you have to stay up LATE) and didn't check the copy when I slid off the chaise and into bed. Unfortunately the VM must have crashed, as the iMac had reset itself. I figured it was an issue with the VM itself, as I had only assigned it limited resources. So I corrected this before I went to work this morning and set it running again (I let TeraCopy overwrite the existing files).

 

Anyway, that was ~3 hours ago, as it's lunchtime now. I just remoted in and had a check:

 

It is @ 10% (~500GB) and is reporting a speed of between 39MB/s and 41MB/s **

 

**I took screen shots with the iPhone and will upload them to this post when I get home tonight!

 

Will check again in a few hours!

 

:):D8)

Link to comment
