danioj Posted January 20, 2016
Very informative thread! Can anyone confirm what the speeds are like when replacing a failed drive? AFAIK no one has actually done a drive rebuild using unRAID and these drives yet, or no one has posted the results of doing so. In the early days of speculation I saw worst-case quotes of potential rebuild times of ~48 hours! However, after my tests and experience with these drives I believe (and it is only speculation) that an unRAID bit-by-bit rebuild generates enough sequential writes that the drive recognises full bands being written, so the persistent cache and the band rewrites are skipped. If my speculative guess is correct, you are likely looking at around 18-22 hours. Note: sorry for those who hate speculation! I guess I could drop my spare precleared 8TB drive into my backup server, which is made up of 4 of these drives (when the replacement PSU comes), to add fact to this speculation.
JorgeB Posted January 20, 2016
AFAIK no one has actually done a drive rebuild using unRAID and these drives yet, or no one has posted the results of doing so.
I did one recently, not because of a failure but to upgrade a 3TB drive; the duration was similar to a parity check/sync.
Event: unRAID Data rebuild
Subject: Notice [TOWER7] - Data rebuild: finished (0 errors)
Description: Duration: 14 hours, 55 minutes, 14 seconds. Average speed: 149.0 MB/sec
Importance: normal
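As a quick back-of-the-envelope check (not from the report itself), the quoted duration and average speed are consistent: reading/writing a full 8 TB at a sustained 149.0 MB/s works out to almost exactly 14 hours 55 minutes.

```python
# Sanity-check the rebuild report: time = capacity / average speed.
CAPACITY_BYTES = 8_000_000_000_000  # 8 TB, decimal, as drive vendors count
SPEED_BPS = 149.0e6                 # 149.0 MB/s average, from the report

seconds = CAPACITY_BYTES / SPEED_BPS
hours, rem = divmod(seconds, 3600)
minutes = rem / 60
print(f"{int(hours)} h {minutes:.0f} min")  # 14 h 55 min, matching the report
```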
Helmonder Posted January 20, 2016
I am preclearing one at the moment... the process is REALLY slow. I am also preclearing a 6TB on another system; that one is now on the post-read of its second cycle while the Seagate is still on the post-read of its first. They are running on different systems, but the system the Seagate is on is not used at all, so I am not expecting that to be the differentiator.
danioj Posted January 20, 2016
I did one recently ... duration was similar to a parity check/sync.
Sweet as!!! Thanks for posting! Sorry for the clearly Aussie opener to the post; I'm feeling all patriotic at the moment, having just watched Australia's Nick Kyrgios hand out a straight-sets Aussie beating to Uruguay's Pablo Cuevas in the Australian Open!
SSD Posted January 20, 2016
A drive rebuild and a parity sync are identical processes from unRAID's perspective, so speeds would be the same: you read all drives but one, compute the parity (or missing data), and write the result to the remaining drive. Note that a parity check is typically much faster than a parity sync, because all it is doing is reading.
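The read-everything-but-one, compute, write cycle described above is, at its core, a bytewise XOR across the drives. A toy sketch of single-parity reconstruction (illustrative only, not unRAID's actual implementation):

```python
# Toy model of single-parity protection: parity is the bytewise XOR of all
# data drives, so any one missing drive can be rebuilt by XOR-ing the
# parity with the survivors. Illustrative only -- not unRAID code.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # three tiny "drives"
parity = xor_blocks(data)

# "Lose" drive 1 and rebuild it from parity plus the surviving drives:
rebuilt = xor_blocks([parity, data[0], data[2]])
print(rebuilt == data[1])  # True
```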
KuniD Posted January 23, 2016
After 140 hours (!!) the three-cycle pre-clear is complete! If this is the 'fast' version I dread to think how long the standard one would have taken. Here are the pre-clear results; I'm running a long S.M.A.R.T. test now.

Disk 1:

================================================================== 1.15b
= unRAID server Pre-Clear disk /dev/sdc
= cycle 3 of 3, partition start on sector 1
=
= Step 1 of 10 - Copying zeros to first 2048k bytes              DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it  DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.            DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4        DONE
= Step 5 of 10 - Clearing MBR code area                          DONE
= Step 6 of 10 - Setting MBR signature bytes                     DONE
= Step 7 of 10 - Setting partition 1 to precleared state         DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning    DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries              DONE
= Step 10 of 10 - Verifying if the MBR is cleared.               DONE
= Disk Post-Clear-Read completed                                 DONE
Disk Temperature: 34C, Elapsed Time: 139:43:40
======================================================================== 1.15b
== ST8000AS0002-1NA17Z  Z840C22P
== Disk /dev/sdc has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdc /tmp/smart_finish_sdc

ATTRIBUTE                 NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
Raw_Read_Error_Rate     = 119      100      6                  ok           214098856
Seek_Error_Rate         = 75       100      30                 ok           38053954
Spin_Retry_Count        = 100      100      97                 near_thresh  0
End-to-End_Error        = 100      100      99                 near_thresh  0
Airflow_Temperature_Cel = 66       68       45                 near_thresh  34
Temperature_Celsius     = 34       32       0                  ok           34
Hardware_ECC_Recovered  = 119      100      0                  ok           214098856
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.
0 sectors were pending re-allocation after post-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.
0 sectors were pending re-allocation after post-read in cycle 2 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.
0 sectors are pending re-allocation at the end of the preclear; the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear; the number of sectors re-allocated did not change.

Disk 2:

================================================================== 1.15b
= unRAID server Pre-Clear disk /dev/sdd
= cycle 3 of 3, partition start on sector 1
=
= Step 1 of 10 - Copying zeros to first 2048k bytes              DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it  DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.            DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4        DONE
= Step 5 of 10 - Clearing MBR code area                          DONE
= Step 6 of 10 - Setting MBR signature bytes                     DONE
= Step 7 of 10 - Setting partition 1 to precleared state         DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning    DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries              DONE
= Step 10 of 10 - Verifying if the MBR is cleared.               DONE
= Disk Post-Clear-Read completed                                 DONE
Disk Temperature: 34C, Elapsed Time: 139:33:22
======================================================================== 1.15b
== ST8000AS0002-1NA17Z  Z840C1P9
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdd /tmp/smart_finish_sdd

ATTRIBUTE                 NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS       RAW_VALUE
Raw_Read_Error_Rate     = 119      100      6                  ok           221340152
Seek_Error_Rate         = 70       100      30                 ok           12923767109
Spin_Retry_Count        = 100      100      97                 near_thresh  0
End-to-End_Error        = 100      100      99                 near_thresh  0
High_Fly_Writes         = 97       100      0                  ok           3
Airflow_Temperature_Cel = 66       67       45                 near_thresh  34
Temperature_Celsius     = 34       33       0                  ok           34
Hardware_ECC_Recovered  = 119      100      0                  ok           221340152
No SMART attributes are FAILING_NOW

0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.
0 sectors were pending re-allocation after post-read in cycle 1 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.
0 sectors were pending re-allocation after post-read in cycle 2 of 3.
0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.
0 sectors are pending re-allocation at the end of the preclear; the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear; the number of sectors re-allocated did not change.

How do these results look?
madpoet Posted January 23, 2016
So I put 2 4TB HGST Deskstars in a RAID0, intending to use them as parity for these Seagate 8TBs. unRAID shows it as 8TB, but when I assign it as parity I'm told it's not the largest drive. I know other members have done it with these exact disks, so I'm baffled why I can't.
SSD Posted January 23, 2016
After 140 hours (!!) the three cycle pre-clear is complete! ... How do these results look?
Good results. These drives are ready to be used in your array.
SSD Posted January 23, 2016
So I put 2 4TB HGST Deskstars in a RAID0 ... I'm baffled why I can't.
See HERE and the ensuing discussion.
garycase Posted January 23, 2016 (Author)
So I put 2 4TB HGST Deskstars in a RAID0 ... I'm baffled why I can't.
It depends on the controller you're using. The discussion bjp999 linked to above gives you a good feel for this. There was a more extensive discussion some time ago that listed specific controllers that did and did not work for this purpose, but a few minutes of searching didn't find it. I think most quality controllers will work fine; the one that started the earlier discussion was a nifty little 2-drive card that could attach directly to one of the disks ... but unfortunately it slightly truncates the drives, so it won't work for your purpose. I believe it was this one that did NOT work (and that started the earlier discussion): http://www.startech.com/HDD/Adapters/sata-dual-hard-drive-raid-adapter~S322SAT3R ... a shame, as it would be just about perfect for that purpose.
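The failure mode described above comes down to a handful of sectors: if the RAID layer reserves any space for its own metadata, the 2 x 4TB array ends up marginally smaller than a native 8TB drive, and unRAID's largest-drive check for parity fails. A hypothetical illustration (the 128 MiB reservation is a made-up figure, not that card's actual overhead; some controllers reserve nothing and work fine):

```python
# Illustrative arithmetic for why a 2 x 4 TB RAID0 can fail unRAID's
# parity size check. METADATA is an assumed example value; real
# controllers differ, and some reserve no space at all.
NATIVE_8TB = 8_000_000_000_000       # decimal bytes, as vendors count
DRIVE_4TB  = 4_000_000_000_000
METADATA   = 128 * 1024 * 1024       # hypothetical per-array reservation

raid0_usable = 2 * DRIVE_4TB - METADATA
print(raid0_usable >= NATIVE_8TB)    # False: the array is slightly truncated
print(NATIVE_8TB - raid0_usable)     # shortfall in bytes
```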
danioj Posted January 24, 2016
After 140 hours (!!) the three cycle pre-clear is complete! ... How do these results look?
Fine results. Pending a successful extended S.M.A.R.T. test I'd deploy them without hesitation. I would, however, just keep a little eye on "High_Fly_Writes" on the second disk you posted (/dev/sdd).
On the 7 Seagate 8TB drives I have deployed I have never seen this attribute degrade during or after a preclear, a S.M.A.R.T. test, or in deployment. My WD Red 3TB drives don't even report it. A quick bit of information on the attribute:

High Fly Writes
Description: This S.M.A.R.T. parameter indicates the count of these errors detected over the lifetime of the drive. HDD manufacturers implement a Fly Height Monitor that attempts to provide additional protection for write operations by detecting when a recording head is flying outside its normal operating range. If an unsafe fly-height condition is encountered, the write process is stopped and the information is rewritten or reallocated to a safe region of the drive.
Recommendation: This parameter is considered informational by most hardware vendors. Although degradation of this parameter can be an indicator of drive ageing and/or potential electromechanical problems, it does not directly indicate imminent drive failure. Regular backup is recommended; pay closer attention to other parameters and overall drive health.
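Keeping an eye on a single attribute like this is easy to script. A sketch that pulls the raw count out of `smartctl -A` output (assumes smartmontools is installed; the sample table below is a mock-up for illustration, and the simple last-column parse only handles plain integer raw values):

```python
# Extract the RAW_VALUE of a named SMART attribute from `smartctl -A`
# output. Parsing is separated from the smartctl call so it can be
# exercised against sample text.
import subprocess

SMART_SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
189 High_Fly_Writes         0x003a   097   097   000    Old_age  Always       -       3
"""

def raw_attribute(smart_text, name):
    # RAW_VALUE is the last column; returns None if the drive doesn't
    # report the attribute (e.g. the WD Reds mentioned above).
    for line in smart_text.splitlines():
        if name in line:
            return int(line.split()[-1])
    return None

def query_drive(device):
    # Needs smartmontools and permission to query the drive; illustrative,
    # with no error handling.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    return raw_attribute(out, "High_Fly_Writes")

print(raw_attribute(SMART_SAMPLE, "High_Fly_Writes"))  # 3
```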
KuniD Posted January 24, 2016
I would however just keep a little eye on "High_Fly_Writes" of the second disk you posted (/dev/sdd).
Ok, will keep an eye on it; extended S.M.A.R.T. results attached. If all looks good I'll start up the array (just 1 parity, 1 data disk and 1 SSD cache to start with) so I can begin configuring Dockers, etc. I have another 2 ST8000AS0002s ready to go in the box for a preclear. I'll then use my old box (1x 4TB parity and 3x 4TB data) as an off-site backup server for this one.
ultron-smart-20160124-0819-sdc-parity.zip
ultron-smart-20160124-0820-sdd-disk1.zip
danioj Posted January 24, 2016
Ok will keep an eye on it, attached extended S.M.A.R.T results.
Dude, minus that slight (slight, slight!) note I mentioned above re "High Fly Writes" (which of course you should just check every month, on your monthly scheduled parity check), get those bad boys into your array, get the other ones precleared, and we can have a look at those results! Good to go man!
pras1011 Posted January 24, 2016
I am now on 6.1.7 and the command timeout SMART value issue has disappeared. Is this good or bad?
danioj Posted January 24, 2016
I am now on 6.1.7 and the command timeout SMART value issue has disappeared. Is this good or bad?
I don't remember the specific issue you're referring to, but I'd guess it was similar to mine, for which I posted a support request here: https://lime-technology.com/forum/index.php?topic=45009.0
In summary, what I was advised was:
For that specific attribute, yes ... I would just ignore it. If you want to be sure it's not just a poorly seated cable, you could shut down, replace the SATA cable, and then reboot ... but the reality is these are known anomalies with Seagate drives and simply nothing to be concerned about. As Bonienl noted, the parameter's not even going to be monitored in the next release.
At the time of writing the unRAID version was 6.1.6, and as the quote suggests (which, btw, was from garycase, with information from Bonienl), the "command timeout" attribute is no longer monitored in 6.1.7 (which, I admit, I haven't checked). You therefore won't see any warnings, so it's not surprising the issue has gone!
pras1011 Posted January 24, 2016
Ok, that's good. HDDs are still making that annoying ticking sound though!!
danioj Posted January 24, 2016
Ok that's good. HDDs still making that annoying ticking sound though!!
Mine too. The Silverstone DS380 and the Fractal Design Define R5 do a good job of drowning it out, as does all the other bloody noise in this place! LOL! Either way, I can only really hear it if I sit and listen for it, which in the main I don't. Better things to do! Still, almost a year on, I'm so happy with these drives. I still don't understand those who are not choosing these as parity drives and/or are choosing H/W RAID0 (2 x 4TB) to work as parity! They are just perfect for unRAID as a data or parity disk, IMHO!
pras1011 Posted January 24, 2016
I agree danioj. Best drives ever. With these drives my server is so much smaller, and faster at reading and writing, than my old one. It seems extremely crazy to RAID0 a parity drive.
KuniD Posted January 24, 2016
Dude, minus that slight note I mentioned above re "High Fly Writes", get those bad boys into your array ... Good to go man!
Haha, thanks. The waiting game is over; time to start having a play!
Helmonder Posted January 24, 2016
How long does preclearing the 8TB shingled drive take for you? I am now at 97% of my first pre-read and that has already taken almost 70 hours. A 3-cycle preclear on a 6TB (WD Red) drive takes me a week; an 8TB should take approx 8 to 9 days in comparison, but with almost 3 days for the first pre-read alone I will never make that.
BRiT Posted January 24, 2016
How long does preclearing the 8TB shingled drive take for you?
Which version of the preclear script are you using? The unofficial faster one? If so, did you tell it to operate in faster mode?
Helmonder Posted January 24, 2016 Share Posted January 24, 2016 The official has, it has suited me for years.. Quote Link to comment
BRiT Posted January 24, 2016
The official one; it has suited me for years.
The old official version is 50% slower on the post-read step than the newer unofficial faster version, so the times you experience will be substantially longer than the times everyone else has reported.
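A rough model of where the time goes: per the preclear reports earlier in the thread, a 3-cycle preclear does one pre-read plus three zero/post-read pairs, i.e. seven full passes over the disk. The 150 MB/s average pass speed below is an assumption for illustration, not a measurement, and real preclears (as KuniD's ~140 h shows) run slower than this idealised sequential figure:

```python
# Rough 3-cycle preclear duration model: 1 pre-read + 3 x (zero + post-read)
# = 7 full passes. Speed is an assumed average, not a measured one.
TB = 1_000_000_000_000
capacity = 8 * TB
speed = 150e6                        # assumed bytes/s per pass
pass_hours = capacity / speed / 3600

fast_total = 7 * pass_hours          # newer unofficial script
# Old official script's post-read is ~50% slower (per BRiT above):
slow_total = (4 + 3 * 1.5) * pass_hours

print(f"fast: ~{fast_total:.0f} h, old script: ~{slow_total:.0f} h")
```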
Helmonder Posted January 24, 2016
Ok, clear... that explains it.
madpoet Posted January 24, 2016
Still to this day - 1 year on almost - so happy with these drives. ... They are just perfect for unRAID as a Data or Parity disk IMHO!
Because I'm an idiot who thought I needed 7200RPM! Honestly, with the problems I'm having with the controller card, I'm going to pull the 4TB drives, use an 8TB as the parity, and then fill the other 3 Silverstone bays with 8TBs as well. 56TB with parity is craaaazy in such a small space.