Seagate 8TB Shingled Drives in UnRAID



I haven't contributed to this thread for a while BUT I was reminded today about how good these drives are for the general unRAID user.

 

Ran a Parity Check on my Main Server and Backup Server as it is the start of the month:

 

Main Server: 5 x 3TB WD Reds + 2 x 8TB Seagate Shingled Drives (1 x 8TB drive as parity)

 

Event: unRAID Parity check
Subject: Notice [MAIN] - Parity check finished (0 errors)
Description: Duration: 19 hours, 27 minutes, 9 seconds. Average speed: 114.3 MB/s
Importance: normal

 

Backup Server: 4 x 8TB Seagate Shingled Drives (1 8TB Drive as Parity)

 

Event: unRAID Parity check
Subject: Notice [BACKUP] - Parity check finished (0 errors)
Description: Duration: 15 hours, 55 minutes, 55 seconds. Average speed: 139.5 MB/s
Importance: normal

 

Neither server saw any other disk activity during the checks. The specs of the Main Server are SO MUCH better than the Backup Server's (see sig and build thread), and yet look how much faster the Backup Server completed its parity check.

 

Just for information. Hope it helps someone.


Danioj

That has been my experience as well. Almost 140 MB/s and 15 hours on a parity check with 3 x 8TB and 1 x 1TB (SSD) in the array. The only time I have seen slow speeds was when I was moving my 7TB of content over to the array; it dropped to 40 MB/s after a few minutes of sustained writing. I don't have that issue anymore, as I have a 240GB cache array.


I do not know what to look for in preclear reports. I haven't had to upgrade my server much, but I am maxed out on storage and need to add some 8TB Seagate drives. I need my server back up ASAP since it is the media source for our home. I have finished running one preclear on two 8TB drives and want to start building my new case now and get the server back up. I am copying the preclear reports below, and also wondering: is running only one preclear cycle sufficient?

 

 

================================================================== 1.15
=                unRAID server Pre-Clear disk /dev/sde
=               cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Verifying if the MBR is cleared.              DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 33C, Elapsed Time:  64:40:56
========================================================================1.15
== ST8000AS0002-1NA17Z  Z840F2KP
== Disk /dev/sde has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sde  /tmp/smart_finish_sde
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   118     108            6        ok          191316368
          Seek_Error_Rate =    72     100           30        ok          19429140
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    67      72           45        near_thresh 33
      Temperature_Celsius =    33      28            0        ok          33
   Hardware_ECC_Recovered =   118     108            0        ok          191316368
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.

 

 

 

================================================================== 1.15

=                unRAID server Pre-Clear disk /dev/sdd
=               cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Verifying if the MBR is cleared.              DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 32C, Elapsed Time:  64:30:17
========================================================================1.15
== ST8000AS0002-1NA17Z  Z840EWNX
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdd  /tmp/smart_finish_sdd
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   117     118            6        ok          117718448
          Seek_Error_Rate =    72     100           30        ok          19359000
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
          High_Fly_Writes =    99     100            0        ok          1
  Airflow_Temperature_Cel =    68      74           45        near_thresh 32
      Temperature_Celsius =    32      26            0        ok          32
   Hardware_ECC_Recovered =   117     118            0        ok          117718448
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.

 

 


...I do not know what to look for in preclear reports... wondering if running only one preclear cycle is sufficient? [preclear reports quoted above]

 

As far as I am concerned those disks are fine. I would deploy them without hesitation, with only one caveat: I notice you only ran both disks through one preclear cycle. I normally run my disks through three cycles, with a short SMART test before and a LONG test after, to make sure the disks are thoroughly exercised before I add my precious data to them. That said, I know others would disagree, and had those results come from a disk after three cycles I wouldn't even have bothered writing the last two sentences.
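For reference, here is a minimal sketch of that routine from the console. The script path and /dev/sdX are placeholders for your own setup, and the sleep just gives the short self-test time to finish:

# Short SMART test before, three preclear cycles, long SMART test after.
# /boot/preclear_disk.sh and /dev/sdX are assumptions - adjust for your install.
smartctl -t short /dev/sdX             # ~2 minute electrical/mechanical self-test
sleep 180
smartctl -a /dev/sdX                   # review the self-test log before starting
/boot/preclear_disk.sh -c 3 /dev/sdX   # three full write/read cycles (days each on an 8TB)
smartctl -t long /dev/sdX              # extended surface scan to finish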

 

Just in case you are wondering, you can safely ignore the High_Fly_Writes count on the sdd disk. I think it's fair to say the drive is almost certainly fine. AFAIK this Seagate attribute is not an "error" count as such, just a tally of high-fly write events. From what I have read it is for information only and will not contribute to a SMART failure, nor does it indicate an imminent failure of the disk. My WD Reds don't even report it, BUT the Seagates do.
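If you want to keep an eye on that attribute yourself, you can pull just its row out of the SMART table (the device name is only an example):

smartctl -A /dev/sdd | grep -i high_fly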

 

Summary: if you're happy relying on just one preclear cycle, the disks are fine. Deploy them.
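And for anyone else unsure what to look for in these reports: the rows that really matter are the reallocated and pending sector counts (SMART IDs 5, 197 and 198). A quick way to pull just those rows (device name is an example):

smartctl -A /dev/sde | awk '$1==5 || $1==197 || $1==198'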


...Ran a Parity Check on my Main Server and Backup Server as it is the start of the month... [parity check results quoted above]

 

Kind of off topic, but does parity check speed depend on the amount of data in your array? The reason I ask is that I have a 14-disk 49TB array, and the last parity check I ran took a little less than 18 hours at a reported speed of 93.5 MB/s.


...does parity check speed depend on the amount of data in your array? The reason I ask is that I have a 14-disk 49TB array, and the last parity check I ran took a little less than 18 hours at a reported speed of 93.5 MB/s.

 

I don't think so. unRAID computes parity across the "bits" of each data disk's partition. AFAIK the drives are never "truly" empty even when they hold no files: there is file system formatting, and different disks will have bits set in different places whether there is user data on them or not.

 

This might need to be confirmed by one of the older, more experienced hands, but I THINK I am right.


...does parity check speed depend on the amount of data in your array? The reason I ask is that I have a 14-disk 49TB array, and the last parity check I ran took a little less than 18 hours at a reported speed of 93.5 MB/s.

 

No -- it's completely irrelevant. A parity check simply reads all of the bits and XORs them to confirm the result matches the corresponding bit on the parity disk. It doesn't matter whether the disks contain data or not... or even if they're unformatted.
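A toy illustration of that, with three made-up bytes standing in for three data disks (the values are arbitrary, which is exactly the point):

# Parity is the XOR of the corresponding data bits - content is irrelevant.
d1=0xA5; d2=0x3C; d3=0x0F
parity=$(( d1 ^ d2 ^ d3 ))
printf 'parity byte = 0x%02X\n' "$parity"
# A "check" re-reads the data, recomputes the XOR and compares it to parity:
(( (d1 ^ d2 ^ d3) == parity )) && echo "0 errors"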

 


...No -- it's completely irrelevant. A parity check simply reads all of the bits and XORs them to confirm the result matches the corresponding bit on the parity disk...

 

A much more eloquent and factual explanation than mine!  :)


Kind of off topic but does parity check speed depend on the amount of data in your array? The reason I ask is I have a 14 disk 49TB array and the last parity check I ran took a little less than 18 hours with a reported speed of 93.5 MB/s.

 

Parity check speed is affected by many factors, including the size of the parity disk, the controllers used (older ones have limited bandwidth when accessing all disks at once), the disks used (higher platter-density disks are faster), mixing different-sized disks (the more sizes you mix, the slower it will be), and to a lesser degree the CPU and memory used.


... and, of course, the rotational speed of the disks  :)

 

As I noted earlier, however, the one thing that is NOT a factor is what's actually stored on the disk -- they can be empty; unformatted; full; etc. and it simply doesn't matter.

 

As Johnnie noted, the controllers can make a difference, because they may be a bottleneck in the transfer speeds (i.e. the disks can provide data faster than the controller can move it). The areal density of the disks makes a difference because it determines how much data passes the heads in a single revolution; as I noted, the rotational speed matters because it determines how long a revolution takes; and the size of the parity disk matters because it determines how much data has to be checked.

Mixing sizes matters because all modern disks use zoned sectoring -- there are more sectors per track on the outer cylinders than the innermost ones -- so a disk slows down considerably as the check moves towards its inner cylinders, and with mixed disk sizes in the array this happens multiple times. Note that the check can never run faster than the slowest disk currently involved in it: with multiple sizes it will be throttled several times as disks work through their innermost cylinders, with multiple densities it will be limited by the least dense disk currently involved, and so on.

 

The best parity performance for a given system will be if all the disks are the same size, rotation rate, and areal density.
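A back-of-envelope model of that mixed-size effect for an array like Daniel's main server. The per-phase speeds here are assumptions for illustration, not measurements:

# Phase 1: first 3TB, all disks still involved, limited by the slowest (say 100 MB/s).
# Phase 2: remaining 5TB, only the 8TB disks left (say 150 MB/s).
awk 'BEGIN {
  secs = 3e6/100 + 5e6/150                          # decimal TB -> MB
  printf "modelled check: %.1f hours\n", secs/3600  # ~17.6 hours
}'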

 

 

 

 


I guess I'm confused by the reported speed then. If Danio has a 23TB array that took 19+ hours at 114 MB/s, how is my 49TB array that took 18 hours only running at 93 MB/s?

 

Average speed: 114 MB/s

 

The problem with averages is that they hide variation. Averages are simple to calculate and are sometimes a lazy way of describing past performance: over a given period the speed may start at one level and end at another, and simple math puts the average somewhere in between those two points.
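For example (numbers made up): a check that spends half its elapsed time at 180 MB/s and the other half at 60 MB/s reports a healthy-looking average that hides the slow half entirely:

awk 'BEGIN { printf "average = %.0f MB/s\n", (180 + 60) / 2 }'   # 120 MB/s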


I guess I'm confused by the reported speed then. If Danio has a 23TB array that took 19+ hours at 114 MB/s, how is my 49TB array that took 18 hours only running at 93 MB/s?

 

The amount of storage in the array doesn't matter => it's the size of the parity disk.

 

You clearly have a 6TB parity disk.  If you average 93MB/s, that's 93 x 60 = 5580MB/minute = 334,800 MB/hour x 18 hours = 6,026,400 MB in 18 hours ... i.e. 6TB in just under 18 hours.

 

Daniel averaged 114MB/s = 6840MB/min = 410,400 MB/hour = 7,797,600MB in 19 hours => 8TB in just over 19 hours. 

 

Note that Daniel's backup server averaged 139.5MB/s = 8370MB/min = 502,200MB/hour = 8,035,200MB in 16 hours => 8TB in just under 16 hours.    It was faster NOT because it has fewer disks -- but because the primary array has a MIXED set of disks (several 3TB WD Reds mixed with 8TB Seagates), but the backup server uses all the same disks (8TB Seagates).

 

Your speed is limited because you have a mixture of different disks (and possibly some controller bottlenecks -- without the configuration details I can't comment on whether or not that's a factor). (I can, however, clearly note that you have a mixed set of disks, since you indicated you have a 14 disk array, and your parity check details clearly show you have a 6TB parity disk. Obviously you don't have 14 6TB disks, as the array would be MUCH larger than 49TB if that were the case  :) )
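If you want to sanity-check those figures yourself, the arithmetic is just parity size divided by average speed (decimal terabytes assumed, as the drive makers use):

# hours = (parity size in TB * 1e6 MB) / (MB/s) / 3600
check () { awk -v tb="$1" -v mbs="$2" 'BEGIN { printf "%s TB @ %s MB/s = %.1f hours\n", tb, mbs, tb*1e6/mbs/3600 }'; }
check 6 93.5     # ~17.8 h (the 14-disk array above)
check 8 114.3    # ~19.4 h (main server)
check 8 139.5    # ~15.9 h (backup server)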

 

 

 


...The amount of storage in the array doesn't matter => it's the size of the parity disk.

Didn't know that, thanks. It makes sense now.  ;D


I just precleared two drives; here are the results:

 

================================================================== 1.15b
=                unRAID server Pre-Clear disk /dev/sdaa
=               cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Verifying if the MBR is cleared.              DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 37C, Elapsed Time:  70:03:12
========================================================================1.15b
== ST8000AS0002-1NA17Z   Z840F1Q2
== Disk /dev/sdaa has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdaa  /tmp/smart_finish_sdaa
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   117     115            6        ok          126589016
          Seek_Error_Rate =    75      70           30        ok          31967094
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    63      65           45        near_thresh 37
      Temperature_Celsius =    37      35            0        ok          37
   Hardware_ECC_Recovered =   117     115            0        ok          126589016
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.
root@Tower:/usr/local/emhttp#

================================================================== 1.15b
=                unRAID server Pre-Clear disk /dev/sdi
=               cycle 1 of 1, partition start on sector 1
= Disk Pre-Clear-Read completed                                 DONE
= Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
= Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
= Step 5 of 10 - Clearing MBR code area                         DONE
= Step 6 of 10 - Setting MBR signature bytes                    DONE
= Step 7 of 10 - Setting partition 1 to precleared state        DONE
= Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
= Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
= Step 10 of 10 - Verifying if the MBR is cleared.              DONE
= Disk Post-Clear-Read completed                                DONE
Disk Temperature: 32C, Elapsed Time:  69:17:06
========================================================================1.15b
== ST8000AS0002-1NA17Z   Z840F0J5
== Disk /dev/sdi has been successfully precleared
== with a starting sector of 1
============================================================================
** Changed attributes in files: /tmp/smart_start_sdi  /tmp/smart_finish_sdi
                ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate =   114     100            6        ok          61048640
          Seek_Error_Rate =    72     100           30        ok          20723482
         Spin_Retry_Count =   100     100           97        near_thresh 0
         End-to-End_Error =   100     100           99        near_thresh 0
  Airflow_Temperature_Cel =    68      70           45        near_thresh 32
      Temperature_Celsius =    32      30            0        ok          32
   Hardware_ECC_Recovered =   114     100            0        ok          61048640
No SMART attributes are FAILING_NOW
0 sectors were pending re-allocation before the start of the preclear.
0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
0 sectors are pending re-allocation at the end of the preclear,
    the number of sectors pending re-allocation did not change.
0 sectors had been re-allocated before the start of the preclear.
0 sectors are re-allocated at the end of the preclear,
    the number of sectors re-allocated did not change.
root@Backup:/usr/local/emhttp#


So I have successfully added an 8TB drive as a parity drive, and I am now adding a second one as a data drive. Wow -- I knew writes to these drives were slow, but it's saying my parity rebuild will be done in over four days. Did it take that long for you guys? I do have a lot of drives and my array is pretty big, which doesn't help with the rebuild time, but wow, four days... lol.


...I knew writes to these drives were slow, but it's saying my parity rebuild will be done in over four days. Did it take that long for you guys?...

 

I haven't done one, but about 10 posts up (ish) another user posted their results:

 

AFAIK no-one has actually done a drive rebuild using unRAID and these drives yet OR no one has posted the results of doing so.

 

I did one recently, not because of a failure but to upgrade a 3TB drive; the duration was similar to a parity check/sync.

 

Event: unRAID Data rebuild:
Subject: Notice [TOWER7] - Data rebuild: finished (0 errors)
Description: Duration: 14 hours, 55 minutes, 14 seconds. Average speed: 149.0 MB/sec
Importance: normal

 

I'd be interested to see what the math behind the "estimated time" is. A parity rebuild requires reading all of the data disks and writing to the parity disk.

 

What average speed is being reported?

 

Also, are you doing anything with the Array at the same time? Reading / Writing?
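My guess is the estimate is simply bytes remaining divided by the recent average speed. Assuming the sort of ~20 MB/s sustained-write floor these shingled drives can drop to, an 8TB rebuild projects to about four and a half days, which lines up with the estimate you're seeing:

awk 'BEGIN { printf "%.1f days\n", 8e6/20.4/86400 }'   # 8TB at 20.4 MB/s = ~4.5 days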


Nothing else going on with the array; I mean, SabNZB is downloading the odd show, but otherwise nothing really.

The average reported speed is still 20.4 MB/s, currently sitting at 4 days, 8 hrs and 28 minutes.

Do you have any other manufacturer's drives in the array?

 

 

I ask because I think I have a compatibility problem between the HGST drives and my Seagates in my N54L, causing the same problem whenever I WRITE to a drive, as during a rebuild. When I run a check, where it is only reading, I get normal speeds. When my N54L had only Seagate drives in it (on the same controller) I got normal speeds on reads and writes. It was only when I mixed in the HGST drives that I had a problem. Also, when I had the Seagates on the N54L's built-in controller and the HGST drives on a Dell H310 controller to an external cage, the reads and writes were normal. I only had problems when HGST and Seagate were mixed on the N54L internal controller.


...I think I have a compatibility problem between the HGST drives and my Seagates in my N54L... I only had problems when HGST and Seagate were mixed on the N54L internal controller.

Well, I can now say I have a compatibility problem with HGST NAS 6TB drives and my HP N54L. The N54L is now all HGST drives and I still get 35-45 MB/s drive rebuilds when the drive being rebuilt is on the N54L's internal HDD controller. The N54L has the custom BIOS from "theBay" that enables the optical ports to run as AHCI, so I could see a BIOS setting causing this.

