[SOLVED] Seagate with huge Seek Error Rate, RMA?



I have an ST3000DM001 that went red, and it does not show the temperature, only a star sign.

 

With that read error rate, is this drive ready for an RMA?

 

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   115   099   006    Pre-fail  Always       -       87619280
  3 Spin_Up_Time            0x0003   094   093   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       148
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   063   060   030    Pre-fail  Always       -       1852733
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       442
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       76
183 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
184 Unknown_Attribute       0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       4295032833
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   074   062   045    Old_age   Always       -       26 (0 2 26 21)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       4
193 Load_Cycle_Count        0x0032   098   098   000    Old_age   Always       -       5651
194 Temperature_Celsius     0x0022   026   040   000    Old_age   Always       -       26 (Lifetime Min/Max 0/32768)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       191881958916450
241 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       9457022960
242 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       41882755701

Link to comment

[quoting the original post above]

It is impossible to make sense of some of the raw values, so I would not worry about anything with stratospherically high numbers. The ones to really watch are the reallocated sectors and pending sectors (#5 and #197). When they start to go up, monitor them closely, and if they are increasing on every parity check, you have a problem.

 

Other than that, you should be looking at values that are approaching their threshold. The only one that looks close is the Seek_Error_Rate: the failure level is 30 and it is currently over 60. I would keep an eye on it, and if you see it dipping lower and lower, I would RMA. But unless you are seeing drastic parity-check performance decreases that you attribute to the drive, I wouldn't worry about it until then.

 

To answer your question, I'd say this drive is healthy.
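To make the above concrete, here is a minimal sketch (my own illustration, not something from this thread) of a script that prints the raw values of #5 and #197 when they are nonzero, plus the normalized-value-vs-threshold margin for every Pre-fail attribute. The parsing assumes the smartctl -A table layout shown in these posts:

#!/usr/bin/env python3
# Sketch: report the SMART attributes discussed above.
# Assumes smartmontools is installed; run as root, e.g.:
#   sudo python3 smart_watch.py /dev/sda
import subprocess
import sys

WATCH_IDS = {5, 197}  # Reallocated_Sector_Ct, Current_Pending_Sector

def check(device):
    # smartctl's exit status is a bitmask, so don't treat nonzero as fatal.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID and have >= 10 columns.
        if len(fields) < 10 or not fields[0].isdigit():
            continue
        attr_id, name = int(fields[0]), fields[1]
        value, thresh = int(fields[3]), int(fields[5])
        raw = fields[9]  # first token of RAW_VALUE
        if attr_id in WATCH_IDS and raw != "0":
            print(f"{device}: {name} raw={raw} - recheck after every parity check")
        elif fields[6] == "Pre-fail" and thresh > 0:
            print(f"{device}: {name} normalized {value} vs threshold {thresh} "
                  f"(margin {value - thresh})")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")

Tracking those margins across successive parity checks shows which attributes are actually drifting toward their failure level.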

Link to comment

Alas, reading syslogs is not my specialty; I do not know why you are getting that message.

 

In all of these scenarios there are two possibilities - either there is truly an issue with the disk itself, or there is a problem with the connection of the disk to the server (the latter would include the controllers, ports, cables, power connection, etc.).

 

The first thing I would do is double-check the cabling to that drive, and if that doesn't help, move the disk to a different port with fresh cables. If the problem follows the drive, the drive is the problem. Otherwise something was loose, or you have a problem with another component.

 

This is very general troubleshooting advice. Maybe someone with more syslog expertise can give you more input on the specific error you are seeing.

Link to comment

Thank you for your time!

 

Could you give some general advice on these problems, for when a drive gets a red ball? Do you generally just test the cables and try to rebuild the drive, and replace it if the rebuild fails? Could the parity be built invalid if this drive fails in the process?

 

I'm scared of using these "problem drives", but I'm not that fond of getting a new drive EVERY time these things happen. :)

 

 

 

Link to comment

Thank you for your time!

 

Could you give some general advice on these problems, for when a drive gets a red ball? Do you generally just test the cables and try to rebuild the drive, and replace it if the rebuild fails? Could the parity be built invalid if this drive fails in the process?

 

I'm scared of using these "problem drives", but I'm not that fond of getting a new drive EVERY time these things happen. :)

I handle this by keeping a spare drive that has previously been put through a thorough pre-clear to check it out. Using this disk, I go through the process of rebuilding the failed drive onto the spare as its replacement. If the rebuild fails for any reason, I still have the 'red-balled' disk untouched to attempt data recovery. If the rebuild works, I then put the disk that had 'red-balled' through a thorough pre-clear test and use the results to decide whether the disk is OK or really needs replacing. If the drive appears OK, it becomes my new spare disk.

Link to comment

Yes, that is basically what I have done.

 

But I really want to know how good an "insurance" the preclear process gives. If it stresses the drive too much, could it fail because of that during the rebuild? And if it does not, could it fail soon after?

 

I think this is a bigger problem when you have all slots full. There is no way to preclear, and there are so many drives that could fail.  :-[

Link to comment

Yes, that is basically what I have done.

 

But I really want to know how good an "insurance" the preclear process gives. If it stresses the drive too much, could it fail because of that during the rebuild? And if it does not, could it fail soon after?

The pre-clear puts the drive through the same sort of load as is involved in parity rebuilds and/or normal use. If at the end of that there are no signs of any problems, you have reasonable confidence that at this point the drive is showing no problems. That is actually better than you would have for new drives - a significant proportion of those fail when put through their first stress test via pre-clear.

I think this is a bigger problem when you have all slots full. There is no way to preclear, and there are so many drives that could fail.  :-[

I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear run on the system where the drive is to be used.

Link to comment
I actually have a caddy I can plug in externally via eSATA or USB on demand to do this. You could also use another system, as there is no requirement that the pre-clear run on the system where the drive is to be used.
However, testing the drive in another system uncovers drive errors only. This can be good or bad, depending. If you have a controller, RAM, or PSU issue in the server, it can affect preclear results as well. Ideally you should get a clean preclear cycle in the exact circumstances in which the drive will be used in the server.
Link to comment

I have two of these drives. I used one in my unRAID server for about a year without any issues, but recently I replaced it with the NAS version (I'll use the desktop version for backup storage). The thing that was bothering me about these drives was the UDMA_CRC_Error_Count, though most of that may have come from one cable problem. I just finished retesting these drives by preclearing them, without any indication of trouble. Both of these drives also show large raw values for Seek_Error_Rate, and they have the same 60/30 numbers for the worst and threshold normalized values - so I figure these are typical of this particular model. Here are the SMART reports from my drives so you can compare (note that the newer version of unRAID has an updated smart tool that gives better attribute names than the version you have):

 

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   120   099   006    Pre-fail  Always       -       2137384
  3 Spin_Up_Time            0x0003   092   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   095   095   020    Old_age   Always       -       5765
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail  Always       -       13846410
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       6620
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       36
183 Runtime_Bad_Block       0x0032   098   098   000    Old_age   Always       -       2
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
189 High_Fly_Writes         0x003a   098   098   000    Old_age   Always       -       2
190 Airflow_Temperature_Cel 0x0022   071   059   045    Old_age   Always       -       29 (Min/Max 21/34)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       9
193 Load_Cycle_Count        0x0032   092   092   000    Old_age   Always       -       16430
194 Temperature_Celsius     0x0022   029   041   000    Old_age   Always       -       29 (0 20 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   195   000    Old_age   Always       -       6949
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       1089h+21m+15.431s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       51506131152
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       222555473047

 

  1 Raw_Read_Error_Rate     0x000f   117   099   006    Pre-fail  Always       -       125981968
  3 Spin_Up_Time            0x0003   092   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   098   098   020    Old_age   Always       -       2281
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   067   060   030    Pre-fail  Always       -       5927616
  9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       2670
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       31
183 Runtime_Bad_Block       0x0032   099   099   000    Old_age   Always       -       1
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
189 High_Fly_Writes         0x003a   091   091   000    Old_age   Always       -       9
190 Airflow_Temperature_Cel 0x0022   069   050   045    Old_age   Always       -       31 (Min/Max 21/36)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       8
193 Load_Cycle_Count        0x0032   097   097   000    Old_age   Always       -       6649
194 Temperature_Celsius     0x0022   031   050   000    Old_age   Always       -       31 (0 21 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   194   000    Old_age   Always       -       668
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       475h+23m+44.493s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       44747895302
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       147844353977

 

Regards,

 

Stephen

 

 

Link to comment

I have an ST3000DM001 that went red, and it does not show the temperature, only a star sign.

 

With that read error rate, is this drive ready for an RMA?

 

[SMART report quoted from the original post, including:]
  7 Seek_Error_Rate         0x000f   063   060   030    Pre-fail  Always       -       1852733

 

The seek error rate is a 48-bit value; 1852733 is 0x0000001C453D in hex. The first 16 bits (0x0000) are the seek error count, and the last 32 bits (0x001C453D) are the total seek count. Basically, zero seek errors over 1852733 total seeks.  Source
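As a quick illustration of that split (a sketch following the interpretation above), in Python:

# Decode a Seagate 48-bit Seek_Error_Rate raw value, per the split above:
# upper 16 bits = seek errors, lower 32 bits = total seeks.
raw = 1852733                # raw value from the report above
errors = raw >> 32           # upper 16 bits
seeks = raw & 0xFFFFFFFF     # lower 32 bits
print(f"{errors} seek errors over {seeks} seeks")  # 0 seek errors over 1852733 seeks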

Link to comment

Ok, thanks.

 

I attached the syslog from the time the drive got the red dot.

 

What are those HDIO errors? Not drive related?

 

Looks like disk 10 timed out and refused to reconnect. The drive may have an electronic problem rather than a mechanical one. Try a power cycle and see if it repeats. The HDIO errors are a result of the drive never responding to the controller's reconnect attempts, so those errors come from the drive not being properly recognized. Check your cable routing; don't stack a bunch of SATA cables together tightly, as one would typically want to do to keep things tidy. Also check the power cable to the drive. It looks to be a fairly new drive, so it might be a data/power cable that is slightly loose. The only other thing I see is that the load cycle count is a bit high due to the APM setting on the drive, but that is common for the newer Seagate drives since they have a 2400-hour rating.

Link to comment

About the SATA cables...

They should be firm, tight, and preferably locked, but do *NOT* tie them together - lay them out randomly.

Otherwise you'll get something like old-fashioned AM radio static on your SATA lines.

 

SATA cables are usually 'unshielded'... each acts like a little antenna. As real data travels down one, it 'radiates' its signal along the wire like a broadcast... the other cables, running parallel and very close, will pick up the data signal and think it's real. Errors result.

Link to comment

Wow, I did not know that. I have one SAS cable that has the same covering as power cables; does that prevent this problem?

 

I bought new ones because the old ones were so stiff... and the new ones are the basic red cables. Some are pretty tight together; does it help to just loosen them?

 

I'm rebuilding my server, and this puts the project back to the drawing board.

 

When there are 16 cables coming from the SAS cards, it's pretty hard to keep them from touching each other, or the airflow can be affected.

 

 

Link to comment

I believe most SAS cables are shielded--at least up to the point where they fan out to individual leads. YMMV

Allow them to touch and cross and mingle... just let it happen... it's random.

If you wire tie them together, they look nice and neat and efficient.

But you've created long lengths of antenna running in parallel and feeding each other stuff.

(Some electronics engineer might call that bundle of cables a 'transformer' that's actually increasing the voltage on some of the wires. :o Or is it an inductor? I failed EE >:( and never looked back. ::) )

Just cut the cable ties off and let them flop around...they'll be happier for it!

Link to comment

Sorry haven't had time to properly respond in this thread.

 

I believe a lot of these red balls, missing drives, and weird conditions that people experience are due to the server build. Just because someone can set up a workstation, even a high-end one, doesn't mean they are ready to build a server like these that will give years of trouble-free service.

 

The key ingredients, IMO, on a server with more than 3-4 disks are:

1 - Locking SATA cables and controllers / 5in3s that support them

2 - 5in3s or similar that allow disk to be swapped out easily without opening the case

3 - Good quality power splitters

4 - A roomy case

 

With these pieces you can assemble everything securely, test it out, close the box, and never (ok, very seldom) have to open it up! It is far too easy to knock something loose, reroute something that causes a problem, or otherwise "muck with it" and cause something to go wrong. My first server did none of those things and was a constant headache every time I opened it up. My new one, which does all of those things and hasn't been opened in 3 years or more, is solid despite many drive replacements and additions.

Link to comment

I agree.

 

But as for my build, my goal has been to make a quiet, low-profile, low-budget server with a lot of storage space. Maybe this is what you get for being cheap.

 

I wish I had a few thousand bucks to just buy a new server full of 4TB drives.  8)

 

I have understood that the 5-in-3's are very noisy, and many of them do not allow a fan change. Then there is the heat issue, which is pretty much only defeated by adding fan RPMs = more noise. And just one of those cases costs more than one 4TB drive. Do they resonate much? I'm pretty happy with the heat levels of my current build, but the humming is killing me at night. I think it's got to do with the Scythe 4-in-3 brackets, whose rubber is too stiff to damp the drives.

 

unRAID has improved a lot over the years, and drive prices have gotten so low that it is really possible to reduce the number of drives in the array considerably. I hope that lowers the noise level. I also plan to mount the drives vertically in 3-in-3 brackets; the airflow is better that way too.

 

I have seen videos of those rack builds with multiple 5in3s and they are pretty cool, but not all of us can accommodate those Boeing 747 jet sounds in our small studio apartments.  ;)

 

I don't know how many realise this before they assemble and boot their server for the first time.

 

 

 

 

Link to comment

[quoting the previous post]

 

I admit my design goal is different. My server could scream like a banshee and no one would know.

 

But my old server is much noisier than the new one.

 

I use 4 Supermicro 5in3s with replaceable 92mm fans, and they run very cool year round, from maybe 12C in the winter to 28C in the summer. I really like them quite a lot. At around $90 apiece on a good sale they aren't cheap, but I hope they will last me for many years.

 

As I have explained in other posts, being able to swap out a disk without risking disconnecting something is important for recovering from a disk problem.

 

Surely there are things you can do with locking cables and good cable management that will help, but I stand by my suggestions for best results.

 

You should check out some of my early posts in the forum. I am cheap too and advised against the unnecessarily expensive 5in3s. Older and wiser, or just lazier? I'm not sure. But I am sticking with them.

Link to comment

Wow, I did not know that. I have one SAS cable that has the same covering as power cables; does that prevent this problem?

 

I bought new ones because the old ones were so stiff... and the new ones are the basic red cables. Some are pretty tight together; does it help to just loosen them?

 

I'm rebuilding my server, and this puts the project back to the drawing board.

 

When there are 16 cables coming from the SAS cards, it's pretty hard to keep them from touching each other, or the airflow can be affected.

 

The problem with SATA cables comes when they are tied together into a bundle. This is usually done because someone wants the interior of the unit to 'look neat'. If each cable runs on its own from the controller to the drive, any incidental contact is not an issue. Just don't bundle them all together using tie wraps! (The technical term for this problem is 'cross-talk'; you can google the term for more details.)

Link to comment

Well, I think the OP is solved.

 

I need to get those locking SAS cables. They just seem to be 50cm minimum, and the shortest distance from the SAS card to the HDD is about 10cm. Luckily the case has some room to get good airflow too.

 

bjp999: I can see from your temps that I cannot compete with those with my current build.  ;)

Link to comment
  • 6 years later...

I would not worry too much about those Seek_Error_Rate values; it seems normal for ST3100 drives.
Guess which two drives are NOT Seagate drives. :)

root@pve02:/storage# smartctl -A /dev/sda |grep Seek
  7 Seek_Error_Rate         0x000f   085   060   030    Pre-fail  Always       -       14084313763
root@pve02:/storage# smartctl -A /dev/sdb |grep Seek
  7 Seek_Error_Rate         0x000f   073   060   030    Pre-fail  Always       -       194114813219
root@pve02:/storage# smartctl -A /dev/sdc |grep Seek
  7 Seek_Error_Rate         0x000f   090   060   030    Pre-fail  Always       -       1157908901
root@pve02:/storage# smartctl -A /dev/sdd |grep Seek
  7 Seek_Error_Rate         0x000f   071   060   030    Pre-fail  Always       -       335842530617
root@pve02:/storage# smartctl -A /dev/sde |grep Seek
  7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       559951038
root@pve02:/storage# smartctl -A /dev/sdf |grep Seek
  7 Seek_Error_Rate         0x000f   072   060   030    Pre-fail  Always       -       258866449149
root@pve02:/storage# smartctl -A /dev/sdg |grep Seek
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
root@pve02:/storage# smartctl -A /dev/sdh |grep Seek
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   138   138   020    Pre-fail  Offline      -       31
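If the 48-bit split described earlier in the thread applies to these drives too (an assumption on my part), the huge raw numbers decode to a handful of errors over hundreds of millions of seeks, e.g. in Python:

# Split a few of the raw values above into (errors, total seeks),
# assuming the upper-16/lower-32 bit encoding discussed earlier.
for dev, raw in [("sda", 14084313763), ("sdb", 194114813219), ("sde", 559951038)]:
    errors, seeks = raw >> 32, raw & 0xFFFFFFFF
    print(f"/dev/{dev}: {errors} seek errors over {seeks} seeks")
# /dev/sda: 3 seek errors over 1199411875 seeks
# /dev/sdb: 45 seek errors over 841284899 seeks
# /dev/sde: 0 seek errors over 559951038 seeks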

Link to comment
