Disabled Disk - Helium level FAILING NOW.


eeans


SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     PO-R--   088   088   016    -    5767168
  2 Throughput_Performance  --S---   130   130   054    -    108
  3 Spin_Up_Time            POS---   148   148   024    -    447 (Average 439)
  4 Start_Stop_Count        -O--C-   100   100   000    -    44
  5 Reallocated_Sector_Ct   PO--CK   100   100   005    -    80
  7 Seek_Error_Rate         -O-R--   100   100   067    -    0
  8 Seek_Time_Performance   --S---   128   128   020    -    18
  9 Power_On_Hours          -O--C-   099   099   000    -    8712  < 363 days, no bueno... should be YEARS.
 10 Spin_Retry_Count        -O--C-   100   100   060    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    44
 22 Helium_Level            PO---K   001   001   025    NOW  1     < ---- bad. no bueno
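For anyone wondering how smartctl decides to print `FAIL: NOW`: an attribute fails when its normalized VALUE drops to or at least below its THRESH. A minimal sketch of that check, run here against the `Helium_Level` line copied from the report above (on a real system you'd parse `smartctl -A /dev/sdX` output; the device name is hypothetical):

```shell
# Reproduce smartctl's pass/fail logic for one attribute line.
# Column 4 is the normalized VALUE, column 6 is the failure THRESH.
line=' 22 Helium_Level            PO---K   001   001   025    NOW  1'
value=$(echo "$line" | awk '{print $4}')   # normalized VALUE  (001)
thresh=$(echo "$line" | awk '{print $6}')  # failure THRESH    (025)
# Force base-10 so leading zeros aren't read as octal.
if [ "$((10#$value))" -le "$((10#$thresh))" ]; then
  status="FAILING"
else
  status="OK"
fi
echo "Helium_Level: value=$value thresh=$thresh -> $status"
```

Here the helium level has collapsed from a healthy 100 all the way down to 1, far below the threshold of 25, which is why the drive is flagged.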

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

Damn, this got me looking at all my HGST Ultrastars...
That's right under a year of use. I would swap/pull that drive immediately and start working on a warranty claim.
Not sure how white-label ones are handled... maybe through the vendor on the drive? (My white-label ones are for HP servers, so I would call HPE in my case. YMMV, of course.)


I'll pull this one and put in a warranty claim then.

 

The server took a rather large hit during shipping, knocking some disks free from their mountings. These were then loose to bang around in the case...when I got it I could hear them moving before I opened the box :( In hindsight I should have removed them before shipping to be safe.

 

I think I got lucky and only lost this one disk, but is there a good stat (or stats) to check on the rest of the disks to see if they were also damaged?

Edited by eeans
1 hour ago, eeans said:

I'll pull this one and put in a warranty claim then. [...] Would there be a good stat(s) to check on the rest of the disks to see if they were also damaged?

Look at the Helium level - anything lower than 25 looks to be bad (that's the failure threshold in that SMART report).
As well as the raw read error rate..
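A quick way to sweep the remaining disks might be to grep each drive's SMART report for the attributes most likely to move after a physical shock: Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable, and (on drives that report it) G-Sense_Error_Rate, which counts shock events. A sketch, run here against a captured report with made-up illustrative values so it's self-contained; on a live system you'd feed it `smartctl -A /dev/sdX` output per drive (device name hypothetical):

```shell
# Flag shock-sensitive SMART attributes whose raw value is nonzero.
# The sample report below uses illustrative numbers, not data from a real drive.
report='  5 Reallocated_Sector_Ct   PO--CK   100   100   005    -    80
191 G-Sense_Error_Rate      -O-R--   100   100   000    -    3
197 Current_Pending_Sector  -O--CK   100   100   000    -    0
198 Offline_Uncorrectable   ----CK   100   100   000    -    0'
# $NF is the last column (RAW_VALUE); "+ 0" coerces it to a number.
hits=$(echo "$report" | awk '$NF + 0 > 0 {print $2 " raw=" $NF}')
echo "$hits"
```

Anything this prints for a given disk deserves a closer look and probably an extended SMART self-test.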

 

41 minutes ago, JorgeB said:

I would say that helium level below 100 is bad, it means it's leaking.

I'm not sure what the level should be at, i.e. whether that's a percentage or not. There should be an acceptable loss over time, though. What that is... noooooooo idea :)

I just checked my helium drives, and I don't see a helium indicator in the SMART report. (Running an extended test now, though.)

Edited by Geekd4d
  • 3 weeks later...

For anyone else here in the Netherlands, I had zero issues getting my replacement drive from WD. I shipped it back to them as a bare drive, not in the original external enclosure. A few days later they shipped me a brand new WD EasyStore 8TB as a replacement.

