
6.8.3 Eating Disks: HBA? Cable? Disk?



Hello, 

 

I've been having issues with Unraid rejecting two drives (print_req_error: I/O error, dev sde) over the past few months. I'm curious whether I've just been very unlucky with drive failures, or whether there is another issue at play, such as a faulty HBA or SAS-to-SATA breakout cable.
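For what it's worth, the kernel log usually distinguishes a failing platter from a failing link. A minimal sketch of grepping for the two error classes (the log lines below are made-up samples, not from this server):

```shell
# Hypothetical dmesg excerpt: a sector-level I/O error (problem on the disk
# itself) vs. a SATA link error (often cable/backplane/HBA, not the disk).
log='print_req_error: I/O error, dev sde, sector 123456
ata5.00: exception Emask 0x10 SAct 0x0 SErr 0x400100 action 0x6
ata5: SError: { UnrecovData Handshk }'

# On a live box you would pipe `dmesg` in instead of this sample string.
echo "$log" | grep -Ei 'I/O error|SError'
```

Repeated SError/link noise combined with clean SMART data tends to implicate cabling; medium errors on specific sectors implicate the disk.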

 

The first drive to fail was a refurbished Seagate 8TB IronWolf (a warranty replacement); the second is another Seagate 8TB IronWolf. The server is quite young (less than two years old), so both drives are quite young too. The second drive was previously deployed in a cheap consumer NAS that ran hot (55 °C) due to poor cooling and some exceptionally hot summer weather.

 

I've run multiple preclear passes on the first failed drive and it came up clean; the most recent failure also looks good based on preclear. Preclear speeds appear normal (starting at ~200 MB/s, ending at ~120 MB/s, averaging ~180 MB/s).
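Those speeds are in the right ballpark for an 8TB drive. As a rough sanity check, here is how long one full pass should take at that average throughput (the ~180 MB/s figure comes from the post above; the rest is just arithmetic):

```shell
# 8 TB = 8,000,000,000,000 bytes; average throughput ~180 MB/s (decimal MB).
bytes=8000000000000
rate=180000000          # bytes per second
secs=$((bytes / rate))
echo "$secs seconds, ~$((secs / 3600)) hours per pass"
```

So a pass in the low-teens of hours is consistent with a healthy drive; a pass taking dramatically longer would itself be a warning sign.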

 

I'm wondering whether these drives really are starting to fail despite performing normally in both preclear and Windows. Even if I submit a warranty claim, will it be accepted?

 

Keen for some advice. Thank you. 

 

 

6 hours ago, JorgeB said:

Are you on v6.9? If yes see here:

 

Hey, thanks for getting back to me. No, I'm on 6.8.3 as per the title. I was JUST ABOUT to upgrade to 6.9! Thank God for your reply, I'll hold off for now :). I still have a few Seagates left in my array; I've been gradually swapping them out for WDs over the last few months.

The two affected Seagate drives in my server are:
Drive 1: ST8000VN004
Drive 2: ST8000VN0022

My HBA is an LSI 9211-8i.

17 minutes ago, JorgeB said:

Doesn't look like a disk problem; most likely connection or power.

Thanks for looking into it. I will probably re-add this disk to the array then. 

 

Now that you mention power, it COULD be due to the slightly messy SATA power daisy chain I've got going on in my case... I'll see what I can do about it. 
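One quick way to test the cabling theory: SMART attribute 199 (UDMA_CRC_Error_Count) increments on link-level CRC errors, which usually points at the cable or connector rather than the disk. A sketch of pulling it out of `smartctl -A`-style output (the attribute lines below, including the raw value of 12, are hypothetical samples):

```shell
# Sample `smartctl -A /dev/sde` lines (hypothetical values):
smart='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       12'

# Column 2 is the attribute name, column 10 the raw value.
echo "$smart" | awk '$2 == "UDMA_CRC_Error_Count" { print $2 "=" $10 }'
```

If that raw value keeps climbing after reseating or replacing the cables, the link, not the drive, is the likely culprit.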


Hypothetically speaking, IF this problem were to happen again to two or more drives simultaneously, am I at serious risk of losing data? Could there be a way to get Unraid to 'pretend' this never happened? I'm guessing no, because there would be some failed/incomplete writes, which would leave corrupted data behind.

