artkingjw Posted March 14, 2021

Hello, I've been having issues with Unraid rejecting two drives (print_req_error: I/O error, dev sde) over the past few months. I'm curious whether I've just been very unlucky with drive failures, or whether another issue is at play, such as a faulty HBA, a SAS-to-SATA cable, or something else.

The first drive to fail was a refurbished Seagate 8TB IronWolf (from a warranty claim); the second is another Seagate 8TB IronWolf. This server is quite young (less than two years old), so both drives are quite young too. The second drive was previously deployed in a cheap consumer NAS with overheating issues (55 deg C) due to poor cooling and some exceptionally hot summer weather.

I've done multiple preclear runs on the first failed drive and it came up clean; the most recent failure also looks good based on preclear. Preclear speeds also appear normal (starting at ~200 MB/s, ending at ~120 MB/s, averaging ~180 MB/s).

I'm wondering whether these drives really are starting to fail despite performing normally in both preclear and Windows. Even if I submit a warranty claim, will it be accepted? Keen for some advice. Thank you.
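(For anyone else triaging the same symptom: preclear can pass even when the link, rather than the drive, is flaky, so it helps to separate drive-side SMART attributes from link-side ones. A rough sketch of that triage logic; the attribute IDs are standard SMART IDs, but the sample values below are made up for illustration, not taken from these drives.)

```python
# Triage SMART attributes: do they point at the drive itself or at the
# cable/connection? (Values used below are illustrative only.)

# Attribute IDs that usually indicate a failing drive surface
DRIVE_SIDE = {5: "Reallocated_Sector_Ct", 197: "Current_Pending_Sector"}
# Attribute 199 counts CRC errors on the SATA link: cable/backplane trouble
LINK_SIDE = {199: "UDMA_CRC_Error_Count"}

def triage(raw_values):
    """Given {attribute_id: raw_value}, report the likely culprit(s)."""
    findings = []
    for attr_id, value in raw_values.items():
        if value == 0:
            continue
        if attr_id in DRIVE_SIDE:
            findings.append(f"{DRIVE_SIDE[attr_id]}={value}: drive problem")
        elif attr_id in LINK_SIDE:
            findings.append(f"{LINK_SIDE[attr_id]}={value}: cable/connection problem")
    return findings or ["no obvious drive or link errors"]

# Hypothetical readout: clean surface, but CRC errors on the link
print(triage({5: 0, 197: 0, 199: 42}))
# → ['UDMA_CRC_Error_Count=42: cable/connection problem']
```

A clean surface with a nonzero CRC count is the classic signature of a cabling or backplane issue rather than a dying drive.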
JorgeB Posted March 14, 2021

Are you on v6.9? If yes, see here:
artkingjw Posted March 14, 2021

6 hours ago, JorgeB said: "Are you on v6.9? If yes, see here:"

Hey, thanks for getting back to me. No, I'm on 6.8.3, as per the title. I was JUST ABOUT to upgrade to 6.9! Thank God for your reply, I'll hold off for now. I still have a few Seagates left in my array; I've been gradually swapping them for WDs over the last few months. The two Seagate drives affected on my server are:

Drive 1: ST8000VN004
Drive 2: ST8000VN0022

My HBA is an LSI 9211-8i.
JorgeB Posted March 15, 2021

Then please post the diagnostics: Tools -> Diagnostics
artkingjw Posted March 15, 2021

Thanks, I've attached the diagnostics. This was taken soon after the second drive failed, when I decided to test it by starting a preclear. That drive has now fully passed preclear.

diagnostics-20210313-2102.zip
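(Aside for others digging through a diagnostics zip for the same symptom: the syslog inside it can be tallied per device in a few lines. The sample lines below are modelled on the error message in the first post, not copied from the actual attached log.)

```python
import re
from collections import Counter

# Sample syslog lines modelled on the print_req_error messages
# (stand-ins for the real syslog inside the diagnostics zip).
syslog = """\
Mar 13 20:55:01 Tower kernel: print_req_error: I/O error, dev sde, sector 1032
Mar 13 20:55:02 Tower kernel: print_req_error: I/O error, dev sde, sector 2064
Mar 13 20:55:09 Tower kernel: print_req_error: I/O error, dev sdf, sector 512
"""

# Count I/O errors per block device
counts = Counter(re.findall(r"print_req_error: I/O error, dev (\w+)", syslog))
print(counts)  # → Counter({'sde': 2, 'sdf': 1})
```

If the errors cluster on devices that share one SAS-to-SATA breakout or one power chain, that points at the shared cabling rather than the drives themselves.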
JorgeB Posted March 15, 2021

Doesn't look like a disk problem; most likely connection or power.
artkingjw Posted March 15, 2021

17 minutes ago, JorgeB said: "Doesn't look like a disk problem; most likely connection or power."

Thanks for looking into it. I'll probably re-add this disk to the array then. Now that you mention power, it COULD be due to the slightly messy SATA power daisy chain I've got going on in my case... I'll see what I can do about it.
artkingjw Posted March 15, 2021

Hypothetically speaking, IF this problem were to happen again to two or more drives simultaneously, am I at serious risk of losing data? Could there be a way to get Unraid to 'pretend' this never happened? I'm guessing no, because there would be some failed/incomplete writes, which would leave corrupted data behind.
JorgeB Posted March 15, 2021

Unraid only disables as many disks as there are parity devices.
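(To see why that limit exists: with single parity, any one missing disk can be rebuilt by XOR-ing the parity with the surviving disks, but a second simultaneous failure leaves two unknowns and only one equation. A toy sketch of the XOR reconstruction; the byte values are arbitrary.)

```python
from functools import reduce

# Toy single-parity array: three data "disks" of one byte each (arbitrary values)
disks = [0b10110010, 0b01101100, 0b11100001]
parity = reduce(lambda a, b: a ^ b, disks)  # XOR of all data disks

# Disk 1 "fails": its contents can be emulated from parity + the survivors
survivors = [disks[0], disks[2]]
rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)
assert rebuilt == disks[1]  # the lost byte is recovered exactly

# If a second disk failed at the same time, parity XOR (one survivor) would be
# the XOR of *two* unknown bytes: not enough information to recover either one.
```

This is why an array with one parity drive tolerates exactly one disabled disk, and dual parity tolerates two.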