Michael_P Posted August 2, 2023

Looking at my nightly health report email from Unraid, the array showed as 'Passed', but I noticed that all of the drives were spun up (except for one, anyway). So I logged into the server to see if they still were, and they were all still spun up. I clicked the button to spin them all down to see if they'd stay down, but all the icons did was spin. I tried each drive individually with no effect on any of them, so I opened the log to look for any spin-down commands. It turns out two of the drives were disabled, and the emhttpd process had segfaulted shortly after the drives showed as disconnected and re-connected. The dashboard continued to show no issues and sent an 'ALL OK Bro!' email.

If I hadn't noticed the drives spun up and investigated, it might have been a while before I had any reason to look at it. What's the point of the health email if it doesn't report the actual health of the array?
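In the meantime I'm catching this myself with a small watchdog. A minimal Python sketch, assuming the syslog lives at /var/log/syslog and that a crash shows up as a line containing both 'emhttpd' and 'segfault' (both assumptions taken from my own log, not anything Unraid documents):

```python
#!/usr/bin/env python3
"""Warn when emhttpd has segfaulted, since the nightly health email can
keep reporting 'Passed' after the crash.

Assumptions (taken from my own log, not anything Unraid documents):
  - the syslog lives at /var/log/syslog
  - a crash appears as a line containing both 'emhttpd' and 'segfault'
"""
import sys

SYSLOG = "/var/log/syslog"  # assumed location

def emhttpd_segfaulted(path=SYSLOG):
    """Return True if any syslog line mentions an emhttpd segfault."""
    try:
        with open(path, errors="replace") as log:
            return any("emhttpd" in line and "segfault" in line for line in log)
    except OSError as err:
        print(f"could not read {path}: {err}", file=sys.stderr)
        return False

if __name__ == "__main__":
    if emhttpd_segfaulted():
        # hook in your own alert here (Unraid notify script, email, etc.)
        print("emhttpd segfault found in syslog - GUI state may be stale")
        sys.exit(1)
    print("no emhttpd segfault in syslog")
```

It can be dropped into cron or the User Scripts plugin to run alongside the nightly health email.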
JorgeB Posted August 2, 2023

The disk is not disabled, but it should report the read errors. Please post the diagnostics.
Michael_P Posted August 2, 2023 (Author)

11 minutes ago, JorgeB said:
The disk is not disabled, but it should report the read errors

They both dropped offline, and after a restart they're both disabled. Diags from before and after attached.

urserver-diagnostics-20230802-0520.zip
urserver-diagnostics-20230802-0730.zip
JorgeB Posted August 2, 2023

Yeah, there were write errors for both, so they should have been disabled. They likely weren't because of the emhttpd segfault just before that; it's the first time I've seen that. You can create a bug report, but it might have been a one-time thing.
Michael_P Posted August 2, 2023 (Author, edited)

On 8/2/2023 at 7:53 AM, JorgeB said:
but it might have been a one-time thing

It does it every time a drive drops. Here's one from the last time I was rebuilding onto a new drive and another disk in the array decided to take a short nap (the same disk, by the way; I suspect a cable or power delivery issue). The GUI no longer reports any progress on the parity rebuild; I just have to wait for it to finish and reboot to get the dashboard back. This one at least sent the correct [FAIL] email, which leads me to question whether it's a 6.12 issue.

Edited April 5 by Michael_P: deleted diags exposing passwords
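In case it helps anyone else stuck with a frozen dashboard: while the GUI was wedged I could still follow the rebuild from the command line. A rough Python sketch; the key names come from inspecting /proc/mdstat on my own box (Unraid's md driver writes key=value lines there), so treat mdResyncPos and mdResyncSize as assumptions rather than a documented interface:

```python
#!/usr/bin/env python3
"""Read parity rebuild/check progress straight from /proc/mdstat when
the GUI stops updating.

Assumption (from inspecting my own box, not a documented interface):
Unraid's md driver exposes key=value lines here, including mdResyncPos
(current position) and mdResyncSize (total size) of the running sync.
"""

def rebuild_progress(path="/proc/mdstat"):
    """Return percent complete of the running sync, or None if idle."""
    stats = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if value:
                stats[key] = value.split()[0]  # drop trailing annotations
    pos = int(stats.get("mdResyncPos", 0))
    size = int(stats.get("mdResyncSize", 0))
    return 100.0 * pos / size if size else None

if __name__ == "__main__":
    pct = rebuild_progress()
    print("no sync running" if pct is None else f"sync ~{pct:.1f}% complete")
```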
JorgeB Posted August 2, 2023

1 hour ago, Michael_P said:
It does it every time a drive drops

Do you mean that emhttpd segfaults every time a drive drops?
Michael_P Posted August 2, 2023 (Author)

2 hours ago, JorgeB said:
Do you mean that emhttpd segfaults every time a drive drops?

Since moving to 6.11 it's happened each time one has dropped offline; this is the third time. The other two times were during parity rebuilds after upgrading a drive, when one of the other disks dropped. (I rarely get a clean parity check/rebuild on the first try without losing a drive, likely due to power, which I'm working on again.)
JorgeB Posted August 2, 2023

I cannot reproduce this, so it's not a general v6.12 bug.
Michael_P Posted August 2, 2023 (Author)

4 minutes ago, JorgeB said:
I cannot reproduce this, so it's not a general v6.12 bug.

I've just finished dusting it out, re-seating the RAM, and changing out a power cable and a SAS cable. Going to rebuild parity and cross my fingers.