INGY, posted February 23, 2023

Hi, I checked my Unraid server and found it had dropped a disk. I tried to stop the array so I could remove the disk and run some checks on it, with the plan of rebuilding it to itself if no issues were found. The system has been stable for over a year (minus two unexpected power outages and a few planned shutdowns), with a current uptime of over 70 days.

After clicking "Stop array" and proceeding, nothing happened, so I checked the logs and saw segfault errors. Going by other people's threads these could indicate a memory issue, but I can't test that until I can stop the array. I manually disabled Docker and the VM Manager, since I've had those hold an array online before, but that hasn't fixed it either. The odd thing is that this left my containers running, so I stopped them manually with docker container stop and confirmed my VM was down with virsh list.

Since stopping the containers and the VM, the segfaults no longer show up, but I still can't stop the array. Could someone point me in the right direction to figure out why? Thanks in advance.

The errors I was getting before stopping the containers and VM are below:
Feb 23 12:53:01 unRAID kernel: Code: d1 0f 31 00 64 4c 8b 00 e9 50 4f 00 00 41 57 41 56 49 89 fb 41 55 41 54 49 89 f2 55 53 4d 89 cc 48 89 d3 48 81 ec c8 00 00 00 <8b> 69 08 64 48 8b 04 25 28 00 00 00 48 89 84 24 b8 00 00 00 31 c0
Feb 23 12:53:33 unRAID kernel: ml3:Upstairs[25498]: segfault at 20 ip 0000148b05df9e1d sp 0000148af1b4a690 error 4 in libc-2.27.so[148b05d20000+1e7000]
Feb 23 12:53:33 unRAID kernel: Code: d1 0f 31 00 64 4c 8b 00 e9 50 4f 00 00 41 57 41 56 49 89 fb 41 55 41 54 49 89 f2 55 53 4d 89 cc 48 89 d3 48 81 ec c8 00 00 00 <8b> 69 08 64 48 8b 04 25 28 00 00 00 48 89 84 24 b8 00 00 00 31 c0
Feb 23 12:54:05 unRAID kernel: ml1:Downstairs[27391]: segfault at 20 ip 000014603c0e1e1d sp 0000146028234690 error 4 in libc-2.27.so[14603c008000+1e7000]
Feb 23 12:54:05 unRAID kernel: Code: d1 0f 31 00 64 4c 8b 00 e9 50 4f 00 00 41 57 41 56 49 89 fb 41 55 41 54 49 89 f2 55 53 4d 89 cc 48 89 d3 48 81 ec c8 00 00 00 <8b> 69 08 64 48 8b 04 25 28 00 00 00 48 89 84 24 b8 00 00 00 31 c0
Feb 23 12:54:37 unRAID kernel: ml3:Upstairs[29413]: segfault at 20 ip 000014d94b827e1d sp 000014d937578690 error 4 in libc-2.27.so[14d94b74e000+1e7000]
Feb 23 12:54:37 unRAID kernel: Code: d1 0f 31 00 64 4c 8b 00 e9 50 4f 00 00 41 57 41 56 49 89 fb 41 55 41 54 49 89 f2 55 53 4d 89 cc 48 89 d3 48 81 ec c8 00 00 00 <8b> 69 08 64 48 8b 04 25 28 00 00 00 48 89 84 24 b8 00 00 00 31 c0
Feb 23 12:55:09 unRAID kernel: ml3:Upstairs[31202]: segfault at 20 ip 000014846e93ce1d sp 000014845a68d690 error 4 in libc-2.27.so[14846e863000+1e7000]
Feb 23 12:55:09 unRAID kernel: Code: d1 0f 31 00 64 4c 8b 00 e9 50 4f 00 00 41 57 41 56 49 89 fb 41 55 41 54 49 89 f2 55 53 4d 89 cc 48 89 d3 48 81 ec c8 00 00 00 <8b> 69 08 64 48 8b 04 25 28 00 00 00 48 89 84 24 b8 00 00 00 31 c0
Feb 23 12:55:41 unRAID kernel: ml3:Upstairs[693]: segfault at 20 ip 000014d0c73c9e1d sp 000014d0b311a690 error 4 in libc-2.27.so[14d0c72f0000+1e7000]
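For anyone hitting the same thing: a quick shell pipeline can summarise which processes are segfaulting without wading through the kernel "Code:" dumps. This is just a sketch; /var/log/syslog is the usual Unraid location, adjust the path if yours differs.

```shell
# Summarise segfaulting processes from the syslog.
# SYSLOG path is an assumption (/var/log/syslog is the standard Unraid location).
# Field 6 of each kernel segfault line is "name[pid]:"; strip the "[pid]:" part
# and count occurrences per process name, most frequent first.
SYSLOG=${SYSLOG:-/var/log/syslog}
grep 'segfault at' "$SYSLOG" 2>/dev/null | awk '{print $6}' | sed 's/\[.*//' | sort | uniq -c | sort -rn
```

In the log above this would show ml3:Upstairs at the top, which at least narrows down which process is repeatedly crashing in libc.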
trurl, posted February 23, 2023

37 minutes ago, INGY said: "remove the disk and runs some checks on it"

Don't remove it. It's easier and safer to check it in your server.

Attach diagnostics to your NEXT post in this thread.
INGY (author), posted February 23, 2023

Hi trurl, thanks for the quick reply.

I found a post similar to mine and ended up rebooting the server, since I had everything else stopped. It came back up, but that drive has now disappeared, so I'll be checking it. I'm currently running a memory test, which is all clear so far.

When I said "remove" I meant from the array (sorry, I should have made that clearer), so I could run tests on the drive and bring it back in if it was clear. It's now looking like there may be a hardware issue, hopefully just a cable, though it's odd that it simply died without any faults slowly creeping up first. I checked through the diagnostics logs and couldn't spot anything putting holds on the disk.

Happy for this to be closed, though, as my main issue is sorted. Thanks again.
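On the "anything putting holds on the disk" check: one way to look for open file handles under a mount point, without needing lsof or fuser installed, is to scan /proc directly. A minimal sketch, with /mnt/disk1 as an assumed example mount point:

```shell
# List processes holding files open under a mount point by scanning
# /proc/<pid>/fd symlinks. MOUNT=/mnt/disk1 is an assumed example path;
# run as root so every process's fd directory is readable.
MOUNT=${MOUNT:-/mnt/disk1}
for pid in /proc/[0-9]*; do
  for fd in "$pid"/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
      "$MOUNT"/*|"$MOUNT") echo "pid ${pid##*/}: $target" ;;
    esac
  done
done
```

Anything this prints is a process that would stop the array disk from unmounting cleanly.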
trurl, posted February 23, 2023

1 hour ago, trurl said: "Attach diagnostics to your NEXT post in this thread"