techystreamer Posted February 29

Upon accessing the server, Unraid was not running processes, so I rebooted. After rebooting, a ZFS disk in the array was stuck on mounting. I checked all connections and rebooted again, with the same result. I tried to replace the drive, thinking there was an issue with it since I had an uncorrectable I/O failure. How do I get the drive to mount, and should it be the original drive, since the drive I tried to replace it with won't mount either?
JorgeB Posted February 29

If I understand correctly, you already had the same issue with the actual disk before trying the emulated disk? Also, please post the diagnostics.
techystreamer Posted February 29 (Author)

Yes, same issue on the original disk 6. Trying to pull diagnostics seems stuck here.
techystreamer Posted February 29 (Author)

Had to reboot to get the diagnostics, as it froze on the above. tower-diagnostics-20240229-1326.zip
JorgeB Posted February 29

You will need to reboot, force it if needed, then don't start the array; post the diags and the output of zpool import.
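The suggested sequence, sketched as console commands (my reading of the advice: zpool import with no arguments only scans for importable pools and changes nothing, and diagnostics is Unraid's built-in CLI collector):

```shell
# With the array left stopped after the (forced) reboot:

# Collect diagnostics; Unraid writes the zip under /boot/logs on the flash drive
diagnostics

# Scan for pools that could be imported, without actually importing anything
zpool import
```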
techystreamer Posted February 29 (Author)

3 minutes ago, JorgeB said: You will need to reboot, force it if needed, then don't start the array, post the diags and the output of zpool import
JorgeB Posted February 29

All array disks are using zfs, right? Besides disk6, I'm not seeing pools for disks 1 and 2 there. Try: zpool import -F disk6
techystreamer Posted February 29 (Author)

Disks 1 and 2 are xfs, as I kept them with their original data.
JorgeB Posted February 29

1 minute ago, techystreamer said: Disks 1 and 2 are xfs, as I kept them with their original data.

Oh yeah, missed that from the screenshot; they all looked like zfs.
techystreamer Posted February 29 (Author)

5 minutes ago, JorgeB said: All array disks are using zfs, right? Besides disk6, I'm not seeing pools for disks 1 and 2 there. Try zpool import -F disk6

Says no such pool available.
JorgeB Posted February 29

That's not a great sign, but kind of expected. Try: zpool import -F 4903234481085785192 disk6
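For anyone following along: the long number is the pool's numeric GUID, taken from the earlier zpool import scan, and the trailing disk6 is the name to import it under. Importing by GUID is a way to reach a pool whose name can't be resolved. Roughly:

```shell
# -F: recovery mode; if the newest transactions are damaged, ZFS may
# discard the last few seconds of writes to reach an importable state.
# The GUID identifies the pool; "disk6" is the name to import it as.
zpool import -F 4903234481085785192 disk6
```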
techystreamer Posted February 29 (Author)

Do I need to use sdj instead of drive 6?
techystreamer Posted February 29 (Author)

9 minutes ago, JorgeB said: That's not a great sign, but kind of expected. Try zpool import -F 4903234481085785192 disk6
techystreamer Posted February 29 (Author)

Would using the zdb command provide any useful info? I read about it, but I'm not sure how to use it on my setup.
JorgeB Posted February 29

You can try specifying the device: zpool import -F -d /dev/sdj1 disk6. If it fails, try: zpool import -FX -d /dev/sdj1 disk6. If that also fails, I'm afraid you will probably need to destroy the pool and restore from a backup.
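A sketch of that escalation with the flags spelled out (my reading of the zpool-import man page; the read-only variant at the end is an extra precaution not mentioned above):

```shell
# -d restricts the scan to a specific device/partition
# (sdj1 is this system's partition; verify yours with lsblk first)
zpool import -F -d /dev/sdj1 disk6

# -X (with -F) is the last resort: an "extreme rewind" that walks back
# through older transaction groups and can discard recent writes
zpool import -FX -d /dev/sdj1 disk6

# Either attempt can be made safer by importing read-only,
# so nothing further is written to the pool:
zpool import -F -o readonly=on -d /dev/sdj1 disk6
```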
techystreamer Posted February 29 (Author)

Ok. So is disk6 its own pool? How do I destroy it and then restore?
techystreamer Posted February 29 (Author)

I see there is an sdj directory on the system; is that the directory from before I changed over to zfs? Is this something I can work with? I'm not sure how to restore the disk6 pool.
JorgeB Posted March 1

12 hours ago, techystreamer said: So is disk6 its own pool?

Correct.

12 hours ago, techystreamer said: How do I destroy it and then restore?

You can use wipefs; to restore, you'd need to already have a backup.

12 hours ago, techystreamer said: I see there is a sdj directory on the system, is that the directory before I changed over to zfs?

If you mean /dev/sdj, that's the disk.
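For clarity, wipefs erases the filesystem/pool signatures so the disk shows up as unformatted; it does not shred the data blocks themselves. A hedged sketch (double-check the device name before running anything destructive):

```shell
# With no options, wipefs just lists the signatures it can see (read-only)
wipefs /dev/sdj1

# -a removes all signatures; after this the pool is gone as far as
# ZFS and Unraid are concerned
wipefs -a /dev/sdj1
```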
techystreamer Posted March 2 (Author)

I wasn't aware of a backup of the system config other than the backup of the flash, and I am not familiar with wipefs. I removed the drive, since it was mostly empty and didn't contain data I wanted. It was doing a parity rebuild for over a day, then the system was not accessible again. So I restarted, started the array and the parity rebuild again, and the system was inaccessible again within a couple of hours. I attached diagnostics; hopefully they show what is causing the inaccessibility. tower-diagnostics-20240302-1729.zip (Edited March 3 by techystreamer)
JorgeB Posted March 3

Syslog in the diags starts over after every boot; enable the syslog server and post that after a crash.
techystreamer Posted March 6 (Author)

I enabled the syslog and it has crashed again. Do I do the diagnostics the same way? Does it pull the results from the flash, or do I need to access them another way?
itimpi Posted March 6

6 minutes ago, techystreamer said: Does it pull the results from the flash or do I need to access another way?

If you are on 6.12.8 AND you are mirroring to flash, then it will be automatically included.
techystreamer Posted March 6 (Author)

Here is the diagnostics. tower-diagnostics-20240306-0745.zip
JorgeB Posted March 6

Not sure if that's related to the problem, but I see a lot of GPU-related issues logged; if you are not using the GPU for transcoding or similar, try blacklisting the amdgpu driver: https://docs.unraid.net/unraid-os/release-notes/6.10.0#linux-kernel
techystreamer Posted March 6 (Author)

6 minutes ago, JorgeB said: Not sure if that's related to the problem, but I see a lot of GPU-related issues logged; if you are not using the GPU for transcoding or similar, try blacklisting the amdgpu driver: https://docs.unraid.net/unraid-os/release-notes/6.10.0#linux-kernel

I saw that and wondered, but I haven't changed anything with the GPU recently. The crash happened within the last 10 hours of the log, overnight into the morning.