Rune
Posted April 15, 2024

I have an unraid array of 4 disks + parity, all individual ZFS volumes. One of the disks just suspended itself:

Quote

zpool status -xv
  pool: disk1
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub repaired 0B in 13:17:31 with 1 errors on Sat Mar 16 11:18:32 2024
config:

        NAME      STATE     READ WRITE CKSUM
        disk1     ONLINE       0     0     0
          md1p1   ONLINE       0     0   122

errors: List of errors unavailable: pool I/O is currently suspended

"zpool clear" seems to hang (but it does reset the CKSUM counter to zero).

Replacing the SATA cable seems like an obvious next step, but I worry that if I stop the array to swap the cable or reboot, I won't be able to start the array again, or I'll get stuck in maintenance mode if the drive won't mount. I don't have much time to deal with it today. Since unraid doesn't see the disk as "failed", its contents aren't emulated (users just get errors when accessing anything on that disk), and I don't want to get stuck with everything offline in maintenance mode overnight.
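Before stopping the array, it may be worth checking whether the drive itself is failing rather than just the cable. A minimal sketch, assuming the affected drive appears as /dev/sdX (a placeholder; substitute the actual device shown on unraid's Main tab):

# Check SMART health of the underlying drive (device name is a placeholder)
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect|overall-health'

# Look for link resets and I/O errors in the kernel log
dmesg | grep -iE 'ata|i/o error' | tail -n 50

# Resume the suspended pool once the device responds again
zpool clear disk1
zpool status -v disk1

If SMART already shows pending or reallocated sectors, swapping the cable is unlikely to be enough and the disk should be treated as failing.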
Solution

Rune
Posted April 16, 2024 (Author)

Disk is dying. Unmount failed. I replaced the SATA cable and booted it back up: lots of clicking noises, dmesg messages about resets and retries, and the mount failed again. I pulled the disk and booted the array degraded.
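For reference, once a replacement disk has been assigned and rebuilt from parity, the pool can be checked from the command line. A minimal sketch, assuming the rebuilt pool keeps the name disk1 (unraid names each single-disk ZFS pool after its slot):

zpool status -v disk1    # should show ONLINE with zeroed error counters
zpool scrub disk1        # verify the rebuilt data end to end
zpool status disk1       # watch scrub progress and results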