zerosk Posted August 17

Running Unraid 6.12.11. The main array is 8 drives (7 data, 1 parity); the parity drive is 16TB and the data drives are 8-14TB. The cache pool is 2x 1TB btrfs RAID1.

Disk 5 (14TB) is failing. Unraid eventually disabled it and its contents are being emulated. I have a new 16TB installed as a replacement, using the same SATA and power cables (I want to see whether the 14TB is actually bad or it's a cable/port issue).

I am also having issues with one of my cache drives at the same time: lots of CRC errors (>20000), and I believe it's causing stability issues on top of the fragile emulated state. The docker service keeps tipping over, containers error out when starting from a stopped state, and roughly 50% of the time the array fails to stop gracefully and I have to reboot the system. The system log is full of btrfs errors about the cache drive.

I have tried to fix this myself by reading the Unraid docs, some forum posts, and the FAQ on dealing with cache disks, but I am not getting the desired result, and I'm nervous that I will cause further damage or data loss if I'm not careful.

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/#comment-700582
https://forums.unraid.net/topic/89115-unmountable-too-many-missingmisplaced-devices-every-time/
https://forums.unraid.net/topic/141403-cannot-start-array-wrong-pool-state-cache-too-many-missingwrong-devices/

Desired solution:
- Run my cache pool as a single drive, on the remaining good 1TB SSD.
- Run the array in an emulated state while I wait for a preclear to complete on the new 16TB replacement disk 5.
- Maintain system stability (no docker crashes) until disk 5 can be properly integrated into the array.
- Once the replacement 1TB SSD arrives in a few days, add it back in as a mirror.

I have already mounted the good 1TB disk at /temp and used mc to copy all its files into a designated folder on Disk 7. At a glance, all my data appears intact.
I have backups of the important stuff as well if needed.

I then attempted to start the array with the cache pool set to 1 slot. The array started, but Unraid said the cache needed to be formatted. I ticked Yes and tried to format, and it gives the error "Unmountable: Unsupported or no file system". I then stopped the array, erased and precleared the good 1TB SSD, and rejoined it to the cache pool. On startup I still get "Unmountable: Unsupported or no file system". Finally I removed all drives, set the pool slots to 0, and created a new pool named "cache" instead of "Cache", and I still get "Unmountable: Unsupported or no file system".

Could someone please provide guidance on the best course of action?

diagnostics-20240816-2140.zip
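One possible cause of a format refusing to take is a stale filesystem signature left on the device. wipefs (from util-linux) can list and clear such signatures. A sketch against a throwaway image file, since running `wipefs -a` against the wrong real device destroys data; the device name on the real system (e.g. /dev/sdb) would need to be triple-checked first:

```shell
# Throwaway image file standing in for the SSD (the real target would be
# a device node such as /dev/sdX -- verify the name before wiping!).
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=4 status=none

# Plant a fake ext2 magic (0xEF53 at offset 1080) to simulate a stale signature.
printf '\123\357' | dd of="$IMG" bs=1 seek=1080 conv=notrunc status=none

before=$(wipefs "$IMG")       # with no options, wipefs only reports signatures
wipefs -a "$IMG" >/dev/null   # -a: erase every signature found (DANGEROUS on a real disk)
after=$(wipefs "$IMG")        # should now report nothing

echo "before: $before"
echo "after:  ${after:-<clean>}"
```

After clearing the signatures, stopping and restarting the array and retrying the format would be the next step.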
zerosk Posted August 17 (Author)

root@HP-Gaming:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6be85193-01"
/dev/loop1: TYPE="squashfs"
/dev/sdf1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="dcf35c91-a2e8-4d9a-a8bc-1b819d27eddf"
/dev/nvme0n1p1: UUID="1b8e3249-041f-466c-9e04-c95c6c17b5b4" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="fce6d4e1-8e1c-44d9-a04d-86f069e5e0cb"
/dev/md2p1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdb1: UUID="04835ac0-2973-455d-a406-d4ed8fb99400" UUID_SUB="e76222c1-f23c-4fc9-b422-95c095992f02" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/md5p1: UUID="55230ea4-74dc-4740-83a2-3d9171da7df7" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdk1: UUID="ea509c56-6f25-47e5-b35e-9f4f98548024" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="5540aa67-bc96-46b1-8407-e36a1dfe8e3b"
/dev/sdi1: UUID="ca9e0c5d-302e-4597-a02b-42ae8d8e9154" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="b6fece0f-3e5f-4f29-8fab-c74151e259e6"
/dev/md1p1: UUID="ea509c56-6f25-47e5-b35e-9f4f98548024" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdg1: UUID="7dea7d68-2878-484f-b3f4-3cb398dcc0a4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="0cbd7835-c6a0-40ef-ab26-aa8896a94239"
/dev/md4p1: UUID="5bd1dcbf-91cd-4864-bc43-d4ddf5784ee0" BLOCK_SIZE="512" TYPE="xfs"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="5bd1dcbf-91cd-4864-bc43-d4ddf5784ee0" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="88b2238f-773a-4b7e-854a-a57a7d066d38"
/dev/md7p1: UUID="f4677b64-d1f3-48f1-afb2-ce5c22dfa7c7" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdj1: UUID="17fb7ceb-2065-4584-99d8-ce8fb63a5cc2" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="8aba17f2-ed9f-40ac-b902-885c03a868e8"
/dev/md3p1: UUID="ca9e0c5d-302e-4597-a02b-42ae8d8e9154" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdh1: UUID="f4677b64-d1f3-48f1-afb2-ce5c22dfa7c7" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506489a0-25cb-4164-809c-f1a8c385b3b1"
/dev/md6p1: UUID="7dea7d68-2878-484f-b3f4-3cb398dcc0a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/loop2: UUID="c3ae25a2-ce33-4b32-ae52-b43b9a566c6a" UUID_SUB="6a796dee-93c5-4391-b74f-f5ba400eaf1e" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/loop3: UUID="f945752b-d159-4ae0-9c1e-29966aa70efc" UUID_SUB="6c835a9b-108c-4888-8d32-ed486fba0452" BLOCK_SIZE="4096" TYPE="btrfs"
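The paired UUIDs in that output are expected: Unraid's mdN devices wrap the array data partitions, so each /dev/mdNp1 reports the same filesystem UUID as its backing /dev/sdX1. A quick way to group devices by UUID and spot the pairs, shown here over an embedded sample of the output (in practice, pipe `blkid` straight into the awk script):

```shell
# Embedded sample of the blkid output; on the live system use: blkid | awk '...'
blkid_sample='/dev/sdf1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" TYPE="xfs"
/dev/md2p1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" TYPE="xfs"
/dev/sdb1: UUID="04835ac0-2973-455d-a406-d4ed8fb99400" TYPE="btrfs"'

# Collect device names under each filesystem UUID; a UUID listed twice is
# an sdX1/mdNp1 pair, one listed once has no array counterpart.
pairs=$(echo "$blkid_sample" | awk '
  match($0, /UUID="[^"]+"/) {
    uuid = substr($0, RSTART + 6, RLENGTH - 7)  # strip UUID=" and closing quote
    sub(/:$/, "", $1)                           # drop trailing colon from device
    devs[uuid] = devs[uuid] " " $1
  }
  END { for (u in devs) print u ":" devs[u] }')
echo "$pairs"
```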
JorgeB Posted August 18 (Solution)

The device is showing as busy; try formatting again after a reboot.
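For anyone who hits the same busy state and wants to find the culprit before rebooting: `lsof /dev/sdX1` or `fuser -v /dev/sdX1` will name the holder if those tools are installed. A dependency-free sketch that walks /proc instead, demonstrated on a temp file rather than a real device node (the device path is the only thing you would change):

```shell
# Path to check; on the real server this would be e.g. /dev/sdb1.
target=$(mktemp)

# Open the file from a background process so something "holds" it.
tail -f "$target" >/dev/null 2>&1 &
holder_pid=$!
sleep 1

# Walk /proc and report any process with the path among its open fds.
found=""
for fdlink in /proc/[0-9]*/fd/*; do
  if [ "$(readlink "$fdlink" 2>/dev/null)" = "$target" ]; then
    pid=${fdlink#/proc/}; pid=${pid%%/*}
    found="$found $pid"
  fi
done
echo "held open by PID(s):$found"

kill "$holder_pid"
```

If the holder turns out to be a loop device or a lingering mount rather than a process, `losetup -a` and `cat /proc/mounts` are the places to look instead.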
zerosk Posted August 19 (Author)

I swear I had already tried that at some point, but it worked either way. I copied my files back over and my containers and VMs booted. Thanks!