
6.12.1 Docker Service failed to start | Cache pool Unmountable: Unsupported or no file system


Go to solution: Solved by JorgeB


Diagnostics attached.

 

I had an issue on 6.12 where my Docker containers would not update -- updates were failing with an error saying the container name already existed. Then I noticed some of my containers were not working, and all of the database containers were stopped and would not start. I figured it might be a bug in the new stable version, so I upgraded to 6.12.1.

 

Upon starting the array, the Docker service would not start. Then I saw on the Main tab that my Cache-docker pool is showing as Unmountable: Unsupported or no file system.

 

Any help is much appreciated! I saw a similar thread here, but couldn't get the btrfs rescue command figured out.
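For reference, what I tried was roughly the following (a sketch only -- /dev/sdd1 is my cache device per the syslog below, and I wasn't sure whether it was safe to run these on a possibly failing disk):

# clear the btrfs tree-log so mount no longer has to replay it
btrfs rescue zero-log /dev/sdd1
# if the superblock itself is damaged, restore it from a backup copy
btrfs rescue super-recover -v /dev/sdd1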

 

homegrown-diagnostics-20230622-2105.zip


There are read errors with the device:

 

Jun 22 20:59:19 homegrown kernel: BTRFS info (device sdd1): using crc32c (crc32c-intel) checksum algorithm
Jun 22 20:59:19 homegrown kernel: BTRFS info (device sdd1): using free space tree
Jun 22 20:59:19 homegrown kernel: BTRFS info (device sdd1): enabling ssd optimizations
Jun 22 20:59:19 homegrown kernel: BTRFS info (device sdd1): start tree-log replay
Jun 22 20:59:19 homegrown kernel: sd 8:0:1:0: [sdd] tag#2534 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
Jun 22 20:59:19 homegrown kernel: sd 8:0:1:0: [sdd] tag#2534 Sense Key : 0x4 [current]
Jun 22 20:59:19 homegrown kernel: sd 8:0:1:0: [sdd] tag#2534 ASC=0x27 ASCQ=0x1
Jun 22 20:59:19 homegrown kernel: sd 8:0:1:0: [sdd] tag#2534 CDB: opcode=0x2a 2a 00 00 01 30 00 00 00 e0 00
Jun 22 20:59:19 homegrown kernel: critical target error, dev sdd, sector 77824 op 0x1:(WRITE) flags 0x1800 phys_seg 5 prio class 2

 

There's already a failed long SMART test, so the drive may be failing; you can run another one to confirm.
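If you prefer the console, something like this should work (assuming the device is still sdd and smartctl is available, as it normally is on Unraid):

smartctl -t long /dev/sdd    # start an extended (long) self-test
smartctl -a /dev/sdd         # view the result once the test finishes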


Thanks for the reply, Jorge. I'll replace the Cache-docker (sdd) drive.

 

What is the best process to replace and rebuild the drive since it is unmountable? I have appdata & VM backups from the CA Backup plugin, but I'm only now realizing that the plugin is deprecated in 6.12. I didn't know that, and since I've been on 6.12 / 6.12.1 for about a week, my last appdata & VM backups are from June 15th. If I install the new Appdata Backup plugin, will it be able to restore backups made by the previous plugin?

 

Assuming so, would I just shut down the array, replace the Cache-docker drive, start the array, assign the new disk as Cache-docker, and then restore the appdata & VM backups?

2 minutes ago, JorgeB said:

If it's failing, you cannot do a direct replacement; you can try cloning it with ddrescue and then see if the clone can mount or be repaired.
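Roughly like this (a sketch: /dev/sdX stands for the new destination disk, which must be at least as large as the old one, and the map file lets ddrescue resume if interrupted):

ddrescue -f -r3 /dev/sdd /dev/sdX /boot/ddrescue.map    # clone the whole disk, retrying bad sectors 3 times

Writing the map file to /boot keeps it on the flash drive, so the clone can be resumed even after a reboot.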

OK, I will look into it; ddrescue is new to me. Thank you.

 

I found this post from you in the FAQ and was able to mount the drive to /temp and copy off all of the contents. Is that useful for rebuilding a new cache drive?
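For anyone finding this later, the steps from that FAQ post were roughly as follows (the destination path is just where I chose to put the copy; an array disk with enough free space):

mkdir /temp
mount -o ro,usebackuproot /dev/sdd1 /temp     # read-only mount, falling back to an older tree root if needed
rsync -avh /temp/ /mnt/disk1/cache-rescue/    # copy everything off to an array disk
umount /temp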

 

2 minutes ago, JorgeB said:

Yes, that's another option.

Thanks. Is there a guide for rebuilding the cache after a failed disk? I didn't find this particular situation in the documentation. After replacing the cache drive with a new one, what's next? Reinstall all Docker containers and then copy over the appdata that I pulled off the failing drive?
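In case it helps, what I have in mind once the new pool is formatted and mounted is something like this (paths are mine; cache-rescue is where I copied the data earlier):

rsync -avh /mnt/disk1/cache-rescue/appdata/ /mnt/cache-docker/appdata/    # copy appdata back onto the new pool

and then re-adding the containers from Apps > Previous Apps so they pick up their existing templates.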

