
docker.img' is in-use, cannot mount


aglyons


I searched the forums for this log entry, and while there were a few matching threads, they mostly concerned a corrupt docker.img file. That is not the case here.

 

While troubleshooting (and trying to figure out how Docker networks work), I have run into this multiple times. Every time I stop Docker to change a network setting, it will not start back up again, and the log says the docker image is still in use.

 

```
docker.img' is in-use, cannot mount.
```

 

The only way I have found to fix this is to stop the array (or reboot the whole server, but that's dramatic). It seems that something is not unmounting the docker image when the Docker service is stopped. I think this could be a bug.
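
In case it helps anyone else, here's a rough way I check from the terminal whether the image is still attached after stopping the service (this assumes the default mount point, and I'm not sure it covers every case):

```
grep docker /proc/mounts   # is /var/lib/docker still mounted?
losetup -a                 # is a loop device still bound to docker.img?
umount /var/lib/docker     # manually detach it if it is
```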

 

 


Not entirely clear what the problem is. Your appdata share has files on disk4, and the system share is entirely on disk2, yet system is set to be moved to the array. Ideally, the appdata, domains, and system shares would all be on the fast pool (cache) and set to stay there, so Docker/VM performance isn't impacted by the slower parity array, and so array disks can spin down, since these files are always open. Might as well clean that up.

 

Nothing can move open files, so disable Docker and VM Manager in Settings. Set the system share to cache:prefer, then run mover to get appdata and system moved to cache. Mover won't replace files, so if any already exist on cache you may have to use Dynamix File Manager to clean up duplicates.
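
A quick way to verify from the terminal that mover actually got everything off the array (paths assume the default share names):

```
du -sh /mnt/cache/appdata /mnt/cache/system   # should hold everything
du -sh /mnt/disk*/appdata /mnt/disk*/system   # should come back empty / not found when done
```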

 

Before enabling Docker again, delete and recreate docker.img to see if that helps anything:

 

https://wiki.unraid.net/Manual/Docker_Management#Re-Create_the_Docker_image_file

 

https://wiki.unraid.net/Manual/Docker_Management#Re-Installing_Docker_Applications
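
For reference, the CLI equivalent of the recreate step looks roughly like this; the GUI route in the links above is the supported one, and the image path assumes the default location shown in Settings → Docker:

```
/etc/rc.d/rc.docker stop                # stop the Docker service first
rm /mnt/user/system/docker/docker.img   # delete the image (check your actual path)
# Re-enable Docker in Settings -> Docker; Unraid creates a fresh image file.
```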

  • 11 months later...

I recently ran into a similar issue on Unraid 6.12.6.

 

Discovered that my dockers were not running and saw a "Docker service failed to start" message on my Docker tab.

 

Saw a bunch of errors in the logs like:

 

```
kernel: I/O error, dev loop2, sector 2764512 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 67, rd 0, flush 0, corrupt 0, gen 0
kernel: loop: Write error at byte offset 5152833536, length 4096.
```

 

Sounds like there is some issue writing to the docker image?
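
In case it's useful to someone, this might be a way to check whether btrfs has logged persistent errors against the loop device backing the image (loop2 in my logs); I'm assuming the default /var/lib/docker mount point:

```
losetup -a | grep docker             # confirm which loop device backs docker.img
btrfs device stats /var/lib/docker   # wr/rd/flush error counters for the image
```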

 

So I tried to shut down the array, but it wouldn't go down; it got stuck on unmounting disks.

 

Tried the suggestions in this thread; unfortunately `umount /var/lib/docker` did not seem to work. Messages in the logs suggest the docker image isn't the only problem: the system also cannot unmount my cache drive or my array drive.

 

I wound up going into the Open Files plugin and saw that several appdata folders were in use by `shfs`. I killed that process via the terminal; still no luck.
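
In hindsight, killing shfs was probably a bad idea, since shfs is the process that provides the /mnt/user mount itself. Something like this might be a safer way to see what's actually holding the mount open:

```
fuser -vm /mnt/user      # list processes with open files on the user shares
lsof /mnt/user/appdata   # or check one specific path (non-recursive)
```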

 

In the end I hit the Shutdown button, which worked cleanly, though I'm still not sure why.

 

Seems like maybe a disk I/O issue? My plan is to go into my docker configs and change `/mnt/user` to `/mnt/cache` where possible to hopefully lighten the load on the fuse filesystem.
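
Something like this is what I have in mind, assuming the share lives entirely on the cache pool (container name and image are just examples):

```
# Before (goes through the shfs/FUSE layer):
#   -v /mnt/user/appdata/plex:/config
# After (direct access to the pool, bypassing FUSE):
docker run -d --name plex \
  -v /mnt/cache/appdata/plex:/config \
  lscr.io/linuxserver/plex
```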

 

My only significant recent changes to the system were adding an additional NVMe ZFS pool (no data has been added to it yet, so I'm doubtful it's the culprit) and setting up a Graylog stack with docker-compose... I'm a little worried that writes from Graylog mucked things up, but I only have syslog and Plex feeding into it, so the writes shouldn't be *too* excessive.

 

Anyone have any thoughts?

18 hours ago, shaihulud said:

Seems like maybe a disk I/O issue? My plan is to go into my docker configs and change `/mnt/user` to `/mnt/cache` where possible to hopefully lighten the load on the fuse filesystem.

You can also eliminate the FUSE overhead by making the share an exclusive share.
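
On 6.12 an exclusive share is bind-mounted straight from the pool, so you can confirm it took effect with something like this (share name is just an example):

```
findmnt /mnt/user/appdata   # FSTYPE shows btrfs/zfs instead of fuse.shfs when exclusive
```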

1 hour ago, trurl said:

Attach diagnostics to your NEXT post in this thread.

 

Thanks for your help, here they are.

 

Things seem to be working after a reboot. Dockers are back; I've disabled Graylog and DelugeVPN for now until I have a better idea of what was going on.

 

I did have a large torrent that was downloading and my cache drive got 80% full, so maybe it was just a space issue, although I would have expected things to fail a little more gracefully if my cache disk filled up (though maybe that was an incorrect assumption).

shai-hulud-diagnostics-20240106-1046.zip

3 hours ago, shaihulud said:

I did have a large torrent that was downloading and my cache drive got 80% full, so maybe it was just a space issue, although I would have expected things to fail a little more gracefully if my cache disk filled up (though maybe that was an incorrect assumption).

All of your shares that use cache are configured to overflow to the array if cache gets below Minimum Free. But...

 

Your cache Minimum Free is zero, so it will never overflow, it will just fail if cache fills completely.

 

And btrfs seems particularly fragile if you fill it up.
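
If you want to see how close the pool actually is to full, plain df can be misleading on btrfs; this shows the real allocated vs. unallocated picture:

```
btrfs filesystem usage /mnt/cache
```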

5 hours ago, trurl said:

All of your shares that use cache are configured to overflow to the array if cache gets below Minimum Free. But...

 

Your cache Minimum Free is zero, so it will never overflow, it will just fail if cache fills completely.

 

And btrfs seems particularly fragile if you fill it up.

 

Oh wow, thank you, this may be just what I needed.

 

Set it to 30GB; let's see how it goes.

