Why so many unused volumes?



Not sure if this is just a Docker feature or whether it is down to how Unraid updates containers, but I regularly have to go in and seem to end up deleting about ten unused volumes.

 

I am guessing that it must be something like: when a container is updated the old volume gets disconnected and a new one created, and the old one then sits there in limbo?

 

Is there any way to stop this build up of volumes happening?

1 hour ago, Squid said:

There's an issue if you update from within Apps that it causes orphan images to appear.  Harmless since the orphans don't actually take up any space and on a todo list to fix

It is volumes rather than images. I only update my containers through the Unraid UI (that I can think of, anyway).

Looking again now, I am not sure whether this is something odd in Portainer. It was showing lots of unused volumes, so I deleted them. It is now showing two volumes, but I have many more than that (Portainer itself shows volumes against containers correctly, just not in the volume list 😕)
Will see if I can establish when they appear and what they relate to.

8 hours ago, jameson_uk said:

I am guessing that it must be something like: when a container is updated the old volume gets disconnected and a new one created, and the old one then sits there in limbo?

It is this, though it's not specifically when a container is updated; rather, it is every time a container is recreated. The reason it happens is that Docker does not automatically delete volumes when a container is removed, and unless you specify a named volume it won't reattach the old volume to the new container.
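You can see the difference for yourself with a throwaway container. This is a sketch only; the container name `demo`, volume name `mydata`, and the alpine image are just placeholders for illustration:

```shell
# An anonymous volume (-v with only a container path): a brand-new volume
# is created every time the container is created.
docker run -d --name demo -v /data alpine sleep 60
docker rm -f demo          # the container goes away...
docker volume ls           # ...but its anonymous (64-hex-char) volume stays behind

# A named volume (-v name:path): recreating the container reattaches the
# same volume, so nothing is left orphaned.
docker run -d --name demo -v mydata:/data alpine sleep 60
docker rm -f demo
docker run -d --name demo -v mydata:/data alpine sleep 60   # same "mydata" again
docker rm -f demo
```

Run the first pair a few times and you will see the list of anonymous volumes grow, which is exactly the build-up you are describing.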

 

8 hours ago, jameson_uk said:

Is there any way to stop this build up of volumes happening?

There are a couple of options. The first would be creating a script to periodically clean up unused volumes. The Docker CLI has a prune command (`docker volume prune`) that will remove all volumes not used by any container.
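A minimal sketch of such a cleanup script, which you could schedule (for example via the User Scripts plugin, a common way to run cron-style jobs on Unraid):

```shell
#!/bin/sh
# Preview first: "dangling" volumes are those not attached to any container.
docker volume ls -f dangling=true

# Delete every unused volume; -f skips the interactive confirmation prompt,
# which is needed when running unattended on a schedule.
docker volume prune -f
```

Be aware this deletes *any* unused volume, including named ones you might be keeping around deliberately, so check the preview output before scheduling it.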

The other option would be to ensure that they never get created in the first place. When creating a Docker image, the author uses the VOLUME keyword to specify which paths must be backed by a volume. When running a container, each of those paths must be attached to either a volume or a bind mount. If the user does not specify a named volume, Docker automatically creates an anonymous one for that container, which is what ends up being your problem.

In unRAID we typically use bind mounts, but template authors might not always specify a bind mount for every volume in the image (typically because it is of no use to the user, or because it is unknown to the author, since many don't create the images that they host templates for). You could use docker inspect on your images to determine which ones declare volumes, and then add path mappings for the missing ones to their templates.
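The inspect step above can be sketched like this; it loops over your local images and prints the paths each one declares with VOLUME, so you can compare them against the path mappings in your templates:

```shell
# Sketch: list the VOLUME paths declared by each local image. Any declared
# path that isn't mapped in the container's template will spawn an anonymous
# volume every time that container is recreated.
for img in $(docker images -q); do
  # RepoTags can be empty for dangling images, in which case this prints <no value>
  echo "== $(docker inspect --format '{{index .RepoTags 0}}' "$img")"
  docker inspect --format '{{json .Config.Volumes}}' "$img"
done
```

Images with no VOLUME instruction print `null`; anything else is a path worth adding to the template as a bind mount.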

