Orphan Images Reappearing Frequently


3 hours ago, trurl said:

Are you automatically updating dockers? If so try disabling the automatic update of your dockers then you can manually update as necessary and catch the output to see if orphans are created without being deleted.

Yes, I am automatically updating them using the Auto Update plugin. I will disable all of those and post a reply once I have a chance to test with a new version release. I'm going to assume that orphans would be created when I hit "apply update"?

19 hours ago, trurl said:

When you update or edit a container, it gets recreated. The old container becomes an orphan, but normally it gets deleted automatically.
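For reference, the orphans the Docker page shows are what Docker itself calls "dangling" images, and they can be listed from the Unraid terminal. A minimal sketch, assuming the standard docker CLI is on the PATH (errors are suppressed so it degrades gracefully where it isn't):

```shell
# List dangling ("orphan") image layers and count them.
# "dangling=true" matches the <none>-tagged layers left behind after a recreate.
orphans=$(docker images --filter "dangling=true" -q 2>/dev/null | wc -l | tr -d ' ')
echo "orphaned image layers: $orphans"
```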


Okay, so I disabled auto-update as detailed here:

And there are two orphaned images, and both Jackett and Plex have updates ready to apply...




4 hours ago, trurl said:

Have you tried manually updating any dockers? Be sure to capture the output.


Have you done memtest?

I have not applied the updates yet; should I do that before or after deleting the orphans? When you say "capture the output," I presume you mean the docker run; if not, let me know what that is. I have not run memtest; my understanding is that you have to do it at startup, but I don't have a monitor anymore, so I'm not sure if that's a problem here or not.

Just now, trurl said:

Delete the orphans first, then yes, I mean the docker run.
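For reference, that cleanup can also be done from the Unraid terminal instead of clicking each orphan in the GUI. A minimal sketch, assuming the docker CLI is available; `docker image prune -f` removes all dangling layers non-interactively:

```shell
# Delete every dangling (orphan) image layer without a confirmation prompt.
# The fallback message covers shells where the docker CLI is not available.
docker image prune -f 2>/dev/null || echo "docker not available; delete the orphans from the Docker page instead"
```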

Here ya go:



Pulling image: linuxserver/plex:latest

IMAGE ID [1008960680]: Pulling from linuxserver/plex.
Status: Image is up to date for linuxserver/plex:latest


Stopping container: plex

Successfully stopped container 'plex'

Removing container: plex

Successfully removed container 'plex'

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='latest' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Videos/Films/':'/movies':'rw' -v '/mnt/user/Videos/TV/':'/tv':'rw' -v '/mnt/user/Music/':'/music':'rw' -v '':'/transcode':'rw' -v '/mnt/user/Videos/Stand-Up/':'/stand-up':'rw' -v '/mnt/user/Videos/Trailers/':'/Trailers':'rw' -v '/mnt/user/Videos/Documentaries/':'/documentaries/':'rw' -v '/mnt/user/Videos/Sports/':'/sports':'rw' -v '/mnt/user/appdata/plexmediaserver':'/config':'rw' 'linuxserver/plex'


The command finished successfully!



Pulling image: linuxserver/jackett:latest

IMAGE ID [166390777]: Pulling from linuxserver/jackett.
Status: Image is up to date for linuxserver/jackett:latest


Stopping container: jackett

Successfully stopped container 'jackett'

Removing container: jackett

Successfully removed container 'jackett'

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='jackett' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/downloads':'rw' -v '/mnt/user/appdata/jackett':'/config':'rw' --net=container:binhex-delugevpn 'linuxserver/jackett'


The command finished successfully!
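Both runs above report "Image is up to date," meaning no new layers were actually downloaded; the containers were simply stopped, removed, and recreated. One quick way to scan a captured update log for a real pull, sketched below with sample log lines copied from the output above (not live docker output): a genuine update would contain a "Downloaded newer image" status line.

```shell
# Write sample log lines (taken from the output above) and scan them
# for evidence that a new image layer was actually downloaded.
cat > /tmp/docker-update.log <<'EOF'
Status: Image is up to date for linuxserver/plex:latest
Status: Image is up to date for linuxserver/jackett:latest
EOF

if grep -q "Downloaded newer image" /tmp/docker-update.log; then
    echo "a new image layer was pulled"
else
    echo "no new layers pulled; the containers were only recreated"
fi
```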




Okay, so some more information for you: I logged into my server a few minutes ago and saw that there were again two new orphan images. I then compared the IDs to the ones from the screenshot a few posts back, and they are indeed different (not sure if that's surprising or not). I then ran "check for updates" and found that two of the containers needed to be updated... so perhaps this is not a hacker and is in fact tied to some sort of bug in my system?
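That ID comparison can be scripted. A small sketch: the two values below are the short IDs that appear in the pull logs in this thread, used purely as placeholder sample data (not live `docker images -q` output). If the ID changes across an update, the previous layer is exactly what shows up as an orphan when the automatic cleanup fails.

```shell
# Compare a before/after image ID; sample values are the short IDs from the
# pull logs in this thread, used here purely as placeholders.
before="1008960680"
after="1600041090"
if [ "$before" = "$after" ]; then
    echo "same image; no orphan expected"
else
    echo "image ID changed; the old layer is what becomes the orphan"
fi
```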


Here are the docker runs:

Pulling image: linuxserver/duckdns:latest

IMAGE ID [1600041090]: Pulling from linuxserver/duckdns.
Status: Image is up to date for linuxserver/duckdns:latest


Stopping container: duckdns

Successfully stopped container 'duckdns'

Removing container: duckdns

Successfully removed container 'duckdns'

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='duckdns' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'SUBDOMAINS'='REMOVEDBYME' -e 'TOKEN'='REMOVEDBYME' -e 'PUID'='99' -e 'PGID'='100' 'linuxserver/duckdns'


The command finished successfully!

And the other one was Jackett, which updated last night too, so I don't see much having changed there 🙂


1 hour ago, trurl said:

If it is a bug it hasn't been reported by anyone else. I wonder if searching outside in the larger world of docker would find anything. 

Yeah, I am all about using Google and forum search to turn up stuff, but if you don't know what to input, it can severely limit what I'm able to find. Is it possible this is tied to a plugin that is somehow checking for updates surreptitiously? Here are my plugins...


9 hours ago, trurl said:

Do the orphans get created when the backup runs?

They don't appear to be. Okay, so interestingly enough, I just checked and a new orphan had appeared. I ran "check for updates" and Jackett had an update... is it normal for a docker to have an update three consecutive days? Here's the docker run from that install:

This is really strange, and I appreciate your continued help on it.
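On the daily-update question: linuxserver.io images are rebuilt frequently, so back-to-back updates are not unusual. One way to sanity-check is to look at the local image's build date, sketched here under the assumption that the docker CLI and the image are present (it falls back to a placeholder message otherwise):

```shell
# Print the local jackett image's creation timestamp, if available.
# Falls back to a message where docker or the image is not reachable.
created=$(docker image inspect --format '{{.Created}}' linuxserver/jackett:latest 2>/dev/null) \
  || created="unknown (docker or the image is not available here)"
echo "jackett image built: $created"
```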
