Orphan Images Reappearing Frequently



3 hours ago, trurl said:

Are you automatically updating dockers? If so, try disabling automatic updates; then you can update manually as necessary and capture the output to see whether orphans are created without being deleted.

Yes, I am automatically updating them using the Auto Update plugin. I will disable all of those and post a reply once I have a chance to test with a new version release. I'm going to assume that orphans would be created when I hit "apply update"?

Link to comment
19 hours ago, trurl said:

When you update or edit a container, it gets recreated. The old container becomes an orphan, but normally it gets deleted automatically.
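(Editor's note: in plain Docker terms, an orphan is what the CLI calls a "dangling" image: one whose tag has been taken over by a newer pull. A minimal sketch of inspecting and cleaning them from the server's shell, assuming CLI access to the Docker daemon:)

```shell
# List "dangling" images -- these are the orphans shown on the
# Unraid Docker page.
docker images --filter "dangling=true"

# Remove all dangling images; images still referenced by containers
# are left alone.
docker image prune -f
```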

 

Okay, so I disabled auto-update as detailed here:

 

[Screenshot: Docker auto-update settings, now disabled]

 

There are now two orphaned images, and both Jackett & Plex have updates ready to apply...

[Screenshot: Docker page showing two orphaned images, with updates available for Jackett and Plex]

 

 

Link to comment
4 hours ago, trurl said:

Have you tried manually updating any dockers? Be sure to capture the output.

 

Have you done memtest?

I have not applied the updates yet. Should I do that before or after deleting the orphans? When you say capture the output, I presume you mean the docker run; if not, let me know what that is. I have not run memtest. My understanding is that you have to do it at startup, but I don't have a monitor anymore, so I'm not sure if that's a problem here or not.

Link to comment
Just now, trurl said:

Delete the orphans first, and yes, I mean the docker run.

Here ya go:

 

PLEX
 

Pulling image: linuxserver/plex:latest

IMAGE ID [1008960680]: Pulling from linuxserver/plex.
Status: Image is up to date for linuxserver/plex:latest

TOTAL DATA PULLED: 0 B

Stopping container: plex

Successfully stopped container 'plex'

Removing container: plex

Successfully removed container 'plex'

Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='latest' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Videos/Films/':'/movies':'rw' -v '/mnt/user/Videos/TV/':'/tv':'rw' -v '/mnt/user/Music/':'/music':'rw' -v '':'/transcode':'rw' -v '/mnt/user/Videos/Stand-Up/':'/stand-up':'rw' -v '/mnt/user/Videos/Trailers/':'/Trailers':'rw' -v '/mnt/user/Videos/Documentaries/':'/documentaries/':'rw' -v '/mnt/user/Videos/Sports/':'/sports':'rw' -v '/mnt/user/appdata/plexmediaserver':'/config':'rw' 'linuxserver/plex'

e151a5ff03fa1baa766e27264c11ea2e1c65b36babce67931be1e9d8bd3faaf9

The command finished successfully!

 

Jackett
 

Pulling image: linuxserver/jackett:latest

IMAGE ID [166390777]: Pulling from linuxserver/jackett.
Status: Image is up to date for linuxserver/jackett:latest

TOTAL DATA PULLED: 0 B

Stopping container: jackett

Successfully stopped container 'jackett'

Removing container: jackett

Successfully removed container 'jackett'

Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='jackett' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/downloads':'rw' -v '/mnt/user/appdata/jackett':'/config':'rw' --net=container:binhex-delugevpn 'linuxserver/jackett'

d60dce3d0f7f7c0fcd12e0f48a1da6711428a7e09501dd3f05c579d275d2bbdb

The command finished successfully!

 

Link to comment

@trurl

Okay, so some more information for you. I logged into my server a few minutes ago and saw that there were again 2 new orphan images. I compared their IDs to the ones from the screenshot a few posts back and they are indeed different (not sure if that's surprising or not). I then ran "check for updates" and found that two of the containers needed to be updated... so perhaps this is not a hacker and is in fact tied to some sort of "bug" in my system???

 

Here are the docker runs:

Pulling image: linuxserver/duckdns:latest

IMAGE ID [1600041090]: Pulling from linuxserver/duckdns.
Status: Image is up to date for linuxserver/duckdns:latest

TOTAL DATA PULLED: 0 B

Stopping container: duckdns

Successfully stopped container 'duckdns'

Removing container: duckdns

Successfully removed container 'duckdns'

Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='duckdns' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'SUBDOMAINS'='REMOVEDBYME' -e 'TOKEN'='REMOVEDBYME' -e 'PUID'='99' -e 'PGID'='100' 'linuxserver/duckdns'

cddcf69994f595a1ce33e8d26098396b65978d772d49307284c96adf864282cf

The command finished successfully!

And the other one was Jackett, which updated last night too, so I don't see much having changed there 🙂
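(Editor's note: one way to check whether these repeated "updates" correspond to genuinely new upstream images is to compare content digests over time. A sketch, assuming shell access to the server:)

```shell
# Print the content digest of the local Jackett image. Note it down,
# then compare after the next "update": a different digest means the
# upstream image really changed; an identical one means nothing new
# was actually pulled.
docker image inspect --format '{{index .RepoDigests 0}}' linuxserver/jackett:latest
```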

 

Link to comment
1 hour ago, trurl said:

If it is a bug, it hasn't been reported by anyone else. I wonder if searching outside, in the larger world of Docker, would turn up anything.

Yeah, I am all about using Google and forum search to turn up stuff, but if you don't know what to input, it can severely limit what I'm able to find. Is it possible this is tied to a plugin that is somehow checking for updates surreptitiously? Here are my plugins...

[Screenshot: installed plugins list]

Link to comment
9 hours ago, trurl said:

Do the orphans get created when the backup runs?

They don't appear to be. Okay, so interestingly enough, I just checked: a new orphan had appeared. I ran "check for updates" and Jackett had an update... is it normal for a docker to have an update 3 consecutive days? Here's the docker run from that install:

[Screenshot: docker run output for the Jackett update]

This is really strange, and I appreciate your continued help on it.
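(Editor's note: daily update notices are not necessarily suspicious; linuxserver.io rebuilds its images frequently for base-image and package bumps. A quick sanity check, assuming shell access, is to look at when the local image was actually built:)

```shell
# Show the build timestamp of the local image. If it moves forward
# each time an "update" appears, these are routine upstream rebuilds,
# not a sign of tampering.
docker image inspect --format '{{.Created}}' linuxserver/jackett:latest
```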

Link to comment

Okay, so when I logged in today, you will notice that I had 3 orphans and one image saying it needed to be updated:

[Screenshot: Docker page showing 3 orphan images and one update available]

 

 

After hitting "check for updates", there were indeed 3 that had updates ready.

[Screenshot: Docker page after checking for updates, showing 3 updates available]

 

I deleted the orphan images, applied the updates, and took this screen cap of the docker run for Calibre-Web...

[Screenshot: docker run output for calibre-web]

 

I'm guessing it's tied to the way my containers are updating. I haven't changed any settings and have tried to show all the configurations in previous posts. Any suggestions?

Link to comment

@trurl

Would you have any thoughts on this? I was hoping some others might help take the weight off your shoulders.

 

I left the server alone for a few days, came back today, and there were 7 orphans with 5 images needing updates applied. I'm curious that the Total Data Pulled is always 0 B for every docker... is this unrelated to the size of the file, or is it because the data is downloaded ahead of time and then the update is applied, so the docker run doesn't show the download? (Or something else? Again, out of my depth here.)
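(Editor's note: my reading of the logs above is that "TOTAL DATA PULLED: 0 B" simply means every layer of the image was already present on disk when the recorded pull ran, so Docker had nothing to transfer. This is easy to reproduce by hand, assuming shell access:)

```shell
# Re-pulling an image that is already current transfers nothing:
# Docker reports "Image is up to date" for each layer -- the same
# behaviour behind the "TOTAL DATA PULLED: 0 B" lines above.
docker pull linuxserver/jackett:latest
```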

Link to comment
