Orphan Images Reappearing Frequently



I cannot explain why this keeps happening. I have auto-update applications set to run daily at 1 AM my time, and every few days I will log in to my server and notice that I have new orphan images.

 

My understanding is that they appear because of failed installs, but I really don't know. Is there a way to have my server delete them automatically? If not, what other options are there to prevent them from filling up my docker image, short of manually removing them one at a time?
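On the automatic-deletion part of the question: Docker itself can remove dangling ("orphan") image layers on a schedule, which treats the symptom while the root cause is tracked down. A minimal sketch, assuming it is scheduled via cron or the User Scripts plugin (the schedule below is just an example):

```shell
# Example cron entry: every Sunday at 02:00, remove all dangling
# ("orphan") image layers. --force skips the confirmation prompt.
# Prune only touches layers that no container references, so it is
# safe to run while containers are up.
0 2 * * 0 docker image prune --force
```

This is a workaround rather than a fix; the orphans will keep reappearing until the underlying cause is addressed.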

Link to comment
2 hours ago, trurl said:

See if you can figure out which containers you are breaking when you update them. There must be something wrong with those templates if they don't produce a valid docker run command.

How would I go about that?

2 hours ago, Squid said:

The real question is why are they becoming orphans?  Are the containers dependent upon each other (ie: does something like nzbget run through a vpn from another container)

So that's definitely a question I won't know how to answer, but I will say: yes, I run several containers through DelugeVPN.

 

You guys were recently great in helping me fix my system, so it is working. How would I figure out what's wrong with those templates?

Link to comment

Do you know how to get the docker run as explained at the very first link in the Docker FAQ? Just do that for each container, capture the docker run results, and if any produce an error or orphan, post those docker run results.

Link to comment
3 minutes ago, trurl said:

Do you know how to get the docker run as explained at the very first link in the Docker FAQ? Just do that for each container, capture the docker run results, and if any produce an error or orphan, post those docker run results.

I do not know how to use docker run independent of going into each docker, making a "change", and then undoing that change before I hit Apply. Happy to go do that, though I have 15 dockers installed. Is there a way to do that from the command line, or a "batch" command of sorts that would let me generate all of that information in one report?
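As a rough batch sketch of what's being asked for here: the standard Docker CLI can at least dump each container's image and network mode in one pass. It does not reconstruct the full docker run command the way the Unraid UI does, but it makes containers that depend on another container's network stand out (assumes terminal access to the server):

```shell
# For every container (running or not), print its name, image, and
# network mode. Containers that piggyback on another container's
# network stack show up as net=container:<id>.
for c in $(docker ps -a --format '{{.Names}}'); do
  docker inspect --format '{{.Name}}: image={{.Config.Image}} net={{.HostConfig.NetworkMode}}' "$c"
done
```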

Link to comment
Just now, helpermonkey said:

I do not know how to use docker run independent of going into each docker, making a "change", and then undoing that change before I hit Apply.

That is exactly the way described in that FAQ. No better way to do it.

Link to comment
11 minutes ago, trurl said:

That is exactly the way described in that FAQ. No better way to do it.

Okay here you go:

I have removed a few private keys; hope I didn't miss anything private that needed to be removed. 😛

 

Here are the docker run commands for all 15 of my dockers.

I should note that I haven't really gotten Calibre working, nor have I finished configuring DuckDNS properly. I believe both containers are set up correctly, but I am still trying to figure out the best way to get remote access to my server so I can interact with Sonarr/Radarr/SAB when I'm not at home. I can delete them easily if they are problematic.

 

All run commands executed successfully.

DelugeVPN
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='REMOVED' -e 'VPN_PASS'='REMOVED' -e 'VPN_PROV'='pia' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='yes' -e 'LAN_NETWORK'='192.168.1.0/24' -e 'NAME_SERVERS'='209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -e 'ADDITIONAL_PORTS'='7878,8080,8989,8090,9117' -e 'VPN_CLIENT'='wireguard' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -p '7878:7878/tcp' -p '8080:8080/tcp' -p '8989:8989/tcp' -p '9117:9117/tcp' -p '8083:8083/tcp' -v '/mnt/user/Downloads/':'/data':'rw' -v '/mnt/user/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'

af20751e9a3b5916c09f266776fc5de9a732fed948191445b62b8b7c13346cea

The command finished successfully!

Watchtower
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='watchtower' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -v '/var/run/docker.sock':'/var/run/docker.sock':'rw' 'centurylink/watchtower'

3f961646ff1107120cc5e4379d5b87ee0100175bcb6c6d453592b0293bb7e0e9

The command finished successfully!

organizrv2
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='organizrv2' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'branch'='master' -e 'PUID'='99' -e 'PGID'='100' -p '81:80/tcp' -v '/mnt/user/appdata/organizrv2':'/config':'rw' 'organizr/organizr'

42f2f7c61d99e0884e0ebd558d6bddb267a4b6c0edc2e8cdd08157f82990ee0f

The command finished successfully!

Tautulli
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='tautulli' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'TZ'='UTC' -p '8181:8181/tcp' -v '/mnt/user/appdata/tautulli':'/config':'rw' 'tautulli/tautulli'

ddfe6095d7db3006ea19bc089e792d792abf33991a8f73eb433ae6668785e636

The command finished successfully!

FileBot
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='FileBot' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='0000' -v '/mnt/user/Downloads/Filebot/Input/':'/input':'rw' -v '/mnt/user/Downloads/Filebot/Output/':'/output':'rw' -v '/mnt/user/appdata/FileBot':'/config':'rw' 'coppit/filebot'

3983d1e3164969edfc0a99aa71653aba38c67127de01a639ac742d2a046a116c

The command finished successfully!

MediaInfo
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='MediaInfo' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISPLAY_WIDTH'='1280' -e 'DISPLAY_HEIGHT'='768' -e 'SECURE_CONNECTION'='0' -e 'X11VNC_EXTRA_OPTS'='' -p '7817:5800/tcp' -p '7917:5900/tcp' -v '/mnt/user/':'/storage':'ro' -v '/mnt/user/appdata/MediaInfo':'/config':'rw' 'jlesage/mediainfo'

6c129a36cdef217da680b411b9445515dc79372e60656d2c4880ef6a7a4dd583

The command finished successfully!

Calibre-Web
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='calibre-web' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'DOCKER_MODS'='linuxserver/calibre-web:calibre' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Books/':'/books':'rw' -v '/mnt/user/appdata/calibre-web':'/config':'rw' --net=container:binhex-delugevpn 'linuxserver/calibre-web'

b22eddfcc092e91f5900362cc566a1c149528a0f2a0f585d790c1f55dbf965aa

The command finished successfully!

SabNZBD
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sabnzbd' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/data':'rw' -v '/mnt/user/appdata/binhex-sabnzbd':'/config':'rw' --net=container:binhex-delugevpn 'binhex/arch-sabnzbd'

998073b6a9db46799ac11b1a33580d951ad3e122faa06e6b7bfcc38cfe64ab56

The command finished successfully!

Krusader
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-krusader' --net='bridge' --privileged=true -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'TEMP_FOLDER'='/config/krusader/tmp' -e 'WEBPAGE_TITLE'='Buddha' -e 'VNC_PASSWORD'='' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '6080:6080/tcp' -v '/mnt/':'/media':'rw' -v '/mnt/user/appdata/binhex-krusader':'/config':'rw' 'binhex/arch-krusader'

891d8f2b9a989777ff1d9dfd5cded7dd607f0802463831d8ee168353e8d81f08

The command finished successfully!

Radarr
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-radarr' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/data':'rw' -v '/mnt/user/Videos/Films/':'/media':'rw' -v '/mnt/user/Videos/Stand-Up/':'/stand-up/':'rw' -v '/mnt/user/Videos/Documentaries/':'/documentaries/':'rw' -v '/mnt/user/appdata/binhex-radarr':'/config':'rw' --net=container:binhex-delugevpn 'binhex/arch-radarr'

bb16a8ccd1cc5c3888ef3795fccac0c304dc4f3ae4eec54aa4c346edaed266c8

The command finished successfully!

Sonarr
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-sonarr' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/data':'rw' -v '/mnt/user/Videos/TV/':'/media':'rw' -v '/mnt/user/Videos/Stand-Up/':'/stand-up/':'rw' -v '/mnt/user/Videos/Documentaries/':'/documentaries/':'rw' -v '/mnt/user/appdata/binhex-sonarr':'/config':'rw' --net=container:binhex-delugevpn 'binhex/arch-sonarr'

d2b3f317e6b5ecf41855bf6ecd36a74c00f250e2559a25c7fe461c5422b990b1

The command finished successfully!

DuckDNS
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='duckdns' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'SUBDOMAINS'='REMOVED' -e 'TOKEN'='REMOVED' -e 'PUID'='99' -e 'PGID'='100' 'linuxserver/duckdns'

a794f0e97497d93f6741c17792ba89bdf17f712df8d3818a72079e918b7fc4c5

The command finished successfully!

Jackett:
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='jackett' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Downloads/':'/downloads':'rw' -v '/mnt/user/appdata/jackett':'/config':'rw' --net=container:binhex-delugevpn 'linuxserver/jackett'

53819e3feecdded235be5e493b57a3ce05e531790a41e01498f0410acbc6441b

The command finished successfully!

MKVToolNix
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='MKVToolNix' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'USER_ID'='99' -e 'GROUP_ID'='100' -e 'UMASK'='000' -e 'APP_NICENESS'='' -e 'DISPLAY_WIDTH'='1280' -e 'DISPLAY_HEIGHT'='768' -e 'SECURE_CONNECTION'='0' -e 'X11VNC_EXTRA_OPTS'='' -p '7805:5800/tcp' -p '7905:5900/tcp' -v '/mnt/user':'/storage':'rw' -v '/mnt/user/appdata/MKVToolNix':'/config':'rw' 'jlesage/mkvtoolnix'

e2b7e6a70b2705b64a33bc292b474d833a12332c3b3e82999cc90b6c8486eb5b

The command finished successfully!

PLEX
Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='latest' -e 'NVIDIA_VISIBLE_DEVICES'='' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/Videos/Films/':'/movies':'rw' -v '/mnt/user/Videos/TV/':'/tv':'rw' -v '/mnt/user/Music/':'/music':'rw' -v '':'/transcode':'rw' -v '/mnt/user/Videos/Stand-Up/':'/stand-up':'rw' -v '/mnt/user/Videos/Trailers/':'/Trailers':'rw' -v '/mnt/user/Videos/Documentaries/':'/documentaries/':'rw' -v '/mnt/user/Videos/Sports/':'/sports':'rw' -v '/mnt/user/appdata/plexmediaserver':'/config':'rw' 'linuxserver/plex'

48a7110e41bdf31c9f80400a1a0bd2c9e936853999919aa29728b807fc447711

The command finished successfully!

 

Link to comment

You'll see in the syslog, when it does a backup, the order in which it's stopping the containers. The order it restarts them is identical (and shouldn't change, afaik, from one backup to the next).

 

Because of this, any container that requires DelugeVPN to be running (i.e. --net=container:binhex-delugevpn) will fail to start (and become an orphan) if it starts before Deluge.

 

The plugin at this point in time does not respect the startup order dictated on the Docker tab (top to bottom). Your best solution is to set, in the backup's Advanced Settings, for Deluge not to stop (or alternatively all of them).
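The failure mode described here can be reproduced with two throwaway containers (hypothetical names; assumes the alpine image is available locally):

```shell
# A container that joins another container's network stack can only
# be started while that other container is running.
docker run -d --name netparent alpine sleep 600
docker run -d --name netchild --net=container:netparent alpine sleep 600  # starts fine
docker stop netparent netchild
docker start netchild   # errors: cannot join the network of a non-running container
```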

Link to comment
Posted (edited)
9 minutes ago, Squid said:

You'll see in the syslog, when it does a backup, the order in which it's stopping the containers. The order it restarts them is identical (and shouldn't change, afaik, from one backup to the next).

 

Because of this, any container that requires DelugeVPN to be running (i.e. --net=container:binhex-delugevpn) will fail to start (and become an orphan) if it starts before Deluge.

 

The plugin at this point in time does not respect the startup order dictated on the Docker tab (top to bottom). Your best solution is to set, in the backup's Advanced Settings, for Deluge not to stop.

Ahhh - makes sense.

 

One note that I see is:

Note that it is recommended to also exclude from the backup set any associated appdata shares, to ensure that the backup/restore will not fail due to open files, etc.

 

And a warning that says:

Note: You should specify a backup share (and subfolders) dedicated to that particular backup. It is entirely possible for backups to erase any other files contained within the destinations.

 

I may not be reading this correctly, but what I think it's suggesting is that I stop backing up the DelugeVPN appdata altogether? I do have a dedicated share for just my backups, though, and I think you can exclude a folder in that directory tree, as I see on the page.

Edited by helpermonkey
Link to comment

The only way to get a "true" backup is to ensure that there are no possible changes to files while the backup is taking place (e.g. files A and B both change, but the previous version of A makes it into the backup set while the new version of B does). Stopping the containers ensures that A and B are in sync with each other. In practice, though, it's not a real problem.

Link to comment
3 minutes ago, Squid said:

The only way to get a "true" backup is to ensure that there are no possible changes to files while the backup is taking place (e.g. files A and B both change, but the previous version of A makes it into the backup set while the new version of B does). Stopping the containers ensures that A and B are in sync with each other. In practice, though, it's not a real problem.

Okay, got it. Yeah, I'll just leave it alone, save for telling it not to stop.

Link to comment
  • 4 weeks later...

Just wanted to bump this back up again, as I've just logged back into my server and had about 40-some-odd orphaned images. I am not sure what information to share right now to show the problem or to give people here the ability to fix it, but I would be happy to share whatever people think I need to post.

Link to comment

So just another update for people in case there's a way to solve this.

 

After making my post on May 17th, I went in and deleted all of my orphaned images. Today I logged in and, lo and behold, there are a bunch listed, including some with dates of more than a month ago. How is it possible that images created before my delete date are reappearing?

Example:

[Screenshot of the orphaned images list, taken 2021-05-25 17:26]

 

 

I have deleted these too, by clicking on the drive icon and selecting Remove. What could possibly be causing this?

Link to comment

Not sure why you are getting orphaned images with such regularity, but it is quite normal to see them dated sometime in the past. The date on them will be the date that layer of a container was downloaded, not the date at which it became orphaned.
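This is easy to confirm from the terminal: dangling layers keep the creation date of the layer itself (standard Docker CLI):

```shell
# List dangling ("orphan") image layers with their IDs and ages.
docker images --filter "dangling=true" --format '{{.ID}}  {{.CreatedSince}}'
```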

Link to comment
1 hour ago, itimpi said:

Not sure why you are getting orphaned images with such regularity, but it is quite normal to see them dated sometime in the past. The date on them will be the date that layer of a container was downloaded, not the date at which it became orphaned.

Gotcha, that helps me understand the date issue! Will wait for others to perhaps help figure out why they keep popping up.

Link to comment
3 hours ago, trurl said:

I see your docker.img is configured to be at the root of cache. Have you had any problems with cache?

So I may not entirely understand the implications of the question, but my best answer is: sort of, but not really? I do keep getting messages that my cache is filling up, but when I deleted the orphaned images (I think on your advice), that seemed to resolve the issue. Other than that, I don't think I've seen any other problems, but is that what you're interested in? I've followed (or at least tried to follow) spaceinvader's YouTube videos in the configuration of this server, but that was some time ago, and a handful of things have been tweaked based on feedback others have given me.

Link to comment
1 hour ago, trurl said:

And what about docker.img? Have you ever filled it? The orphans would be taking space in docker.img. I was just wondering if something about docker.img or cache got corrupted by overfilling.

Ahhh, I misspoke: it's my docker.img that keeps saying it's full (though I haven't seen that error in a bit). I don't think I've ever seen a problem with my cache itself. Again, a little out of my depth; is there another screenshot or log that you would like to see?

Link to comment
