helpermonkey Posted August 25, 2021
Whenever an update for one of my docker images becomes available, my Unraid installation ends up with orphan images, which in turn drives up image disk utilization. The only solution I've found is to log in every few days and delete the orphan images manually. Attached are my diagnostics, along with a notification list and my docker page; I deleted the orphan images and applied the updates after collecting the attached diagnostics. Is it possible this is a bug? I have mentioned this problem here in the past, but we have not been able to uncover a solution. Any suggestions appreciated. buddha-diagnostics-20210825-1229.zip
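For anyone hitting the same symptom, a quick way to inspect and clear the orphans from a terminal instead of the docker page. This is a generic sketch using standard docker CLI commands, not something specific to the attached diagnostics:

```shell
# List untagged ("orphan"/dangling) image layers left behind after updates
docker images --filter "dangling=true"

# Remove all dangling images; -f skips the confirmation prompt.
# This only touches untagged layers not used by any container,
# so properly tagged images are left alone.
docker image prune -f
```

Note this clears the symptom the same way deleting them in the GUI does; it does not stop new orphans from appearing after the next update.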
helpermonkey (Author) Posted August 27, 2021
Just a bump to see how to get this fixed.
helpermonkey (Author) Posted September 2, 2021
Just thought I'd throw this back up to the top.
helpermonkey (Author) Posted September 9, 2021
Still having this problem. Is there a paid support service available? This is quite problematic.
JGNiDK Posted October 4, 2021
I totally agree, I have this too. Can you tell me a little more about how you get rid of it? What exactly do you delete?
JGNiDK Posted October 4, 2021
On 8/25/2021 at 6:38 PM, helpermonkey said: The only solution I've found is to log in every few days and delete the orphan images manually.
Ah, so this is what you deleted. And then what? Rebooted?
helpermonkey (Author) Posted October 4, 2021
19 minutes ago, JGNiDK said: Ah, so this is what you deleted. And then what? Rebooted?
Just deleting them is sufficient.
tjb_altf4 Posted October 4, 2021
It's because your containers are using another container for networking, i.e. delugevpn. When that main container gets updated, its ID changes, which breaks subsequent updates of the others. There was supposed to be a workaround coming in Unraid, but you could also try the rebuild-dndc container.
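For readers who want to see the pattern tjb_altf4 is describing, a minimal sketch. The VPN container name binhex-delugevpn comes from this thread; the sonarr name and image are hypothetical examples of a dependent container:

```shell
# A secondary container joins the VPN container's network namespace.
# Docker resolves "binhex-delugevpn" to a specific container ID at creation time.
docker run -d --name=sonarr \
  --net=container:binhex-delugevpn \
  lscr.io/linuxserver/sonarr

# When binhex-delugevpn is recreated during an update, it gets a NEW container
# ID, so "sonarr" is left referencing a network namespace that no longer
# exists and must itself be recreated. Automating that recreation is what
# the rebuild-dndc container does.
```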
helpermonkey (Author) Posted October 4, 2021
12 minutes ago, tjb_altf4 said: It's because your containers are using another container for networking, i.e. delugevpn. When that main container gets updated, its ID changes, which breaks subsequent updates of the others. There was supposed to be a workaround coming in Unraid, but you could also try the rebuild-dndc container.
Thanks for that tip. A question about the instructions for rebuild-dndc (https://github.com/elmerfdz/rebuild-dndc): in step 2, where it says docker network create container:master_container_name, do I need to change master_container_name to binhex-delugevpn?
tjb_altf4 Posted October 5, 2021
1 hour ago, helpermonkey said: do I need to change master_container_name to binhex-delugevpn?
That's correct.
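With the substitution confirmed above, step 2 of the quoted instructions would read as follows for this thread's setup. This is just the quoted command with the name swapped in; check the rebuild-dndc README for the current syntax before running it:

```shell
# master_container_name replaced with the VPN container used in this thread
docker network create container:binhex-delugevpn
```

Creating a network with this name is the Unraid-specific trick that makes the container:binhex-delugevpn entry selectable in the template's network dropdown.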
helpermonkey (Author) Posted October 5, 2021
2 minutes ago, tjb_altf4 said: That's correct.
Thanks!
JGNiDK Posted October 5, 2021
20 hours ago, helpermonkey said: Just deleting them is sufficient.
Something is still wrong. Do I need to look into this thing about containers using other containers' networks?