Docker Image Randomly at 100% Utilization and now nothing works.



Hey everyone, I could use some help on this one.

 

Docker randomly reports 75% utilization and then minutes later it's at 100%... ok, cool. So I backed up the docker image before starting to do anything.

 

From there I turned Docker off... but when I turned it back on (and yes, unRAID was pointed at the same docker image it had always been pointed at), it said I no longer had any apps installed. From there I tried to restart the entire machine and was unable to unmount my shares. It was just stuck on "Unmounting disks...Retry unmounting disk share(s)...Unmounting disks..."

 

I've officially reached the limits of my understanding with unRAID. Any help, guys?

 

 

Edit: I've fixed the issue. For anyone else who may stumble upon this, I fixed it by re-enabling Docker and expanding the size of the docker image (from 20GB to 30GB). From here unRAID informed me I was somehow using 24 of the 30GB, and all 3 of my apps (Plex, PlexPy, and Deluge) were shown as out of date. I scrubbed the docker image, which freed ~18GB, and updated the apps (even though no data was actually pulled?).

 

Everything is now working properly.


That sounds like far more of the docker image file being used than is normal.

 

I suspect that at least one of the apps is misconfigured so that it is writing dynamic files into the docker image rather than to a location external to the image.  You really want to get this fixed or you will continue to have issues.
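If you want to confirm which app it is, one option (a sketch, assuming console access to the unRAID host; `docker diff` is part of the standard docker CLI) is to list what each container has written into its own writable layer inside docker.img:

```shell
# For each running container, show the first files it has created or
# modified inside its writable layer (i.e., inside docker.img).
# Large or fast-growing entries point at the misconfigured app.
if command -v docker >/dev/null 2>&1; then
  for c in $(docker ps --format '{{.Names}}'); do
    echo "== $c =="
    docker diff "$c" | head -n 20
  done
else
  # Guarded so it degrades gracefully if run off the unRAID host.
  echo "docker CLI not found; run this on the unRAID console" >&2
fi
```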


I'm having a similar problem.  I installed unRAID 6.2 about three weeks ago, deleted/regenerated my 20GB docker.img, and reloaded all the docker containers from my "My-xxxxxx" user-defined templates. Over the intervening weeks I've been receiving notifications that my docker image utilization is slowly increasing, now up from 70% to 91%.  I have double-checked the folder mappings for all of the installed dockers and confirmed they point outside the docker image.  Is there a systematic way I can periodically log the disk space usage of each docker to identify which one is growing?


You can look at cAdvisor.  It may or may not help identify the issue.
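If you want to log it yourself, a minimal sketch (paths and schedule are assumptions; on unRAID you'd probably point the log at /boot or a cache share and run it from cron or the User Scripts plugin) could look like:

```shell
# Append a timestamped snapshot of each container's writable-layer
# size to a log file. Growth in the size column over successive runs
# means that app is writing inside docker.img.
# The /tmp path is just an example location.
LOG="/tmp/docker-sizes.log"
date >> "$LOG"
if command -v docker >/dev/null 2>&1; then
  docker ps -as --format '{{.Names}}\t{{.Size}}' >> "$LOG"
else
  echo "docker CLI not found" >> "$LOG"
fi
```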

 

You can also check out this FAQ entry:  http://lime-technology.com/forum/index.php?topic=40937.msg475225#msg475225

 

And double-check that any intermediate files from download clients are not stored within the image, but rather either within the appdata (/config) share or in a separate downloads mapping.
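As an illustration (a template only; the image name and host paths are placeholders, not your actual setup), a download client with both its config and its downloads mapped outside the image would be started with something like:

```shell
# Template only: substitute your own share paths and image.
# Both /config and /downloads land on the array, not in docker.img.
docker run -d --name deluge \
  -v /mnt/user/appdata/deluge:/config \
  -v /mnt/user/downloads:/downloads \
  linuxserver/deluge
```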


Thanks for the suggestions, Squid.  I believe I have tracked the issue down to Dropbox throwing error messages that it wasn't assigned to a user account after I re-created the docker during the unRAID 6.1.9->6.2 transition.  There must be a logfile internal to the Dropbox docker that was filling up the docker.img space.  All is quiet now  :)

