
[SOLVED] aneely - docker image filling up



Command: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-radarr' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '7878:7878/tcp' -v '/mnt/user/downloads/':'/data':'rw' -v '/mnt/user':'/media':'rw' -v '/mnt/user/appdata/binhex-radarr':'/config':'rw' 'binhex/arch-radarr'

22655c3d1c704348f66d5a789a48cfaa860efdbf800a421940bafd8464986768

The command finished successfully!


Docker image size is holding steady. You just need to figure out the best way to configure your system to use cache.

7 minutes ago, aneelley said:

I stopped all containers and it is slowly going back down.  I have it running on the hour.  There were lots of downloads that nzbget was processing.

While nothing is downloading, mover can of course get things off cache quickly enough, because nothing is being added. But running mover every hour isn't really the solution, unless you also intend to stop adding to cache every hour as well.

 

Mover is intended for idle time. There is simply no way to move from the faster cache to the slower array as quickly as you can write new data to cache. Running mover at the same time you are writing to cache just makes everything slower, including mover itself since it is competing with those writes for access to the drives.

 

A simple strategy would be to not cache any downloads or their postprocessing. Then obviously you won't fill cache.

 

But another strategy would be to cache the downloads, but send the postprocess results directly to the array. So in the case of NZBGet, you would download the intermediate files to a cache-yes user share, but send the completed files to a cache-no user share. This also has the advantage that postprocessing reads from cache while writing to a different disk, so those reads and writes aren't competing.

 

Here is how this is described on the Paths page in NZBGet Settings:

Quote

InterDir

Directory to store intermediate files.

If this option is set (not empty) the files are downloaded into this directory first. After successful download of nzb-file (possibly after par-repair) the files are moved to destination directory (option DestDir). If download or unpack fail the files remain in intermediate directory.

Using of intermediate directory can significantly improve unpack performance if you can put intermediate directory (option InterDir) and destination directory (option DestDir) on separate physical hard drives.

So, in the end, the final results (video files) are already on the array where they don't need to be moved, and the intermediate results (downloads, etc.) are removed from cache.
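As a sketch, the two NZBGet path settings might look like this. The folder names are assumptions, not from the thread: /data is the container path mapped in the docker command above, and /completed stands for a second volume mapping backed by a cache-no user share.

```shell
# Illustrative nzbget.conf path settings (folder names are hypothetical):
InterDir=/data/intermediate   # backed by a cache-yes share (fast unpack work)
DestDir=/completed            # backed by a cache-no share (final files go to the array)
```

With a layout like this, intermediate files and unpacking stay on cache, and the finished files are written straight to the array, matching the separate-drives advice from the NZBGet docs quoted above.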

If you click on the icon for any of your dockers and select Support, you can go directly to the support thread for that docker.

 

In the case of the NZBGet docker you are running, there is a user on the very first page of that thread describing exactly the scenario I explained above.

  • JorgeB changed the title to [SOLVED] aneely - docker image filling up

@aneelley

21 hours ago, aneelley said:

I am wondering what happens when it fills up.

Thought I should come back to this point. You must try to avoid filling any disk.

 

Each user share has a Minimum Free setting.

 

Unraid has no way to know, when it chooses a disk to write a file, how large the file will become. If a disk has more than Minimum Free, Unraid can choose the disk. If the disk has less than Minimum Free, it will choose another disk.

 

The general recommendation is to set Minimum Free to larger than the largest file you expect to write to the User Share.

 

Cache also has a Minimum Free, in Global Share Settings. It works in a similar manner. If cache has less than Minimum, Unraid will choose an array disk instead (overflow), provided that the User Share being written is cache-prefer or cache-yes.
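The overflow decision can be sketched like this. It is illustrative arithmetic only, not Unraid's actual code, and the values are made up:

```shell
# Hypothetical sketch of the cache-overflow rule (values are assumptions):
cache_free=5      # G currently free on the cache pool
cache_min=10      # G, Cache Minimum Free in Global Share Settings
use_cache="yes"   # the user share's Use cache setting

if [ "$cache_free" -gt "$cache_min" ]; then
  target="cache"  # cache has room, so it is chosen
elif [ "$use_cache" = "yes" ] || [ "$use_cache" = "prefer" ]; then
  target="array"  # overflow: an array disk is chosen instead
fi
echo "write goes to: $target"
```

Here cache is below its Minimum, so the write overflows to an array disk.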

 

Note that in any case, the choosing is done before writing begins, and once a disk is chosen, it will attempt to write the entire file to that disk. If it doesn't fit, the disk runs out of space and the write fails.

 

To give an example, Minimum is set to 10G, the disk has 11G free, you write a 9G file. Unraid can choose the disk. It may choose another depending on other factors (Allocation Method, Split Level), but if it does choose the disk, it will write the 9G file to the disk. After that, the disk will only have 2G free, which is below Minimum, so Unraid won't choose the disk again until it has more than Minimum.

 

Another example, Minimum is 10G, disk has 15G free, you write a 20G file. Unraid can choose the disk since it has more than Minimum. If it does choose the disk, it will write the file until the disk is completely full, then the write will fail.
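The arithmetic behind the two examples above can be sketched as follows. Again, this is only an illustration of the rule, not Unraid's actual code:

```shell
# Hypothetical sketch of the Minimum Free rule (illustrative only):
min_free=10    # G, the share's Minimum Free setting
disk_free=11   # G currently free on the disk
file_size=9    # G, size of the file being written

# Example 1: disk has more than Minimum Free, so Unraid may choose it,
# and the 9G file fits.
if [ "$disk_free" -gt "$min_free" ]; then
  remaining=$((disk_free - file_size))
  echo "free after write: ${remaining}G"   # 2G, now below Minimum
fi

# Example 2: disk passes the check (15G > 10G) but the 20G file can't fit,
# so the write fills the disk and then fails.
disk_free=15
file_size=20
if [ "$disk_free" -gt "$min_free" ] && [ "$file_size" -gt "$disk_free" ]; then
  echo "write fails after ${disk_free}G written"
fi
```

The key point is that the Minimum Free check happens only once, before the write starts; nothing prevents a single large file from filling the disk afterwards.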

