Real Docker FAQ

Why does Sonarr keep telling me that it cannot import a file downloaded by nzbGet? (AKA Linking between containers)


This problem comes up again and again, and the cause always traces back to host / container volume mapping.  (Note that I'm using nzbGet / Sonarr as an example, but the concept is the same for any apps that communicate via their APIs rather than by the "blackhole" method.)

First and foremost, within Sonarr's settings, tell it to communicate with nzbGet via the IP address of the server, not via localhost.

Here is how a file gets found by Sonarr, downloaded by nzbGet, post-processed by Sonarr, and moved to your array.

Sonarr searches the indexers for the file, and then tells nzbGet (utilizing its API key) to download the file.   Very few users have trouble with this section.

nzbGet downloads the file, and then tells Sonarr the path that the file exists at.  This is the section this FAQ entry is going to deal with.

Sonarr performs whatever post-processing you want it to do (see the appropriate project pages for help with this).

Sonarr then moves the file from the downloaded location to the array.  Once again, very few users have trouble with this section.

nzbGet downloads the file, and then tells Sonarr the path that the file exists at

Let's imagine host / container volume mappings set up as follows (this seems to be a common setup for users having trouble):


App Name    Container Volume    Host Volume
nzbGet      /config             /mnt/cache/appdata/nzbget
sonarr      /config             /mnt/cache/appdata/sonarr
            /downloads          /mnt/cache/appdata/nzbget/downloads/completed/


Within nzbGet's settings, the downloads are set to go to /config/downloads/completed

So after the download is completed, nzbGet tells Sonarr that the file exists at /config/downloads/completed.

Sonarr dutifully looks at /config/downloads/completed, sees that nothing exists there, and throws errors into its log stating that it can't import the file.  The error will be something like: can't import /config/downloads/completed/filename

 

Why?  Because the mappings don't match.  Sonarr's /config mapping is set to /mnt/cache/appdata/sonarr, whereas nzbGet's /config mapping is set to /mnt/cache/appdata/nzbget.

 

Ultimately, the file is stored at /mnt/cache/appdata/nzbget/downloads/completed/, and Sonarr winds up looking for it at /mnt/cache/appdata/sonarr/downloads/completed.

 

Another common setup issue (which is closer to working):

 

App Name    Container Volume    Host Volume
nzbGet      /config             /mnt/cache/appdata/nzbget
            /downloads          /mnt/cache/appdata/downloads
sonarr      /config             /mnt/cache/appdata/sonarr
            /downloads          /mnt/cache/appdata/downloads/completed

 

Here's what happens with this setup

 

nzbGet is set up to download the files to /downloads/completed

 

After a successful download, the file exists (as far as nzbGet is concerned) at /downloads/completed/...., and nzbGet tells Sonarr that.

 

Sonarr then looks for the file, is unable to find it, and throws errors into the logs.  And the kicker is that the error states something along the lines of "Can't import /downloads/completed/whateverFilenameItIs"

 

Everything kinda looks right.  After all, the error message is showing the correct path...  No it's not, because the mappings don't match between Sonarr and nzbGet for the downloads.

 

nzbGet puts the file at /downloads/completed/filename (host mapping: /mnt/cache/appdata/downloads/completed/filename)

Sonarr looks for the file at /downloads/completed/filename, which through its own mapping resolves to the host path /mnt/cache/appdata/downloads/completed/completed/filename

 

Huh?  I don't get it.  ->  The container paths match between the two apps, but the host paths are different, which means that communication isn't going to work correctly.  Think of the "container" path as a shortcut to the host path.

 

The proper way to set up the mappings:

 

App Name    Container Volume    Host Volume
nzbGet      /config             /mnt/cache/appdata/nzbget
            /downloads          /mnt/cache/appdata/downloads
sonarr      /config             /mnt/cache/appdata/sonarr
            /downloads          /mnt/cache/appdata/downloads

Tell nzbGet to store the files in /downloads/completed, and Sonarr will be able to find and import the files, because both the host and container volume paths match.
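
If it helps to see it as docker run commands, here's a minimal sketch of matching mappings (the container names and image repositories here are illustrative, not necessarily the template you're using):

docker run -d --name nzbget \
    -v /mnt/cache/appdata/nzbget:/config \
    -v /mnt/cache/appdata/downloads:/downloads \
    linuxserver/nzbget

docker run -d --name sonarr \
    -v /mnt/cache/appdata/sonarr:/config \
    -v /mnt/cache/appdata/downloads:/downloads \
    linuxserver/sonarr

Because both containers map /downloads to the same host path, any path nzbGet reports over the API resolves to the same file for Sonarr.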


TL;DR:  Trust me, the above works.


How do I limit the CPU resources of a docker application?

Two methods:

The first is the easiest and doesn't involve much thought about what's going on under the covers: pin the application to one or more cores of the CPU.  This way the application will only execute on that particular core (or cores), leaving the other cores open for other applications / VMs / etc.

On 6.6+, under Settings - CPU Pinning, you can select which core(s) the docker application will run on.
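
Under the hood this corresponds to docker's --cpuset-cpus flag; on versions without the CPU Pinning page you can add it to the Extra Parameters yourself (the core numbers are just an example):

--cpuset-cpus=2,3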


Alternatively, you can also prioritize one docker app over another.  E.g., you're running the folding@home app and Plex; when Plex starts transcoding, you want to give it as much CPU as possible at the expense of folding@home.

To do something like this, you would add to the extra parameters section of folding@home the following:

 
--cpu-shares=2

This gives folding@home the absolute lowest CPU priority, so that if another docker app requires / wants all of the available CPU, it gets it.  Note that if the other docker apps are idle and doing nothing, then folding@home will use as much as it can (subject to its own internal settings).

This is a rather simple example, only utilizing 2 apps.  Here is a better example (and an explanation of what's actually happening when using 3+ apps), from the docker run reference:

Quote

CPU share constraint
By default, all containers get the same proportion of CPU cycles. This proportion can be modified by changing the container’s CPU share weighting relative to the weighting of all other running containers.

To modify the proportion from the default of 1024, use the -c or --cpu-shares flag to set the weighting to 2 or higher. If 0 is set, the system will ignore the value and use the default of 1024.

The proportion will only apply when CPU-intensive processes are running. When tasks in one container are idle, other containers can use the left-over CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system.

For example, consider three containers, one has a cpu-share of 1024 and two others have a cpu-share setting of 512. When processes in all three containers attempt to use 100% of CPU, the first container would receive 50% of the total CPU time. If you add a fourth container with a cpu-share of 1024, the first container only gets 33% of the CPU. The remaining containers receive 16.5%, 16.5% and 33% of the CPU.

On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a container is limited to less than 100% of CPU time, it can use 100% of each individual CPU core.

For example, consider a system with more than three cores. If you start one container {C0} with -c=512 running one process, and another container {C1} with -c=1024 running two processes, this can result in the following division of CPU shares:
PID    container    CPU    CPU share
100    {C0}         0      100% of CPU0
101    {C1}         1      100% of CPU1
102    {C1}         2      100% of CPU2

 

 


Note that it is more complicated (and beyond the scope of a FAQ) to prioritize non-docker applications over a docker application.  For those so inclined, review the docker run reference and experiment.
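
To confirm what a container is actually running with, docker inspect can read back both settings (the container name here is illustrative):

docker inspect --format '{{.HostConfig.CpuShares}} {{.HostConfig.CpusetCpus}}' FoldingAtHome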


How do I limit the memory usage of a docker application?

Personally, on my system I limit the memory of most of my docker applications so that there is always (hopefully) memory available for other applications / unRaid if the need arises.  E.g., if you watch CA's resource monitor / cAdvisor carefully while an application like nzbGet is unpacking / par-checking, you will see its memory use skyrocket, even though the same operation can take place in far less memory (albeit at a slightly slower speed).  That memory will not be available to another application such as Plex until the unpack / par check is complete.

To limit the memory usage of a particular app, add this to the extra parameters section of the app when you edit / add it:

--memory=4G

This will limit the memory of the application to a maximum of 4G.
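
To see what a container is actually using, and to confirm the cap took effect, docker stats shows current usage against the limit (the container name is illustrative):

docker stats --no-stream nzbget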

 

I've recreated my docker.img file.  How do I re-add all my old apps?

Two ways:

Old Way:

From the Docker tab, go to Add Container, select one of the my* templates, then hit Add.

New Way:

From within Community Applications, go to previous apps, and hit Reinstall on the app.


Using either method, no adjustment of the template should be necessary, as it will be automatically populated with all of your old volume and port mappings, etc.

After the downloads are complete, you're back in business like nothing happened at all.


With 6.2, do I need to move my appdata and/or docker image into unRaid's recommended shares? (/mnt/user/appdata for appdata and /mnt/user/system for the docker.img)

No, you do not need to move the files at all, and everything will still work 100%.

For docker.img, unRaid 6.2 will automatically pick up wherever your image was stored previously.  For new installs of docker, just change the default Docker storage location to whatever you want (Settings - Docker Settings).

For appdata: for already-installed applications, nothing will change if the appdata is not stored in the default location.  However, on new application installations, unRaid will tend to fill out the /config volume mapping with whatever its default is set to.  Go to Settings - Docker Settings to change the default appdata storage location to your existing appdata path.


NOTE:  Neither of those settings will allow you to outright specify a drive (ie: cache) as the location.  To force the defaults onto anything other than a user share, you will have to type in the path (ie: /mnt/cache/myAppdataShare).

Another NOTE: With 6.2 properly supporting hardlinks / symlinks on user shares, it's not as big a deal as it used to be to set the appdata onto a user share, and then use the share settings to make sure that it is set to be a cache-only share.

If you already have an appdata share and do not change the default docker appdata location to point to your pre-existing share, then new applications that you add will use the LT default share, while your old ones will use your pre-existing share.  Not a problem in and of itself, but at the very least it's confusing, and if you also happen to use CA's appdata backup module, then only one of them is going to get backed up, so you have the potential for data loss in the event of a cache-drive failure.


Why can't I delete / modify files created by CouchPotato (or another docker app)?

You may see something like

Quote
" You do not have permission to access \\server\share\folder . Contact your network administrator to request access"
 


This is because the standard permissions that CP sets on the newly downloaded media do not allow access via unRaid's share system.  Within CP's settings, change the permissions to something like this:

EDIT: Photobucket sucks, and has removed the ability to show pictures on a forum without paying them something like $300US per year.  Basically, you're going to set CP to store the files with permissions of 0777 (you'll probably need to hit advanced settings).


While the New Permissions tool can be run to fix the permissions on media already moved to the array, running this tool may have adverse effects on docker applications, since those apps usually have unique permission requirements within their appdata folders / files.

Either run New Permissions and specifically exclude the drive your appdata is stored on (and CA's appdata backup folder, if backing up the appdata via CA), or use the Docker Safe New Permissions tool included with the Fix Common Problems plugin, which excludes appdata and CA's appdata backup automatically.
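
For a one-off fix on a specific folder that's already on the array, something like this from the unRaid terminal should also work (the share path is just an example; nobody:users is unRaid's usual owner for share data):

chown -R nobody:users /mnt/user/Media/Movies
chmod -R 777 /mnt/user/Media/Movies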


How should I set up my appdata share?

NOTE

There still seem to be some applications that prefer to have their /config folder mapping set to /mnt/cache/appdata/... instead of /mnt/user/...  The most trouble-free experience with docker applications will always be directly referencing disk shares (/mnt/cache/appdata/...) instead of user shares.  If you are doing this, then you should have the appdata share set to use cache drive: only.

Original Posting:

Assuming that you have a cache drive, the appdata share should be set to use cache drive: prefer, not use cache: only.

Why?  What difference does it make?

The difference is what happens should the cache drive fill up (due to downloads, or the cache floor setting within Global Share Settings).  If the appdata share is set to use cache: only, then any docker application writing to its appdata will fail with a disk-full error (which may in turn have detrimental effects on your apps).

If the appdata share is set to use cache: prefer, then should the cache drive become full, any additional writes by the apps to appdata will be redirected to the array, and the app will not fail with an error.  Once space is freed up on the cache drive, mover will automatically move those files back to the cache drive where they belong.


Can I switch Docker containers, same app, from one author to another?  How do I do it?

Answer is based on Kode's work here.

Some applications have several Docker containers built for them, and often they are interchangeable, with few or only small differences in style, users, permissions, Docker base, support provided, etc.  For example, Plex has a number of container choices built for it, and with care you can switch between them.

Stop your current Docker container
Click the Docker icon, select Edit on the current docker, and take a screenshot of the current Volume Mappings; if there are Advanced settings, copy them too
Click on the Docker icon, select Remove, and at the prompt select "Container and Image"
Find the new version in Community Applications and click Add
Make the changes and additions necessary, so that the volume and port mappings and advanced settings match your screenshots and notes
Click Create and wait (it may take a while); that's it, you're done!  Test it, of course
The last step may take quite a while, in some cases half an hour or more.  The setup may include special one-time tasks such as checking and correcting permissions.


I want to run a container from Docker Hub; how do I interpret the instructions?

Using the duplicati container as an example.

Basically, looking at the instructions:
 

docker run --rm -it \
    -v /root/.config/Duplicati/:/root/.config/Duplicati/ \
    -v /data:/data \
    -e DUPLICATI_PASS=duplicatiPass \
    -e MONO_EXTERNAL_ENCODINGS=UTF-8 \
    -p 8200:8200 \
    intersoftlab/duplicati:canary

--rm = remove the container when it exits (not sure we want that, but if you did, you could add it into the Extra Parameters box)
-it = open an interactive pseudo-terminal (not sure why, with a webui, but if you did, you could add it into the Extra Parameters box)
-v /root/.config/Duplicati/:/root/.config/Duplicati/ = map a volume host:container, therefore I would suggest -v /mnt/cache/appdata/duplicati:/root/.config/Duplicati
-v /data:/data = map a volume host:container, therefore I would suggest -v /mnt/user/share:/data
-e DUPLICATI_PASS=duplicatiPass = set the webui password
-e MONO_EXTERNAL_ENCODINGS=UTF-8 = encoding - leave at UTF-8
-p 8200:8200 = port mapping host:container
intersoftlab/duplicati:canary = dockerhub repository/image:tag
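
Putting those suggestions together, the full command would look something like the sketch below (host paths are the suggestions above, and -d --name replaces --rm -it so the container runs detached; adjust to your system):

docker run -d --name duplicati \
    -v /mnt/cache/appdata/duplicati:/root/.config/Duplicati \
    -v /mnt/user/share:/data \
    -e DUPLICATI_PASS=duplicatiPass \
    -e MONO_EXTERNAL_ENCODINGS=UTF-8 \
    -p 8200:8200 \
    intersoftlab/duplicati:canary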

 

Pasting all that into Unraid:
[screenshot: the template fields filled out in the unRaid webui]

And hey presto...

 

[screenshot: the duplicati webui up and running]


I want to change the port my docker container is running on, or I have two containers that want to use the same port.  How do I do that?

 

So say you had four apps with conflicting docker container ports; just edit the host ports like this...

 

[screenshot: editing the host ports in the container templates]

 

Don't change the container port.  As you can see in this bit of a Dockerfile (the code used to generate a container) from nzbget, only 6789 is actually "exposed", so even if you change it in the app's webui to 6788, it won't be able to communicate, as only 6789 is exposed to the host at the container/host interface.
 

# ports and volumes
VOLUME /config /downloads
EXPOSE 6789
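
In other words, resolve a conflict by changing only the host side of the mapping, e.g. (the host port here is just an example):

-p 6790:6789    (host port 6790 -> container port 6789)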

 


 


How do I stop/start/restart a docker container via the command line?

 

Easily, with these simple commands:

docker stop dockername [stops the container]

docker start dockername [starts the container]

docker restart dockername [stops and restarts a running container; if it's not running, starts it]

 

Want to put it in a script? Start Plex for example:

#!/bin/bash
docker start Plex
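
The name to use is the container name as shown on the Docker tab; you can also list them from the command line:

docker ps -a --format '{{.Names}}: {{.Status}}'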

 


How do I pass through a device to a container?

To pass through a device to a container, you use the device tag in the Extra Parameters field of the container template (only visible in advanced mode).
The device tag looks like this:


--device=/path/to/device

Let's say we want to pass through a USB-to-RS232 adapter to a container.  When you connect the adapter to the unRAID server, you will see that a device is created in /dev/.  If this is the only USB-to-RS232 adapter you have, it will most likely get /dev/ttyUSB0.
To pass this through to the container, we add the below in the Extra Parameters field.


 
--device=/dev/ttyUSB0

When you start the container again, you should have the device available.

If the device does not get a node created in /dev/, you can pass it through using the bus path.  Let's say we want to pass through a PC/SC card reader.  To find the correct path, go to Tools --> System Devices in the webgui and find your device in the USB devices section.
In my case, I want to pass through my OmniKey CardMan 6121.


Bus 002 Device 003: ID 076b:6622 OmniKey AG CardMan 6121

The important info here is the Bus and Device number.  The first part of the path is always the same: /dev/bus/usb/.  After that comes the Bus number, then the Device number, like this: /dev/bus/usb/Bus/Device.  So in my case it will be as below.


--device=/dev/bus/usb/002/003

A USB device can be passed through without installing the driver in unRAID, but it will need the driver installed in the container.  For devices like a PCIe DVB card, the driver must be installed in unRAID.
In short, one could say that a device can be passed through as long as a device node is created in /dev/.

The device tag will follow the path recursively and add every device found to the container.
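
Before starting the container, you can confirm the node actually exists from the unRaid terminal (the paths are the examples above):

ls -l /dev/ttyUSB0
ls -l /dev/bus/usb/002/003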


How do I install a second instance of an application?

 

There may be some use cases where you may wish to run a given application twice, with separate appdata settings, ports, etc.

 

Assuming that the container runs as network type: Bridge,

 

  • On the Apps tab (you do have Community Applications installed, don't you?),
  • Go to Settings - Enable Reinstall Defaults
  • Go to the Installed Apps section,
  • Reinstall, using default values, the application you want to run another instance of
  • Change the name of the application, along with assigning it different ports and paths (you may need to show advanced settings to see the appdata (/config) path)
  • Hit Apply

 

If the application runs as network type: Host, you will do all the same as above, but you will need to switch the network type to Bridge, and add in all of the applicable ports and reassign them as needed (ie: check with the support thread and/or the project URL to determine which ports need to be defined).

 

 

Alternatively (under either Bridge or Host network types), if you are running unRaid 6.4+, then you can also assign the new instance a different IP address and keep the ports the same (bridge mode), or not define them at all (host mode).  But you will still have to change the name of the application and set the appdata (/config folder) accordingly.
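
Under the hood, the second instance is simply another container started from the same image with its own name, ports, and appdata path.  A minimal sketch, using Sonarr as a hypothetical example (name, host port, and path are illustrative):

docker run -d --name sonarr2 \
    -v /mnt/cache/appdata/sonarr2:/config \
    -p 8990:8989 \
    linuxserver/sonarr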


Why does a certain docker application always start when booting unRaid, even when autostart is set to off?

 

One of two reasons:

 

CA Docker Autostart Manager is installed and configured to start that application.  (Any application which CA Docker Autostart Manager manages will appear as autostart = off on the docker tab.)

 

or

 

The application's template has the following option within the Extra Parameters section:

--restart=always

 

That particular option will ALWAYS start the applicable application when the docker service initializes, regardless of any autostart option.  The usual use of this parameter is to ensure that if the application / container crashes, it will automatically restart.  A side effect is that it will always start up at initialization.
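
To check whether a container has a restart policy set, and to clear it without editing the template, docker can do both directly (the container name is illustrative):

docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' myapp
docker update --restart=no myapp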


Why are some of my applications randomly stopping and won't restart?  (unRaid 6.4+ with Unassigned Devices mounting SMB shares)

 

Several possible reasons for this:

  • Corrupted or full docker.img file -> covered elsewhere in this FAQ
  • Corrupted files in the appdata folder -> may be easiest to simply delete the appdata and start over again

But this FAQ entry will deal with another possibility: you're using the Unassigned Devices plugin to mount a remote share via SMB, hosted on another server or computer, and have passed that mount to the docker application in question.  (If you're not using UD to mount a remote SMB share, none of this applies to you.)

 

You will notice this in a few different ways:

  • After attempting to restart the application, you will get a rather vague "Server Execution Error"
  • Your syslog may contain the following entries:
    Nov 11 18:20:14 Test kernel: CIFS VFS: Error -104 sending data on socket to server
    Nov 11 18:20:14 Test kernel: CIFS VFS: Error -32 sending data on socket to server
  • From the command prompt, an "ls" command will return "Stale File Handle" when attempting to list the contents of the remote share
  • If, instead of restarting the container, you edit it, make a change, revert the change, and then apply, you will see the docker run command fail with an error of Stale NFS file handle

The solution is, within Unassigned Devices' settings, to force SMB mounts to use SMBv1.

 

This problem isn't limited to docker containers, though.  It is a consequence of the fact that not all systems properly support SMBv2/3, and those mount points you made with unRaid & Unassigned Devices may become unavailable over the network or to any application.
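
A quick way to check whether this is what's biting you is to search the syslog for CIFS errors (this is unRaid's standard syslog location):

grep "CIFS VFS" /var/log/syslog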

On 5/8/2017 at 8:40 PM, Squid said:

This thread is reserved for Frequently Asked Questions, concerning all things Docker, their setup, operation, management, and troubleshooting.  Please do not ask for support here, such requests and anything off-topic will be deleted or moved, probably to the Docker FAQ feedback topic.  If you wish to comment on the current FAQ posts, or have suggestions or requests for the Docker FAQ, please put them in the Docker FAQ feedback topic.  Thank you!

I have cleaned up this thread and pinned the Docker FAQ feedback thread. As noted in the quoted text from the first post in this FAQ thread, please post any questions to that other thread.


How come I get "Execution Error" when trying to start a container?

[screenshot: the Execution Error popup]

 

This happens because the docker start command is returning an error.  To determine the actual error, you will need to try to start the container from the command prompt:

root@ServerA:# docker start cops
Error response from daemon: linux mounts: lstat /mnt/disks/SERVERB_EBooks: stale NFS file handle

In this case, there is something wrong with my mount via Unassigned Devices of a remote share.  My fix was to unmount the share and then remount it.
