Docker FAQ


RobJ


This thread is reserved for Frequently Asked Questions concerning all things Docker: setup, operation, management, and troubleshooting.  Please do not ask for support here; such requests and anything off-topic will be deleted or moved, probably to the Docker FAQ feedback topic.  If you wish to comment on the current FAQ posts, or have suggestions or requests for the Docker FAQ, please put them in the Docker FAQ feedback topic.  Thank you!



Getting Started, Setup

General Questions

Troubleshooting and Maintenance

 


unRAID FAQs and Guides -
* Guides and Videos - comprehensive collection of all unRAID guides (please let us know if you find one that's missing)
* FAQ for unRAID v6 on the forums, for general NAS questions, not for Dockers or VMs
* FAQ for unRAID v6 on the unRAID wiki - it has a tremendous amount of information, questions, and answers about unRAID.  It's being updated for v6, but much is still only for v4 and v5.
* Docker FAQ - concerning all things Docker: setup, operation, management, and troubleshooting
* FAQ for binhex Docker containers - some of the questions and answers are of general interest, not just for binhex containers
* VM FAQ - a FAQ for VMs and all virtualization issues

 

The old Docker FAQ (on the old forum).  However, do not use the old index, as it points to the deleted entries.  You will have to browse through the old FAQ to find the entries you want.


Know of another question that ought to be here?  Please suggest it in the Docker FAQ feedback topic.


Suggested format for FAQ entries - clearly shape the issue as a question or as a statement of the problem to solve, then fully answer it below, including any appropriate links to related info or videos.  Optionally, set the subject heading to be appropriate, perhaps the question itself.

While a moderator could cut and paste a FAQ entry here, only another moderator could edit it.  It's best therefore if only knowledgeable and experienced Docker users/authors create the FAQ posts, so they can be the ones to edit it later, as needed.  Later, the author may want to add new info to the post, or add links to new and helpful info.  And the post may need to be modified if a new unRAID release changes the behavior being discussed.


Additional FAQ entries requested -
* linking between containers, i.e. between Sab and SickBeard - this might be best for specific author support FAQs, as to how their containers should be linked
* folder mapping - sufficiently covered yet?  more needed?
* port mapping - probably needs both a general FAQ here and individual FAQs for many of the containers
* location for the image file - answered in the Upgrading to UnRAID v6 guide, but should be here too
* where to put appdata - probably needs a general FAQ here, but could use individual FAQs for some containers


Moderators:  please feel free to edit this post.
How do I get started using Docker containers?

First things to view -
* All about Docker in unRAID - Docker Principles and Setup - video guide by gridrunner
* Guides and Videos - check out the Docker section, full of useful videos and guides

First things to read -
* unRAID V6 Manual - has a good conceptual introduction to Dockers and VMs
* LimeTech's Official Docker Guide
* Using Docker - unRAID Manual for v6, guide to creating Dockers
* Later, when you have time, check the other stickied threads here, at the top of the Docker board.

First suggestions -
* Turn on Docker support in the Settings page (there's more information in the Upgrading to UnRAID v6 guide).
* Then install the Community Applications plugin, to obtain the entire list of containers, already categorized for you.  You can select the containers you want, and add them from this plugin.
* Next, read this entire FAQ!  [not working yet: This post] includes examples of how to set up the appdata folder for the configuration settings of your Docker containers.

What are the host volume paths and the container paths?

Just in case this is too hard to read: if you're reading this part of the FAQ, you're probably going to be best off setting up your docker paths as /mnt mapped to /mnt, similar to this:

[Screenshot: example volume mappings, with /mnt on the host mapped to /mnt in the container]
Note:  You'll notice that I talk about "best practice" a lot throughout this.  Truth be told, the risks associated with violating the best practices are pretty low.

I've always had a tough time explaining this because it's a confusing issue, but trust me: once you understand it, it will make perfect sense.

First thing:  Docker containers are (for all intents and purposes) completely separate from the rest of the unRaid system.  A container has no idea that it's running on unRaid (or any other system), and has no concept of any other containers that may also be running.  What this section really needs is a video with lots of arrows, hand waving, etc.  But this is my best shot at it purely using text.

A container usually needs to access information (either read only or read/write) on the host (unRaid) system.  Because a container is completely separate from the unRaid system we have to tell it what folders (paths) that it has access to.

This is done in the mapping section of the Add Container screen.

There are two parts to every line.  The container volume path and the host path.

The host path is (generally) your shares that you want to give the container access to.  Best practice for containers dictates that if the container only needs access to a Downloads share, that you only create a mapping for the Downloads share.

On the host path this would be something like /mnt/user/Downloads.

On the container path side of things, you tell the container how it should "refer" to the host path.  I.e., if the container path is /Downloads, then whenever you tell the container to save something into /Downloads, docker will "map" it and actually store it on your unRaid system at /mnt/user/Downloads.

In most cases, it really doesn't matter that much what your host and container paths are, as long as you can get to the data that you want.  Where it begins to matter is if you have multiple containers that need to communicate with each other.  E.g. SabNZBD and Sonarr: Sab tells Sonarr "I just downloaded a file and stored it in /Downloads" (remember that /Downloads is mapped to /mnt/user/Downloads).

Now, if Sonarr has different mappings (e.g. /MyDownloads mapped to /mnt/user/Downloads), then it's not going to be able to find the downloaded file, because it will be looking in /Downloads for the file (like Sab told it to), but that folder (/Downloads) does not exist in the Sonarr container.

"Best practice" in this case dictates that both Sab and Sonarr have identical mappings for the downloads folder.

Here's a thread showing one user's issues with this:  http://lime-technology.com/forum/index.php?topic=43337.0

Maybe an easy way to think about this is to compare it to mapping a network drive in Windows.  When you map a network drive in Windows to something like \\tower\MyMovies, Windows creates a new drive letter (Z:), and whenever it sees a reference to Z:\MyParticularMovie, what it actually does is translate that internally to \\tower\MyMovies\MyParticularMovie.  Docker does the same.  If you map /Downloads to /mnt/user/Downloads, the container refers to everything as /Downloads, but internally docker substitutes /Downloads/MyParticularMovie with /mnt/user/Downloads/MyParticularMovie.
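The same prefix substitution can be sketched in plain shell (the paths are just the hypothetical /Downloads example from above, not anything docker itself runs):

```shell
# Hypothetical mapping: container path /Downloads -> host path /mnt/user/Downloads
container_root="/Downloads"
host_root="/mnt/user/Downloads"

container_file="/Downloads/MyParticularMovie"
# Docker swaps the mapped prefix, much like Windows resolving a drive letter
host_file="${host_root}${container_file#"$container_root"}"
echo "$host_file"
```

Running this prints /mnt/user/Downloads/MyParticularMovie, the actual location on the unRaid side.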


In my sample picture above, you will notice that I have a separate /config line.  This is where the docker actually stores its data (not its downloads, or whatever).  This is almost always mapped to /mnt/cache/appdata/dockername.  The template authors have now pretty much settled on using appdata/dockername.  The one big caveat is that the appdata share that you create must be a "cache only" share.  If you do not set it to be cache only, when the mover script comes along every day at 3am, it will move the files from your cache drive to your array.  In the process, permissions on the files will get changed with the net result that your docker application probably will not work correctly the next time you use it.


Confused yet?

An easy workaround to this (but not "best practice") is to pass through either /mnt/user or /mnt on both the container and host paths, on all of your containers.

This has the advantage that when you're browsing (within a container) to select a folder, you are going to see the exact folder structure that you're already used to from browsing your shares on your server.  In my sample picture above, I would tell Ubooquity (in its UI) that my ebook collection is stored in /mnt/user/ebooks.  This happens to be the EXACT same path you would use to refer to it within unRaid, which is why this trick is so easy.

The downside is that your container then has access to all of your shares (when there's no real need for it to), and you pretty much also have to make sure that /mnt/user is always passed with read/write privileges.  (One huge advantage of docker is that a container does not have to have write privileges to shares it doesn't need to write to.)

Confused yet?

And now for a final kicker.  Adding a line called /mnt mapped to /mnt may not work on all containers.  Some containers will outright insist on having a /workdirectory or a /downloads folder, which you then have to suitably map to a particular share.  But for the majority of them (probably at least the ones with a UI), you should be safe using the /mnt mapped to /mnt trick.

P.S.  If anyone else can come up with something that makes more sense than the above, please be my guest.

Edited by Squid

What do I fill out when a blank template appears?

Assuming you're using Community Applications, see this post:   http://lime-technology.com/forum/index.php?topic=37058.msg392361#msg392361 and my reply.  TLDR:  The application is already installed.  Just close the tab.

If you're not using Community Applications, select a template to install via the template drop down list.

How do I move or recreate docker.img?

The easy way to move docker.img -
* Go to Settings -> Docker -> Enable Docker, and set to No, then click the Apply button  (this disables Docker support)
* Using mc or any file manager or the command line, move docker.img to the desired location (/mnt/cache/docker.img is recommended)
* In Settings -> Docker, change the path for the Docker image to the exact location you just moved it to
* Now set Enable Docker back to Yes, and click the Apply button again  (re-enabling Docker support)
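The move in the middle step is just an ordinary file move; this sketch simulates it with temporary stand-in paths (on a real server you would use the actual paths, e.g. moving /mnt/disk1/docker.img to /mnt/cache/docker.img, with Docker support disabled first as described above):

```shell
# Stand-in directories instead of real unRaid mount points
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/docker.img"          # stand-in for the existing image file

# The actual operation -- a plain move, nothing docker-specific
mv "$src/docker.img" "$dst/docker.img"
ls "$dst"
```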

The standard way to move or recreate docker.img is to stop Docker support, delete the current image, re-enable Docker support and recreate the image in the desired location (/mnt/cache/docker.img is recommended), then re-add your current templates.  Your settings should be safe, and nothing else is moved or changed, so once your templates are restored and the Dockers are restarted, they *should* work just the same.  All of that is found in a guide by JonP at -

   ***OFFICIAL GUIDE*** Restoring your Docker Applications in a New Image File

Why does my docker.img file keep filling up?

Assuming that you have set a reasonable initial size for the image file of say 15-20 Gig for the average use case, then there are a few possibilities.

For applications that download (SabNZBD, NZBGet, etc.), you must make sure that the destination AND intermediate (temporary) download folders are stored outside the image file (i.e. you must make sure that the appropriate folders are mapped correctly).

Any folder which is referenced within the application but NOT explicitly mapped to a user share will wind up being stored within the docker.img file.

See these posts for some real-world examples of common mistakes:

SabNZBD:  http://lime-technology.com/forum/index.php?topic=42468.msg403951#msg403951   Note the very subtle difference in the use of the "/" that caused this problem.

NZBGet: http://lime-technology.com/forum/index.php?topic=42500.msg405046#msg405046
 

Also note that unRaid (Linux) folders are case-sensitive.  Unlike Windows, /Downloads is not the same thing as /downloads.  If you have mapped a folder as /Downloads but refer to it within the app as /downloads, the files will wind up being stored within the docker.img file.
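A quick way to see the case-sensitivity for yourself (using a temporary directory rather than a real share):

```shell
tmp=$(mktemp -d)
# On Linux these are two entirely separate directories -- no conflict
mkdir "$tmp/Downloads" "$tmp/downloads"
ls "$tmp"
```

ls shows both Downloads and downloads, which Windows would have treated as the same folder.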

Note that there will pretty much always be some increase in the size of the docker.img file as the applications update themselves, etc.  But any large increase / outright filling up of the image file should be investigated, as once the image is 100% full, unexpected results and/or corruption due to incomplete write operations could result.


Why can't (insert docker app name here) see my files mounted on another server or outside the array?

Generally, to have a docker application be able to see files located on another server, or stored outside the array, you would use the plugin called "Unassigned Devices" to mount the appropriate shares.

However, on versions of unRaid prior to 6.2beta 20, in order for any docker application to see the files mounted by Unassigned Devices, you must stop the entire docker service (Settings - Docker), and then restart it.

If you are running unRaid 6.2beta 20+, then you need to adjust your volume mapping modes (usually either RW or Read Only) to RW:Slave or Read Only:Slave.  (Edit the volume and you'll see the option for the "mode".)

 

EDIT:  I guess I should mention that slave mode only works for folders within /mnt/disks (why, I have no idea), but it doesn't hurt to have it enabled for a path that doesn't support it.


Why did my files get moved off the Cache drive?
Why do I keep losing my Docker container configurations?
Why did my newly configured Docker just stop working?
Where did my Docker files go?


Docker configuration files and application data are often saved to the Cache drive, but there's a very important thing to remember - if you aren't careful, the Mover will move your files off the Cache drive and into a User Share!

Most users have User Shares turned on, and if you do then the Mover process is going to try to move files and folders from the Cache drive to a data drive. It assumes that any top level folder on any data drive or the Cache drive is a Share folder.

To avoid that, here are the rules for the Mover:
* it does not move files at the root of the Cache drive
* it does not move root folders IF those folders are configured as Cache-Only shares
* it does not move root folders whose name begins with a period.  (In Linux by convention, file or folder names beginning with a period are considered 'hidden'.)  

What we recommend (assuming you are using a Cache drive or pool) -
* Configure the Docker image file to be in the root of the Cache drive/pool, where it will be safe.  (/mnt/cache/docker.img)
* Configure all Docker folders as Cache-Only shares.  If you forget, the Mover may move them off the Cache drive!
For an example -
* The configuration settings for your Docker containers are typically stored in an appdata folder, and the appdata folder is typically stored on the Cache drive.
* Create the appdata folder on your Cache drive or pool.
* Go to the Shares tab for the appdata folder and change the Use Cache setting to Only.
* Now when you configure a Docker container path for the /config folder, set it to /mnt/cache/appdata/APP_NAME (e.g. /mnt/cache/appdata/plex and /mnt/cache/appdata/couchpotato).  On first use, those folders will be automatically created for you, and WILL NOT move from the Cache drive.
* Note: NEVER use /mnt/user/appdata/APP_NAME as the host path for the /config files.  The FUSE md driver does not properly support symlinks which most Docker applications require (many documented instances of this around the threads).  EDIT: As of 6.2 RC4, using /mnt/user/appdata/APP_NAME should work fine
  Always reference your appdata via /mnt/cache/appdata/APP_NAME
  If you do not have a cache drive, and are storing your appdata files within the array, then reference the appdata like this: /mnt/disk1/appdata/APP_NAME (or disk2, disk3, etc)
  If you don't do this, you may run into unexpected results with your applications -> not starting up, strange results, etc.


Why does my docker.img file keep filling up when I've got everything configured correctly? (Applications are NOT downloading directly into the image, etc)

Some docker applications will log just about everything that they do, and this logging can gradually (or quickly) consume a fair amount of space in the image.

Review this thread: http://lime-technology.com/forum/index.php?topic=45249.0 for details on what you can do to mitigate this and/or delete the log files

TLDR:  If you are running unRaid 6.2+, then you can also add this to the Extra Parameters section of each template (ie: Edit the running container):

--log-opt max-size=50m --log-opt max-file=1 

to automatically cap the log for that container at 50 MB.  (Unfortunately, this option is NOT supported under unRaid 6.1.x.)
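For reference, the Extra Parameters end up on the container's docker run command line. A hedged sketch (the container name and image here are hypothetical; on unRaid you only fill in the Extra Parameters field):

```shell
# Cap this container's log at a single 50 MB file (same flags as above)
docker run -d --name=my-app \
  --log-opt max-size=50m --log-opt max-file=1 \
  hypothetical/image
```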


What do I do when I see 'layers from manifest don't match image configuration' during a docker app installation?

I have a theory as to why this is actually happening; unfortunately I am unable to replicate this issue, so I cannot test the theory.  EDIT:  I now know what's happening.  The details however aren't important (it's caused by the docker API itself, not unRaid).

As to the solution, you will need to delete your docker.img file and recreate it again.  You can then reinstall your docker apps through Community Application's Previous Apps section or via the my* templates.  Your apps will be back the exact same way as before, with no adjustment of the volume mappings, ports, etc required.


How Do I Create My Own Docker Templates?

Edit: This FAQ entry is for developers only.  Users do not need to worry about any of this.  This question keeps coming up, and rather than have the answer continually buried in the Programming forum I elected to put it here instead.

Procedure varies a bit depending upon your version of unRaid.  This is what I really recommend.  Manually creating the XML files or deleting sections from the generated files is a recipe for disaster.

unRaid 6.1.x

- On the docker tab, click Add Container.
- Fill out all of the relevant information, including the information under the Advanced section (don't worry about the banner -> it's been deprecated)
- Add the container - It will now install
- On the flash drive, in config/plugins/dockerMan/templates-user, you will find the XML file.
- Optional: Edit the template and remove all of the host ports and host volumes (don't remove the tags, just delete the info within)
- Install the application template categorizer plugin ( http://lime-technology.com/forum/index.php?topic=40111.0 ) and select the categories.
- Add the <Date>, <Beta>, <Category> entries the plugin gives you to the XML file

unRaid 6.2+

- On the Settings tab, docker, select Docker Authoring Mode
- On The Docker tab, select Add Container
- Fill out all of the relevant information, including the advanced section.  Note that you DO NOT need to fill out host ports or host volumes
- Click Save
- Copy and paste the ENTIRE XML that appears to an appropriately named XML file somewhere.
- Side note, the Categories which dockerMan generates are unofficial.  The only official entries for that are generated by the Application Template Categorizer.  (Although at this time and for the foreseeable future the two do match)

Common to all unRaid versions

- Add in whatever other tags you wish that are documented here (http://lime-technology.com/forum/index.php?topic=40299.0 ) 

Publishing to CA

- Create a GitHub repository and upload the XML(s)
- Drop myself (squid) and jonp a PM, with a link to the GitHub repo, asking for it to be published and for your account to be upgraded to developer status.
- Odds on, I'm going to reply first, and will insist you create a <Support> thread.  Just create it anywhere (as you won't be able to create one in the Docker Containers forum until jonp upgrades your status) and a moderator will move it.  Update the XML with the support link.  I will also insist on Categories being set.
- I do a quick check on the XML(s) and then add it to CA.  If there are no major problems, within 2 hours all users of CA will have access to it under normal circumstances, or immediately if they hit Update Applications.
- Time permitting, after it's available to all users of CA, I will update the Repositories thread.

Note:  After you have a repository added, I do not need to know about any new XMLs being added.  CA (and more to the point Kode's appfeed) will handle all changes to the XMLs automatically every 2 hours.


Why does Sonarr keep telling me that it cannot import a file downloaded by nzbGet? (AKA Linking between containers)


This problem seems to continually be brought back up, and the reasons all go back to host / container volume mapping.  (Note that I'm using nzbGet / Sonarr as an example, but the concept is the same for any apps that communicate via their APIs and not by the "blackhole" method.)

First and foremost, within Sonarr's settings, tell it to communicate to nzbGet via the IP address of the server, not via localhost

Here is how a file gets found by Sonarr, downloaded by nzbGet, post-processed by Sonarr, and moved to your array.

Sonarr searches the indexers for the file, and then tells nzbGet (utilizing its API key) to download the file.   Very few users have trouble with this section.

nzbGet downloads the file, and then tells Sonarr the path that the file exists at.  This is the section this FAQ entry is going to deal with.

Sonarr performs whatever post-processing you want it to do (see the appropriate project pages for help with this).

Sonarr then moves the file from the downloaded location to the array.  Once again, very few users have trouble with this section.

nzbGet downloads the file, and then tells Sonarr the path that the file exists at

Let's imagine some host / container volume mappings set up as follows (this seems to be a common setup for users having trouble):
 

App Name   Container Volume   Host Volume
nzbGet     /config            /mnt/cache/appdata/nzbget
sonarr     /config            /mnt/cache/appdata/sonarr
           /downloads         /mnt/cache/appdata/nzbget/downloads/completed/


Within nzbGet's settings, the downloads are set to go to /config/downloads/completed

So after the download is completed, nzbGet tells sonarr that the file exists at /config/downloads/completed

Sonarr dutifully looks at /config/downloads/completed, sees that nothing exists there, and throws errors into its log stating that it can't import the file.  The error will be something like "can't import /config/downloads/completed/filename".

Why?  Because the mappings don't match.  Sonarr's /config mapping is set to /mnt/cache/appdata/sonarr, whereas nzbGet's /config mapping is set to /mnt/cache/appdata/nzbget.

Ultimately, the file is stored at /mnt/cache/appdata/nzbget/downloads/completed/, and sonarr winds up looking for it at /mnt/cache/appdata/sonarr/downloads/completed.

Another common set up issue (which is closer to working)

 

App Name   Container Volume   Host Volume
nzbGet     /config            /mnt/cache/appdata/nzbget
           /downloads         /mnt/cache/appdata/downloads
sonarr     /config            /mnt/cache/appdata/sonarr
           /downloads         /mnt/cache/appdata/downloads/completed


Here's what happens with this setup

nzbGet is setup to download the files to /downloads/completed

After a successful download, the file exists (as far as nzbGet is concerned) at /downloads/completed/...., and nzbGet tells sonarr that.

Sonarr then looks for the file, and is unable to find it and throws errors into the logs.  And the kicker is that the error states something along the lines of "Can't import /downloads/completed/whateverFilenameItIs"

Everything kinda looks right.  After all, the error message is showing the correct path...  No it's not, because the mappings for the downloads don't match between Sonarr and nzbGet.

nzbGet put the file at /downloads/completed/filename (host path of /mnt/cache/appdata/downloads/completed/filename)

Sonarr is looking for the file at /downloads/completed/filename, which through its own mapping resolves to a host path of /mnt/cache/appdata/downloads/completed/completed/filename

Huh?  I don't get it.  ->  The container paths match between the two apps, but the host paths are different, which means that communication isn't going to work correctly.  Think of the "container" path as a shortcut to the host path.

Proper way to set up the mappings:

App Name   Container Volume   Host Volume
nzbGet     /config            /mnt/cache/appdata/nzbget
           /downloads         /mnt/cache/appdata/downloads
sonarr     /config            /mnt/cache/appdata/sonarr
           /downloads         /mnt/cache/appdata/downloads

Tell nzbget to store the files in /downloads/completed, and sonarr will be able to find and import the files because both the host and container volume paths match.
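In docker run terms (image names here are hypothetical; on unRaid these are simply the templates' volume mappings), the working setup above corresponds to both containers sharing the same host-to-container mapping for /downloads:

```shell
# Both containers map the SAME host path to the SAME container path,
# so a path reported by one app is valid inside the other
docker run -d --name=nzbget \
  -v /mnt/cache/appdata/nzbget:/config \
  -v /mnt/cache/appdata/downloads:/downloads \
  hypothetical/nzbget

docker run -d --name=sonarr \
  -v /mnt/cache/appdata/sonarr:/config \
  -v /mnt/cache/appdata/downloads:/downloads \
  hypothetical/sonarr
```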


TLDR:  Trust me the above works


How do I limit the CPU resources of a docker application?

Two methods:

The first is the easiest and doesn't involve much thought about what's going on under the covers: pin the application to one or more cores of the CPU.  This way the application will only execute on those particular cores, leaving the other cores open for other applications / VMs / etc.

Add this to the extra parameters section when you add / edit the app:

 

--cpuset-cpus=0,1

This will pin the application to cores 0 & 1.  (Note that none of my CPUs have hyperthreading, so I have to assume that you would set the cores the same way as when pinning a VM on a hyperthreaded CPU.)

Alternatively, you can also prioritize one docker app over another.  E.g., you're running the folding@home app and Plex.  When Plex starts transcoding, you want to give it as much CPU as possible at the expense of folding@home.

To do something like this, you would add to the extra parameters section of folding@home the following:

 

--cpu-shares=2

This gives folding@home the absolute lowest CPU priority, so that if another docker app requires / wants all the available CPU, it will get it.  Note that if the other docker apps are idle and doing nothing, then folding@home will use as much as it can (subject to its own internal settings).

This is a rather simple example, only utilizing 2 apps.  Here is a better example (and an explanation of what's actually happening when using 3+ apps) (from the docker run reference):

Quote
CPU share constraint
By default, all containers get the same proportion of CPU cycles. This proportion can be modified by changing the container’s CPU share weighting relative to the weighting of all other running containers.

To modify the proportion from the default of 1024, use the -c or --cpu-shares flag to set the weighting to 2 or higher. If 0 is set, the system will ignore the value and use the default of 1024.

The proportion will only apply when CPU-intensive processes are running. When tasks in one container are idle, other containers can use the left-over CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system.

For example, consider three containers, one has a cpu-share of 1024 and two others have a cpu-share setting of 512. When processes in all three containers attempt to use 100% of CPU, the first container would receive 50% of the total CPU time. If you add a fourth container with a cpu-share of 1024, the first container only gets 33% of the CPU. The remaining containers receive 16.5%, 16.5% and 33% of the CPU.

On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a container is limited to less than 100% of CPU time, it can use 100% of each individual CPU core.

For example, consider a system with more than three cores. If you start one container {C0} with -c=512 running one process, and another container {C1} with -c=1024 running two processes, this can result in the following division of CPU shares:
PID    container    CPU    CPU share
100    {C0}         0      100% of CPU0
101    {C1}         1      100% of CPU1
102    {C1}         2      100% of CPU2
 


Note that it is more complicated (and beyond the scope of a FAQ) to prioritize non-docker applications over a docker application.  For those so inclined, review the docker run reference and play around.
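As a sketch in docker run terms (image and container names here are hypothetical; on unRaid these flags go in the Extra Parameters field), the two approaches look like:

```shell
# Method 1 -- pin a container to cores 0 and 1
docker run -d --name=transcoder --cpuset-cpus=0,1 hypothetical/transcoder

# Method 2 -- give a container the lowest relative CPU weight
docker run -d --name=folding --cpu-shares=2 hypothetical/foldingathome
```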


How do I limit the memory usage of a docker application?

Personally, on my system I limit the memory of most of my docker applications so that there is always (hopefully) memory available for other applications / unRaid if the need arises.  E.g., if you watch CA's resource monitor / cAdvisor carefully when an application like nzbGet is unpacking / par-checking, you will see that its memory usage skyrockets, but the same operation can take place in far less memory (albeit at a slightly slower speed).  That memory will not be available to another application such as Plex until after the unpack / par check is completed.

To limit the memory usage of a particular app, add this to the extra parameters section of the app when you edit / add it:

--memory=4G

This will limit the memory of the application to a maximum of 4G

Quote
-m, --memory=""   Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.

 

CA's resource monitor (or cAdvisor) will display the maximum amount of memory (and the memory used) for a particular app.
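In docker run terms (the container and image names are hypothetical; on unRaid only the Extra Parameters entry is yours to fill in), the flag corresponds to:

```shell
# Hard-cap this container at 4 GB of RAM
docker run -d --name=my-app --memory=4G hypothetical/image
```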


I've recreated my docker.img file.  How do I re-add all my old apps?

Two ways:

Old Way:

From the Docker Tab, go to Add Container, and select one of the my* templates then hit add

New Way:

From within Community Applications, go to previous apps, and hit Reinstall on the app.


Using either method, no adjustment of the template should be necessary, as it will be automatically populated with all of your old volume and port mappings, etc.

After the downloads are complete, you're back in business like nothing happened at all.


With 6.2, do I need to move my appdata and/or docker image into unRaid's recommended shares? (/mnt/user/appdata for appdata and /mnt/user/system for the docker.img)

No you do not need to move the files at all, and everything will still work 100%.

For docker.img, unRaid 6.2 will automatically pick up where your image was stored previously.  For new installs of docker, just change the default Docker storage location to whatever you want.  (Settings - Docker Settings)

For appdata: for already-installed applications, nothing will change if the appdata is not stored in the default location.  However, on new application installations, unRaid will tend to fill out the /config volume mapping with whatever its default is set to.  Go to Settings - Docker Settings to change the default appdata storage location to your existing appdata path.


NOTE:  Neither of those settings will allow you to outright specify a drive (i.e. cache) as the location.  To force the defaults onto anything other than a user share, you will have to type in the path, e.g. /mnt/cache/myAppdataShare.

Another NOTE:  With 6.2 properly supporting hardlinks / symlinks on user shares, it's not as big a deal as it used to be to set the appdata onto a user share, and then use the share settings to make sure that it is set to be a cache-only share.

If you already have an appdata share, and do not change the default docker appdata location to point to your pre-existing share, then new applications that you add will use the LT default share, while your old ones will use your pre-existing share.  Not a problem in and of itself, but at the very least it's confusing, and if you also happen to use CA's appdata backup module then only one of them is going to get backed up, so you have the potential for data loss in the event of a cache-drive failure.


Why can't I delete / modify files created by CouchPotato (or another docker app)?

You may see something like

Quote
" You do not have permission to access \\server\share\folder . Contact your network administrator to request access"
 


This is because the standard permissions that CP sets on the newly downloaded media do not allow access via unRaid's share system.  Within CP's settings, change the permissions to something like this:
[screenshot: CouchPotato permission settings]
While the New Permissions tool can be run to fix the permissions on media already moved to the array, running this tool may have adverse effects on docker applications, since those apps usually have unique permission requirements within their appdata folder / files.

Either run New Permissions and specifically exclude the drive your appdata is stored on (and CA's appdata backup folder, if backing up the appdata via CA), or use the Docker Safe New Permissions tool included with the Fix Common Problems plugin, which excludes appdata and CA's appdata backup automatically.
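If you prefer the console, a rough manual equivalent can be run against a single media share.  The share path below is only an example - substitute your own, and leave appdata (and any CA backup folder) out of it for the same reasons the plugin does.  unRaid expects share files to be owned by nobody:users (uid 99 / gid 100), and the chmod recipe mirrors what the stock newperms script applies.

```shell
# Fix ownership on one media share only (path is an example).
chown -R nobody:users /mnt/user/Media
# Same recipe as unRaid's newperms script: drop execute from owner,
# clear group/other, copy the owner bits to group/other, then re-add
# execute on directories only (capital X).
chmod -R u-x,go-rwx,go+u,ugo+X /mnt/user/Media
```

After this, a file that was rw for its owner ends up as 666, and directories end up as 777, which is what unRaid's share system expects.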


How should I set up my appdata share?

Assuming that you have a cache drive, the appdata share should be set up as use cache drive: prefer, and not use cache: only.

Why?  What difference does it make?

The difference is what happens should the cache drive fill up (due to downloads, or the cache floor setting within Global Share Settings).  If the appdata share is set up to use cache: only, then any docker application writing to its appdata will fail with a disk full error (which may in turn have detrimental effects on your apps).

If the appdata share is set up to use cache: prefer, then should the cache drive become full, any additional writes by the apps to appdata will be redirected to the array, and the apps will not fail with an error.  Once space is freed up on the cache drive, mover will automatically move those files back to the cache drive where they belong.
 

Dockers, there's so much to learn!  Are there any guides?

First, check this whole FAQ topic (the first post has an index).  It includes various guides to getting-started and installation issues.  Then check the stickied topics on this board, the Docker Engine board.  (links to stickies to be added here)

Then check this list of guides, many of which are video guides:
* Guides and Videos

Application guides
* Guide-How to install SABnzbd and SickBeard on unRAID with auto process scripts
* Plex: Guide to Moving Transcoding to RAM
* Get Started with Docker Engine for Linux
* Get Started with Docker for Windows

If you know of other helpful guides (whole topics or single posts), please post about them in the Docker FAQ feedback topic.  Moderators: please feel free to edit this post and its list of guides.  I'm sure there's a better way to format it.

Can I switch Docker containers, same app, from one author to another?  How do I do it?
For a given application, how do I change Docker containers, to one from a different author or group?

Answer is based on Kode's work here.

Some applications have several Docker containers built for them, and often they are interchangeable, with few or only small differences in style, users, permissions, Docker base, support provided, etc.  For example, Plex has a number of container choices built for it, and with care you can switch between them.

  • Stop your current Docker container
  • Click the Docker icon, select Edit on the current docker, and take a screenshot of the current Volume Mappings; if there are Advanced settings, copy them too
  • Click on the Docker icon, select Remove, and at the prompt select "Container and Image"
  • Find the new version in Community Applications and click Add
  • Make the changes and additions necessary, so that the volume and port mappings and advanced settings match your screenshots and notes
  • Click Create and wait (it may take a while); that's it, you're done!  Test it, of course

The last step may take quite a while, in some cases half an hour or more.  The setup may include special one-time tasks such as checking and correcting permissions.
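If you'd rather capture the old container's mappings from the console instead of screenshots, docker inspect can print them before you remove anything.  The container name plex below is only an example - substitute your own.

```shell
# Print the volume and port mappings of an existing container, so they
# can be recreated in the new template ("plex" is an example name).
docker inspect --format '{{json .HostConfig.Binds}}' plex
docker inspect --format '{{json .HostConfig.PortBindings}}' plex
```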


I need some help!  What info does the community need to help me?

As a start, post the following:

1.  The docker run command

[screenshot: where to find the docker run command in the unRaid GUI]

2.  The docker container logs; get those as shown below:

[screenshot: where to find the container logs in the unRaid GUI]

3.  Post AS MUCH info as you can; it's much easier to help if we can reproduce the error.  Personally, if someone has gone to some trouble to provide a decent amount of info, I'm far more likely to help that user than the one who just says "it doesn't work", since that means far more work for me trying to elicit more information and mess around with something trying to work out where things went wrong.
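Both pieces of information can also be pulled from the console if the GUI isn't handy.  The container name below is only an example - use docker ps to see yours.

```shell
# List the names of your running containers.
docker ps --format '{{.Names}}'
# Grab the last 200 lines of one container's log
# ("binhex-sabnzbd" is an example name).
docker logs --tail 200 binhex-sabnzbd
```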


I need to open a terminal inside a docker container, how do I do this?
How do I get a command prompt so I can run commands within a Docker container?


Simply run:
 

docker exec -it $dockername /bin/bash


where $dockername is the name of your container (note that Linux is case sensitive).
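Not every image ships with bash; if the command above fails with an error along the lines of "no such file or directory", plain sh is an almost-universal fallback:

```shell
# Fall back to sh for minimal images (e.g. Alpine-based ones)
# that do not include bash.
docker exec -it $dockername /bin/sh
```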


I want to run a container from docker hub, how do I interpret the instructions?

Using the duplicati container as an example.

Basically looking at the instructions:
 

docker run --rm -it \
    -v /root/.config/Duplicati/:/root/.config/Duplicati/ \
    -v /data:/data \
    -e DUPLICATI_PASS=duplicatiPass \
    -e MONO_EXTERNAL_ENCODINGS=UTF-8 \
    -p 8200:8200 \
    intersoftlab/duplicati:canary

--rm = remove the container when it exits (not sure we want that, but if you did you could add it into the Extra Parameters box)
-it = open an interactive pseudoterminal (not sure why with a webui, but if you did you could add it into the Extra Parameters box)
-v /root/.config/Duplicati/:/root/.config/Duplicati/ = map a volume host:container, therefore I would suggest -v /mnt/cache/appdata/duplicati:/root/.config/Duplicati
-v /data:/data = map a volume host:container, therefore I would suggest -v /mnt/user/share:/data
-e DUPLICATI_PASS=duplicatiPass = set the webui password
-e MONO_EXTERNAL_ENCODINGS=UTF-8 = encoding - leave at UTF-8
-p 8200:8200 = port mapping host:container
intersoftlab/duplicati:canary = dockerhub repository/image:tag
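Put together, the equivalent console command with those unRaid-style paths might look like this (the name, paths and password are examples to adjust for your system):

```shell
# The same run command rewritten for unRaid: -d runs it detached as a
# service instead of --rm -it, and the host paths point at your
# appdata and data shares (all example values).
docker run -d --name duplicati \
    -v /mnt/cache/appdata/duplicati:/root/.config/Duplicati \
    -v /mnt/user/share:/data \
    -e DUPLICATI_PASS=duplicatiPass \
    -e MONO_EXTERNAL_ENCODINGS=UTF-8 \
    -p 8200:8200 \
    intersoftlab/duplicati:canary
```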

 

Pasting all that into Unraid:
[screenshot: the duplicati template filled in on the Add Container page]

And hey presto...

 

[screenshot: the duplicati webui up and running]

