unRAID 6 Beta 6: Docker Quick-Start Guide



NOTE:  This guide is outdated as there is now embedded Docker management inside the unRAID webGui.  We are actively working on an updated guide to post in the near future.  Thank you.

 

Docker Quick-Start Guide

Docker is probably one of the biggest new improvements in the unRAID 6 beta.  Docker allows us to safely use applications that were not built to run on unRAID Server OS.  These applications run in a secure, compartmentalized fashion with minimal consumption of hardware resources.  Docker is available in all three Boot Modes (Basic, KVM, and Xen) in unRAID Beta 6.

 

Prior to first use and for best performance, we require that you install Docker to a Btrfs drive or pool.  For more on Btrfs in unRAID 6 Beta 6, see the Btrfs Quick-Start Guide (being posted soon).

 

Initial Setup

Before you can use Docker for the first time, you will need to configure it under the Extensions/Docker page:

dockerqs-pic1.png

From here, you can specify your install path for Docker and start/stop the service. Be sure to click "apply" after changing this path and then click "start" to install the Docker service and start it for the first time (this is really fast).

 

dockerqs-pic2.png

 

Install Path

If you initially set the installation path to one location, but later decide to change it, you will first need to stop the Docker service from the webGUI before it will allow you to specify a new location.  Your existing Docker install will NOT be moved to the new location, so please think carefully before changing the installation path.  The existing installation will also not be removed.  If you specify a location that contains a previous Docker installation, that installation should remain intact and usable.

 

Btrfs Requirement Details

When an installation path is specified and Docker is initially started, the path's file system type is analyzed.  Docker will only install if Btrfs is detected.  Btrfs makes use of "Copy On Write," which allows the system to significantly reduce the disk space required and improves read-to-write usage ratios on devices using this file system.  Now for some quick math on storage usage, with data from the Beta 6 pre-built Dockerfiles that we have prepared for you.

 

Base Image A

1: Ubuntu 14.04

 

Container 1

1: Base Image A

2: Middleware Applications (python, etc.)

3: SABnzbd

 

Container 2

1: Base Image A

2: Middleware Applications (python, etc.)

3: Couchpotato

 

Container 3

1: Base Image A

2: Middleware Applications (python, etc.)

3: Sickbeard

 

All of our containers are based on the Ubuntu 14.04 base image directly and, as such, do not need to replicate its storage consumption as part of each container's private image.  Now let's review the impact on storage consumption.

 

  • Docker Install:  10MB
  • Ubuntu 14.04 Base Image:  280MB
  • Container 1 (SABnzbd):  75MB
  • Container 2 (Couchpotato):  105MB
  • Container 3 (Sickbeard):  95MB

 

In total, that's about 565MB of used disk space to have these three applications installed and running.  Now suppose we changed our base image from Ubuntu 14.04 to something else based on Ubuntu 14.04, but with Python, SSH, a syslog daemon, and other common tools built in.  We could then reference that new image as the base for our application containers, further reducing the total storage requirements.  Pretty neat, eh?

 

Now, what would disk consumption look like without Btrfs and Copy On Write?  Well, it's way worse than you'd imagine.  You see, for each step in a Dockerfile build process, Docker actually creates a temporary container/image just for that step, shuts that container down, and immediately starts a copy of it to perform the next step in the Dockerfile.  This creates lots of temporary intermediate images which technically remain even after the next image is created, for the purposes of logging/snapshotting.  As such, when installing just these three applications and the one base image, I showed a total of 38 images on my system!  Without Btrfs, these images would have consumed almost 12GB of storage capacity!!  In addition, each command to install a container would have taken significantly longer, as it would have had to make a full copy of the previous image on disk before proceeding with the next step.  Hopefully this thoroughly explains the benefits of Copy On Write with Btrfs and how it's relevant to Docker specifically.  Want to see this on your system?  Type docker images -a to see a list of all the images present on your system.  The Virtual Size column will help show you what those images WOULD represent on a non-Btrfs file system.

 

dockerqs-pic3.png
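
Relatedly, if you want to see how much each individual layer of a single image contributes, docker history will print an image's layers along with their sizes (the image name here is just the SABnzbd example used later in this guide):

docker history eschultz/docker-sabnzbd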

 

Other Images?

Later in this guide, we show you how to quickly install several popular applications on unRAID 6 Beta 6 using Ubuntu 14.04 as our base image.  We then layer in a combination of both middleware and a mainline application into each container (say Python, SSH, and a syslog daemon for middleware, and Couchpotato or Sickbeard for the mainline application).  Using a common base with Btrfs means that you don't have to replicate the space consumed on your disk by that base image in order to use it with another container.  This is Copy On Write in action!  Imagine layering transparent sheets (you know, from the days of old-school projectors) where each layer is one sheet.  When all the layers add together, you have an application installed.  To change an application, you only need to remove the sheet with that application on it, make your edits, then replace it.

However, there are many other base images available on the Docker Registry (https://registry.hub.docker.com/).  There are base images out there for CentOS, Arch, Fedora, and many more.  Best of all, you can choose which layers you want to maintain yourself versus which you let others maintain for you!
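
You can also browse the registry from the command line.  For example, docker search queries it for images matching a term (the search term here is just an example):

docker search centos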

 

Installing Sample Applications (Sickbeard, Couchpotato, SABnzbd, Plex)

In Beta 6, we do require the use of the command line in order to use Docker, although we've been able to reduce the number of commands required considerably.  While we know the community will quickly learn how to install their own customized variants of images and containers, we wanted to start out with a simpler approach to get folks started.  We pre-created and hosted a few common applications that you can now install with a single command.  Each one of these applications requires its own command, but the syntax for each is pretty similar:

 

docker run -d --name="appname" -h hostname -v /path/to/appdata/appname:/config -v /path/to/userdata:/data -p hostport:appport eschultz/appname

 

Each placeholder in the command above (appname, hostname, the two host paths, hostport, and appport) will need to be modified for your installation.  In addition, some applications will have additional variants of this.  Here's a quick rundown on the syntax and what each flag is doing:

 

-h hostname

The -h switch should be followed by the hostname of your unRAID server.  This makes for easier access to the application by hostname.

 

-v /mnt/path/to/appdata/appname:/config

For our prepared Dockerfiles, we store application data (like .conf files) outside of the container itself.  The path for this can be anything on the host.  We recommend installing appdata to either a single disk on your array or, for best performance, to a non-array partition (or a cache drive).  The part before the colon represents the ACTUAL PATH to the config data from the unRAID server host's perspective.  The /config part after the colon specifies the virtual access path from within the application container that is used to access that host path (DO NOT CHANGE THIS PART).  I know this can be confusing, but it works ;-).
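
For example, assuming you keep your appdata on a cache drive (the exact path below is purely illustrative), the SABnzbd config mapping might look like this:

-v /mnt/cache/appdata/sabnzbd:/config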

 

-v /mnt/path/to/userdata:/data

When using an application hosted in a container, you may have to specify a path to either a download location or a media content storage location.  When browsing from within the application, the root folder will contain a /data folder that is really a pass-through to the path you specify to the left of the colon in the example syntax.  This allows easy and controllable access to host storage from within a container.  You can pass multiple paths here if you have different folder mount points (see the sketch below), and you can even change the "/data" part to an alternative name if you desire (just don't use a common root folder name for Linux like /etc or /bin ;-).
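
As a sketch, assuming two hypothetical host folders (one for downloads, one for media), passing multiple paths just means repeating the -v switch:

-v /mnt/user/downloads:/data -v /mnt/user/media:/media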

 

-p hostport:appport

This switch allows you to specify how port mapping should work between container and host.  hostport should be replaced with a number representing the port on the host you want to pass traffic through to the application.  appport is the port that the application exposes inside its container.  You can change the first number, but changing the second number will not work if using pre-built Docker containers such as ours.  When we change the first number but not the second, it's transparent to both the application and the host.  The most common use for this is when you have two separate Dockerized applications that both want to use the same port (see the example below).  You can let them use the same port within each of their respective containers, but map them differently on the host.  Many applications like to claim port 8080 by default, so this becomes very useful.  Some applications use multiple ports and, as such, will have multiple -p switch statements to indicate the host:guest mapping for each.
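
For instance, suppose SABnzbd and a second hypothetical application both expose port 8080 inside their containers.  You could map them to different host ports like this (the host port numbers are arbitrary examples):

-p 8080:8080
-p 8081:8080

The first container is then reached on host port 8080 and the second on host port 8081, while each application inside still believes it owns port 8080.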

 

eschultz/appname

The last part of the command is connecting to Eric's Docker Hub for instructions on how to install this application in a container on your system.  Who is Eric?  He's Developer #2 here at Lime Tech, who is responsible for Docker 1.0 support in this build ;-).  We're using his GitHub account for hosting these Dockerfiles for now, as these files are just a starting point for the community to use to get their feet wet with Docker.

 

If you want to experiment with alternative base images and containers, you are free to do so!  Search the Docker Registry at https://registry.hub.docker.com/ for other containers and base images.  Just know that each variation in base image will add to the total disk consumption by Docker.

 

To make it easier for copy and paste fans, here's each application's docker run command that we've pre-configured for use in this beta:

 

SABnzbd

docker run -d -h hostname --name="sabnzbd" -v /mnt/path/to/appdata/sabnzbd:/config -v /mnt/path/to/userdata:/data -p hostport:8080 -p hostport:9090 eschultz/docker-sabnzbd

 

Couchpotato

docker run -d -h hostname --name="couchpotato" -v /mnt/path/to/appdata/couchpotato:/config -v /mnt/path/to/userdata:/data -p hostport:5050 eschultz/docker-couchpotato

 

Sickbeard

docker run -d -h hostname --name="sickbeard" -v /mnt/path/to/appdata/sickbeard:/config -v /mnt/path/to/userdata:/data -p hostport:8081 eschultz/docker-sickbeard

 

Plex

docker run -d  --net="host" --name="plex" -v /mnt/path/to/appdata/plex:/config -v /mnt/path/to/userdata:/data -p hostport:32400 eschultz/docker-plex

 

NOTE:  Plex is a little unique in that we don't need the -h switch for the hostname because we use a special network setting for it (--net="host").

 

The first time you use the "docker run" command, regardless of which application you are installing, there will be an added delay in setup compared to all subsequent application installations.  This delay is because Docker downloads the Ubuntu 14.04 base image on first use, and that image is then shared across all subsequently installed applications.  When you go to configure your second Lime Tech application container, your installation time will be far less, as the base-image download step is skipped.
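
If you'd rather get that one-time download out of the way up front, you can pull the base image explicitly before your first "docker run":

docker pull ubuntu:14.04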

 

Auto-Start NOT Available In Beta 6

Due to a small bug, we had to adjust the way Docker behaves when being stopped (either manually or as part of stopping the array).  When stopping the array with containers running, the containers will be commanded to stop via a "docker stop" command.  This command also removes these containers from being started up automatically when the Docker service starts, thus, no auto-start.  However, we are working on this, and there are workarounds available for more advanced users (editing the go script as sketched below, creating PLG methods for auto-starting/stopping containers by name, etc).
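
For the curious, here is a minimal sketch of the go-script workaround, assuming containers named as in the examples above; it simply issues "docker start" commands (the names and placement are illustrative, so adapt them to your setup):

# illustrative additions to /boot/config/go (assumes the Docker service is already started)
docker start sabnzbd
docker start couchpotato
docker start sickbeard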

 

Useful Docker Tips and Commands

For those that wish to "play" with Docker a little more, here are some quick commands you can use:

 

docker ps

Lists all currently running containers, along with details such as each container's ID, image, status, and port mappings.

 

docker ps -a

Will list all containers present (running or not).

 

docker images

Will list all images present on the system (base images included).

 

docker inspect <container id> or <container name>

Will provide a JSON printout of interesting and relevant container information.

 

docker stop <container id> or <container name>

Will stop the container.
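
Putting a few of these together, a quick illustrative session might look like this (the container name assumes the SABnzbd example from earlier in this guide):

docker ps
docker stop sabnzbd
docker start sabnzbd
docker logs sabnzbd

The "docker start" command restarts an existing (stopped) container without re-running "docker run", and "docker logs" prints the container's console output, which is handy for troubleshooting.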

 

There are many other commands and ways to use Docker that we haven't even begun to experiment with.  We need you, our community, to participate in helping us test it out and to give us suggestions for uses of those features on your unRAID system!

 


Thanks for the information.

 

How do docker applications write to and read from cache and array drives? What is the performance hit, if any?

 

The base image Ubuntu means docker apps are running under that VM? What is the overhead for running two Docker apps which both have high CPU usage -- e.g. SABnzbd and Transmission? Do they share the VM OS or are they running an OS each?

 

Thanks.


Great questions and here are your answers:

 

Docker manages what are called "Linux Containers" (LXCs), not "Virtual Machines" (VMs).  While VMs emulate an entire machine (hardware), LXCs only contain a virtualized OS (software).  As such, there are no hardware-specific requirements for the use of Linux Containers.

 

The base image of Ubuntu means that multiple containers pointing to the same base image for their virtual OS instance will reference the contents of that image in a read-only fashion, so those contents don't need to be copied into each container as part of the container's build process.  This uses the Copy On Write technology that's a part of Btrfs.  However, each container is still responsible for its own operating system environment.  If Container A makes a change to a file that is part of the base image, instead of actually changing the block-level data on the base image, we simply store a delta that records the changes between the base image and Container A.  Container B can do the same thing.  This improves both the density and performance of Linux containers, significantly reducing the use of disk capacity as well as increasing the speed at which Docker builds and runs containers based on common base images.
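
Incidentally, you can see this per-container delta for yourself: docker diff lists the files a container has added (A), changed (C), or deleted (D) relative to its image (the container name below assumes the SABnzbd example from the guide):

docker diff sabnzbd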

 

With respect to multiple high-CPU-usage applications, I actually think SAB and Transmission are pretty low on the totem pole.  Plex is the real culprit when you add transcoding to the equation.  All of that said, there is actually probably less CPU overhead supporting these concurrent applications in an LXC than in a VM, thanks to the substantial reduction in layers of emulation required to do so (no hardware emulation).

 

In all our testing thus far, this stuff has just worked really well, but we know that as folks test more and more complex / interesting combinations of hardware, containers, and full VMs, we'll figure out more about how to best control and optimize it all.


jonp - Are there recommendations on the vcpus and memory that should be available for dom0/docker for this?  I have 8 cores and 16GB of RAM, and have 6 cores/12GB of RAM dedicated to my ArchVM.  I presumably want to shift this somewhat to accommodate Docker, but want to make sure I understand how vcpu/memory is used by Docker first.

 

 


That will entirely depend on what you plan on doing through docker.


Which makes sense, but I guess I am trying to understand: does Docker just use dom0 resources, or does it have its own?  For testing I will likely go 50/50 on both vcpu and memory.  I plan on testing SAB, SB, and Plex for now.


Thanks. I may be dense, but I'm not sure you answered my questions?  Maybe I didn't ask them in the right way ;D

 

I'm not asking about how a docker app writes to the OS, I'm asking how a docker app writes its own data to unRAID drives.

 

If you look at my sig, you'll see what hardware I run.  The HP Microserver is pretty popular for NAS usage.  It's not underpowered compared to something like a typical mid-range Synology box, but it's also not an 8-core beast.

 

When I am running nzbget and it is simultaneously writing at 20MB/sec to the cache drive and unraring to an array drive, the (dual core) CPU is often pegged.

 

When doing a par2 repair, it will definitely be pegged and when doing a par2 verify, it will be reading the disk as fast as possible.

 

So, in this usage case, I need all the CPU and disk speed available.  I use nzbget rather than SABnzbd because SABnzbd is too slow when you have a 170Mbps internet connection and are using 30 NNTP connections.

 

tl;dr -- back to the original questions:

 

How do docker applications write to and read from cache and array drives? What is the performance hit, if any?

 

What is the overhead for running two Docker apps which both have high CPU usage? Do they share the VM OS or are they running an OS each?

 

Cheers again,

 

Neil.


jonp - Are there recommendations on the vcpus and memory that should be available for dom0/docker for this?  I have 8 cores and 16GB of RAM, and have 6 cores/12GB of RAM dedicated to my ArchVM.  I presumably want to shift this somewhat to accommodate Docker, but want to make sure I understand how vcpu/memory is used by Docker first.

 

...I guess I am trying to understand: does Docker just use dom0 resources, or does it have its own?  For testing I will likely go 50/50 on both vcpu and memory.  I plan on testing SAB, SB, and Plex for now.

 

We have more experimenting to do on this.  Our lab servers do not have any vcpus pinned to Dom0 (nor to Docker).  That said, we have seen posts in our forums from folks that suggest this.  I can also say that our current lab environment doesn't have any memory pinned to Dom0 either, although I can see this potentially changing down the road.  This is on our to-do list for further investigation.


Thanks Jonp. I will do the 50/50 I suggested and see how things progress. :)

 


Excellent guide.  We should definitely port it to the wiki so it can be maintained by the community.

 

Can I also strongly suggest the docker files are changed to exclude:

 

"RUN apt-get update -q".

 

Best Docker practice calls for OS updates to be done only on the OS base layer (or your own OS fork/layer).  This ensures that everyone has the same install, rather than different installs based on when apt-get happens to be run.  It also means you don't have to clean up all the caches it produces anywhere else in the stack.
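
To make that concrete, here is a minimal sketch of the pattern being suggested, with hypothetical image names: run the update once in your own base layer, then have each application Dockerfile build FROM that layer without its own update step.

# hypothetical base-layer Dockerfile: OS updates happen here, once
FROM ubuntu:14.04
RUN apt-get update -q && apt-get -y upgrade

# hypothetical application Dockerfile: no update line, just the install
FROM youruser/yourbase
RUN apt-get install -y sabnzbdplus

Because the package lists baked into the base layer are shared, every application image built FROM it sees the same package versions.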

 

Ideally we should change this soon to reduce the impact.

 

nice work

I'm not asking about how a docker app writes to the OS, I'm asking how a docker app writes its own data to unRAID drives.

 

I think maybe you're just getting a little confused, but that's ok because this Docker/LXC stuff comes with a learning/understanding curve ;-). 

 

How do docker applications write to and read from cache and array drives? What is the performance hit, if any?

 

I think the first question is hard to answer because I don't know what you mean by "how."  They read and write from disks just like anything else would.  The second question regarding the performance hit I can only assume is being asked as a comparison against your current setup which is only using plugins and packages written for Slackware.

 

Most applications that work with Slackware were ported to it by a community member and are not typically maintained by the application owner.  There are numerous posts in this forum alone about how horrible application management is on Slackware.  As such, there may be issues with how these Slackware-native applications perform on your current system, making them consume more resources or perform worse than their equivalents written for other operating systems.  When you switch to Docker and a more common, supported base image for your applications, you could actually see performance increase as a result (along with other benefits, including the ability to use a package manager).  It all really just depends on your unique scenario, and there is no simple answer I can give you.

 

What is the overhead for running two Docker apps which both have high CPU usage? Do they share the VM OS or are they running an OS each?

 

I think the real question here is what is the overhead of Docker and LXCs compared to using native applications without virtualization.  I think the answer can't get much more detailed than to say "minimal" and that for almost any instance I can think of, any overhead it causes as a process should be worth the efficiencies gained by running the app in a more mainstream Linux OS.


Excellent guide.  We should definitely port it to the wiki so it can be maintained by the community.

 

Can I also strongly suggest the docker files are changed to exclude:

 

"RUN apt-get update -q".

 

We were on the fence on this...  The nice thing about Dockerfiles?  We can update these just via Github!  Eric is actually working on refining these a bit further for us.  These Dockerfiles we put up are just examples to help folks get started.



Yeah, the problem is doing this is so prevalent because so many Docker devs are Linux people and that's "just what you do before you install something".

 

I don't think you need your own OS base-image fork yet because you're not modifying the base OS much, so I suggest you just drop the update line for now.

 

We can hammer out the details later if need be, but this is the safest option.



I understand what you're saying (don't update in a container, because each one will get its own version of updates), but since I haven't installed Docker yet and don't have a full understanding yet of how it works, I'd like to ask: how (and where) do you do the updates so that it only updates the base image?  Or, where is the base image stored?



Oh man, this could invoke a VERY long response, but I just don't have the time to write it right now.  There are a lot of ways to do this and I'm sure no shortage of opinions on which is the right method.



I am not as thick as I may appear.  ;)

 

VMs can't write directly to unRAID arrays, right? They have to write to unRAID via a network share? So you're saying Docker apps (which use a base VM) write directly to the array, with no overhead?

 

To be honest, I'll probably try this out on a spare machine so I can run my own tests, but I am surprised you don't appear to know how the Docker thingies work.  Or maybe I'm not making myself clear.  ;D

 

Anyhoo, interesting times in the unRAID world. I'm still not convinced I'll want to move to any kind of VM setup yet, but it's fun to try this stuff out.

 

Cheers,

 

Neil.

VMs can't write directly to unRAID arrays, right? They have to write to unRAID via a network share? So you're saying Docker apps (which use a base VM) write directly to the array, with no overhead?
I'll try to restate the question, since it hasn't been answered yet, and I want to know as well.

 

Does a docker app have the entire native unraid /mnt/* tree directly available to read and write?

 

As a hypothetical example, when I config SAB, I point the downloads to /mnt/disk1/downloads. Where would I point a SAB docker container to accomplish the same thing?



If you want it to have that level of access, you can, yes.  It is native performance.
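
To make that concrete with your SAB example, a sketch based on the run command from the guide (the other paths and ports are still placeholders) might be:

docker run -d -h hostname --name="sabnzbd" -v /mnt/path/to/appdata/sabnzbd:/config -v /mnt/disk1/downloads:/data -p hostport:8080 -p hostport:9090 eschultz/docker-sabnzbd

Inside SAB's settings you would then point downloads at /data, which passes straight through to /mnt/disk1/downloads on the host.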

 

I am not as thick as I may appear.  ;)

Previous comment wasn't a dig at you.  I've been doing this for 10 years and the concepts with LXC and Docker are still mind-boggling at times, but the results are powerful.

VMs can't write directly to unRAID arrays, right? They have to write to unRAID via a network share? So you're saying Docker apps (which use a base VM) write directly to the array, with no overhead?

 

There are two new features in this release that deal with disk IO performance and virtualization.  The first is relevant to LXC and the second is relevant to VMs.  Disk IO performance seems to be native but, to be fair, we haven't measured it in tests.  There are no special drivers or anything like that required.  I suggest trying it out for yourself to feel it.  The other enhancement is mentioned in here as VirtFS.  This also allows direct file-system-level access between the Dom0 and DomU with KVM.  This MAY work with Xen, but we have not researched/tested this enough yet.

 

To be honest, I'll probably try this out on a spare machine so I can run my own tests, but I am surprised you don't appear to know how the Docker thingies work.  Or maybe I'm not making myself clear.  ;D

 

Very familiar with Docker, but the question just wasn't very clear.  "How does something write to the array" is a very broad question, so I needed to narrow down your context.  Comes with the penalty of having lots of perspectives on the information in question ;-).

 

Anyhoo, interesting times in the unRAID world. I'm still not convinced I'll want to move to any kind of VM setup yet, but it's fun to try this stuff out.

 

Cheers,

 

Neil.

 

Thanks Neil!  Let us know what you think after trying a few "docker run" commands to get some instances spun up.  Definitely looking for feedback!

 


Thanks for the guide Jon.

 

I've been using an ubuntu 14.04 VM with Docker as a playground and before this thread I thought I understood Docker/containers.................but now you've dropped base images, apt-get -q, butterFS :o ...........and native disk access

 

/deletes fstab and storms out


So, I'll have to play with this this weekend (or possibly a little later if I get too busy).  I think I understand Docker in a basic way (it's more or less OpenVZ, but more blessed by the mainline kernel), but I don't understand a lot of the ins and outs, like how would one expose /mnt/user/TV/ to a docker container?

 

Ah.  Read a little more carefully.  -v


Any specific reason to use Ubuntu instead of CentOS or Debian, other than personal preference?

 

Was that aimed at me? I wanted to try Ionix's BUUX script :D

 

If it was aimed at JonP then I *guess* it's because most of the containers on hub.docker.com use Ubuntu as a base.  If I've understood Docker correctly, if you download a container built on another flavour of Linux (e.g. Arch or CentOS), it will fetch all the necessary files to make it run (which could end up being a full OS image).

 

Having been introduced to Arch via Ironic's ArchVM image, I prefer the rolling-release model over LTS Ubuntu, but I'm happy to run Ubuntu for the sake of compatibility and efficiency.

 

*please correct the above if any/all of it is nonsense*


We do not prevent the use of alternative base images at all.  Go to the Docker registry.  Change the run commands I've posted.  The last part of the command is the author/image.

 
