unRAID 6 Beta 6: Docker Quick-Start Guide



Guys, are you getting different results with the same container?

 

Sent from my Nexus 5 using Tapatalk

 

It's certainly possible, because of the way git is used blindly against HEAD.

 

I seem to be saying this on almost every Docker thread, but you should not blindly git clone in a Dockerfile you are distributing, as it means people will end up with different containers.

 

From what I can tell, the git clone only gets executed when the image is created. So once the image is built on Docker Hub, it should be consistent until the developer kicks off another build on the Hub, correct?

Link to comment

When Docker creates its folder on my cache drive, it does not set it to a cache-only share, which meant that when the mover script ran it copied the data to my array. Would this be expected behaviour?

 

I am seeing the same thing; however, there isn't any Docker data on any of my data drives, only on my cache, thankfully.

Link to comment

Even if it's correct that the git clone only runs when the image is built, someone who runs that image weeks after it was created could still end up with a different version than the creator intended. I think that's the concern with non-specific references in containers.

Link to comment

When you execute the docker run command, you are downloading the image(s) from Docker Hub. The docker build that generates the image is executed on the Hub itself. For example, my CouchPotato image was last built by the Hub on 2014-06-20 14:40:50; that means you are simply downloading and executing the image that was built at that moment, and it does not contain the latest git updates to CouchPotato that were made today.

 

Now, if you were to download my Dockerfile from the Hub and do a docker build on it, the git clone would be executed again and you would get a different image/container than the one I am providing as of 2014-06-20 14:40:50.
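
To make the difference concrete, here is a rough sketch (the image name is just a placeholder, not my actual repository):

# Pulls and runs the image that was already built on Docker Hub; no git
# clone happens on your machine, you get whatever was baked in at build time.
docker run -d --name couchpotato someuser/couchpotato

# Rebuilds the image locally from the Dockerfile; the git clone inside it
# runs again, so you get whatever HEAD happens to be at that moment.
docker build -t my-couchpotato .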

Link to comment

Yup, my concern is that I don't think people are restricting themselves to the operations a consumer of images should be using, primarily due to a lack of understanding that leads to a "scattergun Google vs. the forum" approach, e.g. to try and fix permission bugs and so on.

 

That aside, my primary concern with Docker is that a user doesn't really know what they will get. There is no visible difference between the command that runs an image now and the command that runs an updated version of it later. Throw in the fact that we have OS deviation and duplication of applications and it can get worse: a user runs the Debian-based CouchPotato and for some reason decides they want the Ubuntu one later. They both exist and both profess to offer the same application, but because they each cloned from git HEAD at a different time, they are almost certain to be completely different versions. Since the actual config and database are typically not "protected or versioned" within Docker, you have a risk.

 

Throw in a healthy lump of rapidly changing dockerfiles in these early days and you end up with potential version confusion where ideally there should be none.

 

Hence why I suggest that, even within a Dockerfile, it is healthy to clone against a specific point in time or tag. That gives the conscientious user an easy way to know which specific build is included, and gives other devs who just want to use a different base OS the chance to clone the matching revision so there are no config/database issues.
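
As a rough illustration of what I mean (the checkout target is a placeholder; substitute whatever release tag or commit you actually want to ship):

# Instead of cloning whatever HEAD happens to be, pin the clone to a known
# point so every build of this Dockerfile produces the same application:
RUN git clone https://github.com/midgetspy/Sick-Beard/ /opt/sickbeard/ && \
    cd /opt/sickbeard && \
    git checkout <tag-or-commit-id>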

 

Edit: I fully understand that from a dev point of view this might be a pain in the ass, so I am completely open to other ideas.

 

Link to comment

I've been tweaking Docker's rc.d script so it can now start containers by itself. The web page now lets you specify which containers you wish to be stopped and started with the disk array. Values are comma-separated.

 

It's a simple PLG, so put it into the /boot/plugins folder and you're done.

 

https://dl.dropboxusercontent.com/u/18726846/Docker-startup.plg

 

Hope you enjoy it.
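
A minimal sketch of the kind of startup loop described above (the variable and container names are placeholders):

#!/bin/bash
# Containers to bring up with the array, as entered on the web page
AUTOSTART="couchpotato,sickbeard,plex"

# Split the comma-separated list and start each container by name
for name in ${AUTOSTART//,/ }; do
    docker start "$name"
done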

 

For some reason your plugin is not working with my Docker setup. I tried to start it manually; nothing happens.

 

 

Link to comment

Docker sucks, as we are using it now.

There is no way to do a git pull inside a Docker container, and upgrading an application like SickBeard inside the container doesn't work. I tried a lot of things, but either the program doesn't shut down, or it shuts down and takes the container with it.

I tried installing an SSH server, but then the container crashes within two minutes of starting.

If for every git pull you need to make a new image, then that defeats the purpose of git in my eyes.

I spent the whole weekend trying to get this working the way we are used to with the plugins, but I am seriously disappointed in Docker and BTRFS.

Link to comment

Maybe you should wait a while before diving into beta 6 and Docker. It sounds like you don't have the patience and tolerance needed when using beta software. Or at least wait until others work through the major kinks or write basic FAQs.

Link to comment

I disagree that Docker sucks! It is certainly not perfect now and can absolutely stand to be improved, but to say it sucks is way off base, IMO.

 

It works great for what it's supposed to do: let you run a program isolated from the unRAID OS.

 

Sure, it can be improved, but I'm VERY happy with docker at this point.

 

Constructive ideas for improvement and/or specific complaints will do FAR more for this community than public displays of personal frustration, which might scare away people who don't take the time to read all the posts.

Link to comment

Well, tell me two things:

Can you upgrade SickRage without rebuilding the image?
Did you have a look in your btrfs/subvolumes directory?

 

Link to comment

I was able to upgrade SickBeard without a problem. I just now finished installing SickRage but have not started it yet. Even if I cannot upgrade it from within the program, it's not that big of a deal to me; I know this will improve with time. I also know that running a program that's a few commits behind doesn't "suck". It's not the perfect situation, but the fact that I can install and run the program with almost no effort is, in fact, great, IMO.

 

As for looking at the subvolumes directory: yes, I looked, and there are LOTS of subdirectories. Windows won't let me open the btrfs directory and won't give me a size for the whole thing, so I don't know how much room this all takes on my cache drive. However, it can't be more than several GB, and my cache is 500GB, so I'm just not worried about how much space this takes or how many subdirectories it creates. Docker makes my life easier, and I expect that it will only get better with time.
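
If you want to check the size from the unRAID console instead of over the network, something like this should work (assuming the Docker data lives under /mnt/cache/docker; adjust the path to wherever you pointed Docker):

# Total space used by all image layers/subvolumes on the cache drive.
# Note: shared btrfs extents are counted more than once, so this over-estimates.
du -sh /mnt/cache/docker/btrfs/subvolumes

# Per-layer breakdown, largest last
du -sh /mnt/cache/docker/btrfs/subvolumes/* | sort -h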

Link to comment

 

Let me explain how you can upgrade without rebuilding the image, using this modified SickBeard Dockerfile:

 

FROM ubuntu:14.04
MAINTAINER Example <[email protected]>

# Map the nobody user to unRAID's default uid/gid (99/100)
RUN usermod -u 99 nobody && \
    usermod -g 100 nobody

RUN apt-get update -q && \
    apt-get install -qy --force-yes python python-cheetah ca-certificates git

# Clone SickBeard once at build time; start.sh pulls updates at run time
RUN git clone https://github.com/midgetspy/Sick-Beard/ /opt/sickbeard/ && \
    chown -R nobody:users /opt/sickbeard

VOLUME /config
VOLUME /data

EXPOSE 8081

# Copy the startup script and make sure it is executable
ADD ./start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]

 

Now, create a start.sh file alongside Dockerfile:

 

#!/bin/bash

# Main command
CMD="python /opt/sickbeard/SickBeard.py --datadir=/config"

# Update container
apt-get update -qq && apt-get upgrade -y

# Update git clone with pull
cd /opt/sickbeard && git pull

# Running the main command as nobody
su -c "$CMD" -s /bin/sh nobody

 

As you can see, the application inside the container gets updated every time the container starts.
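
For completeness, building and running it would look roughly like this (the image name and host paths are examples; map them to wherever you keep your appdata and media):

# Build the image from the directory containing Dockerfile and start.sh
docker build -t sickbeard-selfupdating .

# Run it; every start re-runs start.sh, which pulls the latest SickBeard code
docker run -d --name sickbeard \
    -p 8081:8081 \
    -v /mnt/cache/appdata/sickbeard:/config \
    -v /mnt/user:/data \
    sickbeard-selfupdating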

Link to comment

I am in the planning stages for b6 with Docker. I have a question about the Plex Dockerfiles; they get Plex like this:

 

RUN echo "deb http://shell.ninthgate.se/packages/debian squeeze main" > /etc/apt/sources.list.d/plexmediaserver.list

RUN apt-get install -qy --force-yes plexmediaserver

 

I'm guessing this will automatically install the latest version of Plex? That is a bit too cutting-edge for me. Is there a way to specify the exact version of Plex that we want to run?

Link to comment

That will install the latest public version of Plex, which is a couple of releases old.

Link to comment

Hmm, I don't want that either. :) I want to be able to specify exactly which version of Plex I am going to get. Why do the Dockerfiles all use that shell.ninthgate.se repository instead of doing something like this:
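
Roughly this, I mean: a sketch that pins one specific .deb, using the same download URL as in the example below (assuming that direct link stays valid):

RUN apt-get update -q && apt-get install -qy --force-yes wget

# Download one specific, known Plex release and install exactly that version
RUN wget -O /tmp/plexmediaserver.deb http://downloads.plexapp.com/plex-media-server/0.9.9.10.458-008ea34/plexmediaserver_0.9.9.10.458-008ea34_amd64.deb

# If dpkg complains about missing dependencies, an "apt-get -fy install" afterwards pulls them in
RUN dpkg -i /tmp/plexmediaserver.deb && rm /tmp/plexmediaserver.deb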

 

 

If that would work, then perhaps we could pass the download URL via an environment variable:

docker <usual stuff> -e src=http://downloads.plexapp.com/plex-media-server/0.9.9.10.458-008ea34/plexmediaserver_0.9.9.10.458-008ea34_amd64.deb
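
If the image's start script did the download instead of the Dockerfile, it could look roughly like this (only a sketch; the src name matches the flag above, and the exact launch command will depend on the base image and package):

#!/bin/bash
# start.sh inside the image: install whatever .deb the user pointed us at,
# then hand off to Plex. $src comes from "docker run ... -e src=<url>".
if [ -n "$src" ]; then
    wget -O /tmp/plexmediaserver.deb "$src" && \
    dpkg -i /tmp/plexmediaserver.deb && \
    rm -f /tmp/plexmediaserver.deb
fi

# Start Plex in the foreground (start_pms is the wrapper shipped in the .deb)
exec /usr/sbin/start_pms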

 

Link to comment