Docker container developer best practice guidelines for unRAID


NAS


I think it would be good to implement a more compatible umask setting in dockers that create files on unRAID. The default umask in unRAID is 0000, which results in files with wide-open permissions (directories: 0777 or drwxrwxrwx; files: 0666 or -rw-rw-rw-). In phusion/baseimage the default umask is 0022, which results in files that are only writable by the owning user (directories: 0755 or drwxr-xr-x; files: 0644 or -rw-r--r--). These more restrictive permissions can cause problems when user share security is enabled. Many of the applications run in these unRAID dockers have internal settings to control permissions, but it would be nice if we did not have to rely on all those separate internal settings.

 

A few possible methods to implement this:

- Set the umask for each application. For example, in phusion/baseimage, a "umask 0000" command could be added to the run script that starts the application.

- Set the default umask for the user nobody. I don't know how to do this, but it seems like a good way to go if one were to create an unRAID baseimage.

- Set the default umask for all users. In Ubuntu 14.04 this is done in /etc/login.defs. This seems heavy-handed; it doesn't seem like a good idea to have every file created within the container get wide-open permissions.
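To make the difference concrete, here is a quick shell session (temp paths, purely illustrative) showing what each umask produces:

```shell
# Compare the file/directory permissions produced by the two umask defaults.
tmp=$(mktemp -d)

umask 0022                      # phusion/baseimage default
touch "$tmp/restrictive.txt"    # file comes out 0644 (-rw-r--r--)
mkdir "$tmp/restrictive.dir"    # directory comes out 0755 (drwxr-xr-x)

umask 0000                      # unRAID default
touch "$tmp/open.txt"           # file comes out 0666 (-rw-rw-rw-)
mkdir "$tmp/open.dir"           # directory comes out 0777 (drwxrwxrwx)

ls -ld "$tmp"/*                 # show the resulting modes side by side
```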

 

I wasn't sure how to answer this, and then I realised why: I don't think this is a best practice recommendation. I think it is a feature request, and perhaps even a bug report.

 

Let me raise it as such. Thanks

 

Also, I am thinking that a new best practice idea would be to include some sort of backup-on-first-run script: if the destination config directory already exists, back it up in some way first.

 

The devil will be in the details, but my fear is that some app upgrade breaks an existing config, or some PEBKAC mixes two different daemons' configs into the one dir, etc.
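A minimal sketch of such a first-run backup, written as a shell function; the marker file name, archive naming, and the /config call site are all assumptions for illustration, not anything an existing container ships:

```shell
#!/bin/sh
# Hypothetical first-run guard: if the mapped config directory already
# contains files, archive them before the application can touch anything.
backup_existing_config() {
    config_dir="$1"
    marker="$config_dir/.container_initialised"

    if [ ! -f "$marker" ] && [ -n "$(ls -A "$config_dir" 2>/dev/null)" ]; then
        backup="$config_dir/config-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
        # Exclude earlier backups so they don't get nested inside new ones
        tar -czf "$backup" --exclude='config-backup-*.tar.gz' -C "$config_dir" .
    fi
    touch "$marker"
}

# In a real container start script this would run before launching the
# daemon, e.g.: backup_existing_config /config
```

The marker file means only the first start of a freshly pulled image triggers the backup, so a pre-existing config from an older install is preserved once rather than re-archived on every restart.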


Thanks NAS.  I'm curious why you concluded it was a feature request or bug report. 

 

The permissions of files created from inside a docker container are controlled by settings inside that docker container.  If the container creates a file on a volume mapped from unRAID, the umask inside the container sets the permissions, not the unRAID umask.  That is why I thought this should be included in the best practices.

 

I've been playing around a bit and now have a specific example.  Running a needo/couchpotato docker with an empty directory mapped to the /config directory results in these new files:

 

-rw-r--r-- 1 nobody users 8822 Jul 31 13:50 config.ini
drwxr-xr-x 1 nobody users   62 Jul 31 13:50 data/

 

These files are not writable by a user setup in the unRAID GUI.

 

I cloned the needo/couchpotato git repo and added a "umask 000" line to the couchpotato.sh script. The results of running that image with another empty config directory:

 

-rw-rw-rw- 1 nobody users 8822 Jul 31 13:35 config.ini
drwxrwxrwx 1 nobody users   62 Jul 31 13:35 data/

 

These files have the same permissions as those that have been through the New Permissions utility.
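For reference, the whole fix is one line near the top of the container's run script; everything launched afterwards inherits it. The launch command below is a placeholder modelled on the couchpotato containers, not needo's exact script:

```shell
#!/bin/sh
# phusion/baseimage-style run script with the umask fix applied.
umask 000   # new files: 0666, new dirs: 0777, matching unRAID's New Permissions

# Placeholder launch line (binary and paths are illustrative); guarded so
# this sketch is harmless to run outside a phusion container.
if [ -x /sbin/setuser ]; then
    exec /sbin/setuser nobody python /opt/couchpotato/CouchPotato.py \
        --config_file=/config/config.ini --data_dir=/config/data
fi
```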


I incorrectly assumed that the docker daemon on the unRAID side set these permissions. Clearly the container is responsible (although could it also be set at the daemon side as well?). I will add it to the best practice, but I think we might want to go further than this and have it as a default setting on all dockers.

 

I wonder if each docker should contain a standard unraid.sh for this and other things.


 

I've put a umask 000 in some of my run scripts, and it worked fine.


Updated the list of recommendations (including umask, thanks Freddie) and deprecated a couple of less important ones that aren't working out in real life.

 

If anyone wants anything reworded let me know and I will fix (until the wiki is repaired and it can be migrated).

 

I want to raise that most of the dockers people are using don't have license files of any sort. This worries me, since even though the spirit allows it, the letter doesn't.

 

For example, it would be illegal for Limetech to adopt needo's dockers, since they are copyrighted for his sole use only.

 

I think we need to be getting more serious about this now just so it doesn't become a problem later.

 

Caveat: the whole Docker repo seems to be ignoring this too. How can this be?

 

 


not wanting to piss anybody off ... especially not NAS or Needo or GFJardim

 

but was just wondering when we are going to upgrade the baseimage?

 

0.9.13 was released at the end of August, and although nothing in the release notes makes me want to jump to a new image...

 

see -> http://blog.phusion.nl/2014/08/22/baseimage-docker-0-9-13-released/

 

I was just wanting to know: what would our criteria be for changing to a newer image?


 

Well, this is a very good question. We have to remember that Phusion is not a Linux distro; it's a "dockerized" version of Ubuntu. So when we do an "apt-get upgrade", we get all up-to-date packages.

 

Phusion itself is just a set of tools. Until those tools get a major update, I see no point in updating.

 

Maybe we can set a maximum time between image updates. What do you think?

Link to post

I have to agree. If there are no big improvements then really only two things matter:

 

1. Is there a slick way for users to know about the update and get it?

2. Can we make sure we do as many (ideally all) main containers in a single update window, i.e. can we organise all the devs to accept and do this?

 

But we have to keep in mind that, with the appliance model LT is moving to, the before-and-after user experience of a base OS upgrade will be all but identical.

 


Is using phusion-baseimage part of the unRAID best practice guides? I know that the people from Docker are not so pleased with the fact that Phusion put cron and sshd in their baseimage. On top of that, Docker released Language Stacks today: images focused on running applications written in a specific language.

 

As these will be part of the best practice guides by Docker, wouldn't it make sense to base our images on those?


After some more playing around, I don't think the language-specific containers from Docker are the way to go. They are more focused on people who quickly want to distribute their own project.

 

Wouldn't it make sense if we took a base image, put in the unRAID fixes that are now repeated in each Dockerfile (changing uid/gid for user nobody), and published that on the Docker registry? That way everyone could depend on just the unRAID baseimage, and its maintainer could decide when to upgrade to a newer Phusion version (for example), so all stay in sync.
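A sketch of what such a shared baseimage's Dockerfile could look like, folding in the fixes that today get repeated in every container. The image name/tag and the exact fix list are hypothetical; the uid/gid values are unRAID's standard nobody:users of 99:100:

```dockerfile
# Hypothetical unraid-baseimage -- not a published image.
FROM phusion/baseimage:0.9.15

# Match unRAID's uid/gid so files on mapped volumes stay accessible
RUN usermod -u 99 nobody && \
    usermod -g 100 nobody && \
    usermod -d /home nobody && \
    chown -R nobody:users /home

# Non-interactive apt and a sane locale, as most Dockerfiles set anyway
ENV DEBIAN_FRONTEND noninteractive
ENV LC_ALL C.UTF-8

# Drop the SSH service phusion enables by default
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
```

Application containers would then start from `FROM unraid-baseimage` and only add their own software.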


I'm not convinced about using the Phusion base image. I've not used Debian, but am quite familiar with Ubuntu. Even the Ubuntu server distribution is hardly what I'd call a slick, reduced-footprint server OS. The fact that Phusion has taken a Debian build, and then had to strip bits out to make it work with Docker, worries me.

 

I'm experimenting with using an Arch Linux base image for docker containers; I'm getting the impression that Arch Linux results in builds which are at least 100MB smaller than the Phusion equivalent. The other benefit of using Arch Linux is that its repositories are kept much more up to date than the Debian ones.

 

Do I make sense, or am I digging a pit for myself?


I originally pushed for Debian testing because IMHO it's small and reliable to a fault, but the community naturally evolved to use Phusion.

 

I think we should be prepared to ask these questions, but realistically the people that actually make the containers get the final shout.


 

This is not a surprise, since all Phusion images already ship with Python 3 (required for their process management tool). I've successfully built some containers with Supervisor instead, and that went well.

 

This may shock some of you, but there are a few users here that have very small SSDs to run apps on, so a minimal footprint is a requirement. IMHO, this is better achieved with base image standardization.

 

Arch has the most active community right now, and this is good. Phusion is not bad though; it relies on packages maintained by an old and trustworthy community.

 

I have no preferences; if you all decide to migrate to another base image, I'll gladly migrate all my containers.


.... a minimal footprint is a requirement. IMHO, this is better achieved with base image standardization.

...

 

I think this sums it up quite nicely. A single docker that uses a different base image will likely consume more disk space and bandwidth than all the refinement possible on any reasonable base OS would ever save. This obviously excludes things like TCL etc., but I still think that, for the sake of a one-off hit of a couple of gigs of disk space, using one of the big five distros is worth it.

 

We already see this now, with almost all containers here being based on Phusion, with the odd one like madsonic based on Arch.

 

We should never stifle devs with rules, but there's an elegance to sticking with one base OS.


I think it is time to unsticky this and let it naturally expire. There are simply too many variables and dev preferences to standardise at this point.

 

Any objections?

 

I like the idea of trying to standardize the Wild Wild West, but at the same time, as you've said, I don't want to stop anyone from making Dockers. I'd be curious if we could get some input from jonp or limetech or another red as to their thoughts.


Maybe I'll hang fire with this until the Limetech base docker OS is out. Perhaps that will bring this all to a head?

 

Maybe... I just think we have a great opportunity to make it easy for us to have a uniform approach to dockers, so users don't need to think about how each author creates his/hers.


Know that we still have every intention of creating and maintaining a base image as a standard for unRAID; we just haven't seen it as a critical or important development item yet, because it isn't so much necessary as it is desired. The truth is that the containers now, which are mostly based on Phusion, work just fine. There are probably a few things we would change to lighten the weight of the base image, but again, that's not something we need to do right now.

 

I'd argue we don't even need to do this before we release 6.0 final.  Right now our efforts are laser focused on some key tasks:

 

Final feature additions

GUI modifications

Bug squashing

Website updates

Documentation

 

We will circle back to the base image for docker and standardization after those tasks are complete.


 

What are your thoughts on setting up a list of required guidelines for community-created dockers?


This is what I do on Phusion:

 

1) Enforce UTF-8 as code page and set Debian system to non-interactive installation

# Set correct environment variables
ENV DEBIAN_FRONTEND noninteractive
ENV HOME /root
ENV LC_ALL C.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8

 

2) Fix the nobody user and users group UID and GID:

# Configure user nobody to match unRAID's settings
usermod -u 99 nobody
usermod -g 100 nobody
usermod -d /home nobody
chown -R nobody:users /home

 

3) Disable SSH

# Disable SSH
rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

 

4) Do a cleanup on APT:

apt-get clean -y
rm -rf /var/lib/apt/lists/*

 

5) Put steps 2, 3, and 4 above, plus dependency installation, into a single install.sh script, so we can shrink the number of image layers, as you can see here: https://github.com/gfjardim/docker-containers/blob/master/syncthing/install.sh

 

6) Download GitHub packages with ADD, which let me avoid installing wget (note that ADD only unpacks local tar archives automatically; files fetched from a URL are downloaded as-is):

ADD https://github.com/RuudBurger/CouchPotatoServer/archive/master.tar.gz /opt/couchpotato

 

This allowed me to save a few megabytes on each container (e.g. up to 170MB on CrashPlan); this translates into faster downloads and less occupied space.

 

What do you think?

 


 

I saw an earlier article and updated mine with && \ statements.  Do you think there is a benefit to doing scripts over and'd commands?


smdion, the code is easier to maintain and to adapt; the penalty is an additional layer. So yes, I think this approach has its benefits.
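To make the trade-off concrete, the two styles look roughly like this (the package and script names are only examples):

```dockerfile
# (a) Chained with && \ : one layer, but long RUNs get hard to read and
#     any edit invalidates the whole cached layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends wget && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/*

# (b) External install.sh: easier to maintain and diff, at the cost of the
#     extra layer created by copying the script in
COPY install.sh /tmp/install.sh
RUN chmod +x /tmp/install.sh && /tmp/install.sh && rm /tmp/install.sh
```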

 

Phusion 0.9.15 is another thing I'm considering using; it's updated with the latest package versions, and it's 100MB smaller.
