Docker container developer best practice guidelines for unRAID



Know that we still have every intention of creating and maintaining a base image as a standard for unRAID; we just haven't seen it as a critical or important development item yet, because it isn't so much necessary as it is desired. The truth is that the containers now, which are mostly based on Phusion, work just fine. There are probably a few things we would change to lighten the weight of the base image, but again, that's not something we need to do right now.

 

I'd argue we don't even need to do this before we release 6.0 final.  Right now our efforts are laser focused on some key tasks:

 

Final feature additions

GUI modifications

Bug squashing

Website updates

Documentation

 

We will circle back to the base image for docker and standardization after those tasks are complete.

 

What are your thoughts on setting up a list of required guidelines for community-created Dockers?

Yes. That is on our to-do list as well. We have some pretty big ideas about how to do this, but all in due time.


I wonder how many of the layers we have out there on user machines are related to apt-get update and apt-get clean and divergence as time passes.

 

There has to be a slicker way to do this. Since our base OS will be ours, perhaps we should maintain the apt database there on a scheduled (weekly?) basis, and then all users of this base OS can just do an apt-get install.

 

I am not sure that idea is 100% sound, but I think it has the beginnings of something in it.

 

Also I have seen several discussions now around specifying the package version in apt (the bergknoff guide being the most recent).

 

So the idea is that you would not do "apt-get install redis-server" but rather "apt-get install redis-server=2:2.4.14-1".

 

This is good in two key ways for us. It allows peer review of what the container actually provides using only the Dockerfile, which opens the door for the GUI summarising what a container has without requiring the container to be downloaded and run first (a big shortcoming of Docker for me).

 

It also means that if someone builds the container themselves rather than pulling it from the repo, what they get would be identical.
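As a concrete sketch of the pinning idea (the redis-server version below is just the example quoted above and will long since have been superseded; `apt-cache madison <package>` lists what is actually pinnable at build time):

```dockerfile
# Sketch only: base image and pinned version are illustrative.
FROM ubuntu:14.04

# Pinning the exact version documents in the Dockerfile itself what the
# image ships, and makes independent rebuilds reproducible.
RUN apt-get update && \
    apt-get install -y redis-server=2:2.4.14-1 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```

The main pitfall is that old package versions eventually drop out of the archive, at which point the pinned build breaks until the version string is bumped.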

 

It is not without pitfalls, but this whole area is where I think our initial focus should be.

 

Note: I am going to talk only about stable builds. Users that want to run git and EDGE-type stuff are, IMHO, voiding their ability to reliably request support.


smdion, the code is easier to maintain and to adapt; the penalty is an additional layer, so yes, I think this approach has its benefits.

 

The Phusion 0.9.15 image is another thing I'm considering using; it's updated with the latest package versions, and it's 100 MB smaller.

 

Let's say I want to create my first Docker container from scratch. Should I start with Phusion 0.9.15 and use your previous instructions to make it unRAID-friendly?

 

Thanks!


smdion, the code is easier to maintain and to adapt; the penalty is an additional layer, so yes, I think this approach has its benefits.

 

The Phusion 0.9.15 image is another thing I'm considering using; it's updated with the latest package versions, and it's 100 MB smaller.

 

Looks like Needo already went to 15 as well if that sways you.


I would say it's time for everyone to go to 15, since once one dev does, there's no extra cost in anyone else doing so.

 

It might be a good time to look at layer stacking / command combining and some peer reviewing to help each other out.
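For anyone unfamiliar with the command-combining idea: each Dockerfile instruction commits its own layer, so cleanup in a later RUN cannot reclaim space from an earlier one. A minimal illustration (package name is just an example):

```dockerfile
# Three separate RUNs = three layers; the clean step cannot shrink
# the layer already committed by the install step:
#   RUN apt-get update
#   RUN apt-get install -y redis-server
#   RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Combined into a single RUN, install and cleanup happen in one layer,
# so the image genuinely gets smaller:
RUN apt-get update && \
    apt-get install -y redis-server && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```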


I would say it's time for everyone to go to 15, since once one dev does, there's no extra cost in anyone else doing so.

 

It might be a good time to look at layer stacking / command combining and some peer reviewing to help each other out.

 

I just moved all of mine to 15 and incorporated a lot of gfjardim's ENV/usermod/SSH/apt-get cleanup. I'm not 100% sold yet on moving the installation of the program out of the Dockerfile and into an install.sh, but I can see its benefits.

 

Anyone is free to look over my stuff: https://github.com/smdion/docker-containers
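For reference, the install.sh pattern under discussion looks roughly like this (base image tag and file names are illustrative):

```dockerfile
FROM phusion/baseimage:0.9.15

# The package/compile logic lives in an external script instead of in
# RUN lines. Easier to edit and reuse, but it costs an extra layer for
# the ADD, and the install steps are no longer visible to someone
# reviewing only the Dockerfile.
ADD install.sh /tmp/install.sh
RUN chmod +x /tmp/install.sh && /tmp/install.sh && rm /tmp/install.sh
```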


smdion, the code is easier to maintain and to adapt; the penalty is an additional layer, so yes, I think this approach has its benefits.

 

The Phusion 0.9.15 image is another thing I'm considering using; it's updated with the latest package versions, and it's 100 MB smaller.

 

Let's say I want to create my first Docker container from scratch. Should I start with Phusion 0.9.15 and use your previous instructions to make it unRAID-friendly?

 

Thanks!

 

Well, those are just notes from some testing I did the other day. I would consider them a good starting point.

 

smdion, the code is easier to maintain and to adapt; the penalty is an additional layer, so yes, I think this approach has its benefits.

 

The Phusion 0.9.15 image is another thing I'm considering using; it's updated with the latest package versions, and it's 100 MB smaller.

 

Looks like Needo already went to 15 as well if that sways you.

 

Well, I think it's time for a massive update. I'll start rewriting my containers with a minimized layer count and Phusion 0.9.15.

 


... Not 100% sold yet on moving the installation of the program out of the dockerfile and into an install.sh, but I can see its benefits...

 

 

Me either, TBH. In an ideal world the Dockerfile would be the start and end of the install process. Obviously the technology is nowhere near that yet, but it seems we are moving further away from that, not closer.

 

One thing occurred to me: there is absolutely no reason we couldn't just agree here and now on a Phusion fork that best fits our needs, in advance of the official unRAID one. The more stuff that everyone needs that can be pushed upstream to the base OS layer, the more efficient we become. If it doesn't work, it's not a lot of work to revert.


... Not 100% sold yet on moving the installation of the program out of the dockerfile and into an install.sh, but I can see its benefits...

 

 

Me either, TBH. In an ideal world the Dockerfile would be the start and end of the install process. Obviously the technology is nowhere near that yet, but it seems we are moving further away from that, not closer.

 

One thing occurred to me: there is absolutely no reason we couldn't just agree here and now on a Phusion fork that best fits our needs, in advance of the official unRAID one. The more stuff that everyone needs that can be pushed upstream to the base OS layer, the more efficient we become. If it doesn't work, it's not a lot of work to revert.

 

I think there is a need for the firstrun script and a way to handle multiple branches or edge cases. We are also in a highly specific use case that I think will cause these outliers. I wonder if our fight for fewer containers and smaller size is pushing us into these extra files as well?


Agreed, but my point, which is a bit OT here, is that firstrun and EDGE are not use cases specific to us. Docker needs to accept that these are super common use cases and work out a way to standardise them into the Dockerfile or some other file hosted at the repo.

 

Also I just noticed this:

 

You can update it yourself; I was sure I had designed it this way. Stop the container, replace the jar file in the config directory with the new version, and ensure the permissions are right.

 

Again, there is absolutely nothing wrong with this, other than that it's another deviation from best practice, bypassing a Docker tenet in the spirit of flexibility.
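Spelled out, the manual flow quoted above is roughly the following (the container name and paths are illustrative; on unRAID, appdata conventionally lives under /mnt/user/appdata and apps run as nobody:users):

```shell
docker stop some-app                    # illustrative container name
cp ~/new-release/app.jar /mnt/user/appdata/some-app/app.jar
chown nobody:users /mnt/user/appdata/some-app/app.jar
docker start some-app
```

Which makes the objection concrete: the running container no longer matches anything the image or Dockerfile describes.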

 

The more I think about it, the more I consider there needs to be a peer review process where Dockers are given a "certified" sticker... essentially a certified stable branch where all these best practices would be followed. If we did that, every other container could do whatever it wanted and would be considered unstable/experimental.

 

This would be front-loaded work, as most stable containers would only change over time through a refresh of apt etc., not code changes.

 

 

It is worth pointing out the extra importance of this. The last two versions of Docker have been deprecated due to security problems where someone could create a Docker image that allowed them to break out of the container. Essentially they could root your server and containers just by you running the container. Given how little visibility Docker Hub gives you, this would be an easy attack: pick something that people want, make it work, insert the break-out code. Done.


For me the whole idea of Docker is repeatable, distributed code. Once you start messing with this and having Dockers which can update themselves, or having special "edge" flags to trigger different branches of code, then it stops being Docker and becomes something else; something that IMHO cannot easily be supported and is more prone to breakage.

 

I prefer solving the following using this methodology instead:

 

1. Updating Docker images - use a shared GitHub repository that multiple "trusted" community members have admin access to, in order to maintain the Dockerfiles for new releases; this would reduce the time a user has to wait to run the latest stable version.

 

2. Unstable/stable applications - have these as separate Docker images, clearly marked as stable or unstable. Yes, this means more Docker images and thus potentially more overhead, but in the long run I believe this will reduce the time required to support these images by reducing complexity; by splitting the two you also reduce the risk of complex bash scripts causing issues for users who just want to run a simple stable version.

 

Hey, just my thoughts; please feel free to ignore :-).


For me the whole idea of Docker is repeatable, distributed code. Once you start messing with this and having Dockers which can update themselves, or having special "edge" flags to trigger different branches of code, then it stops being Docker and becomes something else; something that IMHO cannot easily be supported and is more prone to breakage.

TBH, that is not just your definition of what it should be, but THE definition of Docker.

 

1. Updating Docker images - use a shared GitHub repository that multiple "trusted" community members have admin access to, in order to maintain the Dockerfiles for new releases; this would reduce the time a user has to wait to run the latest stable version.

I like this a lot. So we could have a group hub account where unofficial but standardized containers are maintained under the guidelines we started here, and all others can be a free-for-all.

 

We don't need a bunch of crazy rules, but the main containers, all maintained by a group of people and all having the same flavour of design and base OS, could solve a lot of problems.

 

I do like the idea of a master unRAID set. Already I look at the templates and see more than one version of a containerised app, and I have no idea what differences, if any, there are, or which I should use.

This is a huge problem for docker and I would guess they are working on it in the background.


I do like the idea of a master unRAID set. Already I look at the templates and see more than one version of a containerised app, and I have no idea what differences, if any, there are, or which I should use.

 

This is mainly due to philosophical differences with the "original" container developer, and some apps are more problematic than others. Needo's NZBGet and mine differ in that I use pre-compiled deb packages to make in-app updates possible, while his container relies on the official repository version and build routines to provide the "EDGE feature". So mine is seamless for the user, but I have the job of keeping it updated with pre-compiled packages; in his case, the user must recreate the container with the correct EDGE variable set, but he has a lot less hassle maintaining it.

 

The origin of the difference is that the package maintainer of NZBGet takes several months to push an update, so either we stay far behind on an old, buggy version, or we update it somehow.

 

Bottom line: there are a lot of ways to provide a dockerized app, and every method has its advantages and disadvantages. It's natural to have more than one approach to offering an application, so it's normal to have more than one container per application.
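For context, "recreate the container with the correct EDGE variable set" just means passing an environment variable at container creation; the flag is read by the image's own startup script, not by Docker itself, so the exact name and values are whatever that image documents:

```shell
# Illustrative only: switching an EDGE-style flag means removing and
# recreating the container, since env vars are fixed at creation time.
docker rm -f nzbget
docker run -d --name nzbget -e EDGE=1 needo/nzbget
```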


I do like the idea of a master unRAID set. Already I look at the templates and see more than one version of a containerised app, and I have no idea what differences, if any, there are, or which I should use.

 

This is mainly due to philosophical differences with the "original" container developer, and some apps are more problematic than others. Needo's NZBGet and mine differ in that I use pre-compiled deb packages to make in-app updates possible, while his container relies on the official repository version and build routines to provide the "EDGE feature". So mine is seamless for the user, but I have the job of keeping it updated with pre-compiled packages; in his case, the user must recreate the container with the correct EDGE variable set, but he has a lot less hassle maintaining it.

 

The origin of the difference is that the package maintainer of NZBGet takes several months to push an update, so either we stay far behind on an old, buggy version, or we update it somehow.

 

Bottom line: there are a lot of ways to provide a dockerized app, and every method has its advantages and disadvantages. It's natural to have more than one approach to offering an application, so it's normal to have more than one container per application.

 

I agree with you. I don't agree with policing Docker applications and telling developers what is best or not. Most people building these apps build them for themselves and then decide to share. Personally I don't agree with the Phusion base image; I despise it, actually. Why is SSH included as part of the base image? SSH is no longer needed now that docker exec is available. Sure, I can agree that at one point in time it was useful for debugging containers, but that is no longer the case. Most images are on GitHub, so if people have an issue they can fork a project themselves and edit as they see fit.
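The docker exec point is easy to demonstrate; since Docker 1.3 you can run commands inside any running container with no SSH daemon in the image at all (container name is illustrative):

```shell
# Interactive shell inside a running container:
docker exec -it my-container bash

# Or a one-off command without attaching:
docker exec my-container ps aux
```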


I do like the idea of a master unRAID set. Already I look at the templates and see more than one version of a containerised app, and I have no idea what differences, if any, there are, or which I should use.

 

This is mainly due to philosophical differences with the "original" container developer, and some apps are more problematic than others. Needo's NZBGet and mine differ in that I use pre-compiled deb packages to make in-app updates possible, while his container relies on the official repository version and build routines to provide the "EDGE feature". So mine is seamless for the user, but I have the job of keeping it updated with pre-compiled packages; in his case, the user must recreate the container with the correct EDGE variable set, but he has a lot less hassle maintaining it.

 

The origin of the difference is that the package maintainer of NZBGet takes several months to push an update, so either we stay far behind on an old, buggy version, or we update it somehow.

 

Bottom line: there are a lot of ways to provide a dockerized app, and every method has its advantages and disadvantages. It's natural to have more than one approach to offering an application, so it's normal to have more than one container per application.

 

I agree with you. I don't agree with policing Docker applications and telling developers what is best or not. Most people building these apps build them for themselves and then decide to share. Personally I don't agree with the Phusion base image; I despise it, actually. Why is SSH included as part of the base image? SSH is no longer needed now that docker exec is available. Sure, I can agree that at one point in time it was useful for debugging containers, but that is no longer the case. Most images are on GitHub, so if people have an issue they can fork a project themselves and edit as they see fit.

 

I think the "trusted repo" idea is a great compromise. It will allow people to still create Dockers how they want (the wild wild west), but will also offer a more streamlined approach for users who really don't understand how it works and want all their Dockers to work the same way and always work (no worrying about something changing in Git and breaking things). I don't think we can assume all users can fork and edit a project, but the ones who can shouldn't be stopped.

 

gfjardim recommended removing SSH from Phusion, which I have done. Jonp has stated it is in Limetech's plan to release an approved Docker image, which I'm sure will not have SSH.

 

What is your preference for a base image?


If I was choosing for myself I would choose Debian, as it's small, predictable, generally not insane, and doesn't reinvent the wheel every release cycle.

 

For others, though, I would probably choose Ubuntu LTS, because that's where most of the Docker community seem to be going.

 

If we stick to a Debian variant, at least we can change later to another variant (i.e. the official one) with little reworking.

 

The key here is not to get into distro wars. We almost don't care about the distro; it's just a means to an end for the application container.

 

I also think there is a key difference between consumers of the stable line of applications and those that have any clue how to develop on Docker, or even what git is. And that is the key point, IMHO: we are talking about the stable line here. The developer line can and should be the wild west.

 

Remember, the consumers we are talking about here don't know about Docker, git, apt, Linux or anything else. They want a button that magically gives them XBMC or a torrent app, and another that updates it.


What do all you "Docker gurus" think about Ubuntu Snappy? It's supposed to be made for Docker, be secure, etc. I think it's based on Ubuntu Core, but adapted with Docker in mind.

 

Can it be the new base image in the long term?

I think it was developed to be the host for Docker, not the base image, but I might be wrong.


 

What do all you "Docker gurus" think about Ubuntu Snappy? It's supposed to be made for Docker, be secure, etc. I think it's based on Ubuntu Core, but adapted with Docker in mind.

 

Can it be the new base image in the long term?

I think it was developed to be the host for Docker, not the base image, but I might be wrong.

 

Oh, maybe you're right!


If someone can't handle unRAID they should invest in a QNAP device; it has a stupidly simple interface.

An alternative: if people don't know how to use Docker, or won't take the time to learn, they can install the unRAID 6 plugins that are readily available in the forums.

 

Also, I wanted to add that Docker is about choice, and every unRAID installation is unique.

I'm just migrating to unRAID 6, and I will have several Windows VMs on there using KVM, because I personally believe that KVM > Xen.

 

Then there are those that include SSH in their Docker images... that's a bad practice.

Docker allows you to connect to a container without the need for SSH, but some want that choice.

Using scripts to run installations inside the Docker image... another bad practice.

Some prefer these, others don't; it all comes down to preference.

I like my Docker containers to auto-update on their own; I don't have the time to baby the server and prefer automation.


What do all you "Docker gurus" think about Ubuntu Snappy? It's supposed to be made for Docker, be secure, etc. I think it's based on Ubuntu Core, but adapted with Docker in mind.

 

Can it be the new base image in the long term?

I think it was developed to be the host for Docker, not the base image, but I might be wrong.

 

Actually, Ubuntu released a new version designed specifically to be a Docker base image.

Also, the Ubuntu Docker images are less resource-hungry than some other flavours of Linux.


What do all you "Docker gurus" think about Ubuntu Snappy? It's supposed to be made for Docker, be secure, etc. I think it's based on Ubuntu Core, but adapted with Docker in mind.

 

Can it be the new base image in the long term?

I think it was developed to be the host for Docker, not the base image, but I might be wrong.

 

Actually, Ubuntu released a new version designed specifically to be a Docker base image.

Also, the Ubuntu Docker images are less resource-hungry than some other flavours of Linux.

 

Unless someone has a better idea, I see no reason to reinvent the wheel: the upstream of whatever base OS we decide on would be the official Docker Debian/Ubuntu image.

 

Since we know that LT are planning, at this point at least, to use Ubuntu LTS, I see no reason not to just use that. I personally prefer Debian proper, but it makes little odds at this point, and it makes even less sense to create work by using something else.

 

Regardless of Debian/Ubuntu, what best practices could we extract from the app containers and push into our new base OS? The more we push up, the more efficient we get.
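As a sketch of what "pushing up" might mean (image names are illustrative, not an agreed standard; the UID/GID values follow the unRAID nobody/users convention used by several of the containers discussed above):

```dockerfile
# Shared community base image: one place for the setup that every app
# container currently repeats.
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive

# unRAID convention: run apps as nobody (uid 99) in group users (gid 100)
RUN usermod -u 99 nobody && usermod -g 100 nobody

# Each app Dockerfile would then start from this instead of from ubuntu:
#   FROM some-community/baseimage
#   RUN apt-get update && apt-get install -y <the-app> && apt-get clean
```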

 

 
