Docker container developer best practice guidelines for unRAID



Hey guys.  Wanted to chime in briefly and say how much I love reading this discussion. Lots of really good points on both sides.  Given how much has changed since we first announced the plan to build an official base image, I would say we are open to a Debian-based image, though Docker themselves have said there are more readily available dockerized apps on Ubuntu than on Debian.  That doesn't mean someone couldn't have both; just calling out the point.  I will say that Eric is still the final decision maker on the distro of choice, but the base image is really something for authors to use, so if you guys are really in love with Debian over Ubuntu, it'd make sense to pick a distro that aligns with the preferences of the authors doing the work (you guys).

 


 

Well, Ubuntu is Debian-based, so I have no preference; the commands, apt-get, and so on are all the same.


OK, so we are wandering away from best practices here into the realm of designing the base image, but ultimately it is relevant.

Ignore the specific Debian flavor for now and let me pose a question:

What does phusion have that Ubuntu doesn't that we need?


If someone can't handle unRAID, they should invest in a QNAP device; it has a stupidly simple interface.

An alternative: if people don't know how to use Docker or won't take the time to learn, they can install the unRAID 6 plugins that are readily available in the forums.

A: Docker in unRAID is easier to deploy than the plugin counterpart. I see no reason a user who can install a plugin would be unable to deploy a Docker app.

Also, I wanted to add that Docker is about choice, and every unRAID installation is unique.

I'm just migrating to unRAID 6, and I will have several Windows VMs on there using KVM, because I personally believe that KVM > Xen.

A: You are right, there are a lot of variables making unRAID use very heterogeneous, but Docker >>> plugins for a lot of uses (downloaders, music streamers, sync tools).

Then there are those that include SSH in their Docker images... that's a bad practice.

Docker allows you to connect to a running container without the need for SSH, but some want that choice.

Using scripts to run installations inside the Docker image... another bad practice.

Some prefer these, others don't; it all comes down to preference.

I like my Docker containers to auto-update on their own; I don't have the time to baby the server and prefer automation.

A: This time I'm not sure your ideas are cohesive, because if you want your containers to have an internal auto-update function, you must have "installations inside the docker image".


OK, so we are wandering away from best practices here into the realm of designing the base image, but ultimately it is relevant.

Ignore the specific Debian flavor for now and let me pose a question:

What does phusion have that Ubuntu doesn't that we need?

A) A process-management system, like Supervisor. With Ubuntu, it must be installed every time.
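To make the point concrete, here is a minimal sketch of what "installed every time" means: a hypothetical Ubuntu-based Dockerfile that adds Supervisor and runs it as the container's init process. The app name, config file, and base tag are placeholders, not anyone's actual image.

```dockerfile
FROM ubuntu:14.04

# Supervisor is not part of the stock Ubuntu image, so every
# Ubuntu-based Dockerfile has to install it (phusion ships an
# init/process manager built in).
RUN apt-get update && apt-get install -y supervisor \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical config telling Supervisor which daemon(s) to manage
COPY supervisord.conf /etc/supervisor/conf.d/myapp.conf

# Run Supervisor in the foreground so it stays PID 1 and the
# container keeps running
CMD ["/usr/bin/supervisord", "-n"]
```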


Going hand in hand with NAS's post, you can pose your response in two ways:

 

What do you want us to ADD to Ubuntu?

What do you want us to REMOVE from phusion?

Example:  REMOVE SSH from phusion, since docker exec is really the more fitting way to access a running container.

Example:  ADD Python to Ubuntu (required by nearly all unRAID-based apps).
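For anyone who hasn't used it, the docker exec route looks like this (the container name "myapp" is a placeholder); it gets you a shell inside a running container without baking an SSH daemon into the image:

```shell
# Open an interactive shell inside a running container named "myapp"
docker exec -it myapp /bin/bash

# Or run a one-off command without a full shell session
docker exec myapp cat /config/app.log
```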


If there were a standard base image for unRAID, perhaps some standard folder placeholders could be put in for people's apps.

I know it's extremely easy to add your own when making an app, but if they were there already it might make a sort of standard framework for folders, etc.

It might make things easier for non-techie types, but this may just be me having silly ideas, lol.


If there were a standard base image for unraid

As far as I know this is not an IF, it is a WHEN.

I know it's extremely easy to add your own when making an app, but if they were there already it might make a sort of standard framework for folders etc..

I would agree, and in many ways dockerman (or its children) already does this. I asked/pushed for this, and the current incarnation sort of ticks the box, but it isn't 100% there yet, as it doesn't handle running the same container many times.

I have suggested this before, but I think for most applications the loopback-image idea applied to app data gives us a load of advantages.


With dockerman now fully supported by Limetech, is it time we drop the "docker run" commands from the readmes and point users to the WebGUI?

If you're feeling extra energetic, you could take those commands out entirely, or move them under a section labeled Advanced Information.

That way, if users have a failure in the GUI, they can attempt to shoot themselves in the foot or troubleshoot the issues themselves.


Show of hands from current repo maintainers: if we, the community, created a common repo based around a GitHub group, who would be interested?

I would. I think it would be good to have us all agree on a base image and some design principles, so the containers are easier to maintain.


The biggest request I see, beyond the "how do I get started" questions, is an updated program version.

I know Docker's main purpose is to be fully reproducible on any machine, so everyone has the same version; but since we are mainly using Docker as a plugin replacement, I think it's reasonable to assume most people would prefer the ability to always run the newest version, without having to bother the author to update the container.

Perhaps the "base image" that is agreed on should address this.

 

Just thinking out loud :)


The biggest request I see, beyond the "how do I get started" questions, is an updated program version.

I know Docker's main purpose is to be fully reproducible on any machine, so everyone has the same version; but since we are mainly using Docker as a plugin replacement, I think it's reasonable to assume most people would prefer the ability to always run the newest version, without having to bother the author to update the container.

Perhaps the "base image" that is agreed on should address this.

 

Just thinking out loud :)

 

I agree: as a user I always want the newest version, but as a docker author I want to control the updates for support purposes.
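One way to square those two wants is image tags: the author controls what each tag points at, while the user chooses how current they want to be. A hypothetical example (repo and tag names are placeholders):

```shell
# Pin to a specific, tested release (author-controlled, support-friendly)
docker pull someauthor/myapp:1.2.3

# Or track the author's latest build (most current, least predictable)
docker pull someauthor/myapp:latest
```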


The biggest request I see, beyond the "how do I get started" questions, is an updated program version.

I know Docker's main purpose is to be fully reproducible on any machine, so everyone has the same version; but since we are mainly using Docker as a plugin replacement, I think it's reasonable to assume most people would prefer the ability to always run the newest version, without having to bother the author to update the container.

Perhaps the "base image" that is agreed on should address this.

 

Just thinking out loud :)

 

For some apps (particularly things that are updated due to API changes and the like, CP et al.), I would agree that the ability to update to the latest version would be good, but for more stable apps I like the fixed-version approach of Docker because of the stability.


Personally, I forked some of the dockers on offer here to adapt them to my setup, where I replaced VMs and wanted my existing data and folder structure to remain unchanged.

I made my CP docker self-update and run from git at start instead of from a fixed version (I discovered the stable version had a different config-file structure than the git version, and my wanted list went screwy with the fixed version).
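A self-updating container along those lines usually amounts to an entrypoint script that pulls before launching. A rough sketch, with the repo URL, paths, and launch command all placeholders (it trades reproducibility for freshness, which is exactly the tension being discussed):

```shell
#!/bin/bash
# Hypothetical entrypoint: update the app from git on every container
# start, then launch it in the foreground.
APP_DIR=/opt/myapp

if [ ! -d "$APP_DIR/.git" ]; then
    # First run: fetch the app
    git clone https://github.com/example/myapp.git "$APP_DIR"
else
    # Subsequent runs: pull the latest code before starting
    git -C "$APP_DIR" pull
fi

# exec so the app replaces the script as PID 1
exec python "$APP_DIR/app.py" --config_file /config/config.ini
```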


We kinda jumped ahead, but the way I see it, stage 1 is to create a "team" who support a "bunch" (likely most) of the containers users want. This team approach makes it easier to use Docker the way it is supposed to be used: delivering pre-tested apps (no git or on-the-fly kludgery).

Stage 2 would be, if possible, to create a second repo with the same team as leads, plus pretty much anyone else who wants bleeding-edge testing versions to dev with. This testing repo would be completely unsupported, with all the suitable warnings.

The net result is a clear-cut separation between stable and testing, less duplication of effort, less deviation from the base OS, etc., and with a larger team, stable would be easier to keep from stagnating (which is what pushes a load of people into git land).


We kinda jumped ahead, but the way I see it, stage 1 is to create a "team" who support a "bunch" (likely most) of the containers users want. This team approach makes it easier to use Docker the way it is supposed to be used: delivering pre-tested apps (no git or on-the-fly kludgery).

Stage 2 would be, if possible, to create a second repo with the same team as leads, plus pretty much anyone else who wants bleeding-edge testing versions to dev with. This testing repo would be completely unsupported, with all the suitable warnings.

The net result is a clear-cut separation between stable and testing, less duplication of effort, less deviation from the base OS, etc., and with a larger team, stable would be easier to keep from stagnating (which is what pushes a load of people into git land).

 

Kind of an unRAID "official" docker repo; nice idea, and you could get apps that are as stable as is possible to achieve at the time.

And having everything all in one place makes for an easier user experience too.

My personal tinkering stuff with dockers isn't in my repo and won't ever be.

We should have a poll for the most-wanted dockers (if there hasn't been one already).


The proper way of dealing with this is versioning the Docker container. That way the user can pull a specific container version and know exactly which version they're getting. Doing anything different is still abusing Docker's purpose.

 

One other possibility is keeping the application version outside the container and referencing it on startup, so the Docker container is merely a runtime environment. Think of pointing the container at a .deb file and installing it to a host-mapped volume called /opt that maps to the /appdata/applicationname/opt directory, similar to how /config is mapped.
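At run time, that approach would look something like the following (paths and the image name are hypothetical): the application lives on the host and survives container rebuilds, while the image only supplies the runtime.

```shell
# The container is only a runtime environment; the app itself is
# installed into a host directory that is mapped in as /opt
docker run -d --name myapp \
  -v /mnt/cache/appdata/myapp/opt:/opt \
  -v /mnt/cache/appdata/myapp/config:/config \
  someauthor/myapp-runtime
```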

 

The other issue with updating dockers, as I pointed out in another thread, is that the docker.img loopback file will get cluttered and fill up, with the user having no means of maintaining it. The only way to fix that is to delete the entire docker image, recreate it, and re-add the containers.
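For what it's worth, some of that clutter can be reclaimed from the command line; Docker of this era has no single prune command, so it's done by hand with filters (results vary depending on what's actually dangling):

```shell
# Remove exited containers
docker rm $(docker ps -aq -f status=exited)

# Remove untagged ("dangling") image layers left behind by updates
docker rmi $(docker images -q -f dangling=true)
```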


Also, all knowledgeable users do NOT want the latest version; they want the latest stable version. Don't fix what isn't broken.

There are a metric ton of users who want the latest development version, but they lack the skills to fix things when, as has been proven time and time again, the dev version breaks something, usually by being incompatible with some other add-on they're running. Just look at all the trouble threads about people running the latest dev version of Plex and then having issues. Instead of correctly identifying the issue, they blame the docker container or its creator, needo. That is exactly the wrong direction to cast the blame.


Also, all knowledgeable users do NOT want the latest version; they want the latest stable version. Don't fix what isn't broken.

There are a metric ton of users who want the latest development version, but they lack the skills to fix things when, as has been proven time and time again, the dev version breaks something, usually by being incompatible with some other add-on they're running. Just look at all the trouble threads about people running the latest dev version of Plex and then having issues. Instead of correctly identifying the issue, they blame the docker container or its creator, needo. That is exactly the wrong direction to cast the blame.

 

Gotta have someone to blame; it's the modern way.


The proper way of dealing with this is versioning the Docker container. That way the user can pull a specific container version and know exactly which version they're getting. Doing anything different is still abusing Docker's purpose.

 

The only "problem" with this is that the user is still reliant on the docker creator to update the container, and some people don't have the patience to wait for that to happen.  Also, as we've seen with plugins, authors come and go as life happens, so one day an author is likely going to disappear, and their docker won't get updated unless/until someone else jumps on it.

 

It's a difficult thing to manage, for sure.

 

Also, there is nothing that forces us to avoid "abusing the docker purpose" if our purpose is different.  Docker is just a tool.  Using that tool in a manner different from its intended use does not invalidate the effectiveness of the tool.  Using a free wooden paint stirrer as a shim is still a totally valid use of that tool. :)

