
NAS


Posts posted by NAS

  1. If there were a standard base image for unraid

    As far as I know this is not an IF, it is a WHEN.

     

    I know it's extremely easy to add your own when making an app, but if they were there already it might make a sort of standard framework for folders etc.

     

    I would agree, and in many ways dockerMan or its derivatives do this already. I asked/pushed for this, and the current incarnation sort of ticks the box, but it isn't 100% there yet as it doesn't handle running the same container many times.

     

    I have suggested this before, but I think for most applications the loopback image idea applied to app data gives a load of advantages.

  2. What do you think, all you "Docker gurus", about Ubuntu Snappy? It's supposed to be made for Docker, be secure, etc. I think it's based on Ubuntu Core, but adapted with Docker in mind.

     

    Could it be the new base image in the long term?

    I think it was developed to be the host for Docker, not the base image, but I might be wrong.

     

    Actually, Ubuntu released a new version designed specifically to be a Docker base image.

    Also, the Ubuntu Docker images are less resource-hungry than some other flavors of Linux.

     

    Unless someone has a better idea, I see no reason to reinvent the wheel, and the upstream of whatever base OS we decide on would be the official Docker Debian/Ubuntu image.

     

    Since we know that LT are planning, at this point at least, to use Ubuntu LTS, I see no reason not to just use it. I personally prefer Debian proper, but it makes little odds at this point, and less sense to create work by using something else.

     

    Regardless of Debian/Ubuntu, what best practices could we extract from the app containers and push into our new base OS? The more we push up, the more efficient we get.
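To make the push-up idea concrete, it could look something like this. This is only a sketch; the image name "unraid-base" and the package list are hypothetical examples, not an agreed standard:

```dockerfile
# Hypothetical shared base image ("unraid-base"); names are illustrative only.
FROM ubuntu:14.04

# Everything that every app container needs lives here once, in one shared layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl wget unzip && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# A standard folder layout so every app container agrees on paths.
RUN mkdir -p /config /data

VOLUME ["/config", "/data"]
```

An app container would then just be `FROM unraid-base` plus its own package, so the shared layer is downloaded once and reused by every app on the server.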

     

     

  3. If I was choosing for myself I would choose Debian, as it's small, predictable, generally not insane, and doesn't reinvent the wheel every release cycle.

     

    For others, though, I would probably choose Ubuntu LTS, because that's where most of the Docker community seems to be going.

     

    If we stick to a Debian variant, then at least we can change later to another variant (i.e. the official one) with little reworking.

     

    The key here is to not get into distro wars. We almost don't care about the distro; it is just a means to an end for the application container.

     

    I also think there is a key difference between consumers of the stable line of applications and those who have any clue how to develop on Docker, or even what Git is. And that is the key point, IMHO: we are talking about the stable line here. The developer line can and should be the wild west.

     

    Remember, the consumers we are talking about here don't know about Docker, Git, apt, Linux or anything. They want a button that magically gives them XBMC or a torrent app, and another that updates it.

  4. For me the whole idea of Docker is repeatable, distributed code. Once you start messing with this and having containers which can update themselves, or having special "edge" flags to trigger different branches of code, then it stops being Docker and becomes something else, something that IMHO cannot easily be supported and is more prone to breakage.

    TBH, that is not just your definition of what it should be, but THE definition of Docker.

     

    1. Updating Docker images: use a shared GitHub repository that multiple "trusted" community members have admin access to in order to maintain the Dockerfiles for new releases; this would reduce the time a user has to wait to run the latest stable version.

    I like this a lot. So we could have a group hub account where unofficial but standardized containers are maintained under the guidelines we started here, and all others can be a free-for-all.

     

    We don't need a bunch of crazy rules, but having the main containers all maintained by a group of people, all with the same flavour of design and base OS, could solve a lot of problems.

     

    I do like the idea of a master unRAID set. Already I look at the templates and see more than one version of a containerised app, and I have no idea what, if any, differences there are and which I should use.

    This is a huge problem for docker and I would guess they are working on it in the background.

  5. Agreed, but my point, which is a bit OT here, is that firstrun and edge are not use cases specific to us. Docker needs to accept that these are super common use cases and work out a way to standardise them in the Dockerfile or some other file hosted in the repo.

     

    Also I just noticed this:

     

    You can update it yourself; I was sure I had designed it this way. Stop the container, replace the jar file in the config directory with the new version, and ensure the permissions are right.

     

    Again, there is absolutely nothing wrong with this, other than it's another deviation from best practice, bypassing a Docker tenet in the spirit of flexibility.
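For reference, the manual flow being described is roughly the following. This is a sketch only: the container name "myapp" and all paths are hypothetical, and the docker commands are shown as comments so the file steps stand on their own:

```shell
# Sketch of the manual jar-update flow; "myapp" and all paths are hypothetical.
set -e

CONFIG_DIR=$(mktemp -d)               # stand-in for e.g. the app's config directory
echo "old-jar" > "$CONFIG_DIR/app.jar"

# docker stop myapp                   # 1. stop the container first

# 2. replace the jar in the config directory the container maps in
echo "new-jar" > "$CONFIG_DIR/app.jar"

# 3. ensure the permissions are right for the in-container user
chmod 644 "$CONFIG_DIR/app.jar"

# docker start myapp                  # 4. start it again

cat "$CONFIG_DIR/app.jar"             # prints: new-jar
```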

     

    The more I think about it, the more I consider there needs to be a peer-review process where containers are given a "certified sticker"... essentially a certified stable branch where all these best practices would be followed. If we did that, every other container could do whatever it wanted and would be considered unstable/experimental.

     

    This would be front-loaded work, as most stable containers would only change over time via a refresh of apt etc., not code changes.

     

     

    It is worth pointing out the extra importance of this. The last two versions of Docker have been deprecated due to security problems where someone could create an image that allowed them to break out of the container. Essentially they could root your server and containers just by you running the container. Given how little visibility the Docker Hub gives you, this would be an easy attack. Pick something that people want. Make it work. Insert the break-out code. Done.

  6. ... Not 100% sold yet on moving the installation of the program out of the dockerfile and into an install.sh, but I can see its benefits...

     

     

    Me neither, TBH. In an ideal world the Dockerfile would be the start and end of the install process. Obviously the technology is nowhere near that yet, but it seems we are moving further away from that, not closer.

     

    One thing occurred to me: there is absolutely no reason we couldn't just agree, here and now, on a phusion fork that best fits our needs in advance of the official unRAID one. The more stuff that everyone needs that can be pushed upstream to the base OS layer, the more efficient we become. If it doesn't work, it's not like it's a lot of work to revert.

  7. I wonder how many of the layers we have out there on user machines are related to apt-get update and apt-get clean and divergence as time passes.

     

    There has to be a slicker way to do this. Since our base OS will be ours, perhaps we should maintain the apt database there on a scheduled (weekly?) basis, and then all users of this base OS can just do an apt-get install.

     

    I am not sure that idea is 100% sound, but I think it has the beginnings of something in it.

     

    Also I have seen several discussions now around specifying the package version in apt (the bergknoff guide being the most recent).

     

    So the idea is that you would not do "apt-get install redis-server" but rather "apt-get install redis-server=2:2.4.14-1"
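Combined with keeping the whole apt transaction in one layer, that would look something like the fragment below. This is a sketch; the pinned version is just the one from the bergknoff guide quoted above, not a recommendation:

```dockerfile
# One RUN = one layer: update, pinned install, and clean-up happen together,
# so the apt cache never survives into the image and two builds of this file
# install the same package version.
RUN apt-get update && \
    apt-get install -y redis-server=2:2.4.14-1 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```

Anyone reviewing the Dockerfile can now see exactly which version the container ships, and a local build pulls the same package as the published image, for as long as that version remains in the archive, which is the main pitfall.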

     

    This is good in two key ways for us. It allows peer review of what the container actually provides using only the Dockerfile, which opens the door to the GUI summarising what a container contains without requiring the container to be downloaded and run first (a big shortcoming of Docker for me).

     

    It also means that if someone builds the container rather than pulling it from the repo, what they get would be identical.

     

    It is not without pitfalls, but this whole area is where I think our initial focus should be.

     

    Note: I am going to talk only about stable builds. Users who want to run git and EDGE-type stuff are, IMHO, voiding their ability to reliably request support.

  8. On v6 beta 10a I see these files, but per core:

     

    ls -alh /sys/devices/system/cpu/cpu0/cpufreq/
    total 0
    drwxr-xr-x 2 root root    0 Dec  2 07:12 ./
    drwxr-xr-x 8 root root    0 Dec  2 07:12 ../
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 affected_cpus
    -r-------- 1 root root 4.0K Dec  2 07:12 cpuinfo_cur_freq
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 cpuinfo_max_freq
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 cpuinfo_min_freq
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 cpuinfo_transition_latency
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 related_cpus
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 scaling_available_governors
    -r--r--r-- 1 root root 4.0K Dec  2 07:12 scaling_driver
    -rw-r--r-- 1 root root 4.0K Dec  2 07:12 scaling_governor
    -rw-r--r-- 1 root root 4.0K Dec  2 07:12 scaling_max_freq
    -rw-r--r-- 1 root root 4.0K Dec  2 07:12 scaling_min_freq
    -rw-r--r-- 1 root root 4.0K Dec  2 07:12 scaling_setspeed
    

  9. Devil's advocate... how well does this scale? i.e. silly numbers like 200k books and 100k comics.

     

    One thing I HATE about Calibre is that it doesn't scale at all and becomes unusably slow.

    It's super fast with 30k books (mobi) and 10k PDFs. I can only dream of having 200k books, lol.

     

    Playing with it now. It is very basic, but I like that. It seems to "just work".

  10. Congratulations; these really caught my attention as being genuinely useful and powerful.

     

    I have some ideas for consideration:

     

    Since these tools are so unRAID-specific, check for the existence of "/etc/unraid-version" as a sanity check before doing anything.

     

    It is not beyond the realms of possibility that a version of unRAID comes out with a missing dependency, or a user simply breaks a dependency. To ensure this doesn't end in unpredictable behavior, check the dependencies (i.e. rsync, shopt, etc.) before execution.

     

    Consider by default having the rsync "-n, --dry-run" option set in diskmv, so that a user essentially has to opt in to a write action (the same logic as you have put in place with -t in consld8).

     

    Consider more script commenting, e.g. I doubt most users will know what this is without googling "-i -dIWRpEAXogt --numeric-ids --inplace", and in general it will help users get a better feel for how it works.
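Pulling those suggestions together, the checks might look something like the sketch below. The function names, messages, and the -f opt-in flag are mine, not from diskmv itself:

```shell
# Sketch of the suggested sanity checks; function names are illustrative only.

require_unraid() {
  # Refuse to run on anything that is not an unRAID box.
  local ver_file="${1:-/etc/unraid-version}"
  [ -f "$ver_file" ] || { echo "Error: $ver_file not found; not an unRAID system?" >&2; return 1; }
}

require_deps() {
  # Verify every dependency exists before doing any work.
  local cmd
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "Error: missing dependency: $cmd" >&2; return 1; }
  done
}

# Dry-run by default: the user must opt in to writes with a hypothetical -f flag.
DRY_RUN=1
[ "${1:-}" = "-f" ] && DRY_RUN=0

# -i itemize changes, -d transfer dirs without recursing, -I skip the quick
# size/time check, -W copy whole files, -R use relative paths, -p perms,
# -E executability, -A ACLs, -X xattrs, -o owner, -g group, -t times
RSYNC_OPTS="-i -dIWRpEAXogt --numeric-ids --inplace"
[ "$DRY_RUN" -eq 1 ] && RSYNC_OPTS="$RSYNC_OPTS --dry-run"

echo "rsync options: $RSYNC_OPTS"
```

Commenting the flag string once, as above, costs nothing and saves every reader a trip to the rsync man page.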
