
New Docker Manager UI Early Build Screenshots


jonp


 

Ok, so for bandwidth considerations, I will concede to your point that there are plenty of folks in the world with less than ideal internet connections.  That said, check this out:  http://www.netindex.com/

 

GLOBAL BROADBAND

Download: 21.8 Mbps
Upload: 9.9 Mbps
Quality: 84.3 R-Factor
Value: $5.51 USD
Promise: 87.2%

 

Ha - Philippines, out in the provinces:

    Download: 1.6 Mbps
    Upload: 0.7 Mbps
    Quality: (no R-Factor)
    Value: $29.88 USD
    Promise: 65.0%

 

 

Daily quota on a 1Mbps connection is 3GB.

What plugins are you using, or containers for that matter?


It's better, but there is still a distinct difference between the look and feel of the docker manager and the other pages.  For me, the plugin and docker pages should use the same terminology, buttons, menu placement, etc.

 

I would have thought using a common CSS framework (even something like a scaled-down Bootstrap) would make GUI consistency a lot easier to achieve.


It's better, but there is still a distinct difference between the look and feel of the docker manager and the other pages.  For me, the plugin and docker pages should use the same terminology, buttons, menu placement, etc.

 

I would have thought using a common CSS framework (even something like a scaled-down Bootstrap) would make GUI consistency a lot easier to achieve.

That's where we are headed. This is the first step towards that goal. Plugin manager will eventually fold into the design of docker man. You will see this continual effort now to bring consistency to the entire GUI.


Another update. We changed the buttons at the bottom to conform to the rest of the GUI. Also changed the version column to use the same verbiage of "up-to-date" as on the plugins page.

 

[Screenshot attachment: 17bd62efba3e1c01756cda03249ae284.jpg]

 

Sorry for the oversized pic here. Uploading from my phone now.

I think longer term, the Plugins page will conform to the Docker page, not the other way around.

 

Fair enough - it's just that I feel more comfortable with the current plugins manager than the current docker manager, but going by your later posting I think that I will be very happy with the direction you're taking.

 

That said, changing the nomenclature of "current" to "up-to-date" is probably not a bad idea, considering that is the existing nomenclature to which everyone is accustomed.  As you say later in your post:  "Consistency, consistency, consistency!"

 

Great!

 

Way ahead of you.  Take a look: ....

 

Great minds .....!

 

What plugins are you using, or containers for that matter?

 

Dockers:

CP,

Deluge,

Minidlna,

LMS and

MariaDB

 

Plugins:

TFTP,

Apcupsd,

Powerdown,

Fan speed,

dovecot,

mpop,

XtermWindowTitle and

several of the Dynamix addons

 

 

Another update. We changed the buttons at the bottom to conform to the rest of the GUI. Also changed the version column to use the same verbiage of "up-to-date" as on the plugins page.

 

Looking really good!  Well done, and I'm becoming eager for the next release to hit!

 

Oh, I still have a complaint about the text in the banner being difficult to read against the graphic - Server, Description, Version and Uptime.  Is it possible to get rid of the white drop shadow?

 


Quote from: NAS on December 30, 2014, 10:00:02 AM

 

    Running out the door so the one line answer versions:

 

    you might be bandwidth rich but most of the world isn't. For many, even 1GB is a big deal. Downloading and storing 1GB of OS for a 10MB app is far from ideal, especially (and this is the point) when there are usually other versions using the same base OS that cost almost nothing to use. Empower user informed choice.

 

 

Ok, so for bandwidth considerations, I will concede to your point that there are plenty of folks in the world with less than ideal internet connections.  That said, check this out:  http://www.netindex.com/

 

GLOBAL BROADBAND

Download: 21.8 Mbps
Upload: 9.9 Mbps
Quality: 84.3 R-Factor
Value: $5.51 USD
Promise: 87.2%

 

These are average speeds for worldwide bandwidth.  You can break these down per country too:

 

    US: 31.9 Mbps
    Canada: 25.3 Mbps
    UK: 29.8 Mbps
    Mexico: 11.9 Mbps
    Russia: 27.0 Mbps

 

 

1 gigabyte of data translates to 8 gigabits of bandwidth consumed.  With even a 10 Mbps connection, that is only 13.3 minutes to download.  Is that as fast as installing a mobile app on your phone / tablet?  No.  Is it acceptable for now?  Yes.  Why?  Because the apps that folks can download are all designed to be used over the internet anyway.  So BT Sync, Crashplan, Plex (for streaming out of your home), etc., are all apps that use your internet connection to download / upload data to others.  In addition, we are working to make offerings for folks to purchase servers from Lime Technology that are pre-configured with apps, so for folks that don't want to wait for apps to download, that will be an alternative.  I just don't see bandwidth as that big of an issue, because where bandwidth is limited, usage of apps is probably going to be limited as well.  There are a few edge cases to this, but again, let's start by designing for the masses, then we can focus on edge / corner case optimizations.  Fair?
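A quick sketch of that arithmetic, using the figures quoted above (decimal units, as ISPs advertise them):

```python
def download_minutes(size_gb: float, link_mbps: float) -> float:
    """Time to download size_gb gigabytes over a link_mbps connection.

    1 gigabyte = 8 gigabits = 8000 megabits (decimal units).
    """
    megabits = size_gb * 8000
    seconds = megabits / link_mbps
    return seconds / 60

# A 1GB base image on a 10 Mbps line: ~13.3 minutes
print(round(download_minutes(1, 10), 1))    # 13.3
# The same image on the 1.6 Mbps provincial line mentioned earlier: ~83 minutes
print(round(download_minutes(1, 1.6), 1))   # 83.3
```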

 

Now as far as the 1GB vs. 10MB analogy, I think this is a pretty rare occurrence right now, and it's a trade-off for the ability to isolate the app and its underlying software dependencies from other apps.  The cost of flexibility is a sacrifice of some capacity, but for improved reliability, portability, consistency, and capability...yeah, I'll take that trade.

 

[NAS] I don't disagree with the details but I am not sure it is a direct match for what I am saying. My point is that the GUI and process should inform users as far as possible so they can decide how best to use their bandwidth/disk space. Until recently, to get >1 Mbps I had to pay for multiple ADSL lines, so I know that it is often overlooked how many users don't have fast internet. These users are typically masters of using every ounce of resources (I know I was), but to do that you need information.

 

Quote from: NAS on December 30, 2014, 10:00:02 AM

 

    Re: base OS version. We already trust our docker devs and the content they put in the template. There's no reason to stop trusting them now. A link to the Dockerfile is better than the current situation where you have to dig about online.

 

 

Eric and I discussed this.  The issue comes down to the fact that the Dockerfile doesn't specify the base distro, just the base image.  So for phusion, you would have to just KNOW that phusion uses Ubuntu, or we'd have to continue to traverse the "FROM" statement upwards to find the absolute base image that is initially used.  In addition, Docker provides an ability to build from scratch (no base image), so what do we do there?  Biggest point here is that this is really not necessary to do in the short term, because it provides little value to the everyday user.
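To illustrate why resolving the true base distro is awkward, here is a rough sketch; the image names and Dockerfile contents are hypothetical, and a real implementation would have to fetch each parent's Dockerfile from a registry rather than a local dict:

```python
# Sketch: walk FROM statements up a chain of (hypothetical) Dockerfiles.
# dockerfiles maps image name -> its Dockerfile text.
def base_image(image: str, dockerfiles: dict) -> str:
    while image in dockerfiles:
        from_lines = [l for l in dockerfiles[image].splitlines()
                      if l.strip().upper().startswith("FROM ")]
        if not from_lines:
            break
        image = from_lines[0].split()[1]
        if image == "scratch":   # built from nothing: no base distro at all
            break
    return image

files = {
    "someapp": "FROM phusion/baseimage\nRUN install-the-app",
    "phusion/baseimage": "FROM ubuntu:14.04\nRUN tweak-init",
}
print(base_image("someapp", files))   # ubuntu:14.04
```

Even this toy version has to special-case "scratch" and stops as soon as a parent's Dockerfile isn't available, which is exactly the problem described above.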

 

Longer term, I could see us corralling developers into aligning template repos to a common base image.  So instead of a repo per developer, it's a repo per base image for which multiple developers can participate and submit their apps.  This is just a short-hand idea and would require much more design time to implement, which is why it is something we can look at later when we want to further optimize our Docker implementation.  Again, not a short term need.

 

[NAS] Really my point here is simply to empower users with information so they can see where their base OSes deviate, and then let them decide if that is acceptable to them. I don't mind hiding the sausage making, but the only way to know that my short docker container list already contains Debian, Ubuntu, Phusion and Arch is to dig into it beyond what users could probably be expected to do. It is a very real case where one of our popular containers uses Arch, causing a 1GB OS download for a 10MB app, and it is not an edge case to expect this to keep happening; in fact, by definition it will happen every time a container's base OS isn't an exact common match... and that again is my point: we should be working towards a way to make this information more visible, not less.

Quote from: NAS on December 30, 2014, 10:00:02 AM

 

    Users sometimes do need container IDs. For example, that's what is in the logs. Yes, they don't need them all the time, but we're talking about a complete removal from the GUI, so even edge-case uses become relevant.

 

 

I don't get your example.  Users probably won't read logs very much except to see if there is an error, which will rarely be an image-specific issue.  Sorry NAS, this is a devops need, not a user need.  I have never needed a container ID, and I would wager that neither has anyone else who doesn't dig deeper into containers than just the usage of the apps they support.  I'll be man enough to admit I'm wrong if a bunch of folks come in here and set me straight.  That said, see my reply to Peter.  We are putting these back in under "advanced view" mode, so you will still have access to them.

 

[NAS] Advanced mode sounds like a good fix. No one needs to see container IDs all the time, but when they do, they really do.

 

Quote from: NAS on December 30, 2014, 10:00:02 AM

 

    The problem is that the templates require users to manually change the appdata folder and the container name for each extra instance and never make a mistake doing so. It might not sound like much, but this is the difference between a slick, safe system and one where a small user gotcha could break irreplaceable appdata.

 

 

Eric and I discussed this.  The simple solution I presented is one we are considering implementing for this.  Longer term, we want to have a similar approach to port mappings (e.g. if two apps are set by default to use the same port, we should not let that happen, or we should prompt the user to fix that before letting them click "apply").
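The duplicate-port check described above could be sketched like this (names are illustrative only, not actual dockerMan code):

```python
# Sketch: refuse to apply a new container's port mappings if a host port
# is already claimed by another container.
def conflicting_ports(new_ports, existing):
    """new_ports: host ports the new container wants.
    existing: {container_name: set of host ports already configured}."""
    clashes = {}
    for name, ports in existing.items():
        shared = set(new_ports) & ports
        if shared:
            clashes[name] = sorted(shared)
    return clashes   # empty dict => safe to click "apply"

existing = {"Plex": {32400}, "Deluge": {8112, 58846}}
print(conflicting_ports({8112, 9090}, existing))   # {'Deluge': [8112]}
```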

 

[RMC] Sounds sensible

 

Quote from: NAS on December 30, 2014, 10:00:02 AM

 

    You can remove the repo from the list but all the back end files for it stay on disk going stale. Then if you re-add the repo later unpredictable things happen.

 

 

Tested this out earlier since I was confused by your statement.  This is a bug and will be fixed.  We are also working to make template repos work similar to how you can add volume / expose mappings on the add container screen (each template repo will be in its own textbox on the gui with a "remove" button next to it).

 

Hopefully some of the changes I've posted here today will satisfy your requests.

 

[RMC] Good stuff. When you are looking at this, we need to enforce a check that each repo uses a subfolder. Currently we don't enforce this, and it could easily cause a clash where two repos contain a container with the same name.

 


 

What plugins are you using, or containers for that matter?

 

Dockers:

CP,

Deluge,

Minidlna,

LMS and

MariaDB

 

Plugins:

TFTP,

Apcupsd,

Powerdown,

Fan speed,

dovecot,

mpop,

XtermWindowTitle and

several of the Dynamix addons

 

So my reason for asking was due to your 1 Mbps Internet connection.  Seems like you have a high reliance on Internet-facing applications anyway, so if you can deal with the speed issues to download/upload there, where I'm sure the content is of much larger size than the Docker images I'm talking about, then I don't see how downloading an app image is a big deal.  I get that longer term, optimal efficiency is best, but we need to prioritize development around the issues that impact the most users and provide the biggest benefit.  If we spent all our development time right now focused on base image management and eliminating as much redundancy as possible, the net result is that < 1% of users would actually notice a measurable impact to their usage of unRAID.

 

Put in other words, you have five containers in your list.  If those five took 1 hour each to download for you, and the net impact of changing our image management brought that down to even 10 minutes each, would that really matter, considering that you really only had to download the base images once?  In the grand scheme of things, I think the longer download for you is probably annoying and even a little frustrating, but when it's done, it's done for good.  Your updates do not require a complete redownload of all the images, so you're not going to experience nearly as long of a download time.

 

Like I mentioned before, longer term we will work to be optimally efficient in our usage of base images, but for now, this is really not big enough of an issue (space consumption / bandwidth) for us to be worrying about this.  I hope you understand where I'm coming from on this.

 

Oh, I still have a complaint about the text in the banner being difficult to read against the graphic - Server, Description, Version and Uptime.  Is it possible to get rid of the white drop shadow?

 

I think I've had the same annoyance but just haven't voiced it loud enough in development since Eric has been chomping away at bigger fish to fry.  I'll mention this to him today to get his thoughts...


[NAS] I don't disagree with the details but I am not sure it is a direct match for what I am saying. My point is that the GUI and process should inform users as far as possible so they can decide how best to use their bandwidth/disk space. Until recently, to get >1 Mbps I had to pay for multiple ADSL lines, so I know that it is often overlooked how many users don't have fast internet. These users are typically masters of using every ounce of resources (I know I was), but to do that you need information.

 

You are arguing on principle whereas I am arguing on practicality. You're saying that we should give as much info as possible because some users care about "mastering their resource management" as efficiently as possible, and we should let them have the power of choice to do so.  I am all for that.  But again, when we're talking about saving a few dollars worth of storage space or even minutes of download time / gigabits of bandwidth usage, these are just not adding up to enough measurable impact to prioritize in the shorter term.  This is not important for the next release or even the release of 6.0 final.  This falls into the category of an optimization request.  There are plenty of those already out there for unRAID, and many of them are requests for optimization that would yield more meaningful / usable impact to users.

 

Long story short:  this belongs as a roadmap request and probably for 6.1.

 

[NAS] Really my point here is simply to empower users with information so they can see where their base OSes deviate, and then let them decide if that is acceptable to them. I don't mind hiding the sausage making, but the only way to know that my short docker container list already contains Debian, Ubuntu, Phusion and Arch is to dig into it beyond what users could probably be expected to do. It is a very real case where one of our popular containers uses Arch, causing a 1GB OS download for a 10MB app, and it is not an edge case to expect this to keep happening; in fact, by definition it will happen every time a container's base OS isn't an exact common match... and that again is my point: we should be working towards a way to make this information more visible, not less.

 

The truth is that if you feel this strongly that this is important for the short term, I suggest you write a plugin for us to consider incorporating into the base build.  This isn't a priority for us right now.  We are laser focused on getting bugs fixed and features implemented that are of the utmost importance and relevance to the majority, not minority, of our users.  Things like APCUPSd support come first.  Getting notifications better documented and controllable comes first.  Getting this new Docker Manager and another major tool (to be announced soon) implemented comes first.  Looking into btrfs / loopback image issues comes first.  Support for colorblind users comes first.  Etc. etc.

 

To be perfectly clear with you NAS:  I do NOT disagree with your point that this is something that belongs on the roadmap.  I just don't think it's important for us to even schedule on the roadmap in the next 30-60 days.

 

[RMC] Good stuff. When you are looking at this, we need to enforce a check that each repo uses a subfolder. Currently we don't enforce this, and it could easily cause a clash where two repos contain a container with the same name.

 

Good catch.  Will mention this to Eric today as well.  He's planning on changing the "add container" interface a bit as well (especially the "choose your template" dropdown list).


Sure, plenty of other stuff needs to come first, but not even being prepared to roadmap something because other stuff is too important kinda defeats the point of having a roadmap.

 

I'm not sure I can add any more justification without saying the same thing in a different way, which is pointless and helps no one. Truth be told, it doesn't bother me personally because I can and do master my own containers exactly how I want them, and I have fast internet now, but hopefully one day, when new users go "addon click crazy" as they invariably all do, we can come back to this with fresh eyes... and perhaps that is the natural time to discuss it anyway, when it is a real problem rather than one that may happen sometime in the future.

 

If you are working on the templating spec, then each field should have a description item in the XML so that container devs can add tooltips etc. for every default value - e.g., say port 8080 is for API access and 81 is for human access.
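For illustration, a per-field description might look something like this in a template; the element and attribute names here are purely hypothetical, not the actual template spec:

```xml
<!-- Hypothetical sketch only: not the real unRAID template schema -->
<Config Name="WebUI Port"
        Target="8080"
        Default="8080"
        Description="Port 8080 is for API access; port 81 is for human access."/>
```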

 

Stepping away from this thread now as I am hogging it. Time for others to put forward their suggestions.


Sure, plenty of other stuff needs to come first, but not even being prepared to roadmap something because other stuff is too important kinda defeats the point of having a roadmap.

 

My reason for not putting it on the roadmap is that we don't know yet if this is something even important enough to implement in 6.1.  I want to avoid scheduling requests like this until we know they belong in a particular release and we are committed to doing so.  My fear is that I put it into the 6.1 roadmap section, then 6.1 comes along and it's a non-issue for 99% of folks, so we bump it to 6.2.  6.2 comes along, then it's still a non-issue, so we bump it to 6.3.  Here's my suggestion:

 

Post this as a feature request in the roadmap forum.  Garner support from others that this is an important enough issue to address.  If so, it will be prioritized for implementation.  Prepare for me to jump in there and call out all the points I've made here, however, because I want folks to understand the measure of impact this feature will provide them.

 

Truth be told, it doesn't bother me personally because I can and do master my own containers exactly how I want them, and I have fast internet now...

 

I think the majority of folks that are passionate about this topic like you are probably in the exact same boat as you (maybe not fast Internet, but they can manage their containers just fine).  What I'm waiting for is a bunch of users to post:  "man, why does this take forever for me to download?  This is such a pain.  Why is it taking up so much space?  I can't stand this because X, Y, and Z."  Until then, it's really a non-issue for users.

 

but hopefully one day, when new users go "addon click crazy" as they invariably all do, we can come back to this with fresh eyes... and perhaps that is the natural time to discuss it anyway, when it is a real problem rather than one that may happen sometime in the future.

 

EXACTLY!  That has been my point all along.  Let this topic find its way to the surface naturally over time and when it does, we will address it.

 

If you are working on the templating spec, then each field should have a description item in the XML so that container devs can add tooltips etc. for every default value - e.g., say port 8080 is for API access and 81 is for human access.

 

Stepping away from this thread now as I am hogging it. Time for others to put forward their suggestions.

 

Good suggestion.  I'll probably create a thread about the templates / repos separate from this one as soon as we are actively working on the design for it.


Most of the changes look good to me.

 

Only thing that I'm not sure on is the removal of "ports".  While it is easily accessible by clicking on the name of the container, so it might not be necessary on the main screen, I think ports are a piece of info that is needed by the user, not just the developer.

 

EDIT: We probably don't need to show the container port, but the host port would be useful to the user.


Most of the changes look good to me.

 

Only thing that I'm not sure on is the removal of "ports".  While it is easily accessible by clicking on the name of the container, so it might not be necessary on the main screen, I think ports are a piece of info that is needed by the user, not just the developer.

 

EDIT: We probably don't need to show the container port, but the host port would be useful to the user.

 

You must have missed one of the replies; we are adding the ports column back.


Most of the changes look good to me.

 

Only thing that I'm not sure on is the removal of "ports".  While it is easily accessible by clicking on the name of the container, so it might not be necessary on the main screen, I think ports are a piece of info that is needed by the user, not just the developer.

 

EDIT: We probably don't need to show the container port, but the host port would be useful to the user.

 

You must have missed one of the replies; we are adding the ports column back.

 

I did read through... it must have got lost in it.  Thanks! Great work as always.


I've noticed that the more dockers I load, the slower the docker container display gets.

 

I'm running:

 

Deluge

cAdvisor (not running, but installed)

Couchpotato

duckdns

headphones

lazy librarian

mediabrowser

Mylar

nzbdrone

nzbget

plex media server

Ubooquity

 

I'm running one VM (musicbrainz ubuntu VM)

 

Plugins:

Web Virtual Manager Support

Libvirt support

dynamix system temp

dynamix system statistics

dynamix system information

dynamix webgui

dynamix cache directories (not running at the moment though)

dynamix active streams

 

From a fresh boot, it takes about 10-15 seconds to bring up the docker page.  If I stop a docker, wait, then restart one, it takes at least 5-7 seconds from when it actually stops until I can interact with the UI again.  Click restart on one of them, same thing: 5-7 seconds before it will allow any interaction.

 

Didn't have that problem before b12 running the plugin version of the docker page.  Same dockers installed.

 

If I'm at work using it remotely (I have a hardware VPN to home), it takes at least double the time.  It seems to slow down where it's bringing up the banners/icons.

 

 


I've noticed that the more dockers I load, the slower the docker container display gets.

 

I'm running:

 

Deluge

cAdvisor (not running, but installed)

Couchpotato

duckdns

headphones

lazy librarian

mediabrowser

Mylar

nzbdrone

nzbget

plex media server

Ubooquity

 

I'm running one VM (musicbrainz ubuntu VM)

 

Plugins:

Web Virtual Manager Support

Libvirt support

dynamix system temp

dynamix system statistics

dynamix system information

dynamix webgui

dynamix cache directories (not running at the moment though)

dynamix active streams

 

From a fresh boot, it takes about 10-15 seconds to bring up the docker page.  If I stop a docker, wait, then restart one, it takes at least 5-7 seconds from when it actually stops until I can interact with the UI again.  Click restart on one of them, same thing: 5-7 seconds before it will allow any interaction.

 

Didn't have that problem before b12 running the plugin version of the docker page.  Same dockers installed.

 

If I'm at work using it remotely (I have a hardware VPN to home), it takes at least double the time.  It seems to slow down where it's bringing up the banners/icons.

The load time performance of this page is phenomenally improved in the new version. Many tweaks have been made that accomplish this. More info soon...


Don't know if it's already been mentioned or included, but would it be possible to display the docker container's IP address somewhere?

 

Stopping/starting containers causes them to get a new IP each time. So if I want to ssh* into my nZEDb container, I have to do a "docker inspect nZEDb | grep IPAddress" to find out what its current IP is.

 

Having it visible in the docker management ui would be really useful.

 

 

* I know there are debates about whether ssh should be used to interact with a container, but in this case the recent "docker exec" functionality doesn't work.
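Incidentally, the grep in the post above can be made more precise by reading the field straight out of the JSON that "docker inspect" emits. A sketch (the container name is the one from the post, and the sample JSON is heavily abbreviated):

```python
import json

# docker inspect emits a JSON array, one object per inspected container;
# the address lives under NetworkSettings.IPAddress.
def container_ip(inspect_output: str) -> str:
    data = json.loads(inspect_output)
    return data[0]["NetworkSettings"]["IPAddress"]

# Abbreviated sample of what `docker inspect nZEDb` might return:
sample = '[{"Name": "/nZEDb", "NetworkSettings": {"IPAddress": "172.17.0.5"}}]'
print(container_ip(sample))   # 172.17.0.5
```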


 

 

* I know there are debates about whether ssh should be used to interact with a container, but in this case the recent "docker exec" functionality doesn't work.

 

What do you mean it doesn't work? I have never seen it not function. You mean it's not suitable to your specific situation. Please be careful when you're throwing out terms and statements such as this.


I didn't say it didn't work in general. As I said, it doesn't work in my particular use case.

 

For monitoring the container I log in via SSH and connect to an established tmux session.

 

With the current build of docker (and I have no idea if it has been/will be fixed in later versions) the 'console' you are presented with when you run docker exec isn't a true TTY. Because of that tmux doesn't work.

 

I wasn't complaining about that, just asking if the container IP could be visible somewhere.


I was just being pedantic because that is how things get blown out of control: someone posts something specific to their situation, others then take that to mean it applies to every situation, and then people start thinking there are larger issues. I was just making it clear to those not familiar with the situation and technology.

 

I wonder if there might be a way to have that fake TTY turn into a real TTY as far as tmux is concerned. Though that is something I have noticed with other internal utilities like less or vi: the terminal session is not the same as what's outside of docker.


Request: Change to "Create-Container" pop-up.

 

Some containers need extra arguments (e.g. --device /dev/blaa:/dev/blaa or --cpuset=x,y). Just like you explain in your topic:

 

http://lime-technology.com/forum/index.php?topic=36257.0

 

As you know these currently have to be added prior to the "Repository" field.

 

Would it be possible to have a new field (even under "Advanced Mode") called Extra Arguments or something?

 

Thanks,

 

The Capt.

 

P.S: Any hints on when we might get to see this updated DockerMan?!!


Archived

This topic is now archived and is closed to further replies.

