lonix

Community Developer
  • Posts: 260
  • Joined
  • Last visited

Everything posted by lonix

  1. Oh, and if I forgot to address your issue here, I'm sorry. Please ask again in the individual thread here in this forum; link in the OP. (And if you are running non-unRAID, please raise your issues at the http://linuxserver.io web site.)
  2. Easy, we already have a Sab image that lonix made. I can convert it to be an alpha/beta-friendly version, I'm sure. I'll add it to my growing to-do/wish list.
     What we meant was that the Plex app updates on container restart. We (both lonix and I) update the file in question within 4 hours max of a Plex release, but usually a lot less. All the user needs to do is restart the container. Now, the issue with Plex specifically is the handling of auto-update for Plex Pass. It's not easily possible to automate that yet, but the non-Pass version is easy by relying on a third-party service (baconater) like he does, I think.
     Thanks, yeah, I knew you meant on restart, but it's not "automated", so to say, if you decide to hightail it out of here (not saying you will, just making the point). I assume the issue is the random string Plex passes in the URL for each release? https://downloads.plex.tv/plex-media-server/0.9.12.4.1192-8a45d21/PlexMediaServer-0.9.12.4.1192-8a45d21-x64-UnRAID.txz
     Yes and no, the "random" string is not random, it's the actual build number. The real problem is that there is no way to request the "latest-plexpass" version from a development standpoint. We have debated using baconcode like some other containers do, but I'm not quite sure I'm comfortable using that yet. So for now the version number must be updated manually, but it should seem fairly automatic to the user. (I have been doing it for the last 1.5 years already.) And I gave the rest of the lsio crew the chance to do the same.
     Sorry, we have discovered an issue; I will look into it and get back to you soon.
     I shall commence kicking IB to add said link to the OP.
     We always welcome contributors; fork it on GitHub and send us a pull request when you're done. Oh, and join us on IRC (freenode #linuxserver.io) to discuss specifics if you wish.
     Done... also inadvertently fixed up the permissions errors too. That user init script needs to be the first to run, otherwise any chowns in init files that run before it will use the original uid and gid for abc.
     Thanks, I don't understand how I missed this for so long in so many containers.
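     To illustrate the point about the build number: below is a hedged sketch of what fetching a given release looks like, assuming the URL pattern quoted above (the variable name and output path are illustrative only). Bumping the version is just a matter of editing one string by hand.
        # Sketch only: the exact build string must be updated manually for each release.
        PLEX_VERSION="0.9.12.4.1192-8a45d21"
        curl -o /tmp/plexmediaserver.txz \
          "https://downloads.plex.tv/plex-media-server/${PLEX_VERSION}/PlexMediaServer-${PLEX_VERSION}-x64-UnRAID.txz"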
  3. And now I have to pay attention to 10 support threads instead of 1.
  4. These are the ones I run on my unRAID box.
  5. You are welcome.
     Brilliant, we are now one step closer to fixing this. The folder "/home/docker/volumes/downloads" gets its ownership and permissions from the user used to execute "docker run" and could simply be chowned to 99:100; the same goes for "/home/docker/config/testing/". You could try chowning both directories, removing the config file, and rebooting the container perhaps.
     Chowning after the fact seems to have worked... One last issue though... In the STDOUT logs I keep getting this error: ERROR Starting daemon failed: could not acquire lock on lock-file /downloads/nzbget.lock. I have deleted that file, had it re-created, and also chowned it, but no luck... Thanks for all your help.
     Are you sure the /downloads folder is owned by the correct user now? Try this: chown -R 99:100 /home/docker/volumes/downloads && rm /home/docker/volumes/downloads/nzbget.lock
     Welcome, glad you like it. It's neat, isn't it? I haven't added that feature yet. I wasn't totally convinced of Smokeping's stability yet, but it's just been running for a week for me at home and on my VPS. I guess I can tick that box and move on to features. You can of course modify the Targets file in the /config volume to change the ping targets. I'll get around to sorting out the email thing hopefully by the end of this week if work quietens down a bit.
     A quick Google shows that you can use ssmtp with Smokeping and it's fairly easy to use in a container. I use it here: https://github.com/sparklyballs/docker-containers/tree/master/pydio - in the src folder is ssmtp.conf, which I save out into a config folder for editing; the user puts in their email details and restarts the container.
     Yup, I have used ssmtp myself in my private containers as well as the ones I use for work. Should not be a problem.
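     As a reference for the ssmtp approach, here is a minimal sketch of seeding such a config, assuming the file lives in the /config volume and a generic SMTP relay; every value below is a placeholder the user would edit before restarting the container:
        # Sketch only: write a placeholder ssmtp.conf into the /config volume.
        printf '%s\n' \
          'root=you@example.com' \
          'mailhub=smtp.example.com:587' \
          'AuthUser=you@example.com' \
          'AuthPass=your-password' \
          'UseSTARTTLS=YES' \
          'FromLineOverride=YES' > /config/ssmtp.conf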
  6. No, you want to add this environment variable in dockerMan (click on Advanced to see it).
  7. That's great! Unfortunately when I try to add the quassel container I get this error:
     Warning: DOMDocument::load(): Opening and ending tag mismatch: Date line 5 and Container in /boot/config/plugins/dockerMan/templates/linuxserver.io/quassel-core.xml, line: 58 in /usr/local/emhttp/plugins/dynamix.docker.manager/createDocker.php on line 377
     Warning: DOMDocument::load(): Premature end of data in tag Container line 2 in /boot/config/plugins/dockerMan/templates/linuxserver.io/quassel-core.xml, line: 59 in /usr/local/emhttp/plugins/dynamix.docker.manager/createDocker.php on line 377
     Warning: DOMDocument::load(): Opening and ending tag mismatch: Date line 5 and Container in /boot/config/plugins/dockerMan/templates/linuxserver.io/quassel-core.xml, line: 58 in /usr/local/emhttp/plugins/dynamix.docker.manager/dockerClient.php on line 220
     Warning: DOMDocument::load(): Premature end of data in tag Container line 2 in /boot/config/plugins/dockerMan/templates/linuxserver.io/quassel-core.xml, line: 59 in /usr/local/emhttp/plugins/dynamix.docker.manager/dockerClient.php on line 220
     Found and fixed the error, thanks sparklyballs
  8. Brilliant, we are now one step closer to fixing this. The folder "/home/docker/volumes/downloads" gets its ownership and permissions from the user used to execute "docker run" and could simply be chowned to 99:100; the same goes for "/home/docker/config/testing/". You could try chowning both directories, removing the config file, and rebooting the container perhaps.
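     A rough sketch of that suggestion, assuming the host paths above, a container created with the name "nzbget-test", and a config file name that is only a placeholder:
        # Sketch only: fix ownership on both host paths, drop the old config, restart.
        chown -R 99:100 /home/docker/volumes/downloads /home/docker/config/testing
        rm /home/docker/config/testing/nzbget.conf   # placeholder name; let the container regenerate it
        docker restart nzbget-test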
  9. Hi lonix, thanks for getting back to me... I have correctly entered the env. variables -e puid=99 and -e pgid=100... I am not using the unRAID XML template... I am using this docker on a Debian install... Thanks!
     Could you kindly paste your docker run/create command?
     Here it is: docker run -d --name="nzbget-test" --restart="always" --net="host" -v "/home/docker/config/testing/":/config -v "/home/docker/volumes/downloads":/downloads -e PUID=99 -e PGID=100 linuxserver/nzbget
     I have other containers that use the set-user environment variables, like the ones that hurricane has made, and the ownership is correct... I might be wrong, but I think the way the scripts are set up, they do not pull in the variables for setting the PUID & PGID... Thanks for your help.
     If you're running docker on a Debian installation, then shouldn't the puid and pgid values be different?
     Normally yes, but I created a user "docker" with the puid:pgid of 99:100. I have about a dozen containers up and running correctly with my "docker" user... The owner guid, puid that are created with this nzbget container do seem to get the values from the environment variables though...
     If you look in the source code, there is a script in init called 90_new_user.sh; this script takes care of changing the IDs of anything related to the user "abc" to the correct ID, whatever that might be. In your case it seems the GID/UID is not getting set for some reason. Could you please post the part of the log that goes something like this:
     -----------------------------------
     GID/UID
     -----------------------------------
     User uid: $(id -u abc)
     User gid: $(id -g abc)
     -----------------------------------
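     For context, a rough sketch of what an init script along those lines typically does; this is an illustration under assumptions, not the actual 90_new_user.sh, and the default IDs are placeholders:
        # Sketch only: remap the built-in "abc" user/group to the PUID/PGID passed at run time.
        PUID=${PUID:-911}
        PGID=${PGID:-911}
        groupmod -o -g "$PGID" abc    # change the abc group to the requested GID
        usermod -o -u "$PUID" abc     # change the abc user to the requested UID
        echo "-----------------------------------"
        echo "GID/UID"
        echo "-----------------------------------"
        echo "User uid: $(id -u abc)"
        echo "User gid: $(id -g abc)"
        echo "-----------------------------------"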
  10. I agree, this is a major premise, and for software that spans thousands of systems in a cluster-like configuration I could not agree more. It makes scaling/moving and so on so much safer and more convenient. The major flaw is of course scaling container updates, e.g. pushing an updated image with a newer/different version of node.js to tens of hosts and hundreds of containers makes it a different environment while the containers cycle through restarts. This is something we are struggling with at our data center, and I know many others are trying to figure it out as well. (No, auto-update is not the answer there.) But luckily we (or at least I) do not require an "identical" user experience. But yes, you are right. I was merely pointing out how sometimes usability and security are diametric opposites. Also, I want to point out, if it hasn't already been brought up, that authenticity is not the same as security. Just because you know this application/container is claimed to be published by XYZ does not mean it is secure and won't do something nasty. Authenticity keys get stolen all the time; just look at all the SSL certificate / digital certificate authority hacks over the last five years. Totally right.
  11. I pushed a new version of this image earlier today; now the answer to both of those questions is YES.
  12. Hi lonix, thanks for getting back to me... I have correctly entered the env. variables -e puid=99 and -e pgid=100... I am not using the unRAID XML template... I am using this docker on a Debian install... Thanks! Could you kindly paste your docker run/create command?
  13. I just checked 2 of my installations, and I had no problem changing the password from within the web UI. Can you post the logs from when you try to change the password? What happens when you try to change it? Do you get any error messages?
  14. Are you running this via the built-in XML? From Community Apps?
  15. This is why we need more containers like the "Linuxserver.io" ones, where we'd love more authors and contributors (exactly to ensure that we always have support/maintainers).
  16. I've sat on both sides of the fence on updating in containers; my preference is that if you're going to do it, it should be optional for the user (an edge=1 type deal). I don't like apt updates in the container though; I prefer to do any updates like that at the image level, although I did suggest it as a fix for a container that was installing git in an edge routine and was failing because of dependency changes in apt. The docker best practices that Squid was referring to are not official Docker ones per se, but guidelines for dockers in the unRAID environment. I don't think malware is the issue as such, though, more stability; with containers self-updating, things can break and support becomes a headache with a bajillion different versions out there.
     * edge=1 suggests to me that one opts in to use beta/unstable software, not that one wants to use the "latest official", but then again English is my third language.
     * If dockers in an unRAID environment want that as a best practice, we (linuxserver.io) shall take an official stance against it. The goal is to keep the target software at "latest/greatest"; everything else is secondary.
     * As for stability, the most common reason any software is updated is to improve stability. But then again, yes, you are/could be right, sometimes updates will cause software to break or systems to become unstable. However, that's highly unlikely.
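     To make the opt-in idea concrete, here is a hedged illustration of the pattern (not actual container code; the package name is a placeholder): an init step that only chases the newest build when the user explicitly passes -e EDGE=1.
        # Sketch only: upgrade at start-up only when the user opts in with EDGE=1.
        if [ "$EDGE" = "1" ]; then
            echo "EDGE=1 set, upgrading to the latest available build"
            apt-get update && apt-get install -y --only-upgrade some-package
        else
            echo "EDGE not set, keeping the version baked into the image"
        fi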
  17. This. Also, for a base image look at phusion/baseimage.
  18. Even examining the docker container's Dockerfile and the scripts used in starting isn't enough. What becomes extremely dangerous is when the Docker container does automatic updates. One could compromise all the auto-updating containers by compromising where they pull their auto-updates from. Given the recent hack on Plex, imagine if they got a little further and slipped some nefarious code into the Plex repository. Now your auto-updating Plex* Docker containers will pull down that evil update. Which is why auto-updating violates best practices for Docker containers.
     Sir, I must respectfully disagree. Saying that containers should not have auto-updates is like saying an operating system should not have auto-updates. No, there is absolutely no guarantee that nobody could insert malicious code into your software, but the problem then is with the code/repo maintainer. E.g. my containers auto-update from a) git or b) the Ubuntu repo. And let's all be honest, getting malicious code onto the author's git should be so hard that if you succeeded, anyone would be warranted in stopping using said software altogether. And getting the code into the Ubuntu repo is not that easy either. IMHO, it's not about whether or not to auto-update; it's all about what sources one uses for such activities. It is all in the chain of trust, just like certificates (SSL). The user trusts the Docker author, who in turn trusts the "Ubuntu security team" or the software author (via git or such), who in turn trust someone else to do something. Let's look at any non-auto-updating container, and what happens when a new version of the included software is released:
     1. Docker maintainer gets notified of updated software.
     2. Docker maintainer updates his/her code to reflect **RUN apt-get install -y software-package-1.8.<newversion>.deb**
     3. Docker maintainer pushes this new code to the Docker registry and causes delays/workload for the registry.
     4. User gets notified that an update is needed.
     5. User installs the new update.
     6.
     7. Profit
     At what point do you see any malware control of the new software package? It won't happen, because it's a lot of work, and there are not many Docker container maintainers capable of even detecting said malware. Also, to the point: I could not find any mention of not auto-updating via Docker. In fact, I attended a seminar where the topic was automatic updating and maintenance of containers.
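     As one hedged illustration of "it's all about the sources": an update step that only trusts what it can verify against a checksum published by the author (the URL, package name and checksum below are placeholders, not a real release):
        # Sketch only: download, verify against the author's published checksum, then install.
        VERSION="1.8.2"
        curl -fsSL -o /tmp/software-package.deb "https://example.com/software-package_${VERSION}_amd64.deb"
        echo "expected_sha256_from_the_author  /tmp/software-package.deb" | sha256sum -c - || exit 1
        dpkg -i /tmp/software-package.deb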
  19. Sounds like you forgot to set the value of PGID and PUID? E.g. "-e PUID=99".
  20. 1) No, but it is planned; 2) currently in development.
  21. Hi, everyone. I would just like to say we do take requests, and another major difference here is that anyone is encouraged to contribute to these containers. Also, I approve of this thread.
  22. Or you could just re-pull the image. Sent from my iPad using Tapatalk
  23. I'm all for using cool and awesome things that make an insanely simple task require a doctorate to do. But let's ask ourselves, why would we? Why would we run an SSH server to manage a Python script? Therefore, I throw phusion out the window. What is there to manage? docker pull/start/stop/kill should take care of 99% of our problems. Who do you want to develop containers? Who do you want to use them? When we are talking unRAID, using a correctly set up tlc with bells and whistles for something that can be solved with a while-true loop can be done by 4-5 people. This is not, nor will it ever be, an enterprise environment, and it should not be treated as such. And why would you update your working docker? Anyhow, I would never use a docker I didn't make myself. That's how one learns. Sent from my iPad using Tapatalk
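     For what it's worth, a rough sketch of the "while true" approach being alluded to; the image, container name, host paths and interval are placeholders, and it bluntly recreates the container on every pass:
        # Sketch only: periodically re-pull the image and recreate the container.
        while true; do
            docker pull linuxserver/nzbget
            docker stop nzbget; docker rm nzbget
            docker run -d --name=nzbget -e PUID=99 -e PGID=100 \
                -v /home/docker/config/nzbget:/config \
                -v /home/docker/volumes/downloads:/downloads \
                linuxserver/nzbget
            sleep 86400   # once a day
        done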