butlerpeter

Everything posted by butlerpeter

  1. Just wanted to come and confirm that the new docker stop/start script functionality worked great. Thanks for adding that.
  2. Knowing that the script got called (or not) should be enough for now. Thanks.
  3. Thanks for that - I've just put my scripts in place - will see what happens next time containers are updated. Incidentally - does anything get logged anywhere when the scripts are called?
  4. Is there any possibility for the ability to run a custom script after dockers have been updated? I have a self built docker container which doesn't get updated very often. But it depends on a mariadb container that does get updated. When the weekly docker update process happens, if the mariadb container has been updated then my custom one falls over. It's a simple fix for me to ssh in and restart that container, but if it was possible to script it then it would save me having to (remember to) check.
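A hook like the one requested above could be plain shell, assuming unRAID (or a plugin) gains a way to call a user script after the update run. This is only a sketch: the container name "myapp" and the down-check are illustrative, not a real unRAID API.

```shell
#!/bin/bash
# Hypothetical post-docker-update hook: restart a dependent container
# that falls over when its database container is recreated.
# Container names are examples only.

# True when the named container is not currently running.
container_down() {
  [ "$(docker inspect -f '{{.State.Running}}' "$1" 2>/dev/null)" != "true" ]
}

# Restart the container only if it actually went down.
restart_if_down() {
  if container_down "$1"; then
    docker restart "$1"
  fi
}

# Usage, as the body of a post-update script:
#   restart_if_down myapp
```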
  5. I wouldn't try and use that one, it's rather hacky and geared towards my personal setup. I only pushed it to the docker hub to ease reinstallation for myself.
  6. Excellent news. Look forward to it being merged and released.
  7. Recently had a bit of a strange issue with the mariadb container. It had been running fine for weeks, ever since migrating to it from a container by another author. Yesterday I did the upgrade to unRAID 6.1.7, so in the course of stopping the array to reboot the dockers were stopped. The upgrade went to plan, I rebooted the server, started the array again and all of my docker containers came back up - or so I thought. It was only some hours later that I noticed something was wrong with the mariadb container.

     Looking in the logs (which I don't have to hand unfortunately) I saw continuous, repeated, failed attempts to start mariadb. There was a log message about it not having been shut down cleanly (sorry, I don't have the exact text), with messages about recovering from a crash, and then each attempt to start was followed by a message saying that the table mysql.users didn't exist (again, sorry for not having the exact text).

     Looking at the /config/databases folder I saw that the owner of the mysql directory had been changed from 'nobody' to '103' - 103 seems to be the uid of the mysql user inside the container. "chown -R nobody:users mysql" fixed the complaint about the mysql.users table, but then there was a similar message about another mysql.*something* table, and when I looked the owner of the mysql folder had changed to 103 again. Changing the owner back to nobody a second time fixed things and mariadb started up correctly.

     I suspect that what happened was an unclean shutdown (of mariadb): when starting up again it attempted to recover, and during that process it tries to ensure it has correct access, so it changes the owner of the folder to the mysql user. That then leads to access problems and stops it accessing those mysql.* tables. I wonder if the mariadb startup script ("/usr/bin/mysqld_safe", I think!) should be changed to take into account the uid that has been specified for the container to run as, instead of just using the mysql user? Hope that makes sense!
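The manual fix described in that post could be scripted along these lines. A sketch only: the nobody:users owner and the uid 103 come from the post itself; the function names and the path in the usage comment are illustrative.

```shell
#!/bin/bash
# Check for, and undo, the ownership change described above: MariaDB's
# crash recovery leaving the data directory owned by the in-container
# mysql user (uid 103) instead of the unRAID default nobody:users.

# True when the directory has been reclaimed by the container's
# internal mysql uid.
owned_by_container_mysql() {
  [ "$(stat -c %u "$1" 2>/dev/null)" = "103" ]
}

# Hand the directory back to nobody:users, but only when needed.
fix_ownership() {
  if owned_by_container_mysql "$1"; then
    chown -R nobody:users "$1"
  fi
}

# Usage (on the unRAID host):
#   fix_ownership /config/databases/mysql
```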
  8. I get what you're saying about the port. But the unRAID gui is running on port 80 on the host - not port 80 of the container. It's unlikely that anybody will have their unRAID gui exposed externally on port 80. Most likely, as in my case, they might have incoming traffic on port 80 redirected to another port on the server at the router level. In my case I map container port 80 to host port 9080 (for example), then in my router redirect incoming port 80 traffic to port 9080 on my server. Maybe an env variable in the container to specify which method should be used could be a solution.
  9. Thanks, I clicked the remove button and removed the subdomain field. Have found a couple of issues though. Firstly, after getting it up and running I ran 'docker logs Nginx-letsencrypt' and saw a lot of:

     runsv memcached: fatal: unable to start ./run: access denied
     runsv php-fpm: fatal: unable to start ./run: access denied
     (repeated over and over)

     I had to enter the container ('docker exec -it Nginx-letsencrypt /bin/bash') and chmod +x the /etc/services/php-fpm/run and /etc/services/memcached/run files. I also had issues with the container generating the certificates, because the letsencrypt server couldn't connect back to the client to verify the domain. That was due to my using port 443 for ssh access (so I can access my server through the work proxy), so I am unable to redirect incoming ssl to that port. To get around it I had to enter the container again and modify '/defaults/letsencrypt.sh' to change the standalone supported challenge mode to http-01 instead of tls-sni-01. After doing all of that it seems to be working - I can get to the default landing page on http via the host port mapped to container port 80, and also on https via the port mapped to container port 443. Now I just need to configure nginx properly.
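The two in-container fixes described in that post can be applied from the unRAID host without an interactive shell. A sketch: the container name is as used in the post, and the sed expression assumes the challenge mode appears literally in /defaults/letsencrypt.sh.

```shell
#!/bin/bash
# The two manual fixes described above, as host-side functions.

# Make the runsv service scripts executable so memcached and php-fpm
# can start.
fix_runsv_perms() {
  docker exec Nginx-letsencrypt chmod +x \
    /etc/services/php-fpm/run /etc/services/memcached/run
}

# Switch the standalone challenge from tls-sni-01 (needs port 443) to
# http-01 (needs port 80). The sed expression is an assumption about
# the script's contents.
switch_challenge_mode() {
  docker exec Nginx-letsencrypt \
    sed -i 's/tls-sni-01/http-01/' /defaults/letsencrypt.sh
}
```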
  10. aptalca, what if you don't want to specify any subdomains e.g. you want a certificate to cover example.com and not www.example.com? I tried leaving the subdomain field empty but got a "this is a required field" message.
  11. As posted in the KVM forum, in the PXE booting OpenELEC thread. I've created a container, based off of sparklyballs tftpdserver dockerfile that runs dnsmasq configured to proxy dns/dhcp to an existing service (e.g. a router) and which provides the tftp server required for pxe booting. I've not created an unRAID template or repository, but the link to it on the docker hub is https://registry.hub.docker.com/u/butlerpeter/dnsmasq-docker-unraid/
  12. I haven't gotten around to creating an unRAID repository yet, but I have now set my dnsmasq docker container up on the docker registry. The link is https://registry.hub.docker.com/u/butlerpeter/dnsmasq-docker-unraid/
  13. If anybody is interested in a docker container that provides proxy pxe booting and tftp services purely from dnsmasq (so ideal if you can't change the appropriate settings on your router), I spent some time yesterday getting it set up. I used sparklyballs tftp docker file as a base. You can grab it by cloning from my github: https://github.com/butlerpeter/dnsmasq-docker The README.md file has the command line that I use to run it. Couple of things to be aware of:

      • Two volumes are mapped: /config is where the dnsmasq configuration file gets stored and /tftpboot is the path to the tftp folder.
      • When running, pass an environment variable called HOST_ADDRESS which is set to the ip of your server (e.g. -e HOST_ADDRESS=192.168.0.10).

      You should be able to drop this in as a replacement for the tftp server section in johnodons guide. I created it mainly for my own use but am happy to share. If anybody wants to "unraid-ify" it - creating docker manager templates and all of that - please feel free, as I have no idea how all of that hangs together as yet.
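For reference, a proxy-DHCP dnsmasq configuration of the kind such a container might generate could look like the following. All values are illustrative; the subnet and boot file would come from your own network, and the container presumably substitutes HOST_ADDRESS at startup.

```
# Illustrative dnsmasq.conf for proxy-DHCP PXE booting.
port=0                          # disable DNS; DHCP/TFTP only
dhcp-range=192.168.0.0,proxy    # proxy mode: don't hand out leases,
                                # the router stays the real DHCP server
dhcp-boot=pxelinux.0            # boot file offered to PXE clients
enable-tftp                     # serve the boot file ourselves
tftp-root=/tftpboot             # matches the container's volume mapping
pxe-service=x86PC,"PXE Boot",pxelinux
```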
  14. "or you could just call the notification script directly if you want to bypass the unRAID notification on screen." Thanks for that, I didn't realise that functionality already existed. Works a treat. Should now hopefully get some PushBullet notifications when a couple of long file copies have finished.
  15. Is there a script or anything that can be called to trigger custom notifications? For example, you might have a script that copies a bunch of files from one drive to another (maybe you're migrating from reiser to xfs), it would be good to be able to add a step to the end of that script that triggers a notification to say it has completed.
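That end-of-script step could be a small wrapper around the dynamix notify script that ships with unRAID 6.x. A sketch: the script path and flags below are as found on recent builds, but check your own version before relying on them.

```shell
#!/bin/bash
# Fire an unRAID notification from the end of a long-running script.
# Path and flags assumed from unRAID 6.x; verify on your build.
notify_done() {
  local notify=/usr/local/emhttp/plugins/dynamix/scripts/notify
  if [ -x "$notify" ]; then
    "$notify" -e "$1" -s "$2" -d "$3" -i normal
  fi
}

# Example: append to the end of a copy/migration script
#   rsync -a /mnt/disk1/ /mnt/disk2/
#   notify_done "File copy" "Copy finished" "disk1 -> disk2 complete"
```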
  16. I figured that adding cleared drives and copying/moving the data would likely be faster than rebuilding one drive at a time. I could probably even do both copies at the same time. It might also indicate which files, if any, are damaged - if during the copy it says "can't read file X". Obviously the new config, after removing the "bad" drives, will require a parity build.
  17. Hi guys, I've noticed that 2 of the drives in my array, both 3TB WD Greens, are now showing pending sectors. One is reporting 1 sector and the other 5. They both also report 1 for Offline Uncorrectable. I've been a bit dubious about these drives for a while (probably should have replaced them before now). During parity checks they've been throwing some read errors, which reduced when I changed and rerouted cables, but hadn't shown any sector issues until now. I am currently in the process of moving some of the files from the drives onto other drives in the array, although I don't have space for all of it, and have excluded those drives from any shares that are likely to get written to. I've got a couple of replacement drives on order which should be delivered tomorrow and I'll run a couple of preclears on each of them. Once that is done, what should be the procedure for replacing the WD drives and moving the data to the new ones? Am I best off adding the drives to the array, moving the data from the WD drives and then removing the WD drives and doing a new config?
  18. I didn't say it didn't work in general. As I said, it doesn't work in my particular use case. For monitoring the container I log in via SSH and connect to an established tmux session. With the current build of docker (and I have no idea if it has been/will be fixed in later versions) the 'console' you are presented with when you run docker exec isn't a true TTY. Because of that tmux doesn't work. I wasn't complaining about that, just asking if the container IP could be visible somewhere.
  19. Don't know if it's already been mentioned or included but would it be possible to display the docker containers ip address somewhere? Stopping/starting containers causes them to get a new ip each time. So if I want to ssh* into my nZEDb container I have to do a "docker inspect nZEDb | grep IPAddress" to find out what its current ip is. Having it visible in the docker management ui would be really useful. * I know there are debates about whether ssh should be used to interact with a container, but in this case the recent "docker exec" functionality doesn't work.
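The inspect-and-grep step from that post can also be done with docker's built-in Go template support, which prints just the address (this is standard docker CLI behaviour; only the helper name is made up here):

```shell
#!/bin/bash
# Print a container's current IP using docker's --format templates,
# avoiding the grep over the full inspect output.
container_ip() {
  docker inspect -f '{{.NetworkSettings.IPAddress}}' "$1"
}

# Example:
#   ssh root@"$(container_ip nZEDb)"
```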
  20. Yep, if that's what those codes mean (I found it in a forum somewhere). Having it added to your plugin would be great if you get a chance.
  21. Nice little addition. I installed it and logged in as a non-root user via ssh, worked great. But if I ran su to do something as root it didn't update the window title any more. So I added a ~/.bashrc file which just contains

      source /etc/profile

      and now it works just fine. Any possibility you could add the creation of the .bashrc file to the plugin? It also doesn't update from within a screen session, not found a fix for that yet though.

      Edit: I think I found a fix for the screen issue. I changed the added code in /etc/profile to be:

      if [ $?TERM ]; then #paob
      case $TERM in #paob
        xterm*) #paob
          PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"' #paob
          ;; #paob
        screen) #paob
          PROMPT_COMMAND='echo -ne "\033P\033]0;${USER}@${HOSTNAME}: ${PWD}\007\033\\"' #paob
          ;; #paob
        *) #paob
      esac #paob
      fi #paob

      and it works from within a screen session too.