Posts posted by butlerpeter

  1. 9 hours ago, Squid said:

    Actually hadn't forgotten...

     

    I have to wait until Friday's round of lsio updates, but what I'm going to do is this.

     

    No GUI, as I see this as a rather limited user need

     

    - Prior to stopping all the containers, something like "/boot/..../stop/stoppingAll.sh" will be called if it exists

    - Prior to stopping any particular container, a script called "/boot/.../stop/nameOfContainer.sh" will be called if it exists

    - After restarting any particular container, a script called "/boot/.../start/nameOfContainer.sh" will be called if it exists

    - After restarting all the containers, a script called "/boot/.../start/startingAll.sh" will be called if it exists

     

    That should allow you the flexibility to do just about anything you want. Work for you?

    Yes that should do the job. Thanks
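
    For the mariadb dependency described in the next post, a per-container start hook could be as simple as the sketch below (the hook folder path is whatever Squid's plugin ends up using, and the container name here is hypothetical):

    #!/bin/bash
    # Hypothetical start hook, e.g. /boot/.../start/mariadb.sh
    # Runs after the mariadb container has been restarted by the update process
    # and bounces the dependent, self-built container so it reconnects to the database.
    docker restart my-custom-app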

  2. Is there any possibility of being able to run a custom script after Docker containers have been updated?

     

    I have a self-built Docker container that doesn't get updated very often, but it depends on a mariadb container that does. When the weekly Docker update process runs and the mariadb container has been updated, my custom one falls over.

     

    It's a simple fix for me to SSH in and restart that container, but if it were possible to script it, it would save me having to (remember to) check.

  3. Recently had a bit of a strange issue with the mariadb container.

     

    It had been running fine for weeks, ever since I migrated to it from another author's container.

     

    Yesterday I did the upgrade to unRAID 6.1.7, so the Docker containers were stopped as part of stopping the array for the reboot. The upgrade went to plan: I rebooted the server, started the array again and all of my Docker containers came back up - or so I thought.

     

    It was only some hours later that I noticed something was wrong with the mariadb container. Looking in the logs (which I don't have to hand, unfortunately) I saw continuous, repeated failed attempts to start mariadb.

     

    There was a log message about it not having been shut down cleanly (sorry, I don't have the exact text), with messages about recovering from a crash, and then each attempt to start was followed by a message saying that the table mysql.users didn't exist (again, sorry for not having the exact text).

     

    Looking at the /config/databases folder I saw that the owner of the mysql directory had been changed from 'nobody' to '103' - 103 seems to be the UID of the mysql user inside the container. Running "chown -R nobody:users mysql" fixed the complaint about the mysql.users table. But then there was a similar message about another mysql.*something* table, and when I looked, the owner of the mysql folder had changed to 103 again.

     

    Changing the owner back to nobody this time fixed things and mariadb started up correctly.

     

    I suspect what happened is that there was an unclean shutdown (of mariadb), and when starting up again it attempted to recover; during that process it tries to ensure it has the correct access, so it changes the owner of the folder to the mysql user. That then leads to permission problems and stops it accessing those mysql.* tables.

     

    I wonder whether the mariadb startup script "/usr/bin/mysqld_safe" (I think!) should be changed to take into account the UID the container has been configured to run as, instead of just using the mysql user?

     

    Hope that makes sense!
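
    For reference, the fix boiled down to commands along these lines, run from the unRAID host (the host-side appdata path is an assumption; the chown target is as described above):

    # Host-side path to the container's /config is an example - adjust to your mapping
    cd /mnt/user/appdata/mariadb/databases

    # Check who owns the mysql directory (it kept reverting to UID 103)
    ls -ld mysql

    # Reset ownership so the container, running as nobody:users, can read its own tables
    chown -R nobody:users mysql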

  4. I get what you're saying about the port.

     

    But the unRAID GUI is running on port 80 on the host - not port 80 of the container. It's unlikely that anybody will have their unRAID GUI exposed externally on port 80. Most likely, as in my case, they'll have incoming traffic on port 80 redirected to another port on the server at the router level. In my case I map container port 80 to host port 9080 (for example), then have my router redirect incoming port 80 traffic to port 9080 on my server.
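
    As a sketch of that mapping (the image name and other options below are placeholders):

    # Container port 80 exposed on host port 9080
    docker run -d --name nginx-proxy -p 9080:80 some/nginx-image

    # Then, on the router, forward incoming WAN port 80 to <server-ip>:9080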

     

    Maybe an environment variable in the container to specify which method should be used could be a solution.

  5. Thanks, I clicked the remove button and removed the subdomain field.

     

    I have found a couple of issues, though.

     

    Firstly, after getting it up and running I ran 'docker logs Nginx-letsencrypt' and saw a lot of:

     

    runsv memcached: fatal: unable to start ./run: access denied
    runsv php-fpm: fatal: unable to start ./run: access denied
    runsv php-fpm: fatal: unable to start ./run: access denied
    runsv memcached: fatal: unable to start ./run: access denied
    runsv php-fpm: fatal: unable to start ./run: access denied
    runsv memcached: fatal: unable to start ./run: access denied
    runsv php-fpm: fatal: unable to start ./run: access denied
    

     

    I had to enter the container with 'docker exec -it Nginx-letsencrypt /bin/bash' and chmod +x the /etc/services/php-fpm/run and /etc/services/memcached/run files.
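
    In other words, something along these lines (container name as above, paths as shown in the log messages):

    # Open a shell inside the running container
    docker exec -it Nginx-letsencrypt /bin/bash

    # Inside the container: make the runsv 'run' scripts executable
    chmod +x /etc/services/php-fpm/run /etc/services/memcached/run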

     

    I also had issues with the container generating the certificates, because the letsencrypt server couldn't connect back to the client to verify the domain.

     

    That was due to my using port 443 for SSH access (so I can access my server through the work proxy), which means I'm unable to redirect incoming SSL to that port. To get around it I had to enter the container again and modify '/defaults/letsencrypt.sh' to change the standalone supported challenge mode to http-01 instead of tls-sni-01.

     

    After doing all of that, it seems to be working - I can get to the default landing page over HTTP via the host port mapped to container port 80, and over HTTPS via the port mapped to container port 443.

     

    Now I just need to configure nginx properly.

  6. Alright, container updated with support for multiple subdomains.

     

    If you install fresh from the community apps, you'll have a "SUBDOMAINS" variable (under advanced view) that you have to set. The default value is "www", but you can add multiple subdomains as long as they are comma separated with no spaces.

     

    Make sure that the URL field only contains the domain URL without any subdomains, otherwise the symlinks won't work. So if you want to get a cert that covers www.domain.com, www1.domain.com and www2.domain.com, set the URL to "domain.com" and the SUBDOMAINS to "www,www1,www2" and you should be good (see the sketch after this quote).

     

    If you update the container, the XML won't update itself (an unRAID issue), so you can add the SUBDOMAINS variable manually and set it as you like.

     

    Keep in mind that if you change the subdomains later, they likely won't be updated in the certs until the next renewal (which won't happen until the certs are 60 days old). In that case you can delete the local folder and start over. But beware: if you do it too many times in a short period, letsencrypt will block any new cert requests for that domain for some time.
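
    For illustration, the URL and SUBDOMAINS settings described above would look something like this if passed on a plain docker run line (the image name is a placeholder; on unRAID they would normally go into the template fields instead):

    docker run -d --name Nginx-letsencrypt \
      -e URL=domain.com \
      -e SUBDOMAINS=www,www1,www2 \
      some/letsencrypt-image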

    aptalca,

     

    what if you don't want to specify any subdomains, e.g. you want a certificate to cover example.com and not www.example.com?

     

    I tried leaving the subdomain field empty but got a "this is a required field" message.

  7. As posted in the KVM forum, in the PXE booting OpenELEC thread.

     

    I've created a container, based on sparklyballs' tftpdserver Dockerfile, that runs dnsmasq configured to proxy DNS/DHCP to an existing service (e.g. a router) and provides the TFTP server required for PXE booting.

     

    I've not created an unRAID template or repository, but the link to it on the Docker Hub is https://registry.hub.docker.com/u/butlerpeter/dnsmasq-docker-unraid/

  8. If anybody is interested in a Docker container that provides proxy PXE booting and TFTP services purely from dnsmasq (ideal if you can't change the appropriate settings on your router), I spent some time yesterday getting it set up. I used sparklyballs' TFTP Dockerfile as a base.

     

    You can grab it by cloning from my GitHub: https://github.com/butlerpeter/dnsmasq-docker

     

    The README.md file has the command line that I use to run it. A couple of things to be aware of:

     

    Two volumes are mapped - /config is where the dnsmasq configuration file gets stored, and /tftpboot is the path to the TFTP folder.

    When running it, pass an environment variable called HOST_ADDRESS set to the IP of your server (e.g. -e HOST_ADDRESS=192.168.0.10).
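
    The command in the README is the authoritative version, but as a rough sketch it looks something like this (the host-side paths are examples, the image name is taken from the Docker Hub link in the previous post, and --net=host is an assumption here because proxy DHCP needs to see LAN broadcasts - check the README for the exact flags):

    docker run -d --name dnsmasq \
      --net=host \
      -e HOST_ADDRESS=192.168.0.10 \
      -v /mnt/user/appdata/dnsmasq:/config \
      -v /mnt/user/appdata/tftpboot:/tftpboot \
      butlerpeter/dnsmasq-docker-unraid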

     

    You should be able to drop this in as a replacement for the TFTP server section in johnodon's guide.

     

    I created it mainly for my own use but am happy to share. If anybody wants to "unRAID-ify" it - creating Docker manager templates and all of that - please feel free, as I have no idea how all of that hangs together as yet.

     

  9. I would attempt two things:

     

    1)  The BIOS updates

    2)  Try the command-line methods of reading CPU usage and see if the figures fluctuate better

     

    It appears this issue really is resolved, but that the way scaling occurs on more modern systems is different from the way we used to see it occur.

     

    jon,

     

    I will try the BIOS updates when I get the chance. But I have already been using the command-line methods to read usage, as mentioned in my post - I'm not going by the web GUI dashboard figures at all.

     

    Booting without "intel_pstate=disable", the figures are always very high, whereas with it in place, and the server in the same state, the figures drop to expected levels.

  10. For me on RC4, removing "intel_pstate=disable" and then running "cat /proc/cpuinfo | grep MHz" (at various times and without the dashboard open) always shows all CPUs (cores) running at 3400MHz+.

     

    Adding "intel_pstate=disable" back in and they are back to dropping to 800MHz.

     

    The CPU is a Xeon E3-1240v3 on a Supermicro X10SL7-F motherboard. There are BIOS updates for the board that I haven't had a chance to apply yet - it's running v1.1a and I see that v3 is the latest available - but I can't find anything detailing what has changed, to see if there is anything relevant.
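
    For anyone wanting to check the same thing, these are the sorts of commands involved (the sysfs paths assume the kernel's cpufreq interface is exposed):

    # Spot-check the reported core frequencies
    grep MHz /proc/cpuinfo

    # See which frequency-scaling driver and governor are active
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor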

  11. Any of these should do the trick ...

     

    notify -e "Notice Test" -s "Notice Test" -d "This is a test for Notices" -i "normal"
    notify -e "Warning Test" -s "Warning Test" -d "This is a test for Warnings" -i "warning"
    notify -e "Alert Test" -s "Alert Test" -d "This is a test for Alerts" -i "alert"
    

     

    or you could just call the notification script directly if you want to bypass the unRAID notification on screen.

     

    Thanks for that, I didn't realise that functionality already existed.

     

    Works a treat :D

     

    Should now hopefully get some PushBullet notifications when a couple of long file copies have finished.
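
    Something along these lines at the end of a copy script should do it (assuming notify is reachable on the PATH, as in the quoted examples; the paths are placeholders):

    #!/bin/bash
    # Hypothetical long copy followed by an unRAID notification
    if cp -a /mnt/disk1/some_share/. /mnt/disk2/some_share/; then
        notify -e "File Copy" -s "Copy complete" -d "disk1 to disk2 copy finished" -i "normal"
    else
        notify -e "File Copy" -s "Copy failed" -d "disk1 to disk2 copy reported errors" -i "warning"
    fi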

  12. Is there a script or anything that can be called to trigger custom notifications?

     

    For example, you might have a script that copies a bunch of files from one drive to another (maybe you're migrating from ReiserFS to XFS); it would be good to be able to add a step at the end of that script that triggers a notification to say it has completed.

     

  13. I figured that adding the cleared drives and copying/moving the data would likely be faster than rebuilding one drive at a time. I could probably even do both copies at the same time.

     

    It might also indicate which files, if any, are damaged, if during the copy it says "can't read file X".

     

    Obviously the new config, after removing the "bad" drives, will require a parity build.
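
    A disk-to-disk copy along these lines would also surface unreadable files, since anything rsync can't read shows up in the error log (the disk numbers and share name below are placeholders):

    # Copy a share from a suspect disk to a newly added one, capturing errors
    rsync -a --itemize-changes /mnt/disk3/Media/ /mnt/disk5/Media/ 2> /boot/rsync-disk3-errors.log

    # Unreadable files show up here as I/O errors
    cat /boot/rsync-disk3-errors.log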

  14. Hi guys,

     

    I've noticed that 2 of the drives in my array, both 3TB WD Greens, are now showing pending sectors. One is reporting 1 sector and the other 5.

    They both also report 1 for Offline Uncorrectable.

     

    I've been a bit dubious about these drives for a while (I probably should have replaced them before now). During parity checks they've been throwing some read errors, which reduced when I changed and rerouted the cables, but they hadn't shown any sector issues until now.

     

    I am currently in the process of moving some of the files from those drives onto other drives in the array, although I don't have space for all of them, and I have excluded those drives from any shares that are likely to get written to.

     

    I've got a couple of replacement drives on order, which should be delivered tomorrow, and I'll run a couple of preclears on each of them.

     

    Once that is done, what should be the procedure for replacing the WD drives and moving the data to the new ones?

     

    Am I best off adding the new drives to the array, moving the data off the WD drives, and then removing the WD drives and doing a new config?

     

  15. I didn't say it didn't work in general. As I said, it doesn't work in my particular use case.

     

    For monitoring the container I log in via SSH and connect to an established tmux session.

     

    With the current build of Docker (and I have no idea if it has been, or will be, fixed in later versions), the 'console' you are presented with when you run docker exec isn't a true TTY. Because of that, tmux doesn't work.

     

    I wasn't complaining about that, just asking if the container IP could be visible somewhere.

  16. I don't know if it's already been mentioned or included, but would it be possible to display the Docker containers' IP addresses somewhere?

     

    Stopping/starting containers causes them to get a new IP each time. So if I want to SSH* into my nZEDb container I have to run "docker inspect nZEDb | grep IPAddress" to find out what its current IP is.
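
    (A format string tidies that up a little; --format with a Go template should work here, although I'm assuming the Docker build shipped with unRAID supports it:)

    # Print only the container's current IP address
    docker inspect --format '{{ .NetworkSettings.IPAddress }}' nZEDb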

     

    Having it visible in the Docker management UI would be really useful.

     

     

    * I know there are debates about whether SSH should be used to interact with a container, but in this case the recent "docker exec" functionality doesn't work.
