
aptalca

Community Developer · Posts: 3,064 · Days Won: 3

Posts posted by aptalca

  1. I'm using Nginx-letsencrypt.

     

    I'm kind of new to creating domains. I have a DDNS name for my home router from http://www.dnsexit.com/; the one I have is xxxxx.publicvm.com.

     

    How should I enter this in the docker config to use it with letsencrypt?

     

    see image attached

    log from the docker

    Requesting root privileges to run letsencrypt...
    ~/.local/share/letsencrypt/bin/letsencrypt --no-self-upgrade certonly --standalone --standalone-supported-challenges tls-sni-01 --email [email protected] --agree-tos -d publicvm.com -d xxxxxxx.publicvm.com -d xxxxxxx.publicvm.com
    IMPORTANT NOTES:
    - If you lose your account credentials, you can recover through
    e-mails sent to [email protected].
    - The following errors were reported by the server:
    
    Domain: publicvm.com
    Type: unauthorized
    Detail: Correct zName not found for TLS SNI challenge. Found
    'www.dnsexit.com, dnsexit.com'
    
    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A record(s) for that domain
    contain(s) the right IP address.
    - Your account credentials have been saved in your Let's Encrypt
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Let's
    Encrypt so making regular backups of this folder is ideal.
    * Restarting authentication failure monitor fail2ban
    ...fail!
    Mar 1 17:14:47 32949e45f63a syslog-ng[6271]: syslog-ng starting up; version='3.5.3'
    Mar 1 17:17:01 32949e45f63a /USR/SBIN/CRON[6287]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    //Peter

     

    Letsencrypt expects you to have full control over whatever URL you put in. In the case of dynamic DNS, you do not have control over the actual domain: publicvm.com.

    In that case, put in xxxxxx.publicvm.com as the URL and leave the subdomains as is (the default is www). With that, letsencrypt will create a certificate that covers xxxxxx.publicvm.com and www.xxxxxx.publicvm.com, both of which you have control over.
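
    Roughly, the relevant bits of the container config would look like this (a sketch only; the exact variable names and image name may differ from the template, so double-check there):

    # Sketch: variable and image names are illustrative, not authoritative.
    # URL is the ddns address you control; SUBDOMAINS stays at the default.
    docker run -d --name=nginx-letsencrypt \
      -e URL=xxxxxx.publicvm.com \
      -e SUBDOMAINS=www \
      -e [email protected] \
      -p 80:80 -p 443:443 \
      -v /mnt/cache/appdata/letsencrypt:/config \
      aptalca/nginx-letsencrypt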

  2. Hi, I was unsure whether you preferred issues being posted here or directly on GitHub:

     

    As noted in https://github.com/aptalca/docker-webserver/issues/4

     

    I can't enable PHP on the nginx-letsencrypt Docker. Everything else (nginx, letsencrypt and fail2ban) works fine.

     

    Thanks for any advice on how to enable this feature...

     

    Looking into it. I thought I had tested PHP before, but when I just tested now, it didn't work for me either. I'll fix it in the next few days.

  3. I am working with nginx-letsencrypt and having a terrible time trying to configure the reverse proxy. I had a reverse proxy working with regular nginx, but any time I add a server block to the site-confs folder it seems to break: I can no longer resolve any pages.

     

    I am attempting to do it via subdomains (sonarr.domain.com, etc.).

     

    Currently all the domains I added are correctly resolving to the default block and displaying the index.html file.

     

    Any suggestions on how I should configure this to get it to work the same as normal nginx?

     

    I posted my config here: http://lime-technology.com/forum/index.php?topic=43696.msg437353#msg437353

     

    I have both a regular webserver and a proxy set up there.
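
    For reference, a bare-bones subdomain block for site-confs looks something like this (a sketch; the upstream address is made up, point it at wherever sonarr actually listens, and the certificate directives are omitted):

    # Minimal subdomain reverse proxy block (sketch; ssl cert lines omitted).
    server {
        listen 443 ssl;
        server_name sonarr.domain.com;

        location / {
            # 192.168.1.100:8989 is a placeholder for sonarr's real address
            proxy_pass http://192.168.1.100:8989;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }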

  4. Leaner, Meaner, Greener (and hopefully not buggier)

     

    - Big code cleanup.

    - Don't display stars from Docker Hub if the app isn't starred

    - Hide the Docker Hub search when in previous / installed apps

    - Fix an error in settings if a temp directory didn't exist

    - Add support for <Licence> in xml files

     

    Is it License or Licence?

    Good point... I'm Canadian, eh, so I made it <Licence>. I guess I can support both spellings, as I'll probably take some heat for it...

    What's Canadian? Is it some sort of extraterrestrial thing like Klingon?

  5. Is there a way with Plex Requests to connect it to my movies library so I can see which movies are available?

    It works fine with my TV shows, but not with my movies.

     

    I have Sonarr for TV shows, but I don't have CouchPotato for movies; does that matter?

    CouchPotato does that; not sure about Plex Requests.

     

    You can ask in their thread on the Plex forums.

  6. I am having an issue with the plex requests docker. Everything appears to pull and install correctly, and docker reports that the container is started, but when I attempt to browse to the service, it gives me a "Connection refused". I can't seem to find any logs, so I am not sure how to troubleshoot it at this point.

     

    Here is a screenshot of my configuration:

    http://i.imgur.com/WKXiZW3.png

    Change the configuration folder location to either a cache drive or, if you don't have one, a disk location:

    /mnt/cache/blah

    /mnt/diskX/blah

     

    I deleted the container and image, added it fresh, and changed /config to point to /mnt/cache/appdata/plexrequests.

    I am still seeing the same issue.

    Can you post a log?

     

    Also, how long did you wait? At container start, it updates meteor, and a couple of times I noticed the meteor servers being ridiculously slow. It took 20 minutes for the update.

     

    After 45 minutes, the docker started responding. It looks like the meteor servers were just really bogged down. Thanks for the help, and for the awesome docker containers!

    Sure, glad it worked
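
    For anyone else hitting this, the fix above boils down to pointing /config at a real disk path, something like the following (the image name and port are illustrative; use whatever your template specifies):

    # /config mapped to the cache drive instead of a user share (sketch).
    docker run -d --name=plexrequests \
      -p 3000:3000 \
      -v /mnt/cache/appdata/plexrequests:/config \
      aptalca/plexrequests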

  7. With your RDP-Calibre docker, is there a way that I could temporarily set the temp directory to something outside the docker image? I am trying to import a large number of books, and it very quickly fills up the entire docker image with files in the calibre tmp directory. This would only need to be done once for the initial import; after that it could go back to its normal setting.

     

    I originally had the docker.img at 20GB, but that filled up with about 1% of the import done ;D. I increased the docker image to 200GB, but watching the size grow, it is likely to fill up again before it's finished.

    Hmm. . . never noticed the temp directory and that behavior. I'll look into it.

     

    It is strange that Calibre doesn't do any cleanup of the tmp files until the import is completed.

     

    Here's a temporary workaround (since you only have to do this once, it should be fine).

     

    You can edit the container in the unRAID GUI and add a new volume mapping: put "/tmp" under container volume, and put whatever folder on unRAID you would like to use for the temporary temp location (double temp :-)) under host path. There's a sketch of the full mapping at the end of this exchange.

     

    This way, the folder you pick, which should be outside the docker image and somewhere on your array, cache drive, or a disk outside the array, will be used for the temporary files during the import. After the import, if you like, you can edit the container settings to remove that mapping, and then you can delete that folder, too.

     

    Keep in mind that every time you edit the container settings (if edge is set to 1), the container will download the latest version, which may take a few minutes depending on how calibre's server is feeling.

     

    By the way, are you hosting the National Library Archives? 400GB of ebooks? How?  :o  ;D

     

    Hmm, I tried mounting to /mnt/user/tmp and kept getting this error:

    Fatal server error:
    Can't read lock file /tmp/.X1-lock
    
    Openbox-Message: Failed to open the display from the DISPLAY environment variable.
    
    Fatal server error:
    Can't read lock file /tmp/.X1-lock
    
    Openbox-Message: Failed to open the display from the DISPLAY environment variable.
    
    Fatal server error:
    Can't read lock file /tmp/.X1-lock
    
    Openbox-Message: Failed to open the display from the DISPLAY environment variable.
    

     

    So instead of mounting to /mnt/user, I mounted it directly to one of the disks; after that, it was able to start up.

     

    Did you first create a user share called tmp in unRAID?
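
    For reference, the workaround above in docker run terms (the image name is illustrative; note the host side points at a specific disk, not /mnt/user, per the errors above):

    # Temporary /tmp mapping for the big import; remove it when done.
    docker run -d --name=rdp-calibre \
      -v /mnt/cache/appdata/calibre:/config \
      -v /mnt/disk1/calibretmp:/tmp \
      aptalca/rdp-calibre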

  8. This may or may not be the right place for this, as I am unsure whether it is specific to the docker, but I figured this would be a good place to start.

     

    I installed a hard drive I had lying around (a 320GB disk formatted as ext4) to act as a dedicated transcoding disk, so as not to beat on my cache drive constantly. I didn't want to add the disk to the array or add it as a cache disk, so I am using dlandon's Unassigned Devices plugin to mount the drive at boot. I set up my docker to map /transcode to /mnt/disks/transcode/ and made sure the permissions were set to read/write.

     

    I can see that the transcode is working and being written to the disk, but it is unbelievably slow; I think the transcode has written about 40MB to the disk in two to three minutes. From what I can tell, this isn't an issue with the disk itself, as I test-copied 1.5GB of video files and it completed in about a minute, averaging 90MB/s.

     

    Has anyone tried to set up the plex docker like this who might have some pointers for me? Or at least clarify whether this issue is specific to me and my setup, or whether others can reproduce the same results.

    Is the video playing fine? Plex transcodes as needed. It speeds up and slows down when necessary.
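
    For reference, the mapping described above in docker run terms (the image name is a placeholder for whichever plex container you run):

    # Dedicated transcode disk, mounted on the host by Unassigned Devices,
    # mapped into the container as /transcode (sketch).
    docker run -d --name=plex \
      -v /mnt/disks/transcode:/transcode \
      -v /mnt/cache/appdata/plex:/config \
      your-plex-image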

  9. If anyone has any good ideas of what else to incorporate into CA, the floor is now open to the peanut gallery...

     

    Well, since you asked... I've been thinking about something I think is important, part of the maturation of the unRAID community and its many development partners. I brought it up first over here. unRAID and the numerous addon options have come a long way, and it's time to think about risk management, not just for our data but also for the tools. The community is growing and depending more and more on so many plugins and containers. Yet for the most part, these plugins and containers have a single author, and that's a single point of failure. Think of what would happen to so many users if something happened to PhAzE, binhex, bungy, etc., with so many plugins and containers and so many dependent users. As we're all reminded now and then, life happens, and it rarely comes with advance notice. Businesses have to have disaster plans for hurricanes, earthquakes, tornadoes, fires, etc., and so do we. Rudimentary perhaps, but something that users can count on. It's important that users be able to select tools with a known backup plan, a succession plan in case something happens to the original author.

     

    In a way, I'm an outsider in the addon development world here, so I don't want to say what should happen. Perhaps something as simple as a field in CA for 'Backup author/group'? But I'm hoping this can get the ball rolling, and you, Squid, and the many authors can decide on a mechanism that provides at least a minimum of a succession plan. I know you have already worked out blacklisting processes, and that's good, but we really need a way to make sure important plugins and containers are carried on, not lost and blacklisted. Once something is set up, we can all nudge the authors to make sure they have someone who can cover for them if the unexpected happens... However, I can imagine some authors being resistant. That's fine; it's their right.

     

    What is a backup author? Basically, someone who is able and willing. At a minimum, they need to be able to manage a repository, the original or their own; access the source and make needed corrections; and package up an update. More than that is gravy.

    Two things...

     

    Linuxserver is a team, so all their containers have multiple devs.

     

    And docker containers are simple and streamlined enough that in a lot of cases you can just replace an existing one with a different author's version, point it to the same configuration folder, and you're good to go. Many containers have multiple versions already. Plex has more than three well-supported ones, and I believe they are interchangeable.

  10. When using the calibre RDP docker, how do I connect my Kobo so that I can transfer books over to my device? If this were a VM I could pass through the USB, but that's not the case here. What's the best way of doing this? Or do I have to use the server and download the files over wifi (I would like to avoid wifi connections if possible)?

    I believe docker can pass through USB devices as long as the host (unraid) has loaded the drivers for them. I'm not sure whether unraid contains any drivers for the Kobo device (perhaps it's recognized as an SD card?).

     

    I use it with a Kindle, and my preferred method is to send the ebooks through calibre to the Kindle-associated Amazon email addresses, so the books are delivered to the devices. The other method is downloading them from the server.
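
    If you want to experiment with passthrough, docker can expose a host device node like this (a sketch; /dev/sdX1 is a placeholder for whatever the kobo registers as on unraid, check dmesg after plugging it in):

    # Pass a host device node into the container; /dev/sdX1 is a placeholder.
    docker run -d --name=rdp-calibre \
      --device=/dev/sdX1 \
      -v /mnt/cache/appdata/calibre:/config \
      aptalca/rdp-calibre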

  11. Since dolphin is getting some attention, I updated the info on its GitHub and Docker Hub pages.

     

    By default, it runs as user nobody (uid 99), which should be fine for unraid.

     

    But some docker containers run as root, and you may not have write access to their local files. If you want to run dolphin as root (uid 0), add the environment variables USER_ID and GROUP_ID and set both to 0. Or install fresh from Community Applications; the updated template includes these variables under advanced view.
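
    In docker run terms, running as root looks like this (the image name and data mapping are illustrative; USER_ID and GROUP_ID are the variables mentioned above):

    # Run dolphin as root (uid 0) instead of the default nobody (uid 99).
    docker run -d --name=dolphin \
      -e USER_ID=0 \
      -e GROUP_ID=0 \
      -v /mnt/user:/data \
      aptalca/dolphin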

  12. Hey everyone. I just installed Dolphin after a failed experiment with Tonido.

     

    I'm trying to set up Dolphin so that I can access it remotely, but I don't see anywhere to configure a user/password to prevent open access to the world.

     

    Is there some configuration to allow this, or do I basically need to shut it off from the internet and remote in to use it?

     

    Thanks in advance.

    I think you must misunderstand the purpose of Dolphin, since it is nothing like Tonido. A good analogy for Dolphin is Windows File Explorer. This particular docker implementation just gives you a Linux desktop in a browser with the Dolphin file manager already launched.

     

    Yeah, sorry, my end goal was always just something to manage my files without managing them from my other systems, not necessarily to have remote access to the files like with Tonido. I still like being able to manage files while I'm away, but I would rather not have it open to the world. With that in mind, is there a way to lock it down?

    The best option is a VPN; the second best is a reverse proxy like nginx. With a reverse proxy, you can set a password through .htpasswd.

     

    For VPN, check out the openvpn server dockers.

    For nginx, you can use either the nginx docker or the letsencrypt one I have, which sets up a free third-party SSL certificate with nginx.
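
    Setting up the password file is basically a one-liner, plus two directives in the server block (the paths here assume the container's /config layout; adjust as needed):

    # Create the password file; requires the htpasswd tool (apache2-utils).
    htpasswd -c /config/nginx/.htpasswd myuser

    # Then inside the nginx server block you want protected:
    #   auth_basic "Restricted";
    #   auth_basic_user_file /config/nginx/.htpasswd;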

  13. Haha, yeah, that makes sense. I thought the new/updated list didn't include apps that were uninstalled at some point, because I didn't see a couple of the ones I had removed on the list, but perhaps they didn't meet the cutoff for the recent date; I'm not sure.

     

    The name part also makes sense. In fact, the only time I ever changed the name was because I wanted to run a second instance of a container, so I had to rename the second one. I'd be curious whether people change the names regularly.

     

    EDIT: just updated, and it works great. Thanks so much!

  14. I noticed that the new/updated apps list only shows the apps that were never installed.

     

    It might not be that crucial to see newly updated apps that one currently has installed, since those are updated manually, but for apps that were once installed and later removed, it would be great to see new updates to them. Sometimes I might decide to reinstall one because of a new feature.

  15. Hello aptalca,

     

    Thanks for creating the Zoneminder docker. I got it up and running fine with a few tweaks.

     

    I have a request for you on the zoneminder docker, if you can: can you add API support to it? I'm trying to use zmNinja to view/control my Zoneminder setup from a Windows machine and an Android device, but the app requires API support. There is documentation on how to enable it, but you need to get into the system (docker) to make adjustments.

     

    Is that something you can add to your Zoneminder docker?

     

    Here are a couple of links with info:

    https://github.com/pliablepixels/zmNinja/wiki/Configuring-ZoneMinder-with-API

    https://forums.zoneminder.com/viewtopic.php?p=89462#p89462

     

    Thanks

    D

     

    API support is in version 1.29, which is still a release candidate. Once the stable version is released, the container will update automatically.

     

    If you want to update manually, there are instructions a couple of pages back.

  16. Is anyone using digikam to manage their photos? I have about 200,000 photos that I would like to organize, and would like some feedback on whether this docker is a viable solution. Has anyone tested it vs. Piwigo or Photoshow?

     

    I'm not familiar with Photoshow, but I tried Piwigo.

     

    Piwigo is more for showcasing and sharing select photos. It doesn't manage photos in place; you have to import the photos into Piwigo (potentially duplicating them).

     

    Digikam is great for managing the photo library in place. In other words, you point it at your photos folder, and the changes you make in digikam, like sorting, tagging, face recognition, etc., can all be stored in a separate database that digikam maintains. I don't like modifying the original files or duplicating them, so I prefer digikam over other options. (I would normally pick Picasa desktop over digikam, if only Picasa allowed keeping photos on a NAS with easy access through samba and easy transfer of its info database to other computers, but unfortunately Picasa desktop is primarily a single-computer, local-files kind of option, which I dislike.)

     

    Keep in mind that certain tasks like face recognition can be extremely CPU intensive and can lock up the container GUI for a long time with 200,000 photos. I'd recommend testing on a small batch and doing the rest in batches.

  17. Aptalca,

     

    I am attempting to install your DuckDNS docker, but when I click the create button nothing happens. I have the config folder defined, and I am not having any issues installing other dockers.

     

    Any help would be greatly appreciated. 

     

    Thanks,

    Dan

     

    Hit the advanced view button at the top right; it will reveal new settings and likely an error message. It won't let you install without entering that info under advanced view.

     

    And make sure you read the description at the top  :P

  18. I actually got it up and running. I assume their meteor server was down when I was attempting it. I restarted the container this morning and it came right up.

     

    Now that it is set up, how does one share out the request page, since it is internal to my network? I have DuckDNS installed. I assume I need a web daemon. Have you experts seen a light one with a walkthrough?

     

    The simplest way is to forward a port on your router. If your container is running on port 3000, go into your router interface and forward port 3000 to your server's IP. Then when others connect to http://yourdomain.duckdns.org:3000, they'll reach your container interface. I'm not sure how secure plexrequests is; you can ask on their forums whether this method is advised against.

     

    Other, more secure methods include setting up a VPN server and having your friends VPN in to access the internal container page, or setting up a reverse proxy. (I have a letsencrypt nginx reverse proxy container in the repo, which you can use to set up secure connections to your containers with SSL and passwords.) But both of these methods are a little tricky to set up properly.
