
aptalca

Community Developer
Posts posted by aptalca

  1. After upgrading to 6.4, I've had problems with my container, which won't start.
     
    Yesterday I tried removing everything except my nginx config file, and now I'm getting this error when trying to start letsencrypt:
     
    EDIT: I am running letsencrypt with fixed IP
     
    Generating new certificate
    WARNING: The standalone specific supported challenges flag is deprecated.
    Please use the --preferred-challenges flag instead.
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    An unexpected error occurred:
    ConnectionError: HTTPSConnectionPool(host='acme-v01.api.letsencrypt.org', port=443): Max retries exceeded with url: /directory (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -3] Try again',))
    Please see the logfiles in /var/log/letsencrypt for more details.
    /var/run/s6/etc/cont-init.d/50-config: line 108: cd: /config/keys/letsencrypt: No such file or directory
    [cont-init.d] 50-config: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.
     


    I assume you mean a separate IP through macvlan. In that case, make sure your router forwards port 443 to the letsencrypt container's IP rather than unraid's.
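    If it still fails, the "[Errno -3] Try again" in the log above usually points to a DNS lookup failure from inside the container. A quick check, assuming the container is named letsencrypt (adjust to match yours):

```shell
# Verify DNS resolution and HTTPS reachability from inside the container.
# "letsencrypt" is the assumed container name; adjust to match your setup.
docker exec letsencrypt ping -c 2 acme-v01.api.letsencrypt.org
docker exec letsencrypt curl -sI https://acme-v01.api.letsencrypt.org/directory
```

    If these fail, check which DNS server the container's macvlan network is handing out.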
  2. So I did this (except I used /rutorrent rather than /ru in the location line) and it works for me now. However, when I'm in the unRAID dashboard and click on the web UI, it doesn't work, since it tries to open http://tower:7777 rather than http://tower:7777/rutorrent, which is now the correct URL given the change we made in /appdata/rutorrent/nginx/nginx.conf. How do you fix the URL that the dashboard uses by default?


    Edit the container settings in advanced view; you'll see the field for the gui url.
  3. Still can't get this to work with "Local".
    Steps I took:
    1) Completely deleted the docker, img and appdata folder.
    2) Re-installed.
    3) Logged into admin with default admin/password.
    4) Set authentication to local. Saved and refreshed server.
    5) Changed admin password via CLI.
    6) Added new user via CLI.
    7) Added same user via admin page with options "Allow Admin" "Allow Auto Login". Saved and refreshed server.
    8) Attempted to log in using newusername/password via https://:943. Login failed.
     
    Logs show this-
    local auth failed: no stored password digest found in authcred attributes: auth/authlocal:35,web/http:1609,web/http:750,web/server:126,web/server:133,xml/authrpc:110,xml/authrpc:164,internet/defer:102,xml/authsess:50,sagent/saccess:86,xml/authrpc:244,xml/authsess:50,xml/authsess:103,auth/authdelegate:308,util/delegate:26,auth/authdelegate:237,util/defer:224,util/defer:246,internet/defer:190,internet/defer:181,internet/defer:323,util/defer:246,internet/defer:190,internet/defer:181,internet/defer:323,util/defer:245,internet/defer:102,auth/authdelegate:61,auth/authdelegate:240,util/delegate:26,auth/authlocal:35,util/error:61,util/error:44

    Did I miss a step to set up local authentication? It looks like the docker isn't storing credentials locally, but I have no idea why.
     
     
     
    Edit: Looks like I DID miss something.
     
    (screenshot of the admin page attached)
     
    Just adding the password via SSH isn't enough. It has to be added in the admin page as well.
    Wonder what else I missed?



    Why are you adding users through the command line? You're creating more PAM users; the cli command in the docker description was posted to modify the admin user, which is a PAM account.

    Create new users (and manage them) through the gui, and don't mess with the command line.
  4. 4 minutes ago, wgstarks said:

    How is this set in the server settings? Just asking so I can ensure that I don't.

     

    Just to be sure I understand: you're saying the proper setup requires a vpn connection to the local network and then logging in to the docker webgui? Or should it be impossible to connect to the webgui via vpn? I did some testing, and I can connect to admin if I connect to the lan via vpn first. I just want to be sure this is the proper setup. I would prefer not to be able to access the gui at all, under any circumstances, from outside my LAN. Not sure if that's possible though?

     

    BTW- thanks for the help.

     

    You have to manually add "client-cert-not-required" to the server config to disable certs.

     

    You should be able to access the gui only when you're on your home lan. No remote access (from the wan or internet) to the gui. However, it's ok to be able to access the gui when you're vpn'ed in, because vpn technically puts you on the home lan (you can set whether vpn clients should have access to the subnet of your unraid server or not).

     

    Basically, don't forward a port on your router for the openvpn gui port (default 943 I believe) and you'll be fine. Only forward the tcp and udp ports for vpn access.

  5. 6 hours ago, wgstarks said:

    In that case wouldn't it just make sense to use "PAM" rather than "Local" authentication? My understanding is that the reason to use local was that users wouldn't have to be recreated/deleted after updating? This is quite a ways outside my knowledge level so I may be totally wrong. Maybe local is better anyway???

     

    PAM means the users on the host OS are used. Local means openvpn keeps its own local database for the user list. Openvpn's list is stored in the config folder and survives container recreation. PAM/OS-stored users live in the image and get wiped when the container is deleted.

  6. 7 hours ago, Maticks said:

    The Admin user can always VPN in; I cannot see a way to disable this. Provided you change the admin password, they cannot log in, obviously.

    But even with the GUI removed, they could still brute force the admin password, since there is no way to disable that from the UI.

     

    As far as I know, you can't vpn in without the certs (unless specifically set in server settings). No one can brute force into your vpn (as long as your certs are high enough bits and they do not have a quantum computer). Even if they know the username and the password, they still cannot vpn in without the certs.

     

    However, the gui allows for access with just the username and password. No certs needed, thus prone to brute force. That's why you don't expose it to the world.

  7. Yes, agreed, the GUI shouldn't be public facing, but if someone were to VPN to my server with admin and the password, they would also get a login to my local network.


    I don't believe you can vpn in using the admin user and password. That is just for the gui access. Vpn access should only be allowed with a client certificate.

    And that is why your gui should not be publicly available. Gui is only protected by a simple password which can potentially be brute forced. Client cert for vpn is much much more secure. But if you allow public access to the gui, a hacker no longer needs to hack in through vpn. They can just brute force the gui password and create a vpn user for themselves. Don't introduce a weak attack surface by publishing the gui.

    If the gui is not publicly available, keeping the admin password default should not be that big of a deal since it can only be accessed on the lan. If someone's already on your lan, they no longer need to hack into your vpn.

    If someone who is not currently on your lan needs access to vpn, you should create their cert and send it to them. They don't need to access the gui.
  8. Maybe this is something obvious, but I can't seem to work it out.
    Whenever I update my admin password in the docker from ssh, I can log in fine with the new password.
    When I shut down the docker and restart it, the admin password is changed back to "password".
    This worries me since it's public facing. I've also tried changing the authentication methods, but that doesn't seem to prevent this.
     
    How do I make the admin password stick and not change? Please help.


    The gui probably shouldn't be public facing
  9. Yes. I actually deleted the docker and image (couldn't figure out how to just delete users). Re-installed the docker. Set verification to "local". And then added users and downloaded and installed new ovpn files on the clients.
     
    If auto-login is disabled authentication fails with the error posted above. The only way I could connect is enabling auto-login and PAM.


    I have it set to use local and auto login and it works fine. Users are preserved through updates
  10. Ran into an issue with user credentials being lost. Did some searching and found this in the readme (missed it at first)-
    For user accounts to be persistent, switch the "Authentication" in the webui from "PAM" to "Local" and then set up the user accounts with their passwords.
     
    It looks like that will fix my problem with user accounts surviving docker updates but what about the admin account? Will this also preserve admin password or is there a better way?
     
    Edit: Switching authentication to local doesn't seem to work. Every time I tried to log in, it was denied.
    local auth failed: no stored password digest found in authcred attributes: auth/authlocal:35,web/http:1609,web/http:750,web/server:126,web/server:133,xml/authrpc:110,xml/authrpc:164,internet/defer:102,xml/authsess:50,sagent/saccess:86,xml/authrpc:244,xml/authsess:50,xml/authsess:103,auth/authdelegate:308,util/delegate:26,auth/authdelegate:237,util/defer:224,util/defer:246,internet/defer:190,internet/defer:181,internet/defer:323,util/defer:246,internet/defer:190,internet/defer:181,internet/defer:323,util/defer:245,internet/defer:102,auth/authdelegate:61,auth/authdelegate:240,util/delegate:26,auth/authlocal:35,util/error:61,util/error:44

     


    After switching, did you recreate the user accounts and the ovpn config files?

  11. First off, a huge thank you to the LSIO team for all their work on this docker and all the other dockers that they do.
     
    Secondly, with the Plex Live TV announcement, Plex might finally free me from Kodi/TvHeadEnd for live TV. The holy grail for me would be to use Plex for my media, Live TV, and DVR with Comskip. Having searched the forums, it looks like Comskip support for this Plex docker has stymied folks in the past. I ran across this how-to by a dev/network engineer that makes it seem pretty simple to get Comskip up and running inside a Plex docker.
     
    Has anyone gotten this running? If so, do you have to re-enable Comskip, PlexComskip, and the dependencies every time you update the docker (maybe a stupid question, but I don't know much about Docker beyond getting one running and setting variables/paths)? Is this a solution that could eventually be integrated into the LSIO docker?
     
    I would try playing around with this myself, but I recently fried my HDHR Connect. Note to self, don't plug the wrong AC Adapter into the HDHR. I plugged in an adapter outputting way too many volts and suddenly it smelled like burning...
     


    In the guide you linked, the changes it tells you to make inside the container won't survive an update or recreation of the container, though they will survive restarts.
  12. If you set the version parameter to latest when you create the container, it will attempt to upgrade to the latest during each start of the container.

    If you are a plexpass user and logged into your plexpass account in plex, the container will update to the latest plexpass version. If not, it will update to the latest public version.

    The Friday builds come with the latest public version inside
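    As a sketch of the version parameter described above (image tag and host paths are assumptions; match them to your own setup), creating the container might look like:

```shell
# Minimal sketch: the VERSION environment variable controls update behavior
# on each container start. Host paths below are assumed unraid conventions.
docker run -d \
  --name=plex \
  --net=host \
  -e VERSION=latest \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  linuxserver/plex
```

    With VERSION=latest set this way, a simple restart of the container triggers the update check on start.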

  13. Docker does not pass devices through like kvm does. It only allows the containers to share resources with the host.

    To use a usb device in docker, you would need a driver for it installed on the host os, which in this case is unraid. If unraid does not recognize the usb device and load drivers, it won't work in docker.
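    To illustrate the point above: a device can only be shared with a container once the host kernel has loaded a driver and created a /dev node for it. A sketch (the device path and image name are examples, not from the original post):

```shell
# Confirm the host (unraid) recognizes the USB device and loaded a driver;
# a /dev node must exist first. /dev/ttyUSB0 is just an example path.
ls -l /dev/ttyUSB0

# Share that node with a container via --device (image name is illustrative).
docker run -d --device=/dev/ttyUSB0 --name=usb-app some/image
```

    If the `ls` shows nothing, no amount of docker configuration will make the device appear inside the container.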

  14. 9 hours ago, In0cenT said:

    Hello

    Since today my fail2ban in the container is spitting errors. All my subdomains do not work anymore too.
    https://pastebin.com/C2mP23s4

    Any idea how to get it back working?

     

    OK, I see that although the issue was fixed back in March per https://github.com/fail2ban/fail2ban/issues/1741 there hasn't been a new release with the fix in it yet. Temporary solution to fix fail2ban here: https://gist.github.com/aptalca/ac9c3f931de460c9a2c12176e26df7d8

     

    However, this issue should not break your reverse proxy. It only breaks fail2ban. You probably have a different issue regarding the subdomains.
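    After applying the temporary fix, one way to confirm fail2ban is healthy again (assuming the container is named letsencrypt, since fail2ban runs inside this image):

```shell
# Ask the fail2ban server for its status and active jails from inside
# the container; an error here means fail2ban is still not running cleanly.
docker exec letsencrypt fail2ban-client status
```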

  15. Hello,
     
    Another question (not related to reverse proxying).
    I received an email from letsencrypt that my main URI will expire within 9 days.
    I then exec'ed into the letsencrypt container and ran bash /app/le-renew.sh by hand, and it tells me:
    The following certs are not due for renewal yet: /path/path/path/fullchain.pem
     
    The subdomains-only option is set to false.
    Do you have any idea?


    The email is for a cert that is no longer used.

    You likely reinstalled this container and deleted the old appdata without revoking the old certs.

    Nothing to worry about. Letsencrypt lets you get multiple certs for the same domain without revoking the old ones (albeit with limits on number and frequency)
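    To match the expiry email against what's actually in use, one option (container name assumed to be letsencrypt) is to list the certs certbot currently manages:

```shell
# Show every certificate certbot manages in this container, with domains
# and expiry dates, so stale expiry emails can be matched to old certs.
docker exec letsencrypt certbot certificates
```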