chanrc

Members · Posts: 12

  1. Let's Encrypt now requires secondary validation of your domains from other locations. Basically this means that before validating your domain and issuing you a cert, Let's Encrypt will try to connect to your domain and fetch the challenge file from several external vantage points, not just one. The issue is that there are a lot of different firewall and nginx configurations out there. In @MxFox's case, his firewall is blocking one of the secondary validation Cloudflare locations. https://community.letsencrypt.org/t/unexpected-renewal-failures-during-april-2024-please-read-this/216830 https://letsencrypt.org/docs/challenge-types/#http-01-challenge http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN> It looks like both your cases are failing validation on the HTTP-01 challenge, as am I, but for different reasons. In my case I have nginx configured to send everything through Authelia for SSO sign-on, and that breaks things on all the domains because the validation requests Let's Encrypt makes to the challenge path are being intercepted by Authelia. I switched to DNS validation using Cloudflare and that works fine now.
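A quick local sanity check for the HTTP-01 path (a sketch, not from the original post; the port, directory, and token name are made up): serve a dummy token the way an ACME client would, then fetch it the way a validator does. If the equivalent request through your real nginx/firewall fails, HTTP-01 validation will fail too.

```shell
# Serve a dummy challenge token, then fetch it like a validator would.
mkdir -p webroot/.well-known/acme-challenge
printf 'dummy-token' > webroot/.well-known/acme-challenge/dummy
python3 -m http.server 8080 --directory webroot >/dev/null 2>&1 &
SRV=$!
sleep 1
RESP=$(curl -s http://127.0.0.1:8080/.well-known/acme-challenge/dummy)
kill "$SRV"
echo "$RESP"
```

In a real check you would point curl at http://<YOUR_DOMAIN>/.well-known/acme-challenge/<file> from an outside network, since the secondary validators connect from external locations.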
  2. Anyone able to get plug-ins working with Docker? I wanted to install a couple of new agents for audiobooks and also the Sub-Zero subtitles plugin. Pulling the two plugin bundles using git clone <bundle> into appdata\PMS\Plug-ins and restarting the Docker container doesn't work, and they don't show up in the UI. I know Plex is trying to get rid of plugins, so does that mean there are no updates there? I also read online somewhere that I have to use Kitana, so I installed that Docker container and authorized it with my Plex account, then clicked my server, only for it to say I had "No valid plugins" even though both of the .bundle folders are in Plug-ins.
  3. It's actually not too difficult; I figured it out after a bit of struggling. You take the IPA server cert file from /var/lib/ipa/certs and put it into the /etc/ssl/certs folder of the Nextcloud Docker container, then add an entry for the new cert in /etc/ldap/ldap.conf by setting the BASE, HOST, TLS_CACERT, and TLS_REQCERT values. After that, make sure your Nextcloud LDAP server host gets updated to ldaps://<server> and ldapPort gets updated to 636 in your Nextcloud config. Nextcloud will recognize your self-signed cert after that AND it will not have any LDAP bind errors, so it will actually obey your LDAP server and password policies (FreeIPA is configured to enforce those only when SSL is enabled). You will need to make an entry in /etc/hosts, since certs only work with an FQDN. Also make sure you don't misspell anything, otherwise Nextcloud will not be able to connect to your LDAP server and will fail to start, and you will have to use the occ tool to manually revert settings:
     docker exec -u www-data Nextcloud php occ log:tail
     docker exec -u www-data Nextcloud php occ ldap:show-config
     docker exec -u www-data Nextcloud php occ ldap:set-config
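The ldap.conf entries described above would look roughly like this (a sketch; the hostname and CA filename are placeholders, not values from the post):

```
BASE        dc=domain,dc=com
HOST        ipa.domain.com
TLS_CACERT  /etc/ssl/certs/ipa-ca.crt
TLS_REQCERT demand
```

With TLS_REQCERT demand, the client refuses to connect unless the server cert validates against the CA file, which is why the copied FreeIPA cert has to be in place first.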
  4. Right now I just got Nextcloud working with LDAP and am trying to get it onto LDAPS without much success. I'm having trouble getting Nextcloud to recognize the self-signed cert from the internal FreeIPA LDAP server; when I run occ log:tail it just tells me I have a connection error. I have Authelia working over LDAPS as an authentication portal in front of a number of services I wanted SSO on. I was going to get an email server working, then look at hooking LDAP into Home Assistant for home automation stuff. I'm pretty new to LDAP/cert management and it's a pretty steep learning curve. The first time I screwed up the LDAP config and Nextcloud went to an Internal Server Error, I panicked and almost wiped the DB, Docker, and app settings to start over.
  5. I got Authelia and two-factor working for logins, but I'm having issues when setting a new password from the password reset email that Authelia sends out. The email goes to the right reset page, but when I click to execute the password change, Authelia logs a couple of errors while trying to set the new password on the LDAP server:
     time="2022-03-03T18:38:59-07:00" level=error msg="Token is not in DB, it might have already been used" method=POST path=/api/reset-password/identity/finish remote_ip=ip stack="github.com/authelia/authelia/v4/internal/middlewares/authelia_context.go:61 (*AutheliaCtx).Error\ngithub.com/authelia/authelia/v4/internal/middlewares/identity_verification.go:188 IdentityVerificationFinish.func1\ngithub.com/authelia/authelia/v4/internal/middlewares/authelia_context.go:52 AutheliaMiddleware.func1.1\ngithub.com/fasthttp/[email protected]/router.go:414 (*Router).Handler\ngithub.com/authelia/authelia/v4/internal/middlewares/log_request.go:14 LogRequestMiddleware.func1\ngithub.com/authelia/authelia/v4/internal/middlewares/strip_path.go:21 StripPathMiddleware.func1\ngithub.com/valyala/[email protected]/server.go:2298 (*Server).serveConn\ngithub.com/valyala/[email protected]/workerpool.go:223 (*workerPool).workerFunc\ngithub.com/valyala/[email protected]/workerpool.go:195 (*workerPool).getCh.func1\nruntime/asm_amd64.s:1581 goexit"
     time="2022-03-03T18:39:02-07:00" level=error msg="unable to update password. Cause: LDAP Result Code 13 \"Confidentiality Required\": Operation requires a secure connection.\n" method=POST path=/api/reset-password remote_ip=ip stack="github.com/authelia/authelia/v4/internal/middlewares/authelia_context.go:61 (*AutheliaCtx).Error\ngithub.com/authelia/authelia/v4/internal/handlers/handler_reset_password_step2.go:38 ResetPasswordPost\ngithub.com/authelia/authelia/v4/internal/middlewares/authelia_context.go:52 AutheliaMiddleware.func1.1\ngithub.com/fasthttp/[email protected]/router.go:414 (*Router).Handler\ngithub.com/authelia/authelia/v4/internal/middlewares/log_request.go:14 LogRequestMiddleware.func1\ngithub.com/authelia/authelia/v4/internal/middlewares/strip_path.go:21 StripPathMiddleware.func1\ngithub.com/valyala/[email protected]/server.go:2298 (*Server).serveConn\ngithub.com/valyala/[email protected]/workerpool.go:223 (*workerPool).workerFunc\ngithub.com/valyala/[email protected]/workerpool.go:195 (*workerPool).getCh.func1\nruntime/asm_amd64.s:1581 goexit"
     It seems like LDAP is requiring some kind of secure connection for the password reset from Authelia, but in configuration.yml I specified ldap:// and not ldaps://. Is this because of the tls section in the IBRACORP template? I just used the template, changed the domain, and added a password. Other than that I am using the default linuxserver.io authelia-location.conf/authelia-server.conf, which seems to line up with IBRACORP's settings aside from the rules for email. Do I need to use ldaps instead? My Nextcloud does LDAP password resets without LDAPS and it works correctly there.
     My configuration:
     server:
       host: 0.0.0.0
       port: 9091
       path: "authelia"
       read_buffer_size: 4096
       write_buffer_size: 4096
       enable_pprof: false
       enable_expvars: false
       disable_healthcheck: false
       tls:
         key: ""
         certificate: ""
     log:
       level: info
     authentication_backend:
       disable_reset_password: false
       refresh_interval: 5m
       ldap:
         implementation: custom
         url: ldap://192.168.1.180
         start_tls: false
         tls:
           skip_verify: false
           minimum_version: TLS1.2
         base_dn: dc=domain,dc=com
         username_attribute: uid
         additional_users_dn: cn=users,cn=accounts
         users_filter: (&({username_attribute}={input})(objectClass=person))
         additional_groups_dn: cn=groups,cn=accounts
         groups_filter: (&(member=uid={input},cn=users,cn=accounts,dc=domain,dc=com)(objectclass=groupofnames))
         group_name_attribute: cn
         mail_attribute: mail
         display_name_attribute: givenName
         user: uid=admin,cn=users,cn=accounts,dc=domain,dc=com
         password: "password"
     EDIT: Defining a certificates_directory in my configuration.yml which contained my LDAP server's self-signed cert, and switching the URL to LDAPS, solved my issues. LDAP only allows password edits over a secure connection.
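The EDIT above corresponds to roughly these configuration.yml changes (a sketch; the directory path is a placeholder, not a value from the post):

```
certificates_directory: /config/certificates

authentication_backend:
  ldap:
    url: ldaps://192.168.1.180
```

Authelia loads any certs in certificates_directory into its trust store, so the self-signed FreeIPA cert validates and the ldaps:// connection satisfies the server's "Confidentiality Required" policy for password changes.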
  6. Yeah, that's exactly what I ended up doing based on your comment. I blew away everything and rebuilt it using the official Nextcloud Docker image, and it works properly with LDAP. I was hoping I wouldn't need to. Shame the linuxserver one didn't work; I'm surprised it's been broken this long.
  7. Did you ever get LDAP working on this Docker? I'm running into the same issue as you, with all the fields and buttons greyed out in the "LDAP/AD integration" app.
  8. Anyone have issues installing FreeIPA at all? I'm trying to get it set up on Fedora 35 so I can use it with Authelia. I followed all the steps in IBRACORP's video, but when I try to access ipa.domain.com it just goes to a blank page on the first load. No errors show up in the browser dev tools, and I can't see any errors in Fedora for the last steps of the FreeIPA install; it says it installed successfully. EDIT: Figured out it was an NGINX config issue.
  9. Anyone try out 6.9.0-beta22 yet? I'm assuming that since we haven't heard anything from the LT guys, this is still probably an issue.
  10. Isn't the absolute easiest way just to run a SMART test on the drive and look at the report? If you click on the name of the drive from the main menu you can download it from there. I just did that, ran iotop -oa -d 3600, and averaged 10 hours of usage to get a rough average of how much data loop2 was generating. Multiply that by the power-on hours in the drive attributes and you get an approximation of how badly this bug is wearing the drive and how much of the TBW it is responsible for. In my case, my drive reported 64.1 TBW total in the SMART data; I measured 8.6 MB/s from loop2 averaged over a ten-hour period with users on Plex, access to my server, etc., or about 30.2 GB/hour. My drive attributes showed 1064 power-on hours, so by rough napkin math loop2 has generated ~31.4 TBW by itself (basically halving the life of my drive). The rest will be from my transfer of about 12 TB from my old NAS (I stupidly had cache enabled for the initial transfer), downloads, heavy HandBrake H.265 transcoding, a couple of VM installs, and futzing around with my server in general. For comparison, after converting to XFS I'm generating ~9 MB/minute from loop2, which would be about ~0.6 TBW over the lifetime of my drive.
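The napkin math above, as a quick script (numbers taken from the post; the 1024-based unit conversions are my assumption about how the figures were rounded):

```shell
# Estimate loop2's share of TBW: measured write rate x power-on hours.
OUT=$(awk 'BEGIN {
  rate_mb_s = 8.6                # loop2 write rate from iotop (MB/s)
  hours     = 1064               # power-on hours from SMART attributes
  gib_hr = rate_mb_s*3600/1024   # per-hour write volume
  tbw    = gib_hr*hours/1024     # total over the drive power-on lifetime
  printf "%.1f GiB/hr, %.1f TBW", gib_hr, tbw
}')
echo "$OUT"
```

This reproduces the post's ~30.2 GB/hour and ~31.4 TBW figures.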
  11. I'm 3 days in now since the switch to a single unencrypted XFS cache and consistently getting better results. loop2 is producing only ~9 MB/min of writes at idle with all my Dockers started (including binhex Plex, sonarr, radarr, sabnzbd, deluge, mariaDB, nextcloud, letsencrypt, cloudflare-ddns, pihole, ombi, grafana, teleconf, influxDB), compared to the ~8 MB/s I was seeing before with all my Dockers stopped but the Docker service still enabled on my unencrypted BTRFS cache. Not sure what the trigger for @nas_nerd's XFS issue is, but I can't repro it with mariaDB and nextcloud enabled (no user connections in the last 3 days though; maybe I should try to upload something).
      [Screenshots: over 10 minutes using iotop -oa -d 600; over four hours using iotop -oa -d 14400, with several small uploads to nextcloud and a couple of downloads.]
  12. I have the same issue: testing with all Dockers stopped, loop2 by itself would still be writing data at 5-15 MB/s in iotop to my single unencrypted BTRFS cache SSD. I tried converting my cache drive to XFS and now it's down to 20 MB over the past 10 minutes with no Dockers running, and 100 MB over 10 minutes with all my Dockers up (binhex sonarr, radarr, tautulli, sabnzbd, deluge, ombi, pihole, nextcloud). A huge improvement with XFS over BTRFS, though it's still a problem when there is really no usage in any of those Dockers. My month-and-a-half-old cache SSD was already at 66 TBW (of the 640 TBW my manufacturer rates the drive for) before I noticed this. Can the devs look at this as urgent instead of a minor issue? It has probably cratered a lot of people's SSDs already.