Caldorian

Members
Everything posted by Caldorian

  1. Okay, this is a weird one. Just updated to the latest Firefox on Windows 10 (v93.0), and now I can't log into my Deluge instance. Page loads, and the password dialog box comes up, but entering the password doesn't take and it keeps prompting me for it again. Testing on Edge and Chrome has no issues.
  2. Silly question, but I can't seem to find an answer for it: there are the pre-set Hourly, Daily, Weekly, and Monthly schedules, but I can't seem to find when they actually run (i.e. what time the daily kicks off, the day/time for the weekly, etc.). Is there any way to adjust them to your preferred values?
  3. Expanding on securing the admin panel a little bit: on my setup, I run a split-DNS config with Pi-hole so I can access my web services on the LAN with the HTTPS certificates still being valid (my router doesn't support NAT loopback). In my SWAG nginx folder, I created a lan-only.conf file:

     allow 192.168.1.0/24;
     deny all;

     Then, in my bitwarden.subdomain.conf file, I added the following to location /admin:

     include /config/nginx/lan-only.conf;

     This way, any non-LAN access through the reverse proxy gets returned a 403 Forbidden. I also use the same method for several of my other web services that I want LAN access to, but not external access, while still being able to use DNS entries to reach them (I used to run 2 SWAG containers, one for internal and one for external, but managed to consolidate it all down with the added config).
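Putting that together in one place, the pattern looks roughly like this (the location block is a hypothetical excerpt; the exact proxy directives in a SWAG subdomain conf will differ):

```nginx
# /config/nginx/lan-only.conf -- permit only the LAN subnet
allow 192.168.1.0/24;
deny all;

# Excerpt of bitwarden.subdomain.conf (surrounding proxy details omitted)
location /admin {
    include /config/nginx/lan-only.conf;
    # ...existing proxy_pass / include lines for the app go here...
}
```

Because nginx's allow/deny rules are evaluated in order, the single allow followed by deny all means any client outside 192.168.1.0/24 gets the 403 described above.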
  4. Hi there. I've had this up and running for a couple months now, and it works great. Now, I'm working through converting my dockers over to a custom network, rather than using the default bridge network. I noticed UrBackup is set up with HOST networking. Any reason I can't change it to use the custom network and manually forward the 3 ports (55413, 55414, and 55415)?
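As a sketch, the change would look something like the following (network name, image, and volume path are assumptions, not from the original template). One caveat worth checking: I believe UrBackup's LAN client auto-discovery relies on UDP broadcasts, which won't cross a bridged network, so clients may need the server address entered manually:

```shell
# Hypothetical: run UrBackup on a user-defined network instead of HOST,
# publishing the three TCP ports from the post (plus discovery UDP,
# port number assumed) explicitly.
docker run -d --name=urbackup \
  --net=mynet \
  -p 55413:55413 -p 55414:55414 -p 55415:55415 \
  -p 35623:35623/udp \
  -v /mnt/user/appdata/urbackup:/var/urbackup \
  uroni/urbackup-server
```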
  5. I have. I used the same ddclient.conf file that was posted at https://github.com/ddclient/ddclient/pull/102#issuecomment-619370329, and have my API zones specified. Modifying the .conf file to remove the login line, as identified on line 195 at https://github.com/ddclient/ddclient/blob/develop/ddclient.conf.in, seems to cause the client to be unable to parse the .conf file. So I'm stuck, and curious if anyone else has gotten it to work.
  6. So I have one container set up to generate a wildcard cert for my domain, using DNS validation on Cloudflare. I was using the Global API key before, but I'm trying to convert over to an API token instead. I updated my cloudflare.ini file, removing the dns_cloudflare_email and dns_cloudflare_api_key values and inserting a dns_cloudflare_api_token value instead. However, since my cert is currently valid, I'm not seeing it attempt in the logs to regenerate the cert and use the new API token. Is there an easy way I can force the certificate to regenerate to test my configuration change?
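One way to test without waiting for the renewal window, assuming the cert container is named "swag" (adjust the name to your setup) and uses certbot under the hood:

```shell
# Validate the new token end-to-end without touching the live cert
# (--dry-run renews against the staging environment only)
docker exec swag certbot renew --dry-run

# If the dry run passes, force an early real renewal
docker exec swag certbot renew --force-renewal
```

The dry run is the safer first step, since a forced renewal counts against Let's Encrypt rate limits.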
  7. I see that the latest ddclient versions support using Cloudflare's API tokens instead of the Global API key. However, I haven't been able to get my ddclient.conf file updated properly to take it. Anyone had any success doing so? I tried the Cloudflare DDNS docker, and that worked without issue, so I know the token is good. What I'm seeing is that if I include the line "login=token", then the API request is sent to Cloudflare, but returns a failure: However, if I remove the "login=token" line entirely, then the docker doesn't even attempt to send the request:
  8. Thanks. Never realized this restriction existed in *nix systems.
  9. Thanks @mattie112. I don't use NPM personally, as I set all my stuff up on SWAG before NPM was available, but decided to give it a quick shot after I started helping a couple guys on reddit and was confused as to why it wasn't working. One quick question for you: Why did you choose to leave the WebUI port as 8181 rather than setting it back to 81 as the official container has it? Would it be because the CA template wouldn't support opening directly any more, and you'd have to do an advanced edit to set the port yourself?
  10. A couple reasons. Firstly, as other people have said, speed and keeping things local when you are on your LAN and don't have a need to go out on the internet and expose that traffic (yes, I get that loopback doesn't technically get out on the net, but there's a DNS query, and it's generally the principle of the thing). Secondly, and this is my impediment: not everyone's router supports NAT loopback. Mine is an ISP-supplied router that handles both my internet and TV service running on a fibre connection, and replacing it would require me spending quite a bit more to get an SFP-based router that I can put a custom config on to support the TV service, plus getting an additional access point. Just not cash that I want to spend right now. What I'm trying to understand is why you decided to change the ports in the docker container from their defaults. Genuinely curious about this decision, because I would have thought it would be better to leave them on their defaults, and use the CA template to set the "external" ports to something else.
  11. As part of the template, a config share would have been mounted (typically in your appdata share), and in that share is a conf file. Edit that file, and when the docker restarts, the file in the config share is copied over the .conf file within the docker container and then loaded.
  12. I'm interested in setting this up, but I'm curious what the RAM requirements would be for a single user implementation (with possible growth to 5 or so users if I bring my family onboard). That would be the RAM for both the nextcloud docker as well as the required MariaDB instance. Right now, my unraid server only has 8GB total, averaging about 3GB free/cached.
  13. Favourite feature is the parity implementation, and being able to add a single drive at a time while keeping the parity fully intact. If there was one thing I would like to see added, it would be support for having SSDs as part of the disk array, as well as being able to set up secondary disk arrays.
  14. Might not be the right place, and I may have to go to full Plex support, but I'll start here. I've had this weird issue come up a couple times over the last couple weeks, and I'm not sure what's causing it. What happens is that I'm playing something off my Plex server on the local LAN, and then a buddy of mine who I've shared the server with starts to play something. I've got Tautulli monitoring things, and it sends me Slack notifications when something starts playing. What happens is that I end up with a constant stream of updates where Tautulli sends notifications for both items that are now being played, over and over and over. Today when this happened, I checked my Plex dashboard, and it only showed one thing playing. But the weird part is that for 8 seconds, it showed only what my buddy was playing. Then for one second it switched to my show, and then switched back to my buddy. And it goes on like this on a 10s repeating loop. Tautulli was also only showing 1 active stream when in reality we both had something going. The Tautulli notifications would also occasionally get mixed up, stating my buddy was playing what I was watching. Anyone seen anything like this before?
  15. Changing the IP in Namecheap had no effect; no updates were issued. If I deleted the cache file in the docker and let it continue to run, then I'd see in the console (within 5 minutes) that the IP address was updated, and my DNS gets updated properly. Seems like it does check against the cache file, and if it matches, doesn't do anything. Which is kind of good, as I don't want to issue erroneous updates if there's nothing to change. But it would be nice if it would say that it did check and no changes were required. Seems like it's a known "feature" of ddclient, and various people have come up with all sorts of workarounds to deal with dynamic DNS providers that require updates every X days: https://www.linuxquestions.org/questions/debian-26/ddclient-will-not-update-316726/#post1611217
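The behaviour described above can be sketched roughly like this (a minimal Python illustration of the cache check, not ddclient's actual code; the max_age knob is an assumption standing in for the "update every X days" workarounds):

```python
from datetime import datetime, timedelta
from typing import Optional

def should_send_update(current_ip: str, cached_ip: Optional[str],
                       last_update: Optional[datetime],
                       max_age: timedelta = timedelta(days=25)) -> bool:
    """Send an update to the DNS provider only when the detected IP
    differs from the cached one, or when the cache entry has gone stale
    (hypothetical knob, since some providers expire unused entries)."""
    if cached_ip is None or last_update is None:
        return True  # no cache yet: always update
    if current_ip != cached_ip:
        return True  # IP changed: push the new address
    return datetime.now() - last_update > max_age  # force a refresh when stale

# A matching cache entry means no update is sent (and, per the post,
# no console log line either); a changed IP triggers one.
print(should_send_update("203.0.113.7", "203.0.113.7", datetime.now()))
print(should_send_update("203.0.113.8", "203.0.113.7", datetime.now()))
```

Deleting the cache file, as in the post, is effectively the first branch: with no cached entry, the next 5-minute poll always issues an update.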
  16. I've had it running for the last 5 hours now, and I haven't seen any new log entries since the last time the dockers were restarted.
  17. Thanks @saarg. While it didn't directly resolve my issue, it was enough to get me started. Been trying to get this to work for a while now, and been using a few different templates for it. First there was mace's, and then tried adding lsio's direct from Docker Hub. I'm guessing I was getting some package corruption between various elements. I had tried removing each of the dockers, including the option to remove the image, but I guess it was still leaving remnants. In particular, I noticed today after removing the image, if I used the "Add Container" option, I could still pick from all the old templates I had tried. So a quick Google search on how to get rid of those (CA --> Previous Apps --> "X"), and installed the docker fresh (yet again). Updated the config file, and success. Even have it working with 2 copies of the container going (2 domains, one still on Namecheap, the other with Cloudflare). For those curious, here's the final Cloudflare config I ended up with:

      use=web, web=dynamicdns.park-your-domain.com/getip
      protocol=cloudflare, \
      zone=mydomain.com, \
      ttl=1, \
      [email protected], \
      password=ThisIsWhereYourGlobalAPIKeyGoes \
      subdomain1.mydomain.com,subdomain2.mydomain.com

      One question: I'm sure when I had mace's docker going against Namecheap, it would ping in the console log every 5 minutes, stating that the IP was updated. Has this version been updated such that it'll only post to the console if/when the IP changes and it sends a command out to update the DNS entry? If so, is there some change I can make so that it at least logs that the current IP was checked?
  18. Thank you for this. But do you have a configuration file that works with Cloudflare? I must have tried a couple dozen different configs, and I can't get it to work. I keep getting one of two errors:

      WARNING: file /var/cache/ddclient/ddclient.cache, line 3: Invalid Value for keyword 'ip' = ''
      WARNING: found neither ipv4 nor ipv6 address

      Here's the Cloudflare section of my current config:

      protocol=cloudflare, \
      server=www.cloudflare.com, \
      [email protected] \
      password=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
      zone=XXXXXX.com, \
      YYYYYYYY.XXXXXX.com

      Edit: I'll add that if I include a line to detect my IP from a web page (i.e. use=web, web=dynamicdns.park-your-domain.com/getip), I get the following series of errors:

      WARNING: file /var/cache/ddclient/ddclient.cache, line 3: Invalid Value for keyword 'ip' = ''
      WARNING: skipping update of YYYYYYYY.XXXXXX.com from <nothing> to WWW.XXX.YYY.ZZZ.
      WARNING: last updated <never> but last attempt on Thu Jun 13 12:26:25 2019 failed.
      WARNING: Wait at least 5 minutes between update attempts.

      With WWW.XXX.YYY.ZZZ being my actual IP address. That would imply that ddclient is actually getting my IP address. But for whatever reason, it's just not parsing it correctly to send it to Cloudflare.
  19. Think I found the issue. Silly me, I had the first DNS server listed in my unraid network settings set to my Pi-hole docker that I run on its own IP. Of course, unraid can't talk to it due to macvlan network isolation. Updated my DNS settings so that unraid only has the external DNS entries (8.8.8.8, 1.1.1.1, etc.), and update resolution seems to be back to normal.
  20. Upgraded to 6.7.0 last week, and have started running into an issue updating my docker containers. When I click the Check for Updates button, all my dockers just sit on "checking..." forever. Left it for 20 minutes now, and it hasn't returned. If I refresh the Docker page, then I can see that the check process has completed (or at least progressed), and several of my dockers do have updates available. I've attached the diagnostics for review. diagnostics-20190531-2039.zip
  21. Just installed it on a 6.6.7 install, and it's looking good. However, I'm getting an alert in it about my cache drive, saying that the btrfs allocation is 97.3%. However, if I look at my unraid dashboard, it's only around 60% allocated (157GB out of 256GB). Any clue why this alert is being raised, and how I may clear it?
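The mismatch is likely the difference between chunk allocation and actual usage: btrfs allocates space in chunks, and the plugin appears to alert on the allocated fraction, while the dashboard shows bytes actually in use. The arithmetic below uses an assumed allocated figure to show how both percentages can be true at once:

```python
# Illustrative only: reconcile the two percentages from the post.
device_size_gib = 256
used_gib = 157            # bytes actually written (the dashboard's ~60%)
allocated_gib = 249       # chunk-allocated space (assumed figure that
                          # would produce the plugin's 97.3% alert)

print(f"used:      {used_gib / device_size_gib:.1%}")
print(f"allocated: {allocated_gib / device_size_gib:.1%}")

# A balance can repack mostly-empty chunks and lower the allocated figure,
# e.g. (cache mount path assumed):
#   btrfs balance start -dusage=75 /mnt/cache
```

`btrfs filesystem usage /mnt/cache` (path assumed) reports both numbers, which is a quick way to confirm whether this is what the alert is reacting to.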
  22. Has anyone been able to get the Pieces plugin (https://dev.deluge-torrent.org/wiki/Plugins/Pieces) to work successfully with this? The best I've ever gotten is for the tab to show up, but nothing to be shown in the tab. Most of the time for me, it ends up where the Web UI can't connect to the running instance.
  23. Tried the apctest stuff. First attempted to run the battery calibration, and it wouldn't run. Next up, I checked the battery date, and it was properly reporting that it was replaced on 2/2/2018. Then tried to run a self test, which failed. So then I decided to go for it: I pulled the plug from the wall and let it run off the battery for 10 minutes. No issues there. Plugged it back in, and re-ran a self test manually from the UPS (holding the power button for 6 seconds). The self test passed and the alarm has cleared. Still, would have been nice if you guys could have addressed my original question at some point and said whether there was a way to silence the notification in unraid.
  24. No way on the UPS to tell it that the battery was replaced. Only thing you can do is run the self test. After a couple of self tests on the new battery, the alerts on the UPS went away. Been running without issues for about 10.5 months now. It's only recently started alerting me about the battery again. Planning on replacing the UPS as a whole in the future, as there's better options out there now for cheaper. But getting back to my original question: Is there a way I can get Unraid to stop alerting me about this? I don't need the reminder every 9 hours.
  25. I replaced the battery pack less than a year ago. It's failing the self test, but the estimated run-times are still plenty high. If this was a production system, I'd do something more, but given it's just my home server, I'm more concerned about the line conditioning and dealing with brown-outs, which it's handling fine.