CorneliousJD

Members · Posts: 691 · Days Won: 1

Everything posted by CorneliousJD

  1. So it seems the root cause of this issue is an NGINX Proxy Manager (NPM) issue, as noted here: https://github.com/jc21/nginx-proxy-manager/issues/1717 I would say the real fix here is to roll back until they fix the problem; 2.9.13 seems to work fine. If you're using the jlesage NPM image, you can add :v1.23.1 to the end of the docker image it pulls to downgrade, which fixes the issue as well. Once NPM fixes the problem itself, the .env modification we are talking about shouldn't be needed anymore.
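
     For anyone running the container outside the unraid template, pinning the tag looks roughly like this. This is a sketch, not the exact template: the ports and appdata path are examples, and you should match whatever your existing container uses.

     ```shell
     # Pull the pinned (pre-regression) jlesage build instead of :latest.
     docker pull jlesage/nginx-proxy-manager:v1.23.1

     # Recreate the container against the pinned tag
     # (ports and volume path below are placeholders).
     docker run -d --name nginx-proxy-manager \
       -p 8181:8181 -p 8080:8080 -p 4443:4443 \
       -v /mnt/user/appdata/nginx-proxy-manager:/config \
       jlesage/nginx-proxy-manager:v1.23.1
     ```

     In the unraid template the same thing is done by appending :v1.23.1 to the "Repository" field.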
  2. In that directory there's only one file, which is .env (no filename, just the .env extension): \appdata\heimdall\www\.env
  3. If you aren't a fan of the command line, you can just edit it by going to \appdata\heimdall\www and opening the .env file with a text editor of your choice. I put in https://sub.domain.com, saved it, restarted the container, and I'm good to go again.
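
     If you'd rather script the change, here's a minimal sketch. The variable name APP_URL and the path /mnt/user/appdata/heimdall/www/.env are assumptions based on a stock Heimdall (Laravel) install; the demo below edits a throwaway copy so it can run anywhere.

     ```shell
     # Demo: recreate a minimal Heimdall-style .env and rewrite APP_URL in place.
     # On unraid the real file would be /mnt/user/appdata/heimdall/www/.env (assumed).
     ENV_FILE=$(mktemp)
     printf 'APP_NAME=Heimdall\nAPP_URL=http://localhost\n' > "$ENV_FILE"

     # Point the app at the external URL it is actually served from:
     sed -i 's|^APP_URL=.*|APP_URL=https://sub.domain.com|' "$ENV_FILE"
     grep '^APP_URL=' "$ENV_FILE"
     ```

     Restart the container afterward so the new value is picked up.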
  4. There should be a lot of good resources for creating a Postgres DB via docker/unraid already out there if you search for them. Once you do that, you should be able to simply plug the proper PGSQL database settings into the PWPush docker container and be off to the races. If you're having trouble, I'd suggest trying again, taking screenshots of your setup and any errors, and posting them here; I can probably find the time to help with that, but I just don't have the time at the moment to build a full walkthrough for it.
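
     As a rough starting point, a standalone Postgres container looks something like the sketch below. The container name, credentials, and volume path are all placeholders, and the DATABASE_URL variable name is an assumption: check the PWPush documentation for the exact setting your image expects.

     ```shell
     # Example Postgres container for PWPush to connect to
     # (names/credentials/paths below are placeholders, not recommendations).
     docker run -d --name pwpush-db \
       -e POSTGRES_USER=pwpush \
       -e POSTGRES_PASSWORD=changeme \
       -e POSTGRES_DB=pwpush \
       -v /mnt/user/appdata/pwpush-db:/var/lib/postgresql/data \
       postgres:15

     # Then point PWPush at it with a connection string along these lines
     # (variable name is an assumption -- verify against the PWPush docs):
     # DATABASE_URL=postgres://pwpush:changeme@pwpush-db:5432/pwpush
     ```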
  5. Thanks for all the assistance. It runs non-correcting during scheduled runs, so all good there. Just wanted to say thanks again - appreciate the help. Ordering a new drive now.
  6. Thanks, this disk has ~7 years of power-on time. I think I'll just replace it. My monthly parity check is going to run on the 1st. Do you think it would be wise to simply pull this drive from the array now and leave it offline until I get a replacement, or let it run until a replacement shows up?
  7. Thanks, I ran an extended test and it did note there are errors, but it also says "SMART overall-health self-assessment test result: PASSED" I guess I'm not sure what I'm supposed to make of that. Here's the log (attached) WDC_WD4003FZEX-00Z4SA0_WD-WCC131368331-20211229-0816.txt
  8. Hi all, apologies for what may seem like a silly/stupid question. This is actually the first drive I've ever had with a potential failure in unraid. Not sure what to actually do about these warnings I got. Event: Unraid Disk 8 SMART health [197] Subject: Warning [SERVER] - current pending sector is 1 Event: Unraid Disk 8 SMART health [198] Subject: Warning [SERVER] - offline uncorrectable is 1 Does this mean I should be concerned and move to replace the disk ASAP?
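
     For anyone hitting the same warnings: the two attributes behind them (197 Current_Pending_Sector and 198 Offline_Uncorrectable) can be read directly with smartctl. On the server that's `smartctl -A /dev/sdX`; the sketch below parses a captured sample table instead of a live drive, so the extraction can be demonstrated anywhere.

     ```shell
     # A captured two-line sample of smartctl -A output stands in for
     # the live drive; the raw value is the last column.
     sample='197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
     198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       1'

     # Pull the raw values for attributes 197 and 198:
     pending=$(printf '%s\n' "$sample" | awk '$1==197 {print $NF}')
     uncorrectable=$(printf '%s\n' "$sample" | awk '$1==198 {print $NF}')
     echo "pending=$pending uncorrectable=$uncorrectable"
     ```

     A nonzero value in either raw column is what triggers unraid's warning, even while the overall SMART self-assessment still reports PASSED.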
  9. Interesting, when you started up a new container, did you delete all previous appdata for the old AMP container? If not, try just pointing it to a new appdata folder (e.g. /mnt/user/appdata/amp2) and try again; this will ensure a fresh data folder with no leftover inherited settings.
  10. You either set your default user/password in the container itself, or it will default to admin//password -- have you tried both what you entered and admin//password, and it still won't let you in?
  11. The first docker you list is the Joplin server, which is optional and new. The 2nd is Postgres for storing the database - you'd want that or another Postgres container for the database. Third is this container's support page -- it's the VNC-based access method. It's not great, but it's the only webUI option. The 4th is not required - it is just a base package and is built into the 3rd one. Another option you may be interested in is Trilium Notes - it's an "all in one" system with a database, web UI, server, etc., has apps for all platforms, and is based on markdown; you may find it useful. Joplin to me feels dated at this point and the sync methods are just... cumbersome. I wish it worked differently!
  12. I'm not a big unraid expert myself, but there are so many people here willing to help. Since this is affecting multiple containers crashing on you, it may be worthwhile to post your system diagnostics to the general help/support page and note that multiple docker containers are crashing regularly and that you're seeking help trying to narrow down what to look for and what it may be.
  13. You will for sure want to start by checking the logs (click the docker container icon and click the logs button) when a crash happens to see what the last thing it logged was; hopefully it gives you some insight as to what is crashing or why. I can say that I run both Nextcloud and Uptime Kuma (as well as about 25-30 other containers) without crashes for months on end, so it definitely seems to be something on your end. Once you get some logs, feel free to post the relevant info here and I'll take a peek and see if there's anything I can help you with.
  14. Odd, mine has been running just fine for over 3 months with no issues. FYI, you can disable those healthchecks by adding --no-healthcheck into the "extra parameters" of the container under advanced, which will prevent it from reporting health checks, but I don't think that's your issue. Have you checked logs to see if you can determine why it crashed?
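
      For reference, unraid's "Extra Parameters" field is just appended to the underlying docker run command, so the equivalent on a plain command line looks like this (container name, ports, and paths below are examples):

      ```shell
      # --no-healthcheck disables the image's built-in HEALTHCHECK entirely,
      # so the container can no longer flip to an "unhealthy" state.
      docker run -d --name uptime-kuma --no-healthcheck \
        -p 3001:3001 \
        -v /mnt/user/appdata/uptime-kuma:/app/data \
        louislam/uptime-kuma:1
      ```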
  15. thanks for this, I'll add it to the first post and credit you for this
  16. Glad you have things working for you - I'm actually not the developer and have nothing to do with the development of this app, I just created the docker container template for unraid to bring it to unraid users easily. You may want to check the developers github page to see if they have this as planned features or ping them there to see. Good luck!
  17. That would work with the variable for sure, and I look forward to the rest of the customizations going live. I think this will become our go-to option once those are all finished, thank you!
  18. Thanks for all this! I really like PWPush! I ended up using another option in production (OneTimePassword) because the container I used has CSS/logo customizations, etc., and we needed it to be branded, but I would definitely reconsider once PWPush has that feature. I REALLY like the audit logging too, thank you for that. If I could request one more feature, it would be making the days/views limits and "1 click password retrieval" configurable somewhere as a global default. Our whole team would need that set. Ideally a way to let it NOT be overridden too, e.g. our team ONLY sends passwords that can be viewed ONCE and that's it, for safety reasons.
  19. I don't use SWAG because of this; I have had so many issues with apps not working behind it without extensive configuration changes. I have Uptime Kuma running in NPM (NGINX Proxy Manager) without any additional changes or config, for what it's worth... Works great!
  20. Excellent, glad to hear it! I have had weird bugs since 6.9 and that was one of them. Implementing VLANs for my custom IP containers fixed everything, though. I had the necessary hardware to support VLANs already, so it wasn't a big deal for me to change. Glad it's working now though, enjoy!
  21. I do have an idea: have you ever experienced a crash of the system since upgrading unraid to 6.9.X? If so, I had an issue (currently logged as a bug with limetech already) where after a crash, my host access to custom docker networks would stop working (despite showing enabled). I had to stop the docker engine, turn the access off, save, turn it back on, save, and then start the docker engine again for it to actually work. I'd say it's worth a shot! If that does fix it, you'll need to remember to do the same thing after any other crashes. I was able to personally avoid any further crashes by moving from br0 to br0.10 (putting it on a VLAN). For more info on the crashes/VLAN issue, see here:
  22. That is odd, I'm not sure exactly how to help with that off hand; I CAN ping my PiHole br0 IP address from my unraid command line. The only difference between our setups is my PiHole is on br0.10 (VLAN 10). Might be a good opportunity to start a thread in general unraid support and bring up that you have a custom br0 docker container and the unraid command line is unable to ping it - I'm sure someone can help point you in the right direction. Once you have that resolved, the Uptime Kuma ping to your PiHole br0 docker should work successfully.
  23. I have a similar br0 PiHole setup, just tested and working for me to ping from Uptime Kuma, so I'm not sure what issue you'd be running into. From the command line of unraid server, are you able to ping 10.0.0.254 successfully?
  24. You need to turn on communication between the docker host and macvlan networks to allow them to communicate. Try turning off the docker engine in settings; the option to turn that on should then be there, then enable the docker engine again.
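
      Under the hood, that setting works by giving the host its own macvlan "shim" interface on the same parent as the docker network, since macvlan containers can't talk to their parent interface directly. A rough manual equivalent is sketched below; the interface names, the spare host IP, and the 10.0.0.0/24 subnet are all assumptions for illustration, and unraid manages this for you when the setting is enabled.

      ```shell
      # Create a host-side macvlan interface bridged to the same parent (br0)
      # as the docker macvlan network, giving host <-> container traffic a path.
      ip link add shim-br0 link br0 type macvlan mode bridge
      ip addr add 10.0.0.250/32 dev shim-br0   # spare host IP on the docker subnet
      ip link set shim-br0 up
      ip route add 10.0.0.0/24 dev shim-br0    # reach container IPs via the shim
      ```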