
CorneliousJD
Members • Posts: 692 • Days Won: 1

Everything posted by CorneliousJD

  1. At this point I think I need to reset the camera to defaults, but I'm not sure whether that will wipe out its network settings and how it connects to WiFi. If it does, that's not really an option, because I can't get the cameras down right now from where they're mounted. If anyone has any other suggestions, that would be helpful. Thank you.
  2. Well, I've now somehow made it worse. I clicked "Unmanage" on the camera, then tried to manage it again, and it got stuck in the same "Managing" state with an orange icon. I did realize my firmware mistake: I was using the original UVC Micro firmware and not the G3 Micro firmware, so I downgraded the firmware, and that took and I can log back into the camera. But now nothing shows under Cameras in my controller, managed or unmanaged.
  3. Weird, I don't think my firmware got updated, but I did try to download an older release and update via the camera's web UI, and it just says it's unsupported.
  4. Ah, thanks, this did work. Something else weird is going on, though: my camera now says it's offline, even though I can see it in the list with an orange dot that says "Managing". The IP address it shows is valid, and I can reach the camera's own web UI.
  5. As always, guys, thanks for what you do! This was a super quick and easy transition for me. I just pointed the new container at the same appdata, and the reverse proxy I had set up still worked; no issues so far with anything. I had way more ports mapped on my OLD container for some reason, but honestly I'm not sure what they were all doing. Attached here just in case, but I didn't re-create them; I left the defaults in the new template alone and all seems good.
  6. Let it run long enough again this time and it starts locking up the server until I end the container, but I gathered some logs:

     2019-02-18 08:38:49.997164 [warn] PUID not defined (via -e PUID), defaulting to '99'
     2019-02-18 08:38:50.179288 [warn] PGID not defined (via -e PGID), defaulting to '100'
     2019-02-18 08:38:50.341849 [info] Permissions already set for volume mappings
     Starting unifi-video...
     (unifi-video) checking for system.properties and truststore files... done.
     /run.sh: line 107: 7858 Aborted mongo --quiet localhost:7441 --eval "{ ping: 1}" > /dev/null 2>&1
     /run.sh: line 107: 8125 Aborted mongo --quiet localhost:7441 --eval "{ ping: 1}" > /dev/null 2>&1
     /run.sh: line 107: 8802 Aborted mongo --quiet localhost:7441 --eval "{ ping: 1}" > /dev/null 2>&1
     /run.sh: line 107: 8884 Aborted mongo --quiet localhost:7441 --eval "{ ping: 1}" > /dev/null 2>&1
     /run.sh: fork: retry: Resource temporarily unavailable
     /run.sh: fork: retry: Resource temporarily unavailable

     Then when I closed the container it ended with this:

     /run.sh: line 107: 26679 Aborted mongo --quiet localhost:7441 --eval "{ ping: 1}" > /dev/null 2>&1
     Waiting for mongodb to come online... (the dots repeat indefinitely)

     EDIT: For now I changed my repo to pducharme/unifi-video-controller:3.9.11 and it's at least up and online now, although all my previous recordings appear to be gone.
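The repeated "line 107" failures come from a readiness loop in the container's run.sh that pings mongo until it answers. A minimal sketch of that pattern, under stated assumptions: the function name, retry count, and sleep interval are mine, and the real script probes `mongo --quiet localhost:7441 --eval "{ ping: 1}"` rather than the placeholder commands shown here.

```shell
# Hedged sketch of a "wait for service" loop like the one run.sh appears to use.
# wait_for retries a command until it succeeds or gives up after 5 attempts.
wait_for() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 5 ]; then
      return 1              # service never came up
    fi
    sleep 0.1               # brief pause between probes
  done
  return 0
}

wait_for true && echo "online"    # probe succeeds immediately, prints: online
wait_for false || echo "gave up"  # probe always fails, prints: gave up
```

The fork-retry errors in the log suggest the loop itself was fine but the system ran out of process resources, which is why the probes kept aborting.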
  7. I changed to :testing to see if that would change anything, but it sat at "UniFi Video is Upgrading" for well over an hour, so I stopped it and removed :testing from the version, and it's still doing the same thing. All my logs show now is:

     2019-02-18 08:38:49.997164 [warn] PUID not defined (via -e PUID), defaulting to '99'
     2019-02-18 08:38:50.179288 [warn] PGID not defined (via -e PGID), defaulting to '100'
     2019-02-18 08:38:50.341849 [info] Permissions already set for volume mappings
     Starting unifi-video...
     (unifi-video) checking for system.properties and truststore files... done.

     And the web UI is just "UniFi Video is Updating", spinning over and over, and doesn't seem to actually be doing anything now.
  8. Something broke last night with the latest update. I woke up to all cameras offline and making the offline "beep" noise; the docker was started, but the logs showed it aborting mongo and the web UI wouldn't load.
  9. I had found this out from the GitHub page right as you posted it. I didn't realize when I was asking that it was something container-specific; I thought it was something I could increase or decrease for each container somewhere in unRAID settings. Thanks for the post. I'm back up and running, and I found the file that was causing the issue: a database file I'm already backing up another way every week, and it's only a big log for Home Assistant. I excluded that from the backups, and I think we should be good to go now, thank you very much! For the inotify watch limit, is that a setting somewhere in unRAID that I can change if I keep getting that error? Thanks in advance!
  10. Do you have a resource you can point me to that shows how to do this? I'm not familiar with the process, as I haven't needed to increase it for any other containers yet, so this is new to me. Thanks! EDIT: I have 128GB of RAM in the server running unRAID, so I should be able to give it a sizeable bump. EDIT2: I see this container has a special place for that, whoops! Increasing it to 2048 for now just to make sure that solves all the issues first; I can reduce it later if I feel the need.
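For reference on the earlier inotify question: the watch limit is a kernel-wide sysctl, not a per-container unRAID setting, which is why it doesn't appear in container templates. A quick way to inspect and raise it; the value 524288 is just a common choice, and persisting the change across unRAID reboots (e.g. via a startup script) is an assumption left as a comment.

```shell
# Read the current kernel-wide inotify watch limit (Linux).
cat /proc/sys/fs/inotify/max_user_watches

# To raise it for the running system (needs root). Making this survive a
# reboot on unRAID is setup-specific (assumption), so it is left commented:
# echo 524288 > /proc/sys/fs/inotify/max_user_watches
```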
  11. I got my monthly email update and noticed that I haven't had a 100% backup completed in nearly 60 days. When launching the container I see the following errors. Note my server uptime is only 17 days, so this container was restarted a little over two weeks ago. I'd love to know how to fix these for good so I don't have incomplete backups and so it runs effectively. EDIT: It also looks like a partial cause of my backups not being fully complete for 60 days is that 3 files are not backing up. Where can I find detailed logs that show which files are having these issues, so I can take the appropriate action? In the crashplanpro.com "history" for this device I see the following in December:

      12/29/18 09:00PM [User Shares] Starting backup to CrashPlan PRO Online: 275 files (9GB) to back up
      12/29/18 09:10PM [User Shares] Completed backup to CrashPlan PRO Online in 0h:10m:03s: 307 files (16.80GB) backed up, 55.60MB encrypted and sent @ 2.4Mbps (Effective rate: 319.6Mbps)
      12/29/18 09:10PM - Unable to backup 3 files (next attempt within 15 minutes)

      And then the following now:

      01/20/19 08:45PM CrashPlan for Small Business started, version 6.9.0, GUID 831523038531747495
      01/20/19 08:45PM [User Shares] Scanning for files to back up
      01/20/19 08:45PM [User Shares] Starting backup to CrashPlan PRO Online: 57 files (1.20GB) to back up
      01/20/19 08:46PM [User Shares] Scanning for files stopped
      01/20/19 08:46PM CrashPlan for Small Business started, version 6.9.0, GUID 831523038531747495
      01/20/19 08:46PM [User Shares] Scanning for files to back up
      01/20/19 08:46PM [User Shares] Starting backup to CrashPlan PRO Online: 57 files (1.20GB) to back up
      01/20/19 08:47PM [User Shares] Scanning for files stopped
  12. Well, I did end up fixing this, though not in a way I found many answers for; this works, it still gives me an A+ on the Nextcloud security scan, and I get the following in Nextcloud too! All I did to fix this on my end was go into my LetsEncrypt site-conf for Nextcloud and, under the ### Add HTTP Strict Transport Security ### section, add this header line:

      add_header Referrer-Policy no-referrer;

      After saving this, restarting LE and Nextcloud, and re-checking, all checks pass and I'm getting an A+ on the security check!
  13. Not too sure offhand; it may be something going wrong with mixing SpaceInvader One's guide with the one I linked. You may want to blow it away and start over with the guide I linked above; it worked well for me with no issues when I set up my container roughly 6-7 months ago.
  14. Try this URL and check the section at the end covering the Nextcloud config files: https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/ It has to match the domain name you're trying to access from (your public-facing DuckDNS domain).
  15. OK, so far I am still really struggling with this error. I searched around and found that the setup I followed had me put this in my reverse proxy (the LetsEncrypt container), in my Nextcloud site-config:

      ### Add HTTP Strict Transport Security ###
      add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";

      The reasoning, I guess, is that Nextcloud's own NGINX will issue those headers itself after the update, and having both cancels them out. I've tried removing that from my LetsEncrypt site config and restarting both containers, but no dice; the error still shows up. Not sure what I'm doing wrong here. The post I initially followed to set this all up is here: https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/
  16. Sorry to bother you; I did try searching for this and found others with the issue but no resolution. I had 13.0.0 installed, updated to 14.x and then 15.0.2 after that, all via the web UI, and that went very smoothly. I just now have these warnings; before the upgrades I didn't have any warnings or issues listed here, and I still get an A+ rating on the Nextcloud security scan, but I would like to resolve all of these issues for good measure. EDIT: Got the tables updated with the sudo -u abc command in the docker shell, but I'm still not sure why the Referrer-Policy check is kicking that back; I thought I had that issue on 13.x originally and fixed it. I'll have to look around some more, but if someone has a link or info handy, feel free to send it my way! Any help is appreciated!
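For anyone hitting the same post-upgrade warnings: the database-table fixes are done with Nextcloud's occ tool, run as the abc user inside the linuxserver image. A sketch, assuming the container is named nextcloud and uses the default /config path of that image (both are assumptions; adjust to your setup):

```shell
# Hypothetical container name "nextcloud"; the occ path matches the
# linuxserver.io image layout. These must be run against a live container.
docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ db:add-missing-indices
docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
```

Both subcommands exist in Nextcloud 14+; the second one corresponds to the "some columns are missing a conversion to big int" warning.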
  17. Regarding the issues last month that had us roll back to :145: are those still present, or have they been fixed? Deluge is currently my only container not on :latest, and I'd like to get it updated to the latest and greatest if those issues have been resolved. Thanks in advance to anyone who knows!
  18. Did you try to configure your dashboards beforehand? Try removing any dashboard and config other than what's needed to connect to HA; it should then show you either a successful connection to HA or a reason for the failure. If there's a failure, you can use that to troubleshoot why; if it DOES connect, you can add your dashboards back one by one to see where the issue lies.
  19. ha_key can now be replaced with token. For more information on how to create a token, see here: https://appdaemon.readthedocs.io/en/latest/CONFIGURE.html
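For anyone updating their config, a sketch of what the switch looks like in appdaemon.yaml; the URL and token values are placeholders, and the exact surrounding layout may differ between AppDaemon versions:

```yaml
# Hedged sketch: HASS plugin section using a long-lived access token.
# UNRAIDIP and the token value are placeholders, not real values.
plugins:
  HASS:
    type: hass
    ha_url: http://UNRAIDIP:8123
    token: YOUR_LONG_LIVED_ACCESS_TOKEN   # replaces the old ha_key entry
```

The long-lived token itself is created from your Home Assistant user profile page.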
  20. Hi there, I don't actually run this docker myself; this is the official HADashboard/AppDaemon docker, and I just created the template for it and published it for unRAID. That being said, it should just be a matter of the creator of AppDaemon/HADashboard (ACockburn) updating it, and then the docker will update for everyone after that. Right now it's something you'd have to reach out to them about to see if/when they plan to update. Thanks!
  21. The first line I left there is the most important: your config doesn't have dashboards enabled. Please see the AppDaemon/HADashboard website, but it basically boils down to adding something like this to your appdaemon.yaml file in /config/:

      hadashboard:
        dash_url: http://UNRAIDIP:5050

      This should enable dashboards for you to use.
  22. Unfortunately this is the unRAID forums, and the docker template was created for use on unRAID boxes, so I'm not sure you'll find much help here for your QNAP.
  23. Ah, OK, yep, I've got 10.0.0.10 (my unRAID IP) in mine, and it's all connected in the controller and accessible via the UniFi Video cloud account. I think I'm good now; time to start tinkering with these more and order a couple more. Thanks for the SUPER fast reply on all this, you saved me from a night of frustration!
  24. OK, so I put it in br0 mode, did the adoption, and flipped it back to bridge mode, then logged into the camera interface and changed the IP to my unRAID IP address. It's showing up normally in the unifi-video container now. Seems OK, but I still see the 172.X address you mentioned in my container settings, and I see no way of changing it. How do I go about changing that? Thanks!