Everything posted by jademonkee

  1. I'm still receiving this error. Am I looking in the wrong place for a solution to this? Are LSIO still present in this forum?
  2. I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?
  3. Hi all, I keep seeing in the logs:
     No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
     run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1
     If I restart the container, the error disappears from the logs, but it eventually re-appears. The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command
     echo $MAXMINDDB_LICENSE_KEY
     it returns the correct value. The only mention of this issue that I can find is this: https://github.com/linuxserver/docker-swag/issues/139 Similar to that page, if I run:
     # /config/geoip2db# ls -lah
     it returns:
     sh: /config/geoip2db#: not found
     But that page says the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it? I'm not sure how to manually check whether those changes have been applied in the Docker or not (a rough way to check is sketched after this list). Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening. EDIT: Solution found:
  4. Agreed re CrashPlan. I haven't found any other provider/solution that's as cost effective, but I'm keen to use a different service. One day... I wonder if it's related to the Unifi Controller? It's the only Docker I have that uses port 8080, which I assume the above task is related to? Either way, I've limited CrashPlan to 4G RAM (see the sketch after this list). I'll clear the warning and see if it comes up again.
  5. I ran 'Fix Common Problems' today and received the following error and recommendation: I've never seen my RAM usage go above 50%, so this is surprising to me. I don't quite know how to find out if the errors are wrong, or if there was some process at some point that managed to fill my 16GB RAM. Would love to know more, so, as per the recommendation from the plugin, here I am posting my diagnostics. Your insight is appreciated, thank you. percy-diagnostics-20220127-1333.zip
  6. I heard that the NAND chips run better hot, so you shouldn't add heatsinks (which is why they don't come with them in the first place).
  7. Auto in disk settings is correct. The script will change it as needed. Keep in mind that it won't change instantly, however: it'll take (up to) as long as the polling interval to take effect.
  8. This post from Ubiquiti confirms that all versions earlier than Version 6.5.53 are vulnerable, so the only official fix is to upgrade to Version 6.5.54 (or better, Version 6.5.55, as it contains additional fixes): https://community.ui.com/releases/Security-Advisory-Bulletin-023-023/808a1db0-5f8e-4b91-9097-9822f3f90207 You are correct in thinking that the latest AP firmware requires a newer version of the controller, so to run the latest firmware you should probably be on the latest version of the controller as well. Looks like it's finally time for everyone to move to v6. FWIW, I have no problems with it (though occasionally it resets my theme to light and tells me that my WiFi configs aren't supported; I don't have to change anything, though, except the theme back to dark, and everything seems to work fine). You may have a fun time bumping across such a large number of versions, however, so it would be wise to follow the path up to 5.14 that you mentioned before jumping to v6 (I think I jumped from v5.14 to v6.x without major incident - I may have had to reset and re-adopt my APs, but I don't think I had to set up my network from scratch or anything like that).
  9. This is a great idea. To make a second backup set, do you run a second instance of the Docker, or can you do it all from the same Docker? And if so, how? Thanks for your help and insight!
  10. Good luck! For me it worked with only small problems. Although I think I might have had to reset the APs so that they would adopt. The tag I use is: linuxserver/unifi-controller:version-6.5.54
  11. Just checking that you saw my edit above? System files are copied from the flash drive to RAM at boot, so changes made to the running system won't persist after a reboot. I think leave the value in sysctl.conf as-is (I'm assuming that after a reboot it reverts to default and looks like mine, above, with some other stuff below it). If so, reboot, then try setting it in T&T one more time, then reboot once again to see if it sticks - this time you'll be setting it in just the one place, so maybe that's all it takes (I may be hopelessly optimistic here, though). If not, I'm clean out of ideas. The only way I know how to check is on the right side of the T&T page (there's also a command-line check sketched after this list).
  12. Oh gods not vi... never vi. That's only for oldskool neckbeards and super lightweight Linux installs. I'm sorry that you ever had to deal with vi. Use nano - it'll make your life so much easier. So ssh in (or use the web GUI terminal) and run:
      nano /etc/sysctl.conf
      (No need for sudo, as you're already root on Unraid. Note that with root comes great responsibility - so be cautious with your typing.) Nano will make soooo much more sense to you, and the shortcuts are all down the bottom to help you on your way. If you see two entries in there for inotify, delete the second one (the grep sketch after this list is a quick way to spot them), then ctrl-x, then 'y' to save the changes. EDIT: I just took a look at my sysctl.conf and up the top was:
      # from https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
      # increase the number of inotify watches
      fs.inotify.max_user_watches=524288
      That's a totally different value to what mine is set to in Tips and Tweaks, so I think it'll be ok for you to leave those lines as-is, only removing any other inotify lines at the end. Tips and Tweaks must override it somewhere else. If leaving it as default and setting a value in Tips and Tweaks doesn't work (or doesn't persist across reboots), though, feel free to change it to a larger value in sysctl.conf using nano.
  13. I struggle with the 'deep maintenance' that they run monthly on my backups (see my complaints scattered throughout this thread). On my 3.7TB backup it takes days, and backups can't run during that time (I even have to shut down the Docker sometimes to let their maintenance finish running on their server, because, for reasons unknown, the app causes it to loop and never finish). I would not recommend trying to back up 80 TB to their servers - I think 'deep maintenance' would never finish.
      I am in a fairly regular "f&*k this service" mood every month or so, then whatever the problem was gets fixed, I run the maths, and I can't disagree with the value. But this is not a set-and-forget service, so I now have a habit of checking the Docker every second day or so to make sure it's still working, still backing up. You get what you pay for, I guess. I also think they may get shirty at your 80TB backup ('unlimited' is never unlimited), but I have no experience or even anecdote to back that up - it's just me thinking out loud. They also recommend 1GB RAM per TB backed up. I doubt you'll actually need 80GB RAM for the Docker, but that's just what they recommend. Note that the service isn't meant for archival - everything is geared towards looking for changes in files, and uploading and versioning them, so there is a lot of overhead. Thus 80TB of archive will cause the whole backup to run inefficiently (deep maintenance is just one aspect of this).
      I have been considering buying a second server and storing it at a friend's house for off-site backup (running a weekly script to connect, sync, disconnect - see the sketch after this list). I don't know if that's feasible for an 80TB backup, but I imagine your cloud fees will be huuuge with any provider other than CrashPlan, so it may not take too long to pay off the cost of a second-hand server with some Black Friday/Cyber Monday shucked drives (I am also assuming you have such a friend, with a good internet connection, a spare port on their UPS, room in their cupboard/basement, and who trusts you to place hardware on their network... hrmmmm, lots of maybes here...). Just a thought.
      Also: do you really need all 80TB backed up in the cloud? At a guess, most of it will be media that is already 'backed up' somewhere on the internet, and will just (ok, that word is doing some heavy lifting) need to be downloaded again (and may not even really be missed if it was lost). Again, just speaking from my choice in what I back up, so YMMV. It may even be a good idea to split the backup into different types: irreplaceable and/or frequently updated files backed up using CrashPlan; replaceable/seldom updated files (ie media) in cold/archive storage somewhere (even a series of external hard drives kept offsite). Just my £0.02
  14. This is beyond my ken, but I would guess it's being overwritten by something else, or else maybe conflicting settings lead the system to choose a default setting for it (this is totally a guess on my part, though). Mine just worked and stuck (at least from memory). If you have added the line to the end of your sysctl.conf using the echo >> command, open sysctl.conf and remove the line (if there's more than one instance of the line, this may explain these problems, but leave the first instance and only remove the one at the end that was added by the echo >> command). Then reboot, set it in the plugin GUI, then reboot again and see if it sticks.
  15. I don't know if you adding the line to sysctl.conf has changed something (or if it would even survive a reboot), but I, too, needed to change the setting and used the Tips and Tweaks plugin to do so. Mine is currently set (in the plugin GUI) to 2097152, and it shows on the right side of that page under 'Current inotify settings' as the same number. Sorry I can't be more helpful.
  16. Also FYI: I updated yesterday from 6.4.?? (whatever the previous stable was) with no issues, too.
  17. As part of my backup strategy, I use CrashPlan (specifically, this Docker: https://forums.unraid.net/topic/59647-support-djoss-crashplan-pro-aka-crashplan-for-small-business/) to back up the tar.gz output of this plugin to the cloud. For whatever reason, this takes days to upload to CrashPlan, and - because I run the backup once a month - it means that every month, CrashPlan stops backing up my other files for a couple of days while this big boy makes its way to the CrashPlan servers. I read a thing recently about the --rsyncable option* in gzip - is it possible/simple/valuable to add an option to this plugin to support that flag (roughly as sketched after this list), in the hope** that CrashPlan doesn't have to upload the full 36 GB every month? Many thanks! *https://beeznest.wordpress.com/2005/02/03/rsyncable-gzip/ **I have no idea if CrashPlan will incrementally upload the same way that rsync would with this flag enabled.
  18. Thanks for responding. I haven't checked the official forum, no. I shall take a look.
  19. And it's happened AGAIN! Wow, Unifi! Just amazing! Has anyone else here experienced this? Or is it just me? If it's just me... does anyone have any idea what the cause could be? I'm at a loss as to why.
  20. Cool cool cool. It happened again: my theme swapped to 'white' and I received an error message saying my WiFi config isn't supported by this version (I haven't changed versions). It also switched from 24-hour time to 12-hour time. I guess some config file somewhere got wiped. Also, when I went to settings, the 'new in v6' video started playing, as if it was the first time I'd visited the page. I'm looking at the WiFi options and can't see anything problematic there - and indeed the main settings (SSID + pw) remain unchanged. Just another fun Unifi moment...
  21. It's now populated with traffic, like HTTP over TLS SSL, Facebook, GMail, Google APIs (SSL), and "Unknown". I'd still prefer current bandwidth usage, but it's better than the previously empty graphs. And I don't know if it's because I use my own DHCP rather than the one in the USG, but the "Client device types" pie chart remains populated with only one type ("Others") so isn't particularly useful. So, a mild improvement, but an improvement all the same.
  22. To be honest, it wasn't a particularly rigorous test... I turned it on and set a Linux ISO to download over Usenet. It was about half the usual speed (~20MB/s vs ~40MB/s). By the time I'd turned it back off, that download had finished. So I grabbed a different ISO, tried that, and it was fast again. I took it that, as I'd previously disabled it, this must be why. HOWEVER, sometimes different downloads connect to different (slower) Usenet servers, so it's not unusual for my downloads to occasionally be significantly slower. I shall try it again today to see if it really does impact me as much as I thought. To answer your Qs:
      QoS is off
      Threat management/IDS/IPS is off
      Hardware offloading is enabled (as is 'Hardware offload scheduler', whatever that is)
      UPDATE: I've enabled Traffic Identification and this time it didn't negatively affect my downloads. Last time must have been a coincidence. Thanks for the heads up!
  23. One thing I should probably also mention is to make sure that you don't use the (default) 'latest' tag for the Docker image, but instead explicitly state a version (even if it's the latest one). That way you can check the Unifi forums (or here) to see if there are any bugs in a new version before upgrading to it, but still receive updates to the Docker image itself. As such, for the current latest version use this tag in the Docker 'Repository' value on the 'edit' page: linuxserver/unifi-controller:version-6.4.54 (a plain-Docker version of the same idea is sketched after this list). If you have already installed the Docker using the 'latest' tag, don't fret: just change the 'Repository' value to the above, and you'll be fine. Note that if you have the 'latest' tag, you shouldn't try and downgrade to an earlier version: you're best off nuking the install and starting from scratch if you want to downgrade (just coz configs often change between versions). Info on tags here: https://hub.docker.com/r/linuxserver/unifi-controller/tags
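
Sketch for post 3 (MaxMind key check): a rough, hedged way to confirm from the host that the license key actually made it into the SWAG container and that a GeoLite2 database exists. The container name 'swag' is an assumption; the file paths come straight from the log message and the linked GitHub issue.

    # Container name 'swag' is an assumption - substitute your own
    docker exec -it swag sh -c '
      echo "env var: $MAXMINDDB_LICENSE_KEY"   # the Docker variable, as seen inside the container
      cat /etc/libmaxminddb.cron.conf          # the file the weekly cron job reads
      ls -lah /config/geoip2db                 # downloaded GeoLite2 database, if any
    '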
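
Sketch for post 4 (capping a container's RAM): Docker's --memory flag is one way to do it; on Unraid the same flag is usually added to the container's 'Extra Parameters' field so it persists when the container is re-created. The container name below is illustrative.

    # Cap an already-running container at 4 GB RAM (and the same swap ceiling)
    docker update --memory=4g --memory-swap=4g CrashPlanPRO
    # Equivalent at start time, e.g. via Unraid's 'Extra Parameters' field:
    #   --memory=4g --memory-swap=4g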
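
Sketch for post 11 (checking the live inotify limit from the command line): these are standard kernel interfaces, so they show what the system is actually using regardless of what sysctl.conf or Tips and Tweaks says. The 2097152 figure is just the value mentioned in post 15.

    # Read the value the kernel is using right now
    sysctl fs.inotify.max_user_watches
    # Same thing, read directly from /proc
    cat /proc/sys/fs/inotify/max_user_watches
    # Apply a new value immediately (won't survive a reboot on its own)
    sysctl -w fs.inotify.max_user_watches=2097152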
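
Sketch for post 12 (spotting duplicate inotify entries before editing): a quick grep lists every inotify line in sysctl.conf with its line number, so it's obvious whether there's a stray second entry at the end of the file.

    # List every inotify line with its line number
    grep -n inotify /etc/sysctl.conf
    # Then open the file and delete any duplicate at the bottom
    nano /etc/sysctl.conf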
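
Sketch for post 13 (the 'connect, sync, disconnect' idea): a minimal weekly script, assuming the friend's server is reachable over SSH with key-based auth; the hostname, user and paths are all illustrative. On Unraid it could be scheduled with the User Scripts plugin.

    #!/bin/bash
    # Weekly offsite sync sketch - host, user and paths are illustrative
    REMOTE="backup@friends-server.example.com"
    SRC="/mnt/user/backups/"
    DEST="/mnt/offsite/percy/"

    # Bail out early if the remote end isn't reachable
    ssh -o ConnectTimeout=10 "$REMOTE" true || exit 1

    # Mirror the local share to the remote disk, removing files deleted locally
    rsync -aH --delete --partial "$SRC" "$REMOTE:$DEST"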
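
Sketch for post 17 (what the --rsyncable idea looks like by hand): tar the backup source and pipe it through gzip --rsyncable, so small changes in the input produce mostly identical compressed output. Not every gzip build supports the flag, the paths are illustrative, and whether CrashPlan's own deduplication benefits the way rsync would is exactly the open question in the post.

    # Archive with rsync-friendly gzip blocks (requires a gzip build with --rsyncable)
    tar -cf - /mnt/user/appdata | gzip --rsyncable > /mnt/user/backups/appdata-backup.tar.gz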
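
Sketch for post 23 (the same version pinning outside the Unraid template): plain Docker commands using the tag named in the post; the volume path and host networking are illustrative, not a statement of the image's required settings.

    # Pull and run a specific controller version instead of 'latest'
    docker pull linuxserver/unifi-controller:version-6.4.54
    docker run -d --name unifi-controller \
      --net=host \
      -v /mnt/user/appdata/unifi-controller:/config \
      linuxserver/unifi-controller:version-6.4.54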