Everything posted by jademonkee

  1. Auto in disk settings is correct. The script will change it as needed. Keep in mind that it won't change instantly, however: it can take up to one full polling interval to take effect.
  2. This post from Ubiquiti confirms that all versions earlier than Version 6.5.53 are vulnerable, so the only official fix is to upgrade to Version 6.5.54 (or better, Version 6.5.55, as there are additional fixes in that). https://community.ui.com/releases/Security-Advisory-Bulletin-023-023/808a1db0-5f8e-4b91-9097-9822f3f90207 You are correct in thinking that the latest AP firmwares also require newer versions of the controller to work, so to have the latest firmware you should probably be on the latest version of the controller, too. Looks like it's finally time for everyone to move to v6. FWIW, I have no problems with it (though occasionally it resets my theme to light and tells me that my WiFi configs aren't supported; I don't have to change anything except the theme back to dark, and everything seems to work fine). You may have a fun time bumping across such a large number of versions, however, so it would be wise to follow the path up to 5.14 that you mentioned before jumping to v6 (I think I jumped from v5.14 to v6.x without major incident - I may have had to reset and readopt my APs, but I don't think I had to set up my network from scratch or anything like that).
  3. This is a great idea. To make a second backup set, do you run a second instance of the Docker, or can you do it all from the same Docker? And if so, how? Thanks for your help and insight!
  4. Good luck! For me it worked with only small problems, although I think I might have had to reset the APs so that they would adopt. The tag I use is: linuxserver/unifi-controller:version-6.5.54
  5. Just checking that you saw my edit above? System files are copied from the flash drive to RAM at boot, so changes to them won't persist after a reboot. I'd therefore leave the value in sysctl.conf as-is (I'm assuming that after a reboot it reverts to default and looks like mine, above, with some other stuff below it). If so, reboot, then try setting it in T&T one more time, then reboot once again to see if it sticks (this time you'll be setting it in just the one place, so maybe that's all it takes - I may be hopelessly optimistic here, though). If not, I'm clean out of ideas. The only way I know how to check is on the right side of the T&T page.
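     (If you'd rather check the live value from a terminal instead of the T&T page, either of these should work on Unraid - just a quick sketch:)
     sysctl fs.inotify.max_user_watches
     cat /proc/sys/fs/inotify/max_user_watches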
  6. Oh gods not vi... never vi. That's only for oldskool neckbeards and super lightweight Linux installs. I'm sorry that you ever had to deal with vi. Use nano - it'll make your life so much easier. So ssh in (or use the web GUI terminal) and run:
     nano /etc/sysctl.conf
     (No need for sudo, as you're already root on Unraid. Note that with root comes great responsibility - so be cautious with your typing.) Nano will make soooo much more sense to you, and the shortcuts are all down the bottom to help you on your way. If you see two entries in there for inotify, delete the second one, then ctrl-x then 'y' to save the changes. EDIT: I just took a look at my sysctl.conf and up the top was:
     # from https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
     # increase the number of inotify watches
     fs.inotify.max_user_watches=524288
     Which is a totally different value to what mine is set to in Tips and Tweaks, so I think it'll be ok for you to leave those lines as-is, only removing any other inotify lines at the end. Tips and Tweaks must override it somewhere else. If leaving it as default and setting a value in Tips and Tweaks doesn't work (or doesn't persist across reboots), though, feel free to change it to a larger value in sysctl.conf using nano.
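     (If you do end up editing sysctl.conf, you shouldn't need a reboot just to apply it - a quick sketch, but sysctl -p is the standard way to re-read the file:)
     nano /etc/sysctl.conf        # make the change, then ctrl-x, 'y' to save
     sysctl -p /etc/sysctl.conf   # re-read the file and apply it now, no reboot needed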
  7. I struggle with the 'deep maintenance' that they run monthly on my backups (see my complaints scattered throughout this thread). On my 3.7TB backup it takes days, and backups can't run during that time (I sometimes even have to shut down the Docker to let their maintenance finish running on their server, because, for reasons unknown, the app causes it to loop and never finish). I would not recommend trying to back up 80 TB to their servers - I think 'deep maintenance' would never finish. I am in a fairly regular "f&*k this service" mood every month or so, then whatever the problem was gets fixed, and I run the maths, and I can't disagree with the value. But this is not a set-and-forget service, so I now have a habit of checking the Docker every second day or so to make sure it's still working, still backing up. You get what you pay for, I guess. I also think they may get shirty at your 80TB backup ('unlimited' is never unlimited), but I have no experience or even anecdote to back that up - it's just me thinking out loud. They also recommend 1GB of RAM per TB backed up. I doubt you'll actually need 80GB of RAM for the Docker, but that's just what they recommend. Note that the service isn't meant for archival - everything is geared towards looking for changes in files, and uploading and versioning them, so there is a lot of overhead. Thus 80TB of archive will cause the whole backup to run inefficiently (deep maintenance is just one aspect of this). I have been considering buying a second server and storing it at a friend's house for off-site backup (running a weekly script to connect, sync, disconnect). I don't know if that's feasible for an 80TB backup, but I imagine your cloud fees will be huuuge with any provider other than CrashPlan, so it may not take too long to pay off the cost of a second-hand server with some Black Friday/Cyber Monday shucked drives (I am also assuming you have such a friend, with a good internet connection, a spare port on their UPS, room in their cupboard/basement, and who trusts you to place hardware on their network... hrmmmm, lots of maybes here...). Just a thought. Also: do you really need all 80TB backed up in the cloud? At a guess, most of it will be media that is already 'backed up' somewhere on the internet, and will just (ok, that word is doing some heavy lifting) need to be downloaded again (and may not even really be missed if it was lost). Again, just speaking from my choice in what I back up, so YMMV. It may even be a good idea to split the backup into different types: irreplaceable and/or frequently updated data backed up using CrashPlan, replaceable/seldom updated data (i.e. media) in cold/archive storage somewhere (even a series of external hard drives kept offsite). Just my £0.02
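     (A rough sketch of such a weekly sync script, assuming the offsite box is reachable over SSH at a hypothetical hostname 'offsite-backup' with key-based auth already set up, and with placeholder paths for whatever you actually want mirrored:)
     #!/bin/bash
     # Mirror the local share to the offsite server; --delete keeps the remote
     # copy in step with local deletions, so it's a mirror rather than an archive.
     SRC="/mnt/user/important/"
     DEST="root@offsite-backup:/mnt/user/backups/important/"
     rsync -avh --delete "$SRC" "$DEST" >> /var/log/offsite-sync.log 2>&1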
  8. This is beyond my ken, but I would guess it's being overwritten by something else, or maybe conflicting settings lead the system to fall back to a default (this is totally a guess on my part, though). Mine just worked and stuck (at least from memory). If you have added the line to the end of your sysctl.conf using the echo >> command, open sysctl.conf and remove that line. (If there's more than one instance of the line, that may explain these problems - but leave the first instance and only remove the one at the end that was added by the echo >> command.) Then reboot, set it in the plugin GUI, then reboot again and see if it sticks.
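     (A quick way to see whether more than one inotify line has ended up in there - just a sketch:)
     grep -n inotify /etc/sysctl.conf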
  9. I don't know if you adding the line to sysctl.conf has changed something (or if it would even survive a reboot), but I, too, needed to change the setting and used the Tips and Tweaks plugin to do so. Mine is currently set (in the plugin GUI) to 2097152, and it shows on the right side of that page under 'Current inotify settings' as the same number. Sorry I can't be more helpful.
  10. Also FYI: I updated yesterday from 6.4.?? (whatever the previous stable was) with no issues, too.
  11. As part of my backup strategy, I use CrashPlan (specifically, this Docker: https://forums.unraid.net/topic/59647-support-djoss-crashplan-pro-aka-crashplan-for-small-business/) to back up the tar.gz output of this plugin to the cloud. For whatever reason, this takes days to upload to CrashPlan, and - because I run the backup once a month - it means that every month, CrashPlan stops backing up my other files for a couple of days while this big boy makes its way to the CrashPlan servers. I read a thing recently about the --rsyncable option* in gzip - is it possible/simple/valuable to add an option to this plugin to support that flag, in the hope** that CrashPlan doesn't have to upload the full 36 GB every month? Many thanks! *https://beeznest.wordpress.com/2005/02/03/rsyncable-gzip/ **I have no idea if CrashPlan will incrementally upload the same way that rsync would with this flag enabled.
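     (For anyone curious what the flag looks like in use - a sketch only, assuming your gzip build supports --rsyncable (recent GNU gzip and Debian-patched builds do), with placeholder paths rather than whatever this plugin actually archives:)
     # Produce an rsync-friendly gzip so a small change doesn't shift the whole compressed stream
     tar -cf - /mnt/user/appdata | gzip --rsyncable > /mnt/user/backups/appdata.tar.gz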
  12. Thanks for responding. I haven't checked the official forum, no. I shall take a look.
  13. And it's happened AGAIN! Wow, Unifi! Just amazing! Has anyone else here experienced this? Or is it just me? If it's just me... anyone have an idea what the cause could be? I'm at a loss as to why.
  14. Cool cool cool. It happened again: my theme swapped to 'white' and I received an error message saying my WiFi config isn't supported by this version (I haven't changed versions). It also swapped from 24-hour to 12-hour time. I guess some config file somewhere got wiped. Also, when I went to settings, the 'new in v6' video started playing, as if it was the first time I'd visited the page. I'm looking at the WiFi options and can't see anything problematic there - and indeed the main settings (SSID + pw) remain unchanged. Just another fun Unifi moment...
  15. It's now populated with traffic, like HTTP over TLS SSL, Facebook, GMail, Google APIs (SSL), and "Unknown". I'd still prefer current bandwidth usage, but it's better than the previously empty graphs. And I don't know if it's because I use my own DHCP rather than the one in the USG, but the "Client device types" pie chart remains populated with only one type ("Others") so isn't particularly useful. So, a mild improvement, but an improvement all the same.
  16. To be honest, it wasn't a particularly rigorous test... I turned it on and set a Linux ISO to download over Usenet. It was about half the usual speed (~20MB/s vs ~40MB/s). By the time I'd turned it back off, that download had finished. So I grabbed a different ISO, tried that, and it was fast again. I took it that, as I'd just disabled the feature, this must be why. HOWEVER, sometimes different downloads connect to different (slower) Usenet servers, so it's not unusual for my downloads to occasionally be significantly slower. I shall try it again today to see if it really does impact me as much as I thought. To answer your Qs:
     QoS is off
     Threat management/IDS/IPS is off
     Hardware offloading is enabled (as is 'Hardware offload scheduler', whatever that is)
     UPDATE: I've enabled Traffic Identification and this time it didn't negatively affect my downloads. Last time must have been a coincidence. Thanks for the heads-up!
  17. One thing I should probably also mention is to make sure that you don't use the (default) 'latest' tag in the Docker image, but instead explicitly state a version (even if it's the latest one). That way you can check the Unifi forums (or here) to see if there are any bugs in a new version before upgrading to it, but still receive updates to the Docker image itself. As such, for the current latest version use this tag in the Docker 'Repository' value on the 'edit' page: linuxserver/unifi-controller:version-6.4.54 If you have already installed the Docker using the 'latest' tag, don't fret: just change the 'Repository' value to the above, and you'll be fine. Note that if you have a 'latest' tag, you shouldn't try and downgrade to an earlier version: you're best off nuking the install and starting from scratch if you want to downgrade (just coz configs often change between versions). Info on tags here: https://hub.docker.com/r/linuxserver/unifi-controller/tags
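     (If you manage the container from the command line rather than the Unraid template, pinning the tag looks something like this - a sketch only; the volume and port below are placeholders, not the full set the template configures:)
     docker pull linuxserver/unifi-controller:version-6.4.54
     docker run -d --name=unifi-controller \
       -v /mnt/user/appdata/unifi-controller:/config \
       -p 8443:8443 \
       linuxserver/unifi-controller:version-6.4.54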
  18. FWIW I have this same setup and am running the latest v6.4.54. Note, however, that the front page on this version is pretty useless for the USG now: I have a 500 Mbps internet connection, which is too fast for deep packet inspection to work without causing a bottleneck, and the front page now shows DPI info rather than current bandwidth usage. It can't currently be changed, either, so it just sits there being useless. Curiously, the Android app still shows bandwidth, so I now have to use that to see if something is eating my bandwidth. So, up until recently, 6 was fine. But if you have a fast internet connection and want useful info on the front screen, then stick with the version previous to v6.4.54, as it has bandwidth rather than DPI on the dashboard. Don't go any further back than that, though, as they start to get real flaky. As it's a new setup, I'd prob go to 6, as you're going to have to jump to that version sooner or later anyway, but the version recommended by JonathanM is perfectly fine, too. EDIT: I later found out that traffic analysis doesn't negatively affect bandwidth on my connection, so do ignore any above advice not to enable it.
  19. Out of curiosity, do you have this in your logs?
     211004 10:30:59 mysqld_safe Logging to '/config/databases/90d1ec1b4a9c.err'.
     211004 10:30:59 mysqld_safe Starting mariadbd daemon with databases from /config/databases
     I have it in mine, but everything works fine (or at least seems to). Just wondering if doing the above 'downgrade, command, upgrade' fix will fix that problem, too. Cheers.
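     (If you want to see what actually got written to that .err file, something like this from the Unraid terminal should work - a sketch, assuming the container's /config is mapped to the usual appdata path; adjust to your own mapping and filename:)
     tail -n 50 /mnt/user/appdata/mariadb/databases/90d1ec1b4a9c.err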
  20. I responded, and shall again: I have had no issues.
  21. Ok, so since upgrading (or since the weird error that reset the theme in the dashboard?) I no longer see traffic on my dashboard. Yay. I just love this company... EDIT: Unless it's just changed what's displayed there? It used to show the current bandwidth being used, but maybe it's now meant to show the results of the "traffic identification" option, which I have turned off. I've just turned it on, but it's still not showing anything. I'll give it a few minutes to populate... Any idea how to just show bandwidth being used again, like it used to? EDITEDIT: confirming that yes, the dashboard has changed to display the data gathered by the "traffic identification" option. Can also confirm that enabling the traffic management option severely reduces my throughput, so I have once again disabled it. EDITEDITEDIT: later further testing showed that enabling traffic management doesn't impact my bandwidth: my initial poor results were simply coincidence.
  22. I've had the same error in my log since the Alpine re-base. Nothing is broken on my end, though (I'm only using it for a single instance of Nextcloud with 2 users). I'm pretty sure someone in this thread told me it's nothing to worry about, but I would like to know why it's happening.
     210927 12:23:15 mysqld_safe Logging to '/config/databases/e42e52e45c78.err'.
     210927 12:23:16 mysqld_safe Starting mariadbd daemon with databases from /config/databases
  23. I just took the plunge. Will report back if anything strange happens. Something strange happened before I upgraded: the theme went to white (from dark) and it said that some WiFi options I had weren't compatible. I had changed nothing since I last upgraded, so don't know why. Maybe something corrupted in the db (again...)? So yeah: any problems I face may not actually be due to the upgrade, anyhow.
  24. You can see the available tags here: https://hub.docker.com/r/linuxserver/mariadb/tags?page=1&ordering=last_updated I don't know the difference between version-10.5.12-r0 and 10.5.12-r0-ls36 but maybe someone here can advise. If you swap to one of those versions, you'll stay on that version until you change the tag. (I think you may still get minor updates to the container, but not mariadb - someone will have to confirm). In saying that, it seems that "latest" would be one of those two, so it's probably no problem updating this time as the latest image is still some variant of the 10.5.12 version you're on (or even the exact same version), and - again - the problems were because of the 'rebasing' of the image, not an update to mariadb itself. So yeah. It's probably pretty safe to update - but there's also no harm in waiting for a few more people in the thread to speak of their experience.
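     (Pinning works the same way as with the Unifi container: put the full tag in the Docker 'Repository' field on the container's edit page, e.g. one of the tags from the page linked above:)
     linuxserver/mariadb:10.5.12-r0-ls36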