Everything posted by jademonkee

  1. Here's the output over the time the backup failed (this is after I deleted the contents of the TimeMachine share):
2022-05-24 21:12:51al] Starting manual backup
2022-05-24 21:12:51al] Attempting to mount 'smb://simonchester@percy/sctimemach'
2022-05-24 21:12:52al] Mounted 'smb://simonchester@percy/sctimemach' at '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach'
2022-05-24 21:12:52al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
2022-05-24 21:12:52al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:12:52al] Creating a sparsebundle using Case-sensitive APFS filesystem
2022-05-24 21:13:10al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8422b10 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc842c320 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8720100 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc84351c0 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:32al] Renamed '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle' to '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:32al] Successfully created '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:32al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:37al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:37al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:43al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:13:43al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
2022-05-24 21:13:43al] Updating volume role for '/Volumes/Backups of Mercury'
2022-05-24 21:13:44al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:44al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:13:44al] Stopping backup to allow volume '/Volumes/Backups of Mercury' to be unmounted.
2022-05-24 21:13:44al] Backup cancel was requested.
2022-05-24 21:13:54al] backupd exiting - cancelation timed out
2022-05-24 21:14:05Management] Initial thermal pressure level 0
And then when I next hit backup and it worked (though I've truncated this at the point it started logging files etc):
2022-05-24 21:19:55al] Starting manual backup
2022-05-24 21:19:55al] Network destination already mounted at: /Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach
2022-05-24 21:19:55al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:19:55al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:19:56al] Found matching sparsebundle 'Mercury.sparsebundle' with host UUID '710CB9A8-9829-5E7F-B77F-E11F8AB058ED' and MAC address '(null)'
2022-05-24 21:19:57al] Not performing periodic backup verification: no previous backups to this destination.
2022-05-24 21:19:58al] 'Mercury.sparsebundle' does not need resizing - current logical size is 510.03 GB (510,027,366,400 bytes), size limit is 510.03 GB (510,027,366,400 bytes)
2022-05-24 21:19:58al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:19:58al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:20:02al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:20:02al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:07al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:08al] Checking identity of target volume '/Volumes/Backups of Mercury'
2022-05-24 21:20:08al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:09al] Backing up to Backups of Mercury (/dev/disk3s1,e): /Volumes/Backups of Mercury
2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:10Thinning] Starting age based thinning of Time Machine local snapshots on disk '/System/Volumes/Data'
2022-05-24 21:20:10SnapshotManagement] Created Time Machine local snapshot with name 'com.apple.TimeMachine.2022-05-24-212010.local' on disk '/System/Volumes/Data'
2022-05-24 21:20:10al] Declared stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local
2022-05-24 21:20:10SnapshotManagement] Mounted stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/Mercury/2022-05-24-212010/Macintosh HD — Data source: Macintosh HD — Data
2022-05-24 21:20:10pThinning] No further thinning possible - no thinnable backups
2022-05-24 21:20:13Collection] First backup of source: "Macintosh HD — Data" (device: /dev/disk1s1 mount: '/System/Volumes/Data' fsUUID: F88D7C18-E1EC-4F90-9B71-4A481B580F26 eventDBUUID: 58B7A351-1BA1-4761-A370-5A46FA30AC5D)
2022-05-24 21:20:13Collection] Trusting source modification times for remote backups.
2022-05-24 21:20:13Collection] Found 0 perfect clone families, 0 partial clone families. Zero KB physical space used by clone files. Zero KB shared space.
2022-05-24 21:20:13Collection] Finished collecting events from volume "Macintosh HD — Data"
2022-05-24 21:20:13Collection] Saved event cache at /Volumes/Backups of Mercury/2022-05-24-212010.inprogress/.F88D7C18-E1EC-4F90-9B71-4A481B580F26.eventdb
2022-05-24 21:20:13gProgress] (fsk:0,dsk:0,fsz:1,dsz:0)(1/0)
FWIW, it looks like my Mac has been backing up successfully overnight. I'll report back if it fails at some point today. And, for the record, I'm on macOS 11.6.5.
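In case it helps anyone chasing the same thing, this is roughly how those backupd lines can be pulled from Terminal - a minimal sketch assuming the stock macOS 'log' tool and the com.apple.TimeMachine subsystem name (adjust the time window to suit):
  # show recent Time Machine activity, including info-level messages
  log show --info --last 2h --predicate 'subsystem == "com.apple.TimeMachine"'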
  2. FYI I deleted the content of my Time Machine share and started fresh. It failed to back up the first time, but then I hit backup again after about 5 minutes, and now it's backing up. If it fails again, then I'll create a support thread, but this may yet work.
  3. I just realised that I also can't back up to Time Machine since upgrading to v6.10. It says "Preparing backup" then... nothing. No error: it just doesn't back up. SMB Multichannel is disabled. Enhanced macOS compat is enabled. My SMB Extras config is:
#vfs_recycle_start
#Recycle bin configuration
[global]
syslog only = Yes
syslog = 0
logging = 0
log level = 0 vfs:0
#vfs_recycle_end
A while ago, I remember having to delete my backups and start fresh, but I was getting errors that time. I don't really have anything too valuable on my Mac, so I can experiment with deleting the contents of my TimeMachine share and starting fresh, but I'd rather hold off to see if anyone has any other ideas. I've seen a similar issue mentioned here, but looking at the logs, my error just says "no mountable file systems". Also, I'm running macOS 11.6.5. I have to go pick my child up, but will create my own support thread soon (and probably delete my old backup dir and start fresh to see if that fixes it).
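If it helps anyone compare notes, the same block can be checked from the Unraid terminal - a quick sketch assuming the usual Unraid location for SMB Extras and that Samba's standard testparm tool is available:
  # show the SMB Extras block as stored on the flash drive
  cat /boot/config/smb-extra.conf
  # ask Samba to parse the effective config and report any errors
  testparm -s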
  4. Wow, thanks for the warning! I updated on the weekend, but hadn't noticed anything wrong. I just checked my logs, and indeed I had the errors mentioned above. So I rebooted and disabled VT-d. Do you know how I can check to see if I have any data corruption? Is it known to be caused by any specific activities? Would a parity check verify it (and, if it detects any problems, allow me to find out what has been corrupted)? Thanks again.
  5. I mostly prefer the new interface. There's heaps of other changes since v5. You can see the changelogs from v6 > v7 here: https://community.ui.com/releases/UniFi-Network-Application-7-0-25/3344c362-7da5-4ecd-a403-3b47520e3c01 And of course there's even more from v5 > v6
  6. I just upgraded to v7.1.61 from v7.0.25 without issue. v7.0.25 has been solid, and the new interface is pretty much complete. I run 2x wired AC LR and a USG, so it's not a complicated setup, but I wouldn't hesitate to recommend moving to the v7 series.
  7. FWIW, it looks like the old method for GeoIP blocking has been superseded, so that's why I was getting errors. I followed the instructions at: https://github.com/linuxserver/docker-mods/tree/swag-maxmind/ and https://virtualize.link/secure/ and replaced the old references to GeoIP2 in the config files mentioned in the above instructions. Seems like it's all working now, although I'll find out in a week if the error (or a new one) pops up in my logs again. I also note that the file appdata/swag/geoip2db/GeoLite2-City.mmdb hasn't been modified since 2021-11-30, so maybe this change will allow it to update? Although, TBH, I'm not even sure what the database does, given I'm banning by country code (does it link IPs to countries, maybe?). Anyway, thought I'd post it here for posterity, in case anyone else has a similar problem.
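If anyone else wants a quick way to confirm the database is actually refreshing after the weekly job, something like this from the Unraid terminal should do it - the container name 'swag' and the appdata path are assumptions based on my setup, so adjust to match yours:
  # check the database timestamp on the host
  ls -lah /mnt/user/appdata/swag/geoip2db/GeoLite2-City.mmdb
  # or check it from inside the container
  docker exec swag ls -lah /config/geoip2db/GeoLite2-City.mmdb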
  8. It's great, as far as I'm concerned. It seems stable (although I've only been running it for a week), and the new UI finally works as it should. Best version yet, dare I say.
  9. Just made the leap from 6.5.55 to 7.0.23. So far everything seems to be working fine; no settings needed to be changed. The new interface is heaps better, returning a lot of the functionality and info that had been removed in the move to v6. Everything feels snappy and responsive. V happy with the upgrade. Now let's see if it's stable!
  10. Fun! But glad to hear that it worked at least. I've been thinking of making the leap.
  11. Hi all, I have installed Redis to try and improve the performance of the gallery in NextCloud. I have set it up as per a combination of the official documentation and this guide written for the LSIO Unraid NextCloud Docker image: https://skylar.tech/reduce-nextcloud-mysql-usage-using-redis-for-cache/ I notice above that @onkelblubb has mapped a directory for "/data" but I didn't add that to the Docker setup. There is no Redis folder in my appdata dir. My NextCloud config file looks like:
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => array (
    'host' => '192.168.200.40',
    'port' => 6379,
  ),
I note that the official NextCloud documentation has the formatting slightly different (note the single slashes rather than double, and the lack of 'array' in the Redis definition):
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis-host.example.com',
  'port' => 6379,
],
Any idea if that's problematic? I also have no idea whether it's working or not. The Redis log is just absolutely filled with:
[truncated] ...
1:M 21 Feb 2022 13:43:31.044 * 1 changes in 3600 seconds. Saving...
1:M 21 Feb 2022 13:43:31.044 * Background saving started by pid 179
179:C 21 Feb 2022 13:43:31.050 * DB saved on disk
179:C 21 Feb 2022 13:43:31.051 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 13:43:31.145 * Background saving terminated with success
1:M 21 Feb 2022 13:55:38.286 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 13:55:38.287 * Background saving started by pid 180
180:C 21 Feb 2022 13:55:38.294 * DB saved on disk
180:C 21 Feb 2022 13:55:38.295 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 13:55:38.387 * Background saving terminated with success
1:M 21 Feb 2022 14:00:39.004 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:00:39.004 * Background saving started by pid 181
181:C 21 Feb 2022 14:00:39.011 * DB saved on disk
181:C 21 Feb 2022 14:00:39.011 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:00:39.105 * Background saving terminated with success
1:M 21 Feb 2022 14:20:19.561 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:20:19.561 * Background saving started by pid 182
182:C 21 Feb 2022 14:20:19.569 * DB saved on disk
182:C 21 Feb 2022 14:20:19.570 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:20:19.662 * Background saving terminated with success
1:M 21 Feb 2022 14:44:32.005 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:44:32.005 * Background saving started by pid 183
183:C 21 Feb 2022 14:44:32.015 * DB saved on disk
183:C 21 Feb 2022 14:44:32.015 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:44:32.106 * Background saving terminated with success
1:M 21 Feb 2022 14:49:33.079 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:49:33.079 * Background saving started by pid 184
184:C 21 Feb 2022 14:49:33.089 * DB saved on disk
184:C 21 Feb 2022 14:49:33.089 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:49:33.180 * Background saving terminated with success
The reason I installed it was to try and speed up the NextCloud gallery on Android, so that I can browse a shared photo folder. It was previously unusably slow. So I installed Redis as well as the Preview Generator app and pre-cached the images. The gallery is now working faster (though not as fast as I'd like), but I have no idea if it's just because of the pre-generated thumbnails, or if Redis is also playing a role. As far as usage goes, Redis is using about 10MB of RAM and less than 1% CPU, even when I'm scrolling through the image directory on the Android app. To me, that doesn't seem like it's doing much. There are no errors in the NextCloud log. Your insight and help is appreciated!
EDIT: I just thought I'd try mapping an appdata dir, and the Redis logs showed:
1:C 21 Feb 2022 15:14:44.856 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 21 Feb 2022 15:14:44.856 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 21 Feb 2022 15:14:44.856 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 21 Feb 2022 15:14:44.857 * monotonic clock: POSIX clock_gettime
1:M 21 Feb 2022 15:14:44.858 * Running mode=standalone, port=6379.
1:M 21 Feb 2022 15:14:44.858 # Server initialized
1:M 21 Feb 2022 15:14:44.858 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 21 Feb 2022 15:14:44.858 * Ready to accept connections
So I removed the path mapping, and I have the same message in the logs. I have no idea if it was there when I first started Redis, as the logs are filled with the messages above. I decided that I might as well add the mapping back, so have done so. The appdata dir that was created by the mapping is empty. Any tips on what could be in the config file for a Redis Docker/Unraid setup?
EDIT: the dir now has a "dump.rdb" in it, but nothing else. As I browse the shares in NextCloud, the log fills up with messages like the above:
1:C 21 Feb 2022 15:21:06.136 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 21 Feb 2022 15:21:06.136 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 21 Feb 2022 15:21:06.136 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 21 Feb 2022 15:21:06.137 * monotonic clock: POSIX clock_gettime
1:M 21 Feb 2022 15:21:06.137 * Running mode=standalone, port=6379.
1:M 21 Feb 2022 15:21:06.137 # Server initialized
1:M 21 Feb 2022 15:21:06.137 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 21 Feb 2022 15:21:06.138 * Loading RDB produced by version 6.2.6
1:M 21 Feb 2022 15:21:06.138 * RDB age 382 seconds
1:M 21 Feb 2022 15:21:06.138 * RDB memory usage when created 0.77 Mb
1:M 21 Feb 2022 15:21:06.138 # Done loading RDB, keys loaded: 1, keys expired: 2.
1:M 21 Feb 2022 15:21:06.138 * DB loaded from disk: 0.000 seconds
1:M 21 Feb 2022 15:21:06.138 * Ready to accept connections
1:M 21 Feb 2022 15:28:20.667 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 15:28:20.667 * Background saving started by pid 19
19:C 21 Feb 2022 15:28:20.671 * DB saved on disk
19:C 21 Feb 2022 15:28:20.672 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 15:28:20.768 * Background saving terminated with success
So are the logs normal? Do I need to set up a config file for Redis? Is Redis working properly (the logs react as I browse photos, so I'm guessing it is)?
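In case it helps anyone poking at the same question, here's a rough way to check whether NextCloud is actually talking to Redis - the container name 'redis' is an assumption, so use whatever your container is called:
  # watch commands arrive in real time while browsing the gallery (Ctrl-C to stop)
  docker exec -it redis redis-cli monitor
  # or look at the cache hit/miss counters
  docker exec redis redis-cli info stats | grep keyspace
If the monitor output scrolls while the Android app is loading thumbnails, NextCloud is using Redis.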
  12. I'm still receiving this error. Am I looking in the wrong place for a solution to this? Are LSIO still present in this forum?
  13. I'm still receiving this error after the periodic (weekly) license check. Is anyone else using GeoIP blocking seeing the same error in their logs? Should I be seeking support in a different forum?
  14. Hi all, I keep seeing in the logs:
No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1
If I restart the container, it doesn't appear in the logs, but it eventually re-appears. The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command
echo $MAXMINDDB_LICENSE_KEY
it returns the correct value. The only mention of this issue that I can find is this: https://github.com/linuxserver/docker-swag/issues/139 Similar to that page, if I run:
# /config/geoip2db# ls -lah
it returns:
sh: /config/geoip2db#: not found
But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it? I'm not sure how to manually check whether those changes have been applied in the Docker or not. Your help is appreciated - I'm concerned that GeoIP blocking is not working while this is happening.
EDIT: Solution found:
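For anyone else debugging this, a couple of checks that might narrow it down - the container name 'swag' is an assumption, and the cron path is the one from the log above:
  # confirm the key is actually visible inside the container
  docker exec swag printenv MAXMINDDB_LICENSE_KEY
  # re-run the weekly MaxMind update by hand and watch its output
  docker exec swag /etc/periodic/weekly/libmaxminddb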
  15. I'm already on 8.3 using this container...?
  16. Agreed re CrashPlan. I haven't found any other provider/solution that's as cost effective, but I'm keen to use a different service. One day... I wonder if it's related to Unifi Controller? It's the only Docker I have that uses port 8080, which I assume the above task is related to? Either way, I've limited CrashPlan to 4G RAM. Will clear the warning and see if it comes up again.
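For anyone wondering how to apply that kind of limit: on Unraid I believe it's just Docker's standard memory flag added to the container's Extra Parameters field (the 4g here only mirrors what I used - pick your own ceiling):
  --memory=4g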
  17. I ran 'Fix Common Problems' today and received the following error and recommendation: I've never seen my RAM usage go above 50%, so this is surprising to me. I don't quite know how to find out if the errors are wrong, or if there was some process at some point that managed to fill my 16GB RAM. Would love to know more, so, as per the recommendation from the plugin, here I am posting my diagnostics. Your insight is appreciated, thank you. percy-diagnostics-20220127-1333.zip
  18. I heard that the NAND chips run better hot, so you shouldn't add heatsinks (which is why they don't come with them in the first place).
  19. Auto in disk settings is correct. The script will change it as needed. Keep in mind that it won't change instantly, however: It'll take (up to) as long as the polling time to activate.
  20. This post from Ubiquiti confirms that Version 6.5.53 and all earlier versions are vulnerable, so the only official fix is to upgrade to Version 6.5.54 (or better, Version 6.5.55, as there are additional fixes in that): https://community.ui.com/releases/Security-Advisory-Bulletin-023-023/808a1db0-5f8e-4b91-9097-9822f3f90207 You are correct in thinking that the latest AP firmwares require new versions of the controller too, so to have the latest firmware you should probably be on the latest version of the controller as well. Looks like it's finally time for everyone to move to v6. FWIW, I have no problems with it (though occasionally it resets my theme to light and tells me that my WiFi configs aren't supported; I don't have to change anything, though, except the theme back to dark, and everything seems to work fine). You may have a fun time bumping across such a large number of versions, however, so it would be wise to follow the path up to 5.14 that you mentioned before jumping to v6 (I think I jumped from v5.14 to v6.x without major incident - I may have had to reset and readopt my APs, but I don't think I had to set up my network from scratch or anything like that).
  21. This is a great idea. To make a second backup set, do you run a second instance of the Docker, or can you do it all from the same Docker? And if so, how? Thanks for your help and insight!
  22. Good luck! For me it worked with only small problems. Although I think I might have had to reset the APs so that they would adopt. The tag I use is: linuxserver/unifi-controller:version-6.5.54
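If it's not obvious where that tag goes: it's just the image tag, so it can go straight in the Repository field of the Unraid template, or be pulled manually - shown here purely as an illustration:
  docker pull linuxserver/unifi-controller:version-6.5.54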
  23. Just checking that you saw my edit above? System files are copied from the flash drive to RAM at boot, so changes won't persist after a reboot - so I think leave the value in sysctl.conf as-is (I'm assuming that after a reboot it reverts to default and looks like mine, above, with some other stuff below it). If so, reboot, then try setting it in T&T one more time, then reboot once again to see if it sticks (this time you'll be setting it in just the one place, so maybe that's all it takes - I may be hopelessly optimistic here, though). If not, I'm clean out of ideas. The only way I know how to check is on the right side of the T&T page.
  24. Oh gods not vi... never vi. That's only for oldskool neckbeards and super lightweight Linux installs. I'm sorry that you ever had to deal with vi. Use nano - it'll make your life so much easier. So SSH in (or use the web GUI terminal) and run:
nano /etc/sysctl.conf
(No need for sudo, as you're already root on Unraid. Note that with root comes great responsibility - so be cautious with your typing.) Nano will make soooo much more sense to you, and the shortcuts are all down the bottom to help you on your way. If you see two entries in there for inotify, delete the second one, then Ctrl-X then 'y' to save the changes.
EDIT: I just took a look at my sysctl.conf and up the top was:
# from https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
# increase the number of inotify watches
fs.inotify.max_user_watches=524288
Which is a totally different value to what mine is set to in Tips and Tweaks, so I think it'll be ok for you to leave those lines as-is, only removing any other inotify lines at the end. Tips and Tweaks must override it somewhere else. If leaving it as default and setting a value in Tips and Tweaks doesn't work (or doesn't persist across reboots), though, feel free to change it to a larger value in sysctl.conf using nano.
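And a quick way to sanity-check what's actually in effect right now, plus set it on the fly without editing any file - both are standard sysctl usage, nothing Unraid-specific:
  # read the value currently in effect
  sysctl fs.inotify.max_user_watches
  # set it immediately for this boot only (it won't survive a reboot)
  sysctl -w fs.inotify.max_user_watches=524288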