jademonkee

Members
Posts: 269
Gender: Male
Location: Somerset, England


  1. Here's the output over the time the backup failed (this is after I deleted the contents of the TimeMachine share):

     2022-05-24 21:12:51al] Starting manual backup
     2022-05-24 21:12:51al] Attempting to mount 'smb://simonchester@percy/sctimemach'
     2022-05-24 21:12:52al] Mounted 'smb://simonchester@percy/sctimemach' at '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach'
     2022-05-24 21:12:52al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
     2022-05-24 21:12:52al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
     2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:12:52al] Creating a sparsebundle using Case-sensitive APFS filesystem
     2022-05-24 21:13:10al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8422b10 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
     2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc842c320 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
     2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8720100 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
     2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc84351c0 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
     2022-05-24 21:13:32al] Renamed '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle' to '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:13:32al] Successfully created '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:13:32al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:13:37al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:13:37al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:13:43al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:13:43al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
     2022-05-24 21:13:43al] Updating volume role for '/Volumes/Backups of Mercury'
     2022-05-24 21:13:44al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:13:44al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:13:44al] Stopping backup to allow volume '/Volumes/Backups of Mercury' to be unmounted.
     2022-05-24 21:13:44al] Backup cancel was requested.
     2022-05-24 21:13:54al] backupd exiting - cancelation timed out
     2022-05-24 21:14:05Management] Initial thermal pressure level 0

     And then when I next hit backup and it worked (though I've truncated this at the point it started logging files etc):

     2022-05-24 21:19:55al] Starting manual backup
     2022-05-24 21:19:55al] Network destination already mounted at: /Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach
     2022-05-24 21:19:55al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
     2022-05-24 21:19:55al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
     2022-05-24 21:19:56al] Found matching sparsebundle 'Mercury.sparsebundle' with host UUID '710CB9A8-9829-5E7F-B77F-E11F8AB058ED' and MAC address '(null)'
     2022-05-24 21:19:57al] Not performing periodic backup verification: no previous backups to this destination.
     2022-05-24 21:19:58al] 'Mercury.sparsebundle' does not need resizing - current logical size is 510.03 GB (510,027,366,400 bytes), size limit is 510.03 GB (510,027,366,400 bytes)
     2022-05-24 21:19:58al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:19:58al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:20:02al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
     2022-05-24 21:20:02al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
     2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:20:07al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
     2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:20:08al] Checking identity of target volume '/Volumes/Backups of Mercury'
     2022-05-24 21:20:08al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:20:09al] Backing up to Backups of Mercury (/dev/disk3s1,e): /Volumes/Backups of Mercury
     2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
     2022-05-24 21:20:10Thinning] Starting age based thinning of Time Machine local snapshots on disk '/System/Volumes/Data'
     2022-05-24 21:20:10SnapshotManagement] Created Time Machine local snapshot with name 'com.apple.TimeMachine.2022-05-24-212010.local' on disk '/System/Volumes/Data'
     2022-05-24 21:20:10al] Declared stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local
     2022-05-24 21:20:10SnapshotManagement] Mounted stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/Mercury/2022-05-24-212010/Macintosh HD — Data source: Macintosh HD — Data
     2022-05-24 21:20:10pThinning] No further thinning possible - no thinnable backups
     2022-05-24 21:20:13Collection] First backup of source: "Macintosh HD — Data" (device: /dev/disk1s1 mount: '/System/Volumes/Data' fsUUID: F88D7C18-E1EC-4F90-9B71-4A481B580F26 eventDBUUID: 58B7A351-1BA1-4761-A370-5A46FA30AC5D)
     2022-05-24 21:20:13Collection] Trusting source modification times for remote backups.
     2022-05-24 21:20:13Collection] Found 0 perfect clone families, 0 partial clone families. Zero KB physical space used by clone files. Zero KB shared space.
     2022-05-24 21:20:13Collection] Finished collecting events from volume "Macintosh HD — Data"
     2022-05-24 21:20:13Collection] Saved event cache at /Volumes/Backups of Mercury/2022-05-24-212010.inprogress/.F88D7C18-E1EC-4F90-9B71-4A481B580F26.eventdb
     2022-05-24 21:20:13gProgress] (fsk:0,dsk:0,fsz:1,dsz:0)(1/0)

     FWIW, it looks like my Mac has been backing up successfully overnight. I'll report back if it fails at some point today. And, for the record, I'm on macOS 11.6.5.
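     The failed run above hinges on the two MachineID files being absent from the freshly created sparsebundle. A minimal sketch for checking them by hand, assuming the share is mounted in Finder; the BUNDLE path below is hypothetical, so substitute your own mount point and bundle name:

     ```shell
     # BUNDLE is a placeholder path - point it at your own Time Machine
     # sparsebundle on the mounted SMB share.
     BUNDLE="/Volumes/sctimemach/Mercury.sparsebundle"
     for f in com.apple.TimeMachine.MachineID.plist com.apple.TimeMachine.MachineID.bckup; do
       if [ -e "$BUNDLE/$f" ]; then
         echo "present: $f"
       else
         echo "missing: $f"
       fi
     done
     ```

     On a healthy bundle both files should show as present; in the failed run above, both reads returned NSPOSIXErrorDomain Code=2 (no such file).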
  2. FYI: I deleted the contents of my Time Machine share and started fresh. It failed to back up the first time, but I hit backup again after about 5 minutes, and now it's backing up. If it fails again I'll create a support thread, but this may yet work.
  3. I just realised that I also can't back up to Time Machine since upgrading to v6.10. It says "Preparing backup", then... nothing. No error: it just doesn't back up. SMB Multichannel is disabled. Enhanced macOS compatibility is enabled. My SMB extras are:

     #vfs_recycle_start
     #Recycle bin configuration
     [global]
     syslog only = Yes
     syslog = 0
     logging = 0
     log level = 0 vfs:0
     #vfs_recycle_end

     A while ago I remember having to delete my backups and start fresh, but I was getting errors that time. I don't really have anything too valuable on my Mac, so I can experiment with deleting the contents of my TimeMachine share and starting fresh, but I'd rather hold off to see if anyone has any other ideas. I've seen a similar issue mentioned here, but looking at the logs, my error just says "no mountable file systems". Also, I'm running macOS 11.6.5. I have to go pick my child up, but I'll create my own support thread soon (and probably delete my old backup dir and start fresh to see if that fixes it).
  4. Wow, thanks for the warning! I updated on the weekend but hadn't noticed anything wrong. I just checked my logs, and indeed I had the errors mentioned above, so I rebooted and disabled VT-d. Do you know how I can check whether I have any data corruption? Is it known to be caused by any specific activities? Would a parity check verify the data (and, if it detects any problems, allow me to find what has been corrupted)? Thanks again.
  5. I mostly prefer the new interface. There are heaps of other changes since v5. You can see the changelog from v6 > v7 here: https://community.ui.com/releases/UniFi-Network-Application-7-0-25/3344c362-7da5-4ecd-a403-3b47520e3c01 And of course there's even more from v5 > v6.
  6. I just upgraded to v7.1.61 from v7.0.25 without issue. v7.0.25 has been solid, and the new interface is pretty much complete. I run 2x wired AC LR and a USG, so it's not a complicated setup, but I wouldn't hesitate to recommend moving to the v7 series.
  7. FWIW, it looks like the old method for GeoIP blocking has been superseded, so that's why I was getting errors. I followed the instructions at https://github.com/linuxserver/docker-mods/tree/swag-maxmind/ and https://virtualize.link/secure/ and replaced the old references to GeoIP2 in the config files mentioned in those instructions. It seems like it's all working now, although I'll find out in a week if the error (or a new one) pops up in my logs again. I also note that the file appdata/swag/geoip2db/GeoLite2-City.mmdb hasn't been modified since 2021-11-30, so maybe this change will allow it to update? Although, TBH, I'm not even sure what the database does, given I'm banning by country code (does it link IPs to countries, maybe?). Anyway, I thought I'd post it here for posterity, in case anyone else has a similar problem.
  8. It's great, as far as I'm concerned. It seems stable (although I've only been running it for a week), and the new UI finally works as it should. Best version yet, dare I say.
  9. Just made the leap from 6.5.55 to 7.0.23. So far everything seems to be working fine, and no settings needed to be changed. The new interface is heaps better, returning a lot of the functionality and info that had been removed in the move to v6. Everything feels snappy and responsive. Very happy with the upgrade. Now let's see if it's stable!
  10. Fun! But glad to hear that it worked at least. I've been thinking of making the leap.
  11. Hi all, I have installed Redis to try and improve the performance of the gallery in NextCloud. I set it up as per a combination of the official documentation and this guide written for the LSIO Unraid NextCloud Docker image: https://skylar.tech/reduce-nextcloud-mysql-usage-using-redis-for-cache/ I notice above that @onkelblubb has mapped a directory for "/data", but I didn't add that to the Docker setup, and there is no Redis folder in my appdata dir. My NextCloud config file looks like:

     $CONFIG = array (
       'memcache.local' => '\\OC\\Memcache\\APCu',
       'memcache.distributed' => '\\OC\\Memcache\\Redis',
       'memcache.locking' => '\\OC\\Memcache\\Redis',
       'redis' => array (
         'host' => '192.168.200.40',
         'port' => 6379,
       ),

     I note that the official NextCloud documentation formats this slightly differently (note the single backslashes rather than double, and the lack of 'array' in the Redis definition):

     'memcache.local' => '\OC\Memcache\APCu',
     'memcache.distributed' => '\OC\Memcache\Redis',
     'memcache.locking' => '\OC\Memcache\Redis',
     'redis' => [
       'host' => 'redis-host.example.com',
       'port' => 6379,
     ],

     Any idea if that's problematic? I also have no idea whether it's working or not. The Redis log is just absolutely filled with:

     [truncated] ...
     1:M 21 Feb 2022 13:43:31.044 * 1 changes in 3600 seconds. Saving...
     1:M 21 Feb 2022 13:43:31.044 * Background saving started by pid 179
     179:C 21 Feb 2022 13:43:31.050 * DB saved on disk
     179:C 21 Feb 2022 13:43:31.051 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 13:43:31.145 * Background saving terminated with success
     1:M 21 Feb 2022 13:55:38.286 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 13:55:38.287 * Background saving started by pid 180
     180:C 21 Feb 2022 13:55:38.294 * DB saved on disk
     180:C 21 Feb 2022 13:55:38.295 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 13:55:38.387 * Background saving terminated with success
     1:M 21 Feb 2022 14:00:39.004 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 14:00:39.004 * Background saving started by pid 181
     181:C 21 Feb 2022 14:00:39.011 * DB saved on disk
     181:C 21 Feb 2022 14:00:39.011 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 14:00:39.105 * Background saving terminated with success
     1:M 21 Feb 2022 14:20:19.561 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 14:20:19.561 * Background saving started by pid 182
     182:C 21 Feb 2022 14:20:19.569 * DB saved on disk
     182:C 21 Feb 2022 14:20:19.570 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 14:20:19.662 * Background saving terminated with success
     1:M 21 Feb 2022 14:44:32.005 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 14:44:32.005 * Background saving started by pid 183
     183:C 21 Feb 2022 14:44:32.015 * DB saved on disk
     183:C 21 Feb 2022 14:44:32.015 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 14:44:32.106 * Background saving terminated with success
     1:M 21 Feb 2022 14:49:33.079 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 14:49:33.079 * Background saving started by pid 184
     184:C 21 Feb 2022 14:49:33.089 * DB saved on disk
     184:C 21 Feb 2022 14:49:33.089 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 14:49:33.180 * Background saving terminated with success

     The reason I installed it was to try and speed up the NextCloud gallery on Android, so that I can browse a shared photo folder. It was previously unusably slow. So I installed Redis as well as the Preview Generator app and pre-cached the images. The gallery is now working faster (though not as fast as I'd like), but I have no idea whether it's just because of the pre-generated thumbnails or whether Redis is also playing a role. As far as usage goes, Redis is using about 10 MB of RAM and less than 1% CPU, even when I'm scrolling through the image directory in the Android app. To me, that doesn't seem like it's doing much. There are no errors in the NextCloud log. Your insight and help is appreciated!

     EDIT: I just thought I'd try mapping an appdata dir, and the Redis logs showed:

     1:C 21 Feb 2022 15:14:44.856 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
     1:C 21 Feb 2022 15:14:44.856 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
     1:C 21 Feb 2022 15:14:44.856 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
     1:M 21 Feb 2022 15:14:44.857 * monotonic clock: POSIX clock_gettime
     1:M 21 Feb 2022 15:14:44.858 * Running mode=standalone, port=6379.
     1:M 21 Feb 2022 15:14:44.858 # Server initialized
     1:M 21 Feb 2022 15:14:44.858 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
     1:M 21 Feb 2022 15:14:44.858 * Ready to accept connections

     So I removed the path mapping, and I have the same message in the logs. I have no idea whether it was there when I first started Redis, as the logs are filled with the messages above. I decided that I might as well add the mapping back, so have done so. The appdata dir that was created by the mapping is empty. Any tips on what could be in the config file for a Redis Docker/Unraid setup?

     EDIT: the dir now has a "dump.rdb" in it, but nothing else. As I browse the shares in NextCloud, the log fills up with messages like the above:

     1:C 21 Feb 2022 15:21:06.136 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
     1:C 21 Feb 2022 15:21:06.136 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
     1:C 21 Feb 2022 15:21:06.136 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
     1:M 21 Feb 2022 15:21:06.137 * monotonic clock: POSIX clock_gettime
     1:M 21 Feb 2022 15:21:06.137 * Running mode=standalone, port=6379.
     1:M 21 Feb 2022 15:21:06.137 # Server initialized
     1:M 21 Feb 2022 15:21:06.137 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
     1:M 21 Feb 2022 15:21:06.138 * Loading RDB produced by version 6.2.6
     1:M 21 Feb 2022 15:21:06.138 * RDB age 382 seconds
     1:M 21 Feb 2022 15:21:06.138 * RDB memory usage when created 0.77 Mb
     1:M 21 Feb 2022 15:21:06.138 # Done loading RDB, keys loaded: 1, keys expired: 2.
     1:M 21 Feb 2022 15:21:06.138 * DB loaded from disk: 0.000 seconds
     1:M 21 Feb 2022 15:21:06.138 * Ready to accept connections
     1:M 21 Feb 2022 15:28:20.667 * 100 changes in 300 seconds. Saving...
     1:M 21 Feb 2022 15:28:20.667 * Background saving started by pid 19
     19:C 21 Feb 2022 15:28:20.671 * DB saved on disk
     19:C 21 Feb 2022 15:28:20.672 * RDB: 0 MB of memory used by copy-on-write
     1:M 21 Feb 2022 15:28:20.768 * Background saving terminated with success

     So are the logs normal? Do I need to set up a config file for Redis? Is Redis working properly? (As the logs are reacting to me browsing photos, I'm guessing it is?)
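     One low-tech way to read those save cycles: every "Background saving started" line should be matched by a "terminated with success" line. A quick count over a saved copy of the container log (redis.log is a hypothetical filename for the pasted output):

     ```shell
     # redis.log is a placeholder: save the container's log output to a file
     # first, then count started vs successful background saves.
     started=$(grep -c 'Background saving started' redis.log)
     ok=$(grep -c 'Background saving terminated with success' redis.log)
     echo "saves started: $started, succeeded: $ok"
     ```

     If the two counts match, the periodic RDB snapshots are completing cleanly, which is what the log excerpts above show.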
  12. I'm still receiving this error. Am I looking in the wrong place for a solution to this? Are LSIO still present in this forum?
  13. I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?
  14. Hi all, I keep seeing this in the logs:

     No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
     run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

     If I restart the container, it doesn't appear in the logs, but it eventually re-appears. The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

     echo $MAXMINDDB_LICENSE_KEY

     it returns the correct value. The only mention of this issue that I can find is this: https://github.com/linuxserver/docker-swag/issues/139 Similar to that page, if I run:

     # /config/geoip2db# ls -lah

     it returns:

     sh: /config/geoip2db#: not found

     But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it. I'm not sure how to manually check whether those changes have been applied in the Docker or not. Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

     EDIT: Solution found:
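     (The poster's actual solution is not quoted above.) As a general sanity check for this kind of failure, you can verify from a shell inside the container that the key from the Docker variable actually made it into the conf file the weekly cron job reads; the path comes straight from the error message, and the variable name is the one the post already echoes:

     ```shell
     # CONF is the file named in the error message; run this inside the container.
     CONF=/etc/libmaxminddb.cron.conf
     if [ -n "$MAXMINDDB_LICENSE_KEY" ] && grep -q "$MAXMINDDB_LICENSE_KEY" "$CONF" 2>/dev/null; then
       echo "license key present in cron conf"
     else
       echo "license key missing from cron conf"
     fi
     ```

     If the key is set in the environment but missing from the conf file, that would explain the weekly job failing even though `echo $MAXMINDDB_LICENSE_KEY` looks correct.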
  15. I'm already on 8.3 using this container...?