jademonkee


Posts posted by jademonkee

  1. Hi there,

    As per the recommendation from the Fix Common Problems plugin, I'm posting here looking for insight into the out-of-memory error I received overnight.

    My server usually sits at about 40% RAM usage, so I'm surprised to see such an error. A few months ago I did tweak the 'php-fpm' settings in NextCloud to improve performance, so I'm wondering if maybe I went overboard with that.

    php-fpm tweaks are:

    ; Tune PHP-FPM https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#tune-php-fpm and values from https://spot13.com/pmcalculator/
    pm = dynamic
    pm.max_children = 100
    pm.start_servers = 25
    pm.min_spare_servers = 25
    pm.max_spare_servers = 75
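
    For reference, here's the arithmetic those settings imply - a rough sketch assuming ~80 MB per PHP worker, which is a common Nextcloud figure but not one I've measured on this server:

    ; 100 children x ~80 MB ≈ 8 GB for PHP alone at full load - enough to
    ; exhaust RAM even if idle usage sits at 40%. A more conservative sizing
    ; for, say, a 2 GB PHP budget: pm.max_children = 2048 / 80 ≈ 25
    pm = dynamic
    pm.max_children = 25
    pm.start_servers = 6
    pm.min_spare_servers = 6
    pm.max_spare_servers = 18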

    Diagnostics attached.

    Many thanks.

    percy-diagnostics-20220901-1052_out-of-memory.zip

  2. 14 hours ago, PeteAsking said:

    I guess what I am asking is, did we actually win or gain anything here or what was the reason Unifi wanted to redesign everything? For us or them?

    I fear that in the future they'll neuter the ability to run the app from Docker to force us to buy their hardware (Cloudkey or UDM).

     

    EDIT: especially with this in the release notes for 7.2.91:


    Remove Override Inform Host field for cloud console.

     

  3. On 7/13/2022 at 9:09 PM, Krushy said:

    Thanks trurl.

    Yes, all cleared on Safari (on a MacBook Pro)

    As suggested - Using Chrome or Opera getting "Your connection is not private NET::ERR_CERT_INVALID", so won't even connect up.

    You should be able to ignore that error and continue. On the error page you sometimes need to click "Advanced" (or similar), then "Proceed Anyway" - I'm going from memory here, so it may not even be the same error. Just thought it might be worth a shot before you restore from backup.

    Digging this up to say that I recently bought a 2TB Samsung 970 EVO for a Nextcloud cache, and had totally forgotten that my Dell PERC H310 didn't support TRIM until I received the "fstrim: /mnt/ssd2tb: FITRIM ioctl failed: Remote I/O error" email from Dynamix SSD TRIM yesterday morning. While my onboard SAS port does support TRIM, it only has 2x SATA III ports (the others being SATA II), and those two are already taken up by the 2x 850 EVO SSDs in my (regular) cache pool.

    So, a very big thanks to @ezhik for their work finding out that P16 works well, and for providing a copy of the firmware and flashing instructions! After some nail-biting to-and-fro trying to flash the card (I got errors when I tried flashing it in my server - it erased fine, of course, lol - but it worked when I used my desktop PC), I'm now running P16 and TRIM runs without error.

     

  5. 22 hours ago, johnieutah said:

    Perfect, worked a treat.

    Only thing worth noting was a pop-up on logging into the Controller about my "wifi networks no longer being something something something" (yeah it disappeared before I could fully register it).

     

    I get that same message every so often. I've never figured out which "WiFi setting" is "incompatible".

    Strangely, when the message appears, the interface switches back to "light mode" and the clock to "12 hour", so I have to set them back. I have no idea what causes it, but because it doesn't seem to cause any issues, I've never spent the time to find the reason.

  6. 1 hour ago, johnieutah said:

    Doh, I missed that completely. Thanks.

     

    So I can simply set the tag to lscr.io/linuxserver/unifi-controller:version-7.1.66? (after making sure I've backed up within Unifi of course)

     

    Yeah, back up in Unifi, and if you have a recent "CA Backups" backup of your apps, that would be good, too.

    Also make sure you're on the latest firmware for your Unifi devices.

    Then just change the tag and away you go.

  7. 7 hours ago, wgstarks said:

    Just curious what error TM showed on the failed backup?

    log show --predicate 'subsystem == "com.apple.TimeMachine"' --info | grep 'upd: (' | cut -c 1-19,140-999

     

     

    Here's the output over the time the backup failed (this is after I deleted the contents of the Time Machine share; the truncated prefixes like "al]" below are an artifact of the column ranges in the cut command above):

    2022-05-24 21:12:51al] Starting manual backup
    2022-05-24 21:12:51al] Attempting to mount 'smb://simonchester@percy/sctimemach'
    2022-05-24 21:12:52al] Mounted 'smb://simonchester@percy/sctimemach' at '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach'
    2022-05-24 21:12:52al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
    2022-05-24 21:12:52al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
    2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:12:52al] Creating a sparsebundle using Case-sensitive APFS filesystem
    2022-05-24 21:13:10al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8422b10 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
    2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc842c320 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
    2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8720100 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
    2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc84351c0 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
    2022-05-24 21:13:32al] Renamed '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle' to '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:13:32al] Successfully created '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:13:32al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:13:37al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:13:37al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:13:43al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:13:43al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
    2022-05-24 21:13:43al] Updating volume role for '/Volumes/Backups of Mercury'
    2022-05-24 21:13:44al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:13:44al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:13:44al] Stopping backup to allow volume '/Volumes/Backups of Mercury' to be unmounted.
    2022-05-24 21:13:44al] Backup cancel was requested.
    2022-05-24 21:13:54al] backupd exiting - cancelation timed out
    2022-05-24 21:14:05Management] Initial thermal pressure level 0

     

    And then, when I next hit backup, it worked (I've truncated this at the point it started logging files etc.):

    2022-05-24 21:19:55al] Starting manual backup
    2022-05-24 21:19:55al] Network destination already mounted at: /Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach
    2022-05-24 21:19:55al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
    2022-05-24 21:19:55al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
    2022-05-24 21:19:56al] Found matching sparsebundle 'Mercury.sparsebundle' with host UUID '710CB9A8-9829-5E7F-B77F-E11F8AB058ED' and MAC address '(null)'
    2022-05-24 21:19:57al] Not performing periodic backup verification: no previous backups to this destination.
    2022-05-24 21:19:58al] 'Mercury.sparsebundle' does not need resizing - current logical size is 510.03 GB (510,027,366,400 bytes), size limit is 510.03 GB (510,027,366,400 bytes)
    2022-05-24 21:19:58al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:19:58al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:20:02al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
    2022-05-24 21:20:02al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
    2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:20:07al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
    2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:20:08al] Checking identity of target volume '/Volumes/Backups of Mercury'
    2022-05-24 21:20:08al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:20:09al] Backing up to Backups of Mercury (/dev/disk3s1,e): /Volumes/Backups of Mercury
    2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
    2022-05-24 21:20:10Thinning] Starting age based thinning of Time Machine local snapshots on disk '/System/Volumes/Data'
    2022-05-24 21:20:10SnapshotManagement] Created Time Machine local snapshot with name 'com.apple.TimeMachine.2022-05-24-212010.local' on disk '/System/Volumes/Data'
    2022-05-24 21:20:10al] Declared stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local
    2022-05-24 21:20:10SnapshotManagement] Mounted stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/Mercury/2022-05-24-212010/Macintosh HD — Data source: Macintosh HD — Data
    2022-05-24 21:20:10pThinning] No further thinning possible - no thinnable backups
    2022-05-24 21:20:13Collection] First backup of source: "Macintosh HD — Data" (device: /dev/disk1s1 mount: '/System/Volumes/Data' fsUUID: F88D7C18-E1EC-4F90-9B71-4A481B580F26 eventDBUUID: 58B7A351-1BA1-4761-A370-5A46FA30AC5D)
    2022-05-24 21:20:13Collection] Trusting source modification times for remote backups.
    2022-05-24 21:20:13Collection] Found 0 perfect clone families, 0 partial clone families. Zero KB physical space used by clone files. Zero KB shared space.
    2022-05-24 21:20:13Collection] Finished collecting events from volume "Macintosh HD — Data"
    2022-05-24 21:20:13Collection] Saved event cache at /Volumes/Backups of Mercury/2022-05-24-212010.inprogress/.F88D7C18-E1EC-4F90-9B71-4A481B580F26.eventdb
    2022-05-24 21:20:13gProgress] (fsk:0,dsk:0,fsz:1,dsz:0)(1/0)

     

     

    FWIW, it looks like my Mac has been backing up successfully overnight. I'll report back if it fails at some point today.

    And, for the record, I'm on macOS 11.6.5.

  8. 4 hours ago, jademonkee said:

    I just realised that I also can't back up to Time Machine since upgrading to v6.10.

    It says "Preparing backup" then... nothing. No error: it just doesn't back up.

    SMB Multichannel is disabled. Enhanced macOS compat is enabled.

    SMB extras is:

    #vfs_recycle_start
    #Recycle bin configuration
    [global]
       syslog only = Yes
       syslog = 0
       logging = 0
       log level = 0 vfs:0
    #vfs_recycle_end
    

     

    A while ago, I remember having to delete my backups and start fresh, but I was getting errors that time. I don't have anything too valuable on my Mac, so I can experiment with deleting the contents of my Time Machine share and starting fresh, but I'd rather hold off to see if anyone has any other ideas.

    I've seen a similar issue mentioned here, but looking at the logs, my error just says "no mountable file systems". Also, I'm running macOS 11.6.5.

    I have to go pick my child up, but I'll create my own support thread soon (and probably delete my old backup dir and start fresh to see if that fixes it).

     

     

    FYI: I deleted the contents of my Time Machine share and started fresh. It failed to back up the first time, but I hit backup again after about 5 minutes, and now it's backing up. If it fails again, I'll create a support thread, but this may yet work.

  9. On 5/21/2022 at 7:47 PM, pkoci1 said:

    Is anyone else unable to perform a Time Machine backup? It always ends with an error that the backup disk cannot be found.

    I just realised that I also can't back up to Time Machine since upgrading to v6.10.

    It says "Preparing backup" then... nothing. No error: it just doesn't back up.

    SMB Multichannel is disabled. Enhanced macOS compat is enabled.

    SMB extras is:

    #vfs_recycle_start
    #Recycle bin configuration
    [global]
       syslog only = Yes
       syslog = 0
       logging = 0
       log level = 0 vfs:0
    #vfs_recycle_end
    

     

    A while ago, I remember having to delete my backups and start fresh, but I was getting errors that time. I don't have anything too valuable on my Mac, so I can experiment with deleting the contents of my Time Machine share and starting fresh, but I'd rather hold off to see if anyone has any other ideas.

    I've seen a similar issue mentioned here, but looking at the logs, my error just says "no mountable file systems". Also, I'm running macOS 11.6.5.

    I have to go pick my child up, but I'll create my own support thread soon (and probably delete my old backup dir and start fresh to see if that fixes it).
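
    EDIT: for reference, the share-level Samba options that advertise a Time Machine target look roughly like the below. Unraid's "Enhanced macOS interoperability" toggle is meant to set this up behind the scenes, so this is just for comparison, not something to paste into SMB extras:

    [sctimemach]
       vfs objects = catia fruit streams_xattr
       fruit:time machine = yes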

     

  10. On 5/22/2022 at 4:57 PM, JorgeB said:

     

    TL;DR: I would recommend only running v6.10.x on a server with a Broadcom NIC that uses the tg3 driver if VT-d/IOMMU is disabled, or it might in some cases cause serious stability issues, including possible filesystem corruption.

     

     

    Another update, since this is an important issue: there's a new case with an IBM/Lenovo X3100 M5 server. This server uses the same NIC driver as the HP, so this appears to confirm that the problem is the NIC/NIC driver when IOMMU is enabled. These are the NICs used.

     

    Known problematic NICs:

     

    HP Microserver Gen8:

    [truncated]

     

    Wow, thanks for the warning! I updated on the weekend, but hadn't noticed anything wrong. I just checked my logs, and indeed I had the errors mentioned above.

    So I rebooted and disabled VT-d.

    Do you know how I can check whether I have any data corruption? Is it known to be caused by any specific activities? And would a parity check verify the data and let me find what has been corrupted, if it detects any problems?

    Thanks again.
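
    EDIT: for anyone else wondering - as I understand it, a parity check only confirms that parity matches the data blocks; it can't tell you which files (if any) are corrupt. Filesystem-level checks are more along these lines (a sketch; device and mount names depend on your layout):

    # btrfs pools keep data checksums, so a scrub can detect corrupted files:
    btrfs scrub start -B /mnt/cache
    # XFS array disks have no data checksums; a read-only check covers metadata only:
    xfs_repair -n /dev/md1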

  11. 2 hours ago, johnieutah said:

    What are the advantages to the newer versions? I'm still on 5.14.23-ls76.

     

    I know the Log4j bug was fixed, and it supports newer hardware (I only run AC Lite APs), but are there other new features worth looking at?

     

    Thanks.

    I mostly prefer the new interface.

    There are heaps of other changes since v5. You can see the changelogs from v6 to v7 here: https://community.ui.com/releases/UniFi-Network-Application-7-0-25/3344c362-7da5-4ecd-a403-3b47520e3c01

    And of course there are even more from v5 to v6.

  12. 1 hour ago, JonathanM said:

    Would you say this is a good place to park for a few months? I'm currently still parked at linuxserver/unifi-controller:6.5.55.

     

    If you feel this is a good spot, I'll unpin the 6.2.26 post and pin this one.

    I just upgraded to v7.1.61 from v7.0.25 without issue.

    v7.0.25 has been solid, and the new interface is pretty much complete. I run 2x wired AC LR APs and a USG, so it's not a complicated setup, but I wouldn't hesitate to recommend moving to the v7 series.

  13. On 2/5/2022 at 7:10 AM, jademonkee said:

    Hi all,

    I keep seeing in the logs:

    No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
    run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

    If I restart the container, the error doesn't appear in the logs, but it eventually reappears.

    The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

    echo $MAXMINDDB_LICENSE_KEY

    it returns the correct value.

     

    The only mention of this issue that I can find is this:

    https://github.com/linuxserver/docker-swag/issues/139

     

    Similar to that page, if I run:

    # /config/geoip2db# ls -lah

    it returns:
     

    sh: /config/geoip2db#: not found

     

    But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

    I'm not sure how to manually check whether those changes have been applied in the Docker container or not.

     

    Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

     

    FWIW, it looks like the old method for GeoIP blocking has been superseded, which is why I was getting errors.

    I followed the instructions at: https://github.com/linuxserver/docker-mods/tree/swag-maxmind/ and https://virtualize.link/secure/

    And replaced the old references to GeoIP2 in the config files mentioned in the above instructions. It seems like it's all working now, although I'll find out in a week whether the error (or a new one) pops up in my logs again. I also note that the file appdata/swag/geoip2db/GeoLite2-City.mmdb hasn't been modified since 2021-11-30, so maybe this change will allow it to update? Although, TBH, I'm not even sure what the database does, given I'm banning by country code (does it link IPs to countries, maybe?).
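
    In case it helps anyone else, the moving parts from those instructions boil down to something like the below (names are as I understand them from the linked README and guide, so double-check there):

    # Container environment variables (per the linked docker-mods README):
    #   DOCKER_MODS=linuxserver/mods:swag-maxmind
    #   MAXMINDDB_LICENSE_KEY=<your MaxMind license key>
    #
    # And in the nginx config, the .mmdb database is what maps each client IP
    # to a country code (so yes, it links IPs to countries):
    geoip2 /config/geoip2db/GeoLite2-City.mmdb {
        $geoip2_data_country_iso_code country iso_code;
    }
    map $geoip2_data_country_iso_code $allowed_country {
        default no;
        GB yes;   # example allow-list entry
    }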

    Anyway, thought I'd post it here for posterity, in case anyone else has a similar problem.

  14. Just made the leap from 6.5.55 to 7.0.23.

    So far, everything seems to be working fine, no settings needed to be changed.

    The new interface is heaps better, returning a lot of the functionality and info that had been removed in the move to v6. Everything feels snappy and responsive.

    Very happy with the upgrade.

    Now let's see if it's stable!

  15. Hi all,

    I have installed Redis to try and improve the performance of the gallery in NextCloud.

    I have set it up as per a combination of the official documentation, and this guide written for the LSIO Unraid NextCloud Docker image: https://skylar.tech/reduce-nextcloud-mysql-usage-using-redis-for-cache/

    I notice above that @onkelblubb has mapped a directory for "/data", but I didn't add that to my Docker setup. There is no Redis folder in my appdata dir.

    My NextCloud config file looks like:

    $CONFIG = array (
      'memcache.local' => '\\OC\\Memcache\\APCu',
      'memcache.distributed' => '\\OC\\Memcache\\Redis',
      'memcache.locking' => '\\OC\\Memcache\\Redis',
      'redis' => 
      array (
        'host' => '192.168.200.40',
        'port' => 6379,
      ),

     

    I note that the official NextCloud documentation formats this slightly differently (note the single backslashes rather than double, and the use of [] instead of array() in the Redis definition):

    'memcache.local' => '\OC\Memcache\APCu',
    'memcache.distributed' => '\OC\Memcache\Redis',
    'memcache.locking' => '\OC\Memcache\Redis',
    'redis' => [
         'host' => 'redis-host.example.com',
         'port' => 6379,
    ],

     

    Any idea if that's problematic?
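
    (From what I can tell, the two forms are equivalent: in single-quoted PHP strings a doubled backslash and a single backslash before a letter both produce one literal backslash, and array() is just the older spelling of []. A quick sketch to illustrate:)

    <?php
    // Both spellings compare as identical strings and identical arrays:
    var_dump('\\OC\\Memcache\\Redis' === '\OC\Memcache\Redis'); // bool(true)
    var_dump(array('port' => 6379) === ['port' => 6379]);       // bool(true)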

    I also have no idea if it's working or not. The Redis log is just absolutely filled with:

    [truncated]
    ...
    1:M 21 Feb 2022 13:43:31.044 * 1 changes in 3600 seconds. Saving...
    1:M 21 Feb 2022 13:43:31.044 * Background saving started by pid 179
    179:C 21 Feb 2022 13:43:31.050 * DB saved on disk
    179:C 21 Feb 2022 13:43:31.051 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 13:43:31.145 * Background saving terminated with success
    1:M 21 Feb 2022 13:55:38.286 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 13:55:38.287 * Background saving started by pid 180
    180:C 21 Feb 2022 13:55:38.294 * DB saved on disk
    180:C 21 Feb 2022 13:55:38.295 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 13:55:38.387 * Background saving terminated with success
    1:M 21 Feb 2022 14:00:39.004 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 14:00:39.004 * Background saving started by pid 181
    181:C 21 Feb 2022 14:00:39.011 * DB saved on disk
    181:C 21 Feb 2022 14:00:39.011 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 14:00:39.105 * Background saving terminated with success
    1:M 21 Feb 2022 14:20:19.561 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 14:20:19.561 * Background saving started by pid 182
    182:C 21 Feb 2022 14:20:19.569 * DB saved on disk
    182:C 21 Feb 2022 14:20:19.570 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 14:20:19.662 * Background saving terminated with success
    1:M 21 Feb 2022 14:44:32.005 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 14:44:32.005 * Background saving started by pid 183
    183:C 21 Feb 2022 14:44:32.015 * DB saved on disk
    183:C 21 Feb 2022 14:44:32.015 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 14:44:32.106 * Background saving terminated with success
    1:M 21 Feb 2022 14:49:33.079 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 14:49:33.079 * Background saving started by pid 184
    184:C 21 Feb 2022 14:49:33.089 * DB saved on disk
    184:C 21 Feb 2022 14:49:33.089 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 14:49:33.180 * Background saving terminated with success

     

    The reason I installed it was to try and speed up the NextCloud gallery on Android, so that I can browse a shared photo folder; it was previously unusably slow. So I installed Redis, as well as the Preview Generator app, and pre-cached the images. The gallery is now working faster (though not as fast as I'd like), but I have no idea if it's just because of the pre-generated thumbnails, or if Redis is also playing a role.

    As far as usage goes, Redis is using about 10MB RAM and less than 1% CPU, even when I'm scrolling through the image directory in the Android app. To me, that doesn't seem like it's doing much.

    There are no errors in the NextCloud log.

     

    Your insight and help is appreciated!

     

    EDIT:

    I just thought I'd try mapping an appdata dir, and the Redis logs showed:

    1:C 21 Feb 2022 15:14:44.856 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
    1:C 21 Feb 2022 15:14:44.856 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
    1:C 21 Feb 2022 15:14:44.856 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    1:M 21 Feb 2022 15:14:44.857 * monotonic clock: POSIX clock_gettime
    1:M 21 Feb 2022 15:14:44.858 * Running mode=standalone, port=6379.
    1:M 21 Feb 2022 15:14:44.858 # Server initialized
    1:M 21 Feb 2022 15:14:44.858 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    1:M 21 Feb 2022 15:14:44.858 * Ready to accept connections

     

    So I removed the path mapping, and I still have the same message in the logs. I have no idea if it was there when I first started Redis, as the logs are filled with the messages above. I decided that I might as well add the mapping back, so I have done so.

    The appdata dir that was created by the mapping is empty. Any tips on what could be in the config file for a Redis Docker/Unraid setup?
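
    For reference, a cache-only setup doesn't need much; here's a minimal sketch of the kind of directives a redis.conf could hold (example values, untested here):

    # Minimal cache-style redis.conf sketch:
    bind 0.0.0.0                 # listen on all interfaces (container networking)
    port 6379
    maxmemory 256mb              # cap the memory used for cached keys
    maxmemory-policy allkeys-lru # evict least-recently-used keys when full
    save ""                      # disable RDB snapshots; cache contents are disposable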

    EDIT: the dir now has a "dump.rdb" in it, but nothing else.

    As I browse the shares in NextCloud, the log fills up with messages like the above:

    1:C 21 Feb 2022 15:21:06.136 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
    1:C 21 Feb 2022 15:21:06.136 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
    1:C 21 Feb 2022 15:21:06.136 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    1:M 21 Feb 2022 15:21:06.137 * monotonic clock: POSIX clock_gettime
    1:M 21 Feb 2022 15:21:06.137 * Running mode=standalone, port=6379.
    1:M 21 Feb 2022 15:21:06.137 # Server initialized
    1:M 21 Feb 2022 15:21:06.137 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    1:M 21 Feb 2022 15:21:06.138 * Loading RDB produced by version 6.2.6
    1:M 21 Feb 2022 15:21:06.138 * RDB age 382 seconds
    1:M 21 Feb 2022 15:21:06.138 * RDB memory usage when created 0.77 Mb
    1:M 21 Feb 2022 15:21:06.138 # Done loading RDB, keys loaded: 1, keys expired: 2.
    1:M 21 Feb 2022 15:21:06.138 * DB loaded from disk: 0.000 seconds
    1:M 21 Feb 2022 15:21:06.138 * Ready to accept connections
    1:M 21 Feb 2022 15:28:20.667 * 100 changes in 300 seconds. Saving...
    1:M 21 Feb 2022 15:28:20.667 * Background saving started by pid 19
    19:C 21 Feb 2022 15:28:20.671 * DB saved on disk
    19:C 21 Feb 2022 15:28:20.672 * RDB: 0 MB of memory used by copy-on-write
    1:M 21 Feb 2022 15:28:20.768 * Background saving terminated with success

     

    So, are these logs normal? Do I need to set up a config file for Redis? And is Redis working properly? (Given the logs react as I browse photos, I'm guessing it is.)
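
    EDIT: one way I could check whether NextCloud is actually hitting Redis (assuming the container is named 'Redis' and redis-cli is present, as it is in the official image):

    # Watch cache traffic live while browsing NextCloud:
    docker exec -it Redis redis-cli monitor
    # Or compare hit/miss counters before and after browsing:
    docker exec -it Redis redis-cli info stats | grep -E 'keyspace_(hits|misses)'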

  16. On 2/5/2022 at 7:10 AM, jademonkee said:

    Hi all,

    I keep seeing in the logs:

    No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
    run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

    If I restart the container, the error doesn't appear in the logs, but it eventually reappears.

    The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

    echo $MAXMINDDB_LICENSE_KEY

    it returns the correct value.

     

    The only mention of this issue that I can find is this:

    https://github.com/linuxserver/docker-swag/issues/139

     

    Similar to that page, if I run:

    # /config/geoip2db# ls -lah

    it returns:
     

    sh: /config/geoip2db#: not found

     

    But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

    I'm not sure how to manually check whether those changes have been applied in the Docker container or not.

     

    Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

     

    On 2/16/2022 at 1:38 PM, jademonkee said:

    I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?

    I'm still receiving this error.

    Am I looking in the wrong place for a solution to this? Are LSIO still present in this forum?

  17. On 2/5/2022 at 7:10 AM, jademonkee said:

    Hi all,

    I keep seeing in the logs:

    No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
    run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

    If I restart the container, the error doesn't appear in the logs, but it eventually reappears.

    The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

    echo $MAXMINDDB_LICENSE_KEY

    it returns the correct value.

     

    The only mention of this issue that I can find is this:

    https://github.com/linuxserver/docker-swag/issues/139

     

    Similar to that page, if I run:

    # /config/geoip2db# ls -lah

    it returns:
     

    sh: /config/geoip2db#: not found

     

    But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

    I'm not sure how to manually check whether those changes have been applied in the Docker container or not.

     

    Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

    I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?

  18. Hi all,

    I keep seeing in the logs:

    No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
    run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

    If I restart the container, the error doesn't appear in the logs, but it eventually reappears.

    The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

    echo $MAXMINDDB_LICENSE_KEY

    it returns the correct value.

     

    The only mention of this issue that I can find is this:

    https://github.com/linuxserver/docker-swag/issues/139

     

    Similar to that page, if I run:

    # /config/geoip2db# ls -lah

    it returns:
     

    sh: /config/geoip2db#: not found

     

    But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

    I'm not sure how to manually check whether those changes have been applied in the Docker container or not.

     

    Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

     

    EDIT: Solution found - see my follow-up post in this thread.

     

  19. 12 hours ago, Smitty2k1 said:

    How do I update LMS? V8.1 was installed with the docker package but it keeps telling me 8.3 is available. No clue how to download/install.

     

    Thanks for the docker. Working great with my PiCorePlayer!

     

    11 hours ago, dlandon said:

    The docker container will have to be updated.

     

    I'm already on 8.3 using this container...?