jademonkee

Everything posted by jademonkee

  1. Yeah, can confirm. It's set at the player level, not the server level. So in the PlexAmp app, under Settings > Music Quality, set both WiFi and Cellular (assuming you have a good data plan) to maximum, and it'll send every track in its native format.
  2. If you're looking for a new streaming server, try Plex. They've been working hard on making PlexAmp (their app for streaming music) a joy to use. It's actually amazing. Here's a good feature overview: One thing they've added since that video, though, is "guest DJ" mode, which inserts relevant tracks between tracks on an album or playlist. It works really well, and is a joy to experience. Highly recommended!
  3. I think you may only have success keeping players playing in sync with each other if you use the original Logitech hardware the platform was designed for, as the server can monitor their latency and adjust the stream as necessary. I've never tried streaming to anything other than the original Squeezebox hardware; I dabbled with using a Raspberry Pi with a DAC (using Squeezelite), but stopped using it when I realised it had sync issues. You can pick up old Squeezeboxes off eBay for not much. I bought a few of the display-less Squeezebox "Receiver" (same as the "Duet" but without the very outdated WiFi remote) and now use a Raspberry Pi running Squeezelite as the display and controller (no need for a DAC: it can control any player in your system). But YMMV: I may be wrong in these assertions (the sync issues I had were in the milliseconds range).
  4. Are you using Squeezebox hardware? Or a Raspberry Pi (or something else)? I've never had a problem syncing Squeezebox hardware, but I've found that a Raspberry Pi would fall out of sync easily.
  5. Hi there, As per the recommendation from the Fix Common Problems plugin, I'm posting here looking for insight into the Out Of Memory error I received overnight. My server usually sits at about 40% RAM usage, so I'm surprised to see such an error. A few months ago I did tweak the 'php-fpm' settings in NextCloud to improve performance, so I'm wondering if maybe I went overboard with that. The php-fpm tweaks are:

; Tune PHP-FPM
; https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html#tune-php-fpm
; and values from https://spot13.com/pmcalculator/
pm = dynamic
pm.max_children = 100
pm.start_servers = 25
pm.min_spare_servers = 25
pm.max_spare_servers = 75

Diagnostics attached. Many thanks. percy-diagnostics-20220901-1052_out-of-memory.zip
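EDIT: for reference, the arithmetic behind calculators like spot13.com/pmcalculator is just the RAM you can spare for PHP-FPM divided by the average size of one worker process. A minimal sketch (both figures below are made-up examples, not measurements from my server):

```python
# Rough version of the pm.max_children arithmetic: RAM budget for the
# php-fpm pool divided by the average resident size of one worker.
# Both figures are hypothetical examples.
ram_for_php_mb = 4096   # RAM you're willing to give php-fpm, in MB (assumed)
avg_worker_mb = 60      # typical size of one Nextcloud php-fpm worker (assumed)

max_children = ram_for_php_mb // avg_worker_mb
print(max_children)
```

With pm.max_children = 100 and workers anywhere near that size, the pool could in principle grow to roughly 6 GB under load, which might explain an occasional OOM even though idle usage sits around 40%.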
  6. I fear that in the future they'll neuter the ability to run the app from Docker to force us to buy their hardware (Cloudkey or UDM). EDIT: especially with this in the release notes for 7.2.91:
  7. FYI just upgraded to v7.1.68 as it didn't seem to have many changes. No issues upgrading. Will report back if anything goes haywire.
  8. You should be able to ignore that error and continue. Sometimes on the error page you need to click "Advanced" or something like that, then "Proceed Anyway" (or something like that; this is from memory, and tbh it may not even be the same error). Just thought it might be worth a shot before you restore from backup.
  9. Digging this up to say that I recently bought a 2TB Samsung 970 EVO for a Nextcloud cache, and had totally forgotten that my Dell PERC H310 didn't support TRIM until I received the "fstrim: /mnt/ssd2tb: FITRIM ioctl failed: Remote I/O error" email from Dynamix SSD TRIM yesterday morning. While my onboard SAS port does support TRIM, it only has 2x SATAIII ports, with the others being SATAII - and the two SATAIII ports are already taken up by the 2x 850 EVO SSDs in my (regular) cache pool. So, a very big thanks to @ezhik for their work finding out that P16 works well, as well as providing a copy of the firmware and flashing instructions! After some nail-biting to-and-fro trying to flash the card (getting errors when I tried flashing it in my server - it erased fine, of course though lol - but then it worked when I used my desktop PC), I'm now running P16 and TRIM runs without error.
  10. I get that same message every so often. I've never figured out what "WiFi setting" is "incompatible". Strangely, when the message appears, the interface sets back to "light mode" and the clock to "12 hour", so I have to set them back. I have no idea what causes it. But because it doesn't seem to cause any issues, I've never spent the time to find the reason.
  11. Updated my gen8 Microserver from 6.10.2 without incident. Decided to leave VT-d off in BIOS, as I don't use VMs anyhow. Sonarr and Radarr Dockers both started fine and GUIs load (haven't tried searching, but I don't think there'll be problems).
  12. Yeah, backup in Unifi, and if you have a recent "CA Backups" of your apps, that would be good, too. Also make sure you're on the latest firmware for your Unifi devices. Then just change the tag and away you go.
  13. Try reaching out to CrashPlan customer support. They were able to help me when I had a similar problem.
  14. Here's the output over the time the backup failed (this is after I deleted the contents of the TimeMachine share):

2022-05-24 21:12:51al] Starting manual backup
2022-05-24 21:12:51al] Attempting to mount 'smb://simonchester@percy/sctimemach'
2022-05-24 21:12:52al] Mounted 'smb://simonchester@percy/sctimemach' at '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach'
2022-05-24 21:12:52al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 60, QoS: 0x0, attributes: 0x1C}
2022-05-24 21:12:52al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:12:52al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:12:52al] Creating a sparsebundle using Case-sensitive APFS filesystem
2022-05-24 21:13:10al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8422b10 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:10al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc842c320 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.plist, NSUnderlyingError=0x7f8cc8720100 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:20al] Failed to read 'file:///Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup', error: Error Domain=NSCocoaErrorDomain Code=260 "The file “com.apple.TimeMachine.MachineID.bckup” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle/com.apple.TimeMachine.MachineID.bckup, NSUnderlyingError=0x7f8cc84351c0 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
2022-05-24 21:13:32al] Renamed '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/710CB9A8-9829-5E7F-B77F-E11F8AB058ED.sparsebundle' to '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:32al] Successfully created '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:32al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:37al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:37al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:13:43al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:13:43al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
2022-05-24 21:13:43al] Updating volume role for '/Volumes/Backups of Mercury'
2022-05-24 21:13:44al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:13:44al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:13:44al] Stopping backup to allow volume '/Volumes/Backups of Mercury' to be unmounted.
2022-05-24 21:13:44al] Backup cancel was requested.
2022-05-24 21:13:54al] backupd exiting - cancelation timed out
2022-05-24 21:14:05Management] Initial thermal pressure level 0

And then when I next hit backup and it worked (though I've truncated this at the point it started logging files etc):

2022-05-24 21:19:55al] Starting manual backup
2022-05-24 21:19:55al] Network destination already mounted at: /Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach
2022-05-24 21:19:55al] Initial network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:19:55al] Configured network volume parameters for 'sctimemach' {disablePrimaryReconnect: 0, disableSecondaryReconnect: 0, reconnectTimeOut: 30, QoS: 0x20, attributes: 0x1C}
2022-05-24 21:19:56al] Found matching sparsebundle 'Mercury.sparsebundle' with host UUID '710CB9A8-9829-5E7F-B77F-E11F8AB058ED' and MAC address '(null)'
2022-05-24 21:19:57al] Not performing periodic backup verification: no previous backups to this destination.
2022-05-24 21:19:58al] 'Mercury.sparsebundle' does not need resizing - current logical size is 510.03 GB (510,027,366,400 bytes), size limit is 510.03 GB (510,027,366,400 bytes)
2022-05-24 21:19:58al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:19:58al] Checking for runtime corruption on '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:20:02al] Mountpoint '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach' is still valid
2022-05-24 21:20:02al] Runtime corruption check passed for '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle'
2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:07al] '/Volumes/.timemachine/percy/47F34407-EFBA-4499-969E-25DAFCAFA7C4/sctimemach/Mercury.sparsebundle' mounted at '/Volumes/Backups of Mercury'
2022-05-24 21:20:07al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:08al] Checking identity of target volume '/Volumes/Backups of Mercury'
2022-05-24 21:20:08al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:09al] Backing up to Backups of Mercury (/dev/disk3s1,e): /Volumes/Backups of Mercury
2022-05-24 21:20:09al] Mountpoint '/Volumes/Backups of Mercury' is still valid
2022-05-24 21:20:10Thinning] Starting age based thinning of Time Machine local snapshots on disk '/System/Volumes/Data'
2022-05-24 21:20:10SnapshotManagement] Created Time Machine local snapshot with name 'com.apple.TimeMachine.2022-05-24-212010.local' on disk '/System/Volumes/Data'
2022-05-24 21:20:10al] Declared stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local
2022-05-24 21:20:10SnapshotManagement] Mounted stable snapshot: com.apple.TimeMachine.2022-05-24-212010.local at path: /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/Mercury/2022-05-24-212010/Macintosh HD — Data source: Macintosh HD — Data
2022-05-24 21:20:10pThinning] No further thinning possible - no thinnable backups
2022-05-24 21:20:13Collection] First backup of source: "Macintosh HD — Data" (device: /dev/disk1s1 mount: '/System/Volumes/Data' fsUUID: F88D7C18-E1EC-4F90-9B71-4A481B580F26 eventDBUUID: 58B7A351-1BA1-4761-A370-5A46FA30AC5D)
2022-05-24 21:20:13Collection] Trusting source modification times for remote backups.
2022-05-24 21:20:13Collection] Found 0 perfect clone families, 0 partial clone families. Zero KB physical space used by clone files. Zero KB shared space.
2022-05-24 21:20:13Collection] Finished collecting events from volume "Macintosh HD — Data"
2022-05-24 21:20:13Collection] Saved event cache at /Volumes/Backups of Mercury/2022-05-24-212010.inprogress/.F88D7C18-E1EC-4F90-9B71-4A481B580F26.eventdb
2022-05-24 21:20:13gProgress] (fsk:0,dsk:0,fsz:1,dsz:0)(1/0)

FWIW, it looks like my Mac has been backing up successfully overnight. I'll report back if it fails at some point today. And, for the record, I'm on MacOS v11.6.5.
  15. FYI I deleted the content of my Time Machine share and started fresh. It failed to back up the first time, but then I hit backup again after about 5 minutes, and now it's backing up. If it fails again, I'll create a support thread, but this may yet work.
  16. I just realised that I also can't back up to Time Machine since upgrading to v6.10. It says "Preparing backup" then... nothing. No error: it just doesn't back up. SMB Multichannel is disabled. Enhanced MacOS compat is enabled. SMB extras is:

#vfs_recycle_start
#Recycle bin configuration
[global]
syslog only = Yes
syslog = 0
logging = 0
log level = 0 vfs:0
#vfs_recycle_end

A while ago, I remember having to delete my backups and start fresh, but I was getting errors that time. I don't really have anything too valuable on my Mac, so I can experiment with deleting the contents of my TimeMachine share and starting fresh, but I'd rather hold off to see if anyone has any other ideas. I've seen a similar issue mentioned here, but looking at the logs, my error just says "no mountable file systems". Also, I'm running MacOS 11.6.5. I have to go pick my child up, but will create my own support thread soon (and probably delete my old backup dir and start fresh to see if that fixes it).
  17. Wow, thanks for the warning! I updated on the weekend, but hadn't noticed anything wrong. I just checked my logs, and indeed I had the errors mentioned above. So I rebooted and disabled VT-d. Do you know how I can check to see if I have any data corruption? Is it known to be caused by any specific activities? Would a parity check verify the data (and let me find what has been corrupted, if it detects any problems)? Thanks again.
  18. I mostly prefer the new interface. There are heaps of other changes since v5. You can see the changelogs from v6 > v7 here: https://community.ui.com/releases/UniFi-Network-Application-7-0-25/3344c362-7da5-4ecd-a403-3b47520e3c01 And of course there's even more from v5 > v6.
  19. I just upgraded to v7.1.61 from v7.0.25 without issue. v7.0.25 has been solid, and the new interface is pretty much complete. I run 2x wired AC LR and a USG, so it's not a complicated setup, but I wouldn't hesitate to recommend moving to the v7 series.
  20. FWIW, it looks like the old method for GeoIP blocking has been superseded, which is why I was getting errors. I followed the instructions at: https://github.com/linuxserver/docker-mods/tree/swag-maxmind/ and https://virtualize.link/secure/ and replaced the old references to GeoIP2 in the config files mentioned in those instructions. It seems like it's all working now, although I'll find out in a week if the error (or a new one) pops up in my logs again. I also note that the file appdata/swag/geoip2db/GeoLite2-City.mmdb hasn't been modified since 2021-11-30, so maybe this change will allow it to update? Although, TBH, I'm not even sure what the database does, given I'm banning by country code (does it link IPs to countries, maybe?). Anyway, thought I'd post it here for posterity, in case anyone else has a similar problem.
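EDIT: to answer my own aside: yes, the .mmdb file is the database that maps an IP address to a country (and city), and nginx looks the connecting IP up in it on each request, so it does need to stay reasonably fresh. A hedged sketch of what the relevant config typically looks like with the ngx_http_geoip2_module (the path matches my setup, but the GB/allow mapping is just an example, not my actual config):

```nginx
# Look the connecting IP up in the GeoLite2 database and expose the
# ISO country code as an nginx variable.
geoip2 /config/geoip2db/GeoLite2-City.mmdb {
    $geoip2_country_code country iso_code;
}

# Then translate the country code into an allow/deny decision
# (example: allow only GB).
map $geoip2_country_code $allowed_country {
    default no;
    GB yes;
}
```

A server or location block can then return 403 (or 444) when $allowed_country is "no".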
  21. It's great, as far as I'm concerned. It seems stable (although I've only been running it for a week), and the new UI finally works as it should. Best version yet, dare I say.
  22. Just made the leap from 6.5.55 to 7.0.23. So far, everything seems to be working fine; no settings needed to be changed. The new interface is heaps better, returning a lot of the functionality and info that had been removed in the move to v6. Everything feels snappy and responsive. V happy with the upgrade. Now let's see if it's stable!
  23. Fun! But glad to hear that it worked at least. I've been thinking of making the leap.
  24. Hi all, I have installed Redis to try and improve the performance of the gallery in NextCloud. I have set it up as per a combination of the official documentation and this guide written for the LSIO Unraid NextCloud Docker image: https://skylar.tech/reduce-nextcloud-mysql-usage-using-redis-for-cache/ I notice above that @onkelblubb has mapped a directory for "/data", but I didn't add that to the Docker setup. There is no Redis folder in my appdata dir. My NextCloud config file looks like:

$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => array (
    'host' => '192.168.200.40',
    'port' => 6379,
  ),

I note that the official NextCloud documentation formats it slightly differently (note the single slashes rather than double, and the lack of 'array' in the Redis definition):

'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis-host.example.com',
  'port' => 6379,
],

Any idea if that's problematic? I also have no idea if it's working or not. The Redis log is just absolutely filled with:

[truncated] ...
1:M 21 Feb 2022 13:43:31.044 * 1 changes in 3600 seconds. Saving...
1:M 21 Feb 2022 13:43:31.044 * Background saving started by pid 179
179:C 21 Feb 2022 13:43:31.050 * DB saved on disk
179:C 21 Feb 2022 13:43:31.051 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 13:43:31.145 * Background saving terminated with success
1:M 21 Feb 2022 13:55:38.286 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 13:55:38.287 * Background saving started by pid 180
180:C 21 Feb 2022 13:55:38.294 * DB saved on disk
180:C 21 Feb 2022 13:55:38.295 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 13:55:38.387 * Background saving terminated with success
1:M 21 Feb 2022 14:00:39.004 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:00:39.004 * Background saving started by pid 181
181:C 21 Feb 2022 14:00:39.011 * DB saved on disk
181:C 21 Feb 2022 14:00:39.011 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:00:39.105 * Background saving terminated with success
1:M 21 Feb 2022 14:20:19.561 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:20:19.561 * Background saving started by pid 182
182:C 21 Feb 2022 14:20:19.569 * DB saved on disk
182:C 21 Feb 2022 14:20:19.570 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:20:19.662 * Background saving terminated with success
1:M 21 Feb 2022 14:44:32.005 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:44:32.005 * Background saving started by pid 183
183:C 21 Feb 2022 14:44:32.015 * DB saved on disk
183:C 21 Feb 2022 14:44:32.015 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:44:32.106 * Background saving terminated with success
1:M 21 Feb 2022 14:49:33.079 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 14:49:33.079 * Background saving started by pid 184
184:C 21 Feb 2022 14:49:33.089 * DB saved on disk
184:C 21 Feb 2022 14:49:33.089 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 14:49:33.180 * Background saving terminated with success

The reason I installed it was to try and speed up the NextCloud gallery on Android, so that I can browse a shared photo folder; it was previously unusably slow. So I installed Redis as well as the Preview Generator app, and pre-cached the images. The gallery is now working faster (though not as fast as I'd like), but I have no idea if it's just because of the pre-generated thumbnails, or if Redis is also playing a role. As far as usage goes, Redis is using about 10MB RAM and less than 1% CPU, even when I'm scrolling through the image directory in the Android app. To me, that doesn't seem like it's doing much. There are no errors in the NextCloud log. Your insight and help is appreciated!
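EDIT: one easy sanity check I've found is simply to PING Redis from another machine on the LAN. A minimal sketch in stdlib Python (it speaks Redis's RESP wire protocol directly, so nothing extra needs installing; the host/port are the ones from my config above):

```python
import socket

def resp_command(*parts):
    """Encode a command as a RESP array of bulk strings (the Redis wire format)."""
    out = f"*{len(parts)}\r\n".encode()
    for part in parts:
        data = part.encode()
        out += b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n"
    return out

def redis_ping(host, port=6379, timeout=2.0):
    """Return True if a Redis server at host:port answers PING with +PONG."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(resp_command("PING"))
        return sock.recv(64).startswith(b"+PONG")

# e.g. redis_ping("192.168.200.40") should be True if the container is reachable
```

(Alternatively, opening a console on the Redis container and running `redis-cli monitor` will print every command NextCloud sends, which settles the "is it being used at all?" question.)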
EDIT: I just thought I'd try mapping an appdata dir, and the Redis logs showed:

1:C 21 Feb 2022 15:14:44.856 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 21 Feb 2022 15:14:44.856 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 21 Feb 2022 15:14:44.856 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 21 Feb 2022 15:14:44.857 * monotonic clock: POSIX clock_gettime
1:M 21 Feb 2022 15:14:44.858 * Running mode=standalone, port=6379.
1:M 21 Feb 2022 15:14:44.858 # Server initialized
1:M 21 Feb 2022 15:14:44.858 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 21 Feb 2022 15:14:44.858 * Ready to accept connections

So I removed the path mapping, and I have the same message in the logs. I have no idea if it was there when I first started Redis, as the logs are filled with the messages above. I decided that I might as well add the mapping back, so have done so. The appdata dir that was created by the mapping is empty. Any tips on what could be in the config file for a Redis Docker/Unraid setup?
EDIT: the dir now has a "dump.rdb" in it, but nothing else. As I browse the shares in NextCloud, the log fills up with messages like the above:

1:C 21 Feb 2022 15:21:06.136 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 21 Feb 2022 15:21:06.136 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 21 Feb 2022 15:21:06.136 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 21 Feb 2022 15:21:06.137 * monotonic clock: POSIX clock_gettime
1:M 21 Feb 2022 15:21:06.137 * Running mode=standalone, port=6379.
1:M 21 Feb 2022 15:21:06.137 # Server initialized
1:M 21 Feb 2022 15:21:06.137 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 21 Feb 2022 15:21:06.138 * Loading RDB produced by version 6.2.6
1:M 21 Feb 2022 15:21:06.138 * RDB age 382 seconds
1:M 21 Feb 2022 15:21:06.138 * RDB memory usage when created 0.77 Mb
1:M 21 Feb 2022 15:21:06.138 # Done loading RDB, keys loaded: 1, keys expired: 2.
1:M 21 Feb 2022 15:21:06.138 * DB loaded from disk: 0.000 seconds
1:M 21 Feb 2022 15:21:06.138 * Ready to accept connections
1:M 21 Feb 2022 15:28:20.667 * 100 changes in 300 seconds. Saving...
1:M 21 Feb 2022 15:28:20.667 * Background saving started by pid 19
19:C 21 Feb 2022 15:28:20.671 * DB saved on disk
19:C 21 Feb 2022 15:28:20.672 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Feb 2022 15:28:20.768 * Background saving terminated with success

So are the logs normal? Do I need to set up a config file for Redis? Is Redis working properly? (As the logs react when I browse photos, I'm guessing it is.)
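EDIT: on the overcommit_memory warning in those logs: the current value lives in /proc, and the fix is exactly the sysctl command the log suggests (run on the Unraid host, since the container shares the host kernel). A minimal sketch for checking it, assuming a Linux host:

```python
from pathlib import Path

def parse_overcommit(text):
    """Parse the contents of /proc/sys/vm/overcommit_memory (0, 1 or 2)."""
    return int(text.strip())

def overcommit_mode(path="/proc/sys/vm/overcommit_memory"):
    """Return the live setting; Redis wants this to be 1 so background saves can fork."""
    return parse_overcommit(Path(path).read_text())
```

If overcommit_mode() returns 0, `sysctl vm.overcommit_memory=1` (as root, per the Redis log message) clears the warning until the next reboot.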