jademonkee

Everything posted by jademonkee

  1. Does it? Can you elaborate? I have set up my Mac/SMB settings as per the following post, which includes vetoing the Mac dot files. I note that the 6.12.6 changelog mentions fruit changes to help Time Machine restores. Currently my SMB Extra is as follows:
     veto files = /._*/.DS_Store/
     aio read size = 1
     aio write size = 1
     strict locking = No
     use sendfile = No
     server multi channel support = Yes
     readdir_attr:aapl_rsize = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_max_access = no
     fruit:posix_rename = yes
     fruit:metadata = stream
     Is there something you recommend I change? Either deleting it entirely (perhaps it's set up by default now?) or anything in particular? Are the recommended settings for macOS available somewhere so that I can set them up? Sorry to hijack the thread (I just wanted to respond to the news that veto slows things down), and thanks for your help.
  2. UPDATE: I remembered that I updated the plugin this morning (hours after the backup had run), so thought I'd take a look at all the settings and manually run a backup. All the files are in the new directory, as expected. I don't know what caused it, but it's working as expected now. Please discard my previous post.
  3. My weekly backup ran successfully this morning, however the directory is empty. Logs are as follows: [28.02.2024 05:00:01][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D [28.02.2024 05:00:01][ℹ️][Main] Backing up from: /mnt/cache/appdata [28.02.2024 05:00:01][ℹ️][Main] Backing up to: /mnt/user/Backup/ca_backup_app/ab_20240228_050001 [28.02.2024 05:00:01][ℹ️][Main] Selected containers: DiskSpeed, LogitechMediaServer, MongoDB, QDirStat, Redis, Sonarr, binhex-sabnzbdvpn, gPodder, mariadb-nextcloud, nextcloud, phpmyadmin, plex, radarr, swag, unifi-network-application [28.02.2024 05:00:01][ℹ️][Main] Saving container XML files... [28.02.2024 05:00:01][ℹ️][Main] Method: Stop all container before continuing. [28.02.2024 05:00:01][ℹ️][unifi-network-application] Stopping unifi-network-application... done! (took 8 seconds) [28.02.2024 05:00:09][ℹ️][MongoDB] Stopping MongoDB... done! (took 1 seconds) [28.02.2024 05:00:10][ℹ️][QDirStat] No stopping needed for QDirStat: Not started! [28.02.2024 05:00:10][ℹ️][DiskSpeed] No stopping needed for DiskSpeed: Not started! [28.02.2024 05:00:10][ℹ️][phpmyadmin] No stopping needed for phpmyadmin: Not started! [28.02.2024 05:00:10][ℹ️][gPodder] Stopping gPodder... done! (took 5 seconds) [28.02.2024 05:00:15][ℹ️][Sonarr] Stopping Sonarr... done! (took 4 seconds) [28.02.2024 05:00:19][ℹ️][radarr] Stopping radarr... done! (took 5 seconds) [28.02.2024 05:00:24][ℹ️][nextcloud] Stopping nextcloud... done! (took 5 seconds) [28.02.2024 05:00:29][ℹ️][swag] Stopping swag... done! (took 4 seconds) [28.02.2024 05:00:33][ℹ️][Redis] Stopping Redis... done! (took 0 seconds) [28.02.2024 05:00:33][ℹ️][plex] Stopping plex... done! (took 4 seconds) [28.02.2024 05:00:37][ℹ️][LogitechMediaServer] Stopping LogitechMediaServer... done! (took 3 seconds) [28.02.2024 05:00:40][ℹ️][mariadb-nextcloud] Stopping mariadb-nextcloud... done! (took 4 seconds) [28.02.2024 05:00:44][ℹ️][binhex-sabnzbdvpn] Stopping binhex-sabnzbdvpn... done! (took 2 seconds) [28.02.2024 05:00:46][ℹ️][Main] Starting backup for containers [28.02.2024 05:00:46][ℹ️][unifi-network-application] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][unifi-network-application] Calculated volumes to back up: /mnt/cache/appdata/unifi-network-application [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backing up unifi-network-application... [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backup created without issues [28.02.2024 05:00:46][⚠️][unifi-network-application] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][MongoDB] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][⚠️][MongoDB] MongoDB does not have any volume to back up! Skipping. Please consider ignoring this container. [28.02.2024 05:00:46][ℹ️][QDirStat] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][QDirStat] Calculated volumes to back up: /mnt/cache/appdata/QDirStat [28.02.2024 05:00:46][ℹ️][QDirStat] Backing up QDirStat... [28.02.2024 05:00:46][ℹ️][QDirStat] Backup created without issues [28.02.2024 05:00:46][⚠️][QDirStat] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][DiskSpeed] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][DiskSpeed] Calculated volumes to back up: /mnt/cache/appdata/DiskSpeed [28.02.2024 05:00:46][ℹ️][DiskSpeed] Backing up DiskSpeed... 
[28.02.2024 05:00:46][ℹ️][DiskSpeed] Backup created without issues [28.02.2024 05:00:46][⚠️][DiskSpeed] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][phpmyadmin] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][phpmyadmin] Calculated volumes to back up: /mnt/cache/appdata/phpmyadmin [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backing up phpmyadmin... [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backup created without issues [28.02.2024 05:00:46][⚠️][phpmyadmin] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][gPodder] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][gPodder] Calculated volumes to back up: /mnt/cache/appdata/gPodder [28.02.2024 05:00:46][ℹ️][gPodder] Backing up gPodder... [28.02.2024 05:00:46][ℹ️][gPodder] Backup created without issues [28.02.2024 05:00:46][⚠️][gPodder] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][Sonarr] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][Sonarr] Calculated volumes to back up: /mnt/cache/appdata/sonarr [28.02.2024 05:00:46][ℹ️][Sonarr] Backing up Sonarr... [28.02.2024 05:00:49][ℹ️][Sonarr] Backup created without issues [28.02.2024 05:00:49][⚠️][Sonarr] Skipping verification for this container because its not wanted! [28.02.2024 05:00:49][ℹ️][radarr] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:49][ℹ️][radarr] Calculated volumes to back up: /mnt/cache/appdata/radarr [28.02.2024 05:00:49][ℹ️][radarr] Backing up radarr... [28.02.2024 05:00:58][ℹ️][radarr] Backup created without issues [28.02.2024 05:00:58][⚠️][radarr] Skipping verification for this container because its not wanted! [28.02.2024 05:00:58][ℹ️][nextcloud] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:58][ℹ️][nextcloud] Calculated volumes to back up: /mnt/cache/appdata/nextcloud [28.02.2024 05:00:58][ℹ️][nextcloud] Backing up nextcloud... [28.02.2024 05:01:11][ℹ️][nextcloud] Backup created without issues [28.02.2024 05:01:11][⚠️][nextcloud] Skipping verification for this container because its not wanted! [28.02.2024 05:01:11][ℹ️][swag] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:11][ℹ️][swag] Calculated volumes to back up: /mnt/cache/appdata/swag [28.02.2024 05:01:11][ℹ️][swag] Backing up swag... [28.02.2024 05:01:12][ℹ️][swag] Backup created without issues [28.02.2024 05:01:12][⚠️][swag] Skipping verification for this container because its not wanted! [28.02.2024 05:01:12][ℹ️][Redis] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:12][ℹ️][Redis] Calculated volumes to back up: /mnt/cache/appdata/redis/data [28.02.2024 05:01:12][ℹ️][Redis] Backing up Redis... [28.02.2024 05:01:12][ℹ️][Redis] Backup created without issues [28.02.2024 05:01:12][⚠️][Redis] Skipping verification for this container because its not wanted! [28.02.2024 05:01:12][ℹ️][plex] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:12][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex [28.02.2024 05:01:12][ℹ️][plex] Backing up plex... [28.02.2024 05:05:29][ℹ️][plex] Backup created without issues [28.02.2024 05:05:29][⚠️][plex] Skipping verification for this container because its not wanted! [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Should NOT backup external volumes, sanitizing them... 
[28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Calculated volumes to back up: /mnt/cache/appdata/LogitechMediaServer [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Backing up LogitechMediaServer... [28.02.2024 05:05:45][ℹ️][LogitechMediaServer] Backup created without issues [28.02.2024 05:05:45][⚠️][LogitechMediaServer] Skipping verification for this container because its not wanted! [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Calculated volumes to back up: /mnt/cache/appdata/mariadb [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Backing up mariadb-nextcloud... [28.02.2024 05:05:50][ℹ️][mariadb-nextcloud] Backup created without issues [28.02.2024 05:05:50][⚠️][mariadb-nextcloud] Skipping verification for this container because its not wanted! [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] '/mnt/cache/appdata/binhex-sabnzbdvpn/data' is within mapped volume '/mnt/cache/appdata/binhex-sabnzbdvpn'! Ignoring! [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Calculated volumes to back up: /mnt/cache/appdata/binhex-sabnzbdvpn [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backing up binhex-sabnzbdvpn... [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backup created without issues [28.02.2024 05:05:50][⚠️][binhex-sabnzbdvpn] Skipping verification for this container because its not wanted! [28.02.2024 05:05:50][ℹ️][Main] Set containers to previous state [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Starting binhex-sabnzbdvpn... (try #1) done! [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] Starting mariadb-nextcloud... (try #1) done! [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] The container has a delay set, waiting 5 seconds before carrying on [28.02.2024 05:06:03][ℹ️][LogitechMediaServer] Starting LogitechMediaServer... (try #1) done! [28.02.2024 05:06:05][ℹ️][plex] Starting plex... (try #1) done! [28.02.2024 05:06:08][ℹ️][Redis] Starting Redis... (try #1) done! [28.02.2024 05:06:10][ℹ️][swag] Starting swag... (try #1) done! [28.02.2024 05:06:10][ℹ️][swag] The container has a delay set, waiting 15 seconds before carrying on [28.02.2024 05:06:25][ℹ️][nextcloud] Starting nextcloud... (try #1) done! [28.02.2024 05:06:28][ℹ️][radarr] Starting radarr... (try #1) done! [28.02.2024 05:06:30][ℹ️][Sonarr] Starting Sonarr... (try #1) done! [28.02.2024 05:06:32][ℹ️][gPodder] Starting gPodder... (try #1) done! [28.02.2024 05:06:35][ℹ️][phpmyadmin] Starting phpmyadmin is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][DiskSpeed] Starting DiskSpeed is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][QDirStat] Starting QDirStat is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][MongoDB] Starting MongoDB... (try #1) done! [28.02.2024 05:06:35][ℹ️][MongoDB] The container has a delay set, waiting 5 seconds before carrying on [28.02.2024 05:06:40][ℹ️][unifi-network-application] Starting unifi-network-application... (try #1) done! [28.02.2024 05:06:43][ℹ️][Main] Backing up the flash drive. [28.02.2024 05:07:16][ℹ️][Main] Flash backup created! [28.02.2024 05:07:16][ℹ️][Main] Checking retention... [28.02.2024 05:07:16][ℹ️][Main] Delete old backup: /mnt/user/Backup/ca_backup_app/ab_20240124_050001 [28.02.2024 05:07:18][ℹ️][Main] DONE! 
Thanks for using this plugin and have a safe day ;) [28.02.2024 05:07:18][ℹ️][Main] ❤️ I also created a private debug log. The debug log id is: 36756508-9131-4a72-9764-cd0ed83a1b75
  4. I just specify the IP address of the Mongo host, rather than a hostname. So delete the Unifi controller image (it only evaluates the name on the first run) and set MONGO_HOST to 192.168.1.200. Note that I don't use "bridge" but a custom network; I think you'll be fine specifying your server's IP, though.
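     A rough equivalent as a plain docker run, for illustration only (the container name, bridge network, port, and the 192.168.1.200 address are assumptions carried over from the post above; the MONGO_* variables are the ones the linuxserver.io image documents):
         docker run -d --name=unifi-network-application \
           --net=bridge \
           -p 8443:8443 \
           -e MONGO_HOST=192.168.1.200 \
           -e MONGO_PORT=27017 \
           -e MONGO_USER=unifi \
           -e MONGO_PASS=<password> \
           -e MONGO_DBNAME=unifi \
           lscr.io/linuxserver/unifi-network-application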
  5. Yeah, I'm clueless as to whether it made any difference for me, but figure that I've gone to the trouble of setting it up, and it takes up so few resources that I might as well keep it around. I initially installed it to try to get my photos to load faster in my photo app. I applied so many fixes that I have no idea which one fixed it, but it's working well enough now, so I don't want to rock the boat.
  6. Sooooo the Mongo docker now starts/runs fine again... There was an update to the Mongo container, so maybe it was just a bug? Or maybe some lock on some file expired? I don't know what's going on anymore lol
  7. My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why. {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23377, "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}} {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23378, "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}} {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23381, "ctx":"SignalHandler","msg":"will terminate after current cmd ends"} {"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"CONTROL", "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":23017, "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W", "c":"NETWORK", "id":23022, "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"COMMAND", 
"id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"INDEX", "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"} {"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I", 
"c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"} {"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}} {"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"} {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"} {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}} {"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E", "c":"NETWORK", "id":23024, "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"} {"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E", "c":"NETWORK", "id":23024, "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"} I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right? I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir? Really tempted to move over to PeteA's all in one Docker now...
  8. Thanks for the info. FWIW I opened a terminal to the mongodb container and entered mongod --quiet to stop the logging. If anything goes wrong, I guess I'll turn the logging back on, but for the moment I don't really need it (hopefully that's not naive of me 😅)
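     For reference, a minimal sketch of making that flag stick across restarts, assuming the stock mongo image (which appends any extra arguments to mongod, so --quiet can go in the Unraid template's "Post Arguments" field or at the end of a docker run; the container name, image tag and appdata path here are assumptions):
         docker run -d --name=MongoDB \
           -p 27017:27017 \
           -v /mnt/cache/appdata/mongodb:/data/db \
           mongo:4.4 --quiet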
  9. Just moved over to the two-container solution using a combination of the instructions from LS.IO and in this thread. Everything seems to be working fine now, except that my mongodb log is constantly being written to with entries like the following: {"t":{"$date":"2024-01-05T15:59:22.000+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.19.0.3:50624","client":"conn12","doc":{"driver":{"name":"mongo-java-driver|sync","version":"4.6.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"6.1.64-Unraid"},"platform":"Java/Private Build/17.0.9+9-Ubuntu-122.04"}}} {"t":{"$date":"2024-01-05T15:59:22.052+00:00"},"s":"I", "c":"ACCESS", "id":20250, "ctx":"conn12","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"unifi","authenticationDatabase":"unifi","remote":"172.19.0.3:50624","extraInfo":{}}} {"t":{"$date":"2024-01-05T15:59:23.293+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470363:293326][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6840, snapshot max: 6840 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:00:23.344+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470423:344527][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6918, snapshot max: 6918 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:01:23.377+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470483:377873][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6921, snapshot max: 6921 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:02:23.398+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470543:398781][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6969, snapshot max: 6969 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:03:23.425+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470603:425199][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6974, snapshot max: 6974 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} Is anyone else running the Mongo container getting this? Any idea what it means? Is the log by default set to verbose? This will eat up space really quickly.
  10. I've been putting off the switch to the new Docker, but was wondering: is it worth moving to v8 before I make the switch, or after? Or is it still too early to trust Ubiquiti with a new major version, and I should just stick with v7 for a while longer, anyway? Thanks for your insight!
  11. While there's a fix for a data corruption issue in ZFS, is there any way we can find out if we were affected by the corruption?
  12. Ok, I've changed the command to:
      rsync -av --times --delete --exclude '.Recycle.Bin' /mnt/disk1/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE
      And the log is filled with entries like:
      rsync: [generator] chown "/mnt/disks/ADATA2TB/mymusic/Albums/Conjoint/[2000] Earprints [FLAC]/01. Earprint Nr1.flac" failed: Operation not permitted (1)
      for every file and dir (I think because exfat doesn't support permissions etc, and the -a flag is the equivalent of -rlptgoD, which includes owners, groups, permissions etc). The rsync man page, however, suggests that --size-only could be a useful flag here: "This modifies rsync's "quick check" algorithm for finding files that need to be transferred, changing it from the default of transferring files with either a changed size or a changed last-modified time to just looking for files that have changed in size. This is useful when starting to use rsync after using another mirroring system which may not preserve timestamps exactly." So I've changed my rsync command to:
      rsync -vrltD --size-only --delete --exclude '.Recycle.Bin' /mnt/disk1/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE
      And it worked for a run (no changes to sync, however). I will find out when daylight savings next changes whether this solution has worked (though I suspect it will). I'll report back if changing to 'size only' causes any weird sync issues. Thanks for your help.
  13. Ah, I already specified -t in my command (vrltD --delete), so I don't think it'll fix it. The issue is that the server's time changes for DST, but the timezone for the exfat isn't specified, so it thinks that every file has shifted by an hour. If it isn't possible to specify the TZ for a specific disk in UD, I'll keep hunting around for an rsync solution. A workaround is to specify "--modify-window=3601", though this means that if I modify something, sync it, then modify it again all within an hour, it won't sync those changes next time. I can probably keep this in mind, though. Thanks.
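     For anyone wanting the assembled command, a sketch of that workaround using the same flags, paths and variables as the script earlier in the thread (3601 = one hour plus a second of slack):
         rsync -vrltD --modify-window=3601 --delete --exclude '.Recycle.Bin' /mnt/user/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE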
  14. Hi, sorry, I originally quoted an earlier detailed post, which can be found here: The summary is that I have an exfat disk that I keep music on and which is accessed by Windows and Mac computers (thus my preference for exfat). I sync it from my server using UD and rsync (the rsync script is in the original post). I posted a while ago that it occasionally syncs the entire drive contents, rather than just what has changed, and I now realise it's because of daylight savings changing, and exfat not using unix time. At the time I thought maybe the drive was failing. One solution is to mount the device using the option "tz-utc", so that the time stamps don't change when the UK moves over to DST (we're effectively on UTC otherwise), so I was wondering if it was possible to mount specific disks as UTC, and how to do it. I would think that the script would be too late for adding that mounting option, correct? I'm open to other suggestions, though, including modifying the rsync command/script, or even moving to a format freely and reliably supported by Windows and Mac that also supports unix timestamps/DST. The rsync command from the script is: rsync -vrltD --delete --exclude '.Recycle.Bin' /mnt/user/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE Many thanks.
  15. So, the same thing happened again today: I plugged in my 'sporadic' backup disk and it started copying everything again. I went back to find my old post to see what the solution was, and I couldn't help but notice that I mentioned in my original post that this happened the first time plugging in the disk since daylight savings time started. Well, this is actually the first time I've plugged in the disk since daylight savings time ended and here we are with it copying over everything again. I note that it doesn't happen with my other backup disk, which is formatted in ext4. Could this be related to the disk being exfat and the file timestamps changing relative to DST? Is the solution to run my server as UTC, or is the solution to modify my script? If it's running the server in UTC, what are the drawbacks? I'm assuming I can still choose timezones in Docker apps, yes (my Squeezeboxes are also my clocks)? Thanks for your help and input. EDIT: I note that this page: https://askubuntu.com/questions/1323668/one-volume-did-not-go-dst-with-the-rest-of-the-system Mentions that you can mount disks using the option: tz=UTC How would I go about adding that option to UD for this disk?
  16. I know I'm digging up an old thread here, but I thought I'd chime in with something I found out yesterday: for some reason, my LSIO Nextcloud instance had the log level defined in config.php set to 0 (debug), rather than the default 2 (warn). I'd never set this myself, so I'm assuming it's a silly default set by LSIO. Since I changed it, my instance seems snappier, and my photo thumbnails in this third-party Android app (https://play.google.com/store/apps/details?id=com.nkming.nc_photos.paid) load heaps quicker as I jump through the timeline. So, if anyone on the LSIO Nextcloud container is experiencing problems, double-check the log level set in your config.php. HTH
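     For anyone hunting for the exact setting, it's the 'loglevel' key in config.php; a sketch of checking and setting it with occ (assuming the container is named 'nextcloud' and the occ wrapper is on its PATH):
         docker exec -it nextcloud occ config:system:get loglevel
         docker exec -it nextcloud occ config:system:set loglevel --value=2 --type=integer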
  17. I installed it anyway and am now running "2023.09.05.31.main". Is it suitable for public consumption?
  18. Sorry, I don't know what that error means. LS.io no longer offer support through this forum, instead offering it through their Discord, so maybe try there.
  19. Dunno quite what that means. Some use Bridge, some use Host, some use a custom network for reverse proxy (Swag + Nextcloud).
  20. Just adding to the voices on this issue: I use MACVLAN on Docker, and have had no problems with that. I have Unifi gear (USG and 2x APs, with the controller running in Docker), and have no problems (except for it complaining that my bonded eth on the server shares an IP address). If there's anything I can do to help troubleshoot this problem (contributing to a known-working hardware list, for instance), feel free to reach out.
  21. FWIW, I copied all the .err files to a new directory (just in case), then deleted all of them except the one listed in the logs at start-up. I then renamed the active log file to <filename>.err.old. Next I logged into the MySQL console by opening the mariadb console via the Unraid GUI and issuing the command:
      mysql -uroot -p
      As per https://mariadb.com/kb/en/flush/, I then issued the command to close and reopen the .err file (basically recreating it):
      flush error logs;
      Now the .err file is only KB in size, and I have recovered 3GB of space on the cache drive by clearing the errors that had been accumulating for a couple of years now. Fingers crossed I haven't messed anything up!
  22. I Googled the error this morning and it seems to stem from a poor upgrade between mariadb versions (i.e. the container updated the version, but there were manual steps needed inside the container that I was not aware of). This thread shed some light on it: https://github.com/photoprism/photoprism/issues/2382 Specifically, I ran the following command, and now the error isn't constantly spamming the .err logfiles:
      mysql_upgrade --user=root --password=<root_pwd>
      Does anyone know if I can just delete all the .err files from the folder now?