
jademonkee

Members
  • Posts: 337
  • Joined
  • Last visited

  • Gender: Male
  • Location: Somerset, England


jademonkee's Achievements

  • Contributor (5/14)
  • Reputation: 49
  • Community Answers: 1
  1. I usually avoid 'latest' tags so that I don't inadvertently upgrade (as sometimes upgrades break things), so I use a version tag (linuxserver/nextcloud:version-29.0.3 - a list of other tags is here). LSIO have built the upgrade scripts into the container startup, so I'd pin a container version now and manually change it for the next NC version upgrade (and see if it upgrades properly this time); a hedged sketch of pinning a tag follows the post list below. If you have been using 'latest' and have been updating/restarting the Docker image, then it's all been happening in the background, so you should have had upgrades happening without your intervention. I think your initial error message showed that one such upgrade went wrong at some point, but it looks like you've now fixed it.
  2. Hi all. FWIW, I upgraded from a USG to the Unifi Cloud Gateway Ultra last week, which contains its own Unifi Network application. The switchover was really easy, and I'm grateful to have simplified the management of the Unifi Network software. The device was about the same cost as the old Cloud Key v2 (£106 inc. shipping), but also includes a gigabit-capable router (my USG needed IPS/IDS disabled to handle gigabit internet speeds) and a 4-port gigabit switch. It's a great device!
  3. The method to update changed, yes: you need to change the Docker tag to update. If you'd changed the tag and were still getting the "update needed" message, then I'm glad you solved it. If you hadn't changed the tag, then I think your update won't stick the next time there is an update to the Docker container (not a Nextcloud update, but a LinuxServer.io Docker image update, which happens weekly). So do make sure that your Docker tag and current Nextcloud version are aligned (see the update-flow sketch after this post list). I'd also avoid using the web upgrade and stick to the Docker method, as that's how LSIO have designed it to work. Stepping outside of that has potential for gubbins, gruffins, and other mischief in your system.
  4. I occasionally get these errors in my Nextcloud v28.0.4 (using MariaDB v10.11.5):
     An exception occurred while executing a query: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '1-2934029-files' for key 'PRIMARY' (31 May 2024, 1:09:31 pm)
     An exception occurred while executing a query: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '1-2934029-files' for key 'PRIMARY' (31 May 2024, 12:47:17 pm)
     An exception occurred while executing a query: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '1-2931449-files' for key 'PRIMARY' (27 May 2024, 4:56:24 pm)
     Does anyone have an idea what they mean, what I can do about them, or how I can find out more information (a sketch for pulling more context from the Nextcloud log follows the post list below)? Many thanks.
  5. Does it? Can you elaborate? I have set my Mac/SMB settings up as per the following post, which includes vetoing the Mac dot files. I note that the 6.12.6 changelog mentions fruit changes to help Time Machine restores. Currently my SMB Extra is as follows:
     veto files = /._*/.DS_Store/
     aio read size = 1
     aio write size = 1
     strict locking = No
     use sendfile = No
     server multi channel support = Yes
     readdir_attr:aapl_rsize = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_max_access = no
     fruit:posix_rename = yes
     fruit:metadata = stream
     Is there something you recommend I change? Either deleting it entirely (perhaps it's set up by default now?) or anything in particular? Are the recommended settings for macOS available somewhere so that I can set them up (a sketch for checking the effective Samba settings follows the post list below)? Sorry to hijack the thread (I just wanted to respond to the news that veto slows things down), and thanks for your help.
  6. UPDATE: I remembered that I updated the plugin this morning (hours after the backup had run), so I thought I'd take a look at all the settings and manually run a backup. All the files are in the new directory, as expected. I don't know what caused it, but it's working as expected now. Please disregard my previous post.
  7. My weekly backup ran successfully this morning, however the directory is empty. Logs are as follows: [28.02.2024 05:00:01][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D [28.02.2024 05:00:01][ℹ️][Main] Backing up from: /mnt/cache/appdata [28.02.2024 05:00:01][ℹ️][Main] Backing up to: /mnt/user/Backup/ca_backup_app/ab_20240228_050001 [28.02.2024 05:00:01][ℹ️][Main] Selected containers: DiskSpeed, LogitechMediaServer, MongoDB, QDirStat, Redis, Sonarr, binhex-sabnzbdvpn, gPodder, mariadb-nextcloud, nextcloud, phpmyadmin, plex, radarr, swag, unifi-network-application [28.02.2024 05:00:01][ℹ️][Main] Saving container XML files... [28.02.2024 05:00:01][ℹ️][Main] Method: Stop all container before continuing. [28.02.2024 05:00:01][ℹ️][unifi-network-application] Stopping unifi-network-application... done! (took 8 seconds) [28.02.2024 05:00:09][ℹ️][MongoDB] Stopping MongoDB... done! (took 1 seconds) [28.02.2024 05:00:10][ℹ️][QDirStat] No stopping needed for QDirStat: Not started! [28.02.2024 05:00:10][ℹ️][DiskSpeed] No stopping needed for DiskSpeed: Not started! [28.02.2024 05:00:10][ℹ️][phpmyadmin] No stopping needed for phpmyadmin: Not started! [28.02.2024 05:00:10][ℹ️][gPodder] Stopping gPodder... done! (took 5 seconds) [28.02.2024 05:00:15][ℹ️][Sonarr] Stopping Sonarr... done! (took 4 seconds) [28.02.2024 05:00:19][ℹ️][radarr] Stopping radarr... done! (took 5 seconds) [28.02.2024 05:00:24][ℹ️][nextcloud] Stopping nextcloud... done! (took 5 seconds) [28.02.2024 05:00:29][ℹ️][swag] Stopping swag... done! (took 4 seconds) [28.02.2024 05:00:33][ℹ️][Redis] Stopping Redis... done! (took 0 seconds) [28.02.2024 05:00:33][ℹ️][plex] Stopping plex... done! (took 4 seconds) [28.02.2024 05:00:37][ℹ️][LogitechMediaServer] Stopping LogitechMediaServer... done! (took 3 seconds) [28.02.2024 05:00:40][ℹ️][mariadb-nextcloud] Stopping mariadb-nextcloud... done! (took 4 seconds) [28.02.2024 05:00:44][ℹ️][binhex-sabnzbdvpn] Stopping binhex-sabnzbdvpn... done! (took 2 seconds) [28.02.2024 05:00:46][ℹ️][Main] Starting backup for containers [28.02.2024 05:00:46][ℹ️][unifi-network-application] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][unifi-network-application] Calculated volumes to back up: /mnt/cache/appdata/unifi-network-application [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backing up unifi-network-application... [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backup created without issues [28.02.2024 05:00:46][⚠️][unifi-network-application] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][MongoDB] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][⚠️][MongoDB] MongoDB does not have any volume to back up! Skipping. Please consider ignoring this container. [28.02.2024 05:00:46][ℹ️][QDirStat] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][QDirStat] Calculated volumes to back up: /mnt/cache/appdata/QDirStat [28.02.2024 05:00:46][ℹ️][QDirStat] Backing up QDirStat... [28.02.2024 05:00:46][ℹ️][QDirStat] Backup created without issues [28.02.2024 05:00:46][⚠️][QDirStat] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][DiskSpeed] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][DiskSpeed] Calculated volumes to back up: /mnt/cache/appdata/DiskSpeed [28.02.2024 05:00:46][ℹ️][DiskSpeed] Backing up DiskSpeed... 
[28.02.2024 05:00:46][ℹ️][DiskSpeed] Backup created without issues [28.02.2024 05:00:46][⚠️][DiskSpeed] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][phpmyadmin] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][phpmyadmin] Calculated volumes to back up: /mnt/cache/appdata/phpmyadmin [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backing up phpmyadmin... [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backup created without issues [28.02.2024 05:00:46][⚠️][phpmyadmin] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][gPodder] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][gPodder] Calculated volumes to back up: /mnt/cache/appdata/gPodder [28.02.2024 05:00:46][ℹ️][gPodder] Backing up gPodder... [28.02.2024 05:00:46][ℹ️][gPodder] Backup created without issues [28.02.2024 05:00:46][⚠️][gPodder] Skipping verification for this container because its not wanted! [28.02.2024 05:00:46][ℹ️][Sonarr] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:46][ℹ️][Sonarr] Calculated volumes to back up: /mnt/cache/appdata/sonarr [28.02.2024 05:00:46][ℹ️][Sonarr] Backing up Sonarr... [28.02.2024 05:00:49][ℹ️][Sonarr] Backup created without issues [28.02.2024 05:00:49][⚠️][Sonarr] Skipping verification for this container because its not wanted! [28.02.2024 05:00:49][ℹ️][radarr] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:49][ℹ️][radarr] Calculated volumes to back up: /mnt/cache/appdata/radarr [28.02.2024 05:00:49][ℹ️][radarr] Backing up radarr... [28.02.2024 05:00:58][ℹ️][radarr] Backup created without issues [28.02.2024 05:00:58][⚠️][radarr] Skipping verification for this container because its not wanted! [28.02.2024 05:00:58][ℹ️][nextcloud] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:00:58][ℹ️][nextcloud] Calculated volumes to back up: /mnt/cache/appdata/nextcloud [28.02.2024 05:00:58][ℹ️][nextcloud] Backing up nextcloud... [28.02.2024 05:01:11][ℹ️][nextcloud] Backup created without issues [28.02.2024 05:01:11][⚠️][nextcloud] Skipping verification for this container because its not wanted! [28.02.2024 05:01:11][ℹ️][swag] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:11][ℹ️][swag] Calculated volumes to back up: /mnt/cache/appdata/swag [28.02.2024 05:01:11][ℹ️][swag] Backing up swag... [28.02.2024 05:01:12][ℹ️][swag] Backup created without issues [28.02.2024 05:01:12][⚠️][swag] Skipping verification for this container because its not wanted! [28.02.2024 05:01:12][ℹ️][Redis] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:12][ℹ️][Redis] Calculated volumes to back up: /mnt/cache/appdata/redis/data [28.02.2024 05:01:12][ℹ️][Redis] Backing up Redis... [28.02.2024 05:01:12][ℹ️][Redis] Backup created without issues [28.02.2024 05:01:12][⚠️][Redis] Skipping verification for this container because its not wanted! [28.02.2024 05:01:12][ℹ️][plex] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:01:12][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex [28.02.2024 05:01:12][ℹ️][plex] Backing up plex... [28.02.2024 05:05:29][ℹ️][plex] Backup created without issues [28.02.2024 05:05:29][⚠️][plex] Skipping verification for this container because its not wanted! [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Should NOT backup external volumes, sanitizing them... 
[28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Calculated volumes to back up: /mnt/cache/appdata/LogitechMediaServer [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Backing up LogitechMediaServer... [28.02.2024 05:05:45][ℹ️][LogitechMediaServer] Backup created without issues [28.02.2024 05:05:45][⚠️][LogitechMediaServer] Skipping verification for this container because its not wanted! [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Calculated volumes to back up: /mnt/cache/appdata/mariadb [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Backing up mariadb-nextcloud... [28.02.2024 05:05:50][ℹ️][mariadb-nextcloud] Backup created without issues [28.02.2024 05:05:50][⚠️][mariadb-nextcloud] Skipping verification for this container because its not wanted! [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] '/mnt/cache/appdata/binhex-sabnzbdvpn/data' is within mapped volume '/mnt/cache/appdata/binhex-sabnzbdvpn'! Ignoring! [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Should NOT backup external volumes, sanitizing them... [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Calculated volumes to back up: /mnt/cache/appdata/binhex-sabnzbdvpn [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backing up binhex-sabnzbdvpn... [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backup created without issues [28.02.2024 05:05:50][⚠️][binhex-sabnzbdvpn] Skipping verification for this container because its not wanted! [28.02.2024 05:05:50][ℹ️][Main] Set containers to previous state [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Starting binhex-sabnzbdvpn... (try #1) done! [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] Starting mariadb-nextcloud... (try #1) done! [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] The container has a delay set, waiting 5 seconds before carrying on [28.02.2024 05:06:03][ℹ️][LogitechMediaServer] Starting LogitechMediaServer... (try #1) done! [28.02.2024 05:06:05][ℹ️][plex] Starting plex... (try #1) done! [28.02.2024 05:06:08][ℹ️][Redis] Starting Redis... (try #1) done! [28.02.2024 05:06:10][ℹ️][swag] Starting swag... (try #1) done! [28.02.2024 05:06:10][ℹ️][swag] The container has a delay set, waiting 15 seconds before carrying on [28.02.2024 05:06:25][ℹ️][nextcloud] Starting nextcloud... (try #1) done! [28.02.2024 05:06:28][ℹ️][radarr] Starting radarr... (try #1) done! [28.02.2024 05:06:30][ℹ️][Sonarr] Starting Sonarr... (try #1) done! [28.02.2024 05:06:32][ℹ️][gPodder] Starting gPodder... (try #1) done! [28.02.2024 05:06:35][ℹ️][phpmyadmin] Starting phpmyadmin is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][DiskSpeed] Starting DiskSpeed is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][QDirStat] Starting QDirStat is being ignored, because it was not started before (or should not be started). [28.02.2024 05:06:35][ℹ️][MongoDB] Starting MongoDB... (try #1) done! [28.02.2024 05:06:35][ℹ️][MongoDB] The container has a delay set, waiting 5 seconds before carrying on [28.02.2024 05:06:40][ℹ️][unifi-network-application] Starting unifi-network-application... (try #1) done! [28.02.2024 05:06:43][ℹ️][Main] Backing up the flash drive. [28.02.2024 05:07:16][ℹ️][Main] Flash backup created! [28.02.2024 05:07:16][ℹ️][Main] Checking retention... [28.02.2024 05:07:16][ℹ️][Main] Delete old backup: /mnt/user/Backup/ca_backup_app/ab_20240124_050001 [28.02.2024 05:07:18][ℹ️][Main] DONE! 
Thanks for using this plugin and have a safe day ;) [28.02.2024 05:07:18][ℹ️][Main] ❤️ I also created a private debug log. The debug log id is: 36756508-9131-4a72-9764-cd0ed83a1b75
  8. I just specify the IP address of the Mongo host rather than a hostname. So delete the Unifi controller container (it only evaluates the Mongo host setting on the first run) and set MONGO_HOST to 192.168.1.200 (a full docker run sketch follows the post list below). Note that I don't use "bridge" but a custom network; I think you'll be fine specifying your server's IP, though.
  9. Yeah, I'm clueless as to whether it made any difference for me, but I figure that I've gone to the trouble of setting it up, and it uses so few resources that I might as well keep it around. I initially installed it to try to get my photos to load faster in my photo app. I applied so many fixes that I have no idea which one fixed it, but it's working well enough now, so I don't want to rock the boat.
  10. Sooooo the Mongo docker now starts/runs fine again... There was an update to the Mongo container, so maybe it was just a bug? Or maybe some lock on some file expired? I don't know what's going on anymore lol
  11. My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why. {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23377, "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}} {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23378, "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}} {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I", "c":"CONTROL", "id":23381, "ctx":"SignalHandler","msg":"will terminate after current cmd ends"} {"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"} {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I", "c":"CONTROL", "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":23017, "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W", "c":"NETWORK", "id":23022, "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"STORAGE", "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"-", "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"COMMAND", 
"id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"INDEX", "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"REPL", "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"} {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}} {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"} {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"} {"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I", 
"c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"} {"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}} {"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"} {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"} {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"} {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}} {"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}} {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E", "c":"NETWORK", "id":23024, "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}} {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"} {"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E", "c":"NETWORK", "id":23024, "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}} {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"} I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right? I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir? Really tempted to move over to PeteA's all in one Docker now...
  12. Thanks for the info. FWIW, I opened a terminal to the mongodb container and entered mongod --quiet to stop the logging (a sketch for making this persistent via the container's command follows the post list below). If anything goes wrong, I guess I'll turn the logging back on, but for the moment I don't really need it (hopefully that's not naive of me 😅)
  13. Just moved over to the two-container solution using a combination of the instructions from LS.IO and in this thread. Everything seems to be working fine now, except that my mongodb log is constantly being written to with entries like the following: {"t":{"$date":"2024-01-05T15:59:22.000+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.19.0.3:50624","client":"conn12","doc":{"driver":{"name":"mongo-java-driver|sync","version":"4.6.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"6.1.64-Unraid"},"platform":"Java/Private Build/17.0.9+9-Ubuntu-122.04"}}} {"t":{"$date":"2024-01-05T15:59:22.052+00:00"},"s":"I", "c":"ACCESS", "id":20250, "ctx":"conn12","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"unifi","authenticationDatabase":"unifi","remote":"172.19.0.3:50624","extraInfo":{}}} {"t":{"$date":"2024-01-05T15:59:23.293+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470363:293326][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6840, snapshot max: 6840 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:00:23.344+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470423:344527][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6918, snapshot max: 6918 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:01:23.377+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470483:377873][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6921, snapshot max: 6921 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:02:23.398+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470543:398781][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6969, snapshot max: 6969 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} {"t":{"$date":"2024-01-05T16:03:23.425+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470603:425199][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6974, snapshot max: 6974 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}} Is anyone else running the Mongo container getting this? Any idea what it means? Is the log by default set to verbose? This will eat up space really quickly.
  14. I've been putting off the switch to the new Docker, but was wondering: is it worth moving to v8 before I make the switch, or after? Or is it still too early to trust Ubiquiti with a new major version, and I should just stick with v7 for a while longer, anyway? Thanks for your insight!
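
Regarding post 1 (pinning the LinuxServer.io Nextcloud image): a minimal sketch of what pinning a version tag looks like from the command line, assuming the lscr.io registry path and typical Unraid appdata/port mappings, all of which are examples rather than a prescription.

    # Pull a specific Nextcloud image version instead of :latest
    docker pull lscr.io/linuxserver/nextcloud:version-29.0.3

    # Recreate the container against the pinned tag (name, paths and ports are placeholders)
    docker run -d \
      --name nextcloud \
      -e PUID=99 -e PGID=100 -e TZ=Europe/London \
      -p 443:443 \
      -v /mnt/cache/appdata/nextcloud:/config \
      -v /mnt/user/nextcloud:/data \
      lscr.io/linuxserver/nextcloud:version-29.0.3

To move to a newer Nextcloud later, you change only the tag and recreate the container; the LSIO startup scripts then run the upgrade.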
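
Regarding post 3 (keeping the Docker tag and the installed Nextcloud version aligned): a rough sketch of the update flow, assuming the container is named nextcloud and that the image's occ helper is on the path; the newer tag shown is purely illustrative.

    # Point at the next version tag, then pull and recreate the container with it
    docker pull lscr.io/linuxserver/nextcloud:version-29.0.4    # illustrative tag only
    docker stop nextcloud && docker rm nextcloud
    # ...recreate with the same volumes/ports as before, but the new tag...

    # Afterwards, confirm the installed version and that no upgrade is pending
    docker exec -it nextcloud occ status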
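
Regarding post 4 (the duplicate-entry errors): the Nextcloud log normally records the full failing query, which names the table involved; a hedged way to pull that context out, assuming the LSIO layout where the data directory (and therefore nextcloud.log) sits at /data inside a container named nextcloud.

    # Show the first full log entry around one of the duplicated keys
    docker exec -it nextcloud grep -m1 '1-2934029-files' /data/nextcloud.log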
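
Regarding post 5 (whether the SMB Extra lines are still needed): one way to see what Samba is actually running with, and therefore whether any of those fruit/veto settings are already covered by the current defaults, is to dump the effective configuration with testparm from the Unraid terminal; the grep pattern is just an example filter.

    # Print the effective Samba configuration and filter for the relevant options
    testparm -s 2>/dev/null | grep -Ei 'fruit|veto|aio|sendfile|multi channel'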
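
Regarding post 8 (pointing the Unifi container at Mongo by IP): a minimal docker run sketch using the MONGO_* variables the linuxserver/unifi-network-application image documents; the IP, credentials, database name, ports and paths are placeholders, and you would likely pin a version tag rather than :latest, as per post 1.

    docker run -d \
      --name unifi-network-application \
      -e PUID=99 -e PGID=100 -e TZ=Europe/London \
      -e MONGO_HOST=192.168.1.200 \
      -e MONGO_PORT=27017 \
      -e MONGO_USER=unifi \
      -e MONGO_PASS=changeme \
      -e MONGO_DBNAME=unifi \
      -p 8443:8443 -p 3478:3478/udp \
      -v /mnt/cache/appdata/unifi-network-application:/config \
      lscr.io/linuxserver/unifi-network-application:latest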
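
Regarding post 12 (quieting MongoDB's logging): if you want the reduced logging to survive container restarts, the --quiet flag can be passed as the container's command (for example via the "Post Arguments" field of an Unraid template) rather than from an exec shell; a sketch assuming the official mongo:4.4 image and an example appdata path.

    # The official mongo image forwards leading '-' arguments to mongod,
    # so '--quiet' reduces routine log output on every start.
    docker run -d \
      --name MongoDB \
      -v /mnt/cache/appdata/mongodb:/data/db \
      mongo:4.4 --quiet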