jademonkee


Posts posted by jademonkee

  1. On 2/22/2024 at 6:33 PM, Squid said:

    Oh, and FWIW veto files slows everything down.

    Does it? Can you elaborate? I have set my Mac/SMB settings up as per the following post:

    Which includes vetoing the Mac dot files.

    I note that the 6.12.6 changelog mentions fruit changes to help Time Machine restores.

     

    Currently my SMB Extra is as follows:

    veto files = /._*/.DS_Store/
    aio read size = 1
    aio write size = 1
    strict locking = No
    use sendfile = No
    server multi channel support = Yes
    readdir_attr:aapl_rsize = no
    readdir_attr:aapl_finder_info = no
    readdir_attr:aapl_max_access = no
    fruit:posix_rename = yes
    fruit:metadata = stream

    Is there something you recommend I change? Either deleting it entirely (perhaps it's set up by default now?), or anything in particular?

    Are the recommended settings for Mac OS available somewhere so that I can set them up?

    Sorry to hijack the thread (I just wanted to respond to the news that veto slows things down), and thanks for your help.
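
    In the meantime, I might try timing a directory listing with and without the veto line, to see whether it actually makes a difference here. Something along these lines (share name and credentials are placeholders, and this is just my own rough test idea):

    # Time a listing over SMB with the veto rule in place, then again with it removed
    time smbclient //tower/music -U myuser%mypass -c 'ls'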

  2. UPDATE:
    I remembered that I updated the plugin this morning (hours after the backup had run), so I thought I'd take a look at all the settings and manually run a backup.

    All the files are in the new directory, as expected.

    I don't know what caused it, but it's working as expected now.

    Please discard my previous post.

  3. My weekly backup ran successfully this morning; however, the directory is empty.

    Logs are as follows:

    [28.02.2024 05:00:01][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
    [28.02.2024 05:00:01][ℹ️][Main] Backing up from: /mnt/cache/appdata
    [28.02.2024 05:00:01][ℹ️][Main] Backing up to: /mnt/user/Backup/ca_backup_app/ab_20240228_050001
    [28.02.2024 05:00:01][ℹ️][Main] Selected containers: DiskSpeed, LogitechMediaServer, MongoDB, QDirStat, Redis, Sonarr, binhex-sabnzbdvpn, gPodder, mariadb-nextcloud, nextcloud, phpmyadmin, plex, radarr, swag, unifi-network-application
    [28.02.2024 05:00:01][ℹ️][Main] Saving container XML files...
    [28.02.2024 05:00:01][ℹ️][Main] Method: Stop all container before continuing.
    [28.02.2024 05:00:01][ℹ️][unifi-network-application] Stopping unifi-network-application... done! (took 8 seconds)
    [28.02.2024 05:00:09][ℹ️][MongoDB] Stopping MongoDB... done! (took 1 seconds)
    [28.02.2024 05:00:10][ℹ️][QDirStat] No stopping needed for QDirStat: Not started!
    [28.02.2024 05:00:10][ℹ️][DiskSpeed] No stopping needed for DiskSpeed: Not started!
    [28.02.2024 05:00:10][ℹ️][phpmyadmin] No stopping needed for phpmyadmin: Not started!
    [28.02.2024 05:00:10][ℹ️][gPodder] Stopping gPodder... done! (took 5 seconds)
    [28.02.2024 05:00:15][ℹ️][Sonarr] Stopping Sonarr... done! (took 4 seconds)
    [28.02.2024 05:00:19][ℹ️][radarr] Stopping radarr... done! (took 5 seconds)
    [28.02.2024 05:00:24][ℹ️][nextcloud] Stopping nextcloud... done! (took 5 seconds)
    [28.02.2024 05:00:29][ℹ️][swag] Stopping swag... done! (took 4 seconds)
    [28.02.2024 05:00:33][ℹ️][Redis] Stopping Redis... done! (took 0 seconds)
    [28.02.2024 05:00:33][ℹ️][plex] Stopping plex... done! (took 4 seconds)
    [28.02.2024 05:00:37][ℹ️][LogitechMediaServer] Stopping LogitechMediaServer... done! (took 3 seconds)
    [28.02.2024 05:00:40][ℹ️][mariadb-nextcloud] Stopping mariadb-nextcloud... done! (took 4 seconds)
    [28.02.2024 05:00:44][ℹ️][binhex-sabnzbdvpn] Stopping binhex-sabnzbdvpn... done! (took 2 seconds)
    [28.02.2024 05:00:46][ℹ️][Main] Starting backup for containers
    [28.02.2024 05:00:46][ℹ️][unifi-network-application] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][unifi-network-application] Calculated volumes to back up: /mnt/cache/appdata/unifi-network-application
    [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backing up unifi-network-application...
    [28.02.2024 05:00:46][ℹ️][unifi-network-application] Backup created without issues
    [28.02.2024 05:00:46][⚠️][unifi-network-application] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:46][ℹ️][MongoDB] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][⚠️][MongoDB] MongoDB does not have any volume to back up! Skipping. Please consider ignoring this container.
    [28.02.2024 05:00:46][ℹ️][QDirStat] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][QDirStat] Calculated volumes to back up: /mnt/cache/appdata/QDirStat
    [28.02.2024 05:00:46][ℹ️][QDirStat] Backing up QDirStat...
    [28.02.2024 05:00:46][ℹ️][QDirStat] Backup created without issues
    [28.02.2024 05:00:46][⚠️][QDirStat] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:46][ℹ️][DiskSpeed] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][DiskSpeed] Calculated volumes to back up: /mnt/cache/appdata/DiskSpeed
    [28.02.2024 05:00:46][ℹ️][DiskSpeed] Backing up DiskSpeed...
    [28.02.2024 05:00:46][ℹ️][DiskSpeed] Backup created without issues
    [28.02.2024 05:00:46][⚠️][DiskSpeed] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:46][ℹ️][phpmyadmin] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][phpmyadmin] Calculated volumes to back up: /mnt/cache/appdata/phpmyadmin
    [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backing up phpmyadmin...
    [28.02.2024 05:00:46][ℹ️][phpmyadmin] Backup created without issues
    [28.02.2024 05:00:46][⚠️][phpmyadmin] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:46][ℹ️][gPodder] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][gPodder] Calculated volumes to back up: /mnt/cache/appdata/gPodder
    [28.02.2024 05:00:46][ℹ️][gPodder] Backing up gPodder...
    [28.02.2024 05:00:46][ℹ️][gPodder] Backup created without issues
    [28.02.2024 05:00:46][⚠️][gPodder] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:46][ℹ️][Sonarr] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:46][ℹ️][Sonarr] Calculated volumes to back up: /mnt/cache/appdata/sonarr
    [28.02.2024 05:00:46][ℹ️][Sonarr] Backing up Sonarr...
    [28.02.2024 05:00:49][ℹ️][Sonarr] Backup created without issues
    [28.02.2024 05:00:49][⚠️][Sonarr] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:49][ℹ️][radarr] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:49][ℹ️][radarr] Calculated volumes to back up: /mnt/cache/appdata/radarr
    [28.02.2024 05:00:49][ℹ️][radarr] Backing up radarr...
    [28.02.2024 05:00:58][ℹ️][radarr] Backup created without issues
    [28.02.2024 05:00:58][⚠️][radarr] Skipping verification for this container because its not wanted!
    [28.02.2024 05:00:58][ℹ️][nextcloud] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:00:58][ℹ️][nextcloud] Calculated volumes to back up: /mnt/cache/appdata/nextcloud
    [28.02.2024 05:00:58][ℹ️][nextcloud] Backing up nextcloud...
    [28.02.2024 05:01:11][ℹ️][nextcloud] Backup created without issues
    [28.02.2024 05:01:11][⚠️][nextcloud] Skipping verification for this container because its not wanted!
    [28.02.2024 05:01:11][ℹ️][swag] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:01:11][ℹ️][swag] Calculated volumes to back up: /mnt/cache/appdata/swag
    [28.02.2024 05:01:11][ℹ️][swag] Backing up swag...
    [28.02.2024 05:01:12][ℹ️][swag] Backup created without issues
    [28.02.2024 05:01:12][⚠️][swag] Skipping verification for this container because its not wanted!
    [28.02.2024 05:01:12][ℹ️][Redis] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:01:12][ℹ️][Redis] Calculated volumes to back up: /mnt/cache/appdata/redis/data
    [28.02.2024 05:01:12][ℹ️][Redis] Backing up Redis...
    [28.02.2024 05:01:12][ℹ️][Redis] Backup created without issues
    [28.02.2024 05:01:12][⚠️][Redis] Skipping verification for this container because its not wanted!
    [28.02.2024 05:01:12][ℹ️][plex] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:01:12][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex
    [28.02.2024 05:01:12][ℹ️][plex] Backing up plex...
    [28.02.2024 05:05:29][ℹ️][plex] Backup created without issues
    [28.02.2024 05:05:29][⚠️][plex] Skipping verification for this container because its not wanted!
    [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Calculated volumes to back up: /mnt/cache/appdata/LogitechMediaServer
    [28.02.2024 05:05:29][ℹ️][LogitechMediaServer] Backing up LogitechMediaServer...
    [28.02.2024 05:05:45][ℹ️][LogitechMediaServer] Backup created without issues
    [28.02.2024 05:05:45][⚠️][LogitechMediaServer] Skipping verification for this container because its not wanted!
    [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Calculated volumes to back up: /mnt/cache/appdata/mariadb
    [28.02.2024 05:05:45][ℹ️][mariadb-nextcloud] Backing up mariadb-nextcloud...
    [28.02.2024 05:05:50][ℹ️][mariadb-nextcloud] Backup created without issues
    [28.02.2024 05:05:50][⚠️][mariadb-nextcloud] Skipping verification for this container because its not wanted!
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] '/mnt/cache/appdata/binhex-sabnzbdvpn/data' is within mapped volume '/mnt/cache/appdata/binhex-sabnzbdvpn'! Ignoring!
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Should NOT backup external volumes, sanitizing them...
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Calculated volumes to back up: /mnt/cache/appdata/binhex-sabnzbdvpn
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backing up binhex-sabnzbdvpn...
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Backup created without issues
    [28.02.2024 05:05:50][⚠️][binhex-sabnzbdvpn] Skipping verification for this container because its not wanted!
    [28.02.2024 05:05:50][ℹ️][Main] Set containers to previous state
    [28.02.2024 05:05:50][ℹ️][binhex-sabnzbdvpn] Starting binhex-sabnzbdvpn... (try #1) done!
    [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] Starting mariadb-nextcloud... (try #1) done!
    [28.02.2024 05:05:58][ℹ️][mariadb-nextcloud] The container has a delay set, waiting 5 seconds before carrying on
    [28.02.2024 05:06:03][ℹ️][LogitechMediaServer] Starting LogitechMediaServer... (try #1) done!
    [28.02.2024 05:06:05][ℹ️][plex] Starting plex... (try #1) done!
    [28.02.2024 05:06:08][ℹ️][Redis] Starting Redis... (try #1) done!
    [28.02.2024 05:06:10][ℹ️][swag] Starting swag... (try #1) done!
    [28.02.2024 05:06:10][ℹ️][swag] The container has a delay set, waiting 15 seconds before carrying on
    [28.02.2024 05:06:25][ℹ️][nextcloud] Starting nextcloud... (try #1) done!
    [28.02.2024 05:06:28][ℹ️][radarr] Starting radarr... (try #1) done!
    [28.02.2024 05:06:30][ℹ️][Sonarr] Starting Sonarr... (try #1) done!
    [28.02.2024 05:06:32][ℹ️][gPodder] Starting gPodder... (try #1) done!
    [28.02.2024 05:06:35][ℹ️][phpmyadmin] Starting phpmyadmin is being ignored, because it was not started before (or should not be started).
    [28.02.2024 05:06:35][ℹ️][DiskSpeed] Starting DiskSpeed is being ignored, because it was not started before (or should not be started).
    [28.02.2024 05:06:35][ℹ️][QDirStat] Starting QDirStat is being ignored, because it was not started before (or should not be started).
    [28.02.2024 05:06:35][ℹ️][MongoDB] Starting MongoDB... (try #1) done!
    [28.02.2024 05:06:35][ℹ️][MongoDB] The container has a delay set, waiting 5 seconds before carrying on
    [28.02.2024 05:06:40][ℹ️][unifi-network-application] Starting unifi-network-application... (try #1) done!
    [28.02.2024 05:06:43][ℹ️][Main] Backing up the flash drive.
    [28.02.2024 05:07:16][ℹ️][Main] Flash backup created!
    [28.02.2024 05:07:16][ℹ️][Main] Checking retention...
    [28.02.2024 05:07:16][ℹ️][Main] Delete old backup: /mnt/user/Backup/ca_backup_app/ab_20240124_050001
    [28.02.2024 05:07:18][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
    [28.02.2024 05:07:18][ℹ️][Main] ❤️

     

    I also created a private debug log. The debug log id is: 36756508-9131-4a72-9764-cd0ed83a1b75

  4. 1 hour ago, daninet said:

    I'm having trouble with the setup and I'm not sure why. I have tried to follow the instructions, but the unifi web interface is not reachable.

    These are the steps I took:

    1. Back up the deprecated ls.io unifi container.

    2. Install mongodb with the init script

    init-mongo.js

    db.getSiblingDB("unifi").createUser({user: "daninet", pwd: "pwd", roles: [{role: "dbOwner", db: "unifi"}]});
    db.getSiblingDB("unifi_stat").createUser({user: "daninet", pwd: "pwd", roles: [{role: "dbOwner", db: "unifi_stat"}]});

     

    docker run

    docker run
      -d
      --name='MongoDB'
      --net='bridge'
      -e TZ="Europe/Budapest"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="TEVENAS"
      -e HOST_CONTAINERNAME="MongoDB"
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/jason-bean/docker-templates/master/jasonbean-repo/mongo.sh-600x600.png'
      -p '27017:27017/tcp'
      -v '/mnt/user/appdata/mongodb/':'/data/db':'rw'
      -v '/mnt/user/appdata/mongodb/mongo_init/init-mongo.js':'/docker-entrypoint-initdb.d/init-mongo.js':'rw' 'mongo'
    
    53b1599bcd21ae9bca1461eb19327e5e4a4952ee6df196f2025fe28c26cd325a

     

    Then set up the new unifi container:

    docker run
      -d
      --name='unifi-network-application'
      --net='bridge'
      -e TZ="Europe/Budapest"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="TEVENAS"
      -e HOST_CONTAINERNAME="unifi-network-application"
      -e 'MONGO_USER'='daninet'
      -e 'MONGO_PASS'='pwd'
      -e 'MONGO_HOST'='unifi-db'
      -e 'MONGO_PORT'='27017'
      -e 'MONGO_DBNAME'='unifi'
      -e 'MEM_LIMIT'='1024'
      -e 'MEM_STARTUP'='1024'
      -e 'MONGO_TLS'=''
      -e 'MONGO_AUTHSOURCE'=''
      -e 'PUID'='99'
      -e 'PGID'='100'
      -e 'UMASK'='022'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='https://192.168.1.200:8443/'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/unifi-network-application-icon.png'
      -p '8443:8443/tcp'
      -p '3478:3478/udp'
      -p '10001:10001/udp'
      -p '8080:8080/tcp'
      -p '1900:1900/udp'
      -p '8843:8843/tcp'
      -p '8880:8880/tcp'
      -p '6789:6789/tcp'
      -p '5514:5514/udp'
      -v '/mnt/user/appdata/unifi-network-application':'/config':'rw' 'lscr.io/linuxserver/unifi-network-application'
    
    aaa2d5762f3f99c50def5ef5a419972aba766006c1351c58382cfe57347f8bf5

     

    Now the container log is telling me the following:

    *** Waiting for MONGO_HOST unifi-db to be reachable. ***
    *** Defined MONGO_HOST unifi-db is not reachable, cannot proceed. ***

     

    What exactly is unifi-db? I did not define it anywhere, but the documentation says to use this.

    They are both on the same subnet. My other DBs are working on this subnet.

     

     


    I just specify the IP address of the Mongo host, rather than a hostname. So delete the Unifi controller image (it only evaluates the name on the first run) and set MONGO_HOST as:

    192.168.1.200

    Note that I don't use "bridge" but a custom network, but I think you'll be fine specifying your server's IP.
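
    If you'd rather keep the hostname approach, my understanding is that it relies on a user-defined Docker network so the containers can resolve each other by name. A rough sketch (the network name is just an example, I haven't tested this exact setup, and the other flags stay as in your existing run commands):

    # Create a user-defined network and run both containers on it;
    # Docker's built-in DNS then resolves container names like 'unifi-db'
    docker network create unifi-net
    docker run -d --name unifi-db --net unifi-net ... mongo
    docker run -d --name unifi-network-application --net unifi-net -e MONGO_HOST=unifi-db ... lscr.io/linuxserver/unifi-network-application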

  5. 9 hours ago, Facsimile8512 said:

    No clue if it is an actual issue in the latest update or if the warning is just a leftover, but I ran the recommended command "sysctl vm.overcommit_memory=1" in Unraid's main terminal and that seems to have applied it, whatever it actually means. The warning, though, has been there from the beginning and stays around; it doesn't seem to actually verify whether this is something that needs to be done on the machine it's installed on.

    I am not sure if my data just isn't large enough or what, but I'm not really seeing any benefit from Redis; in fact, I think it made response times worse, though it does get rid of that security warning in the overview. Not sure if I will keep it around, since so far it just seems like unnecessary (if minor) resource utilization.

    Yeah, I'm clueless as to whether it made any difference for me, but I figure that I've gone to the trouble of setting it up, and it takes up so few resources, that I might as well keep it around. I initially installed it to try and get my photos to load faster in my photo app. I applied so many fixes that I have no idea what fixed it, but it's working well enough now, so I don't want to rock the boat.
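
    For what it's worth, if you want that overcommit setting to survive a reboot, my understanding is that the usual Unraid approach is to add it to the go file. An untested sketch:

    # Append the sysctl to Unraid's boot script so it is reapplied after a reboot
    # (/boot/config/go is the standard Unraid location; adjust if your setup differs)
    echo 'sysctl -w vm.overcommit_memory=1' >> /boot/config/go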

  6. On 1/10/2024 at 11:18 AM, jademonkee said:

    My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why.

    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}}
    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
    {"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W",  "c":"NETWORK",  "id":23022,   "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
    {"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
    {"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
    {"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
    {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
    {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
    {"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
    {"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
    {"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

    I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right?

    I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir?

    Really tempted to move over to PeteA's all in one Docker now...

    Sooooo the Mongo docker now starts/runs fine again...

    There was an update to the Mongo container, so maybe it was just a bug? Or maybe some lock on some file expired?

    I don't know what's going on anymore lol

  7. My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why.

    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}}
    {"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
    {"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
    {"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W",  "c":"NETWORK",  "id":23022,   "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
    {"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}}
    {"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
    {"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
    {"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
    {"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
    {"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
    {"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
    {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
    {"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
    {"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
    {"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
    {"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
    {"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
    {"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
    {"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

    I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right?

    I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir?

    Really tempted to move over to PeteA's all in one Docker now...
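
    If anyone else hits this, one thing that might be worth trying (I haven't verified it) is clearing the stale socket from inside the container before restarting it:

    # Inspect and remove the stale socket file inside the MongoDB container, then restart it
    # (container name as in my setup; this is a guess at a fix, not something I've confirmed)
    docker exec MongoDB ls -l /tmp/mongodb-27017.sock
    docker exec MongoDB rm -f /tmp/mongodb-27017.sock
    docker restart MongoDB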

  8. 4 hours ago, jademonkee said:

    Thanks for the info.

    FWIW I opened a terminal to the mongodb container and entered

    mongod --quiet

    to stop the logging.

    If anything goes wrong, I guess I'll turn the logging back on, but for the moment I don't really need it (hopefully that's not naive of me 😅)

    I spoke too soon: it did not stop the logging...
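
    If I have another go at quieting it, I suspect the right place is the container's own startup rather than a shell inside it. Something like this in the Unraid template (untested on my end, but --quiet is a standard mongod flag):

    # Unraid template > Post Arguments (appended after the image name, so it's passed to mongod)
    --quiet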

  9. 9 hours ago, bmartino1 said:

     

    That seems to be the default, yes. From what I can tell, the logging is set to verbose by default. While I don't see a disk space hit, that doesn't mean constant reads/writes are wanted. The data here is what the UniFi controller uses for adoption, plus statistical data for access and connections.


    Do this at your own risk:

    You might be able to disable the log by adding a log path and setting it to null under the post arguments.
    --logpath /dev/null


    I advise against this, mainly because it comes down to how you set up and want to run your Mongo database, and I'm not sure if MongoDB will take the post argument.
    There are many MongoDB options to consider.

    LSIO went with the quick fix of a single JS file that initializes the MongoDB data, instead of a Docker Compose option that sets the MongoDB settings. As an Unraid Docker container, it could instead have a template with environment variables to set the necessary data rather than creating the JS file: https://hub.docker.com/_/mongo#:~:text=mongo/mongod.conf-,Environment Variables,-When you start

    But I found that to be a lot more work compared to a simpler set-it-and-go solution...

    As long as you have a unifi DB and password set, it may be better to run the default Docker container and create the MongoDB databases using MongoDB commands.

     

    Thanks for the info.

    FWIW I opened a terminal to the mongodb container and entered

    mongod --quiet

    to stop the logging.

    If anything goes wrong, I guess I'll turn the logging back on, but for the moment I don't really need it (hopefully that's not naive of me 😅)

  10. Just moved over to the two-container solution using a combination of the instructions from LS.IO and in this thread.

    Everything seems to be working fine now, except that my mongodb log is constantly being written to with entries like the following:

    {"t":{"$date":"2024-01-05T15:59:22.000+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.19.0.3:50624","client":"conn12","doc":{"driver":{"name":"mongo-java-driver|sync","version":"4.6.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"6.1.64-Unraid"},"platform":"Java/Private Build/17.0.9+9-Ubuntu-122.04"}}}
    {"t":{"$date":"2024-01-05T15:59:22.052+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn12","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"unifi","authenticationDatabase":"unifi","remote":"172.19.0.3:50624","extraInfo":{}}}
    {"t":{"$date":"2024-01-05T15:59:23.293+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470363:293326][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6840, snapshot max: 6840 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-05T16:00:23.344+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470423:344527][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6918, snapshot max: 6918 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-05T16:01:23.377+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470483:377873][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6921, snapshot max: 6921 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-05T16:02:23.398+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470543:398781][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6969, snapshot max: 6969 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
    {"t":{"$date":"2024-01-05T16:03:23.425+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470603:425199][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6974, snapshot max: 6974 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}

    Is anyone else running the Mongo container getting this?

    Any idea what it means? Is the log by default set to verbose? This will eat up space really quickly.

  11. 23 hours ago, dlandon said:

    Add the archive option '-a'.

    Ok, I've changed the command to:

    rsync -av --times --delete --exclude '.Recycle.Bin' /mnt/disk1/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE

    And the log is filled with e.g.

    rsync: [generator] chown "/mnt/disks/ADATA2TB/mymusic/Albums/Conjoint/[2000] Earprints [FLAC]/01. Earprint Nr1.flac" failed: Operation not permitted (1)

    For every file and dir (I think because exfat doesn't support ownership/permissions, and the -a flag is equivalent to -rlptgoD, which includes owner, group, and permissions).

    The rsync man page, however, suggests that --size-only could be a useful flag here:
    "This modifies rsync's 'quick check' algorithm for finding files that need to be transferred, changing it from the default of transferring files with either a changed size or a changed last-modified time to just looking for files that have changed in size. This is useful when starting to use rsync after using another mirroring system which may not preserve timestamps exactly."

    So I've changed my rsync command to:

    rsync -vrltD --size-only --delete --exclude '.Recycle.Bin' /mnt/disk1/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE

    And it worked for a run (though there were no changes to sync). I'll find out whether this solution has worked when daylight savings next changes (though I suspect it will be fine). I'll report back if changing to 'size only' causes any weird sync issues.

    Thanks for your help.

  12. 8 minutes ago, dlandon said:

    It should.

    Ah, I already specified -t in my command (-vrltD --delete), so I don't think it'll fix it.

    The issue is that the server's time changes for DST, but the timezone for the exfat disk isn't specified, so rsync thinks that every file has shifted by an hour.

    If it isn't possible to specify the TZ for a specific disk in UD, I'll keep hunting around for an rsync solution.
    A workaround is to specify "--modify-window=3601", though this means that if I modify something, sync it, then modify it again all within an hour, it won't sync those changes next time. I can probably keep this in mind, though.
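
    For reference, my command with that workaround bolted on would look something like this (untested as yet):

    rsync -vrltD --modify-window=3601 --delete --exclude '.Recycle.Bin' /mnt/user/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE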

    Thanks.

  13. 31 minutes ago, dlandon said:

    What is it you are trying to accomplish?  I assume you are copying files with differing source and destination time zones.

     

    Please provide some clarification:

    • What is the issue with the time zones?  Where are you copying files from and to?
    • What file copy command are you using?  Rsync?

     

    Hi, sorry, I originally quoted an earlier detailed post, which can be found here:

    The summary is that I have an exfat disk that I keep music on and which is accessed by Windows and Mac computers (thus my preference for exfat). I sync it from my server using UD and rsync (the rsync script is in the original post).

    I posted a while ago that it occasionally syncs the entire drive contents, rather than just what has changed, and I now realise it's because of daylight savings changing, and exfat not using unix time. At the time I thought maybe the drive was failing.

    One solution is to mount the device using the option "tz-utc", so that the timestamps don't change when the UK moves over to DST (we're effectively on UTC otherwise). So I was wondering whether it's possible to mount specific disks as UTC, and how to do it. I would think that the script runs too late to add that mounting option, correct?

    I'm open to other suggestions, though, including modifying the rsync command/script, or even moving to a format freely and reliably supported by Windows and Mac that also supports unix timestamps/DST.

    The rsync command from the script is:

    rsync -vrltD --delete --exclude '.Recycle.Bin' /mnt/user/mymusic $MOUNTPOINT 2>&1 >> $LOGFILE

    Many thanks.

  14. On 3/29/2023 at 12:26 PM, jademonkee said:

    Hi there. I assume that this is the best part of the forum to post this, but apologies if not.

    I use UD to trigger a backup script when I plugin one of two USB hard drives. One I plug in every week, and another I do sporadically.

    Every so often, when I connect a disk and it kicks off the script, it instead begins to copy EVERYTHING.

    It happened again yesterday when I plugged in the sporadic hard drive, so I cancelled the script through the UI's "Abort" button, but it seemed that rsync kept running - not knowing what else to do, I rebooted the server, forcing a parity check on reboot 🙄

    Today I ran my weekly backup and it ran fine. But then I plugged in the sporadic disk again, and it picked up where it left off yesterday, and continues to copy over everything. So that's about 1.5TB to be copied, which not only takes forever, but makes the disk get hot hot hot, so this is not ideal.

    Any idea why this could be happening? Could it somehow (though I don't see how) be related to daylight savings starting last Sunday, and this being the first time I've attached the disk since then? (FWIW, the server time updated automatically, and I don't use the disk on any other machine, so I don't see how... but worth pondering). Or is there something silly in my script?

     

    The script that fires is as follows (I nabbed it from either this forum or somewhere else on the internet - apologies to the original author):

    [script removed for space: see original post]

    Thanks for your help!

    So, the same thing happened again today: I plugged in my 'sporadic' backup disk and it started copying everything again. I went back to find my old post to see what the solution was, and I couldn't help but notice that I mentioned in my original post that this happened the first time plugging in the disk since daylight savings time started.

    Well, this is actually the first time I've plugged in the disk since daylight savings time ended and here we are with it copying over everything again.

    I note that it doesn't happen with my other backup disk, which is formatted in ext4.

    Could this be related to the disk being exfat and the file timestamps changing relative to DST?

    Is the solution to run my server as UTC, or is the solution to modify my script?

    If it's running the server in UTC, what are the drawbacks? I'm assuming I can still choose timezones in Docker apps, yes (my Squeezeboxes are also my clocks)?

    Thanks for your help and input.

     

    EDIT: I note that this page, https://askubuntu.com/questions/1323668/one-volume-did-not-go-dst-with-the-rest-of-the-system, mentions that you can mount disks using the option:

    tz=UTC

    How would I go about adding that option to UD for this disk?
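
    In the meantime, I suppose a quick way to confirm it really is the timestamps shifting is to compare the same file's modification time on the array and on the exfat copy after the clocks change. For example (the file name here is just an example; $MOUNTPOINT is the mount point from the script):

    # Compare mtimes of the same file on the array share and on the exfat disk
    stat -c '%y  %n' "/mnt/user/mymusic/SomeAlbum/track01.flac" "$MOUNTPOINT/mymusic/SomeAlbum/track01.flac"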

  15. I know I'm digging up an old thread here, but I thought I'd chime in with something I found out yesterday:

    For some reason, my LSIO NextCloud instance had the log level defined in config.php set at 0 (debug), rather than the default 2 (warn). I'd never set this myself, so I'm assuming it's a silly default set by LSIO.

    Since I changed it, my instance seems snappier and my photo thumbnails in this third party Android app (https://play.google.com/store/apps/details?id=com.nkming.nc_photos.paid) load heaps quicker as I jump through the timeline.

    So, if anyone on the LSIO NextCloud container is experiencing problems, double check the log level set in your config.php

    HTH
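
    For reference, on the LSIO container the file lives under the appdata path, so (assuming the usual layout - adjust the path if yours differs) you can check it quickly with something like:

    # Check the current Nextcloud log level; 0 = debug, 2 = warn (the default)
    grep loglevel /mnt/user/appdata/nextcloud/www/nextcloud/config/config.php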

  16. On 9/7/2023 at 1:33 AM, Iker said:

    Update 2023.09.05.31: It was just a test for the new CI/CD system I'm using: Sorry about that. 

    I installed it anyway and am now running "2023.09.05.31.main"

    Is it suitable for public consumption?

  17. 1 hour ago, DrSiva2022 said:

     

    tried and this is the result :

     

    root@a5ca99f8f429:/# mysql -uroot -p
    Enter password: 
    ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
    root@a5ca99f8f429:/# 

     

    Sorry, I don't know what that error means.

    LS.io no longer offers support through this forum; they offer it through their Discord instead, so maybe try there.

  18. On 7/17/2023 at 3:56 PM, bambi73 said:

    Updated from 5.11.5 and everything is working fine (not using Plex). Only exception (expected 😄) is MACVLAN.

     

     

    I have Unifi too (UDM-Pro) and I wouldn't describe the state with IPVLAN as "no issues". It's working (great), but Unifi itself doesn't consider the state of the network correct either (not so great):

    • It occasionally complains about multiple IPs assigned to a single MAC in Admin messages
    • In the Client view my Unraid server shows a random IP address (probably the last one updated in the ARP table), and it keeps changing
    • It messes up statistics (and probably QoS too)
    • The router is constantly (every 15 s) and unsuccessfully ARPing to IP addresses which aren't the "main" one at the moment (not sure if static ARP records would fix it, because the ARP table contains the "correct" records)

    Anyway, it's good you are planning to involve kernel developers in the problem. Before you wrote this, I intended to ask whether there are any Linux kernel and/or Docker engine issues in their trackers (as I didn't find anything relevant). I was a bit hopeful that the recent (kernel 6.4) commits to macvlan.c would fix things.

    Just adding to the voices on this issue:

    I use MACVLAN on Docker, and have had no problems with that. I have Unifi gear (USG and 2x APs, with the controller running in Docker), and have no problems (except for it complaining that my bonded eth on the server shares an IP address).

    If there's anything I can do to help troubleshoot this problem (contributing to a known-working hardware list, for instance), feel free to reach out.

  19. 2 hours ago, jademonkee said:

    Does anyone know if I can just delete all the .err files from the folder now?

    FWIW, I copied all the .err files to a new directory (just in case), then deleted all of them except the one listed in the logs at start-up.

    I then renamed the active log file to <filename>.err.old

    I then logged into the mysql console by opening the mariadb console via the Unraid GUI, then issuing the command:

    mysql -uroot -p

    As per https://mariadb.com/kb/en/flush/, I then issued the command to close and reopen the .err file (basically recreating it):

    flush error logs;

    Now the .err file is only kilobytes in size, and I have recovered 3GB of space on the cache drive by clearing the errors that had been accumulating for a couple of years.

    Fingers crossed I haven't messed anything up!

  20. 21 hours ago, jademonkee said:

    Hi there,

    I remember that after this image was rebased to Alpine, my start-up log (the one accessed by clicking on the mariadb icon in the Unraid Docker UI) started producing the following each time it started:

    [custom-init] No custom files found, skipping...
    230720 11:04:18 mysqld_safe Logging to '/config/databases/98d77ae0f2c7.err'.
    230720 11:04:18 mysqld_safe Starting mariadbd daemon with databases from /config/databases
    [ls.io-init] done

    Everything seemed to work correctly, so I didn't really think much about that mention of a .err file.

    Fast forward a couple of years (maybe?), and I just realised that my mariadb appdata directory is about 4GB in size, while my db backups are only about 80MB. I was worried that the backups weren't running correctly, so I started digging around in the 'db-backup' Docker that I use to back it up. I couldn't find anything in the logs.

    Long story short, I have just under 4GB of .err files in the mariadb appdata directory.

    Opening up the .err file that is referenced in the above log, I see that it's constantly filling with:

    2023-07-20 13:43:56 362 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').
    2023-07-20 13:43:56 362 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).
    

    Over and over and over again.

    How do I fix this?

    The only app I use mariadb for is Nextcloud, which I also use the LSIO Docker for.

    Thanks for your help.

    I Googled the error this morning and it seems to be a problem from a poor upgrade between mariadb versions (i.e. the container updated the version, but there were manual steps needed inside the container that I was not aware of).

    This thread shed some light on it:

    https://github.com/photoprism/photoprism/issues/2382

    Specifically, I ran the following command, and now the error isn't constantly spamming the .err logfiles.

    mysql_upgrade --user=root --password=<root_pwd>
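
    For anyone else hitting this: I believe you can also run it in one line from the Unraid shell rather than opening a console in the container (the container name here is mine; replace the password with yours):

    docker exec -it mariadb-nextcloud mysql_upgrade --user=root --password=<root_pwd>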

    Does anyone know if I can just delete all the .err files from the folder now?