[Support] Linuxserver.io - Unifi-Controller



I need to upload config.gateway.json to the controller. This needs to be placed in: <unifi_base>/data/sites/site_ID, according to https://help.ui.com/hc/en-us/articles/215458888-UniFi-USG-Advanced-Configuration

 

This means I would upload the json file to user/appdata/unifi-controller/data/sites/default/

However, there is no /sites/default/ path. Do I just make the sites/default folders myself within user/appdata/unifi-controller/data/?

Link to comment
14 hours ago, wgstarks said:

This should already exist. At least it does on mine.


 

That is weird. I have no such path. Also, notice that I have appdata/unifi-controller instead of your appdata/unifi.

 

Edit: Solved! The folders do show up when you upload an image for a floor plan (in the topology section). You have to enable legacy UI for this. I could have created the folders myself but I didn't want the hassle of setting the right folder permissions.
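For anyone who would rather create the folders by hand, a sketch from the Unraid console (assuming the container runs as nobody:users, i.e. PUID 99 / PGID 100, which matches the templates quoted later in this thread):

mkdir -p /mnt/user/appdata/unifi-controller/data/sites/default
cp config.gateway.json /mnt/user/appdata/unifi-controller/data/sites/default/
chown -R nobody:users /mnt/user/appdata/unifi-controller/data/sites/default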

Edited by Wieuwzak
Link to comment

On a related note - Ubiquiti had a security "event" yesterday where users who logged into the cloud for their network controller or NVR could see other users' video and/or configure other users' networks.

 

That certainly gives one more incentive to keep running the controller locally rather than through the cloud.

https://arstechnica.com/security/2023/12/unifi-devices-broadcasted-private-video-to-other-users-accounts/

Link to comment

For those with the USG 3P gateway from Ubiquiti, the new replacement, the UXG-Lite, is finally in stock again at the Ubiquiti US store.  It has been hard to find in stock since release a month ago. UXG-Lite performs much better than the USG. 

 

https://store.ui.com/us/en/pro/category/all-cloud-keys-gateways/products/uxg-lite

 

There is also the new Unifi Express, which has the controller built in, but it is limited to managing four additional Unifi devices.

Link to comment
  • 2 weeks later...

My 2 cents on the container swap.

I don't fully support/understand the decision to use two containers, but at the least the instructions for setting up the new containers should be clearer for Unraid.

In my case the init script didn't want to work. I edited the variables on the mongo container, but it didn't execute the script on first boot. I had to enter the container console and run:

mongo < path_of_the_init_script_in_the_container.js

At first I thought it was a problem with the connection between the two containers, since I run unifi on its own IP and mongo in bridge. I tried to install ping in the unifi container to troubleshoot, but apt has broken dependencies and wouldn't let me do anything.

So, please, expand the instructions on the GitHub for Unraid. [insert "instruction not clear" joke here]
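Two notes that may explain the init-script behaviour: the official mongo image only executes files in /docker-entrypoint-initdb.d/ on first initialization, i.e. while /data/db is still empty, so if the data directory was created before the script was mounted, the script is silently skipped. And on mongo 6.0+ images the legacy mongo shell has been removed, so the manual equivalent there would be (a sketch, using the script path from the templates in this thread):

mongosh < /docker-entrypoint-initdb.d/init-mongo.js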

Link to comment

For anyone else tearing their hair out trying to get this working where your setup consists of:

 

  • The MongoDB container on the host network AND
  • The unifi-network-application container (UNA) on the br0 network (bridged to your LAN or custom IP range)

and who is getting "Defined MONGO_HOST [xxxxx] is not reachable, cannot proceed" on UNA startup:

Go into Settings > Docker and ensure that "Host access to custom networks" is set to Enabled. Mine was set to Disabled, hence no communication between my host-network MongoDB container and the custom-network UNA container.
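If you need to verify reachability without installing anything (an earlier post noted apt is broken inside the unifi container), bash's built-in /dev/tcp works as a crude port check. A sketch from the Unraid console; MONGO_HOST_IP is a placeholder for your MongoDB address:

docker exec unifi-network-application bash -c 'timeout 3 bash -c "</dev/tcp/MONGO_HOST_IP/27017" && echo reachable || echo unreachable'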

Link to comment
On 10/27/2023 at 2:29 PM, wgstarks said:

This is a summary of the steps I took to migrate to LSIO's unifi-network-application docker now available in CA

 

hi @wgstarks,

 

I tried following your steps but could not get it to work.

 

Unifi Network Controller:

This is what I get if I go to the web URL for UNA:

 HTTP Status 404 – Not Found

Logs for UNA:

[custom-init] No custom files found, skipping...

Docker Run:

docker run
  -d
  --name='unifi-network-application'
  --net='mongonet'
  -e TZ="America/New_York"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Tower"
  -e HOST_CONTAINERNAME="unifi-network-application"
  -e 'MONGO_USER'='unifi'
  -e 'MONGO_PASS'='********************'
  -e 'MONGO_HOST'='mongodb'
  -e 'MONGO_PORT'='27017'
  -e 'MONGO_DBNAME'='unifi'
  -e 'MEM_LIMIT'='4024'
  -e 'MEM_STARTUP'='1024'
  -e 'MONGO_TLS'=''
  -e 'MONGO_AUTHSOURCE'=''
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/unifi-network-application-icon.png'
  -p '8446:8443/tcp'
  -p '3478:3478/udp'
  -p '10001:10001/udp'
  -p '8082:8080/tcp'
  -p '1900:1900/udp'
  -p '8843:8843/tcp'
  -p '8880:8880/tcp'
  -p '6789:6789/tcp'
  -p '5514:5514/udp'
  -v '/mnt/user/appdata/unifi-network-application':'/config':'rw' 'lscr.io/linuxserver/unifi-network-application' 

 

 

Mongo DB:

 

This is what I get when I go to the web URL for mongodb:

It looks like you are trying to access MongoDB over HTTP on the native driver port.

 

Docker Run:

 

docker run
  -d
  --name='MongoDB'
  --net='mongonet'
  -e TZ="America/New_York"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Tower"
  -e HOST_CONTAINERNAME="MongoDB"
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/jason-bean/docker-templates/master/jasonbean-repo/mongo.sh-600x600.png'
  -p '27017:27017/tcp'
  -v '/mnt/user/appdata/mongodb/db-data/':'/data/db':'rw'
  -v '/mnt/user/appdata/mongodb/mongo-init/init-mongo.js':'/docker-entrypoint-initdb.d/init-mongo.js':'ro' 'mongo:7.0-rc' 
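For reference while debugging setups like this: the init script LSIO documents creates one user with dbOwner on both databases, roughly like the following. A sketch from memory, so check the linuxserver.io readme and substitute your own password:

db.getSiblingDB("unifi").createUser({
  user: "unifi",
  pwd: "enter-your-pw",
  roles: [
    { role: "dbOwner", db: "unifi" },
    { role: "dbOwner", db: "unifi_stat" }
  ]
});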

 

 

Can you help figure out what I'm missing or doing wrong?

 

Thanks

 

 

 

Edited by TekDRu
Link to comment

For anyone having issues with the init-mongo.js file/path (it was giving me issues that I was too lazy to troubleshoot): I opted instead to create the mongodb container and then manually create the two DBs and users within it, and that worked for me.

 

Once you're within the console of the mongodb container you can type mongosh and then:

test> use unifi
switched to db unifi
unifi> db.createUser({
  user: "unifi",
  pwd: "enter-your-pw",
  roles: [
    { role: "dbOwner", db: "unifi" }
  ]
})
{ ok: 1 }

unifi> use unifi_stat
switched to db unifi_stat
unifi_stat> db.createUser({
  user: "unifi",
  pwd: "enter-your-pw",
  roles: [
    { role: "dbOwner", db: "unifi_stat" }
  ]
})
{ ok: 1 }
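The same thing can be done non-interactively (a sketch; replace the password):

mongosh --eval '
  db.getSiblingDB("unifi").createUser({user: "unifi", pwd: "enter-your-pw", roles: [{role: "dbOwner", db: "unifi"}]});
  db.getSiblingDB("unifi_stat").createUser({user: "unifi", pwd: "enter-your-pw", roles: [{role: "dbOwner", db: "unifi_stat"}]});
'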

 

Edited by nate1749
Link to comment

I am getting the same problem as TekDRu reported here. When I try to load up the web portal I get 404 not found, on both HTTPS and HTTP paths.

 

Users are created properly in mongo (currently using 7.0.5-rc0).


 

 

 

 

On 12/30/2023 at 10:16 PM, TekDRu said:

hi @wgstarks,

I tried following your steps but could not get it to work. [...]

Edited by Archonite
Link to comment

Re:  404 error.  @Archonite @TekDRu

Your mileage may vary, but I ran into this (a lot) and, after some google-fu, found it to be an '@' symbol in my password, plus some 'legacy' config left over from adjusting and moving things around.

The connection to the DB must actually be successful. The unifi-network-application logs no longer showing connection errors to the DB isn't enough; that appears to be just a network connection test, not an authentication test.

1) The image 'sticks' the database/username/password in the config as a one-time setup, only evaluated on first run. You have one shot to input it correctly; if you change the username/password, you *may* need to recreate the docker container.
1a) I was able to validate this by going to the unifi-network-application console and viewing the file logs/server.log. After changing my password, I could still see the old password in the logs. Same with a change of MONGO_HOST: post-change, it was still the first instance/IP I tried.
2) If your password contains an '@' symbol, or anything else that doesn't URL-encode cleanly, you'll need to submit the password in URL-encoded form (as noted in the community app).

I ran into this, and changing the password from one with an '@', plus recreating the container with a NEW appdata path (unifinetworkapplication versus unifi-network-application), resolved it for me.
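If you would rather URL-encode the password than change it, any JavaScript console (mongosh included) can do the encoding. Note that encodeURIComponent handles '@' but leaves a few characters such as '!' untouched:

encodeURIComponent("p@ssw0rd")   // -> "p%40ssw0rd"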

Edited by xanatos
Format/clarity
Link to comment

Just moved over to the two-container solution using a combination of the instructions from LS.IO and in this thread.

Everything seems to be working fine now, except that my mongodb log is constantly being written to with entries like the following:

{"t":{"$date":"2024-01-05T15:59:22.000+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.19.0.3:50624","client":"conn12","doc":{"driver":{"name":"mongo-java-driver|sync","version":"4.6.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"6.1.64-Unraid"},"platform":"Java/Private Build/17.0.9+9-Ubuntu-122.04"}}}
{"t":{"$date":"2024-01-05T15:59:22.052+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn12","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"unifi","authenticationDatabase":"unifi","remote":"172.19.0.3:50624","extraInfo":{}}}
{"t":{"$date":"2024-01-05T15:59:23.293+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470363:293326][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6840, snapshot max: 6840 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:00:23.344+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470423:344527][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6918, snapshot max: 6918 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:01:23.377+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470483:377873][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6921, snapshot max: 6921 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:02:23.398+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470543:398781][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6969, snapshot max: 6969 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:03:23.425+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470603:425199][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6974, snapshot max: 6974 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}

Is anyone else running the Mongo container getting this?

Any idea what it means? Is the log by default set to verbose? This will eat up space really quickly.

Link to comment
On 1/2/2024 at 1:32 PM, xanatos said:

Your mileage may vary, but I ran into this (a lot) and, after some google-fu, found it to be an '@' symbol in my password, plus some 'legacy' config left over from adjusting and moving things around. [...]

Thanks @xanatos. After changing my password to alphanumeric only and trying again, I was able to get the new unifi controller up and running. Thanks for the tip; hope it helps others who hit the same issue.

Link to comment
On 1/5/2024 at 10:06 AM, jademonkee said:

Just moved over to the two-container solution using a combination of the instructions from LS.IO and in this thread.

Everything seems to be working fine now, except that my mongodb log is constantly being written to with entries like the following:

{"t":{"$date":"2024-01-05T15:59:22.000+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn12","msg":"client metadata","attr":{"remote":"172.19.0.3:50624","client":"conn12","doc":{"driver":{"name":"mongo-java-driver|sync","version":"4.6.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"6.1.64-Unraid"},"platform":"Java/Private Build/17.0.9+9-Ubuntu-122.04"}}}
{"t":{"$date":"2024-01-05T15:59:22.052+00:00"},"s":"I",  "c":"ACCESS",   "id":20250,   "ctx":"conn12","msg":"Authentication succeeded","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"unifi","authenticationDatabase":"unifi","remote":"172.19.0.3:50624","extraInfo":{}}}
{"t":{"$date":"2024-01-05T15:59:23.293+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470363:293326][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6840, snapshot max: 6840 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:00:23.344+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470423:344527][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6918, snapshot max: 6918 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:01:23.377+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470483:377873][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6921, snapshot max: 6921 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:02:23.398+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470543:398781][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6969, snapshot max: 6969 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-05T16:03:23.425+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1704470603:425199][1:0x15273f73d700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6974, snapshot max: 6974 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}

Is anyone else running the Mongo container getting this?

Any idea what it means? Is the log by default set to verbose? This will eat up space really quickly.

 

That seems to be the default, yes. From what I can tell, the logging is set to verbose by default. While I don't see a hit on data space, that doesn't mean constant writes are wanted. The data here is what the unifi controller uses for adoption and for statistical data on access and connections.

Do this at your own risk: you might be able to disable the log by adding a log path and setting it to null under the post arguments:
--logpath /dev/null

I advise against this, mainly because it comes down to how you set up and want to run your Mongo database, and I'm not sure mongod will accept the post argument. There are many MongoDB options.

LSIO went with a quick fix: a single .js file that creates the MongoDB data, rather than a Docker Compose option that sets the MongoDB settings. As an Unraid docker, the container could have a template with environment variables to set the necessary data instead of a single .js file: https://hub.docker.com/_/mongo

But they found that to be a lot more work compared to a simpler set-and-go solution...

As long as you have a unifi DB and password set, it may be better to run the default docker and create the databases using MongoDB commands.

 

Edited by bmartino1
Link to comment
On 1/2/2024 at 12:32 PM, xanatos said:

Re: 404 error. @Archonite @TekDRu

Your mileage may vary, but I ran into this (a lot) and, after some google-fu, found it to be an '@' symbol in my password, plus some 'legacy' config left over from adjusting and moving things around. [...]


You are correct: the unifi application reads these values in a one-time pass, so make sure the password in the unifi template is correct the first time, and stick with it for the MongoDB password. If you want to change it, the recommendation is to back up your controller, remove both the mongo DB and unifi containers, change the password in the file first, and click add container to re-add the dockers and carry on with the new password. (This is why LSIO didn't want to maintain this in the docker.)

Thank you for this; yes, you are correct. In programming there are certain characters to avoid, because when things are parsed by a program, special characters mean "do something else". The '@' symbol cuts your password off at that point, and the rest never reaches mongo db:

https://www.javatpoint.com/javascript-special-characters

Passwords for anything should avoid: @ ! #

This has to do with Unicode and how UTF-8 handles encoding and decoding. I had a list of "special characters", but given issues in other forums and other nefarious uses of that data, I won't release it, as it brings back sending null and nil characters.
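More concretely, the '@' clashes with MongoDB connection-string syntax, where '@' separates the credentials from the host. A sketch of what the driver has to parse:

mongodb://unifi:p@ss@mongodb:27017/unifi     <- ambiguous: the first '@' is read as the credentials delimiter
mongodb://unifi:p%40ss@mongodb:27017/unifi   <- '@' percent-encoded; parses correctly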

A template of the JavaScript was made here:

 

Link to comment
9 hours ago, bmartino1 said:

 

That seems to be the default, yes. [...]

Thanks for the info.

FWIW I opened a terminal to the mongodb container and entered

mongod --quiet

to stop the logging.

If anything goes wrong, I guess I'll turn the logging back on, but for the moment I don't really need it (hopefully that's not naive of me 😅)

Edited by jademonkee
Link to comment
4 hours ago, jademonkee said:

Thanks for the info.

FWIW I opened a terminal to the mongodb container and entered mongod --quiet to stop the logging. [...]

I spoke too soon: it did not stop the logging...
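That result makes sense in hindsight: opening a console and running mongod --quiet starts a second mongod process; it doesn't reconfigure the one the container launched as PID 1. To affect that one, the flag has to be part of the container's command line. The official mongo image passes anything placed after the image name straight to mongod, so a sketch of the docker run form (the "..." stands for the options from the template earlier in the thread):

docker run -d --name='MongoDB' ... 'mongo:7.0-rc' --quiet

On Unraid, the same flag can go in the template's "Post Arguments" field.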

Link to comment

My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why.

{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}}
{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
{"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W",  "c":"NETWORK",  "id":23022,   "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
{"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
{"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right?

I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir?

Really tempted to move over to PeteA's all in one Docker now...

Edited by jademonkee
Link to comment
On 1/10/2024 at 11:18 AM, jademonkee said:

My weekly backup ran this morning, so it shut all my Dockers down at 5am. However, now mongodb is refusing to start. I have no idea why.

{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":0,"uid":0}}
{"t":{"$date":"2024-01-10T05:00:39.718+00:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
{"t":{"$date":"2024-01-10T05:00:39.734+00:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2024-01-10T05:00:39.902+00:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"W",  "c":"NETWORK",  "id":23022,   "ctx":"listener","msg":"Unable to remove UNIX socket","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"STORAGE",  "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":5}}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2024-01-10T05:00:39.903+00:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn3","msg":"Connection ended","attr":{"remote":"172.19.0.3:45170","connectionId":3,"connectionCount":5}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.19.0.3:45152","connectionId":1,"connectionCount":4}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn160","msg":"Connection ended","attr":{"remote":"172.19.0.3:35476","connectionId":160,"connectionCount":3}}
{"t":{"$date":"2024-01-10T05:00:39.904+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn106","msg":"Connection ended","attr":{"remote":"172.19.0.3:50332","connectionId":106,"connectionCount":2}}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2024-01-10T05:00:39.910+00:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2024-01-10T05:00:39.911+00:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2024-01-10T05:00:39.913+00:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2024-01-10T05:00:39.925+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1704862839:925143][1:0x15274134b700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 343738, snapshot max: 343738 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 660"}}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":44}}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2024-01-10T05:00:39.957+00:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2024-01-10T05:00:39.964+00:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
{"t":{"$date":"2024-01-10T05:08:05.056+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2024-01-10T05:08:05.062+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
{"t":{"$date":"2024-01-10T05:08:05.063+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
{"t":{"$date":"2024-01-10T11:11:41.977+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2024-01-10T11:11:41.979+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"1211a40e6d4a"}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.27","gitVersion":"2da9e4437d8c792c2b3c3aea62e284f801172a6b","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"E",  "c":"NETWORK",  "id":23024,   "ctx":"initandlisten","msg":"Failed to unlink socket file","attr":{"path":"/tmp/mongodb-27017.sock","error":"Operation not permitted"}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23091,   "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":40486,"file":"src/mongo/transport/transport_layer_asio.cpp","line":1048}}
{"t":{"$date":"2024-01-10T11:11:41.980+00:00"},"s":"F",  "c":"-",        "id":23092,   "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}

I did remove the .js file, as well as the Docker's path to it. But that's only used the first time it runs, right?

I can't think of any other changes I have made. Any idea what's going on? Why would there be a permission error in the tmp dir?

Really tempted to move over to PeteA's all in one Docker now...

Sooooo the Mongo docker now starts/runs fine again...

There was an update to the Mongo container, so maybe it was just a bug? Or maybe some lock on some file expired?

I don't know what's going on anymore lol
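A plausible explanation for the whole episode (a hypothesis, not a confirmed diagnosis): running mongod --quiet manually in the container console a few days earlier would have started a second mongod as root, which can replace /tmp/mongodb-27017.sock with a root-owned socket before dying on the port conflict. The container's main mongod runs as the unprivileged mongodb user, so on the next start it cannot unlink the stale socket — which is exactly the "Failed to unlink socket file ... Operation not permitted" fatal assertion in the log. Updating the image recreated the container, and since /tmp is container-local, that wiped the stale socket. A manual fix sketch for next time, assuming the same cause:

docker rm MongoDB
# then re-add the container from the Unraid template; /data/db lives on the
# host and is untouched, while the fresh container /tmp clears the stale socket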

Link to comment
