Posts posted by jbuszkie
-
Ok.. it happened again today! grrr....
I'm posting this here to try next time, so I don't have to reboot..
Quote
To restart nginx, I've been having to use both
/etc/rc.d/rc.nginx restart
and /etc/rc.d/rc.nginx stop
Quite often, checking its status will show it's still running, so make sure it's really closed. It doesn't want to close gracefully.
/etc/rc.d/rc.nginx status
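Putting that together, here's a rough sketch of the whole sequence as a script. The pkill fallback is my own addition, not something I've confirmed is needed, and it assumes the stock /etc/rc.d/rc.nginx script on Unraid:
#!/bin/bash
# Force nginx down and back up without rebooting.
/etc/rc.d/rc.nginx stop
sleep 2
# It doesn't always close gracefully, so verify it's really gone.
if pgrep -x nginx > /dev/null; then
    echo "nginx still running, killing it..."
    pkill -x nginx
    sleep 2
fi
/etc/rc.d/rc.nginx start
/etc/rc.d/rc.nginx status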
I'll have to see if this resolves the out of memory issue without a reboot...
I'd still like to know what's causing this or how to permanently avoid this...
Jim
-
Mine just happened again today... @limetech, any thoughts on this? It seems like several of us have this issue.
-
Mine just happened again. Is there a way to fix this without rebooting the system?
-
1 hour ago, Skylord123 said:
There aren't really "Best Practices" as it all depends on what you need logged and your use case. I'm using MQTT for my home automation so I really only care about logging what devices connect and error messages. This is what I have my log options in mosquitto.conf set to:
log_dest stderr
log_type error
connection_messages true
log_timestamp true
If you really want to add log rotating it is an option you can pass to the container. Here is a post that describes it very well:
https://medium.com/@Quigley_Ja/rotating-docker-logs-keeping-your-overlay-folder-small-40cfa2155412
Thanks for the info! So it was really just as easy as purging the log. I wanted to make sure. I'll have to look into log rotation if I want to do that.
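For anyone else hitting this: purging boils down to truncating the container's log file in place, and rotation can be set per-container. A sketch, assuming Docker's default json-file logging; the container name mqtt, the image name, and the size limits here are just placeholders:
# Purge the existing log in place (don't delete the file while the container runs).
truncate -s 0 "$(docker inspect --format='{{.LogPath}}' mqtt)"

# Or recreate the container with built-in rotation:
docker run -d --name mqtt \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  eclipse-mosquitto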
-
52 minutes ago, spants said:
Which file is that size? Can you post your mosquitto.conf file?
I was logging the default, which was everything, I believe.. I've recently changed it to just errors and warnings. But I can't figure out how to purge it.. or, potentially, how to use log rotation to keep it manageable. How do other folks handle MQTT logging? Do they log everything but put it in the appdata area?
What are best practices?
Jim
-
Another question.. My MQTT log file is > 4GB! How do I get rid of it?
-
I am having this issue too.... 😞
-
So this is the second time this has happened in a couple of weeks. Syslog fills up, runs out of space, and the system grinds to a halt. Can you guys tell what's going on here?
Here are the last couple of lines of syslog before I get all the errors...
Oct 21 06:20:03 Tower sSMTP[3454]: Sent mail for [email protected] (221 elasmtp-masked.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1676
Oct 21 07:00:03 Tower kernel: mdcmd (1109): spindown 2
Oct 21 07:20:02 Tower sSMTP[30058]: Sent mail for [email protected] (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1672
Oct 21 08:15:32 Tower kernel: mdcmd (1110): spindown 4
Oct 21 08:15:41 Tower kernel: mdcmd (1111): spindown 5
Oct 21 08:20:03 Tower sSMTP[30455]: Sent mail for [email protected] (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1680
Oct 21 08:25:01 Tower kernel: mdcmd (1112): spindown 1
Oct 21 09:16:19 Tower nginx: 2020/10/21 09:16:19 [alert] 5212#5212: worker process 5213 exited on signal 6
Oct 21 09:16:20 Tower nginx: 2020/10/21 09:16:20 [alert] 5212#5212: worker process 27120 exited on signal 6
Oct 21 09:16:21 Tower nginx: 2020/10/21 09:16:21 [alert] 5212#5212: worker process 27246 exited on signal 6
Oct 21 09:16:22 Tower nginx: 2020/10/21 09:16:22 [alert] 5212#5212: worker process 27258 exited on signal 6
Oct 21 09:16:23 Tower nginx: 2020/10/21 09:16:23 [alert] 5212#5212: worker process 27259 exited on signal 6
Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27263 exited on signal 6
Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27264 exited on signal 6
Oct 21 09:16:25 Tower nginx: 2020/10/21 09:16:25 [alert] 5212#5212: worker process 27267 exited on signal 6
That line repeats for a long time, then it reports an out-of-memory issue...
Oct 21 13:14:53 Tower nginx: 2020/10/21 13:14:53 [alert] 5212#5212: worker process 3579 exited on signal 6
Oct 21 13:14:54 Tower nginx: 2020/10/21 13:14:54 [alert] 5212#5212: worker process 3580 exited on signal 6
Oct 21 13:14:55 Tower nginx: 2020/10/21 13:14:55 [alert] 5212#5212: worker process 3583 exited on signal 6
Oct 21 13:14:56 Tower nginx: 2020/10/21 13:14:56 [alert] 5212#5212: worker process 3584 exited on signal 6
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [crit] 3629#3629: ngx_slab_alloc() failed: no memory
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: shpool alloc failed
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: *4535822 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: MEMSTORE:00: can't create shared message for channel /disks
Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [alert] 5212#5212: worker process 3629 exited on signal 6
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [crit] 3641#3641: ngx_slab_alloc() failed: no memory
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: shpool alloc failed
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: *4535827 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: MEMSTORE:00: can't create shared message for channel /disks
Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [alert] 5212#5212: worker process 3641 exited on signal 6
Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [crit] 3687#3687: ngx_slab_alloc() failed: no memory
Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: shpool alloc failed
Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
After this I just keep getting these out of memory errors...
Do you guys have any idea what could be causing this?
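The log itself points at the knob: nchan's shared memory pool. As an untested sketch (the directive name comes straight from the error message; the 64m size is a guess, and an edit to /etc/nginx/nginx.conf won't survive a reboot on Unraid unless you script it):
# /etc/nginx/nginx.conf, inside the http {} block:
nchan_max_reserved_memory 64m;
Then restart nginx with /etc/rc.d/rc.nginx restart for it to take effect.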
-
My MQTT log file is huge and growing.. What (and how) do you guys have your logging set to? I never messed with this, so it's probably set to the default.
This is what is in the conf file.
log_dest stderr
log_type all
connection_messages true
log_timestamp true
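If log_type all turns out to be the culprit, a quieter sketch would be to log just errors and warnings; mosquitto.conf accepts multiple log_type lines:
log_dest stderr
log_type error
log_type warning
connection_messages true
log_timestamp true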
-
For all you guys running Home Assistant and such... I see some are running it in a Docker container, some in a VM, and some in a VM on a separate machine.
HA in a Docker container seems fine, except there's no Supervisor. I'm no expert in HA (yet), but it seems like I would want the Supervisor ability. I tried @digiblur's video and got it to install and such.. but it seems kinda pieced together, and it didn't seem too robust. I thank him for his effort to give us that option, though...
So I guess my question is.. what are the benefits of running in a Docker container vs. a VM, vs. just running it all (HA, ESPHome, MQTT, Node-RED) on an RPi4?
I like the idea of an RPi4 because if it breaks, it's pretty easy to replace the hardware fast. I'm planning on this setup controlling my whole HVAC system, so if something breaks, I need to be able to get it up and running again quickly. Does the RPi4 just not have enough HP to run all this?
I suppose if I run in a VM on Unraid, I could always take the image and run it on another machine while I get Unraid back up and running, if something were to break? (Assuming I could get to the image...)
What are most of you guys doing?
Thanks,
Jim
-
1 minute ago, H2O_King89 said:
I think the Fix Common Problems plugin said there was a template update a while back
huh.. either I dismissed that warning.. or I never got it! :-)
-
Ok.. that was it.. It must have changed from when I first installed it. I changed the repo and now it looks ok...
Thanks for the help.
-
wait.. It's a little different..
-
No... It's set to that..
-
5 minutes ago, spants said:
Is your repository set to nodered/node-red? And Docker Hub to https://hub.docker.com/r/nodered/node-red-docker/
I have not changed it ever as far as I know.. I'll check..
-
I get this now when I check for updates on my Node-RED docker..
Is there a new docker for this or something?
Jim
-
Ok.. That fixed that one... But now I have this!
LOL...
-
Does anyone else see this issue?
-
Quote
You want your docker image on cache so your dockers won't have their performance impacted by the slower parity writes and so they won't keep array disks spinning. There is no reason to protect docker image because it is easily recreated, and it isn't very large so there is no good reason to not have it on cache and plenty of reason to have it on cache.
I guess when I first started using dockers I might not have understood completely how they worked. I also may not have had an SSD cache drive back then?
It makes sense to move it to the SSD..
Quote
There is no requirement to cache writes to user shares. Most of my writes are scheduled backups and queued downloads. I don't care if they are a little slower since I'm not waiting on them anyway. So, those are not cached. Instead, they go directly to the array where they are already protected.
I'll have to go back and see.. My guess is some of my shares can go directly to the array..
My DVR stuff and code/work stuff will stay on the cache/array method though...
I can probably change the mover to once a day... SSD reliability is good enough that I don't have to be paranoid anymore. I do plan on upgrading to two 500GB NVMe drives so I can have them mirrored....
-
Does this plugin normally do more than one rsync at a time? My move seems to be nearly stalled....
This is moving to an SMR drive, so I expect it to be slower.. but it seems to be crawling!
root@Tower:~# ps auxx | grep rsync
root      7754  0.0  0.0   3904  2152 pts/0    S+   08:33   0:00 grep rsync
root     31102  8.4  0.0  13412  3120 ?        S    07:55   3:08 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
root     31103  0.0  0.0  13156  2308 ?        S    07:55   0:00 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
root     31104  9.7  0.0  13240  2076 ?        D    07:55   3:38 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
root@Tower:~#
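A single local rsync normally forks into a few cooperating processes, so three identical command lines like that may still be one transfer rather than three. A quick way to check, assuming the standard procps ps:
# Helper processes of a single rsync invocation share a parent PID.
ps -o pid,ppid,stat,etime,cmd -C rsync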
It did this yesterday too...
Thanks,
Jim
Edit: The syslog doesn't show any obvious errors...
-
Quote
Not related but why do you have docker image on disk2 instead of cache?
I'm not sure.. It's what I did a while ago.. Maybe because I want it protected? Maybe because my Cache is only 250GB?
Quote
How often do you have Mover scheduled?
It looks like every 4 hours.. Most of the time it doesn't run long. CrashPlan runs at idle time (overnight).. and it seems to take a while to scan everything..
-
I don't see anything in the syslog..
Granted, I have a pre-clear running now.. but I was getting these numbers yesterday without the pre-clear..
Edit: Pre-clear Pre-read @178MB/s right now...
Syslog filling up! - NGINX out of shared memory
My terminal is all messed up when in this state as well... I have to use a PuTTY window to talk to Unraid.