
Everything posted by jbuszkie
-
Mine just happened again today... @limetech, any thoughts on this? It seems like quite a few of us are having this issue.
-
Syslog filling up! - NGINX out of shared memory
jbuszkie replied to jbuszkie's topic in General Support
Mine just happened again. Is there a way to fix this without rebooting the system? A sketch of what I'm thinking of trying is below.
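Truncating the log in place should free the space without killing the running syslog daemon (a sketch only, assuming the stock Unraid tmpfs /var/log):

    # Empty /var/log/syslog without restarting anything; truncate keeps the
    # inode open, so the syslog daemon just keeps writing to the same file.
    truncate -s 0 /var/log/syslog
    # Confirm /var/log (a tmpfs on stock Unraid) has free space again.
    df -h /var/log

-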
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
Thanks for the info. So it really was just as easy as purging the log! I wanted to make sure. Thanks! I'll have to look into logrotate if I want to go that route. -
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
I was logging the default, which was everything, I believe. I've recently changed it to just errors and warnings, but I can't figure out how to purge the existing log, or how to use logrotate to keep it manageable (something like the sketch below). How do other folks handle the MQTT logging? Do they log everything but keep it in the appdata area? What are best practices? Jim
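A logrotate stanza along these lines is what I have in mind (hypothetical — the log path is an assumption based on a typical appdata mapping for this container):

    # /etc/logrotate.d/mosquitto (hypothetical) -- rotate the broker log
    # weekly from the Unraid host, keeping four compressed copies.
    /mnt/user/appdata/mqtt/log/mosquitto.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        # copytruncate avoids having to signal mosquitto inside the container
        copytruncate
    }

-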
I am having this issue too.... 😞
-
So this is the second time this has happened in a couple of weeks. Syslog fills up, runs out of space, and the system grinds to a halt. Can you guys tell what's going on here? Here are the last couple of lines of syslog before I get all the errors...

    Oct 21 06:20:03 Tower sSMTP[3454]: Sent mail for [email protected] (221 elasmtp-masked.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1676
    Oct 21 07:00:03 Tower kernel: mdcmd (1109): spindown 2
    Oct 21 07:20:02 Tower sSMTP[30058]: Sent mail for [email protected] (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1672
    Oct 21 08:15:32 Tower kernel: mdcmd (1110): spindown 4
    Oct 21 08:15:41 Tower kernel: mdcmd (1111): spindown 5
    Oct 21 08:20:03 Tower sSMTP[30455]: Sent mail for [email protected] (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1680
    Oct 21 08:25:01 Tower kernel: mdcmd (1112): spindown 1
    Oct 21 09:16:19 Tower nginx: 2020/10/21 09:16:19 [alert] 5212#5212: worker process 5213 exited on signal 6
    Oct 21 09:16:20 Tower nginx: 2020/10/21 09:16:20 [alert] 5212#5212: worker process 27120 exited on signal 6
    Oct 21 09:16:21 Tower nginx: 2020/10/21 09:16:21 [alert] 5212#5212: worker process 27246 exited on signal 6
    Oct 21 09:16:22 Tower nginx: 2020/10/21 09:16:22 [alert] 5212#5212: worker process 27258 exited on signal 6
    Oct 21 09:16:23 Tower nginx: 2020/10/21 09:16:23 [alert] 5212#5212: worker process 27259 exited on signal 6
    Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27263 exited on signal 6
    Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27264 exited on signal 6
    Oct 21 09:16:25 Tower nginx: 2020/10/21 09:16:25 [alert] 5212#5212: worker process 27267 exited on signal 6

That line repeats for a long time, then it reports a memory-full issue...

    Oct 21 13:14:53 Tower nginx: 2020/10/21 13:14:53 [alert] 5212#5212: worker process 3579 exited on signal 6
    Oct 21 13:14:54 Tower nginx: 2020/10/21 13:14:54 [alert] 5212#5212: worker process 3580 exited on signal 6
    Oct 21 13:14:55 Tower nginx: 2020/10/21 13:14:55 [alert] 5212#5212: worker process 3583 exited on signal 6
    Oct 21 13:14:56 Tower nginx: 2020/10/21 13:14:56 [alert] 5212#5212: worker process 3584 exited on signal 6
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [crit] 3629#3629: ngx_slab_alloc() failed: no memory
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: shpool alloc failed
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: *4535822 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: MEMSTORE:00: can't create shared message for channel /disks
    Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [alert] 5212#5212: worker process 3629 exited on signal 6
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [crit] 3641#3641: ngx_slab_alloc() failed: no memory
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: shpool alloc failed
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: *4535827 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: MEMSTORE:00: can't create shared message for channel /disks
    Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [alert] 5212#5212: worker process 3641 exited on signal 6
    Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [crit] 3687#3687: ngx_slab_alloc() failed: no memory
    Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: shpool alloc failed
    Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.

After this I just keep getting these out-of-memory errors... Do you guys have any idea what could be causing this?
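Since the nchan error names its own knob, bumping nchan_max_reserved_memory in the http block of /etc/nginx/nginx.conf might be a stopgap. A minimal sketch (the 64M size is illustrative, and since Unraid manages this file the edit may not survive a reboot):

    # /etc/nginx/nginx.conf -- sketch of the relevant http-block addition
    http {
        # nchan stores published messages in a shared-memory slab; the
        # /pub/disks channel above is exhausting it.
        nchan_max_reserved_memory 64M;
        # ... rest of the stock config unchanged ...
    }
    # then apply it without a reboot: nginx -s reload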
-
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
My MQTT log file is huge and growing. What (and how) do you guys have your logging set to? I never messed with this, so it's probably set to the default. This is what is in the conf file:

    log_dest stderr
    log_type all
    connection_messages true
    log_timestamp true
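For comparison, a quieter setup might look like the sketch below — the log_type lines replace "all", and connection chatter is turned off (the file path is an assumption, matching a typical container mapping):

    # mosquitto.conf -- sketch of a less chatty logging section
    log_dest file /mosquitto/log/mosquitto.log
    log_type error
    log_type warning
    connection_messages false
    log_timestamp true

-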
For all you guys running Home Assistant and such... I see some are running it in a Docker container, some in a VM, and some in a VM on a separate machine. HA in a Docker container seems fine except there's no Supervisor. I'm no expert in HA (yet), but it seems I would like the Supervisor ability. I tried @digiblur's video and I got it to install and such, but it seemed kind of pieced together and not too robust. I thank him for his effort to give us that option, though...

So I guess my question is: what are the benefits of running in a Docker container vs. a VM vs. just running it all (HA, ESPHome, MQTT, Node-RED) on a RPi4? I like the idea of a RPi4 because if it breaks, it's pretty easy to replace the hardware fast. I'm planning on this setup controlling my whole HVAC system, so if something breaks, I need to be able to get it up and running again quickly. Does the RPi4 just not have enough horsepower to run all this? I suppose if I ran in a VM on unraid I could always take the image and run it on another machine while I get unraid back up, if something were to break (assuming I could get to the image...). What are most of you guys doing? Thanks, Jim
-
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
huh.. either I dismissed that warning.. or I never got it! :-) -
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
Ok.. that was it.. It must have changed from when I first installed it. I changed the repo and now it looks ok... Thanks for the help. -
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
I have not changed it ever as far as I know.. I'll check.. -
[support] Spants - NodeRed, MQTT, Dashing, couchDB
jbuszkie replied to spants's topic in Docker Containers
I get this now when I check for updates on my Node-RED docker.. Is there a new docker for this or something? Jim -
I'm moving over from the Server Layout plugin. First of all, thanks for this plugin! Second: is there a way to manually add a disk to the database? I really like the other plugin's "Historical Drives" section. It kinda gives me a history of what was installed and when it was removed.
-
I guess when I first started using dockers I might not have understood completely how they worked. I also may not have had an SSD cache drive back then? It makes sense to move it to the SSD; I'll have to go back and see. My guess is some of my shares can go directly to the array. My DVR stuff and code/work stuff will stay on the cache/array method, though... I can probably change the mover to once a day... SSD reliability is good enough now that I don't have to be paranoid anymore. I do plan on upgrading to two 500GB NVMe cards so I can run them in duplicate (mirrored) mode...
-
Does this plugin normally run more than one rsync at a time? My move seems to be nearly stalled... This is moving to an SMR drive, so I expect it to be slower, but it seems to be crawling!

    root@Tower:~# ps auxx | grep rsync
    root      7754  0.0  0.0   3904  2152 pts/0  S+  08:33  0:00 grep rsync
    root     31102  8.4  0.0  13412  3120 ?      S   07:55  3:08 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
    root     31103  0.0  0.0  13156  2308 ?      S   07:55  0:00 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
    root     31104  9.7  0.0  13240  2076 ?      D   07:55  3:38 /usr/bin/rsync -avPR -X T/DVD/MainMovie/FILE1 /mnt/disk8/
    root@Tower:~#

It did this yesterday too... Thanks, Jim

Edit: The syslog doesn't show any obvious errors...
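If anyone wants to check whether the SMR target itself is the bottleneck, something like this would show it (a sketch — iostat comes from the sysstat package, which may need installing, and sdX is a placeholder for disk8's device):

    # Show extended stats for the target disk every 5 seconds; %util pinned
    # near 100% with low wMB/s is the classic SMR write-slowdown signature.
    iostat -xm 5 /dev/sdX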
-
I'm not sure.. It's what I did a while ago.. Maybe because I want it protected? Maybe because my cache is only 250GB? It looks like every 4 hours. Most of the time it doesn't run long. CrashPlan runs at idle time (overnight), and it seems to take a while to scan everything..
-
I don't see anything in the syslog.. Granted, I have a pre-clear running now, but I was getting these numbers yesterday without the pre-clear.. Edit: the pre-clear pre-read is at 178MB/s right now... tower-diagnostics-20200326-1619.zip
-
I'm copying big enough files that they get past the cache pretty quick, I imagine. One thing I noticed when I broke the unBALANCE run into smaller chunks was that the copy would be "finished" but the remove (rm) command took a while... as if it was waiting for the disk to be ready again before it could start the next group.
-
Grasping at straws?? 😄 I think the shingled drive might be my answer...