jbuszkie

Members
  • Content Count: 623
  • Joined
  • Last visited

Community Reputation: 4 Neutral

About jbuszkie

  • Gender: Male
  • Location: Westminster, MA


  1. Another question... My MQTT log file is > 4GB! How do I get rid of it?
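     A minimal cleanup sketch, assuming the broker runs in the usual Docker container and the log lives somewhere like /mnt/user/appdata/mosquitto/log/mosquitto.log (a hypothetical path; check your container's volume mappings). Truncating the file in place frees the space immediately, while deleting it would leave the space held until mosquitto releases the open file handle:

       # confirm the actual path and size first
       ls -lah /mnt/user/appdata/mosquitto/log/
       # empty the file in place without disturbing the running broker
       truncate -s 0 /mnt/user/appdata/mosquitto/log/mosquitto.log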
  2. I am having this issue too.... 😞
  3. So this is the second time this has happened in a couple of weeks. Syslog fills up and runs out of space, and the system grinds to a halt. Can you guys tell what's going on here? Here are the last couple of lines of syslog before I get all the errors...

     Oct 21 06:20:03 Tower sSMTP[3454]: Sent mail for UnRaid@Busky.net (221 elasmtp-masked.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1676
     Oct 21 07:00:03 Tower kernel: mdcmd (1109): spindown 2
     Oct 21 07:20:02 Tower sSMTP[30058]: Sent mail for UnRaid@Busky.net (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1672
     Oct 21 08:15:32 Tower kernel: mdcmd (1110): spindown 4
     Oct 21 08:15:41 Tower kernel: mdcmd (1111): spindown 5
     Oct 21 08:20:03 Tower sSMTP[30455]: Sent mail for UnRaid@Busky.net (221 elasmtp-mealy.atl.sa.earthlink.net closing connection) uid=0 username=root outbytes=1680
     Oct 21 08:25:01 Tower kernel: mdcmd (1112): spindown 1
     Oct 21 09:16:19 Tower nginx: 2020/10/21 09:16:19 [alert] 5212#5212: worker process 5213 exited on signal 6
     Oct 21 09:16:20 Tower nginx: 2020/10/21 09:16:20 [alert] 5212#5212: worker process 27120 exited on signal 6
     Oct 21 09:16:21 Tower nginx: 2020/10/21 09:16:21 [alert] 5212#5212: worker process 27246 exited on signal 6
     Oct 21 09:16:22 Tower nginx: 2020/10/21 09:16:22 [alert] 5212#5212: worker process 27258 exited on signal 6
     Oct 21 09:16:23 Tower nginx: 2020/10/21 09:16:23 [alert] 5212#5212: worker process 27259 exited on signal 6
     Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27263 exited on signal 6
     Oct 21 09:16:24 Tower nginx: 2020/10/21 09:16:24 [alert] 5212#5212: worker process 27264 exited on signal 6
     Oct 21 09:16:25 Tower nginx: 2020/10/21 09:16:25 [alert] 5212#5212: worker process 27267 exited on signal 6

     That line repeats for a long time, then it reports some memory-full issue...

     Oct 21 13:14:53 Tower nginx: 2020/10/21 13:14:53 [alert] 5212#5212: worker process 3579 exited on signal 6
     Oct 21 13:14:54 Tower nginx: 2020/10/21 13:14:54 [alert] 5212#5212: worker process 3580 exited on signal 6
     Oct 21 13:14:55 Tower nginx: 2020/10/21 13:14:55 [alert] 5212#5212: worker process 3583 exited on signal 6
     Oct 21 13:14:56 Tower nginx: 2020/10/21 13:14:56 [alert] 5212#5212: worker process 3584 exited on signal 6
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [crit] 3629#3629: ngx_slab_alloc() failed: no memory
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: shpool alloc failed
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: *4535822 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [error] 3629#3629: MEMSTORE:00: can't create shared message for channel /disks
     Oct 21 13:14:57 Tower nginx: 2020/10/21 13:14:57 [alert] 5212#5212: worker process 3629 exited on signal 6
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [crit] 3641#3641: ngx_slab_alloc() failed: no memory
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: shpool alloc failed
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: *4535827 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [error] 3641#3641: MEMSTORE:00: can't create shared message for channel /disks
     Oct 21 13:14:58 Tower nginx: 2020/10/21 13:14:58 [alert] 5212#5212: worker process 3641 exited on signal 6
     Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [crit] 3687#3687: ngx_slab_alloc() failed: no memory
     Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: shpool alloc failed
     Oct 21 13:14:59 Tower nginx: 2020/10/21 13:14:59 [error] 3687#3687: nchan: Out of shared memory while allocating message of size 6724. Increase nchan_max_reserved_memory.

     After this I just keep getting these out-of-memory errors... Do you guys have any idea what could be causing this?
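     Not a fix for the root cause, but a recovery sketch for when this hits, assuming a stock Unraid layout where /var/log is a small RAM-backed tmpfs and the webGUI's nginx is managed by the usual Slackware rc script (verify both on your box before running anything):

       df -h /var/log                  # check whether the log filesystem is actually full
       ls -lahS /var/log | head        # find the largest files
       truncate -s 0 /var/log/syslog   # reclaim space in place; rm would leave it held open
       /etc/rc.d/rc.nginx restart      # restart the webGUI's nginx to break the nchan error loop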
  4. My MQTT log file is huge and growing... What do you guys have your logging settings set to, and how? I never messed with this, so it's probably set to the default. This is what is in the conf file:

       log_dest stderr
       log_type all
       connection_messages true
       log_timestamp true
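     One hedged way to shrink it, assuming stock Mosquitto behavior: log_type all includes debug-level messages, and connection_messages true logs every client connect and disconnect. A quieter mosquitto.conf sketch (log_type may be given multiple times):

       log_dest stderr
       log_type error
       log_type warning
       log_type notice             # drop this line too for errors/warnings only
       connection_messages false   # stop the per-client connect/disconnect lines
       log_timestamp true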
  5. For all you guys running Home Assistant and such... I see some are running it in a Docker container, some in a VM, and some in a VM on a separate machine. HA in a Docker container seems fine, except there's no Supervisor. I'm no expert in HA (yet), but it seems I would like the Supervisor ability. I tried @digiblur's video and I got it to install and such... but it seems kinda pieced together, and it didn't seem too robust. I thank him for his effort to give us that option, though... So I guess my question is: what are the benefits of running in a Docker container vs. a VM, vs. just running it all (HA, ESPHome, MQTT, Node-RED) on an RPi4? I like the idea of an RPi4 because if it breaks, it's pretty easy to replace the hardware fast. I'm planning on this setup controlling my whole HVAC system, so if something breaks, I need to be able to get it up and running again quickly. Does the RPi4 just not have enough horsepower to run all this? I suppose if I run it in a VM on Unraid, I could always take the image and run it on another machine while I get Unraid back up and running, if something were to break? (Assuming I could get to the image...) What are most of you guys doing? Thanks, Jim
  6. huh.. either I dismissed that warning.. or I never got it! :-)
  7. Ok.. that was it.. It must have changed from when I first installed it. I changed the repo and now it looks ok... Thanks for the help.
  8. I have not changed it ever as far as I know.. I'll check..
  9. I get this now when I check for updates on my Node-RED docker... Is there a new docker for this or something? Jim
  10. Ok.. That fixed that one... But now I have this! LOL...
  11. I'm moving over from the server layout plugin. First of all, thanks for this plugin! Second: is there a way to manually add a disk to the database? I really like the other plugin's "Historical Drives" section. It kinda gives me a history of what was installed and when it was removed.