sjoerd Posted June 24

Hi, I had two OOM killer warnings recently, and according to Fix Common Problems I should open a topic in General Support and post the diagnostics. It happened again yesterday. I did a grep for oom-killer on /mnt/user/syslog/*.log and found out it's not the first time:

```
Jun 18 19:06:40 towerpve kernel: crond invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
Jun 19 21:22:27 towerpve kernel: wsdd2 invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
Jun 23 14:23:05 towerpve kernel: monitor_nchan invoked oom-killer: gfp_mask=0x102dc2(GFP_HIGHUSER|__GFP_NOWARN|__GFP_ZERO), order=0, oom_score_adj=0
```

Can someone check what's going on? This Unraid server's longest uptime is two weeks at most, I think, whereas my other one has had uptime of longer than a year without issues.

towerpve-diagnostics-20240624-1015.zip
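For anyone searching later: the same rotated logs can be tallied into a per-day event count with a short pipeline — a sketch, assuming the /mnt/user/syslog/*.log paths from the post:

```shell
# Count oom-killer invocations per day across the rotated syslogs;
# keep only the "Mon DD" date columns, then tally one line per day.
# Paths are the ones from the post; adjust for your share layout.
grep -h 'invoked oom-killer' /mnt/user/syslog/*.log |
  awk '{print $1, $2}' |
  sort | uniq -c
```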
JorgeB Posted June 24

The VM is what got killed, but this was using a lot of RAM: node-red. See if you can find the container that uses it.
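One way to spot the hungry container is a one-off `docker stats` snapshot sorted by memory share — a sketch; it assumes a running Docker daemon, and the container names in the test data are only examples:

```shell
# Snapshot current per-container memory use and sort it, biggest first.
# The stats command needs the Docker daemon; the sort stage works on
# any tab-separated name/percentage listing.
docker stats --no-stream --format '{{.Name}}\t{{.MemPerc}}' |
  sort -t$'\t' -k2 -rn
```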
sjoerd Posted June 24 (Author)

I run 4 Docker containers (homeassistant, zigbee2mqtt, mosquitto and node-red) and they are connected to / use each other. It's weird that node-red is using a lot, since the entire "hassio infra" is not actually in use; it's just a test setup to see if I could get it running with 4 separate containers. The VM is an Ubuntu 22.04 desktop. I assume the VM can never use more than the memory I gave it, while the containers do/use whatever they like. Is that a correct assumption? I could disable the node-red container for two weeks or so and see if it happens again. I'd really like to keep the VM enabled, since it's my office desktop and I leave all kinds of stuff running on it (Visual Studio Code, Flask, a couple of terminals).
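On the assumption above: yes, the VM is capped at its assigned RAM, but containers are uncapped by default. If you want to cap one (node-red, say), you can add Docker run flags in the container template's Extra Parameters field — the 512m value here is only an illustration, not a recommendation:

```
--memory=512m --memory-swap=512m
```

With both values equal, the container gets no extra swap headroom and an overrun is killed inside that container's cgroup rather than the kernel picking a victim host-wide.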
sjoerd Posted June 24 (Author)

Is there a way I can monitor memory usage per hour?
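In case it helps anyone with the same question: pending a proper dashboard, a crude hourly record can be kept from cron — a sketch; the script path and CSV location are assumptions, not anything Unraid ships:

```shell
#!/bin/bash
# memlog.sh - append one "timestamp,MemAvailable-in-kB" line per run.
# Schedule it hourly, e.g.:  0 * * * * /boot/custom/memlog.sh
log=${1:-/mnt/user/syslog/memlog.csv}   # assumed destination, adjust
ts=$(date '+%Y-%m-%d %H:%M')
avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
echo "$ts,$avail" >> "$log"
```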
sjoerd Posted June 25 (Author)

Fancy docker — found the usage per application/container. I can't figure out how to see a memory timeline per container, though. Quite overwhelming, all those charts.
sjoerd Posted July 16 (Author, edited)

Okidoki, I'm at 17 days of uptime now without any issues. I changed a couple of things:

- I updated all node-red plugins (palettes). Not sure if that was needed, but still. Netdata did not give me any signs the node-red container was using huge amounts (gitlab and its services did, though nothing extreme)
- I reduced the amount of memory my sole VM has by 2G
- fixed the ipvlan/macvlan issue

I think I gave my VM way too much memory. Not only did the VM not need that much (more is better, right?), but I think there wasn't enough memory left for Unraid to "breathe" anymore. I still haven't figured out how to see memory usage behaviour per hour/day in netdata over the span of a week or month or so - anyone know how to get that?

Edited July 16 by sjoerd