• Large number of signal 9 (SIGKILL) exits on pool www since rc7 install

    • Minor
    Jun  6 12:02:11 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 15670 exited on signal 9 (SIGKILL) after 456.455878 seconds from start
    Jun  6 12:02:15 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 28993 exited on signal 9 (SIGKILL) after 281.040782 seconds from start
    Jun  6 12:02:29 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 26795 exited on signal 9 (SIGKILL) after 16.541999 seconds from start
    Jun  6 12:02:30 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 30046 exited on signal 9 (SIGKILL) after 14.761019 seconds from start
    Jun  6 12:02:41 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 3637 exited on signal 9 (SIGKILL) after 12.463393 seconds from start
    Jun  6 12:02:44 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 3692 exited on signal 9 (SIGKILL) after 13.842282 seconds from start
    Jun  6 12:02:57 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 4181 exited on signal 9 (SIGKILL) after 14.610889 seconds from start
    Jun  6 12:02:59 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 5044 exited on signal 9 (SIGKILL) after 14.410527 seconds from start
    Jun  6 12:03:10 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 12666 exited on signal 9 (SIGKILL) after 12.416831 seconds from start
    Jun  6 12:03:13 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 12702 exited on signal 9 (SIGKILL) after 13.517078 seconds from start
    Jun  6 12:03:25 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 17475 exited on signal 9 (SIGKILL) after 14.049300 seconds from start
    Jun  6 12:03:38 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 18195 exited on signal 9 (SIGKILL) after 13.041803 seconds from start
    Jun  6 12:03:50 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 25690 exited on signal 9 (SIGKILL) after 12.074396 seconds from start
    Jun  6 12:04:08 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 32568 exited on signal 9 (SIGKILL) after 15.341012 seconds from start
    Jun  6 12:04:25 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 8194 exited on signal 9 (SIGKILL) after 16.351577 seconds from start
    Jun  6 12:04:39 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 13806 exited on signal 9 (SIGKILL) after 13.355236 seconds from start
    Jun  6 12:04:52 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 21202 exited on signal 9 (SIGKILL) after 12.777710 seconds from start
    Jun  6 12:05:11 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 27381 exited on signal 9 (SIGKILL) after 13.180648 seconds from start
    Jun  6 12:05:27 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 2566 exited on signal 9 (SIGKILL) after 14.681882 seconds from start
    Jun  6 12:05:40 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 9672 exited on signal 9 (SIGKILL) after 11.977200 seconds from start
    Jun  6 12:06:04 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 11342 exited on signal 9 (SIGKILL) after 22.298786 seconds from start
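Children dying 12-16 seconds after start usually means something external is reaping them, most often the kernel OOM killer (dmesg or the syslog would typically confirm this with "Out of memory" entries). The kill pattern can be summarized from the CLI with a short awk pass; this is a sketch run against three of the sample lines above, not a diagnosis. On the server you would feed it the real log instead, e.g. `grep 'exited on signal 9' /var/log/syslog`:

```shell
# Summarize the SIGKILL events: kills per minute and average child lifetime.
# Sample lines are embedded for illustration only.
summary=$(awk '
/exited on signal 9/ {
    minute = substr($3, 1, 5)              # "12:02" from "12:02:29"
    kills[minute]++
    for (i = 1; i <= NF; i++)
        if ($i == "after") { total += $(i + 1); n++ }
}
END {
    for (m in kills) printf "%s  %d kill(s)\n", m, kills[m]
    if (n) printf "average lifetime: %.1f s over %d kills\n", total / n, n
}' <<'EOF'
Jun  6 12:02:29 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 26795 exited on signal 9 (SIGKILL) after 16.541999 seconds from start
Jun  6 12:02:41 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 3637 exited on signal 9 (SIGKILL) after 12.463393 seconds from start
Jun  6 12:03:10 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 12666 exited on signal 9 (SIGKILL) after 12.416831 seconds from start
EOF
)
printf '%s\n' "$summary"
```

A steady drumbeat of kills every 12-15 seconds (as in the full log above) looks more like a supervisor or resource limit repeatedly firing than a one-off crash.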



    User Feedback

    Recommended Comments

Is there a way to do this from the CLI? The GUI has become unusable with this issue.


If the reboot didn't work, try a short press of the power button; if it still doesn't shut down after a few minutes, you will need to force it.


Got a reboot, though not a clean one. It looks like things are behaving again. I guess I now get to play that whack-a-plugin game.


Update: So I thought I had found the issue: Dynamix Cache Directories. I reinstalled everything and disabled Dynamix Cache Directories in hopes it may be patched at a later date. I woke up this morning to 100% CPU usage and 80% RAM usage, and I was unable to get anything out of the server. I had to reboot it via SSH again.


Yes, though not as quickly. In safe mode on rc7, RAM is in the 60% range when I log in and seems to climb steadily over the hours, in tandem with CPU usage. On rc6, after 9 hours of usage, RAM sits in the 40% range and CPU usage is nominal.

With NetData installed as a Docker container, it reports completely different usage numbers than the Unraid GUI on rc7, but matches the GUI on rc6.

When using top, I see dockerd and shfs trading places at the top almost exclusively on rc7; on rc6, first place is more fluid, as is to be expected.


I had been running rc6 almost since its release and have had no issues with it to date. With rc7 I haven't had more than a few hours of uptime, as whatever is happening also kills all my Docker containers and network mounts.
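On the NetData-vs-GUI discrepancy: one common cause is simply different accounting, since a tool that counts reclaimable page cache as "used" reports far higher numbers than one that subtracts it. Below is a sketch of the two calculations from /proc/meminfo; the heredoc values are made-up sample numbers (chosen only to show how 80% and 40% can describe the same machine), and on a live box you would read /proc/meminfo directly:

```shell
# "Used" memory computed two ways from /proc/meminfo (values in kB):
# raw (MemTotal - MemFree) vs reclaim-aware (MemTotal - MemAvailable).
meminfo_used=$(awk '
$1 == "MemTotal:"     { total = $2 }
$1 == "MemFree:"      { free = $2 }
$1 == "MemAvailable:" { avail = $2 }
END {
    printf "used (MemTotal - MemFree):      %d%%\n", 100 * (total - free) / total
    printf "used (MemTotal - MemAvailable): %d%%\n", 100 * (total - avail) / total
}' <<'EOF'
MemTotal:       16384000 kB
MemFree:         3276800 kB
MemAvailable:    9830400 kB
EOF
)
printf '%s\n' "$meminfo_used"
```

If the two numbers diverge this much on the server, the "high" reading may be mostly cache rather than a leak; a genuine leak shows up in both.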


OK, so I had one lock-up without Docker or plugins: the GUI was dead, but SSH and network mounts worked.

    After the reboot RAM started low but hit 80% quite quickly. Network mounts still worked. 

The ZFS control panel was gone when I went into those drives.


Attachments: Screenshot 2023-06-08 094148.png, Screenshot 2023-06-08 165001.png, ZFS details gone.png


    1 hour ago, DuzAwe said:

    After the reboot RAM started low but hit 80% quite quickly

    That's the docker image used space, not used RAM.


    That's after the mover runs, and since it's set to run hourly you can see it every hour (if there was data on that share).


So, if I don't have a crash today, is the course of action to add one thing back at a time, or?


That's what I would recommend: first reboot in normal mode with Docker disabled; if still good, re-enable Docker and start one container at a time, letting each run for the necessary time to confirm all is good before enabling the next one.
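The one-at-a-time procedure can be turned into a checklist from the CLI. The sketch below only prints the commands, executing nothing; the container names and six-hour soak interval are placeholder assumptions, not from this thread. Review the output, then run the lines by hand (or pipe them to sh) once they look right:

```shell
# Generate a "start one container, then soak" plan; nothing is executed here.
# Placeholder names -- list your real ones with: docker ps -a --format '{{.Names}}'
containers="plex sonarr radarr"
soak_seconds=21600   # 6 hours between containers

plan=$(for c in $containers; do
    printf 'docker start %s && sleep %d\n' "$c" "$soak_seconds"
done)
printf '%s\n' "$plan"
```

Printing the plan rather than running it keeps a misbehaving container from being buried under the next one's startup noise.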


I'm having the same lock-ups and log errors as the OP, on rc7.


It doesn't lock up completely, just becomes extremely unresponsive. Most Docker containers stop working.


    @DuzAwe, did you figure anything out with this?



@sunbear Still hunting. So far PLEX and the ARR suite have a PASS from me. I have quite a few Docker containers, so it will be a number of days before I have gone through them all. I have also ruled out all my plugins.


@JorgeB When I click on the first drive in the ZFS array/cluster, it shows an incomplete interface like the one above. If I go to any other disk in the machine (Unraid array, BTRFS, and the other ZFS disks), I get the normal interface, i.e. scrub options, SMART info, and the free space setting. But for the first disk in the ZFS array, all of these options are missing.

    8 hours ago, DuzAwe said:

    When I click on the first drive in the ZFS array/cluster it shows like above an incomplete interface

I now see the screenshot above. This was an issue with an earlier internal beta and should not happen now; IIRC it had to do with the minimum free space calculation. @bonienl, any idea?







  • Status Definitions


    Open = Under consideration.


    Solved = The issue has been resolved.


    Solved version = The issue has been resolved in the indicated release version.


    Closed = Feedback or opinion better posted on our forum for discussion. Also for reports we cannot reproduce or need more information. In this case just add a comment and we will review it again.


    Retest = Please retest in latest release.

    Priority Definitions


    Minor = Something not working correctly.


    Urgent = Server crash, data loss, or other showstopper.


    Annoyance = Doesn't affect functionality but should be fixed.


    Other = Announcement or other non-issue.