Bmalone

Members
  • Posts: 86
  • Joined

  • Last visited


Bmalone's Achievements: Apprentice (3/14)

Reputation: 2
Community Answers: 1

  1. I am seeing the errors below all the time in my syslog on both my Unraid servers. It's driving me nuts, but there doesn't seem to be much to go on in order to investigate further. Sometimes the logs are flooded with these messages and other times they are more sporadic. Any idea what could cause this so I can investigate further (see the bond/bridge checks sketched after this list)? That MAC address is the Unraid server's own MAC address.

     Apr 22 12:54:12 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:12 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:12 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:13 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:13 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:13 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:23 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:54:23 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 12:56:40 SmashySmash shfs: /usr/sbin/zfs create 'ingestion_cache/unraiddata2'
     Apr 22 13:24:37 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:24:37 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:24:47 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:24:47 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:55:01 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:55:01 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:55:11 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 13:55:11 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 14:09:04 SmashySmash emhttpd: read SMART /dev/sdc
     Apr 22 14:25:25 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 14:25:25 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 14:25:35 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
     Apr 22 14:25:35 SmashySmash kernel: br0: received packet on bond0 with own address as source address (addr:0c:c4:7a:bc:56:c8, vlan:0)
  2. +1. Share quotas are table stakes and a requirement for a proper NAS system.
  3. Not yet. I'm on 6.12.8. I was going to give it a month or two. Are you suggesting the fix below might resolve the unmount issue? I can't see anything that gives me any indication as to why it wouldn't unmount. It just says the containers_cache is busy. Are you able to glean anything? Is there somewhere else I should be looking?

     "ZFS: Detect if insufficient pools are defined for an imported pool with a missing device."

     syslog-previous.txt
  4. So I was able to start the rebuild after rebooting (safely). It's very concerning that basic functionality doesn't work smoothly.
  5. No luck yet with stopping Docker. It just hangs. Reboot or shutdown -r? I understood shutdown -r was better (although using that can also cause an unclean shutdown, in my experience).
  6. I'm trying to understand why the drive rebuild process won't work. I was adding 2 drives to the array, and when I started the array back up it told me that disk 3 had an issue. So I followed the 'Rebuilding a drive onto itself' process step by step. After the extended self-test, which took a couple of days, no errors were found and the emulated contents were all in order. So I stopped the array, unassigned the disk, restarted the array to register the missing disk, and then tried to stop the array again, but it won't stop because the drives are busy. The only thing I can think of is that the containers set to autostart would have started when I started the array, but can't Unraid handle stopping them with the 'Stop Array' button? I've seen nothing in the documentation about needing to stop those manually before trying to stop the array. Any idea why this isn't working (see the lsof/fuser sketch after this list)? This is pretty fundamental.

     Reference: Rebuilding a drive onto itself, https://docs.unraid.net/unraid-os/manual/storage-management/

     Apr 17 08:12:35 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:12:35 SmashySmash emhttpd: shcmd (894358): umount /mnt/disk1
     Apr 17 08:12:35 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:12:35 SmashySmash emhttpd: shcmd (894358): exit status: 32
     Apr 17 08:12:35 SmashySmash emhttpd: shcmd (894359): /usr/sbin/zpool export containers_cache
     Apr 17 08:12:35 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:12:35 SmashySmash emhttpd: shcmd (894359): exit status: 1
     Apr 17 08:12:35 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:12:40 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:12:40 SmashySmash emhttpd: shcmd (894360): umount /mnt/disk1
     Apr 17 08:12:40 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:12:40 SmashySmash emhttpd: shcmd (894360): exit status: 32
     Apr 17 08:12:40 SmashySmash emhttpd: shcmd (894361): /usr/sbin/zpool export containers_cache
     Apr 17 08:12:40 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:12:40 SmashySmash emhttpd: shcmd (894361): exit status: 1
     Apr 17 08:12:40 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:12:45 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:12:45 SmashySmash emhttpd: shcmd (894362): umount /mnt/disk1
     Apr 17 08:12:45 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:12:45 SmashySmash emhttpd: shcmd (894362): exit status: 32
     Apr 17 08:12:45 SmashySmash emhttpd: shcmd (894363): /usr/sbin/zpool export containers_cache
     Apr 17 08:12:45 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:12:45 SmashySmash emhttpd: shcmd (894363): exit status: 1
     Apr 17 08:12:45 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:12:50 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:12:50 SmashySmash emhttpd: shcmd (894364): umount /mnt/disk1
     Apr 17 08:12:50 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:12:50 SmashySmash emhttpd: shcmd (894364): exit status: 32
     Apr 17 08:12:50 SmashySmash emhttpd: shcmd (894365): /usr/sbin/zpool export containers_cache
     Apr 17 08:12:50 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:12:50 SmashySmash emhttpd: shcmd (894365): exit status: 1
     Apr 17 08:12:50 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:12:55 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:12:55 SmashySmash emhttpd: shcmd (894367): umount /mnt/disk1
     Apr 17 08:12:55 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:12:55 SmashySmash emhttpd: shcmd (894367): exit status: 32
     Apr 17 08:12:55 SmashySmash emhttpd: shcmd (894368): /usr/sbin/zpool export containers_cache
     Apr 17 08:12:55 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:12:55 SmashySmash emhttpd: shcmd (894368): exit status: 1
     Apr 17 08:12:55 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:00 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:00 SmashySmash emhttpd: shcmd (894369): umount /mnt/disk1
     Apr 17 08:13:00 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:00 SmashySmash emhttpd: shcmd (894369): exit status: 32
     Apr 17 08:13:00 SmashySmash emhttpd: shcmd (894370): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:00 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:00 SmashySmash emhttpd: shcmd (894370): exit status: 1
     Apr 17 08:13:00 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:00 SmashySmash root: Fix Common Problems Version 2024.03.29
     Apr 17 08:13:01 SmashySmash root: Fix Common Problems: Warning: Plugin fix.common.problems.plg is not up to date
     Apr 17 08:13:05 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:05 SmashySmash emhttpd: shcmd (894371): umount /mnt/disk1
     Apr 17 08:13:05 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:05 SmashySmash emhttpd: shcmd (894371): exit status: 32
     Apr 17 08:13:05 SmashySmash emhttpd: shcmd (894372): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:05 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:05 SmashySmash emhttpd: shcmd (894372): exit status: 1
     Apr 17 08:13:05 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:10 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:10 SmashySmash emhttpd: shcmd (894373): umount /mnt/disk1
     Apr 17 08:13:10 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:10 SmashySmash emhttpd: shcmd (894373): exit status: 32
     Apr 17 08:13:10 SmashySmash emhttpd: shcmd (894374): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:10 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:10 SmashySmash emhttpd: shcmd (894374): exit status: 1
     Apr 17 08:13:10 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:15 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:15 SmashySmash emhttpd: shcmd (894375): umount /mnt/disk1
     Apr 17 08:13:15 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:15 SmashySmash emhttpd: shcmd (894375): exit status: 32
     Apr 17 08:13:15 SmashySmash emhttpd: shcmd (894376): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:15 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:15 SmashySmash emhttpd: shcmd (894376): exit status: 1
     Apr 17 08:13:15 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:20 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:20 SmashySmash emhttpd: shcmd (894377): umount /mnt/disk1
     Apr 17 08:13:20 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:20 SmashySmash emhttpd: shcmd (894377): exit status: 32
     Apr 17 08:13:20 SmashySmash emhttpd: shcmd (894378): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:20 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:20 SmashySmash emhttpd: shcmd (894378): exit status: 1
     Apr 17 08:13:20 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:25 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:25 SmashySmash emhttpd: shcmd (894380): umount /mnt/disk1
     Apr 17 08:13:25 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:25 SmashySmash emhttpd: shcmd (894380): exit status: 32
     Apr 17 08:13:25 SmashySmash emhttpd: shcmd (894381): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:25 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:25 SmashySmash emhttpd: shcmd (894381): exit status: 1
     Apr 17 08:13:25 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:30 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:30 SmashySmash emhttpd: shcmd (894382): umount /mnt/disk1
     Apr 17 08:13:30 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:30 SmashySmash emhttpd: shcmd (894382): exit status: 32
     Apr 17 08:13:30 SmashySmash emhttpd: shcmd (894383): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:30 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:30 SmashySmash emhttpd: shcmd (894383): exit status: 1
     Apr 17 08:13:30 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:35 SmashySmash emhttpd: Unmounting disks...
     Apr 17 08:13:35 SmashySmash emhttpd: shcmd (894384): umount /mnt/disk1
     Apr 17 08:13:35 SmashySmash root: umount: /mnt/disk1: target is busy.
     Apr 17 08:13:35 SmashySmash emhttpd: shcmd (894384): exit status: 32
     Apr 17 08:13:35 SmashySmash emhttpd: shcmd (894385): /usr/sbin/zpool export containers_cache
     Apr 17 08:13:35 SmashySmash root: cannot unmount '/mnt/containers_cache/appdata': pool or dataset is busy
     Apr 17 08:13:35 SmashySmash emhttpd: shcmd (894385): exit status: 1
     Apr 17 08:13:35 SmashySmash emhttpd: Retry unmounting disk share(s)...
     Apr 17 08:13:40 SmashySmash emhttpd: Unmounting disks...
  7. I added 2 new 18TB disks to my array yesterday and preclear started; it shows as running. It's been 24 hours and it's only 9.7% complete. It also hasn't progressed overnight or this morning, and Unraid is showing 0% CPU usage, which I assume is a bug because I'm writing this from a VM hosted on the Unraid server and all my containers are running fine. In the syslog I see the events below (see the parity-tuning check sketched after this list). I have parity check tuning set to run almost all day, except from 16:00-00:00. Is this normal? I don't recall my original drives taking this long to clear.

     Apr 15 12:24:22 SmashySmash crond[2362]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
     Apr 15 12:30:23 SmashySmash crond[2362]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
     Apr 15 12:36:33 SmashySmash crond[2362]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
     Apr 15 12:42:24 SmashySmash crond[2362]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
  8. Today I tried to install a new container, deleted all the appdata from the previous install, and rolled back to older versions, and I'm seeing the errors below consistently. Any idea why this suddenly stopped working?

     time=2024-04-11T16:27:59.961-05:00 comm=netdata source=health level=warning tid=709 thread=HEALTH msg_id=9ce0cb58ab8b44df82c4bf1ad9ee22de node=Goathead instance=app.dhcp_fds_open_limit context=app.fds_open_limit code=0 alert_id=1712871056 alert_unique_id=1712871195 alert_event_id=2 alert_transition_id=94e16a2fc9464250ad2e77411cf2c4bf alert_config=27ec43a6f27349f2808a959928f59920 alert=apps_group_file_descriptors_utilization alert_class=Utilization alert_component=Process alert_type=System alert_exec=/usr/libexec/netdata/plugins.d/alarm-notify.sh alert_recipient=sysadmin alert_duration=1 alert_value=500 alert_value_old=null alert_status=WARNING alert_value_old=UNINITIALIZED alert_units=% alert_summary="App group dhcp file descriptors utilization" alert_info="Open files percentage against the processes limits, among all PIDs in application group" alert_notification_timestamp=2024-04-11T16:27:59-05:00 msg="ALERT 'apps_group_file_descriptors_utilization' of instance 'app.dhcp_fds_open_limit' on node 'Goathead', transitioned from UNINITIALIZED to WARNING"
     time=2024-04-11T16:43:25.329-05:00 comm=cgroup-network source=collector level=error tid=1548 thread=cgroup-network msg="child pid 1549 exited with code 1."
     time=2024-04-11T16:43:25.329-05:00 comm=cgroup-network source=collector level=error tid=1548 thread=cgroup-network msg="Cannot find a cgroup PID from cgroup '/host/sys/fs/cgroup/docker/43c4a2d485238a93adc627e2338ecf01733bb284dc08a13e0bfe9cc7033a11b9'"
     time=2024-04-11T16:43:25.330-05:00 comm=netdata source=daemon level=error tid=761 thread=P[cgroups] msg="child pid 1548 exited with code 1."
  9. I recently updated my Netdata Docker containers on my production and backup servers, which have an almost identical configuration, or as close as I can keep them. After the update my production instance doesn't work while the backup server is fine. The app starts fine but won't render any visualizations of the data, and I'm getting the errors below. I've tried deleting the container and reinstalling, but the errors are the same. Any idea what could be causing this (see the Netdata checks sketched after this list)?

     time=2024-04-10T10:13:00.425-05:00 comm=apps.plugin source=collector level=error errno="3, No such process" tid=718 thread=apps.plugin msg="Cannot process /host/proc/12131/cmdline (command 'z_wr_iss')"
     time=2024-04-10T10:13:27.413-05:00 comm=apps.plugin source=collector level=error errno="3, No such process" tid=718 thread=apps.plugin msg="Cannot process /host/proc/36316/status (command 'zfs')"
     time=2024-04-10T10:19:58.407-05:00 comm=apps.plugin source=collector level=error errno="3, No such process" tid=718 thread=apps.plugin msg="Cannot process /host/proc/29752/limits (command 'zpool')"
  10. Got it. Thanks for clarifying.
  11. I would like to use this plugin, but I have a question about its configuration. In order to avoid conflicts, I understand that the scheduled parity check must be disabled if one wants to use the plugin, so I disabled it. When I try to turn on parity check tuning, I get the error below and I'm unable to use it, despite not having any other parity check scheduled. I must be misunderstanding something. Can someone explain why I cannot use the plugin if I don't have any parity check scheduled and haven't selected the cumulative parity check option?
  12. So I've left SSH off for a few days. Today I turned it on again to test it. As soon as I turned it on, performance was significantly impacted, the server hung, and the dashboard wouldn't render. When I looked in the syslog I could see that my server was flooded non-stop with nginx events right after I turned SSH on. I tried to include the diagnostics, but there are so many events it won't complete. It won't even list the directory of my media folder, it's that busy. Does anyone know what might be causing this (see the nginx checks sketched after this list)? Thousands upon thousands of these events...

     Apr 3 16:30:52 Goathead nginx: 2024/04/03 16:30:52 [alert] 6493#6493: worker process 31293 exited on signal 6
     Apr 3 16:30:54 Goathead nginx: 2024/04/03 16:30:54 [alert] 6493#6493: worker process 31462 exited on signal 6
     Apr 3 16:30:56 Goathead nginx: 2024/04/03 16:30:56 [alert] 6493#6493: worker process 31563 exited on signal 6
     Apr 3 16:30:58 Goathead nginx: 2024/04/03 16:30:58 [alert] 6493#6493: worker process 31757 exited on signal 6
     Apr 3 16:31:00 Goathead nginx: 2024/04/03 16:31:00 [alert] 6493#6493: worker process 31852 exited on signal 6
     Apr 3 16:31:02 Goathead nginx: 2024/04/03 16:31:02 [alert] 6493#6493: worker process 32112 exited on signal 6
     Apr 3 16:31:04 Goathead nginx: 2024/04/03 16:31:04 [alert] 6493#6493: worker process 32252 exited on signal 6
     Apr 3 16:31:06 Goathead nginx: 2024/04/03 16:31:06 [alert] 6493#6493: worker process 32353 exited on signal 6
     Apr 3 16:31:08 Goathead nginx: 2024/04/03 16:31:08 [alert] 6493#6493: worker process 32694 exited on signal 6
     Apr 3 16:31:10 Goathead nginx: 2024/04/03 16:31:10 [alert] 6493#6493: worker process 32777 exited on signal 6
     Apr 3 16:31:12 Goathead nginx: 2024/04/03 16:31:12 [alert] 6493#6493: worker process 32906 exited on signal 6
     Apr 3 16:31:14 Goathead nginx: 2024/04/03 16:31:14 [alert] 6493#6493: worker process 33093 exited on signal 6
     Apr 3 16:31:16 Goathead nginx: 2024/04/03 16:31:16 [alert] 6493#6493: worker process 33204 exited on signal 6
     Apr 3 16:31:18 Goathead nginx: 2024/04/03 16:31:18 [alert] 6493#6493: worker process 33300 exited on signal 6
     Apr 3 16:31:20 Goathead nginx: 2024/04/03 16:31:20 [alert] 6493#6493: worker process 33403 exited on signal 6
     Apr 3 16:31:22 Goathead nginx: 2024/04/03 16:31:22 [alert] 6493#6493: worker process 33521 exited on signal 6
     Apr 3 16:31:24 Goathead nginx: 2024/04/03 16:31:24 [alert] 6493#6493: worker process 33712 exited on signal 6
     Apr 3 16:31:26 Goathead nginx: 2024/04/03 16:31:26 [alert] 6493#6493: worker process 33773 exited on signal 6
     Apr 3 16:31:28 Goathead nginx: 2024/04/03 16:31:28 [alert] 6493#6493: worker process 34002 exited on signal 6
     Apr 3 16:31:30 Goathead nginx: 2024/04/03 16:31:30 [alert] 6493#6493: worker process 34150 exited on signal 6
     Apr 3 16:31:32 Goathead nginx: 2024/04/03 16:31:32 [alert] 6493#6493: worker process 34307 exited on signal 6
     Apr 3 16:31:34 Goathead nginx: 2024/04/03 16:31:34 [alert] 6493#6493: worker process 34464 exited on signal 6
     Apr 3 16:31:36 Goathead nginx: 2024/04/03 16:31:36 [alert] 6493#6493: worker process 34516 exited on signal 6
     Apr 3 16:31:38 Goathead nginx: 2024/04/03 16:31:38 [alert] 6493#6493: worker process 34647 exited on signal 6
     Apr 3 16:31:40 Goathead nginx: 2024/04/03 16:31:40 [alert] 6493#6493: worker process 34835 exited on signal 6
     Apr 3 16:31:42 Goathead nginx: 2024/04/03 16:31:42 [alert] 6493#6493: worker process 34947 exited on signal 6
     Apr 3 16:31:44 Goathead nginx: 2024/04/03 16:31:44 [alert] 6493#6493: worker process 35210 exited on signal 6
     Apr 3 16:31:46 Goathead nginx: 2024/04/03 16:31:46 [alert] 6493#6493: worker process 35349 exited on signal 6
     Apr 3 16:31:48 Goathead nginx: 2024/04/03 16:31:48 [alert] 6493#6493: worker process 35492 exited on signal 6
     Apr 3 16:31:50 Goathead nginx: 2024/04/03 16:31:50 [alert] 6493#6493: worker process 35676 exited on signal 6
     Apr 3 16:31:52 Goathead nginx: 2024/04/03 16:31:52 [alert] 6493#6493: worker process 35776 exited on signal 6
     Apr 3 16:31:54 Goathead nginx: 2024/04/03 16:31:54 [alert] 6493#6493: worker process 35853 exited on signal 6
     Apr 3 16:31:56 Goathead nginx: 2024/04/03 16:31:56 [alert] 6493#6493: worker process 36259 exited on signal 6
     Apr 3 16:31:58 Goathead nginx: 2024/04/03 16:31:58 [alert] 6493#6493: worker process 36347 exited on signal 6
  13. Does anyone have anything I can try to unlock the SSH capability? I assume that it is blocked and something needs to be done to unblock it, rather than it being a bug, but I don't know. It seems like a pretty fundamental feature to not work out of the box (albeit disabled in the GUI, which makes sense). A few quick checks are sketched below.
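
For the br0 "received packet on bond0 with own address as source address" flood in item 1, a minimal diagnostic sketch, assuming bond0 is built from two or more NICs bridged into br0 and that tcpdump is available on the box; this only narrows down whether the bond mode or a switch loop is reflecting the server's own frames, it is not a definitive fix:

    # Show the bonding mode and which slave NICs are active
    cat /proc/net/bonding/bond0

    # Show which ports are attached to the br0 bridge
    ip link show master br0

    # Capture a handful of the offending frames arriving on bond0
    # (MAC address taken from the syslog excerpt in item 1)
    tcpdump -e -n -i bond0 -c 10 ether src 0c:c4:7a:bc:56:c8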
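
For the "target is busy" / "pool or dataset is busy" retries in items 3 and 6, a minimal sketch for finding what is still holding /mnt/disk1 and the containers_cache pool open before Stop Array is retried; the paths come from the syslog excerpt, and the assumption is that a container, or a shell session left sitting inside /mnt, is the usual culprit:

    # List processes with files open on the array disk that refuses to unmount
    lsof /mnt/disk1

    # Same question via fuser: -v verbose, -m treat each path as a mount point
    fuser -vm /mnt/disk1 /mnt/containers_cache/appdata

    # Check whether any containers are still running and holding bind mounts
    docker ps

    # Note: a terminal left cd'd anywhere under /mnt also keeps the mount busy,
    # so close any open shells on the server before retrying Stop Array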
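
For the stalled preclear and the crond "exit status 255" lines in item 7, a sketch of two quick checks; the script path is copied from the syslog excerpt in that item, and mdcmd is Unraid's array-management helper (querying it with "status" is read-only):

    # Run the parity-tuning monitor task exactly as cron does and show its exit code
    /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor"
    echo $?

    # Ask the md driver whether a clear/sync operation is actually progressing
    mdcmd status | grep -i resync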
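
For the Netdata problems in items 8 and 9, the apps.plugin "No such process" lines are usually just short-lived ZFS worker processes disappearing between scans, so the missing dashboard most likely has another cause. A minimal sketch of two checks, assuming the container is named netdata and is published on Netdata's default port 19999 (adjust both to your template):

    # Look past the apps.plugin noise for errors from the web server or database threads
    docker logs --tail 200 netdata

    # Confirm the backend answers API calls even when the dashboard will not render
    curl -s http://localhost:19999/api/v1/info | head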
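
For the nginx "worker process ... exited on signal 6" flood in item 12, signal 6 is SIGABRT, i.e. the Unraid web UI worker keeps aborting and being respawned every couple of seconds. A sketch of where to look and how to bounce the UI, assuming the stock Unraid log location and rc script (these may differ on other releases):

    # Read the web UI's own error log for the reason behind the aborts
    tail -n 100 /var/log/nginx/error.log

    # Restart the Unraid web UI's nginx if the GUI has become unresponsive
    /etc/rc.d/rc.nginx restart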
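
For the blocked SSH in item 13, a sketch of a few checks to confirm whether sshd is actually running and enabled, assuming console or web-terminal access; the grep over /boot/config is just a generic way to see what the flash configuration records about SSH, not a documented setting name:

    # Is an sshd process running at all?
    ps aux | grep [s]shd

    # Is anything listening on the SSH port? (ss ships with iproute2)
    ss -tlnp | grep ':22'

    # See what the persisted configuration on the flash drive says about SSH
    grep -ri ssh /boot/config/*.cfg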