srirams

Members · 32 posts · 2 reputation
  1. That's what I would try... killing the master process should kill all the child processes as well. When that happens, you can try starting the nginx service again.
  2. I've downgraded to 6.11.5, but these commands may be helpful. To control nginx:
         /etc/rc.d/rc.nginx <start, stop, or restart>
     You might have to kill the nginx process:
         ps aux | grep nginx
         kill -9 <process id of the nginx master process, and maybe the s6-supervise nginx process>
     I kept the nginx process stopped (/etc/rc.d/rc.nginx stop) unless I needed to use the web UI, in which case I would start it and immediately stop it after I was done.
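
     A minimal sketch combining the steps from the two replies above, assuming a stock Unraid install where the web UI's nginx is managed by /etc/rc.d/rc.nginx; the pgrep/pkill patterns are assumptions, not commands from the original posts.

         #!/bin/bash
         # Stop the web UI's nginx and clean up any leftover processes.
         /etc/rc.d/rc.nginx stop            # ask nginx to shut down cleanly
         sleep 2                            # give the workers a moment to exit

         # If the master process is still around, kill it;
         # per the reply above, the worker processes should exit along with it.
         master_pid=$(pgrep -o -x nginx)    # oldest matching process = master
         if [ -n "$master_pid" ]; then
             kill -9 "$master_pid"
         fi

         # Optionally also stop the supervisor, as mentioned above:
         # pkill -9 -f "s6-supervise nginx"

         # Later, when the web UI is needed again:
         # /etc/rc.d/rc.nginx start
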
  3. I'm on 6.12.3 and I still get this behavior:
         Jul 19 03:37:45 trantor nginx: 2023/07/19 03:37:45 [alert] 30307#30307: worker process 6589 exited on signal 6
         Jul 19 03:37:46 trantor nginx: 2023/07/19 03:37:46 [alert] 30307#30307: worker process 6769 exited on signal 6
         Jul 19 03:37:48 trantor nginx: 2023/07/19 03:37:48 [alert] 30307#30307: worker process 6819 exited on signal 6
         Jul 19 03:37:50 trantor nginx: 2023/07/19 03:37:50 [alert] 30307#30307: worker process 6888 exited on signal 6
         Jul 19 03:37:55 trantor nginx: 2023/07/19 03:37:55 [alert] 30307#30307: worker process 7004 exited on signal 6
         Jul 19 03:37:58 trantor nginx: 2023/07/19 03:37:58 [alert] 30307#30307: worker process 7256 exited on signal 6
     Starting a cloudflared docker seems to trigger this, but it doesn't get fixed immediately after stopping the cloudflared docker container either.
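
     Not from the original post: a hedged one-liner for seeing how often workers are dying (e.g. before and after starting the cloudflared container), assuming the standard syslog location /var/log/syslog on the Unraid host.

         # Count worker crashes per minute to see when the signal-6 exits started
         grep 'worker process .* exited on signal 6' /var/log/syslog \
             | awk '{print $1, $2, substr($3, 1, 5)}' | sort | uniq -c
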
  4. Is there any utility in spinning down SSDs? I would think it would be better to skip spin-down operations on SSDs... Also, there is no "settings cog" next to the disks, so I'm not able to mark them as passed through? The disks are btrfs formatted (each individual 200 GB drive pooled into one 800 GB volume).
         root@trantor:~# df -Th
         Filesystem  Type   Size  Used  Avail  Use%  Mounted on
         /dev/sdx    btrfs  746G  451G  293G   61%   /mnt/disks/scratch
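
     A quick way to check how the kernel classifies these devices (spin-down only makes sense for rotational disks); the device names are examples taken from this thread, so substitute your own.

         # 0 = non-rotational (SSD), 1 = rotational (spinning disk)
         for dev in sdx sdy sdz sdaa; do
             echo -n "$dev: "
             cat /sys/block/$dev/queue/rotational
         done
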
  5. Upgraded and restarted, and the problem went away... I think the problem was due to something else, because I had the same out-of-space error on a docker container. Thanks!
  6. I have a Sun Oracle F80 PCIe SSD that UD (I think) is constantly trying to spin down:
         Jul 13 21:06:08 trantor emhttpd: spinning down /dev/sdz
         Jul 13 21:06:08 trantor emhttpd: spinning down /dev/sdaa
         Jul 13 21:06:08 trantor emhttpd: spinning down /dev/sdx
         Jul 13 21:06:10 trantor emhttpd: spinning down /dev/sdy
     (The above is repeated every 45 minutes.) The drives are formatted using btrfs (not through UD).
  7. Attached! trantor-diagnostics-20230709-2127.zip
  8. I'm trying to create and start a container but it is failing:
         root@trantor:/mnt/disks/data# lxc-start -F oracle
         lxc-start: oracle: ../src/lxc/conf.c: lxc_setup_console: 2156 No space left on device - Failed to allocate console from container's devpts instance
         lxc-start: oracle: ../src/lxc/conf.c: lxc_setup: 4471 Failed to setup console
         lxc-start: oracle: ../src/lxc/start.c: do_start: 1272 Failed to setup container "oracle"
         lxc-start: oracle: ../src/lxc/sync.c: sync_wait: 34 An error occurred in another process (expected sequence number 4)
         lxc-start: oracle: ../src/lxc/start.c: __lxc_start: 2107 Failed to spawn container "oracle"
         lxc-start: oracle: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
         lxc-start: oracle: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options
     and df -H:
         Filesystem           Size  Used  Avail  Use%  Mounted on
         rootfs               17G   1.3G  16G    8%    /
         tmpfs                34M   5.1M  29M    15%   /run
         /dev/sda1            16G   800M  15G    6%    /boot
         overlay              17G   1.3G  16G    8%    /lib/firmware
         overlay              17G   1.3G  16G    8%    /lib/modules
         devtmpfs             8.4M  0     8.4M   0%    /dev
         tmpfs                17G   0     17G    0%    /dev/shm
         cgroup_root          8.4M  0     8.4M   0%    /sys/fs/cgroup
         tmpfs                135M  15M   120M   11%   /var/log
         tmpfs                1.1M  0     1.1M   0%    /mnt/disks
         tmpfs                1.1M  0     1.1M   0%    /mnt/remotes
         tmpfs                1.1M  0     1.1M   0%    /mnt/addons
         tmpfs                1.1M  0     1.1M   0%    /mnt/rootshare
         /dev/sdx             800G  451G  348G   57%   /mnt/disks/scratch
         /dev/md1             12T   12T   69G    100%  /mnt/disk1
         /dev/md2             8.0T  8.0T  35G    100%  /mnt/disk2
         /dev/md3             12T   12T   161G   99%   /mnt/disk3
         /dev/md4             12T   12T   41G    100%  /mnt/disk4
         /dev/md5             8.0T  7.9T  116G   99%   /mnt/disk5
         /dev/md6             12T   11T   1.8T   86%   /mnt/disk6
         /dev/md7             12T   12T   11G    100%  /mnt/disk7
         /dev/md8             10T   9.9T  122G   99%   /mnt/disk8
         /dev/md9             8.0T  7.9T  106G   99%   /mnt/disk9
         /dev/md10            8.0T  8.0T  36G    100%  /mnt/disk10
         /dev/md11            10T   9.7T  351G   97%   /mnt/disk11
         /dev/md12            8.0T  7.9T  166G   98%   /mnt/disk12
         /dev/md13            10T   9.9T  121G   99%   /mnt/disk13
         /dev/md14            12T   12T   28G    100%  /mnt/disk14
         /dev/md15            12T   2.1T  10T    17%   /mnt/disk15
         /dev/sdb1            250G  119G  132G   48%   /mnt/disks/data
         /dev/sdp1            4.0T  963G  3.1T   25%   /mnt/disks/backup
         /dev/sds1            5.0T  2.0T  3.1T   40%   /mnt/disks/transfer
         /dev/sdu1            4.0T  161G  3.9T   5%    /mnt/disks/nvr
         /mnt/disks/data      250G  119G  132G   48%   /share-ro/data
         /mnt/disks/scratch   800G  451G  348G   57%   /share-ro/scratch
         /mnt/disks/transfer  5.0T  2.0T  3.1T   40%   /share-ro/transfer
         /mnt/disks/backup    4.0T  963G  3.1T   25%   /share-ro/backup
         /mnt/disk2           8.0T  8.0T  35G    100%  /share-ro/17
         /mnt/disk4           12T   12T   41G    100%  /share-ro/29
         /mnt/disk8           10T   9.9T  122G   99%   /share-ro/a01
         /mnt/disk11          10T   9.7T  351G   97%   /share-ro/a02
         /mnt/disk1           12T   12T   69G    100%  /share-ro/a03
         /mnt/disk5           8.0T  7.9T  116G   99%   /share-ro/23
         /mnt/disk6           12T   11T   1.8T   86%   /share-ro/27
         /mnt/disk7           12T   12T   11G    100%  /share-ro/a05
         /mnt/disk9           8.0T  7.9T  106G   99%   /share-ro/30
         /mnt/disk13          10T   9.9T  121G   99%   /share-ro/a01
         /mnt/disk12          8.0T  7.9T  166G   98%   /share-ro/31
         /mnt/disk15          12T   2.1T  10T    17%   /share-ro/a07
         /mnt/disk10          8.0T  8.0T  36G    100%  /share-ro/20
         /mnt/disk3           12T   12T   161G   99%   /share-ro/28
         /mnt/disk14          12T   12T   28G    100%  /share-ro/a06
         /dev/loop3           1.1G  5.0M  948M   1%    /etc/libvirt
         /dev/loop2           43G   19G   24G    44%   /var/lib/docker
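
     Not from the original thread: since the error is raised while allocating a console from the container's devpts instance, the "No space left on device" may come from a pty or inode limit rather than actual disk space. A few hedged checks using standard Linux proc/sysfs locations; the df -i paths are just examples taken from the output above.

         # How many ptys the kernel allows vs. how many are currently in use
         cat /proc/sys/kernel/pty/max /proc/sys/kernel/pty/nr

         # Inode exhaustion also reports "No space left on device" even when df -H looks fine
         df -i /mnt/disks/data /var/lib/docker

         # A small "max=" option on the container's devpts mount would give the same error
         mount | grep devpts
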
  9. Sorry, can't think of what else could be the problem... The plugin needs a SAS address in order to work.
  10. The problem is that you don't have a SAS address for your disks... Is the HBA in RAID or JBOD mode?
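
     Not part of the original reply: one way to check whether the kernel exposes a SAS address for a drive, assuming the disks sit behind a SAS HBA (sdX is a placeholder, and lsscsi may not be installed).

         # Present only when the HBA exposes the drive as a plain SAS end device;
         # if it is missing, the controller may be hiding the disk behind a RAID volume.
         cat /sys/block/sdX/device/sas_address

         # Alternatively, list devices along with their transport addresses
         lsscsi -t
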
  11. Hi, can't access the output... please try putting it in a pastebin, or include it here.
  12. That worked, thanks! Didn't realize that I had to format it in unraid after preclear.
  13. I've attached it below! TRANTOR-unassigned.devices.preclear-20220603-1056.zip
  14. I think unraid creating the folders when you add a new container is OK, but I don't see the use case for docker ever creating folders when you start a container (it's going to create an empty folder because the appdata folder is not mounted).
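
     Not from the original post: a small sketch of the kind of guard the reply argues for, refusing to start a container whose host-side bind-mount source is missing; the container name and appdata path are made-up examples.

         #!/bin/bash
         # Don't let docker create an empty folder in place of a missing appdata mount.
         container=myapp                          # hypothetical container name
         appdata=/mnt/user/appdata/$container     # hypothetical host path

         if [ ! -d "$appdata" ]; then
             echo "appdata folder $appdata is missing - not starting $container" >&2
             exit 1
         fi

         docker start "$container"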