L0rdRaiden

Everything posted by L0rdRaiden

  1. I have rebooted the server several times but nothing changes. I have been reproducing the error with Plex; this is what I get in the Unraid log:

Jul 10 20:44:49 Unraid kernel: WARNING: CPU: 14 PID: 0 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: Modules linked in: veth wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha af_packet nvidia_uvm(PO) xt_nat macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge 8021q garp mrp stp llc ixgbe xfrm_algo mdio igb i2c_algo_bit nvidia_drm(PO) nvidia_modeset(PO) zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) nvidia(PO) edac_mce_amd zcommon(PO) edac_core znvpair(PO) spl(O) kvm_amd video drm_kms_helper kvm drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 backlight aesni_intel crypto_simd syscopyarea tpm_crb cryptd wmi_bmof
Jul 10 20:44:49 Unraid kernel: mxm_wmi asus_wmi_sensors tpm_tis sysfillrect i2c_piix4 k10temp nvme rapl tpm_tis_core input_leds ccp ahci sysimgblt led_class cdc_acm nvme_core i2c_core libahci fb_sys_fops tpm wmi button acpi_cpufreq unix [last unloaded: xfrm_algo]
Jul 10 20:44:49 Unraid kernel: CPU: 14 PID: 0 Comm: swapper/14 Tainted: P O 6.1.36-Unraid #1
Jul 10 20:44:49 Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021
Jul 10 20:44:49 Unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
Jul 10 20:44:49 Unraid kernel: RSP: 0018:ffffc900004c8838 EFLAGS: 00010202
Jul 10 20:44:49 Unraid kernel: RAX: 0000000000000001 RBX: ffff8885c2e81f00 RCX: 7aecd0b99ace0591
Jul 10 20:44:49 Unraid kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8885c2e81f00
Jul 10 20:44:49 Unraid kernel: RBP: 0000000000000001 R08: fed2146f5781fd9e R09: d403ee2a01cdc41c
Jul 10 20:44:49 Unraid kernel: R10: 13c56616bc33d4cc R11: ffffc900004c8800 R12: ffffffff82a11440
Jul 10 20:44:49 Unraid kernel: R13: 00000000000254b3 R14: ffff88892d6dbe00 R15: 0000000000000000
Jul 10 20:44:49 Unraid kernel: FS: 0000000000000000(0000) GS:ffff888ffeb80000(0000) knlGS:0000000000000000
Jul 10 20:44:49 Unraid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 10 20:44:49 Unraid kernel: CR2: 000000c000107010 CR3: 00000001c7cee000 CR4: 0000000000350ee0
Jul 10 20:44:49 Unraid kernel: Call Trace:
Jul 10 20:44:49 Unraid kernel: <IRQ>
Jul 10 20:44:49 Unraid kernel: ? __warn+0xab/0x122
Jul 10 20:44:49 Unraid kernel: ? report_bug+0x109/0x17e
Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: ? handle_bug+0x41/0x6f
Jul 10 20:44:49 Unraid kernel: ? exc_invalid_op+0x13/0x60
Jul 10 20:44:49 Unraid kernel: ? asm_exc_invalid_op+0x16/0x20
Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0x9e/0x2b0 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: ? nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
Jul 10 20:44:49 Unraid kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
Jul 10 20:44:49 Unraid kernel: nf_hook_slow+0x3d/0x96
Jul 10 20:44:49 Unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
Jul 10 20:44:49 Unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
Jul 10 20:44:49 Unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
Jul 10 20:44:49 Unraid kernel: ip_sabotage_in+0x52/0x60 [br_netfilter]
Jul 10 20:44:49 Unraid kernel: nf_hook_slow+0x3d/0x96
Jul 10 20:44:49 Unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
Jul 10 20:44:49 Unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
Jul 10 20:44:49 Unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_one_core+0x77/0x9c
Jul 10 20:44:49 Unraid kernel: netif_receive_skb+0xbf/0x127
Jul 10 20:44:49 Unraid kernel: br_handle_frame_finish+0x438/0x472 [bridge]
Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
Jul 10 20:44:49 Unraid kernel: br_nf_hook_thresh+0xe5/0x109 [br_netfilter]
Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
Jul 10 20:44:49 Unraid kernel: br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
Jul 10 20:44:49 Unraid kernel: ? NF_HOOK.isra.0+0xe4/0x140 [br_netfilter]
Jul 10 20:44:49 Unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
Jul 10 20:44:49 Unraid kernel: br_nf_pre_routing+0x236/0x24a [br_netfilter]
Jul 10 20:44:49 Unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
Jul 10 20:44:49 Unraid kernel: br_handle_frame+0x27a/0x2e0 [bridge]
Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_core.constprop.0+0x4fd/0x6e9
Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_list_core+0x8a/0x11e
Jul 10 20:44:49 Unraid kernel: netif_receive_skb_list_internal+0x1d2/0x20b
Jul 10 20:44:49 Unraid kernel: gro_normal_list+0x1d/0x3f
Jul 10 20:44:49 Unraid kernel: napi_complete_done+0x7b/0x11a
Jul 10 20:44:49 Unraid kernel: igb_poll+0xd88/0xf8e [igb]
Jul 10 20:44:49 Unraid kernel: ? run_cmd+0x13/0x51
Jul 10 20:44:49 Unraid kernel: ? update_overutilized_status+0x33/0x6e
Jul 10 20:44:49 Unraid kernel: ? hrtick_update+0x17/0x4f
Jul 10 20:44:49 Unraid kernel: __napi_poll.constprop.0+0x2b/0x124
Jul 10 20:44:49 Unraid kernel: net_rx_action+0x159/0x24f
Jul 10 20:44:49 Unraid kernel: __do_softirq+0x129/0x288
Jul 10 20:44:49 Unraid kernel: __irq_exit_rcu+0x5e/0xb8
Jul 10 20:44:49 Unraid kernel: common_interrupt+0x9b/0xc1
Jul 10 20:44:49 Unraid kernel: </IRQ>
Jul 10 20:44:49 Unraid kernel: <TASK>
Jul 10 20:44:49 Unraid kernel: asm_common_interrupt+0x22/0x40
Jul 10 20:44:49 Unraid kernel: RIP: 0010:cpuidle_enter_state+0x11d/0x202
Jul 10 20:44:49 Unraid kernel: Code: 16 37 a0 ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 24 f6 a4 ff fb 0f 1f 44 00 00 <45> 85 e4 0f 88 ba 00 00 00 48 8b 04 24 49 63 cc 48 6b d1 68 49 29
Jul 10 20:44:49 Unraid kernel: RSP: 0018:ffffc900001c7e98 EFLAGS: 00000246
Jul 10 20:44:49 Unraid kernel: RAX: ffff888ffeb80000 RBX: ffff888108c8cc00 RCX: 0000000000000000
Jul 10 20:44:49 Unraid kernel: RDX: 0000096113d8cad6 RSI: ffffffff820909fc RDI: ffffffff82090f05
Jul 10 20:44:49 Unraid kernel: RBP: 0000000000000002 R08: 0000000000000002 R09: 0000000000000002
Jul 10 20:44:49 Unraid kernel: R10: 0000000000000020 R11: 0000000000004bc6 R12: 0000000000000002
Jul 10 20:44:49 Unraid kernel: R13: ffffffff823235a0 R14: 0000096113d8cad6 R15: 0000000000000000
Jul 10 20:44:49 Unraid kernel: ? cpuidle_enter_state+0xf7/0x202
Jul 10 20:44:49 Unraid kernel: cpuidle_enter+0x2a/0x38
Jul 10 20:44:49 Unraid kernel: do_idle+0x18d/0x1fb
Jul 10 20:44:49 Unraid kernel: cpu_startup_entry+0x1d/0x1f
Jul 10 20:44:49 Unraid kernel: start_secondary+0xeb/0xeb
Jul 10 20:44:49 Unraid kernel: secondary_startup_64_no_verify+0xce/0xdb
Jul 10 20:44:49 Unraid kernel: </TASK>
Jul 10 20:44:49 Unraid kernel: ---[ end trace 0000000000000000 ]---

These are the Plex app logs while reproducing the error; the error below is what I get every time I try to transcode (the popup error). Regarding your comments on Frigate: the network is fine, because as soon as I disable GPU decoding everything works, and while using GPU decoding I can still access the cameras with other tools. Anyway, I'm going to make the changes you proposed to see if anything changes, considering that it affects Plex as well, and I have the same problem with Plex even if Frigate is stopped... thanks for your help. Maybe it is something in my config... but I don't even know where to start troubleshooting it, and the logs don't say a lot. All I know is that it only happens when the container tries to use the GPU for something.
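To catch it live, I tail the syslog for the conntrack warning while starting a transcode (a minimal check, assuming Unraid's default /var/log/syslog location):

    # watch for the nf_conntrack trace while triggering a Plex transcode
    tail -f /var/log/syslog | grep --line-buffered 'nf_conntrack'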
  2. For Plex there is nothing in the logs, just the error in the screenshot above when I try to use transcoding:

[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────
██╗     ███████╗██╗ ██████╗
██║     ██╔════╝██║██╔═══██╗
██║     ███████╗██║██║  ██║
██║     ╚════██║██║██║  ██║
███████╗███████║██║╚██████╔╝
╚══════╝╚══════╝╚═╝ ╚═════╝
Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID: 99
User GID: 100
───────────────────────────────────────
**** Server already claimed ****
**** permissions for /dev/dri/renderD128 are good ****
**** permissions for /dev/dri/card0 are good ****
Docker is used for versioning skip update check
[custom-init] No custom files found, skipping...
Starting Plex Media Server. . . (you can ignore the libusb_init error)
[ls.io-init] done.
Critical: libusb_init failed

Plex docker run:

    docker run -d --name='Plex' --net='br2' --ip='10.10.50.20' \
      --cpuset-cpus='4,5,6,7,8,9,16,17,18,19,20,21' \
      -e TZ="Europe/Paris" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Unraid" \
      -e HOST_CONTAINERNAME="Plex" -e 'VERSION'='docker' \
      -e 'NVIDIA_VISIBLE_DEVICES'='GPU-f1c0f52c-e491-64c7-428c-e10038734368' \
      -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'PUID'='99' -e 'PGID'='100' \
      -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' \
      -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' \
      -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' \
      -e '022'='022' \
      -l net.unraid.docker.managed=dockerman \
      -l net.unraid.docker.webui='http://[IP]:[PORT:32400]/web' \
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/plex-icon.png' \
      -v '/mnt/user/Video/Películas/':'/media/Películas':'rw' \
      -v '/mnt/user/Video/Movies/':'/media/Movies':'rw' \
      -v '/mnt/user/Video/Series/':'/media/Series':'rw' \
      -v '':'/movies':'rw' -v '':'/tv':'rw' -v '':'/music':'rw' \
      -v '/mnt/user/Docker/Plex/':'/config':'rw' \
      --dns=10.10.50.5 --no-healthcheck --runtime=nvidia \
      --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000 \
      'lscr.io/linuxserver/plex'

d52fd6937b48a59636659eacbee1624de26fe7ba3f718fff524eafdd4e205cba
The command finished successfully!

For Frigate, I opened this bug when I thought the problem was with Frigate, but it is actually with the GPU; you can see the logs in the last 3 or 4 posts: https://github.com/blakeblackshear/frigate/issues/7051 I don't know what else to do to troubleshoot this.
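One sanity check I can run (my own assumption, not something from the logs): if the NVIDIA runtime wired the GPU into the container correctly, nvidia-smi should work inside it.

    # run nvidia-smi inside the already-running Plex container
    docker exec Plex nvidia-smi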
  3. Since I installed 6.12.2, NVIDIA stopped working in my containers like Plex or Frigate; when I try to use it I get errors and crashes. Plugin version: 2023.07.06. In Frigate I get ffmpeg errors from the H264 decoder using the NVIDIA GPU, and an error in Plex when transcoding. All this was working fine before, and I haven't changed anything, neither in the docker run nor in the apps' config. Any idea what the problem could be? @SpencerJ could this be related to the issue you mention here? Although I'm using the latest version of everything. unraid-diagnostics-20230710-1727.zip
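One way to narrow it down (a sketch only; the sample path is a placeholder, and it assumes an ffmpeg build with NVIDIA support like the ones Frigate and Plex ship): try a bare NVDEC decode outside both apps.

    # decode an H264 sample on the GPU and discard the output
    ffmpeg -hwaccel cuda -c:v h264_cuvid -i /path/to/sample.mp4 -f null -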
  4. I have this configuration and I still have macvlan errors. Is this because I use the same network for the VMs as well? What could be the problem?
  5. I got this logs, any idea about the root cause? or how to do more troubleshooting? ===== START ===== 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 25409] 0 25409 52 7 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 25389] 0 25389 180192 1006 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 25226] 0 25226 234860 19547 3559424 0 0 node 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 25088] 0 25088 536 15 40960 0 0 dumb-init 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 25067] 0 25067 180256 999 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24676] 0 24676 179791 1679 106496 0 0 dozzle 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24653] 0 24653 180192 890 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24643] 0 24643 715402 9959 557056 0 0 mono-sgen 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24471] 201 24471 5140 264 77824 0 0 netdata 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24296] 0 24296 35484 4973 176128 0 0 mono 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24217] 201 24217 62608 39133 581632 0 0 netdata 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24180] 0 24180 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24172] 0 24172 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24171] 0 24171 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24170] 0 24170 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24150] 0 24150 180192 812 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24123] 0 24123 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24122] 0 24122 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 24069] 99 24069 100958 28553 622592 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23957] 0 23957 52 8 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23931] 0 23931 180256 1036 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23777] 99 23777 4665 3186 73728 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23775] 0 23775 114 10 36864 0 0 s6-ftrigrd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23774] 0 23774 56 7 36864 0 0 s6-svlisten1 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23633] 0 23633 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23629] 0 23629 54 6 24576 0 0 s6-rc 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23620] 0 23620 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23619] 0 23619 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23617] 0 23617 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23608] 0 23608 180131 968 126976 0 0 unpackerr 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23579] 0 23579 180256 1103 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23563] 0 23563 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 
source=Unraid Unraid kernel: [ 23561] 0 23561 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23560] 0 23560 406 16 40960 0 0 rc.init 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23554] 99 23554 24228 14171 237568 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23329] 0 23329 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23307] 0 23307 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23306] 0 23306 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23305] 0 23305 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23281] 0 23281 52 6 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23250] 0 23250 180192 1046 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23235] 0 23235 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 23233] 0 23233 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 22973] 0 22973 52 7 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 22953] 0 22953 180256 1067 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 22704] 1883 22704 956 186 40960 0 0 mosquitto 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 22684] 0 22684 180256 1031 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20792] 0 20792 676449 241599 2670592 0 0 qemu-system-x86 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20786] 0 20786 2340 1021 53248 0 0 swtpm 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20771] 0 20771 19125 1319 126976 0 0 winbindd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20658] 0 20658 19042 1531 126976 0 0 winbindd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20653] 0 20653 18993 1409 126976 0 0 winbindd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20651] 0 20651 19534 1049 131072 0 0 cleanupd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20650] 0 20650 19534 1201 135168 0 0 smbd-notifyd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20647] 0 20647 19881 1669 139264 0 0 smbd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20552] 0 20552 37503 2450 233472 0 0 ffmpeg 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20497] 0 20497 1802527 1640742 14467072 0 0 qemu-system-x86 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20475] 0 20475 441940 100915 1548288 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20473] 0 20473 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20440] 0 20440 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20432] 0 20432 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20431] 0 20431 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20422] 0 20422 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20420] 0 20420 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20413] 0 20413 31705 1763 167936 0 0 ffmpeg 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: 
[ 20411] 0 20411 31705 1753 167936 0 0 ffmpeg 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20398] 0 20398 2421639 53602 741376 0 0 ffmpeg 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20394] 0 20394 349098 21812 495616 0 0 frigate.capture 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20391] 0 20391 360790 22700 495616 0 0 frigate.capture 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20390] 0 20390 434827 24150 593920 0 0 frigate.process 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20389] 0 20389 31705 1763 163840 0 0 ffmpeg 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20388] 0 20388 432779 22059 569344 0 0 frigate.process 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20385] 0 20385 333083 21109 532480 0 0 frigate.output 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20383] 0 20383 350647 22117 495616 0 0 frigate.detecto 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20382] 0 20382 3564 1319 65536 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20370] 0 20370 247659 19180 425984 0 0 frigate.logger 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20231] 0 20231 52 7 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20205] 0 20205 180256 907 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20113] 0 20113 1857 84 49152 0 0 dnsmasq 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 20112] 99 20112 1890 500 49152 0 0 dnsmasq 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 19614] 0 19614 364802 2705 286720 0 0 libvirtd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 19570] 0 19570 8099 1476 98304 0 0 virtlogd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 19548] 0 19548 8082 1484 98304 0 0 virtlockd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 17867] 0 17867 355200 2727 266240 0 0 scrutiny 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16881] 0 16881 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16837] 0 16837 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16800] 0 16800 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16762] 0 16762 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16721] 0 16721 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16681] 0 16681 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16654] 0 16654 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16630] 0 16630 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16607] 0 16607 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16562] 0 16562 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16504] 0 16504 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16456] 0 16456 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16414] 0 16414 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16376] 0 16376 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 
16355] 0 16355 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16332] 0 16332 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16313] 0 16313 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16291] 0 16291 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16261] 0 16261 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16232] 0 16232 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16212] 0 16212 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16204] 0 16204 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16203] 0 16203 161947 1220 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16202] 0 16202 161947 1216 208896 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16190] 1000 16190 5750 1500 81920 0 0 Xvfb 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16112] 0 16112 145420 1818 94208 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16104] 0 16104 973 70 49152 0 0 bash 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16101] 0 16101 697818 24103 823296 0 0 python3 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16095] 0 16095 181921 4465 143360 0 0 go2rtc 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16091] 65534 16091 69 5 36864 0 0 s6-log 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16090] 65534 16090 69 7 36864 0 0 s6-log 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16087] 65534 16087 69 4 36864 0 0 s6-log 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16041] 0 16041 131 18 28672 0 0 s6-fdholderd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16040] 0 16040 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16030] 0 16030 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16029] 0 16029 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16028] 0 16028 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16027] 0 16027 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16026] 0 16026 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16025] 0 16025 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16024] 0 16024 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16023] 0 16023 53 5 28672 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 16022] 0 16022 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15984] 99 15984 166116 36237 585728 0 0 mariadbd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15807] 0 15807 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15806] 0 15806 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15759] 99 15759 493526 43842 790528 0 0 mono 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15712] 0 15712 79225 35284 647168 0 0 vector 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: 
[ 15706] 99 15706 428 37 45056 0 0 mariadbd-safe 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15700] 0 15700 571 60 36864 0 0 bash 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15652] 0 15652 180256 2103 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15561] 999 15561 54773 785 135168 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15560] 999 15560 18345 580 114688 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15558] 999 15558 54800 878 147456 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15557] 999 15557 54666 1659 139264 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15556] 999 15556 54666 939 147456 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15555] 999 15555 54699 1007 155648 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15513] 0 15513 210 15 36864 0 0 vlmcsd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: CPU: 1 PID: 32346 Comm: node Tainted: P O 6.1.36-Unraid #1 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15479] 0 15479 180256 2777 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15405] 1100 15405 5744396 381475 4644864 0 0 java 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15349] 0 15349 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15346] 0 15346 47 1 24576 0 0 s6-ipcserverd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15332] 0 15332 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15331] 0 15331 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15330] 0 15330 53 6 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15329] 0 15329 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15328] 0 15328 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15327] 0 15327 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15246] 0 15246 1911122 331930 4112384 0 0 python 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15224] 0 15224 180256 2643 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15205] 0 15205 223057 30886 532480 0 0 AdGuardHome 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15086] 99 15086 651503 27495 634880 0 0 Radarr 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 15038] 1100 15038 696 23 45056 0 0 tini 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14976] 0 14976 180192 1942 118784 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14909] 0 14909 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14907] 0 14907 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14878] 0 14878 204 9 36864 0 0 tini 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14850] 0 14850 180320 2004 122880 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14841] 0 14841 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14838] 0 14838 53 4 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14618] 
101 14618 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14617] 101 14617 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14616] 101 14616 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14615] 101 14615 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14614] 101 14614 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14613] 101 14613 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14612] 101 14612 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14611] 101 14611 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14610] 101 14610 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14609] 101 14609 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14608] 101 14608 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14607] 101 14607 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14606] 101 14606 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14605] 101 14605 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14604] 101 14604 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14603] 101 14603 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14602] 101 14602 2279 414 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14601] 101 14601 2279 431 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14600] 101 14600 2279 431 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14599] 101 14599 2279 431 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14598] 101 14598 2285 433 49152 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14280] 0 14280 52 7 28672 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 14085] 101 14085 2161 314 53248 0 0 nginx 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13955] 0 13955 937 63 45056 0 0 cron 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13938] 0 13938 52 1 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13700] 0 13700 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13619] 0 13619 53 5 24576 0 0 s6-supervise 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13438] 0 13438 50 1 24576 0 0 s6-linux-init-s 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 13286] 999 13286 54666 3258 155648 0 0 postgres 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 12988] 0 12988 52 8 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 12849] 0 12849 52 7 24576 0 0 s6-svscan 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 12378] 0 12378 180256 2292 114688 0 1 containerd-shim 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 12137] 1000 12137 2892 202 65536 0 0 opensearch-dock 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 11393] 1000 11393 558 16 40960 0 0 dumb-init 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 10921] 999 10921 312530 3142 
167936 0 0 go-cron 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 10557] 999 10557 612412 32020 700416 0 0 mongod 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 10209] 0 10209 204 10 36864 0 0 tini 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 9876] 0 9876 204 10 36864 0 0 tini 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 7276] 0 7276 612083 9836 446464 0 0 containerd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 7180] 61 7180 1428 629 45056 0 0 avahi-daemon 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 7038] 0 7038 995 756 45056 0 0 sh 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 6929] 0 6929 172862 3575 139264 0 0 shfs 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 6156] 0 6156 23181 2064 163840 0 0 php-fpm 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 5976] 0 5976 652 235 40960 0 0 agetty 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 5972] 0 5972 652 233 40960 0 0 agetty 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 4178] 0 4178 1281 785 45056 0 0 atd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 1820] 0 1820 650 24 40960 0 0 acpid 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [ 1433] 0 1433 52847 1079 69632 0 0 rsyslogd 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: 0 pages cma reserved 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Total swap = 0kB 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15360kB 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 DMA32 free:122372kB boost:0kB min:6020kB low:8936kB high:11852kB reserved_highatomic:0KB active_anon:1900kB inactive_anon:2748980kB active_file:0kB inactive_file:372kB unevictable:0kB writepending:4kB present:3037444kB managed:2945280kB mlocked:0kB bounce:0kB free_pcp:248kB local_pcp:0kB free_cma:0kB 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: free:61397 free_pcp:62 free_cma:0 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: slab_reclaimable:52862 slab_unreclaimable:63682 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Mem-Info: 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: RBP: 00007ffc5b195250 R08: 0000000000000000 R09: 00001e0be983fd41 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Code: Unable to access opcode bytes at 0x55b3dd0c63e6. 
2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: do_user_addr_fault+0x36a/0x530 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: filemap_fault+0x317/0x52f 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: __alloc_pages+0x132/0x1e8 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: dump_header+0x4a/0x211 2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021 2023-07-04T11:03:11.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sde 2023-07-04T11:18:12.000+02:00 source=Unraid Unraid emhttpd: spinning down /dev/sde 2023-07-04T13:56:22.000+02:00 source=Unraid Unraid kernel: hrtimer: interrupt took 7515 ns 2023-07-04T17:19:45.000+02:00 source=Unraid Unraid webGUI: Successful login user root from 10.10.10.21 2023-07-04T17:20:58.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sdd 2023-07-04T17:20:58.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sdf ===== END ===== unraid-diagnostics-20230704-1730.zip
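The dump above looks like an out-of-memory killer report (note the Mem-Info and dump_header lines), so my first troubleshooting step is to pull the trigger and victim lines out of the syslog (assuming the default /var/log/syslog path):

    # find which allocation invoked the OOM killer and which process it killed
    grep -E 'oom-killer|Out of memory|Killed process' /var/log/syslog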
  6. Therefore it is a problem with the plugin, because Compose Manager should do a clean stop of the compose files before a restart or shutdown, so that next time they will autostart without issues; something like the sketch below. I need the brX networks for security reasons: I need the containers to be in different VLANs/networks and to set up different firewall rules between hosts. This won't be possible if everything is behind one IP with a layer 3-4 firewall.
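Here is the sketch (it assumes the compose.manager project layout from my own setup; the compose file name may differ):

    #!/bin/bash
    # bring every compose project down cleanly before the array stops
    for f in /boot/config/plugins/compose.manager/projects/*/docker-compose.yml; do
      docker compose -f "$f" down
    done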
  7. But br1 is a network created by Unraid; the internal ones exist only in docker compose. Do you mean that those networks are the problematic ones? In any case it might be something that can be solved on the plugin side; it doesn't make sense that it fails when the plugin launches the compose file but works if I do it manually by pressing the button.
  8. This is how it looks after restarting the array, and if I try to start some of them I have to stop them and start them again for them to work. Maybe a delay won't fix it and the issue is something else, but in any case delay and ordering features would be nice.
  9. With this configuration or a similar one (different names on the internal network):

    networks:
      br1:
        driver: macvlan
        external: true
      graylog-network:
        internal: true

After a clean restart, some of the compose projects fail to start, so I have to stop them and start them again.
  10. Right now there is no clean way to monitor security on Unraid OS, and I think that is critical, since many people are publishing Docker containers to the internet. Unraid is not compatible with auditd, Wazuh, Elastic agents, or similar solutions. Even simple projects like CrowdSec integrate with auditd nowadays, so people could easily implement basic monitoring, or go further with tools that use Sigma rules: Wazuh, Security Onion, QRadar Community Edition, etc. So the request, at a minimum, is official auditd support, which is the standard way to monitor a Linux OS; Wazuh support would be awesome as well. https://slackbuilds.org/repository/15.0/system/audit/
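To illustrate what official support would enable, a couple of standard audit.rules entries (the watched path is just an example I picked):

    # watch the flash config for writes and attribute changes
    -w /boot/config -p wa -k unraid-config
    # log every command executed by root
    -a always,exit -F arch=b64 -S execve -F euid=0 -k root-exec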
  11. Some of my compose containers fail to start at boot; I'm under the impression that this is because some network components or other parts of the OS are not ready that early. Is anyone experiencing the same thing? Is there any way to fix it or add a delay? I'm using only the autostart feature that comes with Compose Manager (a workaround sketch is below). Thanks in advance.
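The workaround I'm considering (a sketch only; it assumes the User Scripts plugin and the compose.manager project layout from my setup, and the compose file name may differ):

    #!/bin/bash
    # "At Startup of Array" script: give networking time to settle,
    # then bring a project up manually instead of relying on autostart
    sleep 60
    docker compose -f /boot/config/plugins/compose.manager/projects/Nextcloud/docker-compose.yml up -d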
  12. Can we get support for auditd? https://unix.stackexchange.com/questions/502878/slackware-14-2-turn-on-the-auditd-daemon https://slackbuilds.org/repository/14.2/system/audit/ A newer version would be better: https://slackbuilds.org/repository/15.0/system/audit/ https://github.com/linux-audit I think it is crucial to be able to monitor Unraid's security. Or a Wazuh agent? Not sure if it's compatible, but this is the only reference to Slackware I have found: https://github.com/wazuh/wazuh/blob/master/src/init/init.sh
  13. Has anyone managed to install/enable auditd in Unraid? Is there any way to get security logs from Unraid/Slackware? auditd doesn't work, Wazuh either... security is always forgotten in Unraid. @limetech can we get "official" auditd support in Unraid? Would these instructions work with the latest version of Unraid? https://unix.stackexchange.com/questions/502878/slackware-14-2-turn-on-the-auditd-daemon https://slackbuilds.org/repository/15.0/system/audit/ https://github.com/linux-audit
  14. Did you find any solution for monitoring the security of Unraid?
  15. It's possible with the Docker version, with - /var/run/docker.sock:/var/run/docker.sock mounted. Promtail config:

    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    positions:
      filename: /positions/positions.yaml
    clients:
      - url: http://10.10.40.251:3100/loki/api/v1/push
    scrape_configs:
      - job_name: system
        static_configs:
          - targets:
              - localhost
            labels:
              job: varlogs
              __path__: /host/log/*log
      - job_name: docker
        # use docker.sock to filter containers
        docker_sd_configs:
          - host: unix:///var/run/docker.sock
            refresh_interval: 15s
            #filters:
            #  - name: label
            #    values: ["logging=promtail"]
        # use container name to create a loki label
        relabel_configs:
          - source_labels: ['__meta_docker_container_name']
            regex: '/(.*)'
            target_label: 'container'
          - source_labels: ['__meta_docker_container_log_stream']
            target_label: 'logstream'
          - source_labels: ['__meta_docker_container_label_logging_jobname']
            target_label: 'job'

And the compose service:

    promtail:
      # run as root, update to rootless mode later
      user: "0:0"
      container_name: Mon-Promtail
      image: grafana/promtail:main
      command: -config.file=/etc/promtail/docker-config.yaml
      depends_on:
        - loki
      restart: unless-stopped
      networks:
        mon-netsocketproxy:
        mon-netgrafana:
        br1:
          ipv4_address: 10.10.40.252
      dns: 10.10.50.5
      ports:
        - 9800:9800
      volumes:
        # logs for linux host only
        - /var/log:/host/log
        #- /var/lib/docker/containers:/var/lib/docker/containers:ro
        - /mnt/user/Docker/Monitoring/Promtail/promtail-config.yaml:/etc/promtail/docker-config.yaml
        - /mnt/user/Docker/Monitoring/Promtail/positions:/positions
        - /var/run/docker.sock:/var/run/docker.sock
      labels:
        - "com.centurylinklabs.watchtower.enable=true"
  16. @Uleepera please check this; I think you don't need to edit the file. And if you don't mind, report back — I plan to do the same soon.
  17. @jakobklemm Have you found a solution? Have you tried specifying the log driver in the docker run/compose instead of in the file, as explained here? https://linuxblog.xyz/posts/grafana-loki/ How did you install the driver? Like this: "docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions"? Does this survive reboots?
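For reference, this is what specifying the driver per container in compose looks like (the service is a placeholder; the loki-url follows the push endpoint from my Promtail setup, adjust to yours):

    services:
      someapp:
        image: nginx
        logging:
          driver: loki
          options:
            loki-url: "http://10.10.40.251:3100/loki/api/v1/push"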
  18. In fact, what would make sense for Unraid is to move from Docker to Podman and then use compose.
  19. I have the same problem: I upgraded to the RC and the plugin disappeared from the plugin list. I installed it again, it gives that error, and it "works" but appears as not installed everywhere.
  20. I'm trying to use secrets with docker compose in Unraid. Right now I'm using this compose file:

    ###############################################################
    # Nextcloud
    ###############################################################
    version: "3.9"

    # Services ####################################################
    services:

      db:
        image: postgres:alpine
        container_name: Nextcloud_Postgres
        restart: unless-stopped
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
          interval: 10s
          timeout: 5s
          retries: 10
        networks:
          - nextcloud_network
        environment:
          - TZ
          - POSTGRES_PASSWORD
          - POSTGRES_USER
          - POSTGRES_DB
          #- POSTGRES_INITDB_ARGS
          #- POSTGRES_INITDB_WALDIR
          #- POSTGRES_HOST_AUTH_METHOD
        secrets:
          - POSTGRES_PASSWORD
          - POSTGRES_USER
          - POSTGRES_DB
        volumes:
          - /mnt/user/Docker/Nextcloud/postgres/data:/var/lib/postgresql/data:z
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

      pgbackups:
        image: prodrigestivill/postgres-backup-local
        container_name: NextCloud_pgbackups
        restart: unless-stopped
        user: postgres:postgres # Optional: see below
        networks:
          - nextcloud_network
        volumes:
          - /mnt/user/Docker/Nextcloud/pgbackups:/backups
        links:
          - db
        depends_on:
          db:
            condition: service_healthy
        environment:
          - TZ
          - POSTGRES_HOST
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
          #- POSTGRES_PASSWORD_FILE=/run/secrets/db_password <-- alternative for POSTGRES_PASSWORD (to use with docker secrets)
          - POSTGRES_EXTRA_OPTS=-Z6
          - SCHEDULE=0 1 */3 * * #At 01:00 AM, every 3 days
          - BACKUP_KEEP_DAYS=6
          #- BACKUP_KEEP_WEEKS=4
          #- BACKUP_KEEP_MONTHS=6
          - HEALTHCHECK_PORT=5432
        secrets:
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

      redis:
        image: redis:alpine
        container_name: NextCloud_Redis
        restart: unless-stopped
        command: redis-server --requirepass $REDIS_HOST_PASSWORD
        volumes:
          - /mnt/user/Docker/Nextcloud/redis:/data
        environment:
          - TZ
        networks:
          - nextcloud_network
        secrets:
          - REDIS_HOST_PASSWORD
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

      app:
        image: nextcloud:fpm-alpine
        container_name: Nextcloud
        restart: unless-stopped
        depends_on:
          db:
            condition: service_healthy
        networks:
          nextcloud_network:
          br1:
            ipv4_address: 10.10.40.161
        dns:
          - 10.10.50.5
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z
          - /mnt/user/Docker/Nextcloud/nextcloud/custom_apps:/var/www/html/custom_apps:z
          - /mnt/user/Docker/Nextcloud/nextcloud/config:/var/www/html/config:z
          - /mnt/user/Docker/Nextcloud/nextcloud/data:/var/www/html/data:z
          - /mnt/user/Personal/Nextcloud:/var/www/html/data
          - /mnt/user/Personal/Photos:/Albums
        environment:
          - TZ
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
          - POSTGRES_HOST
          - REDIS_HOST
          - REDIS_HOST_PASSWORD
        secrets:
          - POSTGRES_PASSWORD
          - POSTGRES_DB
          - POSTGRES_USER
          - REDIS_HOST_PASSWORD
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

      web:
        build: ./web
        container_name: NextCloud_Nginx-fpm
        restart: unless-stopped
        networks:
          nextcloud_network:
          br1:
            ipv4_address: 10.10.40.160
        ports:
          - 8080:80
        dns:
          - 10.10.50.5
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z,ro
        environment:
          - TZ
        depends_on:
          - app

      cron:
        image: nextcloud:fpm-alpine
        container_name: NextCloud_Cron
        restart: unless-stopped
        depends_on:
          - db
          - redis
        networks:
          - nextcloud_network
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z
          - /mnt/user/Docker/Nextcloud/nextcloud/custom_apps:/var/www/html/custom_apps:z
          - /mnt/user/Docker/Nextcloud/nextcloud/config:/var/www/html/config:z
          - /mnt/user/Docker/Nextcloud/nextcloud/data:/var/www/html/data:z
          - /mnt/user/Personal/Nextcloud:/var/www/html/data:z
        environment:
          - TZ
        entrypoint: /cron.sh
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

    # Networks ####################################################
    networks:
      br1:
        driver: macvlan
        external: true
      nextcloud_network:
        internal: true

    # Docker Secrets ##############################################
    secrets:
      # POSTGRES_PASSWORD
      POSTGRES_PASSWORD:
        file: $DOCKERDIR/secrets/POSTGRES_PASSWORD.txt
      # POSTGRES_USER
      POSTGRES_USER:
        file: $DOCKERDIR/secrets/POSTGRES_USER.txt
      # POSTGRES_DB
      POSTGRES_DB:
        file: $DOCKERDIR/secrets/POSTGRES_DB.txt
      # REDIS_HOST_PASSWORD
      REDIS_HOST_PASSWORD:
        file: $DOCKERDIR/secrets/REDIS_HOST_PASSWORD.txt

with this env file:

    ###############################################################
    # Nextcloud
    ###############################################################
    DOCKERDIR=/boot/config/plugins/compose.manager/projects/Nextcloud
    TZ=Europe/Madrid
    PUID=99
    PGID=100

    # Redis
    #REDIS_HOST_PASSWORD=/run/secrets/REDIS_HOST_PASSWORD.txt
    REDIS_HOST_PASSWORD=password
    REDIS_HOST=redis

    # Postgres
    POSTGRES_PASSWORD=/run/secrets/POSTGRES_PASSWORD.txt
    POSTGRES_USER=/run/secrets/POSTGRES_USER.txt
    POSTGRES_DB=/run/secrets/POSTGRES_DB.txt
    POSTGRES_HOST=db

But as you can see below, the secrets are not being loaded correctly; if I put the passwords in the env file, it works. I was trying to learn how to use secrets. I have the feeling that Docker is not loading the secrets correctly from the path /boot/config/plugins/compose.manager/projects/Nextcloud/secrets. Maybe it doesn't have access, or it's not the right path... I think I'm doing everything right, but I have spent a few hours reading and trying to fix it without success. https://docs.docker.com/compose/use-secrets/
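For contrast, my understanding of how file-based secrets are meant to be consumed (a sketch, not a verified fix): compose mounts each secret at /run/secrets/<name> inside the container, without the .txt suffix, and images like postgres read it through the *_FILE convention rather than taking the path as a plain env value:

    services:
      db:
        image: postgres:alpine
        secrets:
          - POSTGRES_PASSWORD
        environment:
          # the image reads the file itself; the value is the in-container
          # mount path of the secret, not the host file
          - POSTGRES_PASSWORD_FILE=/run/secrets/POSTGRES_PASSWORD
    secrets:
      POSTGRES_PASSWORD:
        file: $DOCKERDIR/secrets/POSTGRES_PASSWORD.txt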
  21. But there is a benefit to using secrets as files vs. including the secrets in environment variables, right? @neecapp https://gist.github.com/bvis/b78c1e0841cfd2437f03e20c1ee059fe I have tried to implement it as explained here; I am also using Docker Compose Manager: https://github.com/brokenscripts/authentik_traefik But I'm not sure what the real path is where I have to store the secrets; in the example it is "/ssd/compose/secrets/authelia_notifier_smtp_password". What would the path be for docker compose in Unraid? @primeval_god Any help or guidance will be welcome.
  22. Instead of trying to script it like I'm doing, could you add a feature to schedule auto-updates in Docker Compose Manager? CPU pinning support would be awesome as well.
  23. I have some macvlan errors but no crashes so far. With 6.11 everything was fine. Regarding fixing it by disabling bridging: is that even an option? If I disable bridging, how can I give a static IP to every Docker container? (See the sketch below for what I understand the alternative to be.)
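As far as I understand the proposed fix (my assumption, not confirmed for my setup), the idea is to switch the custom networks from macvlan to ipvlan, which still allows per-container static IPs; roughly:

    # subnet, gateway, and parent interface are examples — adjust to yours
    docker network create -d ipvlan \
      --subnet=10.10.50.0/24 --gateway=10.10.50.1 \
      -o parent=eth0 ipvlan50
    # a container still gets a fixed address on that network
    docker run -d --name=test --net=ipvlan50 --ip=10.10.50.20 alpine sleep 1d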
  24. @primeval_god Could you please share the commands the plugin uses to run compose up/down and update? On the other hand, I have this bug where these containers are all detected as having an update when that is not true, and when I click on apply update I get this message: "Configuration not found. Was this container created using this plugin?"
  25. I have noticed that the CPU Pinning feature doesn't support/detect docker compose containers.