
New server just failed, is this a bug, or my issue?


Solved by JorgeB


New server build that I am trying to get stable (hahahaha), and I just got this in the syslog server:

 

Date	Time	Level	Host Name	Category	Program	Messages
2024-04-17	11:17:45	Info	iPlex	kern	kernel	traps: lsof[29698] general protection fault ip:148120483c6e sp:a1bfc289255a8de8 error:0 in libc-2.37.so[14812046b000+169000]
2024-04-17	11:14:26	Info	iPlex	kern	kernel	traps: lsof[22489] general protection fault ip:153286a80c6e sp:a9cf6240a6091734 error:0 in libc-2.37.so[153286a68000+169000]
2024-04-17	11:12:21	Info	iPlex	kern	kernel	traps: lsof[17478] general protection fault ip:147adb8b5c6e sp:7c59ab0461e19d2e error:0 in libc-2.37.so[147adb89d000+169000]
2024-04-17	11:11:40	Info	iPlex	kern	kernel	traps: lsof[16503] general protection fault ip:15105604ec6e sp:248ae9a5c44e6efa error:0 in libc-2.37.so[151056036000+169000]
2024-04-17	11:11:01	Info	iPlex	kern	kernel	traps: lsof[15225] general protection fault ip:1497638d9c6e sp:fbc3203c1f187d09 error:0 in libc-2.37.so[1497638c1000+169000]
2024-04-17	11:03:58	Alert	iPlex	kern	kernel	BUG: Bad rss-counter state mm:00000000e871bdd6 type:MM_ANONPAGES val:1
2024-04-17	10:48:37	Info	iPlex	user	emhttpd	shcmd (3597): /usr/local/sbin/update_cron

 

iplex-diagnostics-20240417-1227.zip

Edited by kcossabo
Link to comment

More data. 

 

- One container went unresponsive
- CPU: most cores at 100%
- Bandwidth dropped to almost zero

Nothing running was that CPU intensive.
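
If it happens again, a quick snapshot of what is actually eating the CPU would help narrow it down; just a sketch using stock tools:

top -b -n 1 -o %CPU | head -n 20    # one batch-mode snapshot, sorted by CPU, top 20 lines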

 

Apr 17 15:39:46 iPlex kernel: </IRQ>
Apr 17 15:39:46 iPlex kernel: <TASK>
Apr 17 15:39:46 iPlex kernel: asm_common_interrupt+0x22/0x40
Apr 17 15:39:46 iPlex kernel: RIP: 0010:cpuidle_enter_state+0x11d/0x202
Apr 17 15:39:46 iPlex kernel: Code: 91 f4 9f ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 84 b0 a4 ff fb 0f 1f 44 00 00 <45> 85 e4 0f 88 ba 00 00 00 48 8b 04 24 49 63 cc 48 6b d1 68 49 29
Apr 17 15:39:46 iPlex kernel: RSP: 0018:ffffc9000024fe98 EFLAGS: 00000246
Apr 17 15:39:46 iPlex kernel: RAX: ffff88a00f6c0000 RBX: ffff88a00f6f6600 RCX: 0000000000000000
Apr 17 15:39:46 iPlex kernel: RDX: 00000aa7fd12888a RSI: ffffffff820d8766 RDI: ffffffff820d8c6f
Apr 17 15:39:46 iPlex kernel: RBP: 0000000000000001 R08: 0000000000000002 R09: 0000000000000002
Apr 17 15:39:46 iPlex kernel: R10: 0000000000000020 R11: 0000000000000020 R12: 0000000000000001
Apr 17 15:39:46 iPlex kernel: R13: ffffffff82320640 R14: 00000aa7fd12888a R15: 0000000000000000
Apr 17 15:39:46 iPlex kernel: ? cpuidle_enter_state+0xf7/0x202
Apr 17 15:39:46 iPlex kernel: cpuidle_enter+0x2a/0x38
Apr 17 15:39:46 iPlex kernel: do_idle+0x18d/0x1fb
Apr 17 15:39:46 iPlex kernel: cpu_startup_entry+0x2a/0x2c
Apr 17 15:39:46 iPlex kernel: start_secondary+0x101/0x101
Apr 17 15:39:46 iPlex kernel: secondary_startup_64_no_verify+0xce/0xdb
Apr 17 15:39:46 iPlex kernel: </TASK>
Apr 17 15:39:46 iPlex kernel: ---[ end trace 0000000000000000 ]---
Apr 17 15:53:46 iPlex kernel: traps: lsof[12235] general protection fault ip:153f2d5c9c6e sp:2f81851bcb832f19 error:0 in libc-2.37.so[153f2d5b1000+169000]
Apr 17 15:55:43 iPlex kernel: traps: lsof[18331] general protection fault ip:1518b3f78c6e sp:47adfde6dd61f953 error:0 in libc-2.37.so[1518b3f60000+169000]

 

Link to comment
1 hour ago, kcossabo said:

failure again... Running memory test now

Definitely worth doing. Since the memory test is only definitive if it reports errors, running with fewer RAM sticks can sometimes help even if it passes.

Link to comment
33 minutes ago, itimpi said:

Since the memory test is only definitive if it reports errors, running with fewer RAM sticks can sometimes help even if it passes.

So, swap the DIMMs around to determine if there is an issue? I assume we are not saying less memory is better? I just threw out the packaging!

Link to comment
3 minutes ago, kcossabo said:

So, swap the DIMMs around to determine if there is an issue? I assume we are not saying less memory is better? I just threw out the packaging!

 

No - it is that fewer RAM sticks can sometimes help, as that puts less load on the RAM controller, and it also eliminates the case where one stick is faulty/borderline and the other is OK. It is at least an easy test to carry out to see if it helps pinpoint an issue.

Link to comment

Cool

 

Many events.

- Built server with 64G

- Finished unRAID install and moved data to the array (18TB)
- Added 64G more

- Launched the Plex Docker WITH
— transcoding to /dev/shm/

Memory usage (watched on the web UI) showed less than 5% utilization

Plex was ingesting shows, marking commercials, and generating thumbnails
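
As a side note, how much of /dev/shm the transcodes are actually consuming can be checked at any moment with a one-liner (just a sketch; /dev/shm is the default tmpfs mount):

df -h /dev/shm    # size and current usage of the tmpfs backing /dev/shm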

 

Crash.

Seems like it is memory, but:

- first time executing this workload

- no real memory in use

I cannot see anywhere that the error points to memory.

Another hour or so on the memory test, then I will pull 2x DIMMs (must be paired), down to 64G, and try again.

The parity check is beating up the HDDs.
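
As a complement to the boot-time memtest, an in-OS check is also possible; a minimal sketch, assuming memtester is available on the system (it is not installed by default):

memtester 8G 3    # lock and test 8G of RAM for three passes (run as root, with enough free RAM)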

Link to comment
Posted (edited)

 

 

[attached image]

 

Removed 50% of the memory (2 of 4 DIMMs)

 

Still getting the errors.

 

Steps taken so far:
1) Removed the Network UPS Tools (NUT) plugin, as I was getting:
 

Apr 17 20:09:39 iPlex upsd[3149]: UPS [ups] data is no longer stale
Apr 17 20:10:09 iPlex usbhid-ups[3145]: nut_libusb_get_report: Input/Output Error
Apr 17 20:10:11 iPlex usbhid-ups[3145]: #012Reconnecting. If you saw "nut_libusb_get_interrupt: Input/Output Error" or similar message in the log above, try setting "pollonly" flag in "ups.conf" options section for this driver!
Apr 17 20:10:11 iPlex usbhid-ups[3145]: nut_libusb_get_string: Input/Output Error
Apr 17 20:10:12 iPlex usbhid-ups[3145]: nut_libusb_get_report: Input/Output Error
Apr 17 20:10:13 iPlex usbhid-ups[3145]: #012Reconnecting. If you saw "nut_libusb_get_interrupt: Input/Output Error" or similar message in the log above, try setting "pollonly" flag in "ups.conf" options section for this driver!
Apr 17 20:10:21 iPlex usbhid-ups[3145]: nut_libusb_get_report: Input/Output Error
Apr 17 20:10:23 iPlex usbhid-ups[3145]: #012Reconnecting. If you saw "nut_libusb_get_interrupt: Input/Output Error" or similar message in the log above, try setting "pollonly" flag in "ups.conf" options section for this driver!
Apr 17 20:10:23 iPlex usbhid-ups[3145]: nut_libusb_get_string: Input/Output Error
Apr 17 20:10:23 iPlex usbhid-ups[3145]: nut_libusb_get_report: Input/Output Error
Apr 17 20:10:25 iPlex usbhid-ups[3145]: #012Reconnecting. If you saw "nut_libusb_get_interrupt: Input/Output Error" or similar message in the log above, try setting "pollonly" flag in "ups.conf" options section for this driver!
Apr 17 20:10:25 iPlex usbhid-ups[3145]: nut_libusb_get_report: Input/Output Error
Apr 17 20:10:27 iPlex usbhid-ups[3145]: #012Reconnecting. If you saw "nut_libusb_get_interrupt: Input/Output Error" or similar message in the log above, try setting "pollonly" flag in "ups.conf" options section for this driver!

 

Still got failures

 

Apr 17 20:12:11 iPlex root: plugin: nut-dw.plg removed
Apr 17 20:14:51 iPlex kernel: traps: lsof[28903] general protection fault ip:15517794ac6e sp:58d971605684f957 error:0 in libc-2.37.so[155177932000+169000]
Apr 17 20:15:18 iPlex kernel: traps: lsof[29991] general protection fault ip:14dbe18ddc6e sp:1940c4ee01a6ed39 error:0 in libc-2.37.so[14dbe18c5000+169000]

 

Unplugged the UPS USB cord

 

Apr 17 20:30:13 iPlex kernel: usb 1-3: USB disconnect, device number 2
Apr 17 20:48:54 iPlex kernel: traps: lsof[2100] general protection fault ip:148298636c6e sp:ad331a67e1ca092 error:0 in libc-2.37.so[14829861e000+169000]
Apr 17 20:50:24 iPlex kernel: traps: lsof[4223] general protection fault ip:145ff9d60c6e sp:8a07ac029a14ec37 error:0 in libc-2.37.so[145ff9d48000+169000]
Apr 17 20:51:58 iPlex kernel: traps: lsof[5424] general protection fault ip:147e56030c6e sp:5cf56d4987016ac8 error:0 in libc-2.37.so[147e56018000+169000]

 

Changed the Plex Docker to not use /dev/shm/.

 

Will continue to test tomorrow.

Edited by kcossabo
Link to comment

Apr 18, 2024  10:13:34 AM 

Dug into the "lsof" which seems to be a 'file' thing. 

From there, looked at recent changed, and a Dual NGFF SSD to SATA was added. The device presents as a single device.  

There were no logs pointing to the CACHE / Dual_NGFF device

 

I initiated the 'mover' to drain the cache, making the array primary and taking the cache out of the data flow.

 

Apr 18 06:05:52 iPlex kernel: veth59ee0af: renamed from eth0
Apr 18 06:05:54 iPlex kernel: veth9e7e1ae: renamed from eth0
Apr 18 06:13:33 iPlex emhttpd: shcmd (1767): /usr/local/sbin/mover &> /dev/null &
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492402176 csum 0x61f8bf3d expected csum 0x908391cd mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 13, gen 0
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492406272 csum 0xe2b7ae2f expected csum 0x8188ffff mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 14, gen 0
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492402176 csum 0x61f8bf3d expected csum 0x908391cd mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 15, gen 0
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492402176 csum 0x61f8bf3d expected csum 0x908391cd mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 16, gen 0
Apr 18 06:16:52 iPlex shfs: copy_file: /mnt/what-the-h/T_Media/Torrent/downloads/9-1-1.S06E17.Love.Is.in.the.Air.1080p.AMZN.WEBRip.DDP5.1.x264-KiNGS/9-1-1.S06E17.Love.Is.in.the.Air.1080p.AMZN.WEB-DL.DDP5.1.H.264-KiNGS.mkv /mnt/disk3/T_Media/Torrent/downloads/9-1-1.S06E17.Love.Is.in.the.Air.1080p.AMZN.WEBRip.DDP5.1.x264-KiNGS/9-1-1.S06E17.Love.Is.in.the.Air.1080p.AMZN.WEB-DL.DDP5.1.H.264-KiNGS.mkv.partial (5) Input/output error
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492402176 csum 0x61f8bf3d expected csum 0x908391cd mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 17, gen 0
Apr 18 06:16:52 iPlex kernel: BTRFS warning (device sdf1): csum failed root 5 ino 50996 off 1492402176 csum 0x61f8bf3d expected csum 0x908391cd mirror 1
Apr 18 06:16:52 iPlex kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 0, rd 0, flush 0, corrupt 18, gen 0
Apr 18 06:19:30 iPlex emhttpd: Starting services...
Apr 18 06:19:30 iPlex emhttpd: shcmd (1782): /etc/rc.d/rc.samba restart

 

I suspect, with this output, that the SSD is bad.

Will monitor and update.
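
Before condemning the SSD outright, its error counters and SMART data can be checked directly; a sketch using the device names that appear in the log above:

btrfs device stats /dev/sdf1    # per-device write/read/flush/corruption counters
smartctl -a /dev/sdf            # drive-reported SMART health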

Link to comment

I did not know that btrfs corruption usually points to a RAM issue, but I did run the RAM test with all 128G and it passed (see above). After 8 hours I only had the following:

 

Apr 18 13:59:00 iPlex kernel: BTRFS error (device loop2): block=754319360 write time tree block corruption detected
Apr 18 13:59:00 iPlex kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2494: errno=-5 IO failure (Error while writing out transaction)
Apr 18 13:59:00 iPlex kernel: BTRFS info (device loop2: state E): forced readonly
Apr 18 13:59:00 iPlex kernel: BTRFS warning (device loop2: state E): Skipping commit of aborted transaction.
Apr 18 13:59:00 iPlex kernel: BTRFS: error (device loop2: state EA) in cleanup_transaction:1992: errno=-5 IO failure
Apr 18 14:20:11 iPlex emhttpd: spinning down /dev/sdf
Apr 18 14:20:12 iPlex emhttpd: read SMART /dev/sdf


Once again, btrfs issues (pointing to RAM). I can swap the other two DIMMs in to see if the issue goes away. I need to figure out how to decode the error log. The non-error "emhttpd:" lines point to the same SSD, but I do not know what 'device loop2' is.
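
A quick way to see what is behind a loop device, in case it helps anyone else (a sketch; losetup ships with unRAID):

losetup -a | grep loop2    # lists loop devices with their backing files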

 


 

Link to comment
11 hours ago, kcossabo said:

seems that the "device loop2:" is  docker.img?

Usually yes, diags will confirm if you want to post them.

 

11 hours ago, kcossabo said:

I can swap the 2x DIMMS in for the other two to see if the issue goes away.

That's a good idea, or try just one stick, and if it still errors, a different one; that will basically rule out a RAM issue.

Link to comment

The manual for the ASRock Z690 motherboard does not list a single-DIMM configuration as valid. I will try that though.

Last night:
- Dual DIMMs (set B1 & B2)

No errors for hours. Docker containers working GREAT. Added Intel iGPU encoding and went to bed.

Server CRASHED at what seems to be 2:11.

Attached is the syslog.

I will attempt the single DIMM. I keep seeing "CPU: xx PID: xxxx Comm: xxx Tainted:" issues, but I have no experience with advanced Linux troubleshooting.

All_2024-4-19-6_54_11.html

Link to comment

Built a copy of Ubuntu 20 on a new SSD and ran:
- stress --cpu 28  --vm 4 --vm-bytes 15G --timeout 1h

 

This ran with no issues, with CPU cores showing over 95% utilization and total memory usage up to 50G.
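
If the goal is specifically to shake out flaky RAM, a verification-enabled stress-ng run might be more telling than plain stress; just a sketch, not what was run here (stress-ng would need to be installed, e.g. apt install stress-ng):

stress-ng --vm 4 --vm-bytes 75% --verify --timeout 1h --metrics-brief    # memory workers that verify what they wrote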

I am currently deploying Docker and am going to load all the apps onto it to see if it is stable with Docker.

I will eventually try one DIMM, but I am losing faith that this is a memory issue versus a driver conflict or something.

 

Link to comment

The Ubuntu build on the same hardware (dual DIMMs) is holding up GREAT:

kevin@kevin-Z690-Extreme:~$ uptime
 15:25:21 up  3:44,  4 users,  load average: 463.22, 143.12, 54.41

 

I have the following running (storage on a 10GbE-attached NAS):
- Plex
- Sonarr
- Radarr
- Prowlarr
- SABnzbd
- etc.

with the following being run every so often to stress the machine:

version: "3"
services:
    stress-ng:
        stdin_open: true
        tty: true
        image: alexeiled/stress-ng
        command: --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief --tz --class memory --all 1
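
For anyone following along, that compose file is launched from its directory with either form of compose (the service name matches the file above):

docker-compose up stress-ng    # or, with the newer plugin: docker compose up stress-ng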


Syslog is sent to the NAS: no errors, no warnings, and nothing from the kernel.

 

Just the Docker containers running:

top - 15:28:53 up  3:48,  4 users,  load average: 148.91, 296.56, 149.61
Tasks: 539 total,   2 running, 536 sleeping,   0 stopped,   1 zombie
%Cpu(s):  5.0 us,  1.4 sy,  1.0 ni, 88.7 id,  3.8 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :  64058.2 total,  25706.2 free,   1924.4 used,  36427.6 buff/cache
MiB Swap:   2048.0 total,   1302.2 free,    745.8 used.  61420.6 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2063502 kevin     35  15  174960 127856  12344 D  30.6   0.2   0:05.62 Plex Transcoder
2064842 kevin     20   0 2612352 128612  70300 S  24.6   0.2   0:00.74 Radarr
2064787 kevin     20   0   49684  22816  10832 D  21.3   0.0   0:00.64 Plex Transcoder
 113809 kevin     20   0    2596    672    608 S  20.3   0.0  23:27.21 EasyAudioEncode
 500468 root       0 -20       0      0      0 I   4.0   0.0   1:12.67 kworker/u57:1-xprtiod
2064891 kevin     20   0 2571768  44388  32824 R   3.0   0.1   0:00.09 Prowlarr
  74520 kevin     20   0  246780  50332  14020 S   2.7   0.1   4:11.54 Plex Media Serv
1035376 root       0 -20       0      0      0 I   2.7   0.0   0:14.66 kworker/u57:2-xprtiod
 231080 kevin     20   0   46212  33380   3976 S   1.3   0.1   1:29.99 s-tui

 

 

When I launch the stress container:

 

top - 15:29:57 up  3:49,  4 users,  load average: 199.68, 277.07, 151.68
Tasks: 45656 total, 301 running, 44559 sleeping,   0 stopped, 796 zombie
%Cpu(s): 57.0 us, 40.4 sy,  0.7 ni,  0.6 id,  0.3 wa,  0.0 hi,  0.9 si,  0.0 st
MiB Mem :  64058.2 total,   2121.0 free,  19096.1 used,  42841.1 buff/cache
MiB Swap:   2048.0 total,   1302.2 free,    745.8 used.  44032.6 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2067349 root      20   0  543060 263236    764 S 634.1   0.4   1:51.84 stress-ng-memth
2067327 root      20   0   76004  25812   1256 R  86.9   0.0   0:06.40 stress-ng-matri
2067352 root      20   0   53476   3384   3236 R  73.4   0.0   0:05.78 stress-ng-remap
2294296 kevin     20   0 2586004 100632  61768 R  68.1   0.2   0:02.18 Prowlarr
2067326 root      20   0   51620   1524   1256 R  65.6   0.0   0:06.33 stress-ng-matri
2067310 root      20   0   51436   5832   2760 R  62.2   0.0   0:06.22 stress-ng-cpu
2309138 kevin     20   0 2587056  98644  59136 R  59.4   0.2   0:01.90 Radarr
2067328 root      20   0   84220    272    124 R  56.2   0.0   0:09.60 stress-ng-mcont
2067315 root      20   0   51688   1540   1256 R  55.9   0.0   0:03.91 stress-ng-bsear
2067401 root      20   0   51428   1276   1128 R  52.5   0.0   0:04.48 stress-ng-zero
2067324 root      20   0   51428    272    124 R  52.2   0.0   0:06.84 stress-ng-lsear
2067317 root      20   0   51428    272    124 R  50.9   0.0   0:05.30 stress-ng-full
2285103 kevin     20   0   49696  23276  11288 D  50.9   0.0   0:01.75 Plex Transcoder



With Plex playing a movie and stress running:

 

top - 15:32:10 up  3:51,  4 users,  load average: 769.14, 489.41, 248.16
Tasks: 11308 total, 363 running, 9863 sleeping,   0 stopped, 1082 zombie
%Cpu(s): 49.2 us, 48.3 sy,  0.1 ni,  1.3 id,  0.4 wa,  0.0 hi,  0.7 si,  0.0 st
MiB Mem :  64058.2 total,   4519.6 free,  12072.8 used,  47465.9 buff/cache
MiB Swap:   2048.0 total,   1303.4 free,    744.6 used.  51145.2 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
3295912 root      20   0  543060 263220    764 S 530.6   0.4   1:07.05 stress-ng-memth
  74520 kevin     20   0  290748  67456  23396 S 103.2   0.1   5:11.59 Plex Media Serv
3295915 root      20   0   53476   3376   3236 R  88.0   0.0   0:04.19 stress-ng-remap
3476968 kevin     20   0 2612288 128744  70472 S  68.9   0.2   0:02.76 Radarr
3478942 kevin     20   0 2612776 118756  68468 S  54.3   0.2   0:02.04 Prowlarr
3385697 kevin     20   0   50468  21684  11128 R  42.3   0.0   0:04.67 Plex Transcoder
3295892 root      20   0   84220    264    124 R  35.6   0.0   0:08.33 stress-ng-mcont
3295940 root      20   0   51428   5824   2760 R  33.2   0.0   0:04.76 stress-ng-cpu
3295930 root      20   0   53668   3600   1256 R  32.4   0.0   0:05.11 stress-ng-tsear
3295878 root      20   0   51688   1528   1256 R  31.1   0.0   0:03.78 stress-ng-bsear
3295911 root      20   0   51428    264    124 R  26.9   0.0   0:04.07 stress-ng-pipe
3295891 root      20   0   76004  25800   1256 R  24.7   0.0   0:03.63 stress-ng-matri
3295918 root      20   0   51428    264    124 S  24.5   0.0   0:03.93 stress-ng-pipe
3295953 root      20   0   59624   9016    696 R  23.7   0.0   0:01.83 stress-ng-vm-ad
3295876 root      20   0   51428    264    124 R  22.6   0.0   0:03.87 stress-ng-atomi


SYSLOG

 

Date	Time	Level	Host Name	Category	Program	Messages
2024-04-19	15:32:37	Info	kevin-Z690-Extreme	kern	kernel	[13935.834179] perf: interrupt took too long (10574 > 10451), lowering kernel.perf_event_max_sample_rate to 18750
2024-04-19	15:32:34	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.I2VQ7D.mount: Succeeded.
2024-04-19	15:32:34	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.I2VQ7D.mount: Succeeded.
2024-04-19	15:32:34	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.I2VQ7D.mount: Succeeded.
2024-04-19	15:32:23	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.EgDAWP.mount: Succeeded.
2024-04-19	15:32:18	Info	kevin-Z690-Extreme	daemon	ModemManager	<info> [base-manager] couldn't check support for device '/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0': not supported by any plugin
2024-04-19	15:32:16	Notice	kevin-Z690-Extreme	user	gnome-shell	Removing a network device that was not added
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	kern	kernel	[13914.021363] eth0: renamed from veth0073a4f
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.586307720-04:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/074d66598dde77f895654f38a292d9d75d75fa37f7c17bb05e30ee1e992d7107 pid=3558375 runtime=io.containerd.runc.v2
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.585305963-04:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.585277154-04:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.583503212-04:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:32:15.514577571-04:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
2024-04-19	15:32:15	Warning	kevin-Z690-Extreme	daemon	systemd-udevd	veth0073a4f: Could not generate persistent MAC: No data available
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555135.5130] manager: (veth0073a4f): new Macvlan device (/org/freedesktop/NetworkManager/Devices/167)
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd-udevd	ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:32:15.491492307-04:00" level=warning msg="macvlan driver does not support port exposures"
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-c8eadedf26b26816594780581ee70fb03a8c364568c8fe7bf4f77a2bcc4758a5-merged.mount: Succeeded.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-c8eadedf26b26816594780581ee70fb03a8c364568c8fe7bf4f77a2bcc4758a5-merged.mount: Succeeded.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-c8eadedf26b26816594780581ee70fb03a8c364568c8fe7bf4f77a2bcc4758a5-merged.mount: Succeeded.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-d9f43cf2b3bc.mount: Succeeded.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-d9f43cf2b3bc.mount: Succeeded.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-d9f43cf2b3bc.mount: Succeeded.
2024-04-19	15:32:15	Error	kevin-Z690-Extreme	daemon	systemd-udevd	veth0e08c50: Failed to get link config: No such device
2024-04-19	15:32:15	Notice	kevin-Z690-Extreme	user	gnome-shell	Removing a network device that was not added
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd-udevd	Using default interface naming scheme 'v245'.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	systemd-udevd	ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555135.3815] manager: (veth0e08c50): new Macvlan device (/org/freedesktop/NetworkManager/Devices/166)
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	kern	kernel	[13913.342426] veth0e08c50: renamed from eth0
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.189142501-04:00" level=warning msg="cleanup warnings time=\"2024-04-19T15:32:15-04:00\" level=info msg=\"starting signal loop\" namespace=moby pid=3558180 runtime=io.containerd.runc.v2\n"
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.141558291-04:00" level=info msg="cleaning up dead shim"
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.141543140-04:00" level=warning msg="cleaning up after shim disconnected" id=074d66598dde77f895654f38a292d9d75d75fa37f7c17bb05e30ee1e992d7107 namespace=moby
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:32:15.141394641-04:00" level=info msg="shim disconnected" id=074d66598dde77f895654f38a292d9d75d75fa37f7c17bb05e30ee1e992d7107
2024-04-19	15:32:15	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:32:15.139520084-04:00" level=info msg="ignoring event" container=074d66598dde77f895654f38a292d9d75d75fa37f7c17bb05e30ee1e992d7107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	avahi-daemon	Registering new address record for fe80::d46a:23ff:febd:f017 on veth4bbe09d.*.
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	avahi-daemon	New relevant interface veth4bbe09d.IPv6 for mDNS.
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	avahi-daemon	Joining mDNS multicast group on interface veth4bbe09d.IPv6 with address fe80::d46a:23ff:febd:f017.
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.WkppKB.mount: Succeeded.
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.WkppKB.mount: Succeeded.
2024-04-19	15:31:55	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.WkppKB.mount: Succeeded.
2024-04-19	15:31:54	Warning	kevin-Z690-Extreme	kern	kernel	[13892.310404] x86/split lock detection: #AC: stress-ng-lockb/3295886 took a split_lock trap at address: 0x48a137
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.258837] br-f655724312d9: port 1(veth4bbe09d) entered forwarding state
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.258835] br-f655724312d9: port 1(veth4bbe09d) entered blocking state
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.258814] IPv6: ADDRCONF(NETDEV_CHANGE): veth4bbe09d: link becomes ready
2024-04-19	15:31:54	Notice	kevin-Z690-Extreme	user	gnome-shell	Removing a network device that was not added
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555114.1597] device (br-f655724312d9): carrier: link connected
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555114.1595] device (veth4bbe09d): carrier: link connected
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.226644] eth0: renamed from veth9ae4039
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:54.039431340-04:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/cf00b4047aa3b5c026590205e93dd082384f25b54cea138107235536642733e5 pid=3295815 runtime=io.containerd.runc.v2
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:54.039371283-04:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:54.039366785-04:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:54.039316648-04:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:31:54.022533711-04:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers"
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.116053] device veth4bbe09d entered promiscuous mode
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.116007] br-f655724312d9: port 1(veth4bbe09d) entered disabled state
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	kern	kernel	[13892.116004] br-f655724312d9: port 1(veth4bbe09d) entered blocking state
2024-04-19	15:31:54	Warning	kevin-Z690-Extreme	daemon	systemd-udevd	veth4bbe09d: Could not generate persistent MAC: No data available
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	systemd-udevd	Using default interface naming scheme 'v245'.
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	systemd-udevd	ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555114.0169] manager: (veth4bbe09d): new Veth device (/org/freedesktop/NetworkManager/Devices/165)
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555114.0162] manager: (veth9ae4039): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
2024-04-19	15:31:54	Warning	kevin-Z690-Extreme	daemon	systemd-udevd	veth9ae4039: Could not generate persistent MAC: No data available
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	systemd-udevd	Using default interface naming scheme 'v245'.
2024-04-19	15:31:54	Info	kevin-Z690-Extreme	daemon	systemd-udevd	ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2024-04-19	15:31:48	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:31:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:31:47	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:31:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:31:41	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:31:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-f90f828a7fd641f2079c92749ea794ed5cea2b76bcd05abebd09d71e0fa015dd-merged.mount: Succeeded.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-f90f828a7fd641f2079c92749ea794ed5cea2b76bcd05abebd09d71e0fa015dd-merged.mount: Succeeded.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-a997888ecc06.mount: Succeeded.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	var-lib-docker-overlay2-f90f828a7fd641f2079c92749ea794ed5cea2b76bcd05abebd09d71e0fa015dd-merged.mount: Succeeded.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-a997888ecc06.mount: Succeeded.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-netns-a997888ecc06.mount: Succeeded.
2024-04-19	15:31:36	Notice	kevin-Z690-Extreme	user	gnome-shell	Removing a network device that was not added
2024-04-19	15:31:36	Notice	kevin-Z690-Extreme	user	gnome-shell	Removing a network device that was not added
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555096.1000] device (vethe95046b): released from master device br-f655724312d9
2024-04-19	15:31:36	Error	kevin-Z690-Extreme	daemon	systemd-udevd	vethd426110: Failed to get link config: No such device
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	avahi-daemon	Withdrawing address record for fe80::2471:5aff:fe89:59eb on vethe95046b.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	kern	kernel	[13874.165189] br-f655724312d9: port 1(vethe95046b) entered disabled state
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	kern	kernel	[13874.165188] device vethe95046b left promiscuous mode
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	kern	kernel	[13874.164914] br-f655724312d9: port 1(vethe95046b) entered disabled state
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	avahi-daemon	Leaving mDNS multicast group on interface vethe95046b.IPv6 with address fe80::2471:5aff:fe89:59eb.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	avahi-daemon	Interface vethe95046b.IPv6 no longer relevant for mDNS.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd-udevd	Using default interface naming scheme 'v245'.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	systemd-udevd	ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
2024-04-19	15:31:36	Info	kevin-Z690-Extreme	daemon	NetworkManager	<info> [1713555096.0629] manager: (vethd426110): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	kern	kernel	[13874.096052] vethd426110: renamed from eth0
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	kern	kernel	[13874.096028] br-f655724312d9: port 1(vethe95046b) entered disabled state
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:31:35.991818124-04:00" level=warning msg="failed to close stdin: task cf00b4047aa3b5c026590205e93dd082384f25b54cea138107235536642733e5 not found: not found"
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:35.991237445-04:00" level=warning msg="cleanup warnings time=\"2024-04-19T15:31:35-04:00\" level=info msg=\"starting signal loop\" namespace=moby pid=3294834 runtime=io.containerd.runc.v2\n"
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:35.986905511-04:00" level=info msg="cleaning up dead shim"
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:35.986900352-04:00" level=warning msg="cleaning up after shim disconnected" id=cf00b4047aa3b5c026590205e93dd082384f25b54cea138107235536642733e5 namespace=moby
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	containerd	time="2024-04-19T15:31:35.986865847-04:00" level=info msg="shim disconnected" id=cf00b4047aa3b5c026590205e93dd082384f25b54cea138107235536642733e5
2024-04-19	15:31:35	Info	kevin-Z690-Extreme	daemon	dockerd	time="2024-04-19T15:31:35.986773818-04:00" level=info msg="ignoring event" container=cf00b4047aa3b5c026590205e93dd082384f25b54cea138107235536642733e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2024-04-19	15:31:30	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.c5lYU5.mount: Succeeded.
2024-04-19	15:31:30	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.c5lYU5.mount: Succeeded.
2024-04-19	15:31:30	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.c5lYU5.mount: Succeeded.
2024-04-19	15:31:22	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.lkx7Vp.mount: Succeeded.
2024-04-19	15:31:17	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.8apVeY.mount: Succeeded.
2024-04-19	15:31:03	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:31:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:31:00	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:31:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:57	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:54	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:51	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:50	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.qE8CVE.mount: Succeeded.
2024-04-19	15:30:48	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:45	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:45	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3JW0Qp.mount: Succeeded.
2024-04-19	15:30:45	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3JW0Qp.mount: Succeeded.
2024-04-19	15:30:45	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3JW0Qp.mount: Succeeded.
2024-04-19	15:30:42	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:39	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:36	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:33	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:32	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.cZqkeB.mount: Succeeded.
2024-04-19	15:30:32	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.cZqkeB.mount: Succeeded.
2024-04-19	15:30:32	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.cZqkeB.mount: Succeeded.
2024-04-19	15:30:30	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:27	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:27	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.LKvsQH.mount: Succeeded.
2024-04-19	15:30:27	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.LKvsQH.mount: Succeeded.
2024-04-19	15:30:27	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.LKvsQH.mount: Succeeded.
2024-04-19	15:30:24	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:21	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:18	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:15	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:12	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:09	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:09 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:09	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3jIz3Q.mount: Succeeded.
2024-04-19	15:30:09	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3jIz3Q.mount: Succeeded.
2024-04-19	15:30:09	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.3jIz3Q.mount: Succeeded.
2024-04-19	15:30:06	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:04	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.9spsOq.mount: Succeeded.
2024-04-19	15:30:04	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.9spsOq.mount: Succeeded.
2024-04-19	15:30:04	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.9spsOq.mount: Succeeded.
2024-04-19	15:30:03	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:30:01	Info	kevin-Z690-Extreme	authpriv	CRON	pam_unix(cron:session): session closed for user root
2024-04-19	15:30:01	Info	kevin-Z690-Extreme	cron	CRON	(root) CMD ([ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi)
2024-04-19	15:30:01	Info	kevin-Z690-Extreme	authpriv	CRON	pam_unix(cron:session): session opened for user root by (uid=0)
2024-04-19	15:30:00	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:30:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:58	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.GdKuBf.mount: Succeeded.
2024-04-19	15:29:58	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.GdKuBf.mount: Succeeded.
2024-04-19	15:29:58	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.GdKuBf.mount: Succeeded.
2024-04-19	15:29:57	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:54	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:51	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:48	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:45	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:42	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:41	Info	kevin-Z690-Extreme	daemon	systemd	run-docker-runtime\x2drunc-moby-6c2e64c8f4909419f7680079d0ece0e68212c13cd39e9e59850265befa938cbe-runc.cnv7eQ.mount: Succeeded.
2024-04-19	15:29:39	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
2024-04-19	15:29:36	Info	kevin-Z690-Extreme	daemon	dockerd	2024/04/19 15:29:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)



I am going to shut this down and pull one of the DIMMs.

If the server boots, I will run unRAID and see if it crashes, but I do not see how the Ubuntu build can be free of memory issues while unRAID crashes because of them.

 

Link to comment
Apr 19 16:18:52 iPlex kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
Apr 19 16:18:52 iPlex kernel: ? _raw_spin_unlock+0x14/0x29
Apr 19 16:18:52 iPlex kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]

 

Macvlan call traces will usually end up crashing the server; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), then reboot.
 

Link to comment

macvlan is important to my security profile, as each container has a specific IP and MAC that can be identified by the firewall, IDS, and IPS.

 

This is very unfortunate and, if I cannot find a workaround, may require me to move away from unRAID.

macvlan is a core feature of Docker.

Link to comment
