NewDisplayName Posted January 18 (edited)
I'm currently at work and don't have any diagnostics, but I'll add them ASAP (I also mirror the syslog, so I can add that too). I'm one of the customers affected by the macvlan issue. I tried ipvlan, but I have a Fritz!Box, and AVM doesn't care to support ipvlan: I can't forward any ports, and that's a problem. So yesterday (while updating to 6.12.6) I switched to macvlan following this "tutorial": https://docs.unraid.net/unraid-os/release-notes/6.12.4/

For those users, we have a new method that reworks networking to avoid issues with macvlan. Tweak a few settings and your Docker containers, VMs, and WireGuard tunnels should automatically adjust to use them:
Settings > Network Settings > eth0 > Enable Bonding = Yes or No, either works with this solution
Settings > Network Settings > eth0 > Enable Bridging = No (this will automatically enable macvlan)
Settings > Docker > Host access to custom networks = Enabled

Is there anything else I should have done? Most Docker containers are in bridge, some are in br1, and I have a VM (which wasn't turned on). The server was doing a parity sync (because I added new drives).
Edited January 18 by NewDisplayName
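A quick way to sanity-check the last step (that Docker really created a macvlan network after bridging was disabled) is to look for a macvlan-driver entry in `docker network ls`. A minimal sketch; the here-doc stands in for real output, and the network name `eth0` follows Unraid's convention of naming the Docker network after the parent interface, so your names may differ:

```shell
# Simulated `docker network ls` output (illustrative values); in practice
# you would run the real command, e.g.:
#   docker network ls --filter driver=macvlan
networks=$(cat <<'EOF'
NETWORK ID     NAME      DRIVER    SCOPE
f2549c38417d   bridge    bridge    local
eee258978ef2   eth0      macvlan   local
c426a08021c6   host      host      local
EOF
)
# Keep only the names of networks using the macvlan driver.
echo "$networks" | awk '$3 == "macvlan" {print $2}'
# → eth0
```

If that prints nothing on the real box, Docker never created the macvlan network and the bridging/Docker settings above are worth re-checking.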
JorgeB Posted January 18
To use macvlan you need to disable bridging for the interface Docker is using; if that's eth1, disable bridging for that NIC.
NewDisplayName Posted January 18 (edited)
Here it is. Nothing in the logs, and as far as I can tell I've set it up like you said.
unraid-server-diagnostics-20240118-1400.zip
syslog-previous
Edited January 18 by NewDisplayName
NewDisplayName Posted January 18 (edited)
While we're at it, this is my network.cfg:

# Generated settings:
IFNAME[0]="eth0"
DESCRIPTION[0]="LAN1"
PROTOCOL[0]="ipv4"
USE_DHCP[0]="no"
IPADDR[0]="192.168.0.110"
NETMASK[0]="255.255.255.0"
GATEWAY[0]="192.168.0.1"
DNS_SERVER1="1.1.1.1"
DNS_SERVER2="8.8.8.8"
USE_DHCP6[0]="no"
MTU[0]="1500"
HWADDR[0]=AE:8F:B1:AA:25:F9
SYSNICS="1"
HWADDR=AE:8F:B1:AA:25:F9

What is the correct syntax for HWADDR, with the [0] or without?
Edited January 18 by NewDisplayName
JorgeB Posted January 18 Share Posted January 18 If it's about eth0 leave the [0] Quote Link to comment
NewDisplayName Posted January 18 (edited)
4 hours ago, JorgeB said: If it's about eth0 leave the [0]
Are you sure? Because that's what I added on my own...
What about the crash? I don't really want to wait weeks again for a response, because the server keeps crashing every 1-2 days, which isn't good for the drives, and I don't have parity... I have no problem helping to find the issue, but I need input on what to try.
Edited January 18 by NewDisplayName
JorgeB Posted January 18 Share Posted January 18 1 minute ago, NewDisplayName said: r u sure? To be honest, not really. 1 minute ago, NewDisplayName said: what about the crash? Enable the syslog server and post that after a crash. Quote Link to comment
NewDisplayName Posted January 18
1 minute ago, JorgeB said: To be honest, not really. Enable the syslog server and post that after a crash.
???? I already posted the log AFTER the crash and BEFORE the crash.
JorgeB Posted January 18 Share Posted January 18 And there's nothing relevant there, post another one if it crashes again, but macvaln issues usually leave call traces on the syslog, problem may not be that, one other thing you can try is to boot the server in safe mode with all docker/VMs disabled, let it run as a basic NAS for a few days, if it still crashes it's likely a hardware problem, if it doesn't start turning on the other services one by one. Quote Link to comment
NewDisplayName Posted January 18 (edited)
3 hours ago, JorgeB said: ...if it still crashes it's likely a hardware problem, if it doesn't start turning on the other services one by one.
It ran for a long time without a single crash (before multiple people started having issues with macvlan; that's years, across multiple versions); this only started after updating. It also runs for a very long time without crashes on ipvlan. It ONLY crashes on macvlan, even though macvlan worked fine for years. On macvlan it crashes almost every day (95% of the time), and it started right after that update, 6.10 or so. I just want to make sure you understand the problem: you're telling me there might be a Docker, VM, or hardware issue EVEN though this only started happening once these macvlan issues began occurring for multiple people at the same time?
Edited January 18 by NewDisplayName
JorgeB Posted January 19 Share Posted January 19 11 hours ago, NewDisplayName said: ONLY in macvlan is it crashing, AFAIK no one else has macvlan issues as long as bridging is disabled, which according to your screenshot it is, and if macvlan was the problem I would expect to see some macvlan related call traces in the syslog, there is nothing in the syslog you posted, so the problem may not be that. Quote Link to comment
NewDisplayName Posted January 19 (edited)
Okay, can you tell me which commands I can run over SSH to see if it's using macvlan correctly? Does this help?

root@Unraid-Server:~# ifconfig
br-04fb1877a554: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
        inet6 fe80::42:efff:fef3:177c prefixlen 64 scopeid 0x20<link>
        ether 02:42:ef:f3:17:7c txqueuelen 0 (Ethernet)
        RX packets 114934 bytes 60474092 (57.6 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 132250 bytes 23116561 (22.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

br-514fa62878e3: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.31.0.1 netmask 255.255.0.0 broadcast 172.31.255.255
        ether 02:42:95:da:b6:50 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

br-9dc3c3d319c7: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.30.0.1 netmask 255.255.0.0 broadcast 172.30.255.255
        ether 02:42:23:88:09:51 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

br-a6339c7cf36b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.16.1 netmask 255.255.240.0 broadcast 192.168.31.255
        inet6 fe80::42:5ff:feab:acc7 prefixlen 64 scopeid 0x20<link>
        ether 02:42:05:ab:ac:c7 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 5 bytes 526 (526.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

br-ba7d287b5243: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
        inet6 fe80::42:bcff:feed:2dcf prefixlen 64 scopeid 0x20<link>
        ether 02:42:bc:ed:2d:cf txqueuelen 0 (Ethernet)
        RX packets 39551105 bytes 5410708565 (5.0 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 224214731 bytes 335609995200 (312.5 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
        ether 02:42:7e:25:68:80 txqueuelen 0 (Ethernet)
        RX packets 9748456 bytes 2191223357 (2.0 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 20098445 bytes 25073750829 (23.3 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.0.110 netmask 255.255.255.0 broadcast 0.0.0.0
        ether ae:8f:b1:aa:25:f9 txqueuelen 1000 (Ethernet)
        RX packets 686277833 bytes 1003158227139 (934.2 GiB)
        RX errors 0 dropped 236667 overruns 0 frame 0
        TX packets 104534565 bytes 12143267747 (11.3 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1000 (Local Loopback)
        RX packets 248546 bytes 17562141 (16.7 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 248546 bytes 17562141 (16.7 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth5477688: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::8bb:aeff:fec9:5f1c prefixlen 64 scopeid 0x20<link>
        ether 0a:bb:ae:c9:5f:1c txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 40 bytes 3072 (3.0 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth5489106: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::d4ea:10ff:fe6f:7f90 prefixlen 64 scopeid 0x20<link>
        ether d6:ea:10:6f:7f:90 txqueuelen 0 (Ethernet)
        RX packets 72447 bytes 10685400 (10.1 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 64140 bytes 29977859 (28.5 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth0c567fb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::acd2:22ff:fe51:d90c prefixlen 64 scopeid 0x20<link>
        ether ae:d2:22:51:d9:0c txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 310 bytes 22544 (22.0 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1806f64: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::4cb3:fbff:fe18:9d2a prefixlen 64 scopeid 0x20<link>
        ether 4e:b3:fb:18:9d:2a txqueuelen 0 (Ethernet)
        RX packets 32609 bytes 28520551 (27.1 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 48053 bytes 48587957 (46.3 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth25851bc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::6026:a0ff:feb4:3500 prefixlen 64 scopeid 0x20<link>
        ether 62:26:a0:b4:35:00 txqueuelen 0 (Ethernet)
        RX packets 1931 bytes 188209 (183.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16684 bytes 7334693 (6.9 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth2e0400e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::7cd5:98ff:fef1:bc70 prefixlen 64 scopeid 0x20<link>
        ether 7e:d5:98:f1:bc:70 txqueuelen 0 (Ethernet)
        RX packets 524362 bytes 2144510016 (1.9 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 603343 bytes 75223021 (71.7 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth32ab6cc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::68cf:e2ff:fea4:697a prefixlen 64 scopeid 0x20<link>
        ether 6a:cf:e2:a4:69:7a txqueuelen 0 (Ethernet)
        RX packets 9068265 bytes 1015787658 (968.7 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16880044 bytes 20249736555 (18.8 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth48e49d8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::98b8:39ff:fef3:d094 prefixlen 64 scopeid 0x20<link>
        ether 9a:b8:39:f3:d0:94 txqueuelen 0 (Ethernet)
        RX packets 317257 bytes 43834445 (41.8 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 290277 bytes 1478934154 (1.3 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth4d96dc1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::c871:50ff:feb7:db80 prefixlen 64 scopeid 0x20<link>
        ether ca:71:50:b7:db:80 txqueuelen 0 (Ethernet)
        RX packets 264376 bytes 52056930 (49.6 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 226954 bytes 364229717 (347.3 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth4fb0de0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::8cdb:87ff:fe5f:61 prefixlen 64 scopeid 0x20<link>
        ether 8e:db:87:5f:00:61 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1525 bytes 137617 (134.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth6eda224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::7c4a:fbff:fe1d:46bd prefixlen 64 scopeid 0x20<link>
        ether 7e:4a:fb:1d:46:bd txqueuelen 0 (Ethernet)
        RX packets 34988 bytes 56400862 (53.7 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 77370 bytes 69598810 (66.3 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth75ee0e5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::74d2:9bff:feb6:6fc4 prefixlen 64 scopeid 0x20<link>
        ether 76:d2:9b:b6:6f:c4 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 316 bytes 23180 (22.6 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth76a623a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::2cfd:a2ff:fe8c:a7f4 prefixlen 64 scopeid 0x20<link>
        ether 2e:fd:a2:8c:a7:f4 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1523 bytes 137561 (134.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth7ae41cd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::a042:b3ff:fe08:3a57 prefixlen 64 scopeid 0x20<link>
        ether a2:42:b3:08:3a:57 txqueuelen 0 (Ethernet)
        RX packets 2108 bytes 212314 (207.3 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 777 bytes 421846 (411.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth8089dbf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::402f:afff:fecd:5527 prefixlen 64 scopeid 0x20<link>
        ether 42:2f:af:cd:55:27 txqueuelen 0 (Ethernet)
        RX packets 114319 bytes 20494359 (19.5 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 101627 bytes 327723956 (312.5 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth88d5bfa: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::2823:7ff:febe:7f9d prefixlen 64 scopeid 0x20<link>
        ether 2a:23:07:be:7f:9d txqueuelen 0 (Ethernet)
        RX packets 170307 bytes 15588607 (14.8 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 331838 bytes 413193966 (394.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth88dd59e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::7c07:3fff:fee6:211 prefixlen 64 scopeid 0x20<link>
        ether 7e:07:3f:e6:02:11 txqueuelen 0 (Ethernet)
        RX packets 34742 bytes 37027006 (35.3 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 65034 bytes 74140386 (70.7 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth8d8315d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::88a3:4aff:fec2:81de prefixlen 64 scopeid 0x20<link>
        ether 8a:a3:4a:c2:81:de txqueuelen 0 (Ethernet)
        RX packets 435329 bytes 1285427762 (1.1 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 2827783 bytes 4380973753 (4.0 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth92f15dc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::9cff:d7ff:fedf:28c8 prefixlen 64 scopeid 0x20<link>
        ether 9e:ff:d7:df:28:c8 txqueuelen 0 (Ethernet)
        RX packets 1847 bytes 4046839 (3.8 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 2175 bytes 340818 (332.8 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vetha4f19f5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::e867:5aff:fe1f:6b99 prefixlen 64 scopeid 0x20<link>
        ether ea:67:5a:1f:6b:99 txqueuelen 0 (Ethernet)
        RX packets 416741 bytes 3247463485 (3.0 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 1060362 bytes 70261841 (67.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethae53a43: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::2058:70ff:fe4e:2fbe prefixlen 64 scopeid 0x20<link>
        ether 22:58:70:4e:2f:be txqueuelen 0 (Ethernet)
        RX packets 201878 bytes 84529942 (80.6 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 218294 bytes 64149420 (61.1 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethc9355ff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::d8f9:b4ff:fe92:d6bb prefixlen 64 scopeid 0x20<link>
        ether da:f9:b4:92:d6:bb txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 11713 bytes 2610595 (2.4 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethcfbe6e3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::48b3:57ff:fe23:3bbb prefixlen 64 scopeid 0x20<link>
        ether 4a:b3:57:23:3b:bb txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 309 bytes 22502 (21.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethe3a1a95: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::f084:70ff:fe1b:14e9 prefixlen 64 scopeid 0x20<link>
        ether f2:84:70:1b:14:e9 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 310 bytes 22544 (22.0 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethf77f5ae: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::a840:abff:fe5e:3b84 prefixlen 64 scopeid 0x20<link>
        ether aa:40:ab:5e:3b:84 txqueuelen 0 (Ethernet)
        RX packets 38324490 bytes 2642214983 (2.4 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 216980856 bytes 326347390889 (303.9 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether 02:b1:5f:9d:51:fb txqueuelen 500 (Ethernet)
        RX packets 1376693 bytes 86249276 (82.2 MiB)
        RX errors 0 dropped 234741 overruns 0 frame 0
        TX packets 1295 bytes 54390 (53.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
        ether 52:54:00:5e:ce:b1 txqueuelen 1000 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Edited January 19 by NewDisplayName
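One limitation of ifconfig is that it doesn't show interface types; `ip -d link show` does, because macvlan/macvtap interfaces get a detail line naming the type. A sketch over canned output (the interface names and detail lines below are illustrative, not taken from this server); in practice you would pipe `ip -d link show` itself:

```shell
# Canned, abbreviated `ip -d link show` output for two interfaces.
# Real usage: ip -d link show | awk '...'
ipout=$(cat <<'EOF'
12: docker0: <BROADCAST,MULTICAST,UP> mtu 1500
    bridge forward_delay 1500 hello_time 200
34: vhost0@eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    macvtap mode bridge
EOF
)
# Remember the last interface name seen, and print it whenever a detail
# line mentions a macvlan/macvtap type.
echo "$ipout" | awk -F': ' '/^[0-9]+:/ {name=$2} /macvlan|macvtap/ {print name}'
# → vhost0@eth0
```

If the real output lists no macvlan/macvtap interface at all, containers on custom networks aren't actually riding on macvlan.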
JorgeB Posted January 19 Share Posted January 19 Post the output of docker network ls Quote Link to comment
NewDisplayName Posted January 19
14 minutes ago, JorgeB said: Post the output of docker network ls
Does that look correct?

root@Unraid-Server:~# docker network ls
WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory
NETWORK ID     NAME                   DRIVER    SCOPE
f2549c38417d   bridge                 bridge    local
eee258978ef2   eth0                   macvlan   local
ba7d287b5243   filesharing            bridge    local
c426a08021c6   host                   host      local
d2b93c4c6a03   none                   null      local
a6339c7cf36b   watchtower_default     bridge    local
9dc3c3d319c7   watchtower_immich      bridge    local
514fa62878e3   watchtower_webserver   bridge    local
04fb1877a554   webserver_webserver    bridge    local
NewDisplayName Posted January 19 (edited)
By the way, I've got a new error I normally don't see:

Jan 19 01:00:01 Unraid-Server kernel: mdcmd (60): set md_write_method 1
Jan 19 01:00:01 Unraid-Server kernel: mdcmd (61): set md_write_method auto
Jan 19 01:11:48 Unraid-Server kernel: traps: lsof[31000] general protection fault ip:1546c44a4c6e sp:2221140b89765036 error:0 in libc-2.37.so[1546c448c000+169000]
Jan 19 02:00:01 Unraid-Server kernel: mdcmd (62): set md_write_method 1

Edited January 19 by NewDisplayName
JorgeB Posted January 19 Share Posted January 19 That looks correct to me, the traps segfault should be OK to ignore, I'm used to see that in various diags. Quote Link to comment
NewDisplayName Posted January 19 (edited)
I find it hard to understand the suggestion that there's a problem with a VM, a Docker plugin, or hardware, when everything works perfectly fine on ipvlan, and even worked on macvlan before that update (6.10 or so). Is there something I can downgrade to test, some sort of network driver or whatever? Or can I enable "more logs", like a debug level or something? It's very likely to happen at least once every 2 days, so that wouldn't produce that much log.
Edited January 19 by NewDisplayName
JorgeB Posted January 19 Share Posted January 19 You can try more logs, as mentioned I'm not aware of anyone else having macvlan issues with 6.12.4+ and bridging disabled, and if macvlan was the problem it should leave some call traces logged. Quote Link to comment
BreakfastPurrito Posted January 19
My server also started crashing after updating to 6.12.6. I use a macvlan network for Pi-hole. I tried removing the macvlan and creating a new one, but now I'm getting the error "gateway already in use", which is just about the most useless error message. No shit the gateway is already in use; it's a gateway. But it's stopping me dead in my tracks. I get the same error message when I try to create an ipvlan network, so now I have neither. I've been scouring the internet for the last few days trying to find a solution, but to no avail. Changing the settings described above doesn't help. I just want it to work the way it did before I updated.
NewDisplayName Posted January 19 (edited)
Interesting... I didn't try macvlan on 6.12.4; did you? Why is it so hard to find out what changed when these macvlan complaints started?
Edited January 19 by NewDisplayName
itimpi Posted January 19 Share Posted January 19 8 minutes ago, NewDisplayName said: Interesting... i didnt tried macvlan in 6.12.4, did you? Why is it so hard to find out what changed after these complaints startet with macvlan? I think the problem lies somewhere within the Linux kernel (probably in a driver) - not in any code developed by Limetech. Newer Unraid releases tend to use newer kernels and rely on kernel level issues being fixed by the maintainer of any particular component. That is why it is so hard to track down the cause. Note that this is not a 'new' issue as it has happened for some users for many Unraid releases. It is just that something in the Linux kernels being used by the latest Unraid releases seem to mean it occurs more frequently and/or on a wider variety of different hardware. Quote Link to comment
NewDisplayName Posted January 20 Author Share Posted January 20 13 hours ago, itimpi said: I think the problem lies somewhere within the Linux kernel (probably in a driver) - not in any code developed by Limetech. Newer Unraid releases tend to use newer kernels and rely on kernel level issues being fixed by the maintainer of any particular component. That is why it is so hard to track down the cause. Note that this is not a 'new' issue as it has happened for some users for many Unraid releases. It is just that something in the Linux kernels being used by the latest Unraid releases seem to mean it occurs more frequently and/or on a wider variety of different hardware. But, cant you simply test which part of the linux kernel or driver makes the problem and just keep it old till its fixed...? Quote Link to comment
itimpi Posted January 20 Share Posted January 20 3 minutes ago, NewDisplayName said: But, cant you simply test which part of the linux kernel or driver makes the problem and just keep it old till its fixed...? There are two problems with that: Identifying which part of the kernel has the problem and how it is triggered. I have no insight into this but I am sure if the culprits were known it would have been fixed long ago. Note that this issue is not Unraid specific - it can happen on any Linux system so if it were easy to do then it would already have happened. When you move to a new version of the kernel it is highly likely that old drivers/components will not even compile correctly. Quote Link to comment
NewDisplayName Posted January 20 (edited)
15 minutes ago, itimpi said: There are two problems with that...
I think my crashes started with https://forums.unraid.net/bug-reports/stable-releases/since-612-hard-freezes-cought-a-call-trace-r2518/page/2/?tab=comments#comment-25171
By the way, I'm currently on my second "server" (completely upgraded motherboard, CPU, RAM, and power supply), and the problem persists. Is there a way to enable some sort of debug logging? Maybe we can find out what the problem is. If all goes to hell, I'll switch to macvlan, set up some port forwards, and switch back to ipvlan: I can't add new port forwards while on ipvlan, but the ones added before keep working fine... oO
Since there is such a big thread about this whole issue in the German section, maybe it is even a problem with macvlan and the Fritz!Box?
Edited January 20 by NewDisplayName
NewDisplayName Posted January 20 (edited)
I also noticed the drives don't spin down. The delay was set to 1h (since 9am; around 1.5h ago I changed it to the 15-minute setting). I enabled the File Activity plugin, and it reports activity on disks 1, 2, 5, 9, 10. Wasn't that fixed years ago? As I started my experiment I spun down all drives, and Unraid apparently decided that now is a good time for some SMART data reading...?

Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdm
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdj
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdk
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdh
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdg
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdd
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sde
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdf
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdc
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdl
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdi
Jan 20 09:08:50 Unraid-Server kernel: mdcmd (154): set md_num_stripes 1280
Jan 20 09:08:50 Unraid-Server kernel: mdcmd (155): set md_queue_limit 80
Jan 20 09:08:50 Unraid-Server kernel: mdcmd (156): set md_sync_limit 5
Jan 20 09:08:50 Unraid-Server kernel: mdcmd (157): set md_write_method
Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdk
Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdh
Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdd
Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdi
Jan 20 09:09:30 Unraid-Server emhttpd: read SMART /dev/sdg
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdm
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdj
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sde
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdf
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdc
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdl

edit: another hour later, still not a single drive has spun down; per File Activity it's still disks 1, 2, 9, 10.
Edited January 20 by NewDisplayName
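One way to see from the log itself that SMART reads follow the spin-downs (waking the disks again, since a SMART read spins a drive up) is to cross-reference the "spinning down" and "read SMART" lines per device. A small awk sketch over a shortened copy of the excerpt above; pointing it at the full syslog would list every affected device:

```shell
# Shortened copy of the syslog excerpt; substitute your real syslog path.
cat > /tmp/spindown.log <<'EOF'
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdm
Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdj
Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdk
Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdm
EOF

# Record each spun-down device, then flag any SMART read on those devices.
awk '/spinning down/ {down[$NF]=1}
     /read SMART/ && down[$NF] {print $NF " woken by SMART read"}' /tmp/spindown.log
# → /dev/sdm woken by SMART read
```

In the shortened sample only sdm is flagged (sdk was never spun down first); on the full excerpt every spun-down device gets a SMART read within about 90 seconds.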