Mlatx Posted October 19, 2022 (edited October 20, 2022)

I recently had issues with macvlan crashing Unraid. I tried switching to ipvlan, but problems with that caused me to revert to macvlan without unique IPs. That worked fine, with no macvlan issues in the logs. Today, my server crashed shortly after what looks like the disks spinning down; that was the last entry in the log before the restart. The PSU is new, so I doubt it is a power issue. For now, I have disk spin-down disabled. I also see IPv6 entries in the log, even though both Unraid and pfSense are set not to use IPv6. What would cause disk spin-down to freeze the system, and why are IPv6 entries showing up?

To add some more context, I am seeing this error throughout the log (not shown in the attached snippet):

Oct 15 00:00:20 server kernel: critical target error, dev sdg, sector 1950353247 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
Oct 15 00:00:20 server kernel: BTRFS warning (device sdg1): failed to trim 1 device(s), last error -121

I tried running trim manually with fstrim -v /mnt/cache/, and the same error showed up. I'll run an extended SMART test on my cache drive. Could this be an imminent cache drive failure, or is it something else?

log.txt
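For anyone hitting the same trim error: a quick way to check whether the kernel thinks a device supports discard through its current path is lsblk, and fstrim reports how much was actually trimmed. The device name below is just an example; substitute your own cache device.

# Non-zero DISC-GRAN/DISC-MAX values mean discard is supported on this path.
lsblk --discard /dev/sdg

# Run trim manually against the cache mount and print the amount trimmed.
fstrim -v /mnt/cache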
JorgeB Posted October 20, 2022

Nothing relevant logged. Please post the diagnostics, mostly to see the hardware used.
Mlatx Posted October 20, 2022

I think I fixed the trim issue. I did not realize I had my cache drives on the SAS controller, which apparently doesn't pass trim. Now that they are connected to the motherboard controller, trim works. I'm not sure if that is what caused the lockup. There are some warnings and errors in the log today related to netfilter, kworker, and a call trace.

server-diagnostics-20221020-0909.zip
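In case it helps anyone else, you can confirm which controller a given drive actually sits behind by resolving its sysfs path and matching the PCI address against the controller list. The device name here is again just an example.

# The PCI address in the resolved path identifies the controller the drive is attached to.
readlink -f /sys/block/sdg

# Cross-reference that PCI address against the storage controllers in the system.
lspci | grep -iE 'sata|sas|raid'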
JorgeB Posted October 20, 2022 (Solution)

Strange that the permanent syslog didn't show this, but the current syslog does:

Oct 20 00:43:49 server kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
Oct 20 00:43:49 server kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]

That's actually good news: macvlan call traces are usually the result of having Docker containers with a custom IP address, and they will eventually crash the server. Switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
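If you want to check for these traces yourself, the current syslog can be searched directly from the console (standard Unraid log location):

# Look for macvlan functions showing up in kernel call traces this boot.
grep -i 'macvlan' /var/log/syslog

# Broader check for any call traces logged this boot.
grep -i 'call trace' /var/log/syslog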
Mlatx Posted October 20, 2022

I don't have any custom IP addresses. I did before, and macvlan call traces showed up then too; I posted about it in another thread I started. ipvlan did not fix the issue and caused other problems, so I removed all custom IP addresses. I have the standard bridge network for internal dockers, I created a network, proxynet, for external dockers, and my nzbget has no network of its own, routing through delugevpn instead. I never had these issues until 6.10.
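For reference, that layout is just a user-defined bridge network plus one container sharing another container's network stack. The plain-docker equivalent would look roughly like this; the container and image names are only placeholders for however the containers are actually created in the Unraid GUI.

# Create a user-defined bridge network for containers exposed via the reverse proxy.
docker network create proxynet

# Run a container attached to that network (example name/image).
docker run -d --name someapp --network proxynet someimage

# Route another container's traffic through the VPN container's network stack.
docker run -d --name nzbget --network container:delugevpn example/nzbget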
JorgeB Posted October 20, 2022

If there are macvlan call traces, something is still using it, and as mentioned they will end up crashing the server.
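One way to see what each container is actually attached to, and spot anything still on a macvlan or custom network, is to dump every container's network mode. This is plain Docker, nothing Unraid-specific:

# Print each container's name and the network mode it was started with.
docker ps -aq | xargs -r docker inspect --format '{{.Name}}: {{.HostConfig.NetworkMode}}'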
Mlatx Posted October 20, 2022

How do I find out if there is some remnant of a custom IP address? I also saw this error just pop up:

php-fpm[5996]: [WARNING] [pool www] server reached max_children setting (50), consider raising it

Is that a macvlan issue as well?
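That php-fpm warning just means the www pool hit its worker-process limit. In a generic php-fpm setup the fix is raising pm.max_children in the pool config and reloading the service; on Unraid the message may come from the webGUI's own php-fpm, where the right fix can differ, so the path and value below are examples only.

# Find the current limit (config path varies by distro/container).
grep -n 'pm.max_children' /etc/php-fpm.d/www.conf

# Raise it, e.g. from 50 to 75, then reload php-fpm so the change takes effect.
sed -i 's/^pm.max_children = .*/pm.max_children = 75/' /etc/php-fpm.d/www.conf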
JorgeB Posted October 20, 2022

Don't think so, and if you are not using custom IPs you can still change to ipvlan.
Mlatx Posted October 20, 2022

I've changed it to ipvlan. I am not having the same issues I had when I had custom IPs, so let's see if this addresses the problem. I'll post back if this solution works.
Mlatx Posted October 21, 2022

I want to report back: going from macvlan to ipvlan solved the issue. I'm not using any custom IPs. Using ipvlan with custom IPs had caused intermittent connectivity problems with my containers. If I need custom IPs in the future, I'll set up a VLAN for Docker.