JorgeB Posted July 5, 2023

4 hours ago, kevin182 said: 2.5GBase-T PCIe Network Adapter RTL8125B

Not being detected. This is not a software issue; try a different PCIe slot.

4 hours ago, kevin182 said: ASUS 2.5G Ethernet USB Adapter Both of these appear in system devices but no eth1 in network settings

Not seeing the Asus detected in the diags — where do you see it?
mikl Posted July 5, 2023 (edited)

On 7/1/2023 at 12:12 AM, nraygun said: I guess the deal is there is no container port when on a custom network. Things are weird when using this new br1 I created on eth1. I noticed when I changed the WEBGUI_PORT to 8585, it does show up this way in the app - I just can't get to it without going through the proxy. The only thing that seems to work is leaving WEBGUI_PORT at 8080, configuring Firefox to use the proxy, going into the app at 192.168.0.6:8080, and changing to 8585 in the app. If I do it any other way, I can't get to the UI without going through the proxy. I also see that my Host Port 3 is set to 8585 with the container port set to 8081, but 8081 doesn't show up in the port mappings.

When using a custom Docker network, the port(s) used are the ones set by the app inside the container. Setting the port in the template won't change anything. You need to change the port within the app for it to work on 8585, and/or change the environment variable if one is available. If you are using this thread's method to avoid macvlan traces, then your containers using static IPs (custom network) won't be able to communicate with the ones using the bridged network, since host access is disabled.

Edited July 5, 2023 by mikl
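To illustrate the point about custom networks, here is a minimal sketch. The image name, network name, and IPs are assumptions for illustration only; the behavior shown — that `-p` host mappings are ignored on a user-defined macvlan network and the container answers on its own IP at the port the app itself listens on — is standard Docker behavior.

```shell
# Hypothetical sketch: on a user-defined macvlan network, published
# ports (-p) are ignored; the container is reached on its own IP at
# whatever port the app itself listens on internally.
docker run -d --name qbittorrent \
  --network br1 \
  --ip 192.168.0.50 \
  -e WEBUI_PORT=8585 \
  lscr.io/linuxserver/qbittorrent

# The webUI is then reached directly on the container's IP and the
# app's own port -- no host port mapping is involved:
curl http://192.168.0.50:8585
```

Note the environment variable only helps if the app honors it; otherwise the port has to be changed inside the app, as described above.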
kevin182 Posted July 5, 2023

5 hours ago, JorgeB said: Not being detected, this is not a software issue, try a different PCIe slot. Not seeing the Asus detected in the diags, where do you see it?

I will try a different PCIe slot. I did not have it plugged in at that time, but I did see it when it was plugged in. Any idea why eth1 will not appear for the one that is in the diagnostics?
JorgeB Posted July 5, 2023

24 minutes ago, kevin182 said: Any idea why the eth1 will not appear from the 1 that is in the

Cannot see what chip it uses, possibly there's no driver. Or did it work before?
kevin182 Posted July 5, 2023

6 minutes ago, JorgeB said: Cannot see what chip it uses, possibly there's no driver, or did it work before?

It did work before. I had eth1 assigned to it before upgrading from 6.11.5 to 6.12.2. Here is the chip. It does say it needs a driver for Linux. Here is another person having problems with it. Maybe the same? Over here you told them to submit a bug report.
JorgeB Posted July 5, 2023

17 minutes ago, kevin182 said: Maybe the same? Over here you told them to submit a bug report.

Looks like the same issue. I don't believe the other user created a bug report; you can create one if you want.
kevin182 Posted July 5, 2023 (edited)

45 minutes ago, JorgeB said: Looks like the same, I don't believe the other user created a bug report, you can create one if you want.

Posted. Thanks for your help, I appreciate it.

Edited July 5, 2023 by kevin182
L0rdRaiden Posted July 5, 2023

I have this configuration and I still have macvlan errors. Is this because I use the same network for VMs as well? What could be the problem?
bonienl Posted July 5, 2023

In the past I have seen that VMs and containers using the same bridge interface may cause problems. Best to use the interface as a dedicated interface for Docker only.
nraygun Posted July 6, 2023

On 7/5/2023 at 7:52 AM, mikl said: When using a custom docker network, the port(s) used is the one set by the app in the container. Setting the port in the template wont change anything. You need to change the port within the app for it to work on 8585 and/or change the environment variable if one is available.

Thanks @mikl. I had changed the environment variable WEBGUI_PORT, and it did change the port in the container, but it was not accessible. It was only when I changed it in the app itself, not via any config, that it worked.

I have since given up on this method. It also wreaked havoc on my VMs. I just put everything back to the way it was. I wasn't getting macvlan errors as of late; it was just in preparation for going to 6.12.
3dee Posted July 9, 2023

I also have traces since Unraid 6.12.2. I connected another network cable, followed the guide, and changed all the Docker containers from br0 to br1. Still, call traces appear instantly after starting the array, crashing my Docker containers and today even crashing the whole system. I'm going back to 6.11.5 and recommend not upgrading to 6.12.x.
jaim3lo Posted July 11, 2023

I followed this guide with additional changes:

- I assigned a static IP to eth1 (I don't know if it is relevant)
- I changed all my Docker containers to br1 with a static IP
- Docker custom network type: macvlan
- Host access to custom network: enabled

I have gone 11 days without any crash. The only problem I have is with Nextcloud and FileBrowser: when using a static IP, they set the default ports to 80/443 (I cannot change them), creating a conflict with Nginx Proxy Manager. If I use the Bridge/Host option, the IP assigned is from br0, not br1.
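For reference, Unraid creates this kind of custom network automatically when you enable it on br1, but the equivalent plain Docker commands look roughly like this. The subnet, gateway, and image are assumptions; adjust to your LAN:

```shell
# Sketch of what a macvlan custom network on br1 amounts to at the
# Docker level (subnet/gateway/parent are assumptions for your LAN).
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=br1 br1

# Give a container a static IP on that network. Note the container is
# then reached on this IP at the app's own internal ports (e.g. 80/443
# for a web app), which is why port conflicts with a reverse proxy can
# only be resolved inside the app, not via template port mappings.
docker run -d --name filebrowser \
  --network br1 --ip 192.168.0.60 \
  filebrowser/filebrowser
```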
Xcelsior86 Posted July 15, 2023 (edited)

I've set this up as directed, but now I can't access any of the containers on the br1 interface. The Docker network is on a VLAN through pfSense. The containers seem to be running OK — e.g. NGINX Proxy Manager works, as I can access the hosts I've created with it — but I can't access its own GUI. Any ideas?

EDIT: After posting this, I disconnected the VPN on my PC and I am now able to access the GUIs. I'll leave this here just in case someone else runs into this issue.

Edited July 15, 2023 by Xcelsior86
L0rdRaiden Posted July 17, 2023 (edited)

I got another macvlan call trace:

Jul 17 18:03:04 Unraid kernel: ------------[ cut here ]------------
Jul 17 18:03:04 Unraid kernel: WARNING: CPU: 9 PID: 245 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 17 18:03:04 Unraid kernel: Modules linked in: nvidia_uvm(PO) xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge 8021q garp mrp stp llc ixgbe xfrm_algo mdio igb i2c_algo_bit nvidia_drm(PO) nvidia_modeset(PO) zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) edac_mce_amd intel_rapl_msr edac_core intel_rapl_common iosf_mbi zcommon(PO) znvpair(PO) spl(O) kvm_amd nvidia(PO) kvm video drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel crypto_simd cryptd wmi_bmof mxm_wmi asus_wmi_sensors drm rapl k10temp i2c_piix4 nvme ccp backlight nvme_core i2c_core ahci syscopyarea cdc_acm sysfillrect sysimgblt libahci
Jul 17 18:03:04 Unraid kernel: fb_sys_fops tpm_crb tpm_tis tpm_tis_core tpm wmi button acpi_cpufreq unix [last unloaded: xfrm_algo]
Jul 17 18:03:04 Unraid kernel: CPU: 9 PID: 245 Comm: kworker/u64:5 Tainted: P O 6.1.38-Unraid #2
Jul 17 18:03:04 Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021
Jul 17 18:03:04 Unraid kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
Jul 17 18:03:04 Unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
Jul 17 18:03:04 Unraid kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
Jul 17 18:03:04 Unraid kernel: RSP: 0018:ffffc90000438d98 EFLAGS: 00010202
Jul 17 18:03:04 Unraid kernel: RAX: 0000000000000001 RBX: ffff8884be48b300 RCX: a343d541328389c7
Jul 17 18:03:04 Unraid kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8884be48b300
Jul 17 18:03:04 Unraid kernel: RBP: 0000000000000001 R08: f738cf72635c1332 R09: a602caa3a0dd9a76
Jul 17 18:03:04 Unraid kernel: R10: 11d3e2b4abc2d99c R11: ffffc90000438d60 R12: ffffffff82a11d00
Jul 17 18:03:04 Unraid kernel: R13: 0000000000034284 R14: ffff8881086d1a00 R15: 0000000000000000
Jul 17 18:03:04 Unraid kernel: FS: 0000000000000000(0000) GS:ffff888ffea40000(0000) knlGS:0000000000000000
Jul 17 18:03:04 Unraid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 17 18:03:04 Unraid kernel: CR2: 000000c0004f9000 CR3: 0000000174692000 CR4: 0000000000350ee0
Jul 17 18:03:04 Unraid kernel: Call Trace:
Jul 17 18:03:04 Unraid kernel: <IRQ>

I have 3 physical NICs for Docker: br1 and br2 are exclusive to Docker, and br0 is shared with the Unraid OS. Could this be a problem too? Can I not even share a NIC with the Unraid OS?

Edited July 17, 2023 by L0rdRaiden
Masterwishx Posted July 18, 2023

On 6/24/2023 at 12:26 PM, Masterwishx said: changed to br1 docker network without host access, all was working fine, no macvlan traces found. but today somehow found again!?

On 6.11.5 there were no problems. When I updated to 6.12 I got macvlan call traces, so I made the changes as in the tutorial (Docker network on br1 bridged + no host access), and for some days there were no macvlan call traces. Now on 6.12.3 I get macvlan call traces every day, like on 6.12.0. This is on the same hardware as 6.11.5, which is a really strange situation. Is this something that can't be fixed by Limetech because it's related more to Linux? Can it be fixed only with VLAN support on the router or switch?
sonic6 Posted July 18, 2023

@JorgeB @bonienl can that "solution" be marked as "deprecated"? More and more users spend time changing their systems in the hope of fixing the traces, but it doesn't work; this only confuses.
Masterwishx Posted July 18, 2023

When I set br1 + static IP for urBackup, the internet client can't find the server (no internet access for the server?). When it's set to host, both local and internet clients work, so maybe I missed something? Should a container have internet access when using br1 + static IP + no host access?
ljm42 Posted July 19, 2023

I really appreciate people posting about their experiences, but without diagnostics we have no way to begin to investigate. Note: we don't really need diagnostics for normal macvlan call traces, but if you get call traces with the solution discussed in the first post of this thread, we definitely need to see diagnostics.
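For anyone unsure how to capture them: diagnostics can be generated from Tools > Diagnostics in the webGUI, or from a terminal. The sketch below assumes the usual Unraid defaults (the `diagnostics` command and the /boot/logs output path); details may differ by release.

```shell
# Generate an anonymized diagnostics zip from the Unraid console
# (equivalent to Tools > Diagnostics in the webGUI). By default the
# archive is written to the flash drive under /boot/logs.
diagnostics

# Show the most recent diagnostics archive to attach to your post:
ls -t /boot/logs/*-diagnostics-*.zip | head -n 1
```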
wHyEt2004 Posted July 21, 2023

Hey all,

I don't know if this is the right place, but I followed the guide and have some issues, so I posted here; sorry if that's wrong. I changed everything according to the guide, since I had crashes with 6.12 that I didn't have with 6.11. But now I don't get any DNS resolution in my Docker containers anymore. The positive side is that the crashes are gone now. Can somebody help me figure out why this is? I'm kind of blind here.

For more info: there are two network cards, one running the host and VMs and the other one, as suggested in the guide, for Docker only. The only difference is I have it on ipvlan, since everybody says this is the way to go, and I was having crashes when it was on macvlan.

Thank you in advance. Greetings, Nick

tower-diagnostics-20230721-1429.zip
Selmak Posted July 26, 2023

Following the upgrade to version 6.12, my system has become less stable, with random crashes that seem to be kernel panics. To address this issue, I took the advice of @bonienl and installed a second network card, which has mitigated the system crashes to some extent. However, I still encounter occasional kernel panics. I have attached my diagnostics in the hope that they might shed light on the situation. In contrast, before the update (6.11.5), my system was running without any problems while utilizing macvlan.

nexus-diagnostics-20230726-1316.zip
prune Posted July 28, 2023

On 7/18/2023 at 3:43 PM, sonic6 said: @JorgeB @bonienl can that "solution" marked as "deprecated"? more and more users sped time changing they're system, with the hope of fixing the traces. but it doesn't work, this only confuses.

I fully agree. This "fix" seems to work for some people but not for others; for me it is not reliable and shouldn't be presented as a solution. I, for example, have switched containers to a dedicated secondary NIC with ipvlan, with no luck. Also, being forced to use a secondary NIC is not a solution, it is a workaround. In the past, I never had any problems with Docker networking on Red Hat or Windows, and I always had only one NIC. So Unraid will be the only platform in history where two NICs are mandatory in order to use Docker? Really? 🤔
sonic6 Posted July 29, 2023

8 hours ago, prune said: So Unraid will be the only platform in history where 2 nic's are mandatory in order to use Docker ?? Really ? 🤔

To be fair, two NICs aren't mandatory for using Docker. You only need a second one when using the "workaround" for the macvlan problem and you aren't able to use ipvlan. If you can use ipvlan, your system should run fine without the "workaround".
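On Unraid the macvlan/ipvlan choice is a setting under Settings > Docker, but at the plain Docker level the difference is just the network driver. A sketch of the two, with the subnet, gateway, parent interface, and network names all as assumptions:

```shell
# macvlan: each container gets its own MAC address on the LAN.
# This is what triggers the call traces discussed in this thread
# on affected systems.
docker network create -d macvlan \
  --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
  -o parent=br0 macvlan_net

# ipvlan: containers share the parent NIC's MAC address while still
# getting their own LAN IPs, which sidesteps the macvlan issue (at
# the cost of all containers appearing behind one MAC, which some
# routers and DHCP setups dislike).
docker network create -d ipvlan \
  --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
  -o parent=br0 ipvlan_net
```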
baunegaard Posted July 30, 2023

I also started seeing macvlan traces with 6.12.x; however, I followed the advice in another thread to disable bridging for the NIC running the dedicated Docker network, and that seems to have resolved my traces. Right now the first post in this thread suggests enabling bridging. As I understand it, bridging has no benefits when the network is only used for Docker containers, and disabling it seems to have resolved my macvlan issues, so I think the suggestion should be to disable it.
JorgeB Posted July 31, 2023

18 hours ago, baunegaard said: Right now the first post in this thread suggests enabling bridging, as i understand this has no benefits when the network is only used for Docker containers and disabling seems to have resolved my macvlan issues, so i think the suggestion should be to disable it.

@bonienl can you confirm whether bridging has any advantages for this? Without bridging the call traces should not appear, or at least should be much less likely to appear.
dlandon Posted July 31, 2023

We are working on a bug in the network settings where the UI doesn't always show the correct bridge setting when the IPv4 + IPv6 protocol is selected and your ISP does not support IPv6. You might think from the UI that bridging is not enabled when it actually is. Use this command in a terminal to check: 'ip link show br0'.
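A small wrapper around that check, in case the interface name varies (br0 is the usual Unraid default; the script only reports, it changes nothing):

```shell
# Report whether a bridge interface actually exists on this machine,
# regardless of what the network settings UI claims.
check_bridge() {
  if ip link show "$1" >/dev/null 2>&1; then
    echo "$1: present (bridging enabled)"
  else
    echo "$1: not found (bridging disabled or interface absent)"
  fi
}

check_bridge br0
```

If the interface exists even though the UI shows bridging as off, you are hitting the display bug described above.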