bonienl

Community Developer
Report Comments posted by bonienl

  1. 4 hours ago, Jclendineng said:

    Is it recommended that I disable bridging on ALL interfaces and only use bonding?

     

I recommend disabling bridging and using bond1 instead. We have seen that the new macvtap network gives much better performance than bridging. That said, your current setup should work without issues.

     

  2. 2 hours ago, sonic6 said:

The Docker area in the Dashboard is very slow with 6.12.4RC18.

     

    How many containers do you have?

My main server has 30 containers and the display on the dashboard is instantaneous.

    There are no code changes for the dashboard that would explain a different behavior.

     

  3. 13 hours ago, user12345678 said:

    I have a LACP bond that also acts as a VLAN trunk, from there I have several interfaces defined for various VLANs.

     

Disabling bridging will change Docker to use the bonded interface, including for the VLAN networks.

     

This is my Docker setup:

     

    # docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    b773f11c6dcc   bond0     macvlan   local
    a9315eb8f702   bond0.3   macvlan   local
    5d489327c46b   bond0.4   macvlan   local
    d69bccba5929   bond0.5   macvlan   local
    a7b83120a899   bond0.6   macvlan   local

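For reference, a layout like the one above can be recreated by hand with docker network create; a rough sketch, where the parent interfaces match the list above but the subnets and gateways are made-up placeholders:

```shell
# Untagged traffic: macvlan network directly on the bond
docker network create -d macvlan -o parent=bond0 \
  --subnet 192.168.0.0/24 --gateway 192.168.0.1 bond0

# Tagged traffic: macvlan network on a VLAN sub-interface (here VLAN 3)
docker network create -d macvlan -o parent=bond0.3 \
  --subnet 192.168.3.0/24 --gateway 192.168.3.1 bond0.3
```

Each VLAN gets its own network with the matching bond0.N sub-interface as parent, which is what produces a listing like the one shown.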
     

  4. 1 hour ago, snowy00 said:

    Are the containers still working in bridge mode?

     

    They should work as before. Host and bridge networks are not touched.

     

    1 hour ago, snowy00 said:

It seems the Unraid eth0 interface has two MAC addresses

     

When "host access" is enabled, the IP address of eth0 is duplicated to the macvtap (vhost) interface. The macvtap (vhost) interface has its own MAC address.

     

  5. 34 minutes ago, SpaceInvader said:

    Big update! I just found that setting address assignment to static for ipv6 resolves the atd process reloading nginx and there are no more entries in the nginx error log.

     

Wow, great find. Let me digest this and see if I can come up with a possible solution.

     

    Quote

I don't know if you saw my last message yet, but it seems to be related to the DHCP function specifically.

     

    Maybe, maybe not, but at least we have a pointer to work on.

    Thx

     

  6. 1 hour ago, SpaceInvader said:

So after testing the fresh install I just created with the USB Creator, I got the exact same issue!

     

It is not exactly the same. In this fresh install, NFS is not started (correct) because it isn't enabled, while your previous log showed NFS being started (wrong), with lots of errors.

     

It is really strange that there are errors from the moment nginx is started; I can't explain that.

    IPv4 and IPv6 both look alright.

     

I don't know whether it is related to your Ethernet controller card, which is a Realtek 2.5G version (but operating at 1G).

If you have another NIC, it is perhaps worth a try.

     

  7. 7 hours ago, SpaceInvader said:

So it just crashed in safe mode. I also disabled my VPN before booting into safe mode, to exclude that as well.

     

Thanks for testing.

     

    8 hours ago, SpaceInvader said:

    I tried enabling and then disabling nfs again

     

This is the weird part of your logs: because NFS is not enabled, it should not get started at all. Yet it does, and it gives lots of errors.

In my testing, enabling/disabling NFS in the GUI gives the correct behavior when starting the system. I never get the errors seen in your log. I checked all your config files and can't find anything wrong. A mystery!

     

Somehow I think the RPC/NFS errors are related to the NGINX errors; in other words, whatever is the source of these errors makes both services fail.

     

    8 hours ago, SpaceInvader said:

    The nginx error log is also attached,

     

Can you post the content of the file /etc/nginx/conf.d/servers.conf?

     

    8 hours ago, SpaceInvader said:

    which seems to be an unrelated bug,

     

Yes, unrelated; it has nothing to do with the problem. The Limetech site doesn't respond over IPv6.

     

    8 hours ago, SpaceInvader said:

    I'm out of ideas,

     

We'll keep investigating this issue.

     

  8. 1 hour ago, MaZeC said:

So what changed compared to 6.12.2 then? I have everything on auto-configuration, all addresses get advertised, and Docker chooses the fd00::/8 ULA address for its assignments, and it works flawlessly. On 6.12.3, Docker flatly refuses to use IPv6, and the "docker network inspect" command clearly shows:

     

There is indeed a regression in 6.12.3.

The Docker networks are created without IPv6 subnets. From your log:

    Jul 16 21:23:21 Z-Storage rc.docker: created network br0 with subnets: 10.1.2.0/24; 
    Jul 16 21:23:22 Z-Storage rc.docker: created network br0.69 with subnets: 10.10.0.0/24; 

     

    I made a fix and tested this.

    Jul 17 09:55:24 flora rc.docker: created network br0 with subnets: 10.0.101.0/24; 2a02:xxxx:xxxx:101::/64; 
    Jul 17 09:55:25 flora rc.docker: created network br0.6 with subnets: 10.0.106.0/24; fd4f:71d8:b745:45::/64; 

     

Docker network:

    # docker network inspect br0.6
    [
        {
            "Name": "br0.6",
            "Id": "67e3be2092a4ec3e618f2c8088aad5148fcba6c1ac54e37c8c5029bd0d557f0b",
            "Created": "2023-07-17T09:55:25.314164209+02:00",
            "Scope": "local",
            "Driver": "ipvlan",
            "EnableIPv6": true,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "10.0.106.0/24",
                        "Gateway": "10.0.106.1",
                        "AuxiliaryAddresses": {
                            "server": "10.0.106.21"
                        }
                    },
                    {
                        "Subnet": "fd4f:71d8:b745:45::/64",
                        "Gateway": "fd4f:71d8:b745:45::1",
                        "AuxiliaryAddresses": {
                            "server6": "fd4f:71d8:b745:45:ae1f:6bff:fee4:7c6a"
                        }
                    }
                ]
            },

     

    I guess for the time being, you need to stay on 6.12.2 until a new version is released.

     

     

Docker does not accept an IPv6 gateway that is a link-local address (fe80::...).

     

You need to change the gateway of br0.69 to something like 2003:df:1f2c:8e45::1 (your router must have this address, of course), similar to your correct example fd4f:71d8:b745:45::1.

     

Alternatively, if you are using SLAAC, leave the gateway setting empty and SLAAC will fill in the link-local address, which is then accepted by Docker (a Docker quirk).
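To illustrate the two cases with docker network create (a sketch: the subnets come from this thread, but the parent interface pairing and the exact failure are assumptions about your environment):

```shell
# Link-local gateway: Docker rejects this network definition,
# because fe80::/10 addresses are not routable gateways for the subnet.
docker network create -d ipvlan -o parent=bond0.69 \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 \
  --ipv6 --subnet 2003:df:1f2c:8e45::/64 --gateway fe80::1 \
  br0.69    # fails

# Routable gateway inside the IPv6 subnet: accepted.
docker network create -d ipvlan -o parent=bond0.69 \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 \
  --ipv6 --subnet 2003:df:1f2c:8e45::/64 --gateway 2003:df:1f2c:8e45::1 \
  br0.69
```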

     

  10. 53 minutes ago, SpaceInvader said:

Another thing I just noticed is that with IPv6 enabled there is always this /usr/sbin/atd process running an nginx reload every three seconds

     

    This is triggered because the DHCP client on your system is continuously adding and removing IP addresses.

It is not yet clear to me why this is happening, though. I am still studying the diagnostics for a clue.

     

I have seen similar behavior in other diagnostics (together with failing services), and disabling IPv6 and running IPv4 only solves the problem.

     

     

  11. 16 minutes ago, SpaceInvader said:

But this router has worked with the same NAS that is now running Unraid

     

In version 6.12, services are bound to valid IP addresses for security reasons. This includes IPv6 addresses when IPv6 is enabled, which requires both IPv4 and IPv6 to have correct assignments.

     

Earlier versions of Unraid did not have this requirement, and an improper IPv6 assignment would probably go unnoticed when the server is accessed on its IPv4 address.

     

Thanks for the additional information; it gives me more insight into what is happening.