• Unraid OS version 6.12.4-rc18 available


    ljm42

    This release has a fix for macvlan call traces(!) along with other bug fixes, security patches, and one new feature. We'd like your feedback before releasing it as 6.12.4

     

    This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

     

    Upgrade steps for this release

    1. As always, prior to upgrading, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".
    2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
    3. If the system is currently running 6.12.0 - 6.12.2, we suggest stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
      umount /var/lib/docker

      The array should now stop successfully. (This issue was resolved in 6.12.3.)

    4. Go to Tools -> Update OS, change the Branch to "Next". If the update doesn't show, click "Check for Updates"
    5. Wait for the update to download and install
    6. If you have any plugins that install drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded. 
    7. Reboot

     

    Add Tools/System Drivers page

    This new page gives you visibility into the drivers available and in use on your system. Third-party drivers installed by plugins (such as NVIDIA and Realtek) have an icon that links to the support page for that driver. And you can now add/modify/delete the modprobe.d config file for any driver without having to find that file on your flash drive. Thanks to @SimonF for adding this functionality!
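
    For reference, the files this page manages are standard modprobe.d option files. A minimal sketch, assuming the i915 driver and Unraid's usual flash location for these configs (both the path and the option value are illustrative; adjust them for your hardware):

      # /boot/config/modprobe.d/i915.conf  (illustrative path and option)
      options i915 enable_guc=0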

     

    Fix for macvlan call traces!

    The big news in this test release is that we believe we have resolved the macvlan issues that have been plaguing us recently! We'd appreciate your help confirming the changes. Huge thanks to @bonienl for tracking this down!

     

    The root of the problem is that macvlan used for custom Docker networks is unreliable when the parent interface is a bridge (like br0); it works best on a physical interface (like eth0). We believe this to be a longstanding kernel issue and have posted a bug report.

     

    If you are getting call traces related to macvlan, as a first step we'd recommend navigating to Settings/Docker, switching to advanced view, and changing the "Docker custom network type" from macvlan to ipvlan. This is the default configuration that Unraid has shipped with since version 6.11.5 and should work for most systems.
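
    To confirm which driver your custom Docker network is currently using, you can inspect it from a web terminal. A quick sketch, assuming the custom network is named br0 (as it typically is while bridging is enabled):

      docker network ls                             # note the name of your custom network
      docker network inspect -f '{{.Driver}}' br0   # prints "macvlan" or "ipvlan"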

     

    However, some users have reported issues with port forwarding from certain routers (Fritzbox) and reduced functionality with advanced network management tools (Ubiquiti) when in ipvlan mode.

     

    For those users, this rc includes a new method that reworks networking to avoid the problem. Simply tweak a few settings and your Docker containers, VMs, and WireGuard tunnels will automatically adjust to use them:

    • Settings -> Network Settings -> eth0 -> Enable Bridging = No
    • Settings -> Docker -> Host access to custom networks = Enabled
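
    After rebooting, you can confirm the new layout from a web terminal. A quick check, assuming eth0 is your main interface:

      ip link show br0      # should report that the device does not exist once bridging is off
      ip addr show eth0     # your server's IP address should now sit directly on eth0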

     

    Note: if you previously used the two-NIC method for Docker segmentation, you'll also want to revert that:

    • Settings -> Docker -> custom network on interface eth0 (i.e. make sure eth0 is configured for the custom network, not eth1)

     

    When you start the array, the host, VMs, and Docker containers will all be able to communicate, and there should be no more call traces!

     

    Troubleshooting

    • If your Docker containers with custom IPs aren't starting, edit them and change the "Network type" to "Custom: eth0". We attempted to do this automatically, but depending on how things were customized you might need to do it manually.
    • If your VMs are having network issues, edit them and set the Network Source to "vhost0". Also, ensure there is a MAC address assigned.
    • If your WireGuard tunnels won't start, make a dummy change to each tunnel and save.
    • If you are having issues port forwarding to Docker containers (particularly on a Fritzbox), delete and recreate the port forward in your router.
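
    To check whether macvlan call traces are still occurring after these changes, you can search the syslog directly (the Fix Common Problems plugin will also alert on them); no output is a good sign:

      grep -iE 'macvlan|call trace' /var/log/syslog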

     

    To get a little more technical...

    After upgrading to this release, if bridging remains enabled on eth0 then everything works as it used to. You can attempt to work around the call traces by disabling the custom Docker network, using ipvlan instead of macvlan, or using the two-NIC Docker segmentation method with containers on eth1.

     

    Starting with this release, when you disable bridging on eth0 we create a new macvtap network for Docker containers and VMs to use. It has a parent of eth0 instead of br0, which is how we avoid the call traces.
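
    You can see that parent relationship for yourself; a sketch of what to look for in a web terminal (the exact output varies by system):

      ip -d link show vhost0
      # expect something like: vhost0@eth0 ... macvtap mode bridge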

     

    A side benefit is that macvtap networks are reported to be faster than bridged networks, so you may see speed improvements when communicating with Docker containers and VMs.
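
    If you want to quantify the difference on your own hardware, a simple iperf3 run between the host and a container gives a rough comparison. A sketch, assuming iperf3 is available on the host and inside the container (it may not be by default) and that 192.168.1.10 is your server's IP:

      iperf3 -s                # on the Unraid host
      iperf3 -c 192.168.1.10   # from a shell inside a container on the custom network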

     

    FYI: with bridging disabled for the main interface (eth0), the Docker custom network type will be set to macvlan and hidden, unless there are other interfaces on your system that have bridging enabled, in which case the legacy ipvlan option is available. To use the new fix being discussed here, you'll want to keep that set to macvlan.

     

    Other Bug Fixes

    This release resolves corner cases in networking, Libvirt, Docker, WireGuard, NTP, NGINX, NFS, and RPC. It also includes an improvement to the VM Manager so it retains the VNC password during an update, and a change to the shutdown process to allow the NUT plugin to shut the system down.

     

    A small change is that packages in /boot/extra are now treated more like packages installed by plugins, and the installation is logged to syslog rather than to the console.
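
    If you want to verify that your /boot/extra packages were installed on the last boot, you can search the syslog. The exact log message format is an assumption here, so grep for the path or for the package name itself:

      grep -i '/boot/extra' /var/log/syslog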

     

    Known Issues

    Please see this page for information on crashes related to the i915 driver:
      https://docs.unraid.net/unraid-os/release-notes/6.12.0#known-issues

     

    If Docker containers have issues starting after a while, and you are running Plex, go to your Plex Docker container settings, switch to advanced view, and add this to the Extra Params:

    --no-healthcheck
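
    This corresponds to Docker's --no-healthcheck run flag, which disables any HEALTHCHECK defined in the image. To see whether a container currently has a health check, you can inspect it from a web terminal (the container name "plex" is an assumption; use your container's actual name):

      docker inspect -f '{{.Config.Healthcheck}}' plex   # prints the health check, or <nil> if none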

     


     

    Rolling Back

    Before rolling back to an earlier version, it is important to first re-enable bridging:

    • Settings -> Network Settings -> eth0 -> Enable Bridging = Yes

    Then start the array (along with the Docker and VM services) to update your Docker containers, VMs, and WireGuard tunnels back to their previous settings, which should work in older releases.

     

    Once in the older version, confirm these settings are correct for your setup:

    • Settings -> Docker -> Host access to custom networks
    • Settings -> Docker -> Docker custom network type

     

    Changes vs. 6.12.3

     

    Networking

     

    New vhost network for both containers and VMs.

     

    When bridging enabled:

    • Create shim interface which is attached to bridge interface
    • Copy parent address to shim interface with lower metric to allow host access
    • More specific routes are no longer created

     

    When bridging disabled:

    • Copy parent address to vhost interface with lower metric to allow host access
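
    In either mode, the result can be observed from a web terminal; a sketch (interface names depend on your configuration):

      ip addr show    # look for the copied address on the shim interface (bridging on) or vhost0 (bridging off)
      ip route show   # the copied address is installed with a lower metric so host access wins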

     

    Bug fixes and improvements

    • create_network_ini:
      • fixed dhcp hook
      • improved IP address collection
    • diagnostics:
      • Add previous Unraid version to diagnostics version txt file.
      • Add ntp.conf, sshd_config, and servers.conf (with anonymized URLs)
      • anonymize IP addresses
    • libvirt, nginx, nfs, rpc: changed running process detection
    • nfsclient: start negotiation with v4, turn off atime modification
    • rc.6: leave /usr and /lib mounted during shutdown
    • rc.docker:
      • create same IPv6 network for containers and services
      • add more logging when stopping dockerd
    • rc.inet1:
      • do not use promiscuous mode for bridging
      • add persistent option to dhcpcd
    • rc.library: interfaces always listed in the same order, fix show ipv6
    • rc.libvirt: remove 'itco' watchdog from XML if present
    • rc.local: annotate auto-generated /etc/modprobe.d/zfs.conf file
    • rc.services:
      • add logging
      • exclude WireGuard "VPN tunneled access for docker" tunnels from services
      • exclude WireGuard tunnels for ntp (code optimization)
    • webgui:
      • Update monitor_nchan
      • Feedback: refactor feedback script
      • Shares and Pools: show "Minimum free space" as absolute number instead of percentage
      • Pools: minimum free space: only enabled when array is stopped
      • VM Manager: Retain VNC password during update.
      • VM Manager: Remove downloaded '.vv' files.
      • Dashboard: hide ZFS bar when no ZFS is used
      • Network settings: fix DNS settings sometimes disappear
      • Translations: trim key and value in language files
      • add System Drivers page

     

    Linux kernel

    • version 6.1.46 (CVE-2023-20593)
    • CONFIG_SCSI_MPI3MR: Broadcom MPI3 Storage Controller Device Driver

     

    Base Distro

    • btrfs-progs: 6.3.3
    • curl: version 8.2.0 (CVE-2023-32001)
    • kernel-firmware: version 20230724_59fbffa
    • krb5: version 1.19.2 (CVE-2023-36054)
    • openssh: version 9.3p2 (CVE-2023-38408)
    • openssl: version 1.1.1v (CVE-2023-3817 CVE-2023-3446)
    • samba: version 4.17.10 (CVE-2023-3496 CVE-2022-2127 CVE-2023-34968 CVE-2023-3347)




    User Feedback




    I did this

     

    Settings -> Network Settings -> eth0 -> Enable Bridging = No

    Settings -> Docker -> Host access to custom networks = Enabled

     

    My containers that were previously set to br0 did not start up. The change to eth0 was there, but I had to edit all containers and make a dummy change to activate the Apply button. After that, all good!

     

    Thanks!


    I can't find the setting where I select between macvlan and ipvlan in the Docker settings anymore. I have advanced view on.

     

    Edit: this works as intended!

    16 minutes ago, Niklas said:

    I can't find the setting where I select between macvlan and ipvlan in the Docker settings anymore. I have advanced view on.

     


     

     

    In this mode it is always set to macvlan, so the setting is hidden.

    Quote

    Settings -> Network Settings -> eth0 -> Enable Bridging = No

    Settings -> Docker -> Host access to custom networks = Enabled

     

    Is it recommended to make these changes in 6.11.5 before the update from 6.11.5 to 6.12.4-rc18? Or should I update to 6.12.4-rc18 first and apply these changes afterwards?

     


    Hi,

     

    thanks for the update! After the known issues with macvlan I updated the system from 6.11.5 without any issues.

     

    All Dockers were assigned automatically on eth0 -- Internal.

     

    So far no issues --> I will keep you up to date.

     

    BIG THANKS!!!!


    I checked my router after the update, and currently I have two devices with the same IP address assigned to the Unraid server.

     

    Are the containers still working in bridge mode?

     

    [screenshot]

     

    [screenshot]

     

    It seems the Unraid eth0 interface has two MAC addresses --> I use a static IP for my Unraid server.

     

    Is that the problem?

    [screenshot]

    1 hour ago, bonienl said:

    Apply the changes AFTER updating

     

    @ich777 asked me to add the following to my question above:

     

    I'm running three Unraid servers: one Unraid server running on bare metal and two Unraid servers as VMs on that bare metal server. These two VMs act as DAS (Direct Attached Storage, accessed through SMB) only - just the array. No Docker containers, no VMs.

     

    His idea is that bridging needs to be enabled on the Unraid VMs - it already is.

     

    Currently:

     

    Unraid Bare metal: Bonding=no, Bridging=yes, br0 member=eth0, VLANs=no

    Unraid VMs: Bonding=yes (active_backup(1)), Bridging=yes, VLANs=no

     

    Docker Bare metal: MACVLAN, Host access=no

    Docker on VMs: Disabled

     

    Is this ok? It has been running happily for years, currently on Unraid 6.11.5 with Fritzbox DSL, IPv4-only.

     


    I am not sure, but I think the routing table isn't correct for vhost?

    [screenshot]

    [screenshot]

     

    fd00:: is my ULA, which is needed for persistent hostname resolution.

    2003:c0:xxxx:xxxx:: is the prefix from my provider, which changes from time to time, so I can't use those addresses in my local DNS for local hostname resolution.

    1 hour ago, snowy00 said:

    I checked my router after the update, and currently I have two devices with the same IP address assigned to the Unraid server.

     

    Are the containers still working in bridge mode?

     

    [screenshot]

     

    [screenshot]

     

    It seems the Unraid eth0 interface has two MAC addresses --> I use a static IP for my Unraid server.

     

    Is that the problem?

    [screenshot]

     

    This is normal and was the same with the old method.

    Go into your Unraid web terminal and type "ip address", then look for the MAC addresses that match the ones your router lists for your Unraid server's IP addresses.

    It should look like this:

    [screenshot]

    1 hour ago, snowy00 said:

    Are the containers still working in bridge mode?

     

    They should work as before. Host and bridge networks are not touched.

     

    1 hour ago, snowy00 said:

    It seems the Unraid eth0 interface has two MAC addresses

     

    When "host access" is enabled then the IP address of eth0 is duplicated to the macvtap (vhost) interface. The macvtap  (vhost) interface has its own MAC address

     

    15 minutes ago, sonic6 said:

    I am not sure, but I think the routing table isn't correct for vhost?

     

    This is a display thing; the interface will use your public IPv6 address and works properly.

    Let me see if I can fix that.

     

    7 minutes ago, bonienl said:

    @sonic6

     

    I'd like to double-check.

     

    Can you post the result of the following (mask your public IPv6 address as needed):

     

    ip -6 addr show eth0

     

     

    [screenshot]


    I have tested this since the early releases and have seen no more macvlan traces, with bridging both disabled and enabled.

    Great work @bonienl and everyone else who contributed to fixing it!

     


    I made a fix for displaying the IPv6 address in network settings. It should be included in the official release.

     


    Greetings.

     

    Is it *only* bridges that have the macvlan issue?

     

    I have a LACP bond that also acts as a VLAN trunk, from there I have several interfaces defined for various VLANs.

     

    'Enable Bridging' is currently set to 'Yes' on that bond and as a result those interfaces are, for example, br2.10, br2.20, etc.

     

    I have several docker networks defined across those VLAN interfaces, they show as, for example, 'IPv4 custom network on interface br2.10', 'IPv4 custom network on interface br2.20', etc.

     

    If I simply set 'Enable Bridging' to 'No' on that bond, will this macvlan fix work then for all those custom docker networks?

     

    (I would assume I'd be looking at interface names like 'bond0.10' and 'bond0.20' then instead of 'br2.10', etc.)

     


    My server has 3 NICs: eth0, eth2, eth3

    eth2 and eth3 are down, no IP assignment in the network settings

     

     

    I was not able to assign the custom network for eth0 in the Docker containers when only this option was checked:

    "IPv4 custom network on interface eth0"

     

    After also checking "IPv4 custom network on interface eth2", Custom: eth0 became available in the docker network types.

    And custom: eth2 is not showing ... probably would have to enable eth3 first ;-)

     

    1 hour ago, Sascha75 said:

    My server has 3 NICs: eth0, eth2, eth3

    eth2 and eth3 are down, no IP assignment in the network settings

     

     

    I was not able to assign the custom network for eth0 in the Docker containers when only this option was checked:

    "IPv4 custom network on interface eth0"

     

    After also checking "IPv4 custom network on interface eth2", Custom: eth0 became available in the docker network types.

    And custom: eth2 is not showing ... probably would have to enable eth3 first ;-)

     

     

    Thanks for testing, I've not seen that.

     

    On Settings -> Docker, get back to where there is only a custom network on eth0. If you see the option for macvlan/ipvlan (you may not, depending on other settings), be sure it is set to macvlan. Then take a screenshot of the page.

     

    Please take a screenshot of Settings -> Network -> eth0. "Enable bridging" should be no.

     

    Then start the array

     

    Go to one of your Docker containers and take a screenshot of the "Network Type" options. I would expect to see "Custom: eth0" as an option, ideally selected by default.

     

    Then post all of the screenshots and the full diagnostics.zip (from Tools -> Diagnostics)

     

    Thanks!

    4 hours ago, user12345678 said:

    Is it *only* bridges that have the macvlan issue?

     

    I'd recommend installing the Fix Common Problems plugin; it will alert you if it finds call traces related to macvlan in your syslog.

     

    If you aren't seeing call traces with your setup then I wouldn't worry about it.

    1 hour ago, ljm42 said:

    I'd recommend installing the Fix Common Problems plugin,

     

    Already installed.

     

    I've run macvlan on this setup for as long as I've been running Unraid, and I've only ever seen maybe two or three call traces. They never brought my system down, so I never thought much of them or worried much about them.

     

    I had switched recently to ipvlan in anticipation of eventually upgrading to 6.12.x and decided I don't care much for ipvlan.

     

    I thought maybe I'd switch back to macvlan, and this 'fix' may ward off call traces since they seem to increase with the kernel in 6.12.x.

     

    I can either just try and see, or stay on ipvlan, I guess.

     

    I was just wondering if there was something about *just* using the physical interface vs some layer on top (like bridge, bond, etc.) or if it was *just* bridges that were the issue.

     

    Anyway, thanks for your thoughts!

    19 hours ago, snowy00 said:

    Hi,

     

    thanks for the update! After the known issues with macvlan I updated the system from 6.11.5 without any issues.

     

    All Dockers were assigned automatically on eth0 -- Internal.

     

    So far no issues --> I will keep you up to date.

     

    BIG THANKS!!!!

    Can general guidance be added to the release notes for folks still on 6.11.5?

    I’ve been waiting for this solution and I see that many recommendations on upgrading are relative to coming from a prior 6.12.x release. 
     

    And @snowy00, can you confirm you didn’t do anything special (outside of normal upgrade prep) when you went from 6.11.5 to this 6.12.4?
