• Unraid OS version 6.12.4-rc18 available


    ljm42

    This release has a fix for macvlan call traces(!) along with other bug fixes, security patches, and one new feature. We'd like your feedback before releasing it as 6.12.4

     

    This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

     

    Upgrade steps for this release

    1. As always, prior to upgrading, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".
    2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
3. If the system is currently running 6.12.0 - 6.12.2, we suggest stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
  umount /var/lib/docker

  The array should now stop successfully. (This issue was resolved in 6.12.3)

    4. Go to Tools -> Update OS, change the Branch to "Next". If the update doesn't show, click "Check for Updates"
    5. Wait for the update to download and install
    6. If you have any plugins that install drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded. 
    7. Reboot
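If you want to guard the step-3 workaround so the unmount only runs when it is actually needed, something like the following sketch works (the helper name is ours; the mount-table path is standard Linux; the second argument only exists so the check can be exercised against a fixture file):

```shell
# Only attempt the unmount when the path is really a mount point.
# is_mounted greps a mount table; it defaults to the live /proc/mounts.
is_mounted() {
    grep -qs " $1 " "${2:-/proc/mounts}"
}

# On the server you would run:
#   is_mounted /var/lib/docker && umount /var/lib/docker
```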

     

    Add Tools/System Drivers page

This new page gives you visibility into the drivers available/in use on your system. 3rd party drivers installed by plugins (such as NVIDIA and Realtek) have an icon that links to the support page for that driver. And you can now add/modify/delete the modprobe.d config file for any driver without having to find that file on your flash drive. Thanks to @SimonF for adding this functionality!

     

    Fix for macvlan call traces!

    The big news in this test release is that we believe we have resolved the macvlan issues that have been plaguing us recently! We'd appreciate your help confirming the changes. Huge thanks to @bonienl for tracking this down!

     

The root of the problem is that macvlan used for custom Docker networks is unreliable when the parent interface is a bridge (like br0); it works best on a physical interface (like eth0). We believe this to be a longstanding kernel issue and have posted a bug report.

     

If you are getting call traces related to macvlan, as a first step we'd recommend navigating to Settings/Docker, switching to advanced view, and changing the "Docker custom network type" from macvlan to ipvlan. This is the default configuration Unraid has shipped with since version 6.11.5 and should work for most systems.

     

However, some users have reported issues with port forwarding from certain routers (Fritzbox) and reduced functionality with advanced network management tools (Ubiquiti) when in ipvlan mode.

     

    For those users, in this rc we have a new method that reworks networking to avoid this. Simply tweak a few settings and your Docker containers, VMs, and WireGuard tunnels will automatically adjust to use them:

    • Settings -> Network Settings -> eth0 -> Enable Bridging = No
    • Settings -> Docker -> Host access to custom networks = Enabled

     

Note: if you previously used the two-NIC method for Docker segregation, you'll also want to revert that:

    • Settings -> Docker -> custom network on interface eth0 (i.e. make sure eth0 is configured for the custom network, not eth1)

     

    When you start the array, the host, VMs, and Docker containers will all be able to communicate, and there should be no more call traces!

     

    Troubleshooting

    • If your Docker containers with custom IPs aren't starting, edit them and change the "Network type" to "Custom: eth0". We attempted to do this automatically, but depending on how things were customized you might need to do it manually.
    • If your VMs are having network issues, edit them and set the Network Source to "vhost0". Also, ensure there is a MAC address assigned.
    • If your WireGuard tunnels won't start, make a dummy change to each tunnel and save.
    • If you are having issues port forwarding to Docker containers (particularly on a Fritzbox) delete and recreate the port forward in your router.
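To confirm the fix took after the server has been running for a while, you can scan the syslog for call traces that mention macvlan. A rough heuristic sketch (the function name is ours; on Unraid you would point it at /var/log/syslog):

```shell
# Returns success if the given log contains both a call trace and a
# macvlan reference. Crude, but good enough for a quick yes/no check.
has_macvlan_trace() {
    grep -qi 'call trace' "$1" 2>/dev/null && grep -qi 'macvlan' "$1" 2>/dev/null
}

# Typical use on Unraid:
#   has_macvlan_trace /var/log/syslog && echo "still seeing macvlan call traces"
```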

     

    To get a little more technical...

After upgrading to this release, if bridging remains enabled on eth0 then everything works as it used to. You can attempt to work around the call traces by disabling the custom Docker network, using ipvlan instead of macvlan, or using the two-NIC Docker segmentation method with containers on eth1.

     

    Starting with this release, when you disable bridging on eth0 we create a new macvtap network for Docker containers and VMs to use. It has a parent of eth0 instead of br0, which is how we avoid the call traces.
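For the curious, this is roughly the shape of the iproute2 commands involved in building a macvtap interface with eth0 as its parent. This is an illustration, not Unraid's actual scripts (the interface names come from the release notes; exact flags are assumed), so the commands are only printed, not executed:

```shell
# Print the iproute2 commands that would create a macvtap interface
# on top of a physical parent interface.
macvtap_cmds() {
    local parent=$1 name=$2
    printf '%s\n' \
        "ip link add link $parent name $name type macvtap mode bridge" \
        "ip link set $name up"
}

macvtap_cmds eth0 vhost0
```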

     

    A side benefit is that macvtap networks are reported to be faster than bridged networks, so you may see speed improvements when communicating with Docker containers and VMs.

     

FYI: with bridging disabled for the main interface (eth0), the Docker custom network type will be set to macvlan and hidden, unless other interfaces on your system have bridging enabled, in which case the legacy ipvlan option is also available. To use the new fix being discussed here, you'll want to keep that set to macvlan.
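That rule can be restated as a small decision function (illustrative only, not Unraid code; inputs are the bridging state of eth0 and whether any other interface is bridged):

```shell
# Report what the "Docker custom network type" selector shows for a
# given combination of bridging settings.
custom_network_type() {
    local eth0_bridged=$1 other_bridged=$2
    if [ "$eth0_bridged" = "no" ]; then
        if [ "$other_bridged" = "yes" ]; then
            echo "macvlan, with legacy ipvlan selectable"
        else
            echo "macvlan, field hidden"
        fi
    else
        echo "macvlan or ipvlan, user's choice"
    fi
}

custom_network_type no no
```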

     

    Other Bug Fixes

This release resolves corner cases in networking, Libvirt, Docker, WireGuard, NTP, NGINX, NFS, and RPC. It also includes an improvement to the VM Manager so it retains the VNC password during an update, and a change to the shutdown process that allows the NUT plugin to shut the system down.

     

    A small change is that packages in /boot/extra are now treated more like packages installed by plugins, and the installation is logged to syslog rather than to the console.

     

    Known Issues

    Please see this page for information on crashes related to the i915 driver:
      https://docs.unraid.net/unraid-os/release-notes/6.12.0#known-issues

     

If Docker containers have issues starting after a while and you are running Plex, go to your Plex Docker container settings, switch to advanced view, and add this to the Extra Params:

    --no-healthcheck

     


     

    Rolling Back

Before rolling back to an earlier version, it is important to first change bridging back to Yes:

    • Settings -> Network Settings -> eth0 -> Enable Bridging = Yes

Then start the array (along with the Docker and VM services) to update your Docker containers, VMs, and WireGuard tunnels back to their previous settings, which will work in older releases.

     

    Once in the older version, confirm these settings are correct for your setup:

    • Settings -> Docker -> Host access to custom networks
    • Settings -> Docker -> Docker custom network type

     

    Changes vs. 6.12.3

     

    Networking

     

    New vhost network for both containers and VMs.

     

    When bridging enabled:

    • Create shim interface which is attached to bridge interface
    • Copy parent address to shim interface with lower metric to allow host access
    • More specific routes are no longer created

     

    When bridging disabled:

    • Copy parent address to vhost interface with lower metric to allow host access

     

    Bug fixes and improvements

    • create_network_ini:
      • fixed dhcp hook
      • improved IP address collection
    • diagnostics:
      • Add previous Unraid version to diagnostics version txt file.
      • Add ntp.conf, sshd.config, and servers.conf (with anonymized URLs)
      • anonymize IP addresses
    • libvirt, nginx, nfs, rpc: changed running process detection
    • nfsclient: start negotiation with v4, turn off atime modification
    • rc.6: leave /usr and /lib mounted during shutdown
    • rc.docker:
      • create same IPv6 network for containers and services
      • add more logging when stopping dockerd
    • rc.inet1:
      • do not use promiscuous mode for bridging
      • add persistent option to dhcpcd
    • rc.library: interfaces always listed in the same order, fix show ipv6
    • rc.libvirt: remove 'itco' watchdog from XML if present
    • rc.local: annotate auto-generated /etc/modprobe.d/zfs.conf file
    • rc.services:
      • add logging
      • exclude WireGuard "VPN tunneled access for docker" tunnels from services
      • exclude WireGuard tunnels for ntp (code optimization)
    • webgui:
      • Update monitor_nchan
      • Feedback: refactor feedback script
      • Shares and Pools: show "Minimum free space" as absolute number instead of percentage
      • Pools: minimum free space: only enabled when array is stopped
      • VM Manager: Retain VNC password during update.
      • VM Manager: Remove downloaded '.vv' files.
      • Dashboard: hide ZFS bar when no ZFS is used
      • Network settings: fix DNS settings sometimes disappear
      • Translations: trim key and value in language files
      • add System Drivers page
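As an aside, the "anonymize IP addresses" diagnostics change above amounts to something like the following transformation (a sketch, not the actual diagnostics code): mask dotted-quad addresses before the logs are bundled into diagnostics.zip.

```shell
# Replace every IPv4 dotted-quad in the input stream with a masked form.
anonymize_ips() {
    sed -E 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/x.x.x.x/g'
}

echo "Aug 23 19:07:55 Server sshd: accepted connection from 192.168.1.50" | anonymize_ips
```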

     

    Linux kernel

    • version 6.1.46 (CVE-2023-20593)
    • CONFIG_SCSI_MPI3MR: Broadcom MPI3 Storage Controller Device Driver

     

    Base Distro

    • btrfs-progs: 6.3.3

    • curl: version 8.2.0 (CVE-2023-32001)

    • kernel-firmware: version 20230724_59fbffa

    • krb5: version 1.19.2 (CVE-2023-36054)

    • openssh: version 9.3p2 (CVE-2023-38408)

    • openssl: version 1.1.1v (CVE-2023-3817 CVE-2023-3446)

    • samba: version 4.17.10 (CVE-2023-3496 CVE-2022-2127 CVE-2023-34968 CVE-2023-3496 CVE-2023-3347)




    User Feedback




    1 hour ago, J05u said:

    Any updates for Arc driver compatibility?

Probably not, as support is only available on kernel 6.2+ from what I understand.

     

Quoting the release notes:

    Linux kernel: version 6.1.46 (CVE-2023-20593)

     

    13 hours ago, user12345678 said:

I have a LACP bond that also acts as a VLAN trunk; from there I have several interfaces defined for various VLANs.

 

Disabling bridging will change Docker to use the bonded interface, including the VLAN networks.

 

This is my Docker setup:

     

    # docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    b773f11c6dcc   bond0     macvlan   local
    a9315eb8f702   bond0.3   macvlan   local
    5d489327c46b   bond0.4   macvlan   local
    d69bccba5929   bond0.5   macvlan   local
    a7b83120a899   bond0.6   macvlan   local

     

    10 hours ago, Sascha75 said:

My server has 3 NICs: eth0, eth2, eth3

eth2 and eth3 are down, with no IP assignment in the network settings
    eth2 and eth3 are down, no IP assignment in the network settings

     

    That should work properly (I have 4 NICs), likely a configuration issue. Need to see your diagnostics.

     

    2 hours ago, sonic6 said:

The Docker area in the Dashboard is very slow with 6.12.4-rc18.

     

    How many containers do you have?

    My main server has 30 containers and display is instantaneous on the dashboard.

    There are no code changes for the dashboard that would explain a different behavior.

     

    6 minutes ago, bonienl said:

    How many containers do you have?

41 containers are running.
I didn't change the number of containers in the last weeks.

    13 hours ago, nraygun said:

    And @snowy00, can you confirm you didn’t do anything special (outside of normal upgrade prep) when you went from 6.11.5 to this 6.12.4?

     

Yes, I can confirm it is not necessary to do anything special.

     

    My procedure was:

    - Stop VM and Docker

    - Install Update 

    - Change the network settings 

      Settings -> Network Settings -> eth0 -> Enable Bridging = No

      Settings -> Docker -> Host access to custom networks = Enabled

    - Start VM and Docker 

    - No additional modifications needed 

    10 hours ago, bonienl said:

    There are no code changes for the dashboard that would explain a different behavior.

     

22 Dockers here, and like @sonic6 mentioned, there is definitely a change in behaviour since 6.12.4-rc13

     

no changes here besides the network setup due to the macvlan changes; same Dockers, same VMs, same plugins, same hardware.

     

turned off bridging

changed Dockers to eth0

changed VMs to vhost

     

nothing to worry about as it's only cosmetic, but it's been more or less instant here and now we have the delay; just confirming @sonic6's post.


    I have only noticed it takes longer to check for updates maybe. GUI fast.

     

What's the

    Aug 23 19:07:55 Server monitor: Stop running nchan processes
    Aug 23 19:37:28 Server monitor: Stop running nchan processes
    Aug 23 20:29:32 Server monitor: Stop running nchan processes

    in the logs?

    11 hours ago, bonienl said:

     

    That should work properly (I have 4 NICs), likely a configuration issue. Need to see your diagnostics.

     

Maybe that came across wrong; I was simply stating my current setup. I have manually disabled the other 2 NICs.

     

The main issue is that the custom network for eth0 was not available for selection in Docker until I added a custom network for eth2. I will try to add some screenshots later.

    23 hours ago, ljm42 said:

     

    Thanks for testing, I've not seen that.

     

On Settings -> Docker, get back to where there is only a custom network on eth0. If you see the option for macvlan/ipvlan (you may not, depending on other settings), be sure it is set to macvlan. Then take a screenshot of the page.

     

    Please take a screenshot of Settings -> Network -> eth0. "Enable bridging" should be no.

     

    Then start the array

     

    Go to one of your Docker containers and take a screenshot of the "Network Type" options. I would expect to see "Custom: eth0" as an option, ideally selected by default.

     

    Then post all of the screenshots and the full diagnostics.zip (from Tools -> Diagnostics)

     

    Thanks!

     

I've tried to reproduce the missing eth0 in Docker networks by reverting all settings and repeating them, but was not able to do so. Maybe I did not follow the correct order the first time. Now my Docker on br0 was automatically switched to eth0, as described in the instructions.

     

     


    First of all, great work! I did as instructed and macvlan works well so far.  Question:

     

I have 3 NICs: eth0 plus eth1/eth2. I disabled bridging on eth0; eth1/eth2 use bonded LACP, with a few VLANs that Docker is using. Is it recommended that I disable bridging on ALL interfaces and only use bonding? Current Docker networks below:

     

root@Unraid:~# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
410ede2fc44b   br1       macvlan   local
8ef814bdd249   br1.10    macvlan   local
24c58ff2a324   br1.15    macvlan   local
18d59eef887b   br1.20    macvlan   local
a65a073f4cd7   bridge    bridge    local
8002478340ab   eth0      macvlan   local
f6e5f48f1e23   host      host      local
5882761c0802   none      null      local


So far it's been looking very good. 👍

The Webterminal does work again on mobile (Android 13, Chrome) but clicking the syslog shortcut still breaks everything.
And while I'm at the syslog, this is certainly new in 6.12.4:

[syslog screenshot]

This keeps going for a while; it seems to have started after the 24h disconnect.

    4 hours ago, Jclendineng said:

    Is it recommended that I disable bridging on ALL interfaces and only use bonding?

     

I recommend disabling bridging and using bond1 instead. We have seen that the new macvtap network gives much better performance than bridging, though your current setup should work without issues.

     

    1 hour ago, Mainfrezzer said:

The Webterminal does work again on mobile (Android 13, Chrome) but clicking the syslog shortcut still breaks everything. [...] This keeps going for a while; it seems to have started after the 24h disconnect.

     

I think this is because of the changing IPv6 prefix; fe80 prefixes change in Germany after DCs.
So this isn't usable for a static route?

    @bonienl can you take a look into that?

    1 hour ago, sonic6 said:

     

    i think this is because of the changing ipv6 prefix. fe80 prefixes changes in germany after DC's
    so this isn't usable for a static route?

    @bonienl can you take a look into that?

That was spot on. The GUA prefix changed. I did notice that WireGuard didn't use IPv6 anymore; eth0 changed correctly, but the vhost still used the old IPv6 address, which I reckon is the one being used.


    Okay, throwing this up here because I'm missing something, a bit of a n00b, and maybe just slow and stupid.

     

    I had previously implemented ipvlan in Docker. Then I added a NIC (eth1) and implemented the "two-nic solution" to move back to macvlan. Seemed fine.

     

    I updated to 6.12.4-rc18 and made what I thought were the recommended changes. I ended up redefining some of the networking for Docker, as this sort of broke all my previous br1 bridges. No worries, really.

     

    But, I seem to have Docker on eth0 custom network now. Can I move it to eth1 and if so, how? What am I missing?

     

    Here's my setup...
[four settings screenshots omitted]

*note: I tried to add routing for eth1 here but it didn't save.

    On 8/23/2023 at 1:39 AM, ChatNoir said:

    Probably not as the [Intel Arc driver] support is only available on kernel 6.2+ from what I understand.

     

     


That's correct, and the number of moving parts between the Linux 6.1, 6.2, and 6.3 kernels, ZFS support, and Arc drivers in the last 9 months has made it very hard to make everybody happy here, I'm sure. I have an A380 installed, waiting to add to my container for Plex transcoding; however, the host OS must support it first, which means 6.2 or later. 6.2 isn't LTS (AFAIK there won't be any more 6.2 kernels?), and 6.3 is still in active development and support, so that's the path forward; however, OpenZFS only added 6.3 support two months ago. So for Unraid devs and release managers, deciding which kernel to use is probably a choice between the wide range of use cases and number of ZFS users versus the very few who have Arc cards; moving to a new kernel requires a ton of regression testing, I'm sure.

    1 hour ago, The_Holocron said:

    But, I seem to have Docker on eth0 custom network now. Can I move it to eth1 and if so, how? What am I missing?

     

    If you really want to do it, you should disable bridging on eth1 (screenshot 2) and move the custom network from eth0 to eth1 (screenshot 4)

     

    But what value do you see in doing this? With the changes in this release, there is no longer a benefit to segregating Docker to its own network, I think you'd be better off keeping things simple and not using eth1

    Just now, ljm42 said:

     

    But what value do you see in doing this? With the changes in this release, there is no longer a benefit to segregating Docker to its own network, I think you'd be better off keeping things simple and not using eth1

    Fair points. I guess I'm trying to figure out what to do with eth1 now...


Never had issues with the "original" br0 macvlan, but looking at the technical details: the new macvtap method seems nice and could be the new default, if I understand things correctly. Even if the kernel bug gets addressed, macvtap on ethX seems like a better default. Correct me if I misunderstood something.





