Unraid OS version 6.12.4 available


ljm42


The 6.12.4 release includes a fix for macvlan call traces(!) along with other features, bug fixes, and security patches.  All users are encouraged to upgrade.

 

Please refer also to the 6.12.0 Announcement post.

 

Upgrade steps for this release

  1. As always, prior to upgrading, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".
  2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
  3. If the system is currently running 6.12.0 - 6.12.3, we suggest stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
    umount /var/lib/docker

    The array should now stop successfully. (This issue was thought to be resolved in 6.12.3, but some systems are still affected.)

  4. Go to Tools -> Update OS. If the update doesn't show, click "Check for Updates"
  5. Wait for the update to download and install
  6. If you have any plugins that install 3rd party drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded. 
  7. Reboot
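The stuck-array workaround in step 3 can be wrapped in a small guard so the unmount is only suggested when the Docker loop image is actually mounted. This is a sketch, assuming Unraid's default /var/lib/docker layout; the `is_mounted` helper is illustrative, not part of Unraid:

```shell
# Sketch: check /proc/mounts before reaching for umount.
# The is_mounted helper is my addition, not an Unraid command.
is_mounted() {
    # true (0) if the given path appears as a mount point in /proc/mounts
    grep -qs " $1 " /proc/mounts
}

if is_mounted /var/lib/docker; then
    echo "array stuck on 'Retry unmounting shares'? run: umount /var/lib/docker"
else
    echo "/var/lib/docker is not mounted; the array should stop normally"
fi
```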

 

Special thanks to all our contributors and beta testers and especially:

@bonienl for finding a solution to the macvlan problem!

@SimonF for bringing us the new System Drivers page

 

This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

2 minutes ago, Chunks said:

Can I ask for a little clarity on this statement?

 

Is it recommended to revert from using 2 NICs to 1? Or is running only 1 just an option now?

 

There is no need to segment Docker traffic to a second NIC to avoid the macvlan problem. You can still do that, but there isn't really any benefit. It's far simpler to do everything on a single NIC.

 

I would not recommend mixing solutions, so either use the previous 2-NIC solution OR use the new solution mentioned here.


I was reading through the notes about macvtap having some benefits. Is there any downside to using it instead?

From my understanding, staying set to macvlan on the docker network and disabling the bond would enable macvtap?

Planning to make the shift to 6.12.4 from 6.11.5 this weekend and have been reading through things to have my ducks in a row.

17 hours ago, Revan335 said:

With ipvlan, no config changes required?

 

Yep! Config changes are just for folks who want to use macvlan 

 

EDIT: To be more clear... if you already have ipvlan enabled, and are happy with that, then no config changes are needed after upgrading

1 hour ago, Gingersnap155 said:

I was reading through the notes about macvtap having some benefits. Is there any downside to using it instead?

From my understanding, staying set to macvlan on the docker network and disabling the bond would enable macvtap?

Planning to make the shift to 6.12.4 from 6.11.5 this weekend and have been reading through things to have my ducks in a row.

 

We haven't found any downsides to enabling macvtap; as mentioned, it may even be faster.

 

Bond doesn't matter; disable bridging to enable macvtap.

9 hours ago, ezhik said:

I keep my version as "PRODUCTION" rather than "LATEST".

This should not happen. Did you wait until the plugin update helper told you it's okay to reboot via the Unraid notifications?

 

This is also something that should be reported in the Nvidia thread since it's a plugin.

 

I'll try this on my test machine later today.

 

EDIT: This is now resolved and I'll post a quick follow up in the Nvidia Driver thread.


Can I get some clarification around moving from macvlan to ipvlan? 

I have upgraded my backup server and made the change to ipvlan with bridging still enabled. Not seeing any issues, but it is a backup with no VMs and only two Docker containers running, neither using a custom network.

 

Concerned about my primary server, which is still using macvlan: bridging enabled, host access to custom networks disabled, ~30 Docker containers (10 using a custom network), and several VMs (some use their own ports on a 4-port NIC; Blue Iris, for example, is also on a separate VLAN).

 

What exactly do I need to change when converting to ipvlan? Does bridging need to be on or off? What about host access to custom networks?

 

Thanks!


Hi,

Is this update going to cause problems for me when I'm running like this?
I have a firewall that doesn't like it when I use ipvlan... when you have over 30 containers and they all come up with the same MAC address but different IPs, I had problems accessing my containers. That's why I use macvlan. I never had any call trace errors before.

Almost all my containers are using the br0 network.

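The firewall complaint above comes from how the ipvlan driver works: every container presents the parent NIC's MAC address, while macvlan gives each container its own virtual MAC. A sketch of the two Docker network types (the subnet, gateway, and parent interface eth0 are assumptions, and these commands need a running Docker daemon):

```shell
# ipvlan: all containers share the parent NIC's MAC (one MAC, many IPs)
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 ipvlan_demo

# macvlan: each container gets its own virtual MAC on the same parent NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macvlan_demo
```

A firewall or switch that keys on MAC addresses sees one device in the ipvlan case and many in the macvlan case, which matches the behavior described above.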
1 hour ago, CiscoCoreX said:

Hi,

Is this update going to cause problems for me when I'm running like this?
I have a firewall that doesn't like it when I use ipvlan... when you have over 30 containers and they all come up with the same MAC address but different IPs, I had problems accessing my containers. That's why I use macvlan. I never had any call trace errors before.

Almost all my containers are using the br0 network.

Correct me if I'm wrong, but as I understand it you can still use macvlan; instead of br0 it will now be eth0, and everything should work as before. That's what I read when I skimmed through the release notes, anyway. I'm also using macvlan and will continue to do so if I can.


Quick question: if we want to follow the steps below, do we do them BEFORE updating to 6.12.4 or AFTER updating?

 

"For those users, we have a new method that reworks networking to avoid this. Tweak a few settings and your Docker containers, VMs, and WireGuard tunnels should automatically adjust to use them:

Settings > Network Settings > eth0 > Enable Bridging = No

Settings > Docker > Host access to custom networks = Enabled"

4 hours ago, strike said:

Correct me if I'm wrong, but as I understand it you can still use macvlan; instead of br0 it will now be eth0, and everything should work as before. That's what I read when I skimmed through the release notes, anyway. I'm also using macvlan and will continue to do so if I can.

Hi, did you upgrade? :)

9 minutes ago, strike said:

Not yet, gonna update during next week.

I took the risk and updated now. Everything seems to have gone smoothly. The only thing I noticed is that the Docker page took much longer to open, hmmm.

 

That's probably why? It took 12 seconds to open the Docker tab.

About to update from 6.12.3 using the ipvlan workaround, as I was getting crashes. I'm also using a Ubiquiti UDMP and a 48-port switch, so I'm wondering if there is anything I specifically need to do other than what's in the update notes?

 

Also, would the 6.12.3 setup cause issues when trying to enable link aggregation through my switch, since my Dell R730xd and its network card should support it? Every time I tried, my system would not like it. And if so, would this update help fix that?

 

Thanks guys.

