Posts posted by ljm42

  1. Interesting. So 6.12.4 looks fine on Chrome on my Android phone. I went to whatsmybrowser.org and it says I have Chrome 116 on my phone; what version do you have on your tablet?

     

    Also, would you please test in safe mode when you get the chance? It will help narrow down where the problem could be.

  2. 1 minute ago, giafidis said:

    I'm running 6.12.3 and using macvlan (no call traces, no crashes), since my router is a Fritzbox and I had problems with ipvlan. Can I leave my network settings as they are before and after updating to 6.12.4?

     

    Sure. If you do end up with call traces, then follow the instructions in the release notes to remediate them.

  3. 2 minutes ago, craigr said:

    Above are my IPMI errors, one for each upgrade 😬.  When it crashes there is no cursor blinking at the terminal on the monitor connected to the machine, no ssh, no web GUI, but the IPMI fan control seems to be running as the fans are not at full speed.  I have to do a hard shutdown with the power button, and then of course a parity check starts on the next reboot which I always cancel because there will be no errors.

     

    Also, after this update, for some reason my Deluge docker would not start.  I forced an update and it started fine after that.

     

    I really don't like these crashes after moving to 6.12.x on my system.

     

    That is confusing. There are a few things I would try if it were my system:

    1) Check the bios for anything related to "fast boot" and disable it

    2) Avoid rebooting. Instead, do a shutdown to power the system off fully and then start it back up.

    3) Go to Settings -> Disk Settings and set "Enable Auto Start" to "no". This will ensure that if it crashes while booting, the array will not be affected. Of course, it also means you will have to manually start the array using the webgui once the system comes up.

     

     

    • Like 1
  4. 17 minutes ago, nka said:

    if I don't have any issue on 6.11.5 with macvlan, should/can I stay on macvlan?

    6.11.2 was crashing on me (freezing) with ipvlan or macvlan... so I rolled back and have been waiting since.

     

    6.12 has issues with macvlan where 6.11 did not. If you have macvlan enabled in 6.11 and make no configuration changes when upgrading, you will likely get call traces and crashes related to macvlan in 6.12.x

     

    Switching from macvlan to ipvlan is still a valid solution for most people but 6.12.4 has an alternate solution that allows you to stay with macvlan if you would rather do that.
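
    If you are not sure whether your system is affected, a quick way to check is to search the syslog for call traces. A minimal sketch using standard tools in a web terminal (the exact wording of a trace can vary):

      # look for call traces and macvlan mentions in the current syslog
      grep -iE "call trace|macvlan" /var/log/syslog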

  5. Oh for some reason I thought you had blanked that area out of the screenshot. Didn't realize that is what I was supposed to focus on : )

     

    Try booting in safe mode to determine whether the issue is related to a plugin. Also, do you have any other computers/browsers you can try?

  6. 5 minutes ago, jsebright said:

    Upgrade from 6.12.3 to 6.12.4 appeared to go OK, but I have a network card issue.

    I have a 10gb fibre card, and onboard 1gb nic. These are set to a bond and it mainly uses the 10gb (onboard is enabled for WOL - a script after waking forces the 10gb card to be active).

     

    My fibre card is constantly showing as disconnected (Interface Ethernet Port 1 is down. Check cable!). Changing designations between eth0 and eth1 makes no difference.

    It is listed in system devices as

    [15b3:1003] 02:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]

     

    The new System Drivers page shows two drivers:

    mlx4_core    Mellanox ConnectX HCA low-level driver    Inuse    net/ethernet/mellanox/mlx4
    mlx4_en    Mellanox ConnectX HCA Ethernet driver    Inuse    net/ethernet/mellanox/mlx4
     

    I have the Mellanox Firmware Tools installed, which seem to report the card OK.

     

    Anyone else with this issue?

     

     

    This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

    • Thanks 1
  7. 11 hours ago, morreale said:

    What exactly do I need to change when converting to ipvlan? Does bridging need to be on or off? Host access to custom networks?

     

    Chances are, the only change you need to make is:

    Settings > Docker > Docker custom network type = ipvlan

     

    But here is some more info:

    Settings > Network Settings > eth0 > Enable Bonding = Yes or No, either work
    Settings > Network Settings > eth0 > Enable Bridging = Yes (must be yes to use ipvlan)
    Settings > Docker > Docker custom network type = ipvlan
    Settings > Docker > Host access to custom networks = Enabled (assuming you want that; disabled is OK too)
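
    If you want to confirm the change took effect after Docker restarts, one way to check (a sketch, assuming your custom network is still named br0) is to inspect the network driver from a web terminal:

      # should print "ipvlan" once the custom network type has been switched
      docker network inspect --format '{{.Driver}}' br0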

     

    • Upvote 1
  8. 8 hours ago, Ollebro said:

    Updated from Version: 6.12.3 to Version: 6.12.4 and lost all my docker containers. Any tips on how to get them back?

     

    This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

  9. 10 hours ago, CiscoCoreX said:

    Hi,

    Is this update going to cause any problems for me when I'm running like this?
    I do have a firewall that doesn't like it when I use IPVLAN... when you have over 30 containers and all of them come up with the same MAC address but different IPs, I had problems accessing my containers. That's why I use MACVLAN. Never had any CALL TRACE errors before.

    Almost all my containers are using the br0 network.

     

    I would expect 6.12.x to throw call traces with this configuration.  If it does, then follow the guidance in the 6.12.4 release notes to mitigate. If not, then don't worry about it.

     

    • Like 1
  10. 10 hours ago, KluthR said:

    But that's not needed for the macvtap solution, right? It's just needed in case host <-> containers want to talk, like usual?

     

    Correct, if you don't need communication between host/containers/vms then you don't have to enable this. But we figure 99% of people will want that so we included it in the list.

    • Like 1
  11. 8 hours ago, bk101 said:

    Quick question: if we want to follow the following steps, do we do them BEFORE updating to 6.12.4 or AFTER updating?

     

    "For those users, we have a new method that reworks networking to avoid this. Tweak a few settings and your Docker containers, VMs, and WireGuard tunnels should automatically adjust to use them:

    Settings > Network Settings > eth0 > Enable Bridging = No

    Settings > Docker > Host access to custom networks = Enabled"

     

    Upgrade first

  12. 3 hours ago, TheIlluminate said:

    About to update from 6.12.3 using the ipvlan workaround as I was getting crashes. I'm also using a Ubiquiti UDMP and a 48-port switch, so I'm wondering if there is anything I specifically need to do other than what's in the update notes?

     

    Also, would the 6.12.3 setup cause issues when trying to enable aggregation through my switch, since my Dell R730xd and the network card should support it? Because every time I tried, my system would not like it. And if so, would this update help fix that?

     

    Thanks guys.

     

    Switching to ipvlan should prevent crashes related to macvlan, because macvlan won't be in use.

     

    If you ever have unexplained crashes, go to Settings -> Syslog Server and temporarily enable "Mirror syslog to flash", then after a crash you'll find the syslog on the flash drive in the logs folder.  That syslog will be useful in determining what happened. Don't leave this setting enabled long term though, as it adds a lot of writes to the flash drive.
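
    After a crash and reboot, the mirrored log can be read straight off the flash drive. A minimal sketch, assuming the flash is mounted at /boot and the mirror writes into /boot/logs (adjust the filename to whatever the listing shows):

      # list the mirrored logs, newest first, then page through the relevant one
      ls -lt /boot/logs/
      less /boot/logs/syslog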

     

    There are several ways to do bonding, and some require special configuration of your switch. I recommend avoiding those because it is very difficult for anyone to provide remote support without fully understanding your network.

  13. 1 hour ago, Gingersnap155 said:

    I was reading through the notes about macvtap having some benefits. Is there any downside to using it instead? 
     

    From my understanding, staying set to macvlan on the docker network and disabling the bond would enable macvtap?
     

    Planning to make the shift to 6.12.4 from 6.11.5 this weekend and have been reading through things to have my ducks in a row. 

     

    We haven't found any downsides to enabling macvtap; as mentioned, it may even be faster.

     

    Bonding doesn't matter; disable bridging to enable macvtap.
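
    If you want to confirm macvtap is actually in use after disabling bridging, a quick sketch using standard iproute2 in a web terminal:

      # macvtap interfaces only exist while containers/VMs with their own IP are running
      ip -d link show | grep -i macvtap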

    • Thanks 1
  14. 17 hours ago, Revan335 said:

    With ipvlan, no config changes required?

     

    Yep! Config changes are just for folks who want to use macvlan 

     

    EDIT: To be more clear... if you already have ipvlan enabled, and are happy with that, then no config changes are needed after upgrading

  15. 4 hours ago, TRaSH said:

    In my current setup I use bonding (Mode 4 (802.3ad)); the main reason was that when I'm doing heavy up/download traffic, I got issues during Plex playback because my NIC was fully saturated.

    With these changes I have the feeling I will go back to the same issues I had before I decided to bond them.

     

    I have updated the 6.12.4 release notes to make it clear that the new solution does work with bonding:

    https://docs.unraid.net/unraid-os/release-notes/6.12.4/

  16. 2 minutes ago, Chunks said:

    Can I ask for a little clarity on this statement?

     

    Is it recommended to revert from using 2 NICs to 1? Or is running only 1 just an option now?

     

    There is no need to segment docker traffic to a second nic to avoid the macvlan problem. You can still do that, but there isn't really any benefit. Far simpler to do everything on a single nic.

     

    I would not recommend mixing solutions, so either use the previous 2 nic solution OR use the new solution mentioned here.

  17. 1 hour ago, Chunks said:

    When moving Dockers from a single interface (eth0) to a new dedicated one (eth1)... I had no problems when the dockers were swapped from br0 to eth1 networks. But I have a whole bunch of dockers using just "bridge". These get the IP address of the host system, the main Unraid IP (eth0).

     

    For example:

     

    [example screenshot omitted]

     

    Is the fix to make all of these eth1 as well? If so, what IP do I use? Can I create a single new address that they all can share, or is this the whole point of removing bridging? Or did I miss something/mess something up? 

     

     

    Docker containers in bridge or host mode will always use Unraid's main IP. This guide is specifically to deal with containers that have a dedicated IP on a custom network.
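
    To illustrate the difference, a hypothetical docker CLI sketch (names and addresses are made up; on Unraid these settings normally live in the container template rather than on the command line):

      # bridge mode: the container sits behind Unraid's main IP
      docker run -d --name web-bridge --network bridge -p 8080:80 nginx

      # dedicated IP on a custom network: only this kind of container is affected by the macvlan/ipvlan guidance
      docker run -d --name web-dedicated --network br0 --ip 192.168.1.50 nginx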

    • Like 1
  18. The 6.12.4 release includes a fix for macvlan call traces(!) along with other features, bug fixes, and security patches.  All users are encouraged to upgrade.

     

    Please refer also to the 6.12.0 Announcement post.

     

    Upgrade steps for this release

    1. As always, prior to upgrading, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".
    2. Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
    3. If the system is currently running 6.12.0 - 6.12.3, we suggest stopping the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
      umount /var/lib/docker

      The array should now stop successfully. (This issue was thought to be resolved with 6.12.3, but some systems are still affected; see the quick check after these steps.)

    4. Go to Tools -> Update OS. If the update doesn't show, click "Check for Updates"
    5. Wait for the update to download and install
    6. If you have any plugins that install 3rd party drivers (NVIDIA, Realtek, etc), wait for the notification that the new version of the driver has been downloaded. 
    7. Reboot
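
    Related to step 3, a quick way to check whether the Docker loop image is still mounted before forcing the unmount (a sketch using standard tools):

      # if this prints a line, the image is still mounted
      mount | grep /var/lib/docker
      # in that case, run the umount from step 3, then stop the array again
      umount /var/lib/docker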

     

    Special thanks to all our contributors and beta testers and especially:

    @bonienl for finding a solution to the macvlan problem!

    @SimonF for bringing us the new System Drivers page

     

    This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

    • Like 14
    • Thanks 7