  • [6.8-RC1] kernel: tun: unexpected GSO


    Outcasst
    • Retest Minor
    Oct 12 03:35:17 Storage kernel: tun: unexpected GSO type: 0x0, gso_size 1357, hdr_len 1411
    Oct 12 03:35:17 Storage kernel: tun: 13 e4 3d f7 10 86 b8 9e 87 b1 5f 81 d9 7a 98 c9 ..=......._..z..
    Oct 12 03:35:17 Storage kernel: tun: 26 fa 2d 78 50 03 f2 b2 22 55 bc 68 29 75 83 46 &.-xP..."U.h)u.F
    Oct 12 03:35:17 Storage kernel: tun: 04 35 d4 e4 71 d8 5c 04 e3 e2 a2 6d 4e 1f 22 9d .5..q.\....mN.".
    Oct 12 03:35:17 Storage kernel: tun: 6f 97 72 60 c9 63 2b dc f4 ec c7 4f 68 60 66 9e o.r`.c+....Oh`f.

    Getting the above message repeated over and over in the log whenever a Docker container tries to access the NIC.

    storage-diagnostics-20191012-0237.zip

    • Thanks 3


    User Feedback

    Recommended Comments



    I saw this issue in the beta testing of 6.8.  I solved it by remapping the IP addresses of my dockers.  You probably have unique IP addresses set up for your dockers.  Set the network type to something other than 'Custom:br0'.  This will then use the IP address of the Unraid server.  Assign a unique port to the docker and access the docker that way.
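For reference, a rough sketch of what that workaround looks like at the Docker CLI level. The container name, image, and port below are made up, and on Unraid you would normally change the Network Type and port mapping in the container template rather than run docker directly; the command is only printed here as a dry run.

```shell
# Hypothetical names and ports -- illustration only, printed as a dry run.
CONTAINER=my-app
HOST_PORT=8080   # unique host port; the container is then reached at <unraid-ip>:8080
CMD="docker run -d --name $CONTAINER --network bridge -p $HOST_PORT:80 my-app-image"
echo "$CMD"
```

With the container on the default bridge network, its traffic uses the Unraid server's own IP instead of a macvlan address on br0.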

    Edited by dlandon


    That isn't a fix. While it may be an acceptable workaround for some, it's not for others.

    • Like 1


    Also seeing this.  I have dockers configured to go out different interfaces based on IP address, so setting them back to UnraidIP:port does not work for me.

    Edited by B_Sinn3d


    I just configured a container to run on Custom:br0 with a static IP and I'm not seeing it...  Maybe set the container back to bridge and then switch it back to a static IP.

    2 hours ago, BRiT said:

    That isn't a fix. While it may be an acceptable workaround for some, it's not for others.

    Maybe not, but until Linux or Limetech fixes it, it's the only thing I've found to get around the problem.


    Stop all dockers and then start them one at a time.  You should see them work until you get to 4 or 5, if I remember correctly, and then they will start to give the issue.

    • Thanks 1


    I originally thought it was a VM issue also, but I think it's really the build-up of the number of dockers and VMs using static IP addresses.

     

    This is a strange problem with networking.  I don't know how LT will be able to find a solution to this.


    Does disabling GSO and/or all NIC offloading avoid this problem, or does that slow things down too much?


    When I had this problem in the beta series, I reworked the Tips and Tweaks plugin to turn off GSO and all offloading on NICs; it made no difference.

     

    From what I remember, everything worked fine; it's just a flood of log messages that would eventually crash the server from the log growing too large.
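For anyone who wants to repeat that experiment by hand, offload toggling is done per interface with ethtool. A dry-run sketch follows; the interface name is an assumption (list yours with `ip link`), and the real commands need root, so they are only assembled and printed here.

```shell
IFACE=eth0   # assumed interface name; check with `ip link`
CMDS=""
# GSO plus the related receive/transmit offloads typically toggled together.
for FEATURE in gso gro tso; do
    CMDS="$CMDS ethtool -K $IFACE $FEATURE off;"
done
echo "$CMDS"   # print instead of executing, since ethtool needs root
# Current settings can be inspected with: ethtool -k eth0
```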


    Dagnabbit. That's another showstopper that means I'll have to remain on 6.6.7. Thanks for testing. 


    I'm seeing the same thing here. Keeps filling up /var/log

     

    Edit: This was happening a lot more when I had GSO off on some interfaces. Nothing has changed. I'm regularly running

     

    rm /var/log/syslog && killall -HUP rsyslogd

    to keep /var/log clean

    Edited by optavia


    My observation:

     

    This error happens when a VM and a docker container with a custom network (macvlan) share the same interface.

     

    I have separated VMs and docker containers onto their own dedicated VLAN interfaces, and this error is not happening anymore for me.
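On stock Linux, the separation described above boils down to 802.1Q VLAN sub-interfaces on one parent NIC. A dry-run sketch with iproute2 (the parent NIC name and VLAN ID are assumptions; Unraid exposes the equivalent in its network settings UI, so you would not normally run these by hand):

```shell
PARENT=eth0    # assumed parent NIC
VLAN_ID=20     # assumed VLAN ID for the docker/VM segment
VLAN_IF="$PARENT.$VLAN_ID"
# Assemble the commands as a dry run; creating interfaces needs root.
SETUP="ip link add link $PARENT name $VLAN_IF type vlan id $VLAN_ID; ip link set $VLAN_IF up"
echo "$SETUP"
```

Docker's macvlan network and the VMs' bridge can then each sit on their own sub-interface, so they no longer share one parent interface.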

     


    I'll test that theory out later. I have a spare NIC open in my server and can easily throw up a new VLAN. I'll let y'all know.


    Seeing this too; it's amazing how fast my log is filling up! Looks like I may need an hourly cron job for this. Noting that as activity on the interface increases, so does the speed at which the messages occur. Downloading a file via a container has my log rolling so fast I can hardly follow it. Cron job to run hourly has been created!

     

    Edit: sadly this new board has only one NIC so I'd like to be able to share it.

    Edited by BLKMGK
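Such an hourly job could reuse optavia's command from earlier in the thread as a crontab entry. The schedule and install method below are assumptions, shown as a dry run:

```shell
# Hypothetical hourly crontab line reusing the syslog-reset command
# posted earlier in the thread.
CRON_LINE='0 * * * * rm -f /var/log/syslog && killall -HUP rsyslogd'
echo "$CRON_LINE"
# Install with: (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```

The HUP makes rsyslogd reopen its output files, so logging continues into a fresh, empty syslog.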

    7 minutes ago, BLKMGK said:

    sadly this new board has only one NIC

    If your switch/router supports VLANs, it may be an option to use VLANs to segregate networks over the same physical interface.


    I moved my VMs off the docker br0 to br1 on a second Ethernet port, and now no errors.

     

    Edited by optavia
    • Like 1

    51 minutes ago, bonienl said:

    If your switch/router supports VLANs, it may be an option to use VLANs to segregate networks over the same physical interface.

    Ah true, I use PFsense and DLink switches that are managed, so that might be possible if it doesn't need multiple NICs. Unfortunately it's not something I've ever messed with, and I'm pretty ignorant about it. I'll wait and see how others fare; hopefully there's a solution that doesn't require this. I should read up on it regardless; I've got some IP cameras I ought to better segment anyway. Thanks for the response!

    9 minutes ago, BLKMGK said:

    I use PFsense and DLink switches that are managed, so that might be possible if it doesn't need multiple NICs. 

    I have created VLANs on the same physical NIC.  My server has two NICs but I am using eth1 for something else.

     

    @bonienl wrote a very good guide on creating VLANs for docker, VMs, etc.  I followed that guide and it works beautifully.

     

    I have Ubiquiti UniFi routers and switches which support VLANs very well. 


    Thanks @bonienl for the diagnosis. Upgraded to RC1 after moving all my VMs to a separate NIC feeding br1 and everything seems good. As far as I can tell the disk write speed is also back to where it should be. Hopefully the move from 6.6.7 will be permanent this time! 🙂


    Moving the VMs to their own VLAN did resolve the issue. Not sure this is the best "fix", but it does show the root issue.


    I am also seeing these in my log.  I have like 6 dockers and 1 Windows 10 VM running.  Most of these are set with static IPs on the br0 network setting.

    Edited by JM2005
    • Like 1






  • Status Definitions

     

    Open = Under consideration.

     

    Solved = The issue has been resolved.

     

    Solved version = The issue has been resolved in the indicated release version.

     

    Closed = Feedback or opinion better posted on our forum for discussion. Also for reports we cannot reproduce or need more information. In this case just add a comment and we will review it again.

     

    Retest = Please retest in latest release.


    Priority Definitions

     

    Minor = Something not working correctly.

     

    Urgent = Server crash, data loss, or other showstopper.

     

    Annoyance = Doesn't affect functionality but should be fixed.

     

    Other = Announcement or other non-issue.