Report Comments posted by bonienl

  1. I did two tests to check performance. Both the server and the PC have a 10 Gbps connection.

     

    1. Copy a 14 GB file from the array to the PC (NVMe)

    (screenshot of the transfer from array to PC)

     

    Transfer speeds hover between 200 MB/s and 240 MB/s, which is near the maximum the HDD can do.

     

    2. Copy the same 14 GB file from the PC (NVMe) to the cache (SSD pool in RAID 10)

    (screenshot of the transfer from PC to cache)

     

    Transfer speeds hover between 760 MB/s and 840 MB/s, which is close to saturating the 10 Gbps link.
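
    For scale: 10 Gbps ÷ 8 = 1250 MB/s of raw line rate, so a sustained 760 to 840 MB/s single SMB stream already takes a large share of that, with the rest lost to Ethernet/TCP/SMB overhead and the storage at either end.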

  2. 8 hours ago, civic95man said:

    I noticed this same issue (custom:br1/custom:eth1) not showing up in the drop-down under network in my container.  I assumed I was making a stupid mistake but maybe it's something else.  I can post diagnostics later if it's relevant to this case (don't want to steal the thread); otherwise I was going to open a topic under docker.

    See if my answer above applies to your situation too.

  3. 8 hours ago, RifleJock said:

    Diagnostics Attached.

    Your problem happens because both eth0 (br0) and eth1 (br1) use the same IPv6 subnet and gateway.

     

    Docker does not allow two networks with the same subnet and gateway. Hence br1 is not created.

     

    The easiest way to solve your issue is to define eth1 (br1) as IPv4 only and avoid the double IPv6 declaration.
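
    As an illustration only (the network names and the documentation subnet below are made up, Unraid normally creates these networks for you when the Docker service starts, and the exact error text can differ per Docker version), Docker refuses to create a second network that overlaps an existing one:

    docker network create --ipv6 --subnet=2001:db8::/64 --gateway=2001:db8::1 testbr0
    docker network create --ipv6 --subnet=2001:db8::/64 --gateway=2001:db8::1 testbr1
    Error response from daemon: Pool overlaps with other one on this address space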

  4. Your switch may support larger jumbo frames, but almost all other devices, like your server, router, and Windows PC, can't handle an MTU size larger than 9000. This is the size agreed upon by the Joint Engineering Team and can be expected to work with most (modern) equipment.

     

    There isn't an official standard for jumbo frames, and you'll find many different implementations and sizes.

    Heck, there are even differences between hardware and software releases from the same vendor.
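
    If you want to check that 9000 works end to end, one way on Linux (the interface name and target address here are only examples) is to set the MTU and send a ping that is not allowed to be fragmented; a payload of 8972 bytes plus 20 bytes of IPv4 header and 8 bytes of ICMP header adds up to exactly 9000:

    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 192.168.1.1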

     

  5. You have configured br1 to use DHCP, which means it gets its IP address and gateway address from your DHCP server (router).

     

    br0 and br1 already have network settings of their own, which means these cannot be changed under the Docker settings (they are configured automatically).

     

    But you have configured br1 with two bridge members, eth1 and eth2. It seems eth1 is connected to the LAN side of your router, while eth2 is connected to the WAN side.

     

    This setup is wrong and connects LAN and WAN directly to each other. This likely explains why br1 receives a public gateway address.

     

    If you want to use eth2 as a separate interface, first you need to take it out of the br1 bridge group.

    Second, you should not configure eth2 with public addresses; it is a bad idea to expose your server directly to the Internet.

     

    Btw this is not a bug.

    Docker does not accept the settings of br1 because the gateway is in the wrong network.
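
    For illustration only (on Unraid the bridge members are changed on the Network Settings page, and the interface names here are just examples), this is roughly what removing a port from a bridge looks like at the command line:

    ip link show master br1          # list the current members of the br1 bridge
    ip link set dev eth2 nomaster    # take eth2 out of the bridge group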

  6. 2 minutes ago, bastl said:

    Not 100% sure, but I guess the docker image path has always included the *.img extension. I started with Unraid over 2 years ago and never changed anything in my docker config, and never had to for any Unraid update.

    If you accepted the defaults and never changed them, it all works correctly in any Unraid version.

  7. Version 6.8 is stricter on user input and marks anything invalid rather than starting the service.

     

    A vDisk location must point to an image file and not a folder. This means a file with the .img extension.

    A storage location must point to a folder. This means the path must end with a slash (see the illustration below).

     

    See the examples given by @bastl

     

    Not a bug.
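
    A made-up illustration of the two rules (these paths are examples, not taken from this report):

    vDisk location:    /mnt/user/domains/Windows10/vdisk1.img    <- a file ending in .img
    Storage location:  /mnt/user/isos/                           <- a folder, path ends with a slash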

  8. 6 minutes ago, ymilord said:

    I would not have opened this ticket otherwise.

    According to the SiS site, the latest Linux driver for this type of Ethernet controller is from 2005.

    What is provided with Unraid is surely the latest available driver version.

    Sorry, no other solution at hand. Doesn't your system have a free PCIe slot to add an additional card?

  9. 16 minutes ago, ymilord said:

    But under Windows it supports full gigabit? And it has for 4 years.

    Are you sure?

    I highly doubt that. Supported speeds are determined by the hardware, not by the driver.

     

    It was quite common for your generation of motherboard to support 10/100 only.

  10. The interface reports support for 10 Mbps and 100 Mbps only:

    Supported link modes:   10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 

    It is using driver version 1.4, which comes with the Linux kernel. It is unlikely that SiS provides a newer version:

    driver: sis190
    version: 1.4

    There is a mismatch between server and switch which causes your connection to be flaky.

     

    Try setting a fixed speed/duplex of 100/Full on the Cisco switch (see the example below).

     

    Side note: a lease time of 10 minutes is very short. It is recommended to make it longer, e.g. 1 day.

     

    Alternatively, you can add an Intel-based NIC and use it instead of the built-in NIC.
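
    A sketch of what forcing 100/Full could look like (the port and interface names are only examples). It is safest to pin both ends, because an end that still autonegotiates against a forced port typically falls back to half duplex:

    ! Cisco IOS, on the switch port connected to the server
    interface FastEthernet0/1
     speed 100
     duplex full

    # Linux, on the server side (not persistent across reboots)
    ethtool -s eth0 speed 100 duplex full autoneg off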

  11. 1 hour ago, limetech said:

    Please see if Unraid OS 6.8.2 solves this issue.

    Solved for me.

     

    I do get some questionable driver-related messages:

    Jan 26 20:13:30 vesta kernel: igb: loading out-of-tree module taints kernel.
    Jan 26 20:13:30 vesta kernel: igb 0000:06:00.0 eth1: mixed HW and IP checksum settings.
    Jan 26 20:13:30 vesta kernel: igb 0000:07:00.0 eth2: mixed HW and IP checksum settings.

     
