Unraid 6.12.9 — Latest r8125 and r8156 NIC Driver Patches



On 6/27/2022 at 9:01 PM, Each said:

Thanks for the patch! iperf3 from Win10 to Unraid had always measured about 280 MB/s, while the reverse direction never got past roughly 150 MB/s. Testing each machine against another soft router with the same 2.5G NIC confirmed the problem was on the Unraid side, in the r8125's transmit path. After applying your patch and blacklisting the r8169 driver the speed was unchanged, so today I went through the settings again and found the option responsible:


Under Settings - Network Settings there is an "Enable bridging" option, enabled by default, which is also what provides the custom:br0 network option for Docker and VMs. After setting it to No, iperf3 became a stable 280 MB/s in both directions, and copying a file from a spinning disk in Unraid to the Win10 machine's SSD reached 236 MB/s.
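(For reference, a minimal bidirectional iperf3 check looks like this, assuming iperf3 is installed on both machines:)

    # on the Unraid host: start an iperf3 server
    iperf3 -s

    # on the Windows client: client-to-server throughput
    iperf3 -c <unraid-ip>

    # reverse mode: measures server-to-client, the direction that was slow
    iperf3 -c <unraid-ip> -R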


PS: to change this option, you must first disable the Docker and VM services entirely in the Docker settings and VM settings pages.

PS2: after the change, Docker containers and VMs that were configured with custom:br0 lose their LAN-segment addresses and have to be reconfigured manually with custom:bond0. If you have many of them, write the addresses down before switching.

PS3: whether disabling bridging has other side effects is unknown; I'll migrate all the VM addresses over and observe for a while.

 

Update: at first I only changed one Docker container's address as a test. I've now reconfigured and restarted all containers and VMs: Docker access is fine, but the Win10 VM can no longer be reached. Looking for a fix.

Another update: it turns out that once bridging is disabled, a VM loses its network card as soon as its settings are updated and becomes unreachable. The problem is clearly related, but for now it seems I have to live with the slow speed.

 

On 6/27/2022 at 11:04 PM, Each said:

The question is whether your VMs still work after this change. For me Docker works, but a VM loses its NIC the moment its settings are updated and becomes unreachable. The NAS speed is up, but without bridging the VMs are unusable, so I'm about to revert and put up with the slow speed.

 

On 6/27/2022 at 9:08 PM, xenoblade said:

Confirmed, this works. Thanks!

First, thanks to the OP for the driver, and to the poster who discovered that bridging hurts throughput. I looked into it and the root cause is still unclear, but disabling bridging definitely fixed it.

 

To add something for everyone: even with bridging disabled, there is still a way to attach VMs directly to the physical network, via macvtap. The steps:

1. In SETTINGS - VM Manager, change "Default network source:" to virbr0. This network carries traffic between VMs and the host (Unraid), because in a macvtap setup guests cannot talk to the host directly.
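(If you want to verify the default NAT network is present before relying on it, a quick check from the Unraid console might look like this; "default" is the name of libvirt's stock network behind virbr0:)

    # list libvirt networks; "default" backs the virbr0 bridge
    virsh net-list --all

    # show virbr0 and its NAT subnet address on the host
    ip addr show virbr0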

 

2. In the VM edit page, switch to XML VIEW in the top-right corner, find the network interface section, and change it to use two cards:

    <interface type='direct'>
      <mac address='xx:xx:xx:xx:xx:x1'/>
      <source dev='bond0' mode='vepa'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='xx:xx:xx:xx:xx:x2'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>

Here,

the first interface uses macvtap over bond0 to reach the physical network (if bonding is not enabled, bond0 should be replaced with the corresponding physical NIC, e.g. eth0);

the second interface uses the default NAT network virbr0 to reach the host (Unraid) on a separate subnet. If all you need is VM-to-host connectivity, the stock virbr0 should be enough, but it can be customized following the official docs [1] and this example guide [3]. The network definitions live under /etc/libvirt/qemu/networks; they must not be edited directly, only through the virsh command.
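(A minimal sketch of inspecting and editing that definition with virsh, using libvirt's stock "default" network as the example:)

    # dump the current XML definition of the "default" network
    virsh net-dumpxml default

    # open the definition in an editor; libvirt validates it on save
    virsh net-edit default

    # restart the network so the edited definition takes effect
    virsh net-destroy default && virsh net-start default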

 

One caveat: Unraid's form editor does not support the direct interface type, so if you change anything in FORM VIEW the NIC is forced back to bridge. After every change made in form mode, go back into XML VIEW and fix the interface section by hand.
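(Alternatively, the fix can be reapplied from the command line instead of XML VIEW; a sketch, where <vm-name> stands for the VM's name on the VMS tab:)

    # open the VM's libvirt XML in an editor and restore the
    # <interface type='direct'> block after a FORM VIEW edit
    virsh edit <vm-name>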

 

With this configuration, the physical network, the VMs, and Unraid can all talk to each other at full speed, and I haven't noticed any problems so far.

 


 

References:

[1] https://libvirt.org/formatnetwork.html#using-a-macvtap-direct-connection

[2] https://notes.wadeism.net/post/kvm-network-setup/

[3] https://fabianlee.org/2019/06/05/kvm-creating-a-guest-vm-on-a-network-in-routed-mode/

12 hours ago, ianccc said:

Thanks OP, installed and recognized successfully. Can this driver enable RSS? If so, could you point me to the commands?

The driver build has RSS enabled by default, so just use it; there is nothing to configure. Looking at the driver with ethtool, you can see RSS mentioned.
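(A quick way to check, sketched with eth0 standing in for the r8125 interface:)

    # driver name and version bound to the interface
    ethtool -i eth0

    # RX channel (queue) counts; multiple RX channels indicate RSS is active
    ethtool -l eth0

    # RSS hash indirection table, if the driver exposes it
    ethtool -x eth0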

On 8/2/2022 at 5:26 PM, benwwchen said:

To add something for everyone: even with bridging disabled, there is still a way to attach VMs directly to the physical network, via macvtap. …

Thanks for the walkthrough, learned something new.

I couldn't resolve the conflict between speed and VM networking before, so I gave up on the onboard r8125 and bought a separate i225 NIC; with that card, bridging enabled, upload and download speeds are both normal.

  • 2 weeks later...

I've hit this problem and spent a whole day on it without finding a solution; hoping someone can point me in the right direction.

 

Starting the VM produces this error:

error creating macvtap interface macvtap14@bond0 (00:11:32:12:34:56): Device or resource busy

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>DS3622xs</name>
  <uuid>ba5f662f-70f9-a96b-c383-1d550b7d319f</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/ba5f662f-70f9-a96b-c383-1d550b7d319f_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk1/isos/DS3622xs_7.0.1-42218.img'/>
      <target dev='hdc' bus='usb'/>
      <boot order='1'/>
      <address type='usb' bus='0' port='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/disk1/domains/DS3622xs/vdisk2.img'/>
      <target dev='hdd' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='pci' index='1' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </controller>
    <controller type='pci' index='6' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </controller>
    <controller type='pci' index='7' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
    </controller>
    <controller type='pci' index='8' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
    </controller>
    <controller type='pci' index='9' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='88:c9:b3:b0:cc:b4'/>
      <source bridge='virbr0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='direct'>
      <mac address='00:11:32:12:34:56'/>
      <source dev='bond0' mode='vepa'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='2'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

17 hours ago, dailou said:

Starting the VM produces this error:

error creating macvtap interface macvtap14@bond0 (00:11:32:12:34:56): Device or resource busy

…

 

First, confirm that in Settings -> Network Settings, Enable bonding is on and Enable bridging is off. If not, the bond0 in the VM config may need to be changed to the eth0 (or possibly eth1) that corresponds to your physical NIC. I once saw a similar error after enabling bonding but leaving eth0 instead of bond0 in the VM config.

 

The Routing Table at the bottom of Network Settings shows the interface currently in use in the default GATEWAY entry; with bonding on and bridging off it should read "10.x.x.x (gateway IP) via bond0". If it doesn't, try changing bond0 in the VM config to whatever interface is listed there.
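(The same information is available from the console; a quick check, with the interface names as examples:)

    # show the default route; "dev bond0" names the interface in use
    ip route show default

    # brief list of interfaces and their states, to confirm bond0 is up
    ip -br link show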

On 6/27/2022 at 9:01 PM, Each said:

Thanks for the patch! iperf3 from Win10 to Unraid had always measured about 280 MB/s, while the reverse direction never got past roughly 150 MB/s. …

Thanks, it worked for me too. I have a Win10 VM; after manually updating the IP in its network settings, it boots and is reachable again.

 

  • 4 weeks later...
23 hours ago, zzf said:

OP, 6.11 has been released; please adapt the patch when you have time. Many thanks!

After upgrading to 6.11, the network page no longer shows the onboard single-port 8125B or the 4-port PCIe 8125B at all.

They still appear in the system devices list; they're just missing from the network page.

I've plugged in a USB NIC as a stopgap.

Bro, have you tried whether the 6.10 8125B driver above fails on 6.11? I was about to download it and try it there.

1 hour ago, 淡淡忧伤 said:

After upgrading to 6.11, the network page no longer shows the onboard single-port 8125B or the 4-port PCIe 8125B at all. …

Mine works natively, both the 2.5G NIC and the PCIe card.

5 hours ago, 淡淡忧伤 said:

After upgrading to 6.11, the network page no longer shows the onboard single-port 8125B or the 4-port PCIe 8125B at all. …

I haven't tested the 6.10 one on 6.11 for fear of causing problems. The two RTL8125s on my board still work; it's the USB RTL8156B that shows up in the system devices list but not on the network page. On 6.10, with the patch from this thread, the USB RTL8156B worked fine.

8 hours ago, xenoblade said:

Mine works natively, both the 2.5G NIC and the PCIe card.

I'm puzzled too. My board has an onboard 8125.

When I first installed 6.10 the NIC was recognized and usable; later, chasing perfection, I replaced it with the new driver above.

Today I saw the system could be upgraded. After upgrading I couldn't connect; on inspection there were no NICs. In the GUI, the devices list shows all five ports (onboard plus PCIe), but the network page shows none of them.

I had to plug in a USB NIC temporarily, then rolled back to the pre-upgrade 6.10 and everything was normal again.

If the 8125 works natively, why does none of it work after upgrading to 6.11? I don't get it.

Too lazy to do a fresh install either; reconfiguring is a pain. I'll wait for the 6.11 patch before upgrading.

 


Stock Unraid supports neither the R8125 nor the R8156 well; otherwise I wouldn't have needed to make this patch. The developers don't seem keen to improve it either; word is to wait for the next Unraid release with custom-driver support, whenever that arrives. Plenty of mainstream boards ship with R8125 ports nowadays and everyone is chasing 2.5G speeds, yet the driver is missing right when it matters.

 

Don't mix patch versions. If they were interchangeable, I wouldn't bother releasing separate patches for 6.10 and 6.9. These releases use different Linux kernel versions; a mismatched patch might happen to load, but it is very likely to have bugs. Use the patch matching your version for stability.
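(To see which kernel your Unraid release runs and which driver version is actually loaded, something like this works from the console; r8125 is the module name shipped by this patch:)

    # running kernel release; the patch must match this version
    uname -r

    # version and metadata of the installed Realtek driver module
    modinfo r8125 | head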

The 6.11.0 patch is in the works.

  • jinlife changed the title to Unraid 6.11.0 — Latest r8125 and r8152 NIC Driver Patches

OP, a question please.

I have an ASRock J3455-ITX (onboard RTL8111) plus an RTL8111 on an M.2 A+E NGFF adapter, so the two cards are the same model.

With your driver the IOMMU groups can now be separated, but the two cards can't be told apart; both simply show as RTL8111.

In this situation, how can I pass one of the NICs through to a VM?
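(Not a confirmed answer, just an illustration: identical-model cards can at least be told apart by PCI address, which is what a passthrough binding needs. A sketch, with 0000:02:00.0 as a placeholder address:)

    # list Realtek NICs with PCI addresses and vendor:device IDs;
    # identical cards share the ID but have distinct addresses
    lspci -nn | grep -i realtek

    # check which IOMMU group a specific card belongs to
    readlink /sys/bus/pci/devices/0000:02:00.0/iommu_group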

 

  • jinlife changed the title to Unraid 6.11.1 — Latest r8125 and r8152 NIC Driver Patches
  • jinlife changed the title to Unraid 6.12.9 — Latest r8125 and r8156 NIC Driver Patches
