benwwchen

Posts posted by benwwchen

  1. 17 hours ago, dailou said:

    I've run into this problem and spent a whole day on it without finding a solution. Hoping someone can point me in the right direction.

     

    The error appears as soon as the VM is started:

    error creating macvtap interface macvtap14@bond0 (00:11:32:12:34:56): Device or resource busy

     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>DS3622xs</name>
      <uuid>ba5f662f-70f9-a96b-c383-1d550b7d319f</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='11'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-5.2'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/ba5f662f-70f9-a96b-c383-1d550b7d319f_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='2' threads='2'/>
        <cache mode='passthrough'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/disk1/isos/DS3622xs_7.0.1-42218.img'/>
          <target dev='hdc' bus='usb'/>
          <boot order='1'/>
          <address type='usb' bus='0' port='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/disk1/domains/DS3622xs/vdisk2.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='pci' index='1' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </controller>
        <controller type='pci' index='2' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </controller>
        <controller type='pci' index='3' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
        </controller>
        <controller type='pci' index='4' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
        </controller>
        <controller type='pci' index='5' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
        </controller>
        <controller type='pci' index='6' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
        </controller>
        <controller type='pci' index='7' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='7'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
        </controller>
        <controller type='pci' index='8' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
        </controller>
        <controller type='pci' index='9' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='88:c9:b3:b0:cc:b4'/>
          <source bridge='virbr0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </interface>
        <interface type='direct'>
          <mac address='00:11:32:12:34:56'/>
          <source dev='bond0' mode='vepa'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='2'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

    First confirm that in Settings -> Network Settings, Enable bonding is on and Enable bridging is off. If not, the bond0 in your VM config may need to be changed to the eth0 (or possibly eth1) that corresponds to your physical NIC. I once had bonding enabled but filled in eth0 instead of bond0 in the VM's interface definition, and got an error much like yours.
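
    If you prefer the shell, here's a quick sketch for checking both settings, assuming the standard Linux bonding and iproute2 tools that Unraid ships:

        # If bonding is enabled, bond0 exists and this lists its slave NICs:
        cat /proc/net/bonding/bond0

        # If bridging is enabled, br0 shows up here as a bridge device:
        ip link show type bridge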

     

    The Routing Table at the bottom of Network Settings shows the interface currently in use, on the default GATEWAY row. With bonding on and bridging off it should read "10.x.x.x (gateway IP) via bond0". If it doesn't, try changing bond0 in the VM config to whatever interface is listed there.
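
    The same check from a shell, as a sketch with the standard iproute2 tools (macvtap14 is just the interface name from your error message):

        # Show the default route; with bonding on and bridging off it should
        # end in "dev bond0" (the GUI shows it as "<gateway IP> via bond0").
        ip route show default

        # List any macvtap devices left over from a failed VM start; a stale
        # macvtap on the same parent NIC can cause "Device or resource busy".
        ip -d link show type macvtap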

  2. On 6/27/2022 at 9:01 PM, Each said:

    Thanks for the patch. With iperf3 I could always get 280 MB/s from Win10 to Unraid, but the reverse direction was stuck around 150 MB/s. Testing both machines against another soft router with the same 2.5G NIC confirmed the problem was on the Unraid side, in the r8125's transmit path. After applying your patch and blacklisting the r8169 driver the speed was unchanged, so today I went digging through the settings again and found the option responsible:

    [screenshot: the Enable bridging option in Network Settings]

    In Settings - Network Settings there is an "Enable bridging" option, on by default, which is what provides the custom:br0 network option for Docker and VMs. After setting it to No, iperf3 runs at a stable 280 MB/s in both directions, and copying a file from a hard disk in Unraid to an SSD on Win10 now reaches 236 MB/s.

    [screenshot: file transfer speed after disabling bridging]

    PS: To change this option you first have to stop the Docker and VM services entirely in their respective settings pages.

    PS2: After the change, any Docker containers and VMs configured with custom:br0 will lose their LAN addresses and have to be reconfigured manually with custom:bond0. If you have a lot of addresses, note them down before switching.

    PS3: Whether disabling bridging has other side effects I don't know yet; I'll move all my VM addresses over and keep an eye on it for a while.

     

    Update: At first I only switched one Docker container's address as a test. I've now reconfigured and started all the Docker containers and VMs; Docker access is fine, but the Win10 VM is unreachable. I'll keep looking for a fix.

    Further update: It turns out that with bridging off, a VM loses its NIC as soon as its settings are updated, making it unreachable. The problem is clearly related to this, but it looks like I'll have to put up with the slow speed.

     

    On 6/27/2022 at 11:04 PM, Each said:

    The question is whether your VMs still work normally after this change? For me Docker was fine, but as soon as I updated a VM's settings it lost its NIC and became unreachable. The NAS speed came up, but without bridging the VMs were unusable, so I'm about to revert and live with the slow speed.

     

    On 6/27/2022 at 9:08 PM, xenoblade said:

    Confirmed, this works. Thanks!

    First, thanks to the OP for the driver, and to the poster above for discovering that bridging affects speed. I looked into it and the root cause is still unclear, but disabling bridging really does fix it.

     

    One more thing for everyone: even with bridging disabled, there is still a way to attach VMs directly to the physical network, using macvtap. Here's how:

    1. In SETTINGS - VM Manager, change "Default network source:" to virbr0. This network carries traffic between the VMs and the host (Unraid), because under macvtap bridging a VM cannot talk to its host directly.

     

    2. In the VM edit page, switch to XML VIEW (toggle in the top-right corner), find the network interface section, and change it to use two NICs:

        <interface type='direct'>
          <mac address='xx:xx:xx:xx:xx:x1'/>
          <source dev='bond0' mode='vepa'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:x2'/>
          <source bridge='virbr0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>

    Here,

    the first interface uses macvtap through the bond0 NIC to connect to the physical network (if bonding is not enabled, bond0 can probably be changed to the matching physical NIC, e.g. eth0);

    the second interface uses the default NAT network virbr0 to connect to the host (Unraid) on its own separate subnet. If all you need is VM-to-host connectivity, the default virbr0 should be enough, but you can also customize it following the official docs[1] and an example walkthrough[3]. The network definitions live under /etc/libvirt/qemu/networks, but they must not be edited directly; use the virsh command to edit them.
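
    For reference, a minimal sketch of inspecting and editing the virbr0 definition with virsh (these are standard libvirt subcommands; adjust the network name if yours differs from "default"):

        # List libvirt networks; virbr0 belongs to the "default" network:
        virsh net-list --all

        # Dump the current XML definition of that network:
        virsh net-dumpxml default

        # Edit it in $EDITOR; virsh validates and redefines it on save:
        virsh net-edit default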

     

    One caveat: Unraid's form editor doesn't support the "direct" interface type, so if you touch the config in FORM VIEW the NIC gets forced back to bridge. Every time you change other settings in form view, go back into XML VIEW and redo the interface section by hand.

     

    With this setup, the physical network, the VMs, and Unraid can all talk to each other at full speed, and I haven't hit any problems so far.
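
    To verify the speed in each direction, a quick iperf3 sketch (10.0.0.2 below is a placeholder for whichever machine runs the server):

        # On one end (e.g. the Unraid host), start a server:
        iperf3 -s

        # From the other end, test both directions:
        iperf3 -c 10.0.0.2        # client sends to server
        iperf3 -c 10.0.0.2 -R     # reverse mode: server sends to client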

     


    References:

    [1] https://libvirt.org/formatnetwork.html#using-a-macvtap-direct-connection

    [2] https://notes.wadeism.net/post/kvm-network-setup/

    [3] https://fabianlee.org/2019/06/05/kvm-creating-a-guest-vm-on-a-network-in-routed-mode/
