shpitz461 Posted February 5, 2023

Hi everyone, I can't find an answer anywhere, so I'll ask: is there a way to have the VM network bridge connect at 40GbE instead of the standard 10GbE? I have a Mellanox ConnectX-3 card that I'm using as eth0 and eth1 (eth1 is disconnected), but any Windows VM I create with a virtio bridge reports a 10GbE connection. Here's the NIC's info:

lspci -vvvnn -s c1:00.0
c1:00.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3] [15b3:1003]
    Subsystem: Hewlett-Packard Company InfiniBand FDR/EN 10/40Gb Dual Port 544QSFP Adapter [103c:18d6]
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 64 bytes
    Interrupt: pin A routed to IRQ 464
    IOMMU group: 11
    Region 0: Memory at f8700000 (64-bit, non-prefetchable)
    Region 2: Memory at 2befe000000 (64-bit, prefetchable)
    Expansion ROM at f8600000 [disabled]
    Capabilities: [40] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [48] Vital Product Data
        Product Name: HP ConnectX-3 QSFP
        Read-only fields:
            [PN] Part number: 649281-B21
            [EC] Engineering changes: A5
            [SN] Serial number: CN
            [V0] Vendor specific: HP 2P 4X FDR VPI/2P 40GbE CX-3 HCA
            [RV] Reserved: checksum good, 0 byte(s) reserved
        Read/write fields:
            [V1] Vendor specific: N/A
            [YA] Asset tag: N/A
            [RW] Read-write area: 107 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 253 byte(s) free
            [RW] Read-write area: 252 byte(s) free
        End
    Capabilities: [9c] MSI-X: Enable+ Count=128 Masked-
        Vector table: BAR=0 offset=0007c000
        PBA: BAR=0 offset=0007d000
    Capabilities: [60] Express (v2) Endpoint, MSI 00
        DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <64ns, L1 unlimited
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 116W
        DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
            RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
            MaxPayload 512 bytes, MaxReadReq 512 bytes
        DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
        LnkCap: Port #8, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta: Speed 8GT/s, Width x8
            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
            10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
            EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
            FRS- TPHComp- ExtTPHComp-
            AtomicOpsCap: 32bit- 64bit- 128bitCAS-
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
            AtomicOpsCtl: ReqEn-
        LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
        LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
            Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
            Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
            EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
            Retimer- 2Retimers- CrosslinkRes: unsupported
    Capabilities: [c0] Vendor Specific Information: Len=18 <?>
    Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
        ARICap: MFVC- ACS-, Next Function: 0
        ARICtl: MFVC- ACS-, Function Group: 0
    Capabilities: [148 v1] Device Serial Number ff-ff-ff-ff-ff-ff-ff
    Capabilities: [108 v1] Single Root I/O Virtualization (SR-IOV)
        IOVCap: Migration- 10BitTagReq- Interrupt Message Number: 000
        IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+ 10BitTagReq-
        IOVSta: Migration-
        Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
        VF offset: 1, stride: 1, Device ID: 1004
        Supported Page Size: 000007ff, System Page Size: 00000001
        Region 2: Memory at 000002bede000000 (64-bit, prefetchable)
        VF Migration: offset: 00000000, BIR: 0
    Capabilities: [154 v2] Advanced Error Reporting
        UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
        UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
        CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
        CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
        AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn+ ECRCChkCap+ ECRCChkEn+
            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
        HeaderLog: 00000000 00000000 00000000 00000000
    Capabilities: [18c v1] Secondary PCI Express
        LnkCtl3: LnkEquIntrruptEn- PerformEqu-
        LaneErrStat: 0
    Kernel driver in use: mlx4_core
    Kernel modules: mlx4_core

Thanks!
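For reference, the negotiated speed of the physical port on the Unraid host side can be checked with ethtool. The output below is a sketch, not taken from this system, and assumes eth0 is the ConnectX-3 port running in Ethernet mode; only a few of the fields ethtool prints are shown:

ethtool eth0
Settings for eth0:
    Supported ports: [ FIBRE ]
    Speed: 40000Mb/s
    Duplex: Full
    Auto-negotiation: on
    Link detected: yes

If the host already reports 40000Mb/s here, the question is only about what the VM's virtual NIC reports, not about the physical link.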
Vr2Io Posted February 6, 2023 (Solution)

With a virtio NIC, the reported link speed is just a number — it can perform well above it. (The same is true of some physical switches and NICs, e.g. an RJ45-to-SFP+ media converter, where actual throughput can be higher or lower than the negotiated link speed.) Some people bridge the NIC in the Unraid network settings instead; you could test whether that makes any performance difference from a Windows VM. I haven't bridged my NIC.
shpitz461 Posted February 6, 2023

Interesting info, thanks @Vr2Io! The reason I was asking is that iperf3 shows only ~9-10 Gbit/s in both directions:

>iperf3 -c 192.168.1.3
Connecting to host 192.168.1.3, port 5201
[  5] local 192.168.1.188 port 1626 connected to 192.168.1.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.06 GBytes  9.13 Gbits/sec
[  5]   1.00-2.00   sec  1.08 GBytes  9.30 Gbits/sec
[  5]   2.00-3.00   sec  1.04 GBytes  8.95 Gbits/sec
[  5]   3.00-4.00   sec  1.04 GBytes  8.90 Gbits/sec
[  5]   4.00-5.00   sec  1.02 GBytes  8.72 Gbits/sec
[  5]   5.00-6.00   sec  1.02 GBytes  8.75 Gbits/sec
[  5]   6.00-7.00   sec  1.03 GBytes  8.84 Gbits/sec
[  5]   7.00-8.00   sec  1.01 GBytes  8.67 Gbits/sec
[  5]   8.00-9.00   sec  1.04 GBytes  8.94 Gbits/sec
[  5]   9.00-10.00  sec  1.02 GBytes  8.73 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  10.4 GBytes  8.89 Gbits/sec    sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.86 Gbits/sec    receiver

iperf Done.

C:\Users\Admin>iperf3 -c 192.168.1.3 -R
Connecting to host 192.168.1.3, port 5201
Reverse mode, remote host 192.168.1.3 is sending
[  5] local 192.168.1.188 port 1634 connected to 192.168.1.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   965 MBytes  8.09 Gbits/sec
[  5]   1.00-2.00   sec   901 MBytes  7.56 Gbits/sec
[  5]   2.00-3.00   sec   949 MBytes  7.96 Gbits/sec
[  5]   3.00-4.00   sec   959 MBytes  8.05 Gbits/sec
[  5]   4.00-5.00   sec   943 MBytes  7.91 Gbits/sec
[  5]   5.00-6.00   sec   984 MBytes  8.26 Gbits/sec
[  5]   6.00-7.00   sec  1023 MBytes  8.58 Gbits/sec
[  5]   7.00-8.00   sec  1011 MBytes  8.49 Gbits/sec
[  5]   8.00-9.00   sec   984 MBytes  8.26 Gbits/sec
[  5]   9.00-10.00  sec   966 MBytes  8.11 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.05  sec  9.47 GBytes  8.09 Gbits/sec    0    sender
[  5]   0.00-10.00  sec  9.46 GBytes  8.13 Gbits/sec         receiver

iperf Done.

Any idea how to improve that? I get the same speed with multiple streams.
ghost82 Posted February 6, 2023

Are you using virtio or virtio-net for the emulated controller? I don't remember which is faster — I think it's virtio — so try switching to it and retest. If I remember correctly, Unraid defaults to virtio-net because of some issues between VMs and Docker containers, if you use them.
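In case it helps, that setting lives in the VM's XML (the "Edit XML" view of the Unraid VM manager). A minimal sketch of the change being suggested — the bridge name and MAC below are placeholders, not taken from this system:

<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>   <!-- placeholder MAC -->
  <source bridge='br0'/>
  <model type='virtio'/>               <!-- was: <model type='virtio-net'/> -->
</interface>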
shpitz461 Posted February 6, 2023

virtio. As far as I know, virtio-net is limited to 1 Gbps. I'm running the iperf3 server in a Docker container to do the testing. When connected to another PC with the same NIC via a 40GbE switch I get 20-35 Gbps, so I know the physical network chain is good.
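For context, that host-to-host test over the 40GbE switch would look something like the following — a sketch only, with a placeholder IP; the -P flag runs parallel streams, since a single TCP stream often can't saturate 40GbE:

# on the remote machine with the same ConnectX-3
iperf3 -s

# on this machine
iperf3 -c 192.168.1.50 -P 4 -t 30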
Vr2Io Posted February 7, 2023

9 hours ago, shpitz461 said:
"Reason I was asking is iperf3 shows ~10gbit in both directions:"

If Windows is running as a VM, then you are only testing virtual-to-virtual traffic, which doesn't involve the 40G physical NIC at all. BTW, testing from a Windows VM (virtio driver) I also get about 10G — there is probably a driver limitation — whereas testing docker to docker I get 55G.

Connecting to host 192.168.9.6, port 5201
Reverse mode, remote host 192.168.9.6 is sending
[  4] local 192.168.12.19 port 49736 connected to 192.168.9.6 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.22 GBytes  10.5 Gbits/sec
[  4]   1.00-2.00   sec  1.21 GBytes  10.4 Gbits/sec
[  4]   2.00-3.00   sec  1.22 GBytes  10.5 Gbits/sec
[  4]   3.00-4.00   sec  1.23 GBytes  10.6 Gbits/sec
[  4]   4.00-5.00   sec  1.27 GBytes  10.9 Gbits/sec
[  4]   5.00-6.00   sec  1.27 GBytes  10.9 Gbits/sec
[  4]   6.00-7.00   sec  1.26 GBytes  10.8 Gbits/sec
[  4]   7.00-8.00   sec  1.15 GBytes  9.85 Gbits/sec
[  4]   8.00-9.00   sec  1.26 GBytes  10.8 Gbits/sec
[  4]   9.00-10.00  sec  1.27 GBytes  10.9 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  12.4 GBytes  10.6 Gbits/sec    0    sender
[  4]   0.00-10.00  sec  12.4 GBytes  10.6 Gbits/sec         receiver

http://networkstatic.net/measuring-network-bandwidth-using-iperf-and-docker/

docker run -it --rm --name=iperf3-server --net='host' -p 5201:5201 networkstatic/iperf3 -s
docker run -it --rm --net='host' networkstatic/iperf3 -c 192.168.9.6

[  5] local 192.168.9.6 port 58752 connected to 192.168.9.6 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  6.50 GBytes  55.8 Gbits/sec    0    895 KBytes
[  5]   1.00-2.00   sec  6.44 GBytes  55.3 Gbits/sec    0   1023 KBytes
[  5]   2.00-3.00   sec  6.47 GBytes  55.6 Gbits/sec    0    895 KBytes
[  5]   3.00-4.00   sec  6.43 GBytes  55.2 Gbits/sec    0    895 KBytes
[  5]   4.00-5.00   sec  6.42 GBytes  55.2 Gbits/sec    0    895 KBytes
[  5]   5.00-6.00   sec  6.35 GBytes  54.5 Gbits/sec    0    895 KBytes
[  5]   6.00-7.00   sec  6.49 GBytes  55.8 Gbits/sec    0    895 KBytes
[  5]   7.00-8.00   sec  6.52 GBytes  56.0 Gbits/sec    0   1023 KBytes
[  5]   8.00-9.00   sec  6.58 GBytes  56.5 Gbits/sec    0    895 KBytes
[  5]   9.00-10.00  sec  6.58 GBytes  56.5 Gbits/sec    0    895 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  64.8 GBytes  55.6 Gbits/sec    0    sender
[  5]   0.00-10.04  sec  64.8 GBytes  55.4 Gbits/sec         receiver
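One tweak that is often suggested when the virtio path tops out around 10G is enabling multiqueue on the VM's interface. This is a sketch only, not something tested in this thread — the bridge name is a placeholder and the queue count is just an example (it is commonly set to roughly the number of vCPUs assigned to the VM):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>   <!-- example value, typically matched to vCPU count -->
</interface>

Whether the Windows virtio driver actually uses the extra queues depends on the guest driver version, so treat it as something to experiment with rather than a guaranteed fix.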
ghost82 Posted February 7, 2023

I'm using virtio on a Mac VM with an emulated controller bridged to an old 82574L Gigabit Network Connection (1 GbE), and I get 14+ Gbit/s, approaching 20 Gbit/s from host to guest:

iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.2.2, port 49359
[  5] local 192.168.2.1 port 5201 connected to 192.168.2.2 port 49360
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.66 GBytes  14.3 Gbits/sec
[  5]   1.00-2.00   sec  1.61 GBytes  13.8 Gbits/sec
[  5]   2.00-3.00   sec  1.68 GBytes  14.4 Gbits/sec
[  5]   3.00-4.00   sec  1.70 GBytes  14.6 Gbits/sec
[  5]   4.00-5.00   sec  1.71 GBytes  14.7 Gbits/sec
[  5]   5.00-6.00   sec  1.72 GBytes  14.8 Gbits/sec
[  5]   6.00-7.00   sec  1.71 GBytes  14.7 Gbits/sec
[  5]   7.00-8.00   sec  1.72 GBytes  14.8 Gbits/sec
[  5]   8.00-9.00   sec  1.72 GBytes  14.8 Gbits/sec
[  5]   9.00-10.00  sec  1.67 GBytes  14.4 Gbits/sec
[  5]  10.00-10.00  sec  2.35 MBytes  14.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  16.9 GBytes  14.5 Gbits/sec    receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------

iperf3 -c 192.168.2.2
Connecting to host 192.168.2.2, port 5201
[  5] local 192.168.2.1 port 38408 connected to 192.168.2.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.31 GBytes  19.8 Gbits/sec    0   3.93 MBytes
[  5]   1.00-2.00   sec  2.08 GBytes  17.8 Gbits/sec    0   3.93 MBytes
[  5]   2.00-3.00   sec  2.07 GBytes  17.8 Gbits/sec    0   3.93 MBytes
[  5]   3.00-4.00   sec  2.04 GBytes  17.5 Gbits/sec    0   3.93 MBytes
[  5]   4.00-5.00   sec  2.20 GBytes  18.9 Gbits/sec    0   3.93 MBytes
[  5]   5.00-6.00   sec  2.28 GBytes  19.6 Gbits/sec    0   3.93 MBytes
[  5]   6.00-7.00   sec  2.28 GBytes  19.6 Gbits/sec    0   3.93 MBytes
[  5]   7.00-8.00   sec  2.29 GBytes  19.6 Gbits/sec    0   3.93 MBytes
[  5]   8.00-9.00   sec  2.31 GBytes  19.8 Gbits/sec    0   3.93 MBytes
[  5]   9.00-10.00  sec  2.28 GBytes  19.6 Gbits/sec    0   3.93 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  22.1 GBytes  19.0 Gbits/sec    0    sender
[  5]   0.00-10.00  sec  22.1 GBytes  19.0 Gbits/sec         receiver

iperf Done.
shpitz461 Posted February 7, 2023

Ha, you da man! I already have an iperf3 server running in a Docker container, so I just ran your Docker client command, and this is what I'm getting:

docker run -it --rm --net='host' networkstatic/iperf3 -c 192.168.1.3

Connecting to host 192.168.1.3, port 5201
[  5] local 192.168.1.3 port 33174 connected to 192.168.1.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.29 GBytes  36.8 Gbits/sec    0    895 KBytes
[  5]   1.00-2.00   sec  4.08 GBytes  35.0 Gbits/sec    0    895 KBytes
[  5]   2.00-3.00   sec  4.03 GBytes  34.6 Gbits/sec    0    895 KBytes
[  5]   3.00-4.00   sec  3.96 GBytes  34.1 Gbits/sec    0    767 KBytes
[  5]   4.00-5.00   sec  3.87 GBytes  33.2 Gbits/sec    0    895 KBytes
[  5]   5.00-6.00   sec  4.03 GBytes  34.6 Gbits/sec    0    895 KBytes
[  5]   6.00-7.00   sec  4.12 GBytes  35.4 Gbits/sec    0    895 KBytes
[  5]   7.00-8.00   sec  4.05 GBytes  34.7 Gbits/sec    0   1023 KBytes
[  5]   8.00-9.00   sec  3.94 GBytes  33.8 Gbits/sec    0    767 KBytes
[  5]   9.00-10.00  sec  4.13 GBytes  35.5 Gbits/sec    0    895 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  40.5 GBytes  34.8 Gbits/sec    0    sender
[  5]   0.00-10.04  sec  40.5 GBytes  34.6 Gbits/sec         receiver

The initial test in the OP was from a VM to the Unraid iperf3 server Docker instance; I actually never tested VM -> VM. Now you've got me curious.
shpitz461 Posted February 7, 2023

Have you done any tweaking? I followed this guide: https://fasterdata.es.net/host-tuning/linux/
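For anyone following along, that guide is mostly about enlarging the kernel's TCP buffers via sysctl. The values below are illustrative, in the spirit of the guide rather than copied from it verbatim — check the page itself for current recommendations:

# append to /etc/sysctl.conf, then apply with: sysctl -p
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.ipv4.tcp_mtu_probing = 1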
Vr2Io Posted February 7, 2023

1 hour ago, shpitz461 said:
"Have you done any tweaking? I followed this guide: https://fasterdata.es.net/host-tuning/linux/"

No — I don't need high performance for virtual-to-virtual traffic; no application of mine needs that. I only care about real media transfers, where around 1 GB/s would be fine, and my actual media traffic is just around that level.