DuckBrained

Members • 34 posts


  1. One other thing: when you're on your VM settings page, under Network, do you have virbr0 available or just br0?
  2. Sure, let me write it up and I'll post it here; it might take me a day or two to complete as I'll need to retrace my steps. Also, can you confirm whether you need to access any of your VMs directly from the EXTERNAL network? This is something I needed, which gave me headaches (as I'm using audio over IP to specialist hardware), but if they just need to talk to each other, Unraid, and the internet, it's a lot simpler!
  3. I did all you said; I only changed the metric after I'd tried removing the gateway like you suggested, and the problem persisted in both scenarios. There's no logical reason they should "swap" after a period of time. Bug logged. In the meantime I found another solution.
  4. Thanks, I've done some testing, including real-world transfers, and I seem to be getting the speeds I expect. I also reset my network switch, which seemed to help.
  5. I have two NICs configured: eth0 is a 10Gbe AQC107 model, eth1 is an Intel 1Gbe model. When using iperf3 between my Mac and the server via a 10Gbe switch, I see full transfer rates. However, when I run the test 30 minutes later, the performance has dropped, as if the main adapter is now eth1 and the secondary adapter is eth0. Full details in this thread:
  6. My configuration: I have a Mac laptop with a 10Gbe hub, and an UnRAID server with a 10Gbe NIC, an AQC107 chipset model built into the motherboard. I have eth0 configured with br0 for VMs. When I run iperf3 between the Mac and the server, I get the full 10Gbps as expected. However, as I boot each virtual machine that's assigned to br0, the iperf3 performance drops by approximately 1Gbps per VM booted. I also see CPU usage spikes on cores that are isolated and shouldn't be usable by UnRAID itself. Full diagnostic process available in this thread.
  7. Well, a short while later, about thirty minutes, and the routes have swapped again. My brain hurts.
  8. I added a metric of 0 for eth1 and now it seems to be working - the real test is whether it's still working in a few hours, I guess. Thanks for the pointer!
  9. OK, I've done that and now I get 1Gb/s on both ports:
     Connecting to host 192.168.3.1, port 5201
     [ 4] local 192.168.0.27 port 55966 connected to 192.168.3.1 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [ 4]   0.00-1.00   sec   107 MBytes   899 Mbits/sec
     Connecting to host 192.168.0.4, port 5201
     [ 4] local 192.168.0.27 port 55968 connected to 192.168.0.4 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [ 4]   0.00-1.00   sec   108 MBytes   903 Mbits/sec
     [ 4]   1.00-2.00   sec   105 MBytes   883 Mbits/sec
  10. I'm on the cusp of having everything working as I need, but I'm struggling with a networking anomaly whereby UnRAID (or something network related) is causing the interfaces or routes to swap. I have two ethernet adapters: one is 10Gb, which I want to use for shares, and the other is 1Gb, which is used by the VMs to access the internet and some audio devices on the network. The reason for two interfaces is some weird behaviour detailed here. Anyway, here's my config: All works fine - I run iPerf from my Mac and I get 10Gbit speeds on the 192.168.0.4 address and 100Mbit speeds on the 3.1 address. Great. Except that a bit later, running the same tests, the speeds have swapped to the other interfaces: This is a real issue, as I have DNS mapped to the 0.4 address and I basically just lose the 10Gbit speeds for no apparent reason. Can anyone shed any light on this, or suggest changes I should make? The 1Gbit interface needs to be able to access devices on the network, as does the 10Gbit interface. This is basically a workaround to a bug, yet now I see another bug... (There's a routing sketch after this list showing how the active routes can be inspected.)
  11. So, I put in a workaround. I set up a virtual network, virbr1, with jumbo frames, and the Windows VMs access UnRAID via this network. I'm now getting 2GB/s (yes, GigaBytes) transfers. I'll write up how to do this, but in essence it's simply: set up the virtual network, bind it to Samba, then access the UnRAID shares via the IP address of the virtual network (there's a rough sketch after this list). Jumbo frames are important - throughput went from 300MB/s to 2GB/s with this setting - as is using virtio rather than virtio-net. I also have the other network adapter bound for the rest of the network. For most people you could set the Virtual Network as a bridge, but my use case requires all the VMs to be accessible from the LAN as I'm using audio over IP.
  12. A brief background: due to some kind of bug (detailed here) I now have my primary NIC for UnRAID set to a 10Gbe port (br0), and all my VMs mapped to a 1Gbe port (br1). This solves the problem in the link with network write speeds across the 10Gbe switch. However, when I access my SMB shares from the VMs, I only get 1Gbe transfer speeds - basically as if my VM is heading "out the door" onto the network and then "back in" to UnRAID to access the data. Surely there's some way the VM hypervisor can recognise that the shares are local and provide better speeds? The NIC in Windows shows 100Gbe as the connection speed. Any pointers? Thanks
  13. Another update. As I start each Windows VM that uses br0, the iperf speeds drop by approx 1Gbit/s; once all seven are launched I'm down to 3Gbit/s. As I shut down each VM, the speed increases again, and when they are all shut down I get full network speed. If I assign a Windows VM to virbr0 instead, it has no effect on the network speed, so it's something to do with the bridge (there's a sketch after this list for listing what's attached to br0). Problem is, virbr0 is no good for me, as the Windows machines need to be able to accept incoming traffic. So, is this a bug? My ethernet controller is: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02) Thanks
  14. Further testing. I shut down all of the VMs and Dockers on UnRAID and now I get full-speed writes: [ 4] 8.00-9.00 sec 1.09 GBytes 9.40 Gbits/sec So is this a resource issue?
  15. Update: I ran iPerf and I see only poor speeds. From client (Mac) to server (UnRAID): [ 5] 6.00-7.00 sec 320 MBytes 2.68 Gbits/sec I flipped the settings and from UnRAID to the client I see: [ 5] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 0 656 KBytes So it's a networking issue, but the why is a big puzzle right now. That CPU spike on transfer to the server is also puzzling me. (The iperf3 commands for both directions are sketched after this list.)
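
For reference, here is a minimal routing sketch relating to posts 8 and 10, showing how the active routes and their metrics can be inspected from the UnRAID console. The gateway address and metric value below are illustrative assumptions, not the exact settings from the posts:

    # Show every route with its device and metric; for a given destination the
    # kernel uses the matching route with the lowest metric.
    ip route show

    # Ask the kernel which interface it would actually use to reach the Mac
    # (192.168.0.27 is the client address that appears in the iperf3 output).
    ip route get 192.168.0.27

    # Illustration only: pin the default route to the 10Gbe bridge with an
    # explicit metric so it is preferred over the 1Gbe route.
    ip route replace default via 192.168.0.1 dev br0 metric 100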
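
The iperf3 runs quoted in posts 9, 14 and 15 would look roughly like this; the addresses match the ones in the posts (192.168.0.4 on the 10Gbe bridge, 192.168.3.1 on the 1Gbe bridge), while the duration flag is just an example:

    # On the UnRAID server: start the listener.
    iperf3 -s

    # From the Mac: test each interface in turn (client -> server direction).
    iperf3 -c 192.168.0.4 -t 10
    iperf3 -c 192.168.3.1 -t 10

    # Reverse mode sends data server -> client over the same connection, which
    # is how the two directions compared in post 15 can be measured.
    iperf3 -c 192.168.0.4 -t 10 -R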
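
A small sketch for the bridge observation in post 13: each running VM attached to br0 adds a tap interface to that bridge, so listing the bridge members confirms which VMs are on it. The interface names come from the posts; the commands themselves are generic Linux, nothing UnRAID-specific:

    # List every interface enslaved to br0; each running VM on br0 appears
    # as a vnetN tap device.
    ip link show master br0

    # Check the link speed the physical NIC behind the bridge has negotiated.
    ethtool eth0 | grep -i speed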
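
Finally, a rough sketch of the virbr1 workaround described in posts 11 and 12, assuming a libvirt-managed network (which is what UnRAID's VM manager uses). The network name, subnet, DHCP range and MTU below are assumptions used to illustrate the idea - a host-only bridge with jumbo frames, shares reached via the bridge's host IP, and the VM NICs set to the virtio model - and the promised write-up would be the authoritative version:

    # 1) Save a network definition like this as /tmp/virbr1.xml:
    #
    #    <network>
    #      <name>virbr1</name>
    #      <bridge name='virbr1' stp='on' delay='0'/>
    #      <mtu size='9000'/>
    #      <ip address='192.168.100.1' netmask='255.255.255.0'>
    #        <dhcp>
    #          <range start='192.168.100.10' end='192.168.100.99'/>
    #        </dhcp>
    #      </ip>
    #    </network>
    #
    # 2) Define, start and autostart it with libvirt:
    virsh net-define /tmp/virbr1.xml
    virsh net-start virbr1
    virsh net-autostart virbr1
    # 3) Give each Windows VM a NIC on virbr1 using the virtio model (not
    #    virtio-net), set the guest adapter's MTU to 9000 as well, and check
    #    that Samba is listening on the bridge's address. Then map the UnRAID
    #    shares from inside the VM via that IP, e.g. \\192.168.100.1\share,
    #    so SMB traffic never leaves the host.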