pfSense without NIC passthrough - Issues



Unraid: 6.7.2

R7 1700 - 64 GB @ 2666 MHz.

Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)

pfSense is currently running on ESXi (E3-1225 v3), and I took the config from that instance.
I installed a new pfSense VM on Unraid, imported the config, assigned the interfaces, and shut down the ESXi-based pfSense instance.

 

I had internet access and I could ping.

For example, from 192.168.51.111 -> 192.168.52.10 (Unraid - br0), 192.168.59.11, 192.168.53.50, 10.10.5.10 and so on, so all my networks and VPNs worked.

Speedtest showed 80 Mbit down, then dropped to 5-8 Mbit; the first test had 60 Mbit upload, the second basically nothing - 11 Mbit down and 0 upload.

I have a 150 Mbit connection.

I could ping everything, Speedtest did not cause drops or bufferbloat, and downloads ran without drops - just slow.

No errors on the interfaces.

Accessing anything on at least the .52 network did not work: the browser showed the site icon for .52.11 (the ESXi host) but the page never loaded, and the Unraid web UI showed nothing.
Samba shares showed nothing either, but I could still ping.

The firewall showed nothing being blocked between source and destination, and the state table showed active connections.

 

Is it because br0 is a routed network in pfSense and is also the Unraid interface?

 

And yes, I would use NIC passthrough if I could, but I can't because I don't have any more ports to use on my motherboard.

Is it impossible to get good performance with pfSense in KVM without NIC passthrough? My VM XML is below:

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Ole1ro01</name>
  <uuid>8fc52a0d-d610-2b46-6fb2-1d06d064debf</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='10'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
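    <!-- raw vdisk image attached to the emulated SATA controller, writeback caching -->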
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Ole1ro01/vdisk1.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
    </controller>
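    <!-- one virtio NIC per network: br0 plus the VLAN sub-bridges br0.50 through br0.59 -->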
    <interface type='bridge'>
      <mac address='52:54:00:7b:e7:b1'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:5a:dc:17'/>
      <source bridge='br0.50'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:9a:f5:90'/>
      <source bridge='br0.51'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:85:a5:e4'/>
      <source bridge='br0.53'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:d1:13:45'/>
      <source bridge='br0.54'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:3a:35:a2'/>
      <source bridge='br0.56'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:36:33:30'/>
      <source bridge='br0.57'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:fa:91:49'/>
      <source bridge='br0.58'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:37:ce:41'/>
      <source bridge='br0.59'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='no'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
 

  • 5 months later...

Hi

 

I have been working on this for a while as well: a totally virtualised pfSense VM running on Unraid. In the end I could tweak the performance enough to get approx. 150 Mbit/s through the NICs, but the CPU would max out on an Intel i5. I worked long and hard, but eventually found that the big issue was pfSense's poor compatibility with the br0 bridge NICs under virtio.

 

I finally decided to change pfSense to OPNsense and retest, as that project fork seems to have better compatibility with the NIC drivers (and seems pretty much identical to pfSense). On that VM I set up two br0 interfaces using virtio (you basically don't need to edit the XML, just use the GUI) and set the machine type to i440fx-4.2 (the latest at the time) with SeaBIOS. I'm on Unraid 6.8.1 and also set the primary disk to virtio before installation rather than leaving it on IDE. During the OPNsense install I chose legacy boot (MBR) rather than secure boot. Without any further changes I now get full speed on the NICs, and the CPU stays low during periods of high activity. I believe pfSense is simply struggling with incompatible drivers for virtualisation on QEMU, while OPNsense has them out of the box. It might be possible to fix pfSense if you wanted to keep using it (by adding better drivers somehow?), but in my case I was happy to just change over to this fork of the project and use it instead.
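For anyone who wants to replicate it, here is a minimal sketch of the relevant parts of that VM definition as libvirt XML, assuming the defaults the Unraid GUI generates for an i440fx-4.2/SeaBIOS machine with a virtio disk and two virtio NICs on br0 (the disk path and MAC addresses are placeholders, not my actual values):

  <os>
    <!-- i440fx-4.2 machine type; no <loader> element means the default SeaBIOS firmware -->
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
  </os>
  <!-- ...rest of the domain definition as generated by the GUI... -->
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- placeholder path -->
      <source file='/mnt/user/domains/OPNsense/vdisk1.img'/>
      <!-- primary disk on the virtio bus rather than IDE/SATA -->
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
    </disk>
    <!-- two virtio NICs bridged to br0; MACs are placeholders -->
    <interface type='bridge'>
      <mac address='52:54:00:00:00:01'/>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:00:00:02'/>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>

On the OPNsense side the two virtio NICs show up as vtnet0/vtnet1.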

 

Hopefully this information helps you. Sorry it's a few months late, but I only started setting mine up in December and it has taken me this long to find a workaround and then test it in production.

 

Pete

Edited by PeteUnraid
spelling and additional info
  • 2 months later...
On 1/16/2020 at 6:49 PM, PeteUnraid said:

I finally decided to change pfSense to OPNsense and retest, as that project fork seems to have better compatibility with the NIC drivers (and seems pretty much identical to pfSense). [...] Without any further changes I now get full speed on the NICs, and the CPU stays low during periods of high activity.

Thanks a lot for your research, this has resolved two days of trial and error. OPNsense works great!
When I was using virtual pfSense, I couldn't get past 100 Mbps. I almost started blaming my network card, until I found this thread.

On 4/9/2020 at 12:07 PM, marlon420bud said:

Thanks a lot for your research, this has resolved two days of trial and error. OPNsense works great!
When I was using virtual pfSense, I couldn't get past 100 Mbps. I almost started blaming my network card, until I found this thread.

Yeah, it's basically the same product anyway. Guides for pfSense or OPNsense work much the same when you are setting things up, and I had no problem changing to OPNsense at home. I still use pfSense at work since you can buy support, but the products are very close.

  • 9 months later...
On 4/13/2020 at 11:28 PM, PeteUnraid said:

Yeah, it's basically the same product anyway. Guides for pfSense or OPNsense work much the same when you are setting things up, and I had no problem changing to OPNsense at home. I still use pfSense at work since you can buy support, but the products are very close.

 

I'm attempting a similar thing.

 

 

 

So the host is Unraid, which is also a data server with SMB, NFS, Dockers, etc. - classic Unraid use.

 

pfSense VM (Q35-5.1/OVMF, primary disk on SATA) with 2 ports of a PCIe NIC passed through.

Port 1: WAN - 10G fibre (a GPON-type service).

Port 2: LAN - 10G to the switch (the typical setup, with VLANs for the various networks, etc.).

 

So at this point no problems, works as it should. 

 

 

For the second part, I wanted to use a virtual NIC (virtio-net) as the interface from pfSense to the other VMs and to Unraid itself. Or at least just for the VMs, as I can live with Unraid using another 10G port directly from the switch. But I'd like certain VMs to be on a separate network managed by pfSense without leaving KVM. All in all it might be more efficient to run a 10G physical NIC directly to Unraid to get the most out of the NAS part, but all those VMs and Dockers can't have dedicated cards, and if they don't need to go through the switch they shouldn't...

 

Now here is where I'm stuck: pfSense does not see br0 (virtio-net).
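For reference, the extra bridge NIC is defined in the VM XML roughly like this (a sketch assuming it was added through the Unraid GUI; the MAC is a placeholder):

    <interface type='bridge'>
      <mac address='52:54:00:00:00:10'/>  <!-- placeholder MAC -->
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>

On the pfSense side a working virtio NIC would normally show up as a vtnetX device.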

 

I saw that you ended up using i440fx and SeaBIOS? Is that my problem? Did you have to load drivers into pfSense to make the virtio NIC work?

 

I did not have any issues under ESXi, and with vSwitches (or the advanced ones spread across hosts) you could manage your network easily, like a real one.

Except that GPU passthrough was limited, using ZFS would require passthrough as well, etc. - and that is not something that works all that great on VMware.

 

 

 

Link to comment
