user2352

Members
  • Posts: 14
  • Joined



  1. Yeah, I had the same issues and managed to pass through my i350 card to pfSense by adjusting the XML file as described here: However, looking at your XML it seems you already made the necessary adjustments, so there must be a different issue. Just poking around in the dark, you could try a couple of other things:
     - Use a different machine type. I am using `<type arch='x86_64' machine='pc-q35-4.1'>hvm</type>`.
     - Try to pass through only one entry first to see if that works.
     - Try adding an alias to your hostdev entry like below:
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     ```
     I am also using the entry below in my syslinux config (Unraid OS) for my network card (adjust the values to your needs):
     ```
     kernel /bzimage
     append xen-pciback.hide=(01:00.1)(01:00.2)(01:00.3) initrd=/bzroot
     ```
  2. That's correct. Your XML looks fine. What error message are you getting?
  3. Try this, using the same bus but a different function value for each entry:
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
     </hostdev>
     ```
  4. Have you tried the settings below? Also check that the guest address types aren't already taken by other devices in the VM. Each alias needs a unique name (hostdev0 through hostdev3 here):
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev2'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev3'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3' multifunction='on'/>
     </hostdev>
     ```
     If this doesn't work, have you tried splitting the IOMMU groups? Explanation in this video:
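     The advice above about guest addresses already being taken can be checked mechanically. Below is a minimal sketch using only Python's standard library; the sample XML is hypothetical, so point it at your real domain definition (e.g. the output of `virsh dumpxml <vm>`) instead:
     ```python
     # Sketch: detect duplicate guest-side PCI addresses in a libvirt domain XML.
     # Only direct <address> children of each device are collected, so the
     # host-side addresses nested inside <source> are deliberately ignored.
     import xml.etree.ElementTree as ET
     from collections import Counter

     # Hypothetical example with a deliberate conflict on bus 0x01 / function 0x0.
     DOMAIN_XML = """
     <domain>
       <devices>
         <interface type='bridge'>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </hostdev>
       </devices>
     </domain>
     """

     def guest_pci_addresses(xml_text):
         """Collect every guest-side <address type='pci'> in <devices>."""
         root = ET.fromstring(xml_text)
         addrs = []
         for dev in root.find('devices'):
             for addr in dev.findall('address'):  # direct children only
                 if addr.get('type') == 'pci':
                     addrs.append((addr.get('domain'), addr.get('bus'),
                                   addr.get('slot'), addr.get('function')))
         return addrs

     def conflicts(xml_text):
         """Return guest PCI addresses claimed by more than one device."""
         counts = Counter(guest_pci_addresses(xml_text))
         return [addr for addr, n in counts.items() if n > 1]

     print(conflicts(DOMAIN_XML))
     ```
     Any tuple this prints is an address you would need to move before the VM will start cleanly.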
  5. @thierrybla Have you tried giving all cards the same bus with a different function value, like this?
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
     </hostdev>
     ```
  6. Glad it worked. I can't help much with the no-link-up issue, though. You can try manually entering the WAN interface (four tries at worst ;)) and see if that works.
  7. I managed to pass through my NICs by editing the hostdev address type as described here: Maybe you could try this.
  8. OK, below is the XML of one of your passed-through devices:
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
     </hostdev>
     ```
     The address type is inserted automatically by Unraid, and it's generating a separate bus for every hostdev. Looking at the error log, I assume pfSense does not allocate those correctly. You can try the following, which worked for me:
     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
     </hostdev>
     ```
     I just changed the bus of the following entries to "0x01" (the value of the first entry, which I assume is allocated correctly by pfSense) and additionally increased the function value for every entry. Afterwards I reset pfSense from the command line, and after rebooting all passed-through network cards were available. I hope this helps.
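     The renumbering described above (shared guest bus, incrementing function values) is mechanical enough to script. A minimal sketch; the source addresses are examples, so substitute your own:
     ```python
     # Sketch: generate <hostdev> entries that place all passed-through
     # functions on one guest bus (0x01) with incrementing function numbers.
     # Example source addresses as (bus, slot, function); replace with yours.
     SOURCES = [('0x07', '0x00', '0x0'),
                ('0x07', '0x00', '0x1'),
                ('0x07', '0x00', '0x2'),
                ('0x07', '0x00', '0x3')]

     TEMPLATE = """<hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='{sbus}' slot='{sslot}' function='{sfn}'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x{fn:x}'/>
     </hostdev>"""

     def render(sources):
         """Emit one hostdev block per source, guest function = list index."""
         return "\n".join(
             TEMPLATE.format(sbus=b, sslot=s, sfn=f, fn=i)
             for i, (b, s, f) in enumerate(sources))

     print(render(SOURCES))
     ```
     Paste the output over the auto-generated hostdev blocks in the VM's XML.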
  9. I had this exact same problem and got it to work. Can you post your VM xml? Maybe I can help.
  10. OK ... a fresh install on the stick solved the problem. I was a little quick deleting the plugins folder, though. Didn't consider the Docker configs 😅
  11. Hi, I needed to reboot because Unraid somehow crashed, and now I constantly receive a kernel panic: "too many boot vars". I can boot in safe mode without plugins, though. I therefore deleted everything in the config/plugins folder, but booting normally still results in that error. Any suggestions? (The diagnostics file is from a safe-mode boot.) server-diagnostics-20190603-0943.zip
  12. That's indeed weird. I had the same problem, but renaming the actual VM folder fixed it for me. I am on version 6.6.6 😈
  13. Yeah, that seems like a bug. A temporary fix is to manually rename the VM folder in the domains directory to the new name.
  14. This happens when you change the name of the VM. The primary vdisk location is set to auto, so it is still looking for "oldname.img". You will need to manually select the correct img file.
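      As an alternative to reselecting the image in the GUI, the rename fix above can also be done by rewriting the disk path in the domain XML directly. A sketch under stated assumptions: all names and paths here are hypothetical, and on Unraid vdisks typically live under /mnt/user/domains/<vmname>/:
      ```python
      # Sketch: after renaming a VM, point its primary vdisk back at the
      # right image by rewriting the <source file=.../> path in the XML.
      # Paths and VM names below are hypothetical examples.
      import xml.etree.ElementTree as ET

      DOMAIN_XML = """
      <domain>
        <devices>
          <disk type='file' device='disk'>
            <source file='/mnt/user/domains/oldname/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
          </disk>
        </devices>
      </domain>
      """

      def retarget_vdisk(xml_text, old, new):
          """Replace the old VM folder name in every disk <source> path."""
          root = ET.fromstring(xml_text)
          for src in root.iter('source'):
              path = src.get('file')
              if path and f'/{old}/' in path:
                  src.set('file', path.replace(f'/{old}/', f'/{new}/'))
          return ET.tostring(root, encoding='unicode')

      print(retarget_vdisk(DOMAIN_XML, 'oldname', 'newname'))
      ```
      Remember to move or rename the image file on disk to match before restarting the VM.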