
user2352

Members
  • Content Count: 11
  • Joined

Community Reputation

8 Neutral

About user2352

  • Rank
    Member


  1. Have you tried the settings below? Also check that the address types aren't already taken by other devices in the VM.

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev2'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev3'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3' multifunction='on'/>
     </hostdev>

     If this doesn't work, have you tried splitting the IOMMU groups? Explanation in this video:
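     Before splitting anything, it helps to see the current grouping. A minimal sketch (mine, not from the video) that walks the standard sysfs location and prints which PCI devices share an IOMMU group; run it on the unraid host itself, not inside the VM:

     ```python
     # Sketch: list which PCI devices share an IOMMU group by walking sysfs.
     # /sys/kernel/iommu_groups is the standard location on a Linux host.
     from pathlib import Path

     def iommu_groups(root="/sys/kernel/iommu_groups"):
         """Return {group_number: [PCI addresses]} from the sysfs tree."""
         groups = {}
         base = Path(root)
         if not base.is_dir():  # no IOMMU groups: IOMMU/VT-d likely not enabled
             return groups
         for grp in base.iterdir():
             groups[grp.name] = sorted(p.name for p in (grp / "devices").iterdir())
         return groups

     if __name__ == "__main__":
         for num, devs in sorted(iommu_groups().items(), key=lambda kv: int(kv[0])):
             print(f"IOMMU group {num}: {' '.join(devs)}")
     ```

     If your NICs show up in one group together with other devices, that is when the group splitting (ACS override) from the video comes into play.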
  2. @thierrybla Have you tried giving all cards the same bus with a different function value, like this?

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
     </hostdev>
  3. Glad it worked. I can't help much with the no-link-up issue, though. You can try manually entering the WAN interface (four tries at worst ;)) and see if that works.
  4. I managed to pass through my NICs by editing the hostdev address type as described here: Maybe you could try this.
  5. OK, below is the XML of one of your passed-through devices.

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
     </hostdev>

     The address type is inserted automatically by unraid, and it's generating a separate bus for every hostdev. Looking at the error log, I assume pfsense does not allocate those correctly. You can try the following, which worked for me:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
     </hostdev>

     I just changed the bus of the subsequent entries to "0x01" (the value of the first entry, which I assume is allocated correctly in pfsense) and additionally increased the function value for every entry. Afterwards I reset pfsense from the command line, and after rebooting all passed-through network cards were available. I hope this helps.
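     The manual edit above can also be sketched as a small script, assuming a libvirt domain XML string as input (element names follow libvirt's schema; the bus value 0x01 is the one from the post):

     ```python
     # Sketch: move every hostdev's guest-side <address> onto one bus
     # with incrementing function numbers, as in the manual edit above.
     # Operates on a libvirt domain XML string.
     import xml.etree.ElementTree as ET

     def pack_hostdevs(domain_xml, bus=0x01):
         root = ET.fromstring(domain_xml)
         for fn, hostdev in enumerate(root.iter("hostdev")):
             # find() only matches direct children, so this skips the
             # host-side <address> nested inside <source>.
             guest = hostdev.find("address")
             if guest is None:
                 continue
             guest.set("bus", f"0x{bus:02x}")
             guest.set("slot", "0x00")
             guest.set("function", f"0x{fn:x}")
         return ET.tostring(root, encoding="unicode")
     ```

     This only touches the guest-side addresses; the <source> addresses, which name the actual host devices, are left alone.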
  6. I had this exact same problem and got it to work. Can you post your VM xml? Maybe I can help.
  7. OK ... a fresh install on the stick solved the problem. I was a little quick deleting the plugins folder, though. Didn't consider the docker configs 😅
  8. Hi, I needed to reboot because unraid somehow crashed, and now I constantly receive a kernel panic error, "too many boot vars". I can boot up in safe mode without plugins, though. I therefore deleted everything in the config/plugins folder, but booting normally still results in that error. Any suggestions? (The diagnostics file is from a safe-mode boot.) server-diagnostics-20190603-0943.zip
  9. That's indeed weird. I had the same problem but renaming the actual VM folder fixed it for me. I am on ver 6.6.6 😈
  10. Yeah, that seems like a bug. A temporary fix would be to manually rename the VM folder in the domains directory to the new name.
  11. This happens when you change the name of the VM. The primary vdisk location is set to auto and is still looking for the "oldname.img". You will need to manually select the correct img file.