biopixen

Members
  • Posts: 9
  • Joined
  • Last visited
  • Converted
  • Gender: Undisclosed

biopixen's Achievements

  • Rank: Newbie (1/14)
  • Reputation: 0
  1. Hi! Yes (screenshot attached). I have also attached my PCI devices and IOMMU groups: IOMMU_Groups.txt, pci_devices.txt
  2. Eumm... feeling stupid now, but I'm running 6.1.7. Could you please point me in the right direction? Thanks a bunch!
  3. Hi! Thanks for your input! I'm, however, unable to see any difference from my current setup (mine is longer):

     cgroup_device_acl = [
         "/dev/null", "/dev/full", "/dev/zero",
         "/dev/random", "/dev/urandom", "/dev/ptmx",
         "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet",
         "/dev/vfio/vfio",
         "/dev/vfio/1", "/dev/vfio/2", "/dev/vfio/3",
         "/dev/vfio/4", "/dev/vfio/5", "/dev/vfio/6",
         "/dev/vfio/7", "/dev/vfio/8", "/dev/vfio/9",
         "/dev/vfio/10", "/dev/vfio/11", "/dev/vfio/12",
         "/dev/vfio/13", "/dev/vfio/14", "/dev/vfio/15",
         "/dev/vfio/16", "/dev/vfio/17", "/dev/vfio/18",
         "/dev/vfio/19", "/dev/vfio/20", "/dev/vfio/21",
         "/dev/vfio/22", "/dev/vfio/23", "/dev/vfio/24",
         "/dev/vfio/25", "/dev/vfio/26", "/dev/vfio/27",
         "/dev/vfio/28", "/dev/vfio/29", "/dev/vfio/30",
         "/dev/vfio/31", "/dev/vfio/32", "/dev/vfio/33",
         "/dev/vfio/34", "/dev/vfio/35", "/dev/vfio/36",
         "/dev/vfio/37", "/dev/vfio/38", "/dev/vfio/39",
         "/dev/vfio/40", "/dev/vfio/41", "/dev/vfio/42",
         "/dev/vfio/43", "/dev/vfio/44", "/dev/vfio/45",
         "/dev/vfio/46", "/dev/vfio/47", "/dev/vfio/48",
         "/dev/vfio/49", "/dev/vfio/50", "/dev/vfio/51",
         "/dev/vfio/52", "/dev/vfio/53", "/dev/vfio/54",
         "/dev/vfio/55", "/dev/vfio/56", "/dev/vfio/57",
         "/dev/vfio/58", "/dev/vfio/59", "/dev/vfio/60",
         "/dev/vfio/61", "/dev/vfio/62", "/dev/vfio/63",
         "/dev/vfio/64", "/dev/vfio/65", "/dev/vfio/66",
         "/dev/vfio/67", "/dev/vfio/68", "/dev/vfio/69",
         "/dev/vfio/70", "/dev/vfio/71", "/dev/vfio/72",
         "/dev/vfio/73", "/dev/vfio/74", "/dev/vfio/75",
         "/dev/vfio/76", "/dev/vfio/77", "/dev/vfio/78",
         "/dev/vfio/79", "/dev/vfio/80", "/dev/vfio/81",
         "/dev/vfio/82", "/dev/vfio/83", "/dev/vfio/84",
         "/dev/vfio/85", "/dev/vfio/86", "/dev/vfio/87",
         "/dev/vfio/88", "/dev/vfio/89", "/dev/vfio/90",
         "/dev/vfio/91", "/dev/vfio/92", "/dev/vfio/93",
         "/dev/vfio/94", "/dev/vfio/95", "/dev/vfio/96",
         "/dev/vfio/97", "/dev/vfio/98", "/dev/vfio/99"
     ]
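The long run of numbered /dev/vfio/N entries above can be generated rather than typed by hand. A minimal POSIX-shell sketch (the 1..99 range simply mirrors the list in this post):

```shell
# Print cgroup_device_acl entries "/dev/vfio/1" .. "/dev/vfio/N",
# each followed by a comma, ready to paste into libvirt's qemu.conf.
vfio_acl_entries() {
    i=1
    while [ "$i" -le "$1" ]; do
        printf '"/dev/vfio/%d",' "$i"
        i=$((i + 1))
    done
}
```

Running `vfio_acl_entries 99` emits the 99 numbered entries; the static device nodes ("/dev/null", "/dev/kvm", and so on) still have to be listed explicitly.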
  4. Hi guys! I'm trying to add a network card to my pfSense VM.

     3Com Corporation 3c940: 37:09.0 0200: 10b7:1700
     Realtek RTL8111/8168/8411: 03:00.0 0200: 10ec:8168

     I added pci-stub.ids=10ec:8168,10b7:1700 to my syslinux.cfg and rebooted, then added the following to my .xml file (I'm only testing ONE card at the moment as a proof of concept):

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x37' slot='0x09' function='0x0'/>
       </source>
     </hostdev>

     That was changed to this (the last address line was added automatically):

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x37' slot='0x09' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
     </hostdev>

     However, I receive the following error when trying to start the VM:

     virsh start pfSense160325
     error: Failed to start domain pfSense160325
     error: internal error: early end of file from monitor: possible problem:
     2016-03-25T20:01:38.358902Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to set iommu for container: Operation not permitted
     2016-03-25T20:01:38.358943Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to setup container for group 13
     2016-03-25T20:01:38.358957Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to get group 13
     2016-03-25T20:01:38.358969Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: Device initialization failed
     2016-03-25T20:01:38.358982Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: Device 'vfio-pci' could not be initialized

     Any clues?!
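"failed to setup container for group 13" typically means some device in IOMMU group 13 is still bound to its host driver; every device in the group has to be released before any one of them can be passed through. A small POSIX-shell sketch for inspecting a group directory (the /sys/kernel/iommu_groups layout is standard; group 13 is the number from the error above):

```shell
# Print each device in an IOMMU group directory together with the
# driver it is currently bound to ("none" if unbound).
list_group_devices() {
    for dev in "$1"/devices/*; do
        [ -e "$dev" ] || continue
        if [ -L "$dev/driver" ]; then
            drv=$(basename "$(readlink -f "$dev/driver")")
        else
            drv=none
        fi
        printf '%s driver=%s\n' "$(basename "$dev")" "$drv"
    done
}

# On the host: list_group_devices /sys/kernel/iommu_groups/13
# Every line should show driver=vfio-pci or driver=pci-stub before the VM starts.
```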
  7. Hi everyone. I'm thinking about using a VM (W2K server) to host my SMB shares for my home network (about 7 devices). What would happen if I create a second virtual disk on my system and mount and share it from the VM? The raw disk is 4TB, and my array is 4 disks of 4TB each plus a parity disk. Will the disks "share" the image/raw disk, or will one disk be allocated? /Sebastian
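On the allocation question: in Unraid a single file lives on one array disk (files are not striped), so one data disk would end up holding the vdisk image. The image can also be created sparse, so the 4TB is only consumed as the guest actually writes to it. A sketch (the path and size are illustrative, not from the post):

```shell
# Create a sparse raw image: the apparent size is the full requested
# size, but blocks are only allocated on the underlying disk as the
# guest writes to them.
make_sparse_vdisk() {
    truncate -s "$2" "$1"
}

# make_sparse_vdisk /mnt/user/domains/smbdata.img 4T   # illustrative path
```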
  8. "No, but the mover will do exactly what you are seeing if the dockerc share isn't set to cache-only." Hi! Thanks for the feedback! I checked, and it seems that the share is indeed cache-only (picture attached).
  9. Hi! Great job with the Docker! Whenever I reboot the Unraid server, the CouchPotato and NZBGet Dockers revert back to their defaults. My /config points at /mnt/cache/dockerc/couchpotato on my Unraid system and is read/write-able. Is there something in the startup script that erases config.ini and replaces it with a default one?
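A config that reverts on every reboot can be checked from the host side: if the /config host path is empty (or sits on non-persistent storage) when the container starts, the image's first-run logic writes defaults. A minimal sketch using the path and file named in the post above:

```shell
# Succeed only if the host-side config directory exists and contains a
# non-empty config.ini, i.e. the config actually survived the reboot.
config_persisted() {
    [ -d "$1" ] && [ -s "$1/config.ini" ]
}

# config_persisted /mnt/cache/dockerc/couchpotato || echo "config missing"
```

Running this right after a reboot, before the containers start, shows whether the files vanish at reboot time or are overwritten by the container's startup script.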