chvb

Members
  • Posts: 126
  • Joined

  • Last visited

Everything posted by chvb

  1. Your USB 3 controller should be enabled: 00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31). You have to deactivate this one: 03:00.0 USB controller: ASMedia Technology Inc. Device 1242. Then insert your USB flash drive into a port on the Intel USB 3 controller. Maybe this is your solution; see the check below.
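
     If it helps, one way to double-check which controller the stick actually sits on (a sketch from my side, assuming lsusb is available on the console; it is not from the original post):

     lspci | grep -i usb   # confirm 00:14.0 (Intel xHCI) and 03:00.0 (ASMedia) are what the system sees
     lsusb -t              # tree view showing which root hub / controller each USB device hangs off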
  2. Maybe you should try to deactivate the USB controller in the BIOS and give it another try. As you can see, the USB controller is in the same IOMMU group as your second card.
  3. Did you use the ACS override function for your passthrough? Which platform are you on (Skylake)? The problem is that the ACS override doesn't work that well on Skylake. Maybe you should try another PCIe slot.
  4. Your second card is in the same IOMMU group as other devices, so you have to try enabling the PCIe ACS override function. You can find it here: Settings -> VM Manager -> PCIe ACS Override -> set it to enabled. Then please reboot your unRAID system, and verify the grouping as shown below.
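
     To verify the grouping after the reboot, a small shell sketch (this only uses standard sysfs paths, nothing unRAID-specific is assumed):

     for d in /sys/kernel/iommu_groups/*/devices/*; do
         g=${d#*/iommu_groups/}; g=${g%%/*}   # group number taken from the path
         printf 'IOMMU group %s: ' "$g"
         lspci -nns "${d##*/}"                # the device that sits in that group
     done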
  5. Yes, the VNC graphics have to be disabled with an Nvidia card; otherwise you get the error code 43.
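
     For reference, a common additional workaround for code 43 is hiding the hypervisor from the Nvidia driver in the libvirt XML. A minimal sketch of the <features> element (this is a general KVM technique, not something confirmed in the post above):

     <features>
       <hyperv>
         <vendor_id state='on' value='1234567890ab'/>
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>
     </features>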
  6. I think there is no problem with your setup. But... you need the internal GPU for the unRAID console; then you can pass your PCIe card through to your Windows VM for use with Blue Iris and Plex Home Theater. You also have to test whether your RAID controller cards work with unRAID. Use the SSD as a cache drive; the VM should run from the cache drive. I think it is better to buy a second SSD, so you have a cache pool. You can also use the Docker function of unRAID to run your Plex Media Server, e.g. as sketched below.
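
     A sketch of how the Plex Docker could be started from the command line (the image name and paths here are my assumptions; in practice you would use the unRAID Docker templates):

     docker run -d --name=plex \
       --net=host \
       -v /mnt/user/appdata/plex:/config \
       -v /mnt/user/Media:/data \
       plexinc/pms-docker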
  7. Hey, what are your syslinux.cfg settings? Did you try this line?

     append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot

     Which system are you using, especially which mainboard? And what about the VT-d function: did you enable it in the BIOS?
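
     For comparison, this is roughly where that line sits in a stock unRAID syslinux.cfg (the label and menu entries here are from a default install; yours may differ):

     default /syslinux/menu.c32
     menu title Lime Technology
     prompt 0
     timeout 50
     label unRAID OS
       kernel /bzimage
       append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot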
  8. Hello everyone, after many passthrough problems with an HP server, I bought new hardware. I successfully passed an Nvidia GTX 750 through to a Windows VM with KVM. Now I would like to pass through my two Digital Devices Cine S2 tuner cards. I used this code in the Windows 10 template:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
     </hostdev>

     After that, I installed the drivers for these cards, and in the Device Manager the two cards are shown. But when I tune a channel in DVBViewer, the card shows 0% signal; there is no picture. For now I use the TVHeadend Docker with SAT>IP to feed DVBViewer, and that solution works, but I want to pass the cards through to the VM. Thanks for any suggestions.
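
     One thing still worth checking (my own assumption, not something from the template): whether the two tuners are really bound to vfio-pci while the VM runs, e.g.:

     lspci -k -s 03:00.0   # "Kernel driver in use" should read vfio-pci, not ddbridge
     lspci -k -s 04:00.0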
  9. Hello everyone, I've created a DVBLink Docker: https://github.com/chvb/docker-templates The DVBLink server doesn't start. When I connect via an SSH session inside the container and start the DVBLink server manually (/usr/local/bin/dvblink/start.sh), I get the following error:

     2016-May-31 11:46:04: ---------------------------
     2016-May-31 11:46:04: Installed DVBLink products:
     2016-May-31 11:46:04: DVBLink Server, 5.5.0, 12350
     2016-May-31 11:46:04: ---------------------------
     I/O warning : failed to load external entity "/usr/local/bin/dvblink/auxes/social/settings.xml"
     I/O warning : failed to load external entity "/usr/local/bin/dvblink/auxes/updater/settings.xml"
     upower_listener::upower_listener()
     pure virtual method called
     terminate called without an active exception
     Aborted

     Docker Hub: https://hub.docker.com/r/chvb/docker-dvblink/
     GitHub: https://github.com/chvb/Docker-DVBLink

     Can someone help me? Many thanks.
  10. Thanks for your answer. I used another solution: https://lime-technology.com/forum/index.php?topic=48630.0
  11. Hello everyone, I finally got unRAID working under the ESXi 6 server on my HP ML310e. The solution: add mpt3sas.msix_disable=1 to the syslinux.cfg; now all my disks on the LSI controller are visible. I passed through my two Digital Devices cards and the LSI controller. Finally, I created a Windows VM with my GeForce GTX 750 passed through for hardware encoding. Now I will test whether it runs stably.
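
     A sketch of the resulting append line, assuming the same syslinux.cfg settings from my earlier posts:

     append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream mpt3sas.msix_disable=1 initrd=/bzroot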
  12. Hello everyone, I want to pass a Digital Devices Cine S2 card through to my Windows VM. I've added the card in the .xml file, and the syslinux.cfg is already changed to:

     append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot

     The VM doesn't start:

     internal error: process exited while connecting to monitor:
     2016-05-03T09:07:15.474529Z qemu-system-x86_64: -device vfio-pci,host=07:00.0,id=hostdev0,bus=pci.2,addr=0x3: vfio: failed to set iommu for container: Operation not permitted
     2016-05-03T09:07:15.474551Z qemu-system-x86_64: -device vfio-pci,host=07:00.0,id=hostdev0,bus=pci.2,addr=0x3: vfio: failed to setup container for group 14
     2016-05-03T09:07:15.474556Z qemu-system-x86_64: -device vfio-pci,host=07:00.0,id=hostdev0,bus=pci.2,addr=0x3: vfio: failed to get group 14
     2016-05-03T09:07:15.474564Z qemu-system-x86_64: -device vfio-pci,host=07:00.0,id=hostdev0,bus=pci.2,addr=0x3: Device initialization failed

     This is the error from the log:

     May 3 11:02:04 NAS kernel: ---[ end trace 357031a9f515e633 ]---
     May 3 11:02:04 NAS kernel: ------------[ cut here ]------------
     May 3 11:02:04 NAS kernel: WARNING: CPU: 0 PID: 20544 at lib/kobject.c:240 kobject_add_internal+0x209/0x25d()
     May 3 11:02:04 NAS kernel: kobject_add_internal failed for ddbridge1 with -EEXIST, don't try to register things with the same name in the same directory.
     May 3 11:02:04 NAS kernel: Modules linked in: xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net vhost macvtap macvlan tun iptable_mangle xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod bonding stv6110x lnbp21 mpt3sas stv090x x86_pkg_temp_thermal coretemp kvm_intel ahci libahci tg3 raid_class ddbridge cxd2099(C) kvm dvb_core ptp pps_core scsi_transport_sas ipmi_si [last unloaded: md_mod]
     May 3 11:02:04 NAS kernel: CPU: 0 PID: 20544 Comm: libvirtd Tainted: G WC 4.4.6-unRAID #1
     May 3 11:02:04 NAS kernel: Hardware name: HP ProLiant ML310e Gen8, BIOS J04 11/09/2013
     May 3 11:02:04 NAS kernel: 0000000000000000 ffff8806670d7940 ffffffff81368c9a ffff8806670d7988
     May 3 11:02:04 NAS kernel: 00000000000000f0 ffff8806670d7978 ffffffff8104a28a ffffffff8136acb9
     May 3 11:02:04 NAS kernel: ffff8800d580f010 00000000ffffffef ffff88080359e420 0000000000000000
     May 3 11:02:04 NAS kernel: Call Trace:
     May 3 11:02:04 NAS kernel: [<ffffffff81368c9a>] dump_stack+0x61/0x7e
     May 3 11:02:04 NAS kernel: [<ffffffff8104a28a>] warn_slowpath_common+0x8f/0xa8
     May 3 11:02:04 NAS kernel: [<ffffffff8136acb9>] ? kobject_add_internal+0x209/0x25d
     May 3 11:02:04 NAS kernel: [<ffffffff8104a2e6>] warn_slowpath_fmt+0x43/0x4b
     May 3 11:02:04 NAS kernel: [<ffffffff8115a078>] ? sysfs_warn_dup+0x60/0x67
     May 3 11:02:04 NAS kernel: [<ffffffff8136acb9>] kobject_add_internal+0x209/0x25d
     May 3 11:02:04 NAS kernel: [<ffffffff8136ada3>] kobject_add+0x96/0xa3
     May 3 11:02:04 NAS kernel: [<ffffffff814423c7>] device_add+0x101/0x4fc
     May 3 11:02:04 NAS kernel: [<ffffffff810cdb23>] ? kfree_const+0x1b/0x1d
     May 3 11:02:04 NAS kernel: [<ffffffff8144293f>] device_create_groups_vargs+0xb0/0xe2
     May 3 11:02:04 NAS kernel: [<ffffffff81442982>] device_create_vargs+0x11/0x13
     May 3 11:02:04 NAS kernel: [<ffffffff814429b2>] device_create+0x2e/0x36
     May 3 11:02:04 NAS kernel: [<ffffffffa02d0ed1>] ddb_probe+0xa6f/0xa9e [ddbridge]
     May 3 11:02:04 NAS kernel: [<ffffffff8144c550>] ? rpm_resume+0x8c/0x3f3
     May 3 11:02:04 NAS kernel: [<ffffffff81397a10>] local_pci_probe+0x38/0x7b
     May 3 11:02:04 NAS kernel: [<ffffffff81397a10>] ? local_pci_probe+0x38/0x7b
     May 3 11:02:04 NAS kernel: [<ffffffff813984c0>] pci_device_probe+0xe3/0x111
     May 3 11:02:04 NAS kernel: [<ffffffff81444a87>] driver_probe_device+0xf2/0x25d
     May 3 11:02:04 NAS kernel: [<ffffffff81444d04>] __device_attach_driver+0x6c/0x73
     May 3 11:02:04 NAS kernel: [<ffffffff81444c98>] ? driver_allows_async_probing+0x2c/0x2c
     May 3 11:02:04 NAS kernel: [<ffffffff814433a9>] bus_for_each_drv+0x78/0x82
     May 3 11:02:04 NAS kernel: [<ffffffff8144490c>] __device_attach+0xa1/0x101
     May 3 11:02:04 NAS kernel: [<ffffffff81444977>] device_attach+0xb/0xd
     May 3 11:02:04 NAS kernel: [<ffffffff81443998>] bus_rescan_devices_helper+0x30/0x55
     May 3 11:02:04 NAS kernel: [<ffffffff81443a3a>] store_drivers_probe+0x34/0x4c
     May 3 11:02:04 NAS kernel: [<ffffffff81442fe0>] bus_attr_store+0x20/0x25
     May 3 11:02:04 NAS kernel: [<ffffffff81159cc4>] sysfs_kf_write+0x34/0x36
     May 3 11:02:04 NAS kernel: [<ffffffff81159143>] kernfs_fop_write+0xe8/0x12b
     May 3 11:02:04 NAS kernel: [<ffffffff811098df>] __vfs_write+0x21/0xb9
     May 3 11:02:04 NAS kernel: [<ffffffff8111fee1>] ? __alloc_fd+0x150/0x160
     May 3 11:02:04 NAS kernel: [<ffffffff8111f9b5>] ? __fget+0x72/0x7e
     May 3 11:02:04 NAS kernel: [<ffffffff810773c8>] ? percpu_down_read+0xe/0x37
     May 3 11:02:04 NAS kernel: [<ffffffff81109ed9>] vfs_write+0xbc/0x160
     May 3 11:02:04 NAS kernel: [<ffffffff8110a626>] SyS_write+0x49/0x84
     May 3 11:02:04 NAS kernel: [<ffffffff8161edae>] entry_SYSCALL_64_fastpath+0x12/0x6d
     May 3 11:02:04 NAS kernel: ---[ end trace 357031a9f515e634 ]---

     Thanks for your help.
  13. Hello everyone, I have now tested with the ESXi 6 U2 server. I passed through my Digital Devices cards, the onboard controller, and the IT-mode-flashed LSI 9211 (Dell PERC H310) to the unRAID VM. I've got a problem with the passthrough of the LSI controller: the controller shows up, but there are no drives. The devices on the onboard Intel controller do show up. I also passed my Nvidia card through to a Windows VM, which works under ESXi. Does anybody know why no disks show up with the passed-through LSI controller? When I start unRAID without ESXi, there are no problems; all disks show up. Thanks.
  14. Hello everyone, I have now tested another graphics card in the HP server, a Radeon HD 5000. Same errors here, so it is an HP issue. Next step: install the ESX server, pass the LSI controller and the TV tuners through to unRAID, then create a Win10 machine and pass the Nvidia card through. I hope this solution will work for me.
  15. You only have to forward the port from your router to the IP of your VM. That's the only thing; it's like a real computer on your network.
  16. (in "CPU issue") OK, you said that you updated the driver with Windows Update. As I understand it, you only have the generic Windows driver for your Nvidia device installed? Please download and install the Nvidia driver, then report back.
  17. (in "CPU issue") Have you checked the driver of the Nvidia card? Is there only a problem with 4K videos? What happens when you start a 3D game?
  18. Hello, thanks for the info. I've tested the changes in the sysconfig with a reboot, but again no success. I have also tested the card in another HP server (ML310e V2), running unRAID version 6.1.9: same problems there. In the BIOS settings it is possible to change the IRQ of the card, but when I change the number, some other components are changed automatically, and other components are in the same group as the GPU. Maybe it is an HP problem. As far as I know, there has been a big problem with PCI passthrough since version 5.1 U2; the latest version without issues was 5.1 U1. Thanks for any more ideas.
  19. I also use the Docker function; I have deactivated the Emby Docker. When I try to start the VM, the following error is shown:

     internal error: early end of file from monitor, possible problem:
     2016-04-26T17:32:34.961905Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to set iommu for container: Operation not permitted
     2016-04-26T17:32:34.961934Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to setup container for group 12
     2016-04-26T17:32:34.961939Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to get group 12
     2016-04-26T17:32:34.961947Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: Device initialization failed
  20. Now I have checked the IDs of the Nvidia card with the command lspci -n and added this to the syslinux.cfg:

     vfio-pci.ids=10de:1381,10de:0fbc

     But still no success.
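
     For anyone following along, this is roughly how the two IDs can be read out (the address 05:00.0 is taken from my error messages above; the audio function at 05:00.1 is my assumption):

     lspci -n -s 05:00.0   # prints something like: 05:00.0 0300: 10de:1381  (GTX 750, GPU function)
     lspci -n -s 05:00.1   # prints something like: 05:00.1 0403: 10de:0fbc  (GTX 750, HDMI audio function)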
  21. Hello, I got it wrong: there is no option with HP shared-memory features in the BIOS of the ML310e. The only memory-related option is "No-Execute Memory Protection" (it is enabled). I have set iommu=nopt in the syslinux.cfg, with no success. Any other ideas? This is frustrating; I want to use the Nvidia card for GPU encoding. This is the current message when I try to start the VM:

     internal error: process exited while connecting to monitor:
     2016-04-26T10:51:38.882123Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to set iommu for container: Operation not permitted
     2016-04-26T10:51:38.882149Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to setup container for group 12
     2016-04-26T10:51:38.882154Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: vfio: failed to get group 12
     2016-04-26T10:51:38.882163Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x5: Device initialization failed

     Just for information: there is no monitor connected, but I don't think that is the problem? As I said, I would like to use the Nvidia card for hardware encoding.
  22. Hey saarg, many thanks. I will test this tomorrow morning (time for sleep now). In the BIOS settings I have seen the HP shared-memory features; I will disable them and report back whether it works. See you tomorrow.
  23. Hello saarg, thanks for your answer. Here we go: nas-diagnostics-20160425-2115.zip, iommu_groups_pci_devices.txt, sysconfig.txt, Win10xml.txt