Omnicrash

Members
  • Posts
    28
  • Joined
  • Last visited

About Omnicrash

  • Birthday 08/01/1985

Converted

  • Gender
    Male
  • URL
    https://www.omnicrash.net
  • Location
    Belgium

Omnicrash's Achievements

Noob (1/14)

Reputation: 0

  1. Yeah, but the problem I'm having is that I cannot collectively assign my device(s), which are behind a PCIe-to-PCI bridge. I can only use a single function at a time. Another thing I can think of that might cause this problem is that the virtualization engine somehow fails to reset one of the functions of the device after having already reset the other one. Not sure how to fix this if that's the case.
  2. That'd be too easy now, wouldn't it? Anyway, I think it might be an issue with shared IRQs, since both functions of the device of course use the same IRQ (a quick way to check this is sketched after this post list). Using them separately works just fine, but as soon as both are added I get that "Failed to setup INTx fd: Device or resource busy" error. One weird thing: according to this link, http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM, the VT-d spec specifies that all conventional PCI devices behind a PCIe-to-PCI/PCI-X bridge or conventional PCI bridge can only be collectively assigned to the same guest, while PCIe devices do not have this restriction. I have the opposite problem: I can only assign one device (function) behind the bridge to the guest at a time.
  3. Nope, still the same "Device initialization failed" error, unfortunately. Both 'devices' work separately, but never at the same time. EDIT: Actually, when I only add the second function of the device, it doesn't seem to show up in the guest OS using lspci...
  4. Yeah, this weird board (P10S-X) has like 5 PCI slots. I'll try that tomorrow! Currently rebooting to try the ACS override anyway. EDIT: The group is still the same, and I'm getting the same error. Since the only other device is the PCIe-to-PCI bridge, I think all PCI slots will be in this group anyway.
  5. I haven't tried enabling the ACS override yet, since there seem to be a lot of warnings about it possibly causing corruption in certain configurations. However, despite my best google-fu, I cannot seem to find the post that actually describes the issues. From what I understand, though, the ACS override is used to allow devices in the same IOMMU group to be used in multiple VMs? (A sketch for checking whether the override is active and whether the grouping changed is included after this post list.)
  6. Unfortunately, that results in the following error when trying to run:

        internal error: qemu unexpectedly closed the monitor:
        2017-02-14T00:41:22.857869Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: error getting device 0000:05:00.0 from group 10: No such device
        Verify all devices in group 10 are bound to vfio-<bus> or pci-stub and not already in use
        2017-02-14T00:41:22.857919Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: failed to get device 0000:05:00.0
        2017-02-14T00:41:22.871041Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: Device initialization failed

     When I remove the PCI bridge, I get this instead:

        internal error: qemu unexpectedly closed the monitor:
        2017-02-14T00:42:37.708492Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,addr=0x7: vfio: Error: Failed to setup INTx fd: Device or resource busy
        2017-02-14T00:42:37.745806Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,addr=0x7: Device initialization failed

     I've also tried adding it as a multifunction device using:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x1' multifunction='on'/>
        </hostdev>

     But that again results in this:

        internal error: process exited while connecting to monitor:
        2017-02-14T00:44:38.569622Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,multifunction=on,addr=0x6.0x1: vfio: Error: Failed to setup INTx fd: Device or resource busy
        2017-02-14T00:44:38.605890Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,multifunction=on,addr=0x6.0x1: Device initialization failed

     (A sketch of detaching both functions from their host drivers and binding them to vfio-pci up front is included after this post list.)
  7. Yeah, the entire group is fine. I don't think I can separate the card into its own group; I think it will always be in the PCI bridge's group. How do I add an entire group, though? Currently, I can only add one device from the group.
  8. Hi, I'd appreciate any help on this one. I'm trying to pass my bt878-based PCI card through to a Debian VM. I've followed some of the excellent guides here, but I can't seem to get it to work with both devices (functions) enabled. What currently works is this (which only adds the first function of the device):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
        </hostdev>

     Here's the info on the devices:

        IOMMU group 10
          05:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 03)
          06:00.0 Multimedia video controller [0400]: Brooktree Corporation Bt878 Video Capture [109e:036e] (rev 11)
          06:00.1 Multimedia controller [0480]: Brooktree Corporation Bt878 Audio Capture [109e:0878] (rev 11)

     So, 06:00.0 & 06:00.1 should be passed through. If it's somehow easier to pass through the entire IOMMU group, that's completely fine as well. (A sketch for listing each group's members is included after this post list.)
  9. I don't have cache drives. Anyway, I just confirmed it and submitted a bug report. Turning off direct_io and setting it back to user shares worked just fine.
  10. Ok, so here's a weird one. I just upgraded my server hardware, and everything was working fine. Then I began tweaking stuff and preparing to enable VMs. I shut down docker, moved my docker.img to /mnt/disk1/system, and reconfigured docker. It started up fine, but most of my containers (Plex, PlexPy, Sonarr, ...) had disk I/O errors. I recreated my entire docker image twice with no effect. I managed to fix it by changing all my containers' config paths from /mnt/user/appdata to /mnt/disk1/appdata. However, this was working fine before, and it should be no problem according to this post: http://lime-technology.com/forum/index.php?topic=40937.msg466185#msg466185. One thing I did while tweaking was turning on Tunable (enable Direct IO). Since the post I linked above mentions FUSE, I think that might have something to do with it. I haven't tested disabling it again myself though. EDIT: I've disabled it, and it did indeed fix the problem.
  11. I had the same problem, but since I was going to update anyway it was easily resolved:
     - Went to the Plugins tab
     - Checked for updates
     - The download & update option showed up at the very bottom
     After that I just rebooted. The shares worked, but I had to wait a while until the GUI came back; after that everything, plugins & dockers included, worked perfectly.
  12. It's strange though, because there was a container update roughly at the same time I got the update notification, so I expected it to include the update.
  13. Yup, same problem here: docker is updated, but Emby still claims to need an update.
  14. By default, in filebot.sh in your config folder, it is configured with the COPY action (the --action copy parameter). I presume "settling" in filebot means there are no new files added or removed, so it should work.
  15. Just ran into a nasty problem. My Sonarr image data kept being reset, so I had to re-scan the library every day to get it back. I also installed Emby, and after a day it would start the setup wizard again, though not every setting was wiped. So I was thinking: what happens every day that could cause this? The mover maybe? I have my appdata share set up as a regular share that automatically gets moved at night from the cache to the array. That way reads should be fast enough, writes are fast, and my data is safe. Turns out I was right: by default, the path given to (some of) my docker containers is not /mnt/user/appdata but /mnt/cache/appdata! Whoops! I realize many people out there may not run a setup like mine, but I just wanted to post this as a warning to those that do, so they don't repeat my mistake. (A quick way to check which host paths your containers actually use is sketched after this post list.)
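
Regarding post 2: a minimal way to confirm that both Bt878 functions really do share one interrupt line, and to see which host drivers are still attached to it, is the shell sketch below. The 06:00.0 / 06:00.1 addresses are the ones from the lspci output in post 8; adjust them for other systems.

    # Show the interrupt pin/line each function reports (run as root for full -vv output)
    lspci -vvs 06:00.0 | grep -i irq
    lspci -vvs 06:00.1 | grep -i irq

    # /proc/interrupts shows which drivers are currently servicing that IRQ number;
    # if bttv or snd_bt87x (the usual host drivers for this card) still appear on the
    # same line, the host is still holding the shared interrupt.
    cat /proc/interrupts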
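
Regarding post 5: as I understand it, the ACS override patch (the pcie_acs_override kernel parameter) tells the kernel to treat devices as isolatable even when the hardware does not guarantee it, which is what splits IOMMU groups apart; the corruption warnings come from peer-to-peer traffic that can then bypass the IOMMU. A quick way to confirm whether the override is actually active and whether the grouping changed, assuming the parameter was added to the unRAID boot append line:

    # Was the override actually passed to the kernel?
    grep -o 'pcie_acs_override=[^ ]*' /proc/cmdline

    # Re-list the groups afterwards; conventional PCI devices behind a real
    # PCIe-to-PCI bridge will normally stay in the bridge's group regardless,
    # which matches what post 4 reports.
    find /sys/kernel/iommu_groups/ -type l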
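
Regarding post 6: the "error getting device 0000:05:00.0 from group 10" message suggests QEMU was handed the bridge itself, and the "Verify all devices in group 10 are bound..." hint points at the usual requirement that every endpoint in the group (here 06:00.0 and 06:00.1; bridges themselves don't need to be bound) be detached from its host driver before the VM starts. A minimal sketch for doing that by hand, assuming the addresses from post 8 and a kernel recent enough to have driver_override:

    # Load the vfio-pci driver on the host
    modprobe vfio-pci

    # Detach both Bt878 functions from their host drivers (typically bttv and
    # snd_bt87x) and hand them to vfio-pci before starting the guest.
    for dev in 0000:06:00.0 0000:06:00.1; do
        echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
            echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        echo $dev > /sys/bus/pci/drivers_probe
    done

With both functions bound to vfio-pci, the two <hostdev> entries from the libvirt XML above should at least get past the "not already in use" complaint; the shared-INTx "Device or resource busy" error is a separate issue.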
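
Regarding post 8: since the whole of IOMMU group 10 has to move together anyway, it helps to list exactly what each group contains straight from sysfs rather than relying on a single lspci run. A small sketch using only standard paths:

    # Print every IOMMU group and the devices it contains
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}"
        for d in "$g"/devices/*; do
            echo -n "  "; lspci -nns "${d##*/}"
        done
    done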
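
Regarding post 15: a quick way to see which host path each container was actually given (i.e. whether a mapping points at /mnt/cache/appdata instead of /mnt/user/appdata) is to dump the mounts of every running container. A minimal sketch; the Go-template fields are standard docker inspect output:

    # For each running container, list host source -> container destination mounts
    for c in $(docker ps -q); do
        echo "== $(docker inspect -f '{{.Name}}' $c)"
        docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' $c
    done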