Omnicrash

Everything posted by Omnicrash

  1. Yeah, but the problem I'm having is that I cannot collectively assign my device(s) behind the PCIe-to-PCI bridge; I can only use a single function at a time. Another possible cause I can think of is that the virtualization engine somehow fails to reset one of the device's functions after having already reset the other one. Not sure how to fix this if that's the case.
  2. That'd be too easy now, wouldn't it? Anyway, I think it might be an issue with shared IRQs, since both functions of the device of course use the same IRQ. Using them separately works just fine, but as soon as both are added I get that "Failed to setup INTx fd: Device or resource busy" error. One weird thing: according to this link, http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM, the VT-d spec says that all conventional PCI devices behind a PCIe-to-PCI/PCI-X bridge or conventional PCI bridge can only be collectively assigned to the same guest, while PCIe devices do not have this restriction. I have the opposite problem: I can only assign one device (function) behind the bridge to the guest at a time.
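     In case it helps anyone hitting the same "Failed to setup INTx fd" error, here's a rough sketch of how I'd confirm that both functions really do share a legacy interrupt line, plus the vfio-pci nointxmask workaround I've seen mentioned for shared-INTx situations (I haven't verified it on this board, so treat it as an assumption):

        # Show the interrupt line each function reports (addresses from my lspci output)
        lspci -vs 06:00.0 | grep -i irq
        lspci -vs 06:00.1 | grep -i irq

        # Check whether anything else on the host is still holding that IRQ
        grep "16:" /proc/interrupts    # replace 16 with whatever IRQ the commands above report

        # Unverified assumption: disable INTx masking in vfio-pci, the usual
        # workaround when a passed-through device shares its interrupt line
        echo "options vfio-pci nointxmask=1" >> /etc/modprobe.d/vfio.conf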
  3. Nope, still the same device initialization failed error unfortunately. Both 'devices' work separately, but never at the same time. EDIT: Actually, when I only add the second function of the device, it doesn't seem to show up in the guest OS using lspci...
  4. Yeah, this weird board (P10S-X) has like 5 PCI slots. I'll try that tomorrow! Currently rebooting to try ACS override anyway. EDIT: The group is still the same, and I'm getting the same error. Since the only other device in it is the PCIe-to-PCI bridge, I think all the PCI slots will end up in this group anyway.
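     For anyone wanting to check this on their own box, this is the little loop I've been using to dump the IOMMU grouping before and after toggling ACS override (plain sysfs + lspci, nothing unRAID-specific):

        #!/bin/bash
        # List every IOMMU group and the devices it contains
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            group=$(basename "$(dirname "$(dirname "$d")")")
            echo "IOMMU group $group: $(lspci -nns "${d##*/}")"
        done | sort -V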
  5. I haven't tried enabling ACS override yet, since there seem to be a lot of warnings about it possibly causing corruption in certain configurations. However, despite my best google-fu, I can't seem to find the post actually describing the issues. From what I understand, though, ACS override is used to allow devices in the same IOMMU group to be used in multiple VMs?
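     If I do end up trying it, my understanding (an assumption on my part, so correct me if this is wrong) is that on unRAID it comes down to adding the kernel parameter from the ACS override patch to the append line in syslinux.cfg on the flash drive, something like:

        # /boot/syslinux/syslinux.cfg -- 'downstream' splits up devices behind a bridge,
        # adding 'multifunction' also splits the individual functions of a device
        append pcie_acs_override=downstream,multifunction initrd=/bzroot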
  6. Unfortunately, that results in the following error when trying to run:

        internal error: qemu unexpectedly closed the monitor:
        2017-02-14T00:41:22.857869Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: error getting device 0000:05:00.0 from group 10: No such device
        Verify all devices in group 10 are bound to vfio-<bus> or pci-stub and not already in use
        2017-02-14T00:41:22.857919Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: vfio: failed to get device 0000:05:00.0
        2017-02-14T00:41:22.871041Z qemu-system-x86_64: -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.2,addr=0x4: Device initialization failed

     When I remove the PCI bridge, I get this instead:

        internal error: qemu unexpectedly closed the monitor:
        2017-02-14T00:42:37.708492Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,addr=0x7: vfio: Error: Failed to setup INTx fd: Device or resource busy
        2017-02-14T00:42:37.745806Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,addr=0x7: Device initialization failed

     I've also tried adding it as a multifunction device using:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x1' multifunction='on'/>
        </hostdev>

     But that again results in this:

        internal error: process exited while connecting to monitor:
        2017-02-14T00:44:38.569622Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,multifunction=on,addr=0x6.0x1: vfio: Error: Failed to setup INTx fd: Device or resource busy
        2017-02-14T00:44:38.605890Z qemu-system-x86_64: -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.2,multifunction=on,addr=0x6.0x1: Device initialization failed
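     What I'm planning to try next (a sketch based on the generic vfio sysfs interface, not anything unRAID-specific) is binding both functions to vfio-pci on the host up front, instead of relying on managed='yes' to do it at VM start:

        #!/bin/bash
        # Unbind both functions of the Bt878 card from their current drivers
        # and hand them to vfio-pci (addresses taken from my lspci output)
        for dev in 0000:06:00.0 0000:06:00.1; do
            if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
                echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
            fi
            echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > /sys/bus/pci/drivers_probe
        done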
  7. Yeah, the entire group is fine. I don't think I can separate the card into its own group; I think it will always be in the PCI bridge's group. How do I add an entire group, though? Currently I can only add one device from the group.
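     For reference, this is how I've been checking exactly which devices sit in that group (group number 10 is from my setup; adjust as needed):

        # Every PCI address that belongs to IOMMU group 10, with a readable description
        for d in /sys/kernel/iommu_groups/10/devices/*; do
            lspci -nns "${d##*/}"
        done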
  8. Hi, I'd appreciate any help on this one. I'm trying to pass my bt878-based PCI card through to a Debian VM. I've followed some of the excellent guides here, but I can't seem to get it to work with both devices (functions) enabled. What currently works is this (which only adds the first function of the device):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
        </hostdev>

     Here's the info on the devices:

        IOMMU group 10
          05:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 03)
          06:00.0 Multimedia video controller [0400]: Brooktree Corporation Bt878 Video Capture [109e:036e] (rev 11)
          06:00.1 Multimedia controller [0480]: Brooktree Corporation Bt878 Audio Capture [109e:0878] (rev 11)

     So, 06:00.0 & 06:00.1 should be passed through. If it's somehow easier to pass through the entire IOMMU group, that's completely fine as well.
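     One more diagnostic that might be useful (my own suggestion, not from the guides): checking which host driver each function is currently bound to, since the bttv or snd_bt87x modules grabbing one of them would explain a "resource busy" error. Both should show vfio-pci once passthrough is set up:

        # -k shows the kernel driver currently in use for each function
        lspci -nnk -s 06:00.0
        lspci -nnk -s 06:00.1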
  9. I don't have cache drives. Anyway, I just confirmed it and submitted a bug report. Turning off direct_io and setting it back to user shares worked just fine.
  10. OK, so here's a weird one. I just upgraded my server hardware, and everything was working fine. Then I began tweaking stuff and preparing to enable VMs. I shut down docker, moved my docker.img to /mnt/disk1/system, and reconfigured docker. It started up fine, but most of my containers (Plex, PlexPy, Sonarr, ...) had disk I/O errors. I recreated my entire docker image twice with no effect. I managed to fix it by changing all my containers' config paths from /mnt/user/appdata to /mnt/disk1/appdata. However, this was working fine before, and it should be no problem according to this post: http://lime-technology.com/forum/index.php?topic=40937.msg466185#msg466185. One thing I did while tweaking was turning on Tunable (enable Direct IO). Since the post I linked above mentions FUSE, I think that might have something to do with it. EDIT: I've disabled it again, and that did indeed fix the problem.
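     For anyone else trying to tell whether a given container path goes through the user-share FUSE layer or straight to a disk, a quick check (paths are from my own setup):

        # /mnt/user/* goes through unRAID's FUSE-based user shares (shfs),
        # while /mnt/diskN/* talks to the underlying filesystem directly
        df -T /mnt/user/appdata /mnt/disk1/appdata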
  11. I had the same problem, but since I was going to update anyway it was easily resolved:
      - Went to the Plugins tab
      - Checked for updates
      - A Download & update option showed up at the very bottom
      After that I just rebooted. The shares worked, and although I had to wait a while until the GUI came back, after that everything, plugins & dockers included, worked perfectly.
  12. It's strange, though, because there was a container update at roughly the same time I got the update notification, so I expected it to include the update.
  13. Yup, same problem here: docker is updated, but Emby still claims to need an update.
  14. By default, in filebot.sh in your config folder, it is configured with the COPY action (the --action copy parameter). I presume "settling in filebot" means no new files get added or removed, so it should work.
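     Purely as an illustration (this is my own example of an amc invocation, not the exact contents of the container's filebot.sh), the relevant switch looks like this:

        # --action copy leaves the originals in place; switch it to 'move' if you don't want duplicates
        filebot -script fn:amc --output /media --action copy -non-strict /downloads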
  15. Just ran into a nasty problem. My Sonarr data kept being reset, so I had to rescan the library every day to restore it. I also installed Emby, and after a day it would start the setup wizard again, though not every setting was wiped. So I got to thinking: what happens every day that could cause this? The mover, maybe? I have my appdata share set up as a regular share that automatically gets moved from cache to the array at night. That way reads should be fast enough, writes are fast, and my data is safe. Turns out I was right: by default, the path given to (some of) my docker containers is not /mnt/user/appdata but /mnt/cache/appdata! Whoops! I realize many people out there may not run a setup like mine, but I just wanted to post this as a warning to those who do, so they don't repeat my mistake.
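     To make the pitfall concrete, the difference is just the host side of the container's volume mapping (the container and paths below are only an example):

        # Default that breaks once the mover relocates appdata off the cache drive:
        docker run -d --name sonarr -v /mnt/cache/appdata/sonarr:/config linuxserver/sonarr

        # What it should point at, so the user share follows the files wherever the mover puts them:
        docker run -d --name sonarr -v /mnt/user/appdata/sonarr:/config linuxserver/sonarr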
  16. Getting the following errors since the latest update, under General Settings:

        Warning: file_get_contents(/tmp/community.applications/tempFiles/Repositories.json): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(292) : eval()'d code on line 32
        Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(292) : eval()'d code on line 34
        Warning: natcasesort() expects parameter 1 to be array, null given in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(292) : eval()'d code on line 38
        Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(292) : eval()'d code on line 42

     Is anyone else getting this, or is something off with my setup? Impeccable timing, by the way: I wanted to add a feature to your recycle.bin plugin to modify the default ".Recycle.Bin" path, so I forked it and am currently in the process of (possibly) adding a bunch more features I've thought up.
  17. Ah... I figured it was something like this, but I swear I remember logging in as root the first time I tried it; I must have misremembered. Anyway, I set up a separate account and everything works fine!
  18. The first time I installed unRAID in a VM and tested it, I could log in as root through SMB. However, now my password is always rejected, both on all my Windows machines and from an Ubuntu VM. Connecting through ssh, sftp or ftp works fine. The guest account also works fine: I can view the folders and edit the public folders just fine as guest, both through the server's name and its IP. However, using the username root always fails, on any system. Any ideas? I've tried clearing credentials in Windows and using SERVERNAME\root as the login, both without success. I've also tried connecting directly through the IP.
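     For what it's worth, this is the check I've been running from the Ubuntu VM to rule out Windows' credential cache (smbclient from the samba client tools; "tower" stands in for my actual server name):

        # Anonymous listing of the shares (this works)
        smbclient -L //tower -N

        # Same thing with explicit credentials (this is what gets rejected for root)
        smbclient -L //tower -U root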
  19. Just a quick update: everything worked perfectly!
      - Pre-cleared the disks under VMware Workstation using a physical disk mapping
      - Transferred all my files to the virtual unRAID setup
      - Put the disks in the real hardware
      - New Config
      - Assigned the correct filled/cleared drives, and a parity
      Now I'm in the process of using an Ubuntu image to transfer other old files from a single drive that's now running in the VM (it's an mdadm drive, so it won't mount using Unassigned Devices). I just need to buy a decent USB flash drive before buying an unRAID license and adding the rest of the drives; this one keeps crapping out on 'Loading bzroot...' about 50% of the time. And yes, it happens both in the VM and on bare metal.
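     In case anyone needs to do the same, this is roughly how I'm getting at that old mdadm disk from the Ubuntu VM (the device name and target share are placeholders for whatever your setup uses):

        # Assemble the old single-member array and mount it read-only
        sudo mdadm --assemble --run /dev/md0 /dev/sdb1
        sudo mkdir -p /mnt/olddisk
        sudo mount -o ro /dev/md0 /mnt/olddisk

        # Then copy everything over to the unRAID share
        rsync -avh --progress /mnt/olddisk/ /mnt/unraid/share/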
  20. Thanks for clearing that up, the ability to manage files transparently on a disk level is really cool! 34% into step 2 of the pre-clear of disk 1 now, so it'll be a while before I can actually start testing stuff.
  21. Looks like that's working fine, at least. I've been pre-clearing the first disk since yesterday. Good suggestion, I'll do that with a portion of the data to make sure. This is also a great option because it means I can spread data over more drives (using high-water) instead of filling a single drive completely with the existing data. I have some follow-up questions to this approach:
      1. Does unRAID (using either the Unassigned Devices plugin or a manual mount operation) support NTFS file systems (read-only would be fine)?
      2. When copying the files, if I set up a parity, I should always copy from '/mnt/NTFSDISK' to '/mnt/SHARE' rather than directly to an unRAID disk, correct? (A rough sketch of what I mean is below.)
      Anyway, thanks for the input, this will help me out immensely!
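     To make question 2 concrete, this is the kind of thing I have in mind (the device, mount point and share name are placeholders, and I'm assuming ntfs-3g is available for the read-only mount):

        # Mount the old NTFS disk read-only somewhere temporary
        mkdir -p /mnt/ntfsdisk
        mount -t ntfs-3g -o ro /dev/sdX1 /mnt/ntfsdisk

        # Copy into the user share so unRAID spreads the data per the allocation
        # method (high-water) and keeps parity up to date as it writes
        rsync -avh --progress /mnt/ntfsdisk/ /mnt/user/myshare/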
  22. I think my setup will be very similar to yours, karateo, with the exception of Plex (I just serve my media as-is to XBMC/Kodi and Windows hosts, and the machine I'm running on isn't very powerful). I will be going with a Deluge docker for downloading, since I found a Chrome extension called Remote Deluge which seems to feature almost everything I need. I don't think it supports general downloading like DS did, but it's very rare that I need to download a non-torrent directly to my NAS, so I can live with that.
  23. I'm currently in the process of moving my data to unRAID. The hardware that is currently running my NAS setup will also be used for the unRAID setup. It contains:
      - 2x 4TB in RAID1
      - 1x 3TB
      I also bought 2 new 4TB HDDs to add to the new setup and, before that, to import my old data onto. My current plan is:
      1. Set up a VM using VMware Workstation, so I can still use my PC as well as my NAS during the transfer period
      2. Pass through the unRAID USB drive, and attach the new HDDs
      3. Pre-clear the new drives
      4. Import the existing data over the network
      5. Once that's done, shut down the old server
      6. Install the new disks along with one of the old RAID1 disks in the server
      7. Once everything has settled in, add the other RAID1 disk as parity, and add the 3TB as a cache
      This way there should still be a copy of all data at all times. Now, with the VMware Workstation pass-through, the disks aren't identified by their proper serial numbers, but rather as VMware_Virtual_SATA_Hard_Drive_000...1 etc. Will this be a problem when moving the disks to bare metal? Or will they be recognised and imported?
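     For reference, the identifier I'm worried about is just what shows up under /dev/disk/by-id; this is a quick way to compare what the VM sees with what bare metal reports (standard Linux, nothing unRAID-specific):

        # On bare metal these entries normally contain the model and real serial
        # number, which is what unRAID keys its disk assignments on
        ls -l /dev/disk/by-id/ | grep -v part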
  24. It doesn't need to be exposed to the outside world. It just needs to capture links and download them on my server instead of locally (i.e. on the machine I'm browsing on). I don't expose my machines either; my current SDS setup only works when I'm on my home network.
  25. I'm moving away from a Synology NAS to an unRAID setup. One of the only things I'll really miss is Synology Download Station, which I could previously manage easily using its Chrome extension: a full list of all downloads right in the browser, along with the ability to capture all torrent/magnet/file-sharing/general download links.
      - Can you suggest some good alternatives? For now I'll probably be running either Transmission or Deluge out of a Docker container, though I don't think either supports anything other than torrents (not an ability I absolutely need, but very useful).
      - Same for the Chrome extension: is there even anything that compares to the SDS extension? The ability to capture browser links is a must-have for me, and the ability to manage them from a menu with download notifications is a big plus.
      Any suggestions are welcome, and feel free to share your current setup and its pros/cons.