Prograde

Members
  • Content Count: 7
  • Joined
  • Last visited

Community Reputation

3 Neutral

About Prograde

  • Rank: Newbie

Converted

  • Gender: Undisclosed

  1. I believe users with this specific card (Startech PEXUSB3S44V) have no issues passing it through to the VM, but have issues with the card once it is inside the VM. Mine throws a Code 10 error on the USB hub, saying the device "cannot start", when passed to a Windows 10 VM using either the provided drivers or Microsoft's drivers. It also does not function with an Ubuntu 16.04 VM, in my experience, which has native UAS/UASP support if I'm not mistaken. Either we need some sort of kernel patch to get it to work, or Startech's implementation on this card is suspect, which may be the case either way. The… (a host-side driver-binding check is sketched after this list)
  2. Hello, I have had great success with both ESXi and unRAID passing through a USB controller to VMs using the Buffalo IFC-PCIE2U3, which is a simple dual-port card powered by a single surface-mount Renesas µPD720202. Now, you can imagine physical PCI-e slots can disappear quickly if you want to pass through such native USB controllers to multiple VMs. What I have been looking for is a quad-port, four-controller USB (3.0 or 2.0, I just want some I/O) PCI-e card that can be split into four different pass-through devices to use with four different VMs, similar to how something like an Intel I350-T… (an IOMMU-group check for this is sketched after this list)
  3. Wow, thank you for all of your research, that is quite useful; what an amazing amount of information you have come up with! I have saved that link for future reference. I'm honestly surprised expanders did so well in your tests; very little overhead with them. Looks like my rebuild times WOULD go up using an H310 and a dual-link expander, to something like (doing some math; a worked version follows after this list) 22 hours if all 24 drive bays are populated. This, being limited to 95 MB/s, would be 4-6 hours slower than normal, which is just barely bearable, 24 hours being my cut-off. I'm almost willing to just say SAS…
  4. Hello, I am looking to move my current Unraid server to a larger Supermicro 4U case with more drive bays, specifically something in the CSE 846 lineup, which sports 24 front drive bays and supports full-height PCI-e cards. I've been using an H310 flashed to IT mode with a case-imposed eight-drive limit in a CSE 745, and the move to dual parity pushed my need for more bays, even with 6+2 8TB drives ("48TB accessible"). Now, with the "SAS-2 era" versions of the CSE 846, you can settle for either an expander backplane (BPN-SAS2-846EL1), limited to a maximum of 48 Gbps over two SFF-8087 upli… (the math behind that ceiling is sketched after this list)
  5. Hello! Good news! Shortly after my last reply I bought an HDMI coupler to go between my TV connection and the keystone run I have for the Unraid server (both unfinished at the time; my other troubleshooting systems used the same drop). No problems at all with that, being wired directly to the TV. I am limited to using HDMI, as I am running 20 ft and 100 ft in-wall repeater runs between keystones and have no other video+audio HDMI gear to test with (DisplayPort is my preference). I did, comically, try to haul the 70" TV down to the basement rack where the Un…
  6. Yes, as buried in my post above (sorry about the number of words, I tend to be verbose), I have changed both the Audio and Video functions of the GTX 980 to MSI mode; I have made this a must-do on every card I use HDMI with, so every card I have tested so far has been in MSI mode (the registry toggle I use is sketched after this list). I am going to drop that GT 740 into the X10SRL-F server after work and test HDMI. Latest BIOS on that board too (dated 12/2015). Could it be a problem with the motherboard BIOS? It really seems unlikely to me, as the GTX 960 had issues in all of the systems I tested it in, but maybe the problem lies with…
  7. Hello, I've been trying to track down a hard-to-place and hard-to-replicate video/audio glitch I am having with Nvidia's Maxwell GPUs when passed to a Windows 10 virtual machine under unRAID with QEMU. Let me first say, I have been passing through AMD GPUs with ESXi for probably 3-4 years now, since ESXi 5.0, with varying results, with most of my issues stemming from AMD's drivers. While I notice much better performance and general stability using Nvidia GPUs under unRAID, I am having one major issue that is stopping me from sticking with this solution. Problem Description: Anyw…
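
On the PEXUSB3S44V post (item 1): before blaming the card, it is worth confirming each of its controllers is actually bound to vfio-pci on the host. A minimal sketch, assuming a Linux host with standard sysfs paths; the PCI addresses are placeholders (not taken from the post), so substitute the ones lspci reports for the card.

```python
import os

# Placeholder bus addresses for the card's four controllers; find the
# real ones with lspci (these are assumptions, not from the post).
FUNCTIONS = ["0000:04:00.0", "0000:05:00.0", "0000:06:00.0", "0000:07:00.0"]

for dev in FUNCTIONS:
    driver_link = f"/sys/bus/pci/devices/{dev}/driver"
    if os.path.islink(driver_link):
        driver = os.path.basename(os.readlink(driver_link))
    else:
        driver = "(no driver bound)"
    # 'vfio-pci' means the controller is reserved for the VM;
    # 'xhci_hcd' means the host is still claiming it.
    print(f"{dev}: {driver}")
```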
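On the quad-controller card question (item 2): whether a card can be split into four pass-through devices comes down to each controller landing in its own IOMMU group. A quick sketch that lists the groups on a Linux host; the sysfs layout is standard, and nothing here is Unraid-specific.

```python
import glob
import os

# Walk /sys/kernel/iommu_groups and print the devices in each group.
for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p))):
    devices = sorted(os.listdir(os.path.join(group, "devices")))
    print(f"IOMMU group {os.path.basename(group)}: {', '.join(devices)}")

# A quad-controller card splits cleanly only if each of its four
# functions shows up in a separate group (or shares one solely with
# its upstream PCIe bridge).
```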
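On the rebuild-time estimate (item 3): the arithmetic behind the "something like 22 hours" figure is just capacity over throughput. A worked version, assuming a full sequential rebuild of one 8 TB drive at the ~95 MB/s per-drive ceiling quoted in the post:

```python
# Back-of-envelope rebuild time: capacity / throughput.
capacity_bytes = 8e12        # one 8 TB drive (decimal terabytes)
throughput_bps = 95e6        # ~95 MB/s per-drive limit behind the expander
hours = capacity_bytes / throughput_bps / 3600
print(f"Rebuild time: {hours:.1f} h")   # ~23 h, the same ballpark as 22 h
```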
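On the BPN-SAS2-846EL1 ceiling (item 4): the 48 Gbps quoted for two SFF-8087 uplinks works out as follows. The 8b/10b encoding overhead is a SAS-2 protocol fact; the per-drive split assumes all 24 bays streaming at once.

```python
# Two SFF-8087 uplinks x 4 lanes x 6 Gbps = 48 Gbps raw, as quoted.
lanes = 2 * 4
raw_gbps = lanes * 6
# 8b/10b encoding leaves 80% of raw bandwidth as usable payload.
usable_mb_s = raw_gbps * 0.8 / 8 * 1000      # ~4800 MB/s total
per_drive_mb_s = usable_mb_s / 24            # split across 24 bays
print(f"Theoretical per-drive: {per_drive_mb_s:.0f} MB/s")  # ~200 MB/s
# Measured throughput behind an expander lands well below this
# theoretical ceiling (compare the ~95 MB/s test figure in item 3).
```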
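On MSI mode (item 6): the usual way to flip a passed-through GPU's Audio and Video functions to MSI inside a Windows guest is the MSISupported value under the device's instance key in the registry. A hypothetical sketch using Python's winreg, run as Administrator inside the VM; the device instance path below is a placeholder, so copy the real one from Device Manager (Details tab, Device instance path), repeat for the HDMI audio function, and reboot the guest for the change to take effect.

```python
import winreg

# Placeholder instance path for a GTX 980's video function; replace it
# with the actual path shown in Device Manager.
DEVICE_KEY = (
    r"SYSTEM\CurrentControlSet\Enum\PCI"
    r"\VEN_10DE&DEV_13C0&SUBSYS_00000000&REV_A1\0123456789ABCDEF"
    r"\Device Parameters\Interrupt Management"
    r"\MessageSignaledInterruptProperties"
)

# Create/open the key and set MSISupported = 1 (request MSI delivery).
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, DEVICE_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MSISupported", 0, winreg.REG_DWORD, 1)

print("MSISupported set; reboot the VM to switch to MSI mode.")
```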