Posts posted by Prograde

  1. On 6/24/2017 at 7:55 PM, saarg said:

     

     I don't see how enabling something in the kernel will help with passing the card through. When passing a device through to a VM, the VM needs all the drivers for the device. In the host (unRAID), the device is bound to the vfio driver.

     

     I believe users with this specific card (StarTech PEXUSB3S44V) have no issues passing it through to the VM, but run into problems with the device once it is inside the VM. Mine throws a Code 10 error ("this device cannot start") on the USB hub when passed to a Windows 10 VM, using either the provided drivers or the Microsoft drivers. In my experience it also does not function in an Ubuntu 16.04 VM, which has native UAS/UASP support if I'm not mistaken.
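
     For what it's worth, here is a minimal sketch of how I confirm the host side is doing its part, i.e. that each function of the card is bound to vfio-pci before the VM starts (saarg's point above). The PCI addresses are placeholders for whatever the controllers enumerate as on a given board:

     import os

     # Minimal sketch: confirm each function of the card is bound to vfio-pci on
     # the host before the VM starts. The PCI addresses below are placeholders;
     # substitute whatever lspci shows for the card's controllers on your board.
     CONTROLLERS = ["0000:0a:00.0", "0000:0b:00.0"]  # hypothetical addresses

     for bdf in CONTROLLERS:
         driver_link = f"/sys/bus/pci/devices/{bdf}/driver"
         if os.path.islink(driver_link):
             driver = os.path.basename(os.readlink(driver_link))
         else:
             driver = "(no driver bound)"
         # Anything other than vfio-pci means the host never released the
         # controller, which is a different problem from the Code 10 in the guest.
         print(f"{bdf}: {driver}")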

     Either we need some sort of kernel patch to get it to work, or StarTech's implementation on this card is suspect, which may be the case either way. The odd thing about this card specifically is that it uses the same Renesas µPD720202 controllers that I know work when passed to a VM in unRAID and ESXi, at least when there is no PCI Express switch (PLX) between them and the host. This points to a potentially botched implementation on StarTech's part, unless they use some sort of custom firmware on the controllers themselves.

     

     However, I have records of this specific card working as expected in ESXi 6.0 U3 and ESXi 6.5.0d. I believe VMware added support for the UAS/UASP standard with a patch in ESXi 5.5 back in 2014, and the Code 10 error in a Windows VM was what you would get before that patch. I know this in no way means unRAID needs explicit UAS/UASP support to get this card to work, since ESXi is not KVM, but it is worth trying out.

    Thank you for the help!

  2. Hello,

     I have had great success with both ESXi and unRAID passing through a USB controller to VMs using the Buffalo IFC-PCIE2U3, a simple dual-port card built around a single surface-mounted Renesas µPD720202 controller.

     As you can imagine, physical PCI-e slots disappear quickly if you want to pass native USB controllers like that through to multiple VMs. What I have been looking for is a quad-port, four-controller USB PCI-e card (3.0 or 2.0, I just want some I/O) that can be split into four separate pass-through devices for four different VMs, similar to how something like an Intel I350-T4 can be split into four separate single-port NICs.
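
     For context, the way I check whether a card like this can actually be carved up is to list the IOMMU groups on the host; each controller needs to land in its own group before it can be handed to a separate VM. Here is a rough sketch of what I run (nothing in it is specific to any particular card):

     import glob

     # Rough sketch: list IOMMU groups and the PCI devices in each, to see whether
     # a multi-controller card splits into one group per controller.
     groups = {}
     for dev_path in glob.glob("/sys/kernel/iommu_groups/*/devices/*"):
         parts = dev_path.split("/")           # .../iommu_groups/<group>/devices/<bdf>
         group, bdf = int(parts[4]), parts[6]
         groups.setdefault(group, []).append(bdf)

     for group in sorted(groups):
         print(f"IOMMU group {group}:")
         for bdf in sorted(groups[group]):
             print(f"  {bdf}")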

     I have found and tested a few devices that use a PLX chip to feed four separate USB controllers, but have had no success finding a reliable solution that works with unRAID.

     Here are the two models I have tried so far:

     

     HighPoint RocketU 1144C with four ASMedia ASM1042A controllers - latest firmware dated 2014

     

     IOMMU groups split up OK with the ACS override and VFIO blacklisting.
     Passes through to VMs OK and is seen by the guest OS.

     

     Windows: The HighPoint drivers work OK, and all is well on first boot; USB seems to work fine. Shutting down the VM and starting it back up logs an error:

    DMAR: [DMA Read] Request device [0a:00.0] fault addr ed000 [fault reason 06] PTE Read access is not set.

     The card will not come back to a working state until the entire host is rebooted.

     

     Ubuntu: Works perfectly with the native drivers on first boot. Shutting down the VM and starting it back up logs an error:

    DMAR: [DMA Read] Request device [0a:00.0] fault addr ed000 [fault reason 06] PTE Read access is not set.

     The card will not come back to a working state until the entire host is rebooted.
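
     The one thing I still want to try before resigning myself to full host reboots is asking the kernel to reset or re-enumerate the stuck function through sysfs. Here is a sketch of what I mean, run as root on the unRAID host, with a placeholder PCI address; I have no idea yet whether it actually brings these controllers back:

     import os
     import time

     # Diagnostic sketch (run as root on the host): try a function-level reset of
     # the stuck controller and, failing that, remove it and rescan the bus,
     # instead of rebooting the whole host. The address below is a placeholder.
     BDF = "0000:0a:00.0"  # hypothetical address of the stuck USB controller
     dev = f"/sys/bus/pci/devices/{BDF}"

     if os.path.exists(f"{dev}/reset"):
         # The kernel only exposes 'reset' when it knows a reset method for the device.
         with open(f"{dev}/reset", "w") as f:
             f.write("1")
         print(f"Issued reset to {BDF}")
     else:
         # Fall back to removing the device and asking the bus to re-enumerate it.
         with open(f"{dev}/remove", "w") as f:
             f.write("1")
         time.sleep(1)
         with open("/sys/bus/pci/rescan", "w") as f:
             f.write("1")
         print(f"Removed and rescanned {BDF}")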

     

     ESXi/Windows: Same issue after a clean shutdown and cold boot; the controller is not handed back to the host/VM properly.

     

     StarTech PEXUSB3S44V with four Renesas µPD720202 controllers - latest firmware dated 2012


     IOMMU groups split up OK with the ACS override and VFIO blacklisting.
     Passes through to VMs OK and is seen by the guest OS.

     

     Windows: Throws a Code 10 error directly on the hub; the device cannot start. Neither the StarTech drivers nor the Microsoft xHCI drivers work.

     

     Ubuntu: Does not work at all, presumably due to the same issue the Windows VM is seeing.

     

     ESXi/Windows: Seems to work identically to the IFC-PCIE2U3, in that everything functions both inside and outside the VMs and across shutdowns and reboots. This is where I think there is hope for the StarTech card with unRAID/KVM, either through some discrete setting or, as seen in the last link below, a kernel supporting USB Attached SCSI.
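
     Before going down the custom-kernel road, it is worth checking whether the stock unRAID kernel already has UAS support. Here is a quick sketch of how I would check, assuming the kernel exposes its config at /proc/config.gz (not every build does) and that a modular uas driver sits in the usual drivers/usb/storage path:

     import gzip
     import os

     # Sketch: check whether the running kernel has USB Attached SCSI (UAS) support,
     # via /proc/config.gz if the build exposes it, otherwise by looking for a
     # uas.ko module in the usual location for this kernel release.
     release = os.uname().release

     if os.path.exists("/proc/config.gz"):
         with gzip.open("/proc/config.gz", "rt") as f:
             for line in f:
                 if line.startswith("CONFIG_USB_UAS"):
                     print(line.strip())  # =y (built in) or =m (module)
                     break
             else:
                 print("CONFIG_USB_UAS not set")
     else:
         uas_ko = f"/lib/modules/{release}/kernel/drivers/usb/storage/uas.ko"
         print(f"{uas_ko}: {'present' if os.path.exists(uas_ko) else 'missing'}")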


     Are there any USB PCI-e controllers with four or more ports that other users have found to work properly with unRAID? Or even two-port cards? A reliable way to pass USB controllers en masse to VMs would add a whole lot of value to the platform.

     I have found a few references around the web for both cards, including some recent information regarding the StarTech PEXUSB3S44V and a possible custom kernel:

     

    HighPoint 1144C References:

     

    https://www.redhat.com/archives/vfio-users/2016-June/msg00102.html

     

     

     StarTech PEXUSB3S44V References:

     

     The last link is the most promising and something I am looking to try out. I am willing to try just about anything, as I have two spare unRAID test servers currently running and the hardware in hand. Any insight or experience is welcome.

     

    Thank you for your help!

  3.  

     Wow, thank you for all of your research; that is quite useful, and an amazing amount of information you have come up with! I have saved that link for future reference. I'm honestly surprised the expanders did so well in your tests; there is very little overhead with them.

     

     Looks like my rebuild times WOULD go up using an H310 and a dual-link expander, to something like 22-23 hours (rough math in the sketch below) if all 24 drive bays are populated. Being limited to roughly 95 MB/s per drive, that would be 4-6 hours slower than normal, which is just barely bearable, with 24 hours being my cut-off.
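
     For my own sanity, here is the back-of-envelope version of that math; the per-drive MB/s figures are rough averages backed out of my current 16-18 hour runs and your expander numbers, not measurements from this exact hardware:

     DRIVE_TB = 8

     # Back-of-envelope rebuild/parity-check time: the whole drive has to be read
     # end to end, so time is roughly capacity / average throughput. The MB/s
     # figures are rough averages, not measurements from this exact hardware.
     def hours(avg_mb_per_s):
         total_mb = DRIVE_TB * 1_000_000      # 8 TB = 8,000,000 MB (decimal)
         return total_mb / avg_mb_per_s / 3600

     print(f"Direct attach at ~130 MB/s avg: {hours(130):.1f} h")  # ~17 h, matches my 16-18 h today
     print(f"Dual-link expander at ~95 MB/s: {hours(95):.1f} h")   # ~23 h, the 22-23 h estimate above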

    I'm almost willing to just say SAS-2 with an expander is the fastest I would need until the physical SATA interface is replaced altogether.

     Anyway, I like the rest of your points in that link. Fortunately, the rest of the system I built (mostly eBay deals) is still up to speed, even at almost three(!) years old:
     Supermicro X10SRL-F, Xeon E5-1660 v3, 64GB Samsung M393A2G40DB0-CPB0, 1.4TB P420M PCI-e cache (works on newer versions), Intel X520-DA2

     I've loved the simplicity of Unraid, and the community support from users like you has been awesome. Thank you again!

  4. Hello,

     I am looking to move my current Unraid server to a larger Supermicro 4U case with more drive bays, specifically something in the CSE 846 lineup that sports 24 front drive bays and supports full-height PCI-e cards. I've been using an H310 flashed to IT mode, limited to eight drives by my CSE 745 case, and the move to dual parity has pushed my need for more bays, even with 6+2 8TB drives ("48TB accessible").

     

     Now, with the "SAS-2 era" versions of the CSE 846, you can either settle for an expander backplane (BPN-SAS2-846EL1), limited to a maximum of 48 Gbps over two SFF-8087 uplinks, or choose a "direct attach" backplane: the "TQ" with 24 individual SATA ports or the "A" with 6 SFF-8087 ports, some of which are said to support even SAS-3 speeds, since there is no expander chip between your drives and your HBAs. I have a feeling that the 48 Gbps cap imposed by the expander would not be noticeable in my typical use, but I can see it being a severe detriment to rebuild and parity-check speeds when all drives in a fully populated 24-bay server are being read simultaneously. Although, theoretically, 48 Gbps works out to 250 MB/s for each of 24 disks (before 8b/10b encoding overhead), which is faster than the 7200 RPM HGST drives I use. I know theory is only good for just that, theorizing, so I remain concerned; the rough numbers are in the sketch below.
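
     To put rough numbers on that worry, here is a sketch of the theoretical per-drive ceiling behind a dual-link SAS-2 expander, counting only the uplink lanes and 8b/10b encoding (no SAS protocol or HBA/PCIe overhead, so real-world numbers will be lower):

     # Theoretical per-drive ceiling behind a dual-link SAS-2 expander backplane:
     # 2 x SFF-8087 uplinks = 8 lanes at 6 Gbps raw, 8b/10b encoded, shared by 24
     # drives. Ignores SAS protocol and HBA/PCIe overhead, so reality will be lower.
     LANES = 2 * 4                 # two SFF-8087 uplinks, four lanes each
     RAW_GBPS_PER_LANE = 6.0       # SAS-2 signaling rate
     ENCODING = 8 / 10             # 8b/10b line encoding
     DRIVES = 24

     raw_gbps = LANES * RAW_GBPS_PER_LANE
     usable_mb_s = raw_gbps * 1000 / 8 * ENCODING
     per_drive = usable_mb_s / DRIVES

     print(f"Raw uplink bandwidth: {raw_gbps:.0f} Gbps")           # 48 Gbps
     print(f"Usable after 8b/10b:  {usable_mb_s:.0f} MB/s total")  # ~4800 MB/s
     print(f"Per drive, 24 active: {per_drive:.0f} MB/s")          # ~200 MB/s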

     

    Here is a great breakdown of SMCI backplane types by nephri at ServeTheHome: LINK

     What I am asking is whether anyone with experience can tell me if using an expander with 24 8TB 7200 RPM drives would significantly affect rebuild and parity-check times. I am currently sitting at 16-18 hours for an 8TB 7200 RPM rebuild/parity check using an 8-port TQ-style backplane and a single LSI 9211-8i equivalent, which is already not exactly timely. I am guessing that moving to an expander with more drives would push rebuild/parity-check times over 24 hours, which is my personal limit. Going up to 6 SFF-8087 connections would require another 4-port LSI card like a 9305-16i or 9201-16i, but that should not be a big deal if it saves time on rebuilds and parity checks.

    Thank you for your input!
