
UhClem


Posts posted by UhClem

  1. 9 hours ago, jang430 said:

    ... But while doing preclearing on 2 8 TB drives, I noticed my cpu reached 100%, and stayed there the whole time it was preclearing.  ...

    The whole time, or just post-read verify ??

    (I don't use Unraid, but I vaguely recollect the details of Joe's preclear.)

    Quote

    Will an LSI card offload cpu utilization when all drives are connected to it? 

    No, it will not affect the CPU usage. It does (effectively) eliminate the I/O bottleneck of the on-board (chipset) Sata sub-system.

    That CPU usage you saw during pre-clear (x2) should not guide any (re-configure) decision you make.

     

  2. 14 hours ago, extrobe said:

    I'm looking to upgrade a couple of my 8-port controllers (Dell H200's) with 16 port controllers, to free up some PCIe slots.

    Before recommending a seller, I'd like to make sure we are seeking the best solution. Based on your system specs (mobo/cpu), you actually have 5 PCIe g3 x8 slots -- OR only 3 slots if one (or two) need to supply x16 lanes (each).

     

If it's the latter case (you need to "free up" an x16 slot), then I'd suggest considering a SAS(/SATA) expander, which can use one of the soon-to-be x0 slots (expanders only use a PCIe slot for power (no signals/lanes) and a place to live).

     

If it's the former case (you want to repurpose an x8 slot now used by an H200), then, yes, a 16 (or more) port HBA is probably the answer. Instead of a 9201-16i, though, I'd go for an Adaptec ASR-71605 (or -72405). Less $$; it's faster (PCIe Gen3 vs Gen2 for the 9201, and 4500+ MB/s vs ~3000 MB/s); and it's low-profile (repurposing flexibility in the future). The only negative is that its mini-SAS ports are 8643 (vs 8087 on the H200/9201), so new breakout cables are needed.

    Quote

    Does anyone have any recent experience with a seller in CN/HK they can share?

I recently bought a SAS expander to play with, and despite going for low cost, was pleasantly surprised. I bought a Lenovo 03X3834 on eBay for $15 shipped from CN/HK. Ordered on 23Nov, and it arrived (NH, USA) on 06Dec. Very well-packaged (anti-static + bubble-wrap + box). Works fine! The seller's eBay store is JiaWen2108; they sell lots of HBAs, expanders, and cables.

     

  3. 10 hours ago, JorgeB said:

    ... so you want a device with better endurance and since it can't be trimmed and will be much more used faster performance will also help.

    Thanks!  Good points.

I think that spec'd endurance (e.g., 600 TBW for an 860 EVO 1TB) won't be an issue for all but extreme use cases. (For a data point, I used an 860 EVO 500GB in a DVR (DirecTV HR24) for the last year. It had ~8000 hours and ~30 TBW when I secure-erased it. Sadly, I didn't think to do any write-performance tests before the erase.) An "extreme use case" might be an array for multiple HD security cameras [e.g. 4 feeds @ 10GB/hr each (24/7) =~ 350 TBW/year]. Note, though, that you'd need a near-server-level NVMe to exceed a 1 PBW rating (for a 1 TB device).

     

As you said, the NVMe-for-parity does offer significant performance "head-room", such that its write speed can degrade (as expected w/o trim) with no effect on array write speed. It also allows one to forgo turbo mode, eliminating read-contention with the other (N-1) data SSDs [during array writes] ... and a few watts of juice.

     

  4. On 9/11/2020 at 12:28 PM, JorgeB said:

    ... and parity should be a faster device than the others, since it will be overworked, for example I use an NVMe device for parity.

    A question, please ... [I might be missing something, since I don't use Unraid.] :

    For an all-SSD array, wouldn't turbo-mode alleviate the parity-dominating aspect (with no detrimental side-effects)?

    ["good" SSDs (sata), and in decent "trim", will get write speeds very close to their read speeds, no?]

     

  5. [ Assuming that:

    Quote

    using the other controller in the MD1200

     

    also means using a separate 8088 input connector [else there's a tiny chance that the (single?) 8088-IN is flaky]

    ]

     

    Then, I suspect that you might have a glitchy H810 controller.

     

In either case, if you have an 8088=>4xSATA breakout cable, you could "test" the H810 independent of the MD1200.

     

  6. 21 hours ago, kim_sv said:

    What you say about the SATA controller in the N40L, does that mean that the 5th SATA port on the motherboard shouldn't be used either to preserve speed?

Sort of ... it shouldn't be used for one of the six array drives, since that would further divide the 650-700. But it could/should be used for the cache drive; then it could only (slightly) impact mover operations, and only if TurboWrite was enabled. The (2-port?) add-in card would connect array drives 5 and 6. A "full-spec" PCIe x1 Gen2 card (e.g. ASM1061-based), giving ~350 MB/sec, would not lower the "ceiling" of ~160 (for 4) on the mobo SATA.

     

    21 hours ago, kim_sv said:

    Is there a 4 port PCIe card that could work instead? If that is better for speed in any way?

    The only improvement would be that the cache SSD could operate at full (SataIII [~550]) speed, but that is moot, since, as your cache, it is inherently limited by your 1GbE network (on input) and your array (on output). Very little bang for the extra bucks, since you'd need a PCIe >=x2 card to handle >2 drives, and not lower the "ceiling".

     

    Here's a neat idea, if you really want to eliminate the speed bottleneck:

(Assuming that your x16 slot is available,) get an Unraid-friendly LSI-based card (typically PCIe x8, at >= Gen2) and connect the built-in 4-bay SATA backplane to it. Note that the thick cable/connector (left of SATA-5) which connects that backplane to the mobo is actually a Mini-SAS 8087, and can instead be connected to (one of the connectors on) the add-in LSI card. That completely eliminates the bottleneck for SATA 1-4.

"But, wait, there's more ..." Then you put a standard SAS-to-SATA breakout cable into the now-empty mobo connector and use 3 (of the 4) SATAs for drives 5 and 6 and your cache SSD. That gives you full SATA III for the SSD (FWIW), and you've still got SATA-5 and the eSATA, plus "breakout #4", to play with for whatever.

"But, wait ...." If you get a 4i4e card, you can (later) add 4+ more drives externally, and still no bottleneck! (Pretty neat, huh?) [Important: the LSI card must have a low-profile bracket.]

    21 hours ago, kim_sv said:

    is there any problem to build it up and start using it with the equipment I already have (PCIE card, eSATA to SATA) and upgrade it when I tracked down the new stuff? Can unRAID handle disks being moved between SATA ports?

I don't use UNRAID, so I can't be certain about the re-config issue. Once you've cleared that up, I strongly encourage you to start with what you've already got. Not only will it give you time to think it all through, and find exactly the pieces that will serve you best, but it will give you an opportunity to assess your N40L's performance with the current version of UNRAID. Then you can extrapolate from that initial CPU/throughput ratio to be sure that the CPU won't become a new/unexpected bottleneck if/when the throughput increases from 650 to 1000 (2-port card) or 1300+ (LSI).

     

     

  7. 17 hours ago, kim_sv said:

     ...

    Now the question is, how large capacity drives can this N40L handle? I'm placing an order but I don't want to buy the wrong ones. 😃

    No limitation! -- for at least another decade :)

     

(Since you mentioned that you'd be running 6 newer (i.e., faster) array drives:) The SATA controller in the N40L (and 36L + 54L) has a maximum (combined) throughput limitation of ~650 MB/sec. With 4 array drives on the built-in SATAs (and 2 on the add-in), that would limit your "parallel" operations (parity-check, rebuild, Turbo-write) to ~160 MB/sec per drive, but many/most newer drives are capable of 200-250 MB/sec max (130-150 min). To maximize your array performance, you'd want to be very selective in your choice of add-in SATA card.
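The bandwidth split above can be sketched in a couple of lines of arithmetic (the helper is mine, not anything from Unraid; the 650 and 350 MB/sec figures are the ones quoted in this thread):

```python
# Hypothetical helper: combined controller bandwidth divided evenly
# across the drives doing a parallel operation (parity check, rebuild).
def per_drive_ceiling(combined_mb_s: float, n_drives: int) -> float:
    return combined_mb_s / n_drives

onboard = per_drive_ceiling(650, 4)  # 4 array drives on the N40L chipset SATA
addin = per_drive_ceiling(350, 2)    # 2 drives on a PCIe x1 Gen2 ASM1061 card
array_ceiling = min(onboard, addin)  # the slowest group sets the array's pace
print(onboard, addin, array_ceiling)  # 162.5 175.0 162.5
```

Which is why the ASM1061 card's ~175 per drive sits above the chipset's ~162 and doesn't lower the ceiling.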

     

The one you linked is based on the SiI3132, which is a notable under-performer (80 MB/sec max for each of 2 drives [150-170 total])--scratch that one. Better would be any ASMedia ASM1061-based 2-port card: 170 MB/sec for each of 2 drives. At least that would preserve the ~160 limitation imposed by the built-in controller.

     

    [Note: these configs don't utilize the N40L's eSata, so it would stay available for a backup/import-export external drive.]

     

    • Like 1
  8. 36 minutes ago, jbuszkie said:

    Ok..  I give up..  How were you able to tell that ata7 and ata8 where the Asmedia ports from the syslog?  Because I know my motherboard..  I could figure it out..  But I can't find anywhere which ports are mapped to which controller in the syslog...  There is no mention of asmedia in the syslog.

    How about this:

    [from your 800k syslog] lines 759-763

    Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: SSS flag set, parallel bus scan disabled
    Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
    Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs 
    Mar 11 07:50:54 Tower kernel: scsi host7: ahci
    Mar 11 07:50:54 Tower kernel: scsi host8: ahci

    Then searching for 0000:04:00 leads to: [line 373]

    Mar 11 07:50:54 Tower kernel: pci 0000:04:00.0: [1b21:0612] type 00 class 0x010601

    And [1b21:0612] is the Vendor ID (Asmedia) : Device ID (ASM1062) pair for that controller.
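For the curious, that manual lookup can be mechanized. A minimal sketch (the function and condensed sample log are mine; the lines themselves are the ones quoted above) that pairs each "scsi hostN: ahci" line with the most recently announced ahci PCI address, and that address with its [vendor:device] IDs:

```python
import re

# Sample lines condensed from the syslog excerpts quoted above.
SAMPLE = """\
Mar 11 07:50:54 Tower kernel: pci 0000:04:00.0: [1b21:0612] type 00 class 0x010601
Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
Mar 11 07:50:54 Tower kernel: scsi host7: ahci
Mar 11 07:50:54 Tower kernel: scsi host8: ahci
"""

def map_hosts_to_pci(log: str) -> dict:
    """Pair each 'scsi hostN: ahci' line with the most recently announced
    ahci PCI address, and that address with its [vendor:device] IDs."""
    pci_ids = {}    # PCI address -> "vendor:device"
    hosts = {}      # "hostN" -> (PCI address, "vendor:device")
    current = None  # PCI address of the ahci controller being probed
    for line in log.splitlines():
        m = re.search(r"pci (\S+): \[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
        if m:
            pci_ids[m.group(1)] = m.group(2)
            continue
        m = re.search(r"ahci (\S+): ", line)
        if m:
            current = m.group(1)
            continue
        m = re.search(r"scsi (host\d+): ahci", line)
        if m and current:
            hosts[m.group(1)] = (current, pci_ids.get(current))
    return hosts

print(map_hosts_to_pci(SAMPLE)["host7"])  # ('0000:04:00.0', '1b21:0612')
```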

     

    "A rose by any other name ... is still a rose."

     

    • Thanks 1
  9. 23 hours ago, johnnie.black said:

    If the R710 is PCIe 2.0 then yes, might as well get a 9211-8i or similar.

Maybe ... maybe not. Consider that the 9211-8i (and its LSI SAS2008-based brethren) appears to be bottlenecked by its internal CPU/memory/firmware, which precludes it from achieving the maximum (real-world/measurable) PCIe Gen2 x8 throughput of 3200 MB/s. Your tests, on the H310, show 2560 (8x320) MB/s.

     

Whereas the 9207-8i (and its SAS2308 brethren) does have the "muscle-power" to achieve, and exceed, full Gen2 x8 throughput (3200); you measured, on a Gen3 x8, 4200 (8x525) MB/s--and that is limited not by the 2308, but by maxing out the SATA III (real-world) speed of ~525. [I suspect that 12Gb SAS SSDs would do better, no?***] So, for 9211 vs 9207, using Gen2, it comes down to a cost/benefit decision (plus an appropriate degree of future-proofing; re-use following a mobo upgrade).

     

    *** [Edit] Actually, NO -- it looks like 12G SAS connectivity was not offered until the 93xx series.
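The 9211-vs-9207 comparison boils down to "aggregate = min(link, HBA silicon, summed drives)". A quick sketch with the numbers above (the 4500 and 6400 figures are my assumptions, standing in for "not the limiter" and real-world Gen3 x8 respectively):

```python
# Aggregate throughput is capped by the weakest of: the PCIe link,
# the HBA's own processing ceiling, and the summed drive speeds.
def hba_aggregate(link_mb_s, hba_mb_s, n_drives, drive_mb_s):
    return min(link_mb_s, hba_mb_s, n_drives * drive_mb_s)

GEN2_X8, GEN3_X8 = 3200, 6400  # real-world MB/s (6400 is an assumption)

sas2008 = hba_aggregate(GEN2_X8, 2560, 8, 525)     # 2560: HBA silicon limits
sas2308_g2 = hba_aggregate(GEN2_X8, 4500, 8, 525)  # 3200: link limits on Gen2
sas2308_g3 = hba_aggregate(GEN3_X8, 4500, 8, 525)  # 4200: the drives limit
```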

     

  10. 2 hours ago, charleslam said:

    I assumed that since the pro 3.1 card is not the same as the 3.0 card, that there would be its own problems.  but it sounds like this has been documented and tested as working with unraid. 

     

    Still, if i have learned one thing from this forum is things arent always as they appear.  have you confirmed that the 3.1 pro card works with unraid? Often people just refer to the "Pro" card and there are 2 different cards we are talking about here. 

     

    Thanks,

     

    Charles Lam

[my mistake] Yes, all those "generic" references to the Sonnet Allegro Pro card clouded my thinking. [Note: the one hyperlink to the Pro card (on the first page) now 404's--shame on Sonnet--they should have "preserved" the link, redirecting it to its new "home" under their LegacyProducts, and suggested the new "Pro".]

     

I totally agree with you regarding the need for documented specificity before taking action on a reference/suggestion for a functioning solution. And, no, I'm not using any of these cards--just a passing interest in atypical "solutions". [I was aware of this thread, but never actually dug into it--then I was trying to find out more about the Pericom Semiconductor PI7C9X2G608GP switch for this UnRAID thread ... and here we are.]

     

I don't know what chips are used on the Allegro Pro 3.0 card, but the 3.1 card uses the Pericom PI7C9X2G308GP (you can see it in the picture at the Amazon link) [4<==>2x2]. I'm real curious to know which chip is used for the (2) USB controller chips. If anyone does get one to try, please do post a relevant snippet from either the kernel messages or from lspci. Thanks.

    [Edit] Just found it--the 3.1 Pro uses 2 Asmedia ASM1142

     

    Sorry for the confusion/distraction. Good luck to all on your quest.

     

  11. On 11/7/2019 at 1:35 AM, charleslam said:

    ... unfortunately the allegro pro is no longer being produced ...

    Amazon link

     

Something interesting about the Sonnet card: it uses a PCIe switch chip to split a PCIe_V2 x4 connection into two x2's, each going to one of two USB controller chips. Apparently, each of those USB chips has two "sub-devices", each of which can be passed through (as desired here).

     

    Compare that to the Startech card, which also has a PCIe switch chip (from the same company), but its switch splits a PCIe_V2 x4 connection into four x1's, each going to its own USB chip. Yet, this one causes grief.

     

In addition to working as you all desire, the Sonnet card has the advantage of higher maximum single-port throughput, since each of its (2) x2 chips can share its two-lane bandwidth on-demand/as-needed between its two "sub-devices". Whereas the Startech, even if it worked for you, would limit each "sub-device" to an x1 lane of throughput. [I realize that connectivity (vs bandwidth) is more the goal here, but ... just saying ...]
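The pooled-vs-dedicated lane difference is simple arithmetic (the ~500 MB/s per-lane figure is the theoretical PCIe 2.0 rate, used here only for illustration):

```python
LANE_MB_S = 500  # theoretical PCIe 2.0 per-lane throughput (approximation)

# Sonnet: each USB chip sits on x2, shared between its two sub-devices,
# so one busy sub-device can borrow both lanes.
sonnet_port_peak = 2 * LANE_MB_S    # 1000 MB/s when the sibling is idle

# Startech: each USB chip sits on a dedicated x1, so every sub-device
# is capped at one lane no matter what the others are doing.
startech_port_peak = 1 * LANE_MB_S  # 500 MB/s, always
```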

     

    --UhClem  "How can you be in two places at once, when you're not anywhere at all."

     

     

  12. From the above screen shot, I would guess that your controller card has a single (2-port) ASM1062 Sata-controller chip, and each of those 2 ports is connected to one (of 2 [PM 0 & PM 1]) port-multiplier chip. Since that is a POST screen, only the first port/drive on each PM chip is seen (by the BIOS). That is not a problem.

     

    The problem is that the kernel does not recognize these specific port-multiplier chips, and never "activates" them--this results in none of the PM-connected drives going active. As stated by others, switch to a card known to work with the hardware (your mobo) and software (unRaid) in play.

     

     
