
Posts posted by UhClem

  1. I don't think it's a cable problem. There are (suggestive) indications, in the syslog, that the problem with your Disk3 is due to a flaky SATA port (ata6) on your motherboard (chipset). I would swap connections at the motherboard between Disk3 and another DiskN. If the problems DO "transfer" to DiskN (and stay on ata6), that eliminates Disk3 and its cable, and nails it to the board.

     

    If not, ...

     

    Disclaimer: not an Unraid user (just like fun problems)

     

     

  2. Odd ... Your reported symptoms (only writing is slow, and at 10-20 MB/sec) are precisely indicative of a drive with write-caching disabled. But there's no arguing with your test result (-W saying write-caching is ON) [assuming /dev/sdb IS one of the two slowpokes].
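
    For reference, this is the kind of quick check I mean (device names are placeholders; point it at the actual slow drives):

    hdparm -W /dev/sdb /dev/sdc      # report write-cache state (1 = on, 0 = off)
    hdparm -W1 /dev/sdb              # if a drive reports 0, re-enable write caching, then re-test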

     

    Back to "Plan A" ... maybe Squid & others can see something from your Diagnostics [upload].

     

  3. 4 hours ago, Vr2Io said:

    It even has an 8-SATA (two 8087 sockets) version, which looks like it has two JMB585 controllers, and testing got a minimum of 190 MB/s per disk.

     

    Can't be two JMB585s without a PCIe switch (or a very bizarrely configured [chipset] M.2 slot with "self-bifurcation").

     

    More likely ONE JMB585 and one JMB575 port multiplier. Cute, still ...

     

  4. On 6/16/2022 at 4:29 PM, charlesshoults said:

    ...

    • I've purchased one StarTech Mini-SAS adapter so far, and will need to purchase one more.  SFF86448PLT2  0.5m cables for the card arrived today and the card itself will be here tomorrow.

    ...

    The StarTech adapter cards I'm using in the UnRaid server are pricey, more than $80 each.  Other brands of cards can be had for under $40 each, but I'm not sure how much faith I should put in branding.  Am I wasting my money with the more expensive adapters?

     

    Depends where you buy them 😀

    [Link]

    $22 vs $80 ?? ... [Inflation is everywhere. Is this the "exception that proves the rule"?]

     

  5. 9 hours ago, budgidiere said:

    Does a ROG MAXIMUS X CODE work?

    Yes. In addition to offering x16/x0 or x8/x8, it even provides for bifurcating the (2nd) x8 to x4/x4, allowing 2 NVMe's to feed off the slot. And there is also an x4 (x16 mechanical) slot (plus 2/3 x1 slots). But it's not a budget mobo (like the MSI one)--I'd expect that a mobo with similar functionality and flexibility could be found at an intermediate price point. I'm not familiar with the MB scene nowadays, so I have no recommendation.

     

  6. 20 hours ago, Hoopster said:

    With HDDs attached to a PCIe 2.0 HBA, there is plenty of bandwidth.  I have several HDDs attached to a Dell H310/LSI 9211-8i (PCIe 2.0) and bandwidth is not a problem.  HDDs never get close to using PCIe 3.0 bandwidth.  ...

    I completely agree ... but (for a 2-SAS-port card) ONLY if the HBA is given 8 lanes AND never an expander [i.e. max # of drives is 8]. Given that the proposed case has 15 x 3.5" bays, I want to allow for growing into that. [Also, given that case, I'm confused by the OP's

    Quote

    I'm also still looking at drives, anyone have recommendations? I'm probably never going to need more than 24TB.

    OP ?? ]

    I see 2 GPUs in that list, plus mention of a 10G NIC. Also, looking to the future, it's a good idea to provide for additional NVMe's--a real boon for speeding up many workflow scenarios. Sounds like a tight "budget" for slots and lanes. Toward these ends, I would want to use a Gen3 HBA, since it offers the option of 3200+ MB/s of bandwidth using ONLY 4 lanes; and Gen3/SAS2 HBAs are only a few $ more than Gen2/SAS2 these days [frugality ON].

  7. On 3/14/2022 at 10:30 AM, budgidiere said:

    I'm setting up a new sever with the following parts, anyone see any pitfalls?
    ...

    - MSI Z370-A-PRO
    ...
    - LSI 9201-8i
    ...

    Don't get a mobo w/all 16 (CPU) PCIe lanes going to a single x16 slot. Look for boards that (at least) offer the option of (2 slots as) x16/x0 OR x8/x8.

    Also, don't waste potential bandwidth by using Gen2 HBAs in a Gen3 ecosystem.

     

  8. You do NOT want a Mobo with the PCIe config like that MSI you linked. It has all 16 of the CPU's PCIe lanes going to ONE of its x16 slots. (The other two x16-wide slots, one of them x4 lanes & the other x1 lane, are connected to the chipset.)

    Nothing wrong with the Z390 chipset itself, but look for a mobo that allocates the CPU's 16 lanes among two slots, either as x16/x0 (when that 2nd slot is empty) OR as x8/x8 (when both are occupied).

     

    Even better (but I've never seen it!!) would be three CPU-connected slots, configured as x16/x0/x0, x8/x8/x0, or x8/x4/x4.
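
    Whatever board you land on, it is easy to confirm how the CPU lanes actually got split once the cards are in -- a generic check (the 01:00.0 address below is just an example; substitute the device you care about):

    lspci                                             # note the addresses of your GPU / HBA / NVMe devices
    lspci -s 01:00.0 -vv | grep -e LnkCap -e LnkSta   # LnkCap = max width/speed; LnkSta = what was actually negotiated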

     

     

  9. The (unique/specific) C246 chipset on your own [3Server] motherboard is flaky. (You should get a direct replacement.)

     

    The best documentation (readily available) for the issue is the output of:

    grep -e "ATA-10" -e "AER: Corr" -e "FPDMA QUE" -e "USB disc" syslog.txt

    Use the syslog.txt from the 20210930-0723 .zip file. It has all 4 HDDs throwing errors. The "ATA-10" pattern just documents which HDD is ata[1357].00. I'm pretty sure there are also relevant NIC errors in there, but I'm networking-ignorant. Note that all of these errors emanate from devices on the C246. Please examine a syslog.txt from your test-run on your Gen10 MS+; that box also uses the C246, but its syslog.txt will have none of these errors.

     

    Attached is the output from the above command (filtered thru uniq -c, for brevity).

    c246.txt
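
    (For the curious, the "filtered thru uniq -c" step was just the same grep piped through uniq -- my reconstruction:)

    grep -e "ATA-10" -e "AER: Corr" -e "FPDMA QUE" -e "USB disc" syslog.txt | uniq -c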

     

  10. 3 minutes ago, Sandwich said:

    China.

     

    That said, given the very reasonable price, I'm willing to risk it. ;) Thanks!! I'll try to report back when I test it out.

    Understood. Given the ~20 (non-empty) reviews [mostly Russia/E. Eur], the product is as-advertised and functions properly; the only negative appears to be the seller's packaging (lack of padding). I'm sure the community will welcome your report. Mazel tov.

     

  11. 2 minutes ago, Sandwich said:

    Well, I can easily ensure that it only ever has HDDs connected to it, and since those max out at what, 150MBps under ideal conditions? 150*6 = 900MBps, so an ~850MBps limit probably wouldn't ever be reached or noticed... ¯\_(ツ)_/¯

     

    I have some 4TBs that do 200 MB/s (typical 4TBs max out at ~150-160); typical 8TBs ~200; 12TBs ~240; 16TBs 260+.

    [ Kafka wrote: "Better to have and not need, than to need and not have." ]

    Quote

    That said, that AliExpress card looks fine by me; how confident can we be that it's actually what it says it is?

    170+ orders & 60+ reviews (@4.8/5) [for what it's worth ??]

     

  12. 6 hours ago, Sandwich said:

    ...
    Since I ideally want a 6+ port card, would this be a wise purchase? https://www.amazon.com/gp/product/B08L78QHSJ

    No !!!! NOT that card. Unless you really want/need to restrict yourself to an x1 physical slot, hence limiting your total throughput to ~850 MB/s.

     

    If you have an x4 physical slot (which is at least x2 electrical), this one looks like an excellent value:

    https://www.aliexpress.com/item/4001269633905.html

    getting full PCIe3 x2 throughput of ~1700 MB/s (at < $30 USD)
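
    The arithmetic behind those two numbers, roughly (my figures, stated loosely):

    PCIe 3.0 per lane: 8 GT/s x (128/130 encoding) = ~985 MB/s raw
    minus packet/protocol overhead               => ~850-880 MB/s usable per lane
    so x1 => ~850 MB/s total, and x2 => ~1700 MB/s total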

     

  13. On 7/8/2021 at 12:46 PM, JorgeB said:

    ...  this combo with SAS3 devices should be capable of around 4.4GB/s with single link and 7GB/s with dual link.

    As for the 7GB/s "estimate", note that it assumes that the 9300-8i itself actually has the muscle. It probably does, but it is likely that it will be PCIe limited to slightly less than 7.0 GB/s (ref: your x4 test which got 3477; [i.e. < 7000/2])

    On 7/8/2021 at 12:46 PM, JorgeB said:

    Curiously, total bandwidth decreases with more devices, possibly due to the hardware used or Unraid, but I don't think so: using @UhClem's script I only got around 4500MB/s max with dual link (with or without the B option) using 12 or more devices, so there shouldn't be a CPU/Unraid bottleneck. I suspect the technology used by PMC to emulate a 12G link with 6G devices loses some efficiency as you add more devices.

    Interesting. The under-performance of my script suggests another deficiency of the PMC implementation (vs LSI):

    My script uses hdparm -t, which does read()s of 2 MB (which the kernel deconstructs into multiple requests of at most 512 KB to the device/controller). Recall the LSI graphic you included, which quantified Databolt R/W throughput for different request sizes (the x-axis). There was a slight decrease in read throughput at the larger request sizes (64KB-512KB). I suspect that an analogous graph for the PMC edge-buffering expander would show a more pronounced tail-off.

     

     

     

  14. 22 hours ago, JorgeB said:

     

    This was the result (PCIe 3.0 x4):

     

    
    ndk_sh t u v w x y z aa
      /dev/sdt: 409.95 MB/s
      /dev/sdu: 409.91 MB/s
      /dev/sdv: 409.88 MB/s
      /dev/sdw: 410.22 MB/s
      /dev/sdx: 410.31 MB/s
      /dev/sdy: 410.54 MB/s
      /dev/sdz: 412.00 MB/s
      /dev/sdaa: 410.20 MB/s
    Total =    3283.01 MB/s

     

    Ran it 3 times and this was the best of the 3, so strangely a little slower than an Unraid read check.

    Try the B option. It might help, or not ... Devices (and buses) can act strange when you push their limits.

    Quote

    Do you mind sharing that one also? Have 4 NVMe devices in a bifurcated x16 slot but no good way of testing, since an Unraid read check produces much slower than expected speeds with those.

    The nvmx script uses a home-brew program instead of hdparm. Though I haven't used it myself, you can check out fio for doing all kinds of storage testing.
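
    If you do try fio, something along these lines should give a large-block sequential-read number for one NVMe (the device name is a placeholder; --direct=1 bypasses the page cache so you measure the device, not RAM). Run one instance per device concurrently to gauge the aggregate:

    fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=30 --time_based --group_reporting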

    Quote

    That's a good result, but I expect NVMe devices will be a little more efficient, consider that you have the NVMe controller on the device(s) going directly to the PCIe bus, with SAS/SATA you have the SAS/SATA controller on each device, then you have the HBA and only then the PCIe bus, so I believe it can never be as fast as NVMe, though I have no problem in now acknowledging that the PCIe 3.0 bus itself can reach around 7GB/s with an x8 link,

    I completely agree with you.

    Quote

     still believe with SAS/SATA HBA it will always be a little slower, around 6.6-6.8GB/s.

    I do not completely agree with this.

    Quote

    PS: ...

    I'll send you a PM.

     

  15. 20 hours ago, JorgeB said:

    Interesting, I feel the other way, i.e. that they did a good job with it, ...

    Certainly ... but as an old-school hardcore hacker, I wonder if it could have been (at least a few %) better. I have to wonder whether any very large, and very competent, potential customer (e.g., GOOG, AMZN, MSFT) did a head-to-head comparison between LSI & PMC before placing their 1000+ unit chip order.

    Quote

    some more interesting info I found on this:

    That lays the whole story out--with good quantitative details. I commend LSI. And extra credit for "underplaying" their hand. Note how they used "jumps from 4100 MB/s to 5200 MB/s" when their own graph plot clearly shows ~5600. (and that is =~ your own 5520)

     

    I suspect that the reduction in read speed, but not write speed, is due to the fact that writing can take advantage of "write-behind" (similar to HDDs and OSes), but reading cannot do "read-ahead" (whereas HDDs and OSes can).

    Quote

    It can be seen for example with lsscsi -v:

    Thanks for the verification.

     

  16. 22 hours ago, JorgeB said:

    I can test with the HBA in an x4 slot ...

    
    02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
                            LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)

    [screenshot]

     

    So an x8 slot should be able to do around 6800MB/s, possibly a little more with different hardware, still think 7000MB/s+ will be very difficult, but won't say it's impossible,

    You're getting there ... 😀 Maybe try a different testing procedure: see the attached script. I use variations of it for SAS/SATA testing.

     

    Usage:

    ~/bin [ 1298 ] # ndk a b c d e
      /dev/sda: 225.06 MB/s
      /dev/sdb: 219.35 MB/s
      /dev/sdc: 219.68 MB/s
      /dev/sdd: 194.17 MB/s
      /dev/sde: 402.01 MB/s
    Total =    1260.27 MB/s

    ndk_sh.txt
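
    For anyone who can't grab the attachment, the gist is just hdparm -t fired at all the devices concurrently, with the per-drive numbers summed. A minimal sketch of that idea (not the attached ndk_sh itself; "diskbw" is a made-up name):

    #!/bin/bash
    # Usage: diskbw a b c ...   (sd-suffix letters; "a b" tests /dev/sda and /dev/sdb)
    tmp=$(mktemp)
    for d in "$@"; do
        ( mbs=$(hdparm -t /dev/sd"$d" | awk '/Timing/ {print $(NF-1)}')
          echo "/dev/sd$d: $mbs MB/s" >> "$tmp" ) &    # all drives read at the same time
    done
    wait
    sort "$tmp"
    awk -F': ' '{sum += $2} END {printf "Total =    %.2f MB/s\n", sum}' "$tmp"
    rm -f "$tmp"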

    Speaking of testing (different script though) ...

    ~/bin [ 1269 ] # nvmx 0 1 2 3 4
    /dev/nvme0n1: 2909.2 MB/sec
    /dev/nvme1n1: 2907.0 MB/sec
    /dev/nvme2n1: 2751.0 MB/sec
    /dev/nvme3n1: 2738.8 MB/sec
    /dev/nvme4n1: 2898.5 MB/sec
    Total = 14204.5 MB/sec
    ~/bin [ 1270 ] # for i in {1..10}; do nvmx 0 1 2 3 4 | grep Total; done
    Total = 14205.8 MB/sec
    Total = 14205.0 MB/sec
    Total = 14207.5 MB/sec
    Total = 14205.8 MB/sec
    Total = 14203.3 MB/sec
    Total = 14210.6 MB/sec
    Total = 14207.0 MB/sec
    Total = 14208.0 MB/sec
    Total = 14203.4 MB/sec
    Total = 14201.9 MB/sec
    ~/bin [ 1271 ] #

    PCIe3 x16 slot [on HP ML30 Gen10, E-2234 CPU]; nothing exotic.

     

     

  17. On 6/5/2021 at 5:44 AM, JorgeB said:

     

    [two screenshots: benchmark results, described below]

     

    Left side 8 SATA3 SSDs directly connected to an LSI 9300-8i, right side same but now with an expander in the middle,

    Excellent evidence! But, to me, it is very disappointing that the implementations (both LSI & PMC, apparently) of this feature are this sub-optimal. Probably a result of cost/benefit analysis with regard to SATA users (the peasant class--"Let them eat cake."). Also surprising that this hadn't come to light previously.

     

    Speaking of the LSI/PMC thing ... Intel's SAS3 expanders (such as the OP's) are documented, by Intel, to use PMC expander chips. How did you verify that your SM backplane actually uses an LSI expander chip (I could not find anything from Supermicro themselves, and I'm not confident relying on a "distributor" website)? Do any of the sg_ utils expose that detail? The reason for my "concern" is that the coincidence of both the OP's and your results, with the same 9300-8i and the same test (Unraid parity check) [your 12*460 =~ OP's 28*200] but (apparently) different expander chips, is curious.
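
    (Partly answering my own sg_ question, for anyone who wants to check: the expander shows up as an enclosure/SES device, and its INQUIRY strings carry whatever vendor/product IDs the backplane maker programmed in -- which may or may not name the actual chip vendor. The sg5 below is just an example node.)

    lsscsi -g | grep -i enclosu      # find the expander's SES node, e.g. /dev/sg5
    sg_inq /dev/sg5                  # print its Vendor/Product identification strings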

     

  18. 12 hours ago, JorgeB said:

    I'm referring to HBAs, never got or seen someone get more than about 3000MB/s with a x8 PCIe 2.0 HBA and 6000MB/s with a PCIe x8 3.0 HBA, won't say it's not possible, just not typical, and we are talking about a HBA here.

    Please keep things in context.

    OP wrote:

    Quote

    I am wanting to use just one controller for 28 drives in my system. With a LSI 9300-8i and an Intel RES3TV360 I can get just over 200MB/s on all of the drives during a parity check. While this is fine I would prefer to get max speed if there are pcie 3.0 x16 sas controllers out there.

    Since the OP seemed to think that an x16 card was necessary, I replied:

    Quote

    It looks to me like you are not limited by PCIe bandwidth. PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec.

    And then you conflated the limitations of particular/"typical" PCIe3 SAS/SATA HBAs with the limits of the PCIe3 bus itself.

     

    In order to design/configure an optimal storage subsystem, one needs to understand, and differentiate, the limitations of the PCIe bus, from the designs, and shortcomings, of the various HBA (& expander) options.

     

    If I had a single PCIe3 x8 slot and 32 (fast enough) SATA HDDs, I could get 210-220 MB/sec on each drive concurrently. For only 28 drives, 240-250. (Of course, you are completely free to doubt me on this ...) And, two months ago, before prices of all things storage got crazy, HBA + expander would have cost < $100.

    =====

    Specific problems warrant specific solutions. Eschew mediocrity.

     

  19. 7 hours ago, JorgeB said:

    In my experience it's closer to 6000MB/s; even LSI mentions 6400MB/s as the maximum usable:

    [image: LSI table of typical usable PCIe bandwidth]

     

    In my direct, first-hand experience, it is 7100+ MB/sec. (I also measured 14,200+ MB/sec on PCIe3 x16.) I used a PCIe3 x16 card supporting multiple (NVMe) devices. [In an x8 slot for the first measurement.]

    [Consider: a decent PCIe3 x4 NVMe SSD can attain 3400-3500 MB/sec.]

     

    That table's "Typical" #s are factoring in an excessive amount of transport layer overhead.

    Quote

    Assuming the OP is using SATA3 devices (not SAS3) it also means the PMC expander chip has something equivalent to LSI's Databolt

    I'm pretty certain that the spec for SAS3 expanders eliminates the (SAS2) "binding" of link speed to device speed. I.e., Databolt is just Marketing.

    Quote

    in my tests I could get around 5600MB/s with a PCIe x8 HBA and a Databolt-enabled LSI expander,

    Well, that's two tests of the 9300, with different dual-link SAS3 expanders and different device mixes, that are both capped at ~5600 ... prognosis: muscle deficiency [in the 9300].

     

  20. It looks to me like you are not limited by PCIe bandwidth. PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec. If you are getting a little over 200 MB/sec each for 28 drives, that's ~6000 MB/sec. (You are obviously using a dual-link HBA<==>Expander connection, which is good for >> 7000 [9000].) Either your 9300 does not have the muscle to exceed 6000, or you have one (or more) drives that are stragglers, handicapping the (parallel) parity operation. (I'm assuming you are not CPU-limited--I don't use Unraid.)

     
