UhClem

Everything posted by UhClem

  1. As for the 7 GB/s "estimate", note that it assumes the 9300-8i itself actually has the muscle. It probably does, but it will likely be PCIe-limited to slightly less than 7.0 GB/s (ref: your x4 test, which got 3477, i.e. < 7000/2). Interesting. The under-performance of my script suggests another deficiency of the PMC implementation (vs LSI): my script uses hdparm -t, which does read()'s of 2MB (which the kernel deconstructs into multiple requests of at most 512KB each to the device/controller). Recall the LSI graphic you included, which quantified Databolt R/W throughput for different request sizes (the x-axis). There was a slight decrease in read throughput at the larger request sizes (64KB-512KB). I suspect that an analogous graph for the PMC edge-buffering expander would show a more pronounced tail-off.
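     (If you want to see the request-size cap the kernel actually applies, the block layer exposes it in sysfs. A minimal sketch; the sdX names are placeholders:)
     for q in /sys/block/sd?/queue; do
         # max_sectors_kb = current per-request cap; max_hw_sectors_kb = what the hardware allows
         echo "$q: max_sectors_kb=$(cat $q/max_sectors_kb) max_hw_sectors_kb=$(cat $q/max_hw_sectors_kb)"
     done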
  2. Try the B option. It might help, or not ... Devices (and buses) can act strange when you push their limits. The nvmx script uses a home-brew prog instead of hdparm. Though I haven't used it myself, you can check out fio for doing all kinds of testing of storage. I completely agree with you. I do not completely agree with this. I'll send you a PM.
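     (Since fio came up: a minimal sequential-read invocation, as a sketch only; /dev/sdX is a placeholder, and --readonly keeps it non-destructive. Adjust block size, iodepth, and runtime to taste:)
     fio --name=seqread --filename=/dev/sdX --readonly --rw=read --bs=1M \
         --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based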
  3. Certainly ... but as an old-school hardcore hacker, I wonder if it could have been (at least a few %) better. I have to wonder if any very large, and very competent, potential customer (e.g., GOOG, AMZN, MSFT) did a head-to-head comparison between LSI & PMC before placing their 1000+ unit chip order. That lays the whole story out--with good quantitative details. I commend LSI. And extra credit for "underplaying" their hand. Note how they used "jumps from 4100 MB/s to 5200 MB/s" when their own graph plot clearly shows ~5600 (and that is =~ your own 5520). I suspect that the reduction in read speed, but not write speed, is because writing can take advantage of "write-behind" (as HDDs and OSes do), whereas reading cannot do "read-ahead" (which HDDs and OSes can). Thanks for the verification.
  4. You're getting there ... 😀 Maybe try a different testing procedure. See the attached script; I use variations of it for SAS/SATA testing. Usage:
     ~/bin [ 1298 ] # ndk a b c d e
     /dev/sda: 225.06 MB/s
     /dev/sdb: 219.35 MB/s
     /dev/sdc: 219.68 MB/s
     /dev/sdd: 194.17 MB/s
     /dev/sde: 402.01 MB/s
     Total = 1260.27 MB/s
     ndk_sh.txt
     Speaking of testing (different script though) ...
     ~/bin [ 1269 ] # nvmx 0 1 2 3 4
     /dev/nvme0n1: 2909.2 MB/sec
     /dev/nvme1n1: 2907.0 MB/sec
     /dev/nvme2n1: 2751.0 MB/sec
     /dev/nvme3n1: 2738.8 MB/sec
     /dev/nvme4n1: 2898.5 MB/sec
     Total = 14204.5 MB/sec
     ~/bin [ 1270 ] # for i in {1..10}; do nvmx 0 1 2 3 4 | grep Total; done
     Total = 14205.8 MB/sec
     Total = 14205.0 MB/sec
     Total = 14207.5 MB/sec
     Total = 14205.8 MB/sec
     Total = 14203.3 MB/sec
     Total = 14210.6 MB/sec
     Total = 14207.0 MB/sec
     Total = 14208.0 MB/sec
     Total = 14203.4 MB/sec
     Total = 14201.9 MB/sec
     ~/bin [ 1271 ] #
     PCIe3 x16 slot [on HP ML30 Gen10, E-2234 CPU] -- nothing exotic
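     (For anyone who can't grab the attachment: a rough sketch of the same idea -- not the attached ndk_sh.txt -- which reads the named drives concurrently with hdparm -t and totals the results. Drive letters are passed as arguments; hdparm and bc are assumed, and it needs root:)
     #!/bin/bash
     # Usage: ./readtest a b c d e   (the letter suffixes of /dev/sdX)
     tmp=$(mktemp -d)
     for d in "$@"; do
         # run each drive's timed read in parallel, keeping only the MB/sec figure
         ( hdparm -t /dev/sd"$d" | awk '/Timing/ {print $(NF-1)}' > "$tmp/$d" ) &
     done
     wait
     total=0
     for d in "$@"; do
         r=$(cat "$tmp/$d")
         printf '/dev/sd%s: %s MB/s\n' "$d" "$r"
         total=$(echo "$total + $r" | bc)
     done
     printf 'Total = %s MB/s\n' "$total"
     rm -rf "$tmp"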
  5. Excellent evidence! But, to me, very disappointing that the implementations (both LSI & PMC, apparently) of this feature are this sub-optimal. Probably a result of cost/benefit analysis with regard to SATA users (the peasant class--"Let them eat cake."). Also surprising that this hadn't come to light previously. Speaking of the LSI/PMC thing ... Intel's SAS3 expanders (such as the OP's) are documented, by Intel, to use PMC expander chips. How did you verify that your SM backplane actually uses an LSI expander chip (I could not find anything from Supermicro themselves, and I'm not confident relying on a "distributor" website)? Do any of the sg_ utils expose that detail? The reason for my "concern" is that the coincidence of both the OP's and your results, with the same 9300-8i and the same test (Unraid parity check) [your 12*460 =~ OP's 28*200] but (supposedly) different expander chips, is curious.
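     (A hedged guess at my own question: if smp_utils and sg3_utils are installed, something along these lines may expose the expander's vendor/product strings -- the device names below are placeholders and will differ per system:)
     lsscsi -g                                   # locate the enclosure/expander's sg and bsg devices
     smp_rep_manufacturer /dev/bsg/expander-0:0  # SMP REPORT MANUFACTURER INFORMATION from the expander itself
     sg_ses --page=1 /dev/sg<N>                  # SES Configuration page; shows the enclosure vendor/product strings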
  6. Please keep things in context. OP wrote: Since the OP seemed to think that an x16 card was necessary, I replied: And then you conflated the limitations of particular/"typical" PCIe3 SAS/SATA HBAs with the limits of the PCIe3 bus itself. In order to design/configure an optimal storage subsystem, one needs to understand, and differentiate, the limitations of the PCIe bus from the designs, and shortcomings, of the various HBA (& expander) options. If I had a single PCIe3 x8 slot and 32 (fast enough) SATA HDDs, I could get 210-220 MB/sec on each drive concurrently. For only 28 drives, 240-250. (Of course, you are completely free to doubt me on this ...) And, two months ago, before prices of all things storage got crazy, HBA + expander would have cost < $100. ===== Specific problems warrant specific solutions. Eschew mediocrity.
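     (The arithmetic behind those numbers, assuming ~7000 MB/sec of usable PCIe3 x8 bandwidth: 7000 / 32 ≈ 219 MB/sec per drive, and 7000 / 28 ≈ 250 MB/sec per drive.)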
  7. In my direct, first-hand experience, it is 7100+ MB/sec. (I also measured 14,200+ MB/sec on PCIe3 x16.) I used a PCIe3 x16 card supporting multiple (NVMe) devices [in an x8 slot for the first measurement]. [Consider: a decent PCIe3 x4 NVMe SSD can attain 3400-3500 MB/sec.] That table's "Typical" #s factor in an excessive amount of transport-layer overhead. I'm pretty certain that the spec for SAS3 expanders eliminates the (SAS2) "binding" of link speed to device speed. I.e., Databolt is just Marketing. Well, that's two tests of the 9300, with different dual-link SAS3 expanders and different device mixes, that are both capped at ~5600 ... prognosis: muscle deficiency [in the 9300].
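     (For reference: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, i.e. ~985 MB/sec of payload per lane, so x8 ≈ 7.9 GB/sec and x16 ≈ 15.8 GB/sec before protocol overhead; the 7100+ and 14,200+ MB/sec figures above are roughly 90% of that.)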
  8. It looks to me like you are not limited by PCIe bandwidth. PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec. If you are getting a little over 200 MB/sec each for 28 drives, that's ~6000 MB/sec. (You are obviously using a Dual-link connection HBA<==>Expander which is good for >> 7000 [9000].) Either your 9300 does not have the muscle to exceed 6000, or you have one (or more) drives that are stragglers, handicapping the (parallel) parity operation. (I'm assuming you are not CPU-limited--I don't use unraid.)
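     (Dual-link arithmetic: 2 x4 SAS3 links = 8 lanes at 12 Gb/s each; with 8b/10b encoding that is 8 × 1200 MB/sec = 9600 MB/sec raw, hence the ~9000 real-world figure.)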
  9. OK. I'd still suggest 24 hrs of MPrime (aka Prime95) Torture-Blend on the one in play here.
  10. JB, do you have ECC memory? (I know it's not a guarantee, but it gets you 90-95% of the way there.)
  11. The whole time, or just post-read verify ?? (I don't use Unraid, but I vaguely recollect the details of Joe's preclear.) No, it will not affect the CPU usage. It does (effectively) eliminate the I/O bottleneck of the on-board (chipset) Sata sub-system. That CPU usage you saw during pre-clear (x2) should not guide any (re-configure) decision you make.
  12. I've looked into "staggered spinup" for a DIY DAS. The key search term you want to research is "disk PUIS" ... Power Up In Standby. It looks a little tricky, but quite doable. (That isn't a solution for me because my drives are connected via a SAS expander.) [I don't use Unraid.]
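     (For the curious: on a plain SATA/AHCI setup, hdparm can toggle the PUIS feature bit -- read the man page first, since -s is flagged as dangerous, and /dev/sdX is a placeholder:)
     hdparm -s1 /dev/sdX    # enable power-up-in-standby on the drive
     hdparm -s0 /dev/sdX    # disable it again
     (The "tricky" part is making sure the controller/BIOS/driver will actually issue the spin-up command afterwards.)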
  13. Before recommending a seller, I'd like to make sure we are seeking the best solution. Based on your system specs (mobo/cpu), you actually have 5 PCIe g3 x8 slots -- OR only 3 slots if one (or two) need to supply x16 lanes (each). If it is the latter case (you need to "free up" an x16 slot), then I'd suggest considering a SAS(/SATA) expander, which can use one of the soon-to-be x0 slots (expanders only use a PCIe slot for power (no signals/lanes) and a place to live). If it's the former case (you want to repurpose an x8 slot now used by an H200), then, yes, probably a 16 (or more) port HBA is the answer. Instead of a 9201-16i, though, I'd go for an Adaptec ASR-71605 (or -72405). Less $$; it's faster -- PCIe Gen3 (vs Gen2 for the 9201) and 4500+ MB/s (vs ~3000 MB/s) -- and it's low-profile (repurposing flexibility in the future). The only negative is that its mini-SAS ports are 8643 (vs 8087 on the H200/9201), so new breakout cables are needed. I recently bought a SAS expander to play with, and despite going for low cost, was pleasantly surprised. I bought a Lenovo 03X3834 on eBay for $15 shipped from CN/HK. Ordered on 23Nov, and it arrived (NH, USA) on 06Dec. Very well-packaged (anti-static + bubble-wrap + box). Works fine! The seller's eBay store is JiaWen2108; they sell lots of HBAs, expanders, and cables.
  14. Thanks! Good points. I think that spec'd endurance (e.g., 600 TBW for the 860 EVO 1TB) won't be an issue for all but extreme use cases. (For a data point, I used an 860 EVO 500GB in a DVR (DirecTV HR24) for the last year. It had ~8000 hours and ~30 TBW when I secure-erased it. Sadly, I didn't think to do any write-performance tests before the erase.) An "extreme use case" might be an array for multiple HD security cameras [e.g. 4 feeds @ 10GB/hr each (24/7) =~ 350 TBW/year]. Note, though, that you need a near-server-level NVME to exceed a 1 PBW rating (for a 1 TB device). As you said, the NVME-for-parity does offer significant performance "head-room", such that its write speed can degrade (as expected w/o trim) with no effect on array write speed. It also allows one to forego turbo mode, eliminating read-contention with the other (N-1) data SSDs [during array writes] ... and saving a few watts of juice.
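     (Checking that camera estimate: 4 feeds × 10 GB/hr × 24 hr × 365 days ≈ 350,000 GB, i.e. ~350 TB written per year.)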
  15. A question, please ... [I might be missing something, since I don't use Unraid.] : For an all-SSD array, wouldn't turbo-mode alleviate the parity-dominating aspect (with no detrimental side-effects)? ["good" SSDs (sata), and in decent "trim", will get write speeds very close to their read speeds, no?]
  16. 520-byte format: Although this is more common on SAS drives, it is still a possibility on SATA drives. Beyond explicit disclosure by the re-seller, it could be indicated in a Model #. In many cases, but not all, there are tools available to re-format such drives. (It is not a common issue ... so ... just a heads-up)
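     (For SAS drives, sg3_utils can usually both check and, where the drive allows it, fix this -- a sketch only, with placeholder device names, and note that sg_format wipes the drive and takes hours:)
     sg_readcap -l /dev/sdX                  # reports the logical block size (512 vs 520/528)
     sg_format --format --size=512 /dev/sdX  # low-level reformat to 512-byte sectors (destructive)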
  17. Yes. And, it is a drag to have to "re-buy" (different) cables. But, if it helps to rationalize/justify going with the 71605, note that you might get some added flexibility and/or future-proofing. The 71605 is a low-profile card (be sure to get the bracket you want/need); and it is PCIe Gen3, so it would likely suffice if you only gave it 4 (g3) lanes (=~ 12x275).
  18. Maybe the answer is "staring you in the face". See (currently) 3 threads below this one, same sub-forum. (LSI is not the only game in town ...)
  19. [ Assuming that: also means using a separate 8088 input connector [else there's a tiny chance that the (single?) 8088-IN is flaky] ] Then, I suspect that you might have a glitchy H810 controller. In either case, if you have an 8088=>4xSata breakout cable, you could "test" the H810 independent of the MD1200.
  20. "Was that nine chips, or only eight? In all this excitement ..." -- Dirty Harry 😀
  21. Check locally for the Adaptec ASR-71605. PCIe 3.0 x8 6G SAS/SATA 16-port. Supported by Unraid. Handles throughput of at minimum 4500 MB/s (8 x 560 SSDs), and very likely ~6000. Found one on ebay.de for ~60eu. Link
  22. Sort of ... it shouldn't be used for one of the six array drives, since that would further divide the 650-700. But, it could/should be used for the cache drive; then it could only (slightly) impact mover operations, and only if TurboWrite was enabled. The (2-port?) add-in card would connect array drives 5 and 6. A "full-spec" PCIe x1 Gen2 card (e.g. ASM1061-based), giving ~350 MB/sec, would not lower the "ceiling" of ~160 (for 4) on the mobo SATA. The only improvement would be that the cache SSD could operate at full (SataIII [~550]) speed, but that is moot, since, as your cache, it is inherently limited by your 1GbE network (on input) and your array (on output). Very little bang for the extra bucks, since you'd need a PCIe >=x2 card to handle >2 drives and not lower the "ceiling".
      Here's a neat idea, if you really want to eliminate the speed bottleneck: (Assuming that your x16 slot is available,) get a UNRAID-friendly LSI-based card (typically PCIe x8, at >= Gen2), and connect the built-in 4-bay Sata backplane to it. Note that the thick cable/connector (left of Sata-5), which connects that backplane to the mobo, is actually a Mini-SAS 8087, and can instead be connected to (one of the connections on) an add-in LSI card. That completely eliminates the bottleneck for SATA 1-4.
      "But, wait, there's more ..." Then you put a standard SAS-to-SATA breakout cable into the now-empty mobo connector and use 3 (of the 4) SATAs for drives 5 and 6, and your cache SSD. That gives you full SataIII for the SSD (FWIW) and you've still got SATA-5 and the eSata, plus "breakout #4", to play with for whatever.
      "But, wait ...." If you get a 4i4e card, you can (later) add 4+ more drives externally, and still no bottleneck! (Pretty neat, huh?) [Important: the LSI card must have a low-profile bracket.]
      I don't use UNRAID, so I can't be certain about the re-config issue. Once you've cleared that, I strongly encourage you to start with what you've already got. Not only will it give you time to think it all through, and find exactly the pieces that will serve you best, but it will give you an opportunity to assess your N40L's performance with the current version of UNRAID. Then you can extrapolate from that initial CPU/throughput ratio to be sure that the CPU won't become a new/unexpected bottleneck if/when the throughput increases from 650 to 1000 (2-port card) or 1300+ (LSI).
  23. No limitation! -- for at least another decade. (Since you mentioned that you'd be running 6 newer (i.e., faster) array drives:) The SATA controller in the N40L (and the 36L + 54L) has a maximum (combined) throughput limitation of ~650 MB/sec. With 4 array drives on the built-in SATAs (and 2 on the add-in), that would limit your "parallel" operations (parity-check, rebuild, Turbo-write) to ~160 MB/sec per drive, but many/most newer drives are capable of 200-250 MB/sec max (130-150 min). To maximize your array performance, you'd want to be very selective in your choice of add-in SATA card. The one you linked is based on the SiI3132, which is a notable under-performer (80 MB/sec max for each of 2 drives [150-170 total])--scratch that one. Better would be any ASMedia ASM1061-based 2-port card: 170 MB/sec for each of 2 drives. At least that would preserve the ~160 limitation imposed by the built-in controller. [Note: these configs don't utilize the N40L's eSata, so it would stay available for a backup/import-export external drive.]
  24. Congratulations! (And, what do you mean "almost"? [such a bargain, too.])