
About UhClem

  • Rank
    Advanced Member

  • Location
    NH, US


  1. Thanks! Good points. I think that spec'd endurance (e.g., 600 TBW for the 860 EVO 1TB) won't be an issue for all but extreme use cases. (For a data point: I used an 860 EVO 500GB in a DVR (DirecTV HR24) for the last year. It had ~8000 hours and ~30 TBW when I secure-erased it. Sadly, I didn't think to do any write-performance tests before the erase.) An "extreme use case" might be an array for multiple HD security cameras [e.g., 4 feeds @ 10 GB/hr each (24/7) =~ 350 TBW/year]. Note, though, that you'd need a near-server-level NVMe to exceed a 1 PBW rating (for a 1 TB device). As you said, the NVMe-for-parity does offer significant performance "head-room", such that its write speed can degrade (as expected w/o trim) with no effect on array write speed. It also allows one to forego turbo mode, eliminating read-contention with the other (N-1) data SSDs [during array writes] ... and saving a few watts of juice.
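The endurance arithmetic above can be sketched in a few lines. This is a back-of-envelope estimate only; the 10 GB/hr per-feed rate and the 600 TBW rating are the figures quoted in the post:

```python
# Back-of-envelope SSD endurance estimate (figures from the post above).
GB_PER_TB = 1000  # drive vendors use decimal terabytes

def tbw_per_year(feeds: int, gb_per_hour: float) -> float:
    """Total terabytes written per year for continuous (24/7) recording."""
    return feeds * gb_per_hour * 24 * 365 / GB_PER_TB

# "Extreme" case: 4 HD camera feeds at 10 GB/hr each, around the clock.
camera = tbw_per_year(feeds=4, gb_per_hour=10)
print(f"{camera:.0f} TBW/year")     # ~350 TBW/year

# Years to exhaust an 860 EVO 1TB's 600 TBW endurance rating at that rate:
print(f"{600 / camera:.1f} years")  # under 2 years -- hence "extreme"
```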
  2. A question, please ... [I might be missing something, since I don't use Unraid.] : For an all-SSD array, wouldn't turbo-mode alleviate the parity-dominating aspect (with no detrimental side-effects)? ["good" SSDs (sata), and in decent "trim", will get write speeds very close to their read speeds, no?]
  3. 520-byte format: Although this is more common on SAS drives, it is still a possibility on SATA drives. Beyond explicit disclosure by the reseller, it could be indicated in a Model #. In many cases, but not all, there are tools available to re-format such drives. (It is not a common issue ... so ... just a heads-up.)
  4. Yes. And it is a drag to have to "re-buy" (different) cables. But, if it helps to rationalize/justify going with the 71605, note that you might get some added flexibility and/or future-proofing. The 71605 is a low-profile card (be sure to get the bracket you want/need); and it is PCIe Gen3, so it would likely suffice even if you only gave it 4 (Gen3) lanes (=~ 12 x 275 MB/s).
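As a rough sanity check of the "4 Gen3 lanes would suffice" claim: a sketch, where the per-lane throughput figures are my assumption (typical real-world numbers, not the spec maxima):

```python
# Rough PCIe slot-bandwidth check for a 16-port HBA like the ASR-71605.
# Per-lane figures are assumed typical measured throughput, not spec maxima.
EFFECTIVE_MB_S_PER_LANE = {2: 400, 3: 850}  # assumption

def slot_bandwidth(gen: int, lanes: int) -> int:
    """Approximate usable slot bandwidth in MB/s."""
    return EFFECTIVE_MB_S_PER_LANE[gen] * lanes

# Gen3 x4 slot vs. 12 SATA SSDs at ~275 MB/s each (the "12 x 275" above):
print(slot_bandwidth(3, 4))  # 3400 MB/s available
print(12 * 275)              # 3300 MB/s demanded
```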
  5. Maybe the answer is "staring you in the face". See (currently) 3 threads below this one, same sub-forum. (LSI is not the only game in town ...)
  6. [Assuming that also means using a separate 8088 input connector (else there's a tiny chance that the (single?) 8088-IN is flaky).] Then I suspect that you might have a glitchy H810 controller. In either case, if you have an 8088-to-4xSATA breakout cable, you could "test" the H810 independent of the MD1200.
  7. "Was that nine chips, or only eight? In all this excitement ..." -- Dirty Harry 😀
  8. Check locally for the Adaptec ASR-71605: PCIe 3.0 x8, 6G SAS/SATA, 16-port. Supported by Unraid. Handles throughput of at least 4500 MB/s (8 x 560 MB/s SSDs), and very likely ~6000. Found one on ebay.de for ~60eu. Link
  9. Sort of ... it shouldn't be used for one of the six array drives, since that would further divide the 650-700. But it could, and should, be used for the cache drive; then it could only (slightly) impact mover operations, and only if TurboWrite was enabled. The (2-port?) add-in card would connect array drives 5 and 6. A "full-spec" PCIe x1 Gen2 card (e.g. ASM1061-based), giving ~350 MB/sec, would not lower the "ceiling" of ~160 (for 4 drives) on the mobo SATA. The only improvement would be that the cache SSD could operate at full SataIII (~550) speed, but that is moot: as your cache, it is inherently limited by your 1GbE network (on input) and your array (on output). Very little bang for the extra bucks, since you'd need a PCIe >=x2 card to handle more than 2 drives without lowering the "ceiling".

     Here's a neat idea, if you really want to eliminate the speed bottleneck: (assuming your x16 slot is available) get an Unraid-friendly LSI-based card (typically PCIe x8, at >= Gen2), and connect the built-in 4-bay SATA backplane to it. Note that the thick cable/connector (left of Sata-5), which connects that backplane to the mobo, is actually a Mini-SAS 8087, and can instead be connected to (one of the connections on) an add-in LSI card. That completely eliminates the bottleneck for SATA 1-4.

     "But wait, there's more ..." Then you put a standard SAS-to-SATA breakout cable into the now-empty mobo connector and use 3 (of the 4) SATAs for drives 5 and 6, and your cache SSD. That gives you full SataIII for the SSD (FWIW), and you've still got SATA-5 and the eSata, plus "breakout #4", to play with for whatever.

     "But wait ...." If you get a 4i4e card, you can (later) add 4+ more drives externally, and still no bottleneck! (Pretty neat, huh?) [Important: the LSI card must have a low-profile bracket.]

     I don't use Unraid, so I can't be certain about the re-config issue. Once you've cleared that, I strongly encourage you to start with what you've already got. Not only will it give you time to think it all through and find exactly the pieces that will serve you best, but it will give you an opportunity to assess your N40L's performance with the current version of Unraid. Then you can extrapolate from that initial CPU/throughput ratio to be sure that the CPU won't become a new/unexpected bottleneck if/when the throughput increases from 650 to 1000 (2-port card) or 1300+ (LSI).
  10. No limitation! -- for at least another decade. (Since you mentioned that you'd be running 6 newer, i.e. faster, array drives:) The SATA controller in the N40L (and N36L/N54L) has a maximum combined throughput of ~650 MB/sec. With 4 array drives on the built-in SATAs (and 2 on the add-in), that would limit your "parallel" operations (parity-check, rebuild, Turbo-write) to ~160 MB/sec per drive, yet many/most newer drives are capable of 200-250 MB/sec max (130-150 min). To maximize your array performance, you'd want to be very selective in your choice of add-in SATA card. The one you linked is based on the SiI3132, which is a notable under-performer (80 MB/sec max for each of 2 drives [150-170 total]) -- scratch that one. Better would be any ASMedia ASM1061-based 2-port card: ~170 MB/sec for each of 2 drives. At least that would preserve the ~160 limitation imposed by the built-in controller. [Note: these configs don't utilize the N40L's eSata, so it would stay available for a backup/import-export external drive.]
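The ~160 MB/sec ceiling above is just the controller's combined bandwidth divided across the drives active at once; a minimal sketch, using the figures from the post:

```python
# Per-drive ceiling when a shared SATA controller's bandwidth is split
# across drives during "parallel" operations (parity-check, rebuild, etc.).

def per_drive_ceiling(controller_mb_s: float, drives: int) -> float:
    """MB/s available to each drive when all are active simultaneously."""
    return controller_mb_s / drives

# N40L built-in controller (~650 MB/s combined), 4 array drives active:
print(per_drive_ceiling(650, 4))  # 162.5 -> the ~160 MB/s ceiling

# SiI3132 card (~160 MB/s total) driving its 2 ports:
print(per_drive_ceiling(160, 2))  # 80.0 -- the under-performer in the post
```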
  11. Congratulations! (And, what do you mean "almost"? [such a bargain, too.])
  12. How about this [from your 800k syslog], lines 759-763:

      Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: SSS flag set, parallel bus scan disabled
      Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
      Mar 11 07:50:54 Tower kernel: ahci 0000:04:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
      Mar 11 07:50:54 Tower kernel: scsi host7: ahci
      Mar 11 07:50:54 Tower kernel: scsi host8: ahci

      Then searching for 0000:04:00 leads to [line 373]:

      Mar 11 07:50:54 Tower kernel: pci 0000:04:00.0: [1b21:0612] type 00 class 0x010601

      And [1b21:0612] is the Vendor ID (ASMedia) : Device ID (ASM1062) pair for that controller. "A rose by any other name ... is still a rose."
  13. Ultra320 SCSI is 320 MB/sec ... 12 drives ... even with 2 ports running, that's ~50 MB/sec per drive [if it can really share its bandwidth reasonably] -- "Patience is a virtue."
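For the arithmetic: two Ultra320 ports shared across 12 drives works out to roughly 50 MB/sec each, assuming the bandwidth really is shared evenly:

```python
# Shared-bus arithmetic: two Ultra320 SCSI ports feeding 12 drives,
# assuming bandwidth is divided evenly across drives.
ULTRA320_MB_S = 320
ports, drives = 2, 12
print(ports * ULTRA320_MB_S / drives)  # ~53 MB/s per drive
```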
  14. Maybe ... maybe not. Consider that the 9211-8i (and its LSI SAS2008-based brethren) appear to have an ultimate bottleneck in their internal CPU/memory/firmware which prevents them from achieving the maximum (real-world/measurable) PCIe Gen2 x8 throughput of 3200 MB/s. Your tests, on the H310, show 2560 (8 x 320) MB/s. Whereas the 9207-8i (and its SAS2308 brethren) do have the "muscle-power" to achieve, and exceed, full Gen2 x8 throughput (3200); you measured, on a Gen3 x8, 4200 (8 x 525) MB/s -- and that is limited not by the 2308, but by maxing out the SATA3 (real-world) speed of ~525. [I suspect that SAS-12Gb SSDs would do better, no?***] So, for 9211 vs 9207, using Gen2, it comes down to a cost/benefit decision (plus an appropriate degree of future-proofing; re-use following a mobo upgrade). *** [Edit] Actually, NO -- it looks like 12G SAS connectivity was not offered until the 93xx series.
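A quick sketch of the comparison, using the measured per-drive figures from the post:

```python
# Aggregate-throughput comparison (measured per-drive figures from the post).

def aggregate(drives: int, mb_s_each: int) -> int:
    """Combined throughput for N drives at a given per-drive speed."""
    return drives * mb_s_each

PCIE_GEN2_X8_MAX = 3200  # real-world Gen2 x8 ceiling cited in the post

sas2008 = aggregate(8, 320)  # H310 / 9211-8i class: controller-bound
sas2308 = aggregate(8, 525)  # 9207 class: drive-bound (SATA3 ~525 each)

print(sas2008, sas2308)
# The SAS2008 stalls below the slot's 3200 MB/s; the SAS2308 exceeds it.
```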
  15. Wouldn't the max throughput be (slightly) limited by the PCIe Gen2 of the R710? [i.e., to ~400 MB/s each for 8 x SATA3 SSDs]