ed.

Members
  • Content Count: 5
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About ed.
  • Rank: Newbie


  1. Understood, thank you! Conclusion: the PCIe 2.0 x8 limit of ~3000 MB/s is not reached with every hardware combination; it can drop to ~2500 MB/s, which is what happened in your tests. Perhaps the PCIe 3.0 x8 slot was running not at native PCIe 3.0 but in PCIe 2.0 compatibility mode to support the PCIe 2.0 HBA, and that mode is implemented less well there than on the newer "Asrock B150M-Pro4S" board.
  2. Thank you again! But that is strange... In the "Intel® RAID SAS2 Expander RES2SV240 + dual link on Dell H310 (4000 MB/s)" tests on the same Supermicro X9SCM-F board, the limit was ~2460-2480 MB/s, which is below the ~3000 MB/s I would expect from PCIe 2.0. Yet with the same board and a PCIe 3.0 HBA, the dual-link limit turns out to be 4400 MB/s. I cannot even guess at the reason; do you have any idea? Since the limit is fine on the "Asrock B150M-Pro4S" board (about 3000 MB/s) with the same HBA, I suspect some incompatibility between the "Supermicro X9SCM-F" board and the Dell H310 in particular (and perhaps other PCIe 2.0 HBAs as well), but I cannot offer a logical explanation for it.
  3. Thank you very much, johnnie.black! I did not read the post carefully enough before... So it looks like dual link works well only for the tested combination Asrock B150M-Pro4S + RES2SV240 (where we see the PCIe 2.0 limit of about 3000 MB/s), while on the Supermicro X9SCM-F board it makes little difference compared with a single link (only +260 MB/s). In other words, it depends on the motherboard. But here (with the PCIe 3.0 LSI 9207) dual link runs up to its limit. There is no ** comment on that entry, so I assume the measurement was done on the Supermicro X9SCM-F, right? If so, it is an interesting result: dual-link speed depends not only on the motherboard but on the motherboard + HBA combination. In any case, the conclusion is that dual-link throughput from the HBA to the Intel® RAID SAS2 Expander RES2SV240 is not less than 4400 MB/s, though it may be limited by the motherboard (at least for PCIe 2.0 HBAs).
  4. Hello trurl! Regarding Unraid, I am interested in using Docker (running many full VMs is less practical for my tests) and in storing an important part of my data securely (only part!), so I can give some of the internal HDDs to Unraid; but I do not plan, for now, to use the whole large external switchable storage (behind the SAS expander) as an Unraid array. The questions are still relevant. They are universal because they are all about hardware limitations that apply to any OS (Unraid included, even if I did not mention it here). I think this knowledge will be useful to the community.
  5. Could anybody please help me with the questions below?
A) What is the internal throughput bottleneck of the Intel RES2CV360 expander? That is, does dual link really double the speed? What about triple-link input and six-link output? Is there any limit? I simply could not find any real-world examples anywhere.
B) What is the internal throughput bottleneck of the Intel RES2CV240 expander? Same question as above, but for the expander with fewer ports.
C) Does the Adaptec HBA 71605H work with the Intel RES2CV360 expander?
D) Does the Adaptec HBA 71605H work with the Intel RES2CV240 expander?
E) Does the Adaptec HBA 71605H work with the Intel RES2CV360 expander using dual link?
F) If the Adaptec HBA 71605H does not work with either the Intel RES2CV360 or the RES2CV240 expander, which SAS2 expander would you suggest instead?
G) Could there be any issues with hot-plugging SATA HDDs connected to the RES2CV360 (or the RES2CV240)? (Without an expander I have no problems at all.) I simply do not know; perhaps a few SATA3 HDDs work fine behind the expander and then the next hot-plugged SATA3 HDD is detected as SATA2... or drives are not always detected... or something else.
H) If you have time, please read the details below (it is quite long, sorry) and correct my calculations if they are wrong. The calculations assume there is no bottleneck in the expanders' overall internal throughput. Most questions are repeated below for context. Many thanks in advance!
---------------------------------------------------------------------
I have been selecting an HBA + expander for the following use case and need help. I have many SATA3 HDDs that I do not use simultaneously. I plug them into hot-pluggable 4x cages and keep them there, so I can turn any of them on or off with the switches on the cages. When I need an HDD I switch it on in its cage; when I no longer need it, I safely remove it in the OS and then switch it off.
Advantages: low power consumption, hence lower heat dissipation and lower noise; lower HDD temperature (since neighbouring HDDs are often off); much longer HDD life, since I use the drives only when I need them (or when I check them). Disadvantages: regular HDD checks are needed (not often for a seldom-used HDD), and sensitive data needs a backup. Practical experience: in the last 5 years only two files acquired "pending" sectors that were not recovered automatically. One was restored from backup; the other was fixed by writing zeroes over the problem area (it was a video, so no great loss). Since then I check the HDDs periodically.
I have a 16-port internal HBA which works smoothly, but I have apparently reached the point where I need an external JBOD enclosure with a SAS expander. I want a DIY enclosure with a SAS expander and a few (or many; the number may grow over time) cages for 4x SATA drives. I prefer DIY because it can be done cheaper and, importantly for me, much quieter. Please look at the following configurations and correct me if my calculations are wrong.
1) LSI SAS2 9201-16i in a PCIe 3.0 x8 slot (running as PCIe 2.0) + Intel RES2CV360 expander.
a) Two 4x ports go to the Intel RES2CV360 expander via an SFF-8087 to SFF-8087 cable and an SFF-8087 to SFF-8088 adapter card (e.g. UNICACA AC1215), giving a dual link to the expander. The remaining 7 SAS2 ports on the expander will eventually connect to 28 SATA3 HDDs.
b) Two 4x ports go to 8 internal hot-pluggable SATA HDDs (two 4x cages).
PCIe 2.0 x8 limit: ~3000 MB/s
Single-link limit (to each internal 4x HDD cage): 2200 MB/s
Dual-link limit to the expander: 2 * 2200 = 4400 MB/s
RES2CV360 internal throughput limit: *not less than 4400 MB/s
*The speed may also be limited by the motherboard + HBA combination (it may depend on the motherboard for PCIe 2.0 HBAs); not enough information.
Expected speeds:
If only the a) connection is used: 3000/28 = 107 MB/s per drive.
If only the b) connection is used: 3000/8 = 375 MB/s per drive (more than enough, since there are no SSDs in this configuration).
If both a) and b) are used: 3000/36 = 83 MB/s per drive (a very unlikely case).
In practice at most half the drives are in use, preferably the ones in the two internal 4x cages. Average HDD speed in the internal cages: 160 MB/s (the faster drives go there). So with all internal drives busy, the external cages get the remainder: 3000 - 160*8 = 1720 MB/s. Average speed of the HDDs in the external cages: 125 MB/s (the slower ones). 1720/125 ≈ 14 of the 28 HDDs can run at that speed, so altogether 8 + 14 = 22 HDDs can be used at the same time with acceptable speed. There is, however, the case where no internal drives are used, and that is when dual link should be somewhat useful (otherwise, as described above, 1720 MB/s is not even enough to saturate a single SAS2 4x link). In that case (the bottleneck is PCIe 2.0): with no internal drives in use, 3000/125 = 24 of the 28 external HDDs.
Another option is a PCIe 3.0 card. I already have an Adaptec 71605H, which is PCIe x8 with 16 internal ports on 4x connectors. I am just not sure whether it works with the Intel RES2CV360 or the RES2CV240; does anyone have information about that? Another question is whether it would run a dual link to the RES2CV360. There is of course also the Adaptec AEC 82885T expander, but it is SAS3 (12 Gb/s), which I do not plan to use for now, and it is somewhat costly. In the best case (dual link works) we have:
2) Adaptec SAS2 71605H-16i in a PCIe 3.0 x8 slot (running as PCIe 3.0) + Intel RES2CV360 expander.
a) Two 4x ports go to the Intel RES2CV360 expander via an SFF-8643 to SFF-8087 cable and an SFF-8087 to SFF-8088 adapter card (e.g. UNICACA AC1215), giving a dual link to the expander. The remaining 7 SAS2 ports on the expander will eventually connect to 28 SATA3 HDDs.
b) Two 4x ports go to 8 internal hot-pluggable SATA HDDs (two 4x cages).
PCIe 3.0 x8 limit: ~6000 MB/s
Single-link limit (to each internal 4x HDD cage): 2200 MB/s
Dual-link limit to the expander: 2 * 2200 = 4400 MB/s
RES2CV360 internal throughput limit: *not less than 4400 MB/s
*The speed may also be limited by the motherboard + HBA combination (it may depend on the motherboard for PCIe 2.0 HBAs); not enough information.
2.1. Expected speeds (dual link works):
If only a) is used: 4400/28 = 157 MB/s per drive.
If only b) is used: 6000/8 = 750 MB/s per drive (not reachable, since there are no SSDs in this configuration).
If both a) and b) are used: 157 MB/s for the external HDDs; the remaining throughput, 6000 - 4400 = 1600 MB/s, is divided between the 8 internal HDDs: 1600/8 = 200 MB/s. So even with every HDD in use, the internal/external speed ratio is quite good: 157 MB/s for all external drives and 200 MB/s for all internal ones. We could even use all of them at once if needed (unlikely, but nice to have).
But if dual link does not work for the 71605H + RES2CV360 combination, then:
2.2. Expected speeds (dual link does not work):
If only a) is used: 2200/28 = 78 MB/s per drive (worse than the LSI).
If only b) is used: 6000/8 = 750 MB/s, not reachable.
If both a) and b) are used: 78 MB/s for the external HDDs; the remaining 6000 - 2200 = 3800 MB/s is divided between the 8 internal HDDs: 3800/8 = 475 MB/s, not reachable for my usage.
The internal/external speed ratio here is not good: 78 MB/s for all external drives versus 475 MB/s for all internal ones, and I do not like it. To fix the 2.2 case, two expanders (e.g. RES2SV240) could be used (though I would actually prefer a single RES2CV360 with dual link), giving a new case:
2.3. Expected speeds (two expanders):
If only a1) is used: 2200/20 = 110 MB/s per drive (at most 20 drives here; with 14 HDDs the speed is 157 MB/s).
If only a2) is used: 2200/20 = 110 MB/s per drive (likewise at most 20 drives; with 14 HDDs the speed is 157 MB/s).
So altogether we can have up to 40 external HDDs here, or keep the same 28 at the same speed as in case 2.1.
If only b) is used: 6000/8 = 750 MB/s, not reachable.
If a1), a2) and b) are all used: 110 MB/s for the external HDDs; the remaining 6000 - 4400 = 1600 MB/s is divided between the 8 internal HDDs: 1600/8 = 200 MB/s. So even with every HDD in use, the internal/external speed ratio is quite good: 157 MB/s (for 28 external HDDs) or 110 MB/s (for 40 external HDDs) and 200 MB/s for all internal HDDs. Which case to actually choose depends on the questions above. Thanks for reading this far! 🙂
P.S. * marks a figure added based on johnnie.black's post.
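As a sanity check on the link limits quoted throughout this thread, the theoretical payload bandwidth of each link can be derived from its transfer rate and line encoding alone. This is only a sketch: the "practical" figures used above (~3000 MB/s for PCIe 2.0 x8, ~6000 MB/s for PCIe 3.0 x8, ~2200 MB/s per SAS2 4x link) sit below these theoretical numbers because of protocol and controller overhead, which this calculation deliberately ignores.

```python
# Theoretical payload bandwidth after line encoding only (no protocol overhead).

def link_bw_mb_s(gbit_per_lane: float, payload_bits: int, total_bits: int,
                 lanes: int) -> float:
    """Payload MB/s for a serial link: raw rate reduced by encoding overhead."""
    return gbit_per_lane * payload_bits / total_bits * 1000 / 8 * lanes

pcie2_x8 = link_bw_mb_s(5.0, 8, 10, 8)      # PCIe 2.0: 5 GT/s, 8b/10b encoding
pcie3_x8 = link_bw_mb_s(8.0, 128, 130, 8)   # PCIe 3.0: 8 GT/s, 128b/130b encoding
sas2_4x  = link_bw_mb_s(6.0, 8, 10, 4)      # SAS2 wide port: 6 Gb/s, 8b/10b

print(round(pcie2_x8))   # 4000 MB/s theoretical (thread uses ~3000 practical)
print(round(pcie3_x8))   # 7877 MB/s theoretical (thread uses ~6000 practical)
print(round(sas2_4x))    # 2400 MB/s theoretical (thread uses ~2200 practical)
```

The gap between the theoretical values and the thread's working figures is roughly the 10-25% one usually loses to packet headers, flow control and HBA firmware, which is consistent with treating 2200 MB/s as the usable limit of a SAS2 4x link.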
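The per-drive figures in cases 1 through 2.3 all follow the same budgeting pattern: take the narrowest link on the path, divide it evenly across the simultaneously active drives, and subtract any traffic already claimed by the internal cages. A minimal sketch using the thread's own rough numbers (all MB/s, not measurements):

```python
# Per-drive bandwidth budgeting as used in the calculations above.
# All constants are the thread's assumed figures, not measured values.

HOST_LIMIT_PCIE2 = 3000   # assumed practical PCIe 2.0 x8 limit
DUAL_LINK_LIMIT  = 4400   # 2 x 2200 MB/s SAS2 wide-port links
EXT_DRIVES, INT_DRIVES = 28, 8
INT_AVG, EXT_AVG = 160, 125   # assumed average HDD speeds, internal/external

def per_drive(budget_mb_s: float, drives: int) -> float:
    """Evenly split a link or bus budget across simultaneously active drives."""
    return budget_mb_s / drives

# Case 1: PCIe 2.0 HBA, all 28 external drives streaming at once
print(round(per_drive(HOST_LIMIT_PCIE2, EXT_DRIVES)))   # 107 MB/s

# Case 2.1: PCIe 3.0 HBA with a working dual link, 28 external drives
print(round(per_drive(DUAL_LINK_LIMIT, EXT_DRIVES)))    # 157 MB/s

# Budget left for external drives once 8 internal drives are busy
leftover = HOST_LIMIT_PCIE2 - INT_DRIVES * INT_AVG      # 1720 MB/s
print(round(leftover / EXT_AVG))                        # ~14 external drives
```

This makes the trade-off in case 2.2 easy to see: halving the expander budget from 4400 to 2200 MB/s halves the per-drive external figure, while the internal drives inherit bandwidth they cannot use.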