MvL Posted May 27, 2018 Does it make much difference if I connect the parity and the cache drive to the motherboard (each on its own SATA port) or connect them behind an expander in the expander chassis? Edit: What I mean is, is there a speed difference?
Maticks Posted May 29, 2018 SATA 6Gb/s has a maximum speed of 6 gigabits per second, which is about 750 megabytes per second. PCIe 2.0 specifies 500 megabytes per second per lane. Hence a PCIe 2.0 x1 slot has a maximum speed of 500 megabytes per second (an x8 slot would have a speed of 500*8 = 4000 megabytes per second). Thus an x1 slot will bottleneck a SATA 6Gb/s port. That being said, your average SATA 6Gb/s SSD maxes out at about 500 megabytes per second, so if you have one SATA 6Gb/s drive on a PCIe x1 controller then your speed won't be inhibited much (and it will still be faster than a motherboard's SATA 3Gb/s port). Look at how many PCIe lanes are allocated to the onboard SATA controller. With your add-on PCIe card, check how many lanes the card uses; if it is an x4 card, make sure it's in an x4 slot. There are some poor SATA3 x1 cards out there with really low bandwidth.
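To make those numbers concrete, here is a minimal sketch (not from the original post) comparing a controller slot's theoretical bandwidth with the drives behind it; the per-lane and per-drive figures are the theoretical values quoted above, before any protocol overhead.

```python
# Rough bandwidth check for a SATA controller in a PCIe slot.
# Figures are theoretical (MB/s), before protocol overhead.
PCIE_MB_PER_LANE = {"2.0": 500, "3.0": 1000}

def slot_bandwidth(gen: str, lanes: int) -> int:
    """Theoretical PCIe slot bandwidth in MB/s."""
    return PCIE_MB_PER_LANE[gen] * lanes

def per_drive_bandwidth(gen: str, lanes: int, drives: int) -> float:
    """Bandwidth available per drive when all drives transfer at once."""
    return slot_bandwidth(gen, lanes) / drives

# A single SATA SSD (~500 MB/s) on a PCIe 2.0 x1 controller: barely bottlenecked.
print(slot_bandwidth("2.0", 1))          # 500 MB/s for the whole card
print(per_drive_bandwidth("2.0", 1, 1))  # 500.0 MB/s for the one drive
# The same controller with x4 lanes: plenty even with four drives active.
print(per_drive_bandwidth("2.0", 4, 4))  # 500.0 MB/s per drive
```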
MvL (Author) Posted May 29, 2018 I have an expander backplane chassis which is connected dual-link to an LSI 9207-8i (PCI-E 3.0). Some calculations: SFF-8087 --> 4 * 750MB/s (per cable) = 3000MB/s * 2 (dual link) = 6000MB/s. The LSI is PCI-E 3.0, so 1000MB/s per lane, thus 8000MB/s for the x8 card. So the LSI is fast enough. During a parity check with a 24-bay expander backplane chassis I get a speed of 6000MB/s / 24 drives = 250MB/s per drive. What is the speed when only a single drive is accessed, is this also 250MB/s? SATA drives connected directly to the motherboard have a speed of 600MB/s per drive. What happens if I move the parity drive from the expander backplane chassis to a direct motherboard connection? If I move my cache drive from the expander backplane to a direct motherboard connection (and the speed behind the expander is indeed a maximum of 250MB/s), I get a good speed increase. True? I'm trying to figure out if there is any value in moving the parity and cache drives to direct motherboard connections. The values are theoretical, not a real-life situation!
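A quick sketch of that back-of-the-envelope math, using MvL's optimistic 750MB/s per-lane figure (corrected to 600MB/s in the next post):

```python
# MvL's theoretical numbers for a dual-link SFF-8087 connection to the expander.
SFF8087_LANES_PER_CABLE = 4
MB_PER_SATA_LANE = 750        # optimistic raw figure; corrected to 600 MB/s below
CABLES = 2                    # dual-link
DRIVES = 24

expander_link = SFF8087_LANES_PER_CABLE * MB_PER_SATA_LANE * CABLES  # 6000 MB/s
hba_pcie = 8 * 1000                                                  # PCIe 3.0 x8 ~ 8000 MB/s
bottleneck = min(expander_link, hba_pcie)                            # the expander link

print(bottleneck / DRIVES)  # 250.0 MB/s per drive during a parity check
```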
JorgeB Posted May 29, 2018 8 minutes ago, MvL said: SFF-8087 --> 4 * 750MB/s (per cable) = 3000MB/s * 2 (dual link) = 6000MB/s. The LSI is PCI-E 3.0, so 1000MB/s per lane, thus 8000MB/s. So the LSI is fast enough. It's 4 * 600MB/s per cable, so 4800MB/s total dual-link, of which 4400MB/s max will be available after overhead. 9 minutes ago, MvL said: During a parity check with a 24-bay expander backplane chassis I get a speed of 6000MB/s / 24 drives = 250MB/s per drive. What is the speed when only a single drive is accessed, is this also 250MB/s? 4400 / 24 = 183MB/s per drive; a single drive can reach up to ~550MB/s. For your case there should be no difference connecting either parity or cache onboard or on the HBA.
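The same calculation with JorgeB's figures, a minimal sketch assuming 600MB/s per lane and the ~4400MB/s usable figure he quotes:

```python
# Dual-link SFF-8087: 8 lanes at 600 MB/s raw, ~4400 MB/s usable after overhead.
LANES = 4 * 2
raw = LANES * 600             # 4800 MB/s raw
usable = 4400                 # after protocol overhead, per JorgeB
print(raw)                    # 4800
print(usable / 24)            # ~183 MB/s per drive with all 24 drives reading
print(min(usable, 550))       # a single drive can still reach ~550 MB/s
```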
MvL (Author) Posted May 29, 2018 Hi Johnnie! Quote It's 4 * 600MB/s, so 4800MB/s total, of which 4400MB/s max will be available after overhead. Yeah, I remembered your values when answering Maticks' post. Maticks speaks of 750MB/s, and if I check with an online converter it also says 750MB/s. Of course it's all theoretical. I have seen the post with the graphs you linked, and those are the real-life values. Quote 4400 / 24 = 183MB/s per drive, single drive up to ~550MB/s. That answers my question. So you get full bandwidth when using only one drive.
pwm Posted May 29, 2018 3 hours ago, MvL said: Maticks speaks of 750MB/s, and if I check with an online converter it also says 750MB/s. 6 Gbit/s divided by 8 bits gives 750 MB/s. But the transfer on the SATA cable isn't simply 8 bits followed by 8 bits followed by 8 bits. It's a synchronous serial stream with a self-synchronizing encoding, where additional bits are required to make sure that you can't get too many zeros in a sequence or too many ones in a sequence. To be able to synchronize, the receiver must regularly see the data line toggle. Because of this, you should count 600 MB/s as the raw transfer rate. On top of that you have the protocol overhead with message headers and message gaps.
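A quick worked number for pwm's point, assuming the usual 8b/10b line encoding of SATA 6Gb/s (the encoding isn't named in the post):

```python
# 8b/10b encoding: every 8 data bits are sent as 10 bits on the wire.
line_rate_gbit = 6.0
payload_gbit = line_rate_gbit * 8 / 10      # 4.8 Gbit/s of actual data
print(payload_gbit * 1000 / 8)              # 600.0 MB/s before protocol overhead
# The naive 6 Gbit/s / 8 = 750 MB/s figure ignores the encoding entirely.
```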
MvL (Author) Posted May 30, 2018 Thanks for explaining, pwm. Johnnie already pointed this out, but I'm always curious why, how, etc.