Grrrreg - Posted March 1, 2020 (edited)
I've got a 126TB array with dual 12TB parity, a mix of WD Reds and white-label/shucked Easystores. My parity syncs take an average of 2.5 days to complete; currently the check is running at about 35-55MB/s. The server is a Supermicro X9DRi-F with a SAS2 backplane connected to a single SAS9211-8i (PCIe 2.0). I have one PCIe 3.0 x16 slot available and an additional 3.0 x8 (two of the three x16 slots are populated with Nvidia P400 GPUs for Plex and a VM). Would I see much performance gain by splitting the backplane across two SAS9211-8i cards, or by upgrading to a PCIe 3.0 x16 controller? I have a spare LSI MegaRAID 9265-8i, but I don't know if it's worth trying to flash it to IT mode, as I can't find consistent information that it will work. An additional consideration: CPU usage is about 15% during a parity check with my normal Docker/VM load.
Edited March 1, 2020 by Grrrreg (additional info)
Vr2Io - Posted March 2, 2020
First, does the 9211-8i connect to the backplane with a dual link? Speed also depends on how many disks are in the array and the mix of disk sizes. Best-case speed works out to roughly 2 hrs per TB.
JorgeB - Posted March 2, 2020
18 hours ago, Grrrreg said: "Currently it's running about 35-55MB/s."
That's awfully slow. Run the DiskSpeed docker to check that all disks are performing normally; posting the diagnostics might also give some clues.
Grrrreg - Posted March 2, 2020 (Author)
@Benson - 2 SFF-8087 cables from the backplane to the 9211-8i; originally the Supermicro server came with those connected to a MegaRAID 9265-8i. Two 12TB parity drives, and the other 22 drives on that backplane are data. Image of the array and current diagnostics attached. Running DiskSpeed now; it looks like a parity drive is possibly the bottleneck, though with a parity check running I don't know if that's valid. I'll run another once the parity sync ends.
seine-diagnostics-20200302-1348.zip
JorgeB - Posted March 3, 2020
It's going at about 85MB/s now, but it's past the smaller disks. Wait for it to finish, then run the speed test. If that's inconclusive, start a new check, let it run for 5 minutes, grab diags, and then you can cancel it.
Vr2Io - Posted March 3, 2020 (edited)
Judging from the parity check start time and the time the first 3TB completed, that stretch took around 16 hrs, so ~55MB/s, which matches your description. Since you have 24 disks, 55MB/s * 24 = 1.32GB/s; that estimates the actual storage bandwidth your system needs. On the controller side, it has an x4 link (half width) to the motherboard, which theoretically provides 2GB/s of bandwidth.
Feb 29 15:24:26 Seine kernel: pci 0000:02:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x4 link at 0000:00:01.1 (capable of 32.000 Gb/s with 5 GT/s x8 link)
So, as @johnnie.black suggests, please use the DiskSpeed docker to get more speed figures at different checkpoints. I would also suggest:
- The LSI HBA (9211-8i) should have 8 LEDs; if only 4 LEDs are on, it means you are connected with a single link instead of a dual link.
- Plug the HBA into an x8 PCIe (2.0) slot, otherwise there is not enough bandwidth for 24 disks.
- With the existing hardware properly set up, parity check time should drop by more than 50% (est.).
- If you want no bottleneck at all, you must reduce the number of disks, or change the backplane and HBA to 12Gb.
Mar 1 04:00:02 Seine kernel: mdcmd (229): check
Mar 1 04:00:02 Seine kernel: md: recovery thread: check P Q ...
Mar 1 20:11:32 Seine kernel: mdcmd (231): spindown 9
Mar 1 20:11:33 Seine kernel: mdcmd (232): spindown 10
Mar 1 20:11:33 Seine kernel: mdcmd (233): spindown 11
Mar 1 20:11:34 Seine kernel: mdcmd (234): spindown 12
Mar 1 20:11:34 Seine kernel: mdcmd (235): spindown 20
Edited March 3, 2020 by Benson
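The arithmetic above can be sketched as a quick back-of-the-envelope check (numbers are the ones reported in the thread; the PCIe 2.0 per-lane figure assumes the usual 8b/10b encoding overhead and ignores protocol overhead):

```python
# Rough bandwidth check for a 24-disk array behind a PCIe 2.0 x4 HBA link.
# Disk count and per-disk speed come from the thread; PCIe figures are
# the standard theoretical numbers, not measured values.

per_disk_mb_s = 55          # observed parity-check speed per disk
disks = 24                  # drives on the backplane during the check

aggregate_mb_s = per_disk_mb_s * disks
print(f"Aggregate array throughput: {aggregate_mb_s / 1000:.2f} GB/s")  # 1.32 GB/s

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s usable per lane
lane_mb_s = 5_000 * 8 / 10 / 8   # GT/s -> MB/s after encoding overhead
lanes = 4                        # x4 link reported in the syslog excerpt
link_mb_s = lane_mb_s * lanes
print(f"Theoretical x4 link bandwidth: {link_mb_s / 1000:.1f} GB/s")    # 2.0 GB/s
```

With real-world PCIe overhead, that theoretical 2GB/s lands closer to the ~1.5GB/s ceiling JorgeB mentions below, which is why the x4 link is the prime suspect.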
JorgeB - Posted March 3, 2020
Quoting: "For controller side, it have x4 link ( in half ) connect with MB, theoretical it should provide 2GB/s bandwidth."
I didn't check that, and while theoretically you can get 2GB/s from an x4 link, in practice it will max out at 1.5GB/s at best. Good catch by @Benson, and make sure the HBA is installed in an x8 slot.
Vr2Io - Posted March 3, 2020 (edited)
I also calculated 85MB/s x 18 disks, which matches that 1.5GB/s figure quite well. The x4 PCIe link is likely the first bottleneck.
Edited March 3, 2020 by Benson
Grrrreg - Posted April 2, 2020 (Author)
What's the best way to share the DiskSpeed results?
Grrrreg - Posted April 2, 2020 (Author)
Hopefully this works. sdq is not part of the array. After reading the manual for my backplane, it only uses one 8087. I've got a new parity check running now, after some issues I created trying to pass an entire USB PCIe card through to my VM. Estimated time: 2 days, 9 hours, 52 minutes for the 132TB array. I had to replace one 4TB WD Red that was failing since the last post.
seine_diskspeed.7z
itimpi - Posted April 2, 2020
5 hours ago, Grrrreg said: "Estimated time, 2 days, 9 hours, 52 minutes for 132TB array"
FYI: the amount of data in the array should be irrelevant to the elapsed time of the operation. The key factor is the size of the largest parity drive.