
Slow parity sync, is 80MB/s right?


Recommended Posts

Seems about right. There could be a bottleneck somewhere with that many drives, but since parity operations are limited to the slowest drive and these old 3TB ones likely can't do more, it's probably the latter.
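The "limited by the slowest drive" behaviour can be sketched like this (the drive names and speeds below are hypothetical, just to illustrate the point):

```python
# Hypothetical sustained speeds in MB/s. A parity sync reads/writes every
# participating drive in lockstep, so the whole operation can only move
# as fast as the slowest drive in the set.
drive_speeds = {"parity": 190, "disk1": 170, "old_3tb": 85}

sync_speed = min(drive_speeds.values())
print(sync_speed)  # 85 -> the old 3TB drive caps the sync
```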

1 hour ago, Kilrah said:

Seems about right. There could be a bottleneck somewhere with that many drives, but since parity operations are limited to the slowest drive and these old 3TB ones likely can't do more, it's probably the latter.

Thanks a lot

But I ran the Disk Speed test in Docker, and the result shows the slowest drive is 90MB/s, and it's not a disk in the array but an unassigned device that isn't even mounted...

So is there any other cause here, and anything I can do to improve it?

5 hours ago, Kilrah said:

You'd have to describe your whole setup: how the disks are connected, through what HBAs/expanders, what mobo slots are used, etc., so someone can figure out if there's a bottleneck somewhere.

Thanks for your help 

Here is the disk connection map:

Host bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 DMI2 (rev 04)

PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 2c (rev 04)

RAID bus controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
     13 drives (the left 13 in the picture; the main disks in the parity sync)

PCI bridge: Intel Corporation Xeon E7 v2/Xeon E5 v2/Core i7 PCI Express Root Port 3a (rev 04)

Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
     34 drives

[screenshot attachments: disk assignments]

 

And it is now dropping to 14MB/s...

I can't figure out why this is happening; all 8 disks (2 parity and 6 data disks) are on the first bus, and no background app is running.

 

 


This doesn't say how they're connected. So you have 13 drives connected to an 8-port card, and 32 drives connected to another 8-port card? That means you have expanders, and it's precisely the hardware and cabling between the controllers and the drives that needs describing. Obviously there'll be some bandwidth sharing at play here.

 

Also, what's that spam of something connecting/disconnecting via SSH 10 times per second?

Just now, Kilrah said:

This doesn't say how they're connected. So you have 13 drives connected to an 8-port card, and 32 drives connected to another 8-port card? That means you have expanders, and it's precisely the hardware and cabling between the controllers and the drives that needs describing.

The 13 drives are in a Dell R720XD rack server, and the other 34 are in a Supermicro disk chassis, connected with an SFF-8088 cable and an HBA card.

4 minutes ago, Kilrah said:

This doesn't say how they're connected. So you have 13 drives connected to an 8-port card, and 32 drives connected to another 8-port card? That means you have expanders, and it's precisely the hardware and cabling between the controllers and the drives that needs describing.

What kind of disk connection description would be easier to understand? I'm new... sorry.

1 hour ago, dafa said:

What kind of disk connection description would be easier to understand? I'm new... sorry.

Physically what's connected to what...

 

1 hour ago, dafa said:

the other 34 are in a Supermicro disk chassis, connected with an SFF-8088 cable and an HBA card

Only one cable or 2? If one, that's 4 SAS2 lanes shared between 34 drives, so yes, when all are accessed simultaneously that would mean about 80MB/s to each of them.

 

A single-drive disk speed test would not show anything, since the bottleneck only becomes noticeable when a significant number of drives are accessed simultaneously.
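As a rough sketch of that math (assuming ~600 MB/s usable per 6 Gb/s SAS2 lane after 8b/10b encoding, and ignoring protocol and expander overhead):

```python
LANE_MBPS = 600  # assumed usable MB/s per SAS2 lane (6 Gb/s with 8b/10b encoding)

def per_drive_mbps(lanes: int, drives: int) -> float:
    """Bandwidth each drive gets when all of them stream through one shared link."""
    return lanes * LANE_MBPS / drives

# One SFF-8088 cable carries 4 lanes, shared here by 34 drives:
print(round(per_drive_mbps(4, 34)))  # 71 -> in the same ballpark as the ~80MB/s observed
```

Real-world throughput will land somewhat lower than this back-of-the-envelope figure once protocol overhead is counted.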

1 hour ago, Kilrah said:

Physically what's connected to what...

 

Only one cable or 2? If one, that's 4 SAS2 lanes shared between 34 drives, so yes, when all are accessed simultaneously that would mean about 80MB/s to each of them.

 

A single-drive disk speed test would not show anything, since the bottleneck only becomes noticeable when a significant number of drives are accessed simultaneously.

I get what you're talking about...

But here is the situation: all 8 drives currently syncing are in the Dell R720XD server itself. There should be no connection issue there, and even if there were one, how could it slow down to 14MB/s now...

