joeloooiz

Parity disk questions


I got really lucky and got my hands on a Backblaze Storage Pod. It uses port multipliers that turn one SATA port into five, which allows for up to 45 drives in the server.

 

I am currently in need of a parity rebuild which, by UnRAID's estimate, will take 8 more days (the two parity drives are 10 TB each).

 

My question: my two parity drives are plugged into port multipliers, which are in turn plugged into a RAID card. My motherboard also has SATA ports on it, and I'm wondering whether I should wire the two parity drives directly to those ports to reduce the time a parity rebuild would take. I have a few other creative ideas on how to do this, but essentially I could either a) skip the port multipliers and wire the two drives directly, or b) wire one of the port multipliers to a SATA port on the motherboard to at least take the RAID card out of the path.

 

I'm not sure which option is best, or which would require the least reconfiguring and/or rebuild time.

 

Any advice would be greatly appreciated.

Posted (edited)

The more drives you can connect directly to the onboard SATA ports, the faster it will go (but don't connect a port multiplier to an onboard Intel port, since those don't support it). Start with the largest drives, since those are the ones that will be using the bandwidth for the longest, or the whole time.

 

For simplicity's sake, let's say you only have one 5-port multiplier connected to one controller SATA port; the total bandwidth shared by those 5 disks will be around 200MB/s*.

 

Parity is 10TB

Disk1 is 2TB

Disk3 is 1TB

Disk4 is 1TB

Disk5 is 1TB

 

So the parity check would start at around 200/5, about 40MB/s per disk, but once past 1TB it would only be reading two disks on that multiplier, so up to 100MB/s per disk, and after 2TB only one disk would be using that multiplier. Of course it can never go faster than the slowest disk at that point, but there wouldn't be a controller bottleneck anymore.
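The phases above can be sketched numerically (a rough model using the corrected ~200MB/s of usable controller bandwidth; it ignores the drives' own speed limits, which cap the final single-disk phase in practice):

```python
# Rough model of per-disk throughput during a parity check when several
# disks share one port-multiplier link (sizes taken from the example above).
LINK_BW = 200  # MB/s usable through the PCIe 1.0 x1 controller

# Disk sizes in TB on the single 5-port multiplier
disks = {"parity": 10, "disk1": 2, "disk3": 1, "disk4": 1, "disk5": 1}

def throughput_at(position_tb):
    """Per-disk MB/s once the check has progressed past position_tb."""
    active = [name for name, size in disks.items() if size > position_tb]
    return LINK_BW / len(active) if active else 0.0

print(throughput_at(0))    # all 5 disks active -> 40.0 MB/s each
print(throughput_at(1.5))  # parity + disk1     -> 100.0 MB/s each
print(throughput_at(5))    # parity only        -> 200.0 MB/s
```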

 

So depending on the disk sizes and how many disks you have, connect as many as possible to the onboard SATA ports, then redistribute the rest, putting as few as possible on each port multiplier. If the capacities allow it, arrange them so that there aren't several large disks on the same port multiplier, so that the bandwidth constraint lasts for the least amount of time possible.
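One way to sketch that placement advice (a hypothetical greedy heuristic, not anything built into UnRAID): sort the disks largest-first and always place the next one on the multiplier with the least total capacity so far, which naturally spreads the large drives across ports.

```python
# Greedy spread: place largest disks first, each onto the multiplier
# with the smallest total capacity so far (sizes in TB are illustrative).
def spread(disk_sizes, n_ports):
    ports = [[] for _ in range(n_ports)]
    for size in sorted(disk_sizes, reverse=True):
        target = min(ports, key=sum)  # least-loaded port
        target.append(size)
    return ports

# e.g. six 10 TB and twelve 8 TB drives over nine multiplier ports
layout = spread([10] * 6 + [8] * 12, 9)
for port in layout:
    print(port)  # no port ends up holding two 10 TB disks
```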

 

Also note that SATA port multipliers aren't really recommended, since they have a tendency to time out, and sometimes an error on one disk can time out or take out the rest of the disks on the same multiplier. But since you already have the hardware, try it; for large builds, SAS expanders are the way to go.

 

*correction: the controller is PCIe 1.0 x1, so around 200MB/s usable max (for all ports)

 

 

Edited by johnnie.black


So I have a few 10 TB disks aside from the two parity disks (four others, I think, for a total of six). I'm also ready to move from a DAS (SA120) to this storage pod, which would migrate another twelve 8 TB drives into it, so spreading them out may not give me much relief, seeing as I need at least 12 more slots. Once those 12 are in the server I'll have a total of 29 drives out of the available 45, which means I could at least keep the 10 TB drives to one per port multiplier.

 

It's a conundrum, to say the least.

