[SOLVED] Anything I can do to speed up rebuild?


m4f1050

Recommended Posts

I just swapped my parity drive for a WD Red NAS 8 TB drive, and the estimate went from 8 hours (with the WD Green 2 TB drive I had before) to 1 day and 8 hours.  I have all VMs down, all Dockers off, and nothing is accessing the drives except the rebuild.  Speed varies from 64 to 77 MB/s, going up and down in that range as it always has.  The Green drives are SATA II (3 Gb/s) and the Red is SATA III (6 Gb/s), but I guess once I swap all the drives it will speed up somewhat.
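Here's the back-of-the-envelope math behind that estimate (a rough sketch, assuming a decimal 8 TB capacity and that the speed stays in the reported range):

```python
# Rough rebuild-time estimate: parity is written across the full 8 TB,
# so time = capacity / average speed (decimal units assumed).
capacity_bytes = 8e12                     # 8 TB parity drive
for speed_mb_s in (64, 77):
    hours = capacity_bytes / (speed_mb_s * 1e6) / 3600
    print(f"{speed_mb_s} MB/s -> {hours:.1f} hours")
# 64 MB/s -> 34.7 hours, 77 MB/s -> 28.9 hours, which brackets the
# reported estimate of 1 day and 8 hours (32 hours).
```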

 

My question is: can something be done BEFORE the rebuild (e.g. defragmentation), or can I request a feature to only rebuild up to the size of the largest data drive (obviously not counting the parity, which HAS to be the largest) to speed up the process?  Or will it automatically speed up once it passes the end of the 2 TB space?

 

Thanks!

Link to comment
33 minutes ago, m4f1050 said:

Speed varies from 64 to 77 MB/s

This is low even for 2TB disks, unless it's near the end of the sync when the slow inner sectors are being read.  If not, make sure you have no hardware bottleneck and that you're using the recommended tunables for larger arrays:

 

Settings -> Disk Settings

 

Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000
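To double-check that the values actually applied, something like this works from the console.  A minimal sketch, assuming the usual unRAID layout where these tunables live as key="value" pairs in /boot/config/disk.cfg:

```python
# Read the md_* tunables back from unRAID's disk.cfg (the path and
# the key="value" format are assumptions based on a typical install).
from pathlib import Path

wanted = {"md_num_stripes": "4096",
          "md_sync_window": "2048",
          "md_sync_thresh": "2000"}
for line in Path("/boot/config/disk.cfg").read_text().splitlines():
    key, _, value = line.partition("=")
    if key in wanted:
        val = value.strip().strip('"')
        print(f"{key} = {val} (recommended: {wanted[key]})")
```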

Link to comment
2 hours ago, johnnie.black said:

This is low even for 2TB disks, unless it's near the end of the sync when the slow inner sectors are being read.  If not, make sure you have no hardware bottleneck and that you're using the recommended tunables for larger arrays:

 

Settings -> Disk Settings

 

Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000

Is there any harm in using these high values for relatively smaller arrays (e.g. a 5-disk, 30TB array)?

Link to comment
4 hours ago, johnnie.black said:

This is low even for 2TB disks, unless it's near the end of the sync when the slow inner sectors are being read.  If not, make sure you have no hardware bottleneck and that you're using the recommended tunables for larger arrays:

 

Settings -> Disk Settings

 

Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000

Ok, I set them to these values, and it gave me an increase from 77 MB/s to 92 MB/s.

 

The 8 TB drive is SATA III, but the 2 TB drives are SATA II   😕   That's the reason I am moving to 8 TB.  But I am still waiting on 2 more 8 TB drives (I have 3), so I cancelled the rebuild, moved the data off one of the 2 TB drives to use it as parity, and I am just going to copy everything over to the 8 TB drives, THEN build my array and rebuild when I have all the 8 TB drives.  No sense in doing a rebuild on the 8 TB parity and then building a 4 x 8 TB array and doing another rebuild... LOL

 

Thanks, that helped.

Link to comment
2 hours ago, johnnie.black said:

SATA II is not a problem; it's still capable of around 275MB/s, and no spinning disk can go above that.  SSDs are a different story.
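A quick sketch of where the ~275MB/s figure comes from (the 8b/10b encoding overhead is standard SATA; the efficiency and disk numbers are rough ballparks, not specs):

```python
# SATA II: 3 Gb/s line rate; 8b/10b encoding means 80% of bits are data.
line_rate_bps = 3e9
payload_mb_s = line_rate_bps * 8 / 10 / 8 / 1e6   # data bits -> bytes
print(f"theoretical payload: {payload_mb_s:.0f} MB/s")   # 300 MB/s
# Protocol overhead trims that to roughly 275 MB/s in practice, still
# well above what any spinning disk of that era could sustain.
```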

Ok, so how can I speed it up even more?  I have 20 x 2 TB drives; is that my problem?

 

Well, 21 counting the parity drive, plus a 1 TB SSD for cache.

 

This unRAID server setup is *OLD*; it's 5 years old   🤦‍♂️

 

I had 6 x 1.5 TB Seagate Barracuda drives when I first started this unRAID server.

Link to comment
3 hours ago, johnnie.black said:

You may have other bottlenecks, and it might also be that the 2TB disks are older models and just slow.  Post your diagnostics so I can get a better idea, and indicate which slots the controllers are installed in.

They are WD Caviar Green WD20EARS/WD20EARX/WD20EZRX drives, about 6 or 7 years old.  The 2 x SuperMicro 8-port SAS controllers are on PCI-E x16 slots (the mobo is an ASRock 970 Extreme 4, which has 3 PCI-E x16 slots: one has an RX 460 GPU and the other 2 have the controllers).  The mobo also has 5 internal SATA III ports and 1 external SATA III with the wire routed back inside for the cache drive; the parity drive is on one of the internal SATA III ports, and the other 4 internal ports plus the 16 from the controllers carry the data drives.  The system has 32GB of memory (when the VMs are off, unRAID uses all 32GB, I believe? The VMs use 12GB each = 24GB, leaving unRAID with 8GB) and 4 x 5-drive Icy Docks, all inside an Antec 1200 case with an 850-watt single-rail PSU.

Link to comment
1 hour ago, m4f1050 said:

They are WD Caviar Green WD20EARS

Without the diagnostics I can't say for sure, since there are 3- and 4-platter models.  Even the 3-platter model is slow, but the 4-platter models can barely crack 100MB/s on the outer sectors and are much slower on the inner ones, so that's one of the problems: until you get rid of them, the first 2TB will always be on the slow side.
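For intuition on why the inner sectors are so much slower: the platters spin at constant RPM, so sequential speed scales roughly with track circumference.  A rough illustration (the radius ratio is illustrative, not taken from any WD spec):

```python
# At fixed RPM, sequential throughput is ~proportional to track radius.
outer_speed_mb_s = 100     # ballpark outer-track speed of a slow Green
inner_radius_ratio = 0.5   # inner tracks at roughly half the radius
print(f"inner tracks: ~{outer_speed_mb_s * inner_radius_ratio:.0f} MB/s")
# ~50 MB/s; a parity sync reads all disks in lockstep, so the array
# crawls at the pace of whichever disk region is currently slowest.
```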

 

1 hour ago, m4f1050 said:

2 x SuperMicro 8-port SAS controllers

SASLP or SAS2LP? SASLP can also be a bottleneck.

 

1 hour ago, m4f1050 said:

mobo is an ASRock 970 Extreme 4, which has 3 PCI-E x16

Yes, but the bottom one is x4 electrically.  It won't matter if the HBAs are SASLP; it can if they are SAS2LP.
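The arithmetic behind that, sketched with commonly cited link specs (SASLP as PCIe 1.0 x4, SAS2LP as PCIe 2.0 x8; treat both as assumptions):

```python
# Per-drive ceiling when 8 drives read at once behind one HBA's link.
links = {
    "SASLP,  PCIe 1.0 x4": 4 * 250e6,    # ~250 MB/s per gen-1 lane
    "SAS2LP, PCIe 2.0 x8": 8 * 500e6,    # ~500 MB/s per gen-2 lane
    "SAS2LP in an x4 slot": 4 * 500e6,   # the bottom slot's limit
}
for name, link_bytes in links.items():
    print(f"{name}: ~{link_bytes / 8 / 1e6:.0f} MB/s per drive, pre-overhead")
# SASLP: ~125 MB/s per drive (a real bottleneck for parity checks);
# SAS2LP: ~500 MB/s; SAS2LP capped to x4: ~250 MB/s, which can pinch
# once protocol overhead is subtracted.
```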

Link to comment
17 hours ago, johnnie.black said:

Without the diagnostics I can't say for sure, since there are 3- and 4-platter models.  Even the 3-platter model is slow, but the 4-platter models can barely crack 100MB/s on the outer sectors and are much slower on the inner ones, so that's one of the problems: until you get rid of them, the first 2TB will always be on the slow side.

 

SASLP or SAS2LP? SASLP can also be a bottleneck.

 

Yes, but the bottom one is x4 electrically.  It won't matter if the HBAs are SASLP; it can if they are SAS2LP.

SASLP (PCI-E x4) -- Wow, ASRock says it has 3 x16 slots, so they crippled the 3rd "x16" into an "x4" and lied about it?  Interesting...!

I'm moving to 4 x 8TB data, 1 x 8TB parity, and my 1 x 1TB cache, using the onboard SATA III ports (or are they also crippled by ASRock?), so I am going to sell the 2 controllers, the 21 x 2TB drives, the Icy Docks, the 850-watt PSU, and the Antec 1200 case.  These drives have been great and still work, mainly because I don't use the array heavily, and since unRAID spins them down they literally haven't been used that much.  (Maybe 5 or 6 parity checks and 3 rebuilds with 0 errors total that I can remember.)

Link to comment

Oh ok.  I was going by the box; I guess I should've checked.  I'm ditching those cards anyway and getting a PCI-E card with 2 ports (or 4 ports), but I need to find out which one to get that's SATA III and compatible.  I was looking at one with an ASM1061; will that one be compatible?  I'm going with 6 x 8 TB on the internal SATA III controller on the mobo and adding 2 x 1 TB Crucial SSD cache drives on the card.  If I get the 4-port card I can add a DVD burner for the Windows 10 VM that has the RX 460 pass-through GPU.

 

The 4-port card I am looking at has a Marvell 88SE9215 chipset.

Link to comment
4 hours ago, johnnie.black said:

Yes.

 

Avoid Marvell-based controllers with Unraid.

10-4...  I need a 4-port card though; know of any working 4-port ones?  I ordered another 8 TB Red NAS drive (total of 6: 5 data, 1 parity) and another 1 TB Crucial SSD cache drive (total of 2 cache drives: cache pooling/RAID 1/whatever you want to call it, I guess LOL), plus a Blu-ray burner for my Windows 10 VM.

 

EDIT:

I guess I can order 2 of these 2-port ASM1061 cards on Amazon for $11.99 each (2-day shipping, so it won't slow down my transition):

https://www.amazon.com/dp/B005B0A6ZS

 

Now here is one interesting card...

It has an ASM1061 and 2 JMicron JMB575 port multipliers:

https://www.amazon.com/dp/B0177GBY0Y

Link to comment
13 hours ago, johnnie.black said:

There aren't any newer recommended 4-port controllers; the ones that exist are mostly Marvell.  There's the old Adaptec 1430SA, which still works well and can be found cheaply on eBay.

 

The last card uses port multipliers; avoid those too, for performance and other possible issues.

Good to know.  Thanks for the advice on the Adaptec, but I ended up buying 2 ASM1061 cards and called it a day.  The Adaptec is SATA II, and I would have kept one of the SuperMicros if I wanted to stay on SATA II; I'm trying to avoid any bottleneck except the drive speed itself.  BTW, I wanted SATA III because I am using it for the 2 x 1 TB Crucial SSDs... The 2nd card, since it was cheap, I got to run the Blu-ray burner and the parity drive, moving the parity from the eSATA cable to a SATA cable and freeing the onboard eSATA for an eSATA dock I have for file transfers.

 

Hmmm, what do you recommend?  Should I run 2 data drives on each adapter (total of 2 + 2 = 4) and 1 data, 1 parity, 2 cache drives, and the Blu-ray burner on the onboard ports?  That way the data is spread across separate controllers.  Would that help with speed?

 

Scenario 1

On onboard (5 int, 1 ext): 5 x data drives, 1 x eSATA available for dock.

SATA III controller 1: 1 x parity drive, 1 x Blu-ray burner

SATA III controller 2: 2 x SSD cache

 

Scenario 2

On onboard (5 int, 1 ext): 5 x data drives, 1 x eSATA available for dock.

SATA III controller 1: 1 x SSD cache, 1 x parity drive

SATA III controller 2: 1 x SSD cache, 1 x Blu-ray burner

 

Scenario 3

On onboard (5 int, 1 ext): 1 x data drive, 1 x parity drive, 2 x SSD cache, 1 x Blu-ray burner, 1 x eSATA available for dock.

SATA III controller 1: 2 x data drives

SATA III controller 2: 2 x data drives

 

Thanks for your help.

Link to comment
2 hours ago, m4f1050 said:

but I ended up buying 2 ASM1061 and called it the day.  The Adaptec is SATA II

It is, but in simultaneous use it has the same max performance as the two ASMedia cards, ~200MB/s per drive, since those are limited by their PCIe 2.0 x1 slot.
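For completeness, the arithmetic behind the ~200MB/s figure; a sketch assuming the usual rough 80% efficiency after PCIe protocol overhead:

```python
# PCIe 2.0 x1 carries ~500 MB/s raw; two drives per card share it.
lane_bytes = 500e6      # per-lane raw bandwidth, PCIe 2.0
efficiency = 0.8        # rough allowance for protocol overhead
print(f"~{lane_bytes * efficiency / 2 / 1e6:.0f} MB/s per drive")  # ~200
```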

 

Connect the SSDs to the onboard controller for maximum performance; there won't be any difference for the remaining disks, so connect them as you like.

Link to comment
On 9/21/2018 at 6:33 PM, johnnie.black said:

It is, but in simultaneous use it has the same max performance as the two ASMedia cards, ~200MB/s per drive, since those are limited by their PCIe 2.0 x1 slot.

 

Connect the SSDs to the onboard controller for maximum performance; there won't be any difference for the remaining disks, so connect them as you like.

Ok, thanks.  Will go with Scenario 3.

Link to comment
  • 5 years later...
On 9/18/2018 at 4:56 AM, JorgeB said:

This is low even for 2TB disks, unless it's near the end of the sync when the slow inner sectors are being read.  If not, make sure you have no hardware bottleneck and that you're using the recommended tunables for larger arrays:

 

Settings -> Disk Settings

 

Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000

 

Hi Jorge - I can't find md_sync_window and md_sync_thresh on 6.12.  Are they somewhere else?  I see sync_limit.

Link to comment
