hawihoney Posted October 5, 2020

I'm currently running a disk rebuild because of a disk replacement in the array. What happens, as always, is that as soon as fewer disks are involved, performance degrades. For example, in a dual-parity array with 24 disks (dual parity plus 22 data disks), the rebuild starts at around 130 MB/s. At the 2 TB position, as soon as the 12x 2 TB disks are done and spin down, the rebuild drops to around 90 MB/s. At the 3 TB position, as soon as the 4x 3 TB disks are done and spin down, it drops further to around 70 MB/s.

I see this on bare-metal Unraid servers and on Unraid VMs, and I've seen it for years on every disk rebuild. I don't get it: with fewer disks involved, fewer reads and calculations need to be done. Any ideas why this happens? It puzzles me on every disk or parity rebuild. Thanks in advance.
Vr2Io Posted October 5, 2020

The problem could be the parity disk. What parity disk brand/model?
trurl Posted October 5, 2020

1 hour ago, hawihoney said: with fewer disks involved, fewer reads and calculations need to be done.

Assuming you have no bottlenecks due to port multipliers, the disks are read in parallel, and the parity calculation is the same very fast operation regardless, so the number of disks should have no impact from those factors.

Normally all HDDs are faster on the outer cylinders and slower on the inner ones because of data density: at the same RPM, the larger circumference of an outer track passes more data per rotation than the smaller circumference of an inner track. Rebuild progress goes from outer to inner, so slowing down later on is normal. Smaller disks are slower than larger disks as a rule, again due to data density. So as the smaller disks finish there may be some speed increase because those slower disks are no longer involved, but that can be offset by the remaining larger disks being further along towards their slower inner cylinders. And, of course, some disks just don't perform as well for some reason.

I suspect you are actually seeing the effect of the slower inner cylinders and not really the effect of the smaller disks being finished. Have you tested the speed of your individual disks using the DiskSpeed docker?
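To put rough numbers on the geometry argument, here is a minimal sketch in Python (the outer-edge speed and the radius ratio are assumptions for illustration, not measurements from any drive in this thread). With constant RPM and constant bit density, throughput scales with track radius, and sweeping capacity from the outer edge inward produces the familiar falling speed curve:

```python
import math

V_OUTER = 250.0   # MB/s at the outermost track (assumed)
R_OUTER = 1.0     # outer radius, normalized
R_INNER = 0.45    # innermost data radius (assumed ratio)

def speed_at(fraction_done):
    """Sequential throughput after covering this fraction of capacity.

    Capacity swept from the outer edge in to radius r scales with
    (R_OUTER^2 - r^2); inverting that gives the radius at this position,
    and at constant RPM and bit density, speed scales with radius.
    """
    r = math.sqrt(R_OUTER**2 - fraction_done * (R_OUTER**2 - R_INNER**2))
    return V_OUTER * r / R_OUTER

for pct in (0, 25, 50, 75, 100):
    print(f"{pct:3d}% of capacity: ~{speed_at(pct / 100):.0f} MB/s")
```

With these made-up numbers the model drops from ~250 MB/s at the start to ~112 MB/s at the end, and most of the loss comes in the second half, which matches the typical shape of a per-drive benchmark curve.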
hawihoney Posted October 5, 2020

3 hours ago, Vr2Io said: The problem could be the parity disk. What parity disk brand/model?

I see this on my three arrays on every parity/disk rebuild:

1.) 2x 12TB TOSHIBA MG07ACA12TE
2.) 1x 6TB WESTERNDIGITAL WD6003FFBX, 1x 6TB WESTERNDIGITAL WD60EFRX
3.) 2x 6TB TOSHIBA HDWE160

The last one is currently rebuilding.
hawihoney Posted October 5, 2020

2 hours ago, trurl said: I suspect you are actually seeing the effect of the slower inner cylinders and not really the effect of the smaller disks being finished.

It really drops drastically whenever the 2TB or 3TB disks finish. I'm not sure this is an outer- vs. inner-track thing, because a.) it happens suddenly and b.) the 12TB parity disks still have a long way to go after the 2TB/3TB disks are out.

2 hours ago, trurl said: Assuming you have no bottlenecks due to port multipliers

Sure, every array has its own LSI 9300-8(i/e) HBA connected to a Supermicro BPN-SAS2-846EL1 backplane.
Vr2Io Posted October 5, 2020

26 minutes ago, hawihoney said: I see this on my three arrays on every parity/disk rebuild.

OK, nothing abnormal about the parity disks. I've never seen the behaviour you describe on my builds. The normal behaviour is that once the rebuild passes each size barrier, the speed increases. That isn't because of the number of disks; it's because just before the barrier, the slowest disk is at the end of its capacity (its slowest zone) and limits the speed. This happens in array checks and rebuilds, as expected. For example, my first barrier is at 6TB (with 12TB parity): near the end of the 6TB disks the speed is 100 MB/s, but once past that point it returns to 145 MB/s.
jbartlett Posted October 6, 2020

If you run a benchmark of all your drives and look at the left graph, the lowest line at any given point on the X axis is the maximum speed you are going to get at that point. That gives a visual representation of what Vr2Io is saying.
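For example, here is a toy model of that "lowest line wins" rule (all curves are invented placeholders, not benchmark results from this thread). The rebuild speed at any position is the minimum over the disks still being read at that position:

```python
def linear_curve(v_outer, v_inner):
    """Crude speed curve from the outer edge (frac 0) to the inner edge (frac 1)."""
    return lambda frac: v_outer + (v_inner - v_outer) * frac

disks = [
    {"size_tb": 2.0,  "speed": linear_curve(140.0, 70.0)},   # small, slow 2TB disk
    {"size_tb": 3.0,  "speed": linear_curve(160.0, 80.0)},   # 3TB disk
    {"size_tb": 12.0, "speed": linear_curve(250.0, 115.0)},  # 12TB parity
]

def rebuild_speed(position_tb):
    # Disks smaller than the current position are finished and spun down;
    # the slowest remaining disk caps the rebuild speed.
    active = [d for d in disks if d["size_tb"] > position_tb]
    return min(d["speed"](position_tb / d["size_tb"]) for d in active)

for pos in (0.5, 1.9, 2.1, 2.9, 3.1, 6.0, 11.9):
    print(f"at {pos:5.1f} TB: max ~{rebuild_speed(pos):.0f} MB/s")
```

In this model the speed jumps up right after each size barrier (about 74 -> 104 MB/s past 2 TB, and 83 -> 215 MB/s past 3 TB), which is the behaviour Vr2Io describes. The sudden drops hawihoney sees would only appear if the larger disks' measured curves actually fall below the smaller disks' at those positions, which is exactly what a per-drive benchmark would reveal.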