
30% Read Performance Improvement Tweak... Still works in unRaid 4.2


Joe L.


June 2012 NOTE: all newer versions of unRAID already set the read-ahead to 1024, so this tip no longer has the effect it did when this post was originally written.  (Lime-tech determined that 1024 was as good a value as any after learning from this thread.)

 

If you are not sure, with the array running, issue this command to see the current setting:

blockdev --getra  /dev/md1

If it already reports 1024, there is no need to make any change, as this tweak will have little or no further effect.
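With the array started, a small loop (my own addition, not from the original post) checks every array device at once; the `[ -b ]` guard skips the unexpanded glob if no md devices exist:

```shell
# Print the current read-ahead (in 512-byte sectors) for each array disk.
for i in /dev/md*
do
    [ -b "$i" ] || continue   # skip literal "/dev/md*" if array is stopped
    echo "$i: $(blockdev --getra "$i")"
done
```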

 

If you are running version 4.0, 4.1, 4.2, or any newer version of unRaid, adding a few lines to the end of your "go" script will improve real-world read performance by 30% or more.

 

The lines to add are:

sleep 30
for i in /dev/md*
do
    blockdev --setra 2048 $i
done

 

These lines should be added after the last line in the "go" script in your config folder on your flash drive.

The first line will sleep for 30 seconds to give the emhttp command time to configure your drives in the unRaid array.

The next four lines run the blockdev command in a loop on each of your installed disks, setting the "read-ahead" buffer to 2048 instead of the default value of 256.  Since most of the access when watching a movie is linear, the read-ahead buffer makes a huge difference.
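For context (not stated in the original post, but standard blockdev behavior): the value passed to --setra is a count of 512-byte sectors, so the change grows the read-ahead window from 128 KiB to 1 MiB:

```shell
# blockdev --setra / --getra count 512-byte sectors:
echo $((256 * 512))    # default: 131072 bytes (128 KiB)
echo $((2048 * 512))   # tweak:   1048576 bytes (1 MiB)
```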

 

I ran an experiment on my array.  I copied an entire 5 Gig movie ISO image from the disk to /dev/null and timed it.  I then set the read-ahead buffer to 2048 and made a second copy of the same file.  It was way faster.  I set the buffer size back to the original value of 256 and ran a third test.  It was slow again. Setting it back to 2048 made it fast again.

 

root@Tower:~# blockdev --getra  /dev/md1
256

root@Tower:~# dd if=/mnt/disk1/Movies/URBANCOWBOY.ISO of=/dev/null
9988316+0 records in
9988316+0 records out
5114017792 bytes (5.1 GB) copied, 206.845 seconds, 24.7 MB/s

root@Tower:~# blockdev --setra 2048 /dev/md1
root@Tower:~# dd if=/mnt/disk1/Movies/URBANCOWBOY.ISO of=/dev/null
9988316+0 records in
9988316+0 records out
5114017792 bytes (5.1 GB) copied, 127.942 seconds, 40.0 MB/s

root@Tower:~# blockdev --setra 256 /dev/md1
root@Tower:~# dd if=/mnt/disk1/Movies/URBANCOWBOY.ISO of=/dev/null
9988316+0 records in
9988316+0 records out
5114017792 bytes (5.1 GB) copied, 206.176 seconds, 24.8 MB/s

root@Tower:~# blockdev --setra 2048 /dev/md1
root@Tower:~# dd if=/mnt/disk1/Movies/URBANCOWBOY.ISO of=/dev/null
9988316+0 records in
9988316+0 records out
5114017792 bytes (5.1 GB) copied, 128.239 seconds, 39.9 MB/s

 

An improvement from 24 MB/s to 40 MB/s is pretty decent, and repeatable on my IDE-based array.  Odds are the effect will be similar on a SATA-based array, with even faster absolute performance.  It may even help when writing, since the existing disk contents must first be read before parity can be calculated.
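For reference, the percentage gain can be computed straight from the dd figures above. This helper is my own illustration (the function name and the awk usage are not from the thread); since the byte count is the same in both runs, the gain reduces to the ratio of the elapsed times:

```shell
# speedup BYTES SLOW_SECONDS FAST_SECONDS
# Prints the percentage throughput gain between two timed copies of the
# same file. awk handles the floating-point math that sh lacks.
speedup() {
    awk -v b="$1" -v s="$2" -v f="$3" \
        'BEGIN { printf "%.0f%%\n", ((b / f) / (b / s) - 1) * 100 }'
}

# The 5.1 GB copy above: 206.845 s at ra=256 vs 127.942 s at ra=2048
speedup 5114017792 206.845 127.942   # prints 62%
```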

 

Joe L.


That was on my IDE drive.

On the SATA, I just did a quick test with a 1GB vob file, and the result is… that it doesn't seem to do anything on the SATA drives.

 

I need to check my drives' specs, since I believe the IDE drive to be the oldest, with the least cache.



That makes sense.  The increased system buffer would help drives with small caches and/or less efficient interfaces (i.e. older IDE) more than those with large caches and more efficient interfaces (i.e. newer SATA).  I was hoping that it would still offer some assistance to the newer drives.

 

I haven't implemented this change myself on my all-SATA system, but may do it anyway because who knows, I may buy a bunch of IDE 500GB drives if the price is right.

 

Tom, this needs to go into a future build.  "Optimized IDE performance" sounds like a nice addition.

 

 

Bill


One thing that may (or may not) affect performance: I just checked, and that IDE drive is the only one with a 2MB cache; all the others have 8MB.

The drive I did my experiment on was an IDE IBM/HITACHI HDS725050KLAT80 500 Gig drive with an 8MB cache.  It went from 24 to 40 MB/s when the read-ahead buffer size was set to 2048.  I think there are a lot of factors involved; the built-in cache size is only one of them.

 

Joe L.


I just did another series of tests, this time with my newest (IDE) MAXTOR 500 Gig drive. It has a 16 MB cache.

 

This time, 256 resulted in 25.7 MB/s and 2048 resulted in 47.0 MB/s, an even bigger improvement in performance than with my HITACHI drive and its smaller cache.

 

Joe L.

root@Tower:/boot# blockdev --setra 256 /dev/md8
root@Tower:/boot# dd if=/mnt/disk8/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 201.556 seconds, 25.7 MB/s
root@Tower:/boot# blockdev --setra 2048 /dev/md8
root@Tower:/boot# dd if=/mnt/disk8/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 110.398 seconds, 47.0 MB/s
root@Tower:/boot# blockdev --setra 256 /dev/md8
root@Tower:/boot# dd if=/mnt/disk8/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 201.642 seconds, 25.7 MB/s
root@Tower:/boot# blockdev --setra 2048 /dev/md8
root@Tower:/boot# dd if=/mnt/disk8/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 110.533 seconds, 46.9 MB/s

 

 


Seagate 750GB, 16MB cache (7200.10), 6GB ISO

256 = 39.5 MB/s, 2048 = 77.0 MB/s

 

root@MediaSRVA:~# blockdev --setra 256 /dev/md1
root@MediaSRVA:~# dd if=/mnt/disk1/movies/Cars/Cars.iso of=/dev/null
12703832+0 records in
12703832+0 records out
6504361984 bytes (6.5 GB) copied, 164.466 seconds, 39.5 MB/s
root@MediaSRVA:~# blockdev --setra 2048 /dev/md1
root@MediaSRVA:~# dd if=/mnt/disk1/movies/Cars/Cars.iso of=/dev/null
12703832+0 records in
12703832+0 records out
6504361984 bytes (6.5 GB) copied, 84.7249 seconds, 76.8 MB/s
root@MediaSRVA:~# blockdev --setra 256 /dev/md1
root@MediaSRVA:~# dd if=/mnt/disk1/movies/Cars/Cars.iso of=/dev/null
12703832+0 records in
12703832+0 records out
6504361984 bytes (6.5 GB) copied, 164.491 seconds, 39.5 MB/s
root@MediaSRVA:~# blockdev --setra 2048 /dev/md1
root@MediaSRVA:~# dd if=/mnt/disk1/movies/Cars/Cars.iso of=/dev/null
12703832+0 records in
12703832+0 records out
6504361984 bytes (6.5 GB) copied, 84.4416 seconds, 77.0 MB/s
root@MediaSRVA:~#


That's more like a 94% increase in performance.    ;D

Very nice indeed.  It shows that even drives with a 16MB cache are helped by the larger read-ahead buffer setting.  Certainly well worth adding the few lines to the "go" script to set the read-ahead buffer sizes every time you re-boot.

 

Joe L.


Good job guys!  If you don't mind some more testing... how does increasing read-ahead help read transfer rate via network?  And does it make any difference reading from /mnt/diskX vs. a User share?

Tom,

Sorry to say, it makes a big difference when reading the same file through a "user share"

 

Looks like we drop down to about half of the performance of reading directly from /mnt/diskX

 

Are user shares buffered?  Perhaps they could use some tweaking too?  They are far less efficient than reading directly from the /mnt/diskX directory.

 

23 to 28 MB/s vs. 47 MB/s.

 

Joe L.

root@Tower:/boot# blockdev --setra 256 /dev/md8
root@Tower:/boot# dd if=/mnt/user/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 181.625 seconds, 28.6 MB/s
root@Tower:/boot# blockdev --setra 2048 /dev/md8
root@Tower:/boot# dd if=/mnt/user/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 220.202 seconds, 23.5 MB/s
root@Tower:/boot# dd if=/mnt/disk8/Movies/WIMBLEDON.ISO of=/dev/null
10127896+0 records in
10127896+0 records out
5185482752 bytes (5.2 GB) copied, 110.238 seconds, 47.0 MB/s


Joe,

The User shares are implemented as a FUSE file system.  There are some mount options I can experiment with (e.g., "direct_io"), but I really need to look into the code to see how i/o is handled - just no time to do this yet.

 

A more interesting benchmark is: what effect does read-ahead have on network read speed, i.e., reading from a Windows PC?  Does it significantly increase?

 

BTW, now that FUSE is included, we can look into adding NTFS3G.


I did some testing, and my IDE drive is the only one that is reliably affected by the change from 256 to 2048, and it is in the same range as yours, from 28 to 56 MB/s. My SATA drives sustain 56 MB/s whatever the setting.

 

My network transfer saturates at 30 MB/s with one drive, and around 40 MB/s with two, using my really slow Mac Mini drive and a Firewire drive.

 

I launched concurrent dd runs on the 4 drives and got around 25 MB/s on each, which would put my mobo's limit at around 100 MB/s.


2 SATA are on an add-in Sil3512 pci cards

1 SATA is on the mobo (but the Sil3112 chip is PCI to SATA anyway)

1 ATA is on the mobo (nforce)

 

All the SATA behave the same, whether they are on the mobo or on the add-in card.

I am assuming you ran the blockdev --setra command on each of the respective /dev/md1, /dev/md2, /dev/md3, etc. devices.  If you only ran it on /dev/md1, it would only affect the first disk in your array.

 

If you did run it on each, the effect might be related to the chipset driver and how well it handles buffering: perhaps no effect on your server, but it certainly does have an effect on some.

 

Joe L.


indeed, I changed them all.

 

There is definitely something going on there. I do not know of any reason not to set the read-ahead higher, but if there is any advantage to running at 256 instead of 2048, then there is no need to set everything higher for the sake of throughput, since in some cases it doesn't change anything.

 

Now, if it doesn't matter otherwise, then we should all just run at 2048, since it does improve things dramatically on certain disks.


I am assuming you ran the blockdev --setra command on each of the respective /dev/md1, /dev/md2, /dev/md3, etc. devices.  If you only ran it on /dev/md1, it would only affect the first disk in your array.

 

Is this step something that has to be done separately from the addition to the go script?  Or is the go script addition all that is required?  Clear as mud I hope :)


Archived

This topic is now archived and is closed to further replies.


×
×
  • Create New...