Turbo Write


jonp


At 75MB/s, that's still pretty fast. Just for some local comparative results, I use the following script as a local-host benchmark.

 

It produces the results below with turbo write enabled and disabled.

My array is only 4 drives wide on an HP MicroServer with a 2GHz Xeon.

Drives are 3 and 4TB; parity is a 4TB HGST 7200 RPM. RAM is 4GB, and unRAID is running as a guest under VMware ESXi using RDM'ed drives.

Not too shabby for a lil machine that can.

 

#!/bin/bash
# Write a stream of zeros to the given file with dd, printing throughput
# every 5 seconds, then clean up.

if [ -z "$1" ]
   then echo "Usage: $0 outputfilename"
        exit 1
fi

if [ -f "$1" ]
   then rm -vf "$1"
        sync
fi

# To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches

trap "rm -vf '$1'" HUP INT QUIT TERM EXIT

bs=1024
# count=4000000   # ~4 GB run
count=10000000    # ~10 GB run

total=$(( bs * count ))

echo "writing $total bytes to: $1"
touch "$1"; rm -f "$1"
dd if=/dev/zero bs=$bs count=$count of="$1" &
BGPID=$!

trap "kill $BGPID 2>/dev/null; rm -vf '$1'; exit" INT HUP QUIT TERM EXIT

# SIGUSR1 makes dd print its current transfer statistics; once dd exits,
# the kill fails and the loop ends.
sleep 5
while kill -USR1 $BGPID 2>/dev/null
do    sleep 5
done

trap "rm -vf '$1'; exit" INT HUP QUIT TERM EXIT

echo "write complete, syncing"
sync
# echo "reading from: $1"
# dd if="$1" bs=$bs count=$count of=/dev/null
rm -vf "$1"

 

root@unRAID:/boot# [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
root@unRAID:/boot# /boot/local/bin/writeread10gb /mnt/disk1/test.dat    
writing 10240000000 bytes to: /mnt/disk1/test.dat
1013515264 bytes (1.0 GB) copied, 5.0011 s, 203 MB/s
1477878784 bytes (1.5 GB) copied, 10.0027 s, 148 MB/s
2025407488 bytes (2.0 GB) copied, 15.0111 s, 135 MB/s
2450924544 bytes (2.5 GB) copied, 20.0111 s, 122 MB/s
3001046016 bytes (3.0 GB) copied, 25.0111 s, 120 MB/s
3626887168 bytes (3.6 GB) copied, 30.0128 s, 121 MB/s
4356416512 bytes (4.4 GB) copied, 35.017 s, 124 MB/s
5033522176 bytes (5.0 GB) copied, 40.0211 s, 126 MB/s
5779306496 bytes (5.8 GB) copied, 45.0226 s, 128 MB/s
6277882880 bytes (6.3 GB) copied, 50.0252 s, 125 MB/s
6787147776 bytes (6.8 GB) copied, 55.2231 s, 123 MB/s
7436379136 bytes (7.4 GB) copied, 60.0311 s, 124 MB/s
8056297472 bytes (8.1 GB) copied, 65.0329 s, 124 MB/s
8679016448 bytes (8.7 GB) copied, 70.0352 s, 124 MB/s
9768040448 bytes (9.8 GB) copied, 75.1561 s, 130 MB/s
10149733376 bytes (10 GB) copied, 80.0611 s, 127 MB/s
10240000000 bytes (10 GB) copied, 81.2262 s, 126 MB/s
write complete, syncing
removed `/mnt/disk1/test.dat'

root@unRAID:/boot# [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
root@unRAID:/boot# sync
root@unRAID:/boot# /boot/local/bin/writeread10gb /mnt/disk1/test.dat                
writing 10240000000 bytes to: /mnt/disk1/test.dat
792613888 bytes (793 MB) copied, 5.0048 s, 158 MB/s
978371584 bytes (978 MB) copied, 10.0548 s, 97.3 MB/s
1070175232 bytes (1.1 GB) copied, 15.0448 s, 71.1 MB/s
1180541952 bytes (1.2 GB) copied, 20.0148 s, 59.0 MB/s
1425363968 bytes (1.4 GB) copied, 25.0148 s, 57.0 MB/s
1645171712 bytes (1.6 GB) copied, 30.0147 s, 54.8 MB/s
1876879360 bytes (1.9 GB) copied, 35.0155 s, 53.6 MB/s
2160645120 bytes (2.2 GB) copied, 40.0248 s, 54.0 MB/s
2457687040 bytes (2.5 GB) copied, 45.0248 s, 54.6 MB/s
3203949568 bytes (3.2 GB) copied, 50.0248 s, 64.0 MB/s
3545966592 bytes (3.5 GB) copied, 55.0253 s, 64.4 MB/s
3744470016 bytes (3.7 GB) copied, 60.0348 s, 62.4 MB/s
3837621248 bytes (3.8 GB) copied, 65.0448 s, 59.0 MB/s
4083180544 bytes (4.1 GB) copied, 70.0348 s, 58.3 MB/s
4203059200 bytes (4.2 GB) copied, 75.2201 s, 55.9 MB/s
4483929088 bytes (4.5 GB) copied, 80.0548 s, 56.0 MB/s
4664751104 bytes (4.7 GB) copied, 85.0448 s, 54.9 MB/s
4919473152 bytes (4.9 GB) copied, 90.0448 s, 54.6 MB/s
5240157184 bytes (5.2 GB) copied, 95.0448 s, 55.1 MB/s
5955642368 bytes (6.0 GB) copied, 100.055 s, 59.5 MB/s
6132057088 bytes (6.1 GB) copied, 105.055 s, 58.4 MB/s
6214882304 bytes (6.2 GB) copied, 110.055 s, 56.5 MB/s
6461868032 bytes (6.5 GB) copied, 115.055 s, 56.2 MB/s
6594380800 bytes (6.6 GB) copied, 120.065 s, 54.9 MB/s
6846481408 bytes (6.8 GB) copied, 125.065 s, 54.7 MB/s
7101137920 bytes (7.1 GB) copied, 130.065 s, 54.6 MB/s
7863050240 bytes (7.9 GB) copied, 135.065 s, 58.2 MB/s
8085197824 bytes (8.1 GB) copied, 140.075 s, 57.7 MB/s
8201982976 bytes (8.2 GB) copied, 145.085 s, 56.5 MB/s
8436970496 bytes (8.4 GB) copied, 150.075 s, 56.2 MB/s
8577868800 bytes (8.6 GB) copied, 155.075 s, 55.3 MB/s
8834466816 bytes (8.8 GB) copied, 160.085 s, 55.2 MB/s
9051507712 bytes (9.1 GB) copied, 165.08 s, 54.8 MB/s
9292321792 bytes (9.3 GB) copied, 170.085 s, 54.6 MB/s
9607111680 bytes (9.6 GB) copied, 175.085 s, 54.9 MB/s
10240000000 bytes (10 GB) copied, 178.476 s, 57.4 MB/s
write complete, syncing
removed `/mnt/disk1/test.dat'
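
For convenience, both runs can be driven from one wrapper. A minimal sketch, assuming the benchmark above is saved as /boot/local/bin/writeread10gb as in the transcript:

#!/bin/bash
# Hypothetical wrapper: run the benchmark once with turbo write enabled
# (md_write_method 1) and once disabled (0), syncing between runs.
BENCH=/boot/local/bin/writeread10gb
TARGET=/mnt/disk1/test.dat

for method in 1 0; do
    [ -e /proc/mdcmd ] && echo "set md_write_method $method" >> /proc/mdcmd
    sync
    echo "=== md_write_method $method ==="
    "$BENCH" "$TARGET"
done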


I used your script on the same server with 18 drives, since it removes the network from the equation. I believe the results prove my point: turbo write can be a big benefit in small or large arrays as long as there are no controller bottlenecks. In other words, if your parity sync speed is 60MB/s, turning turbo write on probably won't make a big difference, regardless of array size.

 

 

16 on 2 x Dell H310 + 2 onboard (parity check ~170MB/s)

 

writing 10240000000 bytes to: /mnt/disk1/test.dat
1154139136 bytes (1.2 GB) copied, 5.00074 s, 231 MB/s
1929135104 bytes (1.9 GB) copied, 10.0115 s, 193 MB/s
2740147200 bytes (2.7 GB) copied, 15.0273 s, 182 MB/s
3520714752 bytes (3.5 GB) copied, 20.0289 s, 176 MB/s
4018230272 bytes (4.0 GB) copied, 25.0474 s, 160 MB/s
4326691840 bytes (4.3 GB) copied, 30.0517 s, 144 MB/s
4725474304 bytes (4.7 GB) copied, 35.0583 s, 135 MB/s
5177771008 bytes (5.2 GB) copied, 40.0604 s, 129 MB/s
5659621376 bytes (5.7 GB) copied, 45.0639 s, 126 MB/s
6312625152 bytes (6.3 GB) copied, 50.0703 s, 126 MB/s
7001923584 bytes (7.0 GB) copied, 55.087 s, 127 MB/s
7723901952 bytes (7.7 GB) copied, 60.1003 s, 129 MB/s
8304370688 bytes (8.3 GB) copied, 65.1053 s, 128 MB/s
8848499712 bytes (8.8 GB) copied, 70.1063 s, 126 MB/s
9371674624 bytes (9.4 GB) copied, 75.1109 s, 125 MB/s
9900893184 bytes (9.9 GB) copied, 80.1153 s, 124 MB/s
10240000000 bytes (10 GB) copied, 82.9428 s, 123 MB/s
write complete, syncing

 

 

8 on Dell H310 + 8 on SASLP + 2 onboard (parity check ~80MB/s)

 

writing 10240000000 bytes to: /mnt/disk1/test.dat
987169792 bytes (987 MB) copied, 5.00141 s, 197 MB/s
1383049216 bytes (1.4 GB) copied, 10.0028 s, 138 MB/s
1762890752 bytes (1.8 GB) copied, 15.0054 s, 117 MB/s
2147509248 bytes (2.1 GB) copied, 20.0106 s, 107 MB/s
2541250560 bytes (2.5 GB) copied, 25.0101 s, 102 MB/s
2938103808 bytes (2.9 GB) copied, 30.0173 s, 97.9 MB/s
3336123392 bytes (3.3 GB) copied, 35.0314 s, 95.2 MB/s
3742831616 bytes (3.7 GB) copied, 40.0279 s, 93.5 MB/s
4095484928 bytes (4.1 GB) copied, 45.0384 s, 90.9 MB/s
4410442752 bytes (4.4 GB) copied, 50.0364 s, 88.1 MB/s
4754002944 bytes (4.8 GB) copied, 55.0384 s, 86.4 MB/s
5115134976 bytes (5.1 GB) copied, 60.0464 s, 85.2 MB/s
5497189376 bytes (5.5 GB) copied, 65.0434 s, 84.5 MB/s
5863474176 bytes (5.9 GB) copied, 70.0449 s, 83.7 MB/s
6227731456 bytes (6.2 GB) copied, 75.0504 s, 83.0 MB/s
6606889984 bytes (6.6 GB) copied, 80.0564 s, 82.5 MB/s
6983197696 bytes (7.0 GB) copied, 85.0644 s, 82.1 MB/s
7346509824 bytes (7.3 GB) copied, 90.0625 s, 81.6 MB/s
7671792640 bytes (7.7 GB) copied, 95.0684 s, 80.7 MB/s
8011658240 bytes (8.0 GB) copied, 100.075 s, 80.1 MB/s
8374109184 bytes (8.4 GB) copied, 105.078 s, 79.7 MB/s
8711222272 bytes (8.7 GB) copied, 110.081 s, 79.1 MB/s
9072125952 bytes (9.1 GB) copied, 115.078 s, 78.8 MB/s
9438495744 bytes (9.4 GB) copied, 120.086 s, 78.6 MB/s
9800893440 bytes (9.8 GB) copied, 125.083 s, 78.4 MB/s
10147955712 bytes (10 GB) copied, 130.09 s, 78.0 MB/s
10240000000 bytes (10 GB) copied, 131.404 s, 77.9 MB/s
write complete, syncing

 

 


I'd think not only controller bottlenecks but also disk bottlenecks, i.e., if your array still has some very old, low-density drives (250GB or 333GB platters), those are going to notably limit how fast turbo write can be. It should still be faster than a standard write, since there are fewer disk I/Os, but it won't hit the speeds you can get where all drives are modern high-density (1TB/platter or better) drives.

 

Note that since those drives would also slow down parity syncs, your idea of looking at the parity sync speed to get a feel for just how much of an improvement turbo write can provide works well.
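
Since turbo write reads from every data disk while writing the target and parity, its ceiling is roughly the slowest drive in the array. Here's a minimal sketch to spot that drive using hdparm's buffered read test; the /dev/sd? device list is an assumption, so adjust it for your controller layout:

#!/bin/bash
# Hypothetical check: rough sequential read speed of each drive, to find
# the slowest disk that will cap turbo write. Adjust DRIVES to match
# your array members.
DRIVES="/dev/sd?"

for d in $DRIVES; do
    echo "=== $d ==="
    hdparm -t "$d" | grep Timing    # buffered (platter) read timing
done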

 

1 hour ago, leejbarker said:

If I set Tunable (md_write_method): to Auto, then spin up all drives before a big write will it use Turbo Write (reconstruct write) automatically? Then the rest of the time use read/modify/write (unless all the disks are spinning for some reason)?

Not currently; for the moment, auto is the same as turbo write disabled. It's planned as a future enhancement.

3 minutes ago, johnnie.black said:

Not currently; for the moment, auto is the same as turbo write disabled. It's planned as a future enhancement.

Ok thanks :)

 

As per the rest of this discussion, a GUI switch would be great. Obviously, with the auto function enabled, simply spinning up all the disks would start turbo write, but some people might not get that!


There is a turbo write plugin that will enable turbo write when all disks are spinning and disable it when a disk spins down. But with turbo write, every disk write causes activity on every disk, so the array may have a hard time getting out of turbo write. I think the plugin also has the manual control you are looking for.
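
What the plugin automates can be approximated by hand. A minimal sketch of the manual toggle, assuming the array members appear as /dev/md1, /dev/md2, ... (as on unRAID) and using the same /proc/mdcmd command shown earlier in this thread:

#!/bin/bash
# Hypothetical manual version of the plugin's behaviour: spin up every
# array disk, then enable reconstruct ("turbo") write.

# A small direct read from each md device forces the underlying disk to
# spin up; iflag=direct bypasses the page cache so the platter is hit.
for d in /dev/md[1-9]*; do
    dd if="$d" of=/dev/null bs=1M count=1 iflag=direct 2>/dev/null
done

# Enable turbo write; echo 'set md_write_method 0' switches back to
# read/modify/write.
[ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd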

On 02/01/2018 at 2:43 PM, SSD said:

There is a turbo write plugin that will enable turbo write when all disks are spinning and disable it when a disk spins down. But with turbo write, every disk write causes activity on every disk, so the array may have a hard time getting out of turbo write. I think the plugin also has the manual control you are looking for.

 

What's the plugin called?

