sysctl vm.highmem_is_dirtyable=1 seems to have a positive effect on write speed



Chiming in from the following related threads.

 

X9SCM-F slow write speed, good read speed

http://lime-technology.com/forum/index.php?topic=22675.270

 

Your Chance to Chime In

http://lime-technology.com/forum/index.php?topic=25306.195

 

MicroServer N36L/N40L/N54L - 6 Drive Edition

http://lime-technology.com/forum/index.php?topic=11585.msg219935#msg219935

 

I did a new speed experiment since I wasn't all that happy with the speed on the N54L.

 

No matter what I did to retune the kernel, I couldn't squeeze out more than 43MB/s parity-generation speed, nor a very high network transfer speed.

 

With all the advice given recently about sysctl vm.highmem_is_dirtyable=1, I decided to try it.

 

Reason being that I wasn't getting near the speed I was expecting out of this puppy.

I was getting about 43MB/s max parity check speed and somewhere from 18MB/s to 23MB/s when copying files to the fastest drive.

 

Parity is a Seagate 7200 RPM 3TB drive, and the same goes for an empty data drive.

These drives have 1TB platters and some pretty high preclear speeds.

 

Well, I'm ecstatic to report that, for burst speeds, I've gotten this puppy to reach almost 90MB/s write speed.

 

Once I enabled sysctl vm.highmem_is_dirtyable=1, I copied a 1.4GB movie file.

The max transfer speed was 90MB/s using Teracopy from a local Windows drive to a remote 'disk' share on the fastest unRAID drive.

Granted, I have high-speed parity and data drives, along with 16GB of RAM and a custom-tuned kernel.

 

My goal is to allow moving files from my local machine to the server as fast as possible.

Some of the data is cached in the buffer cache, but that's OK for me.

I suppose that for a safer copy, it may not be worth the extra speed.

 

I thought it worthwhile mentioning that this one parameter changed the speed so drastically.

 

These are now my settings in the go file.

Plus some results from my tests.

 

FWIW, I used Windows XP over a gigabit network with Teracopy to copy a file from a local laptop drive to a remote N54L disk share.

 

The writeread10gb test was done locally on the machine using the two fastest drives.

You'll see that over time the speed ends up at what the array is normally capable of.

For applications where burst speed may count, such as bittorrent, usenet downloads or PVR applications, this could be a welcome boost.

 

sysctl vm.vfs_cache_pressure=10    # hold on to dentry/inode cache entries longer
sysctl vm.swappiness=100           # swap idle pages out aggressively, freeing RAM for cache
sysctl vm.dirty_ratio=20           # % of dirtyable memory allowed to be dirty before writers must flush
# (you can set it higher as an experiment).
sysctl vm.min_free_kbytes=8192     # minimum free memory the kernel keeps in reserve
sysctl vm.highmem_is_dirtyable=1   # count highmem toward the dirty-page limits

# wait up to ~10 seconds for the first array device to come online
while [[ ${LOOP:=10} -gt 1 && ! -b /dev/md1 ]]
do
        (( LOOP=LOOP-1 ))
        echo "Waiting for /dev/md1 to come online ($LOOP)"
        sleep 1
done
sleep 1

# raise readahead to 2048 sectors (1MB) on every array (md) device
for disk in /dev/md*
do blockdev --setra 2048 $disk
done

blockdev --setra 2048 /dev/sdd # Parity Drive. 
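
In case anyone wants to reproduce the writeread10gb numbers below: the actual script isn't posted here, so this is only a rough sketch of the same idea (a dd write with periodic progress via SIGUSR1, a sync, a read-back, then cleanup). The 'records in/out' lines it produces are what the egrep in the logs filters away; adjust the path to taste.

#!/bin/bash
# Rough sketch of a writeread10gb-style test (not the exact script used for the logs).
# Usage: writeread10gb /mnt/disk1/test.dd
FILE=${1:-/mnt/disk1/test.dd}

echo "writing 10240000000 bytes to: $FILE"
dd if=/dev/zero of=$FILE bs=1000000 count=10240 &
DDPID=$!
while kill -0 $DDPID 2>/dev/null
do
        sleep 5
        kill -USR1 $DDPID 2>/dev/null   # ask dd to print a progress line
done
wait $DDPID

echo "write complete, syncing"
sync

echo "reading from: $FILE"
dd if=$FILE of=/dev/null bs=1000000

echo "removing: $FILE"
rm -v $FILE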

 

Logs to follow.

Link to comment

 

Local test on DISK1 (7200 RPM 3TB data drive, 7200 RPM parity),

sysctl vm.highmem_is_dirtyable=1

 

 

sysctl vm.highmem_is_dirtyable=1


writing 10240000000 bytes to: /mnt/disk1/test.dd
1107451904 bytes (1.1 GB) copied, 5.03617 s, 220 MB/s
2107810816 bytes (2.1 GB) copied, 10.0765 s, 209 MB/s
2964157440 bytes (3.0 GB) copied, 15.1043 s, 196 MB/s
3495265280 bytes (3.5 GB) copied, 20.1945 s, 173 MB/s
3585582080 bytes (3.6 GB) copied, 25.4044 s, 141 MB/s
3694973952 bytes (3.7 GB) copied, 30.3444 s, 122 MB/s
3805021184 bytes (3.8 GB) copied, 35.3244 s, 108 MB/s
3878126592 bytes (3.9 GB) copied, 40.5544 s, 95.6 MB/s
3923067904 bytes (3.9 GB) copied, 45.5843 s, 86.1 MB/s
4013892608 bytes (4.0 GB) copied, 50.5743 s, 79.4 MB/s
4122543104 bytes (4.1 GB) copied, 55.5265 s, 74.2 MB/s
4218172416 bytes (4.2 GB) copied, 60.5679 s, 69.6 MB/s
4326061056 bytes (4.3 GB) copied, 65.6173 s, 65.9 MB/s
4455662592 bytes (4.5 GB) copied, 70.9542 s, 62.8 MB/s
4554470400 bytes (4.6 GB) copied, 75.7641 s, 60.1 MB/s
4642759680 bytes (4.6 GB) copied, 81.3641 s, 57.1 MB/s
4729955328 bytes (4.7 GB) copied, 85.9541 s, 55.0 MB/s
4839564288 bytes (4.8 GB) copied, 91.154 s, 53.1 MB/s
4947799040 bytes (4.9 GB) copied, 95.8979 s, 51.6 MB/s
5031826432 bytes (5.0 GB) copied, 101.214 s, 49.7 MB/s
5147431936 bytes (5.1 GB) copied, 106.154 s, 48.5 MB/s
5258511360 bytes (5.3 GB) copied, 111.064 s, 47.3 MB/s
5366772736 bytes (5.4 GB) copied, 116.234 s, 46.2 MB/s
5476918272 bytes (5.5 GB) copied, 121.174 s, 45.2 MB/s
5559445504 bytes (5.6 GB) copied, 126.234 s, 44.0 MB/s
5654553600 bytes (5.7 GB) copied, 131.264 s, 43.1 MB/s
5719061504 bytes (5.7 GB) copied, 136.584 s, 41.9 MB/s
5720445952 bytes (5.7 GB) copied, 141.784 s, 40.3 MB/s
5811733504 bytes (5.8 GB) copied, 146.984 s, 39.5 MB/s
5903823872 bytes (5.9 GB) copied, 151.604 s, 38.9 MB/s
5995934720 bytes (6.0 GB) copied, 156.454 s, 38.3 MB/s
6085051392 bytes (6.1 GB) copied, 161.584 s, 37.7 MB/s
6179431424 bytes (6.2 GB) copied, 166.565 s, 37.1 MB/s
6252450816 bytes (6.3 GB) copied, 171.734 s, 36.4 MB/s
6352212992 bytes (6.4 GB) copied, 176.784 s, 35.9 MB/s
6454244352 bytes (6.5 GB) copied, 181.844 s, 35.5 MB/s
6579989504 bytes (6.6 GB) copied, 186.736 s, 35.2 MB/s
6678868992 bytes (6.7 GB) copied, 191.964 s, 34.8 MB/s
6772200448 bytes (6.8 GB) copied, 196.853 s, 34.4 MB/s
6878163968 bytes (6.9 GB) copied, 201.984 s, 34.1 MB/s
6960870400 bytes (7.0 GB) copied, 207.283 s, 33.6 MB/s
7039201280 bytes (7.0 GB) copied, 211.972 s, 33.2 MB/s
7140676608 bytes (7.1 GB) copied, 217.123 s, 32.9 MB/s
7234837504 bytes (7.2 GB) copied, 222.076 s, 32.6 MB/s
7328699392 bytes (7.3 GB) copied, 227.153 s, 32.3 MB/s
7436080128 bytes (7.4 GB) copied, 232.166 s, 32.0 MB/s
7513248768 bytes (7.5 GB) copied, 237.563 s, 31.6 MB/s
7634260992 bytes (7.6 GB) copied, 242.593 s, 31.5 MB/s
7724561408 bytes (7.7 GB) copied, 247.403 s, 31.2 MB/s
7835747328 bytes (7.8 GB) copied, 252.403 s, 31.0 MB/s
7902901248 bytes (7.9 GB) copied, 257.423 s, 30.7 MB/s
8018326528 bytes (8.0 GB) copied, 262.563 s, 30.5 MB/s
8135921664 bytes (8.1 GB) copied, 267.437 s, 30.4 MB/s
8216900608 bytes (8.2 GB) copied, 272.513 s, 30.2 MB/s
8301011968 bytes (8.3 GB) copied, 277.653 s, 29.9 MB/s
8368538624 bytes (8.4 GB) copied, 282.743 s, 29.6 MB/s
8453365760 bytes (8.5 GB) copied, 287.635 s, 29.4 MB/s
8548049920 bytes (8.5 GB) copied, 293.063 s, 29.2 MB/s
8654702592 bytes (8.7 GB) copied, 297.754 s, 29.1 MB/s
8755459072 bytes (8.8 GB) copied, 302.913 s, 28.9 MB/s
8861316096 bytes (8.9 GB) copied, 307.973 s, 28.8 MB/s
8949744640 bytes (8.9 GB) copied, 313.423 s, 28.6 MB/s
9030812672 bytes (9.0 GB) copied, 318.113 s, 28.4 MB/s
9132983296 bytes (9.1 GB) copied, 323.203 s, 28.3 MB/s
9211621376 bytes (9.2 GB) copied, 328.045 s, 28.1 MB/s
9333068800 bytes (9.3 GB) copied, 333.113 s, 28.0 MB/s
9474348032 bytes (9.5 GB) copied, 338.203 s, 28.0 MB/s
9572865024 bytes (9.6 GB) copied, 343.553 s, 27.9 MB/s
9679008768 bytes (9.7 GB) copied, 348.216 s, 27.8 MB/s
9840333824 bytes (9.8 GB) copied, 353.272 s, 27.9 MB/s
9981170688 bytes (10 GB) copied, 358.413 s, 27.8 MB/s
10128851968 bytes (10 GB) copied, 363.423 s, 27.9 MB/s
10240000000 bytes (10 GB) copied, 367.252 s, 27.9 MB/s
write complete, syncing
reading from: /mnt/disk1/test.dd
10240000000 bytes (10 GB) copied, 14.7586 s, 694 MB/s
removing: /mnt/disk1/test.dd
removed `/mnt/disk1/test.dd'

Link to comment

 

Local test on DISK1 (7200 RPM 3TB data drive, 7200 RPM parity),

sysctl vm.highmem_is_dirtyable=0

 

 

sysctl vm.highmem_is_dirtyable=0


root@unRAID:/tmp# egrep -v 'records in|records out' log.2
writing 10240000000 bytes to: /mnt/disk1/test.dd
711544832 bytes (712 MB) copied, 24.2327 s, 29.4 MB/s
712180736 bytes (712 MB) copied, 25.7152 s, 27.7 MB/s
1545176064 bytes (1.5 GB) copied, 30.2649 s, 51.1 MB/s
2062431232 bytes (2.1 GB) copied, 35.4721 s, 58.1 MB/s
2154693632 bytes (2.2 GB) copied, 40.6621 s, 53.0 MB/s
2214888448 bytes (2.2 GB) copied, 45.3959 s, 48.8 MB/s
2330588160 bytes (2.3 GB) copied, 50.5921 s, 46.1 MB/s
2415723520 bytes (2.4 GB) copied, 55.6021 s, 43.4 MB/s
2518275072 bytes (2.5 GB) copied, 60.6921 s, 41.5 MB/s
2626737152 bytes (2.6 GB) copied, 65.7921 s, 39.9 MB/s
2744656896 bytes (2.7 GB) copied, 70.732 s, 38.8 MB/s
2838336512 bytes (2.8 GB) copied, 75.812 s, 37.4 MB/s
2875282432 bytes (2.9 GB) copied, 80.9322 s, 35.5 MB/s
2923422720 bytes (2.9 GB) copied, 85.8718 s, 34.0 MB/s
2976892928 bytes (3.0 GB) copied, 90.8451 s, 32.8 MB/s
3043787776 bytes (3.0 GB) copied, 95.9519 s, 31.7 MB/s
3086029824 bytes (3.1 GB) copied, 100.952 s, 30.6 MB/s
3248387072 bytes (3.2 GB) copied, 106.762 s, 30.4 MB/s
3329176576 bytes (3.3 GB) copied, 111.182 s, 29.9 MB/s
3407576064 bytes (3.4 GB) copied, 116.082 s, 29.4 MB/s
3517535232 bytes (3.5 GB) copied, 121.282 s, 29.0 MB/s
3620168704 bytes (3.6 GB) copied, 126.192 s, 28.7 MB/s
3763377152 bytes (3.8 GB) copied, 131.382 s, 28.6 MB/s
3906913280 bytes (3.9 GB) copied, 136.262 s, 28.7 MB/s
4059436032 bytes (4.1 GB) copied, 141.295 s, 28.7 MB/s
4195783680 bytes (4.2 GB) copied, 146.472 s, 28.6 MB/s
4563644416 bytes (4.6 GB) copied, 151.406 s, 30.1 MB/s
5081478144 bytes (5.1 GB) copied, 156.582 s, 32.5 MB/s
5168632832 bytes (5.2 GB) copied, 161.502 s, 32.0 MB/s
5283842048 bytes (5.3 GB) copied, 166.534 s, 31.7 MB/s
5358945280 bytes (5.4 GB) copied, 171.732 s, 31.2 MB/s
5475587072 bytes (5.5 GB) copied, 177.031 s, 30.9 MB/s
5576279040 bytes (5.6 GB) copied, 181.701 s, 30.7 MB/s
5646070784 bytes (5.6 GB) copied, 186.871 s, 30.2 MB/s
5753691136 bytes (5.8 GB) copied, 191.785 s, 30.0 MB/s
5870679040 bytes (5.9 GB) copied, 196.843 s, 29.8 MB/s
6003030016 bytes (6.0 GB) copied, 201.895 s, 29.7 MB/s
6083822592 bytes (6.1 GB) copied, 207.061 s, 29.4 MB/s
6186554368 bytes (6.2 GB) copied, 212.161 s, 29.2 MB/s
6285632512 bytes (6.3 GB) copied, 217.055 s, 29.0 MB/s
6377460736 bytes (6.4 GB) copied, 222.103 s, 28.7 MB/s
6516134912 bytes (6.5 GB) copied, 227.331 s, 28.7 MB/s
6654080000 bytes (6.7 GB) copied, 232.311 s, 28.6 MB/s
6796796928 bytes (6.8 GB) copied, 237.251 s, 28.6 MB/s
6930742272 bytes (6.9 GB) copied, 242.274 s, 28.6 MB/s
7231502336 bytes (7.2 GB) copied, 247.314 s, 29.2 MB/s
7567844352 bytes (7.6 GB) copied, 252.501 s, 30.0 MB/s
7676834816 bytes (7.7 GB) copied, 257.481 s, 29.8 MB/s
7789339648 bytes (7.8 GB) copied, 262.443 s, 29.7 MB/s
7877948416 bytes (7.9 GB) copied, 267.621 s, 29.4 MB/s
7969674240 bytes (8.0 GB) copied, 272.681 s, 29.2 MB/s
8071590912 bytes (8.1 GB) copied, 277.661 s, 29.1 MB/s
8184083456 bytes (8.2 GB) copied, 282.633 s, 29.0 MB/s
8325297152 bytes (8.3 GB) copied, 287.711 s, 28.9 MB/s
8433172480 bytes (8.4 GB) copied, 292.713 s, 28.8 MB/s
8569648128 bytes (8.6 GB) copied, 297.881 s, 28.8 MB/s
8695846912 bytes (8.7 GB) copied, 302.793 s, 28.7 MB/s
8935642112 bytes (8.9 GB) copied, 307.843 s, 29.0 MB/s
9274315776 bytes (9.3 GB) copied, 312.921 s, 29.6 MB/s
9404392448 bytes (9.4 GB) copied, 318.391 s, 29.5 MB/s
9526797312 bytes (9.5 GB) copied, 323.121 s, 29.5 MB/s
9617257472 bytes (9.6 GB) copied, 328.021 s, 29.3 MB/s
9702138880 bytes (9.7 GB) copied, 333.201 s, 29.1 MB/s
9811256320 bytes (9.8 GB) copied, 338.131 s, 29.0 MB/s
9898537984 bytes (9.9 GB) copied, 343.221 s, 28.8 MB/s
10010510336 bytes (10 GB) copied, 348.162 s, 28.8 MB/s
10143802368 bytes (10 GB) copied, 353.291 s, 28.7 MB/s
10240000000 bytes (10 GB) copied, 356.746 s, 28.7 MB/s
write complete, syncing
reading from: /mnt/disk1/test.dd
10240000000 bytes (10 GB) copied, 10.729 s, 954 MB/s
removing: /mnt/disk1/test.dd
removed `/mnt/disk1/test.dd'

Link to comment

 

Local test on DISK2 (5400 RPM 3TB data drive, 7200 RPM parity),

sysctl vm.highmem_is_dirtyable=1

 

 

writing 10240000000 bytes to: /mnt/disk2/test.dd
971440128 bytes (971 MB) copied, 5.04208 s, 193 MB/s
1773298688 bytes (1.8 GB) copied, 10.106 s, 175 MB/s
2507066368 bytes (2.5 GB) copied, 15.1618 s, 165 MB/s
2618115072 bytes (2.6 GB) copied, 20.2121 s, 130 MB/s
2706293760 bytes (2.7 GB) copied, 25.3346 s, 107 MB/s
2856174592 bytes (2.9 GB) copied, 30.3299 s, 94.2 MB/s
2975728640 bytes (3.0 GB) copied, 35.4948 s, 83.8 MB/s
3118706688 bytes (3.1 GB) copied, 40.4211 s, 77.2 MB/s
3227739136 bytes (3.2 GB) copied, 45.4709 s, 71.0 MB/s
3328586752 bytes (3.3 GB) copied, 50.5219 s, 65.9 MB/s
3377808384 bytes (3.4 GB) copied, 55.6427 s, 60.7 MB/s
3700761600 bytes (3.7 GB) copied, 60.6218 s, 61.0 MB/s
3924845568 bytes (3.9 GB) copied, 65.8198 s, 59.6 MB/s
3969766400 bytes (4.0 GB) copied, 70.7697 s, 56.1 MB/s
4016194560 bytes (4.0 GB) copied, 75.8697 s, 52.9 MB/s
4062929920 bytes (4.1 GB) copied, 81.1596 s, 50.1 MB/s
4110214144 bytes (4.1 GB) copied, 85.9124 s, 47.8 MB/s
4153787392 bytes (4.2 GB) copied, 91.0197 s, 45.6 MB/s
4207617024 bytes (4.2 GB) copied, 96.0795 s, 43.8 MB/s
4258223104 bytes (4.3 GB) copied, 101.3 s, 42.0 MB/s
4298081280 bytes (4.3 GB) copied, 106.239 s, 40.5 MB/s
4337427456 bytes (4.3 GB) copied, 111.26 s, 39.0 MB/s
4391990272 bytes (4.4 GB) copied, 116.569 s, 37.7 MB/s
4436726784 bytes (4.4 GB) copied, 121.349 s, 36.6 MB/s
4478064640 bytes (4.5 GB) copied, 126.401 s, 35.4 MB/s
4523729920 bytes (4.5 GB) copied, 133.019 s, 34.0 MB/s
4526089216 bytes (4.5 GB) copied, 139.109 s, 32.5 MB/s
4553720832 bytes (4.6 GB) copied, 141.659 s, 32.1 MB/s
4631794688 bytes (4.6 GB) copied, 146.591 s, 31.6 MB/s
4720993280 bytes (4.7 GB) copied, 152.009 s, 31.1 MB/s
4770038784 bytes (4.8 GB) copied, 156.909 s, 30.4 MB/s
4841550848 bytes (4.8 GB) copied, 161.839 s, 29.9 MB/s
4913931264 bytes (4.9 GB) copied, 166.919 s, 29.4 MB/s
4984337408 bytes (5.0 GB) copied, 171.909 s, 29.0 MB/s
5068654592 bytes (5.1 GB) copied, 176.881 s, 28.7 MB/s
5167817728 bytes (5.2 GB) copied, 181.932 s, 28.4 MB/s
5236204544 bytes (5.2 GB) copied, 188.039 s, 27.8 MB/s
5288272896 bytes (5.3 GB) copied, 192.379 s, 27.5 MB/s
5378384896 bytes (5.4 GB) copied, 197.289 s, 27.3 MB/s
5429414912 bytes (5.4 GB) copied, 202.142 s, 26.9 MB/s
5507884032 bytes (5.5 GB) copied, 207.369 s, 26.6 MB/s
5576877056 bytes (5.6 GB) copied, 212.319 s, 26.3 MB/s
5642466304 bytes (5.6 GB) copied, 217.439 s, 25.9 MB/s
5717054464 bytes (5.7 GB) copied, 222.409 s, 25.7 MB/s
5808697344 bytes (5.8 GB) copied, 227.431 s, 25.5 MB/s
5899432960 bytes (5.9 GB) copied, 232.669 s, 25.4 MB/s
5999109120 bytes (6.0 GB) copied, 238.129 s, 25.2 MB/s
6054568960 bytes (6.1 GB) copied, 242.629 s, 25.0 MB/s
6121939968 bytes (6.1 GB) copied, 247.659 s, 24.7 MB/s
6200505344 bytes (6.2 GB) copied, 252.671 s, 24.5 MB/s
6258684928 bytes (6.3 GB) copied, 258.169 s, 24.2 MB/s
6337806336 bytes (6.3 GB) copied, 262.751 s, 24.1 MB/s
6414566400 bytes (6.4 GB) copied, 267.989 s, 23.9 MB/s
6502020096 bytes (6.5 GB) copied, 272.939 s, 23.8 MB/s
6566220800 bytes (6.6 GB) copied, 277.979 s, 23.6 MB/s
6623360000 bytes (6.6 GB) copied, 283.069 s, 23.4 MB/s
6700160000 bytes (6.7 GB) copied, 288.009 s, 23.3 MB/s
6809711616 bytes (6.8 GB) copied, 293.051 s, 23.2 MB/s
6922675200 bytes (6.9 GB) copied, 298.259 s, 23.2 MB/s
6990173184 bytes (7.0 GB) copied, 303.359 s, 23.0 MB/s
7084800000 bytes (7.1 GB) copied, 308.211 s, 23.0 MB/s
7152862208 bytes (7.2 GB) copied, 313.338 s, 22.8 MB/s
7222133760 bytes (7.2 GB) copied, 318.448 s, 22.7 MB/s
7299789824 bytes (7.3 GB) copied, 323.508 s, 22.6 MB/s
7383778304 bytes (7.4 GB) copied, 328.438 s, 22.5 MB/s
7489885184 bytes (7.5 GB) copied, 333.588 s, 22.5 MB/s
7597457408 bytes (7.6 GB) copied, 338.528 s, 22.4 MB/s
7681049600 bytes (7.7 GB) copied, 343.57 s, 22.4 MB/s
7774442496 bytes (7.8 GB) copied, 348.638 s, 22.3 MB/s
7872463872 bytes (7.9 GB) copied, 353.768 s, 22.3 MB/s
7971767296 bytes (8.0 GB) copied, 358.878 s, 22.2 MB/s
8075926528 bytes (8.1 GB) copied, 363.776 s, 22.2 MB/s
8185545728 bytes (8.2 GB) copied, 368.888 s, 22.2 MB/s
8295523328 bytes (8.3 GB) copied, 373.861 s, 22.2 MB/s
8421144576 bytes (8.4 GB) copied, 379.46 s, 22.2 MB/s
9171822592 bytes (9.2 GB) copied, 383.998 s, 23.9 MB/s
9947874304 bytes (9.9 GB) copied, 389.092 s, 25.6 MB/s
10240000000 bytes (10 GB) copied, 391.01 s, 26.2 MB/s
write complete, syncing
reading from: /mnt/disk2/test.dd
10240000000 bytes (10 GB) copied, 10.4001 s, 985 MB/s
removing: /mnt/disk2/test.dd
removed `/mnt/disk2/test.dd'

Link to comment

 

And for completeness.

 

 

Test on DISK2 (5400 RPM 3TB data drive, 7200 RPM parity),

sysctl vm.highmem_is_dirtyable=0

 

 

 

 

root@unRAID:/tmp# egrep -v 'records in|records out' /tmp/log.5400.0 
root@unRAID:~# sysctl vm.highmem_is_dirtyable=0
vm.highmem_is_dirtyable = 0
root@unRAID:~# /boot/local/bin/writeread10gb /mnt/disk2/test.dd
writing 10240000000 bytes to: /mnt/disk2/test.dd
327980032 bytes (328 MB) copied, 5.03461 s, 65.1 MB/s
1182417920 bytes (1.2 GB) copied, 10.0858 s, 117 MB/s
1327739904 bytes (1.3 GB) copied, 15.4225 s, 86.1 MB/s
1416098816 bytes (1.4 GB) copied, 20.5025 s, 69.1 MB/s
1501418496 bytes (1.5 GB) copied, 26.0525 s, 57.6 MB/s
1544070144 bytes (1.5 GB) copied, 30.4925 s, 50.6 MB/s
1626076160 bytes (1.6 GB) copied, 35.8425 s, 45.4 MB/s
1702614016 bytes (1.7 GB) copied, 40.5724 s, 42.0 MB/s
1774842880 bytes (1.8 GB) copied, 45.7624 s, 38.8 MB/s
1865196544 bytes (1.9 GB) copied, 50.6124 s, 36.9 MB/s
1940988928 bytes (1.9 GB) copied, 55.6823 s, 34.9 MB/s
2013266944 bytes (2.0 GB) copied, 60.6324 s, 33.2 MB/s
2073687040 bytes (2.1 GB) copied, 66.1923 s, 31.3 MB/s
2089227264 bytes (2.1 GB) copied, 70.7622 s, 29.5 MB/s
2138047488 bytes (2.1 GB) copied, 75.8723 s, 28.2 MB/s
2180539392 bytes (2.2 GB) copied, 80.8322 s, 27.0 MB/s
2223596544 bytes (2.2 GB) copied, 85.8026 s, 25.9 MB/s
2260526080 bytes (2.3 GB) copied, 90.8922 s, 24.9 MB/s
2340455424 bytes (2.3 GB) copied, 96.1022 s, 24.4 MB/s
2444284928 bytes (2.4 GB) copied, 100.988 s, 24.2 MB/s
2535863296 bytes (2.5 GB) copied, 106.122 s, 23.9 MB/s
2606117888 bytes (2.6 GB) copied, 111.132 s, 23.5 MB/s
2684845056 bytes (2.7 GB) copied, 116.144 s, 23.1 MB/s
2788393984 bytes (2.8 GB) copied, 121.202 s, 23.0 MB/s
2858570752 bytes (2.9 GB) copied, 126.342 s, 22.6 MB/s
2943616000 bytes (2.9 GB) copied, 131.452 s, 22.4 MB/s
3054628864 bytes (3.1 GB) copied, 136.36 s, 22.4 MB/s
3154514944 bytes (3.2 GB) copied, 141.432 s, 22.3 MB/s
3459847168 bytes (3.5 GB) copied, 146.464 s, 23.6 MB/s
3960271872 bytes (4.0 GB) copied, 151.642 s, 26.1 MB/s
4025262080 bytes (4.0 GB) copied, 156.556 s, 25.7 MB/s
4099126272 bytes (4.1 GB) copied, 161.812 s, 25.3 MB/s
4176163840 bytes (4.2 GB) copied, 167.062 s, 25.0 MB/s
4252973056 bytes (4.3 GB) copied, 171.705 s, 24.8 MB/s
4307211264 bytes (4.3 GB) copied, 176.792 s, 24.4 MB/s
4368319488 bytes (4.4 GB) copied, 181.922 s, 24.0 MB/s
4452541440 bytes (4.5 GB) copied, 186.892 s, 23.8 MB/s
4541412352 bytes (4.5 GB) copied, 192.012 s, 23.7 MB/s
4628333568 bytes (4.6 GB) copied, 197.142 s, 23.5 MB/s
4711220224 bytes (4.7 GB) copied, 202.062 s, 23.3 MB/s
4795620352 bytes (4.8 GB) copied, 207.033 s, 23.2 MB/s
4868116480 bytes (4.9 GB) copied, 212.084 s, 23.0 MB/s
4961752064 bytes (5.0 GB) copied, 217.242 s, 22.8 MB/s
5039454208 bytes (5.0 GB) copied, 222.163 s, 22.7 MB/s
5112632320 bytes (5.1 GB) copied, 227.205 s, 22.5 MB/s
5192896512 bytes (5.2 GB) copied, 232.254 s, 22.4 MB/s
5291086848 bytes (5.3 GB) copied, 237.325 s, 22.3 MB/s
5415748608 bytes (5.4 GB) copied, 242.581 s, 22.3 MB/s
5527754752 bytes (5.5 GB) copied, 247.423 s, 22.3 MB/s
5647422464 bytes (5.6 GB) copied, 252.511 s, 22.4 MB/s
5939214336 bytes (5.9 GB) copied, 257.524 s, 23.1 MB/s
6150538240 bytes (6.2 GB) copied, 262.621 s, 23.4 MB/s
6195627008 bytes (6.2 GB) copied, 267.631 s, 23.1 MB/s
6245929984 bytes (6.2 GB) copied, 272.741 s, 22.9 MB/s
6308582400 bytes (6.3 GB) copied, 277.831 s, 22.7 MB/s
6414660608 bytes (6.4 GB) copied, 282.781 s, 22.7 MB/s
6484874240 bytes (6.5 GB) copied, 287.911 s, 22.5 MB/s
6578697216 bytes (6.6 GB) copied, 293.031 s, 22.5 MB/s
6686135296 bytes (6.7 GB) copied, 298.051 s, 22.4 MB/s
6793794560 bytes (6.8 GB) copied, 303.121 s, 22.4 MB/s
6934709248 bytes (6.9 GB) copied, 308.015 s, 22.5 MB/s
7170749440 bytes (7.2 GB) copied, 313.191 s, 22.9 MB/s
7244989440 bytes (7.2 GB) copied, 318.341 s, 22.8 MB/s
7338357760 bytes (7.3 GB) copied, 323.231 s, 22.7 MB/s
7428248576 bytes (7.4 GB) copied, 328.401 s, 22.6 MB/s
7508093952 bytes (7.5 GB) copied, 333.27 s, 22.5 MB/s
7607026688 bytes (7.6 GB) copied, 338.341 s, 22.5 MB/s
7717889024 bytes (7.7 GB) copied, 343.411 s, 22.5 MB/s
7835108352 bytes (7.8 GB) copied, 348.481 s, 22.5 MB/s
7938704384 bytes (7.9 GB) copied, 353.531 s, 22.5 MB/s
8118138880 bytes (8.1 GB) copied, 358.483 s, 22.6 MB/s
8317117440 bytes (8.3 GB) copied, 363.541 s, 22.9 MB/s
8372188160 bytes (8.4 GB) copied, 368.711 s, 22.7 MB/s
8485438464 bytes (8.5 GB) copied, 373.691 s, 22.7 MB/s
8565516288 bytes (8.6 GB) copied, 378.985 s, 22.6 MB/s
8651232256 bytes (8.7 GB) copied, 383.901 s, 22.5 MB/s
8763151360 bytes (8.8 GB) copied, 388.801 s, 22.5 MB/s
8856118272 bytes (8.9 GB) copied, 393.811 s, 22.5 MB/s
9007598592 bytes (9.0 GB) copied, 398.852 s, 22.6 MB/s
9239341056 bytes (9.2 GB) copied, 403.903 s, 22.9 MB/s
9322523648 bytes (9.3 GB) copied, 408.944 s, 22.8 MB/s
9390392320 bytes (9.4 GB) copied, 414.12 s, 22.7 MB/s
9494885376 bytes (9.5 GB) copied, 419.18 s, 22.7 MB/s
9589769216 bytes (9.6 GB) copied, 424.11 s, 22.6 MB/s
9673778176 bytes (9.7 GB) copied, 429.31 s, 22.5 MB/s
9769440256 bytes (9.8 GB) copied, 434.27 s, 22.5 MB/s
9887284224 bytes (9.9 GB) copied, 439.255 s, 22.5 MB/s
10001142784 bytes (10 GB) copied, 444.36 s, 22.5 MB/s
10115318784 bytes (10 GB) copied, 449.342 s, 22.5 MB/s
10240000000 bytes (10 GB) copied, 453.262 s, 22.6 MB/s
write complete, syncing
reading from: /mnt/disk2/test.dd
10240000000 bytes (10 GB) copied, 43.9383 s, 233 MB/s
removing: /mnt/disk2/test.dd
removed `/mnt/disk2/test.dd'

Link to comment

 

For further discussion

sysctl vm.highmem_is_dirtyable=1 seems to have a positive effect on write speed

http://lime-technology.com/forum/index.php?topic=25431.msg221288

 

WeeboTech,

 

That command has been mentioned already:

 

http://lime-technology.com/forum/index.php?topic=22675.msg213845#msg213845

 

Also, there is still a write speed issue even with that command. See this post:

 

http://lime-technology.com/forum/index.php?topic=22675.msg219647#msg219647

 

The problem is that even with that, if you copy a large file (say a 10GB file) over the network, it seems to work okay for the first few gigs, then the problem comes back.

 

The real fix we are talking about is the boot parameter (I think something like mem=4095). It's the one you mentioned in an earlier thread.

 

This seems like more of a 'solid fix'.
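
For reference, since that is a kernel boot parameter rather than a sysctl: assuming your flash uses the usual syslinux.cfg layout (paths and labels vary, so treat this as an illustration only), it would be added to the kernel's append line, with the unit suffix spelled out:

label unRAID OS
  kernel bzimage
  append mem=4095M initrd=bzroot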

 

My system isn't broken, and the parameter improves write burst speed, which may be good for people who write a lot of small files.

 

Has anyone tested the mem= parameter along with the sysctl parameter to see if it has any effect?

 

The point is, with some kernel tuning, write burst speed increases.

 

While it doesn't solve the issue with high memory on some boards, it shows that there's definitely a hindrance somewhere.

 

I couldn't understand how I could double the RAM in my old system and get hardly any improvement in caching; I think this is part of the answer.

 

In any case, I understand what was previously discussed. I posted a separate thread so other people could have a go at testing it and see if it helps improve short-term write burst speed.

Link to comment

I would suggest typing it in at the console and testing it.

 

Once you want to make it permanent, edit your /boot/config/go script and put the line in there.

 

My scriptlet below is a chunklet from my /boot/config/go script.

You can choose to use and include it or not.

 

Note that I did a blockdev --setra 2048 on my parity drive directly.

Anyone planning to do the same will need to adjust that line accordingly.
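
A quick way to sanity-check both pieces at the console before committing them to the go file (the device name here is just my parity drive; substitute your own):

sysctl vm.highmem_is_dirtyable        # show the current value
blockdev --getra /dev/sdd             # current readahead, in 512-byte sectors
blockdev --setra 2048 /dev/sdd        # bump it to 2048 sectors (1MB)
blockdev --getra /dev/sdd             # confirm it took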

Link to comment

Ran some very rough and nasty tests..

 

A 650MB file, written from my workstation's SSD to a user share on the unRAID server.

 

with vm.highmem_is_dirtyable = 0

3 Writes: 32MB/s, 29MB/s, 32MB/s

2 Reads: 73MB/s, 68MB/s

 

with vm.highmem_is_dirtyable = 1

3 Writes: 40MB/s, 50MB/s, 50MB/s

2 Reads: 69MB/s, 72MB/s

 

While I wasn't overly concerned with the speed previously, I am happy to see a significant performance gain. I might kick off a parity check tonight to see what speeds that gives me.

Link to comment

I'm getting some ridiculously fast write speeds to the array after enabling that. Here are the before and after speeds copying a 1GB VOB file to the parity-protected array (7200 RPM data drive with 7200 RPM parity drive):

 

Before, to \\server\Backup and \\server\disk2\Backup: 23s, Windows file copy slows down to 45 MB/s

After, to \\server\Backup: 17s, Windows file copy reports 70MB/s

After, to \\server\disk2\Backup: 10s, Windows file copy reports 110+ MB/s

 

Tests writing to a 5400RPM data drive showed similar improvement:

 

Before, to \\server\disk12\Folder: 24s, Windows file copy slows down to 45 MB/s

After, to \\server\disk12\Folder: 12s, Windows file copy reports 100+ MB/s

 

The only surprise was the share on the 7200RPM cache drive:

 

Before, to \\server\Folder: 15s, Windows file copy reports 75 MB/s

After, to \\server\Folder: 14s, Windows file copy reports 80 MB/s

 

Using top to monitor CPU usage showed activity for several seconds after the file copy reportedly ended, so I would think this is only a good idea on a system with a UPS. Files were copied from a SATA3 SSD on my Windows 7 machine, and the unRAID server is running beta12a and has an Athlon 250u (1.6GHz x2) with 8GB of memory. If it matters, the parity and 7200RPM data drives are on the motherboard SATA ports while the cache drive and 5400RPM drive are connected to an IBM BR10i.

Link to comment

I believe this setting allows high RAM to be used more for the buffer cache. Hence:

1. It is so fast.

2. Activity exists 'after' the copy, while the buffer cache is being flushed.
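
If you want to watch that flushing for yourself, one rough way (nothing unRAID-specific, just the standard /proc/meminfo counters) is to watch the Dirty and Writeback figures drop after the copy "completes":

watch -n 1 "grep -E 'Dirty|Writeback' /proc/meminfo"

# or, if watch isn't handy:
while true; do grep -E 'Dirty|Writeback' /proc/meminfo; echo; sleep 1; done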

 

As I mentioned before, this is good for moving a lot of small files and good for bittorrent.

This becomes a big benefit for bittorrent because:

 

1. There are a lot of random reads and writes.

2. There is a method to check the integrity of the file.

3. You can also download missing pieces if a problem occurs (force recheck).

 

I would agree. It should probably only be used on a system with a UPS if you are moving files, i.e. deleting files from the source.

 

If you are writing files, then BOTH systems would need to be on the UPS.

 

My assumption is most people would have a UPS on the unRAID system anyway.

 

If you look closely at my additional settings, I explicitly set values to allow the kernel to buffer/cache more data.

 

As other people mention in other threads, once you reach a certain window, the copies slow down to normal array speed. This is expected.

 

If you are moving a lot of pictures and/or music files over with TeraCopy, you'll see a benefit from using this setting.

Link to comment
  • 3 months later...

Can I run this command while in the middle of a transfer?

 

I'm using TeraCopy (Windows) to transfer 2TB of data from my old unRAID to my freshly minted VM. Right now, I've only allocated 4GB of RAM, and I'm seeing almost 80% used. I would love to pause the transfer, shut down the VM, double the RAM, and then issue this command. Am I risking anything if I do that?

 

I'm at 20% transferred, and I'm going down as far as 9MB/s and up as high as 36MB/s right now.

Link to comment

Can I run this command while in the middle of a transfer?

You can run it at any time.
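
It is just a live kernel tunable, so something like this works mid-transfer; it takes effect immediately but won't survive a reboot unless it's also in your go file:

cat /proc/sys/vm/highmem_is_dirtyable   # check the current value
sysctl vm.highmem_is_dirtyable=1        # enable it on the fly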

 

Thanks Joe - I'm a bit behind... unRAID 5.0rc12a supports more than 4GB of RAM, right? Can it actually address 8GB if I did that? I seem to remember some discussion about that a while back.

Link to comment

4.6 can address high RAM. It is only useful for the page & buffer cache. It does help on some of the long writes, but not all that much. I never tried vm.highmem_is_dirtyable = 1 on 4.6; I wish I had. I bet it would have worked even better alongside my other tunings.

 

 

On some motherboards running 5.x with certain hardware configurations, there are speed problems with memory above 4GB. The best way to know is to try it out.

Link to comment

Yeah - I decided to give it a whirl. I don't know if it's because I paused/restarted the TeraCopy job, but I'm stuck at 114KB/s at the moment, literally choking on a tiny .srt file. I don't know the mechanics of the pause button, but let's assume for now that it's iterating through the files to see where it actually "stopped."

 

Will report back, or go back to my 20-ish and be happy with that.

Link to comment

I don't have all the answers on this. Here are some snippets I found:

 

Only present if CONFIG_HIGHMEM is set.

 

This defaults to 0 (false), meaning that the dirty_ratio and dirty_background_ratio ratios are calculated as a percentage of low memory only. This protects against excessive scanning in page reclaim, swapping and general VM distress.

 

Setting this to 1 can be useful on 32 bit machines where you want to make random changes within a memory-mapped file that is larger than your available low memory without causing large quantities of random IO. It is safe if the behavior of all programs running on the machine is known and memory will not be otherwise stressed.

 

Include a toggle in /proc/sys/vm/highmem_is_dirtyable which can be set to 1 to add the highmem back to the total available memory count.

 

Add vm.highmem_is_dirtyable toggle

 

A 32 bit machine with HIGHMEM64 enabled running DCC has an MMAPed file of approximately 2Gb size which contains a hash format that is written randomly by the dbclean process. On 2.6.16 this process took a few minutes. With lowmem only accounting of dirty ratios, this takes about 12 hours of 100% disk IO, all random writes.

 

This patch includes some code cleanup from Linus and a toggle in /proc/sys/vm/highmem_is_dirtyable which can be set to 1 to add the highmem back to the total available memory count.

 

I had always wondered why adding another 4GB to my system with PAE enabled never helped writes.

It helped cache reads, just not writes. With this toggle it helps cache writes too, which is why we see improved write speed; it's only because of the caching.

 

With the current tunables defined, once a certain percentage of the buffer cache is dirty, writes go directly to the device. With this option, I believe highmem becomes usable for write caching, so the same tuning percentage now applies to a larger pool of memory.
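
One way to see that effect, rather than take my word for it, is to compare the kernel's computed dirty thresholds with the toggle off and on. The ratios come from sysctl; the absolute page counts are in /proc/vmstat (those two counters exist on reasonably recent kernels; if yours doesn't have them, skip that part):

sysctl vm.dirty_ratio vm.dirty_background_ratio vm.highmem_is_dirtyable

# thresholds (in pages) the kernel derived from those ratios
grep -E 'nr_dirty_(background_)?threshold' /proc/vmstat

# flip the toggle, then look again; the thresholds should grow once highmem counts
sysctl vm.highmem_is_dirtyable=1
grep -E 'nr_dirty_(background_)?threshold' /proc/vmstat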

Link to comment

Don't know why but this seemed to fix a problem I had with parity check speed after I upgraded my hardware. I recently increased RAM from 4GB to 16GB (new mobo and processor as well). With 16GB my parity check was going to take days so I restarted with the boot option limiting unRAID to 4GB and the parity check completed in a reasonable time. I didn't really like this solution so recently I did the parity check with 16GB and sysctl vm.highmem_is_dirtyable=1 and the parity check again completed in a reasonable time. None of these parity checks had any errors so I assume there weren't any writes happening. I have all of the tunables in the GUI set to defaults. I'll see what happens at the next monthly.

Link to comment

Well - after restarting, this is going slightly faster.

 

I'm not seeing great jumps; but then again, I initiated the copy from a VM that's on the same host as the unRAID 4.7 VM and the unRAID 5.0 VM. I'll have to try later from a physical machine to the 5.0 VM; that might reduce some of the I/O contention on the same host.

 

Either way, it did seem to help somewhat.

Link to comment
  • 4 weeks later...
You are really lucky that it somehow works for you. When I enabled highmem_is_dirtyable on my 16GB server, my copy speeds did double for smaller transfers, but for large copies I started experiencing some severe timeouts that effectively broke the copy process. That made vm.highmem_is_dirtyable=1 unusable for my server.

 

I used to have severe timeouts writing to an 8GB server with 4.x when the filesystem was very large with lots of small files.

It seems that there was all this filesystem housekeeping going on and my transfer would time out and fail.

 

So I guess it could also depend on how fast your drives are or how full the filesystems are.

 

During my tests, I wrote one file that was over 10GB to an almost empty filesystem. It started out very fast, then near the end it had gone down to the normal 35MB/s. My thought on your situation is that there is probably something else going on. Was your transfer to a cache drive, or direct to a disk or user share?

 

My tests were on the local machine directly onto a disk.

 

In my case, I've been using this setting since on my N54L and it's been working very well.

Link to comment

Barzija,

 

First of all, thanks for the kernel discussion link!

 

If your hardware is capable of running ESXi, then you can run unRAID as a VM, limit the VM RAM to 3-4 GB, and you should be OK. This is what I am doing now with my setup, and I no longer have any "slow write" issues. However, Tom has indicated that 5.0-rc13 has a fix for the "slow write" condition; 5.0-rc13 should be available very soon to test.

 

And... I did have timeout/lost-connection issues with large file transfers running unRAID "bare metal" with sysctl vm.highmem_is_dirtyable=1; however, no issues running as a VM in ESXi with 3-4GB RAM. This seems to correlate with the kernel discussion you provided.

 

Moose

Link to comment
