Enable reconstruct-write mode



Yes, the driver makes the determination on a write-by-write basis. So yes, if you were writing a large file and a daemon spun up all the disks during the transfer, the mode would switch automatically from r/m/w to recon-write.

 

Would it switch in the middle of writing that large file, or after that file? I'm just confused by what a "write" means when you say "write-by-write". Is it a file, a stripe, or a block?

 

In any case, that might be a way to make writing to only a specific share a recon-write while others stay as r/m/w writes. Just an idea, and I think I've overstayed my welcome. Thanks for indulging me.


The driver works at block level, so I can only guess that yes, Tom means it would switch modes in the middle of writing a file. I can also tell you that I did some tests yesterday on a test VM, manually switching the mode a few times during a file write (in fact I stress-tested the switching and then ran some data integrity tests), and I could easily see it changing method instantly by watching the virtual HDDs' activity "LEDs".
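For reference, the manual switch used in those tests can be sketched like this (assuming unRAID's mdcmd tool and the md_write_method values shown in the transcript later in this thread):

```shell
# Toggle unRAID's array write method on the fly.
# Values as used in this thread: 0 = read/modify/write (default),
#                                1 = reconstruct write ("recon-write")
mdcmd set md_write_method 1   # switch to reconstruct write mid-transfer
mdcmd set md_write_method 0   # switch back to read/modify/write
```

Since the driver decides per write at block level, an in-flight file transfer picks up the new method immediately; there is no need to stop the copy.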



Perhaps we could/should attempt that whole md5sum benchmark test to double-check integrity.



 

Yes, that's a good idea :) In some quick tests I made I could not find any problems, but I'm sure you can go even deeper than me on stress-testing it, as I remember you had an even more complex script and a real test system when we tested the reiserfs issues.
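A minimal version of the integrity check being discussed could look like the sketch below: write a file from a known source, then verify that source and copy have identical md5sums. The paths here are placeholders for illustration; on a real array you would copy to a disk share such as /mnt/disk3 and toggle md_write_method mid-copy from a second shell.

```shell
# Hypothetical integrity check (paths are illustrative, not from the thread)
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=16 2>/dev/null  # known random source
cp "$src" "$dst"        # during a real test, switch md_write_method here
sync                    # flush caches so the copy actually hit the device
a=$(md5sum "$src" | cut -d' ' -f1)
b=$(md5sum "$dst" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "integrity OK" || echo "MISMATCH"
rm -f "$src" "$dst"
```

Repeating this in a loop while flipping the write method each pass would exercise the mid-write switching path specifically.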



 

Well now that IS promising indeed.


Re-tested now with 3 data disks, writing to an empty filesystem on the new disk:

 

root@unRAID:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                128M  768K  128M  1% /var/log
/dev/sda1              16G  764M  15G  5% /boot
/dev/md1              1.9T  1.6T  280G  86% /mnt/disk1
/dev/md2              1.9T  452G  1.4T  25% /mnt/disk2
/dev/md3              1.9T  33M  1.9T  1% /mnt/disk3
shfs                  5.5T  2.0T  3.5T  37% /mnt/user
root@unRAID:~# ls -latr /mnt/disk3
total 0
drwxrwxrwx 4 nobody users 80 2013-11-28 04:58 ./

 

root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 52.3502 s, 41.0 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 52.2685 s, 41.1 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 50.2682 s, 42.7 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 52.5235 s, 40.9 MB/s

 

root@unRAID:~# mdcmd set md_write_method 1
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 21.7445 s, 98.8 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 21.1719 s, 101 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 21.7772 s, 98.6 MB/s
root@unRAID:~# dd if=/dev/zero bs=8M of=/mnt/disk3/test.tmp count=256 conv=fdatasync
256+0 records in
256+0 records out
2147483648 bytes (2.1 GB) copied, 20.8454 s, 103 MB/s
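As a quick sanity check on the dd figures (2147483648 bytes per run, using the first run's time from each set), the throughputs and the speedup can be recomputed:

```shell
# Recompute throughput and speedup from the dd runs above.
# 52.3502 s = first r/m/w run, 21.7445 s = first recon-write run.
awk 'BEGIN {
  bytes = 2147483648
  printf "r/m/w:       %.1f MB/s\n", bytes / 52.35 / 1e6
  printf "recon-write: %.1f MB/s\n", bytes / 21.74 / 1e6
  printf "speedup:     %.1fx\n", 52.35 / 21.74
}'
```

That works out to roughly 41 vs 99 MB/s, matching dd's own reported rates: about a 2.4x gain from reconstruct write on this 3-data-disk array.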

 

Checked 'top' while doing the tests; CPU usage is <10%, not much different between the two write modes.

 

Note that I don't have any tweaks/tunes applied for burst write speeds; these are sustained speeds, not just initial bursts.

 

Note 2: the max speed of my EARX is ~120 MB/s (667 GB platters) at the beginning of the disk, and the EFRX is ~150 MB/s (1 TB platters), so it could eventually go even a bit higher if using all EFRX drives.

 

Attached are screenshots of copying over the network with Teracopy...

