Which to use: Turbo Write or Normal Write?



Today, unRAID can use two methods to write data to the array.  The first (and the default) method is to read both the parity disk and the data disk to see what is currently stored there.  It then looks at the new data to be written and calculates what the new parity has to be, and finally writes both the new parity and the new data.  This method requires accessing the same sector twice: once for the read and once for the write. 

 

The second method is to spin up all of the data disks and read the data stored on every data disk except the one on which the new data is to be stored.  This information, together with the new data, is used to calculate the new parity.  Then the data and parity are written to the data and parity disks.  This method turns out to be faster because of the latency of waiting for the same read head to get into position twice with the default method versus only once with the second method.  (The reader should understand that, across different disks, all disk operations happen independently and in parallel.)  For purposes of discussion, let's call this method turbo write and the default method normal write.
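The arithmetic behind the two methods can be sketched with XOR on hypothetical one-byte "disks" (all values below are made up for illustration, and single parity is shown for simplicity).  Both routes compute the same new parity; they differ only in which disks must be read:

```shell
# Hypothetical single-byte contents of three data disks.
d1=$((0x3C)); d2=$((0xA5)); d3=$((0x0F))
p=$(( d1 ^ d2 ^ d3 ))            # parity = XOR of all data disks
new=$((0xF0))                    # new data that will replace d2

# Normal (read/modify/write): read the old data and old parity.
rmw=$(( p ^ d2 ^ new ))

# Turbo (reconstruct write): read every *other* data disk instead.
turbo=$(( d1 ^ d3 ^ new ))

echo "rmw=$rmw turbo=$turbo"     # same new parity either way
```

Normal write touches two spindles (the target data disk and parity) twice each; turbo write touches every spindle once, which is why it requires all the drives to be spun up.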

 

It has been known for a long time that normal write speeds are approximately half of read speeds, and some users have long felt that an increase in write speed was desirable or even necessary.  The first attempt to address this issue was the cache drive: a parity-unprotected drive to which all writes were made, with the data transferred to the protected array at a later time.  This was often done overnight or during some other period when usage of the array was at a minimum.  It addressed the write speed issue, but at the expense of another hard disk and the fact that the data was unprotected for some period of time.

 

Somewhere along the way, LimeTech made some changes and introduced the turbo write feature.  It can be turned on with the following command:

 

mdcmd set md_write_method 1

and restored to the default (normal write) mode with this one:

mdcmd set md_write_method 0

 

One could activate turbo write by inserting the command into the 'go' file (which sometimes requires a sleep command to allow the array to start before the command executes).  A second alternative was to type the command at the CLI.   
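As a sketch of the 'go' file approach (the 60-second delay is an assumption; tune it to how long your array takes to start):

```shell
#!/bin/bash
# /boot/config/go -- runs at boot, before the array is necessarily started
/usr/local/sbin/emhttp &

# Give the array time to come up before changing the write method;
# without the delay, mdcmd may run against a stopped array.
(sleep 60; mdcmd set md_write_method 1) &
```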

 

Beginning with version 6.2, the ability to select which method is used was included in the GUI.  (To find it, go to 'Settings'>>'Disk Settings' and look at the "Tunable (md_write_method):" dropdown list.)  'Auto' and 'read/modify/write' are the normal write (and default) mode; 'reconstruct write' is the turbo write mode.  This makes it quite easy to select and change which write method is used.   

 

Now that we have some background on the situation, let's look at some of the more practical aspects of the two methods.  I thought the first place to start was a comparison of actual write speeds in a real-world environment.  Since I have a test bed server (more complete specs in my signature) running 6.2b19 with dual parity, I decided to use it for my tests. 

 

If you look at the specs for this server, you will find that it has 6GB of RAM.  This is considerably more than unRAID requires, and the 64-bit version of unRAID uses all of the unused memory as a cache for writes to the array.  What happens is that unRAID accepts data from the source (i.e., your copy operation) as fast as you can transfer it.  It starts the write process and, if the data is arriving faster than it can be written to the array, buffers it in the RAM cache until the cache is full.  At that point, unRAID throttles the incoming data rate down to match the actual array write speed.  (But the RAM cache is kept topped up with new data as the older data is written out.)  When your copy finishes on your end and the transmission of data stops, you may think the write is finished, but it really isn't until the RAM cache has been emptied and the data is safely stored on the data and parity disks.  Very few programs can detect when this has occurred; most report the copy as finished when they hand the last of the data off to the network. 
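You can watch this from the server console with standard Linux commands (nothing unRAID-specific here): /proc/meminfo shows how much written-but-unflushed data is still in RAM, and `sync` returns only after the dirty pages have actually reached the disks:

```shell
# How much buffered write data is still waiting in RAM:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Block until everything in the RAM write cache is on disk:
sync
echo "all buffered writes are now on the array"
```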

 

One program which will wait to report that the copy task is finished is ImgBurn.  ImgBurn is a very old freeware program, developed in the very early days when CD burners were first introduced, back when an 80386-16MHz processor was state of the art!  (The early CD burners had no on-board buffer and would make a 'coffee cup coaster' if one made a mouse click anywhere on the screen!)  The core of the CD writing portion of the software was done in assembly language, and even today the entire program is only 2.6MB in size!  It is fast and has an absolute minimum of overhead when it is doing its thing!  As it runs, it builds a log file of its steps that collects much useful data.  I decided to make the first test the generation of a BluRay ISO on the server from a BluRay rip folder on my Win7 computer. 

 

Oh, I almost forgot!  Another complication in looking at the data is what the abbreviations K, M and G mean: 1000 or 1024.  I have decided to report mine as 1000-based, as it makes the calculations easier when I use actual file sizes. 

 

I picked a BluRay folder (movie only) that was 20.89GB in size.  I spun down the drives before each test, so the times include drive spin-up in all cases.  I should also point out that in all of the tests, the data was written to an XFS-formatted disk.  (I am not sure what effect using a reiserfs-formatted disk might have had.)  Here are the results:

 

Normal        Time  7:20         Ave  49.75MB/s       Max  122.01MB/s
Turbo         Time  4:01         Ave  90.83MB/s       Max  124.14MB/s

Wow, an impressive gain.  Looks like a no-brainer to use turbo write.  But remember, this was a single file with one file table entry and one allocation of disk space.  It is the best-case scenario.
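The averages are easy to sanity-check with 1000-based units; using the ISO's byte count reported further down (21,890,138,112 bytes) and the 7:20 elapsed time:

```shell
bytes=21890138112
secs=$(( 7*60 + 20 ))      # 7:20 -> 440 seconds
# Integer MB/s with 1000-based units (consistent with ~49.75 MB/s above):
echo "$(( bytes / secs / 1000000 )) MB/s"
```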

 

A test more indicative of a typical transfer was needed, and what I decided to use was the 'My Documents' folder on my Win7 computer.  Now, what to copy it with?  I have TeraCopy on my computer, but I always had the feeling that it is really a shell (with a few bells and whistles) around the standard Windows Explorer copy routine, which probably uses the DOS copy command as its underpinnings.  I was also aware that Windows Explorer doesn't provide any meaningful stats and, furthermore, that it just terminates as soon as it passes off the last of the data.  This meant I had to use a stopwatch to measure the time.  Not ideal, but let's see what happens.  First, the statistics of what we are going to copy:

 

Total size of the transfer:    14,790,116,238 Bytes

Number of Files:                  5,858

Number of Folders:              808

 

As you can probably see, a transfer of this size will overflow the RAM cache and should give a feel for what effect file creation overhead has on performance.  So here are the results, using the standard Windows Explorer copy routine and a stopwatch:

 

Normal        Time  6:45     Ave   36.52MB/s
Turbo         Time  5:30     Ave    44.81MB/s

Not nearly as impressive.  But first, let me point out that I don't know exactly when the data finished writing to the array; I only know when it finished transferring over the network.  Still, this is typical of what a user sees when doing a transfer.  Second, I had no feel for how much overhead in Windows Explorer was contributing to the results.  So I decided to try another test. 

 

I copied the ISO file (21,890,138,112 bytes) that I had made with ImgBurn back to the Windows PC.  I then used Windows Explorer to copy it back to the server using both modes.  (Remember, the time recorded was when the Windows Explorer copy popup disappeared.)  Here are the results:

 

Normal        Time  6:37     Ave  55.14MB/s
Turbo         Time  5:17     Ave  69.05MB/s

 

After looking at these results, I decided to see how TeraCopy would perform copying over the 'My Documents' folder.  I turned off the 'verify after write' option in TeraCopy so I could measure just the copy time.  (TeraCopy also provides a timer, which meant I didn't have to stopwatch the operation.)

 

Normal      Time  6:08   Ave  40.19MB/s
Turbo       Time  6:10   Ave  39.98MB/s

 

This test confirmed what I had always suspected about TeraCopy: it has a considerable amount of overhead in its operation, and this apparently reduced its effective transfer rate below even the normal write speed of unRAID! 

 

Looking at all of the results, I can say that turbo write is faster than normal write in many cases, but the results are not always as dramatic as one might hope.  There are many more factors determining the actual transfer speed than just raw disk write speed; there is software overhead on both ends of the transfer, and this overhead affects the results. 

 

During these tests, I discovered a number of other things.  First, the power usage of a modern hard drive is about 3.5W when spun up with no R/W activity and about 4W with R/W activity.  (It appears that moving the R/W head does require some power!)  It has been suggested that one reason not to use turbo write is that it results in an increase in energy consumption, and some have said that using a cache drive is justified over turbo write for that reason alone.  But looking at it from an economic standpoint: how many hours of writing activity would it take to save enough money to justify buying and installing a cache disk?  I have the feeling that folks with a small number of data disks would be much better off with turbo write than installing a cache drive just to get higher write speeds.  Those using VMs and Dockers that store their configuration data could then opt for a small (and cheaper) SSD rather than a larger one with room for cache operations.  Thus folks with (say) eight or fewer drives would probably be better served by turbo write than by a large spinning cache drive.  (And if an SSD could handle their caching needs, the energy saved by a cache drive over turbo write would be virtually insignificant.)  Only with large arrays of (say) more than fifteen drives does a spinning cache drive begin to make a bit of sense from an energy standpoint. 

 

A second observation is that the speed gains with turbo write are not as great for transfers involving large numbers of files and folders.  The overhead required on the server to create the directories and allocate file space has a dramatic impact on performance!  Turbo write's largest gains are on very large files, and even those gains can be masked by installing large amounts of RAM, because unRAID uses unused RAM to cache writes.  I suspect many users would be better served by installing more RAM than by any other single action to achieve faster transfer speeds, given the price of RAM these days and the fact that many new boards allow installation of what was once an unthinkable quantity of RAM.  With 64GB of RAM on board and a Gb network, you can save an entire 50GB BluRay ISO to your server and never run out of RAM cache during the transfer.  (Actually, 32GB of RAM might be enough to achieve this.)  That would give you an apparent write speed above 110MB/s! 
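A back-of-the-envelope check on that claim (the 110 MB/s line rate and 50 MB/s sustained normal-write speed are assumed round numbers): the RAM cache only has to absorb the difference between the incoming rate and the array write rate for the duration of the transfer:

```shell
in_rate=110        # MB/s arriving over the Gb network (assumed)
out_rate=50        # MB/s sustained normal-write speed (assumed)
size=50000         # MB in a 50GB BluRay ISO (1000-based units)

secs=$(( size / in_rate ))               # transfer time at line rate
need=$(( secs * (in_rate - out_rate) ))  # MB that piles up in RAM
echo "~${need} MB of RAM cache needed"
```

Under these assumptions the buffer peaks at roughly 27GB, which is why 32GB "might be enough" while 64GB is comfortable.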

 

As I have attempted to point out, you have a number of options for increasing write speeds to your unRAID server.  The quickest and cheapest is simply to enable the turbo write option.  Beyond that are memory upgrades and a cache drive.  You have to evaluate each one and decide which way you want to go.

 

I have no doubt that I have missed something along the way, and some of you will have other thoughts and ideas.  Some may even wish to do some additional testing to give a bit more insight into the various possible solutions.  Let's hear from you…

 


This made me think that it would be nicer to have a turbo writer option per share...

 

Let's say that I would activate it in case I'm copying movies and TV shows, but not in other cases where the normal files size is relatively small.

 

Maybe it doesn't make sense. Maybe it would make more sense if unRAID could determine the file size that is being copied beforehand and decide, above a certain size limit, to trigger the turbo write mode.


This made me think that it would be nicer to have a turbo writer option per share...

 

Let's say that I would activate it in case I'm copying movies and TV shows, but not in other cases where the normal files size is relatively small.

 

Maybe it doesn't make sense. Maybe it would make more sense if unRAID could determine the file size that is being copied beforehand and decide, above a certain size limit, to trigger the turbo write mode.

 

Once you spin all the drives up to copy one large file, it does not make sense to switch modes.  Remember, unRAID is copying one file at a time.  Only the computer on the other end has any inkling whether this is a batch copy of 10,000 files or a copy of a single 45GB file.

 

Plus, I have never had a cache drive on either of my two servers, so I have no idea what the real-world performance is using one.  I can't believe that there is no performance hit for the overhead of file creation and space allocation. 


I use turbo-write via cron. Turn it on when I am most likely to use it all day long, turn it off at night around bedtime.

For some it may make sense to turn it on/off during the mover as well.

 

Using my server all day long to move mp3's and re-tag them, turbo write helps reduce the wait time significantly.

 

For those who may be interested, This is the cron table I install into /etc/cron.d/md_write_method

30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
#
# * * * * * <command to be executed>
# | | | | |
# | | | | |
# | | | | +---- Day of the Week   (range: 0-7, 0 and 7 = Sunday)
# | | | +------ Month of the Year (range: 1-12)
# | | +-------- Day of the Month  (range: 1-31)
# | +---------- Hour              (range: 0-23)
# +------------ Minute            (range: 0-59)

I find this useful when you are reading from and writing to one drive most of the time.

Once there are reads and writes to the other drives, things slow down, so it all depends on how a site uses its server.



Frank, will turbo write eliminate the network error issue when copying large or smaller amounts of data from a Win PC using TeraCopy or just the Win 10 transfer? 

 

I specifically upgraded both my unRAID machine and my Windows machine to i5's with 32GB of RAM on very good motherboards and still get network errors on transfers, typically to an unRAID share that's close to full.  It's irritating at times to come home and find that TeraCopy crapped out with a network error on the 5th file.  I have since been using MC to move files once they're on the unRAID machine, and that seems to be stable.  I am on unRAID 6.1.9 Pro.

 

If I copy from Windows to the cache drive (only a 240GB SSD), the network errors are basically gone, though I have moved a 25GB ISO to the cache before and it also gave the network error.  Most of the time, though, the transfers to the cache were fine; keep in mind these transfers were smaller in size.  Then, once the files are on the cache drive, transfers to a share on the array appear to be somewhat stable with smaller files, i.e., 8GB-12GB...

 

Thanks, AJ

 

 


I specifically upgraded both my unRAID machine and my Windows machine to i5's with 32GB of RAM on very good motherboards and still get network errors on transfers, typically to an unRAID share that's close to full.  It's irritating at times to come home and find that TeraCopy crapped out with a network error on the 5th file.

 

Sounds like you hit the ReiserFS full-disk issue.  The Reiser file system is great most of the time, but it becomes very inefficient if you fill a disk too full, causing long timeouts.  The fix is never to fill it too full (too late!), or to convert the drives to XFS.


Just relating to the power consumption of both methods: I agree that with a small number of drives the difference would be negligible.

But remember as well that with a cache drive the data is written to the cache first and then, at a later time, written to the array, so power is used twice to complete the writes.

 

I assume that the 'cache' you are referring to is a cache drive.  Yes, you are correct, and thanks for pointing that out.  There will be power used by the cache drive when the data is first written to that drive, and by (at least) two disks (parity and a data disk) plus the cache drive when the data is written to the protected array.  So any power savings might well be insignificant---  especially for arrays with small numbers of data drives.


... The 'Auto' and 'read/modify/write' are the normal write (and the default) mode.  The 'reconstruct write' is the turbo write mode.

 

Are you sure about this?  [I'm not at home this month, so can't test this]  I'd have thought the "Auto" mode would use turbo write if all disks were already spinning; normal writes otherwise.

 

IF that is the case, the easiest way to turn on turbo writes for a group of files [e.g. if you're getting ready to copy a bunch of large media files]  would be to simply click on the Spin Up button in the GUI.

 

 


... The 'Auto' and 'read/modify/write' are the normal write (and the default) mode.  The 'reconstruct write' is the turbo write mode.

 

Are you sure about this?  [I'm not at home this month, so can't test this]  I'd have thought the "Auto" mode would use turbo write if all disks were already spinning; normal writes otherwise.

 

IF that is the case, the easiest way to turn on turbo writes for a group of files [e.g. if you're getting ready to copy a bunch of large media files]  would be to simply click on the Spin Up button in the GUI.

 

Not that I am aware of.  The Help text for the setting reads as follows:

 

"    Selects the method to employ when writing to enabled disk in parity protected array.

 

    Auto selects read/modify/write."

 

But that is not really clear, though it sort of implies that the read/modify/write method is the Auto method.  But then one has to ask, "Why have three choices rather than two?"  I did a search, and this post of yours was all I found:

 

    https://lime-technology.com/forum/index.php?topic=39554.msg370736#msg370736

 

Do you have any other references or release notes to indicate that such a feature was ever added?  Perhaps someone from LimeTech could jump in and provide a definitive answer. 


I believe Auto is reserved for future use; for now it works the same as the normal write mode.

I seem to recall (but am too lazy to search - sorry @trurl) that during the initial discussions about turbo write, Tom mentioned making Auto switch to turbo mode if all the drives were already spinning.  No idea if it came to fruition.

... But then, one has to ask, "Why have three choices rather than two"?

 

Indeed.  I suspect the third choice ("Auto") may eventually do what I had hoped it already did -- i.e., use turbo write IF all of the disks are already spinning, normal write otherwise.  My earlier post that you referred to noted that this had been discussed -- I had hoped that v6.2 had it implemented (but clearly it does not).  Still, the presence of the third choice would certainly seem to imply that it may be coming :-)

 


... But then, one has to ask, "Why have three choices rather than two"?

 

Indeed.  I suspect the third choice ("Auto") may eventually do what I had hoped it already did -- i.e., use turbo write IF all of the disks are already spinning, normal write otherwise.  My earlier post that you referred to noted that this had been discussed -- I had hoped that v6.2 had it implemented (but clearly it does not).  Still, the presence of the third choice would certainly seem to imply that it may be coming :-)

 

 

In my case, I want to manually control it via cron. It just makes sense for what I do all day long.

 


Great detailed post and tests of Turbo Write.
Highly appreciated!

What I am missing is a clearly pointed-out interpretation of the cons of using Turbo Mode.
The increased energy consumption per unit of time is rather negligible.
Trying to use logic and my understanding of the subject: with two drives, one of which is a parity drive, it seems a no-brainer to use Turbo Mode (and perhaps in such cases it should be enabled by default).  What about cases with multiple drives?  What undesirable scenario may occur with Turbo Mode on?

Link to comment

I think it takes a bit longer to write the first sector, as the array disks seem to spin up sequentially.  With more than one or two disks, this could be a considerable period of time.  Again, the time lost here versus the time required for 'read/modify/write' would depend on the size of the file being written.  The RAM write cache could make this issue invisible to the client, depending on the amount of RAM in the server (and the percentage of RAM allotted via 'vm.dirty_ratio')...
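For reference, those knobs can be inspected on any Linux box (the values are generic kernel tunables, not unRAID-specific advice):

```shell
# Percentage of RAM allowed to hold dirty (unwritten) pages before
# writers are throttled, and the background flush threshold:
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# Raising the ratio lets more of a transfer land in the RAM cache
# (requires root; a sketch, not a recommendation):
# sysctl -w vm.dirty_ratio=20
```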

Link to comment
2 hours ago, Frank1940 said:

I think it takes a bit longer to write the first sector, as the array disks seem to spin up sequentially.  With more than one or two disks, this could be a considerable period of time.  Again, the time lost here versus the time required for 'read/modify/write' would depend on the size of the file being written.  The RAM write cache could make this issue invisible to the client, depending on the amount of RAM in the server (and the percentage of RAM allotted via 'vm.dirty_ratio')...

  

Yeah.  For transfers of small amounts of data, or when there are many drives in the array, it seems better to stay in normal write mode.

2 hours ago, JorgeB said:

For parity and just one data drive, Unraid basically uses RAID 1.


Parity basically works like RAID 1 from a functionality point of view, or does Unraid use a RAID controller too?

