Will V6 write faster?



One of the biggest personal issues I have found with Unraid is very slow write speeds. I knew from the start, when I first set up the system, that performance wasn't a selling point, and an old wiki somewhere even said not to expect high write speeds. I don't use a cache drive and don't want to; I'd like higher write speeds without using another drive. Does V6 offer higher write speeds, like writing to a normal drive? Right now I can pull down about 110-120MB/sec but can only write about 40MB/sec. I'm looking to at least double my write speeds. I feel as though the technology has to be there, but since Unraid still has to write to that parity drive, the write speeds may still be slow even on V6?

At one point this almost made me switch to another product, but I've stuck with Unraid through the years since 2010 and been a supporter ever since. It is no secret that development here is a little slow; I can confirm that, since I've been around since V4 went into V5. We waited a long time for that V5, and while some of us complained (maybe even me, a little), I'm assuming we won't see a stable release of V6 for at least another year or longer (just my guess). Will V6 still have the slow write speeds? With more clients attached, more phones, more tablets, more everything, I have been wanting better performance for a while. I really would LOVE to keep with Unraid for the long haul, but I don't want to be left in the dust waiting for a stable release.

I also don't understand the virtualization part. Are users just whipping up Unraid as a VM and running that VM on the server where the drives are? If so, then how are the other VMs using/seeing that hardware if it is all dedicated to Unraid anyway? I understand VMs; I have ESXi running with a couple of VMs on it now, but I can't picture it with Unraid. Maybe someone can explain it, because I do want to understand it and see if I can benefit from it in the future.

 

Thanks

 

Link to comment

There is a way to get some improvement to write speeds. Search for md_write_method. I think this may even be available in V5.

 

Your questions about VMs are somewhat off base, but there is already a lot of discussion about this in the forums, including subforums dedicated to the topic. It is a large subject and I won't try to rehash all of it here. You might also take a look at the wiki and the main Limetech website; there is a recent blog post on the main Limetech site that summarizes this.

 

Maybe V6 stable is still a year off, but there are indications that we are approaching a release candidate. Things have progressed quite a bit, and Limetech has hired additional resources. There are a lot of new capabilities in V6, and I consider the recent betas to actually be more stable than V5.

Link to comment

Upgrading to V6 and converting your drives to XFS may yield an improvement, though to be honest, 40MB/sec, if you are talking sustained, is reasonable already. If you want much faster writes but are concerned about the risk of using a cache drive, you can mirror two drives using traditional RAID and set the new volume as your cache drive. This would likely give you over 100MB/sec all day, every day, and protect you from the loss of a single cache drive.

Link to comment

Upgrading to V6 and converting your drives to XFS may yield an improvement ... you can mirror two drives using traditional RAID and set the new volume as your cache drive. ...

V6 supports btrfs cache pools. I have 2 x 120GB SSDs in a RAID1 btrfs cache pool. The unRAID GUI lets you configure this.
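If you want to double-check the pool layout from the command line, something like this works (a minimal sketch; it assumes the pool is mounted at the usual /mnt/cache, which may differ on your system):

# List the devices that make up the btrfs cache pool
btrfs filesystem show /mnt/cache

# Show how data and metadata are allocated; a two-drive mirror
# reports "Data, RAID1" and "Metadata, RAID1" here
btrfs filesystem df /mnt/cache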
Link to comment

As already noted, the simple answer to your question is No -- writes are still done the same way ... and won't be any faster, since it requires 4 disk operations per write [2 reads and 2 writes => the reads are so the system knows what was on the data disk being written to and on the parity disk; the writes are to write the new data to the data disk and update the parity on the parity disk].
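To make those four operations concrete, here is a tiny sketch of the parity arithmetic with made-up single-byte values (the real work happens per sector inside the md driver; this is just the XOR logic):

# Read-modify-write: the 2 reads fetch the old data block and the old
# parity block; parity is then updated with XOR before the 2 writes.
old_data=170       # what was on the data disk being written
old_parity=60      # what was on the parity disk
new_data=85        # the block being written

new_parity=$(( old_parity ^ old_data ^ new_data ))
echo "write new_data=$new_data to the data disk"
echo "write new_parity=$new_parity to the parity disk"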

 

There are 3 ways to increase the write speed:

 

(1)  The best way is to simply use a cache drive. Your write speed will then be limited by the network transfer rate (or the disk speed, if the cache drive can't sustain the network rate, but that's not likely with modern drives, and certainly not with SSDs). The disadvantage is that your data is not fault tolerant until it's moved to the array ... but if you use a fault-tolerant BTRFS cache pool you can eliminate this disadvantage, and if the pool drives are SSDs you'll still get maximum network-rate writes.

 

(2)  You can set md_write_method to "reconstruct write" mode => this will make writes faster, but requires that all disks be spun up for writes. Instead of 2 reads and 2 writes, this requires (n-2) reads and 2 writes => but in the normal method there's a full disk rotation between each read/write pair, while in reconstruct mode all of the reads are done simultaneously and then the 2 writes are done simultaneously, with no waiting for a full disk rotation. Instead of reading the old data from the two disks that need to be written to so the parity changes can be computed, this method simply re-computes parity for each write (a small sketch follows this list).

 

(3)  The special case of a 2-drive array (parity plus one data drive) will have excellent write speeds, as this is recognized by UnRAID as effectively a RAID-1, and writes are then done by simply writing to both drives with no intervening reads. [Note this is effectively just a special case of "reconstruct write" mode ... since (n-2) reads = 0 reads if there are only 2 disks.]  :)
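And for contrast, a sketch of what "reconstruct write" (option 2 above) does instead: rather than reading the old data and old parity, it reads every other data disk and recomputes parity from scratch, which is (n-2) reads plus the 2 writes (again, made-up byte values purely for illustration):

# Reconstruct write on a 5-disk array (parity + 4 data disks):
# XOR the 3 data disks NOT being written with the new block,
# and the result is the new parity.
new_data=85
other_disks=(170 12 99)        # contents of the other data disks

new_parity=$new_data
for block in "${other_disks[@]}"; do
  new_parity=$(( new_parity ^ block ))
done
echo "write new_data=$new_data and new_parity=$new_parity"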

 

Link to comment

... Things have progressed quite a bit, and Limetech has hired additional resources. There are a lot of new capabilities in V6, and I consider the recent betas to actually be more stable than V5.

 

Would others agree with that? Are the V6 betas more stable than the latest V5? I've been running V5 for a real long time and have had no issues besides failing drives. I'm actually doing a rebuild right now from a replaced drive, and will be replacing another suspect drive that is starting to go bad due to write errors. I'm very excited about V6 and may read up on how to upgrade. I wish I could just build a whole new server, but it would be too costly to do right now.

Link to comment

As already noted, the simple answer to your question is No -- writes are still done the same way ... There are 3 ways to increase the write speed: a cache drive, "reconstruct write" mode, or the special case of a 2-drive array. ...

 

Thanks for the detailed info. I'm going to have to start coming here more often and stay up with what's going on. V5 has been so stable I rarely need to visit, but if I want to keep aware of what's happening I need to visit the forum more often. I will for sure think about adding a cache drive, and I'll definitely check out option #2, since I run my array spun up 24/7. I don't think I have any SATA ports available, so adding an additional drive for a cache might be tricky. I'll have to review my options.

 

Thanks again.

 

Link to comment

I also don't understand the virtualization part. Are users just whipping up Unraid as a VM and running that VM on the server where the drives are? ... Maybe someone can explain it, because I do want to understand it and see if I can benefit from it in the future.

 

It's unclear if you are interested in running Unraid as a guest in a different VM host OS, or in using the VM hosting features of Unraid V6 as a host. If you are interested in running Unraid as a guest, there is a whole section here to discuss it.

 

http://lime-technology.com/forum/index.php?board=55.0

 

I've been running v5 for over a year as a guest VM on ESXi 5.x with no issues.

 

John

 

 

 

Link to comment

It really depends on how much data you want to move to unRAID.

 

For files and folders up to 20GB you can use the standard copy/paste method. It takes somewhere between 15 and 30 minutes to transfer the files.

 

For larger folders I telnet to my box, create a new directory, and mount a network share. This way I can use an rsync script to copy the files.
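Something along these lines, for example; the host name, share name, and target path below are just placeholders for whatever your setup uses (this assumes a CIFS/SMB mount, but NFS works the same way):

# Mount the source share, then copy into the array with rsync
mkdir -p /mnt/source
mount -t cifs //desktop-pc/media /mnt/source -o username=youruser
rsync -av --progress /mnt/source/ /mnt/user/Movies/
umount /mnt/source

The trailing slash on the rsync source matters: it copies the contents of the folder rather than the folder itself.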

 

Also, you can use a robocopy script on your PC to move those files to your server, and it's much faster than copy/paste.

 

Once you have everything on your server you don't do much copying, and write speed becomes irrelevant.

Link to comment

I also don't understand the virtualization part. ...

It's unclear if you are interested in running Unraid as a guest in a different VM host OS, or in using the VM hosting features of Unraid V6 as a host. ...

unRAID 6 can be a VM host, and setting up VMs is now built into the webGUI. See the KVM subforum. Having said that, most add-on applications are probably better handled as dockers. Plugins are now considered limited use cases, mostly webGUI enhancements. There are now many more dockers available in the unRAID community, and out in the world at large, than there ever were plugins for unRAID v5.

 

On the subject of VMs, though, some people are using the VM hosting of unRAID 6 to eliminate other PCs, by running Windows in a VM on their unRAID server, for example. Exciting times.

Link to comment

... Would others agree with that? Are the V6 betas more stable than the latest V5? I've been running V5 for a real long time and have had no issues besides failing drives.

 

I would NOT agree with that.  The latest Beta of v6 certainly seems pretty stable, but there's no reason to think it's "more stable" than v5.    In fact, with so much new stuff added, and much of it still not "ready for prime time", it's hard to go along with that statement.  It's probably true, however, that the basic NAS functionality is very solid.

 

By the way, the write methods I discussed above work just fine in v5 => e.g. to set the "reconstruct write" mode you enter the command "mdcmd set md_write_method 1"

 

Link to comment

... The latest Beta of v6 certainly seems pretty stable, but there's no reason to think it's "more stable" than v5. ...

To some extent, it probably depends on how you are using your unRAID. One thing I don't think I have seen on v6 is emhttp or smb getting OOM-killed, maybe because v6 is 64-bit. The plugin problems of v5 have been addressed with a different technology in v6; people can still use plugins in v6, and so can still have plugin problems, but plugins are mostly unnecessary now.

 

If you are only using it as a NAS, then v6 does have some features that allow you to keep a better eye on your array health with notifications, and there is also built-in UPS support.

 

YMMV, of course, but V6 seems to work better for me than v5 did.

 

Link to comment

I had unRAID v5 with 18 nearly-full drives. On most drives I kept 100-150GB of free space, except a few active ones with 500GB+ free. Most drives were 4TB, so I was way below the recommended 20% free space for reiserfs. Anytime I needed to move files to one of the full disks, there could be a 20-30 second pause that cut off other Samba streams on the server.

 

So moving to v6 and XFS, I'm hoping to get a few hundred GB of extra usable free space per disk (since XFS can work well down to 5% free space, right?) and more consistent transfers all around.
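If it helps, a quick way to watch the per-disk free space before and after the conversion (this assumes the standard /mnt/diskN mount points):

# Per-disk usage; the Use% column shows which disks are nearly full
df -h /mnt/disk*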

Link to comment

I was way below the recommended 20% free space for reiserfs

 

Say what? So for a 2TB drive, I should keep 400GB empty? Any official documentation on this topic?

The issue only happens when writing to an almost-full drive. ReiserFS can take a while to figure out where to put the contents. Reads are not affected. XFS is not affected to the same extent.

 

My secondary server has <10 Gig free on each drive, and is all still ReiserFS

Link to comment

There's no need to keep anywhere near that amount of free space.

 

It IS true that Reiser overhead results in somewhat slower writes as a drive gets close to full [the last 5-10%].  The writes work fine ... the file system just has a bit of delay determining where to put the data (once the write starts, it writes at normal speed).

 

Unless it's a disk that you modify a lot, there's no reason not to use it all, however. Of the 18 data drives on my media server (all Reiser), 16 of them are 99 or 100% full.

 

Note that there is NO impact on read speeds.

 

Link to comment

Yeah, but in my case, when ReiserFS takes its sweet time to start writing, it can stall for 20-30s, sometimes longer, and during that time it can affect other Samba streams being served by unRAID. I don't know what else could be causing it, except that on my 4TB drives, when the free space is below 300GB my system is prone to do that, whereas when I write to drives with 500+ GB free it's smooth sailing. (v5.0.6)

Link to comment

... By the way, the write methods I discussed above work just fine in v5 => e.g. to set the "reconstruct write" mode you enter the command "mdcmd set md_write_method 1"

 

Will give this a try. I'm comfortable around the command prompt so this should be easy. Not familiar with the command, but I'll try it.

 

BRAIN FART: Looking for my settings, MOST FULL, MOST SPACE or whatever those drive settings were. Looked all over the GUI and can't find them. Yeah, it's been a long day. I want to review what those settings are and make sure I have them set to be the most optimal. Maybe Gary can suggest what I should use; he seems VERY knowledgeable on this stuff.

And........thank you.

 

Link to comment

... to set the "reconstruct write" mode you enter the command "mdcmd set md_write_method 1"

 

BTW, is there a way to check what the setting currently is? Is it in that /proc/mdcmd file?

 

Link to comment

... BRAIN FART: Looking for my settings, MOST FULL, MOST SPACE or whatever those drive settings were. Looked all over the GUI and can't find them. ...

 

Found the options in the shares area, of course!

Using MOST FREE

Min Space 52428800

Split Level 999

 

Need to review this, since it has been like that since day one when I set this up in 2010.

 

Link to comment

... settings, MOST FULL, MOST SPACE

 

... make sure I have them set to be the most optimal...

 

There's really no "most optimal" setting with regard to allocation method.  This setting has nothing to do with performance.  Just lets you control how your drives are filled.

 

With the "Most Free" setting you're using, this will provide a very uniform utilization of the space on your drives, assuming they're all the same size.    The other choices simply fill the drives differently => "fill up" will fill the current drive before switching to another one;  high-water will fill a drive halfway; then switch to another drive and fill it halfway; etc. [and when all drives are half full will then fill drives to the 3/4 point (1/2 of the remaining space);  then to the 7/8ths level; then 15/16ths; then 31/32nds; etc.]

 

Most free is fine if you want all the drives to be used.

 

Link to comment

... BTW, is there a way to check what the setting currently is? Is it in that /proc/mdcmd file?

 

Don't know -- and I'm not at home right now (out of town until Tuesday).  I do know you can set it back to "normal" with:

mdcmd set md_write_method 0

 

At one time Tom was considering adding a mode that would use reconstruct write mode IF all disks were currently spun up, and normal mode otherwise. I don't think that's been done ... nor have I seen anything to indicate it's going to be. It would be a pretty nice option ... basically you could then automatically turn it on by clicking "Spin Up" (if you were planning to do a lot of writes); but just doing a write to the array would still only spin up one disk if they weren't already spun up.
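In the meantime, you could approximate that behavior yourself with a small script. This is only a rough sketch: the /dev/sd[b-f] device list is hypothetical and would need to match your array, and hdparm -C is just one way of checking the spin state.

# Enable reconstruct write only if every array disk is spun up
all_up=1
for dev in /dev/sd[b-f]; do            # adjust to your actual array devices
  if hdparm -C "$dev" | grep -q standby; then
    all_up=0
  fi
done

if [ "$all_up" -eq 1 ]; then
  mdcmd set md_write_method 1          # reconstruct write
else
  mdcmd set md_write_method 0          # normal read/modify/write
fi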

 

Note that if you have a mix of drives of varying speeds and areal densities, or if you have controllers that create bandwidth limitations for your drives [e.g. a PCI controller with multiple drives on it], the reconstruct mode may not be faster than a normal write.

Link to comment

... There's really no "most optimal" setting with regard to allocation method ... Most free is fine if you want all the drives to be used.

 

Any suggestions as to what to put for MIN FREE SPACE and SPLIT LEVEL?

 

Right now I am using MOST FREE, but I do see weird space allocation. A couple of 1TB drives are at 90% used while the larger drives are a lot less used. Even back in 2010 this confused me a little, especially when trying to make it optimal for shares.

Hey- thanks.

 

Link to comment
