Will V6 write faster?



Min free space should be set to the largest file you'll ever write, so a write won't start on a drive that doesn't have enough space for it.
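As a rough illustration, here's what that setting guards against, checked by hand from the console. This is only a sketch: the `/tmp` path and the 1 MB threshold are placeholders for a demo; on the server you'd point it at `/mnt/diskN` and use the size of the largest file you ever write.

```shell
# Check that a disk has at least THRESHOLD_KB free before starting a write.
# DISK and THRESHOLD_KB are placeholders -- on unRAID you'd use /mnt/disk1
# (etc.) and your largest expected file size.
DISK="${DISK:-/tmp}"
THRESHOLD_KB="${THRESHOLD_KB:-1024}"   # 1 MB, just for this demo

# df -kP prints one POSIX-format line per filesystem; field 4 is free KB
FREE_KB=$(df -kP "$DISK" | awk 'NR==2 {print $4}')

if [ "$FREE_KB" -gt "$THRESHOLD_KB" ]; then
    echo "enough space on $DISK"
else
    echo "$DISK is below the threshold"
fi
```

The Min Free Space setting does this check for you automatically and skips to the next disk in the share.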

 

Split level depends on what you're storing in a share => if, for example, you're storing DVD rips in their original form (with multiple .VOBs), you don't want the components split.    If, on the other hand, your media is stored in single-file containers, then there's no real reason to use a split level at all [although you may have associated art or info files that you want kept in the same folder].

 

I suspect the nearly full 1TB drive is due to a split level setting that has precluded some of your media from spreading to another disk.

 

Yeah, but in my case when reiserfs takes its sweet time to start writing, it can stall for 20-30s, sometimes longer, and during that time it can affect other samba streams being served by unraid. I don't know what else could be causing it, except that on my 4TB drives, when the free space is below 300GB my system is prone to do that, whereas when I write to drives with 500+ GB free it's smooth sailing. (v5.0.6)

 

I had the same bad experience with RFS, switched to XFS, and now my drives have 300 to 700 MB free instead of several GB, with no timeouts or other problems like I had with RFS.

 

See also: http://lime-technology.com/forum/index.php?topic=38409.msg362836#msg362836



 

I'd love to switch over to XFS, but I can't reformat now; nowhere to store all my data. I do want to build another unraid box with V6, so maybe I'll take advantage here. It is too bad I can't change over to XFS while keeping the data intact. :(

 

 



 

I gather you don't have any backups ... so building another server makes sense for multiple reasons.    Alternatively, you can simply make the NEXT disk you add to your server XFS ... and then copy all the data from one disk to it;  then reformat the disk you just emptied to XFS;  copy all the data from another disk to it;  reformat that disk to XFS;  etc.
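One leg of that disk-by-disk shuffle could look like the sketch below. The paths here are scratch directories so it can be tried safely; on the real server you'd substitute the disk references (e.g. `/mnt/disk1` as the source and the freshly formatted XFS disk as the destination), and only wipe the source after the verify step passes.

```shell
# Demo of copy-then-verify using scratch directories; swap in /mnt/diskN
# paths on the real server. (rsync -a works equally well and can resume.)
SRC="/tmp/rfs2xfs-demo/old-disk"
DST="/tmp/rfs2xfs-demo/new-xfs-disk"

mkdir -p "$SRC/Movies" "$DST"
echo "sample data" > "$SRC/Movies/film.mkv"

# -a preserves timestamps, permissions, and symlinks
cp -a "$SRC/." "$DST/"

# verify the copy BEFORE deleting anything from the source
diff -r "$SRC" "$DST" && echo "copy verified"
```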

 


... obviously you need to upgrade to v6 before you can do the transition I just outlined [Noted in your sig that you're still running v5]

 

I can definitely clear out a disk and reformat, then move the data back. It will just take time. I didn't know I could use XFS and RFS at the same time with user shares. I only back up the stuff I can't live without, all in another post. But I really think I'm going to build another box, so I have to see if changing to XFS right now is even important enough. I just ran into a write issue from upgrading... so I'm figuring that out now. Should've never touched it!

 


One IMPORTANT caveat:  If you DO decide to move files around (for reformatting or any other reason), be CERTAIN that neither the source nor the destination of a copy is a User share reference when the other end of the copy is part of that same share.    The safest thing is to do all of your copies/moves using the disk references -- NOT the user share references.  [i.e. copy from \\Tower\disk2\Myshare  to  \\Tower\disk3\Myshare  ==> NOT from \\Tower\Myshare to \\Tower\disk3\Myshare]

 

If you do this wrong, you'll lose all of the data you're trying to copy.
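To make the caveat concrete from the console side, here is the shape of the safe vs. dangerous commands. `Myshare` and the disk numbers are example names only, and the snippet just builds and prints the commands rather than running anything:

```shell
share="Myshare"   # example share name

# SAFE: both ends are disk references -- two genuinely distinct trees.
safe_cmd="cp -a /mnt/disk2/$share/. /mnt/disk3/$share/"

# DANGEROUS: /mnt/user/$share is the merged user-share view, which already
# includes disk3's files, so the copy can read and clobber the same file.
unsafe_cmd="cp -a /mnt/user/$share/. /mnt/disk3/$share/"

echo "SAFE:      $safe_cmd"
echo "DANGEROUS: $unsafe_cmd"
```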

 


If you're actively using the drive [deleting, modifying, adding files] then you may indeed want to keep 10% or so free.    But if you're simply writing what's effectively static data [e.g. movies or other media files], then there's no reason not to simply fill the drives.

 

I only keep media files (movies) on the drives in question; it's just that I keep one drive as an 'inbox' where new stuff comes in, gets reviewed and buffered, before being moved to its long-term folder on one of the other disks. So, yes, there are file deletions every now and then, but no other modifications. I suppose that has caused fragmentation over the years.

 


 

Same here, I'm so happy that things are so much smoother now.

But as I've mentioned above, it could've been partly due to fragmentation too.

 


 

If building another unraid machine is an option, that would make things so much simpler and faster. You can work at your own pace on the new machine while your existing machine still works, you can test the new machine with dummy data and run it for a bit to test things through, and the copy speed between machines would be about the same, because the network is not the bottleneck anyhow. Once you've copied everything you can still keep the old box as your backup until you're 100% sure of the new setup, all the while keeping both arrays with valid parity throughout the whole process.

 

You can also potentially script the whole copy process as a single step, whereas if you're doing it on a single machine, it'll need user intervention on every 'disk change', with a bit of risk that can make you lose data along the way.
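As a hypothetical sketch of that single-step script: the disk list and the `tower2` hostname below are made-up assumptions, and the loop only prints the commands it would run, so nothing is touched until you've reviewed them.

```shell
# Generate one rsync command per data disk for an old-server -> new-server
# copy. tower2 and the disk names are assumptions; edit to match your array.
NEW_SERVER="tower2"

n=0
for d in disk1 disk2 disk3; do
    # print, don't execute -- review, then run (or pipe to sh) when ready
    echo "rsync -a /mnt/$d/ root@$NEW_SERVER:/mnt/$d/"
    n=$((n + 1))
done
```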

 

 


Are there significant improvements using the XFS file system over what I'm using now, RFS? If there aren't significant improvements that I'll see or notice, I'll just keep things the way they are and concentrate on a new build slowly. If XFS is worth it, I'm pretty good at using mc in the shell and would most likely do all my moving through there with screen.



 

If you don't have any problems with your current rfs setup, then probably not.

 

The most cited reasons are 'futureproofing' (developer in jail; max supported volume is 16tb vs xfs's 500tb) -- nothing imminent really. I've been recovering from multiple-disk inconsistencies in my array the past week or so, running file comparisons (diff, rsync), checking drives (badblocks, preclear) and moving files to good drives (rsync, mc). I use xfs on all my new drives, and am converting a few of my old drives to xfs in the process too... just the drives that get more active usage than the rest (inbox-type folders where stuff gets in and out every few weeks). I've noticed a huge difference between a (years-old) reiserfs volume and (fresh) xfs.


Is there an online tutorial or outline on how to do this? I know I'll be using my disk shares per Gary (thank you Gary) and can empty one drive out, reformat with XFS, then move files back. I assume those are the basics. Do I need additional files or libraries to support XFS, or is it bundled with V5.0.6?

 

Thanks, I look forward to doing something with it.

 

 


unRAID v5 only supports RFS. There is no way to do this before upgrading to v6. There is a sticky at the top of this subforum with lots of discussion about moving files around on the server to get them onto XFS.

I think the strategy can be quite straightforward: empty one disk by moving its data off, convert this disk to XFS, then copy data onto the converted disk to free up the next disk. Repeat the convert, copy and free-up steps across all disks.

 

That's how I did it. One caveat ... it is a lengthy exercise, it took me almost 10 days, but the system is running very smoothly on XFS.
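The rotation described above can be sketched as a loop. Disk names here are examples only, the format step happens in the webGUI so it appears only as a printed note, and the snippet prints each step rather than executing anything:

```shell
# One pass of the RFS -> XFS rotation, expressed as printed steps.
# 'disk5' starts out as the empty XFS target; names are examples only.
target="disk5"
for src in disk1 disk2 disk3; do
    echo "copy:   rsync -a /mnt/$src/ /mnt/$target/"
    echo "format: (stop array, set $src to XFS in the webGUI, format it)"
    target="$src"   # the disk just emptied becomes the next XFS target
done
```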

 



I did mine all in mc (within a screen session) before that sticky thread really got going with all the hand-wringing over different ways to verify. As far as I know everything moved fine.

 

v6 and XFS do seem to be snappier for me as well, but I don't have any data to back that up. The GUI in v6 is not only much improved with more features, it is also much quicker; it never seems to go away for a while before finally loading a page. Great work everyone, including you bonienl!


I second the posts above (snappier, etc.), but keep in mind that xfs and rfs are filesystems that sit on top of unraid's disks, so the switch won't speed up parity checks or disk rebuilds (those are block-level operations).

 

For me, it's most apparent when copying files TO the drive in question. I had an active inbox-type disk (reiserfs) with about 300 GB free on a 4 TB drive, and it took 30 seconds OR MORE just to start copying. Sometimes the Windows machine (that I copied from) timed out and I had to retry the copy operation. I've read that this is due to reiserfs preparing space for the new files, so I guess it might have to do with disk fragmentation.


I did mine all in mc (within a screen session)

 

There is actually a nifty feature in mc to start a file copy in the background (F5, then click 'Background'); this allows the telnet session to be closed without aborting the copy process.

I have the impression that I have only scratched the surface of its capabilities. Really should take the time to read the docs sometime.


 

MC does it all. I even think it verifies a copy if you want. And it is super fast.

 


The GUI in v6 is not only much improved with more features, it is also much quicker. It never seems to go away for a while before finally loading a page.

 

If you are interested in even better performance, I created a plugin which installs Zend OPcache. It allows PHP to cache compiled code and avoid re-compiling the same code over and over again.

 

This is the plugin URL: https://github.com/bergware/dynamix/raw/master/unRAIDv6/dynamix.zend.opcache.plg

 

In my case the "snappy" became even more "snappier"  :)



I have to get familiar with how to use plugins on beta 15. I'm not sure I really want to, at least until the first RC.



 

Just upped to V6 Beta15. Not sure I want to install any plugins right now since the system seems stable and running well. Is it as easy as dropping the .plg file into a folder, and that's it?

 



No. You must use the webGUI to install plugins. Go to the Plugins page and paste the URL into the Install Plugin box.

With one exception, I've been very happy with read/write speeds.  I did extensive testing (with and without cache) and documented in this post: http://lime-technology.com/forum/index.php?topic=39345.msg368527#msg368527

 

I've been having problems with Mover taking days to move 280 GB of data (I just started writing through the cache).  Saw decent speeds writing, but the actual move is unbelievably slow.  Must be doing something wrong there...



See v6 help link in my sig

I haven't done any careful benchmarking or tried to quantify it in any way, but using xfs is much smoother and file operations seem snappier. Enough to make me want to convert as many of my disks to xfs as I can. Here's where I currently am:

 

[attached screenshot: current list of drives and their filesystems]

