Is it necessary to convert to XFS?


Ned


Been on v6 for a while now and wondering if it is really necessary to migrate all of my disks to the XFS file system.  I have an array of 10 disks and I am not interested in adding any more; my current strategy of replacing old drives with newer, larger drives as they near the end of their life span (or fail) has served me well from a capacity standpoint.  Given this strategy, I will obviously remain on ReiserFS indefinitely.

 

What would I gain by moving to XFS?  As I said, migrating the disks is a ton of work, moving data back and forth, so is there any reason not to just maintain the status quo?

 

Thank you!

Link to comment

I would just add that if you don't have any performance issues with ReiserFS and almost-full disks (like waiting several seconds for copies to start, for files to move to different folders, or for new folders to be created), you should stay with it.  It will spare you a lot of work, and some have even experienced data loss due to mistakes during the move.

 

If you have any of the above performance issues, you should change; it was night and day for me, the best upgrade I ever did to my servers.

 

Link to comment

Interesting points... I have occasionally had these issues, but nothing significant.  I didn't realize it was due to disks being nearly full.  What is the recommended free space on disks when using ReiserFS?  Does it go by percentage or by absolute free space (i.e. a larger disk with the same % free has many more GB of free space)?

Link to comment

As a Reiser disk gets very full (I've found ~ 95% or so) it can be slow to copy new data => actually it's not slow at doing the actual copy, it just takes a while to start it (like it's "thinking" about where to put the new data).

 

On my main media server, that's not an issue at all, as there are only a couple of disks that get written to ... 11 of my 15 data disks are full [i.e. < 1GB of free space].  There's NO difference in read speeds.

 

I'd agree that if you use your server for a lot of transient data ... i.e. you're constantly changing/deleting/adding files to it ... that it's best for those disks to be XFS.    But if it's largely static (as most media servers are) ... i.e. you write the data and it never changes ... then there's no compelling reason to switch file systems.

 

As I noted, I DO have one disk that's XFS ... it was one I added rather than replaced.  But I don't really notice any performance difference writing to that disk (a 4TB Red) vs. writing to any of my other 4TB Reds (although they aren't at the "very full" stage yet).

 

FWIW I've seen more data lost in the last year or so by folks trying to move all their data to XFS than from just about any other cause on the forum -- so if you DO decide to do that, be sure you are VERY careful about the process you use.  It's not hard at all ... but it's very time consuming, requires a bit of care, and quite candidly simply isn't necessary.

Link to comment

You guys are the best.  This is really helpful information.  My server is mainly for static file storage so I think in my case, it's best to leave things as is.  I try to keep my disks with at least 5% free space in general as well.

 

I'm with you on the 4TB reds BTW... those have been my "standard" for the past few drive replacements.  Nice and reliable and relatively good power consumption too.

Link to comment

...  I try to keep my disks with at least 5% free space in general as well.

 

There's really no reason to do that => when my drives get close to full, I copy new media directly to the drive shares to fill them up ... in fact I'll adjust just what I put on each drive to try and get "down to the last GB"  :)

 

As an example, looking at my first few drives, they have 422MB, 239MB, 190MB, 77.8MB, 284MB, and 915MB free => these are all 2TB or 4TB drives, so I'd say they're "FULL"  :)    Leaving 5% unused space would be 200GB of space on a 4TB drive ... no reason to waste that space.

 

Link to comment

With that small amount of free space, though, if you did need to write to the drive or delete files, then I assume there would be a performance impact ... what about the potential for file fragmentation as well?

 

I guess like you say, if the data is static and will never change then you are ok.

Link to comment

I just upgraded to v6. I take it from this thread that running a mixed-FS array is okay?

Would it be ok to do the following:

 

Replace a 2 TB in my large array (22 data disks) with a 5 TB drive.

Then move the data from another 2 TB drive onto the 3 TB of free space of the 5 TB drive.

Remove the 2 TB drive from the array.

Preclear another 5 TB drive.

Format it with XFS and add to the array.

Move all the data from the 5TB Reiser to the 5TB XFS.

Remove the Reiser 5TB from the array.

Preclear the Reiser 5TB.

Format that 5TB as XFS and add it to the array.

Now I can start to migrate data from my 2,3 and 4 TB drives by repeating cycles of the above and adding new 4 and 5 TB drives as needed - to expand the array and migrate slowly to XFS.

 

As my disks go beyond 98% full, I get a lot of failed transfers to the array, with network device no longer available etc. I cannot wait to find a solution to that and I hope XFS will be it.

Link to comment

As my disks go beyond 98% full, I get a lot of failed transfers to the array, with network device no longer available etc. I cannot wait to find a solution to that and I hope XFS will be it.

 

I did post a solution to this somewhere here; you have to tweak your Windows machine for a much longer timeout period.
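I don't have the original post handy, but the tweak usually meant here is raising the SMB client session timeout on the Windows side. Treat the key and value below as an example to verify against Microsoft's documentation, not a drop-in fix (elevated command prompt, reboot afterwards):

```
:: Raise the SMB client session timeout from the 60 s default so
:: slow writes to a nearly-full disk don't drop the network share.
:: 600 seconds is just an illustrative value.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v SessTimeout /t REG_DWORD /d 600 /f
```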

 

However, I migrated to XFS and it's now lightning quick.

Link to comment

There are a lot of extra and in my opinion unnecessary steps in there. I've modified things somewhat in your quoted text based on what I would do.

Ensure parity drive is 5TB or larger, if not, replace parity with a tested large drive.

Replace a 2 TB in my large array (22 data disks) with a tested 5 TB drive.

Then move the data from another 2 TB drive onto the 3 TB of free space of the 5 TB drive using file verification like rsync -c or teracopy.

Stop the array, and change the format of the now empty 2 TB Reiserfs drive to XFS.

Preclear another 5 TB drive to thoroughly test it.

Replace the empty 2TB XFS drive with the newly tested 5TB.

Move all the data from the 5TB Reiser to the 5TB XFS using file verification.

Stop the array.

Change the format of the now empty Reiser 5TB to XFS.

Move additional data to empty spots on XFS disks using file verification.

Every time a Reiserfs disk is empty, stop the array and change it to XFS.

Link to comment

As my disks go beyond 98% full, I get a lot of failed transfers to the array, with network device no longer available etc. I cannot wait to find a solution to that and I hope XFS will be it.

 

Almost certainly these timeouts will be over; one of my servers has all disks 99% full and when I need to change something it's as snappy as an empty server.  When it was Reiser it was a pain; just creating a new folder would take several seconds.

 

Read this thread for some more suggestions on how to make the move.

 

Link to comment

There are a lot of extra and in my opinion unnecessary steps in there. I've modified things somewhat in your quoted text based on what I would do.

 

Thank you! I missed this post earlier. I'll go over it again in a minute, but it makes sense. Good idea to format the drives when empty, before replacing them.

 

I was going to use Midnight Commander to do the file transfers. Not sure about rsync - never used it. I will not use TeraCopy because of all the Windows timeout issues.

Link to comment


mc does not have built-in copy verification; if you use it, I would copy, not move, the data, and then use one of the checksum verification programs to ensure the data got copied intact. It's not strictly necessary to move the data either: a full verified copy followed by changing the format to XFS will empty the drive for you much quicker than waiting for RFS to delete all those files. Same end result anyway, a blank XFS drive. All this is covered in much detail in the thread referenced by johnnie.black, but since I just went through much the same process, I figured I'd summarize what I did, customized to your specifics.
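For the checksum step after an mc copy, coreutils alone is enough. A minimal sketch, again with temp directories standing in for the real /mnt/diskN mounts:

```shell
# Stand-ins for the real disk mounts, e.g. /mnt/disk2 and /mnt/disk5.
src=$(mktemp -d); dst=$(mktemp -d); manifest=$(mktemp)

echo "some data" > "$src/file.bin"
cp -a "$src/file.bin" "$dst/"          # the copy step mc would do

# Build a checksum manifest on the source, then check it against
# the destination; every line must come back "OK" before you
# consider the old drive safe to reformat.
(cd "$src" && find . -type f -exec md5sum {} + > "$manifest")
(cd "$dst" && md5sum -c "$manifest")
```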
Link to comment


fitbrit,

 

There are a lot of differences between your procedure and that recommended by jonathanm that might not be obvious without a little study. Those differences will save you literally days.

 

For one thing, jonathanm doesn't actually preclear any drives that were already being used. This is because he never removes or adds them to the array. A drive only needs to be cleared when it is added to a new slot in a parity array. This is so parity will remain valid. It is still a good idea to preclear any new drive to test it even if it will be used in an existing slot, such as when rebuilding a smaller drive to a larger one, and jonathanm mentions that for new drives.

 

Another thing, jonathanm doesn't actually remove any drive from a slot without replacing it with a larger one to rebuild onto. If you actually remove a drive so the slot is empty, unRAID will have to rebuild parity. If in the end you decide you need to have fewer drives in the array, then remove all of the drives you want to remove all at once at the very end so you only have to rebuild parity once.

 

Let us know if you have any questions about any of this.

Link to comment
