
XFS works really well


TSM


Saturday morning I finished migrating all of my drives to XFS, and I must say the difference in system performance is striking.  It feels like I gave the server a hardware upgrade. 

 

My server has 14 data drives and a complex directory structure, with many sub-folders whose contents are spread across several different drives.  I also have a lot of drives that are very close to full, which I know can sometimes cause reiserfs issues.

 

When opening folders that contain a lot of files strewn across multiple drives, there is still a delay, but not big enough to cause me any anguish.  I used to have some folders that would make Windows Explorer time out once, maybe even twice, before I could finally view their contents.  Those folders took maybe 30 seconds to open after the upgrade.  And folders that took 10 to 20 seconds to open before now seem to open almost instantly, or with only a brief hesitation.

 

Before doing the migration I was seriously considering upgrading my server's core hardware, but now I'm not so sure.  I might just leave it alone for another few years. 


Some, if not all, of that performance increase would have been realized by doing the exact same procedure, but with the destination being a freshly formatted reiserfs drive. Reiserfs is apparently really bad at handling fragmentation, so as you fill and delete files in a complex directory and folder structure over time, it performs really poorly compared to a fresh fill of the same files.

 

There are other VERY good reasons to migrate away from reiserfs, but you can get the performance back with reiserfs by simply emptying out one drive at a time and copying the data back fresh. I know because I've done it a couple of times now over the lifetime of my unraid server, and it never fails to produce a really good bump in performance. Since unraid now supports XFS, the next time I need to do a purge and fill, the destination will be XFS.
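
For anyone who wants to script that purge-and-fill, here is a rough sketch of the copy-off / copy-back half of it. It assumes unRAID-style /mnt/diskN mount points and that rsync is available on the server; the disk numbers are placeholders, and the reformat step in the middle is done from the unRAID web UI, not from this script.

import subprocess

SOURCE = "/mnt/disk3/"   # placeholder: the fragmented reiserfs disk
SPARE = "/mnt/disk9/"    # placeholder: an empty disk with enough free space

def copy_tree(src: str, dst: str) -> None:
    # -a preserves permissions and timestamps, -H keeps hard links,
    # --progress shows what rsync is currently copying.
    subprocess.run(["rsync", "-aH", "--progress", src, dst], check=True)

# 1. Empty the source disk onto the spare.
copy_tree(SOURCE, SPARE)

# 2. Reformat SOURCE (reiserfs again, or XFS) from the unRAID web UI.

# 3. Copy everything back so directories and files are written out fresh.
copy_tree(SPARE, SOURCE)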



What you say makes sense logically, but I can't see how my drives could have been fragmented badly enough for that to have single-handedly caused the problems I saw.  All things being equal, I'm sure I had some fragmentation, but for the most part my files are written once and then read multiple times.  I have maybe 20 smallish files that ever get changed, and almost nothing ever got moved around once written.  Plus, I think there are other posts where people have said that reiserfs disks that are very full can cause performance issues with unraid.

 

I'm not saying you're wrong, because you may know a lot more about the topic than I do; I'm just saying that I'm not convinced.



 

As you add and remove files in directories, those directories get fragmented, and the metadata starts to be scattered around the filesystem (wherever free space may be).  The larger and deeper the directory tree, the more prone it is to this problem on reads.  As the filesystem becomes full, the filesystem driver has to search through the trees to allocate directory space and/or metadata space for stat information.

 

I've noticed that rsyncing data from one drive to another causes all the directories to be built first.

This pre-allocation of directory space, placed at the outer tracks, provides a speed boost.

I.e., the directory can possibly be read sequentially, without random head movement while searching for other blocks/metadata.

 

I've noticed this type of benefit when rsyncing and moving filesystems with ext3 and reiserfs.

I'm not that familiar with XFS at this point in time.  From what I remember, historically it was better at file allocation and removal.
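
If you want to put a number on this rather than judge it by how Windows Explorer feels, a minimal sketch like the one below times a full walk-and-stat of a directory tree; running it against the same share before and after a fresh copy (ideally after dropping the page cache) gives a rough before/after comparison. The path is just an example.

import os
import time

ROOT = "/mnt/disk1/Media"   # example path; point it at one of your shares

# Tip: run "echo 3 > /proc/sys/vm/drop_caches" as root first so cached
# metadata doesn't hide the on-disk layout.

start = time.monotonic()
count = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        # follow_symlinks=False so a dangling link doesn't abort the run
        os.stat(os.path.join(dirpath, name), follow_symlinks=False)
        count += 1
elapsed = time.monotonic() - start

print(f"stat'ed {count} files in {elapsed:.1f} seconds")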



Now, filesystem fragmentation like you describe, that makes sense.  You say that rsync pre-allocates the directory space.  Would MC do the same thing?  I think somebody said that MC is really just a GUI front end for the underlying Linux commands, which would do the same thing.



 

Pre-allocate might be the wrong word, as it implies that all directory entries are pre-made before the files are copied.

From what I've seen, each parent directory is created before any files are copied.

 

If you have a really deep directory that has a lot of files, that deep directory will eventually get fragmented as files are copied.

If there are only a small number of files per directory, then that pre-created parent directory will be contiguous, or less fragmented if it has to be extended.

 

MC with mv might do it for the parent directory; however, I'm not sure it will build the whole tree the way rsync -a would.
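
To make the distinction concrete, here is a toy sketch of the two-pass ordering being described: build the whole directory tree first, then copy the files into it, so the directory entries themselves are written out close together. The paths are placeholders, and this is only an illustration of the idea, not a substitute for rsync -a (it ignores ownership, hard links, sparse files, and so on).

import os
import shutil

SRC = "/mnt/disk9/Media"   # placeholder source
DST = "/mnt/disk3/Media"   # placeholder target

# Pass 1: create every directory in the tree before copying any file.
for dirpath, dirnames, filenames in os.walk(SRC):
    rel = os.path.relpath(dirpath, SRC)
    os.makedirs(os.path.join(DST, rel), exist_ok=True)

# Pass 2: copy the files into the already-created directories.
for dirpath, dirnames, filenames in os.walk(SRC):
    rel = os.path.relpath(dirpath, SRC)
    for name in filenames:
        shutil.copy2(os.path.join(dirpath, name),
                     os.path.join(DST, rel, name))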

