RFS or XFS or BTRFS with unRAID 6b7+


SSD


An interesting note here: in 6b7, if you try xfs_check it returns "xfs_check is deprecated and scheduled for removal in June 2014."  It appears that xfs_repair -n will become the primary tool.
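For anyone who wants to poke at it, the read-only check is roughly this from the unRAID console (the device name is only an example -- use whatever your disk is, with it unmounted or the array stopped):

    xfs_repair -n /dev/md1      # no-modify mode: reports problems but changes nothing on disk
    # only after the dry run, and with parity/backups to fall back on:
    # xfs_repair /dev/md1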

 

I've read that XFS is extremely good with large files and has very low CPU usage.  I primarily have large files and am using an Atom processor.  So, I already started the move and have one more HDD to switch over from RFS to XFS.  I use BTRFS on my cache drive for Docker and Plex.  As long as I keep the library (many small-file reads and writes) on that file system, I'll be good.

Link to comment

I've been trying to find some good fsck recovery stories to see if the general tool can match what reiserfsck has shown it can do, but with no success.  The separate xfs_check and xfs_repair utilities can help; but these apparently aren't nearly as robust as reiserfsck.

 

btrfs has more promise ... but at this point it seems it's just that -- a "promise" of a lot of really cool features that aren't necessarily ready for prime time.

 

As long as all your data is backed up, it clearly doesn't hurt to experiment a bit ... but it's not at all clear that there are any data integrity benefits to switching (except possibly with btrfs once it evolves a bit).

 

Did you see THIS post?

Link to comment


 

Yes, I read that.  It does show that XFS is pretty robust ... but I still haven't seen any recovery stories that can match what many have found using reiserfsck on this forum -- some of those recoveries have been nothing short of miraculous.  If the author wasn't serving a life sentence, I suspect this would still be a VERY viable file system.

 

I agree completely that it's a good idea to use a newer file system for NEW arrays going forward.  I just don't see any good reason to SWITCH a current array to anything else.  The only exception is that I'm thinking about adding a pair of SSDs as a btrfs cache pool, so I can get faster writes while still having immediate fault-tolerance (the lack of which is why I don't currently use a cache drive).

 

Link to comment

I have no immediate plans to move from RFS on my existing disks, but new disks are going to be XFS going forward.

 

Reiserfs is no longer being enhanced. You have a few Linux geeks doing enough to keep it compatible with OS advances is all. The Reiser4 initiative sort of died.

 

Linux distros are dropping support. Slack may maintain it for a while, but the filesystem's days are numbered.

 

I agree it has been fantastic at recovering from scenarios that users had no right to expect it could recover from, but XFS may have similar levels of recoverability.  It is wildly popular for enterprise use.

 

I'm considering running a test of reiserfsck, btrfs, and XFS by zeroing out the first 1GB of a full 1TB disk and comparing how well they can recover.
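Roughly what I have in mind (sdX1 is only a placeholder for the scratch drive's data partition -- destructive, obviously, so scratch disk only):

    dd if=/dev/zero of=/dev/sdX1 bs=1M count=1024 conv=fsync    # wipe the first 1GB of the filesystem
    # then let each filesystem's own tool try to put it back together, e.g.
    # reiserfsck --rebuild-tree, xfs_repair, and btrfs check / btrfs restore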

Link to comment


 

Not sure how realistic that "damage" is, but it should be an interesting test nevertheless.

 

Barring any feedback that changes my mind in the next few months, my plan for my NEXT UnRAID server is to use a btrfs cache pool (2-3 SSDs) and XFS for the data drives.  But as I noted, I have NO plans to change my existing servers, other than possibly adding a pair of SSDs as a btrfs cache pool; and I also do not plan to mix file system types on the data drives (i.e. if I replace any disks, they'll still be Reiser).

 

But I'm definitely interested in your test results.

 

Link to comment


 

This roughly approximates my very first reiserfsck experience. If anyone has a better idea of how to corrupt a drive and test recoverability let me know.

Link to comment

I was running a btrfs pool in 6b6, and before 6b7 I was having some memory trouble and server lockups.  Anyway, there were some errors on the cache pool when I ran btrfs check.  However, everything I read said not to use btrfs check --repair.  I'm not sure what the procedure to repair was -- maybe scrub -- but it was hard to find info and it didn't give me confidence.
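If it helps anyone else searching later, the read-only check plus a scrub look roughly like this (device and mount point are only examples for a cache pool):

    btrfs check /dev/sdb1            # read-only unless you explicitly add --repair; run it with the pool unmounted
    btrfs scrub start /mnt/cache     # verifies checksums on the mounted pool; with redundant copies it can also fix them
    btrfs scrub status /mnt/cache    # progress and error counts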

Link to comment

This roughly approximates my very first reiserfsck experience. If anyone has a better idea of how to corrupt a drive and test recoverability let me know.

What you suggested is essentially the same corruption as my error: a rebuild onto the wrong disk for a short period of time.  That's what I did to an RFS cache drive full of data when I had an array drive die on me.  I didn't realize my mistake until about 1-2% of the cache drive (in the failed array drive's location) had been rebuilt from parity.  Ran the RFS tool and got back all but about 150GB of data on a 2TB cache drive (100GB free before the mistake) - had 250GB free after RFS recovery and ~3 files in LOST+FOUND.
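For anyone who hasn't used it, the reiserfsck sequence is roughly this (drive unmounted, device name only an example):

    reiserfsck --check /dev/sdc1          # read-only pass; reports what it thinks is wrong
    reiserfsck --fix-fixable /dev/sdc1    # only if --check reports fixable corruption
    reiserfsck --rebuild-tree /dev/sdc1   # the heavy hammer for a damaged tree; orphans land in lost+found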
Link to comment


Zeroing the first bits and writing a valid but wrong filesystem to the first part of a drive is different, and depending on the recovery tools may not be handled well at all.

 

Perhaps instead of zeroing the first part of the drive, you could cat 2 half drive images into one, and see what the recovery tools sort out.

 

Even better, cat half a reiserfs and half an xfs or btrfs together, run each fs recovery toolset on the result, and see what falls out.
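A rough sketch of that, using a couple of loopback image files in place of real drives (names and sizes are just examples):

    truncate -s 2G a.img && mkfs.reiserfs -ff a.img    # -ff: force mkfs on a regular file
    truncate -s 2G b.img && mkfs.xfs b.img
    # (loop-mount each image and copy some test data in before splicing)
    dd if=a.img bs=1M count=1024 >  spliced.img        # first half of the reiserfs image
    dd if=b.img bs=1M skip=1024  >> spliced.img        # second half of the xfs image
    losetup -f --show spliced.img                      # prints the loop device, e.g. /dev/loop0
    # then run each filesystem's recovery toolset against that loop device and see what falls out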

Link to comment


The most common issue I've seen is assigning a data drive to the parity slot.

 

 

So zeroing the start of the drive and then doing a dd zero somewhere in the middle would be close to what we've seen:

1. overwriting the start of the drive.

2. hard drive corruption of the superblock with bad sectors.  The only thing that saved me from number 2 was using dd_rescue in reverse mode.
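Roughly, on a scratch disk (sdX1 is only a placeholder for the test drive's data partition), that combined corruption could be faked with something like:

    # DESTRUCTIVE -- scratch disk only
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=64 conv=notrunc,fsync                 # wipe the start / superblock area
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=1024 seek=512000 conv=notrunc,fsync   # wipe ~1GB around the 500GB mark
    # (for the bad-sector case, dd_rescue run in reverse -- reading from the end of the disk backwards -- was what saved me)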

Link to comment

I am strongly considering moving to BTRFS because of the ability to scrub data.  I went from FreeBSD to unRAID and the primary feature that I miss is the data scrub.  I would run a zpool scrub once a month and find some bit rot every third scrub or so.  From what I have read, Western Digital drives are most prone to data rot and I run only WD drives.

 

I just tried the "btrfs scrub" command in the terminal of unRAID and it seems to be implemented.  I think I will convert my small 64GB SSD to BTRFS and see how the scrub performs.
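For reference, this is about all there is to it once the volume is mounted (mount point only an example):

    btrfs scrub start /mnt/cache      # background scrub of the mounted volume
    btrfs scrub status /mnt/cache     # progress plus any checksum errors found
    # on a single device a scrub can detect rot but has no second copy to repair from; a two-drive btrfs pool can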

 

craigr

Link to comment

I know unRAID now offers single-drive BTRFS volumes as an array drive format option, alongside the old standby RFS and the other new option, XFS.

 

However, Tom mentioned some issues with BTRFS that appear to be due to its copy-on-write filesystem features.

 

- btrfs is still a real pain to manage in some circumstances.  For example, try moving a directory that contains subvolumes to another partition and preserve the subvolumes! Very difficult.  The 'standard' unix tools: cp, mv, etc. simply don't work (well they work, but your 4GB docker directory balloons to 30-40-50GB).  By isolating Docker in its own volume image file, it is easier to move around onto other devices.
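To illustrate the point being made there (paths are only examples, and both ends have to be btrfs):

    # a plain recursive copy re-duplicates every extent and silently flattens the subvolume structure
    cp -a /mnt/cache/docker /mnt/disk1/docker
    # the btrfs-native route keeps the shared extents, but a snapshot does not recurse into
    # nested subvolumes, so it has to be repeated per subvolume -- hence "a real pain"
    btrfs subvolume snapshot -r /mnt/cache/docker /mnt/cache/docker-ro
    btrfs send /mnt/cache/docker-ro | btrfs receive /mnt/disk1/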

 

I don't see anyone mentioning these issues when debating which filesystem option is best to use for new array drives.

 

Are the issues Tom mentioned irrelevant when BTRFS is used on an array drive? 

 

What I mean is, were the issues he raised only relevant when using BTRFS on the cache drive with multiple partitions and copying data around on that one drive?  (And thus the reason he's implemented mounting BTRFS loopback volume files rather than requiring BTRFS on the cache drive to support Docker.)

 

-- stewartwb

Link to comment

While I will be making the upgrade to unRAID 6 somewhere during the RC stage, one feature that would be a major factor in determining which FS to use is "shadow copy".

 

I made a feature request here in case you want to chime in.

 

Shadow copy allows Windows to retrieve previous versions of a file on a Samba share, which can come in handy, especially if you have working files stored there.

 

I'm not sure if xfs or btrfs can support that feature or not.

 

Link to comment


This is certainly not something I have heard of being supported by these file systems, so I would be surprised if it appeared.  I think it is a very Microsoft-specific feature.  However, I would like to be proved wrong :)
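For what it's worth, Samba has a vfs_shadow_copy2 module that can expose filesystem snapshots (btrfs snapshots included) to Windows as "Previous Versions", so the feature lives at the Samba layer rather than in the filesystem itself.  A rough smb.conf sketch, with the share path and snapshot naming purely as examples:

    [projects]
        path = /mnt/cache/projects
        vfs objects = shadow_copy2
        shadow:snapdir = .snapshots
        shadow:format = %Y-%m-%d-%H%M%S
    # the snapshots still have to be created separately, e.g. from cron:
    # btrfs subvolume snapshot -r /mnt/cache/projects /mnt/cache/projects/.snapshots/$(date +%Y-%m-%d-%H%M%S)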

Link to comment
  • 5 months later...

 

Is this something unRAID is being designed to support down the line? Are its btrfs volumes set up as subvolumes by default to allow for snapshotting? I guess I'll find out soon enough as I am preparing my unRAID system for the 6 upgrade. I will probably start with making the cache drive btrfs.
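Easy enough to check once the cache drive is on btrfs -- something along these lines (mount point only an example):

    btrfs subvolume list /mnt/cache                             # no output means everything sits in the top-level volume
    btrfs subvolume snapshot /mnt/cache /mnt/cache/snap-test    # snapshotting the top level still works; it creates a new subvolume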

Link to comment
  • 9 months later...

It's an old one, but I found this thread as the first hit on Google...

 

Has the default file system been changed to XFS now? My freshly installed unRAID 6.1.6 formatted my drives as ReiserFS.

You can choose the default in Disk Settings. I think the default if you haven't chosen is XFS for array and btrfs for cache. You can change the filesystem of any individual disk with the array stopped and let unRAID reformat it.
Link to comment
  • 2 months later...

So I'm new to unRAID and, like many others, I too have my doubts about which file system to choose, and since this thread is a bit old I was wondering what has changed over the years.

 

So is BTRFS still as "bad" as it was?

I do not think that BTRFS is "bad" - it just seems that the recovery tools are not as mature.  BTRFS is now being adopted as the preferred format by some mainline Linux distros, so they are obviously happy with it.  However, XFS has been around much longer and as such must be considered the more mature and stable of the two.

 

In unRAID the default for the data disks is XFS, and for the cache disk it is BTRFS (so that a cache pool can be supported), although you can explicitly set the format for any disk from any of the currently supported formats (ReiserFS, XFS, BTRFS).  The only time a specific format is mandatory is if you set up a cache pool, in which case the cache disks will automatically be set to use BTRFS.

 

Having said that, although ReiserFS is starting to be deprecated, the recovery tool for that format is by far the best at recovering from severe corruption with minimal/no data loss.

Link to comment
