Why does unRAID still default to XFS over BTRFS?



Even the latest version of unRAID still defaults to XFS for arrays.  Why is this?  Speed?  Stability?  BTRFS seems to only offer advantages (such as checksumming, although apparently bitrot can only be detected, not repaired).  There are several examples in this forum of users using it successfully (such as this one).

I am about to set up a new array, and would like to give BTRFS a go.  Any disadvantages I should be aware of?


Short answer: eventually we probably will.  Long answer: <will be provided by others mostly bashing btrfs>

 

I use both, in particular a couple array devices are formatted with btrfs in order to be target of vdisk backup using send/receive.

 

BTW: no such thing as "bitrot".

2 hours ago, limetech said:

BTW: no such thing as "bitrot"

 

1 hour ago, Mat1926 said:

i wonder about the history and who started it :)

Bitrot probably does exist with the same probability as the Modern Physics prediction that, if you throw a tennis ball at a brick wall long enough, there is a possibility the ball will emerge on the other side with both wall and ball intact.   😲    

 


Thanks guys, for your replies!  I don't think I'm qualified to say whether bitrot really exists or not - this article on Ars Technica (including example pictures) certainly seems to think so...
I have definitely seen that sort of corruption in my files in the past.  Of course that doesn't mean "bitrot" caused it - maybe it was file system corruption caused by a sudden crash?  Maybe it just depends on how you define bitrot ;)

 

Question: if I use BTRFS on my volumes, will its checksums make plugins like Dynamix File Integrity obsolete?


Most people think "bitrot" refers to flaws in the physical storage media developing over time, such that data written to a storage block at time 0 is not the same data that is read at time 1, and this fact goes undetected.
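The "time 0 vs. time 1" definition above is exactly what checksum tools like the Dynamix File Integrity plugin implement: record a digest for each file when it is written, re-hash later, and flag any mismatch. A minimal sketch of that idea in Python (the helper names are mine, not the plugin's actual code):

```python
import hashlib

def hash_file(path, algo="sha256", chunk=1 << 20):
    """Stream the file through a hash so large files never load fully into RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(paths):
    """Time 0: record a checksum per file."""
    return {p: hash_file(p) for p in paths}

def verify_manifest(manifest):
    """Time 1: re-hash and return the files whose contents changed."""
    return [p for p, digest in manifest.items() if hash_file(p) != digest]
```

Note this only detects a change; with a single copy there is nothing to repair from, which matches the "detected, not repaired" caveat earlier in the thread.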

 

Storage media does indeed degrade over time; however, storage devices also incorporate powerful error-detection and correction circuitry.  This makes it all but mathematically impossible for a degrading-media error to go undetected and unreported as an "unrecoverable media error", unless there is a firmware error or other physical defect in the device.

 

In virtually all h/w platforms, all physical data paths are protected by some kind of error-detection scheme.  For example, if a DMA operation is initiated to read data from RAM and write it to a storage controller over the PCI bus, the various data paths are protected with extra check bits, either in parallel with the data lines or using checksums.  This means that (again, barring firmware errors) random bit errors in data leaving memory and arriving at the storage controller are detected before the data is ever written to the media.
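The check-bit idea is easy to demonstrate in miniature. In this sketch CRC-32 stands in for whatever scheme a given bus actually uses; a single bit flipped "in flight" is caught before the payload is accepted:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Sender side: append a CRC-32 over the payload as the 'check bits'."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(framed: bytes) -> bytes:
    """Receiver side: recompute the CRC and reject any mismatch."""
    payload, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("check-bit mismatch: corruption on the data path")
    return payload

good = frame(b"sector data")
assert check(good) == b"sector data"

corrupt = bytearray(good)
corrupt[0] ^= 0x01          # flip one bit in transit
try:
    check(bytes(corrupt))
except ValueError:
    pass                    # corruption detected, never written to media
```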

 

There is ONE subsystem in modern PC systems that is typically not protected, however: system RAM.  If you have a file in RAM and along comes a random alpha particle (for example) and flips a bit, nothing detects this - btrfs or zfs (for example) happily calculates a checksum over the now-corrupt data, and the h/w happily writes it all the way to the media.  Until one day you read the file, see corruption, and say, "Damn you crappy storage device, you have bitrot!", when all along it was written that way.
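That failure mode is easy to see in a few lines: if the bit flips before the filesystem computes its checksum, the checksum faithfully protects the already-corrupt data, so every later read verifies as "clean":

```python
import hashlib

buffer = bytearray(b"important file contents")

# A stray alpha particle flips a bit in RAM *before* the
# filesystem checksums the buffer...
buffer[3] ^= 0x40

# ...so a btrfs/zfs-style checksum is computed over the corrupt bytes.
stored_checksum = hashlib.sha256(bytes(buffer)).hexdigest()

# Every later read verifies cleanly: the checksum matches
# what was (wrongly) written.
assert hashlib.sha256(bytes(buffer)).hexdigest() == stored_checksum
```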

 

If you really care about flawless storage, and indeed a more reliable system in general, you must use ECC RAM for starters.  Also use quality components everywhere else, especially the PSU.  That will be your best defense against "bitrot".

  • 3 weeks later...
On 10/13/2018 at 11:37 AM, limetech said:

or other physical defect in the device.

@limetech 

This is what I've generally assumed bitrot to be. My understanding is that as a drive (particularly hdds) starts to fail, files on bad sectors can be corrupted or unreadable. Am I missing something?

 

I've had files corrupted for unknown reasons and now I keep everything important on btrfs raid1. Actually, this is the main reason I'm not on unraid -- I want checksumming and files repaired before going to backup disks. That and the fact I don't need cache drives.

 

Can the Dynamix File Integrity plugin or something similar alert me with a list of corrupted files upon detection? That's all I'm using BTRFS raid1 for but it gives no performance benefit. 

53 minutes ago, jayarmstrong said:

This is what I've generally assumed bitrot to be. My understanding is that as a drive (particularly hdds) starts to fail, files on bad sectors can be corrupted or unreadable. Am I missing something?

That is correct, but if the HDD reads a sector and cannot recover the data (error burst too long for ECC correction), this is reported as a "read error" - in this case Unraid reconstructs the missing data by reading parity plus all the other data drives at that location.  Upon successful reconstruction, Unraid returns that data as "success" to the original I/O request and also initiates a write-back to the errored sector - if that write fails, the device gets disabled (affectionately referred to as "red-balled").
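Single-parity reconstruction as described is just a byte-wise XOR across the stripe. A toy sketch, assuming three data drives plus one parity drive at a single stripe offset:

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks (how single parity is computed)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Contents of the same stripe offset on three data drives.
data = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
parity = xor_blocks(data)

# Drive 1 returns an unrecoverable read error; rebuild its block
# from parity plus the surviving data drives.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```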

