Btrfs vs XFS for Pool Devices



A 1TB Team Group 2.5-inch SSD formatted btrfs had been working fine as a Pool Device (not being used as Cache) for over six months. However, over the weekend it suddenly became read-only. As I started to copy the files off to the Array, the drive became unmountable. I took the Array into "maintenance mode", but I could not run a check on the btrfs file system.
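
For anyone who hits the same wall: the check I was attempting from the console in "maintenance mode" looks roughly like this - a sketch only, with /dev/sdX1 standing in for the pool device's partition (the flag forces a read-only check, so nothing is written to the drive):

    # read-only consistency check of an unmounted btrfs filesystem
    btrfs check --readonly /dev/sdX1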

 

On reboot of Unraid, the SSD was recognized by the SuperMicro HBA card but would not mount. It appears the file system was trashed.
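
One thing I learned only after the fact: even when a btrfs filesystem refuses to mount, btrfs restore can sometimes still copy files off it without writing to the damaged device. I did not get to test this myself, so take it as a pointer rather than a recipe (both paths below are placeholders):

    # try to salvage files from an unmountable btrfs device onto the Array
    btrfs restore -v /dev/sdX1 /mnt/disk1/rescue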

 

I removed the SSD from my Unraid server and placed it in my Windows 10 computer. Windows Disk Management recognized the SSD but not the partition, so I created an NTFS partition and copied 16 GB of files to it as a test. I also ran CrystalDiskInfo on the SSD. Everything was fine; no errors were found.
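
For what it's worth, the same sort of health check can be run from the Unraid console without pulling the drive, using smartctl (sdX is a placeholder for the SSD's device name):

    # print SMART identity, health status and error logs for the drive
    smartctl -a /dev/sdX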

 

Googling, I found several threads on Reddit of people reporting similar issues with SSDs using the btrfs file system. Many called btrfs a trash file system, converted their SSDs to XFS, and have not had an issue since.

 

I placed the SSD back into Unraid, was able to re-assign it to the Pool Device, and reformatted it as XFS.
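
One note in case anyone retraces these steps: if Unraid balks at formatting a drive that still carries a foreign signature (NTFS in my case), clearing the old signatures from the console first usually sorts it out. This is destructive, so only run it against the drive you mean to wipe (sdX is a placeholder):

    # erase all filesystem and partition-table signatures on the whole device
    wipefs -a /dev/sdX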

 

Have any users here had similar experiences with btrfs on their Pool Devices? I have other Pool Devices on my two Unraid servers formatted as btrfs, and I am now a bit concerned about this issue recurring.
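
In the meantime I have started keeping an eye on my remaining btrfs pools using the filesystem's built-in error counters, which I understand should show non-zero values if a device starts misbehaving (the mount point below is a placeholder for your own pool):

    # per-device write/read/flush/corruption/generation error counters
    btrfs device stats /mnt/cache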

 

Thanks kindly

 

 

On 2/13/2024 at 4:26 AM, Vetteman said:

Have any users here had similar experiences with btrfs on their Pool Devices? I have other Pool Devices on my two Unraid servers formatted as btrfs, and I am now a bit concerned about this issue recurring.

 

I tried out btrfs on a cache pool some years back - maybe five years ago? I'm not sure "cache pool" is the right term; basically I had two duplicate SSDs in a RAID-1 type arrangement, back before unRAID allowed multiple cache pools, I think. I followed the official setup process, so no weird customisation was involved.
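
If memory serves, that two-SSD arrangement is just a btrfs RAID1 under the hood, with both data and metadata mirrored across the drives. Unraid's GUI did all the setup for me, but reconstructed from memory the manual equivalent would be something like this (device names are placeholders, and this wipes both drives):

    # create a two-device btrfs pool with data and metadata mirrored (RAID1)
    mkfs.btrfs -d raid1 -m raid1 /dev/sdX1 /dev/sdY1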

 

I ran into some problems with the filesystem and spent some time trying to fix it with help from this forum. I can't remember whether I was actually able to fix things, but whatever the outcome, my impression at the time was that it was just too risky for me to continue with btrfs. So I went back to a single cache drive on xfs and have not used btrfs since. In fact I might have done what you did and reformatted the drives as xfs. (Unfortunately I can't find the thread now, but it's long outdated anyway.)

 

I do believe btrfs has matured since then and should be better, and it does seem that lots of people use it quite happily. My understanding is that, at one time, if you wanted redundancy/RAID-1 type protection on a pool (in effect duplicating the pool drive), you had to use btrfs; xfs is a single-device filesystem and didn't support it. Perhaps these days you can do it with zfs, but that's too complicated for me.
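
If you're unsure what profile an existing btrfs pool is actually using, I believe it can be checked from the console (the mount point is a placeholder for your pool; RAID1 should appear next to Data and Metadata if the pool is mirrored):

    # show the allocation profile (single, RAID1, ...) for each block group type
    btrfs filesystem df /mnt/cache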

 

For me, one important and sometimes overlooked feature of any data protection policy is that the user must have a high degree of confidence in the system. Whether this confidence is subjective, justified, etc. is in some ways immaterial. You want to be able to rest easy. I just couldn't trust btrfs after my experience with it, but I do accept that it's perfectly fine for others.
