Spies Posted September 5, 2017
Just (hopefully) a quick question: how are Synology able to use BTRFS in a production release if it's so flakey on Unraid?
Tuftuf Posted September 5, 2017
I'd say this is more of a BTRFS issue than an Unraid one, based on things such as the status table at https://btrfs.wiki.kernel.org/index.php/Status. Synology are hoping for the best?
HellDiverUK Posted September 5, 2017
3 hours ago, Spies said: "if it's so flakey on Unraid?"
Eh? Flakey? What makes you think it's flakey?
c3 Posted September 5, 2017
Last month Red Hat also took a step back from Btrfs:
"Btrfs has been deprecated — The Btrfs file system has been in Technology Preview state since the initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. The Btrfs file system did receive numerous updates from the upstream in Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat Enterprise Linux 7 series. However, this is the last planned update to this feature."
From the 7.4 release notes.
Spies Posted September 5, 2017
28 minutes ago, HellDiverUK said: "Eh? Flakey? What makes you think it's flakey?"
Some people have had their cache pools corrupted. It's rare, but it does happen, and it's enough to question whether BTRFS is reliable enough for a production environment.
HellDiverUK Posted September 5, 2017
To be fair, unRAID isn't really a "production environment" OS - it's a home media server OS.
Spies Posted September 5, 2017
My point is, Synology devices are, so how is their implementation different?
JorgeB Posted September 5, 2017
From what I see on the forum, most pool issues happen when there's a connection/cable problem with one of the devices. When a pool device drops offline and then reconnects, e.g., after a reboot, it can leave the pool out of sync and create dual profiles, leading to further issues (this is a btrfs problem and it should improve in the future; it would also help if unRAID reported these errors, which I believe is a planned improvement as well). But the point is, it mostly happens because of hardware issues and users not noticing there's a problem, then continuing to use the pool until it faceplants. I have almost 300TB on btrfs and have never had a single issue.
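A hedged sketch of how one might check a pool for the symptoms described above (per-device error counters and "dual profiles"). The mount point /mnt/cache is an assumption about an Unraid cache pool layout; adjust it for your system, and note the commands need a mounted btrfs volume and usually root:

```shell
# Guarded so the script degrades gracefully where btrfs-progs or the
# pool isn't present.
POOL="${POOL:-/mnt/cache}"   # assumed mount point, adjust as needed

if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
    # Non-zero write/flush/corruption counters here point at a dropped
    # device or bad cabling rather than a filesystem bug.
    btrfs device stats "$POOL"

    # If this lists BOTH 'single' and 'RAID1' chunks ("dual profiles"),
    # the pool went out of sync after a device dropped and reconnected.
    # A balance converges everything back onto one profile, e.g.:
    #   btrfs balance start -dconvert=raid1 -mconvert=raid1 "$POOL"
    btrfs filesystem df "$POOL"
    status="checked $POOL"
else
    status="btrfs-progs or pool not available; nothing checked"
fi
echo "$status"
```

The point of checking `btrfs device stats` first is that it distinguishes hardware trouble (the common cause JorgeB describes) from genuine filesystem corruption.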
tdallen Posted September 5, 2017
What makes you think that Synology users are enjoying rock solid stability using BTRFS?
HellDiverUK Posted September 5, 2017
Synology is doing btrfs a little differently - they're still using mdraid underneath, with a btrfs volume on top. They're not using btrfs' own RAID. It's the RAID that's not working quite right in btrfs. I've tried it on a DS916+ and it works fine.
JorgeB Posted September 5, 2017
2 minutes ago, HellDiverUK said: "It's the RAID that's not working quite right in btrfs."
That is correct: single volumes work very well, and raid1/10 is mostly OK, though they can have issues when a disk drops/reconnects as I mentioned above. raid5/6 is still very buggy and should only be used for testing.
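Since the risk level differs so much by profile, it can be worth confirming which one a pool actually uses. A minimal sketch, again assuming a pool mounted at /mnt/cache (a hypothetical path):

```shell
POOL="${POOL:-/mnt/cache}"   # assumed mount point

if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
    # The 'Data,...' and 'Metadata,...' lines name the active profile.
    # Seeing RAID5/RAID6 here means you're on the code path that was
    # still considered test-only at the time of this thread.
    btrfs filesystem usage "$POOL" | grep -E 'Data,|Metadata,'
    profile_checked="yes"
else
    profile_checked="skipped (btrfs-progs or pool not available)"
fi
echo "profile check: $profile_checked"
```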
StevenD Posted September 5, 2017
3 minutes ago, johnnie.black said: "That is correct: single volumes work very well, and raid1/10 is mostly OK, though they can have issues when a disk drops/reconnects as I mentioned above. raid5/6 is still very buggy and should only be used for testing."
I had serious slowdowns with a single BTRFS cache drive. I re-formatted it as XFS and I haven't had a problem since.
JorgeB Posted September 5, 2017
Just now, StevenD said: "I had serious slowdowns with a single BTRFS cache drive. I re-formatted it as XFS and I haven't had a problem since."
Some users appear to have that issue, but most are not affected, and some have the same problem with xfs - I'm still not convinced it's a btrfs-only problem.
HellDiverUK Posted September 5, 2017
2 minutes ago, StevenD said: "I had serious slowdowns with a single BTRFS cache drive. I re-formatted it as XFS and I haven't had a problem since."
I thought btrfs was required for cache due to Docker? Or has that changed? Can't say I've seen this issue, and I've been running a btrfs cache since the first betas that allowed it.
Spies Posted September 5, 2017
My cache (spinner) drive has VM images on it, and I run XFS because of something I read about BTRFS not playing nicely with high I/O (though I can't recall where I read that). Is that no longer true?
itimpi Posted September 5, 2017
9 minutes ago, HellDiverUK said: "I thought btrfs was required for cache due to Docker? Or has that changed? Can't say I've seen this issue, and I've been running a btrfs cache since the first betas that allowed it."
That was only true for a short time in one of the early v6 betas. Now you can use any file system to host the docker.img file (although that file DOES use BTRFS format internally). The only time you MUST use BTRFS for the cache is when you want a cache pool (i.e. multiple disks in the cache).
JorgeB Posted September 5, 2017
13 hours ago, Spies said: "My cache (spinner) drive has VM images on it, and I run XFS because of something I read about BTRFS not playing nicely with high I/O."
VM images tend to fragment a lot on btrfs because of COW. It's not a problem for SSDs, but it is for spinners. You can still use btrfs with COW disabled, but you also lose its main benefits, like checksums and snapshots, so it's probably better to just use xfs in those situations. Correction: snapshots can still be used, though it's unclear to me how fragmentation will be impacted if they are used for VM images even with nodatacow.
limetech Posted September 5, 2017
On the Share Settings page (when you create a new share), there is this setting: Enable Copy-on-write: Yes|No|Auto. If you set this to 'No', then if that share is created on a btrfs volume, the top-level share directory will be created with the NOCOW bit set (to disable COW). The system-defined 'domains' share is created with this setting set to 'No'. This means COW is disabled for vdisk.img files created in the 'domains' share (and in fact all files in that share will have COW turned off). The implication is that VM performance remains very good, but there's no more data checksumming for those files.
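For reference, the NOCOW bit limetech describes is the same thing you can set by hand with chattr's 'C' attribute. A minimal sketch, using a temporary directory as a stand-in for a share path like /mnt/cache/domains (that path is an assumption about your layout):

```shell
# NOCOW must be set while the directory is EMPTY; files created inside
# afterwards inherit the attribute. It has no effect on existing files.
DIR=$(mktemp -d)

# On non-btrfs filesystems chattr +C fails, so report instead of aborting.
if chattr +C "$DIR" 2>/dev/null; then
    lsattr -d "$DIR"          # a 'C' among the flags confirms COW is off
    nocow="supported"
else
    nocow="not supported on this filesystem"
fi
echo "NOCOW: $nocow"
rmdir "$DIR"
```

This is why the share setting only takes effect at share creation time: the bit is applied to the empty top-level directory and inherited from there.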
bwnautilus Posted November 25, 2018
Sorry to jump in on an old thread, but I was searching for Synology and btrfs and this thread was at the top of the results. I'm migrating from Synology to Unraid. My Unraid array is currently using xfs. I just copied a Synology directory to the array and realized that xfs doesn't support the file creation date (like btrfs does). My question: if I rebuild the array using btrfs, will it support file creation dates? I.e., will I see the same file creation date from the Synology NAS if I use the 'cp -p -r' command? Thanks in advance.
limetech Posted November 25, 2018
Good question. Those file systems indeed maintain creation time (called "birth" time), but AFAIK no standard linux OS command directly displays it. It can be done however: http://moiseevigor.github.io/software/2015/01/30/get-file-creation-time-on-linux-with-ext4/
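For what it's worth, GNU coreutils releases newer than this thread (8.31 and later) can display birth time directly through stat's %w format, which uses the statx() syscall under the hood. A quick sketch; on filesystems or kernels that don't record birth time, %w simply prints '-':

```shell
# Create a throwaway file and read back its birth (creation) time.
f=$(mktemp)
birth=$(stat --format='%w' "$f")
echo "birth time: $birth"
rm -f "$f"
```

Note that `cp -p` preserves modification and access times but not birth time - the copy's birth time is set at the moment the new file is created, which is relevant to the migration question above.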
bwnautilus Posted November 25, 2018
So when I mount a Synology share with Windows File Explorer and select the option to show the creation date, is the SMB protocol interfacing with stat() on the Synology end? If so, would the same thing happen on Unraid?
limetech Posted November 25, 2018
44 minutes ago, bwnautilus said: "So when I mount a Synology share with Windows File Explorer and select the option to show the creation date, is the SMB protocol interfacing with stat() on the Synology end? If so, would the same thing happen on Unraid?"
Just ran a few tests: it looks like the Windows creation time is preserved. Modified time is updated properly. Access time is stuck as the same as creation time because Unraid mounts the underlying volume with 'noatime,nodiratime'. Hope this helps.
bwnautilus Posted November 25, 2018
Thanks, yes this helps. I'm going to rebuild my array with btrfs.