JSE


  1. I would like to eventually see bcachefs pools added to unraid someday, but it is still missing a lot of functionality: namely, the ability to easily monitor an array, scrub, rebalance when adding/removing devices, and a proper process for device replacement. Not to mention there have been several major data loss bugs since the 6.7 merge. The filesystem is still considered experimental for a reason. I would prefer we hold off until it has had time to mature into a better, well-rounded solution before it's considered for inclusion in unraid, especially given we already have ZFS, whose ARC caching is much better than Linux's native caching.
  2. As @JorgeB mentioned, NOCOW is no longer the default for any shares, but even if you set NOCOW, keep in mind that not only is redundancy compromised: compression will not work whatsoever on those files. Compression needs copy-on-write to function, so NOCOW means no checksums and no compression. That said, the zstd compression algorithm is more than fast enough on most modern CPUs to discard any files that can't be compressed and store them normally, so forcing compression only costs CPU time on the files that are compressible. Alternatively, unraid could use `btrfs property set /path/to/share/on/btrfs compression zstd` on a directory or subvolume, which works identically to compress-force as a filesystem-wide mount option. This would allow users to set compression on a per-share basis, much like NOCOW. It doesn't allow setting a compression level, but given that unraid defaults to level 3 anyway and this property also defaults to 3, I'd argue that's a good default regardless. Given that ZFS supports compression on a per-dataset basis, a per-share option might be the better long-term solution compared to per-filesystem, though it would need to be set at share creation time to be most effective. Since btrfs should be using subvolumes for shares, this depends on that feature request being implemented as well (though this property doesn't technically need to be set per subvolume the way it does for ZFS datasets). IMO this would be an even better way of handling compression: you could leave compression disabled for media shares so no CPU time is wasted on media, but enable it for shares you know would benefit, and it would force compression without any mount options at all.
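     Something like this is what I have in mind from the shell side (a sketch; the pool path and share names are examples):
     ```sh
     # Assuming a btrfs pool mounted at /mnt/cache (hypothetical path).
     # The compression property force-compresses new writes to that subtree,
     # behaving like compress-force does filesystem-wide.
     btrfs subvolume create /mnt/cache/documents
     btrfs property set /mnt/cache/documents compression zstd
     btrfs property get /mnt/cache/documents compression   # prints: compression=zstd

     # Media share: leave the property unset so no CPU time is spent compressing.
     btrfs subvolume create /mnt/cache/media
     ```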
  3. Yep, creating and deleting subvolumes is as simple as working with a directory. Currently, a share on unraid is just a directory at the top level (root) of a disk or pool, so if you `mkdir myshare` on a disk, it will create a share called "myshare". A subvolume is just as simple to make: `btrfs subv create myshare`, and for all intents and purposes it works just like a directory, but with the added performance benefits and the ability to snapshot it. Deleting it is the same as deleting a directory: `rmdir myshare` will delete the subvolume much the same as `btrfs subv delete myshare` does (the latter is faster though). No formatting necessary.
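     The full lifecycle, as a quick sketch (names are hypothetical):
     ```sh
     cd /mnt/cache                    # example pool mount point
     btrfs subvolume create myshare   # instant, no formatting; behaves like a directory
     btrfs subvolume show myshare     # yet it's snapshot-capable, with its own tree
     btrfs subvolume delete myshare   # removal is instant too
     ```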
  4. This sounds perfect, exactly what we need. My thought was to have the stats appear on the pool page as well, possibly around the balance/scrub options, with a button to clear the stats. But 100% we're on the same page here; we definitely need this type of monitoring for pools. I haven't tested ZFS much since it was added, but if it's also missing monitoring, we need that too.
  5. Currently, when you format a disk or pool to btrfs in unraid, an option is provided to enable compression. While unraid does use the efficient zstd algorithm at the default level 3, which I think is a very good default, it uses the `compress` mount option rather than `compress-force`. Btrfs has a very rudimentary heuristic with the `compress` mount option: it abandons any attempt at compression on a file if the first few KiB are not compressible. This results in a lot of files with compressible portions not getting compressed at all, and in many cases it behaves as if you didn't have compression enabled whatsoever, which makes the current unraid compression behavior arguably not very useful. Now, with some of the other algorithms like zlib, force compression could actually have a negative impact. However, zstd is smart enough to discard any attempted compression that doesn't yield a storage improvement. This request is thus to use the `compress-force` option instead, so compression actually happens on files whose headers can't compress, or at least to provide an option (such as a check box or an alternative dropdown entry) so those of us who do want compression can force it. This yields much more space savings for me than the current option, but today I have to resort to remounting my disks with `compress-force` via the shell or a script rather than relying on the option unraid provides.
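     This is roughly what my workaround looks like today (a sketch; the mount point is an example):
     ```sh
     # Re-apply mount options on a live filesystem; zstd level 3 matches unraid's default.
     mount -o remount,compress-force=zstd:3 /mnt/cache
     # Confirm the active option took effect:
     grep /mnt/cache /etc/mtab
     ```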
  6. High time it gets included then. Do you have a link? Without this, lots of people can lose data that could be recoverable right now and not even know it. I'm on a raid (ha, no pun intended) of recommending changes to improve the reliability of btrfs pools, since I do a lot of this stuff manually in the shell, but it really should be included in a more user-friendly way; most people aren't familiar with or experienced in managing a btrfs pool from the shell, and honestly, they shouldn't need to be.
  7. On unraid with ZFS, if you create a share and it exists on a ZFS volume, it's created as a dataset. This makes creating snapshots, rolling back, etc. much easier. This feature request is to extend that behavior to btrfs subvolumes: the top-level directory (aka a share) should always be a subvolume instead of a regular directory. A subvolume in btrfs is also its own independent extent tree; each subvolume acts as an independent filesystem even though it merely appears as a directory. What this means is that, by using one subvolume per share, any filesystem locking behavior is limited to the subvolume in question rather than the filesystem overall (in most cases). This allows for higher levels of concurrency and thus better performance, especially for pools with different shares that have high IO activity.
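     In shell terms, the proposal boils down to this (sketch; the pool path is an example):
     ```sh
     # What creating a share on a btrfs pool effectively does today:
     mkdir /mnt/cache/myshare
     # What this request proposes instead:
     btrfs subvolume create /mnt/cache/myshare
     # Each share then shows up as an independent subvolume:
     btrfs subvolume list /mnt/cache
     ```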
  8. With btrfs, if you have a live running pool and a disk disappears from the system (i.e. you pull it or a cable flakes out), or if the disk outright fails while the array is running, btrfs doesn't provide any indication of the missing disk via most of the monitoring commands. For example, if you run `btrfs filesystem show` after a disk has dropped from a pool, it will still show a reference to the disk even though it's missing. Even if it's just a flaky cable and the disk reappears to the system, it will remain unused until you fully remount the filesystem (and then a scrub, not a balance, would be all that's necessary to resync, but I digress). If you unmount the pool and remount it with the disk still missing, you will need the degraded option, which unraid handles, but it's only after that remount with the degraded option that `btrfs filesystem show` will indicate any missing devices. Likewise, it's only after stopping the array that unraid will indicate a pool has missing devices. This means unraid users are in the dark if a disk flakes out or completely fails while the array is running; if the user doesn't stop the array often, their pool could be degraded for months without them knowing.
     Btrfs does, however, provide a means to detect device failures and issues: the `btrfs device stats` command. If any device stat shows a non-zero value, there is an issue with the array and it's possibly degraded. When a disk flakes out or fails, for example, the device stats will show write errors. It is absolutely critical to monitor btrfs device stats to detect a degraded-array event on a running array.
     Thus, this feature request is to include this critical information in the unraid GUI when you're viewing a pool, and to notify the admin of any non-zero device stats so they can act on them. Since resetting these stats is what makes detecting later errors possible, we'd also need a GUI option to reset device stats after any issues are addressed. I have some other ideas to make btrfs pools more resilient and efficient (particularly around the fact that I feel unraid runs balances much more often than necessary), but those are left for a separate feature request; device stat monitoring is the most critical requirement for proper pool monitoring.
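     A sketch of what the GUI would essentially be wrapping (the pool path is an example, and the notification hook is my assumption about unraid's bundled notify script):
     ```sh
     # Print per-device error counters for the pool mounted at /mnt/cache:
     btrfs device stats /mnt/cache
     # --check returns a non-zero exit status when any counter is non-zero,
     # which is handy for a periodic monitoring script:
     if ! btrfs device stats --check /mnt/cache >/dev/null; then
         # Hypothetical hook into unraid's notification system:
         /usr/local/emhttp/plugins/dynamix/scripts/notify -e "btrfs monitor" \
             -s "btrfs pool errors" -d "Non-zero device stats on /mnt/cache" -i "alert"
     fi
     # After the underlying issue is fixed, reset the counters so new errors stand out:
     btrfs device stats --reset /mnt/cache
     ```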
  9. Yep, the issues with NOCOW being used by default go well beyond unraid; thankfully unraid has reverted this default in newer versions, it seems. And it's not just libvirt: I've also noticed some distros (like Arch) use systemd-tmpfiles to set the +C attribute on common database platforms as well, such as mysql/mariadb and postgresql. It's nice to see bcachefs finally merged, and I hope to one day see unraid support it, since it potentially provides the same flexibility as btrfs without the caveats btrfs has. It too supports NOCOW, though from my testing before it was merged, it was a mkfs option rather than a file attribute, at least at the time I tried it, so in that regard NOCOW won't be an issue since you just wouldn't use it lol. It does, however, still need a lot more attention with regard to its raid functionality, and is missing features like scrub, rebalance, device monitoring, etc. When it sees improvements in these areas and proves itself to not be a data eater, I'd be happy to migrate over to it one day.
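     For reference, the tmpfiles 'h' entry type is what applies the attribute; a sketch of what that looks like, plus the manual equivalent:
     ```sh
     # A tmpfiles.d entry along these lines (shipped by some distros) sets NOCOW:
     #   h /var/lib/mysql - - - - +C
     # Doing it by hand; note +C only affects files created after the flag is set:
     chattr +C /var/lib/mysql
     lsattr -d /var/lib/mysql   # the 'C' flag should now be listed
     ```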
  10. I think it would be nice to have it swapped out for compress-force now, before the next stable release. The current option should just straight up be forgotten about, imo. It's kinda useless.
  11. Hmmm, interesting; then it should also have been faster, tbh. What I meant is that ZFS is just a better-performing filesystem overall; ZFS's ARC caching alone makes a big difference, and btrfs has a lot of performance downsides. That said, many people do see performance upsides with btrfs transparent compression and compress-force, since the CPU can often compress data faster than the block device can write it, making the device the bottleneck. I wonder if a lower compression level would help; if you could try zstd:2 or zstd:1, you might see much better performance. I'm not personally asking for compression level support in this release, as it seems to be planned for later. I would just prefer that compress be switched out for compress-force so that compression can actually work.
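     If you want a rough feel for the level cost before remounting, the zstd CLI has a built-in benchmark (not identical to the in-kernel implementation, but indicative; the sample path is an example):
     ```sh
     # Benchmark compression levels 1 through 3 against a sample file:
     zstd -b1 -e3 /mnt/cache/somefile
     # Then try a lower level on the mount itself:
     mount -o remount,compress-force=zstd:1 /mnt/cache
     ```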
  12. Unraid is open source as far as the storage stack is concerned:
     1. The UnRAID array uses a modified version of MD RAID, with all of its corresponding sources stored right on the USB stick, which you could use to compile your own kernel.
     2. Pools use Btrfs (and ZFS as of 6.12). A pool created on UnRAID is completely usable on other systems without any tinkering. XFS pools are single-disk and mount like any single-disk filesystem.
     3. Disks inside your UnRAID array are independently formatted, individual disks with a dedicated parity disk. Nothing is striped or stored in any obscure, proprietary format. Even without the custom patches, all the data on the disks is fully available and mountable on any standard linux distro.
     Additionally, docker containers use standard docker, which can be used on standard linux. You can quite literally take the variables you set in the UnRAID docker web UI, pass them to docker on any standard linux distro along with the same data/mounts, and everything works perfectly. VMs are the same: they use bog-standard KVM via Libvirt and QEMU, also readily available on most common distros. Rest assured, you're never locked in when it comes to your data with UnRAID. The true magic of UnRAID is the web UI and the ease it provides for managing and monitoring the array. For that, it's very much worth it; even with some of its shortcomings, imo it still gets the closest to what I want in a storage+compute OS for personal use.
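     To make the docker point concrete, a hypothetical example; the container, variables, ports, and mounts are illustrative, mirroring the fields the UnRAID web UI collects:
     ```sh
     docker run -d --name=jellyfin \
       -e PUID=99 -e PGID=100 \
       -p 8096:8096 \
       -v /mnt/pool/appdata/jellyfin:/config \
       -v /mnt/pool/media:/media \
       jellyfin/jellyfin
     ```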
  13. The reason the compress option was faster than compress-force is *because* it straight up just didn't compress your files for the most part. You might as well not use compression with the compress mount option, as its heuristic is known to be poor: many files have headers or first portions that aren't compressible, yet the whole file gets skipped based on the first 64KiB. Keep in mind, filesystem fragmentation can also play a part in these benchmarks; I'm not certain how fragmented your filesystem was, particularly its free space. If you did a full balance in particular, free space would be less fragmented, and that could improve the performance of compress-force. The referenced benchmark was performed on a ramdisk to eliminate that bottleneck. At any rate, the fact that ZFS is faster is just yet another example of the downsides of btrfs in my opinion; ZFS is always going to be faster. However, if people want compression, it's a given that it will have CPU overhead; the point is to trade CPU for disk space, so in my opinion this just further shows that the force option should be used. I've actually been doing this on UnRAID manually via a remount script with User Scripts, and I was hopeful the built-in compress option would let me eliminate that. Most in the btrfs community recommend compress-force over the standard compress option, especially with zstd. Like everything btrfs, the official documentation can be outdated.
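     For what it's worth, compacting mostly-empty data chunks is usually enough to reduce free-space fragmentation without a full balance (the usage threshold and path are example values):
     ```sh
     btrfs balance start -dusage=50 /mnt/cache   # rewrite data chunks under 50% full
     ```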
  14. The current unraid default when you enable compression on btrfs in 6.12 is compress=zstd (this can be confirmed with mtab). However, this is generally not recommended by most in the btrfs community; compress-force should be used instead. The reason is that with compress, if the first 64KiB of a file is not compressible, Btrfs will not even attempt to compress the rest of the data. Compress-force, meanwhile, will attempt to compress the entire file, even if the first 64KiB isn't compressible (which can definitely be the case for many files). Using compress-force won't have any noticeable impact on performance, and in no case will it use more disk space: zstd compression is blazing fast, and any data that isn't compressible will be discarded anyway. Doing this significantly increases compression ratios. Benchmarks from a few years ago show the default level 3 compression that Unraid is using achieving over 800MiB/s on a Xeon E3 1650, so only on NVMe drives would it potentially be a bottleneck, and that's a rather old CPU; newer CPUs will be even faster.
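     You can verify the current behavior and measure what compression actually achieves (compsize is a common third-party tool, not bundled with unraid as far as I know; paths are examples):
     ```sh
     # Confirm which compress option the pool is mounted with:
     grep compress /etc/mtab
     # Report real on-disk compression ratios for a share:
     compsize /mnt/cache/appdata
     ```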
  15. I'm aware of that, but what I mean is, I still want to create individual shares at the directory level (or more specifically, as subvolumes) for security reasons (each user gets their own share), but not have them run through shfs; in other words, *not* through /mnt/user. A disk share exposes the entire disk/pool, which is undesirable. My "hacky script" does exactly that: since I don't use the unraid array at all, I rewrite the /mnt/user path to /mnt/storage (the name of my btrfs pool) in Samba's config to bypass the shfs bottleneck. I also add shadow copy paths pointing at snapshots so I get shadow copy support. At the very least, it would be nice to bypass shfs when pools are used exclusively, to avoid the shfs bottleneck on faster networks.
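     The gist of the hack, heavily sketched (the pool name is from my setup; the Samba include file is my assumption about where unraid writes its share definitions):
     ```sh
     # After array start: point the generated share definitions at the pool directly,
     # bypassing the shfs layer at /mnt/user, then have Samba re-read its config.
     sed -i 's|/mnt/user/|/mnt/storage/|g' /etc/samba/smb-shares.conf
     smbcontrol all reload-config
     ```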