RaidRascal

  1. Curious if you've found any other solutions to this, or if you just stuck with hard links off?
  2. Are there any plans to address the default SPA slop space ZFS reserves? By default the setting is 5, which equals 1/32 of the pool size, and this affects even single-drive "pools" like those used in an Unraid array. I verified RC2 is using 5 as the default. Recent ZFS releases cap the amount of space this value can reserve at 128GB by default, but that is still a substantial amount of space IMO, especially spread over many drives in an array. It can easily be checked or modified temporarily, in real time, with:

     cat /sys/module/zfs/parameters/spa_slop_shift
     echo "some number" > /sys/module/zfs/parameters/spa_slop_shift

     The change takes effect immediately and the usable space updates in the webui nearly instantly (see the spa_slop_shift sketch below). Link to the official docs on this: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Module Parameters.html#spa-slop-shift Link to a much better explanation of how the setting works: https://unix.stackexchange.com/questions/582005/zpool-list-vs-zfs-list-why-free-space-is-10x-different/582391#582391 I bring this up because searching for this on the forums here brings up literally nothing, which leads me to believe most people who hop on the ZFS bandwagon won't know about it if it isn't exposed and explained through the web interface, and will be very confused why their drives are suddenly much smaller than before. Obviously bad things can happen in ZFS land if a filesystem is filled to 100% and there isn't enough slop space left to help out, so I know this is a tricky topic.
  3. I haven't seen anyone mention the default recordsize being used when creating a ZFS dataset in this release. It's using the default recordsize=128k, but IMO, based on the use case I suspect the vast majority of users here fall into, a default of recordsize=1M would be much better, especially if we're not going to get a way to specify that option during creation (see the recordsize sketch below). Many ZFS experts, including Jim Salter, recommend this recordsize for almost all use cases outside of VM and database storage at this point. See a bit of discussion on this from "mercenary_sysadmin" (Jim Salter) here:
  4. I've experienced the exact same behavior right out of the box on a fresh install of RC2 with all new disks. This is my first time using Unraid, having been attracted by the ZFS support. It happened with XFS, Btrfs, and ZFS array and pool devices, so the underlying filesystem didn't seem to have an impact. After a lot of searching and head scratching I was able to solve the issue by setting "Tunable (support Hard Links): No" under Settings > Global Share Settings. This definitely isn't ideal for a lot of people, especially those who use torrents, but for me it's acceptable for now. Hopefully someone official weighs in on this. I've verified NFS 4.2 is being used by the mounted shares (see the NFS version check below), and no amount of monkeying with the NFS mount or export settings had any impact on this issue.
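
A minimal sketch of checking and adjusting the slop reservation described in post 2, assuming a stock OpenZFS module; the persistence step is a standard-Linux assumption and may not survive a reboot on Unraid, which loads its OS into RAM:

     # Show the current slop shift; the default of 5 reserves 1/2^5 = 1/32 of the pool
     cat /sys/module/zfs/parameters/spa_slop_shift

     # Raise it to 6 at runtime, shrinking the reservation to 1/64; usable space
     # reported by zfs list (and the webui) updates almost immediately
     echo 6 > /sys/module/zfs/parameters/spa_slop_shift

     # Standard-Linux way to persist a module parameter across reboots
     # (assumption: may not apply to Unraid's RAM-based root filesystem)
     echo "options zfs spa_slop_shift=6" >> /etc/modprobe.d/zfs.conf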
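
As an illustration of the recordsize point in post 3, the property can be checked and changed per dataset with stock zfs commands; the dataset names below are placeholders, and a new recordsize only applies to data written after the change:

     # Show the current recordsize (datasets created by RC2 reportedly get the 128K default)
     zfs get recordsize tank/media

     # Switch to 1M records for large sequential files; existing blocks are not rewritten
     zfs set recordsize=1M tank/media

     # Or specify it up front when creating a new dataset
     zfs create -o recordsize=1M tank/newdataset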
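
For the NFS detail in post 4, this is roughly how the negotiated protocol version and hard-link behavior can be checked from a Linux client; the mount point below is illustrative:

     # List NFS mounts with their negotiated options; look for vers=4.2
     nfsstat -m

     # Or read the options straight from the kernel's mount table
     grep ' nfs' /proc/mounts

     # Quick hard-link test on the share; harmless if hard links work,
     # and it surfaces the failure quickly if they don't
     touch /mnt/remote/share/linktest && ln /mnt/remote/share/linktest /mnt/remote/share/linktest.hl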