I would personally ignore QEMU snapshots and just support snapshots in general via btrfs and ZFS, if ZFS is planned to be officially supported. (Performance warnings should be shown to anyone who snapshots VMs on btrfs, though, since we set NOCOW specifically to avoid copy-on-write overhead. Snapshotting a NOCOW file forces copy-on-write to happen anyway on the next overwrite of each shared extent, which can negate the NOCOW attribute entirely.)
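To illustrate the NOCOW interaction, here's a rough sketch (paths are just examples, not actual Unraid paths):

```shell
# NOCOW only takes effect on files created *after* the flag is set,
# so it's usually applied to the directory holding VM images:
chattr +C /mnt/cache/domains
lsattr -d /mnt/cache/domains    # the 'C' flag confirms it's set

# Taking a snapshot shares every extent between the original and the
# snapshot. The next write to a NOCOW file must then be CoW'd once
# anyway to preserve the snapshot, fragmenting the image over time:
btrfs subvolume snapshot /mnt/cache/domains /mnt/cache/domains_snap
```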
Snapshots are a core feature of both filesystems after all and work rather well, so it would be great to take advantage of them in a way that is convenient for the user. The UI could be updated to just create subvolumes when creating user shares, and there could even be a way to migrate old directory-based shares to subvolumes on supported filesystems. Adding the ability to schedule snapshots and rotate them would also be nice, and it has the added benefit of helping protect against ransomware attacks on your SMB share.
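A scheduled rotation could be as simple as something like this (share path, snapshot directory, and retention count are all assumptions for the sake of the sketch):

```shell
#!/bin/sh
# Hypothetical cron job: snapshot one share, keep the last 7 copies.
SHARE=/mnt/array/media
SNAPDIR=/mnt/array/.snapshots
KEEP=7

# Read-only (-r) snapshots are what help against ransomware: an SMB
# client that encrypts the live share can't touch these.
btrfs subvolume snapshot -r "$SHARE" "$SNAPDIR/media-$(date +%Y%m%d-%H%M%S)"

# Delete the oldest snapshots beyond the retention count.
ls -1d "$SNAPDIR"/media-* | head -n -"$KEEP" | while read -r old; do
    btrfs subvolume delete "$old"
done
```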
Now obviously there are a few caveats to this, but let me qualify that. Here's how I would approach it:
I'd start with support for the "multiple arrays", but much better and more configurable than it currently is. Btrfs (and ZFS) RAID could then become first-class citizens for users who choose either of them instead of regular Unraid parity. That's when snapshots could be easily supported, since they'd be an array-specific feature. I wouldn't focus on supporting them on cache pools or with regular Unraid parity, since that's convoluted to pull off (you'd have to snapshot each disk independently, and with any mixed-filesystem setup it wouldn't be possible at all).
This expanded "multiple arrays" functionality would therefore need to land before proceeding with full ZFS support.
Both btrfs and ZFS support self-healing via scrub, but in the current state of the array you can't utilize it if you go with btrfs (apart from metadata, which gets duplicated). Your only option is a cache pool (or possibly unassigned devices), which isn't great for general data storage: once the "cache pool" is acting as your array, you can't really use it as the write cache it was intended to be anymore. This is actually how I use Unraid right now, by the way. I simply assign a USB drive to the array just so I can start it and totally ignore its functionality, relying on btrfs RAID1 for my HDD array (I don't need the extra space parity would provide, and this protects my data much better than any parity RAID could, apart from ZFS RAIDZ).
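For reference, this is the self-healing mechanism I mean: with a redundant profile (btrfs RAID1 or ZFS mirror/RAIDZ), a scrub verifies every block against its checksum and rewrites corrupted blocks from the good copy. Mount point and pool name below are just placeholders:

```shell
# btrfs: kick off a scrub and check progress/repair counts
btrfs scrub start /mnt/array
btrfs scrub status /mnt/array

# ZFS equivalent
zpool scrub tank
zpool status tank
```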
With "multiple arrays", you'd get to choose Unraid parity (the default), btrfs, or potentially ZFS. The UX/UI design for btrfs could largely be reused for ZFS when it's added (minus the flexibility to add and remove devices), and you'd still be able to use cache pools for faster SSD writes while the hard disk array stays self-healing. It would also let users of ZFS or btrfs arrays easily use SSDs and TRIM in an array, which isn't really possible now without major downsides.
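On the TRIM point, both filesystems make this trivial once the SSDs are in a native array (again, mount point and pool name are placeholders):

```shell
# btrfs: one-off trim of a mounted filesystem (or schedule it)
fstrim -v /mnt/array

# ZFS: manual trim, or let the pool trim continuously
zpool trim tank
zpool set autotrim=on tank
```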
For snapshotting VMs themselves, this is the only thing I'd use reflink copies for, and it could be supported on cache pools. I wouldn't use them for anything more, as you potentially suggested: "snapshotting" an entire volume that way (e.g. on XFS, since it also supports reflinks) would not only be slow, but would also use *a lot* of metadata if you have many files, since it really is allocating all those inodes all over again. A real snapshot is basically a glorified IOU, a deferred reflink, and best suited to a single btrfs or ZFS array where it's easy to manage.
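What I mean by a reflink copy of a single VM image (hypothetical path; works on btrfs, and on XFS formatted with reflink support):

```shell
# Instant, space-free copy: both files share the same extents until
# either one is written to, at which point just the changed blocks
# get their own space.
cp --reflink=always /mnt/cache/domains/vm1.img /mnt/cache/domains/vm1-snap.img
```

That's cheap for one big image file, but doing it recursively for millions of small files is exactly the inode-allocation problem described above.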
Would really love to see this! Whatever you folks do, I'm sure I'll be excited to see it. Running the 6.10 RC right now.