ZFS filesystem support



Include ZFS among the filesystems supported by base unRAID.


It can be used with Docker containers for copy-on-write, as well as snapshot support and quotas.  It would also make a great cache-drive filesystem, since you can use RAID-Z protection on the cache pool.  It also supports filesystem compression...  Plus it is more mature than BTRFS.
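As a hedged sketch of what that could look like: the pool name "cache" and dataset name "cache/docker" below are hypothetical, not unRAID defaults, and the block is guarded so the commands only run where the ZFS tools and a pool named "cache" actually exist.

```shell
# Hypothetical sketch: a quota-limited, lz4-compressed dataset for Docker,
# plus an instant copy-on-write snapshot. Names are assumptions.
if command -v zfs >/dev/null 2>&1 && zpool list cache >/dev/null 2>&1; then
    zfs create -o quota=100G -o compression=lz4 cache/docker
    zfs snapshot cache/docker@clean-install    # point-in-time snapshot
    zfs list -t snapshot -r cache/docker       # list the snapshots we took
else
    echo "no zfs tools or no 'cache' pool; commands shown for illustration only"
fi
```

Quotas, compression, and snapshots are all per-dataset properties, which is what makes carving out a bounded Docker area this simple.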







Running ZFS on OpenMediaVault and it's fantastic.  Shame OMV's base OS is a little long in the tooth (Debian Wheezy). 


I also see FreeNAS is moving to version 10 with a total re-write, and moving to a sort of "AppCafe" system using iocage.  It's in early alpha - I've had a quick play and it looks very, very nice for a home media server.  The system requirements are harsh, though - 8GB RAM minimum.

  • 2 weeks later...

Anyone who uses it will likely need a LOT more memory: at least 16GB just to dedicate to this filesystem. If you want to run Docker containers and VMs you will need even more, like another 16GB. This puts the minimum at 32GB for a semi-useful server, with 64GB being more suitable.


Everyone asking for this support: are you aware of that, and does your system meet those minimum specs?


I have a few Thecus N7700 NAS boxes that have run ZFS flawlessly for 5 years.  They only have 1GB of RAM in them; admittedly, the array size is only 14TB.


That makes no sense to me based on documented ZFS requirements. The general rule of thumb is 1GB of RAM per TB of storage space.


Are you sure it's actually ZFS?


Isn't ECC RAM highly, highly recommended when using ZFS? I'd like to see ZFS as an option, but not as the one and only filesystem for unRAID (not that that's what you're suggesting; I'm just voicing my concern).


It's more that, to achieve true end-to-end fault tolerance, you must have ECC RAM.


The automatic checksumming that ZFS implements is still an enormous improvement in data integrity / bitrot detection, even without ECC RAM.


And no, the various checksum/snapshot projects that have been implemented for unRAID are not a comparable solution.  They're (perhaps) better than nothing, but they are still vastly inferior to having it built into the filesystem.

  • 1 year later...

Bumping this old topic to get it some more attention, especially since a branch with Nexenta-based TRIM support for ZFS is waiting to be accepted into the mainline.


I for one would love to see ZFS support replace BTRFS. Create an n-drive zpool based on the current cache-drive setup, and create specialized, quota-limited ZFS datasets for the Docker and libvirt configuration mount points. Yes, Docker supports ZFS.
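A hedged sketch of that idea, assuming nothing about real hardware: pool and dataset names are made up, file-backed vdevs stand in for the two cache SSDs so the sketch is non-destructive, and the whole thing is skipped unless the ZFS tools are present and we are root.

```shell
# Hypothetical sketch: a mirrored zpool standing in for the cache pool, with
# quota-limited datasets for the Docker and libvirt mount points.
if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    truncate -s 128M /tmp/vdev0 /tmp/vdev1     # toy stand-ins for two SSDs
    zpool create demo mirror /tmp/vdev0 /tmp/vdev1
    zfs create -o quota=50M demo/docker
    zfs create -o quota=10M demo/libvirt
    zpool status demo
    zpool destroy demo                          # tear down the toy pool
    rm -f /tmp/vdev0 /tmp/vdev1
else
    echo "skipping: needs root and the ZFS tools; shown for illustration only"
fi
```

With real disks you would pass device paths instead of the loop files, and the quotas would be sized for actual Docker image and libvirt storage.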


Isn't there some licensing issue (CDDL vs GPL) that prevents ZFS from being distributed with the Linux kernel? I'd rather stay with BTRFS, myself.


And from what I've seen, one only wants to stick with BTRFS on a system they're ready to nuke at a moment's notice.


How much truth is there to an article like this: https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/


I've seen many posts/articles claiming the same thing, which was the primary reason I didn't give FreeNAS a go back in the day. Are these concerns applicable to ZFS on unRAID?


Key points from that article:


* Smart, self-repairing file systems (like ZFS) absolutely require ECC RAM.  Dumber file systems, like all of the ones we use, just store your data and don't try to check and fix it.  Bad RAM is always a bad thing, but if your file system is actively checking and 'fixing' your stored data, then bad RAM can be catastrophic: the system can check your data, corrupt it in memory, detect that it's 'wrong', and write the corrupted version back to storage, thereby actively and progressively corrupting your data!  And with ZFS, if it corrupts the zpool so badly that it can't be mounted, you may have lost it ALL!  (What you're running ZFS on doesn't matter, whether FreeNAS, unRAID, or anything else.)


* ECC RAM is a good thing if you can afford it, no matter what file systems you use.  It corrects single bit errors automatically and allows the system to continue, and halts the system if it detects multi-bit errors, before data corruption can occur.  Of course, a system halt can result in other damage, but no ongoing damage.


* Bad RAM is very serious, whether it's ECC or not.  The sooner it is detected and replaced, the better.  With ECC RAM, it can cause system halts.  With non-ECC RAM, it can cause system crashes and silently corrupted data.  I'm going to begin recommending, especially for non-ECC users like most of us, that periodic memory testing be added to our scheduled maintenance.  Unfortunately, memory testing has to be done offline and requires a reboot, so it will be disruptive for some.  The unRAID boot menu has a Memtest, fine for older machines, but not as good as the updated MemTest86 from PassMark (which requires separate bootable media).  Both are free for personal use.  How often it should be done, I don't know, but it seems of the same relative value as parity checks, so monthly would be good; most of us are likely to do it less often, perhaps quarterly.  I'll add one last point: memory tests must be perfect; not one error is permissible.  It doesn't matter how infrequent an error is: if a very long memory test returns even one mistake, then one or more memory sticks need to be replaced.  Even one infrequently flaky memory bit can make the system unusable and untrustworthy.


* BACKUPS!  Backup, backup, backup!  There is no substitute for good versioned backups.  Regular backups are important, versioned backups are even better, as they ensure that you can recover from silent data corruption, corruption you may not have detected yet and is therefore being propagated to your backups.


* If you don't have ECC RAM, don't even consider file systems like ZFS, you are safer with file systems that don't 'scrub'.  This makes me somewhat concerned about BTRFS scrubs, especially attempts to automate BTRFS scrubbing.  If it only detects issues, that's one thing, and safer.  But if it attempts to automatically correct issues, then ECC RAM should be REQUIRED.  I'm afraid that for me, this may add one more to the list of BTRFS concerns.  I don't want to worry about whether a scrub could be damaging, instead of helping.


* ZFS has a terrific reputation, die-hard fans, but also some serious shortcomings.  It requires ECC RAM.  It requires MUCH more RAM available to it.  And it has absolutely NO recovery tools!  Supposedly, it's so good it doesn't need them!  Or so they say.  If you're ready to buy into that, more power to you, but those of us with years of computer experience will be highly skeptical.


These are just my thoughts; I welcome correction.


For the record: we here at Lime Technology are agnostic when it comes to file systems and we welcome discussion.  However this is a requirement: no fanboi flame wars.  You want to talk about technical advantages/disadvantages, go for it!  But if you just want to say, "I read somewhere xxx sucks, don't use it!" well there are plenty of other places to do that.


Here are the main reasons we went with btrfs for the cache pool (vs other multi-device capable file systems):


1. Docker support.  When we first integrated Docker it didn't offer zfs support, but it did offer btrfs; these days I believe Docker does support zfs.


2. The h/w requirements to smoothly run zfs are quite onerous for a consumer NAS, though that too is less important.


3. The licensing was/is still an issue and we didn't feel like paying our lawyer 4 figures to give us the definitive answer of whether we can bundle zfs with unRAID OS, and we didn't want to go the "Guide" route instructing our customers to download zfs themselves in order to use a fundamental feature of the product.


4. Questionable linux integration.  zfs remains a third party component which is not updated in step with the linux kernel, which also means it's not tested alongside other kernel components during ongoing kernel development.  We never want to get into a situation where we have to update the kernel to address a serious bug or security issue, but can't update because it breaks another key component.
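On the Docker point above: these days the zfs storage driver is selected in Docker's daemon.json. A minimal fragment, assuming /var/lib/docker already resides on a mounted zfs dataset (Docker will refuse to use the zfs driver otherwise):

```json
{
  "storage-driver": "zfs"
}
```

With this in place, Docker stores each image layer and container as its own zfs dataset, getting copy-on-write and snapshots from the filesystem itself.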


I guess there are other lesser reasons for using btrfs.  For example, I have personally studied quite a bit of the code base, and we are familiar with how it works and how to use the management tools.  The way we will probably approach this moving forward is to develop better plugin/snapin support in unRAID OS, making it easier for many kinds of third-party components to be integrated with unRAID OS.

  • 1 month later...

Just an idea for Tom - create a second build that includes ZFS support, and charge a premium for it, to cover the ZFS licensing, development, and extra support costs.  So in the future you might release both 6.4.0 and 6.4.0Z.  Because of your point 4, you may choose to release fewer Z releases than main releases.

  • 8 months later...

Apologies for the necro, but having seen that single-drive vDev expansion is coming for ZFS some time in the future, I figured I'd nudge this again for visibility.



For myself, I'd be happy just having ZFS for cache, not the main array. 


Some other users and I have been having issues possibly related to BTRFS cache pooling (see below; the issues seem to go away when a single XFS cache device is used), and I feel like having something that's been around longer and been put through its paces a little more might be a nice option. I understand that, for all the reasons Limetech listed above, it might still not be viable, but I'm putting it out there nonetheless.





  • 1 year later...

I know that this was requested a couple of years ago. 


I wanted to keep this alive, as I feel ZFS should be an option regardless of RAM requirements, as long as Limetech documents the requirements for using that filesystem type.


So a +1 from me, and I'm open to further discussion. I am also not suggesting it become the primary filesystem, just a base option included with the release.


5 hours ago, spm37 said:

Surely the company has matured enough to pay some money for licensing, even if they released the ZFS version with an opt-in additional fee... I would buy it.


I don't think it's a $$ issue.  As I understand it, it's more that ZFS's license is incompatible with the GPL, which means no binary distribution with Linux can happen.  Could be wrong though.


Oh ok... well that could be an issue.


Yes, I saw that, and I would love to be able to make use of this 256GB of ECC RAM for L2ARC and the Intel PCIe NVMe drive for ZIL... but I do love the unRAID product, and having GPU passthrough is just magic. So I will keep that hardware sitting there until, fingers crossed, @limetech finds a way :)



Edited by spm37

I enabled the plugin method, and while it is not perfect, I did hack it into the back end of unRAID, so shares work and Docker containers and VMs run off it now.

I would ultimately love native support, and I further advocate for it with some tests below.


The unRAID server is built as per the screenshots. Parity is currently turned off (with parity enabled, this test would ultimately be slower). The test creates a 20GB file on the NAS itself.



- 2x Hitachi 3TB SATA Drives

- 1x Samsung 850 Pro 512GB SSD for ZIL (Logs)

Compression is set to lz4 and sync is set to disabled.


ZFS Mount

[zfs] dd if=/dev/zero bs=1MB count=20000 of=20gbte
20000+0 records in
20000+0 records out
20000000000 bytes (20 GB, 19 GiB) copied, 8.42902 s, 2.5 GB/s

Unraid XFS Mount with Caching enabled (Raid 10 btrfs)

dd if=/dev/zero bs=1MB count=20000 of=20gbte
20000+0 records in
20000+0 records out
20000000000 bytes (20 GB, 19 GiB) copied, 41.0709 s, 487 MB/s

I know that in the real world, over a 1GbE network, this would not be an issue, as the network would be slower than the file creation itself. On 10GbE, though, it would have an impact.


Thought I would share :)









4 minutes ago, spm37 said:

ZFS Mount

[zfs] dd if=/dev/zero bs=1MB count=20000 of=20gbte
20000+0 records in
20000+0 records out
20000000000 bytes (20 GB, 19 GiB) copied, 8.42902 s, 2.5 GB/s

For this to have any meaning, you need to disable compression on the pool before testing; zeros are highly compressible.
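The point can be demonstrated without ZFS at all: /dev/zero is pathologically compressible, while /dev/urandom is not, so on an lz4-compressed dataset the zeros benchmark mostly measures the compressor, not the disks. A small sketch (the /tmp paths and 64MB size are arbitrary):

```shell
# Write 64 MB from a compressible and an incompressible source. On an
# lz4-compressed zfs dataset the zeros would occupy almost no on-disk
# blocks while the random data would occupy the full amount, which is why
# dd-from-/dev/zero numbers on such a dataset are misleading.
dd if=/dev/zero    of=/tmp/zeros.bin bs=1M count=64 2>/dev/null
dd if=/dev/urandom of=/tmp/rand.bin  bs=1M count=64 2>/dev/null
wc -c /tmp/zeros.bin /tmp/rand.bin   # both are 67108864 bytes logically
# On zfs, compare `du -h` (allocated) with `ls -l` (logical) to see the gap.
```

Re-running the benchmark with a /dev/urandom source, or with `zfs set compression=off` on the dataset, would give a much fairer comparison.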

