
ZFS filesystem support


Maybe retest with urandom as the source?

 

dd if=/dev/urandom iflag=fullblock bs=1MB count=20000 of=20gbte

 

The fullblock flag is needed; otherwise dd can write less data than requested, since a single read from /dev/urandom on Linux may return fewer bytes than the block size.
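A quick way to sanity-check the written size (a sketch assuming GNU coreutils dd and stat; the file name and sizes here are just examples, not the exact command from above):

```shell
# Write 1 GiB of random data; iflag=fullblock makes dd retry short
# reads so every 1 MiB block is completely filled before writing.
dd if=/dev/urandom iflag=fullblock bs=1M count=1024 of=random.bin

# Verify the exact byte count afterwards: 1024 * 1048576 = 1073741824.
stat -c %s random.bin
```

Without fullblock, the byte count reported by stat can come up short of bs * count, which would skew any throughput numbers calculated from it.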

hmmm, I run a ZFS mirror cluster on Ubuntu 18.04.  It really is an amazing file system for its features and ease of use.

 

You don't need lots of RAM or ECC RAM; that's a myth.

1. Plain ZFS can run on 1GB of RAM with any size array. If you have more RAM, it will use more for a cache.  Once you enable dedupe and L2ARC you need lots of RAM (the rule of thumb is about 1GB of RAM per TB of storage).

2. If you use ZFS for business reasons, then like any server you should use ECC RAM.  For home use, storing movies... you don't need ECC RAM. btrfs doesn't require ECC either.
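The 1GB-per-TB rule of thumb in point 1 is easy to sanity-check with a bit of shell arithmetic (the pool size below is a made-up example):

```shell
# Hypothetical pool: 24 TB of storage with dedupe enabled.
pool_tb=24

# Rule of thumb: ~1 GB of RAM per TB of storage once dedupe is on.
ram_gb=$(( pool_tb * 1 ))

echo "Suggested RAM for dedupe on a ${pool_tb} TB pool: ${ram_gb} GB"
```

Without dedupe, ZFS will happily run in far less; the ARC simply grows to use whatever spare RAM is available and shrinks under memory pressure.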

 

ZoL is ZFS on Linux; it functions in Linux like a native file system.  FreeBSD has switched to this codebase.

https://zfsonlinux.org/


Sure ... just handwave away legitimate concerns, especially the ECC RAM issue that was mentioned earlier.


Yes, unless bits rotting away keep you up all night, but then you'd already have ECC RAM in your unraid box.

14 minutes ago, rilles said:

Yes, unless bits rotting away keep you up all night, but then you'd already have ECC RAM in your unraid box.

bit rot on committed media is a different (almost non-existent) issue.

On 12/15/2016 at 9:09 AM, RobJ said:

* If you don't have ECC RAM, don't even consider file systems like ZFS, you are safer with file systems that don't 'scrub'.  This makes me somewhat concerned about BTRFS scrubs, especially attempts to automate BTRFS scrubbing.  If it only detects issues, that's one thing, and safer.  But if it attempts to automatically correct issues, then ECC RAM should be REQUIRED.  I'm afraid that for me, this may add one more to the list of BTRFS concerns.  I don't want to worry about whether a scrub could be damaging, instead of helping.

 

I suggest others re-read the above concern before handwaving it away as a non-issue. (Copied it because it was well expressed.)

49 minutes ago, BRiT said:

I suggest others re-read the above concern before handwaving it away as a non-issue. (Copied it because it was well expressed.)

From what I gather, most believe that's a non-issue for both zfs and btrfs. While possible, you'd need a hash collision for that to happen, and the chances of that are extremely low; see for example here:

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

 

I use and strongly recommend ECC for anyone who cares about data integrity, but you're still better protected against data corruption with zfs or btrfs without ECC than you would be on a non-checksummed filesystem.
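For a sense of scale on the collision point, here's a rough birthday-bound estimate for a 256-bit hash (the block count is a made-up example, and note that zfs's default checksum is fletcher4 rather than a 256-bit hash, so treat this purely as an order-of-magnitude sketch):

```shell
# Birthday bound: P(collision) ~ n^2 / 2^(b+1) for n items hashed
# with a b-bit hash. Here n = 2^32 (~4 billion blocks), b = 256.
awk 'BEGIN {
    n = 2^32
    p = n * n / 2^257
    printf "approx collision probability: %.2g\n", p
}'
```

The result is on the order of 10^-59, i.e. vanishingly small compared to the probability of the drives themselves silently corrupting data.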


RobJ has an opinion. But ECC is not REQUIRED for ZFS or btrfs, though it's probably strongly suggested for the more data-paranoid who may lose sleep over a possible "scrub of death".  But RAID is not a backup, so I'm not going to be upset if bits flip in my movie stash; all my actually important stuff is of course backed up somewhere else.

 


I don't personally use ZFS, but I'm going to put these links here as some of you guys might be interested.  Greg KH has given a statement on ZFS in the v5 kernels, which I suspect (although I have no absolute proof) is what Unraid v6.7.0 will be using.

 

https://www.phoronix.com/scan.php?page=news_item&px=ZFS-On-Linux-5.0-Problem

 

https://github.com/zfsonlinux/zfs/issues/8259

Edited by CHBMB

I don't care if it's ZFS or not.  What people like, though, is the self-repairing file system.  That's the point of this thread.  We can choose where we put it (cache or array), but an option would be great.  I don't know of a self-repairing option for any other file system, but it seems to me a company like Lime could put pressure on to get it onto the roadmap for one of the file systems, even if it's a 5-10 year plan.  Maybe it already is; I haven't actually looked.

 

ECC just comes down to whether you're a purist or not.  You get incremental improvements with various additions, and implications when you leave bits out; it's up to the end user to figure out if they're important.  ZFS does have a huge hardware penalty though; it's why I moved to Unraid - FreeNAS / TrueNAS and Proxmox performance was absolutely abysmal.  And all for the idea that your data is somehow randomly falling out of your drives while you sleep.  Absolutely not true.  But peace of mind does have a lot of value, doesn't it?


The pressure needs to be on Oracle (current owner of Sun Microsystems' IP) to get them to change the license terms on ZFS. Not until the terms are adjusted will you see the attitude towards it from Linux kernel developers change.

1 hour ago, Marshalleq said:

What people like though is the self-repairing file system.  That's the point of this thread.  We can choose where we put it (cache or array), but an option would be great.  I don't know of a self-repairing option for any other file-system

Self-repair wouldn't work on the array drives, since each disk is an independent filesystem; it would work on the cache pool for redundant configs. btrfs has the same self-repairing features as zfs, though there's no doubt zfs is more mature and stable.

