Soon™️ 6.12 Series



1 hour ago, JorgeB said:

You can use zfs in the array without that limitation, because every device is a single filesystem (without raidz, obviously). For pools nothing can be done, since it's a zfs limitation; btrfs is more flexible, though not as robust. Unfortunately you rarely can have everything.

If all the devices are their own filesystem, then how does parity work? If I'm not mistaken, I believe there has to be at least 4 drives in a pool in order for it to have redundancy. I could be wrong though. I appreciate your input and knowledge @JorgeB.

Link to comment
25 minutes ago, Tucubanito07 said:

If all the devices are their own filesystem, then how does parity work?

That's how it has always worked with the array: parity is just bits; it doesn't care about the filesystems.

https://wiki.unraid.net/Parity#How_parity_works
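
To make that concrete, here is a toy Python sketch of single (XOR) parity, my own illustration rather than Unraid's actual code: the parity block is just the XOR of the data blocks at the same offset, so it works no matter what filesystem each data disk uses.

from functools import reduce

def parity_block(blocks: list[bytes]) -> bytes:
    # XOR the byte at each offset across all disks to get the parity block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild_missing(surviving: list[bytes], parity: bytes) -> bytes:
    # A lost disk's block is recovered by XORing parity with all surviving disks.
    return parity_block(surviving + [parity])

# Three "disks" holding completely unrelated data (different filesystems, formats, anything).
d1, d2, d3 = b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"
p = parity_block([d1, d2, d3])

# Lose d2 and rebuild it from the rest plus parity.
assert rebuild_missing([d1, d3], p) == d2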

 

26 minutes ago, Tucubanito07 said:

If I'm not mistaken, I believe there has to be at least 4 drives in a pool in order for it to have redundancy.

2 devices minimum for a mirror, 3 for raidz; this is for pools, not the unRAID parity array.
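
For concreteness (my addition, not JorgeB's words), a minimal sketch of those minimum pool layouts using the stock zpool CLI driven from Python; the pool names and device paths are placeholders:

import subprocess

def zpool_create(name: str, *layout: str) -> None:
    # Create a pool with the given vdev layout via the standard OpenZFS CLI.
    subprocess.run(["zpool", "create", name, *layout], check=True)

# Minimum redundant layouts for a pool (again, not the unRAID parity array):
zpool_create("mirrorpool", "mirror", "/dev/sdb", "/dev/sdc")             # 2-disk mirror
zpool_create("raidzpool", "raidz1", "/dev/sdd", "/dev/sde", "/dev/sdf")  # 3-disk raidz1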

 

Link to comment

... I would like to request a new feature based on the promising updates:

the possibility to mount an (XFS) unRAID array on demand.

The background:

I have used Unraid + ZFS for a long time: an XFS pool which is always "on" and a ZFS pool which I mount via the ZFS plugin.

This way I make my backups to the ZFS pool whenever I plug it in via Thunderbolt.

With ZFS on board I would like to switch things around: the Unraid XFS pool as backup and ZFS as the always-on storage.

 

Thanks for reading & replying!

 

 

 

Link to comment

Exciting to hear some official details on this.

 

I will keep my media on a traditional unraid array but migrate important stuff to a zfs pool (likely a mirror of two 4TB hard drives). The important stuff would be documents, family photos, etc.

 

I will likely also migrate docker and vms to a zfs pool of two nvme disks.

 

Loving the potential here.

Link to comment
2 minutes ago, FlyingTexan said:

Can someone tell me, in layman's terms, the advantage of switching from BTRFS?

 

ZFS has been around much longer and is battle-tested in production at many businesses. I would trust my data long term with ZFS more than with BTRFS. Other than that, they are pretty similar in terms of feature set.

Link to comment

It depends on the setup..

 

Please chime in if I'm wrong, but I think general reads should be much faster, basically a bit less than double per drive. Writes should be somewhat faster too, but parity work is slow by comparison when it sits on a single drive rather than being striped, though still faster than the current unRAID array parity. (There's a rough capacity sketch after the table below.)

 

https://calomel.org/zfs_raid_speed_capacity.html

            ZFS Raid Speed Capacity and Performance Benchmarks
                   (speeds in megabytes per second)

 1x 4TB, single drive,          3.7 TB,  w=108MB/s , rw=50MB/s  , r=204MB/s 
 2x 4TB, mirror (raid1),        3.7 TB,  w=106MB/s , rw=50MB/s  , r=488MB/s 
 2x 4TB, stripe (raid0),        7.5 TB,  w=237MB/s , rw=73MB/s  , r=434MB/s 
 3x 4TB, mirror (raid1),        3.7 TB,  w=106MB/s , rw=49MB/s  , r=589MB/s 
 3x 4TB, stripe (raid0),       11.3 TB,  w=392MB/s , rw=86MB/s  , r=474MB/s 
 3x 4TB, raidz1 (raid5),        7.5 TB,  w=225MB/s , rw=56MB/s  , r=619MB/s 
 4x 4TB, 2 striped mirrors,     7.5 TB,  w=226MB/s , rw=53MB/s  , r=644MB/s 
 4x 4TB, raidz2 (raid6),        7.5 TB,  w=204MB/s , rw=54MB/s  , r=183MB/s 
 5x 4TB, raidz1 (raid5),       15.0 TB,  w=469MB/s , rw=79MB/s  , r=598MB/s 
 5x 4TB, raidz3 (raid7),        7.5 TB,  w=116MB/s , rw=45MB/s  , r=493MB/s 
 6x 4TB, 3 striped mirrors,    11.3 TB,  w=389MB/s , rw=60MB/s  , r=655MB/s 
 6x 4TB, raidz2 (raid6),       15.0 TB,  w=429MB/s , rw=71MB/s  , r=488MB/s 
10x 4TB, 2 striped 5x raidz,   30.1 TB,  w=675MB/s , rw=109MB/s , r=1012MB/s 
11x 4TB, raidz3 (raid7),       30.2 TB,  w=552MB/s , rw=103MB/s , r=963MB/s 
12x 4TB, 6 striped mirrors,    22.6 TB,  w=643MB/s , rw=83MB/s  , r=962MB/s 
12x 4TB, 2 striped 6x raidz2,  30.1 TB,  w=638MB/s , rw=105MB/s , r=990MB/s 
12x 4TB, raidz (raid5),        41.3 TB,  w=689MB/s , rw=118MB/s , r=993MB/s 
12x 4TB, raidz2 (raid6),       37.4 TB,  w=317MB/s , rw=98MB/s  , r=1065MB/s 
12x 4TB, raidz3 (raid7),       33.6 TB,  w=452MB/s , rw=105MB/s , r=840MB/s 
22x 4TB, 2 striped 11x raidz3, 60.4 TB,  w=567MB/s , rw=162MB/s , r=1139MB/s 
23x 4TB, raidz3 (raid7),       74.9 TB,  w=440MB/s , rw=157MB/s , r=1146MB/s
24x 4TB, 12 striped mirrors,   45.2 TB,  w=696MB/s , rw=144MB/s , r=898MB/s 
24x 4TB, raidz (raid5),        86.4 TB,  w=567MB/s , rw=198MB/s , r=1304MB/s 
24x 4TB, raidz2 (raid6),       82.0 TB,  w=434MB/s , rw=189MB/s , r=1063MB/s 
24x 4TB, raidz3 (raid7),       78.1 TB,  w=405MB/s , rw=180MB/s , r=1117MB/s 
24x 4TB, striped raid0,        90.4 TB,  w=692MB/s , rw=260MB/s , r=1377MB/s 
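
Not part of the benchmark, but a quick back-of-the-envelope Python sketch (mine, with assumed round numbers) showing how the usable-capacity column roughly follows from data drives times per-drive space, ignoring ZFS metadata and padding overhead:

def raidz_usable_tb(n_drives: int, parity: int, per_drive_tb: float = 3.7) -> float:
    # Rough estimate: data drives times per-drive usable space (a 4TB drive is ~3.7TB above).
    return (n_drives - parity) * per_drive_tb

# Compare against a few rows from the table.
for drives, parity, reported in [(5, 1, 15.0), (12, 2, 37.4), (24, 3, 78.1)]:
    estimate = raidz_usable_tb(drives, parity)
    print(f"{drives}x raidz{parity}: ~{estimate:.1f} TB estimated vs {reported} TB reported")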
Link to comment
9 minutes ago, dopeytree said:

It depends on the setup... general reads should be much faster, basically a bit less than double per drive.

That looks like a great reason to me to switch. I’m guessing this would require rewriting the entire array? Like formatting and starting all over?

Link to comment
2 hours ago, limetech said:

Also note that ZFS hard drive pools will require all devices in a pool to be 'spun up' during use.  IMO where ZFS will shine is in large flash-based pools (SSD, NVMe, etc).

Will it be possible to automatically spin down an entire raidz pool when inactive? Example use case would be a pool that is used for nightly backups. 

Link to comment
12 hours ago, brandon3055 said:

Will it be possible to automatically spin down an entire raidz pool when inactive?

Yep.

 

Re: performance, preliminary testing shows that when used as a single-device pool, writes to zfs are a little slower than to xfs and btrfs. For a large file transfer with hard disks it's around 5 to 10% slower depending on the disk used, but this still needs more testing. I also still need to test single-device flash-based pools and to compare performance across the various multi-device pool options between btrfs and zfs.

 

 

Link to comment
14 hours ago, limetech said:

The best way to think of this: anywhere you can select btrfs you can also select zfs, including 'zfs - encrypted', which does not use zfs built-in encryption but simply LUKS device encryption.

 

Can we get the zfs built-in encryption for zfs-only pools, or will these disks still be encrypted via LUKS?

 

As far as I know, the currently available btrfs pools use LUKS.

But the zfs built-in encryption would allow for different encryption settings for each dataset.
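
For reference, this is roughly what per-dataset native encryption looks like in OpenZFS itself (not something Unraid exposes, per limetech's note above); a minimal Python/subprocess sketch where the pool name, dataset names and key file path are made-up examples:

import subprocess

POOL = "tank"  # hypothetical pool name

def zfs(*args: str) -> None:
    # Thin wrapper around the OpenZFS `zfs` CLI, raising on failure.
    subprocess.run(["zfs", *args], check=True)

# One dataset encrypted with a passphrase read from a key file...
zfs("create",
    "-o", "encryption=aes-256-gcm",
    "-o", "keyformat=passphrase",
    "-o", "keylocation=file:///root/keys/documents.key",
    f"{POOL}/documents")

# ...and a sibling dataset left unencrypted: the settings are per dataset, not per pool.
zfs("create", f"{POOL}/media")

# Inspect what each dataset ended up with.
zfs("get", "encryption,keyformat,keylocation", f"{POOL}/documents", f"{POOL}/media")

With LUKS the whole device shares one key; native encryption lets each dataset carry its own settings, which is the flexibility being asked about here.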

Link to comment

I think it would be a bit much to expect ZFS feature parity with a project like TrueNAS, which has been ZFS-focused for years. Allowing ZFS in pools is a nice complement to unRAID. Snapshot/replication management would be nice, not only for ZFS but for BTRFS as well, but I think the existing LUKS encryption implementation is a suitable compromise for now.

Link to comment
