Soon™️ 6.12 Series


starbetrayer


16 minutes ago, JorgeB said:

Yes, as you currently can have xfs, btrfs and reiserfs in the array at the same time.



Holy s----- now I'm excited and pumped. Is it seamless, i.e. fully automated with no manual intervention? One last thing: now that ZFS is supported, is there still a hard drive limit in the array (assuming all drives will be ZFS), or is removing that limit in the works?

3 hours ago, jonathanselye said:

if not, will there be a tool/plugin to migrate existing xfs(which probably most unraid user are using) to zfs

I very much doubt it. In line with how Unraid handles existing file systems, I expect it will be the user's responsibility to move files to/from any ZFS-formatted drives. On that basis a full migration would follow the same steps as a reiserfs -> xfs migration does.
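For reference, the copy step of that migration would look roughly like this; a minimal sketch only, assuming disk1 still holds the xfs data and disk2 has already been reformatted to ZFS from the GUI:

```bash
# Copy everything across, preserving hard links, ACLs and extended
# attributes, and staying on the source filesystem.
rsync -avxHAX --progress /mnt/disk1/ /mnt/disk2/

# Spot-check the copy before reformatting the source disk.
diff -rq /mnt/disk1 /mnt/disk2
```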

On 1/4/2023 at 11:19 AM, JorgeB said:

When available zfs will be an option for array and pools.

 

10 hours ago, limetech said:

ZFS support: this will let you create a named pool similar to how you can create named btrfs pools today.

 

37 minutes ago, JorgeB said:

Yes, as you currently can have xfs, btrfs and reiserfs in the array at the same time.

 

Can we get some clarification there? @JorgeB's two posts suggest ZFS will be supported for the array and pools, @limetech's suggests pools only... unless that one is talking about ZFS pools in general rather than Unraid pools.

 

One way I'd see ZFS being a worthy addition is if it sat on top of Unraid's parity system, so you could have one or more ZFS pools as part of the array: say 2-, 3- or 4-drive striped ZFS pools (no protection of their own) made of array drives, relying on Unraid's parity to protect the drives instead of ZFS's. Less capacity wasted on protection, but with the read performance of a striped array. Of course writes would be limited by the parity drive(s), but since only 1-2 drives would be needed to protect many ZFS drives, SSDs could be used to alleviate that.
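To put rough numbers on the "less capacity wasted" claim (my own arithmetic, drive counts picked purely for illustration):

```bash
# 8 data drives as two 4-wide ZFS stripes inside the array, protected
# by a single Unraid parity drive: 1 of 9 drives lost to protection.
echo "scale=1; 100/9" | bc      # ~11.1 (% of raw capacity)

# The same 8 drives as two 4-wide raidz1 vdevs: 2 of 8 lost to parity.
echo "scale=1; 100*2/8" | bc    # 25.0 (% of raw capacity)
```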

 

Still hoping for multiple arrays, which would be even more useful in that case...

2 hours ago, Kilrah said:
Can we get some clarification there? @JorgeB's two posts suggest ZFS will be supported for the array and pools, @limetech's suggests pools only... unless that one is talking about ZFS pools in general rather than Unraid pools.

 

One way I'd see ZFS being a worthy addition is if it sat on top of Unraid's parity system, so you could have one or more ZFS pools as part of the array: say 2-, 3- or 4-drive striped ZFS pools (no protection of their own) made of array drives, relying on Unraid's parity to protect the drives instead of ZFS's. Less capacity wasted on protection, but with the read performance of a striped array. Of course writes would be limited by the parity drive(s), but since only 1-2 drives would be needed to protect many ZFS drives, SSDs could be used to alleviate that.

 

Still hoping for multiple arrays, which would be even more useful in that case...

 

Sorry, just so I understand (absolute newbie here):

If ZFS is not supported for arrays, there are no advantages over the status quo, correct? Apart from the fact that it is "officially" supported. You wouldn't benefit from the higher speeds and would only have the ZFS advantages (parity, snapshots, data integrity etc.) within the created pool, am I understanding correctly?

 

If that is the case, why are people excited about it? Isn't that what you could do with the plugin? Again: not judging, genuinely trying to understand it. Would be happy about an ELI5 😄

14 minutes ago, orhaN_utanG said:

 

Sorry, just so I understand (absolute newbie here):

If ZFS is not supported for arrays, there are no advantages over the status quo, correct? Apart from the fact that it is "officially" supported. You wouldn't benefit from the higher speeds and would only have the ZFS advantages (parity, snapshots, data integrity etc.) within the created pool, am I understanding correctly?

 

If that is the case, why are people excited about it? Isn't that what you could do with the plugin? Again: not judging, genuinely trying to understand it. Would be happy about an ELI5 😄

This is what I took away from this as well. I'm also assuming that if ZFS is pools-only, we would still need to have disks in the standard array?

48 minutes ago, orhaN_utanG said:

If ZFS is not supported for arrays, there are no advantages over the status quo, correct? Apart from the fact that it is "officially" supported. You wouldn't benefit from the higher speeds and would only have the ZFS advantages (parity, snapshots, data integrity etc.) within the created pool, am I understanding correctly?

What I would expect is that ZFS in the main array would be supported in the same way btrfs is supported today: in other words, each ZFS array drive would be a single-device ZFS file system. To get maximum performance you would still need ZFS pools outside the parity-protected array. The big advantages of integration are the ability to participate in User Shares, and perhaps better stability than btrfs.

 

I could be wrong though - just going by what seems a logical first step into ZFS support.
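To make "single-device ZFS file system" concrete in plain zfs terms (illustrative only; the pool name and device path are made up, and Unraid would presumably handle this via the GUI rather than the CLI):

```bash
# One disk, one pool: no raidz or mirror at the ZFS level, so any
# redundancy would come from Unraid's parity drive instead.
zpool create -o ashift=12 disk1 /dev/sdb1
zpool status disk1    # shows a single-vdev pool, no ZFS-level redundancy
```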

1 minute ago, Revan335 said:

Is this (ZFS) for cache drives / the cache pool too?

I would be very surprised if it were not, as an alternative to btrfs for multi-drive pools.

2 minutes ago, Revan335 said:

Will it include a migration from btrfs?

I doubt any tooling would be provided for this. I would expect it to be up to the user to format a pool as ZFS and then copy the data over, just as is currently the case for btrfs.

20 minutes ago, limetech said:

Shares will have the concept of "primary" storage and "cache" storage. 

Any traction to the concept of a share having a

"new files written here pool"

"overflow when new file destination is full pool" (optional)

"mover enabled yes / no" (optional rules of when to invoke) (optional third pool as yes destination)

 

Instead of the cache yes/no/only/preferred setting?

 

This would accomplish a couple of things: first, it would clarify the historically muddy yes/no/only/preferred setting; second, it would more easily support pool-to-pool transfers instead of being limited to the primary / cache structure.
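Purely as a sketch of the idea, borrowing the shell-variable format today's /boot/config/shares/&lt;share&gt;.cfg files use; none of these keys exist in Unraid, they just restate the proposal above in config form:

```bash
# Hypothetical keys only - invented names, not real Unraid settings.
shareWritePool="fast_pool"      # "new files written here" pool
shareOverflowPool="big_pool"    # optional overflow when the write pool is full
shareMover="yes"                # optional rules for when to invoke the mover
shareMoverDest="archive_pool"   # optional third pool as the mover "yes" destination
```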

 

23 minutes ago, JonathanM said:

Any traction to the concept of a share having a

"new files written here pool"

"overflow when new file destination is full pool" (optional)

"mover enabled yes / no" (optional rules of when to invoke) (optional third pool as yes destination)

 

Instead of the cache yes/no/only/preferred setting?

 

This would accomplish a couple of things: first, it would clarify the historically muddy yes/no/only/preferred setting; second, it would more easily support pool-to-pool transfers instead of being limited to the primary / cache structure.

 

Sure, open to design ideas.


Very exciting. Congrats team.

 

For me speed is the no. 1 priority, as I'd like to be able to edit video on the system in future (up to 500Mb/s). Hoping to convert to raidz1 with 4x12TB so this becomes the main system, with a second pool added in future.

10Gb network.

 

SSD cache pools remain for appdata & VMs, although they could also become single-drive ZFS if that enables snapshots & frees up a mirror.

If that were to happen, would they be able to back up to the raidz1 pool, or would that need to be a custom script?
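If it did come down to a custom script, the core of it would be standard zfs send/recv; a minimal sketch, assuming the SSD pool is named "cache" with an "appdata" dataset and the raidz1 pool is named "tank":

```bash
# Snapshot the SSD dataset, then replicate it to the raidz1 pool.
zfs snapshot cache/appdata@nightly
zfs send cache/appdata@nightly | zfs recv -F tank/backups/appdata

# Later runs can send only the delta between two snapshots:
zfs send -i cache/appdata@prev cache/appdata@nightly | zfs recv tank/backups/appdata
```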

6 minutes ago, Tucubanito07 said:

Would you still be able to use different sizes of drives with the ZFS implementation?

Depends on what you are asking. Each individual unRAID array disk can be a different size, and a different format if desired. Those would each be single-volume ZFS. A pool would need identical-size disks to take full advantage of the ZFS-specific RAID functions. I'm not familiar enough with ZFS to comment on whether it's a good idea to mix sizes in a ZFS multi-disk volume, but I suspect not.

 

ZFS is a file system, much like btrfs is a file system. How the file system deals with multiple member disks is unique to the file system, not Unraid. Currently the unRAID parity array only supports a single disk per slot, with whichever file system you want on it. Pools can have multiple disks, handled by their specific file system.

 

Clear as mud?

1 hour ago, JonathanM said:

Depends on what you are asking. Each individual unRAID array disk can be a different size, and a different format if desired. Those would each be single-volume ZFS. A pool would need identical-size disks to take full advantage of the ZFS-specific RAID functions. I'm not familiar enough with ZFS to comment on whether it's a good idea to mix sizes in a ZFS multi-disk volume, but I suspect not.

 

ZFS is a file system, much like btrfs is a file system. How the file system deals with multiple member disks is unique to the file system, not Unraid. Currently the unRAID parity array only supports a single disk per slot, with whichever file system you want on it. Pools can have multiple disks, handled by their specific file system.

 

Clear as mud?

I know it's not an Unraid thing but a ZFS thing for multiple drives in a pool. However, I believe they were working on something for that situation. Regardless, if we could create a 10TB ZFS pool and have the rest of the drive space be btrfs-like, so the unused space of a drive could still be used, that would be amazing.

6 hours ago, dopeytree said:

Very exciting. Congrats team.

 

For me speed is the no. 1 priority, as I'd like to be able to edit video on the system in future (up to 500Mb/s). Hoping to convert to raidz1 with 4x12TB so this becomes the main system, with a second pool added in future.

10Gb network.

 

SSD cache pools remain for appdata & VMs, although they could also become single-drive ZFS if that enables snapshots & frees up a mirror.

If that were to happen, would they be able to back up to the raidz1 pool, or would that need to be a custom script?

 


Is there a reason why you don't simply use an SSD / NVMe for your working directory when you're editing?

I have a 1TB Gen4 NVMe that I use exclusively for photo / video editing. It's mounted as a single-drive cache pool, with my /workingdata share set to cache "only" with that NVMe selected. It has zero issues saturating the 2x 10GbE connection into my Unraid box. When I'm done with the edits, I move it to the array (which actually lands it on another NVMe cache pool first; the mover then takes it to the array while I'm sleeping).

Mechanical-disk raidz will never touch the throughput and scrub speed that modern NVMe can do. And they're cheap! I just picked up another 1TB Intel (granted, PCIe 3, not 4) for $69 at Micro Center the other day.
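For anyone wanting to replicate that setup, the share side of it lives in /boot/config/shares/workingdata.cfg; a sketch only — the key names are to the best of my knowledge and may differ by version, and the pool name is made up:

```bash
# Shell-variable format used by Unraid's share .cfg files; "only"
# keeps files on the selected pool so the mover leaves them alone.
shareUseCache="only"
shareCachePool="nvme"    # hypothetical name of the Gen4 NVMe pool
```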


That's good to know. Also that's a bargain. Prices in the UK are a bit crazy.

 

I am just in the process of removing some cache mirrors to free up 2x 2TB drives. The main reason is that the files will be on the array to start with, but if they are in a ZFS pool that will speed it all up. I was going to have a play with RAID5/6, but I don't know if that has been fixed by the btrfs team. I fly drones so we don't edit that often, but when we do it's fairly large files.

Also very excited for ZFS snapshots.

I guess I may opt to keep Plex on a single drive outside the array.

 

How much RAM are you guys using when testing ZFS out?

6 hours ago, Tucubanito07 said:

I know it's not an Unraid thing but a ZFS thing for multiple drives in a pool. However, I believe they were working on something for that situation. Regardless, if we could create a 10TB ZFS pool and have the rest of the drive space be btrfs-like, so the unused space of a drive could still be used, that would be amazing.

You can use different-size devices to create a ZFS raidz, but the extra space will go unused by ZFS, and it cannot be used by a different filesystem. Having the same disks in multiple pools isn't impossible, but think of the logistics of making that work with the GUI.
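As a concrete example of the wasted space (my own numbers, purely illustrative):

```bash
# raidz1 over 10TB + 10TB + 14TB drives: zfs sizes every member to the
# smallest device, so the extra 4TB on the big drive is stranded.
smallest=10; drives=3; largest=14
echo "usable: $(( smallest * (drives - 1) )) TB"    # 20 TB for data
echo "stranded: $(( largest - smallest )) TB"       # 4 TB unusable
```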

3 hours ago, JorgeB said:

You can use different-size devices to create a ZFS raidz, but the extra space will go unused by ZFS, and it cannot be used by a different filesystem. Having the same disks in multiple pools isn't impossible, but think of the logistics of making that work with the GUI.

Yeah, this is why I don't use ZFS and use Unraid instead. That limitation is what's stopping me from using it.

17 minutes ago, Tucubanito07 said:

Yeah, this is why I don't use ZFS and use Unraid instead. That limitation is what's stopping me from using it.

You can use ZFS in the array without that limitation, because every device is a single filesystem (without raidz, obviously). For pools nothing can be done, since it's a ZFS limitation; btrfs is more flexible, though not as robust. Unfortunately you rarely can have everything.

