Feature Request Poll for 6.11


What do you want to see in Unraid OS 6.11?  

1667 members have voted



On 10/3/2021 at 10:32 PM, JorgeB said:

I've been using btrfs on all my servers for about five years, including all array drives. On my two main servers I take about ten daily incremental snapshots, including (but not limited to) all my VMs, and I've never had an issue.

 

Which plugin do you use to take the snapshots?

It's much more comfortable than taking snapshots on Proxmox...
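(For what it's worth, scheduled snapshots like JorgeB describes don't strictly need a plugin; here's a minimal sketch of a cron-able one-liner, assuming each VM lives in its own btrfs subvolume — all paths are illustrative:)

# Read-only, dated snapshot of one VM's subvolume (btrfs-progs)
btrfs subvolume snapshot -r /mnt/disk1/domains/vm1 /mnt/disk1/snaps/vm1-$(date +%Y%m%d)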

  • 4 months later...

ZFS and it’s not even close.

 

Implementing ZFS will most likely require adjustments to the current array setup, and those adjustments would likely be made with a second array in mind... Cheers to the team for their hard work and for seeking community input!

Edited by danofun
Formatting
  • 3 months later...
  • 4 weeks later...

1. VM snapshots

2. ZFS Support

 

I don't care about ARM VMs or multiple arrays.


On 8/24/2021 at 3:10 PM, dada051 said:

And this is why Limetech should prefer multiple arrays over ZFS

 

Been there, tried that. It's a pain in the a**... I want native ZFS support.

Edited by HardwareHarry
  • 2 weeks later...
  • 1 month later...
On 7/21/2022 at 12:25 AM, smartkid808 said:

I'd love multiple pools. I am only one drive away from maxing out at 30 total drives 😞

 

It does seem a bit odd to sell an OS tier offering "unlimited drives" when in reality it's limited to 30.

Edited by dopeytree
On 8/7/2021 at 2:54 PM, jonp said:

The reason it isn't on this poll's list might not be so obvious. As it stands today, there are really three ways to do snapshots on Unraid (maybe more ;-). One is btrfs snapshots at the filesystem layer. Another is simple reflink copies, which also rely on btrfs. A third is the tools built into QEMU. Each method has its pros and cons.

 

The QEMU method is universal: it works on every filesystem we support because it isn't filesystem dependent. Unfortunately, it's also incredibly slow.

 

Btrfs snapshots are really great, but you have to define subvolumes first, and the underlying storage must be formatted with btrfs.

 

Reflink copies are really easy because they are essentially a smart copy (just add --reflink to any cp command). The source and destination still have to be on btrfs, but it's super fast, storage-efficient, and doesn't even require subvolumes.

 

And with the potential for ZFS, we have yet another option as it too supports snapshots!

 

There are other challenges with snapshots as well, so it's a tougher nut to crack than some other features. Doesn't mean it's not on the roadmap ;-)

There may be multiple ways to take them, but none of this is baked into the GUI as an easy, painless workflow.
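To make the CLI routes concrete, here's a hedged sketch of two of them (file names and paths are examples, not an official workflow):

# Reflink copy: near-instant, space-efficient clone (btrfs, or XFS with reflinks)
cp --reflink=always /mnt/cache/domains/vm1/vdisk1.img /mnt/cache/domains/vm1/vdisk1.snap

# QEMU internal snapshot: works regardless of filesystem, but qcow2 images only, and slow
qemu-img snapshot -c pre-upgrade /mnt/cache/domains/vm1/vdisk1.qcow2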

  • 3 weeks later...
On 8/7/2021 at 4:54 PM, jonp said:

The reason it isn't on this poll's list might not be so obvious. As it stands today, there are really three ways to do snapshots on Unraid (maybe more ;-)... [quoted in full above]

I think it's important to note that XFS, which is what most Unraid users are using on their array disks, also has native reflink support. So if you want a filesystem-agnostic way of doing "snapshots" this way, look no further than reflinks. They work with both btrfs and XFS, and reflinking is indeed now the default for coreutils cp.

 

Old XFS formats do not support reflinks, but support can be enabled on any XFS filesystem formatted with the "newer" v5 on-disk format (the one with CRCs), which has been around since kernel 3.15. Reflinks have been enabled by default since kernel 5.1. I'd be willing to bet a large number of Unraid users today can support this (if not a majority).
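A quick way to check and enable this, as a hedged sketch (device names and mount paths are placeholders):

# Look for reflink=1 in the output for an existing array disk
xfs_info /mnt/disk1 | grep -o 'reflink=[01]'

# New disks can be formatted with it explicitly (v5 format; destroys existing data!)
mkfs.xfs -m reflink=1 /dev/sdX1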

2 minutes ago, JSE said:

I'd be willing to bet a large number of unraid users today can support this (if not a majority). 

I'd take that bet. There was a raft of posts a couple of years ago about the "new" format taking up a bunch more space than prior versions, so the dividing date line is pretty clear, and Unraid users tend to set it and forget it. It's not very likely that a majority of drives are on the new format yet, since Unraid expands drives during in-place replacement upgrades, keeping the existing format. Only newly formatted drives can take advantage, and since a bunch of users still have ReiserFS drives, I'm betting newly formatted drives are in the minority.

  • 3 weeks later...

ZFS - I'd love a 4-disk setup in raidz1/2, but I wouldn't want it as a pool in the way cache currently works... I'd want it to be a standalone ZFS array or a separate pool, i.e. one that doesn't copy back to the main array. Although I can see how that would be useful for simple backups or upgrading disks, so perhaps that is how it will be integrated, to make it easiest for the user.

Edited by dopeytree
1 hour ago, dopeytree said:

wouldn't want it as a pool as in the current way cache works.

 

Currently, pools can be set one of four ways for each share:

 

pool:YES — new files targeted to the share are written to the pool; mover moves them to the parity array.

pool:ONLY — new files targeted to the share are written to the pool; mover leaves them there.

pool:NO — new files targeted to the share are written to the array; mover leaves them there.

pool:PREFER — new files targeted to the share are written to the pool, overflowing to the array; mover moves any array content back to the pool if there is space.

 

If the share is designated pool:ONLY then new files go to the pool and stay there, just like you are asking.

 

I'm unsure what you are saying about not wanting it the way it works currently.
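(For reference, a hedged sketch of how such a setting ends up on the flash drive — the exact keys may vary by Unraid release, so treat this as illustrative, not authoritative:)

# /boot/config/shares/media.cfg (illustrative excerpt)
shareUseCache="only"    # yes | only | no | prefer
shareCachePool="cache"  # which pool the share targets (6.9+, multiple pools)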


It comes down to why one is running ZFS. For a new setup, the data protection on write is nice, so ideally the actual array would be running ZFS too (if the user's drives support it, i.e. they are the same size, etc.).

Or

there would need to be a way to move files between a ZFS pool and the cache, and to simply not run an XFS array at all.

 

Can mover do pool to pool2? At the moment I don't think there is a way to write new folders to the cache and then have mover move them to a ZFS pool.

Edited by dopeytree
4 minutes ago, dopeytree said:

 

there is a way to move files between a ZFS pool and Cache and just not run an XFS array.

I think what you are asking for is a pool-to-pool mover, in addition to pool-to-array and array-to-pool.

 

Personally I've wanted pool to pool mover ever since multiple pool support was implemented.

 

Perhaps after the addition of native ZFS pool support we might get our wish.
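In the meantime, a pool-to-pool move can be scripted by hand; a minimal sketch via cron or the User Scripts plugin, with hypothetical pool names cache and tank:

# Move a share's contents from one pool to another, preserving attributes
rsync -aX --remove-source-files /mnt/cache/media/ /mnt/tank/media/
# Clean up the empty directories rsync leaves behind on the source
find /mnt/cache/media -type d -empty -delete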

1 minute ago, JonathanM said:

I think what you are asking for is a pool-to-pool mover, in addition to pool-to-array and array-to-pool.

 

Personally I've wanted pool to pool mover ever since multiple pool support was implemented.

 

Perhaps after the addition of native ZFS pool support we might get our wish.

 

Spot on. I think it would be the easiest way to implement and manage ZFS.


This would probably be my build: two raidz1 pools, each with 3x 12TB drives, each turning 36TB raw into 24TB usable with parity protection.

https://raidcalculators.com/zfs-raidz-capacity.php

Or I could get rid of the normal XFS array and combine the other two disks, but according to this: https://calomel.org/zfs_raid_speed_capacity.html speed seems to drop off when you move up to raidz2.
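For reference, a hedged sketch of that layout with stock ZFS tooling (device names are placeholders, and Unraid's eventual integration may look nothing like this):

# Two separate pools, each raidz1 over three 12TB drives:
# 3 x 12TB raw = 36TB, minus one drive of parity = ~24TB usable per pool
zpool create tank1 raidz1 /dev/sdb /dev/sdc /dev/sdd
zpool create tank2 raidz1 /dev/sde /dev/sdf /dev/sdg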

 

At present I only have 2x 12TB drives in use, with two more on the shelf waiting to see if ZFS comes along: one parity and one data disk.

The four SSDs are in and working great as docker and VM cache. The SATA SSDs are for Plex media.

I also have some odd drives around that I will build in.

 

Perhaps you will be able to make a ZFS pool from mixed-size drives that only utilises the smallest size, so if you put in a 12TB, a 10TB, and an 8TB drive, the pool would act as if they were all 8TB drives.

 

I prefer the user interface layout and docker system on unraid to TrueNAS.

 

[Screenshot 2022-09-30: current drive layout]

 

Edited by dopeytree
  • 2 weeks later...

It would be great to have mover settings per share.


So, for example, I want my backups share to move to the array as soon as possible (but written to cache first), while my Plex downloads hang around on the cache for 30 days. At the moment I have not found a way to do this.
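Until that exists natively, the 30-day behaviour can be roughly approximated with a script; a hedged sketch, assuming a share named plex-downloads (/mnt/user0 is the array-only view of a user share):

# Move cache files untouched for 30+ days to the array side of the share
cd /mnt/cache/plex-downloads
find . -type f -mtime +30 -exec cp -a --parents {} /mnt/user0/plex-downloads/ \; -delete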

 

Also we need some kind of simple GUI for mover progress 🙂

Edited by dopeytree
  • 2 months later...