Feature Request Poll for 6.11


What do you want to see in Unraid OS 6.11?  

1667 members have voted


Recommended Posts

It's well on its way to coming out of alpha, possibly even in the next major release - the current tracker for it can be found here. There's a big push to try to get it ready for the next summit in November, but I'm not sure it'll get there... We'll see though, it's exciting!

 

Even when it does though, there are a good many reasons not to use it; since existing stripes aren't recalculated to the new stripe width, there's some capacity-loss overhead, among other things. I'd be fine with Limetech simply not supporting modification of the ZFS pool after initial creation, or, if they want to enable it via the UI, just putting a great big disclaimer there that says something like "At your own risk (etc etc)".

 

The biggest benefit to having ZFS support built in, for me at least, is that Unraid could then more easily utilize ZFS pools within the rest of the ecosystem - there's enough complexity, thanks to all the various options available within the filesystem, that I could totally understand if the initial implementation solely meant that the system's ZFS information was better represented in the UI. Even if it meant something like "if you want a ZFS pool, you have to create it from the command line, then select 'import pool' to do so".
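
For anyone wondering what that 'create from the command line, then import' workflow might look like, a rough sketch (the pool name, device paths, and mountpoint here are just made-up examples):

  # create a mirrored pool from two disks
  zpool create -o ashift=12 -m /mnt/tank tank mirror /dev/sdb /dev/sdc
  # export it, so a hypothetical 'import pool' button (or plain zpool import today) can pick it up
  zpool export tank
  zpool import tank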

 

Again, given the scope of work, it'd also be understandable to implement UI features in phases, something like:

  • Phase 1 - Basic 'allow creation of a ZFS pool' in the UI (a single fileset created with it and mounted as a cache pool to house user data; additional filesets can only be created via CLI - they just show up as folders inside; zvols the same, and would still require manual formatting from the CLI as well). This is enough complexity for one release IMO
  • Phase 2 - Creation of additional filesets via the UI - The UI design for this is pretty big, so having it in its own release phase would make sense to me. It means we have to have a way to represent the various filesets all in one place (maybe as a dir tree?), then be able to select a fileset and see its info properly (zfs get pool/fileset)
  • Phase 3 - UI button for on-demand snapshot creation, plus the ability to list/restore/delete snapshots for a given fileset - This is probably the biggest one yet as far as complexity... For that reason, I'd say forgo letting users browse snapshot contents in the UI until a later time.
  • (and so on - a rough sketch of the underlying zfs commands follows below)
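
For reference, the CLI that each phase would essentially be putting a face on is roughly this (pool/fileset names are made-up examples):

  # Phase 2 territory - create and inspect filesets, plus a zvol that still needs formatting by hand
  zfs create tank/appdata
  zfs get all tank/appdata
  zfs create -V 50G tank/vmdisk1
  # Phase 3 territory - on-demand snapshots: create, list, restore, delete
  zfs snapshot tank/appdata@before-upgrade
  zfs list -t snapshot -r tank/appdata
  zfs rollback tank/appdata@before-upgrade
  zfs destroy tank/appdata@before-upgrade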

Anyway, how it's broken down doesn't really matter imo, just that doing so makes the overall task far less monumental an undertaking. For a small team, even one as dedicated as Limetech, trying to do it all in one whack would be... rough lol. If doing this as part of a major release (e.g. 7.0), maybe including 2 of the phases would make sense I guess? 

 

I'm super excited for this; I've daydreamed and brainstormed how it might someday be done (could you tell? 😅), to the point I started drawing it out on a notepad a few times lol. Happy days!!

  • Like 1
  • Thanks 1
Link to comment
20 minutes ago, BVD said:

It's well on its way to coming out of alpha, possibly even in the next major release - the current tracker for it can be found here. There's a big push to try to get it ready for the next summit in November, but I'm not sure it'll get there... We'll see though, it's exciting!

 

Even when it does though, there are a good many reasons not to use it; since existing stripes aren't recalculated to the new stripe width, there's some capacity-loss overhead, among other things. I'd be fine with Limetech simply not supporting modification of the ZFS pool after initial creation, or, if they want to enable it via the UI, just putting a great big disclaimer there that says something like "At your own risk (etc etc)".

 

The biggest benefit to having ZFS support built in, for me at least, is that Unraid could then more easily utilize ZFS pools within the rest of the ecosystem - there's enough complexity, thanks to all the various options available within the filesystem, that I could totally understand if the initial implementation solely meant that the system's ZFS information was better represented in the UI. Even if it meant something like "if you want a ZFS pool, you have to create it from the command line, then select 'import pool' to do so".

 

Again, given the scope of work, it'd also be understandable to implement UI features in phases, something like:

  • Phase 1 - Basic 'allow creation of a ZFS pool' in the UI (a single fileset created with it and mounted as a cache pool to house user data; additional filesets can only be created via CLI - they just show up as folders inside; zvols the same, and would still require manual formatting from the CLI as well). This is enough complexity for one release IMO
  • Phase 2 - Creation of additional filesets via the UI - The UI design for this is pretty big, so having it in its own release phase would make sense to me. It means we have to have a way to represent the various filesets all in one place (maybe as a dir tree?), then be able to select a fileset and see its info properly (zfs get pool/fileset)
  • Phase 3 - UI button for on-demand snapshot creation, plus the ability to list/restore/delete snapshots for a given fileset - This is probably the biggest one yet as far as complexity... For that reason, I'd say forgo letting users browse snapshot contents in the UI until a later time.
  • (and so on)

Anyway, how it's broken down doesn't really matter imo, just that doing so makes the overall task far less monumental an undertaking. For a small team, even one as dedicated as Limetech, trying to do it all in one whack would be... rough lol. If doing this as part of a major release (e.g. 7.0), maybe including 2 of the phases would make sense I guess? 

 

I'm super excited for this; I've daydreamed and brainstormed how it might someday be done (could you tell? 😅), to the point I started drawing it out on a notepad a few times lol. Happy days!!

Yes, I agree, doing it all in one go is a hard task, as ZFS isn't that easy and is pretty complex.

 

Like, next release, introduce the ZFS pool: users can make a new "array" pool the normal ZFS way, with no ability to add single HDDs to it yet.

Then work on it over new releases, making ZFS support more stable and more capable, so that eventually you can make a ZFS array that does what the Unraid array does now, but with the ZFS features.

 

That would be almost the best of both worlds, Unraid and ZFS.

Link to comment
36 minutes ago, dada051 said:

Sure. But it doesn't have the same impact on ZFS as on another filesystem (XFS, for example).

 

I don't understand why it's not really needed for a normal PC. My normal PC uses and manipulates data from and to my Unraid, so if my Unraid needs ECC, my PC should too, no? 

 

I don't want to get into the whole zealotry that surrounds the 'ECC vs non-ECC' debate (it really is nuts how strong some people's beliefs seem to be on the subject), but I'll try to explain just a bit in hopes it helps:

 

The whole idea behind doing any kind of RAID is to protect against data loss from catastrophic hardware failure; you can think of ECC as protecting against failures of the 'non-catastrophic' variety, where something 'went wrong', but not 'so wrong the drive failed'. It used to be that virtually all CPUs supported ECC; it was expected that anyone might want to ensure their data was 'correct', even us lowly home users. It wasn't until Intel removed support from their consumer products as of the Core i series that anyone even thought of trying to 'charge someone to use a previously free feature'.

 

No, you don't "need" ECC, but it's definitely helpful. If you've ever gone to open an old JPG you'd saved off 15 years ago and found "wait, why is a quarter of the image this green blobby mess?", then you've potentially encountered some of the results. Assuming it's not straight-up 'bit rot' (I hate that term), then sometime during the many transfers from one machine to another, something may have got flipped in memory, and the machine thought it wrote one thing to disk when in fact it wrote something else. It's happened to me - I save everything, and still have files from the '90s saved off. For data that long-lived, the odds of something being out of sorts go up significantly when not using ECC and checksumming (I wasn't at the time lol).
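
If you want a cheap way to at least detect that kind of silent corruption on a filesystem that doesn't checksum for you, something like this works (the paths are just examples):

  # record checksums for a directory of long-lived files
  find /mnt/user/photos -type f -exec sha256sum {} + > /mnt/user/photos.sha256
  # re-verify after a migration, or whenever paranoia strikes; only failures are printed
  sha256sum -c --quiet /mnt/user/photos.sha256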

 

If you want a brief synopsis of a recent take, Ars Technica commented on Linus Torvalds' rant on the same subject here.

  • Like 1
Link to comment

Hey,

 

I know it's not in the poll, but... could you make the "x" button on notifications bigger, or just the clickable space around it, so that when we click near it, the notification closes instead of loading the page it came from?

That would be awesome! It's really infuriating, even more so when it's a Docker notification.

Thanks

  • Like 2
Link to comment
On 8/8/2021 at 6:48 AM, Thorsten said:

It would be very nice if Unraid supported snapshots for VMs. I would prefer this feature above all others.

ZFS allows you to do snapshots for VMs. It also has additional snapshot features, such as being able to make a duplicate of a VM with zero space requirement (i.e. it only stores the difference). I'm doing this now with Unraid on the unofficial ZFS plugin and it's fantastic.
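
For anyone curious, the zero-space duplicate is just a snapshot plus a clone - roughly this (dataset and snapshot names here are made-up examples):

  # snapshot the dataset that holds the VM's vdisk
  zfs snapshot tank/domains/win10@clean-install
  # clone it into a 'duplicate' VM disk that initially takes no extra space
  zfs clone tank/domains/win10@clean-install tank/domains/win10-test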

Link to comment
On 8/11/2021 at 7:08 AM, jonp said:

 

No.  Any requirements around expanding a ZFS pool will still apply if we implement it, which means a lot more UI programming work for us to make sure the UI respects the rules of ZFS.  That being said, there is an active project to make this possible with ZFS on GitHub:  https://github.com/openzfs/zfs/pull/8853.  It's still in Alpha stage, so no idea when it would make it up the stack to be a native part of the project.

Not sure if you've seen this?  https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/

 

Link to comment
22 hours ago, dada051 said:

I really hope "multiple arrays" will win, as the array is a core feature of Unraid, and a plugin already allows ZFS on Unraid. In addition, ZFS more or less recommends a lot of RAM, and ECC RAM at that, whereas the Unraid philosophy is to run on whatever hardware you have.

ZFS doesn't actually need a lot of RAM; that's a common misconception. What happens is there's a setting for the L1 ZFS cache (the ARC, which sits in RAM) that can default to quite a high number in some distributions/implementations, which makes it seem like it needs a lot of RAM. This can be manually set to a lower value. You can also get ZFS to cache at a reduced level, i.e. tell it to cache metadata rather than whole files, so that it doesn't use as much RAM. This works well for large files that are not read often.
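
Concretely, that tuning amounts to something like this (the 4 GiB cap and dataset name are made-up examples, and the echo doesn't persist across reboots on its own - on Unraid you'd re-apply it from the go file or similar):

  # cap the ARC at 4 GiB (value is in bytes)
  echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
  # tell a dataset to cache metadata only, not file contents
  zfs set primarycache=metadata tank/media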

 

As for ECC, well, it would be great if RAM manufacturers gave us this for the one extra chip it requires (kind of like RAID parity on a RAM stick) at a non-exorbitant price, yes. I do now use ECC on my production system, but I've also used non-ECC RAM on many other systems for many years without issue. It's a pretty big topic once you get into exactly when you might lose data.

Edited by Marshalleq
Link to comment
1 hour ago, Marshalleq said:

ZFS allows you to do snapshots for VMs. It also has additional snapshot features, such as being able to make a duplicate of a VM with zero space requirement (i.e. it only stores the difference). I'm doing this now with Unraid on the unofficial ZFS plugin and it's fantastic.

Btrfs allows snapshots, and QEMU with a qcow2 image allows snapshots of a running VM. So I don't see a real benefit to having ZFS. I'm not saying that I don't want ZFS on Unraid, but I'd prefer to have multiple arrays first, because it would allow me to have one array of 7200 rpm XFS disks and one array of 5400 rpm encrypted XFS disks. If I need ZFS, I can use the plugin, or host TrueNAS in a VM, for example. If I want to use 7200 and 5400 rpm disks today, I have to limit my performance to the 5400 rpm drives, or build another Unraid box.
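
For reference, the qcow2 route looks roughly like this through libvirt, which is what Unraid's VM manager sits on (the domain and snapshot names are made-up examples, and it assumes the VM's disks are qcow2):

  # internal snapshot of a running VM whose disks are qcow2
  virsh snapshot-create-as Windows10 pre-update
  virsh snapshot-list Windows10
  virsh snapshot-revert Windows10 pre-update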

  

1 hour ago, Marshalleq said:

ZFS doesn't actually need a lot of RAM; that's a common misconception. What happens is there's a setting for the L1 ZFS cache (the ARC, which sits in RAM) that can default to quite a high number in some distributions/implementations, which makes it seem like it needs a lot of RAM. This can be manually set to a lower value. You can also get ZFS to cache at a reduced level, i.e. tell it to cache metadata rather than whole files, so that it doesn't use as much RAM. This works well for large files that are not read often.

 

As for ECC, well, it would be great if RAM manufacturers gave us this for the one extra chip it requires (kind of like RAID parity on a RAM stick) at a non-exorbitant price, yes. I do now use ECC on my production system, but I've also used non-ECC RAM on many other systems for many years without issue. It's a pretty big topic once you get into exactly when you might lose data.

I never said it was a requirement (either for ECC or for RAM quantity).

  • Like 3
Link to comment

ZFS +1

It is well known that the r/w speed of the Unraid array is slow, and an SSD pool can help with that.

However, the VM or container files in the pools are not protected by the array, and cannot be backed up or snapshotted while the VM or container is running.

So I hope that ZFS can enable online backup of hot data without stopping the virtual machine or container.
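
That's basically what ZFS snapshots plus send/receive already do - a rough sketch with made-up pool and dataset names:

  # snapshot the dataset holding the running VMs/containers, no downtime needed
  zfs snapshot ssdpool/domains@nightly
  # stream the snapshot to a dataset on another (slower or redundant) pool as a backup
  zfs send ssdpool/domains@nightly | zfs receive -F backuppool/domains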

That way we get the r/w speed of the SSD pool and the safety of the array too.

BTW: btrfs seems to be abandoned, while ZFS continues to move forward...

Link to comment
28 minutes ago, SeanOne said:

ZFS +1

It is well known that the r/w speed of the Unraid array is slow, and an SSD pool can help with that.

However, the VM or container files in the pools are not protected by the array, and cannot be backed up or snapshotted while the VM or container is running.

So I hope that ZFS can enable online backup of hot data without stopping the virtual machine or container.

That way we get the r/w speed of the SSD pool and the safety of the array too.

BTW: btrfs seems to be abandoned, while ZFS continues to move forward...

The array can write at over 150 MB/s with turbo write. A cache pool can help increase speed too. Cache pools with btrfs allow RAID (1 and others). Btrfs allows filesystem snapshots. And QEMU allows snapshots of a VM even while it's running (if it's qcow2-based).

 

I don't know if btrfs is abandoned. But it works, and you have the plugin to add ZFS support. There is no plugin for multiple array support... 

Link to comment
On 8/14/2021 at 7:15 PM, dada051 said:

If you have to rebuild your cache periodically, it's not ZFS that will fix the root cause. 

Actually, in my experience I'd agree, but only in reverse. ZFS doesn't fix the cause; not having BTRFS does fix the cause, though. If you search the forums (and the internet) you'll find plenty of examples of randomly failing BTRFS. But in fairness, that was a year or so ago and I'm not up to speed on whether they've fixed it yet. What you do get from ZFS is rock-solid stability and early detection (and healing) of these kinds of issues. So no, it doesn't fix the cause, but it likely removes the culprit, or at least repairs things properly while you figure it out - which BTRFS didn't do in the 3-4 times it completely killed my cache before I decided to bail on it altogether.
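
The early-detection-and-healing part is mostly just routine scrubs, e.g. (the pool name is a made-up example):

  # read and verify every block, repairing from redundancy where it can
  zpool scrub tank
  # report checksum errors and any affected files
  zpool status -v tank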

Link to comment
On 8/15/2021 at 5:39 PM, SeanOne said:

ZFS +1

It is well known that the r/w speed of the Unraid array is slow, and an SSD pool can help with that.

However, the VM or container files in the pools are not protected by the array, and cannot be backed up or snapshotted while the VM or container is running.

So I hope that ZFS can enable online backup of hot data without stopping the virtual machine or container.

That way we get the r/w speed of the SSD pool and the safety of the array too.

BTW: btrfs seems to be abandoned, while ZFS continues to move forward...

I don't think ZFS wins on snapshots of running machines, because technically you're meant to stop the machine before taking an image. Practically, though, many people don't, and the machine will recover in 90% of cases. What you can do is script the whole thing. This should be possible on both BTRFS and ZFS, but my experience is with ZFS, because it's more reliable, so I can't say I've done this on BTRFS.
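
A rough sketch of what such a script could look like on ZFS (the VM and dataset names are made up, and briefly pausing the guest is a compromise rather than a guarantee of application consistency):

  #!/bin/bash
  # pause the guest so the vdisk isn't mid-write, snapshot the dataset, then resume
  virsh suspend Windows10
  zfs snapshot tank/domains/win10@$(date +%Y%m%d-%H%M)
  virsh resume Windows10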

Link to comment
On 8/15/2021 at 6:08 PM, dada051 said:

The array can write at over 150 MB/s with turbo write. A cache pool can help increase speed too. Cache pools with btrfs allow RAID (1 and others). Btrfs allows filesystem snapshots. And QEMU allows snapshots of a VM even while it's running (if it's qcow2-based).

 

I don't know if btrfs is abandoned. But it works, and you have the plugin to add ZFS support. There is no plugin for multiple array support... 

I agree that multiple arrays are important. And I do take your logic that there's a plugin for ZFS - it's a really good point TBH. That said, I yearn for ZFS builds aligned with the beta updates so we can test them properly (I can't until the plugin is updated), and for some alignment with other things in the GUI. It's not just listing the ZFS stuff; it's all the spin-downs and other things that make it complicated.

 

Also, 150 MB/s is less than the performance of a single drive. Unraid arrays are awesome, but speed is not part of that awesome. With basically any other array you get roughly (n - 1) times single-drive speed, depending on your setup - i.e. around 250 MB/s x (n - 1) devices, so a 5-device pool might manage on the order of 1 GB/s. When I bailed on Unraid arrays (I still use Unraid, but with proper ZFS), the performance of the whole system - including unexpected areas like the responsiveness of the GUI - was night and day. I feel like my computer is running properly now, and ZFS isn't even the fastest RAID - it's just the safest.

Link to comment
12 hours ago, Marshalleq said:

Actually, in my experience I'd agree, but only in reverse. ZFS doesn't fix the cause; not having BTRFS does fix the cause, though. If you search the forums (and the internet) you'll find plenty of examples of randomly failing BTRFS. But in fairness, that was a year or so ago and I'm not up to speed on whether they've fixed it yet. What you do get from ZFS is rock-solid stability and early detection (and healing) of these kinds of issues. So no, it doesn't fix the cause, but it likely removes the culprit, or at least repairs things properly while you figure it out - which BTRFS didn't do in the 3-4 times it completely killed my cache before I decided to bail on it altogether.

I think that what @dada051 means is that data or file system corruption is not normal and should not happen.

It happens to some people and not others; there might be hardware considerations.

While ZFS might be more resilient than BTRFS, changing the FS does not solve the root cause; it only addresses the symptoms.

Link to comment
3 hours ago, ChatNoir said:

I think that what @dada051 means is that data or file system corruption is not normal and should not happen.

It happens to some people and not others; there might be hardware considerations.

While ZFS might be more resilient than ZFS, changing the FS does not solve the root cause; it only addresses the symptoms.

"ZFS might be more resilient than ZFS" I think the second ZFS would be replaced by Btrfs :)
Yes, that's what I mean. ZFS can maybe fix under the hood, but it's not a great solution.

  • Haha 1
Link to comment

For me, multiple arrays would be more useful.

  1. You can already obtain ZFS with the plugin or by passing a controller card to a VM
    1. Plenty of guides out there for both
  2. Large drives in today's market are insanely expensive.
    1. 8TB for $150, ouch.
  3. Allows use of already-owned hardware
    1. I have around 50x 2TB SAS drives sitting unused.
  4. Helps with the slow write performance by splitting data types across their respective arrays
    1. Dual parity drops array transfers to 40 MB/s

As future features, I would prefer:

  1. Parity reworked.
    1. No idea if it's even possible, but imagine having one parity drive that is itself in a mirror set with one or more other drives. This removes the write penalty of having two parity drives, but actually gives you as many parity drives as you want to mirror. There are likely problems with this setup that I don't see.
  2. Tiered storage.
    1. Have a live filesystem: t0 on NVMe for high-transfer-rate items (10Gbit), t1 on SSDs where the VMs live, t2 as a RAID 10 array used for most transfers, and t3 as the parity array. Data is moved between the four tiers as necessary based on frequency of access. There is a manual way of setting this up now with multiple cache pools (roughly along the lines of the sketch below), but I would prefer an automatic method.
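
For what it's worth, the manual tiering can be approximated today with a scheduled script along these lines - a very rough sketch with made-up paths and thresholds, and it assumes atime is enabled on the fast pool:

  #!/bin/bash
  # demote files untouched for 30+ days from the fast pool share to the array share
  find /mnt/fastpool/media -type f -atime +30 -printf '%P\0' | \
    rsync -a --from0 --files-from=- --remove-source-files /mnt/fastpool/media/ /mnt/user/media/
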
Edited by rukiftw
  • Like 1
Link to comment

I would like to be able to make a snapshot of a VM as a way of backing up the system and being able to restore it to a previous version. If that's what ZFS would provide, that's fine with me. If it needs to be a standalone feature, then that works too.

Either way, it would be great to be able to do it while the VM is running, like in ESXi.

  • Like 1
Link to comment

I am a relatively new Unraid user with a small server. Originally Unraid was a hard sell for me, for the sole reason that it didn't have ZFS. I love how user-friendly it is, and applications like Nextcloud and Plex were simply a few clicks. I am also a heavy Proxmox user, and due to how Unraid's parity works I got fed up and was about to leave Unraid. If ZFS is added it would be the best thing, and I would hands-down become an Unraid power user too. There are a lot of things I just prefer about Proxmox, which is not a discussion for here, but I definitely won't be biased and will be running a hybrid setup thanks to ZFS.

  • Like 1
Link to comment
On 8/25/2021 at 3:48 AM, Xxharry said:

It's not on the list, but iSCSI would be nice to have.

An iSCSI target (server) is already available via a plugin. Or do you want to bind targets (from other servers) to unRAID via the initiator? If yes, a use case/explanation of what exactly you want to do would be nice.

Link to comment
  • 2 weeks later...

IMO, ZFS is "ok" - but it sort of goes against the whole basis of Unraid: redundancy without striping.

 

I'm all for options, but I voted for multiple arrays. I moved from FreeNAS to Unraid because it's a pain to add space to FreeNAS; Unraid makes it simple. Also, if you lose parity+1 drives in Unraid, at least you still have direct access to the data on the remaining good drives. With ZFS, you're fubar.

 

Again - I can understand why some want ZFS, but they also have to understand that by implementing ZFS, you lose quite a bit of Unraid functionality/ease of use.

 

Anyway, my 2 cents. I'd actually like to see VM templates as a feature - where I can set up a baseline VM, flag it as a template, and Unraid makes it easy for me to set up a new VM based on that template.

  • Like 1
Link to comment