
Unraid Future Feature Desires Poll



255 members have voted


Recommended Posts

On 7/12/2024 at 12:57 AM, BRiT said:

Multiple Arrays.

 

I second that! 

With that, the limit of drives in an array per license should be extended.

 

The use case is data safety, higher speeds, and energy savings through shorter rebuild and parity-check times. This matters especially when an array mixes disks of various sizes, since one disk is always in the slow part of its platter or is generally less performant.
In my case, I would split my drives into one array with 18TB drives and one with 8TB drives.


It would end a long wait if that feature arrived. ;)

Edited by Georg
punctuation
  • Like 1
  • Upvote 1
Link to comment

As much as I'd also like to see multiple arrays and a new web UI, I would really like to see a smarter and more customisable mover mechanism, expanding on the popular Mover Tuning plug-in to make better use of the cache instead of just filling it up and then dumping it all.

 

With multiple pools now functional in 7.x and multiple Unraid arrays a likely addition further down the line, better control over the mover and caching is becoming more important for setting up smarter tier-based systems. I'd like, for example, a three-tier caching mover setup rather than just a primary and secondary.

 

You could, for example, have a fast but small NVMe pool on top of a larger ZFS HDD or SATA SSD pool, with everything getting a weekly dump to a large Unraid archive array of spun-down, mixed-capacity disks. This is mostly possible now with NVMe as primary and SATA as secondary, since the mover supports pool-to-pool moves rather than just pool-to-array like in the old days. But the archival step to the array currently needs a manual script; having that handled by more advanced mover logic (like the Mover Tuning plugin's) would let files stay on the middle tier as long as space is available.
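As a rough illustration of that manual archival step, a minimal sketch assuming made-up paths and a 30-day threshold (this is plain find and rsync, not anything the built-in mover does):

#!/bin/bash
SRC="/mnt/ssd_pool/archive"   # middle-tier pool share (hypothetical name)
DST="/mnt/user0/archive"      # array-only view of the same share
AGE_DAYS=30                   # archive files untouched for this many days

# Move old files to the array, recreating the directory structure
# and deleting each source copy once it has transferred.
find "$SRC" -type f -mtime +"$AGE_DAYS" -print0 |
while IFS= read -r -d '' f; do
  rel="${f#"$SRC"/}"
  mkdir -p "$DST/$(dirname "$rel")"
  rsync -a --remove-source-files "$f" "$DST/$rel"
done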

Edited by Faceman
Link to comment
On 7/15/2024 at 6:51 AM, Georg said:

With that the limit of drives in an array per license should be extended.

I think it would only have to be an extension on the top licence tier.

 

HDDs are available in such large sizes now that the vast majority of home NAS users can get by with 4-6 HDDs and still have ~100TB of usable space in many cases. It's only the absolute top-end power users who would run multiple arrays, with multiple multi-disk pools on top of that. So maybe the tiers should be 6 drives, 12 drives, and unlimited drives?

 

Edit: I'm out of date; the updated tiers are already 6 and unlimited.

 

 

Edited by Faceman
Link to comment

Virt Manager functionality built into Unraid. Managing multiple KVM VMs with a GUI, including snapshot management and the ability to make changes to a config without hosing the XML file, would be a great addition to the product. The older Dockerised versions were fine, but are no longer supported.
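In the meantime, a fair amount of this is reachable from the terminal with the stock virsh tool from libvirt, which Unraid already ships. A minimal sketch, assuming a hypothetical VM named Windows11 with qcow2 vdisks (internal snapshots require qcow2):

virsh edit Windows11          # opens the XML in $EDITOR and validates it on save
virsh snapshot-create-as Windows11 pre-update --description "before updates"
virsh snapshot-list Windows11
virsh snapshot-revert Windows11 pre-update

A GUI wrapping these operations is essentially what virt-manager provides, hence the request.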

Link to comment

Would be nice to see:

  1. Ability to take selected Pools offline to make configuration changes without needing to stop the whole "Array" & Pools (ideally keeping Docker and/or VMs running on Pools that have not been taken offline).
     
  2. Ability to make changes to Share settings that can then be updated without restarting the underlying SAMBA service(s).
    For Windows clients, if there is an active file transfer underway when the SAMBA service(s) are restarted, it causes an error which stops the file transfer (although this can be "resumed" on Windows clients).  Not sure if it's possible to just "update" the SAMBA configuration rather than doing a complete stop and start (see the first sketch after this list).
     
  3. Ability to have an "Advanced" ZFS configuration page where "custom/explicit" ZFS Pools can be configured.
    At present unRAID "automagically" selects the ZFS Pool configuration based upon the number of assigned devices (e.g. 2 = mirror or stripe, 3 = mirror, stripe or raidz1, 4 = mirror, stripe, raidz1 or raidz2, etc.), but when I tried to create a 9-device pool with 5 x 8TB disks in raidz1 and 4 x 4TB disks in raidz1, this was not possible.
    The option to override this behaviour and explicitly configure a ZFS Pool would be good (see the second sketch after this list).
     
  4. Ability to add additional "VDev(s)" to an existing ZFS Pool.
    My understanding is that a ZFS Pool can be expanded by adding an additional (ideally redundant) vdev to it.  Adding this feature would also support my "custom" configuration request (Item 3), as I could then create a 5-disk raidz1 vdev and later add a 4-disk raidz1 vdev within the same ZFS Pool (also covered in the second sketch after this list).
    (Please note I am not talking about ZFS "RAIDZ Expansion".)
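On item 2: upstream Samba can already re-read its configuration without a restart, leaving existing sessions (and active transfers) alone. Whether Unraid's share settings could be wired to this is for the developers to say, but as a hedged sketch using the stock Samba tools:

testparm -s                    # sanity-check smb.conf before applying it
smbcontrol all reload-config   # ask all running Samba daemons to re-read the config

A few global options (e.g. listening interfaces) still need a full restart, but most share-level settings do not.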
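On items 3 and 4: at the command line, upstream ZFS will build exactly the 5+4 raidz1 layout described above, and can later grow the pool with another vdev; the -f flag is needed because zpool warns about mismatched vdev widths. Pool name and device paths below are made up:

zpool create -f tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
  raidz1 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Item 4: expand later by adding another redundant vdev
zpool add -f tank raidz1 /dev/sdj /dev/sdk /dev/sdl
zpool status tank

So the request is really for the GUI to expose what the underlying tools already allow.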
Link to comment
5 hours ago, PPH said:

Ability to take selected Pools offline to make configuration changes without needing to stop the whole "Array" & Pools (ideally keeping Docker and/or VMs running on Pools that have not been taken offline).

 

I'd love to see the whole STOP/START Array (System) feature go away entirely and replaced with individual prompts/warnings about specific pools as you mention above. Only take down what needs to be taken down for the change to be made. Keep the ability to take down specific pools or the actual disk array pool (if it's defined/used) without affecting the rest of the system. Maybe when/if the legacy array becomes a type of pool.

 

As far as the items on this specific poll however, Web UI is far and away my #1. Multiple Arrays seems to already be on the books, mentioned as coming in an upcoming 7.x release.

 

Edited by Espressomatic
  • Like 3
Link to comment
3 hours ago, JorgeB said:

This is already possible; the new vdev needs to be of the same type and width, but devices can be smaller or larger.

OK, so nearly what I am after, but not quite (I'd like to be able to add vdevs of any type and any width; I think this is technically possible with ZFS).  Although, personally, I would always aim to add vdevs that include redundancy (e.g. mirror, raidz(n), etc.).  I do understand that ZFS can be complex and that providing a simplified interface makes adoption of the technology easier, but I do think a "custom" option would be useful for those who want to explicitly configure exact settings.

 

I will give the above a try though (I've an unRAID Pro license/USB stick that I use within VMware Workstation for testing out upgrades and various procedures before trying them on my "production" unRAID boxes).
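For reference, upstream zpool does allow adding a vdev of a different type or width if forced; it refuses by default with a mismatched-replication-level error. A one-line sketch with made-up names, adding a mirror to an existing raidz pool:

zpool add -f tank mirror /dev/sdm /dev/sdn

Whether the Unraid GUI should expose that is exactly the "custom option" question above.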

Link to comment
2 hours ago, KhayrDev said:

Switch from Slackware to a major Linux distro such as Ubuntu or Debian as the base Unraid OS.

May I ask why?

Slackware is perfectly fine, working, and actively developed.

 

This also has nothing to do with hardware support, since the kernel is responsible for the hardware. I assume you are referring to an article online that compares these operating systems and the hardware architectures they support, but that simply is not applicable to Unraid, since Unraid only runs on x86_64.

You can of course use Qemu on Unraid to run other hardware architectures.

 

The host will always be x86_64 on Unraid, at least as it's currently implemented.

  • Upvote 1
Link to comment
3 minutes ago, ich777 said:

May I ask why?

Slackware is perfectly fine, working, and actively developed.

 

He also made this topic; personally, I also don't think it's a good idea.

 

  • Like 1
Link to comment

So, probably very far down the priority list (and rightly so), but what about moving away from the USB boot device? I know that requires a complete rework of ... many things, but it could add stability to the appliance.

Edited by smdion
  • Like 3
Link to comment
1 hour ago, smdion said:

moving away from the USB boot device

 

I'd love to see this as well, but for different reasons: I've had the same Kingston USB key in use since 2018 and it's had zero issues, so stability isn't an issue for me personally.

 

I'm thinking more of removing friction for installation and testing, with a goal of increasing sales and market share.

 

Booting from any disk would make Unraid more attractive to a lot of people considering a move from another platform or testing multiple platforms at one time. I see a lot of folks complaining about the USB key before ever having tried Unraid. Installing to any media would likely get them to try an install, easily spinning up a VM without having to pass through a USB port before installing.

 

More people trying leads to more people buying, which is good for everyone.

 

 

Edited by Espressomatic
  • Like 1
Link to comment

Would love to be able to have VMs/dockers start without the array needing to start. 

This would help when using VMs for routing software. (I know many people don't like it, but it works perfectly for homelabs and my use case.)

Link to comment
On 7/19/2024 at 8:04 PM, WizP said:

Would love to be able to have VMs/dockers start without the array needing to start. 

This would help when using VMs for routing software. (I know many people don't like it, but it works perfectly for homelabs and my use case.)

The array is no longer required in the 7.0 beta. I think this should already give you what you want.

Link to comment
On 7/19/2024 at 8:04 PM, WizP said:

Would love to be able to have VMs/dockers start without the array needing to start. 

This would help when using VMs for routing software. (I know many people don't like it, but it works perfectly for homelabs and my use case.)

 

3 hours ago, smdion said:

The array is no longer required in the 7.0 beta. I think this should already give you what you want.


That's correct; this is something I'm testing currently. You still need to fix some default pathing: some Docker paths, VM paths, LXC paths, and anything you have saved to the array needs to be moved off those disks.

DO AT YOUR OWN RISK!

To move data off the array, use the file manager plugin or the terminal (mc and cp) to move the data to its new location (see the sketch below). Then turn off autostart, if enabled, for all Dockers, VMs, and LXC containers. Then go to Settings > Docker to turn off Docker, and Settings > VM Manager to turn off VMs. Then stop the array. Next, go to Tools and use "New Config", being sure to keep your pool devices under "Preserve current assignments"!
Go to the Main tab and, under the array drop-down, select none. That's it.
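As a hedged sketch of that data-move step, assuming the usual system shares live on disk1 and are headed to a pool named cache (verify every path on your own box first):

rsync -avP /mnt/disk1/appdata/  /mnt/cache/appdata/
rsync -avP /mnt/disk1/domains/  /mnt/cache/domains/
rsync -avP /mnt/disk1/system/   /mnt/cache/system/

# Only after verifying the copies arrived intact:
# rm -r /mnt/disk1/appdata /mnt/disk1/domains /mnt/disk1/system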

 

New paths and startups...

Go to Settings, make sure each new path exists, and slowly check that all Dockers and services are using the new paths as you bring them back online.

 

The option exists in the 7.0 beta at the moment and should be in the production Unraid version 7 release... not sure if it is planned to be backported to Unraid v6...

This has been tested and is a WIP, as we just moved a test machine off the array and now need to see the performance hit, if any, plus other testing...
A picture of my friend's machine is coming later, as it is down at the moment to retrieve the M.2 NVMe that used to be the array.

I still recommend having a one-disk btrfs cache and a three-disk ZFS raidz1 pool when choosing not to have an Unraid disk array setup...

Edited by bmartino1
spelling/grammar
Link to comment

More mover features would be amazing, specifically on a share-by-share basis. Say I want my freshly downloaded media share to only move over after a week, my CCTV footage to move over once a day, and my "working" share to move data from cache to array only once a month. That would be invaluable for me, and it would optimize cache usage IMO. Instead of having to manually set up cache > array and run the mover overnight on my working share while keeping CCTV/new downloads for a week, you could set data-intensive loads to move sooner and smaller loads to move later.
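Until the mover supports this natively, it can be approximated with cron schedules (e.g. via the User Scripts plugin). A sketch where move_share.sh is a hypothetical wrapper around a find-and-rsync move like the one earlier in this thread, taking a share name and a minimum file age in days:

# daily at 03:00: move CCTV footage older than a day
0 3 * * *  /boot/config/scripts/move_share.sh cctv 1
# weekly on Sunday at 04:00: move downloads older than a week
0 4 * * 0  /boot/config/scripts/move_share.sh downloads 7
# monthly on the 1st at 05:00: move the working share
0 5 1 * *  /boot/config/scripts/move_share.sh working 30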

Link to comment

Hi, I posted this in the feature requests but might as well put it here.

 

 

Maybe it is already in the works, but I watched the Uncast "Unraid Story" episode and heard about multiple pools and new features added to the mover, like moving data across pools.

 

So I was wondering: can we have a third storage option for a share?

Like, I have NVMe as primary storage, a ZFS raidz1 SSD pool as secondary, and the HDD array as third,

so I can move daily from the NVMe to the SSD pool for great read/write speed, and then monthly move data from the SSD pool to the array for long-term file storage.

 

Like I said, maybe it is in the works one way or another, but that would be awesome.

 

Thanks, team, for the good work here.

Link to comment

What Unraid really misses is a built-in VM backup capability. Look at Proxmox and you know what a nearly perfect backup/restore solution looks like. I thought it was planned for 7.0. :/

 

So may I ask Limetech: do you consider a built-in VM backup/restore solution? Not that scripting-plugin "crap" (sorry for the wording, but it's fiddly, and a backup solution shouldn't be fiddly).

 

Even for Docker...
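Until something built-in arrives, a cold backup amounts to roughly the following with the stock tools. A minimal sketch, with a made-up VM name and destination, and assuming the default domains share holds the vdisks:

#!/bin/bash
VM="Windows11"
DEST="/mnt/user/backups/vms"

virsh shutdown "$VM"                        # ask the guest to power off cleanly
while virsh domstate "$VM" | grep -q running; do sleep 5; done

mkdir -p "$DEST"
virsh dumpxml "$VM" > "$DEST/$VM.xml"       # save the libvirt definition
rsync -a --sparse "/mnt/user/domains/$VM/" "$DEST/$VM/"   # copy the vdisks
virsh start "$VM"

A proper built-in solution would presumably also handle live snapshots and scheduling, which is exactly where the plugin scripts get fiddly.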

Edited by enJOyIT
Link to comment
