Unraid Feature Request Wishlist


SpencerJ

Unraid Feature Wish List  


Drive management:


Build pre-clearing in, so you can just plug in a drive and the GUI prepares it whilst the array is up and adds it with the least disruption.


The ability to add and remove drives to grow or shrink the array in a very user-friendly way.

 

I have 15 x 3TB drives and, with all technology moving on, I can now buy 14TB drives. It would be great to be able to remove four old drives and replace them with a single drive, all intuitively managed by the GUI.

 

Baked-in unassigned drive support.

 

Basically, very easily managed drive support to give users the confidence to upgrade or change things around when required, which at the end of the day is exactly what Unraid's core functionality is all about: drive management.

Edited by Ockingshay

Triple Parity Please 🙏

 

I ask this half seriously, and I know the math behind it would be insane. I'd take this, or multiple arrays on single hardware.

 

I have nearly 30 disks in a 45 Drives Storinator with 206TB usable, including five 14TB drives. As single-drive sizes go up, I'm getting more and more nervous, haha. I have backups of everything, on and off site, but still, recovering would be a PITA.


As snapshots are already possible with btrfs and ZFS (using the plugin, as I have done for a while now with just scripts and the command line, including send/receive to a remote Unraid box, and it works great), I vote for ZFS support on the cache and the array.
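For reference, a minimal sketch of the kind of script I mean (the pool, dataset, and host names are just examples):

# create a dated snapshot of the appdata dataset (ZFS snapshots are read-only)
zfs snapshot tank/appdata@backup-20200101

# first run: full send to the remote Unraid box over ssh
zfs send tank/appdata@backup-20200101 | ssh backup-host zfs receive backuppool/appdata

# later runs: send only the delta between the last two snapshots
zfs send -i tank/appdata@backup-20200101 tank/appdata@backup-20200102 | ssh backup-host zfs receive backuppool/appdata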

It would unlock more supported and rock-solid ZFS RAID features on the cache than the currently limited (stable) options on btrfs, and would finally allow ZFS features for the array itself.

btrfs is nice and all, and I dabbled with it for a while, but after the third unrecoverable corruption issue I moved to unassigned devices and the ZFS plugin for all docker, appdata, VMs etc., and I have never looked back, nor had an issue since.

Lastly, and maybe most important of all, ZFS is extremely user friendly and btrfs is not, especially when stuff goes wrong. That is "the" moment where you need simplicity, not complexity. Anyone who has had to deal with cache corruption issues on btrfs will know this: in most cases the answer is "sorry, just rebuild the cache". You have to almost nuke ZFS from orbit to get to that point, while a simple cable connection hiccup or a bad shutdown on a RAIDed btrfs pool can easily bust you up. I have done dozens of destructive tests and would not trust my critical data to btrfs anymore. Unfortunately, for a multi-SSD cache it is currently my only option.

 

Multiple cache pools is a close second.

 

 

Snapshots as an argument for preventing ransomware are a bit out of date. Almost all the actual ransomware I read about deletes all the versioning the user has access to, including snapshots provided at the share level.

With copy-on-write filesystems and read-only snapshots it is pretty hard for ransomware to affect old snapshots: even deleting all the currently accessible data still follows the copy-on-write rules, so your old data is not actually deleted. But it is certainly possible when your host is fully compromised with root access.
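For example, on btrfs a read-only snapshot is a single command (paths are illustrative):

# -r makes the snapshot read-only; anything writing through the share cannot touch it
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/.snapshots/data-20200101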

But to be totally safe you can do what I do and have a second, mostly isolated system that exposes/shares NO data via any sharing protocol (SMB/NFS/etc.) and "pulls" snapshots (deltas, via zfs/btrfs send/receive over a secure SSH connection) from the primary every night, for example. Don't mistakenly "push" data to it from the primary: if your primary is compromised, the hacker or their software has access to the stored credentials the primary uses to reach the backup host, and will just use those to jump to it.
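A rough sketch of the pull side, run from the backup box (nightly via cron, say; host and dataset names are placeholders):

# runs ON the backup box: it logs into the primary and pulls the delta,
# so the primary never holds any credentials for the backup host
ssh primary-host zfs send -i tank/data@nightly-20200101 tank/data@nightly-20200102 | zfs receive backuppool/data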

 


@glennv I agree, btrfs is surprisingly unstable for a "stable" fs. I just possibly realised something that I hope you can clarify: if I create a ZFS mirror and mount it at /mnt/cache, will Unraid use that as its normal cache? I had always assumed the cache had to be btrfs for a mirror and XFS for a single disk. I've moved my Docker onto ZFS, but hadn't considered it was possible for the cache.

24 minutes ago, Marshalleq said:

@glennv I agree, btrfs is surprisingly unstable for a "stable" fs. I just possibly realised something that I hope you can clarify: if I create a ZFS mirror and mount it at /mnt/cache, will Unraid use that as its normal cache? I had always assumed the cache had to be btrfs for a mirror and XFS for a single disk. I've moved my Docker onto ZFS, but hadn't considered it was possible for the cache.

Nope, ZFS is only for unassigned devices. Don't mess with the cache!! The cache is part of the array.

Keep the cache on btrfs if you want a fault-tolerant option: RAID1, or RAID10 if you have more drives.

It was in my wishlist (first sentence).

Edited by glennv
3 minutes ago, Marshalleq said:

Damn. Thanks for that. If I could have an XFS mirrored cache I would; I have grown to dislike BTRFS.



Yeah, I am with you. The only thing you could theoretically do is use some hardware RAID under a normal XFS cache for redundancy, but it is not really the most elegant way and has its own issues. I guess we had better stick with btrfs for now and wait it out until ZFS arrives there. Eventually it will have to.
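For the record, the Linux software-RAID equivalent of that idea would look something like this (device names are examples), with the caveat that Unraid itself would neither manage nor monitor such a device:

# mirror two devices with mdadm, then put a plain XFS filesystem on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/cache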

 


I haven't really looked into how it works, but I am somewhat surprised I can't just reformat the existing mirror with ZFS in its place. I assume, since we can't, that the cache must be built directly into the Unraid array code, which would detect the change in filesystem and disable it.



  • 2 weeks later...

I would like to see QEMU non-native emulation support. QEMU already has the ability to emulate non-native architectures; just building all the emulation targets would be awesome. I honestly don't care if that means I have to write my domain XML from scratch for non-native VMs; I would really like this as an option, especially since you're already compiling QEMU for Unraid.
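To make the request concrete, launching a non-native guest with stock QEMU looks roughly like this (the kernel, initrd, and image names are made up):

# emulate a 64-bit ARM machine on an x86 host; this is pure software
# emulation (TCG), since KVM acceleration cannot cross architectures
qemu-system-aarch64 -M virt -cpu cortex-a57 -m 2048 -kernel vmlinuz-arm64 -initrd initrd-arm64 -drive file=arm64-disk.img,format=qcow2,if=virtio -nographic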

 

I would also like a drop-in method for adding kernel modules, for drivers and the like, rather than compiling a custom kernel for each use case.
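What I have in mind is roughly the standard module flow (the module name here is hypothetical):

# copy a prebuilt .ko into the running kernel's module tree, refresh
# the dependency database, then load it
cp mydriver.ko /lib/modules/$(uname -r)/extra/
depmod -a
modprobe mydriver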


NTFS support in the kernel

 

Copying from NTFS disks to the array is very slow due to the NTFS drivers. Is it possible to get them into the kernel so it is as fast as XFS-to-XFS HDD writes and reads? NTFS to XFS runs at less than half HDD performance in Unraid.
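For context, the slow path today is the userspace FUSE driver; the old in-kernel driver is faster but effectively read-only (the device name below is a placeholder):

# userspace ntfs-3g (FUSE) driver: full read/write, but slow
mount -t ntfs-3g /dev/sdX1 /mnt/ntfs

# legacy in-kernel driver, if compiled in: faster, but effectively read-only
mount -t ntfs -o ro /dev/sdX1 /mnt/ntfs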

 

Ref: https://www.reddit.com/r/unRAID/comments/ewuogs/why_do_ntfs_to_xfs_copy_at_less_than_half_hdd/?utm_source=share&utm_medium=web2x

 

2 hours ago, ajugland said:

NTFS support in the kernel

 

Copying from NTFS disks to the array is very slow due to the NTFS drivers. Is it possible to get them into the kernel so it is as fast as XFS-to-XFS HDD writes and reads? NTFS to XFS runs at less than half HDD performance in Unraid.

 

Ref: https://www.reddit.com/r/unRAID/comments/ewuogs/why_do_ntfs_to_xfs_copy_at_less_than_half_hdd/?utm_source=share&utm_medium=web2x

 

Is there even a Linux NTFS kernel driver available? I thought there was not :(


Adding native ZFS allows for multiple pools, snapshots, VM enhancements (thin provisioning, backups without powering down, linked VMs), server-to-server backups (you can stream ZFS), bitrot protection, L2ARC and ZIL, de-duplication, realtime compression... etc.
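As a sketch of how many of those boxes a single pool ticks (device names are examples):

zpool create tank mirror /dev/sda /dev/sdb    # mirrored pool, checksummed against bitrot
zpool add tank cache /dev/nvme0n1             # L2ARC read cache
zpool add tank log /dev/nvme1n1               # ZIL/SLOG device
zfs set compression=lz4 tank                  # realtime compression
zfs create -s -V 50G tank/vmdisk              # thin-provisioned (sparse) zvol for a VM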

 

Unraid's FS has its uses (multiple random-sized discs, ideal for media storage, etc.); ZFS just ticks all the boxes for the other use cases.

 

Adding proper ZFS support ticks more than just one feature request.

 

BTRFS is dead on arrival... anyone remember ReiserFS?

 

BTRFS is so experimental that almost every kernel release has had breaking changes and constant bugs.

BTRFS will kill itself in most situations, and if you really want to tank all your data: enable some "advanced" btrfs features and power off your machine at the wall.

 

Red Hat uses XFS as the default filesystem.

Ubuntu uses ZFS by default for containers.

Proxmox has been using ZFS for years.

Solaris and *BSD use ZFS.

ZFS on Linux looks similar to *BSD ZFS on the surface, but is vastly different underneath.

Edited by eXtremeSHOK

Not sure I agree with you about BTRFS; I have two servers running Unraid that use BTRFS and I have had zero issues. I think if you really want to use ZFS you have to go with FreeNAS. I know there is a ZFS plugin for Unraid, but I have never looked at it. Frankly, I can't see ZFS ever coming to Unraid the way it is implemented in FreeNAS, but that is just my opinion. I thought for ZFS all the drives had to be the same size?


ZFS on Linux is not BSD ZFS. ZFS on Linux is a near-complete port and recode, with many, many optimizations.

 

I am aware there is an addon; this is about native support.

 

With regards to btrfs... I highly doubt you are using any of the advanced features; if it's the default install, you might as well be using XFS. At least recovery will be possible.

 

FreeNAS is BSD; it's not the Linux kernel. The argument to use FreeNAS is the same as saying: oh, you want Samba (CIFS)... use Windows Server.
