Logimex Posted January 7, 2020
I would like to see a GUI-initiated, easy drive removal and replacement option which guides less confident or inexperienced users through the remove/replace process.
stefan marton Posted January 10, 2020
I request a read-only cache pool option for my media server, like the feature on Synology NAS.
zwolfinger Posted January 11, 2020
Fusion-io support. Older tech, but getting super cheap as enterprise leases run out.
scorcho99 Posted January 13, 2020
On 1/2/2020 at 3:45 AM, sosdk said: I would like a cache setting that allows files to be on both cache pool and array.
Second this, for the same reason as Pauven. It's not as easy as it sounds, though: what happens when you change a file on the cache? How will that change link back up to the copy on the array?
Ellis34771 Posted January 14, 2020
I have zero desire to use ZFS. I've looked at it; if I wanted it, I'd have gone with FreeNAS. I wouldn't mind other storage options being added, but it sort of goes against what Unraid is (IMHO). Access logs would be helpful, however. I also love the idea of snapshots.
Ockingshay Posted January 14, 2020 (edited)
Drive management:
- Built-in pre-clearing, so you can just plug in a drive and the GUI prepares it while the array is up, then adds it with the least disruption.
- The ability to add and remove drives to grow or shrink the array in a very user-friendly way. I have 15 x 3TB drives and, with all technology moving on, I can now buy 14TB drives. It would be great to be able to remove 4 old drives and replace them with a single drive, all intuitively managed by the GUI.
- Baked-in unassigned drive support.
Basically, very easily managed drive support to give users the confidence to upgrade or change things around when required, which at the end of the day is exactly what Unraid's core functionality is all about: drive management.
Edited January 14, 2020 by Ockingshay
isvein Posted January 14, 2020
Server-to-server backup and iSCSI would be nice.
falconexe Posted January 16, 2020
Triple parity please 🙏 I ask this half-seriously, and I know the math behind this would be insane. I'd take this or multiple arrays on single hardware. I have nearly 30 disks in a 45 Drives Storinator with 206TB usable, including 5 x 14TB drives. As single-drive sizes go up, I'm getting more and more nervous, ha ha. I have backups of everything, on and off site, but still, recovering would be a PITA.
glennv Posted January 16, 2020
As snapshots are already possible with btrfs and ZFS (using the plugin, as I have done for a while with just scripts and the command line, including send/receive to a remote Unraid box, and it works great), I vote for ZFS support on cache and array. It would unlock more supported and rock-solid ZFS RAID features for the cache than the current limited (stable) options on btrfs, and would finally allow ZFS features for the array itself.
btrfs is nice and all, and I dabbled with it for a while, but after the 3rd unrecoverable corruption issue I moved to Unassigned Devices and the ZFS plugin for all docker, appdata, VMs etc., and have never looked back, nor ever had an issue again.
Lastly, and maybe most important of all, ZFS is extremely user friendly and btrfs is not, especially when stuff goes wrong. That is "the" moment where you need simplicity, not complexity. Anyone who has had to deal with cache corruption issues on btrfs will know this. In most cases the answer is: sorry, just rebuild the cache. You almost have to nuke ZFS from orbit to get to that point, while a simple cable connection hiccup or bad shutdown on a raided btrfs can easily bust you up. I did dozens of destructive tests and would not trust my critical data to btrfs anymore. Unfortunately, for a multi-SSD cache it is currently my only option.
Multiple cache pools is a close second.
glennv Posted January 16, 2020
Snapshots as an argument for preventing ransomware is a bit out of date: almost all current ransomware I read about deletes all the versioning the user has access to at the share level. With a copy-on-write filesystem and read-only snapshots, it is pretty hard for ransomware to affect old snapshots, since even deleting all currently accessible data still follows the copy-on-write rules and does not actually delete your old data. But it is certainly possible once your host is fully compromised with root access.
To be totally safe, you can do what I do: have a second, mostly isolated system that exposes/shares NO data via any sharing protocol (SMB/NFS/etc.) and "pulls" snapshots (deltas, via zfs/btrfs send/receive over a secure SSH connection) from the primary every night, for example. Don't mistakenly "push" data to it from the primary: if your primary is compromised, a hacker or their software has access to the stored credentials you use on the primary to access the backup host, and will just use those to jump to it.
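A pull-based nightly replication job of the kind described above can be sketched roughly like this. This is only an illustration, not anyone's actual script: the hostnames, pool, and dataset names are placeholders, and it assumes ZFS on both ends plus key-based SSH access from the backup host to the primary (never the reverse).

```shell
#!/bin/bash
# Runs on the isolated backup host: pull the latest snapshot delta
# from the primary over SSH. The backup host holds the credentials;
# the primary has no credentials for (and no path to) the backup host.
set -euo pipefail

PRIMARY="primary.lan"     # placeholder hostname of the primary server
SRC="tank/appdata"        # placeholder dataset on the primary
DST="backup/appdata"      # local dataset to receive into
SNAP="auto-$(date +%Y%m%d)"

# Take tonight's read-only snapshot on the primary
ssh "$PRIMARY" zfs snapshot "${SRC}@${SNAP}"

# Find the most recent snapshot we already hold locally
LAST=$(zfs list -H -t snapshot -o name -s creation "$DST" 2>/dev/null \
       | tail -1 | cut -d@ -f2 || true)

if [ -n "$LAST" ]; then
    # Incremental: stream only the delta since the last common snapshot
    ssh "$PRIMARY" zfs send -i "@${LAST}" "${SRC}@${SNAP}" | zfs receive -F "$DST"
else
    # First run: full send
    ssh "$PRIMARY" zfs send "${SRC}@${SNAP}" | zfs receive "$DST"
fi
```

The important design point is the direction: the backup host initiates everything, so compromising the primary yields no credentials that reach the backups.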
Marshalleq Posted January 16, 2020
@glennv I agree, btrfs is surprisingly unstable for a stable FS. I just possibly realised something that I hope you can clarify: if I create a ZFS mirror and mount it to /mnt/cache, will Unraid use that as its normal cache? I had always assumed the cache had to be btrfs for a mirror and XFS for a single disk. I've replaced my docker with ZFS, but hadn't considered it was possible for the cache.
glennv Posted January 16, 2020 (edited)
24 minutes ago, Marshalleq said: if I create a ZFS mirror and mount it to /mnt/cache, will unraid use that as it's normal cache?
Nope, ZFS is only for unassigned devices. Don't mess with the cache!! The cache is part of the array. Keep it btrfs if you want a fault-tolerant option: either RAID1, or RAID10 with more drives. It was on my wishlist (1st sentence).
Edited January 16, 2020 by glennv
Marshalleq Posted January 16, 2020
Damn. Thanks for that. If I could have an XFS mirrored cache I would; I have grown to dislike btrfs.
glennv Posted January 16, 2020
3 minutes ago, Marshalleq said: If I could have an XFS mirrored cache I would, I have grown to dislike BTRFS.
Yeah, I am with you. The only thing you could theoretically do is use some hardware RAID under a normal XFS cache for redundancy, but it's not really the most elegant way and has its own issues. Guess we'd better stick with btrfs for now and wait it out until ZFS gets there. Eventually it will have to.
Marshalleq Posted January 16, 2020
I haven't really looked into how it works, but I am somewhat surprised I can't just reformat the existing mirror with ZFS in its place. Since we can't, I assume the cache must be built directly into the Unraid array code, and it would somehow detect the change in filesystem and disable it.
huntastikus Posted January 29, 2020
Nvidia DRIVERS PLEASE!!! Or a mechanism for installing drivers!!!
zywolf Posted January 31, 2020
Please add Chinese language support.
mkfelidae Posted January 31, 2020
I would like to see QEMU non-native emulation support. QEMU already has the ability to emulate non-native architectures; just building all the emulation targets would be awesome. I honestly don't care if that means I have to write my domain XML from scratch for non-native VMs; I would really like this as an option, especially since you're already compiling QEMU for Unraid.
I would also like a drop-in method for adding kernel modules, for drivers and the like, rather than compiling a custom kernel for each use case.
ajugland Posted February 1, 2020
NTFS support in the kernel. Copying from NTFS disks to the array is very slow due to the NTFS drivers. Is it possible to get them inside the kernel so it will be as fast as XFS-to-XFS HDD writes and reads? NTFS to XFS runs at half HDD performance in Unraid.
Ref: https://www.reddit.com/r/unRAID/comments/ewuogs/why_do_ntfs_to_xfs_copy_at_less_than_half_hdd/?utm_source=share&utm_medium=web2x
testdasi Posted February 1, 2020
2 hours ago, ajugland said: NTFS support in the kernel
Wrong place to ask for this; you need to talk to Linus Torvalds. All kernel control is by the powers-that-be. Unraid is an end user of the Linux kernel.
itimpi Posted February 1, 2020
2 hours ago, ajugland said: NTFS support in the kernel. Is it possible to get the drivers inside the kernel so it will be as fast as XFS-to-XFS HDD writes and reads?
Is there even a Linux NTFS kernel driver available? I thought there was not.
ftrueck Posted February 1, 2020
My biggest wish: ZFS as the main filesystem for arrays.
eXtremeSHOK Posted February 2, 2020 (edited)
Adding native ZFS allows for multiple pools, snapshots, VM enhancements (thin provisioning, backups without powering down, linked VMs), server-to-server backups (you can stream ZFS), bitrot protection, L2ARC and ZIL, deduplication, real-time compression, etc. The Unraid FS has its uses (multiple random-sized disks, ideal for media storage), but ZFS ticks all the boxes for the other use cases. Adding proper ZFS support ticks more than just one feature request.
btrfs is dead on arrival... anyone remember ReiserFS? btrfs is so experimental that almost every kernel release has had breaking changes and constant bugs. btrfs will kill itself in most situations, and if you really want to tank all your data, enable some "advanced" btrfs features and power off your machine at the wall.
Ubuntu and Red Hat use XFS as the default. Ubuntu uses ZFS by default for containers. Proxmox has been using ZFS for years. Solaris and *BSD use ZFS. ZFS on Linux is similar to *BSD ZFS, but vastly different under the hood.
Edited February 2, 2020 by eXtremeSHOK
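For readers unfamiliar with ZFS, most of the features in that list map onto a handful of zpool/zfs commands. A rough illustration (the device names, pool name, and layout here are made up for the example, not a recommendation):

```shell
# Mirrored pool with a read cache (L2ARC) and a mirrored write log (ZIL/SLOG)
zpool create tank mirror /dev/sdb /dev/sdc \
    cache /dev/nvme0n1 \
    log mirror /dev/nvme1n1 /dev/nvme2n1

# Inline compression pool-wide; deduplication on one dataset
# (dedup is RAM-hungry and often not worth enabling)
zfs set compression=lz4 tank
zfs set dedup=on tank/vms

# Thin-provisioned (-s, sparse) zvol for a VM disk,
# snapshotted while the VM keeps running
zfs create -s -V 50G tank/vms/win10
zfs snapshot tank/vms/win10@before-update
```

Bitrot protection and send/receive streaming come for free with the pool itself; no extra configuration is involved beyond regular scrubs.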
ashman70 Posted February 2, 2020
Not sure I agree with you about btrfs; I have two servers running Unraid that are using btrfs and I have had zero issues. I think if you really want to use ZFS you have to go with FreeNAS. I know there is a ZFS plugin for Unraid; I have never looked at it. Frankly I can't see ZFS ever coming to Unraid the same way it is implemented in FreeNAS, but that is just my opinion. I thought that for ZFS all the drives had to be the same size?
eXtremeSHOK Posted February 2, 2020
ZFS on Linux is not BSD ZFS; it is a near-complete port and recode, with many, many optimizations. I am aware there is an addon; this is about native support. With regards to btrfs: I highly doubt you are using any of the advanced features. If it's the default install, you might as well be using XFS; at least recovery will be possible. FreeNAS is BSD, not the Linux kernel. The argument to use FreeNAS is the same as saying "oh, you want Samba (CIFS)... use Windows Server."