Soon™️ 6.12 Series


starbetrayer

Recommended Posts

21 minutes ago, Abnorm said:

Will it magically make mechanical drives faster?

It doesn't "make drives faster", but the point is that on the standard unraid array you may have 8 drives but when you read/write a file it always comes from/to one single drive, limiting individual file access perrformance to one drive's performance. ZFS is/works as a RAID array, so accesses are spread across multiple drives depending on pool configuration.

 

For example, a 5-drive raidz1 will be a bit less than 5x a single drive's performance.

https://calomel.org/zfs_raid_speed_capacity.html
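For anyone who wants to try this outside the Unraid GUI, the equivalent with the plain OpenZFS command line looks roughly like this (the pool name and device names are just placeholders for five spare disks):

```
# Create a 5-drive raidz1 pool named "tank" (placeholder devices).
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Check the vdev layout, then watch per-disk activity while copying a big
# file - reads and writes are spread across every member of the vdev.
zpool status tank
zpool iostat -v tank 5
```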

Edited by Kilrah
Link to comment

I hope the 30-drive limit will eventually be removed so we have the option to extend our ZFS with additional vdevs (e.g. 4-, 5-, or 6-device vdevs) for speed and also for more parity. I'm currently building a 1PB server and I'm sad that I might outgrow my Unraid server; as much as possible I don't want to switch OS. I want Unraid's simplicity, user friendliness, and S-tier UI. I hope end users will one day have the choice of no limit on the array, even if we have to accept a pop-up saying we are doing it at our own risk. I can live with that.

I know this was not really in the pipeline. When Unraid was built its target market was home enthusiasts, and hard disks were not that affordable at the time, so the devs did not expect people to chuck more than 30 drives into their servers. But times have changed, and server-grade equipment is more affordable and easier to acquire, so I hope Limetech can look at my simple request :)

Edited by jonathanselye
updated
  • Like 1
Link to comment
13 hours ago, Kilrah said:

It doesn't "make drives faster", but the point is that on the standard unraid array you may have 8 drives but when you read/write a file it always comes from/to one single drive, limiting individual file access perrformance to one drive's performance. ZFS is/works as a RAID array, so accesses are spread across multiple drives depending on pool configuration.

 

For example, a 5-drive raidz1 will be a bit less than 5x a single drive's performance.

https://calomel.org/zfs_raid_speed_capacity.html

I see, thanks for clearing that up. Parity is written across the vdev rather than to a single drive, so that will provide a real-life performance benefit for sure.

Link to comment
11 hours ago, jonathanselye said:

I hope the 30-drive limit will eventually be removed so we have the option to extend our ZFS with additional vdevs (e.g. 4-, 5-, or 6-device vdevs) for speed and also for more parity. I'm currently building a 1PB server and I'm sad that I might outgrow my Unraid server; as much as possible I don't want to switch OS. I want Unraid's simplicity, user friendliness, and S-tier UI. I hope end users will one day have the choice of no limit on the array, even if we have to accept a pop-up saying we are doing it at our own risk. I can live with that.

I know this was not really in the pipeline. When Unraid was built its target market was home enthusiasts, and hard disks were not that affordable at the time, so the devs did not expect people to chuck more than 30 drives into their servers. But times have changed, and server-grade equipment is more affordable and easier to acquire, so I hope Limetech can look at my simple request :)

I have 100TB of available storage and use 21 drives, including M.2 cache. I don't see how I'll ever end up with 30+ drives; my single rack chassis can fit 24 drives max. With drive density increasing almost monthly, when a drive dies I'll replace it with one of nearly double or triple the capacity of the failed disk, reducing the number of drives and the power consumption.

Server-grade equipment (HPE, Dell, etc.) usually has a 24-drive limit per chassis anyway. Yes, there are exceptions.

 

You mention that people will put more drives into server-grade equipment, but that is not really how it works. Newer server gear has higher data density, meaning more capacity in less space, which translates to fewer drives with far more capacity each and less rackspace required, which means savings when hosting in datacenters, where space is at a premium.

If you're thinking of using up old small-capacity drives, that's fine, but isn't it better to just recycle them and buy some bigger drives, save $$$ on power and be a little greener at the same time?

 

Also, if the request was simple, unraid would support 30+ drives already. 

Link to comment
On 1/11/2023 at 5:39 PM, limetech said:
  • (BTW we could add ext4 but no one has really asked for that).

Well, in that case, can I ask for it lol? Now that the Linux kernel has NTFS3 it'd be nice to support that as well (it has full POSIX ACL support too, for those who need it!), and it would allow someone to easily move their data over from Windows machines without formatting anything.
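For reference, mounting an existing Windows disk with the in-kernel driver looks roughly like this (the device and mount point are placeholders, and the acl option assumes the kernel was built with ntfs3 POSIX ACL support):

```
# Mount an existing NTFS partition read/write using the ntfs3 kernel driver.
mkdir -p /mnt/windisk
mount -t ntfs3 -o acl /dev/sdX1 /mnt/windisk
```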

I've also mulled the idea of being able to easily support "no filesystem", or at least, Btrfs pools within the unraid array.

 

Then you could use a Btrfs RAID0 to take advantage of snapshots a lot more easily and gain the read-speed benefits, and I could better take advantage of 10G on the Unraid array itself. You'd also have the ability to add and even remove one disk at a time, and since btrfs now has "degenerate stripes", even if data is already allocated it can still effectively use all the space, minus the speed improvement. It would never be worse than using independent filesystems speed-wise, and a balance would fix that up.
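As a rough sketch of that idea with the stock btrfs tools (placeholder devices; mirrored metadata is just a common choice here, not something Unraid would necessarily do):

```
# Two-disk btrfs pool with striped data and mirrored metadata.
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc
mkdir -p /mnt/pool
mount /dev/sdb /mnt/pool

# Grow one disk at a time, then rebalance so existing data is restriped
# across all members.
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool

# Shrinking works too - btrfs migrates data off the disk before removal.
btrfs device remove /dev/sdd /mnt/pool
```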

 

You'd still lose out on write speeds since the parity disk is the bottleneck, but you'd gain "stable" parity protection in the event of a drive failure. I've hacked this together before, but with the current design of Unraid and its GUI, it's really not ideal.

 

It's flexibility like this that I'd love to see more of from Unraid, more so than ZFS. That said, very excited to see the inclusion of ZFS support :)

Edited by JSE
Link to comment
2 hours ago, Abnorm said:

I have 100TB of available storage and use 21 drives, including M.2 cache. I don't see how I'll ever end up with 30+ drives; my single rack chassis can fit 24 drives max. With drive density increasing almost monthly, when a drive dies I'll replace it with one of nearly double or triple the capacity of the failed disk, reducing the number of drives and the power consumption.

Server-grade equipment (HPE, Dell, etc.) usually has a 24-drive limit per chassis anyway. Yes, there are exceptions.

 

You mention that people will put more drives into server-grade equipment, but that is not really how it works. Newer server gear has higher data density, meaning more capacity in less space, which translates to fewer drives with far more capacity each and less rackspace required, which means savings when hosting in datacenters, where space is at a premium.

If you're thinking of using up old small-capacity drives, that's fine, but isn't it better to just recycle them and buy some bigger drives, save $$$ on power and be a little greener at the same time?

 

Also, if the request was simple, unraid would support 30+ drives already. 

Well, assuming you want to build a 1PB server with, let's say, 20TB drives, you still need a minimum of 50, and that's without parity; the total may be reduced further by formatted capacity. I'm just saying.
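Quick back-of-the-envelope math (decimal terabytes, ignoring formatting overhead; the raidz2 layout below is just one example layout I picked, not anything Unraid prescribes):

```
# Drives for 1 PB raw with 20 TB disks, no redundancy:
echo $(( 1000 / 20 ))                 # 50

# With raidz2 vdevs of 12 drives (10 data + 2 parity), 5 vdevs are needed:
echo $(( (1000 / (20 * 10)) * 12 ))   # 60
```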

Link to comment
9 hours ago, jonathanselye said:

Well, assuming you want to build a 1PB server with, let's say, 20TB drives, you still need a minimum of 50, and that's without parity; the total may be reduced further by formatted capacity. I'm just saying.

 

If you need petabytes of available storage, is Unraid really the best way to go?

  • Upvote 3
Link to comment
16 hours ago, Abnorm said:

 

If you need petabytes of available storage, is Unraid really the best way to go?

Well, I think that since they are going the ZFS route anyway, they might as well include it, right? Then we could also take advantage of the speed of multiple vdevs, each composed of multiple devices.

That is what I was saying in my previous post: at the time Unraid was developed it was aimed at home enthusiasts. Now that times have changed, I hope Unraid will adapt and make PB-scale servers possible. Just my 2¢.

Edited by jonathanselye
typo
Link to comment
3 hours ago, apandey said:

Seeing a lot of core plug-ins updated in the last few days to support 6.12. That must be a good sign. Sooner ;-)

6.12 is highlighting a lot of PHP warnings because of the update to PHP 8. While the warnings are not fatal, they are very messy in their presentation, potentially all over the UI. Those of us with core plugins are just trying to prepare for a public release.

 

That being said, some plugin authors may find that their plugins no longer work and will need to make some adjustments, though it has been rare for a plugin to stop working entirely. Plugin authors will be able to work with one of the beta releases so they can sort out any issues with their plugins.

  • Thanks 2
Link to comment

So I'm thinking about doing a flash-based ZFS array with this. How does parity work with SSDs? TRIM and all that. Will a flash-based ZFS array in raidz work with parity? Say 3 SSDs plus a large platter drive for parity: the SSDs would make up the ZFS array and parity would be platter-based. Would that be possible with ZFS?

Link to comment
1 hour ago, Jclendineng said:

So I'm thinking about doing a flash-based ZFS array with this. How does parity work with SSDs? TRIM and all that. Will a flash-based ZFS array in raidz work with parity? Say 3 SSDs plus a large platter drive for parity: the SSDs would make up the ZFS array and parity would be platter-based. Would that be possible with ZFS?

I believe flash is the future, but I've been told no. There's also no point in having a flash array with HDD parity: parity still has to be written, so your writes are going to be limited to match it.

Link to comment
4 hours ago, Jclendineng said:

So I'm thinking about doing a flash-based ZFS array with this. How does parity work with SSDs? TRIM and all that. Will a flash-based ZFS array in raidz work with parity? Say 3 SSDs plus a large platter drive for parity: the SSDs would make up the ZFS array and parity would be platter-based. Would that be possible with ZFS?

So you need additional protection from unRAID parity on top of what ZFS already does for protection?

Sounds kinky and experimental. Better to respect the filesystem authors' intentions and not mix the two teams; I think the mix is already called btrfs. Depending on what you need that protection for, I am sure that if you explain it, we can come up with the best and safest solution without having to "virtualize" filesystems on top of each other. Been there, done that; it could work, but... not in production.

 

The best protections are designed and pre-configured in a way that takes into account every combination of failure scenarios across the protection layers. Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. The issue is that how each layer works is either 100% guaranteed and documented by the authors, so you can understand and plan for it, or you have to read the source code to fully understand and anticipate the logic (simulating one layer in your head, times the other layer, times every permutation of failure modes).

 

Just tell us what you mean by that. Why do you need the additional parity?

Link to comment
4 hours ago, Jclendineng said:

Will a flash-based ZFS array in raidz work with parity? Say 3 SSDs plus a large platter drive for parity: the SSDs would make up the ZFS array and parity would be platter-based. Would that be possible with ZFS?

As has been mentioned, you won't be able to make a raidz pool in the array. If it's in the array, it's going to have to be individual ZFS-formatted drives.

Link to comment
42 minutes ago, GRRRRRRR said:

So you need additional protection from unRAID parity on top of what ZFS already does for protection?

Sounds kinky and experimental. Better to respect the filesystem authors' intentions and not mix the two teams; I think the mix is already called btrfs. Depending on what you need that protection for, I am sure that if you explain it, we can come up with the best and safest solution without having to "virtualize" filesystems on top of each other. Been there, done that; it could work, but... not in production.

 

The best protections are designed and pre-configured in a way that takes into account every combination of failure scenarios across the protection layers. Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. The issue is that how each layer works is either 100% guaranteed and documented by the authors, so you can understand and plan for it, or you have to read the source code to fully understand and anticipate the logic (simulating one layer in your head, times the other layer, times every permutation of failure modes).

 

Just tell us what you mean by that. Why do you need the additional parity?

I was just being dumb; raidz1-3 would do what I want. I had a brain fart. So 3 drives in raidz1 would give the best storage-to-parity efficiency.
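For what it's worth, ZFS handles the TRIM part itself on an all-flash pool; something like this should cover the earlier question (pool and device names are placeholders):

```
# 3-SSD raidz1 pool: two drives of usable space, one drive's worth of parity.
zpool create flash raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# Let ZFS pass discards through automatically, or trim on demand.
zpool set autotrim=on flash
zpool trim flash
```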

Link to comment

I am here just to follow the thread - my only post in this thread will probably be this one.

I've wanted ZFS for a long time, though just as an "option", a "feature of the system" (not having ZFS has actually left unRAID out of many discussions of NAS systems, which is unfair to unRAID, and I am sure Tom is aware of this happening).
The only things I really care about as a long-time user and customer are
- ...deduplication (because by the nature of the data I keep - emulation data - I do have many duplicates),
- ...transparent compression-decompression officially supported (not "under-the-hood" tricks),
- ...have a "self-healing" FS on my data disks, not just the cache,
and all these WITHOUT losing
- ...the ability to have various size disks,
- ...able to replace one by one disk with larger size,
- ...easy recovery even when I go OVER the limits of RAID4 that unRAID practically is (i.e. I want to be able to recover the files from the healthy disks even if two disks die with a single parity).

Can I have those things when ZFS is incorporated into unRAID? (Or some other way?)
I don't really care if the underlying FS is ZFS or whateverFS, as long as I get those and not lose the current benefits of unRAID.
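For reference, on plain ZFS the first two items on my list are ordinary dataset properties, and scheduled scrubs cover the self-healing part; a sketch, assuming a pool called "tank" with a dataset for the emulation data:

```
# Transparent compression and deduplication are per-dataset properties.
zfs create tank/emulation
zfs set compression=zstd tank/emulation
zfs set dedup=on tank/emulation    # dedup needs plenty of RAM for its table

# Periodic scrubs verify checksums and repair from redundancy.
zpool scrub tank
```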

Keep evolving!

Edited by NLS
  • Like 1
  • Upvote 1
Link to comment

.-..-. .... . -.-- --..-- / .-. -.-- .- -. --..-- / -... . / -.-. .- .-. . ..-. ..- .-.. / .-- .... .- - / -.-- --- ..- / ... .... --- --- - / .- - .-.-.- / -- --- ... - / - .... .. -. --. ... / .. -. / .... . .-. . / -.. --- -. .----. - / .-. . .- -.-. - / - --- --- / .-- . .-.. .-.. / - --- / -... ..- .-.. .-.. . - ... .-.-.- .-..-.

  • Haha 2
Link to comment
