Risino15 Posted May 8, 2018
I am just wondering if Lime Technology is planning to add support in unRAID for more than 2 parity drives and more than 30 drives total.
limetech Posted May 8, 2018
6 hours ago, Risino15 said: "I am just wondering if Lime Technology is planning to add support in unRAID for more than 2 parity drives and more than 30 drives total."
Strictly speaking, unRAID already supports 54 devices: 30 in the parity-protected array and 24 in the cache pool. But you are probably wondering about a wider parity-protected array with 3 or more 'check' disks. Let me ask: what is the use case? Rather than increasing the max width of 'the' array, wouldn't it be more generally useful, and nearly as effective, to permit creation of multiple arrays/pools in one server?
[This discussion, if it continues, can lead to interesting places, but is not a support question. Probably I should move it to Feature Requests, and may end up doing so, but for now I'm moving it to Pre-Sales Support.]
Risino15 (Author) Posted May 8, 2018
Three parity drives would make sense with more than 30 devices, so those two things should be added together. And multiple pools is a great idea. Or maybe add something like VDEVs in ZFS, so we could make, say, 15-drive VDEVs with 2 parity drives each. It's not that I need more drive support right now, but in the future it could be good to have, and unRAID could then be used at a larger scale.
pwm Posted May 8, 2018
3 minutes ago, limetech said: "Rather than increasing the max width of 'the' array, wouldn't it be more generally useful and nearly as effective to permit creation of multiple arrays/pools in one server?"
I would definitely recommend support for multiple arrays before support for bigger arrays or a third parity drive. That would allow two-disk mirrors for high write speeds besides the cache pool.
JorgeB Posted May 8, 2018
11 minutes ago, limetech said: "Rather than increasing the max width of 'the' array, wouldn't it be more generally useful and nearly as effective to permit creation of multiple arrays/pools in one server?"
Also vote for multiple arrays and/or multiple cache pools.
Risino15 (Author) Posted May 8, 2018
Well, if you add something like VDEVs in ZFS, you could do bigger stuff with it.
JorgeB Posted May 8, 2018
5 minutes ago, Risino15 said: "Well, if you add something like VDEVs in ZFS"
VDEVs are a ZFS thing, and I would guess it would be very difficult to implement similar functionality on unRAID.
Risino15 (Author) Posted May 8, 2018
It would just be something like multiple unRAID pools striped into one.
JorgeB Posted May 8, 2018
10 minutes ago, Risino15 said: "It would just be something like multiple unRAID pools striped into one."
Exactly, and unRAID doesn't stripe disks.
Risino15 (Author) Posted May 8, 2018
Then they could make it something like JBOD, so one VDEV would fill, then the second, then the third...
pwm Posted May 8, 2018
1 hour ago, Risino15 said: "Then they could make it something like JBOD, so one VDEV would fill, then the second, then the third..."
And now you basically have multiple arrays, where a user share is allowed to have data on more than one array.
limetech Posted May 8, 2018
2 hours ago, Risino15 said: "you could do bigger stuff with it."
I was really wondering: what is the application use case? That is, suppose we have to decide how to spend several hundred man-hours of effort, and the choice is between: a) developing wider arrays or multiple arrays, etc., or b) developing something else, say automated VM vdisk snapshots and backups? In other words, the OP's question was compelling, not so much from a technical standpoint (it has all been discussed before), but rather from a business standpoint, or if you prefer, a company resource-allocation standpoint.
Risino15 (Author) Posted May 8, 2018
It's up to Lime Technology. I just threw this idea onto the forum so that maybe it can be added later. Everything has its priorities, and I understand that.
Greygoose Posted May 8, 2018
I pick option B. That would be a great feature.
MvL Posted May 8, 2018
In my opinion, multiple protected arrays would be a great feature. Multiple cache drives would also be great, and maybe we need a separate SSD for storage of the system, appdata, domains, and ISOs. So many cool features, and all take time to develop; and yeah, what are the priorities... I think they are different for everyone.
JorgeB Posted May 8, 2018
1 hour ago, limetech said: "b) developing something else, say automated VM vdisk snapshot and backups?"
I would prefer this one, but if you're going to do it, please don't limit the snapshots to VMs; add snapshot functionality for all shares (for disks using btrfs, obviously). I'm already using it, with the help of the User Scripts plugin, but would really like GUI functionality added. This is what I'm currently doing:
- a script to snapshot a share at a set time (across multiple disks, if applicable)
- another script to list the existing snapshots of a share; on shares spanning multiple disks it lists from a single disk, since the remaining disks will have identical ones
- another script to delete snapshots related to that same share, where I input the date I want to delete, with wildcards if needed, and snapshots for that share are deleted from all disks
This is what I would like:
- a GUI option to snapshot any share; if the share exists on multiple disks, snapshot all of them, with a configurable expiration time, i.e., at creation time set snapshots to expire after one month and have them deleted automatically as they reach that age
- a GUI option to list all existing snapshots related to a share, with a checkbox to easily delete them if wanted (checking a date would delete related share snapshots across the various disks, if applicable)
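The expiration logic described above could be sketched roughly like this. This is a toy sketch, not JorgeB's actual scripts: the `share_YYYY-MM-DD` naming scheme and the 30-day default are assumptions, and a real script would pass the expired names to `btrfs subvolume delete` on each disk.

```python
from datetime import date, timedelta

def expired_snapshots(names, today, retention_days=30):
    """Return the snapshot names that are older than the retention window.

    Snapshot names are assumed (hypothetically) to end in a _YYYY-MM-DD
    suffix, e.g. 'media_2018-05-08'. Names without a parseable date
    suffix are skipped rather than deleted.
    """
    cutoff = today - timedelta(days=retention_days)
    expired = []
    for name in names:
        try:
            stamp = date.fromisoformat(name.rsplit("_", 1)[1])
        except (IndexError, ValueError):
            continue  # not one of our dated snapshots; leave it alone
        if stamp < cutoff:
            expired.append(name)
    return expired
```

A cron job (or the User Scripts plugin) would call this daily and delete whatever it returns, giving the "expire after one month" behavior without the GUI.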
LFletcher Posted May 9, 2018
I'd rather have the option for multiple arrays than wider ones. I have a 48-bay case (and an additional 60-bay expander) which I'd like to use with unRAID, but due to the current protected-drive limit I would have to swap to another product in order to utilize them properly.
NewDisplayName Posted May 20, 2018
Just a small question: does anyone know why 2 parity and 28 drives? Why can't you just have 50 drives and 2 parity?
pwm Posted May 20, 2018
46 minutes ago, nuhll said: "Just a small question: does anyone know why 2 parity and 28 drives? Why can't you just have 50 drives and 2 parity?"
It would be a bad idea. The best drives you can get specify a 0.33% annual failure rate when they are new, which works out to one drive failure every 6 years for such a big system. When the drives get older, the annual failure rate will start to increase, which means the probability of multiple disk failures will start to be significant in such a big system. And disk failures aren't even the only danger when you build really big systems. There is a reason why big systems use hierarchical designs: to keep the failure probability from exploding.
The next thing is that it's a question of diminishing returns. When you have 5 data disks, you need to chip in 40% more to add 2 parity disks. When you have 25 data disks, you only need to add 8% for the 2 parity drives. So it isn't economically meaningful to double the number of drives just to keep down the relative cost of the parity drives; it's better to use two independent arrays instead, especially since two arrays allow better transfer rates, better availability, and better ability to recover.
The math isn't the same, since the failure distribution is significantly different from the birthday distribution, but the birthday paradox is still an interesting illustration of escalating probabilities: https://en.wikipedia.org/wiki/Birthday_problem
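pwm's figures can be checked with quick back-of-envelope arithmetic. The sketch below assumes independent, constant failure rates (which, as he notes, real aging drives don't follow):

```python
def parity_overhead(data_disks, parity_disks=2):
    """Parity cost as a fraction of the data-disk count."""
    return parity_disks / data_disks

# 2 parity drives over 5 data disks vs. over 25 data disks
assert parity_overhead(5) == 0.40    # 40% extra spend
assert parity_overhead(25) == 0.08   # 8% extra spend

# 0.33% annual failure rate per drive, across a 50-drive array
afr = 0.0033
expected_failures_per_year = 50 * afr   # 0.165 failures/year
years_between_failures = 1 / expected_failures_per_year
# roughly one drive failure every 6 years, matching pwm's estimate
```

The diminishing-returns point falls straight out of `parity_overhead`: the fixed 2-drive parity cost shrinks as a fraction of the array, so widening the array buys less and less per parity dollar.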
remotevisitor Posted May 20, 2018
The fact that the limit is such an arbitrary-looking number suggests to me that it is possibly an implementation limit: a bit mask detailing which drives are used by a share may have to fit into a 32-bit field in an existing file-system data structure, hence the size of the field cannot be changed, with the extra 4 bits used for something else like an error value. This is pure speculation on my part about a possible reason for the limit.
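That guess can be pictured concretely. The sketch below is purely illustrative of the speculation, not unRAID's actual data structures; the 4 reserved bits and the helper names are invented for the example.

```python
RESERVED_BITS = 4                 # hypothetical: high bits kept for flags/errors
MAX_DRIVES = 32 - RESERVED_BITS   # leaves 28 addressable drive slots

def share_mask(drive_slots):
    """Pack a collection of drive slot numbers (0-27) into a 32-bit mask."""
    mask = 0
    for slot in drive_slots:
        if not 0 <= slot < MAX_DRIVES:
            raise ValueError(f"slot {slot} exceeds the {MAX_DRIVES}-drive mask")
        mask |= 1 << slot
    return mask

def drives_in_mask(mask):
    """Unpack a mask back into the sorted list of drive slots it covers."""
    return [s for s in range(MAX_DRIVES) if mask & (1 << s)]
```

If membership really were stored this way, going past 28 drives would mean widening the field everywhere it appears on disk, which would explain why the limit feels baked in.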
NewDisplayName Posted May 21, 2018
So it's a mathematical thing? That 2 small arrays are better than 1 big one makes sense. Even cooler would be some sort of multi-server setup: one control panel (what we have now), but in the background you could have an application server (or multiple) and a storage server (or multiple).
Edited May 21, 2018 by nuhll
Risino15 (Author) Posted August 5, 2018
I just had an idea in my head. Correct me if I am wrong, but parity drives are just mirrored, right? (I'm not sure about this at all.) So it wouldn't be hard to just allow more parity drives, right?
itimpi Posted August 5, 2018
1 minute ago, Risino15 said: "Correct me if I am wrong, but parity drives are just mirrored, right? (I'm not sure about this at all.) So it wouldn't be hard to just allow more parity drives, right?"
No, the parity drives are not mirrored. Different algorithms are used for parity1 and parity2.
Risino15 (Author) Posted August 5, 2018
3 minutes ago, itimpi said: "Different algorithms are used for parity1 and parity2."
Maybe a stupid question, but what if parity drives 3 and up were just RAID 1? Is there a limitation why parity drives cannot just be mirrored?
EDIT: So theoretically it would be possible to add a HW RAID 1 array as the parity drive to "create" more parity drives?
Edited August 5, 2018 by Risino15
itimpi Posted August 5, 2018
1 minute ago, Risino15 said: "Maybe a stupid question, but what if parity drives 3 and up were just RAID 1? Is there a limitation why parity drives cannot just be mirrored?"
I am not an expert on the maths behind this, but I believe different algorithms are needed to handle multiple disk failures, so you can identify which content belongs on which drive.
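The parity1 half of this is easy to demonstrate: it is a plain XOR across the data disks, so any single lost disk is recovered as the XOR of parity with the survivors. The toy sketch below uses byte strings as stand-in disk blocks; unRAID's real implementation works at the block-device level in the md driver, and parity2 uses a different, algebraically independent code precisely so that two simultaneous failures can be told apart.

```python
from functools import reduce
from operator import xor

def parity1(blocks):
    """XOR corresponding bytes across all blocks (the parity1 idea)."""
    return bytes(reduce(xor, col) for col in zip(*blocks))

# Three toy "disks" of two bytes each.
disks = [b"\x01\x02", b"\x0f\x00", b"\xff\x10"]
p = parity1(disks)

# Disk 1 dies: XOR of parity with the survivors rebuilds it exactly.
rebuilt = parity1([disks[0], disks[2], p])
assert rebuilt == disks[1]

# A mirrored copy of p would be the same equation written twice. With TWO
# disks lost, p only tells you the XOR of the missing pair, not how to
# split it between them -- which is why parity2 must be a different
# algorithm rather than a mirror of parity1.
```

This is why Risino15's RAID 1 idea can't add protection: a second identical copy of the parity equation gives no new information about which of two failed disks held what.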