CyberSkulls Posted October 23, 2016

By the way, I had heard that it's up to 30 drives now. 28 data + 2 parity, with cache drives cutting into that total. IIRC, the Pro licence supports 30 drives, 2 parity, and up to 36 cache drives in a pool, with no limit on the # of unassigned devices.

And I might have to look into unassigned devices again. Maybe I misunderstood how it works. Since I also have full backups, I honestly don't care if they aren't part of the parity-protected array, as long as you can set all those drives to show up as a single share. As an example, I wouldn't want 30 different shares with movies in them; I would need them pooled together to show as a single share. Same with the cache devices: I've never used the cache for anything other than a cache drive, so I'm unsure how it might or might not work for my situation.

One thing I've never understood is, if we're allowed to have 24 cache devices to begin with, why not just remove the restriction on the array limit (or increase it) and allow 54 total rather than 2+28+24? Or, like I've asked Tom, create another tier above Pro -- call it Premier or Enterprise or whatever name sounds catchy. I don't mean that as a smart ass if it comes across that way; I am genuinely curious.

Sent from my iPhone using Tapatalk
garycase Posted October 23, 2016

I don't use the unassigned devices plugin, but I don't think it allows pooling. You could, based on the current restrictions, have a dual-parity array with 28 data drives and a 24-drive cache pool, which you can set as an unprotected pool. So you'd get 52 drives worth of storage capacity in effectively 2 arrays [the cache and the parity-protected array]. You'd manage the cache "pool" by assigning cache-only shares.

I agree, by the way, that if UnRAID was extended to allow managing a 2nd array for such large systems, it would be reasonable for LimeTech to make that a different class of license at a higher cost ... or perhaps to simply require an additional license for each additional array you were managing.

I haven't experimented with it, but I wonder if you could run a 2nd array within a VM and simply pass-thru all the drives for the 2nd array [you'd probably have to pass through the controller(s) those drives were on]. If this works okay, you could effectively do what you're asking for already ... AND you could still have dual parity on the arrays managed from within VMs. The major disadvantage would be that the virtualized arrays wouldn't be independent of the primary array -- if you had to Stop it for any reason, everything would have to be shut down.
CyberSkulls Posted October 23, 2016

I haven't experimented with it, but I wonder if you could run a 2nd array within a VM and simply pass-thru all the drives for the 2nd array ...

That's more or less what I saw in a YouTube video: running the VM array was fine, but stopping one caused some issues, even with two or more licensed GUIDs showing in the system. I'll have to dig out a bunch of my craptastic Seagate 3TB drives -- yes, the dreaded ones that we all know and love to hate -- and try running two separate machines pulling off the same JBOD chassis to see what results I get.

Sent from my iPhone using Tapatalk
bobkart Posted October 23, 2016

Besides improved fault tolerance, other benefits include the potential for decreased Parity Sync / Data Rebuild times on smaller parity sets (those comprised of 3TB drives versus 4TB drives, for example). I realize that lowering Parity Check times isn't a big concern, but lowering Data Rebuild times *is* something to strive for, IMHO.

As to data rebuild times, I don't see any benefit here, as the time to rebuild a drive is essentially determined by the size of the drive being rebuilt, not the parity drive. And that doesn't change no matter how the array or arrays are set up. The only other factor would be a small one: the fact that it can't go faster than the slowest drive of its array.

Hi RobJ, in response to the above, I ran an experiment in my test server. I had just finished some drive speed testing, so it was a perfect follow-on. The drive speed testing involves using a pair of matching drives (same model). First I did a Parity Sync, then a Parity Check. The two result sets below are for, first, a pair of Western Digital 3TB WD30EZRX drives, then a pair of Seagate 4TB ST4000DM000 drives.

pair of WD30EZRX
  Sync:  6:54:47  120.6 MB/sec
  Check: 6:53:47  120.9 MB/sec

pair of ST4000DM000
  Sync:  8:17:01  134.2 MB/sec
  Check: 8:16:29  134.3 MB/sec

Then I mixed the drive sizes: a 4TB drive for parity, a 3TB drive for data:

parity: ST4000DM000, data: WD30EZRX
  Sync:  10:58:52  101.2 MB/sec
  Check: 10:57:45  101.4 MB/sec

The times went up from ~8.3 hours to ~11 hours.
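For anyone who wants to sanity-check those figures, here is a minimal Python sketch; the nominal 10^12-byte TB capacities and decimal megabytes are my assumptions, not something stated in the thread. It shows the reported MB/sec values are simply capacity divided by elapsed time:

# Sanity check: average rate = capacity / elapsed time.
# Nominal capacities (3 and 4 decimal TB) and MB = 10^6 bytes are assumed;
# real drives are slightly larger than nominal, hence tiny rounding gaps.

def hms_to_seconds(hms):
    """Convert 'H:MM:SS' to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

TB = 10**12
runs = [
    ("WD30EZRX pair, Sync",               3 * TB, "6:54:47"),
    ("ST4000DM000 pair, Sync",            4 * TB, "8:17:01"),
    ("mixed 4TB parity / 3TB data, Sync", 4 * TB, "10:58:52"),
]

for label, capacity_bytes, elapsed in runs:
    rate = capacity_bytes / hms_to_seconds(elapsed) / 10**6  # MB/sec
    print(f"{label}: {rate:.1f} MB/sec")
# prints roughly 120.5, 134.1 and 101.2 MB/sec, matching the reported averages

In other words, the mixed run really did average about 16% less throughput than the 3TB pair alone, on a run that also had an extra terabyte to cover.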
garycase Posted October 23, 2016

Very interesting results. I'd expect some "bump" in timing, but not that much. One would expect the first 3TB of the sync/check to run at the rate of the WDs, and the last 1TB to be relatively slow (since you're on the innermost cylinders of the 4TB drive), but certainly not to add as much time as your test showed.

These are both 1TB/platter units (assuming you don't have one of the old, original EZRX units that had 750GB platters -- your results clearly indicate that's not the case) and have similar rotational rates (5400 WD, 5900 Seagate), so it's not at all clear why mixing them produces the results you've seen. I suspect it has to do with the disparate rotation rates, which likely require some additional rotations, but I'd have thought the disks' buffers would offset that.

I presume all tests used the same controller, so that's not a factor -- right?
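To put rough numbers on that expectation, here is a back-of-envelope sketch; the ~95 MB/sec inner-zone figure for the 4TB drive is an assumed value, not something measured in these tests:

# garycase's model: the first 3TB runs at the WD30EZRX-pair pace, and only the
# last 1TB (innermost zone of the 4TB drive, read alone) is slower.
TB = 10**12
outer_rate = 120.6e6    # MB/sec observed for the WD30EZRX pair
inner_rate = 95e6       # ASSUMED average for the last 1TB of the ST4000DM000

expected_s = 3 * TB / outer_rate + 1 * TB / inner_rate
print(f"expected mixed sync: ~{expected_s / 3600:.1f} hours")   # ~9.8 hours

Even with a fairly pessimistic inner-zone estimate, that model lands around 9.8 hours -- well short of the observed ~11 hours, which is what makes the result puzzling.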
CyberSkulls Posted October 23, 2016

I'd think you'd have no problem running two independent UnRAID servers using drives from the same chassis.

The chassis did not like that idea and refused to cooperate with me. Apparently the SAS3 controllers HGST used in these chassis only allow control from one head unit, so running two unRAID servers connected to the same JBOD didn't work. I was hoping that would be a quick and dirty solution.

I don't really want to run a plugin such as unassigned devices and have a crap load of shares, so that's not an option. The only other option, based on other posters in this thread (and I appreciate the feedback and help), would be to create a large cache-only share/pool and give up XFS, and that's not something I want to do at this point in time. So unfortunately I'll now have to move these chassis back to my old setup of running Windows Server with Stablebit Drive Pool & Drive Scanner, put my unRAID licenses in a drawer, and hope for something to change in the future. Not the outcome I was hoping for.

I'll gladly move them back to unRAID if LT raises the artificial drive limits, allows multiple Pro licenses to run multiple arrays on the same machine/hardware, or creates a higher tier that allows for more drives. I would think any one of those three would be workable options.

Sent from my iPhone using Tapatalk
bobkart Posted October 23, 2016

I suspect it has to do with the disparate rotation rates ... I presume all tests used the same controller, so that's not a factor -- right?

Yes, identical hardware for all runs, except that I changed the 80W picoPSU to a 120W picoPSU for the third run (having repurposed the 80W picoPSU I had used for the first two runs). unRAID 6.2, stock tunables. The CPU is relatively low in performance (a Celeron 1037U), but I confirmed that CPU load was nowhere near maxing out for these runs (plus, if the CPU were in the way, the pair of 4TB drives would also be affected). Using motherboard SATA ports.

I think 'disparate rotation rates' has something to do with what we're seeing. One clue is that it took 8.25 hours to get to the 3TB mark, but by themselves the 3TB drives get there in under 7 hours.
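That clue is worth quantifying; a small sketch using only the figures already posted in this thread:

# If the slowdown were confined to the last 1TB (inner zone of the 4TB drive),
# the first 3TB of the mixed run should have passed at roughly the pace of the
# WD30EZRX pair. The reported 8.25 hours to the 3TB mark says otherwise.
TB = 10**12
to_3tb_mark_s = 8.25 * 3600                     # reported above
avg_rate_mb = 3 * TB / to_3tb_mark_s / 10**6    # ~101 MB/sec over the first 3TB
pair_only_h = 3 * TB / 120.6e6 / 3600           # ~6.9 hours for the 3TB pair alone
print(f"~{avg_rate_mb:.0f} MB/sec to the 3TB mark, vs ~{pair_only_h:.1f} hours "
      f"for the 3TB pair on its own")

So the whole run was pacing at roughly 101 MB/sec from the start, not just over the final terabyte -- consistent with the mismatch (whatever its cause) dragging on the outer tracks as well.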
RobJ Posted October 24, 2016

As to data rebuild times, I don't see any benefit here, as the time to rebuild a drive is essentially determined by the size of the drive being rebuilt, not the parity drive. ...

Hi RobJ, in response to the above, I ran an experiment in my test server. ...

I wasn't quite sure what you were responding to, so I wanted to make sure you understood that my response was only about data rebuild time, not parity sync or check time. As I said, they aren't the same thing; they're based on different drive sizes. Rebuilding a 3TB drive only involves 3TB, not 4TB.

Your parity numbers are interesting. I was going to comment, but Gary has exactly summarized my thoughts! I totally agree with him.
bobkart Posted October 24, 2016

I see your point now about only the data drive needing to be rebuilt. So whether a 3TB drive is being rebuilt as part of a 3TB array or a 4TB array, the time should be similar (at least on paper). Got it.

BUT, I think whatever is making my experimental setup above take 8.25 hours to get to the 3TB point (for both a parity check and a parity rebuild), when that takes only about 7 hours when the 4TB drive(s) are not involved (disparate data/rotation rates being the leading suspect), would likely also affect a data rebuild. In fact, that's easy enough for me to test. Granted, it's not the reason I initially put forth. Thanks for helping me see that. (I don't think I've ever rebuilt a smaller-than-parity data drive, since most of my arrays use all same-sized drives.)
garycase Posted October 24, 2016

Agree ... I'd expect a 3TB rebuild in a mixed system to show the same behavior you saw in the parity sync/check experiment. Whatever the cause (and I agree it's almost certainly the different rotation speeds), it would have the same impact regardless of WHY the drives were all being accessed at once.
bobkart Posted October 26, 2016

pair of WD30EZRX
  Sync:  6:54:47  120.6 MB/sec
  Check: 6:53:47  120.9 MB/sec

pair of ST4000DM000
  Sync:  8:17:01  134.2 MB/sec
  Check: 8:16:29  134.3 MB/sec

Then I mixed the drive sizes: a 4TB drive for parity, a 3TB drive for data:

parity: ST4000DM000, data: WD30EZRX
  Sync:  10:58:52  101.2 MB/sec
  Check: 10:57:45  101.4 MB/sec

The times went up from ~8.3 hours to 11 hours.

While testing data rebuild times for the mixed-drive-size situation, it became clear that the results above are bogus. The only reason for it that I can see is a somehow-marginally-bad 3TB drive. I replaced that drive with another same-model drive to test data rebuild times, and when I saw the unexpected results, I re-ran the Parity Sync and Parity Check with the new drive. Here are the revised results (after '- change 3TB drive'):

parity: ST4000DM000, data: WD30EZRX
  Sync:   10:58:52  101.2 MB/sec
  Check:  10:57:45  101.4 MB/sec
  - change 3TB drive
  Build2: 6:21:35   174.7 MB/sec
  Sync2:  9:04:58   122.4 MB/sec
  Check2: 9:03:56   122.6 MB/sec
  Build2: 6:21:37   174.7 MB/sec

So the time increase of interest (Parity Sync/Check) is from ~8.3 hours to ~9.1 hours, *not* to just under 11 hours. The fact that the 3TB rebuild times (I ran the rebuild again to be sure) are less than the Parity Sync/Check times for the same-sized drive pair is likely due to the marginal drive. (I should probably check that drive for SMART problems.)

Apologies for posting the earlier, misleading test results. When I free up another WD30EZRX I'll re-run Parity Sync/Check for a pair of them; I suspect it'll be close to that 6:21 time instead of 6:54.
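To put the correction in perspective, here is a quick arithmetic check on the posted times (same nominal-capacity assumption as in the earlier sketch):

# Penalty for mixing a 4TB parity drive with a 3TB data drive, relative to the
# matched 4TB pair, before and after swapping out the marginal 3TB drive.
TB = 10**12

def hms_to_seconds(hms):
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

baseline_s = hms_to_seconds("8:17:01")    # matched ST4000DM000 pair
original_s = hms_to_seconds("10:58:52")   # mixed sizes, marginal 3TB drive
revised_s  = hms_to_seconds("9:04:58")    # mixed sizes, replacement 3TB drive

print(f"original penalty: {100 * (original_s / baseline_s - 1):.0f}%")   # ~33%
print(f"revised penalty:  {100 * (revised_s / baseline_s - 1):.0f}%")    # ~10%
print(f"revised avg rate: {4 * TB / revised_s / 10**6:.1f} MB/sec")      # ~122.3

So mixing rotation rates still costs something here, but it's closer to a ten percent hit than the one-third penalty the first run suggested.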
garycase Posted October 26, 2016

Glad to see that. Your earlier results really didn't make sense -- as I noted before, the only thing that could even remotely account for it was the different rotation rates ... and THAT shouldn't make any difference, because the drives' buffers should take care of it. Seems like in fact they do ... you just had a bad drive.