bobkart

Everything posted by bobkart

  1. I've never gotten one that didn't plug-n-play, including one purchased brand new. I have five or so of them, mostly purchased on eBay.
  2. The 9207-8es are what I use the most. I've also used these successfully: 9200-8e, 9300-8e, 9202-16e, 9211-4i, 9211-8i. The key consideration is that they are HBAs as opposed to RAID cards (IT mode).
  3. The RAM will take care of most of the writes. My primary server has 64GB of RAM, and after adjusting the dirty_ratio up to 75, I can write ~40GB files to it at GbE speeds (~112MB/s). With 3x that much RAM you can probably leave dirty_ratio alone and achieve the same results.
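     For a rough sense of why that works: the kernel will hold up to roughly dirty_ratio percent of RAM as dirty page cache before it starts blocking the writer, so 75% of 64GB (~48GB) comfortably absorbs a ~40GB file. A minimal Python sketch, assuming a Linux host where the setting is exposed at /proc/sys/vm/dirty_ratio (changing it needs root, typically via sysctl):

         # Rough headroom: RAM the kernel may hold as dirty page cache before
         # forcing writeback and blocking the writer (approximate rule of thumb).
         def dirty_headroom_gb(ram_gb, dirty_ratio_pct):
             return ram_gb * dirty_ratio_pct / 100.0

         # Reading the live value needs no special privileges (Linux only).
         with open("/proc/sys/vm/dirty_ratio") as f:
             current = int(f.read())

         print(f"current vm.dirty_ratio: {current}%")
         print(f"headroom at 75% of 64GB: {dirty_headroom_gb(64, 75):.0f} GB")      # ~48 GB, covers a ~40GB file
         print(f"headroom at the common 20% default: {dirty_headroom_gb(64, 20):.0f} GB")  # ~13 GB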
  4. All my backup unRAID servers work that way. I use rackmount SAS enclosures and widely-available SAS controllers in the host (9207-8e, 9300-8e, ...). So all you should need besides the NetApp disk shelf is a PCIe HBA/controller that's compatible with the disk shelf, that's also compatible with unRAID.
  5. It's tight: https://lime-technology.com/forum/index.php?topic=47025.0 mATX will only work if you leave out the bottom two drives.
  6. While testing data rebuild times for the mixed-drive-size situation, it became clear that the results above are FUD. The only reason for it that I can see is a somehow marginally bad 3TB drive. I replaced that drive with another same-model drive to test data rebuild times. When I saw the unexpected results, I re-ran Parity Sync and Parity Check with the new drive. Here are the revised results (after '-change 3TB drive'):
     parity: ST4000DM000, data: WD30EZRX
     Sync:    10:58:52  101.2 MB/sec
     Check:   10:57:45  101.4 MB/sec
     -change 3TB drive
     Build2:   6:21:35  174.7 MB/sec
     Sync2:    9:04:58  122.4 MB/sec
     Check2:   9:03:56  122.6 MB/sec
     Build2:   6:21:37  174.7 MB/sec
     So the time increase of interest (Parity Sync/Check) is from ~8.3 hours to ~9.1 hours, *not* to just under 11 hours. The 3TB rebuild times (I ran it again to be sure) being less than the Parity Sync/Check times for the same-sized drives is likely due to the marginal drive. (I should probably check that drive for SMART problems.) Apologies for posting the earlier, misleading test results. When I free up another WD30EZRX I'll re-run Parity Sync/Check for a pair of them. I suspect it'll be close to that 6:21 time instead of 6:54.
  7. I see your point about only the data drive needing to be rebuilt now. So whether a 3TB drive is being rebuilt as part of a 3TB array or a 4TB array, the time should be similar (at least on paper). Got it. BUT, I think whatever is making my experimental situation above take 8.25 hours to get to the 3TB point (for both a parity check and a parity rebuild), when that only takes 7 hours when the 4TB drive(s) are not involved (disparate data/rotation rates being the leading suspect), would likely also affect a data rebuild. In fact that's easy enough for me to test. Granted it's not the reason I initially put forth. Thanks for helping me see that. (I don't think I've ever rebuilt a smaller-than-parity data drive, since most of my arrays use all same-sized drives.)
  8. Yes, identical hardware for all runs, except I changed the 80W picoPSU to a 120W picoPSU for the third run (having repurposed the 80W picoPSU I had used for the first two runs). And unRAID 6.2, stock tunables. The CPU is relatively low in performance (Celeron 1037U), but I confirmed that the CPU load was nowhere near maxing out for these runs (plus if the CPU were in the way, the pair of 4TB drives would also be affected). Using motherboard SATA ports. I think 'disparate rotation rates' has something to do with what we're seeing. One clue is that it took 8.25 hours to get to the 3TB mark, but by themselves the 3TB drives get there in under 7 hours.
  9. "As to data rebuild times, I don't see any benefit here, as the time to rebuild a drive is essentially due to the size of the drive being rebuilt, not the parity drive. And that doesn't change no matter how the array or arrays are set up. The only other factor would be a small one, the fact that it can't go faster than the slowest drive of its array."
     Hi RobJ, in response to the above, I ran an experiment on my test server. I had just finished some drive speed testing, so it was a perfect follow-on. The drive speed testing involves using a pair of matching drives (same model). First I did a Parity Sync, then a Parity Check. The two result sets below are, first, for a pair of Western Digital 3TB WD30EZRX drives, then for a pair of Seagate 4TB ST4000DM000 drives.
     pair of WD30EZRX
     Sync:   6:54:47  120.6 MB/sec
     Check:  6:53:47  120.9 MB/sec
     pair of ST4000DM000
     Sync:   8:17:01  134.2 MB/sec
     Check:  8:16:29  134.3 MB/sec
     Then I mixed the drive sizes: a 4TB drive for parity, a 3TB drive for data:
     parity: ST4000DM000, data: WD30EZRX
     Sync:  10:58:52  101.2 MB/sec
     Check: 10:57:45  101.4 MB/sec
     The times went up from ~8.3 hours to ~11 hours.
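     As a rough sanity check on those numbers, elapsed time works out to roughly the parity-drive capacity divided by the reported average rate. A minimal Python sketch, assuming decimal units (1 TB = 1e12 bytes, 1 MB = 1e6 bytes):

         # Elapsed time ~= parity-drive capacity / average rate (decimal units assumed).
         def hours(capacity_tb, avg_mb_per_s):
             return capacity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

         print(f"3TB pair   @ 120.6 MB/s: {hours(3, 120.6):.2f} h")  # ~6.91 h  (reported 6:54:47)
         print(f"4TB pair   @ 134.2 MB/s: {hours(4, 134.2):.2f} h")  # ~8.28 h  (reported 8:17:01)
         print(f"mixed pair @ 101.2 MB/s: {hours(4, 101.2):.2f} h")  # ~10.98 h (reported 10:58:52)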
  10. "IIRC, the pro licence supports 30 drives, 2 parity, and up to 36 cache drives in a pool with no limit on the # of unassigned devices." Thanks for the clarification. I tend not to use cache drives, so I'm not as close to that part of the restrictions. Note that (I believe) the limit on data drives is 28, *even* if you don't use dual parity. I.e. 29 data drives with single parity, or 30 data drives with no parity, are *not* options.
  11. I agree on the multiple arrays = multiple licenses; I suspect there would be licensing changes if multiple parity sets were added as an unRAID feature. By the way, I had heard that it's up to 30 drives now. 28 data + 2 parity, with cache drives cutting into that total. The most I've run is 24 drives (not counting my first build with each 'drive' being a RAID5 array), in the 88TB machine in my signature. And it has two complete backups of my primary server, another source of this suggestion. And definitely agree on the small demand point. So I don't imagine this would be added soon (if at all).
  12. My very first unRAID server was structured as you outline, only with RAID5 (not 6) built into the enclosures (not in RAID controllers), and I used a whole other enclosure (total of eight) for the unRAID parity drive: http://lime-technology.com/forum/index.php?topic=27327.msg332114#msg332114 But all the arguments for using unRAID instead of RAID can be brought to bear against that kind of drive array structure: can't mix drive sizes, can't incrementally grow a parity set, will lose all data in a parity set instead of just that on the failed drives when an unrecoverable situation occurs, ... I also like, for my always-on servers, that only the drive I'm reading from needs to spin up (plus the parity drives if writing). For my backup servers I now use Turbo Write mode. I do have a FreeNAS backup server and it has multiple fault-tolerant arrays. That's part of where the idea for this suggestion came from.
  13. Agreed. What I'm talking about though is breaking the drives into multiple DUAL-parity sets. And that's what I often do now. Combining those multiple servers into one saves on a few fronts:
     - unRAID license keys
     - motherboard/RAM/CPU/PSU/chassis overhead (both initial cost and ongoing energy consumption)
     The latter could range from 10 to 30 watts based on unRAID servers I've built over the years. More towards 10 watts these days. Admittedly I'd be more likely to use this on my backup servers as opposed to my always-on servers, which I've built to be very energy efficient, so that extra power consumption from splitting one server into multiple servers isn't a big deal. It's more about the license keys, extra initial cost, and *rack space*; for example, to split this server: https://lime-technology.com/forum/index.php?topic=2031.msg467864#msg467864 into two separate servers would likely cause it to need 8 rack spaces instead of the 6 rack spaces it uses as is. Yes, I could go to 1U chassis for the two separate server "heads", but 1U chassis are more difficult to cool as quietly as 2U (or taller) chassis, due to how loud 40mm fans are compared to 80mm fans that move the same amount of air.
  14. Yeah, I didn't really expect any action to be taken on this request any time soon. I just wanted to bring it up again, after Dual Parity was in place, to make sure people didn't think that Dual Parity made multiple parity sets unnecessary (since that seemed to be the outcome of the earlier discussion).
  15. +1 on this feature request. And apologies for resurrecting such an old thread; I wasn't able to find a more recent one, and wanted to preserve the previous discussion. (I guess I could have linked back to this one from a new one.)
     I get that Dual Parity is more fault tolerant than the same drives divided into two Single-Parity sets. (But perhaps not 3+ Single-Parity sets . . . granted you'll need more drives to move in that direction.) But now that we have support for Dual Parity, it's not an either-or choice. Dual Parity could be applied to any/all of the multiple parity sets.
     Besides improved fault tolerance, other benefits include the potential for decreased Parity Sync / Data Rebuild times on smaller parity sets (those comprised of 3TB drives versus 4TB drives, for example). Granted, a complete Parity Check on all parity sets wouldn't be faster, but rebuilds are when a parity set is more vulnerable to additional drive failures, and reducing the duration of those processes can help avoid a data-loss situation.
     Actually I can sort of see a situation where even a Parity Check on multiple parity sets could take less time than if they were all one parity set: take one parity set comprised of 3TB drives, the other of 4TB drives. In the single-parity-set situation, the 3TB drives are slowing the 4TB drives down for the portion of the Parity Check that they're involved in. In the two-parity-set situation (both Parity Checks run in parallel, of course), the 3TB drives can run at their own pace without slowing down the read rate of the 4TB drives, which then complete their check in less time. I realize that lowering Parity Check times isn't a big concern, but lowering Data Rebuild times *is* something to strive for, IMHO.
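     To put rough numbers on that last scenario, here is an illustrative Python sketch; the flat per-pair average rates (RATE_3TB, RATE_4TB) are assumptions loosely based on the test results posted elsewhere in this thread, not measurements of any particular array:

         # Compare one combined parity set (3TB + 4TB drives) against two
         # independent sets checked in parallel. Flat average rates are assumed;
         # real drives slow toward their inner tracks, so this is only directional.
         RATE_3TB = 120.0  # MB/s, assumed average for the 3TB drives
         RATE_4TB = 134.0  # MB/s, assumed average for the 4TB drives

         def hours(nbytes, rate_mb_per_s):
             return nbytes / (rate_mb_per_s * 1e6) / 3600

         # Combined set: the first 3TB runs at the slower drives' pace, the last 1TB alone.
         combined = hours(3e12, min(RATE_3TB, RATE_4TB)) + hours(1e12, RATE_4TB)

         # Separate sets checked in parallel: overall time is whichever set finishes last.
         parallel = max(hours(3e12, RATE_3TB), hours(4e12, RATE_4TB))

         print(f"single combined set : {combined:.1f} h")  # ~9.0 h
         print(f"two sets in parallel: {parallel:.1f} h")  # ~8.3 h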
  16. I get it now. Rather than generate new parity based on the current parity (since that's not valid), generate it from just the data. Thanks for that information and the tip on working around that behavior if desired.
  17. I just installed 6.2 Stable on a new server. Regarding md_write_method: on my other three unRAID servers, I mainly use read/modify/write, but I have one backup server that I use reconstruct write on. The behavior on those servers is as I expect. They are all also running 6.2 Stable. On this new server, no matter what I've tried, I always get the reconstruct write behavior. I've tried setting it via the web GUI and via the command line, rebooting after changing it, power cycling, both Auto and read/modify/write, and setting it to reconstruct write and then back. No joy. I can tell it's using the reconstruct write method both by the Reads/Writes columns on the main web GUI page and by the drive activity lights on the SAS enclosure. One possible explanation that comes to mind is that I didn't let the Parity Sync finish, wanting to test write speeds between the two write methods first. In fact I wondered if it would bother trying to maintain parity even when it's known to be invalid (the answer was yes). (Hopefully this is the right place to post this; if not, let me know and I'll post it there instead.) And thanks for everyone's hard work at getting this release out. syslog.txt
  18. I was on a similar search before my latest build, and had considered this one: http://www.istarusa.com/istarusa/products.php?model=D-3100HN#.V84II_nQfAQ 3U, 19" deep, 10 trayless hotswap bays. There is also an eight-bay version (D-380HN). Also considered the Norco RPC-2212: 2U, 12 hotswap bays, but it's ~26" deep. I got my latest build into just 2U, 15.5" deep, with room for nine drives (three trayless hotswap, six internal): http://lime-technology.com/forum/index.php?topic=47025.0 EDIT: The case I used in the above build won't hold nine drives when used with a micro-ATX motherboard (two fewer drives, I believe). I used a mini-ITX motherboard to have room for nine drives.
  19. UPDATE - Lots of changes:
     - increased RAM from 32GB to 64GB
     - increased protected capacity from 40TB to 48TB by adding a 6th data drive
     - upgraded to unRAID 6.2.0-beta21 (will go to rc4 or later at my next power cycle)
     - added second Parity drive
     - added warm spare (as unused Cache drive)
     Power consumption at idle hovers in the mid-34-watt range, so easily under 35 watts. The result is about 0.72 watts idle per protected terabyte. I welcome people posting their capacity/power numbers here. I understand that energy efficiency is not everyone's priority; still, if anyone has impressive numbers, I'd be interested to see them.
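     The watts-per-protected-terabyte figure is just idle draw divided by protected capacity; a trivial Python check using the numbers above:

         idle_watts = 34.5    # idle draw, mid-34-watt range
         protected_tb = 48    # six data drives behind dual parity
         print(f"{idle_watts / protected_tb:.2f} W idle per protected TB")  # ~0.72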
  20. Using BTRFS for the cache drives you should be able to do that. I've not used a cache drive/array/pool recently so I'm not 100% sure on that. I'm not sure how capable a "trial" version of unRAID is, but if it allows setting up a BTRFS cache array, you could test the idea that way, assuming you have some spare drives laying around (and two are the same size).
  21. Regarding the NVMe drives, my understanding is that the mainboard won't let you RAID those. So any RAIDing that would happen with them would need to be in software (i.e. provided by an OS).
  22. 30+ days later and no sign of the apparent kernel bug. Still holding steady at 32 watts (idle) for 40TB net capacity; the three hotswap bays remain empty. I have 2TB unused capacity, so it will be a little while before I add another data drive . . . most likely I'll move to Dual Parity at that time. Then adding a warm spare (as unused Cache drive) will round out the build: six data drives, two parity drives, one warm spare, with the data drives inside the chassis and the spare/parity drives in the hotswap bays. I believe I'll be under 36 watts idle in that configuration, so I should be back under 3/4 watt per protected TB.
  23. I found one of my first posts here, from about a year and a half ago, in the Show Me Your Builds thread, with this picture of my first unRAID server (64TB), in the same 12U rack as the new 88TB server above. That one took up all twelve rack spaces, this one just six. And that one took multiple days to check the parity. And used probably twice the power this new one does.
  24. So yeah, I don't keep this one on 24x7, what with the loud fans and high power consumption. On the Parity Check speed front, my last one took around 10.6 hours, which is still pretty good considering that's 88TB of dual-parity-protected data. Once I get the motherboard throughput up to match the drives/enclosures, I'll probably switch to Turbo-Write mode, which should cut down on how long I need it turned on to do my backups.
  25. In that last picture you can see the smart power strip I use to automatically turn on/off the enclosures when the main server powers up/down. Those enclosures are server grade, with redundant power supplies, heavy-duty fans, alarms, the works. I started with the motherboard that came with the chassis (Supermicro H8SMi-2, Opteron 1385, 16GB RAM), but the PCIe bus was very slow. Had an Asus motherboard laying around (P6T SE, i7-930, 6GB RAM) and am using that now, but it still can't quite keep up with all those drives during, for example, Parity Checks. I have a Supermicro X10SLL-F on order; I really just wanted the simpler X10SLL-S, but I found the "better" board for less ($81 versus $90).