bobkart

Everything posted by bobkart

  1. Thanks Joe, I'll likely do something like that.
  2. I have a bunch of scripts in /boot/bin/ that I use often from the command line. After the upgrade to 6.8 I can no longer execute them. Checking the permissions I see 'rw' but no 'x', and trying to chmod them to 700 has no effect. Is there a way to use the scripts as easily as before the upgrade? I'd much rather type 'foo' than 'bash /boot/bin/foo' each time I need to run one of these scripts (and I use them frequently).
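     For reference, the usual cause is that newer releases mount the flash device 'noexec', so the execute bit can't be set on anything under /boot regardless of chmod. A minimal sketch of one common workaround (the paths here are only examples) is to copy the scripts to an executable location in RAM at boot, via /boot/config/go:

         # Appended to /boot/config/go, so it runs on every boot.
         # /usr/local/bin lives in RAM, hence the copy must be repeated each boot.
         mkdir -p /usr/local/bin
         cp /boot/bin/* /usr/local/bin/
         chmod +x /usr/local/bin/*

     After that, 'foo' works from the command line again, since /usr/local/bin is already on the PATH.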
  3. Archer, I'm interested to know what your idle power consumption is.
  4. A few more thoughts: Those Xeon E processors are likely difficult to find at the moment, but an i3-8100T would fit well. You don't necessarily need 2x m.2 slots on the motherboard to host a pair of NVMe drives: a simple adapter allows adding an m.2 drive via a PCIe x4 slot. This of course assumes enough of those are left over from other uses. I'd lean slightly towards uATX over ATX in the interests of reducing idle power consumption.
     One thing I recently realized regarding the ECC-RAM-vs-SSD-cache-pool question: with the ECC RAM solution there is still a ~5-second delay before a write transfer starts while the target and parity drives are spun up (if they weren't already). I don't believe this happens with an SSD cache pool, so that's another mark in favor of the SSD approach. My motivator was power consumption, and ECC RAM wins over a pair of SSDs in that regard.
     Regarding cases, the Norco RPC-3216 is 1U less in racking height than the 20-drive version you referred to. It will be hard to find one with much less depth (~24") that still has hotswap bays. There are some shallower models with, say, eight hotswap drive bays and the motherboard fitting under the drive bays, but I've not run into anything like that for more than twelve drives. A final option would be a 2U, twelve-drive-bay model like the RPC-2212. One approach I've used (and still use for my 32-drive, 84TB monster) is external drive enclosures, but that approach definitely loses on power efficiency, as the separate enclosure typically has its own power supply and thus contributes more overhead to the total.
  5. I get that priorities are different from person to person. Mostly I ask because the OP has these concerns (energy efficiency and fast access). Sounds like you have a SATA SSD cache pool now, which of course can easily keep up with a couple Gb/s of writes. That's a good-looking Supermicro board for sure . . . 2x m.2 slots are perfect for a high-performance cache pool (10Gb/s). And you'll be able to ditch the HBA until you add a ninth drive. Note that most people recommend a pair of drives in the cache pool as opposed to just one, so a drive failure won't necessarily result in data loss. EDIT: Recalling now that you're already past eight drives in your array (forgot to count the parity drive).
  6. Sounds like you're very close to half a watt of idle power consumption per terabyte of protected capacity, and it will only get better for you (that ratio) as you add or upgrade drives. I'm going to guess at 35-40 watts for your idle power consumption. Wondering now if you use single or double parity, and how your drives are connected (HBA or straight to motherboard). How are you on handling full-network-speed writes? I.e. the ECC-versus-cache-pool question raised here.
  7. That's a great-looking server, thanks for sharing those pictures. Those trayless hotswap bays are nice . . . they're the same ones I have in my two primary servers (the three-drive version). For me rackmount makes more sense, as my whole entertainment center consists of side-by-side racks (four Middle Atlantic RK12s). Then my server closet has a Samson SRK21 to hold things like the primary server pair and a UPS. Once things start stacking up like that, racks can give you higher density. Care to share your idle power consumption and total protected capacity numbers?
  8. The NVMe approach wins on cost per capacity and allows much larger capacity (i.e. larger single writes before slowdown). The ECC approach wins on possibly less power consumption (I see 16GiB DIMMs using well under one watt), certainly less complexity (and thus less motherboard support needed), and it's much more invisible (i.e. no mover step involved). I started with 64GiB of ECC memory and was forced down to 32GiB when I moved to the mini-ITX motherboard we've discussed. I find that to still be sufficient for the writes I tend to do (it easily handles 24-30GB at a time with no slowdown).
     Regarding server boards, I'm a big fan of those, and have a handful of other file servers (some unRAID, some FreeNAS) that all use Supermicro server boards (although they could arguably be considered workstation boards, as they're LGA1150/LGA1151 as opposed to LGA2011). But these servers aren't on all the time, and my impression is that as you go from desktop to workstation to server board, as capabilities increase, so does power consumption (even at idle). So that's the main motivation for going with a small workstation board in my always-on applications. (A desktop board would likely use less power, but I'd lose ECC.)
  9. One key thing to note, which I just remembered, is that the i5s don't support ECC memory; only Celeron, Pentium, i3 and Xeon do. Odd, I know. You could still support full 10Gb/s write speeds without ECC memory if you used NVMe cache drives (SATA won't cut it, as I'm sure you know). Obviously the price goes up though.
     On cooling:
     - You could probably get away with just the three 12cm fans (in the middle of the chassis), which will be quieter than those plus the two 8cm fans (at the rear, I believe).
     - Not sure about that motherboard, but most support throttling the fans based on temperatures; I've yet to mess with that, so can't help much.
     - A good active CPU cooler will be plenty quiet (Noctua is what I use), but besides winning slightly on power consumption, a passive CPU cooler wins on having one less point of failure (no cooler fan to fail and fry your processor).
     - I doubt SSD temperatures will be an issue . . . I frequently run mine a good 10C hotter (up to 50C) than my mechanical drives, with no apparent adverse consequences.
     I've never used WOL so can't help there. Also no 10GbE experience, but I suspect any motherboard made in the last few years can easily support such an adapter card. On power consumption, just taking a wild guess, but I suspect those backplanes will add a handful of watts compared to direct connections. Possibly you could power them up row-by-row as you add drives, though; not sure, as I've never taken a close look at one of those chassis. It seems possible that a setup close to mine, with the obvious differences (chassis/backplanes, more/larger fans, and perhaps a larger motherboard and more memory), could come in at 30-35 watts idle power consumption (for the same eight 8TB archive drives installed).
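     As a rough back-of-the-envelope for the "SATA won't cut it" point above (treating 1Gb/s as roughly 125MB/s of payload, and assuming typical drive numbers):

         # 10GbE can deliver on the order of 1.0-1.25 GB/s of incoming writes.
         # A single SATA III SSD tops out around 550 MB/s, and a mirrored SATA
         # cache pool still writes at single-drive speed, so it can't absorb a
         # full 10Gb/s stream; a PCIe 3.0 x4 NVMe drive (~3 GB/s) can.
         echo "10GbE payload: ~$(( 10 * 125 )) MB/s  vs  SATA SSD: ~550 MB/s"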
  10. For the chassis I used the iStarUSA D-214-MATX. I didn't mention that earlier for a couple of reasons:
      - you're targeting more drives than that chassis can accommodate
      - I had to use parts from another chassis to get the archive drives to work (due to the lack of middle mounting holes)
      Here is the UCD thread I made for this build: https://forums.unraid.net/topic/45322-48tb-server-under-35-watts-idle-now-under-24-watts/ I had a different motherboard at first but it let me down . . . the one I'm using now has been rock solid.
      Regarding motherboard physical size, there are mATX and ATX versions of that mini-ITX board (the C236M WS and C236 WS). They will probably use marginally more power than the smallest version. I had tighter constraints than you do: it had to be mini-ITX to fit that case with all six internal drives installed. Other constraints included eight SATA ports and support for ECC memory. I have similar unRAID setups with all thirty drives involved, including dual parity, so from my experience there isn't a problem with processor performance as far as parity checks etc. are concerned.
      Regarding i5 versus i3 and idle power consumption, I suspect they're very close. Most positions I read on that question indicate that these more modern processors have no problem throttling down to low levels of power consumption when not being used much. Regarding 'T' versus non-'T' versions, that same argument would seem to apply, but maybe not as strongly. I chose the 'T' version because it allowed me to use a passive cooler, saving another fraction of a watt. I suspect any sixth-generation-or-newer i3/i5 processor (LGA1151) and corresponding 'economy' motherboard will yield acceptable idle power consumption results. I've built Windows machines using such configurations that idle at ~8 watts, and that was a few years ago, so I suspect it's only better with seventh/eighth-generation processors (e.g. with a B250 chipset, if ECC isn't a requirement).
  11. One option is the Lenovo SA120. I have one and it works fine, with just two caveats: fan noise, and it's geared more towards SAS drives than SATA drives. It connects via a single four-lane SAS cable (SFF-8088) . . . that will limit aggregate speed to at most ~2.4GB/s, so ~200MB/s per drive will be the limit if all twelve drives are involved. A SAS HBA with external ports can drive that enclosure just fine from unRAID; I usually use a 9207-8E.
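      The arithmetic behind those numbers, for reference (using the usual rule of thumb of ~100MB/s of payload per 1Gb/s of 6Gb/s SAS link, which folds in the encoding overhead):

          # One SFF-8088 cable carries four 6Gb/s SAS lanes:
          #   4 x 6 Gb/s = 24 Gb/s  ~=  2.4 GB/s aggregate
          # Shared across all twelve drives transferring at once:
          #   2.4 GB/s / 12  ~=  200 MB/s per drive
          echo "$(( 4 * 6 )) Gb/s total ~= $(( 4 * 6 * 100 )) MB/s, or $(( 4 * 6 * 100 / 12 )) MB/s per drive"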
  12. I have two nearly-identical servers that are always on: one is my primary file server, the other is a backup, and is rsync'ed to from the primary at least daily. I stressed energy efficiency in those builds, and was able to achieve power consumption of just under half a watt per terabyte of capacity: under 24 watts (at idle) for 48TB of double-parity-protected capacity. Here are the relevant build details:
      - ASRock Rack C236 WSI
      - Intel i3-6100T or Pentium G4400T
      - pair of Micron 16GiB DDR4-2133 ECC UDIMMs
      - 8x Seagate ST8000AS0002
      - SeaSonic SS-400FL2
      - pair of Xigmatek 80mm fans
      This all fits in a 2U rackmount chassis under 16" in depth, and is very quiet. The SATA/SAS backplanes in your chassis will likely add overhead beyond what I see, as I connected the six data drives directly rather than in drive bays. The two parity drives are in a 3-in-2 trayless hot-swap drive cage. Keep in mind that I only use these servers for file serving, so if you have needs beyond that, the processor choice may need a rethink.
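      For the daily sync, a minimal sketch of the kind of job described above (the hostname, share paths and schedule here are made up for illustration; the real setup may differ):

          # Run nightly from the primary, e.g. via cron or the User Scripts plugin:
          #   0 3 * * *  /boot/bin/mirror-to-backup
          # mirror-to-backup (hypothetical): mirror the primary's user shares to the
          # backup server over SSH, deleting anything removed from the primary so
          # the two stay identical.
          rsync -a --delete --stats /mnt/user/ root@backup-server:/mnt/user/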
  13. Very nice to see that the Parity Check speed lost between 6.4.0 and 6.4.1 was regained in 6.5.0.
  14. I just upgraded one of my backup servers from 6.4.0 to 6.4.1. For the first time in several updates the time for a parity check has gone up. Just FYI; it's only 4.5%.
  15. Yes, using the x8 controller in an x4 slot will only reduce the per-channel bandwidth to ~500MB/s, which is easily enough for a spinning drive; a 7,200RPM 8TB drive might just break above 250MB/s on the outer cylinders. And note that the X11SSL, which drops one PCIe x4 slot compared to the X11SSM, will suffice if all you need is three PCIe slots in total. You might be able to find the SSL at a lower price than the SSM.
  16. Good questions. Moving drives away from the 'bottleneck' controller *will* help, but the three-card option still wins; here's why. That board has x8/x8/x4/x4 slots (all PCIe 3.0). The x4 slots are the concern: they'll have ~4GB/s of bandwidth instead of ~8GB/s. Splitting that over eight channels leaves ~500MB/s per channel, plenty for a spinning drive. This of course leaves out concerns about overall motherboard PCIe bandwidth, but those are present regardless of how the drives connect. The -16i option loses only because of its PCIe 2.0 connection, which ends up quartering the per-channel bandwidth instead of just halving it; otherwise it'd be pretty much a toss-up between the two options (cost notwithstanding). Probably what you weren't considering is that you *can* run an x8 card in an x4 slot (if physically compatible): the card and host negotiate the link and you just get a bandwidth reduction.
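      To make the slot arithmetic concrete (approximating PCIe 3.0 at ~1GB/s per lane and PCIe 2.0 at ~500MB/s per lane):

          # 9207-8i dropped into a PCIe 3.0 x4 slot: ~4 GB/s shared by 8 drive channels.
          echo "PCIe3 x4, 8 channels:  ~$(( 4 * 1000 / 8 )) MB/s per channel"
          # 9201-16i on its native PCIe 2.0 x8 link: ~4 GB/s shared by 16 drive channels.
          echo "PCIe2 x8, 16 channels: ~$(( 8 * 500 / 16 )) MB/s per channel"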
  17. I can't be sure about the flashing; all I know is I've never needed to flash any of the 92xx cards I've bought, and that count is around a dozen. I did check my empty boxes and see that the one brand-new such card I bought was a 9211-4i, not a 9207-8e. Hopefully someone else can chime in on that. And, as long as we're talking about saving cost, three 9207-8i's look to come in $30-$40 under the two-card option you mention. But maybe your PCIe slots are otherwise occupied. The last thing I noticed is that the 9201-16i connects via PCIe 2.0 x8, so 16 channels over ~4GB/s leaves only ~250MB/s per channel . . . that might be cutting it close for some of the newer/faster drives (8TB and above).
  18. No flashing; they're 9207-8e's as opposed to 8i's, mostly bought used on eBay, but I believe one was brand new. Regarding memory, I'd lean towards a single 16GB ECC UDIMM instead of two 8GB's. Of course pricing could work against that choice. And: Micron/Crucial are pretty much the same, just FYI. On your controller prices, I see them on eBay in the $60 range. Don't know if that's an option for you.
  19. I'll second the ECC suggestion. And the X11SSM. Regarding SAS cards, the 9207's are solid performers; I have about six of those in various 16-to-24-drive setups. Recently I've tried some 9300-8i's with good success. Probably not as cost-effective as the 9207's but certainly more future-proof (12Gb/s versus 6Gb/s). EDIT: Admittedly the 12Gb/s mentioned is only for SAS drives, which don't tend to be used as often in unRAID servers. And of course the speed of the drives involved is usually the bottleneck, not the connection bandwidth.
  20. I've had good results with Supermicro motherboards. For the E3-1200 v5/v6 processors, something like their X11SSL works well, with 6 SATA ports. Bump up to the X11SSM and you get 8 SATA ports. The top E3 v6 Xeon will come close to doubling the processing power of your current processor. The v5's will be a bit more affordable. To go past 4C/8T in a Xeon I suggest the E5-1600/2600 v3/v4 series. The E5-1650 v4 is good bang for the buck, at well over double your current processing power, and can be found on eBay in the $500 range. Drop back to the v3 series for even more cost effectiveness. Motherboards will be more expensive; even the smallest Supermicro model (X10SRM-F) will run you over $200 unless you find a good deal. Plenty of SATA ports though. For memory I tend towards Crucial, Samsung, and occasionally Hynix. EDIT: Forgot to mention: onboard VGA on the motherboards I mentioned . . . no graphics card needed.
  21. I swapped out the 4GB non-ECC DIMM for a 16GB ECC DIMM, and was surprised to see the idle power consumption drop, from 23.0 watts to 22.8 watts.
  22. Thanks for the information Earthworm, and for the offer to take those DIMMs off my hands. I may take you up on that; let me try a few more things first. I had trouble getting the new board to boot up, until I reseated the modular PSU cable on the PSU end. Then good. So this could have been the problem all along. I'll try the old board with a known-good PSU soon. It may turn into the online backup system I had planned for the new board, as I have pretty much all the other parts needed to build that. Parity Check on the new system finished in a little bit less time than the last one before the old board had the problem:
      Last check completed on Fri 25 Aug 2017 03:20:20 PM PDT (today), finding 0 errors.
      Duration: 15 hours, 17 minutes, 29 seconds. Average speed: 145.4 MB/sec
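      As a sanity check, that duration and average speed are consistent with one full pass over an 8TB parity drive:

          # 15h 17m 29s = 55,049 seconds at an average of 145.4 MB/s:
          #   55,049 s x 145.4 MB/s ~= 8,004,000 MB, i.e. roughly 8TB read.
          echo "~$(( (15*3600 + 17*60 + 29) * 1454 / 10 )) MB read during the check"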
  23. New motherboard and processor are in place, working well so far. Will run a Parity Check soon. Only have 4GB of non-ECC memory in there for now, but the change gave a substantial improvement in power consumption: 23.0 watts at idle, compared to the mid-33s before. *And* more processing power (i3-6100T). That's over 2TB of single-parity-protected storage per idle watt. It will creep up more towards 24 watts when I put more memory in.
  24. Thanks for the advice Garycase. I tested the power supply with a power-supply tester, so it hasn't 'obviously' failed, but it could still be off just enough to prevent that board from booting. Will circle back to that. No video comes out at any time over the course of what would be the boot sequence. Everything powers up (drives, fans) and LEDs on the motherboard come on, but no video. That might be acceptable for now if it would finish the boot sequence, but no joy. I also cycled through the DIMMs, one at a time in the primary DIMM slot, to rule out a DIMM failure, and I reset the CMOS (by removing the battery for several minutes). I'm using a UPS so should not have experienced any power loss. There was a series of serious-looking errors in the log (I was 'tail -f'-ing the log in Emacs/Cygwin). Something about I/O and locking, I think. I was hoping a reboot would clear it up, so I didn't save it.
  25. Went to write some files to this server earlier today and it was unresponsive. No video, wouldn't boot. The motherboard looks to have failed. I just finished moving the drives to a hastily-assembled system, using parts I had lying around. No data loss as far as I can tell. Got everything backed up and am running a parity check now. By coincidence I had just ordered another motherboard, to start a build of an online backup (mirror) system for this one. Looks like that new board (ASRock Rack C236 WSI) will instead replace this failed Avoton board.