bobkart

  1. Thanks Joe, I'll likely do something like that.
  2. I have a bunch of scripts in /boot/bin/ that I use often from the command line. After the upgrade to 6.8 I can no longer execute them. Checking the permissions I see 'rw' but no 'x', and trying to chmod them to 700 has no effect. Is there a way to use the scripts now as easily as before the upgrade? I'd much rather type 'foo' than 'bash /boot/bin/foo' each time I need to run one of these scripts (and I use them frequently). (A possible workaround is sketched after this list.)
  3. Archer, I'm interested to know what your idle power consumption is.
  4. A few more thoughts: Those Xeon E processors are likely difficult to find at the moment, but an i3-8100T would fit well. You don't necessarily need 2x m.2 slots on the motherboard to host a pair of NVMe drives: a simple adapter lets you add an m.2 drive via a PCIex4 slot, assuming enough of those are left over from other uses. I'd lean slightly towards uATX over ATX in the interest of reducing idle power consumption. One thing I recently realized regarding the ECC-RAM-vs-SSD-cache-pool question: with the ECC RAM solution there is still a ~5-second delay before a write transfer starts while the target and parity drives spin up (if they weren't already). I don't believe this happens with an SSD cache pool, so that's another mark in favor of the SSD approach. My motivator was power consumption, and ECC RAM wins over a pair of SSDs in that regard. Regarding cases, the Norco RPC-3216 is 1U less in racking height than the 20-drive version you referred to. It will be hard to find one with much less depth (~24") that still has hotswap bays. There are shallower models with, say, eight hotswap drive bays and the motherboard fitting under the drive bays, but I've not run into anything like that for more than twelve drives. A final option would be a 2U, twelve-drive-bay model like the RPC-2212. One approach I've used (and still use for my 32-drive, 84TB monster) is external drive enclosures, but that approach definitely loses on power efficiency, as the separate enclosure typically has its own power supply and thus adds more overhead to the total.
  5. I get that priorities are different from person to person. Mostly I ask because the OP has these concerns (energy efficiency and fast access). Sounds like you have a SATA SSD cache pool now, which of course can easily keep up with a couple Gb/s of writes. That's a good-looking Supermicro board for sure . . . 2x m.2 slots are perfect for a high-performance cache pool (10Gb/s). And you'll be able to ditch the HBA until you add a ninth drive. Note that most people recommend a pair of drives in the cache pool as opposed to just one, so a drive failure won't necessarily result in data loss. EDIT: Recalling now that you're already past eight drives in your array (forgot to count the parity drive).
  6. Sounds like you're very close to half a watt of idle power consumption per terabyte of protected capacity, and it will only get better for you (that ratio) as you add or upgrade drives. I'm going to guess at 35-40 watts for your idle power consumption. Wondering now if you use single or double parity, and how your drives are connected (HBA or straight to motherboard). How are you on handling full-network-speed writes? I.e. the ECC-versus-cache-pool question raised here.
  7. That's a great-looking server, thanks for sharing those pictures. Those trayless hotswap bays are nice . . . they're the same ones I have in my two primary servers (the three-drive version). For me rackmount makes more sense, as my whole entertainment center consists of side-by-side racks (four Middle Atlantic RK12s), and my server closet has a Samson SRK21 to hold things like the primary server pair and a UPS. Once things start stacking up like that, racks can give you higher density. Care to share your idle power consumption and total protected capacity numbers?
  8. The NVMe approach wins on cost per capacity and allows much larger capacity (i.e. larger single writes before slowdown). The ECC approach wins on possibly lower power consumption (I see 16GiB DIMMs using well under one watt), certainly less complexity (and thus less motherboard support needed), and it's much more transparent in use (no mover step involved). I started with 64GiB of ECC memory and was forced down to 32GiB when I moved to the mini-ITX motherboard we've discussed; I find that still sufficient for the writes I tend to do (it easily handles 24-30GB at a time with no slowdown). Regarding server boards, I'm a big fan of those, and have a handful of other file servers (some unRAID, some FreeNAS) that all use SuperMicro server boards (although they could arguably be considered workstation boards, as they're LGA1150/LGA1151 rather than LGA2011). But those servers aren't on all the time, and my impression is that as you go from desktop to workstation to server boards, capabilities increase and so does power consumption (even at idle). That's the main motivation for going with a small workstation board in my always-on applications. (A desktop board would likely use less power, but I'd lose ECC.)
  9. One key thing to note, which I just remembered, is that the i5s don't support ECC memory. Only Celeron, Pentium, i3 and Xeon do. Odd, I know. You could still support full 10Gb/s write speeds without ECC memory if you used NVMe cache drives (SATA won't cut it, as I'm sure you know), though obviously the price goes up (see the bandwidth sketch after this list). On cooling:
     - You could probably get away with just the three 12cm fans (in the middle of the chassis), which will be quieter than those plus the two 8cm fans (at the rear, I believe).
     - Not sure about that motherboard, but most support throttling the fans based on temperatures; I've yet to mess with that so can't help much.
     - A good active CPU cooler will be plenty quiet (Noctua is what I use), but besides winning slightly on power consumption, a passive CPU cooler has one less point of failure (no cooler fan to fail and fry your processor).
     - I doubt SSD temperatures will be an issue . . . I frequently run mine a good 10C hotter (up to 50C) than my mechanical drives, with no apparent adverse consequences.
     I've never used WOL so can't help there. Also no 10GbE experience, but I suspect any motherboard made in the last few years can easily support such an adapter card. On power consumption, just taking a wild guess, but I suspect those backplanes will add a handful of watts compared to direct connections. Possibly you could power them up row-by-row as you add drives, though; not sure, as I've never taken a close look at one of those chassis. It seems possible that a setup close to mine, aside from the obvious differences (chassis/backplanes, more/larger fans, and perhaps a larger motherboard and more memory), could come in at 30-35 watts idle power consumption (with the same eight 8TB archive drives installed).
  10. For the chassis I used the iStarUSA D-214-MATX. I didn't mention that earlier for a couple of reasons:
      - You're targeting more drives than that chassis can accommodate.
      - I had to use parts from another chassis to get the archive drives to work (due to the lack of middle mounting holes).
      Here is the UCD thread I made for this build: https://forums.unraid.net/topic/45322-48tb-server-under-35-watts-idle-now-under-24-watts/ I had a different motherboard at first but it let me down . . . the one I'm using now has been rock solid. Regarding motherboard physical size, there are mATX and ATX versions of that mini-ITX board (C236M WS and C236 WS); they will probably use marginally more power than the smallest version. I had tighter constraints than you do: it had to be mini-ITX to fit that case with all six internal drives installed. Other constraints included eight SATA ports and support for ECC memory. I have similar unRAID setups with all thirty drives involved, including dual parity, so from my experience processor performance isn't a problem as far as parity checks etc. are concerned. Regarding i5 versus i3 and idle power consumption, I suspect they're very close; most positions I read on that question indicate that these more modern processors have no problem throttling down to low levels of power consumption when not being used much. Regarding 'T' versus non-'T' versions, that same argument would seem to apply, but maybe not as strongly. I chose the 'T' versions because they allowed me to use a passive cooler, saving another fraction of a watt. I suspect any sixth-generation-or-newer i3/i5 processor (LGA1151) and corresponding 'economy' motherboard will yield acceptable idle power consumption. I've built Windows machines using such configurations that idle at ~8 watts, and that was a few years ago, so I suspect it's only better with seventh/eighth-generation processors (e.g. with a B250 chipset if ECC isn't a requirement).
  11. One option is the Lenovo SA120. I have one and it works fine, with just two caveats: fan noise, and it's geared more towards SAS drives than SATA drives. It connects via a single four-lane SAS cable (SFF-8088) . . . that limits throughput to roughly 2.4GB/s, so ~200MB/s per drive will be the ceiling if all twelve drives are active at once (see the bandwidth sketch after this list). A SAS HBA with external ports can drive that enclosure just fine from unRAID; I usually use a 9207-8E.
  12. I have two nearly identical servers that are always on: one is my primary file server, the other is a backup that receives rsync updates from the primary at least daily. I stressed energy efficiency in those builds and achieved just under half a watt of idle power per terabyte of capacity: under 24 watts (at idle) for 48TB of double-parity-protected capacity (see the watts-per-terabyte sketch after this list). Here are the relevant build details:
      - ASRock Rack C236 WSI
      - Intel i3 6100T or Pentium G4400T
      - pair of Micron 16GiB DDR4-2133 ECC UDIMMs
      - 8x Seagate ST8000AS0002
      - SeaSonic SS-400FL2
      - pair of Xigmatek 80mm fans
      This all fits in a 2U rackmount chassis under 16" deep, and is very quiet. The SATA/SAS backplanes in your chassis will likely add overhead beyond what I see, as I connected the six data drives directly rather than through drive bays; the two parity drives are in a 3-in-2 trayless hot-swap drive cage. Keep in mind that I only use these servers for file serving, so if you have needs beyond that, the processor choice may need a rethink.
  13. Very nice to see that the Parity Check speed lost between 6.4.0 and 6.4.1 was regained in 6.5.0.
  14. I just upgraded one of my backup servers from 6.4.0 to 6.4.1. For the first time in several updates, the time for a parity check has gone up. Just FYI; it's only 4.5%.
  15. Yes, using the x8 controller in an x4 slot will only reduce the per-channel bandwidth to ~500MB/s, which is easily enough for a spinning drive; a 7,200RPM 8TB drive might just break 250MB/s on the outer cylinders (see the PCIe bandwidth sketch after this list). Also note that the X11SSL, which drops one PCIex4 slot compared to the X11SSM, will suffice if all you need is three PCIe slots in total. You might be able to find the SSL at a lower price than the SSM.
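
On the /boot scripts question in post 2: from 6.8 on, the flash drive appears to be mounted without execute permission, so chmod on /boot has no effect. One commonly suggested workaround, sketched below, is to copy the scripts to an executable location at boot. This is only a minimal sketch: the destination /usr/local/bin and the use of the go file are the usual Unraid conventions assumed here, not something confirmed in this thread.

    # Sketch for the Unraid 'go' file (/boot/config/go), run at each boot:
    # copy the scripts off the (noexec) flash drive to a location that allows execution.
    mkdir -p /usr/local/bin
    cp /boot/bin/* /usr/local/bin/
    chmod +x /usr/local/bin/*
    # After this, typing 'foo' works directly instead of 'bash /boot/bin/foo'.

An alias (e.g. alias foo='bash /boot/bin/foo') added to a shell profile would accomplish much the same without copying.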
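
On the "SATA won't cut it" remark in post 9, a rough back-of-the-envelope comparison; the SSD figures below are typical vendor numbers rather than measurements from this thread, and a mirrored cache pool writes at roughly single-drive speed.

    # Rough line-rate check for absorbing a full 10GbE write stream (figures approximate)
    awk 'BEGIN { print 10 * 1000 / 8 }'   # 10 Gb/s network -> ~1250 MB/s to absorb
    # A SATA III SSD writes at roughly 500-550 MB/s, well short of 1250 MB/s.
    # A PCIe 3.0 x4 NVMe SSD writes at roughly 2000-3500 MB/s, comfortably above it.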
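
The enclosure throughput in post 11 follows from the four SAS lanes in the SFF-8088 cable, assuming 6Gb/s SAS links and 8b/10b encoding overhead (typical for that enclosure, though not stated in the post).

    # 4 SAS lanes x 6 Gb/s each, ~80% usable after 8b/10b encoding, converted to MB/s
    awk 'BEGIN { print 4 * 6 * 0.8 * 1000 / 8 }'   # -> 2400 MB/s (~2.4 GB/s) total
    awk 'BEGIN { print 2400 / 12 }'                # -> 200 MB/s per drive with all 12 active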
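
The half-a-watt-per-terabyte figure in post 12 is simply idle draw divided by protected capacity:

    # idle watts divided by protected terabytes
    awk 'BEGIN { printf "%.2f\n", 24 / 48 }'   # -> 0.50 W/TB (under 24 W idle for 48 TB)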
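
The ~500MB/s per-channel figure in post 15 works out if the controller is an eight-port, PCIe 3.0 HBA (an assumption here; the post doesn't name the card) with roughly 985MB/s of usable bandwidth per lane:

    # ~985 MB/s usable per PCIe 3.0 lane; 4 lanes shared across 8 drive channels
    awk 'BEGIN { print 985 * 4 / 8 }'   # -> 492.5, i.e. the ~500 MB/s per-channel figure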