About bobkart

  1. Thanks Joe, I'll likely do something like that.
  2. I have a bunch of scripts in /boot/bin/ that I use often from the command line. After the upgrade to 6.8 I can no longer execute them. Checking the permissions I see 'rw' but no 'x', and trying to chmod them to 700 has no effect. Is there a way to use the scripts as easily as before the upgrade? I'd much rather type 'foo' than 'bash /boot/bin/foo' each time I need to run one of these scripts (and I use them frequently).
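     One workaround, sketched under the assumption that the flash drive is FAT-formatted and mounted noexec (so chmod there is a no-op) and that the scripts can simply be copied to a normal filesystem at boot: the `install_flash_scripts` name and the `/usr/local/bin` destination are illustrative, not an official unRAID mechanism.

     ```shell
     # Sketch: copy scripts off the noexec flash to an exec-friendly
     # filesystem and mark them executable there.
     install_flash_scripts() {
       src="$1"   # e.g. /boot/bin (noexec flash)
       dst="$2"   # e.g. /usr/local/bin (normal filesystem, already on PATH)
       mkdir -p "$dst"
       cp "$src"/* "$dst"/
       chmod +x "$dst"/*
     }
     # A call like the following could go in /boot/config/go so it runs each boot:
     #   install_flash_scripts /boot/bin /usr/local/bin
     ```

     After that, typing 'foo' works again as long as the destination directory is on PATH.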
  3. Archer, I'm interested to know what your idle power consumption is.
  4. A few more thoughts: Those Xeon E processors are likely difficult to find at the moment. But an i3-8100T would fit well. You don't necessarily need 2x m.2 slots on the motherboard to host a pair of NVMe drives: a simple adapter allows adding an m.2 drive via a PCIex4 slot. This of course assumes enough of those are left over from other uses. I'd lean slightly towards uATX over ATX in the interests of reducing idle power consumption. One thing I recently realized regarding the ECC-RAM-vs-SSD-cache-pool question: in the ECC RAM solution there is still a ~5-second de…
  5. I get that priorities are different from person to person. Mostly I ask because the OP has these concerns (energy efficiency and fast access). Sounds like you have a SATA SSD cache pool now, which of course can easily keep up with a couple Gb/s of writes. That's a good-looking Supermicro board for sure . . . 2x m.2 slots are perfect for a high-performance cache pool (10Gb/s). And you'll be able to ditch the HBA until you add a ninth drive. Note that most people recommend a pair of drives in the cache pool as opposed to just one, so a drive failure won't necessarily result in data loss.
  6. Sounds like you're very close to half a watt of idle power consumption per terabyte of protected capacity, and it will only get better for you (that ratio) as you add or upgrade drives. I'm going to guess at 35-40 watts for your idle power consumption. Wondering now if you use single or double parity, and how your drives are connected (HBA or straight to motherboard). How are you on handling full-network-speed writes? I.e. the ECC-versus-cache-pool question raised here.
  7. That's a great-looking server, thanks for sharing those pictures. Those trayless hotswap bays are nice . . . they're the same ones I have in my two primary servers (the three-drive version). For me rackmount makes more sense as my whole entertainment center is composed of side-by-side racks (four Middle Atlantic RK12s). Then my server closet has a Samson SRK21 to hold things like the primary server pair and a UPS. Once things start stacking up like that, racks can give you higher density. Care to share your idle power consumption and total protected capacity numbers?
  8. The NVMe approach wins on cheaper-per-capacity and allows much larger capacity (i.e. larger single writes before slowdown). The ECC approach wins on possibly less power consumption (I see 16GiB DIMMs using well under one watt), certainly less complexity (and thus motherboard support needed) and it's much more invisible (i.e. no mover step involved). I started with 64GiB of ECC memory and was forced down to 32GiB when I moved to the mini-ITX motherboard we've discussed. I find that to still be sufficient for the writes I tend to do (easily handles 24-30GB at a time with no slowdown).
  9. One key thing to note, that I just remembered, is that the i5's don't support ECC memory. Only Celeron, Pentium, i3 and Xeon. Odd, I know. You could still support full 10Gb/s write speeds without ECC memory if you used NVMe cache drives (SATA won't cut it as I'm sure you know). Obviously price goes up though. On cooling: - you could probably get away with just the three 12cm fans (in the middle of the chassis), which will be quieter than those plus the two 8cm fans (at the rear I believe those would be) - not sure on that motherboard but most support throttling the…
  10. For chassis I used the iStarUSA D-214-MATX. I didn't mention that earlier for a couple of reasons: - you're targeting more drives than that chassis can accommodate - I had to use parts from another chassis to get the archive drives to work (due to the lack of middle mounting holes) Here is the UCD thread I made for this build: I had a different motherboard at first but it let me down . . . the one I'm using now has been rock solid. Regarding mothe…
  11. One option is the Lenovo SA120. I have one and it works fine with just two caveats: fan noise, and it's geared more towards SAS drives than SATA drives. It connects via a single four-lane SAS cable (SFF-8088) . . . that will limit speeds to at most 2.4GB/s, so ~200MB/s per drive will be the limit if all twelve drives are involved. A SAS HBA with external ports can drive that enclosure just fine from unRAID; I usually use a 9207-8E.
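     The bandwidth figures in that post can be sanity-checked; this sketch assumes SAS2 links (6Gb/s raw per lane) with 8b/10b encoding, so 80% of the raw rate is usable payload.

     ```shell
     # Back-of-envelope check of the SFF-8088 numbers above (SAS2 assumed):
     # 4 lanes x 6Gb/s raw, 8b/10b encoding leaves 80% as payload.
     awk 'BEGIN {
       lanes = 4; gbps_per_lane = 6; encoding = 0.8; drives = 12
       total_MBps = lanes * gbps_per_lane * encoding * 1000 / 8   # Gb/s -> MB/s
       printf "total: %d MB/s, per drive: %d MB/s\n", total_MBps, total_MBps / drives
     }'
     ```

     That lands on ~2400 MB/s total, i.e. ~200 MB/s per drive with all twelve active, matching the post.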
  12. I have two nearly-identical servers that are always on: one is my primary file server, the other is a backup, and is rsync'ed to from the primary at least daily. I stressed energy efficiency in those builds, and was able to achieve power consumption of just under half a watt per terabyte of capacity: under 24 watts (at idle) for 48TB of double-parity-protected capacity. Here are the relevant build details: - ASRock Rack C236 WSI - Intel i3 6100T or Pentium G4400T - pair of Micron 16GiB DDR4-2133 ECC UDIMMs - 8x Seagate ST8000AS0002 - SeaSonic SS-400
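     The watts-per-terabyte figure quoted in that build works out as follows (numbers taken from the post itself):

     ```shell
     # Idle efficiency of the build above: idle watts divided by protected TB.
     idle_watts=24
     capacity_tb=48
     awk -v w="$idle_watts" -v c="$capacity_tb" \
       'BEGIN { printf "%.2f W/TB at idle\n", w / c }'
     ```

     Since both figures are stated as "under" their nominal values, the true ratio sits just below 0.50 W/TB.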
  13. Very nice to see that the Parity Check speed lost between 6.4.0 and 6.4.1 was regained in 6.5.0.
  14. I just upgraded one of my backup servers from 6.4.0 to 6.4.1. For the first time in several updates the time for a parity check has gone up. Just FYI; it's only 4.5%.
  15. Yes, using the x8 controller in an x4 slot will only reduce the per-channel bandwidth to ~500MB/s, which is easily enough for a spinning drive; a 7,200RPM 8TB drive might just exceed 250MB/s on the outer cylinders. And, note that the X11SSL, which drops one PCIex4 slot compared to the X11SSM, will suffice if all you need is three PCIe slots in total. You might be able to find the SSL at a lower price than the SSM.
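     The ~500MB/s-per-channel figure can be checked with the same kind of arithmetic; this assumes a PCIe 3.0 HBA, where each lane carries roughly 985 MB/s of payload after 128b/130b encoding overhead.

     ```shell
     # Rough check of the per-channel bandwidth for an x8 HBA in an x4 slot,
     # assuming PCIe 3.0 (~985 MB/s usable per lane) and 8 drive channels.
     awk 'BEGIN {
       lanes = 4; MBps_per_lane = 985; channels = 8
       printf "per channel: ~%d MB/s\n", lanes * MBps_per_lane / channels
     }'
     ```

     About 492 MB/s per channel, comfortably above the ~250MB/s a fast spinning drive can sustain on its outer cylinders.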