Everything posted by bobkart

  1. Other than a few more 4TB drives (all the same model, the Seagate Desktop model), this was built out of parts I had lying around:
     - 2x Xtore 12-bay SAS enclosures: http://www.scsi4me.com/product_info.php?products_id=1637
     - Supermicro 2U chassis/PSU, unknown model numbers
     - 2x LSI SAS9207-8e HBAs
     Even though those enclosures are only SAS1 (3Gb/s x 4), it works to make a second connection, to the SAS daisy-chain output, to double the throughput. EDIT: In searching for a better link to those enclosures, I see that the second connection I'm making is actually to a second SAS input (not the SAS output), although I think I've also tried connecting to the SAS output, with the same result (increased throughput). http://www.rackmountnet.com/rackmount-storage-chassis-sassata-hotswap-bays-depth-single-module-with-expander-460w-hotswap-redundant-p-1649.html
  2. After moving to 8TB drives I found myself with lots of leftover 4TB drives. And since I also had plenty of spare enclosures, rackmount chassis, SAS cards, etc., I realized that with just a few more 4TB drives I could build a 24-drive backup server that would accommodate two copies of my primary server data, including dual parity. It all fits in six rack units. Since my pictures are close to 192KB in size, I can only attach one per post, so I'll be making a couple more posts to get them all in. The first one shows the 12U rack it lives in, taking up the bottom half.
  3. I just used the sensors-detect that's already there (in /usr/sbin/). All I was missing was Perl . . . there are directions for installing Perl on that page I linked to. I notice now that the Perl installation didn't stick . . . it's now not installed, presumably since rebooting the server. It's okay though, because you only need it long enough to run sensors-detect.
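     For reference, the whole sequence amounts to something like this (a minimal sketch; the package filename/path is a placeholder for whatever version the wiki directions point you at):

         installpkg /boot/packages/perl-5.x.x-x86_64-1.txz   # Slackware package install; filename hypothetical
         sensors-detect                                      # probe for sensor chips, answer the prompts
         sensors                                             # verify temperature readings appear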
  4. Started seeing "kernel: BUG: unable to handle kernel paging request at ..." today in the log, every 17 or so minutes for several hours. Rebooted, running a Parity Check now, and the log is clean so far, over an hour in. Searches of the forums seem to indicate that this is a potential memory-corruption problem, and one that's difficult to track down (that part I know; I develop software myself). If anyone else has seen this or knows what might be causing it, I'd be interested to hear about it. In the meantime I'll be keeping an eye out for this showing up again.
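     If you want to check your own log for the same thing, this is roughly what I'm doing (a sketch; /var/log/syslog is the stock unRAID location):

         grep "unable to handle kernel paging request" /var/log/syslog      # show the occurrences
         grep -c "unable to handle kernel paging request" /var/log/syslog   # count them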
  5. While working out the details on my latest server, I nearly decided on this case:
     Norco RPC-2212: http://www.newegg.com/Product/Product.aspx?Item=N82E16811219040
     Chenbro makes one very similar to that one:
     Chenbro RM23212: http://www.amazon.com/Chenbro-without-Supply-Hotswap-RM23212/dp/B016VE6QBU
     Decided instead to do away with any backplane, to reduce power consumption:
     40TB server, under 30 watts idle: http://lime-technology.com/forum/index.php?topic=47025.0
     A 3U chassis might make more sense for your application, since 12 drives pretty much maxes out a 2U (hotswap) case, and you might want a couple more.
  6. I suspect that's just about the drives. For 4TB drives those aren't bad speeds. If your parity drive is 7200 RPM and you have a matching data drive (same make/model), writing to that data drive is the test that will yield the best results. Otherwise you're limited by the slower of the two drives involved (parity/data). Also note that writing to the beginning of a drive (outer cylinders) is as much as twice as fast as writing to the end.
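     You can see that outer-versus-inner difference with a quick read-only dd test; a minimal sketch, where the device name and the skip offset (sized for a 4TB drive) are assumptions:

         dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct                 # start of drive (outer cylinders)
         dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct skip=3700000    # near the end (inner cylinders)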
  7. I'm certainly no cooling expert but "if it were me" I'd try to make it work with the passive heatsinks, and just increase airflow through the case (if necessary). Those are some nice server boards over there at Natex, value-wise. I'm tempted to pick one up, torn between that and an LGA 2011 v3 build.
  8. I swapped out the 4x 8GB ECC UDIMMs for 2x 16GB ECC UDIMMs . . . power consumption dropped back down under 32 watts. Also upgraded to unRAID 6.1.9. Connected the three-bay hotswap cage to the motherboard (had to wait for 18" versions of those StarTech SATA cables to get here, 12" was not enough). Just plugged in a small drive, with the system powered up and the array started, to test the hotswap capability of the SATA ports. No problem, drive recognized, doing a preclear on it now. I'm at 96% capacity now so I'll be adding another data drive soon. Once the dust settles on that, I'll be thinking about upgrading to 6.2.0, I see there's a beta20 release now; I have beta19 on another server and it's working well so far.
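     For the preclear I'm using the usual preclear script from the forums; the invocation is roughly this (a sketch; the device name is hypothetical, and listing candidates first keeps you from pointing it at an array drive):

         preclear_disk.sh -l          # list drives not assigned to the array
         preclear_disk.sh /dev/sdX    # run a preclear cycle on the test drive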
  9. After some searching I found this page: http://lime-technology.com/wiki/index.php/Setting_up_CPU_and_board_temperature_sensing
     After following those directions, I can see my CPU temperatures now (among others):

         root@Luna:/mnt# sensors
         coretemp-isa-0000
         Adapter: ISA adapter
         Core 0:         +33.0°C  (high = +98.0°C, crit = +98.0°C)
         Core 1:         +33.0°C  (high = +98.0°C, crit = +98.0°C)
         Core 2:         +32.0°C  (high = +98.0°C, crit = +98.0°C)
         Core 3:         +32.0°C  (high = +98.0°C, crit = +98.0°C)

         jc42-i2c-0-18
         Adapter: SMBus I801 adapter at e000
         temp1:          +33.1°C  (low  = +0.0°C)  ALARM (HIGH, CRIT)
                                  (high = +0.0°C, hyst = +0.0°C)
                                  (crit = +0.0°C, hyst = +0.0°C)

         jc42-i2c-0-1a
         Adapter: SMBus I801 adapter at e000
         temp1:          +35.8°C  (low  = +0.0°C)  ALARM (HIGH, CRIT)
                                  (high = +0.0°C, hyst = +0.0°C)
                                  (crit = +0.0°C, hyst = +0.0°C)

         nct6776-isa-0290
         Adapter: ISA adapter
         Vcore:          +0.74 V  (min = +0.00 V, max = +1.74 V)
         in1:            +0.17 V  (min = +0.00 V, max = +0.00 V)  ALARM
         AVCC:           +3.34 V  (min = +2.98 V, max = +3.63 V)
         +3.3V:          +3.33 V  (min = +2.98 V, max = +3.63 V)
         in4:            +0.56 V  (min = +0.00 V, max = +0.00 V)  ALARM
         in5:            +1.86 V  (min = +0.00 V, max = +0.00 V)  ALARM
         in6:            +1.68 V  (min = +0.00 V, max = +0.00 V)  ALARM
         3VSB:           +3.46 V  (min = +2.98 V, max = +3.63 V)
         Vbat:           +3.31 V  (min = +2.70 V, max = +3.63 V)
         fan1:             0 RPM  (min = 0 RPM)
         fan2:             0 RPM  (min = 0 RPM)
         SYSTIN:         +45.0°C  (high = +0.0°C, hyst = +0.0°C)  ALARM  sensor = thermistor
         CPUTIN:         +40.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
         AUXTIN:         -12.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
         PCH_CHIP_TEMP:   +0.0°C
         PCH_CPU_TEMP:    +0.0°C
         PCH_MCH_TEMP:    +0.0°C
         cpu0_vid:       +0.000 V
         intrusion0:     OK
         intrusion1:     OK
         beep_enable:    disabled

     The big piece that was missing was Perl, which the directions show you how to install. All I have left to do is augment my 'go' script so it still works after a reboot.
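     For the 'go' script piece, what I have in mind is roughly the following (a sketch; the package filename is a placeholder, and the module names are the ones sensors-detect identified on my board):

         # additions to /boot/config/go
         installpkg /boot/packages/perl-5.x.x-x86_64-1.txz   # reinstall Perl, which is lost on each reboot
         modprobe coretemp                                   # CPU core temperature sensors
         modprobe nct6775                                    # driver that covers the NCT6776 chip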
  10. 128MB RAM? "640KB ought to be enough for anyone." Of course I realize you mean 128GB. On a normal Linux machine, the 'sensors' command would work for reporting CPU temperatures. I tried this on unRAID and apparently it relies on Perl, which isn't installed. I know it's roundabout, but you could install some other Linux onto your machine (one that has a working 'sensors' command), then run some torture tests, monitor CPU temperatures, adjust cooling, etc. . . . *then* install unRAID with whatever cooling solution the above approach yielded, with confidence that it will suffice. No doubt someone else knows a better way to do it; that's just one way I can see.
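      If you do go the temporary-Linux route, the torture-test part is simple enough; a sketch, assuming the distribution provides the common 'stress' utility:

          stress --cpu 8 --timeout 600 &   # load all cores for ten minutes
          watch -n 5 sensors               # refresh the temperature readings every 5 seconds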
  11. SAS controllers work fine; I've used quite a few of them with unRAID (Dell 12DNW, LSI 9200/9202/9207/9211, ...). My understanding is that if it's a RAID controller it may need to be flashed to IT mode, although I've not done that myself. Here's the unRAID hardware compatibility list for controllers; no doubt there are many more compatible cards than are listed there: http://lime-technology.com/wiki/index.php/Hardware_Compatibility#PCI_SATA_Controllers
  12. I have a pair of 7200RPM Hitachi drives for sale: http://seattle.craigslist.org/est/sop/5475285247.html
  13. UPDATE Found that the Intel SATA ports support hotswap whereas the Marvell SATA ports don't appear to. So I enabled all the Marvell SATA ports, switched the six 8TB drives to connect via those ports instead, and ran a Parity Check. The Parity Check completed in one second less time than with the six 8TB drives connected via the Intel SATA ports. Power consumption is at 32.1W idle now, and I'm ready to connect more drives via the three-bay hotswap cage.
  14. Nice improvement for sure. I actually typo'd that last post; my times are all in the 15-hour range . . . you can see my before-and-after-tuning improvement, where I pick up around half an hour (~142MB/s -> ~147MB/s).
  15. UPDATE Parity Check finished in 16 hours 10 minutes, within 3.5 minutes of the previous run. I had increased the poll_spindown frequency from 1/1800 to 1/60, so that might be related. On the power consumption front: starting with 29.0 watts idle, I replaced the 2x 16GB ECC UDIMMs with 4x 8GB ECC UDIMMs (I needed the 16GB ECC UDIMMs on another Avoton board). Idle power consumption went up to 29.6 watts. Then I added the three-bay hot-swap drive cage, idle power consumption went up to 30.8 watts. So now I have nine-drive capacity, although I've yet to enable the other six SATA ports. Here is the drive cage: iStarUSA BPN-DE230SS-BLACK 2 x 5.25" to 3 x 3.5" SAS/SATA Trayless Hot-Swap Cage - OEM http://www.newegg.com/Product/Product.aspx?Item=N82E16816215240 I'll be adding another data drive at some point. The plan is that the parity drive and a warm spare drive will be in the hot-swap cage, plus one extra bay for a guest drive. If/when dual parity becomes available, that warm spare drive will turn into the second parity drive.
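      (For reference, the poll_spindown setting mentioned above lives in disk.cfg on the flash drive; a quick way to check the current value, assuming the stock config location:

          grep poll_spindown /boot/config/disk.cfg   # e.g. poll_spindown="60"
      )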
  16. Hi razor, welcome to the forums. While I can't address whether that specific card is supported by unRAID, I can offer this (apologies if you already know these things). The card in question is a RAID card, but of course you won't be using the RAID capabilities in unRAID, so in that sense it's overkill. I understand though: the server you're considering comes with the card. You're not so much deciding to buy the card as you are the whole server. Options I would consider if I were in your position:
      - see if the seller will drop the price an acceptable amount for leaving the card out of the package
      - take your chances on the card (might need to flash it to IT mode)
      - sell the card and buy a much less expensive card that's known to work with unRAID (an LSI 9211-8i, for example)
      I can usually get those 9211 cards for around $100 on eBay. Even new they're not much more than $200, versus $500 for that MegaRAID card.
  17. Yeah, it wouldn't be so much the raw throughput of the controllers (your SSD test confirmed that's not a problem) as some timing issue that, combined with the drives' rotational latency and unRAID's read-modify-write technique, results in the behavior you're seeing. Back to the build: I hit a solid week of uptime with zero issues. I'll be adding a 3-bay hot-swap drive cage next; I anticipate about a 3-watt hit for that. Just started a Parity Check, after first capturing the tunables I used last time in a script, so I can easily reapply them. This is because I see md_write_limit being set to a different value in disk.cfg, and md_sync_thresh isn't even listed. Five minutes in and I'm showing over 200MB/s, so far so good . . .
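      The capture script itself is just a handful of mdcmd calls (a sketch; mdcmd lives in /root on stock unRAID, and the values shown are placeholders for whatever your own tunables testing settled on):

          #!/bin/bash
          # reapply parity-check tunables after a reboot
          /root/mdcmd set md_num_stripes 4096
          /root/mdcmd set md_sync_window 2048
          /root/mdcmd set md_sync_thresh 2000
          /root/mdcmd set md_write_limit 1024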
  18. I'll absorb the shipping if USPS Flat Rate Priority Mail works for you. So $100. I've been checking eBay for comparative prices and $115 is as low as I've seen.
  19. I also have as many as four 3TB drives I could part with, although I should probably keep one for cold spare. Not knowing where you are makes it hard to know how much of an option shipping them to you might be.
  20. You're definitely on to something, JB. Once I hit a week straight of stable uptime on my new server (another day and a half to go), I'll be adding another 8TB drive; then I'll be in a position to test write speeds to an empty one of these drives. I still wonder if motherboard/disk controllers might have something to do with it: not to the point of slowing down even SSDs, but enough to let the rotational latency become a problem, versus some other configuration that lets the drives be written to faster. I guess we'll only know if that's the case if/when a counterexample is produced, where someone is able to write to these drives faster than ~60MB/s. I'm hoping to be that counterexample, but am not really expecting to be. Can you share details of the motherboard/disk controllers used in your test?
  21. I'm selling a couple 4TB drives: http://seattle.craigslist.org/est/sop/5475285247.html
  22. Yeah, that graph definitely rules out my potential explanation for what you're seeing. Interesting. Your numbers using SSDs would seem to disprove any theory that it's the operating system, or motherboard, or disk controllers. That leads me to start thinking along the lines of it having something to do with rotational latency. But I'm way out of my areas of expertise on this; it's just the only thing I can think of so far that might prevent effective write speeds from scaling with increased drive *sequential* throughput. With SSDs not suffering from rotational latency, they would exhibit effective write speeds much more reflective of their sequential throughput. Again, I'm just reaching for *something* in the way of an explanation for what you're seeing; I could easily be way off. In addition to trying various md_write_limit values, did you also try the so-called Turbo Write Mode (md_write_method = 1)?
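      For anyone following along, toggling that mode from the command line looks roughly like this (a sketch; mdcmd lives in /root on stock unRAID):

          /root/mdcmd set md_write_method 1   # turbo write: read the other drives, skip the read-modify-write
          /root/mdcmd set md_write_method 0   # back to the default read-modify-write behavior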
  23. Those are interesting test results, JB. I see your point: at 70% full, throughput should be more like half of whatever it is at the outer cylinders, yet you're seeing almost identical write speeds from both tests. Another potential explanation is that those 1TB drives you did this test with don't actually store less data per track (and thus deliver less throughput) on the inner cylinders compared to the outer cylinders. How were the Parity Check speeds using (only) these drives? If they were consistent throughout the pass (assuming no other bottlenecks), that helps confirm the above hypothesis. I'll be adding a seventh drive (and three-bay hotswap cage) at some point; then I'll be able to check best-case write speeds for these drives.
  24. Thanks for the link; I was able to track down a good description of how it works from there: http://lime-technology.com/forum/index.php?topic=34521.msg375905#msg375905 I may consider using Turbo Write Mode for my backup servers, since I don't configure those drives to spin down (the servers are only on long enough to finish the rsync).
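      In that scenario the backup run would look something like this (a sketch; the source and destination paths are hypothetical):

          /root/mdcmd set md_write_method 1                          # turbo write while everything is spun up anyway
          rsync -a --delete /mnt/remote/primary/ /mnt/user/backup/   # the usual backup pass
          /root/mdcmd set md_write_method 0                          # restore the default before shutdown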
  25. For that write, the target drive was nearly full. So the mid-40MB/s range represents the lowest speeds these drives can be written to (including parity overhead). I had estimated 50MB/s in my initial post, so I wasn't too far off. That estimate was based on seeing ~60MB/s when writing to a 5/8-full drive. I still estimate ~80MB/s for a nearly-empty drive, although I don't have one installed to test. So just using very round numbers, averaging a low of 40MB/s with a high of 80MB/s yields 60MB/s; that's my current best-guess "round number" for an average write speed with these drives. In my experience the parity overhead creates *at least* a 2x reduction in write speed. So to get 110MB/s of "actual" disk writing speed (not counting any caching benefit), you'd need at least 220MB/s native write speed for the drives involved . . . not just the parity drive but the target data drive as well. I suppose a 7200RPM version of these drives could hit that, at least on the outer cylinders, but I doubt that the inner cylinders could be written with that kind of speed: 7200RPM is about 22% more than 5900RPM, so "all other things being equal", having observed numbers just over 200MB/s during parity checks/syncs, that's around 244MB/s when sped up to 7200RPM. Again "in my experience", the inner tracks take a 2x hit on speed compared to the outer tracks, so that's 122MB/s, divided again by two for the parity overhead gives something more like 60MB/s. Note that I do see ~112MB/s at the start of my writes, and for "sufficiently small" files (the limit seems to be around the mid-30GB mark), that speed persists throughout the copy (notwithstanding the wavering that occasionally drops it into the 80-90MB/s range), due to the large amount of memory I've made available for cache (24GiB). I'm interested to hear more about this MD WRITE command . . . I briefly searched for information on it but did not turn anything up. Do you have a link to where this is described?
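      Spelling that back-of-the-envelope arithmetic out in one place (same numbers as above):

          echo $(( 200 * 7200 / 5900 ))   # ~244 MB/s: outer-track estimate scaled from 5900 to 7200 RPM
          echo $(( 244 / 2 ))             # ~122 MB/s: inner tracks at roughly half the outer-track speed
          echo $(( 122 / 2 ))             # ~61  MB/s: after the ~2x parity (read-modify-write) overhead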