
garycase
Moderators · Posts: 13,623 · Days Won: 1

Everything posted by garycase

  1. Question for Joe ==> I'm not going to try this while my parity check is running, but if I was to access the share via Windows Explorer to look at the directories, does that cause any activity that might change the impact of Cache_Dirs ?? I'd assume not ... i.e. I should be able to browse the folders with ZERO impact on the ongoing parity test and Cache_Dirs functionality (as long as I don't actually open any of the files) => but am interested in whether you agree.
  2. Okay, I'll post back in ~ 8 hrs with the results. Started Cache_Dirs; gave it 20 minutes to ensure it had ample time to populate the cache; cleared the statistics; did a "Spinup all drives"; and started a parity check at exactly 10:30 local time. I've done a LOT of parity checks recently, as I "tuned" my parameters ... and it now runs almost exactly 7:41 => so in 8 hours I'll know for sure whether Cache_Dirs has any impact on that time. Just to ensure there's no impact from excessive refreshes of the Web GUI, I don't plan to even look at it until 8 hours have passed.
  3. If you like the Lian-Li PC343B, check out the D8000 => basically the same case, but a much "cleaner" front panel without all the 5.25" bays ... instead it's got 20 drive cages that you access by popping off the front panel (it just pulls off) => you can either directly cable the drives; or you can buy optional hot-swap panels that mount in the back of the cages to turn all 20 into hot-swaps. VERY nice case ... by far my favorite for a high-drive-count tower system. And SUPERB cooling ... 6 120mm fans blow air directly across the drives.
  4. Very interesting ... and surprising ... results. I didn't expect a big difference, but I DID expect that it would be better with Cache_Dirs NOT running. It is, in fact, very hard to understand how Cache_Dirs could actually IMPROVE the time !! One obvious factor is how many files you have cached. Is Cache_Dirs set to cache ALL of your files? ... My v5 server has, for example, 270,128 files in 20,651 folders. Your results are definitely intriguing, however => I'm going to start Cache_Dirs, give it a few minutes to populate the cache; and then fire up a parity check to see how it compares to the very-consistent 7:41 it's been taking.
  5. Five minutes sounds okay to me. If the Find loop runs every 10 seconds, that's about 30 iterations in 5 minutes ... so even if every iteration resulted in a 50-100ms "thrash & cache" [directory read] that would only be an extra couple of seconds for the parity check. Sounds like the process is a bit more complex than I envisioned ... I thought it could simply be a fixed test for parity check/disk rebuild with two exits: (a) do nothing; or (b) do the next Find. Sounds like you need a completely different loop that tests for an in-process parity check/rebuild and sets or clears a "suspend" flag for Cache_Dirs. If it's easier, I'd be quite happy with a simple button in UnMenu (perhaps on the User Scripts page) that would suspend Cache_Dirs [With, of course, a corresponding one to "un-suspend" it ]. Wouldn't be as nice as an automatic suspension, but simple enough to just do it before running a parity check. [Which also makes me wonder why you've never added Cache_Dirs to UnMenu ??]
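The "suspend flag" loop described in the post above could be sketched roughly as follows. This is only an illustration of the idea, not Joe's actual implementation: the flag path and the parity_check_running probe are placeholders (on unRAID the real probe might parse `mdcmd status`, which isn't verified here).

```shell
#!/bin/bash
# Sketch of a monitor loop that sets/clears a "suspend" flag for
# cache_dirs while a parity check or rebuild is running.
# ASSUMPTIONS: flag path and status probe are placeholders, not the
# real cache_dirs internals.

SUSPEND_FLAG=/tmp/cache_dirs.suspend

parity_check_running() {
  # Placeholder probe -- always answers "no" in this sketch. On unRAID
  # it might look something like: mdcmd status | grep -q 'mdResyncPos=[1-9]'
  return 1
}

update_suspend_flag() {
  if parity_check_running; then
    touch "$SUSPEND_FLAG"      # the Find loop sees this and skips its pass
  else
    rm -f "$SUSPEND_FLAG"      # parity check done: resume normal caching
  fi
}

# The monitor itself would just poll, e.g. every 10 seconds:
#   while true; do update_suspend_flag; sleep 10; done
```

The point of the separate flag is that the Find loop can keep its existing timing untouched; it only has to test for the flag file before each pass.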
  6. I was going to do exactly that, but since you've already started it, I'll wait and see what your results are. It can, of course, depend on your tunable disk parameters ... I have mine set to use significantly more buffers during parity checks than the default (which changed my parity check times from ~ 8:25 to 7:41). I do not, by the way, anticipate a major difference in the times ... but I do think it will take longer with Cache_Dirs active; and don't see any reason to have the unnecessary disk thrashing going on when it can easily be avoided.
  7. Clearly the amount of memory in the system; the settings of the various tunable disk parameters; the number of directory entries that need to be cached; any other activity that may be going on; etc. can all have some impact. But a check that simply says "Don't do any disk activity if a parity check is in progress" would certainly be at worst neutral (if no Finds were needed); and at best would stop all Cache_Dirs-related disk I/O during the parity check if Finds were necessary to update the directories. All this will do is GUARANTEE that no physical disk I/O will be initiated by Cache_Dirs until the parity check has completed. By the way, Joe indicated he's going to also do this for disk rebuilds ... which is certainly a good idea, as these are also completely sequential I/O operations on the disks, so any unnecessary thrashing will slow them down as well.
  8. Running Cache_Dirs slows down parity checking ... disabling it does not (see next comment). Indeed, they are two independent operations. But Cache_Dirs definitely DOES do some physical I/O ... whenever the buffers containing the directories are overwritten it has to re-read the directory info. Parity checking is obviously VERY I/O intensive ... reading ALL of the disks as quickly as it can and confirming that the parity is correct.

     Think of this process ... every disk is read IN ORDER, so there's virtually NO head movement except for single-cylinder seeks as the disk is traversed. A LOT of data is being buffered, so the data buffers holding the directory info for Cache_Dirs are clearly going to be overwritten ... and when Cache_Dirs does its next check, it's going to re-read the directory entries to try to keep the buffered directory info up-to-date. THOSE reads are going to require some seeks -- which has two impacts: (1) the disks get thrashed a bit by the extra seek operations; and (2) the time for these reads is added to the parity check time, since no parity checking can be done until it can continue reading data for the check. Note that, while quick, seeks are nevertheless the "long pole in the tent" in terms of disk operations ... i.e. they're VERY LONG compared to everything else that's going on [10-15ms sounds quick, but when you do it thousands of times it adds up].

     The same thing is true for ANY ongoing accesses during a parity check. Streaming a movie; writing a bunch of new data; copying files from the array; etc. all cause significant disk thrashing that will notably slow down the parity check time. NOT a particularly large amount of time in most cases, but nevertheless it's unnecessary disk thrashing ... which I like to avoid. [I NEVER watch a movie during a parity check ... or do anything else on the array either]

     By the way, think about the process: IF you were right and there was NO physical I/O required by Cache_Dirs, then the impact of a check that says "If Parity Check in progress, do not start a Find operation" would be ZERO => since no Find would be required anyway. So all this check does is GUARANTEE that no physical disk I/O will be initiated by Cache_Dirs until the parity check has completed.

     You're absolutely right -- I had a "senior moment" when I wrote that. My newest system has all 3TB WD Reds; but my older media server has a mix of 1, 1.5, and 2TB drives, and the 1 & 1.5's are indeed spun down by the end of a parity check ... so the first time Cache_Dirs "asks" if a parity check is in progress and the answer's "No", so it starts another Find, those drives will likely have to spin up. Not a big deal ... but definitely a wrong statement on my part !!
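To put rough numbers on the "it adds up" point: taking 12 ms as the midpoint of the 10-15 ms seek figure quoted above, and an illustrative (not measured) count of 10,000 avoidable seeks over the course of a long parity check:

```shell
# Back-of-envelope cost of avoidable seeks during a parity check.
# seek_ms is the midpoint of the 10-15 ms quoted above; extra_seeks
# is an illustrative assumption, not a measurement.
seek_ms=12
extra_seeks=10000
extra_s=$(( seek_ms * extra_seeks / 1000 ))
echo "${extra_seeks} extra seeks at ${seek_ms} ms each adds ~${extra_s} s"  # ~120 s
```

So even a modest amount of background seeking can stretch a parity check by a couple of minutes, which matches the "not large, but unnecessary" characterization above.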
  9. Certainly didn't mean to imply "now" ==> just sometime in the relatively near future ... anytime in the next few weeks is fine by me. It'd be nice if you'd post a note when it's done. I'm sure I'm not the only one who'd appreciate that modification.
  10. Does this comment also apply to my note r.e. suspending Cache_Dirs during parity checks? It sure seems that would be a VERY useful/nice feature. Seems like a simple "If parity check in progress, don't start the next Find" check in Cache_Dirs would basically suspend it => at least after the current Find completed. No reason to shut down Cache_Dirs ... it would simply keep checking at the current intervals, but as long as the parity check was in progress, wouldn't initiate any more Finds ... thus not interfering with the check. ... and of course once the parity check was over, the next time Cache_Dirs checked all the disks would still be spinning, so it'd be very quickly up-to-date. It does this today (puts itself to sleep) when the "mover" runs, so adding the logic for a parity check/disk rebuild is fairly easy. I thought it might be. Does that mean you're going to add it and update the Cache_Dirs download?
  11. Does this comment also apply to my note r.e. suspending Cache_Dirs during parity checks? It sure seems that would be a VERY useful/nice feature. Seems like a simple "If parity check in progress, don't start the next Find" check in Cache_Dirs would basically suspend it => at least after the current Find completed. No reason to shut down Cache_Dirs ... it would simply keep checking at the current intervals, but as long as the parity check was in progress, wouldn't initiate any more Finds ... thus not interfering with the check. ... and of course once the parity check was over, the next time Cache_Dirs checked all the disks would still be spinning, so it'd be very quickly up-to-date.
  12. Joe => I have no idea how "doable" this is ... but is there a way to have CacheDirs "suspend itself" if a parity check is in process? Not shut itself down (which would then require re-starting it) ... but just basically ask "Is parity check in process?" before it kicks off the threads to read the directories. Logically it seems like that'd be fairly simple, but I'm just not a Linux guy, so I don't really know.
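In shell terms, the "ask before it kicks off the threads" idea is just a guard in front of the find pass. The sketch below is hypothetical: parity_in_progress is a stand-in driven by an environment variable purely for illustration, since the real cache_dirs would need an actual unRAID status check.

```shell
# Minimal sketch of "ask before you Find": skip the cache-refresh pass
# while a parity check is in progress.
# ASSUMPTION: parity_in_progress is a stand-in keyed off the
# PARITY_RUNNING variable for illustration only.

parity_in_progress() {
  [ -n "$PARITY_RUNNING" ]
}

maybe_run_find() {
  if parity_in_progress; then
    return 0                        # suspended: no disk I/O this pass
  fi
  find "$1" >/dev/null 2>&1         # the usual cache-priming pass
}
```

Because the check happens every pass, nothing needs restarting: once the parity check ends, the very next pass runs normally and the cache is quickly current again.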
  13. I don't know which ESX he might be running, but the D525 has no VT support http://ark.intel.com/products/49490 ... and therefore is NOT running ESXi => at least not any current version.
  14. ESXi runs on the D525 Atom board?? Very interesting. Which version of ESXi are you running?
  15. Nice build -- my 2nd server uses the same board, but in a Lian-Li PC-Q25B case and with 6 3TB WD Reds. I also have very low power draw ... idles at ~20 watts and draws a max of 45 watts during parity checks.
  16. Probably the easiest option ... in fact, it's exactly the same as the one distributed with RC14 EXCEPT it doesn't have the 4 lines that force the 4GB option. Or just not update the syslinux.cfg at all and leave the one from RC12a or RC13. That's what I did.
  17. Khelm => Did you get the 72405 yet? ... and have you had a chance to measure the detailed power consumption?
  18. It's a shame the controller makers don't simply publish these power specs -- "Idle" and "Max" load specs would certainly be nice to know. The drive manufacturers publish much more detail on power consumption under various conditions ... and since the controllers are clearly part of that "chain" you'd think they'd provide the same level of detail !!
  19. I can confirm that the idle power doesn't seem to vary a lot among processors of the same Intel generation. A couple years ago I built an extra HTPC with a Core i5-2400S. A really nice little system, but I later decided to use it for a bunch of video re-rendering work, so I upgraded the CPU to a Core i7-2700K. I don't remember the actual values, but I do recall that my Kill-a-Watt showed virtually no difference in the idle power consumption of the system after the change (but of course a notably higher consumption under full load). Love the idea of R-47 walls !! Not something I could do ... we have a LOT of windows (almost the entire back of the home plus most of the front) ... but it's a neat idea nevertheless. I DID spend a small fortune having all of the windows replaced with Low-E solar glass, and it makes a BIG difference in the hot summers around here (Texas).
  20. I saw your post in that thread and Tom's reply r.e. changing some of the tunable parameters. I also asked Tom if those parameters were applied differently during a rebuild, since one of the most perplexing results of your testing is that a rebuild works SO much better than a parity check, despite the underlying disk I/O and processing requirements being nearly identical !!
  21. Excellent. Now we can see a very good comparison with the 2760a. I think these two controllers are by far the best solutions for a single-card 24-drive array. On paper, the 72405 would seem to have a slight edge due to the 10w lower idle power consumption ... but it'll be good to see some actual measurements. Me too. I'll spend far more money on things than I'll ever recover, just to keep the bills lower. Replaced the perfectly good electric attic vents in our home with solar units for well over a grand; bought two new SEER 18 dual-stage compressor heat pump units and had R52 blown into the attic for nearly 20K; etc. They definitely help the monthly bills ... and the A/C units may actually pay off in ROI after a decade or so; but the key reason was to lower the monthly outflow. My plan for my next UnRAID is to make it as low-power as possible, with 24-drive potential. It's very tempting to use a SuperMicro X9SPV-M4-3UE with the 72405 ... in theory this should get idle consumption down to roughly 40w plus drives ... so possibly as low as 60-65w with 24 drives spun down and a few 120mm fans. The X9SPV-M4-MQE would likely have an almost identical idle consumption ... I'd guess no more than a watt or two difference, despite the much higher TDP of the quad core (35w vs. 17w) => the extra power of the quad core might be nice for some purposes (not needed for UnRAID).
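As a sanity check on the 60-65w guess above, here's the arithmetic with assumed per-item figures (~0.7 W per spun-down drive, ~1.5 W per 120mm fan -- illustrative numbers, not measurements):

```shell
# Rough idle power budget for the 24-drive low-power build above.
# ASSUMPTIONS: ~40 W board/CPU/controller idle, ~0.7 W per spun-down
# drive, ~1.5 W per fan. All figures illustrative, in milliwatts to
# keep the shell arithmetic integer-only.
base_mw=40000
drive_mw=700; drives=24
fan_mw=1500;  fans=3
total_mw=$(( base_mw + drives * drive_mw + fans * fan_mw ))
echo "~$(( total_mw / 1000 )) W idle at the assumed figures"   # ~61 W
```

At those assumed figures the total lands right in the 60-65w window, so the estimate looks self-consistent.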
  22. Looks like Tom is getting one of the "problem" cards to take a look at the parity check issue ... perhaps there's hope for the 2760a http://lime-technology.com/forum/index.php?topic=27637.msg244492#msg244492
  23. Tough choice. To me the key advantages are that it's a single-slot solution, works out-of-the-box with UnRAID, and boots quickly (often a problem with add-in controllers). It's also FAST except for parity checks ... but you've proven that's not related to the card, so there's at least hope that the speed issue will be resolved. The only disadvantages you listed that bother me are the power consumption and heat. The price is a bit high, but not out-of-line for a 24-port card, and as a % of the cost of a 24-drive system it's really not bad. It would, of course, be very interesting to see as detailed an analysis of the Adaptec 72405 as you've done with this card. If it truly idles at 18w it should also be cooler; and if it matches the speed of the 2760a then clearly it would be a better choice.
  24. I had noticed that the Adaptec was Gen 3, but missed that the motherboard actually had Gen 3 slots ==> so you're absolutely right.
  25. ... and it's "only" $659 here: http://shopcomputech.com/supermicro-mb-mbd-x9spv-m4-3ue-o-i7-3517ue-qm77-16g-ddr3-pcie-sata-usb-miniitx.html While it seems a bit high (I suppose it is), when you consider it's an excellent SuperMicro board (call that a $200 "value"); has an embedded Core i7 (call that a $300 "value"); and has 6 SATA ports and 4 Gb NICs, along with 4 USB v3 ports; it's certainly a VERY nice mini-ITX option with perhaps a $200 or so "premium" vs. what an equivalent-quality setup would cost in a larger form factor using a higher-power CPU. It gives you Atom-like power utilization with more than 5 times the processing power of a D525 Atom [PassMark 3807 vs. 694 for the D525].