UhClem

Everything posted by UhClem

  1. You've got the right idea, but I believe your details are off. (This is conjecture; I have no "inside info" ...) Early in manufacturing, the platters are probably graded, and that will likely dictate what recording density and rotational speed they are suited for; they will then be used in the assembly of that particular model/line. I doubt that the motors are made to be adjusted for different speeds--there are 10K, 7.2K & 5.xK motors. (Other major components--heads, servos, actuators--are likely similarly graded/appropriated.) Then the drives are assembled, and the weeding-out procedure continues, with "rejects" ending up as "bastard-sized" drives (ie 1.5TB--damaged head/surface) or, in the almost-worst-case (some of you guys aren't gonna like this ...), external drives. [The total FUBARs get sh*t-canned--stripped down for the good pieces.]
  2. ... and they should do the compile/link on a Linux environment sufficiently old that the resulting executable will be compatible with the shared libraries installed with unRAID versions going back to v4.7 (or earlier).
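     If you want to sanity-check the result, one quick test (a sketch of mine, assuming Python 3 and binutils' objdump are on hand) is to list the GLIBC version tags the executable references; the highest tag is the minimum glibc it will demand at run time, which must not be newer than what those old unRAID releases ship:

         # List the GLIBC_x.y version tags a binary references; the highest one
         # is the minimum glibc it needs at run time.
         import re, subprocess, sys

         def glibc_requirements(path):
             out = subprocess.run(["objdump", "-T", path], capture_output=True,
                                  text=True, check=True).stdout
             tags = set(re.findall(r"GLIBC_(\d+(?:\.\d+)+)", out))
             return sorted(tags, key=lambda v: tuple(map(int, v.split("."))))

         if __name__ == "__main__":
             print(glibc_requirements(sys.argv[1]))   # e.g. ./my_binary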
  3. If you track down the Product Manual for the ST4000DM000 [link], you still won't find any mention of RPM! But, "there's more than one way to skin a cat"--they DO specify the Average latency as 5.1 millisec. (on both pg 11 & 15). The rotational speed is directly derived from that spec by: RPM = (1 / (AvgLat * 2)) * 60 [for AvgLat = .0051 (seconds)] and yields 5882. But the 5.1 msec is likely rounded up slightly (cf. 5.0847 msec), so 5900 RPM. Personally, I plan on waiting to see if they produce a 4TB 7200RPM version (maybe ST4000DM001) to get that 15-20% boost in transfer rates, which should have a minimal increase in noise/power/heat (using a comparison of ST3000DM001 [3TB/7200] with ST3000DM003 [3TB/5900] as the basis).
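     The same arithmetic, spelled out (average latency is the time for half a revolution, so RPM = 60 / (2 * latency); the value is straight from the manual):

         avg_latency_s = 0.0051                      # 5.1 ms, per the ST4000DM000 manual
         print(round(60.0 / (2 * avg_latency_s)))    # 5882 -> a nominal 5900 RPM spindle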
  4. You want to allow the caching itself to complete, before starting the mover procedure. Otherwise, the simultaneous writing to, and reading from, the cache drive will cause its disk head assembly to "thrash" (lots of seeking), and its actual data throughput will nosedive. You won't have this concern IF you use a SSD for cache, but that is up to your own cost-benefit analysis. In this case, is there a chance that the mover will think it has "finished" when it has only "caught up to" the caching??
  5. Thanks. A worthwhile addition to the "toolbox". User Guide here [link] "Crime is big business -- even for a few good guys."
  6. There are numerous reports (on other forums' MicroServer threads [homeservershow.com & overclockers.com.au]) of people successfully installing one or another unlocked ("modified") BIOS (initially intended, and used, on the N36L/N40L) on the new N54L. If you stay with the stock BIOS, SATA ports 4 & 5 are locked in IDE mode and limited to SATA I speeds. Also they are incapable of hot-swap usage and port-multiplier controllability (both of which require the port to be in AHCI mode). Just Do It!!
  7. Note that I "stuck my nose in" here because it seemed like both RobJ and JoeL agreed that the time added to the script/cycle run (by the "feature" you described) was of some significance. It was only in the process of composing my previous (ie 2nd) response that I made an effort to quantify it (~0.5%). I'll bet neither of you realized it was so negligible--if I had, I wouldn't have bothered ... but then, look at all the fun we'd have missed. I still contend that a more apt rationale for adding the "feature" would be "I didn't want the drive to get bored."
     [Those 6 seeks every ~20 seconds are not going to affect the temperature. Here's a little experiment I just did. I had a spinning, but idle, drive at 30C. I did 5 minutes' worth of flat-out reading (similar to your pre-read w/o the dance). When it finished, the drive was at 31C. I let the drive rest for 15 minutes; back to 30C. Then I did 5 minutes of flat-out seeking (seektest); when finished, the drive was at 35C (that was ~1400 seeks per 20 sec.).]
     No, it does not. Even in the most perverse case, where each and every seek resulted in a read-retry, it would not have doubled that overhead. Ie, instead of ~0.5% extra, it would have been <~1.0%. How is that going to "show in the speed of the preclear process"? Also, it does not show in a SMART report. (In anticipation of a misguided reply ... Seek_Error_Rate is not only undocumented, but also looks to not even be "implemented" on most drives (only Seagate).)
     Huh? 995/1000 is a fraction, right? Seriously, any measurable speed difference was not caused in any way by the dislocation dance.
     ==========
     And now for something completely different ... Here's a challenge: add 10-20 lines to the preclear script that will cause the post-read phase to run just as fast as the pre-read phase for many/most users. For the rest, no change (but there would be such clamoring that they could easily/quickly join the party). [Same exact functionality/results as now.] Who wants to be the hero??? Think about it-- ~5-10 hours saved per cycle for a 2TB drive. [No questions ... just think] --UhClem
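     For anyone who wants to redo the ~0.5% figure, it is just the fraction of each read interval spent on the extra seeks; a quick sketch (the 15 ms per-seek time is an assumed typical value, not something measured here):

         seeks_per_interval = 6                  # the "dislocation dance" seeks
         interval_s = 20.0                       # ~20 s of reading between dances
         seek_time_s = 0.015                     # assumed average seek+settle time
         overhead = seeks_per_interval * seek_time_s / interval_s
         print("%.2f%%" % (100 * overhead))      # ~0.45%, i.e. roughly half a percent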
  8. I know you did, and I respect that (I've been in that position many times). That is precisely why I said "take a step back"--meaning to try to get a different view/perspective.
     But here's the problem (as I see it): For a marginal drive, that little head-fake might just cause the subsequent read attempt to be off-track and/or un-settled, BUT, if so, the drive will detect that; also, even if the drive doesn't detect that it hasn't settled sufficiently, and proceeds with the read, it will obviously get an ECC failure. In all of those cases, the drive will merely RETRY the read (but this time with no/negligible prior head motion), and succeed. That little dislocation-dance (every 200 "cylinders") is just a (very minor) "waste of time" [looks like only about 0.5% extra], but has no chance of leading to any "feedback". Remember, a drive will RETRY a read 10-20 times before giving up and returning UNC to the driver (and the driver will RETRY 4-5 times before giving up and returning an error to the calling program).
     However, I definitely agree that a new drive should really get a mechanical pummeling!! (But instead of a couple of gnats, how about a swarm of horseflies.) I give my drives 5+ minutes of constant seeking, which also serves to verify that the drive's seek time is within spec [seektest -n 20000 /dev/sdX -- my own little hack; don't know if Linux has something like it]. If any relevant component is sub-par, or inclined to be, now's the time to find out. To quote Crocodile Dundee: "Now, that's a torture test." (compared to the little twitches in preclear). I'll repeat this test occasionally (at least a few times per year). Following that initial torture session, I do a thorough surface integrity test (xfrtest ... /dev/sdX -- another personal hack). I try to repeat this once/twice per year, tracking the (very quantitative) results. --UhClem
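     seektest isn't published anywhere, but if you want to roll your own, a rough Python stand-in (not the actual tool; the 20000-seek default and the device path are just example values) could look like the sketch below. It issues random single-sector reads across the whole device and reports the average access time. Read-only, but you'll need to run it as root:

         # Random-seek exerciser, in the spirit of "seektest -n 20000 /dev/sdX".
         import os, random, sys, time

         def seek_test(dev, count=20000, sector=512):
             fd = os.open(dev, os.O_RDONLY)
             size = os.lseek(fd, 0, os.SEEK_END)       # device size in bytes
             t0 = time.time()
             for _ in range(count):
                 pos = random.randrange(0, size - sector)
                 os.lseek(fd, pos - (pos % sector), os.SEEK_SET)   # sector-aligned
                 os.read(fd, sector)
             os.close(fd)
             elapsed = time.time() - t0
             print("%d seeks in %.1f s -> %.2f ms average"
                   % (count, elapsed, 1000.0 * elapsed / count))

         if __name__ == "__main__":
             seek_test(sys.argv[1])                    # e.g. /dev/sdX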
  9. Joe, I understand what you intend the above dislocation enhancement to accomplish, but I'd suggest that you (take a step back and really) think about it. What it does do is cause a slight (probably undetectable) seek noise, and increase the elapsed time of those phases (apparently noticeably). [Neither of which were your actual intent (but unavoidable side effects of your intent).] --UhClem
  10. Two months older news now than when I tried telling you ... Have you tried plugging the SansDigital into the MicroServer's eSATA port? [surprise!!] Probably will, but performance (parity check, etc.) might be yucky. Even there, you will be limited to about 120 MB/s total bandwidth with the PM enclosure. The SiI3132 has a bogus transfer rate limit of ~120 MB/s (even though it is PCIe x1 v1, and should be able to get 180-200 MB/s).
  11. Agreed. But I am a software person, and a seemingly analogous motherboard's BIOS would be an easy starting point for comparison. I had done that too, but only within my limitations (I studied EE [almost 50 years ago] but wasn't good at it). One thing that caught my eye (in the DataSheet) was Table 61 (pg 112) -- Performance mode. But then there is the last entry in Table 28 (on pg 77) for AZ_SDOUT which implies (to me) that Performance mode is always available. Isn't it possible that HP decided to omit/remove a 6Gb/s setting from their BIOS so as to avoid that slight bump in TDP power draw (5.3W vs 4.9W--ref Table 61)? Or, as you surmised, maybe just to cripple the MicroServer market-segment-wise, relative to its more macho Proliant brethren? Hence, that is why I suggested checking another (SB820M-motherboard) BIOS--to see if anything jumps out at you. For example, in the MicroServer BIOS, can you tell me what effect the (SATA) 1.5G setting has, relative to the 3G choice? --UhClem
  12. If it was easy, we wouldn't be having this discussion. There are motherboards that use the same SB820M and DO support 6Gbps SATA. I suspect their BIOS might have some good clues.
  13. Is it possible to enable SATA III/6Gbps? It is documented as being available in the AMD SB820M. Thanks.
  14. Try putting the same contents on both a SD card AND a microSD card, both in the G3 together. It sounds like the BIOS-boot will only see the SD sub-device, but unRAID will only see the microSD sub-device. unRAID definitely needs fixing so that it can see, and scan, both sub-devices in the G3 (AND in the G2, by the way). I don't use unRAID, but I do have a G2 and uncrippled Linux sees both cards. I don't believe that unRAID has access to the microSD in a G2. --UhClem
  15. Expect a maximum sustained throughput of ~680 MB/sec (per board; assuming your PCIe infrastructure does not get saturated by multiple boards at "full throttle"). Whether that qualifies as a bottleneck is your call. [Consider that 8 recent-generation drives (max transfer rate 150-180 MB/s) will only achieve about 50% of their max at the beginning of a parity check.] See this thread [link] for more details. My advice would be to take advantage of your motherboard's PCIe v2.0 performance and use a v2.0 controller (vs this v1 controller), e.g. the M1015 (the M1015 is fine in your x8(phys)/x4(elec) v2 slot). --UhClem "Measure twice--cut once."
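      The "about 50%" remark is just the board's ceiling split evenly across the drives; a quick sketch, with 170 MB/s as an assumed outer-zone rate per drive:

          board_limit = 680                   # MB/s, sustained ceiling of one board
          drives = 8
          drive_max = 170                     # MB/s, assumed outer-zone rate per drive
          share = board_limit / drives
          print(share, round(100 * share / drive_max))   # 85 MB/s each, ~50% of max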
  16. Is there a means for limiting this "check" to a subset (range of sectors / blocks / stripes, etc) of the array?
  17. Yes, this is where it does get interesting--and useful. ==> Important note: The purpose of the non-X test is just to determine the saturation point for a particular resource (the MV8, in this case). We get that number by adding up the MB/s rates for all drives connected to that resource. The individual rates for the drives (in the non-X [saturation] test) are immaterial (and only of possible interest to the really hardcore). [More below]
      OK. The saturation point (max throughput) of your MV8 is ~680 MB/s. As I stated earlier, the max real-world throughput for a PCIe x4 v1 pathway is 780 MB/s (840 MB/s on better motherboards, in the right config). Hence, it certainly appears that the MV8 and/or its Marvell 88SE6480 chip does not have the processing power/data-handling chops to fully utilize that pathway. (A really meager/ancient CPU+Northbridge could be responsible, but that doesn't apply here.)
      By the way, it is best to run the non-X test with only drive letters on the tested resource/controller. In a final, full-system test, you can test all drives on all resources, and make sure that your overall system (CPU+Northbridge+Southbridge) is not bandwidth-saturated.
      Back to the MV8 ... it can sustain 680 MB/s. Which means that you can comfortably put the 4 Hitachi 2TB Data drives plus one of the Seagate 2TB Data drives plus the Cache drive on the MV8 without ever affecting your real-world results. That is because a Parity-Check is your most demanding task (throughput-wise), and that will be limited by the ~120 MB/s speed of a Hitachi 2TB. Since the cache drive doesn't participate in the Check, those other 5 drives will only use (max) 600 MB/s (5*120). You could even add the other Seagate 2TB with a very negligible "penalty"--a Parity Check would then (nominally) max out at 113 MB/s (680/6), instead of 120. That would extend the time frame before you'd need/want to add another controller. That takes care of the MV8.
      The other two disk-throughput resources you have are (1) the 6 native SATA ports on your Z77, and (2) the 2 add-on SATA ports on the on-board ASM1061. The ASM1061 ports are limited by a PCIe x1 v2 to ~380 MB/s (or 420) total. I don't know about the 6 native ports, but I expect their limit to be well above 1000. If you have a couple of fast SATA3 SSDs, you could try to saturate that resource.
      "If you push something hard enough, it will fall over." You're welcome. --UhClem "I think we're all bozos on this bus."
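      The allocation logic above reduces to one rule: a Parity Check runs at the slower of (a) the slowest member drive's own rate and (b) the controller's ceiling divided by the number of participating drives on it. A sketch of that arithmetic, using the 680 and ~120 MB/s figures from this exchange:

          def parity_check_ceiling(controller_mbs, drives_on_controller, slowest_drive_mbs):
              # Whichever is lower wins: the slowest member drive, or each drive's
              # even share of the controller's total throughput.
              return min(slowest_drive_mbs, controller_mbs / drives_on_controller)

          for n in (5, 6, 7):
              print(n, round(parity_check_ceiling(680, n, 120)))   # 120, 113, 97 MB/s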
  18. Is one of those (every) drives on the controller's first port (#0)? Just curious, if that is the possible source of the anomaly. Thanks for confirming--I like it when that happens. I seriously doubt there is any relationship; at least not a direct one. (I'd consider it an "identity glitch", not an "identity crisis".)
  19. It seems like only the first drive on the MV8 (sdd) is being "Identified" using the "hdparm -i" mechanism; could be a driver peculiarity (mvsas driver?). I'll try to make a change to the script that uses the -I option instead.
      Please note that the results (cleaned up) that you got are from the test with the X as the (optional) first argument. Those results are merely to determine the stand-alone speed for each drive, without any contention (for resources) from the other drives; in the X run, each (single-drive) speed test runs to completion, and then the next drive's speed is tested. It really doesn't matter which controller the drive is attached to for the X test (except for the obvious mismatch of a fast drive on a SATA 1 connect).
      The interesting results come from the follow-on tests without the X. In those tests, all specified drive (letters) are speed-tested concurrently, which will reveal any limitations that one or more hardware factors are exerting on the total throughput. For example, using your drive letter associations above, the command "./dsk.sh d e f g h i j" would speed-test all 7 drives connected to the MV8 simultaneously! The idea is to push the tested component to its saturation point so that you can make proper capacity-planning (from a bandwidth perspective) decisions. If/when you saturate any particular component (in this example, the MV8), you will see it because one or more (and often all) of the tested drives are underperforming their stand-alone (baseline/nominal) speed (from the initial X test run). If that 7-drive test run saturates the MV8, as I expect it will, you should try it without the cache drive (f). If that also saturates, omit another drive letter ... until all drives in a single test perform very close to their "X" speed.
      If that will work to get you (2 * (10 + 1)), instead of (20 + 1), that is a worthwhile goal, for risk minimization. Since you'd need to "choose up teams" anyway, you might as well do it by performance. [Don't let the dumb kids hinder the education of the smart ones, right?] But are you really planning to add so many drives so soon? If not, I wouldn't rush into it. Maybe dabble with ESXi a little. Is it possible to experiment with different unRAID configs, by not doing any WRITEs to any Data drives, using a fresh/test Parity drive, and preserving the (actual) Parity drive [from a "Production" config]? Just pondering ... (I don't use unRAID.)
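      dsk.sh itself lives in the thread linked earlier; for the gist of the non-X (concurrent) run, here is a rough Python equivalent (a sketch, not the real script): it reads from every named device at once for a fixed interval and prints each one's MB/s, so a saturated controller shows up as all of its drives dropping below their stand-alone numbers. Run it as root; the device arguments are just examples:

          # Concurrent sequential-read test, e.g.:  python3 dsksketch.py /dev/sdd /dev/sde ...
          import os, sys, threading, time

          SECONDS = 15                   # how long to hammer each device
          BLOCK = 1024 * 1024            # 1 MiB reads

          def read_rate(dev, results):
              fd = os.open(dev, os.O_RDONLY)
              total, t0 = 0, time.time()
              while time.time() - t0 < SECONDS:
                  total += len(os.read(fd, BLOCK))
              os.close(fd)
              results[dev] = total / (time.time() - t0) / 1e6   # MB/s

          if __name__ == "__main__":
              results, threads = {}, []
              for dev in sys.argv[1:]:
                  t = threading.Thread(target=read_rate, args=(dev, results))
                  threads.append(t)
                  t.start()
              for t in threads:
                  t.join()
              for dev, mbs in sorted(results.items()):
                  print("%s  %.0f MB/s" % (dev, mbs))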
  20. Gee, I feel guilty--indirectly responsible for your expenditure ... er, investment. Looking on the bright side, PC hardware is such an incredible bang-for-the-buck these days. [When I started programming, gasoline was $0.35/gallon and computer memory was $1/byte--I earned about $10k/yr--1968-69.] I would have expected to see a sustained 115-120 for the first 20% of the check. It may be that the chip on the MV8 (Marvell 88SE6480) doesn't have the processing crunch and/or data-handling throughput to saturate the PCIe x4 (v1). You might try moving one more drive from the MV8 to the Z77, and see if that speeds up the check. If so, move another (and repeat). If not, please do run that dskt script. You might have an abnormally slow drive. Enjoy your new toy(s). --UhClem
  21. You're a couple of years behind; the DL green 2TBs did that. The newest ones (DM, 7200 [ie, OP's 3TBs]) do 170-180. (access transfer) You're still talking theory. PCIe x4 (v1) can only sustain 6 drives @140 MB/s (and only @130 MB/s on lesser motherboards). Jus' keepin' it real ... --UhClem
  22. Which will require an additional controller (to replace the Syba; the Syba has a max (real-world) bandwidth of 150-175 MB/s, being PCIe x1 v1). Think about a PCIe v2 controller (maybe a M1015), so that you can exploit the v2 if/when you upgrade the mobo, or totally reconfigure. Couple of clarifications: We're only talking about read speed limitations (your write speed (to array) is limited by the RAID4 methodology employed by unRAID). And, realistically, it is only during parity checks that you will push these limits, and only when you're in the first 30-50% of the check (outer/faster drive zones). So this is not something to really sweat about. But no reason not to be optimal either. I expect that you won't notice the "upper limit" until you exceed 6 drives on the MV8, or try using both of the JMicron ports. I think you can use all 5 of the Intel (real/SB) mobo ports without reaching their "tipping point", but dskt will tell you (I don't have any ICH8 experience). So, allocated optimally, 11 data drives (+ parity) should be able to "parity check" at max (with current hardware). You don't list your drive model #s, but either the Sgt or Hit 2TBs will be the slowest, and will place an inherent limit on the others (during a parity check), so factor that into your "tipping point" decision. Ie, the Sgt 3TB data drives do not need to use all of their max 170+ MB/s, only what the slowest drive's max is. --UhClem
  23. Yes, it is very likely. (A simple measurement will tell--see below.) First, you should realize that both of those numbers are theoretical maximums. "You can't get there from here." That 2.5 Gbps indicates a PCIe v1 connection (vs 5 Gbps for v2), and that number is per lane [x4 in your now-corrected setup]. Both your motherboard and your AOC card are PCIe v1, so no bandwidth is being "wasted". No. Not the same issue, because real on-board SATA ports are in the Southbridge, and do not rely on the PCIexpress mechanism. But they are subject to other upper-limit factors. But wouldn't 120-140 be better? I.e., strive to let the actual disk drive performance be the limiting factor, not the controllers or the PCIexpress mechanism. This can possibly be achieved by not loading any one bandwidth pool with too much combined disk-drive bandwidth consumption. Don't put too many eggs (drives) in any one basket (controller/SB). But how much is too much? You can use the little shell script here [link], and peruse the related discussion in that thread. Given the questions you've asked, and the hardware/drives you have, I'm certain you will benefit. --UhClem "If you push something hard enough, it will fall over." --Fud's First Law of Opposition
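      For reference, the lane arithmetic behind those numbers (the ~78% real-world efficiency factor is the rule of thumb implied by the 780 MB/s figure used elsewhere in this thread, not a spec value):

          gb_per_lane = 2.5                              # PCIe v1 signaling rate, Gb/s per lane
          lanes = 4
          paper = gb_per_lane * lanes * 0.8 / 8 * 1000   # 8b/10b encoding -> MB/s
          print(paper)                                   # 1000 MB/s theoretical for x4 v1
          print(round(0.78 * paper))                     # ~780 MB/s achievable in practice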
  24. For some perspective, which is precisely appropriate, the entire source code for the initial release (outside of Bell Labs) of Unix was 132KB. A true work of art. Recommended reading for anyone with a real interest in Unix--you can find it here [link]. --UhClem "It was forty years ago, today ..." (in a few months)