JonathanM Posted March 11, 2017

8 hours ago, jbartlett said: I'm okay with using the same measurement that dd uses to report with. I'm not quite sure I understand your question. This tool came out of people wondering why their parity speeds were tanking at certain spots, and my own curiosity. So in that sense, it's purely a technical diagnostic tool.

Pretty sure what UhClem was getting at was the difference between a 4TB drive being called 4TB on the box but showing up as 3.64TiB in disk properties: marketing vs. technical specs. The marketing guys want to see the largest possible number for a given scenario; the tech wants a number that's strongly correlated with what's really being measured, to get an apples-to-apples comparison. You mentioned parity check speeds specifically, so you probably want to use the same units in which the parity check speed is expressed.
JorgeB Posted March 11, 2017 (edited)

9 minutes ago, jonathanm said: You mentioned specifically parity check speeds, so you probably want to use the same units with which the parity check speed is expressed.

Agreed, and I believe it's already using the same units; at least my results point to that.

Edited March 11, 2017 by johnnie.black
UhClem Posted March 11, 2017

Thank you jonathanm, and johnnie.black. [10^6 rules this kingdom, I guess.] -- UhClem
jbartlett (author) Posted March 11, 2017

That is really odd. I'll look into it.
jbartlett (author) Posted March 12, 2017

As for display purposes, most people want to see 4TB if they purchased a 4TB drive, even if it's a marketing stunt. Whether it says it's testing a 2TB drive at the 1TB spot or at the 0.909TiB location, it's still the same effective spot. I could add a switch in v3 to choose between them.
jbartlett (author) Posted March 12, 2017

Personally, I think in terms of TB for drive storage, TiB for everything else.
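Since the thread keeps switching between the two unit systems, a tiny shell helper makes the 4TB-vs-3.64TiB relationship concrete (the `tb_to_tib` name is mine for illustration, not anything in diskspeed.sh):

```shell
# Convert a marketing capacity in TB (10^12 bytes) to TiB (2^40 bytes),
# the unit most OS tools report -- hence a "4TB" drive showing as 3.64TiB.
tb_to_tib() {
    awk -v tb="$1" 'BEGIN { printf "%.2f\n", tb * 10^12 / 2^40 }'
}

tb_to_tib 4    # → 3.64
tb_to_tib 2    # → 1.82
```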
UhClem Posted March 12, 2017 (edited)

In hindsight, I should have just made a non-specific suggestion (to circumvent the memory-hog thing) like "Try reducing the buffer size (bs=N) and increasing the count=N." You'd have arrived at satisfactory specifics (without my bothersome meddling :)).

Aside: The specific options I did suggest [bs=64M count=15] were intended to accomplish the following:

1) [Personal flaw] As a hardcore software person, I shun the whole 10^N game, so I wasn't going to even utter nMB. And I didn't want to appear ignorant of the guidelines for Unix/Linux direct I/O, which (strongly) suggest [but don't insist] that operations stay on "block boundaries" [both memory and disk]. But note that I was mistaken about this second concern, because 64,000,000 (64MB) *IS* evenly divisible by 4096!

2) Since this is all "behind the scenes" for the users of diskspeed.sh, any changes should go unnoticed (as much as possible). E.g., to reproduce the sampling points, it worked out that (15*64M) was *very* close to 1GB [within a fraction of 1%]. (But even a slight change in sampling points can affect the (precise) reproducibility of results. Low-level disk layout/formatting is deeply complex--I'm still trying to learn [but I can't get into the sausage factory :)].)

John, other than reducing the memory footprint, I wouldn't change anything else (in the context of this hubbub). [But do continue to nurture and develop "your baby".] -- UhClem

Edited March 12, 2017 by UhClem to remove ambiguity
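For readers following along, the bs/count scheme being discussed can be sketched as a shell wrapper around dd. The function name and the /dev/sdX device below are illustrative, not part of diskspeed.sh, and the real script would also pass iflag=direct to bypass the page cache, per the direct-I/O discussion above:

```shell
# Read `count` blocks of size `bs` starting `skip` blocks into a device
# (or file), and print dd's summary line, which includes the throughput.
# With bs=64M count=15, each call reads ~1GB, so sampling near the Nth
# "GB" means skip = N * 15 (approximate, as noted in the post above).
read_chunk() {
    local dev=$1 bs=$2 count=$3 skip=$4
    dd if="$dev" of=/dev/null bs="$bs" count="$count" skip="$skip" 2>&1 | tail -1
}

# e.g., sample near the 500G point of a hypothetical drive:
#   read_chunk /dev/sdX 64M 15 $(( 500 * 15 ))
```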
jbartlett (author) Posted March 12, 2017

I don't mind discussing different viewpoints -- love it, in fact. And your reiteration of the 4096 boundary does make excellent sense. Tests aren't precisely reproducible: you can perform the exact same test multiple times and get different results each time, and that's why I added the iteration flag.
UhClem Posted March 13, 2017 (edited)

10 hours ago, jbartlett said: Tests aren't precisely reproducible. You can perform the exact same test multiple times and get different results each time and that's why I added the iteration flag.

Please allow me to rebut. ... In the context of performance testing of current-era disk drives (2010-now), test results really are precisely (within a fraction of a percent) reproducible, provided the following conditions are met:

1. The system being used must be controlled absolutely -- no Windows:) -- and on Unix flavors, no other users, no daemons, cron jobs, download agents, etc.

2. The disk drive(s) must be healthy. Not just SMART-healthy, *healthy*. (Consider: upon receiving a distressing diagnosis, the patient says "I can't be sick, Doc; I walked in here, didn't I?")

3. The rest of the hardware should be non-flaky, and up to the demands of the tests being performed (i.e., bus bandwidths, etc.).

To illustrate (and to also provide data supporting something I mentioned in a previous post), I performed the following test on 4 drives (all 4TB HGST HDS724040ALE640 [non-NAS 7200rpm]) [call them a, b, c & d]. For each drive, consecutively (not concurrently), read the 5GiB from 500G-504G, measuring the speed for each GiB. Do the test 4 times [call them 1, 2, 3 & 4].
The actual results follow [using cat & paste to get the 16 outputs together]:

          --- a1 ---    --- b1 ---    --- c1 ---    --- d1 ---
    500G  156.6 M/sec   158.2 M/sec   156.3 M/sec   160.2 M/sec
    501G  159.6 M/sec   154.8 M/sec   160.3 M/sec   157.7 M/sec
    502G  155.3 M/sec   159.7 M/sec   154.1 M/sec   159.1 M/sec
    503G  161.1 M/sec   154.1 M/sec   163.1 M/sec   159.9 M/sec
    504G  154.8 M/sec   158.0 M/sec   153.3 M/sec   156.5 M/sec

          --- a2 ---    --- b2 ---    --- c2 ---    --- d2 ---
    500G  157.1 M/sec   158.8 M/sec   157.0 M/sec   160.8 M/sec
    501G  159.6 M/sec   154.8 M/sec   160.3 M/sec   157.7 M/sec
    502G  155.3 M/sec   159.7 M/sec   154.1 M/sec   159.1 M/sec
    503G  161.1 M/sec   154.1 M/sec   163.1 M/sec   159.9 M/sec
    504G  154.8 M/sec   158.0 M/sec   153.3 M/sec   156.5 M/sec

          --- a3 ---    --- b3 ---    --- c3 ---    --- d3 ---
    500G  157.1 M/sec   158.8 M/sec   157.0 M/sec   160.8 M/sec
    501G  159.6 M/sec   154.8 M/sec   160.3 M/sec   157.7 M/sec
    502G  155.3 M/sec   159.7 M/sec   154.1 M/sec   159.1 M/sec
    503G  161.1 M/sec   154.1 M/sec   163.1 M/sec   159.9 M/sec
    504G  154.8 M/sec   158.0 M/sec   153.3 M/sec   156.5 M/sec

          --- a4 ---    --- b4 ---    --- c4 ---    --- d4 ---
    500G  157.1 M/sec   158.8 M/sec   157.0 M/sec   160.8 M/sec
    501G  159.6 M/sec   154.8 M/sec   160.3 M/sec   157.7 M/sec
    502G  155.3 M/sec   159.7 M/sec   154.1 M/sec   159.1 M/sec
    503G  161.1 M/sec   154.1 M/sec   163.1 M/sec   159.9 M/sec
    504G  154.8 M/sec   158.0 M/sec   153.3 M/sec   156.5 M/sec

14 hours ago, UhClem said: But even a slight change in sampling points can affect the (precise) reproducibility of results. Low-level disk layout/formatting is deeply complex

This is exemplified best by comparing 502G & 503G for drive c (above).

10 hours ago, jbartlett said: and that's why I added the iteration flag

Options are good ... (just be very prudent when choosing the default behavior). Maybe, if you believe this example is representative (not an anomaly), you'd consider adding a flag to expand the sample size.
Hypothetically (since I neither use unRAID nor employ a GUI when running Linux--and, hence, probably can't get the direct benefit of diskspeed.sh), because I know how to control my test environment, I would never use -i3, but I might welcome/use -e3. -- UhClem

Edited March 13, 2017 by UhClem
jbartlett (author) Posted March 13, 2017

1 hour ago, UhClem said: Please allow me to rebut. ... In the context of performance testing of current era disk drives (2010-now), tests results really are precisely (within a fraction of a percent) reproducible.

Sorry -- in my context, "precisely reproducible" means the exact same results every time. I get fluctuations of around 1% consistently.
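One quick way to put a number on those fluctuations (a sketch of mine, not part of diskspeed.sh): pipe repeated readings through awk and print the min-to-max spread relative to the mean.

```shell
# Print (max - min) / mean as a percentage, for whitespace-separated
# speed readings on stdin.
spread() {
    awk '{ for (i = 1; i <= NF; i++) { s += $i; n++
           if (n == 1 || $i > max) max = $i
           if (n == 1 || $i < min) min = $i } }
         END { printf "%.2f%%\n", 100 * (max - min) / (s / n) }'
}

# drive a's four 500G samples from the table above:
echo "156.6 157.1 157.1 157.1" | spread   # → 0.32%
```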
UhClem Posted March 14, 2017

Me (as Bart at the blackboard): I will not discuss the precise meaning of "precisely". I will not discuss ... But I will continue my endeavor to have precisely performing hardware and software. -- UhClem
shooga Posted March 16, 2017

This looks like a great utility. Quick question: the instructions say to "ensure no other processes are running on the server". It seems impossible to take that literally. Does this mean processes that would impact disk performance? No parity check, file copy, mover, etc.? I can easily shut down Docker, but what about plugins?
RobJ Posted March 16, 2017

5 hours ago, shooga said: This looks like a great utility. Quick question: The instructions say to "ensure no other processes are running on the server". It seems impossible to take that literally. Does this mean processes that would impact disk performance?

It just means operations with I/O significant enough to affect the speed measurements, like the ones you mentioned.
jbartlett (author) Posted March 17, 2017

I'll update that to make it clearer: no other processes accessing the drives.
shooga Posted March 17, 2017

Thanks for the responses. Just wanted to make sure I really understood (and I do now).
Mehlhosen Posted April 1, 2017

I'm new, so please bear with me. How do I execute this program/script? I assume I have to download it somewhere on my unRAID server -- but where? Do I just type in "diskspeed.sh"? I've tried, and I get this in return:

    Last login: Sat Apr 1 10:22:46 on ttys000
    Madisons-MacBook:~ madisonmehlhose$ telnet -l root 10.0.1.17
    Trying 10.0.1.17...
    Connected to 10.0.1.17.
    Escape character is '^]'.
    Password:
    Linux 4.9.19-unRAID.
    Last login: Sat Apr 1 10:22:32 -0400 2017 on /dev/pts/1 from 10.0.1.19.
    root@Tower:~# diskspeed.sh
    -bash: diskspeed.sh: command not found
    root@Tower:~#

There's not really any information on "running the script" -- can someone help me understand? Running version 6.3.3 of unRAID. Thanks in advance!
Fireball3 Posted April 1, 2017

Welcome to the unRAID forum. You can use "pwd" to view the directory you're in. Given that you copied diskspeed.sh to the flash drive, you have to change to /boot after you log in via telnet -- /boot is the root of the stick. You can list the contents of a directory with "ls -l". Once you're in the directory that contains the script, try "diskspeed.sh" or "sh diskspeed.sh" (all without quotes).
Mehlhosen Posted April 2, 2017 (edited)

I figured it out shortly after my original post. First of all, when I logged in I was in the "root" directory. As Fireball3 mentioned, you can enter "pwd" to find what directory you are in. To get into another directory you enter "cd /directory", so I entered "cd /boot" to get into the boot directory. This is where I had copied my diskspeed.sh file to through Apple's Finder. Then you can run "diskspeed.sh" or any other variant. Thanks!!

Edited April 2, 2017 by Mehlhosen
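The steps from the last few posts, collected into one sketch (assuming the script was copied to the root of the flash drive, which unRAID mounts at /boot):

```shell
# Run diskspeed.sh from the flash drive after logging in over telnet/ssh.
run_diskspeed() {
    cd /boot || return 1     # flash drive root ("pwd" shows where you are)
    ls -l diskspeed.sh       # confirm the script is actually there
    sh diskspeed.sh          # or: bash diskspeed.sh
}

# then simply: run_diskspeed
```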
JorgeB Posted April 7, 2017

Just for kicks, a very old disk -- anyone got a slower one?
Fireball3 Posted April 10, 2017

...maybe a USB 1.1 drive. Why are those "stairs" in the graph?
JorgeB Posted April 10, 2017 (edited)

7 hours ago, Fireball3 said: ...maybe a USB 1.1 drive. Why are those "stairs" in the graph?

It's an old 6.4GB IDE (PATA) disk; the stairs, I believe, are because the disk is too small, so samples overlap. It's a shame speed hasn't increased on par with capacity: we now have disks over 1000 times larger, but speed only increased by a factor of about 15.

Edited April 10, 2017 by johnnie.black
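The 1000x-capacity / 15x-speed gap is easiest to feel as whole-drive read time (capacity divided by sustained speed). The numbers below are illustrative assumptions, not measurements from this thread:

```shell
# Minutes to read a drive end to end, given capacity in GB and speed in MB/s.
full_read_minutes() {
    awk -v gb="$1" -v mbs="$2" 'BEGIN { printf "%.0f\n", gb * 1000 / mbs / 60 }'
}

full_read_minutes 6.4 10      # old 6.4GB IDE disk at ~10MB/s  → ~11 minutes
full_read_minutes 6000 150    # modern 6TB disk at ~150MB/s    → ~667 minutes (~11 hours)
```

So a full pass (and hence a parity check) takes roughly sixty times longer today, which is exactly the 1000x/15x ratio in a more visceral form.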
Fireball3 Posted April 10, 2017

3 hours ago, johnnie.black said: It's a shame speed hasn't increased on par with capacity, we now have disks over a 1000 times larger but speed only increased by a factor of about 15.

Remember, we have SSDs too! They brought a significant boost in speed, and capacity is also in an acceptable range. Once the price drops to HDD levels, the classic spinners will soon become extinct. Durability is steadily increasing as well. I wonder where the full-cost pricing of a datacenter stands today when comparing SSD vs. HDD.
trurl Posted April 10, 2017

4 hours ago, johnnie.black said: It's an old 6.4GB IDE disk, stairs I believe are because the disk is too small so samples overlap.

The only things that small these days are giveaway USB2 flash drives. I wonder what kind of speed we would get with one of those?
trurl Posted April 10, 2017

Or even a DVD R/W, which is about that capacity. Of course I don't have hardware in my unRAID for that. I seldom touch optical media even on my desktop.
jbartlett (author) Posted April 11, 2017

On 4/10/2017 at 1:16 AM, johnnie.black said: It's an old 6.4GB IDE PATA disk, stairs I believe are because the disk is too small so samples overlap.

Did you modify the script so it would test drives under 25GB? I don't care if you did alter it, just wondering if there's a bug.