Everything posted by Pauven

  1. I can confirm that ST8000NM0055 drives are most definitely affected by this issue. This bit me hard when I upgraded to v6.9.2 back in April. I had to roll back to 6.8.3 to recover from a dual-drive "failure" and inability to rebuild on 6.9.2, and never attempted any of the fixes posted here. I felt extremely lucky to escape without losing data, and I'm still running 6.8.3. optiman, glad to read this worked for you. Since it has been a couple weeks, is your system still okay? I'm starting to feel a little trapped on 6.8.3, so I'll probably have to apply this fix. Since we both have ST8000NM0055 drives, your results matter most to me. I was hopeful that this was a bug in 6.9.x that would be fixed in 6.10, and that I wouldn't need to do the drive fix. Came here to see if anyone had tested this on 6.10 without applying these fixes, but no dice. Paul
  2. Not the restore Unraid version feature (which I used), but rather a restore flash drive from backup. I had to manually copy some config files from the flash drive backup to get 6.8.3 working correctly, and it took me a while to figure out which files needed restoring. Some type of automation here would have been nice. It would be really cool if it were integrated into the restore Unraid version feature - it could prompt to optionally restore certain files from an existing flash drive backup. That could certainly be the issue. But there's no way I'm going back to 6.9.2 on my production server to gather diags once it fails. I'm still 4 hours away from a full recovery, and I'm not into S&M. I know it's my personal perspective, but I feel that if 6.9.x has issues as bad as this, it shouldn't be considered "stable". I wasn't gearing up for a testing run, I was upgrading my production server to a "stable" dot-dot-two release, with a reasonable expectation that the kinks were worked out, and with no awareness that I could be signing up for data loss. I was completely unprepared to deal with these issues, and my main goal was simply surviving.
  3. Cross-posting here for greater user awareness since this was a major issue - on 6.9.2 I was unable to perform a dual-drive data rebuild, and had to roll back to 6.8.3. I know a dual-drive rebuild is pretty rare, and don't know if it gets sufficiently tested in pre-release stages. Wanted to make sure that users know that, at least on my hardware config, this is borked on 6.9.2. Also, it seems the infamous Seagate Ironwolf drive disablement issue may have affected my server, as both of my 8TB Ironwolf drives were disabled by Unraid 6.9.2. I got incredibly lucky that I only had two Ironwolfs, so a data rebuild was an option. If I had 3 of those, I would likely have lost recent data. Paul
  4. As a long-time Unraid user (over a decade now, and loving it!), I rarely have issues (glossing right over those Ryzen teething issues). It is with that perspective that I want to report that there are major issues with 6.9.2. I'd been hanging on to 6.8.3, avoiding the 6.9.x series as the bug reports seemed scary. I read up on 6.9.2 and finally decided that with two dot-dot patches it was time to try it. My main concern was that my two 8 TB Seagate Ironwolf drives might experience this issue.

I had a series of unfortunate events that make it extremely difficult to figure out what transpired, and in what order, so I'll just lay it all out. I'd been running 6.9.2 for almost a week, and I felt I was in the clear. I hadn't noticed any drives going offline. Two nights ago (4/27), somehow my power strip turned off - either circuit protection kicked in, or possibly a dog stepped on the power button; regardless, I didn't discover this until my UPS was depleted and the server shut itself down. Yesterday, after getting the server started up again, I was surprised to see my two Ironwolf drives had the red X's next to them, indicating they were disabled. I troubleshot this for a while, finding nothing in the logs, so it's possible that a Mover I kicked off manually yesterday (which would have been writing to these two drives) caused them to go offline on spin-up (according to the issue linked above), but that the subsequent power failure caused me to lose the logs of this event. [NOTE: I've since discovered that the automatic powerdown from the UPS failure was forced, which triggered diagnostics, so those logs weren't lost after all - diagnostics attached!!!]

I was concerned that the Mover task had only written the latest data to the simulated array, so a rebuild seemed the right path forward to ensure I didn't lose any data. I had to jump through hoops to get Unraid to attempt to rebuild these two drives - apparently you have to un-select them, start/stop the array, then re-select them, before Unraid will give the option to rebuild. Just a critique from a long-time user: this was not obvious, and it seems like there should be a button to force a drive back into the array without all these obstacles.

Anyways, now to the real troubles. Luckily, I only have two Ironwolf drives, and with my dual parity (thanks LimeTech!!!), this was a recoverable situation. The rebuild only made it to about 46 GB before stopping. It appeared that Unraid thought the rebuild was still progressing, but obviously it was stalled. I quickly scanned through the log, finding no errors but lots of warnings related to the swapper being tainted. At this point, I discovered that even though the GUI was responsive (nice work GUI gang!), the underlying system was pretty much hung. I couldn't pause or cancel the data rebuild, and I couldn't powerdown or reboot, not through the GUI, and not through the command line. Issuing a command in the terminal would hang the terminal. Through the console I issued a powerdown, and after a while it said it was doing it forcefully, but it hung on collecting diagnostics. I finally resorted to the 10-second power button press to force the server off (and those diagnostics are missing). I decided that the issue could be those two Ironwolf drives, and since I had two brand new Exos drives of the same capacity, I swapped those in and started the data rebuild with those instead. I tried this twice, and the rebuild never made it further than about 1% (an ominous 66.6 GB was the max rebuilt).
At this point, I really didn't know if I had an actual hardware failure (the power strip issue was still in my thoughts) or a software issue, but with a dual-drive failure and a fully unprotected 87 TB array, I felt more pressure to quickly resolve the issue rather than gather more diagnostics (sorry not sorry). So I rolled back to 6.8.3 (so glad I made that flash backup, really wish there was a restore function), and started the data rebuild again last night. This morning, the rebuild is still running great after 11 hours. It's at 63% complete, and should wrap up in about 6.5 hours based on history. So something changed between 6.8.3 and 6.9.2 that is causing this specific scenario to fail. I know a dual-drive rebuild is a pretty rare event, and I don't know if it has received adequate testing on 6.9.x. While the Seagate Ironwolf drive issue is bad enough, that's a known issue with multiple topics and possible workarounds. But the complete inability to rebuild data to two drives simultaneously seems like a new and very big issue, and this issue persisted even after removing the Ironwolf drives. I will tentatively offer that I may have done a single-drive rebuild, upgrading a drive from 3TB to an 8TB Ironwolf, on 6.9.2. Honestly, I can't recall now if I did this before upgrading to 6.9.2 or after, but I'm pretty sure it was after. So on my system, I believe I was able to perform a single-drive rebuild, and only the dual-drive rebuild was failing. I know we always get in trouble for not including Diagnostics, so I am including a few files: The 20210427-2133 diagnostics are from the forced powerdown two nights ago, on 6.9.2, when the UPS ran out of juice, and before I discovered that the two Ironwolf drives were disabled. Note, they might be disabled already in these diags - no idea what to look for in there. The 20210420-1613 diagnostics are from 6.8.3, the day before I upgraded to 6.9.2. I think I hit the diagnostics button by accident; figured it wouldn't hurt to include it. And finally, the 20210429-0923 diagnostics are from right now, after downgrading to 6.8.3, and with the rebuild still in progress. Paul tower-diagnostics-20210427-2133.zip tower-diagnostics-20210429-0923.zip tower-diagnostics-20210420-1613.zip
  5. Everything these fine gents wrote is correct. I stopped development of UTT after Unraid v6.8 came out. There was some chatter that even v6.8 had some tunables that affected performance, and that what LimeTech was doing didn't work perfectly on all hardware, though as you can see it has been quiet here for well over a year, so I'm guessing the issues weren't enough for users to chase solutions. And perhaps LT did resolve some of those earlier v6.8 performance issues a few users experienced. Ultimately, my perspective is that beginning with v6.8, LT was actively working on internalizing performance tuning, and the need for UTT is no more. Additionally, the original major performance issue that I experienced on my hardware, that led me to create this tool, is gone since v6.8. So even if there were performance issues affecting some hardware configs, I'm lacking the motivation or time to troubleshoot them by revamping this code. I willingly pass the mantle on to anyone else that has a need to refine the code for newer Unraid versions. My shift has ended.
  6. Thanks Johnnie, this is exactly the info I needed. I have created a "Frankenstore" backup solution (pic below), using five 16 TB USB drives. These are cheap drives, at ~$310 each, and even with 3D printing an exoskeleton for portability, wiring up a single power supply, and using a 7-port USB 3.0 hub with toggle switches, my total cost for an 80 TB backup solution is under $1700. The final solution is extremely portable, making it easy to take offsite for security. The 10A 12V power supply could easily support 6 drives, and possibly even 7, so I have a bit of room to grow to 96 or even 112 TB of backup capacity in the future, though for the next year 80 TB is plenty. The toggle switches on the USB hub are really cool, as they allow me to control the power-up order and get the same disk ID each time, though I'm not sure if that matters with the BTRFS pool. Of course, at such a low cost, I am expecting drive failures. Since this is primarily just an offline backup for my main array, I'm cool with taking that risk. When I read through your linked instructions, you talk about replacing a drive, but not specifically replacing a failed drive. Is the process the same, or will it be different? I'm assuming with a JBOD, I only lose the data on the failed drive, plus any files that might have been split across two drives onto the failed drive - I don't suppose there is a way to prevent splitting files across drives in a pool, is there? Also, with Unraid v6.9 in the wings, is using UD still the right way to go? I'm running 6.8.3, and do not run beta or even RC on my production server. Do you know if my UD BTRFS JBOD pool will migrate to v6.9's new multi-pool functionality, or would I have to recreate it from scratch and re-do my backup? Thanks! Paul
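For anyone trying to picture the pool setup being discussed, here is a minimal sketch of how a JBOD-style BTRFS pool could be created and inspected from the command line. The device names, label, and mount point are hypothetical, and on Unraid the actual steps would normally go through Unassigned Devices (or the v6.9 multi-pool GUI) rather than raw commands:

```bash
# Create a multi-device BTRFS pool, JBOD style:
# data uses the "single" profile (no redundancy), metadata is mirrored.
# Device names are hypothetical examples - adjust to your drives.
mkfs.btrfs -L frankenstore -d single -m raid1 /dev/sdx /dev/sdy /dev/sdz

# Mount the pool (naming any one member mounts the whole pool).
mkdir -p /mnt/frankenstore
mount /dev/sdx /mnt/frankenstore

# Inspect members and space usage.
btrfs filesystem show /mnt/frankenstore
btrfs filesystem usage /mnt/frankenstore

# Grow the pool later by adding another drive.
btrfs device add /dev/sda /mnt/frankenstore
```

With the data profile set to single, losing a member generally means losing only the files (or file extents) that landed on that device, which matches the assumption in the post; as far as I know there is no per-file switch to stop BTRFS from allocating a large file's extents across members.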
  7. I don't know that DPI really matters for monitors like it does for printing. Most monitors are 120 DPI or below. What is probably more important is simply having a physical size large enough to cover a high-resolution 4K monitor. So if every banner image was sized to 3840 x 200, that would be high enough resolution to cover 4K widths, and easily scale down to lower resolutions, e.g. 1920 x 100 for a standard Full HD monitor. I don't know if there is an official banner height, but when I investigated it a while back I was coming up with a size of 91 pixels high, which seems a little odd. Perhaps it is correct, I don't know. If it is 91 pixels high, then that could mean we want to target 3840 x 182 as a banner size, and scale down from there. But then again, that might cause problems for even lower resolutions, as narrower windows would zoom in further, and keeping the aspect ratio locked would cause the image to run out of pixels height-wise. Perhaps we have to plan for a minimum width, e.g. 960 x 91, which would scale up to 1920 x 182 and 3840 x 364. If every banner was 3840 x 364, that is a still reasonable 1.4 megapixel image. EDITED to correct some crazy typos. I'm actually decent at math... no really. I'm sure you're right, but I was thinking it might be nice to have some artist community commentary on the requirements before making a feature request. Perhaps other banner creators have some unique needs that I haven't thought about.
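To make the scaling arithmetic above easier to follow, here is a throwaway sketch that locks the 960 x 91 aspect ratio from the post (an assumed baseline, not an official Unraid spec) and prints the matching banner height and megapixel count for a few common widths:

```bash
#!/bin/bash
# Assumed baseline banner area from the post: 960 px wide x 91 px high.
BASE_W=960
BASE_H=91

for width in 960 1920 2560 3840; do
    # Height that preserves the 960:91 aspect ratio (rounded to nearest pixel).
    height=$(awk -v w="$width" -v bw="$BASE_W" -v bh="$BASE_H" \
        'BEGIN { printf "%d", (w * bh / bw) + 0.5 }')
    # Approximate image size in megapixels at that resolution.
    mp=$(awk -v w="$width" -v h="$height" \
        'BEGIN { printf "%.1f", (w * h) / 1000000 }')
    echo "${width} x ${height}  (~${mp} MP)"
done
```

Running it reproduces the corrected numbers above: 1920 x 182, and 3840 x 364 at roughly 1.4 megapixels.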
  8. As much as I enjoy being able to set custom banners, the scaling issue is a real challenge. I'm often moving my browser windows around and setting them to different sizes. Sometimes I have my Unraid window full screen (on a 4K monitor), sometimes half-width on the screen, sometimes quarter screen, and on rare occasions a truly custom size where I've grabbed the browser edge and widened/compressed the window width to fit what I'm working on. I also will look at it on other resolution screens or mobile devices. Because the banner image stretches to fit the full width, it becomes impossible to have a single-resolution image that works perfectly on all screen ratios and resolutions. The current implementation really feels like a 1990's solution. This is extra frustrating with banners that have circular elements, like the 2001 Space Odyssey / HAL and Iron Man / Jarvis banners I created for another user several weeks back. If your browser width is set to the exact same width as the banner, it looks perfect. Anything else and you get ovals, and text in the banner is stretched/compressed in an ugly fashion. On that Jarvis banner, I even positioned a couple "folder" graphical elements to sit behind the Unraid version & server info text, but again this only works if you have the right browser width, otherwise the text doesn't center on these elements. Because I work with web design tools like WordPress, I know that better solutions are possible. Instead of stretching, the image could be scaled keeping the aspect ratio fixed. Possibly the Unraid text elements could scale with the image, maintaining positions on top of graphical elements. If cropping is necessary, I think cropping top/bottom is preferable to cropping the sides, though I'm sure there could be some banners where cropping width is a better solution - perhaps that could be a setting we can toggle. It even crossed my mind that it might be possible to have a multi-segment banner, where you have separate "Left", "Center", and "Right" images that get closer/further away from each other as the browser width changes. This could allow you to set a static background for the Unraid text elements on the left and right, and a floating center image that ties it all together. Perhaps you would even need 5 elements to make this work correctly: "Left", "Left Gap Filler", "Center", "Right Gap Filler", "Right". The Gap Fillers could stretch between the Center and Left/Right, connecting them seamlessly. That way we as banner designers can achieve the near impossible: correct aspect ratio and positioning behind the text elements with a modern responsive behavior to browser width changes. It would be very easy for a banner designer to chop a banner into 5 segments. Unfortunately, I don't have the programming skills to contribute to enhancing Unraid. All I can do is sit here and share ideas, hoping to get some discussion going on this challenge. Perhaps one of the really smart guys can even make it happen...
  9. Interesting, I did not know this. I knew some users had performance issues with 6.8 and that Lime-Tech was still refining their logic, but I hadn't heard that some of the tunables can still help. I guess once I get off my legacy version I can revisit this again. Most likely I'm waiting for 6.9.1, fingers crossed. Though to be honest, I do hope Lime-Tech can figure out the logic to truly make UTT unnecessary. Thanks, it feels good to help and even better to be appreciated!
  10. Cool, you've got the right idea. Just wondering if you are fully leveraging it with a front-end so you don't have to insert discs. Essentially, your discs are your backup, and your array is your media server. I've got 1800+ movies stored away in boxes in the basement (my backup), and watch everything directly from my array. Using my own GUI front-end, of course... 😉
  11. Yes. Yeeeeesssssss. This! I realize that is asking a lot, as typically files are only allowed to exist in one or the other, not both, and this probably throws off some internal checks. But it is definitely a feature I want (plus SSD/NVMe array support, which got a lot of votes but no mentions). My use case is that I have certain data (music/mp3's, software code, etc.) that I want immediate, fast access to all the time, without spinning up any drives, so it makes sense to put them on my NVMe cache drive. But I want that data backed up too. Sure, I could buy another $600 2TB NVMe drive just to create a mirrored cache pool, but ouch, that's a lot of $$$. I'd rather give Lime-Tech $120 for another license that I don't need (let's call it a donation, baby), so that these files can be stored both in my protected array and my unprotected cache. All reads would come from cache, and all writes would go to both (or cache first, sync later). I had a script a while back that was syncing a few directories from my cache drive to my array, but it stopped working and I haven't bothered to try and fix it. Plus, I think it was causing those duplicate file error messages, as Unraid was detecting I had the same file on the cache and the array, so I've been hesitant to try doing this again. I looked for a plugin that would handle this and found nothing. Native Unraid functionality for a "Use Cache Disk: Both" option would be awesome.
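Not the native "Use Cache Disk: Both" behavior being requested here, but as a stopgap, a minimal sketch of the kind of cache-to-array sync script mentioned above might look like this. The share names and target disk are hypothetical, and copying to a dedicated backup path on a disk share (rather than into the same user share) is one way to avoid the duplicate-file warnings described:

```bash
#!/bin/bash
# Hypothetical one-way sync of a few cache-only folders to an array disk.
# Adjust the share names and destination to your own layout.
SRC_BASE="/mnt/cache"
DEST_BASE="/mnt/disk1/cache_backup"   # dedicated backup path, not the same user share

for share in music code documents; do
    # -a preserves permissions/timestamps; --delete mirrors removals too,
    # so drop it if you prefer an additive backup.
    rsync -a --delete "${SRC_BASE}/${share}/" "${DEST_BASE}/${share}/"
done
```

Scheduled via cron or the User Scripts plugin, something like this approximates "reads come from cache, a copy lives on the protected array", though it is a periodic backup rather than the true pooled behavior requested.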
  12. UTT is not compatible with Unraid v6.8 or later. I developed the latest version using Unraid 6.6.6 (which is what I'm still running). I've avoided the 6.7.x series due to some known performance issues, and 6.8 for even bigger issues. So I don't have more recent versions available for testing and development. 6.6.6 works perfectly for me, and I have zero reason to chase version upgrades just to be on a newer number, so I might be here for a while. Which is all really pointless anyway, since Lime-Tech took away the tunables that UTT tunes in v6.8. In theory, UTT is dead and no longer even needed with v6.8, since Lime-Tech took back control of these tunables and has their own internal logic for tuning them. So long story short, UTT is dead for Unraid 6.8 or later. Though it still works for 6.7.3 and earlier.
  13. Drats. So wait... I was actually too quick to create the new version of UTT? I could have sat on my tookus and let Limetech fix the issue for me? That's disappointing. But your tuned values are sky-high, probably among the highest I have ever seen shared here. I would say that you have a special needs controller. Definitely share this with Limetech. Very interesting, thanks for sharing. It took me a long time and a lot of effort to come up with a testing strategy for the v6.0-v6.7 tunables, and these changes with 6.8 pretty much throw all that out the window. If anyone sees any info regarding the new tunables, please repost here. And fingers crossed that Limetech makes UTT unnecessary, as I really really really don't want to do it all over yet again...
  14. In my experience, a rebuild should be similar in time to a parity check. The parity check reads from all drives simultaneously, while a rebuild writes to one and reads from all others. Total bandwidth is close to identical, as is parity calculation load on the CPU. As jbartlett advised, one of your drives could be running slow.
  15. Tom, any reason you are no longer posting that RC's are available in the Prerelease forum? The last one I see is "Unraid OS version 6.7.0-rc8 available". Paul
  16. I think that is a really interesting finding. In a disk-to-disk transfer, you're both reading from and writing to 3 disks simultaneously (4 if you had dual parity), which is a very different workload than just reading from all disks simultaneously. I'm guessing what happened is that you went so low on memory that disk-to-disk transfers were impacted. I'll have to do some testing on my server and see if that is something I can replicate. You have a server that responds very well to low values, at least as far as parity checks go. Actually, it seems to respond the same for almost any set of values, achieving around 141 MB/s across the board except for a few edge cases. For that type of server, you're probably best off just running stock Unraid tunables settings.
  17. Hi @DanielCoffey, thanks for lending a helping hand. Even though it has the same name, for some reason the file you posted has a different size than the original version. I think it would be wise if you remove the file you posted, just in case. Also, the original file is hosted on the Unraid forum, which has done a decent job of hosting files for years. Not sure why vekselstrom had an issue downloading, though it seems to have been a temporary issue. I think it would be best if we keep the download option centralized in the first post, which gives me control over updates.
  18. My monthly parity check completed in another record time for my server, dropping another 12 seconds (haha). Even though the UTT v4.1 enhancements resulted in slightly better peak numbers, my server was already well optimized so the additional performance was not impactful.
  19. UTT does not do any writes, only reads. Specifically, it applies a combination of tunable parameters, then initiates a non-correcting (read-only) parity check, lets it run for 5 or 10 minutes (depending upon the test length you chose), then aborts the parity check. It then tries the next set of values and repeats. I believe dalben's report might be the very first time a drive failure has been reported during testing. UTT v4 works the same basic way as the previous versions, so there's years of data behind that statement. In theory, the tests that UTT performs are no more strenuous than a regular parity check. But anytime you spin up and use your hard drives, especially all of them at once generating max possible heat, you risk a drive failure - same as during a parity check. Some may feel that the stress is slightly harder than a parity check, as UTT keeps repeating the first 5/10 minutes of the parity check, dozens of times (minimum 82 times, maximum 139 times), so it keeps all of your drives spinning at their fastest/hottest for the entire test period, unlike a true parity check that would allow smaller drives to complete and spin down as larger drives continue the check. But the stress should be less than hard drive benchmarking, especially tests that do random small file reads/writes and generate lots of head movement.
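For anyone curious what "apply tunables, run a non-correcting check, then abort it" looks like in practice, here is a heavily simplified sketch of that loop. It is not the actual UTT code; it assumes Unraid's mdcmd interface, and the tunable names, candidate values, and status field shown are illustrative and can differ between Unraid versions:

```bash
#!/bin/bash
# Simplified illustration of the UTT test cycle (not the real script).
MDCMD=/usr/local/sbin/mdcmd     # assumed path to Unraid's md control utility
DURATION=$((10 * 60))           # seconds per sample (10-minute test)

for window in 1024 2048 3072 4096; do
    # Apply one candidate tunable value (example: md_sync_window only).
    $MDCMD set md_sync_window "$window"

    # Start a read-only (non-correcting) parity check from the beginning.
    $MDCMD check NOCORRECT
    sleep "$DURATION"

    # Sample how far the check progressed, then abort it before the next run.
    pos=$($MDCMD status | grep -oP 'mdResyncPos=\K[0-9]+')
    $MDCMD nocheck

    echo "md_sync_window=$window progressed to position $pos in ${DURATION}s"
done
```

The real script cycles through far more combinations (hence the 82 to 139 samples mentioned above) and converts the progress into MB/s, but the read-only nature of the workload is the point: each sample is just the opening minutes of an ordinary parity check.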
  20. Array integrity comes first, you did the right thing. Unfortunately, the slightly different results in this run caused the script to test a lower range in Pass 2, so it didn't retest that magical 177 MB/s result from your earlier run. Would have been interesting to see it retested, and whether it consistently performs better.
  21. Uhmmmm...

--- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
Tst |  RAM | stri  | win  | req | thresh | MB/s
-----------------------------------------------
  1 |  207 |  6144 | 3072 | 128 |   3064 | 148.0
  2 |  216 |  6400 | 3200 | 128 |   3192 | 176.9  <-- !!!!!!!!!!!!!!!!!!!!!!!!!!
  3 |  224 |  6656 | 3328 | 128 |   3320 | 146.6
  4 |  233 |  6912 | 3456 | 128 |   3448 | 148.5
  5 |  242 |  7168 | 3584 | 128 |   3576 | 148.3
  6 |  250 |  7424 | 3712 | 128 |   3704 | 148.3
  7 |  259 |  7680 | 3840 | 128 |   3832 | 148.8
  8 |  267 |  7936 | 3968 | 128 |   3960 | 148.6
  9 |  276 |  8192 | 4096 | 128 |   4088 | 146.9
 10 |  285 |  8448 | 4224 | 128 |   4216 | 149.0
 11 |  293 |  8704 | 4352 | 128 |   4344 | 148.3
 12 |  302 |  8960 | 4480 | 128 |   4472 | 149.1
 13 |  311 |  9216 | 4608 | 128 |   4600 | 108.6
 14 |  319 |  9472 | 4736 | 128 |   4728 | 148.5
 15 |  328 |  9728 | 4864 | 128 |   4856 | 145.8
 16 |  337 |  9984 | 4992 | 128 |   4984 | 149.0
 17 |  345 | 10240 | 5120 | 128 |   5112 | 148.5

I think that has to be some kind of glitch, but I can't imagine how. I've never seen a 30 MB/s jump on a specific setting combo like that. Unless the whole time the wife was watching a movie, except for that one test.
  22. Definitely! I see a strong linear progression from 15 MB/s to 130 MB/s as the settings increase. I've never seen a range this large, or a speed that slow using Unraid default values! Fascinating! I'm very interested in seeing the Long results. Please include the CSV file too, I'll probably chart this one for everyone.
  23. There's only a handful of options to choose from - the menu has been greatly simplified.

Short Test
Run this to see if your system appears to respond to changing the Unraid disk tunables. If your results look mostly flat, then go on with life and forget about this tool - your server doesn't need it. Some servers behave the same no matter what tunables you use. But if you see dramatically different speeds from the Short Test, that shows your server appears to react to changing the tunables, and one of the real tests below could be worth the time. Sometimes you will even see the outlines of a bell curve forming in the Short Test results, which is a very strong indicator that your server responds well to tuning. This test only takes a few minutes, so you don't have to waste much time to see if your server responds to tuning. Also, keep in mind that even if your server responds well to tuning, the fastest parameters might still be the Unraid stock values, so there's no guarantee that running the tests will discover values that make your server faster.

Normal Test
This is the quickest real test. It does not test the nr_requests values, and it uses a 5 minute duration for each test. Because the test adapts to how your HD controller responds to the tunables, it will optionally test some additional value ranges, so the run time varies from 8 to 10 hours.

Thorough Test
Same as the Normal Test, but includes the nr_requests tests, which add another 4 hours to the Normal Test duration. So far we have found that once all the other tunables have been optimized (by the normal tests), the nr_requests default value of 128 is best, making the nr_requests tests basically a waste of time. But there is always the possibility that your server might be different, so I make this optional if you want to check.

Long Test (Recommended)
This is exactly the same as the Normal Test, except each test duration is doubled from 5 minutes to 10 minutes. That means the test takes twice as long. Longer tests improve accuracy, making it easier to identify which settings work best. For example, if the Normal Test had an accuracy of +/- 1.0 MB/s, then the Long Test might double that accuracy to +/- 0.5 MB/s or better. Because the test duration is doubled, the total test time also doubles to 16-20 hours. I recommend this test because it has the increased accuracy of the 10 minute duration, without the extra 8 hours for the nr_requests tests that are probably a waste of time.

Xtra-Long Test
This is exactly the same as the Thorough Test, except each test duration is doubled from 5 minutes to 10 minutes, for the same reason as the Long Test. Another way to think of this is that this is the Long Test plus the nr_requests tests. Because the test duration is doubled, the nr_requests tests add 8 hours, bringing total test length up to the 24-28 hour range.

FYI on Test Accuracy
Test accuracy is determined by looking at tests that get repeated in successive passes - for example, Pass 2 Test 25 is always a repeat of the test result chosen from Pass 1, and Pass 2 Test 1 is usually a repeat of another test in Pass 1 as well. The fastest test result from Passes 1 & 2 also gets repeated in Pass 3. Because the test points can vary by server, sometimes you will get several more repeated test points to compare to determine accuracy. By comparing the reported speeds from one pass to the others for the exact same tests, you can determine the accuracy. The accuracy varies by server.
Some servers, like mine, produce an accuracy of +/- 0.1 MB/s every single time - it's incredibly consistent. Other servers might be +/- 2.5 MB/s, while a few servers are +/- 10 MB/s or worse. Note, if you are seeing large accuracy variances, that might mean you have processes running that are accessing the array, reading or writing data, which essentially makes the test results invalid. When I look at the results and make an accuracy determination, I usually take the worst result (biggest variance) and use that as the accuracy for the entire test. So if the test chosen from Pass 1 was 140.5 MB/s, and the Pass 2 Test 25 was 140.7 MB/s, then that is an accuracy of +/- 0.2 MB/s. But if another repeated test was 143.0 MB/s in one pass, and 142.0 MB/s in another pass, then that indicates an accuracy of +/- 1.0 MB/s, so I say the entire test is +/- 1.0 MB/s. It takes time for servers to 'settle down', so to speak, and produce accurate results. Modern hard drives have huge caches, and HD controllers often have caches, all designed to improve short-term performance. System activity may temporarily affect throughput. The longer tests minimize these effects, improving accuracy. Also, the longer tests just provide for better math. For example, consider a 10 second test versus a 10 minute (600 second) test. 2000 MB moved in 10 seconds = 200 MB/s, and 2060 MB moved in 10 seconds = 206 MB/s. 120,000 MB moved in 600 seconds is also 200 MB/s, but 120,060 MB moved in 600 seconds is 200.1 MB/s. In this example, the variance in both tests was just 60 MB, but the average speed accuracy improved from +/- 6.0 MB/s to +/- 0.1 MB/s - 60 times more accurate. This helps illustrate why the Short Test, which uses a 10 second duration, is not accurate enough for usable results. Understanding the accuracy of your results is important when trying to determine which result is fastest. If your accuracy is +/- 1.0 MB/s, then for all intents and purposes, 162 MB/s is the same as 163 MB/s, and there's no reason to pick 163 over 162.
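As a side note, the averaging arithmetic in that example is easy to verify with a throwaway snippet using the same numbers from the post - the variance is 60 MB in both cases, but spread over a longer window it barely moves the computed speed:

```bash
#!/bin/bash
# Recompute the accuracy example: same 60 MB variance, two test durations.
for pair in "2000 10" "2060 10" "120000 600" "120060 600"; do
    set -- $pair            # $1 = MB moved, $2 = seconds elapsed
    awk -v mb="$1" -v s="$2" \
        'BEGIN { printf "%6d MB in %3d s  =  %6.1f MB/s\n", mb, s, mb/s }'
done
```

which prints 200.0 and 206.0 MB/s for the 10-second case, but 200.0 and 200.1 MB/s for the 600-second case.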