jbartlett (John Bartlett), Community Developer, Seattle, WA

  1. Cache & array drives are mounted by Unraid. I take it your cache drive is an SSD/NVMe? To test those, you need to add a mapping in the Docker settings. Also note that any change to the drive configuration under Unraid requires the DiskSpeed Docker app to be restarted so it can see those changes.
  2. Right-click anywhere in the browser window to bring up the Dev Tools. In Firefox & Chrome, it's "Inspect". Click on the "Console" tab, enter the command "ShowDebug()", and hit Enter. That should make them visible.
  3. You have to click in the orange area above, just to the right of the period. If you click the Abort button, does it tell you that it's aborting and then change to a "Continue" button after a few seconds?
  4. Check the details on the drives. Drives with the same model number could have a different revision with different performance. There could be other things impacting the drive: smaller track sizes in a given area, due to defects at manufacture time, could affect read times in that area. In fact, I've been working on version 3 of DiskSpeed, which can map out data zones, surface layouts, track sizes, etc. over the entire drive. From my experience, shucked drives seem to be the bottom of the barrel when it comes to platter quality, so you can expect some differences when benchmarking different drives of even the same make/model/revision. The platter surfaces could be an absolute mess but still quite solid for saving data on the good parts.
  5. Version 2.10.7 has been pushed. If you have been getting a white benchmark screen or seemingly never-ending benchmarks, try again with this version. Those issues were likely caused by an unforeseen error; if it happens again, the benchmark will be aborted and the hidden iframe doing the work will become visible, with the error message displayed at the bottom. 2.10.7 Change Log:
     • Refactored the solid-state benchmark to better saturate the drive connection
     • Added a default 10-second pause before starting the read portion of an SSD benchmark to allow the hidden write cache to flush
     • After benchmarking an SSD, display whether Trim is supported on the drive's information page
     • If an error happens while benchmarking a drive, display the hidden iframe performing the benchmark to show the error, and abort any other benchmark currently in progress
     • Reformatted the Benchmark FAQ screen to make the information more user friendly
     • Benchmark tests show the read/write ranges of the SSDs along with the average; a tight bar can indicate a drive that is consistent in its performance and does not utilize cache trickery
     I just noticed that the displayed read speed doesn't match the graphs; investigating. The graphs are showing the correct values. I'm working on adding a benchmark history for SSDs, but I suspect this drive is going wonky on me.
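The write-then-pause-then-read sequence described above can be approximated with plain shell tools. This is an illustrative sketch only, not DiskSpeed's actual commands; the file path and sizes are examples:

```shell
#!/bin/sh
# Sketch: write a test file, give the drive's hidden write cache time
# to flush, then read the file back. Path and sizes are examples.
TESTFILE=/tmp/ssdtest.bin

# Write phase: 64 MiB sequential write
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 status=none

# Flush OS buffers, then pause so the drive's own cache can drain
# (2.10.7 defaults to a 10 second pause before the read portion)
sync
sleep 10

# Read phase: sequential read back, data discarded
dd if="$TESTFILE" of=/dev/null bs=1M status=none
rm -f "$TESTFILE"
```

Without the pause, the read portion can partially hit data still sitting in the drive's cache, inflating the result.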
  6. The application lets you do this yourself: upload an image for a drive that has none, or replace the image with one you prefer. View the drive in question, click the Edit Drive button, then click "Upload New Image" and follow the instructions. Note that if you submit a new drive image to replace an existing one, that image will only be downloaded on that particular server if you happen to reinstall or purge the app data.
  7. I figured out the super-fast read speeds today. The program uses FIO to benchmark the drives over 4 CPU threads on a given CPU. I configured it to create files of the given size divided by 4, split over the threads. FIO was also dividing by 4, so the test files were only 25% of the size they were supposed to be. As such, it was reading the files in less than a second, which it evaluated to the maximum possible bus speed. The next version will correct this. In the meantime, if you specify a test file size of 4 GB or larger to compensate, you should see more reasonable test results.
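The double-division trap comes from fio's size semantics: with numjobs greater than 1, size applies to each job clone, not to the total. A minimal job-file sketch (names, paths, and sizes are illustrative, not DiskSpeed's actual configuration):

```ini
; Hypothetical fio job sketch. With numjobs=4, "size" applies per job,
; so this creates four 1 GiB files (4 GiB read in total), not one.
; Divide your intended total by numjobs yourself -- but only once.
[disk-read-test]
directory=/tmp/fiotest
rw=read
bs=1M
size=1g
numjobs=4
group_reporting
```

If a wrapper also divides the total by the thread count before passing it as size, each job ends up with 1/16 of the intended data, small enough to be served from cache almost instantly.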
  8. It's odd that you're getting two different scales. If the report is still off like shown here, please submit a Debug file via the Debug link on the bottom of the main page.
  9. You'd only be able to do a read test on a parity drive. Since the drive doesn't have a usable partition, there can't be any write benchmarks if the parity drive is an SSD. You'd have to perform the benchmark prior to adding it as the parity.
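A read-only check of this kind can be run against the raw device with dd, which prints a throughput figure when it finishes. This is a sketch, not DiskSpeed's actual benchmark command, and the device path is a placeholder:

```shell
# Read-only sequential speed check; safe for a parity drive because
# nothing is written to the device.
read_bench() {
    # Read a 64 MiB sample sequentially and discard it; increase
    # "count" for a longer, more representative test
    dd if="$1" of=/dev/null bs=1M count=64
}

# Example (placeholder device name):
# read_bench /dev/sdX
```

dd reports elapsed time and MB/s on stderr, which gives a rough single-threaded sequential read figure.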
  10. I added an error trap around that block of code which will go out on the next release. Can you give me more information on that drive? Was it brand new, no partitions, not initialized for example?
  11. Sorry for the lack of responses, I haven't had time recently to visit. I'll respond soon in regards to them. For those who are having the trim issues, please update your Docker to reflect version 2.10.6.
  12. Hover just to the right of the period where marked here; the cursor will still have the text-selection icon. If you see the arrow cursor, you're too far over.
  13. It errored; your NVMe drive probably doesn't support the Trim function. If you click on that hidden link I mentioned, it will show the error message (same as in a recent previous post in this thread). I added a fix which will be in the next release. I'm working on adding more than 4 write threads, but it seems to barf at 8 threads on some systems while working on others; trying to figure out why.
  14. Thank you. I added an error trap around the trim process to catch this, it'll go out in the next release.
  15. Click just to the right of the period at the end of the "hide or show it" line while benchmarking to display the hidden iframes that are doing the actual work. If there was an error during the trim process, it will show up there. Copy and paste that error here (note you don't need the Java stack trace if one is displayed).