Everything posted by jbartlett

  1. The Diagnostic zip file doesn't do me any good. There's a "Create Debug File" link inside the DiskSpeed app at the bottom of the main page that provides the data I can make use of.
  2. It helped a lot! Once I saw that the difference between the two was displaying the controller block & iframe, I was able to isolate the issue to a location in the code. It loops over the list of drives selected and does some validation before adding the flagged drive(s) to the benchmark list. If you could, take a screenshot of your drive selection (no need to proceed past it) and submit a debug file from the "Create debug file" link at the bottom of the main page showing all the drives. Email to [email protected]. This will let me look at your configuration and trace through the logic using it. I appreciate your assistance!
  3. Unfortunately, I took Unraid off this box a few months ago and have Windows running bare metal now.
  4. Interesting that you said you couldn't find an iframe. If the iframes were rendered, clicking just to the right of the period, where the pointer is still a text cursor and not an arrow cursor, will make those iframes visible. Can you view the source and look for the following? Search for "The Error Occurred in" and look for a file/line below it. The error message itself will be above. The code below is just a quick bit I put together to show the HTML that gets generated.

     <tr>
       <td class="label">Message</td>
       <td>variable [B] doesn't exist</td>
     </tr>
     <tr>
       <td class="label">Stacktrace</td>
       <td>The Error Occurred in<br>
         <a class="-lucee-icon-minus" id="__btn$1" onclick="__LUCEE.oc( this );" style="cursor: pointer;">
           <b>/var/www/test.cfm: line 3</b>
         </a>
         <br>
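If digging through view-source by hand is tedious, a grep sketch can pull that error block out of a saved copy of the page. Saving the source as page.html is my suggested workflow, not a feature of the app, and the sample HTML below is just the fragment quoted above:

```shell
# Sample of the generated error HTML (from the post), saved to a file so
# we can grep it; in practice, save the page's view-source as page.html.
cat > page.html <<'EOF'
<tr><td class="label">Message</td><td>variable [B] doesn't exist</td></tr>
<tr><td class="label">Stacktrace</td><td>The Error Occurred in<br>
<b>/var/www/test.cfm: line 3</b>
EOF
# Show the error message above the marker and the file/line below it.
grep -B 2 -A 2 "The Error Occurred in" page.html
rm -f page.html
```

Widen the -B/-A context counts if the message and file/line sit further from the marker in your page.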
  5. Sorry, I still don't have any idea what you were referring to. Was it reporting anything like retrying (x)? If so, check the SpeedGap box on the drive selection screen. How many drives do you have attached to the controller that's not completing? It's set to timeout after 50 minutes, figured that should be enough time - unless someone has a crazy loaded down controller. Are you that someone?
  6. Pushed version 2.10.7.2. Updates controller speed/width parsing where it was displaying something like: Current Link Speed: (ok) width (ok) ( max throughput)
  7. If you mean the Vendor, a lot of SSDs do not populate the Vendor field, and I have to add logic to catch those and pull the vendor from the model, if possible. This is why I provide the option to change the vendor. If you enter "Sabrent" into the vendor field, save, and then rescan, it'll pull up the image if someone has already submitted one. Otherwise, you can edit the drive, edit the image, and provide one yourself. If you're referring to the data units read/written, that's based on the sector size multiplied by the unit count. If none of the above are what you are referring to, then you'll have to be more specific.
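For the data-units figure, here is a sketch of that multiplication, assuming the common NVMe SMART convention where one data unit is 1,000 sectors of 512 bytes. The multiplier and the example unit count are my assumptions for illustration; check your drive's spec sheet:

```shell
# Convert an NVMe "Data Units Read" count to bytes, assuming the NVMe
# convention of 1 data unit = 1,000 x 512-byte sectors (verify per drive).
units=1234567                   # example value as reported by smartctl
bytes=$((units * 1000 * 512))
echo "$bytes bytes"
echo "$((bytes / 1000000000)) GB"
```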
  8. Thank you, I fixed it for the next release. This is the command it was trying to execute. Can you please SSH into the server and enter it?

     dd if=/dev/sdj of=/dev/null bs=1310720 skip=2441883 iflag=direct conv=noerror status=progress

     It will run until the end of the drive, so press CTRL-C if it runs without issue for 30 seconds (the app stops this after 15 seconds). If there is no error after you stop it, try entering the Docker container and then retrying it:

     docker exec -it DiskSpeed bash
     dd if=/dev/sdj of=/dev/null bs=1310720 skip=2441883 iflag=direct conv=noerror status=progress
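If you'd rather not babysit the CTRL-C, a hedged local variant can reproduce the app's 15-second cutoff with the `timeout` utility. The scratch file and the `timeout` wrapper are my additions for a safe dry run, not how the app itself invokes dd; swap SRC for the real device (e.g. /dev/sdj) to test the actual drive:

```shell
# Safe dry run of the benchmark read against a scratch file; swap SRC
# for the real device path to test the actual drive.
SRC=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1M count=8 status=none   # 8 MiB scratch file
# 'timeout 15' mirrors the app's 15-second cutoff. Add iflag=direct when
# reading a real device (it bypasses the page cache); it's omitted here
# because many tmpfs mounts reject O_DIRECT.
timeout 15 dd if="$SRC" of=/dev/null bs=1310720 conv=noerror status=none
echo "exit=$?"   # 0 = clean read, 124 = stopped by the timeout
rm -f "$SRC"
```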
  9. Yup, in version 3, including benchmarking multi-device setups like RAIDs and the like. I've been working on v3 more frequently to get it ready for a beta release.
  10. It looks like the tools I use to gather the information changed their layout because I see that on my controller too. Will make it easier for me to fix. Have you tried a controller benchmark yet? It'll tell you if you are reaching capacity of what your controller can support.
  11. I've noticed recently that this started happening in the 3.0 Alpha version but hadn't investigated it until your mention of it in the 2.x version, as I've never seen it show up in version 2.x. The graph is displayed if the file (SMB share path) \\nas\appdata\DiskSpeed\Instances\local\driveinfo\DriveBenchmarks.txt exists, which contains the graph data. Can you check whether that file exists and can be viewed when the graph is or is not visible?
  12. This display is due to parsing, your system is outputting a display that I haven't encountered and thus didn't properly parse. If you could submit a controller debug file (click on the debug link in the DiskSpeed app at the bottom of the page), I can take a look at it.
  13. Cache & array drives are mounted by Unraid. I take it your cache drive is an SSD/NVMe? To test those, you need to add a mapping to the Docker settings. Also note that any change to the drive configuration under Unraid requires the DiskSpeed Docker app to be restarted so it can see those changes.
  14. Right-click anywhere in the browser window to bring up the Dev Tools. In Firefox & Chrome, it's "Inspect". Click on the "Console" tab, enter the command "ShowDebug()", and hit Enter. That should make them visible.
  15. You have to click in the orange area above, just to the right of the period. If you click the Abort button, does it tell you that it's aborting and then changes to a "Continue" button after a few seconds?
  16. Check the details on the drives. Drives with the same model number could have a different revision with different performance. There could be other things impacting the drive. Smaller track sizes in a given area due to defects at manufacturing time could affect read times in that area. In fact, I've been working on version 3 of DiskSpeed that can map out data zones, surface layouts, track sizes, etc. over the entire drive. From my experience, shucked drives seem to be the bottom of the barrel when it comes to platter quality, so you can expect some differences when benchmarking different drives of even the same make/model/revision. The platter surfaces could be an absolute mess but still quite solid for saving data on the good parts.
  17. Version 2.10.7 has been pushed. If you have been getting a white benchmark screen or seemingly never-ending benchmarks, try again with this version. Those issues were likely caused by an unforeseen error; if it happens again, the benchmark will be aborted and the hidden iframe doing the work will become visible, with the error message displayed at the bottom.

     2.10.7 Change Log:
       • Refactored the Solid State benchmark to better saturate the drive connection
       • Added a default 10-second pause before starting the read portion of an SSD benchmark to allow hidden write cache to flush
       • After benchmarking an SSD, display whether Trim is supported on the drive's information page
       • If an error happens while benchmarking a drive, display the hidden iframe performing the benchmark to show the error and abort any other benchmark currently in progress
       • Reformatted the Benchmark FAQ screen to make the information more user friendly
       • Benchmark tests show the read/write ranges of the SSDs along with the average; a tight bar can indicate a drive that is consistent in its performance and does not utilize cache trickery

     I just noticed that the displayed read speed doesn't match the graphs; investigating. The graphs are showing the correct values. I'm working on adding a benchmark history for SSDs, but I suspect this drive is going wonky on me.
  18. The application allows you to do this yourself, to upload an image for a drive that has none or to replace the image with one you prefer. View the drive in question, then click the Edit Drive button. Then click on the "Upload New Image" and follow the instructions. Note that if you submit a new drive image to replace an existing one, that image will only download on that particular server if you happen to reinstall or purge the app data.
  19. I figured out the super read speed today. The program uses FIO to benchmark the drives over 4 CPU threads on a given CPU. I configured it to create files of the given size divided by 4, to split over the threads. FIO was also dividing by 4, so the test files were only 25% of the size they were supposed to be. As such, it was reading the files in less than a second, which it evaluated to the maximum possible bus speed. The next version will correct this. In the meantime, if you specify a test file size of 4 GB or larger to compensate, you should see more reasonable test results.
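The arithmetic behind that bug, as I read the post (the 4 GiB request is an illustrative number, and the double division is my reconstruction of the described behavior):

```shell
# Requested test size divided by the thread count twice: once by the
# app, once again by FIO, leaving 25% of the intended data on disk.
requested=$((4 * 1024 * 1024 * 1024))        # user asks for a 4 GiB test
threads=4
per_thread=$((requested / threads))          # app's intent: 1 GiB per thread
actual_per_thread=$((per_thread / threads))  # FIO divides again: 256 MiB
actual_total=$((actual_per_thread * threads))
echo "total on disk: $actual_total bytes"    # 1 GiB, i.e. 25% of the request
```

This also shows why specifying a 4x larger test size works around it: the double division cancels out to the size you originally wanted.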
  20. It's odd that you're getting two different scales. If the report is still off like shown here, please submit a Debug file via the Debug link on the bottom of the main page.
  21. You'd only be able to do a read test on a Parity drive. Since the drive doesn't have a usable partition, there can't be any write benchmarks if the parity drive is an SSD. You'd have to perform the benchmark prior to adding it as the Parity.
  22. I added an error trap around that block of code which will go out on the next release. Can you give me more information on that drive? Was it brand new, no partitions, not initialized for example?
  23. Sorry for the lack of responses, I haven't had time recently to visit. I'll respond soon in regards to them. For those who are having the trim issues, please update your Docker to reflect version 2.10.6.
  24. Hover just to the right of the period where marked here; the cursor will still show the text-selection icon. If you see the arrow cursor, you're too far over.