
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.9



Decided to bench my SSD and NVMes. Am I missing something, or is the GB/s meant to read Gbps?

There's no way my 4TB Samsung SATA SSD can hit 4.4 GB/s; more like 440 MB/s in the real world.

Also, what's with the X axis MB/s? Shouldn't that read Gbps?

image.thumb.png.4fb1d6355e80999b84a43844ba30e37b.png

 

Edited by mikeyosm
On 7/15/2023 at 12:23 AM, ArveVM said:

New build and just got word of DiskSpeed - great tool, so thanks a lot ;)

Good readings on the HBA and mobo-attached SSDs, but the NVMe on the Gigabyte Z690 Gaming X is not giving me any disk speed ;(
The drive is the cache drive, xfs, used for docker/appdata.

Please advise on the next step - I've seen others in this post getting good data from their NVMe, so I guess there is an obvious config I'm missing?
- or something I should check/post for further investigation?



image.thumb.png.218839cd46a85520e8fa73e913155c21.png

Soooo, just putting myself out there - noob as I am ;)

Thanks for the excellent app - once I actually figured out how to match the FAQ instructions, everything works fine ;)

And for other noobs who find the excellent FAQ a bit hard on the first try, this is how I set the path allocation :):
image.thumb.png.2dd195cab52c8d8bb8abcd2482e028a3.png


Then it is possible to "Benchmark Drive" for SSDs - both my mobo-attached SATA drives and my NVMes come up just fine ;)
Sample from my appdata drive:
image.png.a06b25e44fc251d836d615f2b033325a.png

 

  • 2 weeks later...
On 4/17/2023 at 11:27 PM, jbartlett said:

 

There is no easy way to bypass this. The files that describe how the CPUs are represented inside the Docker image have changed paths. The next patch will update the file references.

 

 

 

Running:

  • Unraid Version 6.12.3 2023-07-14
  • DiskSpeed 2.10.6
  • Celeron® N5105 @ 2.00GHz

Is there a reason why I would still be getting the "WARNING: You have only 4 CPUs available..." error? (Trying to benchmark a 2TB Crucial BX500 SSD)

 

I also have on different hardware:

  • Unraid Version 6.11.5 2022-11-20
  • DiskSpeed 2.10.6
  • i5-3570K CPU @ 3.40GHz

and get the same error 😔

 

Thanks for any help and continued forum support @jbartlett!


Hi

 

I installed and ran this for the first time today and got the following:

 

Lucee 5.3.10.120 Error (application)
Message: Error invoking external process
Stacktrace: The error occurred in
/var/www/ScanControllers.cfm: line 2044
2042: <CFFILE action="write" file="#PersistDir#/#exe()#_lsblk_-o_export_dev_#DriveID##P##Part.Partitions[NR].PartNo#_exec.txt" output="/sbin/blkid #Args#" addnewline="NO" mode="666">
2043: <CFIF URL.Debug NEQ "FOOBAR"><cfmodule template="cf_flushfs.cfm"></CFIF>
2044: <cfexecute name="/sbin/blkid" arguments="#Args#" variable="PartInfo2" timeout="90" />
2045: <CFIF StripCRLF(PartInfo2) EQ "">
2046: <!--- No output, try without partition id --->

called from /var/www/ScanControllers.cfm: line 2013
2011: <CFSET Part.Partitions[NR].PartNo=ListGetAt(CurrLine,1,":",true)>
2012: <CFSET Part.Partitions[NR].Start=Val(ListGetAt(CurrLine,2,":",true))>
2013: <CFSET Part.Partitions[NR].End=Val(ListGetAt(CurrLine,3,":",true))>
2014: <CFSET Part.Partitions[NR].Size=Val(ListGetAt(CurrLine,4,":",true))>
2015: <CFSET Part.Partitions[NR].FileSystem=ListGetAt(CurrLine,5,":",true)>

called from /var/www/ScanControllers.cfm: line 1860
1858: </CFIF>
1859: </CFLOOP>
1860: </CFLOOP>
1861:
1862: <!--- Admin drive creation --->

Java Stacktrace	lucee.runtime.exp.ApplicationException: Error invoking external process
  at lucee.runtime.tag.Execute.doEndTag(Execute.java:266)
  at scancontrollers_cfm$cf.call_000155_000156(/ScanControllers.cfm:2044)
  at scancontrollers_cfm$cf.call_000155(/ScanControllers.cfm:2013)
  at scancontrollers_cfm$cf.call(/ScanControllers.cfm:1860)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:65)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2493)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2478)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2449)
  at lucee.runtime.engine.Request.exe(Request.java:45)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1216)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1162)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:769)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
  at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
  at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
  at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1789)
  at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
  at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
  at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.base/java.lang.Thread.run(Thread.java:829)
 
Timestamp	8/11/23 7:37:22 PM SAST

Any help would be appreciated!

 

Thanks!

  • 3 weeks later...
On 8/11/2023 at 10:43 AM, shabos said:

Hi

 

I installed and ran this for the first time today and got the following:

 

Any help would be appreciated!

 

Thanks!

 

I added an error trap around that block of code, which will go out in the next release.

 

Can you give me more information on that drive? Was it brand new, with no partitions and not initialized, for example?
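For anyone hitting the same Lucee error: the stack trace points at `blkid` producing no output for a partition with no filesystem metadata, which is exactly the case an error trap has to cover. A minimal sketch of that defensive pattern in shell - `true` is a stand-in for the real `blkid /dev/sdX1` call, since that needs an actual device:

```shell
# Sketch only: treat empty blkid output as "no filesystem yet"
# instead of letting the scan die. 'true' stands in for: blkid /dev/sdX1
OUT=$(true)                     # blkid prints nothing for a blank partition
if [ -z "$OUT" ]; then
  echo "no filesystem metadata; skipping partition"
else
  echo "found: $OUT"
fi
```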

On 7/26/2023 at 11:48 PM, kiwijunglist said:

if you benchmark the parity drive in the array, does that break the parity?

 

You'd only be able to do a read test on a parity drive. Since the drive doesn't have a usable partition, there can't be any write benchmarks if the parity drive is an SSD. You'd have to perform the write benchmark prior to adding it as parity.
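Reads don't alter parity, so a quick read-only spot check outside DiskSpeed is safe too. A hedged sketch - `/dev/sdX` is a placeholder for the parity device, and the runnable part below uses a temporary file instead so it works anywhere:

```shell
# Against the real device a read-only throughput check would look like:
#   dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
# Demonstrated here against a temporary file so it can run anywhere:
TMP=$(mktemp)
dd if=/dev/zero of="$TMP" bs=1M count=8 status=none   # make an 8 MB test file
dd if="$TMP" of=/dev/null bs=1M status=none && echo "read ok"
rm -f "$TMP"
```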

Edited by jbartlett
On 7/23/2023 at 2:49 AM, mikeyosm said:

Decided to bench my SSD and NVMes. Am I missing something, or is the GB/s meant to read Gbps?

There's no way my 4TB Samsung SATA SSD can hit 4.4 GB/s; more like 440 MB/s in the real world.

Also, what's with the X axis MB/s? Shouldn't that read Gbps?

image.thumb.png.4fb1d6355e80999b84a43844ba30e37b.png

 

 

It's odd that you're getting two different scales. If the report is still off like shown here, please submit a Debug file via the Debug link at the bottom of the main page.

On 7/23/2023 at 2:49 AM, mikeyosm said:

Decided to bench my SSD and NVMes. Am I missing something, or is the GB/s meant to read Gbps?

There's no way my 4TB Samsung SATA SSD can hit 4.4 GB/s; more like 440 MB/s in the real world.

Also, what's with the X axis MB/s? Shouldn't that read Gbps?

image.thumb.png.4fb1d6355e80999b84a43844ba30e37b.png

 

 

I figured out the super-high read speeds today. The program uses fio to benchmark the drive over 4 CPU threads on a given CPU. I configured it to create files of the given size divided by 4, to split the work across the threads. But fio was also dividing by 4, so the test files were only 25% of the size they were supposed to be.

As such, it was reading the files in less than a second, which it evaluated to the maximum possible bus speed. The next version will correct this. In the meantime, if you specify a test file size of 4 GB or larger to compensate, you should see more reasonable results.
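The double division described above can be sketched in a couple of lines of shell arithmetic (the numbers here are illustrative, not DiskSpeed's actual defaults):

```shell
# Illustrative only: dividing the requested size by the thread count twice
# shrinks each test file to 25% of what it should be.
TOTAL_MB=1024
THREADS=4
INTENDED_PER_JOB=$((TOTAL_MB / THREADS))        # what each fio job should get
BUGGED_PER_JOB=$((INTENDED_PER_JOB / THREADS))  # fio divided by 4 again
echo "intended per-job: ${INTENDED_PER_JOB} MB"   # 256 MB
echo "actual per-job:   ${BUGGED_PER_JOB} MB"     # 64 MB - 25% of intended
```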

6 hours ago, maciekish said:

Hi can you please add WD160EDGZ?

 

The application allows you to do this yourself - to upload an image for a drive that has none, or to replace the image with one you prefer. View the drive in question, then click the Edit Drive button. Then click "Upload New Image" and follow the instructions.

 

Note that if you submit a new drive image to replace an existing one, the new image will only be downloaded on that particular server if you happen to reinstall or purge the app data.

  • jbartlett changed the title to DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7

Version 2.10.7 has been pushed. If you have been getting a white benchmark screen or seemingly never-ending benchmarks, try again with this version. Those issues were likely caused by an unforeseen error; if it happens again, the benchmark will be aborted and the hidden iframe doing the work will become visible, with the error message displayed at the bottom.

 

2.10.7 Change Log:

  • Refactored the Solid State benchmark to better saturate the drive connection

  • Added a default 10-second pause before starting the read portion of an SSD benchmark to allow any hidden write cache to flush

  • After benchmarking an SSD, display whether Trim is supported on the drive's information page

  • If an error happens while benchmarking a drive, display the hidden iframe performing the benchmark to show the error, and abort any other benchmark currently in progress

  • Reformatted the Benchmark FAQ screen to make the information more user-friendly

 

Benchmark tests show the read/write ranges of the SSDs along with the average. A tight bar can indicate a drive that is consistent in its performance and does not rely on cache trickery.

image.png.cbc591c43a37a12ce123b59f0c62cdef.png

 

I just noticed that the displayed read speed doesn't match the graphs; investigating. The graphs are showing the correct values.

 

image.png.2712eec61736d44b474807a4e63df126.png

 

image.png.7aafd8b0fabbd70ca147c071daeedd7a.png

 

I'm working on adding a benchmark history for SSDs, but I suspect this drive is going wonky on me.

image.png.285f3e928b70674a8930c12c0dea93b2.png

5 hours ago, nraygun said:

Just discovered this plugin - great job!

 

Am I reading this right in that disk 2 (green) is a little on the wonky side because of how much it deviates from the other two disks? Disks 1-3 were all shucked at the same time; sdc is an unassigned older drive used for backups.

 

Check the details on the drives. Drives with the same model number could have a different revision with different performance.

 

There could be other things impacting the drive. Smaller track sizes in a given area, due to defects at manufacturing time, could affect read times in that area. In fact, I've been working on version 3 of DiskSpeed, which can map out data zones, surface layouts, track sizes, etc. over the entire drive.

 

From my experience, shucked drives seem to be the bottom of the barrel when it comes to platter quality, so you can expect some differences when benchmarking different drives of even the same make/model/revision. The platter surfaces could be an absolute mess but still quite solid for saving data on the good parts.


I am getting the same problem as others: when I try to start the benchmark on my drives, nothing happens. There also isn't any text displayed when I mouse over the end of the text "Click on a drive label to hide or show it."

DiskSpeed1.jpg

 

I have been able to benchmark my SSD cache drive once I added the Unraid paths, so that part works.

 

If I benchmark the controller, that does appear to run some tests against the disks.

DiskSpeed2.thumb.jpg.ae6f207c29c35039554172f34e60133d.jpg

Edited by The_Target
On 9/22/2023 at 12:38 AM, jbartlett said:

 

Check the details on the drives. Drives with the same model number could have a different revision with different performance.

 

There could be other things impacting the drive. Smaller track sizes in a given area, due to defects at manufacturing time, could affect read times in that area. In fact, I've been working on version 3 of DiskSpeed, which can map out data zones, surface layouts, track sizes, etc. over the entire drive.

 

From my experience, shucked drives seem to be the bottom of the barrel when it comes to platter quality, so you can expect some differences when benchmarking different drives of even the same make/model/revision. The platter surfaces could be an absolute mess but still quite solid for saving data on the good parts.

Thanks!

The revisions are the same. In fact, just about everything about the drives is the same.

I guess it's just platter quality on this particular drive as you said.

On 9/23/2023 at 8:32 AM, The_Target said:

I am getting the same problem as others: when I try to start the benchmark on my drives, nothing happens. There also isn't any text displayed when I mouse over the end of the text "Click on a drive label to hide or show it."

 

image.png.65ca9c385ab65088b883766f845f39f2.png

 

You have to click in the orange area above, just to the right of the period.

 

If you click the Abort button, does it tell you that it's aborting and then changes to a "Continue" button after a few seconds?

14 hours ago, jbartlett said:

 

image.png.65ca9c385ab65088b883766f845f39f2.png

 

You have to click in the orange area above, just to the right of the period.

 

If you click the Abort button, does it tell you that it's aborting and then changes to a "Continue" button after a few seconds?

 

Nothing happens when I try to click the orange area. I have tried multiple browsers, etc., and nothing happens.

 

The Abort button changes to "Aborting benchmark" and then just hangs there.

3 hours ago, The_Target said:

 

Nothing happens when I try to click the orange area. I have tried multiple browsers, etc., and nothing happens.

 

The Abort button changes to "Aborting benchmark" and then just hangs there.

 

Right-click anywhere in the browser window and choose "Inspect" (both Firefox and Chrome have it) to bring up the Dev Tools.

Click on the "Console" tab, type ShowDebug() and press Enter. That should make the hidden iframes visible.


I'm able to test my array drives fine but not the cache drives. I tried a purge, but same deal. I looked at the FAQ but didn't really get how to test them - I can't mount them through Unassigned Devices when they are assigned to the cache pool. Maybe you just can't test in this config?

On 10/1/2023 at 12:52 AM, Bushibot said:

I'm able to test my array drives fine but not the cache drives. I tried a purge, but same deal. I looked at the FAQ but didn't really get how to test them - I can't mount them through Unassigned Devices when they are assigned to the cache pool. Maybe you just can't test in this config?

 

Cache & array drives are mounted by Unraid. I take it your cache drive is an SSD/NVMe? To test those, you need to add a mapping to the Docker settings.

 

image.png.6b6ca3f4a47426a1bcf25d7b37724ad6.png

 

Also note that any changes to the drive configuration under Unraid require the DiskSpeed Docker app to be restarted so it can see those changes.

1 hour ago, jbartlett said:

 

Cache & array drives are mounted by Unraid. I take it your cache drive is an SSD/NVMe? To test those, you need to add a mapping to the Docker settings.

 

image.png.6b6ca3f4a47426a1bcf25d7b37724ad6.png

 

Also note that any changes to the drive configuration under Unraid require the DiskSpeed Docker app to be restarted so it can see those changes.

I’ll try that, thanks.


Sorry if this has been asked before, but I couldn't find anything on the intro page or by googling. What does it mean for a controller's throughput to be reported as "downgraded"?

 

One controller is "ok":

Quote

Broadcom / LSI
Serial Attached SCSI controller

Type: Onboard Controller
Current Link Speed: (ok) width (ok) ( max throughput)
Maximum Link Speed: 8GT/s width x8 (7.88 GB/s max throughput)


One other controller is "downgraded":

Quote

 

SAS2308 PCI-Express Fusion-MPT SAS-2


Broadcom / LSI
Serial Attached SCSI controller

Type: Onboard Controller
Current Link Speed: (ok) width (downgraded) ( max throughput)
Maximum Link Speed: 8GT/s width x8 (7.88 GB/s max throughput)

 

 

I am mainly struggling with very long disk rebuild times. I just swapped a 4TB out for a 20TB and it rebuilt at around 160 MB/s, which took about two days. I have some drives that I know are slower than others, but I want to make sure my controllers are set up and cooled properly so that they are not the bottleneck. Everything is plugged into PCIe x16 on-board slots (one is the GPU slot and the other is an x4) with mini-SAS-to-SATA cables.


Benchmarks for the "ok" controller (the 20TB drives are Parity, Parity 2, Disk 1 and Disk 2):

image.png.1a4ea2251fc7586ac0f07b22afd7813e.png

 

Benchmarks for the "downgraded" controller (the 20TB drive is Disk 11 - slower than what we saw with the "ok" controller, but not terrible; however, overall the drives here are much slower):

image.png.00be6b1051f4bbbd8c1aee77cd668571.png

 

Thanks!

 


OK I found some more information:
 

lspci -vv

 

The OK one is reporting x8, which I assume means 8 PCIe lanes:
 

01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

...

                LnkSta: Speed 8GT/s, Width x8
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

The other is reporting x4 as downgraded:

 

08:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

...

                LnkSta: Speed 8GT/s, Width x4 (downgraded)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
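For reference, the link width is what sets the throughput ceiling here. A back-of-envelope sketch of the math behind the "7.88 GB/s max throughput" figure DiskSpeed reports (approximate encoding overhead, not lspci output):

```shell
# Rough PCIe 3.0 bandwidth math behind the "downgraded" warning:
# 8 GT/s per lane with 128b/130b encoding is about 985 MB/s of payload.
PER_LANE_MB=985
echo "x8: $((PER_LANE_MB * 8)) MB/s"   # 7880 MB/s, i.e. the 7.88 GB/s reported
echo "x4: $((PER_LANE_MB * 4)) MB/s"   # 3940 MB/s, half the ceiling on the x4 link
```

So the downgraded x4 link still has roughly 3.9 GB/s available, well above what a handful of spinning disks can saturate.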

 

I bought these cards: https://www.amazon.com/dp/B0BVVDT4F1?psc=1&ref=ppx_yo2ov_dt_b_product_details

 

Looks like they are PCIe x8; I was under the assumption they were x4.

 

image.png.561894f8ec138f17d0d7e017dbae1c71.png

 

I am using this motherboard: https://www.amazon.com/dp/B0BG7DY6MT?ref=ppx_yo2ov_dt_b_product_details&th=1

 

 

Its second x16 slot is running x4 lanes:

image.png.d61879dc2a13aa720560e7b2747dc7d1.png

 

So maybe I am out of luck. Or maybe I can bifurcate the x16 slot into x8/x8 and get out of degraded mode?

 

I see this in the motherboard manual. It looks like its intent is for running PCIe M.2 controllers, but this may be similar - think it's worth a shot?

 

image.png.9f0059db47dc7c2237183f388d63e27e.png

 

