DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.9



15 hours ago, spazoid said:

I tried force updating the container, but it did not resolve it.

Any ideas?

Please update the app; it should reflect 2.10.9.2. This issue should be fixed.

 

If you encounter any further errors, I just need the line number block, not the stack trace.

Edited by jbartlett
  • 3 weeks later...

Hi guys

 

I'm quite new to Unraid and installed the DiskSpeed app, but for whatever reason I am unable to run the benchmarks.

Unable to benchmark for the following reason
* Docker volume mount not detected

 

Any help would be appreciated.

 

Thanks

  • 2 weeks later...
On 6/17/2024 at 2:00 AM, PST said:

Hi guys

 

I'm quite new to Unraid and installed the DiskSpeed app, but for whatever reason I am unable to run the benchmarks.

Unable to benchmark for the following reason
* Docker volume mount not detected

 

Any help would be appreciated.

 

Thanks

 

There's a FAQ link on the Benchmark page that explains how to set it up.

  • 3 weeks later...
On 7/15/2024 at 8:49 AM, Nebur692 said:

I have this error on the cache disk and I have space to spare:

 

[screenshot: error message]

 

Can you open a shell prompt inside the Docker container

[screenshot: opening the container console]

 

and then enter: df -B 1KB | grep cache
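(df -B 1KB lists filesystem usage in 1000-byte blocks, and the grep narrows the output to the cache pool mounts.)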

 

Reply with a screenshot of the result.

 

Edited by jbartlett
  • 2 weeks later...

What could be the reason that the single drive speed is about 115 MB/s, but when it benchmarks all drives at the same time, each drive maxes out at about 250 MB/s?

 

It doesn't make any sense to me. I already reran the test and restarted the Docker container.

 

There is only one drive which does >200 MB/s at single drive speed.

 

Controller:

Fusion-MPT 24GSAS/PCIe SAS40xx/41xx


Broadcom / LSI
RAID bus controller

Type: Add-on Card in PCIe Slot SLOT4 PCIe 5.0 X8 (x8 PCI Express 5 x8)
Current & Maximum Link Speed: 16GT/s width x8 (15.75 GB/s max throughput)

 

Port 1: 	sdaa 	14TB 	Seagate ST14000NM001G Rev SN04 (Disk 6)
Port 2: 	sdab 	14TB 	Seagate ST14000NM002G Rev 0 (Disk 14)
Port 3: 	sdc 	1TB 	NVMe WDS100T3X0C-00SJ Rev 0 (docker)
Port 4: 	sdd 	1TB 	NVMe WD_BLACK SN770 1 Rev 0 (docker2)
Port 5: 	sde 	14TB 	Seagate ST14000NM001G Rev SN04 (Disk 1)
Port 6: 	sdf 	10TB 	Seagate ST10000VN0004 Rev SC61 (Disk 2)
Port 7: 	sdg 	10TB 	Seagate ST10000VN0004 Rev SC61 (Disk 3)
Port 8: 	sdh 	10TB 	Seagate ST10000VN0004 Rev SC61 (Disk 4)
Port 9: 	sdj 	10TB 	Seagate ST10000VN0004 Rev SC61 (Disk 5)
Port 10: 	sdk 	12TB 	Seagate ST12000NM001G Rev SN04 (Disk 13)
Port 11: 	sdl 	14TB 	Western Digital WUH721414ALE6L4 Rev LDGNW07G (Disk 7)
Port 12: 	sdm 	14TB 	Seagate ST14000NM001G Rev SN04 (Disk 8)
Port 13: 	sdn 	14TB 	Western Digital WUH721414ALE6L4 Rev LDGNW240 (Disk 9)
Port 14: 	sdo 	12TB 	Seagate ST12000NM0008 Rev SN04 (Disk 10)
Port 15: 	sdq 	12TB 	Western Digital WD120EFAX Rev 81.00A81 (Disk 11)
Port 16: 	sdr 	18TB 	Seagate ST18000NM000J Rev SN02 
Port 17: 	sds 	18TB 	Seagate ST18000NM000J Rev SN04 
Port 18: 	sdt 	18TB 	Seagate ST18000NM000J Rev SN01 (Disk 12)
Port 19: 	sdu 	18TB 	Seagate ST18000NM000J Rev SN01 (Disk 15)
Port 20: 	sdv 	18TB 	Seagate ST18000NM000J Rev SN02 (Parity)
Port 21: 	sdw 	18TB 	Seagate ST18000NM000J Rev SN02 (Parity 2)
Port 22: 	sdx 	1TB 	Western Digital WDS100T1R0A Rev 411010WR (Cache 3)
Port 23: 	sdy 	1TB 	Western Digital WDS100T1R0A Rev 411000WR (Cache)
Port 24: 	sdz 	1TB 	Western Digital WDS100T1R0A Rev 411000WR (Cache 2)

 

[benchmark graph screenshots]

 

Quote

24 drives reported a significantly slower single drive speed than all the drives reading at the same time. This is an abnormal test result. Please re-run this benchmark. If this result occurs again, restart the DiskSpeed docker app and try again.

 

What's wrong here?

 

Running Unraid 7.0 beta 2

 

hdparm looks "OK"?

root@unraid:/mnt/disk3# hdparm -tT /dev/sdv

/dev/sdv:
 Timing cached reads:   41692 MB in  1.99 seconds = 20945.44 MB/sec
 Timing buffered disk reads: 580 MB in  3.00 seconds = 193.13 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdw

/dev/sdw:
 Timing cached reads:   41634 MB in  1.99 seconds = 20917.79 MB/sec
 Timing buffered disk reads: 562 MB in  3.00 seconds = 187.14 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdx

/dev/sdx:
 Timing cached reads:   40174 MB in  1.99 seconds = 20182.08 MB/sec
 Timing buffered disk reads: 604 MB in  3.00 seconds = 201.29 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   39974 MB in  1.99 seconds = 20080.78 MB/sec
 Timing buffered disk reads: 3626 MB in  3.00 seconds = 1208.21 MB/sec
root@unraid:/mnt/disk3#
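(For reference: -T times cached reads, which mostly reflects memory and bus speed, while -t times buffered reads from the disk itself, so the second figure is each drive's actual sequential read speed.)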

 

Edited by enJOyIT

Running Unraid 6.12.11

Container version: 2.10.9.3

 

When I click on the "upload drive & benchmark data" button, I get a 500 error.

 

[screenshot: 500 error]

 

An error showed up in the container log; unsure if it's directly related.

 

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by lucee.commons.lang.ClassUtil (jar:/opt/lucee/server/lucee-server/patches/6.0.3.1.lco) to constructor com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl()
WARNING: Please consider reporting this to the maintainers of lucee.commons.lang.ClassUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

 

  • 3 weeks later...
On 8/8/2024 at 5:56 PM, matthewdavis said:

Running Unraid 6.12.11

Container version: 2.10.9.3

 

When I click on the "upload drive & benchmark data" button, I get a 500 error.

 

[screenshot: 500 error]

 

An error showed up in the container log; unsure if it's directly related.

 

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by lucee.commons.lang.ClassUtil (jar:/opt/lucee/server/lucee-server/patches/6.0.3.1.lco) to constructor com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl()
WARNING: Please consider reporting this to the maintainers of lucee.commons.lang.ClassUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

 

 

Are you still getting an error trying to upload the drive information?

 

The container log is unrelated.

On 8/8/2024 at 3:05 PM, enJOyIT said:

What could be the reason that the single drive speed is about 115 MB/s, but when it benchmarks all drives at the same time, each drive maxes out at about 250 MB/s?

 

It doesn't make any sense to me. I already reran the test and restarted the Docker container.

 

There is only one drive which does >200 MB/s at single drive speed.

 

[...]

 

Click on the DiskSpeed icon in your Docker apps and select "Console". Then enter the following command, replacing xxx with the drive ID, and see if you get the expected speeds or the slower ones. Press CTRL-C to stop.

 

dd if=/dev/xxx of=/dev/null bs=1310720 skip=0 iflag=direct status=progress conv=noerror
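(For reference: iflag=direct bypasses the page cache so you're measuring the disk itself rather than RAM, bs=1310720 reads in 1.25 MiB blocks, and status=progress prints a running transfer rate.)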

On 8/27/2024 at 7:51 PM, pyrosrockthisworld said:

Still getting the 500 internal server error here

 

Please try again. I believe it is resolved now. If you still get an error, please let me know what drive vendors you have (Seagate, Samsung, etc.)

Edited by jbartlett

I don't know how I overlooked this app for so many years, but it's great! I used it to identify a few optimizations to my setup.

I did have a couple of questions/comments though:

  1. I didn't see a way to test all controllers concurrently, or a way to test combinations of drives concurrently. Do you think this is feasible to implement? It would be very useful for triaging bottlenecks.
  2. I don't see the negotiated link speed anywhere in the UI. I know all my disks are 6 Gb/s, but some are negotiated at a current speed of 3 Gb/s.
  3. I don't think some of the throughput calculations are correct. For example I have a 9207-8e on an 8x connection and it says: "8GT/s width x8 (7.88 GB/s max throughput)". I also have a 9207-8i on a 4x PCIe connection and it says: "8GT/s width x4 (7.88 GB/s max throughput)". How can these both have the same max throughput?

Thanks, and I also submitted a new disk to your database :)

I'm curious what other people get for this one since it appears to be a 24TB HAMR drive; got them from serverpartsdeals. Not impressed with the speed, but I do like the density.

https://www.strangejourney.net/hddb/ModelDatabase.cfm?Vendor=Seagate&Model=ST24000NM000C


 

21 hours ago, johnsanc said:

I don't know how I overlooked this app for so many years, but it's great! I used it to identify a few optimizations to my setup.

I did have a couple of questions/comments though:

  1. I didn't see a way to test all controllers concurrently, or a way to test combinations of drives concurrently. Do you think this is feasible to implement? It would be very useful for triaging bottlenecks.
  2. I don't see the negotiated link speed anywhere in the UI. I know all my disks are 6 Gb/s, but some are negotiated at a current speed of 3 Gb/s.
  3. I don't think some of the throughput calculations are correct. For example I have a 9207-8e on an 8x connection and it says: "8GT/s width x8 (7.88 GB/s max throughput)". I also have a 9207-8i on a 4x PCIe connection and it says: "8GT/s width x4 (7.88 GB/s max throughput)". How can these both have the same max throughput?

Thanks, and I also submitted a new disk to your database :)

I'm curious what other people get for this one since it appears to be a 24TB HAMR drive; got them from serverpartsdeals. Not impressed with the speed, but I do like the density.

https://www.strangejourney.net/hddb/ModelDatabase.cfm?Vendor=Seagate&Model=ST24000NM000C

1. I can look into doing both: benchmarking all drive controllers at the same time, along with benchmarking certain drives across the entire system at the same time.

 

2. You can view the signal speed of a drive here:
[screenshot: drive signal speed display]

 

That data is only as current as the last controller scan, so if you suspect a drive has dropped to a lower signaling speed, rescan the controllers and then view the drive in question again.

 

3. They can both have 7.88 GB/s if one is on PCIe 3 and the other on PCIe 4. Here's the chart I have coded in; the value displayed is a cross-reference between PCIe version & connection width. The values were pulled off of some website ages ago, and it looks like I need to add an entry for PCIe 6. I guess adding the PCIe version would help, but you can cross-reference the GT/s value with the transfer rate column to see which one it is.
[chart: max throughput by PCIe version and lane width]
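
For anyone who wants to sanity-check the chart, the values follow straight from the per-lane transfer rate and the line encoding. A quick illustrative sketch in Python (my own, not the app's code; the PCIe 6 efficiency is approximate since 6.0 moves to FLIT-based encoding):

# Max PCIe throughput in GB/s: per-lane rate (GT/s) x encoding efficiency / 8 bits, x lanes.
# PCIe 1.x/2.x use 8b/10b encoding; 3.0-5.0 use 128b/130b; 6.0 is approximate (FLIT overhead).
RATES = {1: (2.5, 8/10), 2: (5.0, 8/10), 3: (8.0, 128/130),
         4: (16.0, 128/130), 5: (32.0, 128/130), 6: (64.0, 242/256)}

def max_throughput(gen, lanes):
    rate, efficiency = RATES[gen]
    return rate * efficiency / 8 * lanes

print(round(max_throughput(3, 8), 2))  # 7.88 -> PCIe 3 x8
print(round(max_throughput(3, 4), 2))  # 3.94 -> PCIe 3 x4
print(round(max_throughput(4, 4), 2))  # 7.88 -> PCIe 4 x4 matches PCIe 3 x8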

Edited by jbartlett

@johnsanc - Actually, I just added the PCIe version to the display. If you refresh the app, it should reflect version 2.10.9.6, and the controller page will display the PCIe version if the controller reveals the data.

[screenshot: controller page showing PCIe version]

 

And WOW on those speeds on the ST24000NM000C at the high end. Goes to show you that capacity isn't all it's cracked up to be.

[screenshot: ST24000NM000C benchmark results]


@jbartlett - Thanks, I updated but I see basically the same thing:

Current & Maximum Link Speed: 8GT/s (PCIe 3) width x8 (7.88 GB/s max throughput)

Current & Maximum Link Speed: 8GT/s (PCIe 3) width x4 (7.88 GB/s max throughput)

[screenshot: controller link speed display]

 

How can x4 and x8 on PCIe 3 have the same throughput?

Not sure if it matters, but these are PCIe 4 slots, while the cards in them are PCIe 3.


 

My comment about the link speed was comparing what DiskSpeed says vs. the Identity tab on a particular disk. I see DiskSpeed always says 6 Gb/s, but the Identity tab also shows the "current" speed, like this:

 

[screenshot: Identity tab showing current link speed]

Edited by johnsanc
59 minutes ago, johnsanc said:

Current & Maximum Link Speed: 8GT/s (PCIe 3) width x8 (7.88 GB/s max throughput)

Current & Maximum Link Speed: 8GT/s (PCIe 3) width x4 (7.88 GB/s max throughput)

Yup, that looks to be a bug! The x4 should be displaying 3.94 GB/s: at PCIe 3's 8 GT/s with 128b/130b encoding, each lane carries about 0.985 GB/s, so x8 works out to 7.88 GB/s and x4 to half that.

 

1 hour ago, johnsanc said:

Not sure if it matters, but these are PCIe 4 slots, while the cards in them are PCIe 3

The values are pulled from the controller card, so it would always report PCIe 3 even in a higher slot. Though now I wonder if I can get the slot values....


A couple of other suggestions:

  1. Disk numbers should probably be in ascending order everywhere in the UI (graphs, checkbox selections). Today it looks like it's all sorted as strings, which puts things out of order and makes them more difficult to find.
  2. It would be really nice to be able to filter the graph with all the drives by model number. I find myself having to cross-reference disk numbers with models on the left to be able to filter what I need. This would also be nice for seeing relative performance between drive models.
11 hours ago, TreksterDK said:

Just updated the docker to version 2.10.9.6, and now I get this error when I open the web interface.

I added partial support for CD-ROMs by displaying them on the controller page, but it required a lot of checks to be added throughout the app. This particular issue doesn't happen for me on my dev & prod systems, so I'm not sure why it's happening for you, but I added the CD-ROM check there. Please update and try again; it will reflect version 2.10.9.7.

4 hours ago, johnsanc said:

A couple of other suggestions:

  1. Disk numbers should probably be in ascending order everywhere in the UI (graphs, checkbox selections). Today it looks like it's all sorted as strings, which puts things out of order and makes them more difficult to find.
  2. It would be really nice to be able to filter the graph with all the drives by model number. I find myself having to cross-reference disk numbers with models on the left to be able to filter what I need. This would also be nice for seeing relative performance between drive models.

 

1. It sorts by assigned drive label, putting Parity first, then data drives, then cache drives. It probably puts pools after that; I haven't tested that scenario yet. After that, it's by drive letter. The main page showing the drives assigned to a controller is in controller port number sequence, as is the controller info page.

2. Interesting idea. As of now, you can toggle the visibility of a graph item by clicking on the drive in the legend.

14 minutes ago, jbartlett said:

 

1. It sorts by assigned drive label, putting Parity first, then data drives, then cache drives. It probably puts pools after that; I haven't tested that scenario yet. After that, it's by drive letter. The main page showing the drives assigned to a controller is in controller port number sequence, as is the controller info page.

2. Interesting idea. As of now, you can toggle the visibility of a graph item by clicking on the drive in the legend.

 Sorry, I probably wasn't clear. A picture is always better:
[screenshot: drive selection list]

I can see it's ordered by Parity, then data, then pools. But the data disks are sorted as strings instead of by disk number.

I think it should be Data 1, Data 2, Data 3, etc. instead of Data 1, Data 10, Data 11, etc.
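
For what it's worth, the usual fix is a natural-sort key that compares the numeric part of a label as a number instead of character by character. A minimal sketch in Python, just to illustrate the idea (the app itself runs on Lucee/CFML, so this isn't its actual code):

import re

def natural_key(label):
    # Split "Data 10" into ["Data ", 10, ""] so numbers compare numerically.
    return [int(p) if p.isdigit() else p.lower() for p in re.split(r'(\d+)', label)]

labels = ["Data 1", "Data 10", "Data 11", "Data 2", "Data 3"]
print(sorted(labels))                   # string sort: Data 1, Data 10, Data 11, Data 2, Data 3
print(sorted(labels, key=natural_key))  # natural sort: Data 1, Data 2, Data 3, Data 10, Data 11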

