DiskSpeed, hard drive benchmarking (unRAID 6+), version 2.8




After updating to unRAID 6.5.1 I'm getting this display in the Docker tab where DiskSpeed should be:

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Also got an email with the same warning.

 

brunnhilde-diagnostics-20180423-1813.zip

31 minutes ago, wgstarks said:

After updating to unraid 6.5.1 I’m getting this display in the docker tab where disk speed should be

 

What version did you upgrade from? The reason I ask is that I'm currently using 6.5.1-rc6 and I don't see the problem you report so I'm wondering if the cause is in the diff between -rc6 and -final.

Just now, John_M said:

 

What version did you upgrade from? The reason I ask is that I'm currently using 6.5.1-rc6 and I don't see the problem you report so I'm wondering if the cause is in the diff between -rc6 and -final.

Upgraded from 6.5.0.

Ran the update assistant tool first and it didn’t report any issues.

1 hour ago, wgstarks said:

After updating to unraid 6.5.1 I’m getting this display in the docker tab where disk speed should be-

Those are hidden metadata files created by your Mac. They're safe to delete.

17 minutes ago, Squid said:

Those are hidden metadata files created by your Mac.  Safe to delete

Thanks, Squid.

Removed the hidden file, deleted the orphaned image and re-installed from CA. Everything looks good now.

Maybe I need to modify my cleanup user script to delete these as well as the .DS_Store files?
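Something like this minimal sketch might work (the path is just an example from the warning above — point it wherever the metadata files land; both the `._*` AppleDouble files and `.DS_Store` are Mac metadata):

```shell
#!/bin/bash
# Sketch of a cleanup user script: delete Mac metadata files
# (AppleDouble "._*" files and .DS_Store) under a target path.
# TARGET is an assumption -- adjust it for your own setup.
TARGET="/boot/config/plugins/dockerMan/templates-user"

# -type f limits matches to regular files; \( ... \) groups the two
# name patterns so -delete applies to either of them.
find "$TARGET" -type f \( -name '._*' -o -name '.DS_Store' \) -delete
```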

41 minutes ago, Virtike said:

Seagate 5TB

 

I've been shying away from all the Seagate DM drives except for NAS backup systems. They all seem to spin at 5,980 RPM, and none of the drives I've tested has a smooth graph from start to end.


While there are 5 regular disks and one ARC-1200 RAID volume on my server, only 4 disks (disks 2, 3, 4, and the cache) get benched. Diagnostics attached.


 
[0:0:0:0] disk Generic- USB3.0 CRW -0 1.00 /dev/sda 3.90GB
[0:0:0:1] disk Generic- USB3.0 CRW -1 1.00 /dev/sdb -
[1:0:0:0] disk Areca ARC-1200-VOL R001 /dev/sdc 3.00TB
[2:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdd 2.00TB
[2:0:1:0] disk ATA Hitachi HDS72202 A20N /dev/sde 2.00TB
[3:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdf 2.00TB
[4:0:0:0] disk ATA ST9500325AS SDM1 /dev/sdg 500GB
[5:0:0:0] disk ATA Hitachi HUA72202 A3FD /dev/sdh 2.00TB

 

preclear-diagnostics-20180425-1355.zip

Edited by dikkiedirk
8 hours ago, dikkiedirk said:

While there are 5 regular disks and one ARC 1200 RAID Volume on my server, only 4 disks, disk 2,3,4 and cache get benched.  I attach diagnostics.

 

If you mean that it didn't detect the drives, the next beta will partially resolve that.

On 2018-04-25 at 8:04 AM, dikkiedirk said:

While there are 5 regular disks and one ARC 1200 RAID Volume on my server, only 4 disks, disk 2,3,4 and cache get benched.  I attach diagnostics.



 
[0:0:0:0] disk Generic- USB3.0 CRW -0 1.00 /dev/sda 3.90GB
[0:0:0:1] disk Generic- USB3.0 CRW -1 1.00 /dev/sdb -
[1:0:0:0] disk Areca ARC-1200-VOL R001 /dev/sdc 3.00TB
[2:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdd 2.00TB
[2:0:1:0] disk ATA Hitachi HDS72202 A20N /dev/sde 2.00TB
[3:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdf 2.00TB
[4:0:0:0] disk ATA ST9500325AS SDM1 /dev/sdg 500GB
[5:0:0:0] disk ATA Hitachi HUA72202 A3FD /dev/sdh 2.00TB

 

preclear-diagnostics-20180425-1355.zip

Getting the same issue. 2 of my disks are being detected, but aren't getting benched. Where can I get you the diags/logs?

1 hour ago, Caldorian said:

Getting the same issue. 2 of my disks are being detected, but aren't getting benched. Where can I get you the diags/logs?

 

Wait until the next beta and then click on the "Create Debug File" link at the bottom of the drives. I'm finalizing testing on it now. I'll post a change log when it's ready.


Hi.

I ran DiskSpeed for a controller that has 8 disks attached (chart attached). The controller is an AOC-SASLP-MV8 (PCIe x4).

Do those results indicate that the 4x PCIe lanes are saturated by the first 4 drives? Or could it be something else?

 

 

[attached image: controller benchmark chart]

On 4/27/2018 at 11:56 PM, papnikol said:

I ran DiskSpeed for a controller that has 8 disks attached (chart attached). The controller is an AOC-SASLP-MV8 (PCIe x4).

Do those results indicate that the 4x PCIe lanes are saturated by the 4 first drives? Or could it be something else?

 

That's fascinating. It does look like that, but I haven't seen a controller fail to load-share the reads like that before. The benchmark should have kept re-scanning with fewer drives, though, until it got under 90% of the maximum throughput detected or ran out of drives.
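For rough context, assuming the slot was running at PCIe 1.0 rates (250 MB/s per lane, per the spec), the math makes a four-drive saturation plausible — the ~20% protocol overhead and ~190 MB/s per-drive outer-track figure below are assumptions, not measurements:

```shell
# Back-of-envelope: PCIe 1.0 is 2.5 GT/s per lane with 8b/10b encoding,
# i.e. about 250 MB/s of raw payload bandwidth per lane.
lanes=4; per_lane=250
raw=$((lanes * per_lane))        # 1000 MB/s raw for an x4 slot
usable=$((raw * 80 / 100))       # assume ~20% protocol overhead -> ~800 MB/s
drives=4; per_drive=190          # assumed ~190 MB/s outer-track reads
echo "usable: ~${usable} MB/s; four drives want: $((drives * per_drive)) MB/s"
```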

 

Here's a controller test on my test rig. It gives all the drives equal time on the first pass, which maxed out the bandwidth even though it meant capping each drive's read speed.

 

[attached image: controller benchmark graph]


Beta 3A

NOTE: Ryzen Threadripper (X399 Chipsets) with IOMMU enabled will not run this version successfully. I am diagnosing the issue.

 

If you have missing controllers or missing drives, please click on "Create Debug File" link at the bottom of the DiskSpeed home page to create a debug file with controller information included. Do not submit unRAID diagnostic files unless asked. Email the debug file to hddb@strangejourney.net

 

Change Log

  • Single-thread the process that analyzes & cleans up the benchmarks, to guard against a race condition where two threads try to process it at the same time
  • Reworked how NVMe drives are detected
  • Scan all PCI root ports for controllers
  • Modified IOMMU detection
  • Handle drives rebranded with Apple
  • Drive model cleanup for ADATA, Corsair, LITEONIT
  • If the first word of the Model matches the Vendor, remove it
  • If the drive is not found in the Hard Drive Database, add a link to allow the Vendor to be modified.
  • Move benchmark processing files to Docker contained temp file to allow for auto-cleanup during docker updates
  • Detect if a drive was added to the Host after the Docker started
  • Do not submit benchmark results if the drive was sampled at intervals coarser than every 10%
  • Do not submit benchmark results if bandwidth capping was detected
  • Do not include labels for drives never benchmarked on the main page benchmark graph
  • Expand the debug file creation to include certain PCI device information to help identify controllers not yet supported
  • Don't display benchmark average if there's only one benchmark done
  • Fix text overlay style error on drive images, hopefully drives will now match the editor
  • Change the label of the "Purge Everything" button to make it more clear and add a confirmation dialog
Edited by jbartlett

[attached image]

7 hours ago, jbartlett said:

 

That's fascinating.  It does look like that but I haven't seen any controller not load-share the reads like that. It should have kept scanning fewer drives though until it gets under 90% of the max throughput detected or it runs out of drives.

 

 

Wow, I like your graph much more than mine. My array has 2 AOC-SASLP controllers with 8 and 3 HDDs respectively. I moved one drive so that there are 7 and 4 HDDs on each controller, in order to exceed both controllers' max bandwidth. The balancing problem appears on both controllers. Maybe it's a problem with the controller settings. Maybe I should try disabling INT 13h (although I never understood what that is)...

 

[attached image: benchmark graph]

 

 

 

 

Edited by papnikol
On 4/23/2018 at 6:14 PM, wgstarks said:

After updating to unraid 6.5.1 I’m getting this display in the docker tab where disk speed should be-


Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215

Also got an email with the same warning.

 

brunnhilde-diagnostics-20180423-1813.zip

Fixed in 6.5.2-rc1+


Beta 3a: Removed a debug line which had drives testing for only 5 seconds instead of the intended 15.

 

This may have resulted in slightly less accurate speed scores, but they should still be within an acceptable margin of error.

On 4/27/2018 at 11:56 PM, papnikol said:

Hi.

I ran DiskSpeed for a controller that has 8 disks attached (chart attached). The controller is an AOC-SASLP-MV8 (PCIe x4).

Do those results indicate that the 4x PCIe lanes are saturated by the 4 first drives? Or could it be something else?

 

 

[attached image: controller benchmark chart]

 

How many CPU threads do you have available to unRAID (& Docker)? As in, not pinned to any VM or the like?


1. I have no VMs.

 

2. I have already changed my MOBO/CPU/RAM to a much better configuration (but the 2 SASLP controllers remain):

MSI RD480 / AMD Athlon 64 3700+ / 3GB

->

ASRock FM2A88X Pro+ / AMD Athlon X4 840 Quad Core @ 3.1 GHz / 8GB

 

3. These are the results for one of the controllers with the new configuration:

[attached image: controller benchmark results]

 

There is more bandwidth due to the move from PCIe 1.0 to PCIe 2.x. But still, the bandwidth is not balanced across all the disks: each drive's bandwidth usage is maximized in turn until the last 2-3 drives have very little bandwidth available.
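The extra headroom lines up with the per-lane rates (spec values; back-of-envelope only — actual usable throughput will be lower after protocol overhead):

```shell
# Per-lane payload bandwidth: PCIe 1.0 = ~250 MB/s (2.5 GT/s, 8b/10b),
# PCIe 2.0 = ~500 MB/s (5.0 GT/s, 8b/10b). An x4 link therefore
# roughly doubles when the slot negotiates Gen2.
lanes=4
gen1=$((lanes * 250))   # 1000 MB/s raw
gen2=$((lanes * 500))   # 2000 MB/s raw
echo "x4 Gen1: ~${gen1} MB/s, x4 Gen2: ~${gen2} MB/s"
```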

Edited by papnikol
