DiskSpeed, hard drive benchmarking (unRAID 6+), version 2.8.1



22 hours ago, jbartlett said:

View the advanced settings and check that the "WebUI" setting has "http://[IP]:[PORT:18888]/". If it does, it sounds like a bug not related to my Docker.

This is a fresh install.  Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?

[screenshot: Docker WebUI address settings]

2 hours ago, TheWoodser said:

This is a fresh install.  Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?

[screenshot: Docker WebUI address settings]

The first IP is the IP of the Docker and it's fine. The 2nd set after the arrow should have the IP of your unraid box plus :18888.

 

Mine is "172.17.0.2:8888/TCP <--> 192.168.1.7:18888". I also have Network Type set to "Bridge", Console Shell Command set to "Shell", and Privileged set to ON.

 

Try this: Edit the Docker and change the name to something other than "DiskSpeed" to change the local cached version's name and then delete the docker. Install DiskSpeed via the AppStore or from the XML file on the 1st post on this thread.

 

If nothing works, you may want to post in the 6.8.1 general support thread or the 6.9.0 beta thread (depending on which version of unRAID you're running) that the Web UI link isn't being displayed.

 

-John

2 minutes ago, jbartlett said:

The first IP is the IP of the Docker and it's fine. The 2nd set after the arrow should have the IP of your unraid box plus :18888.

 

Mine is "172.17.0.2:8888/TCP <--> 192.168.1.7:18888". I also have Network Type set to "Bridge", Console Shell Command set to "Shell", and Privileged set to ON.

 

Try this: Edit the Docker and change the name to something other than "DiskSpeed" to change the local cached version's name and then delete the docker. Install DiskSpeed via the AppStore or from the XML file on the 1st post on this thread.

 

If nothing works, you may want to post in the 6.8.1 general support thread or the 6.9.0 beta thread (depending on which version of unRAID you're running) that the Web UI link isn't being displayed.

 

-John

John, thanks for the help. I think there is something wonky with my network settings. I opened a new thread... no Docker I install will give me a WebUI option.

 

Woodser


Feature in progress - file fragmentation & allocation map.

One thing I've learned is that, for some reason, the underlying OS likes to split up files all over the drive. This is from a drive on my backup NAS; files are just copied there, but I see that large files are broken up all over the drive in most cases.

 

Black = Allocated, non fragmented
Red = Allocated, fragmented

White = Unallocated

 

[file allocation map image]


I just noticed a trend in the file fragmentation. It seems the OS has a tendency to break the file up into chunks of a set size. I found this really strange because, well, why do it at all? Noticing the recurring length of 32768, I did some research: some file systems (like ext4) have a maximum number of blocks that can fit in one extent, and then the OS has to create a new extent to continue, thus creating a forced fragmentation chain by design. But it's interesting that some extents break this barrier (btrfs in this case), typically the last two extents but not always.

 

I had designed my app to display fragments in red, but I think I'll just default everything to black, with the option of displaying fragmented files in red after the scan.

File size of 10,000 BC.m4v is 2995331020 (731282 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   32767:     793856..    826623:  32768:
   1:    32768..   65535:     848352..    881119:  32768:     826624:
   2:    65536..   98303:     913888..    946655:  32768:     881120:
   3:    98304..  131071:     979424..   1012191:  32768:     946656:
   4:   131072..  163839:    1056000..   1088767:  32768:    1012192:
   5:   163840..  196607:    1121536..   1154303:  32768:    1088768:
   6:   196608..  327679:    1165152..   1296223: 131072:    1154304:
   7:   327680..  731281:    1318144..   1721745: 403602:    1296224: last,eof
10,000 BC.m4v: 8 extents found
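The dump above is the output of `filefrag -v`. As a rough illustration (my own sketch, not DiskSpeed code), the extent rows can be parsed with a regular expression to pull out each extent's length in blocks, which makes the 32768-block pattern easy to spot:

```python
import re

# Matches filefrag -v extent rows, e.g.
#   "   1:    32768..   65535:     848352..    881119:  32768:     826624:"
EXTENT_RE = re.compile(
    r"\s*\d+:\s*(\d+)\.\.\s*(\d+):\s*(\d+)\.\.\s*(\d+):\s*(\d+):"
)

def extent_lengths(filefrag_output: str) -> list[int]:
    """Return the length (in blocks) of each extent in `filefrag -v` output."""
    return [int(m.group(5))
            for line in filefrag_output.splitlines()
            if (m := EXTENT_RE.match(line))]
```

Run against the dump above, this yields eight lengths, six of them exactly 32768 blocks — the forced-split pattern described in the post.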

Also adding a "Directory Hog" feature to show you where your drive space is going.

  • 4 weeks later...

Thanks, @jbartlett, for the WIP updates!
I've been using your excellent docker for some time now and have a couple of questions:

  1. Would you expect these speed tests to show up any difference in file system performance?  I'm trying to see if I can measure a quantifiable difference between my XFS and BTRFS formatted drives (the graphs don't show anything obvious).
  2. Any possibility or plans to measure write (not just read) speeds?

Thanks again, and happy holidays!


1. I don't think there would be any difference in speed, regardless of file system, when reading existing files. For the creation & deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable, depending on things like tree depth, number of files in a directory, even drive utilization percentage. I haven't thought about trying to benchmark that; I'm not sure there's enough value in the results to warrant it.

 

2. I've thought about it, but I don't think many people would truly want to run such a test against their data, even if the write just puts back what was already there. If I were to implement it, it might be only on an unpartitioned drive.

8 hours ago, jbartlett said:

1. I don't think there would be any difference in speed, regardless of file system, when reading existing files. For the creation & deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable, depending on things like tree depth, number of files in a directory, even drive utilization percentage. I haven't thought about trying to benchmark that; I'm not sure there's enough value in the results to warrant it.

Thanks for your thoughts on this.  I've tried, somewhat naively, to time a file copy from /mnt/diskX to /mnt/diskY (and /mnt/diskZ) and compare the results.  They weren't particularly illuminating (nor consistent).  If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control.  Is my assumption correct?

On 12/29/2020 at 5:51 AM, servidude said:

Thanks for your thoughts on this.  I've tried, somewhat naively, to time a file copy from /mnt/diskX to /mnt/diskY (and /mnt/diskZ) and compare the results.  They weren't particularly illuminating (nor consistent).  If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control.  Is my assumption correct?

You are correct. I had an empty XFS volume that I copied large files to and when I created a file allocation map, they were NOT located at the start of the drive but spread out in three general areas across the drive.


I'm relatively new to unRAID and learning how to improve performance now. This utility is awesome!

 

My server is cobbled together from old parts and I'm replacing / reassigning hardware as I can.

 

When running a speed benchmark, is there any reason the parity drive output would be expected to look different from the smooth curve of the others? The array is running, but there shouldn't be any data transfer or parity checks ongoing at this time.

 

If the answer is no, then I guess I just have a really wonky drive?

[benchmark graph screenshot]


Check the SMART report to see if there are pending or reallocated sectors; those could explain slow spots, because the drive is retrying reads of a sector. You can force a check for bad sectors by performing a preclear on it with no pre or post reads. Note that this is only appropriate if you intend to use the drive as long-term, no-update, no-risk storage. I had a drive with similar slow spots that developed over 20k pending sectors after a preclear.
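If you'd rather script that check, here's a minimal sketch of mine (not part of DiskSpeed) that pulls the sector-health attributes out of `smartctl -A` output; it assumes the standard ATA attribute table layout:

```python
# SMART attributes that flag failing sectors (standard ATA attribute table names)
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def suspect_sectors(smart_output: str) -> dict:
    """Return the raw value of each watched attribute found in `smartctl -A` output."""
    found = {}
    for line in smart_output.splitlines():
        parts = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in WATCH:
            found[parts[1]] = int(parts[9])
    return found
```

Feed it the output of `smartctl -A /dev/sdX`; any non-zero value for the watched attributes is worth investigating before trusting the drive.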


Out of curiosity, I'm testing my 2-SSD cache pool. They are a Crucial MX500 & a SanDisk, both SATA III. Do you know why the speed isn't constant at the start? Is it expected behaviour? Thank you!

 

[SSD benchmark graph]

 

I also tested both of my HDDs (WD CMR and Seagate SMR); isn't the speed a bit low?

 

[HDD benchmark graph]

 

EDIT: according to your database, it's okay for my 2 HDDs.

On 1/4/2021 at 5:50 AM, jbartlett said:

That's a trend across pretty much every SSD and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.

That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.

 

11 hours ago, LammeN3rd said:

That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.

Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?
Link to post
26 minutes ago, Fireball3 said:
11 hours ago, LammeN3rd said:
That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.

Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?

Yes, the controller or interface is probably the bottleneck.


So the correct logic would be to take the lowest speed as the maximum read speed? There will be interface issues on some systems, so an average of the lowest speeds across every reported SSD of the same make/model/revision would be more representative of the whole.


You could have a look at the used space at the drive level, but that's not so easy when drives are used in a BTRFS RAID other than 2 drives in RAID1.

NVMe drives usually report namespace utilisation, so looking at that number and testing only the utilised part of Namespace 1 would do the trick.

 

this is the graph from one of my NVMe drives:

[NVMe benchmark graph]

 

and this is the used space (274GB):

[used space screenshot]


To be honest, I don't think it makes much sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs.

And SSDs don't have any speed difference for different positions of used flash when doing a 100% read speed test. For a spinning disk, position makes total sense, but from a flash perspective a read workload performs the same as long as there is data there.
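A crude version of that idea (a sketch of mine, not how DiskSpeed does it) is to time a sequential read of just the first 10% of the device. Note that on a real device you'd want to bypass the OS page cache (e.g. open with O_DIRECT), which this simplified version doesn't do:

```python
import os
import time

def read_speed_mbps(path: str, fraction: float = 0.10, chunk: int = 1 << 20) -> float:
    """Time a sequential read of the first `fraction` of a file; return MB/s."""
    size = os.path.getsize(path)  # for a block device, get the size via lseek(SEEK_END) instead
    to_read = max(chunk, int(size * fraction))
    read = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while read < to_read:
            data = f.read(min(chunk, to_read - read))
            if not data:  # hit EOF early
                break
            read += len(data)
    elapsed = time.perf_counter() - start
    return read / max(elapsed, 1e-9) / 1e6
```

Because the page cache isn't bypassed here, a second run on the same file will report cached (inflated) speeds; a real benchmark would use direct I/O and drop caches between runs.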

 


Is it true that solid state drives are always filled from the front according to those graphs posted? Will empty space always be shifted to the right side of the graph/drive, or will it be more like a sawtooth pattern on a well run-in drive where data may have been deleted in random areas of the drive?
