DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7


Recommended Posts

2 hours ago, TheWoodser said:

This is a fresh install.  Does it matter that this address is NOT in the subnet of the IP of the UnRAID server?

[screenshot of the Docker port mappings]

The first IP is the Docker's IP and it's fine. The second set, after the arrow, should be the IP of your unRAID box plus :18888.

 

Mine is "172.17.0.2:8888/TCP <--> 192.168.1.7:18888". I also have Network Type set to "Bridge", Console Shell Command set to "Shell", and Privileged set to ON.

 

Try this: edit the Docker and change the name to something other than "DiskSpeed" so the locally cached version gets a different name, then delete the Docker. Reinstall DiskSpeed via the AppStore or from the XML file in the 1st post of this thread.

 

If nothing works, you may want to post in the 6.8.1 general support thread or the 6.9.0 beta thread (depending on which version of unRAID you're running) that the Web UI link isn't being displayed.

 

-John

Link to comment
2 minutes ago, jbartlett said:

The first IP is the Docker's IP and it's fine. The second set, after the arrow, should be the IP of your unRAID box plus :18888.

 

Mine is "172.17.0.2:8888/TCP <--> 192.168.1.7:18888". I also have Network Type set to "Bridge", Console Shell Command set to "Shell", and Privileged set to ON.

 

Try this: edit the Docker and change the name to something other than "DiskSpeed" so the locally cached version gets a different name, then delete the Docker. Reinstall DiskSpeed via the AppStore or from the XML file in the 1st post of this thread.

 

If nothing works, you may want to post in the 6.8.1 general support thread or the 6.9.0 beta thread (depending on which version of unRAID you're running) that the Web UI link isn't being displayed.

 

-John

John, thanks for the help. I think there is something wonky with my network settings. I opened a new thread; no Docker I install will give me a WebUI option.

 

Woodser

Link to comment

Feature in progress - file fragmentation & allocation map.

One thing I've learned is that, for some reason, the underlying OS likes to split up files all over the drive. This is from a drive on my backup NAS; files are just copied there, but I see that large files are broken up all over the drive in most cases.

 

Black = Allocated, non fragmented
Red = Allocated, fragmented

White = Unallocated

 

[file allocation map image]

Edited by jbartlett
  • Like 1
Link to comment

I just noticed a trend in the file fragmentation: the OS seems to break files up into chunks of a set size. I found this really strange because, well, why do it at all? Noticing the recurring length of 32768, I did some research, and some file systems (like ext4) have a maximum number of blocks that can fit in one extent (for ext4 that's 32,768 blocks, or 128 MiB at 4 KiB per block), so the OS has to create a new extent to continue, creating a forced fragmentation chain by design. But it's interesting that some extents break this barrier (this drive is btrfs), typically the last two extents, but not always.

 

I had designed my app to display fragments in red, but I think I'll just default everything to black, with the option of displaying fragmented files in red after the scan.

File size of 10,000 BC.m4v is 2995331020 (731282 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   32767:     793856..    826623:  32768:
   1:    32768..   65535:     848352..    881119:  32768:     826624:
   2:    65536..   98303:     913888..    946655:  32768:     881120:
   3:    98304..  131071:     979424..   1012191:  32768:     946656:
   4:   131072..  163839:    1056000..   1088767:  32768:    1012192:
   5:   163840..  196607:    1121536..   1154303:  32768:    1088768:
   6:   196608..  327679:    1165152..   1296223: 131072:    1154304:
   7:   327680..  731281:    1318144..   1721745: 403602:    1296224: last,eof
10,000 BC.m4v: 8 extents found
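
If you want to spot-check your own drives the same way, a quick loop over the filefrag output gives a rough per-file extent count. This is only a sketch; the mount point and size threshold are placeholders to adjust for your setup.

# Flag large files that have more than one extent (i.e. are fragmented)
find /mnt/disk1 -type f -size +100M -print0 | while IFS= read -r -d '' f; do
    extents=$(filefrag "$f" | awk '{print $(NF-2)}')
    [ "$extents" -gt 1 ] && echo "$extents extents: $f"
done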

Also adding a "Directory Hog" feature to show you where your drive space is going.

  • Like 2
Link to comment
  • 4 weeks later...

Thanks, @jbartlett, for the WIP updates!
I've been using your excellent docker for some time now and have a couple of questions:

  1. Would you expect these speed tests to show up any difference in file system performance?  I'm trying to see if I can measure a quantifiable difference between my XFS and BTRFS formatted drives (the graphs don't show anything obvious).
  2. Any possibility or plans to measure write (not just read) speeds?

Thanks again, and happy holidays!

Link to comment

1. I don't think there would be any difference in speed, regardless of file system, when reading existing files. For the creation & deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable: tree depth, the number of files in a directory, even the drive utilization percentage all play a part. I haven't thought about trying to benchmark that; I'm not sure the results would be valuable enough to warrant it.

 

2. I've thought about it, but I don't think many people would truly run write tests against their data, even if the write is just writing what was already there back onto the drive. If I were to implement it, it might be only for unpartitioned drives.

Link to comment
8 hours ago, jbartlett said:

1. I don't think there would be any difference in speed, regardless of file system, when reading existing files. For the creation & deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable: tree depth, the number of files in a directory, even the drive utilization percentage all play a part. I haven't thought about trying to benchmark that; I'm not sure the results would be valuable enough to warrant it.

Thanks for your thoughts on this.  I've tried, somewhat naively, to time a file copy from /mnt/diskX to /mnt/diskY (and /mnt/diskZ) and compare the results.  They weren't particularly illuminating (nor consistent).  If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control.  Is my assumption correct?

Link to comment
On 12/29/2020 at 5:51 AM, servidude said:

Thanks for your thoughts on this.  I've tried, somewhat naively, to time a file copy from /mnt/diskX to /mnt/diskY (and /mnt/diskZ) and compare the results.  They weren't particularly illuminating (nor consistent).  If I understand this correctly, the write speed would be affected by where on the drive's platter the data is being deposited, which is usually something beyond the user's control.  Is my assumption correct?

You are correct. I had an empty XFS volume that I copied large files to, and when I created a file allocation map, they were NOT located at the start of the drive but were spread out in three general areas across the drive.
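
For timing copies like the one described above, one thing that helps repeatability is bypassing the page cache so repeat runs are comparable, even though the on-platter placement itself can't be controlled. A rough sketch only; the destination paths, test size, and file name are placeholders:

# Time a 4 GiB write to each destination disk with the page cache bypassed
for d in /mnt/diskY /mnt/diskZ; do
    echo "$d"
    dd if=/dev/zero of="$d/dd-test.bin" bs=1M count=4096 oflag=direct 2>&1 | tail -n 1
    rm "$d/dd-test.bin"
done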

  • Like 1
  • Thanks 1
Link to comment

I'm relatively new to unRAID and am learning how to improve performance. This utility is awesome!

 

My server is cobbled together from old parts and I'm replacing / reassigning hardware as I can.

 

When running a speed benchmark, is there any reason the parity drive's output would be expected to look different from the smooth curve of the others? The array is running, but there shouldn't be any data transfers or parity checks ongoing at this time.

 

If the answer is no, then I guess I just have a really wonky drive?

Capture.PNG

Link to comment

Check the SMART report to see if there are pending or reallocated sectors; those could explain slow spots, because the drive is having to attempt multiple reads of a sector. You can force a check for bad sectors by performing a preclear on it with no pre- or post-reads. Note that this is only if you intend to use the drive as long-term, no-update, no-risk storage. I had a drive with similar slow spots that developed over 20k pending sectors after a preclear.
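
For reference, the relevant counters can be pulled from the command line; a quick sketch, with /dev/sdX as a placeholder for the drive in question:

# Show reallocated/pending/uncorrectable sector counts for the drive
smartctl -A /dev/sdX | grep -Ei 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'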

Link to comment
On 1/4/2021 at 5:50 AM, jbartlett said:

That's a trend across pretty much every SSD and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.

That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.
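
(As an aside, whether a given device accepts discard/TRIM commands at all can be checked with lsblk; a small sketch, with the device name as a placeholder:)

# Non-zero DISC-GRAN / DISC-MAX columns mean the device supports discard (TRIM)
lsblk --discard /dev/sdX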

 

Link to comment
That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.
 
Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?
Link to comment
26 minutes ago, Fireball3 said:
11 hours ago, LammeN3rd said:
That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the unRAID parity array cannot be used with TRIM, since that would invalidate the parity.
 

Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?

Yes, the controller or interface is probably the bottleneck.

Link to comment

You could have a look at the used space at a drive level, but that's not that easy when drives are used in a BTRFS RAID other than two drives in RAID 1.

NVMe drives usually report namespace utilization, so looking at that number and testing only up to the Namespace 1 utilization would do the trick.

 

This is the graph from one of my NVMe drives:

[NVMe drive benchmark graph]

 

And this is the used space (274 GB):

[used space screenshot]

Link to comment

To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; this would bypass the issue on all but completely empty SSDs.

And SSDs don't have any speed difference across different positions of used flash when doing a 100% read speed test. For a spinning disk that positioning makes total sense, but from a flash perspective a read workload performs the same as long as there is data there.

 

Link to comment
docker exec -it DiskSpeed bash

root@af468d0f3720:/usr/local/tomcat# nvme id-ns /dev/nvme0n1
NVME Identify Namespace 1:
nsze    : 0x3a386030
ncap    : 0x3a386030
nuse    : 0x10facc48

nsze: Namespace Size, the total number of LBAs in the namespace

ncap: Namespace Capacity, the maximum number of LBAs that may be allocated

nuse: Namespace Utilization, the number of LBAs currently allocated to the namespace

 

It looks like a dd read on the device, starting at the beginning and not exceeding "nuse", would return actual data.

If nuse is under a given duration/size threshold, a benchmark cannot be done.

Alternatively, if a file is found in excess of a given size that has no unwritten extents reported by "filefrag -e <fn>", it can be read for a given number of seconds.
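
Something along these lines is what the first option would look like. A rough sketch only; it assumes a 512-byte LBA format, which should be confirmed from the flbas/lbaf fields of nvme id-ns:

# Benchmark sequential reads only over the in-use LBA range of the namespace
DEV=/dev/nvme0n1
nuse=$(nvme id-ns "$DEV" | awk '/^nuse/ {print $3}')
bytes=$(( nuse * 512 ))
dd if="$DEV" of=/dev/null bs=1M iflag=direct count=$(( bytes / 1048576 ))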

 

Thoughts?

Link to comment
  • jbartlett changed the title to DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
