jbartlett

Drive performance testing (version 2.6.5) for UNRAID 5 thru 6.4


jbartlett: thanks again for this great script. I used it over the weekend to test out performance of my drives after upgrading my lowly Atom based server to a new Xeon one.

 

I think things looked good, but my PCIe SSD cache drive was so fast I couldn't tell a difference in the chart between all my other drives. I need to re-run the script excluding the SSD cache.

 

Just curious, is there any way to integrate write speed into this script as well?

 

You can click on the label for your SSD drive to hide it on the chart.

 

A write test would be destructive. If added, the current block would have to be backed up to another location, written over for the test, and then restored from the backup. If anything goes wrong, such as an error mid-test, it would leave the disk in a damaged state.
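The risk is easier to see as a sketch. This runs the backup/overwrite/restore cycle against a throwaway scratch file — all paths and sizes here are made up for illustration, and nothing touches a real drive:

```shell
# Sketch (against a scratch file, not a real drive) of the backup/write/restore
# cycle described above. All paths and sizes are illustrative.
DISK=/tmp/fakedisk.bin
BACKUP=/tmp/block.bak

# Create a 4 MB stand-in "drive" with known random contents
dd if=/dev/urandom of="$DISK" bs=1M count=4 2>/dev/null
sum_before=$(md5sum "$DISK" | cut -d' ' -f1)

# 1. Back up the 1 MB block at offset 1 MB before touching it
dd if="$DISK" of="$BACKUP" bs=1M skip=1 count=1 2>/dev/null
# 2. The destructive write test clobbers that block
dd if=/dev/zero of="$DISK" bs=1M seek=1 count=1 conv=notrunc 2>/dev/null
# 3. Restore the original block from the backup
dd if="$BACKUP" of="$DISK" bs=1M seek=1 count=1 conv=notrunc 2>/dev/null

sum_after=$(md5sum "$DISK" | cut -d' ' -f1)
[ "$sum_before" = "$sum_after" ] && echo "block restored, disk intact"
```

If anything interrupted the script between steps 2 and 3, the original data in that block would be gone — which is exactly the danger described above.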


Sorry, a syslog won't help. If you run the program with the -l option, it'll create a log file as it runs. What version of UNRAID are you using? Also try running version 2.5 in the first post.

 

Thanks for the response and I am running 5.0.6.  I did run 2.5 initially but when I got the error I tried 2.6.1 as well.  I went back and ran 2.5 with the -l option and here is the log it created:

 

mdNumDisabled: 
mdNumInvalid: 0
mdNumMissing: 0
mdResyncDb: 
sbNumDisks: 6
mdResyncPos: 0
/tmp/inventory1.txt
==========

Disk /dev/sda: 999 MB, 999555072 bytes
255 heads, 63 sectors/track, 121 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x09d34f4f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         122      976096+   6  FAT16
Partition 1 has different physical/logical endings:
     phys=(120, 254, 63) logical=(121, 133, 12)

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
1 heads, 63 sectors/track, 62016336 cylinders
Units = cylinders of 63 * 512 = 32256 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               2    62016336  1953514552   83  Linux
Partition 1 does not end on cylinder boundary.

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
1 heads, 63 sectors/track, 62016336 cylinders
Units = cylinders of 63 * 512 = 32256 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               2    62016336  1953514552+  83  Linux
Partition 1 does not end on cylinder boundary.

Disk /dev/sdd: 5001.0 GB, 5000981078016 bytes
256 heads, 63 sectors/track, 605626 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      266306  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdf: 5001.0 GB, 5000981078016 bytes
256 heads, 63 sectors/track, 605626 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      266306  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sde: 5001.0 GB, 5000981078016 bytes
256 heads, 63 sectors/track, 605626 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      266306  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdg: 5001.0 GB, 5000981078016 bytes
256 heads, 63 sectors/track, 605626 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      266306  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
==========
Current Unraid slot:  (Disk 5) - /dev/sdb
/tmp/diskspeed.tmp
==========

/dev/sdb:

ATA device, with non-removable media
Model Number:       Hitachi HDS5C3020ALA632                 
Serial Number:      ML0221F307PX3D
Firmware Revision:  ML6OA180
Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
Used: unknown (minor revision code 0x0029) 
Supported: 8 7 6 5 
Likely used: 8
Configuration:
Logical		max	current
cylinders	16383	16383
heads		16	16
sectors/track	63	63
--
CHS current addressable sectors:   16514064
LBA    user addressable sectors:  268435455
LBA48  user addressable sectors: 3907029168
Logical  Sector size:                   512 bytes
Physical Sector size:                   512 bytes
device size with M = 1024*1024:     1907729 MBytes
device size with M = 1000*1000:     2000398 MBytes (2000 GB)
cache/buffer size  = 26129 KBytes (type=DualPortCache)
Form Factor: 3.5 inch
Nominal Media Rotation Rate: 5940
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16	Current = 16
Advanced power management level: disabled
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
     Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4 
     Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
Enabled	Supported:
   *	SMART feature set
    	Security Mode feature set
   *	Power Management feature set
   *	Write cache
   *	Look-ahead
   *	Host Protected Area feature set
   *	WRITE_BUFFER command
   *	READ_BUFFER command
   *	NOP cmd
   *	DOWNLOAD_MICROCODE
    	Advanced Power Management feature set
    	Power-Up In Standby feature set
   *	SET_FEATURES required to spinup after power up
    	SET_MAX security extension
   *	48-bit Address feature set
   *	Device Configuration Overlay feature set
   *	Mandatory FLUSH_CACHE
   *	FLUSH_CACHE_EXT
   *	SMART error logging
   *	SMART self-test
    	Media Card Pass-Through
   *	General Purpose Logging feature set
   *	WRITE_{DMA|MULTIPLE}_FUA_EXT
   *	64-bit World wide name
   *	URG for READ_STREAM[_DMA]_EXT
   *	URG for WRITE_STREAM[_DMA]_EXT
   *	WRITE_UNCORRECTABLE_EXT command
   *	{READ,WRITE}_DMA_EXT_GPL commands
   *	Segmented DOWNLOAD_MICROCODE
    	unknown 119[7]
   *	Gen1 signaling speed (1.5Gb/s)
   *	Gen2 signaling speed (3.0Gb/s)
   *	Gen3 signaling speed (6.0Gb/s)
   *	Native Command Queueing (NCQ)
   *	Host-initiated interface power management
   *	Phy event counters
   *	NCQ priority information
    	Non-Zero buffer offsets in DMA Setup FIS
   *	DMA Setup Auto-Activate optimization
    	Device-initiated interface power management
    	In-order data delivery
   *	Software settings preservation
   *	SMART Command Transport (SCT) feature set
   *	SCT Write Same (AC2)
   *	SCT Error Recovery Control (AC3)
   *	SCT Features Control (AC4)
   *	SCT Data Tables (AC5)
Security: 
Master password revision code = 65534
	supported
not	enabled
not	locked
	frozen
not	expired: security count
not	supported: enhanced erase
504min for SECURITY ERASE UNIT. 
Logical Unit WWN Device Identifier: 5000cca369c380d5
NAA		: 5
IEEE OUI	: 000cca
Unique ID	: 369c380d5
Checksum: correct
==========
Model: [Hitachi HDS5C3020ALA632]
Serial: [ML0221F307PX3D]
GB: [1863]
startpos: [0]
startposdisp: [0]
CurrPer: [0]
Performance testing /dev/sdb (Disk 5) at 0 GB (0%)
dd if=/dev/sdb of=/dev/null bs=1GB count=1 skip=0 iflag=direct
/tmp/diskspeed_results.txt
==========
dd: memory exhausted
==========
ratedspeed: []
Program complete



dd if=/dev/sdb of=/dev/null bs=1GB count=1 skip=0 iflag=direct
/tmp/diskspeed_results.txt
==========
dd: memory exhausted

 

Looks like dropping down to 1 GB of RAM was the issue. It uses dd to copy 1 GB of data from the drive to a bit bucket, but it looks like dd tries to read it into RAM first. Since the RAM is shared by UNRAID, you have less than 1 GB of memory available. You won't be able to run the script until you replace the bad stick.

 

Try the latest version again with the -f command option. Its speed checks are rougher because it reads a smaller chunk of the disk area, but it may be enough to run with lower memory. If that doesn't work, boot into safe mode and try again with -f.

 

I'll add it to my To Do list to put in a free memory check in the next version.


 


 

Thanks for the response and I will give that a try with -f.  I have more RAM and a better processor that I haven't gotten around to installing yet.  After I do that I plan on upgrading to 6.2, so I should be able to use disk speed as normal since I'll have more memory installed.



I added logic to the next version to check for enough free memory, and it'll also tell you if the -f (--fast) option is available.


Added version 2.6.2 to first post

 

Change log

Added a check to ensure there is enough free memory available to execute the dd command

Added -n | --include option to specify which drives to test, comma delimited

Ignore floppy drives

Added support for nvme drives

Misc enhancements



 

Thanks for the updated script.

 

When starting the script it tells me there is not enough free memory available to run. I have 20GB of RAM, and Linux allocates almost all of it for cache, hence the free memory value is low.

 

Instead of testing on MemFree, it is better to test on MemAvailable.
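For reference, a minimal sketch of that check, assuming the script reads /proc/meminfo — the 1 GB threshold and the messages here are illustrative, not the script's actual code:

```shell
# Sketch of the suggested check: use MemAvailable from /proc/meminfo rather
# than MemFree. (MemAvailable exists since kernel 3.14; fall back to MemFree
# on older kernels.) The 1 GB threshold is illustrative.
need_kb=$((1024 * 1024))   # 1 GB expressed in kB

avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
[ -z "$avail_kb" ] && avail_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)

if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "Not enough memory for the full test; try the -f option"
else
    echo "Memory check passed: ${avail_kb} kB available"
fi
```

MemAvailable is the kernel's own estimate of how much memory could be claimed without swapping, so it already accounts for reclaimable page cache.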

 



 

Thanks for the suggestion. Updated to 2.6.3 to implement it.


Updated script to 2.6.4 which adds support for UNRAID 6.3.0-RC9.

 

2.6.4 and on will no longer support UNRAID 5. There's no point maintaining backward compatibility with it when version 2.5 still works for it.


Finished upgrading my Seagate 4TB drives to WD Red Pro 6TB (full set) and immediately upgraded to RC9 and then ran a test - which is how I found out the script was broken in RC9.

 

Interesting that some of the drives show the same "Seagate Drop" at the start but not all.

 

I really am pleased with these drives but they do run a little hot. Plan accordingly.

[Attached image: diskspeed.png]


Fantastic tool!!!

 

Thank you for your efforts, John.

 

One thought came to mind when I was running this benchmark.  I wonder if somehow the tool could add a function to identify drives on a particular controller and test the associated disks simultaneously to show the overall throughput of a given controller/bus?  If the card and slot are not limiting factors one would expect the graph to show the exact same arc from outer to inner tracks - indicating that the drives were the only limiting factor.  But just like in the tunables and parity checks you might find a different ceiling in the combined testing.  I ran into this when I moved one of my PCIe 3.0 x8 controllers into a v2.0 slot (4-lanes) and noticed the parity check speed cut by more than a third.  I'm betting a single drive performance test like this wouldn't have shown this difference.  Might be interesting to see.  

 

Anyway, very well done, and thank you.  I was just talking to a friend last week about "wouldn't it be cool if someone had a disk benchmark tool..."  :)

8 hours ago, lbosley said:

One thought came to mind when I was running this benchmark.  I wonder if somehow the tool could add a function to identify drives on a particular controller and test the associated disks simultaneously to show the overall throughput of a given controller/bus?  If the card and slot are not limiting factors one would expect the graph to show the exact same arc from outer to inner tracks - indicating that the drives were the only limiting factor.

 

As a matter of fact, I'm working on a drive mapping tool that'll show the controllers & which drive is connected to which port. That part is mostly done. Now I'm trying to figure out the plugin system, prettying up the resulting HTML, & testing to see what happens when I create a RAID on an old Promise IDE controller.

 

The goal is to let you import your own graphics for the controllers & hard drives, plus mark which port is which on the graphic to visually show where each drive is plugged in.

 

This Drive Mapping plugin will be the foundation for the new disk speed utility, which I plan to have include heat maps of the drive indicating read speeds, broken down by platter too.

 

It was interesting to see that my motherboard had more ports on its SATA controller than physical ports plus an IDE controller with no IDE plug.


The bus speed tests will be in the next version, which will auto-detect how many drives it can test at once per controller and overall.

Edited by jbartlett


It would be interesting to see how far this drop goes.

A more granular measurement during the first 500GB maybe?

Or is it a latency of the drive (parked heads) when idling?


The "Seagate Drop" (my terminology) is an odd tendency for Seagate drives to have unusually slow speeds in the first 25GB of the drive. With drives getting larger, it's hard to test the range of it if it exists. I'm pondering the logic of a new switch that'll test for a slow start-of-drive and its range. My way of avoiding it is to put a big file onto the drive first to fill it up before any others.
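A granular start-of-drive sweep like that could be sketched as below — a hypothetical loop, not the script's actual logic, demonstrated against a scratch file. Pointing DEV at a real device (e.g. /dev/sdb) would sample the actual drive; it only reads, nothing is written to the device.

```shell
# Hypothetical sketch of such a switch: read fixed-size samples at regular
# offsets across the start of a drive and print dd's speed summary for each.
# Demonstrated against a scratch file; the sizes here are kept tiny.
DEV=/tmp/scratch.bin
dd if=/dev/zero of="$DEV" bs=1M count=64 2>/dev/null   # small stand-in "drive"

SAMPLES=4      # number of sample points
STEP_MB=16     # distance between sample points, in MB
READ_MB=8      # size of each sample read, in MB

out=""
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    # dd prints its timing/speed summary on stderr; keep just that last line
    line=$(dd if="$DEV" of=/dev/null bs=1M skip=$((i * STEP_MB)) count="$READ_MB" 2>&1 | tail -n 1)
    out="$out$((i * STEP_MB)) MB: $line
"
    i=$((i + 1))
done
printf '%s' "$out"
```

On a real Seagate drive, plotting these per-offset speeds would show exactly where (and whether) the slow start-of-drive region ends.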

Edited by jbartlett


Used the latest diskspeed.sh

Am getting this weird error:

root@MediaStore:/tmp/diskspeed# bash diskspeed.sh -i 5 -s 21 -n /dev/sda -l

diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
/dev/sda: 157 MB/sec avg
/dev/sdb: Skipped
/dev/sdc: Skipped
/dev/sdd: Skipped
/dev/sde: Skipped
/dev/sdf: Skipped
/dev/sdg: Skipped
/dev/sdh: Skipped
/dev/sdi: Skipped
/dev/sdj: Skipped (boot or flash drive)

diskspeed.sh: line 1039: /tmp/diskspeed.sda.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sda.graph2': No such file or directory
To see a graph of the drive's speeds, please browse to the current
directory and open the file diskspeed.html in your Internet Browser
application.

The same happens regardless of how many drives I'm testing.

 

I've attached the diskspeed.log and diskspeed.html

diskspeed-output.zip

Edited by ken-ji

7 hours ago, ken-ji said:

diskspeed.sh -i 5 -s 21 -n /dev/sda -l

 

Try running this without the /dev/ part: diskspeed.sh -f -s 3 -n sda -l

 

If that works without error, try your longer test, again without /dev/

diskspeed.sh -i 5 -s 21 -n sda -l

On 12/14/2016 at 7:22 PM, jbartlett said:

dd if=/dev/sdb of=/dev/null bs=1GB count=1 skip=0 iflag=direct
/tmp/diskspeed_results.txt
==========
dd: memory exhausted

Looks like dropping down to 1 GB of RAM was the issue. It uses dd to copy 1 GB of data from the drive to a bit bucket but it looks like dd tries to read it into RAM first.  ...

John, I suggest you make a slight change to your methodology. Note that the Linux kernel  splits *large* O_DIRECT (iflag=direct) requests into 512k chunks anyway. Also, in a testing environment, all i/o requests (and especially O_DIRECT requests) should be in (integer) multiples of 4k. So, if you really want your sample size to be [in dd units] 1GB (ie, 10^9) vs 1G (ie, 2^30), then, instead of the above "dd ...", maybe use:

  dd if=/dev/sdb of=/dev/null bs=64M count=15 skip=0 iflag=direct

That will result in samples of 1.006 GB (close enough?) but much lower RAM burden. Of course, you'll need to scale your "skip=N" args by 15x as you march through the drives. [My personal preference, as an old-school software guy (Unix kernel development 40+ yrs ago [v4-v6]), is to stick to the 2^N path ("count=16" above), but ...]
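The arithmetic behind that suggestion, as a quick sanity check — dd's skip= argument is measured in units of bs, which is why the offsets get rescaled by 15x when bs changes:

```shell
# 15 reads of 64 MiB vs one 1 GB (10^9 byte) read
chunk=$((64 * 1024 * 1024))     # bs=64M = 2^26 bytes
total=$((15 * chunk))           # count=15
echo "bytes per sample: $total"               # 1006632960, ~1.006 x 10^9

# A sample that used to start at byte offset N * 10^9 (skip=N with bs=1GB)
# now starts at approximately the same offset with skip=15*N and bs=64M:
N=3
echo "old: skip=$N (bs=1GB) -> new: skip=$((15 * N)) (bs=64M)"
```

The offsets drift by about 0.66% per sample versus the decimal-gigabyte positions, which is negligible for speed sampling.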

 

I've done a lot of disk testing (recently, even) and find that sample sizes of 32M-128M meet the "principle of diminishing returns". Have you experimented with this? You could achieve faster completions and/or finer granularity with negligible, if any, loss of result quality.

 

--UhClem  "Base-8 arithmetic is just like base-10 ... if you're missing two fingers." --Tom Lehrer

 


Interesting. Using multiple smaller blocks just didn't occur to me.

 

I booted up my VirtualBox dev instance of UNRAID with RAM set to 1 GB total; my 1 GB test failed as expected but using smaller blocks worked fine. My testing also shows the 2^n path with count=16 with bs=64MB works best as the resulting 1,024,000,000 bytes read is a true 1 gig (I'm a purist xD)

 

I'll make this adjustment and remove the RAM check altogether, replacing it with a check after the test to see if it failed due to memory constraints. I'll also add a "p" switch to specify a percentage to scan, since the "s" sample size is confusing.

 

 

7 hours ago, jbartlett said:

Interesting. ...

My testing also shows the 2^n path with count=16 with bs=64MB works best as the resulting 1,024,000,000 bytes read is a true 1 gig (I'm a purist xD)

 

For true papal purity, you need to use bs=64M (not MB). That will give a true 1 gig (2^30 = 1073741824.).

 

Make special note that dd reports its speed in ("fake") MB/sec (10^6), and you'll want to divide that by 1.048576 (to "pass through the gates").  diskspeed.sh users will have to accept that (superficially) their numerical results will drop by ~5%; maybe change the speed units from MB/sec to MiB/sec.  [Ask yourself this: Have you produced (and are your users using) a technical tool or a marketing tool?]
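The conversion in question, as a one-liner sketch (the 157 MB/sec figure is just an example value taken from this thread):

```shell
# dd reports decimal MB/sec (10^6 bytes); dividing by 1.048576 (= 2^20 / 10^6)
# converts to binary MiB/sec. The input figure is an example value.
mb_per_sec=157
mib_per_sec=$(awk -v v="$mb_per_sec" 'BEGIN { printf "%.1f", v / 1.048576 }')
echo "$mb_per_sec MB/sec = $mib_per_sec MiB/sec"
```

That's the ~5% drop mentioned above: the same measured throughput, just reported in honest binary units.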

 

-- UhClem

 


I'm okay with using the same units that dd reports in.

 

4 hours ago, UhClem said:

Ask yourself this: Have you produced (and are your users using) a technical tool or a marketing tool?

 

I'm not quite sure I understand your question. This tool came out of people wondering why their parity speeds were tanking at certain spots and me wondering. So in that sense, it's purely a technical diagnostic tool.


I tested "1GB" and "1G" and both fell in roughly the same ballpark: 1 GB averaged 132.75 MB/s, 1 G averaged 133.75 MB/s, with 4 passes at the start of the drive each. Negligible difference.

On 3/10/2017 at 5:08 PM, jbartlett said:

 

Try running this without the /dev/ part: diskspeed.sh -f -s 3 -n sda -l

 

If that works without error, try your longer test, again without /dev/

diskspeed.sh -i 5 -s 21 -n sda -l

 

Well...

root@MediaStore:/tmp/diskspeed# bash diskspeed.sh -f -l -n sda

diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
/dev/sda: 157 MB/sec avg
/dev/sdb: Skipped
/dev/sdc: Skipped
/dev/sdd: Skipped
/dev/sde: Skipped
/dev/sdf: Skipped
/dev/sdg: Skipped
/dev/sdh: Skipped
/dev/sdi: Skipped
/dev/sdj: Skipped (boot or flash drive)

diskspeed.sh: line 1039: /tmp/diskspeed.sda.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sda.graph2': No such file or directory
To see a graph of the drive's speeds, please browse to the current
directory and open the file diskspeed.html in your Internet Browser
application.

and here's the kicker for you

root@MediaStore:/tmp/diskspeed# bash diskspeed.sh -f -l

diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
/dev/sda: 158 MB/sec avg
/dev/sdb: 131 MB/sec avg
/dev/sdc: 128 MB/sec avg
/dev/sdd: 116 MB/sec avg
/dev/sde: 117 MB/sec avg
/dev/sdf: 124 MB/sec avg
/dev/sdg: 115 MB/sec avg
/dev/sdh: 115 MB/sec avg
/dev/sdi: 159 MB/sec avg
/dev/sdj: Skipped (boot or flash drive)

diskspeed.sh: line 1039: /tmp/diskspeed.sda.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sda.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sdb.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sdb.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sdc.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sdc.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sdd.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sdd.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sde.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sde.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sdf.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sdf.graph2': No such file or directory
diskspeed.sh: line 1039: /tmp/diskspeed.sdi.graph2: No such file or directory
rm: cannot remove '/tmp/diskspeed.sdi.graph2': No such file or directory
To see a graph of the drive's speeds, please browse to the current
directory and open the file diskspeed.html in your Internet Browser
application.

 

