Drive performance testing (version 2.6.5) for UNRAID 5 thru 6.4



I've got the code ported to Docker and reworked a lot of the controller & drive detection & optimization as my knowledge of such things increased. The foundation is set for finally adding the drive benchmarking now that the controller & drive optimization is done. I'm coding it to support testing multiple drives on the same controller at the same time, after first testing how many drives the controller can actually handle simultaneously (without exceeding its bandwidth, etc.). So if you have two controllers with 4 drives attached to each, and bandwidth is not maxed out on either controller or the system bus, you'll be able to run a benchmark on all 8 drives at once. If the controller bandwidth is maxed out reading all drives at once (such as when it's loaded up with SSDs), it'll test no more than x drives at a time, picking up the remaining drives as the others finish.

 

All done via an auto-generated bash script. That's gonna be fun to develop. haha
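The parallel-testing idea above can be sketched in bash. This is NOT the actual diskspeed.sh code — real devices (/dev/sdX) are replaced with temp files so the sketch runs anywhere, and MAX_PER_CTRL stands in for the measured per-controller drive limit:

```shell
#!/bin/bash
# Sketch: benchmark several "drives" in parallel, capped at MAX_PER_CTRL
# concurrent reads. Temp files stand in for /dev/sdX devices.
MAX_PER_CTRL=2
TMPDIR=$(mktemp -d)
RESULTS="$TMPDIR/results.txt"
for i in 1 2 3 4; do
    dd if=/dev/zero of="$TMPDIR/drive$i" bs=1M count=4 status=none
done

bench() {
    # One sequential read pass; against a real /dev/sdX you would add
    # iflag=direct to bypass the page cache.
    dd if="$1" of=/dev/null bs=1M status=none && echo "$1" >> "$RESULTS"
}

for f in "$TMPDIR"/drive[1-4]; do
    bench "$f" &
    # Cap concurrency: block until a job exits before launching another.
    # (wait -n requires bash 4.3+)
    while (( $(jobs -rp | wc -l) >= MAX_PER_CTRL )); do wait -n; done
done
wait
NBENCHED=$(wc -l < "$RESULTS")
echo "benchmarked $NBENCHED drives"
rm -rf "$TMPDIR"
```

The real script would additionally group drives by controller and derive each controller's cap from a saturation test, but the job-count throttle is the core of it.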

 

So what I need now is people to do a sanity alpha test of the controller & drive detection and testing before I go public alpha. Send me a PM if you are interested.

  • 2 weeks later...
10 minutes ago, johnnie.black said:

WTF? How is this possible? This is a WD 2TB blue mobile 2.5", and it's not the plugin as I confirmed the results with other utils.

 

[Screenshot: WD20SPZX benchmark graph]

Very nice transfer speed. But 2.5" drives can use higher data density because the platters are smaller, which allows tighter mechanical tolerances, less vibration, etc. At the same time, a 3.5" drive has its heads optimized for the middle part of the media, so it needs to reduce the data density on the outside tracks or the bit rate (frequency) becomes too high for what the drive heads are designed for.

 

That's also why they killed off the 5.25" drives - they couldn't keep up with the data densities of the 3.5" drives.

 

The main issue with 2.5" drives is that they are so thin that they can't have as many surfaces as 3.5" drives - one of the reasons why lots of 2.5" USB drives use extra-thick drives.

 

Anyway - it will not take too many years before the 2.5" drives leave the 3.5" drives in the dust. The best 2.5" drives have been 5TB for about two years now, so it's soon time for a new high score. Right now they are at about half the capacity of the best 3.5" drives.

10 minutes ago, pwm said:

Very nice transfer speed. But 2.5" drives can use higher data density because the platters are smaller, which allows tighter mechanical tolerances, less vibration, etc. At the same time, a 3.5" drive has its heads optimized for the middle part of the media, so it needs to reduce the data density on the outside tracks or the bit rate (frequency) becomes too high for what the drive heads are designed for.

 

Yeah, but I still don't get how it's possible for the speed to remain constant as it goes through the inner cylinders. I believe this disk uses SMR, so there's a faster PMR zone, but all the SMR disks I've tested before act like normal disks, i.e. if the transfer speed starts at 200MB/s it should end at around 100MB/s, not stay constant. I need to run more tests.

Just now, johnnie.black said:

 

Yeah, but I still don't get how it's possible for the speed to remain constant as it goes through the inner cylinders. I believe this disk uses SMR, so there's a faster PMR zone, but all the SMR disks I've tested before act like normal disks, i.e. if the transfer speed starts at 200MB/s it should end at around 100MB/s, not stay constant. I need to run more tests.

Yes, the inner tracks should drop off.

 

I tried to find any benchmarks but failed to get any matches on WD20SPZX, WD20NPVZ and WD15NPVZ. Just sales talk or references to much older WD drives. And WD only shows the interface bandwidth (6Gbit/s) in their datasheet.

 

Maybe the SMR means no data is actually stored on the inner tracks: because you haven't filled the drive, whatever address you specify for your transfers ends up mapped to an outer track?

1 minute ago, pwm said:

Maybe the SMR means no data is actually stored on the inner tracks: because you haven't filled the drive, whatever address you specify for your transfers ends up mapped to an outer track?

That's what I was thinking, since the disk has never been written to.

Just now, johnnie.black said:

That's what I was thinking, since the disk has never been written to.

I'm interested in the outcome of this. It would mean we can't easily plan where to locate our critical data to optimize for bandwidth. If they use mapping logic, then it isn't even certain that we could create four 500 GB partitions to force fast and slow regions.

 

Or maybe the unshingled region is large enough that the drive hasn't spilled over to the main storage region yet (while WD might have decided to take a bit of advantage of their unshingled region to "unintentionally" boost benchmarks).


Interesting. Looks like it's using smaller platters suitable for a 1.8" drive. Similar to how lots of high-end 3.5" drives have used 2.5" platters for a long time now.

 

Maybe my guess that it isn't too far until 2.5" drives will replace the 3.5" drives was wrong. Maybe they jump all the way to 1.8" drives. :ph34r:


Another run after filling the disk up; now it looks more in line with what I was expecting, though there's still a weird uptick at the end:

 

[Screenshot 2018-03-17 07:37:34: benchmark graph after filling the disk]

 

WD is doing some kind of weird sector mapping, and it's not just because of SMR, since Seagate Archive drives behave like normal disks in this test from new, even before they are written to.
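The kind of spot-check being discussed can be done by sampling read speed at a few offsets across the device. A minimal sketch (DEV would be /dev/sdX in practice; a temp file is used here so it runs anywhere, though a file on cache won't show a mechanical speed gradient):

```shell
#!/bin/bash
# Sample sequential read speed at several positions across a device to
# see whether the usual outer-to-inner speed falloff is present.
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=1M count=64 status=none

SIZE=$(stat -c %s "$DEV")          # total size in bytes
CHUNK=$((8 * 1024 * 1024))         # sample 8 MiB at each position
SAMPLES=0
for pct in 0 25 50 75; do
    off=$(( SIZE * pct / 100 / CHUNK ))   # offset in CHUNK-sized blocks
    t0=$(date +%s%N)
    # For a real block device, add iflag=direct to defeat the page cache.
    dd if="$DEV" of=/dev/null bs=$CHUNK count=1 skip=$off status=none
    t1=$(date +%s%N)
    ms=$(( (t1 - t0) / 1000000 + 1 ))     # elapsed milliseconds (>= 1)
    echo "$pct%: $(( CHUNK / 1048576 * 1000 / ms )) MB/s"
    SAMPLES=$((SAMPLES + 1))
done
rm -f "$DEV"
```

On a conventional drive the 75% sample should be noticeably slower than the 0% sample; a flat curve like the WD20SPZX's would suggest remapping or a media cache.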


I'm really under the weather, so maybe I am missing something, but I cannot get diskspeed_2.6.5 to do anything correctly under unRAID 6.4.  I want to upgrade to 6.5, but I really want to check the speeds of all my disks first because I think one is slowing down my parity check.  My hardware config is in my signature.  All drives are spun up, but when I type diskspeed.sh or diskspeed.sh -s 11 -i 1 this is the output I get:

 

Quote

root@Tower:/boot# diskspeed.sh -s 11 -i 1

diskspeed.sh for UNRAID, version 2.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

cp: error reading '/proc/mdcmd': Input/output error
awk: cmd. line:2: BEGIN{printf("%0.0f",7716864
awk: cmd. line:2:                             ^ unexpected newline or end of string
awk: cmd. line:1: BEGIN{printf("%0.0f", / 4)}
awk: cmd. line:1:                        ^ unterminated regexp
awk: cmd. line:2: BEGIN{printf("%0.0f", / 4)}
awk: cmd. line:2:                            ^ unexpected newline or end of string
shuf: invalid input range: ââ
[the same awk/shuf error block repeats several more times, with 4000753426432 in place of 7716864 for the larger drives]

./diskspeed.sh: line 397: / 1024: syntax error: operand expected (error token is "/ 1024")
All drives were excluded, nothing to report.
root@Tower:/boot#

What stupid thing am I doing wrong?
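As an aside, the awk errors above are the classic symptom of an empty (or newline-containing) shell variable being interpolated into an inline awk program: once the `/proc/mdcmd` read failed, a size variable ended up empty, so awk saw `printf("%0.0f", / 4)` and parsed `/ 4)}` as an unterminated regexp. An illustration (not the actual diskspeed.sh code):

```shell
#!/bin/bash
# Why an empty interpolated variable produces "unterminated regexp".
size=""   # stands in for a value lost when /proc/mdcmd could not be read

# Broken: with $size empty, awk's program text becomes
# 'BEGIN{printf("%0.0f",  / 4)}' and '/ 4)}' starts a regexp.
# (A value with an embedded newline truncates the program the same way.)
broken=$(awk "BEGIN{printf(\"%0.0f\", $size / 4)}" 2>&1)

# Safer: pass the value with -v and default it so awk always gets a number.
safe=$(awk -v n="${size:-0}" 'BEGIN{printf("%0.0f", n / 4)}')
echo "broken: $broken"
echo "safe:   $safe"
```

The fix here, of course, was the updated script that reads the drive info correctly on 6.4, but the guard pattern is generally useful.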

 

Also John, do you think you will need to make any changes for 6.5?  It seems that it may not need any updates.

 

Thanks for the hard work.  I've been using your script for several years and it is very helpful.

 

Best regards,

craigr

1 hour ago, johnnie.black said:

Script currently only works on v6.3.5 or older.

 

On 2/3/2018 at 3:39 PM, bonienl said:

I've made a small modification which allows your script to run on unRAID 6.4. Perhaps useful?

 

diskspeed.sh

 

Ah yes, thank you.  Grabbed the modified script from bonienl's post... working now!

 

craigr

2 hours ago, jbartlett said:

Bonienl’s modification is in the first post as well.

If you're talking about the link for 2.6.5 in the first post of this thread, that is the one that does not work for me.  bonienl's link to his edited script does work.  So I think there is either something wrong with your script link, or maybe something with PeaZip that doesn't extract properly... I just started trying PeaZip, so it's new to me.  Either way, I got it running with bonienl's unzipped link.

 

Again, thanks John for maintaining this script.  It just helped me again and proved all my drives of the same model are running at about the same speed so I will stop worrying ;-)

 

Kind regards,

craigr


Ran a controller test and got this:

 

Quote
SiI 3124 PCI-X Serial ATA Controller


Silicon Image, Inc.
RAID bus controller

 

 

[Bandwidth Utilization chart: drives sdb and sdc, 1- and 2-drive tests, scale 0-100 MB/s]
 
 
 
 
Test Status:

37.61 GB read reading 2 drives simultaneously.
31.54 GB read reading 1 drives simultaneously, bandwidth was 100% of max data rate.
0 bytes read reading 0 drives simultaneously, bandwidth is potentially optimized at 0% of max data rate.

Reading -1 drives on the controller for human eyeball evaluation
Lucee 5.2.5.20 Error (expression)
Message invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [0]
pattern listgetat(list:string, position:number, [delimiters:string, [includeEmptyFields:boolean]]):string
Stacktrace The Error Occurred in
/var/www/TestControllerBandwidth.cfm: line 284
282: </CFLOOP>
283: <CFLOOP index="i" from="#DriveCount+1#" to="#ListLen(DriveList)#">
284: <CFSET DriveID=ListGetAt(DriveList,i)>
285: <CFSET Speed[DriveID]=ListAppend(Speed[DriveID],"null")>
286: </CFLOOP>
 
Java Stacktrace lucee.runtime.exp.FunctionException: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [0]
  at lucee.runtime.functions.list.ListGetAt.call(ListGetAt.java:46)
  at lucee.runtime.functions.list.ListGetAt.call(ListGetAt.java:36)
  at testcontrollerbandwidth_cfm$cf.call(/TestControllerBandwidth.cfm:284)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:939)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:833)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:63)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:44)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2405)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2395)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2363)
  at lucee.runtime.engine.Request.exe(Request.java:44)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1091)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1039)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:676)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1132)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2527)
  at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2516)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:748)
 
Timestamp 3/19/18 6:50:56 PM EDT

It's a 4-port controller with only 2 drives attached, so it looks like there needs to be some logic to stop testing at the number of attached drives rather than the number of ports.
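The guard being suggested amounts to capping the simultaneous-drive loop at whichever is smaller: port count or attached-drive count. A hypothetical sketch (PORTS and the drive list are illustrative, not taken from the plugin):

```shell
#!/bin/bash
# Cap the simultaneous-drive test loop at the number of drives actually
# attached, not the controller's port count, so a half-populated
# controller never tries to test more drives than exist.
PORTS=4
ATTACHED=("sdb" "sdc")          # only two of the four ports are populated

max=$(( PORTS < ${#ATTACHED[@]} ? PORTS : ${#ATTACHED[@]} ))
for (( n=1; n<=max; n++ )); do
    echo "testing $n drive(s) simultaneously"
done
```

With this cap, the "Reading -1 drives" step and the out-of-range list index would never be reached.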

 


Status Update: I'm about ready for open Alpha testing. Drive benchmark testing with pre-alpha team is happening now - will scan one drive per controller at the same time. Working on support to add multiple drives per controller but didn't want that to keep delaying the release.

