DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7


Recommended Posts

When I run controller benchmarks, everything runs fine. When I try to benchmark my drives, I get repeated Speed Gap errors and the benchmarking never completes, even though I disabled Speed Gap detection. I've tried this in Firefox and Chrome. I've re-installed the plugin. I tried this plugin a couple of years ago and had this same Speed Gap issue, and I still have it. Am I doing something wrong?

 

Thanks for your consideration!

 

-Jason

Link to comment
13 hours ago, jasonwert said:

When I run controller benchmarks, everything runs fine. When I try to benchmark my drives, I get repeated Speed Gap errors and the benchmarking never completes, even though I disabled Speed Gap detection. I've tried this in Firefox and Chrome. I've re-installed the plugin. I tried this plugin a couple of years ago and had this same Speed Gap issue, and I still have it. Am I doing something wrong?

 

Thanks for your consideration!

 

-Jason

 

The controller benchmark doesn't check for abnormal reads, but it does warn of a potentially invalid test result when a drive's single-drive read over 15 seconds is quite a bit slower than its result during the all-drive benchmark over the same 15 seconds. The drive benchmark checks the minimum and maximum data read per second, looking for a gap over a given MB/sec, which is a sign of other processes accessing the drive during the test.
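As a toy illustration of that check (the threshold and the one-sample-per-line input are my own assumptions for the example, not DiskSpeed's actual code), the logic boils down to comparing the fastest and slowest one-second reads against an allowed gap:

```shell
#!/bin/sh
# Flag a benchmark run whose per-second MB/s samples (one per line on
# stdin) spread wider than the allowed gap (first argument, in MB/sec).
speed_gap() {
  awk -v g="$1" '
    NR == 1 { mn = $1; mx = $1 }
    $1 < mn { mn = $1 }
    $1 > mx { mx = $1 }
    END {
      if (mx - mn > g) print "SpeedGap detected"; else print "OK"
    }'
}
# Example: printf "231\n229\n150\n" | speed_gap 20
```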

 

If you're sure no other processes are accessing the drives (watching the Main tab in Unraid for read/write counter increments while not benchmarking will tell you), how is your system laid out? A screenshot of the System Bus Tree from the app will suffice.
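If you'd rather check from a console than watch the Main tab, a quick sketch (my own helper, not part of DiskSpeed) is to sample /proc/diskstats twice and diff the sector counters:

```shell
#!/bin/sh
# Report sectors read/written on a drive over an interval; any non-zero
# numbers while you are NOT benchmarking mean something else is using it.
# /proc/diskstats fields: $3=device, $6=sectors read, $10=sectors written.
drive_activity() {
  dev="$1"; secs="${2:-5}"
  set -- $(awk -v d="$dev" '$3 == d { print $6, $10 }' /proc/diskstats)
  r0=$1; w0=$2
  sleep "$secs"
  set -- $(awk -v d="$dev" '$3 == d { print $6, $10 }' /proc/diskstats)
  echo "sectors read: $(( $1 - r0 ))  written: $(( $2 - w0 ))"
}
# Example: drive_activity sdp 5
```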

 

Another thing you can try is to kick off a manual benchmark of 2 or more drives and then click the hidden link on the period at the end of "Click on a drive label to hide or show it." This will reveal the hidden iframes that are performing all the work behind the scenes, and the additional information in the logs they display might give a hint.

Link to comment
On 6/13/2022 at 7:26 PM, jbartlett said:

Click on the DiskSpeed Docker icon and open a console window.

 

Copy and paste the following command after verifying your parity drive is still sdp, changing it here if it's different.

dd if=/dev/sdp of=/dev/null bs=1310720 skip=0 iflag=direct status=progress conv=noerror

 

You should see it start copying data, updating the progress every second. The MB/s value should quickly settle down, with a typical 2-3 MB/s variance from one second to the next. When you press CTRL-C to abort (or let it read the entire drive), it should report something like the following:

2081423360 bytes (2.1 GB, 1.9 GiB) copied, 9.00526 s, 231 MB/s^C
1655+0 records in
1654+0 records out
2167930880 bytes (2.2 GB, 2.0 GiB) copied, 9.40043 s, 231 MB/s
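To repeat that spot check across several drives without babysitting each one, a small wrapper along these lines could help (my own sketch, assuming GNU dd and coreutils timeout; sending SIGINT makes GNU dd print its final statistics before exiting):

```shell
#!/bin/sh
# Time-boxed sequential read of each named device, keeping only dd's
# summary line. On real drives add iflag=direct to bypass the page cache,
# as in the command above (omitted here so the function also works on
# plain files).
quickread() {
  secs="$1"; shift
  for dev in "$@"; do
    printf '== %s ==\n' "$dev"
    timeout -s INT "$secs" dd if="$dev" of=/dev/null bs=1310720 conv=noerror 2>&1 | tail -n 1
  done
}
# Example: quickread 15 /dev/sdp /dev/sdq
```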

 

 

 

On 6/13/2022 at 8:33 PM, jbartlett said:

 

Go to the Tools page in Unraid and click on the System Devices icon. Find your storage card and take note of the domain ID; it'll look something like "07:00.0" and be listed after the two hex numbers in brackets. Open a console window and enter the following, replacing the domain ID with your value.

lspci -vv -s 07:00.0

 

Look for the lines starting with "LnkCap" and "LnkSta" that report a speed & width. Please copy and paste the results here.
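Since the full `lspci -vv` dump is long, a tiny filter (my own convenience, not part of DiskSpeed) pulls out just those two lines:

```shell
#!/bin/sh
# Keep only the link-capability (maximum) and link-status (negotiated)
# lines from `lspci -vv` output piped in on stdin.
linklines() {
  grep -E 'Lnk(Cap|Sta):'
}
# Example: lspci -vv -s 07:00.0 | linklines
```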

 

On 6/13/2022 at 8:42 PM, JorgeB said:

It's reporting the controller link speed; PCIe 1.0 x4 is correct for a SASLP.

 

I hate it when people try to help me and I seemingly ignore them. I read your answers, but I had to be away for over a month. When I came back, one HDD was dead, so I first had to replace it. Now the problem has disappeared for some reason.

 

Thanks for your input, and know that I did not ignore it (at least not intentionally)...

Link to comment

I have always had problems finishing the DiskSpeed benchmark to find which drive is going slow. Is there a way to benchmark just a specified number of drives at a time? I can't do another test for a while since I have a parity check going, which has been getting really slow for an unknown reason, but I will try one in a few days. Right now I am not starting any Dockers because I don't want to interfere with the parity check.

Link to comment
2 hours ago, FrozenGamer said:

I have always had problems finishing the DiskSpeed benchmark to find which drive is going slow. Is there a way to benchmark just a specified number of drives at a time? I can't do another test for a while since I have a parity check going, which has been getting really slow for an unknown reason, but I will try one in a few days. Right now I am not starting any Dockers because I don't want to interfere with the parity check.

 

When you first pull up the app or click on the "DiskSpeed" label at the top of any page, it displays a "Benchmark Drives" button. That in turn displays a Benchmark page where you can optionally select which drives you want to test. By default it starts with all drives, but if you uncheck the "Check all drives" checkbox, all your drives will be listed for individual selection.

Link to comment
  • 2 weeks later...

I have identified a problem. I told it to choose Disk 2 (sdab), and it is doing another disk, which is not 8 TB but 10. Then it gets stuck at 90% every time:

SAS2308 PCI-Express Fusion-MPT SAS-2: Scanning Disk 2 (sdab) at 8 TB   90%

It appears to continue reading at 133 MB/s, so I assume that isn't the slow disk, if I have one. It seems to be continuing long enough that it isn't going to stop.

 

Link to comment
2 hours ago, FrozenGamer said:

I have identified a problem. I told it to choose Disk 2 (sdab), and it is doing another disk, which is not 8 TB but 10. Then it gets stuck at 90% every time:

SAS2308 PCI-Express Fusion-MPT SAS-2: Scanning Disk 2 (sdab) at 8 TB   90%

It appears to continue reading at 133 MB/s, so I assume that isn't the slow disk, if I have one. It seems to be continuing long enough that it isn't going to stop.

 

 

It looks like you have more than 26 drives attached. Can you submit a debug file using the "Create Debug File" link at the bottom of the page? You do not need the Controller Info item.

Link to comment

Hello! Just found this plugin, and it looks super helpful. Unfortunately, it seems to be erroring out on one of my cache drives during setup.

 

Steps to Reproduce (on my system):

1. Install DiskSpeed from CA

2. Open WebGUI

3. See error after it reaches one of my cache drives (Crucial MX500): 15:20:09 Found drive Crucial CT2000MX500SSD1 Rev: M3CR043 Serial: 2210E6168449 (sdb), 1 partition

 

Error:

Lucee 5.2.9.31 Error (expression)

Message: invalid call of the function listGetAt, second Argument (posNumber) is invalid, invalid string list index [2]

Pattern: listgetat(list:string, position:number, [delimiters:string, [includeEmptyFields:boolean]]):string

Stacktrace: The Error Occurred in
/var/www/ScanControllers.cfm: line 1733

1731: <CFSET NR=i-2>
1732: <CFSET Part.Partitions[NR].PartNo=ListGetAt(CurrLine,1,":",true)>
1733: <CFSET Part.Partitions[NR].Start=Val(ListGetAt(CurrLine,2,":",true))>
1734: <CFSET Part.Partitions[NR].End=Val(ListGetAt(CurrLine,3,":",true))>
1735: <CFSET Part.Partitions[NR].Size=Val(ListGetAt(CurrLine,4,":",true))>


called from /var/www/ScanControllers.cfm: line 1643

1641: </CFIF>
1642: </CFLOOP>
1643: </CFLOOP>
1644:
1645: <!--- Admin drive creation --->

 

Diagnostic File: https://drive.google.com/file/d/1xC5YkGH1ZNy0DiTBN008npWPYKZJn9RP/view?usp=sharing

Edited by berta123
Link to comment
  • 3 weeks later...
On 8/10/2022 at 1:29 PM, berta123 said:

Hello! Just found this plugin, and it looks super helpful. Unfortunately, it seems to be erroring out on one of my cache drives during setup.

 

Steps to Reproduce (on my system):

1. Install DiskSpeed from CA

2. Open WebGUI

3. See error after it reaches one of my cache drives (Crucial MX500): 15:20:09 Found drive Crucial CT2000MX500SSD1 Rev: M3CR043 Serial: 2210E6168449 (sdb), 1 partition

 

 

 

The error actually happens on sdc; its partition output isn't standard, with extra blank lines, some padded with spaces. I've added code to handle the extra lines, plus code to catch other gotchas so it'll continue.
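For reference, the defensive parse amounts to something like this sketch (my own approximation of the fix, working on `parted`'s `-m` machine-readable output rather than DiskSpeed's actual CFML):

```shell
#!/bin/sh
# Parse `parted -sm <dev> unit B print` output piped in on stdin: skip the
# "BYT;" header and the device summary line, trim padding and the trailing
# ";", drop blank or short lines, and emit one part/start/end/size line
# per partition.
parse_parts() {
  awk -F: '
    NR <= 2 { next }                      # BYT; header + device summary
    {
      gsub(/^[ \t]+|[ \t;]+$/, "")        # strip padding and trailing ;
      if ($0 == "" || NF < 4) next        # the extra/blank lines sdc emits
      printf "part=%s start=%s end=%s size=%s\n", $1, $2, $3, $4
    }'
}
```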

Link to comment
On 8/9/2022 at 10:56 AM, FrozenGamer said:

I have identified a problem. I told it to choose Disk 2 (sdab), and it is doing another disk, which is not 8 TB but 10. Then it gets stuck at 90% every time:

SAS2308 PCI-Express Fusion-MPT SAS-2: Scanning Disk 2 (sdab) at 8 TB   90%

It appears to continue reading at 133 MB/s, so I assume that isn't the slow disk, if I have one. It seems to be continuing long enough that it isn't going to stop.

 

I've verified that it's not a problem with having more than 26 drives. I added two 10-port hubs and filled them with USB drives to push my sdX device names past sdaa, then benchmarked sdab and sdac; there were no issues, and it benchmarked the correct drive.

 

Does it report something like "SpeedGap detected"? If so, you'll need to disable that when starting a benchmark from the main page, not from the drive itself. Also, if you select 2 or more drives to benchmark, you'll see the text "Click on a drive label to hide or show it." The period at the end is a hidden hyperlink; clicking it reveals the hidden iframes that perform the actual work, so you can see what's happening and whether there's an issue.

Link to comment
16 hours ago, ChrisCox462 said:

I found that I had issues on the initial start of the docker if I had old DiskSpeed data in the folder. If I delete the /appdata/DiskSpeed/ folder and reinstall the docker, it works fine.

 

Renaming the "Instances" directory under "/appdata/DiskSpeed" will do the same without needing to reinstall. If you get this issue again and it's resolved by renaming the directory, let me know so I can get a copy of your "bad" data and duplicate and fix the issue.
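In shell terms the reset is just the following (a sketch assuming the default appdata path; adjust for your share layout):

```shell
#!/bin/sh
# Move the Instances directory aside so DiskSpeed rebuilds it on next
# start, keeping the old copy so it can be sent in for debugging.
reset_instances() {
  base="${1:-/mnt/user/appdata/DiskSpeed}"
  [ -d "$base/Instances" ] || return 1
  mv "$base/Instances" "$base/Instances.bad"
}
# Example: reset_instances
```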

Link to comment
  • 3 weeks later...

My vulnerability scanner just popped up with this for DiskSpeed:

 

Description

According to its version, the remote web server is obsolete and no longer maintained by its vendor or provider.

Lack of support implies that no new security patches for the product will be released by the vendor. As a result, it may contain security vulnerabilities.

 

Solution

Remove the web server if it is no longer needed. Otherwise, upgrade to a supported version if possible or switch to another server.

 

Output

  Product                : Tomcat
  Installed version      : 8.0.53
  Support ended          : 2018-06-30
  Supported versions     : 8.5.x / 9.x / 10.x
  Additional information : http://tomcat.apache.org/tomcat-80-eol.html

 

Seems like maybe it's time to upgrade Tomcat?

Link to comment
  • jbartlett changed the title to DiskSpeed, hard drive benchmarking (unRAID 6+), version 2.9.5

Pushed version 2.9.5 to Docker Hub. @Howboys - let me know if it resolves your issues with the EOL notice. It's using the latest build of the Lucee app server.

 

I'm starting to use tagged versions. 2.9.5/latest resolves issues with funky partition output from the "parted" utility. Well, hopefully it resolves them, as I couldn't duplicate the issue.

 

If you have issues with version 2.9.5, change the repository to "jbartlett777/diskspeed:2.9.4" to roll back to the previous version.

Edited by jbartlett
Link to comment

I have no problems running benchmarks on a PCIe SATA card:

ASM1166 Serial ATA Controller - ZyDAS Technology Corp. (ASMedia Technology Inc.) - SATA controller.

 

But I do get this error when accessing the motherboard's onboard controller:

Device 43d2 - Micro-Star International Co., Ltd. [MSI] (Intel Corporation) - SATA controller 

 

Device 43d2

Micro-Star International Co., Ltd. [MSI] (Intel Corporation) 
SATA controller 

Type: Onboard Controller
Lucee 5.3.7.47 Error (expression)
Message: key [SPEED] doesn't exist
Stacktrace: The Error Occurred in
/var/www/DispController.cfm: line 36 
34: <CFOUTPUT>
35: <CFSET CK=0>
36: <CFIF HW[Key].Config.LnkSta.Speed EQ HW[Key].Config.LnkCap.Speed AND HW[Key].Config.LnkSta.Speed NEQ "" AND HW[Key].Config.LnkCap.Speed NEQ "">
37: <CFSET CK=1>
38: <span class="Bold">Current & Maximum Link Speed:</span> #HW[Key].Config.LnkSta.Speed# width #HW[Key].Config.LnkSta.Width# (#HW[Key].Config.LnkCap.Throughput# max throughput)<br>

Java Stacktrace	lucee.runtime.exp.ExpressionException: key [SPEED] doesn't exist
  at lucee.runtime.type.util.StructSupport.invalidKey(StructSupport.java:67)
  at lucee.runtime.type.StructImpl.get(StructImpl.java:149)
  at lucee.runtime.util.VariableUtilImpl.get(VariableUtilImpl.java:278)
  at lucee.runtime.PageContextImpl.get(PageContextImpl.java:1502)
  at dispcontroller_cfm$cf.call(/DispController.cfm:36)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:945)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:837)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:64)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:43)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2416)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2406)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2381)
  at lucee.runtime.engine.Request.exe(Request.java:43)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1170)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1116)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:733)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:747)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
  at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374)
  at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
  at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
  at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.base/java.lang.Thread.run(Thread.java:834)
 
Timestamp	9/23/22 12:07:36 PM PDT

 

Link to comment
On 9/19/2022 at 10:35 PM, jbartlett said:

Pushed version 2.9.5 to Docker Hub. @Howboys - let me know if it resolves your issues with the EOL notice. It's using the latest build of the Lucee app server.

 

I'm starting to use tagged versions. 2.9.5/latest resolves issues with funky partition output from the "parted" utility. Well, hopefully it resolves them, as I couldn't duplicate the issue.

 

If you have issues with version 2.9.5, change the repository to "jbartlett777/diskspeed:2.9.4" to roll back to the previous version.

 

Updated, and now there are a few more vulnerabilities ("HIGH"):

 

Description
The version of Tomcat installed on the remote host is prior to 9.0.43. It is, therefore, affected by multiple vulnerabilities as referenced in the vendor advisory.

- When using Apache Tomcat versions 10.0.0-M1 to 10.0.0-M4, 9.0.0.M1 to 9.0.34, 8.5.0 to 8.5.54 and 7.0.0 to 7.0.103 if a) an attacker is able to control the contents and name of a file on the server; and b) the server is configured to use the PersistenceManager with a FileStore; and c) the PersistenceManager is configured with sessionAttributeValueClassNameFilter=null (the default unless a SecurityManager is used) or a sufficiently lax filter to allow the attacker provided object to be deserialized; and d) the attacker knows the relative file path from the storage location used by FileStore to the file the attacker has control over; then, using a specifically crafted request, the attacker will be able to trigger remote code execution via deserialization of the file under their control. Note that all of conditions a) to d) must be true for the attack to succeed. (CVE-2020-9484)

- An information disclosure vulnerability exists when responding to new h2c connection requests, Apache Tomcat versions 9.0.0.M1 to 9.0.41 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request. (CVE-2021-25122)

- when using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. (CVE-2021-25329)

- A remote code execution vulnerability via deserialization exists when using Apache Tomcat 9.0.0.M1 to 9.0.41 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. (CVE-2021-25329)

Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.

Solution

Upgrade to Apache Tomcat version 9.0.43 or later.

-----------------------

Description
The version of Tomcat installed on the remote host is prior to 9.0.63. It is, therefore, affected by a vulnerability as referenced in the fixed_in_apache_tomcat_9.0.63_security-9 advisory.

- The documentation of Apache Tomcat 10.1.0-M1 to 10.1.0-M14, 10.0.0-M1 to 10.0.20, 9.0.13 to 9.0.62 and 8.5.38 to 8.5.78 for the EncryptInterceptor incorrectly stated it enabled Tomcat clustering to run over an untrusted network. This was not correct. While the EncryptInterceptor does provide confidentiality and integrity protection, it does not protect against all risks associated with running over any untrusted network, particularly DoS risks. (CVE-2022-29885)

Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.

Solution

Upgrade to Apache Tomcat version 9.0.63 or later.

-----------------------

Description
The version of Tomcat installed on the remote host is prior to 9.0.40. It is, therefore, affected by multiple vulnerabilities as referenced in the fixed_in_apache_tomcat_9.0.40_security-9 advisory.

- When serving resources from a network location using the NTFS file system, Apache Tomcat versions 10.0.0-M1 to 10.0.0-M9, 9.0.0.M1 to 9.0.39, 8.5.0 to 8.5.59 and 7.0.0 to 7.0.106 were susceptible to JSP source code disclosure in some configurations. The root cause was the unexpected behaviour of the JRE API File.getCanonicalPath() which in turn was caused by the inconsistent behaviour of the Windows API (FindFirstFileW) in some circumstances. (CVE-2021-24122)

- While investigating bug 64830 it was discovered that Apache Tomcat 10.0.0-M1 to 10.0.0-M9, 9.0.0-M1 to 9.0.39 and 8.5.0 to 8.5.59 could re-use an HTTP request header value from the previous stream received on an HTTP/2 connection for the request associated with the subsequent stream. While this would most likely lead to an error and the closure of the HTTP/2 connection, it is possible that information could leak between requests. (CVE-2020-17527)

Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.

Solution

Upgrade to Apache Tomcat version 9.0.40 or later.

-----------------------

Description
The version of Tomcat installed on the remote host is prior to 9.0.65. It is, therefore, affected by a vulnerability as referenced in the fixed_in_apache_tomcat_9.0.65_security-9 advisory.

- In Apache Tomcat 10.1.0-M1 to 10.1.0-M16, 10.0.0-M1 to 10.0.22, 9.0.30 to 9.0.64 and 8.5.50 to 8.5.81 the Form authentication example in the examples web application displayed user provided data without filtering, exposing a XSS vulnerability. (CVE-2022-34305)

Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.

Solution

Upgrade to Apache Tomcat version 9.0.65 or later.

 

There are more, but it seems like these are still present.

Link to comment

I'm unable to use this docker after changing my default appdata location from /mnt/user/appdata to /mnt/arraycache/appdata.

 

I've uninstalled and reinstalled the docker, including deleting its appdata folder in unraid. 

When starting the docker, it stays on the "Scanning Hardware - Spinning up hard drives" screen for a long time and then displays this error:

DiskSpeed - Disk Diagnostics & Reporting tool
Version: 2.9.5

Scanning Hardware
22:32:41 Spinning up hard drives
Lucee 5.3.7.47 Error (application)
Message: Error invoking external process
Stacktrace: The Error Occurred in
/var/www/Spinup.cfm: line 137
135: <CFFILE action="write" file="/tmp/DiskSpeedTmp/spinup.sh" mode="766" output="#spinup#" addnewline="NO">
136: </cflock>
137: <CFEXECUTE name="/tmp/DiskSpeedTmp/spinup.sh" timeout="0" />
138: <CFELSE>
139: <cflock name="FileWrite" type="exclusive" throwontimeout="true" timeout="10">

called from /var/www/ScanControllers.cfm: line 257
255: </CFOUTPUT>
256: <CFFLUSH>
257: <CFINCLUDE template="Spinup.cfm">
258:
259: <CFIF FileExists("#PersistDir#/storage.json")>

called from /var/www/ScanControllers.cfm: line 250
248: <cfexecute name="/bin/ls" arguments="-l /sys/block" variable="BlockDevices" timeout="90" />
249: <CFFILE action="delete" file="#PersistDir#/ls_sysblock_exec.txt">
250: <CFFILE action="write" file="#PersistDir#/ls_sysblock.txt" output="#BlockDevices#" addnewline="NO" mode="666">
251:
252: <CFOUTPUT>

Java Stacktrace	lucee.runtime.exp.ApplicationException: Error invoking external process
  at lucee.runtime.tag.Execute.doEndTag(Execute.java:259)
  at spinup_cfm$cf.call(/Spinup.cfm:137)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:945)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:837)
  at lucee.runtime.PageContextImpl.doInclude(PageContextImpl.java:818)
  at scancontrollers_cfm$cf.call_000005(/ScanControllers.cfm:257)
  at scancontrollers_cfm$cf.call(/ScanControllers.cfm:250)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:945)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:837)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:64)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:43)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2416)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2406)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2381)
  at lucee.runtime.engine.Request.exe(Request.java:43)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1170)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1116)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:733)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:690)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:747)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
  at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374)
  at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
  at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
  at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.base/java.lang.Thread.run(Thread.java:834)
 
Timestamp	9/23/22 10:37:45 PM EDT

 

Here is the log:


NOTE: Picked up JDK_JAVA_OPTIONS:  --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
23-Sep-2022 22:31:38.978 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/9.0.39
23-Sep-2022 22:31:38.980 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Oct 6 2020 14:11:46 UTC
23-Sep-2022 22:31:38.980 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.39.0
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            5.15.46-Unraid
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/local/openjdk-11
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           11.0.9+11
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
23-Sep-2022 22:31:38.981 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
23-Sep-2022 22:31:38.982 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
23-Sep-2022 22:31:38.996 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.lang=ALL-UNNAMED
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.io=ALL-UNNAMED
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
23-Sep-2022 22:31:38.997 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xms64m
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Xmx512m
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.security.egd=file:/dev/./urandom
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
23-Sep-2022 22:31:38.998 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
23-Sep-2022 22:31:39.013 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.25] using APR version [1.6.5].
23-Sep-2022 22:31:39.013 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
23-Sep-2022 22:31:39.013 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
23-Sep-2022 22:31:39.016 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1d  10 Sep 2019]
23-Sep-2022 22:31:39.216 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8888"]
23-Sep-2022 22:31:39.235 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-127.0.0.1-8009"]
23-Sep-2022 22:31:39.236 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [431] milliseconds
23-Sep-2022 22:31:39.258 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
23-Sep-2022 22:31:39.258 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.39]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender (file:/usr/local/tomcat/lucee/lucee.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
23-Sep-2022 22:31:40.284 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8888"]
23-Sep-2022 22:31:40.291 SEVERE [main] org.apache.catalina.util.LifecycleBase.handleSubClassException Failed to start component [Connector[AJP/1.3-8009]]
        org.apache.catalina.LifecycleException: Protocol handler start failed
                at org.apache.catalina.connector.Connector.startInternal(Connector.java:1067)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.StandardService.startInternal(StandardService.java:438)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.startup.Catalina.start(Catalina.java:772)
                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:342)
                at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:473)
        Caused by: java.lang.IllegalArgumentException: The AJP Connector is configured with secretRequired="true" but the secret attribute is either null or "". This combination is not valid.
                at org.apache.coyote.ajp.AbstractAjpProtocol.start(AbstractAjpProtocol.java:270)
                at org.apache.catalina.connector.Connector.startInternal(Connector.java:1064)
                ... 12 more
23-Sep-2022 22:31:40.293 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1056] milliseconds
Exception in thread "Thread-39" java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.Arrays.copyOf(Arrays.java:3745)
        at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172)
        at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:686)
        at java.base/java.lang.StringBuffer.append(StringBuffer.java:409)
        at java.base/java.io.StringWriter.write(StringWriter.java:99)
        at lucee.commons.io.IOUtil.copy(IOUtil.java:351)
        at lucee.commons.io.IOUtil.copy(IOUtil.java:312)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:845)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:832)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:792)
        at lucee.commons.cli.StreamGobbler.run(Command.java:168)
Exception in thread "Thread-193" java.lang.OutOfMemoryError: Java heap space
        at java.base/java.util.Arrays.copyOf(Arrays.java:3745)
        at java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172)
        at java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:686)
        at java.base/java.lang.StringBuffer.append(StringBuffer.java:409)
        at java.base/java.io.StringWriter.write(StringWriter.java:99)
        at lucee.commons.io.IOUtil.copy(IOUtil.java:351)
        at lucee.commons.io.IOUtil.copy(IOUtil.java:312)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:845)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:832)
        at lucee.commons.io.IOUtil.toString(IOUtil.java:792)
        at lucee.commons.cli.StreamGobbler.run(Command.java:168)
23.09.2022 22:31:40,037 ERROR [server.application] application->no password set and no password file found at [/opt/lucee/server/lucee-server/context/password.txt]
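(A side note on two of the errors in the log above, separate from the appdata issue: the AJP connector failure is standard Tomcat behavior since 9.0.31, where AJP refuses to start unless a shared secret is configured. If nothing in the container actually uses AJP, the connector can be disabled or opted out in Tomcat's conf/server.xml — a sketch only, not the image's actual file:)

```xml
<!-- conf/server.xml sketch: either comment out the AJP connector entirely, -->
<!-- or explicitly opt out of the secret requirement (less secure):         -->
<Connector protocol="AJP/1.3" address="127.0.0.1" port="8009"
           redirectPort="8443" secretRequired="false" />
```

(The OutOfMemoryError further down suggests the JVM heap is too small for the output being captured; on a stock Tomcat-based image the heap can usually be raised with something like JAVA_OPTS="-Xmx1g" passed into the container, assuming the image honors that standard variable.)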

 

 

My Plex docker works completely fine with zero issues after changing the appdata default location. 

 

Could anyone tell me what the problem is?

I'm attaching my diagnostics and a screenshot of the docker edit screen.

Thanks

Screen Shot 2022-09-23 at 10.46.59 PM.png

Screen Shot 2022-09-23 at 10.46.34 PM.png

threadripper19-diagnostics-20220923-2249.zip

Link to comment
On 9/23/2022 at 10:57 AM, dopeytree said:

Many thanks for this docker. Can I just check: is this displaying read speed, or is it capable of write testing?

Would be good to have it say 'read' speed or similar.

 

It displays an error when trying to upload data to share with the benchmark database. Is there a problem on the database's end?

 

Version 3 (in dev) will have write testing for solid-state media because some drives require reading an existing file rather than a raw location on the drive. I'm contemplating adding write testing for spinners, but if I do, it'll likely be ONLY on a drive with no partitions.
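For illustration only, the file-based style of read test looks roughly like this (a throwaway temp file stands in for a real file on a drive; this is not DiskSpeed's actual code, and without O_DIRECT or a cache drop, a re-read mostly measures the page cache rather than the drive):

```shell
# Sketch: create a scratch file, then time a sequential read of it with dd.
# This is the "read an existing file" approach, as opposed to reading a raw
# offset on the block device (which needs no file at all).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>/dev/null
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1   # last line reports throughput
rm -f "$f"
```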

 

On 9/23/2022 at 12:10 PM, dopeytree said:

I have no problems running benchmarks on a PCIe SATA card:

ASM1166 Serial ATA Controller - ZyDAS Technology Corp. (ASMedia Technology Inc.) - SATA controller.

 

But I do get this error when accessing the motherboard's chipset:

Device 43d2 - Micro-Star International Co., Ltd. [MSI] (Intel Corporation) - SATA controller 

 

I was able to duplicate this and implemented a fix, along with fixes for some other oddities I found in the newer version of Lucee and the base OS. Version 2.9.6 pushed.

Link to comment
  • jbartlett changed the title to DiskSpeed, hard drive benchmarking (unRAID 6+), version 2.9.6
On 9/23/2022 at 1:52 PM, Howboys said:

There's more, but it seems like these are still present.

 

In version 2.9.5, I had taken out the "apt update" command in favor of keeping the Docker size smaller, since this Docker relies on another Docker (Lucee) and they had the same command. I put it back in to ensure that when *I* build, my DiskSpeed docker is current.

 

Any subsequent updates will rely on other teams implementing them, such as Lucee and then Apache Tomcat, and I think Tomcat is built off of Debian.
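For reference, the line in question is the standard Debian package refresh during an image build, something along these lines (a hypothetical fragment, not the actual DiskSpeed Dockerfile):

```dockerfile
# Refresh and upgrade the Debian packages inherited from the Lucee/Tomcat base
# image, then clear the apt cache so it doesn't bloat the layer.
RUN apt-get update \
 && apt-get -y upgrade \
 && rm -rf /var/lib/apt/lists/*
```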

Link to comment
On 9/23/2022 at 7:50 PM, FQs19 said:

I'm unable to use this docker after changing my appdata default location from /mnt/user/appdata to /mnt/arraycache/appdata.

 

I've uninstalled and reinstalled the docker, including deleting its appdata folder in unraid. 

When starting the docker, it stays on the 'Scanning Hardware Spinning Up Hard Drives' screen for a long time, then displays this error:

 

It looks like the permissions on the new directory aren't correct and the application can't write to it. While the Docker config is set to R/W, that doesn't mean squat if the directory itself is not writable by Docker.

 

Open a shell prompt on the Unraid server itself (not the Docker) and enter the following lines. This is the same code that runs when you use the Unraid tool to apply new permissions.

 

chmod -R u-x,go-rwx,go+u,ugo+X '/mnt/arraycache/appdata/DiskSpeed'
chown -R nobody:users '/mnt/arraycache/appdata/DiskSpeed'
sync
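If you're curious what those symbolic modes actually do, here's the same chmod applied to a scratch directory tree (pure illustration; any temp path works). Directories end up rwxrwxrwx and regular files rw-rw-rw-:

```shell
# Demonstrate the New Permissions chmod on a throwaway directory tree
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/sub/file"
chmod 700 "$d/sub"       # start from restrictive modes
chmod 600 "$d/sub/file"
# u-x drops user execute, go-rwx clears group/other, go+u copies the user
# bits to group/other, and +X re-adds execute on directories only.
chmod -R u-x,go-rwx,go+u,ugo+X "$d"
stat -c '%A %n' "$d/sub" "$d/sub/file"   # drwxrwxrwx and -rw-rw-rw-
rm -rf "$d"
```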

 

 

Link to comment
  • jbartlett changed the title to DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
