Steve Croft Posted December 27, 2021 20 minutes ago, jbartlett said: I'm pretty sure it works but I'll look into it. And you're right about the checkbox. I'll build a patch soon. I don't actually make use of the library but I know the whole thing with "Some tool you may use might use it". What antivirus/malware do you have? It looks like the input/output buffer is getting messed with. I know I've recently run into issues with Acronis's protection in that it intercepts the whole buffer and won't release it until it's closed. Fucks with programs that stream HTML intermittently, such as updating scanning progress. Haven't heard of IOzone. I'm using dd to read from the drive from a given location for a given duration with the bit bucket as an output.

Thanks @jbartlett. No antivirus/malware, unfortunately. I do have an Aeotec Z-Wave USB Z-Stick Plus and a USB connection to my UPS, but can't think of anything else that would cause this.
almulder Posted December 29, 2021 (edited) Started up the Docker and went to the GUI, and it keeps throwing an error on my one cache drive and never goes past it; it just locks up there. sdaa & sdab are my cache drives (RAID 1); they are both new, less than 1 month old. No error in Unraid.

10:46:57 Found drive Samsung SSD 870 EVO Rev: SVT01B6Q Serial: *************** (sdaa), 1 partition
10:46:57 Found drive Samsung SSD 870 EVO Rev: SVT01B6Q Serial: *************** (sdab), 1 partition

Lucee 5.2.9.31 Error (application)
Message: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdac unit B print free]
Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 1714
1712: <CFIF DriveID NEQ "">
1713: <!--- Fetch partition information --->
1714: <cfexecute name="/sbin/parted" arguments="-m /dev/#DriveID# unit B print free" variable="PartInfo" timeout="90" />
1715: <CFFILE action="write" file="#PersistDir#/parted_#DriveID#.txt" output="#PartInfo#" addnewline="NO" mode="666">
1716: <CFSET TotalPartitions=0>
called from /var/www/ScanControllers.cfm: line 1643
1641: </CFIF>
1642: </CFLOOP>
1643: </CFLOOP>
1644:
1645: <!--- Admin drive creation --->
Java Stacktrace:
lucee.runtime.exp.ApplicationException: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdac unit B print free]
  at lucee.runtime.tag.Execute._execute(Execute.java:241)
  at lucee.runtime.tag.Execute.doEndTag(Execute.java:252)
  at scancontrollers_cfm$cf.call_000163(/ScanControllers.cfm:1714)
  at scancontrollers_cfm$cf.call(/ScanControllers.cfm:1643)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
  at lucee.runtime.engine.Request.exe(Request.java:44)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
  at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:748)
Timestamp: 12/29/21 10:48:27 AM MST
Edited December 29, 2021 by almulder
jbartlett Posted December 31, 2021 Author On 12/29/2021 at 9:57 AM, almulder said: Started up the Docker and went to the GUI, and it keeps throwing an error on my one cache drive and never goes past it; it just locks up there. sdaa & sdab are my cache drives (RAID 1); they are both new, less than 1 month old. No error in Unraid.

RAID drives aren't currently supported but will be in version 3.0, though it still shouldn't hang like that. I'll take a look.
dazzathewiz Posted December 31, 2021 So ah, what would be the advice given this result... 😰
jbartlett Posted January 3, 2022 Author On 12/31/2021 at 1:47 AM, dazzathewiz said: So ah, what would be the advice given this result... 😰

Check your SMART values. Swap out the drive and let the parity rebuild it. If you're curious, run an extended SMART test after you move the files off and see if your SMART values have changed when it's complete. Alternatively, run a single-cycle/pass preclear (read/write once) on the drive, which should flush out any pending bad sectors.
dazzathewiz Posted January 3, 2022 1 hour ago, jbartlett said: Check your SMART values. Swap out the drive and let the parity rebuild it. If you're curious, run an extended SMART test after you move the files off and see if your SMART values have changed when it's complete. Alternatively, run a single-cycle/pass preclear (read/write once) on the drive, which should flush out any pending bad sectors.

Thanks very much - I will order another drive to replace that one and then play with those tests. The Docker is great - the reason I used it was because I am seeing very high CPU IO wait times when copy operations to the array are happening, and I couldn't pin down the cause. I suspect the issue will be this drive. So thanks for your efforts developing this one.
jbartlett Posted January 4, 2022 Author 1 hour ago, dazzathewiz said: Thanks very much - I will order another drive to replace that one and then play with those tests. The Docker is great - the reason I used it was because I am seeing very high CPU IO wait times when copy operations to the array are happening, and I couldn't pin down the cause. I suspect the issue will be this drive. So thanks for your efforts developing this one.

I'd recommend updating your shares to exclude the drive in question until you get the new one in.
jbartlett Posted January 30, 2022 Author (edited) I've been working on adding support for benchmarking SSDs and wanted to post my method for others to weigh in on. The method I came up with allows DiskSpeed to come up with the following metrics using a Western Digital Black NVMe 250GB, model WDS256G1X0C:

Read Speed: 427 MB/s
Write Burst Speed: 202 MB/s (ends after 7.52GB)
Write Sustained Speed: 118 MB/s - 149 MB/s

There needs to be at least one partition mounted with at least 15GB of free space available. In this example, I will use "/mnt/cache". SSDs in a RAID or drive pool will not be tested as it is not possible to test just one drive in the series. The commands given below are formatted for repeating on your personal setups. For best results, shut down any VMs or active Dockers residing on the cache drive.

1. Sync the drive
sync "/mnt/cache"

2. Trim the drive
fstrim -v "/mnt/cache"

3. Create a 10GB file with random data
dd if=/dev/random of="/mnt/cache/DiskSpeed_fq9.junk" bs=1MB count=10000 conv=noerror status=progress 2> /boot/cache_write.txt

4. Sync the drive
sync "/mnt/cache"

5. Read the file
dd if="/mnt/cache/DiskSpeed_fq9.junk" of=/dev/null iflag=direct status=progress 2> /boot/cache_read.txt

6. Delete the file & trim the drive again

The line delimiter used for cache_read.txt & cache_write.txt is just a carriage return, so you will need to use a program that can understand that, such as Notepad++, or it will all appear on one line. The last line in cache_read.txt is used for the read speed:

10000000000 bytes (10 GB, 9.3 GiB) copied, 23.4231 s, 427 MB/s

The cache_write.txt file is read line by line, looking at the number of bytes written, the current duration, and the speed:

203000000 bytes (203 MB, 194 MiB) copied, 1.00453 s, 202 MB/s

When the number of seconds on the current line is more than 2 seconds past the previous line, the drive's buffer was filled and there was a pause while the drive flushed some of the buffer to make room.
The highest value taken up to this point is considered the Write Burst Speed, as it utilizes the drive's cache memory. The number of bytes at the line where the gap in seconds occurs is the Write Burst End value. From that point on, the min-max speeds are computed for the write sustained speed range. Edited January 30, 2022 by jbartlett
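As a sketch of how the cache_write.txt analysis described above might be implemented (a hypothetical re-implementation for illustration, not DiskSpeed's actual code), splitting on the bare carriage returns dd emits and watching for a gap of more than 2 seconds between progress updates:

```python
import re

# dd progress lines look like:
# "203000000 bytes (203 MB, 194 MiB) copied, 1.00453 s, 202 MB/s"
LINE_RE = re.compile(r"^(\d+) bytes .*copied, ([\d.]+) s, ([\d.]+) MB/s")

def analyze_write_log(text):
    """Return (burst_speed, burst_end_bytes, sustained_min, sustained_max).

    text is the raw cache_write.txt contents; dd separates its progress
    updates with bare carriage returns, so split on '\r'.
    """
    samples = []
    for line in text.split("\r"):
        m = LINE_RE.match(line.strip())
        if m:
            samples.append((int(m.group(1)), float(m.group(2)), float(m.group(3))))

    burst_end_idx = None
    for i in range(1, len(samples)):
        # A gap of more than 2 s between updates means the drive's
        # write cache filled and it stalled to flush.
        if samples[i][1] - samples[i - 1][1] > 2:
            burst_end_idx = i
            break

    if burst_end_idx is None:
        return None  # no stall observed; the burst cache never filled

    burst_speed = max(s[2] for s in samples[:burst_end_idx])
    burst_end_bytes = samples[burst_end_idx][0]
    sustained = [s[2] for s in samples[burst_end_idx:]]
    return burst_speed, burst_end_bytes, min(sustained), max(sustained)
```

The burst speed is the fastest sample before the first stall, the byte count at the stall marks the burst end, and everything after it defines the sustained min-max range, matching the description above.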
rgiorgio Posted February 4, 2022 Has anyone gotten this to run on a Synology NAS with an Intel Celeron processor and, if so, how did you configure the container? This looks like an awesome tool, and I would love to try it, but I am not a huge Linux tech type. Thanks in advance. Ray
jbartlett Posted February 5, 2022 Author 9 hours ago, rgiorgio said: Has anyone gotten this to run on a Synology NAS with an Intel Celeron processor and, if so, how did you configure the container? This looks like an awesome tool, and I would love to try it, but I am not a huge Linux tech type. Thanks in advance. Ray

I've gotten it to run under a different Unix OS (I forget which), but the main thing you need to ensure is Privileged=true. This is also the main reason why DiskSpeed doesn't work on Windows: it doesn't support that flag. From there, it's just experimentation.
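For reference, a minimal launch sketch for a generic Docker host. The image name and internal port 8888 are taken from this thread; the host port is arbitrary, and whether Synology's Container Manager exposes the privileged flag in its UI is outside this sketch:

```shell
# Launch configuration sketch (assumptions: host port 8888 is free,
# your platform allows privileged containers)
docker run -d \
  --name DiskSpeed \
  --privileged \
  -p 8888:8888 \
  jbartlett777/diskspeed
```

After starting, the GUI would be reached at http://<host-ip>:8888/ (or whatever host port you published).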
AndyT86 Posted March 4, 2022 (edited) On 5/13/2018 at 4:13 PM, MMW said: Any ideas on this: It was working correctly until the last update (been away and only just had a chance to update the post).

paradigmevo - I get this 'error' due to the virtual devices created by my IPMI interface. It creates a 'Virtual Floppy, CD-ROM, etc.' I've looked for ways to stop it but to no avail. Anyway, it was safe to ignore, at least in my particular case. Edited March 4, 2022 by AndyT86
Hoopster Posted March 4, 2022 1 hour ago, AndyT86 said: I've looked for ways to stop it but to no avail.

Do you need those virtual devices created by IPMI? In the IPMI interface for my board, I just turned them all off. I would get a bunch of devices showing up in Unassigned Devices as well, since they were originally set to a value of 4; setting it to 0 got rid of them all.
lex87 Posted May 9, 2022 (edited) Hello, I have some problems getting started with the DiskSpeed docker container. My settings are:

<?xml version="1.0"?>
<Container version="2">
  <Name>diskspeed</Name>
  <Repository>jbartlett777/diskspeed</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>sh</Shell>
  <Privileged>true</Privileged>
  <Support/>
  <Project/>
  <Overview/>
  <Category/>
  <WebUI/>
  <TemplateURL/>
  <Icon/>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1652055180</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Description/>
  <Networking>
    <Mode>bridge</Mode>
    <Publish>
      <Port>
        <HostPort>1111</HostPort>
        <ContainerPort>18888</ContainerPort>
        <Protocol>tcp</Protocol>
      </Port>
    </Publish>
  </Networking>
  <Data/>
  <Environment/>
  <Labels/>
  <Config Name="Host Port 1" Target="18888" Default="" Mode="tcp" Description="Container Port: 18888" Type="Port" Display="always" Required="false" Mask="false">1111</Config>
</Container>

Even though the Docker is started, when I try to use http://192.168.178.2:1111/ nothing appears. I can't find the root cause of my problem. Edited May 9, 2022 by lex87
trurl Posted May 9, 2022 1 hour ago, lex87 said: My settings are

Post the docker run command instead of the XML
jbartlett Posted May 10, 2022 Author The container port must remain 8888.
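Applied to the template posted earlier in the thread, that means the Port entry should publish the chosen host port against container port 8888 rather than 18888. A sketch of the corrected fragment (host port 1111 kept from the original template):

```xml
<Publish>
  <Port>
    <HostPort>1111</HostPort>
    <ContainerPort>8888</ContainerPort>
    <Protocol>tcp</Protocol>
  </Port>
</Publish>
```

The GUI would then answer at http://<host-ip>:1111/ while the application inside the container keeps listening on 8888.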
lex87 Posted May 29, 2022 On 5/9/2022 at 4:26 AM, trurl said: Post the docker run command instead of the XML

Thanks a lot, issue solved.

On 5/10/2022 at 6:03 PM, jbartlett said: The container port must remain 8888.

That helped.
zoggy Posted May 29, 2022 I was checking something recently after upgrading to 6.10.2 and noticed that DiskSpeed created its main folder with an unknown UID:

:/mnt/user/appdata# ls -ld DiskSpeed/
drwxrwxrwx 1 65534 users 18 Apr 13 10:12 DiskSpeed//
:/mnt/user/appdata/DiskSpeed# ls -alh
total 0
drwxrwxrwx 1 65534 users 18 Apr 13 10:12 ./
drwxrwxrwx 1 nobody users 292 May 21 02:46 ../
drwxrwxrwx 1 65534 users 10 Apr 13 10:12 Instances/

It seemed to work just fine, however?
jbartlett Posted May 30, 2022 Author 6 hours ago, zoggy said: I was checking something recently after upgrading to 6.10.2 and noticed that DiskSpeed created its main folder with an unknown UID:

:/mnt/user/appdata# ls -ld DiskSpeed/
drwxrwxrwx 1 65534 users 18 Apr 13 10:12 DiskSpeed//
:/mnt/user/appdata/DiskSpeed# ls -alh
total 0
drwxrwxrwx 1 65534 users 18 Apr 13 10:12 ./
drwxrwxrwx 1 nobody users 292 May 21 02:46 ../
drwxrwxrwx 1 65534 users 10 Apr 13 10:12 Instances/

It seemed to work just fine, however?

It's the built-in Docker user, but I have a task that goes out and opens up the permissions on any new files.
AnimusAstralis Posted June 2, 2022 Thanks for this essential tool. I strongly feel that SMART is not (and has never been?) reliable enough for reporting true HDD health status. I've benchmarked my HDDs and now I'm puzzled by this result: I didn't find similar results in this thread. How are such high speeds even possible? What does this 'sine' behavior say? Also, the max allowed speed gap doesn't increase after 25+ retries, so I had to just abort the benchmark. Benchmarks of two other similar HDDs don't report anything unusual. This is the SMART report of the weird one:
JorgeB Posted June 2, 2022 13 minutes ago, AnimusAstralis said: How are such high speeds even possible?

It's possible with WD SMR disks that haven't been fully written: when the disk knows there's no data there, it returns zeros from the controller instead of reading the disk surface, hence the higher speeds. Once the disk has been fully written, you should see normal results.
AnimusAstralis Posted June 2, 2022 (edited) 18 minutes ago, JorgeB said: It's possible with WD SMR disks that haven't been fully written: when the disk knows there's no data there, it returns zeros from the controller instead of reading the disk surface, hence the higher speeds. Once the disk has been fully written, you should see normal results.

I suspected that this is somehow connected with SMR. The weird HDD is only 44% full, but the two other drives are 76% and 65% full respectively, so they produce normal graphs: So, are you saying that I can expect normal results if I fill my HDD to 50% or more? In that case DiskSpeed will need some tweaking, I suppose, because the speed gap doesn't increase on retries, and disabling it doesn't work either for some reason. First I thought that my HDD was faulty, but now I'm thinking that WD's SMR technology messes benchmarks up. Edited June 2, 2022 by AnimusAstralis
JorgeB Posted June 2, 2022 6 minutes ago, AnimusAstralis said: So, are you saying that I can expect normal results if I fill my HDD to 50% or more?

It should return normal results once the disk has been fully written at least once; after that, the actual used capacity should not be important.
jbartlett Posted June 3, 2022 Author It's not uncommon, it seems, based on other submissions. But yeah, I'll need to code in a catch for sine waves 😅 It may be a good idea to get into the habit of putting your new drives through at least one preclear pass (one write pass at a minimum), which will prevent this from happening.
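One hypothetical heuristic for the "catch" mentioned above (my own sketch, not DiskSpeed's actual code, and the ceiling/threshold values are assumptions): a healthy platter drive declines smoothly from outer to inner tracks, so samples that exceed a plausible platter ceiling or oscillate strongly suggest zero-filled SMR regions being served straight from the controller:

```python
from statistics import mean, stdev

def looks_like_smr_zero_fill(speeds_mb_s, hdd_ceiling=300.0, cv_threshold=0.25):
    """Heuristic check for implausible HDD benchmark curves.

    speeds_mb_s: read-speed samples taken across the disk, in MB/s.
    hdd_ceiling and cv_threshold are illustrative guesses, not tuned
    values from real submissions.
    """
    if len(speeds_mb_s) < 3:
        return False  # too few samples to judge
    if max(speeds_mb_s) > hdd_ceiling:
        return True  # faster than a platter can physically deliver
    # Coefficient of variation: a smooth outer-to-inner decline stays
    # low; a "sine" curve alternating between real reads and
    # controller-returned zeros swings much harder.
    cv = stdev(speeds_mb_s) / mean(speeds_mb_s)
    return cv > cv_threshold
```

A flagged benchmark could then prompt the user to fully write the disk (e.g. one preclear write pass, as suggested above) before re-testing.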
DataCollector Posted June 9, 2022 (edited) Hello. I am new to DiskSpeed and by far do not know everything this thing can do. I installed the DiskSpeed Docker on both of my systems. On the 1st system it looks good. Just one small thing: I miss an indicator showing which disks have already been benchmarked and have a graph and which do not. With more than 20 disks it is a hassle to keep track when using this tool for the first time. 😁 But that is just a luxury problem.

On the 2nd system (see my signature) I started it once, and it crashed. Then I stopped the Docker and tried to start it again, and now I always get a server error. I should mention that 3 unassigned devices are currently in the process of preclearing. Is the DiskSpeed Docker vulnerable when preclear is running on a disk?

Edit: After preclear finished on all 3 disks, the DiskSpeed Docker starts and works without rebooting the Unraid system. It looks like DiskSpeed really does not like it when some disks are in use (preclear). Edited June 9, 2022 by DataCollector edit after preclear is done
jbartlett Posted June 9, 2022 Author 6 hours ago, DataCollector said: On the 2nd system (see my signature) I started it once, and it crashed. Then I stopped the Docker and tried to start it again, and now I always get a server error.

There's not much I can do if the Docker app won't even start while a preclear is underway. The Docker is just an application server; my program hasn't even kicked off until you open the GUI. I've no idea what would cause that, but you might get a hint if you monitor the syslog while you start it.

As for knowing which drives have been benchmarked, you can benchmark all drives at the same time from the main screen. Click the button "Benchmark Drives". Above it will be a benchmark graph if any have been done yet, and you can see by which ones aren't included which still need to be done. You can optionally benchmark just those drives by unchecking the drives that have already been done on the following screen.