ptr727

Members
  • Content Count: 114
  • Joined
  • Last visited

Community Reputation: 9 Neutral

About ptr727
  • Rank: Advanced Member


  1. I have no problem using the spin up or spin down buttons, so whatever Unraid is doing does work.
  2. New user, installed on two similar systems; the only difference is the number of drives. The default port does not work for the host: I have to change the host port from 18888 to 8888, otherwise I keep getting connection refused. The first server runs tests with no problem. The second server crashes in what appears to be a timeout while waiting for the 20 spinning drives to spin up (a quick way to time the hwinfo scan itself is sketched after this list):

```
DiskSpeed - Disk Diagnostics & Reporting tool
Version: 2.4

Scanning Hardware
08:25:25 Spinning up hard drives
08:25:25 Scanning system storage

Lucee 5.2.9.31 Error (application)
Message: timeout [90000 ms] expired while executing [/usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi]

Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 243
241: <CFOUTPUT>#TS()# Scanning system storage<br></CFOUTPUT><CFFLUSH>
242: <CFFILE action="write" file="#PersistDir#/hwinfo_storage_exec.txt" output=" /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi" addnewline="NO" mode="666">
243: <cfexecute name="/usr/sbin/hwinfo" arguments="--pci --bridge --storage-ctrl --disk --ide --scsi" variable="storage" timeout="90" /><!--- --usb-ctrl --usb --hub --->
244: <CFFILE action="delete" file="#PersistDir#/hwinfo_storage_exec.txt">
245: <CFFILE action="write" file="#PersistDir#/hwinfo_storage.txt" output="#storage#" addnewline="NO" mode="666">

called from /var/www/ScanControllers.cfm: line 242
240:
241: <CFOUTPUT>#TS()# Scanning system storage<br></CFOUTPUT><CFFLUSH>
242: <CFFILE action="write" file="#PersistDir#/hwinfo_storage_exec.txt" output=" /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi" addnewline="NO" mode="666">
243: <cfexecute name="/usr/sbin/hwinfo" arguments="--pci --bridge --storage-ctrl --disk --ide --scsi" variable="storage" timeout="90" /><!--- --usb-ctrl --usb --hub --->
244: <CFFILE action="delete" file="#PersistDir#/hwinfo_storage_exec.txt">

Java Stacktrace: lucee.runtime.exp.ApplicationException: timeout [90000 ms] expired while executing [/usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi]
  at lucee.runtime.tag.Execute._execute(Execute.java:241)
  at lucee.runtime.tag.Execute.doEndTag(Execute.java:252)
  at scancontrollers_cfm$cf.call_000006(/ScanControllers.cfm:243)
  at scancontrollers_cfm$cf.call(/ScanControllers.cfm:242)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
  at lucee.runtime.engine.Request.exe(Request.java:44)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
  at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:748)

Timestamp: 2/20/20 8:29:32 AM PST
```
  3. Still happens on my 6.8.2; I had to disable WSD.
  4. I tried with DirectIO yes, and with DirectIO yes plus case insensitive yes, and there is no difference (see attached results). Given that a disk share over SMB showed good performance, I am sceptical that it is an SMB issue; my money is on a performance problem in the shfs write path. DiskSpeedResult_Ubuntu_Cache.xlsx
  5. I have spent a significant amount of time and effort chasing the SMB performance problem (I immediately noticed the slowdown when I switched from W2K16 to Unraid), so I do think my side of the street has been well worn. I referenced the tool I wrote to automate the tests in the last three of my blog posts, where I detail my troubleshooting, every one of which is posted in this thread. For completeness, here it is again: https://github.com/ptr727/DiskSpeedTest
  6. Ok, but why would an SMB option make a difference if it looks like a "shfs" write problem, i.e. SMB over a disk share performed well, SMB over a user share performed badly, and read performance was always good? I'll give it a try (case sensitive SMB will break Windows), but I won't be able to test until next week. I believe it should be easy to reproduce the results using the tool I've written, so I would suggest you profile the code yourself rather than wait for my feedback on the experiments.
  7. The same is happening to me, running 6.8.2.
  8. Thank you for the info. Would it then be accurate to say the read/write and write performance problems shown in the ongoing SMB test results are caused by shfs? Can you comment on why write performance is so massively impacted compared to read, especially since the target is the cache and needs no parity computation on write, i.e. it can read through and write through?
  9. Some more googling, and I now assume that when you say shfs you are referring to Unraid's FUSE filesystem, which happens to share a name with the better-known shfs, https://wiki.archlinux.org/index.php/Shfs. A few questions and comments (a local test that isolates the FUSE path is sketched after this list):
     - Is Unraid's FUSE filesystem proprietary, or open source, or GPL such that we can request the source?
     - For operations hitting just the cache, with no parity and no spanning, why the big disparity between read and write for what should be a no-op (pass-through)?
     - Logically, cache-only shares should bypass FUSE and go direct to disk, avoiding the performance problem.
     - All appdata usage on a cache-only share will suffer from the same IO write performance problem observed via SMB, unless users explicitly change container appdata mappings from /mnt/user/appdata to /mnt/cache/appdata.
  10. So, you are absolutely right, a "disk" share's performance is on par with that of Ubuntu. Can you tell me more about "shfs"? As far as I can google, shfs was abandoned in 2004 and replaced by SSHFS, but I don't understand why a remote SSH filesystem would be used, or are we talking about vanilla libfuse as integrated into the kernel? DiskSpeedResult_Ubuntu_Cache.xlsx
  11. Testing now, about an hour left to go. Did you try to reproduce the results I see? The instructions should be clear: https://github.com/ptr727/DiskSpeedTest
  12. See: https://github.com/ptr727/DiskSpeedTest and https://github.com/Microsoft/diskspd/wiki/Command-line-and-parameters. -Srw means disable local caching and enable remote write-through (try to disable remote caching); an illustrative invocation follows after this list. What I found is that Unraid SMB is much worse at mixed read/write and at write compared to Ubuntu on the exact same hardware, where the expectation is a similar performance profile. Are you speculating that the problem is caused by FUSE?
  13. I am more convinced than ever that this is an Unraid problem. I've done several rounds of tests; in my latest I ran Ubuntu Server on exactly the same hardware as Unraid, and the performance is significantly better than Unraid's. See: https://blog.insanegenius.com/2020/02/02/unraid-vs-ubuntu-bare-metal-smb-performance/
  14. You are welcome to run a test on your own setup for comparison; I describe my test method. By my testing, the Unraid numbers really are bad; I attached my latest set of data. DiskSpeedResult_Ubuntu_Cache.xlsx Btw, 500 MBps is nearly 4 Gbps, are you running 10 Gbps Ethernet?
  15. I have now tested Unraid vs. a W2K19 VM, an Ubuntu VM, and now Ubuntu bare metal on the same hardware. There is no reason why Unraid should be slower on the cache drive, but the ReadWrite and Write performance is abysmal. https://blog.insanegenius.com/2020/02/02/unraid-vs-ubuntu-bare-metal-smb-performance/
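
Regarding the timeout in post 2: the stack trace shows cfexecute giving /usr/sbin/hwinfo 90 seconds, which may not be enough while 20 drives spin up. A minimal sketch, run from the Unraid console, that times the exact scan command quoted in the error; the command is taken from the trace, the redirect target is just an assumption to keep the console quiet.

```
# Time the exact hwinfo invocation from the stack trace, after the drives
# have spun up, to see whether it completes inside the 90 s cfexecute timeout.
time /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi > /tmp/hwinfo_storage.txt
```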
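
Regarding post 9: writing the same file locally through /mnt/user (the shfs/FUSE layer) and again through /mnt/cache takes SMB out of the picture and isolates the FUSE path. A hedged sketch only; the share name "appdata", the file size, and the block size are illustrative assumptions, not values from the posts.

```
# Write through the user-share (shfs/FUSE) path
dd if=/dev/zero of=/mnt/user/appdata/fuse_test.bin bs=1M count=4096 oflag=direct

# Write the same amount directly to the cache path, bypassing shfs
dd if=/dev/zero of=/mnt/cache/appdata/direct_test.bin bs=1M count=4096 oflag=direct

# Clean up the test files
rm -f /mnt/user/appdata/fuse_test.bin /mnt/cache/appdata/direct_test.bin
```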
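
Regarding post 12: an illustrative DiskSpd invocation against an SMB share using the -Srw caching flags discussed there, run from a Windows PowerShell prompt on the client. This is a sketch, not the parameter set used by DiskSpeedTest; the share path, file size, duration, thread count, queue depth, block size, and write mix are all assumptions.

```
# Hypothetical mixed test (50% writes, -w50) over SMB with local caching
# disabled and remote write-through requested (-Srw); "tower" and the share
# name are placeholder values.
.\diskspd.exe -c16G -b512K -d60 -t4 -o8 -w50 -Srw \\tower\cache-share\testfile.dat
```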