emb531

Members
  • Posts: 30
  • Joined
  • Last visited

emb531's Achievements

Noob (1/14)

Reputation: 8

  1. Thanks, that worked! Can that be removed from the OS so I don't have to do it after every reboot?
  2. This was broken in all of the RC releases as well. Assuming it has something to do with the PHP 8 changes. Has anyone else noticed that htop is no longer configured correctly on boot (no longer sorted by highest CPU with tree view selected)? It is sorted by PID now, which is pretty useless (see the htop configuration sketch after this list).
  3. I can do this, but why is it happening in the first place? Did something change with Avahi so that it restarts every hour now?
  4. Once an hour, I see this in my logs (a sketch for tracking down what triggers the restart follows this list):
     May 4 10:24:02 Saturn avahi-daemon[3511]: Got SIGTERM, quitting.
     May 4 10:24:02 Saturn avahi-daemon[3511]: Leaving mDNS multicast group on interface br0.IPv4 with address 10.0.0.10.
     May 4 10:24:02 Saturn avahi-dnsconfd[3520]: read(): EOF
     May 4 10:24:02 Saturn avahi-daemon[3511]: avahi-daemon 0.8 exiting.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
     May 4 10:24:02 Saturn avahi-daemon[5737]: Successfully dropped root privileges.
     May 4 10:24:02 Saturn avahi-daemon[5737]: avahi-daemon 0.8 starting up.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Successfully called chroot().
     May 4 10:24:02 Saturn avahi-daemon[5737]: Successfully dropped remaining capabilities.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Loading service file /services/sftp-ssh.service.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Loading service file /services/smb.service.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Loading service file /services/ssh.service.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Joining mDNS multicast group on interface br0.IPv4 with address 10.0.0.10.
     May 4 10:24:02 Saturn avahi-daemon[5737]: New relevant interface br0.IPv4 for mDNS.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Network interface enumeration completed.
     May 4 10:24:02 Saturn avahi-daemon[5737]: Registering new address record for 10.0.0.10 on br0.IPv4.
     May 4 10:24:03 Saturn avahi-daemon[5737]: Server startup complete. Host name is Saturn.local. Local service cookie is 2906881448.
     May 4 10:24:03 Saturn avahi-daemon[5737]: Service "Saturn" (/services/ssh.service) successfully established.
     May 4 10:24:03 Saturn avahi-daemon[5737]: Service "Saturn" (/services/smb.service) successfully established.
     May 4 10:24:03 Saturn avahi-daemon[5737]: Service "Saturn" (/services/sftp-ssh.service) successfully established.
     Diagnostics attached: saturn-diagnostics-20230504-1028.zip
  5. Is there a way to move the array utilization indicator back to the other side next to Tools?
  6. Appears to be working now, thanks for the quick fix! Any chance of being able to save the graphs/data between reboots? Maybe by writing the data to a share or disk? (A rough sketch of that idea follows this list.)
  7. System Stats still not displaying historical data after update, rebooted as well.
  8. Same issue here as well, also on 6.11.5.
  9. Same issue here as well - reinstalled plugin and rebooted but historical views not working.
  10. Since the update on 1/14, Recycle Bin is not working correctly: files are being deleted immediately instead of being sent to the Recycle Bin. I can see the parent folder in the Recycle Bin (Movies), but the actual folder/file that was deleted is not present. Logging is working correctly and shows the unlinkat commands. I have restarted the plugin, but the issue remains. (A quick check of the share's recycle settings is sketched after this list.)
  11. Updated to 2.10.2.1 - same issue. The web interface is really slow to switch between disks now. I emailed a debug file. Running the command above in the container, I don't even see any of my data disks. unRAID is running bare metal, no special hardware or config really. (A host-vs-container device check is sketched after this list.)
     # df -B 1K
     Filesystem      1K-blocks      Used      Available Use% Mounted on
     overlay          20905984   8889868      12016116  43% /
     tmpfs               65536         0         65536   0% /dev
     tmpfs            16354908         0      16354908   0% /sys/fs/cgroup
     shm                 65536         0         65536   0% /dev/shm
     /dev/nvme0n1p1  976284628 117085344     859199284  12% /tmp/DiskSpeed
     /dev/loop2       20905984   8889868      12016116  43% /etc/hosts
     rootfs           16272104   1045784      15226320   7% /var/local/emhttp
  12. Same issue with the NVME and SSD drives; they are not part of any RAID or BTRFS, just individual XFS-formatted drives. I can't even create a debug file now either (a quick check of the debug directory is sketched after this list):
     Please wait, creating debug file... Scanning Hardware...
     Lucee 5.3.10.97 Error (application)
     Message: Error invoking external process
     Detail: /bin/chmod: cannot access '/tmp/DiskSpeed/Instances/local/debug/ScanControllers.html': No such file or directory
     Stacktrace: The Error Occurred in /var/www/isolated/CreateDebugInfo.cfm: line 97
     95: <CFSET FetchURL=ListFirst(CGI.Request_URL,"/") & "//" & ListGetAt(CGI.Request_URL,2,"/") & "/ScanControllers.cfm?Debug=Export">
     96: <CFHTTP method="GET" URL="#FetchURL#" throwOnError="no" redirect="no" path="#PersistDir#/debug/ScanControllers.html"></CFHTTP>
     97: <cfexecute name="/bin/chmod" arguments="666 #PersistDir#/debug/ScanControllers.html" timeout="10" />
     98:
     99: <CFSET InstanceDir="">
     Java Stacktrace:
     lucee.runtime.exp.ApplicationException: Error invoking external process
       at lucee.runtime.tag.Execute.doEndTag(Execute.java:266)
       at isolated.createdebuginfo_cfm$cf.call(/isolated/CreateDebugInfo.cfm:97)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
       at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:65)
       at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
       at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2493)
       at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2478)
       at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2449)
       at lucee.runtime.engine.Request.exe(Request.java:45)
       at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1216)
       at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1162)
       at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
       at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
       at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
       at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
       at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
       at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
       at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
       at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
       at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
       at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
       at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:769)
       at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
       at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
       at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
       at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
       at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890)
       at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1789)
       at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
       at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
       at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
       at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
       at java.base/java.lang.Thread.run(Thread.java:829)
     Timestamp: 12/27/22 12:24:49 PM EST
  13. Hello, I am unable to benchmark my NVME drives with the latest update:
     Unable to benchmark for the following reason
     * No mounted partitions were found. You will need to restart the DiskSpeed docker after making changes to mounted drives for changes to take effect.
     I have restarted the container with the same result. If I use the Rescan Controllers option, I see these errors in my syslog (a hedged NVMe power-state check is sketched after this list):
     Dec 16 15:09:41 Saturn kernel: nvme0: Admin Cmd(0x7f), I/O Error (sct 0x0 / sc 0x1)
     Dec 16 15:09:41 Saturn kernel: nvme1: Admin Cmd(0x7f), I/O Error (sct 0x0 / sc 0x1)
     Dec 16 15:10:12 Saturn kernel: nvme nvme1: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
     Dec 16 15:10:12 Saturn kernel: nvme1: Admin Cmd(0x6), I/O Error (sct 0x3 / sc 0x71)
     Dec 16 15:10:12 Saturn kernel: nvme nvme1: Shutdown timeout set to 10 seconds
     Dec 16 15:10:12 Saturn kernel: nvme nvme1: 16/0/0 default/read/poll queues
     I emailed you a debug file at [email protected] - thanks!
  14. I re-enabled the logging and saw entries in syslog; I ran the command to restart syslog, but the messages are still continuing. Thanks for your quick responses!
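
For the htop default-layout issue in post 2, a minimal sketch of one way to keep a preferred layout across reboots, assuming a stock Unraid flash drive; the /boot/config/custom folder is an invented location, and the htoprc path can differ between htop versions (older builds use /root/.htoprc).

    # Hedged sketch: set up htop interactively once (F5 tree view, F6 to sort by CPU%),
    # quit with F10 so the settings are written out, then keep a copy on the flash drive.
    mkdir -p /boot/config/custom
    cp /root/.config/htop/htoprc /boot/config/custom/htoprc

    # In /boot/config/go (runs at every boot), restore the saved copy;
    # /boot/config/custom is an assumed location, not an Unraid convention:
    #   mkdir -p /root/.config/htop
    #   cp /boot/config/custom/htoprc /root/.config/htop/htoprc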
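
For the hourly Avahi restart in posts 3 and 4, a hedged way to look for whatever is sending the SIGTERM; the cron directories searched are assumptions about a typical Slackware-style layout, and the real trigger may just as well be a plugin or a network-settings rewrite.

    # Look for hourly cron entries or scripts that mention avahi
    grep -Ri "avahi" /etc/cron.hourly /etc/cron.d /var/spool/cron 2>/dev/null

    # See what was logged immediately before each restart; the neighbouring
    # lines often name the script or service that triggered it.
    grep -B5 "Got SIGTERM, quitting" /var/log/syslog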
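
For the request in post 6 to keep System Stats history across reboots, a rough sketch of the idea from the post itself (copy the in-RAM data to persistent storage at shutdown and restore it at boot). Both paths below are placeholders, not the plugin's documented locations; where the plugin actually stores its samples would need to be confirmed first.

    # Hypothetical paths - adjust to wherever the plugin really keeps its data.
    STATS_DIR=/var/local/emhttp/stats            # assumption, not a documented path
    BACKUP_DIR=/mnt/user/appdata/system-stats-backup

    # At shutdown (or on a schedule), copy the in-RAM stats somewhere persistent:
    mkdir -p "$BACKUP_DIR"
    rsync -a "$STATS_DIR"/ "$BACKUP_DIR"/

    # At boot (e.g. from /boot/config/go), restore them before the plugin starts:
    #   rsync -a "$BACKUP_DIR"/ "$STATS_DIR"/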
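
For the Recycle Bin behaviour in post 10, a hedged check that the affected share still carries Samba's recycle VFS module, since deletes on a share without it bypass the bin entirely. The option values shown are generic vfs_recycle settings, not necessarily what the plugin writes.

    # Dump the effective Samba config and look at the affected share's VFS setup
    testparm -s 2>/dev/null | grep -iA4 "vfs objects"

    # A share covered by the recycle bin typically carries options along these lines
    # (illustrative values only):
    #   vfs objects = recycle
    #   recycle:repository = .Recycle.Bin
    #   recycle:keeptree = Yes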
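
For post 11, where df inside the container shows none of the data disks, a hedged comparison of what the host and the container can each see; the container name DiskSpeed is an assumption and should be replaced with whatever the Docker tab shows.

    # On the Unraid host: list block devices and their mount points
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

    # Inside the container (container name "DiskSpeed" is an assumption):
    docker exec DiskSpeed sh -c 'ls -l /dev/nvme* /dev/sd* 2>/dev/null; df -B 1K'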
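
For the debug-file error in post 12, the trace shows /bin/chmod failing because /tmp/DiskSpeed/Instances/local/debug/ScanControllers.html was never created, so a first check is whether that directory exists and is writable from inside the container (container name again assumed):

    # Confirm the persist/debug path exists and is writable inside the container
    docker exec DiskSpeed sh -c 'ls -ld /tmp/DiskSpeed /tmp/DiskSpeed/Instances/local/debug 2>&1'
    docker exec DiskSpeed sh -c 'touch /tmp/DiskSpeed/Instances/local/debug/.write-test && echo writable'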
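
For the NVMe errors in post 13, the Admin Cmd(0x7f) lines look like an unsupported admin command being rejected during the scan, but "controller is down; will reset" is a more general NVMe symptom. One commonly tried mitigation, offered here only as an assumption about the cause, is capping NVMe power-state transitions:

    # Current NVMe power-state latency limit; setting it to 0 at boot disables
    # APST transitions (this sysfs path is the standard nvme_core parameter).
    cat /sys/module/nvme_core/parameters/default_ps_max_latency_us

    # Assumed workaround: add nvme_core.default_ps_max_latency_us=0 to the
    # "append" line in /boot/syslinux/syslinux.cfg, reboot, then watch for resets:
    grep -i "controller is down" /var/log/syslog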