AustinTylerDean

Everything posted by AustinTylerDean

  1. I'm having an issue where, when the VPN docker updates itself, it shuts down the traffic that's supposed to route through it. I find myself trying to access those dockers via their URLs and not getting a connection. Then I log into Unraid, check on my dockers, use curl to see if the VPN IP pops up, and then everything seems to work again. Is there a way I can get all the dockers that are routed through the main VPN to refresh themselves if the connection has gone down? In all of this, I am ASSUMING they all went down because my Deluge-VPN docker updated itself.
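     One approach that might help, sketched below, is a small watchdog run on a schedule (for example via the User Scripts plugin): if the VPN container has been (re)started more recently than a container that routes through it, restart that container so it reattaches to the VPN container's network. This is only a rough sketch; the container names here (binhex-delugevpn, sonarr, nzbget) are placeholders, not the actual setup.

        #!/bin/bash
        # Hypothetical watchdog for containers that use --net=container:<vpn>.
        VPN="binhex-delugevpn"        # assumed name of the VPN container
        ROUTED="sonarr nzbget"        # assumed names of containers routed through it

        vpn_started=$(docker inspect -f '{{.State.StartedAt}}' "$VPN")

        for c in $ROUTED; do
            c_started=$(docker inspect -f '{{.State.StartedAt}}' "$c")
            # StartedAt is an RFC3339 UTC timestamp, so a plain string
            # comparison orders the two start times correctly.
            if [[ "$c_started" < "$vpn_started" ]]; then
                echo "$VPN was restarted after $c; restarting $c"
                docker restart "$c"
            fi
        done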
  2. Trurl, we seem to be completely out of the woods! Let me know if I can help you in any way! Patreon account or what have you!! You're the best!
  3. Thank you very much for your help, again! I'll be sure to post results tomorrow! I'm cautiously optimistic, trurl, thank you!
  4. I definitely checked the connections. I'm using a server-grade disk controller with two 4-port fanout cables. I saw that the cable near the connection to the parity drive was maybe a little banged up, so I switched it to a fresh unused one. That may be the ticket for the parity UDMA errors. I can't say anything is irreplaceable, as it's all movie and TV related, but I don't have another 4TB disk lying around... Do you think there's a sizeable chance that I will lose data? I was hoping this chance was low considering the selling point of Unraid in the first place, however I'm the idiot here.
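     For what it's worth, UDMA CRC errors generally point to the cable or connection rather than the disk surface, and the raw count is cumulative, so it won't drop back to zero after a cable swap; the thing to watch is whether it keeps climbing. A quick way to spot-check it from the terminal (the device name /dev/sdX below is just a placeholder for the parity drive):

        # Show SMART attributes and pull out the CRC counter (attribute 199).
        smartctl -A /dev/sdX | grep -i crc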
  5. Alright! So before this last check and reboot, the message stating the parity was invalid was the floating status box, which came up in red. NOW I get a much better status box that states the "array turned good". This is much more pleasant. My fingers are crossed that we are good to this point! Trurl, I can't thank you enough for your time and expertise! tower-diagnostics-20210107-1945.zip
  6. I apologize for the lackluster details on my part. I distinctly remember seeing "Parity is invalid" under Array Operations of the Main page, however now I don't see this information anymore. I suppose I can now move to the disk 2 problem?
  7. The short SMART report is attached for the parity drive. By the way, it says my parity is bad now. EDITED : I swear I just saw something say my parity was invalid, but now I don't see mention of it anywhere. Sorry for my terrible Unraid skills. tower-smart-20210107-1909.zip
  8. I started the array! It doesn't look like any of my shares were added. Disk 3 looks normal. Disk 2 is obviously still disabled. AND I also see that I have a SMART error on my parity disk now. Both screenshots attached. What is my next step? Going after disk 2? I feel like I am getting a bunch of these UDMA CRC error counts every once in a while.
  9. I don't see any shares present. Would they only show up if I had the array started?
  10. I ran it both with -v /dev/md3 in terminal, and in the GUI as just -v. I didn't see any difference.
  11. I've run the -nv command of xfs_repair on disk 3 and received the following. I can't tell if it fixed anything. When I stop the array's maintenance mode, the disk looks ready to be included in the array on startup; however, since I have disk 2 disabled, I am worried that I will mess something up by trying to put the array online without disk 2 fixed first.

        Phase 1 - find and verify superblock...
                - block cache size set to 734312 entries
        Phase 2 - using internal log
                - zero log...
        zero_log: head block 521875 tail block 521875
                - scan filesystem freespace and inode maps...
                - found root inode chunk
        Phase 3 - for each AG...
                - scan (but don't clear) agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0
                - agno = 2
                - agno = 3
                - agno = 1
                - agno = 4
                - agno = 5
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
                - traversing filesystem ...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - traversal finished ...
                - moving disconnected inodes to lost+found ...
        Phase 7 - verify link counts...
        No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Jan 7 16:37:55 2021

        Phase       Start           End             Duration
        Phase 1:    01/07 16:37:51  01/07 16:37:51
        Phase 2:    01/07 16:37:51  01/07 16:37:51
        Phase 3:    01/07 16:37:51  01/07 16:37:54  3 seconds
        Phase 4:    01/07 16:37:54  01/07 16:37:54
        Phase 5:    Skipped
        Phase 6:    01/07 16:37:54  01/07 16:37:55  1 second
        Phase 7:    01/07 16:37:55  01/07 16:37:55

        Total run time: 4 seconds
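     Worth noting about that run: the -n in -nv is xfs_repair's "no modify" flag, which is why the output above says "No modify flag set, skipping phase 5" — it only reports problems and never writes any fixes. To actually repair the filesystem, the usual approach on Unraid is to run the same command without -n against the md device while the array is in maintenance mode, so parity stays in sync; the device number below is just whichever one matches the disk being repaired.

        # Dry run: report only, change nothing (what -nv does).
        xfs_repair -nv /dev/md3

        # Actual repair: same command without -n, with the array in maintenance mode.
        xfs_repair -v /dev/md3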
  12. Also adding the two SMART reports tower-smart-20210107-1553.zip tower-smart-20210107-1523.zip
  13. ChatNoir, I have attached the diagnostics file. To note, disk 3 did finish its SMART test with no problems. Disk 2 is currently sitting at 90% complete, and I will post the results of that as soon as it happens. I appreciate you replying. tower-diagnostics-20210107-1526.zip
  14. Hello All, (edited: diagnostics have been posted below if anyone can take a crack at this for me.) I have two problems, as listed in the title. For some reason I can't get the public image uploaders to work for my screenshot to show you, so I'll just say Disk 2 is disabled, and disk 3 is the one showing "Unmountable: no file system". I don't really know when this all started happening. I am not a Linux guy; just getting this box to work with Sonarr/NZBGet/Plex and all that comes with operating that, via the helpful guides of SPACEINVADERONE, was easily over 40 hours of work for me. What are my first steps toward correcting this problem? I stopped the array, started it in maintenance mode, and have clicked on the extended SMART test for disk 3 so far. However, it's comfortably sitting at 10% and I don't see this being done tonight or maybe even tomorrow. I've searched a bit about these issues, but the answers seem very specific to the problems each person is having. I'll be happy to do all the legwork that is asked of me if I can get some help here. I really appreciate anyone's time in assisting me, and look forward to learning some new things from these problems. Thanks again, Austin
  15. Did not work with Unraid 6.5. Error posted below...

        Lucee 5.2.7.62 Error (expression)
        Message: Array index [3] out of range, array size is [2]

        The Error Occurred in /var/www/Spinup.cfm: line 100

        98:  <CFSET BlockSize=512>
        99:  <CFELSE>
        100: <CFSET BlockCount=HW[RefKey].Ports[RefPortNo].Attrib.Configuration.BlockCount>
        101: <CFSET BlockSize=HW[RefKey].Ports[RefPortNo].Attrib.Configuration.LogicalSectorSize>
        102: </CFIF>

        called from /var/www/Benchmark.cfm: line 316

        314:
        315: <CFSET WakeupDrives="">
        316: <CFINCLUDE template="Spinup.cfm">
        317:
        318: <CFSET Test=StructNew()>

        Java Stacktrace:
        lucee.runtime.exp.ExpressionException: Array index [3] out of range, array size is [2]
            at lucee.runtime.type.wrap.ListAsArray.getE(ListAsArray.java:111)
            at lucee.runtime.type.wrap.ListAsArray.get(ListAsArray.java:284)
            at lucee.runtime.type.wrap.ListAsArray.get(ListAsArray.java:289)
            at lucee.runtime.type.util.ArraySupport.get(ArraySupport.java:326)
            at lucee.runtime.util.VariableUtilImpl.get(VariableUtilImpl.java:263)
            at lucee.runtime.util.VariableUtilImpl.getCollection(VariableUtilImpl.java:257)
            at lucee.runtime.PageContextImpl.getCollection(PageContextImpl.java:1496)
            at spinup_cfm$cf.call(/Spinup.cfm:100)
            at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
            at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
            at lucee.runtime.PageContextImpl.doInclude(PageContextImpl.java:805)
            at benchmark_cfm$cf.call(/Benchmark.cfm:316)
            at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
            at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
            at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:64)
            at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
            at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
            at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
            at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
            at lucee.runtime.engine.Request.exe(Request.java:44)
            at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1091)
            at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1039)
            at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
            at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
            at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
            at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
            at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:676)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
            at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1132)
            at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
            at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2527)
            at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2516)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
            at java.lang.Thread.run(Thread.java:748)

        Timestamp: 7/25/18 2:53:42 AM PDT