idean
Members · 8 posts
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.8
idean replied to jbartlett's topic in Docker Containers
Force update seems to have cleared it up. I guess I should have tried that first.
idean started following DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.8
-
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.8
idean replied to jbartlett's topic in Docker Containers
I'm running the April 14 update and I get the following on startup:

Lucee 5.3.12.1 Error (expression)
Message: key [CONFIG] doesn't exist

The error occurred in /var/www/DispBenchmarkGraphs.cfm: line 62
60: <CFSET Key=Ref.DriveID[tmpDriveID].Key>
61: <CFSET PortNo=Ref.DriveID[tmpDriveID].PortNo>
62: <CFSET BenchmarksDir="#PersistDir#/driveinfo/#HW[Key].Ports[PortNo].Config.SaveDir#/benchmark">
63: <CFSET MaxBenchDate=CreateDate(1970,1,1)>
64: <CFSET UseBenchDir="">

called from /var/www/DispOverview.cfm: line 76
74: </CFOUTPUT>
75:
76: <CFINCLUDE TEMPLATE="DispBenchmarkGraphs.cfm">
77:
78: <CFOUTPUT>

Java Stacktrace:
lucee.runtime.exp.ExpressionException: key [CONFIG] doesn't exist
  at lucee.runtime.type.util.StructSupport.invalidKey(StructSupport.java:67)
  at lucee.runtime.type.StructImpl.get(StructImpl.java:149)
  at lucee.runtime.util.VariableUtilImpl.get(VariableUtilImpl.java:278)
  at lucee.runtime.util.VariableUtilImpl.getCollection(VariableUtilImpl.java:272)
  at lucee.runtime.PageContextImpl.getCollection(PageContextImpl.java:1547)
  at dispbenchmarkgraphs_cfm$cf.call(/DispBenchmarkGraphs.cfm:62)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
  at lucee.runtime.PageContextImpl.doInclude(PageContextImpl.java:929)
  at dispoverview_cfm$cf.call(/DispOverview.cfm:76)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:1056)
  at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:948)
  at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:65)
  at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
  at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2493)
  at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2478)
  at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2449)
  at lucee.runtime.engine.Request.exe(Request.java:45)
  at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1216)
  at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1162)
  at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:97)
  at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:623)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:209)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:481)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:130)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:673)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:768)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
  at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:390)
  at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
  at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:926)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1791)
  at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
  at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
  at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.base/java.lang.Thread.run(Unknown Source)
idean started following 6.12.9 Kernel does not recognize SATA ports on port multipliers.
-
Existing drive became unmountable when I replaced another
idean replied to idean's topic in General Support
All good now after a reboot... the "Unmountable" drive mounted right up! The drive I replaced is rebuilding, as expected. And I found an email from unRAID saying that the preclear on it completed last week, so the whole disk swap thing was a false alarm. Phew!

It was just scary that the disk logically next to the one I replaced was now unmountable, and that it happened while I was replacing a drive. Was it due to the encryption key being missing from /root because of some sort of race condition? I use the webGui "events":

# auto unlock array
install -D /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/starting/fetch_key
install -D /boot/custom/bin/delete_key /usr/local/emhttp/webGui/event/started/delete_key
install -D /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/stopped/fetch_key

My biggest lesson learned: lots of screenshots and notes before and after system changes on unRAID. My memory for serial numbers is totally unreliable.
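The actual fetch_key/delete_key scripts aren't shown above; a minimal sketch of what such event scripts might look like (the keyfile path follows the common unRAID convention of /root/keyfile, and KEYSRC is an assumed location, not the poster's actual one):

```shell
#!/bin/sh
# Hypothetical sketch of the two event scripts referenced above.
# KEYFILE: where unRAID looks for the LUKS keyfile when starting an
# encrypted array. KEYSRC: assumed location of the real key (illustrative).
KEYFILE=${KEYFILE:-/root/keyfile}
KEYSRC=${KEYSRC:-/boot/custom/secret/keyfile}

# "starting"/"stopped" hook: put the key in place before the array comes up.
fetch_key() {
    # install -D copies the file and creates parent directories in one step
    install -D -m 0600 "$KEYSRC" "$KEYFILE"
}

# "started" hook: remove the key once the array is unlocked.
delete_key() {
    rm -f "$KEYFILE"
}
```

Hooking fetch_key to both "starting" and "stopped" (as in the install commands above) keeps the key available for the next start while the array is down, with delete_key clearing it once the array is up.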
Existing drive became unmountable when I replaced another
idean replied to idean's topic in General Support
Ah, it only asks you to reformat when adding a drive, then? Not when replacing? Good to know (if I got that right)!
Background: all the drives have been plugged in for days. The old Disk 1 (6TB) was replaced in the interface with a newish pre-cleared 14TB drive. To do this, I stopped the array, selected the 14TB drive in "Disk 1", and re-started the array.

When it started, it began rebuilding the array... sorta. Instead of putting the new drive in its slot, it seems to have put it in Drive 2 and moved the old Drive 2 to Drive 1?? I'm not 100% sure about that; my short-term memory for serial numbers is not what it was. I just know that it is now rebuilding Disk 1 (like it should), but Disk 2 is "Unmountable", which is what I expected Disk 1 to be, since I haven't yet formatted it.

What should I do? Wait for this to rebuild before I format Disk 2? (Good thing I have dual parity??)

Lesson learned: screenshots before and after any change, so that I know exactly which serial numbers I'm dealing with and don't have to rely on my short-term memory for more than 5 minutes.

unraid-diagnostics-20240325-0938.zip
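On the serial-number lesson: besides screenshots, a text snapshot works too. A sketch using util-linux lsblk (the output path is illustrative, not from the post):

```shell
# Capture the current disk-to-serial mapping before touching the array.
# -d lists whole disks only (no partitions); -o selects the columns.
snapshot_drives() {
    lsblk -d -o NAME,SIZE,MODEL,SERIAL
}

# e.g. save a dated copy to the flash drive before a swap:
# snapshot_drives > /boot/drive-map-$(date +%Y%m%d).txt
```

Comparing two such snapshots after a swap makes it unambiguous which serial ended up in which slot.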
-
Looks like I made it do it again... Next time (I hope there's not one...) I'll also have syslog data. unraid-diagnostics-20240219-1515.zip
-
unraid-diagnostics-20240219-1246.zip
-
No docker, but I require NFS. I was cleaning up some stuff using ncdu, deleted a folder, boom. I'm currently in the trial mode, and now I'm rethinking this whole thing. Maybe I should stay with mergerfs+snapraid on Ubuntu...

[Mon Feb 19 12:04:58 2024] ------------[ cut here ]------------
[Mon Feb 19 12:04:58 2024] nfsd: non-standard errno: -103
[Mon Feb 19 12:04:58 2024] WARNING: CPU: 1 PID: 24641 at fs/nfsd/nfsproc.c:909 nfserrno+0x45/0x51 [nfsd]
[Mon Feb 19 12:04:58 2024] Modules linked in: nfsv3 nfsv4 dns_resolver nfs ipmi_devintf rpcsec_gss_krb5 md_mod xfs dm_crypt dm_mod nfsd auth_rpcgss oid_registry lockd grace sunrpc zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) xt_MASQUERADE xt_tcpudp xt_mark iptable_nat ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet 8021q garp mrp bridge stp llc bonding tls ixgbe xfrm_algo mdio intel_rapl_msr intel_rapl_common iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm ast drm_vram_helper i2c_algo_bit drm_ttm_helper ttm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel drm sha512_ssse3 sha256_ssse3 sha1_ssse3 aesni_intel crypto_simd ipmi_ssif cryptd backlight rapl intel_cstate i2c_i801 agpgart qat_c3xxx i2c_smbus intel_qat i2c_core acpi_ipmi dh_generic rsa_generic syscopyarea input_leds ahci mpi sysfillrect
[Mon Feb 19 12:04:58 2024] sysimgblt crc8 libahci joydev led_class fb_sys_fops asn1_decoder ipmi_si button acpi_cpufreq unix [last unloaded: md_mod]
[Mon Feb 19 12:04:58 2024] CPU: 1 PID: 24641 Comm: nfsd Tainted: P O 6.1.64-Unraid #1
[Mon Feb 19 12:04:58 2024] Hardware name: iXsystems FREENAS-MINI-3.0-XL+/A2SDi-H-TF, BIOS 1.1c 06/25/2019
[Mon Feb 19 12:04:58 2024] RIP: 0010:nfserrno+0x45/0x51 [nfsd]
[Mon Feb 19 12:04:58 2024] Code: c3 cc cc cc cc 48 ff c0 48 83 f8 26 75 e0 80 3d dd c9 05 00 00 75 15 48 c7 c7 b5 52 dc a0 c6 05 cd c9 05 00 01 e8 01 a9 2d e0 <0f> 0b b8 00 00 00 05 c3 cc cc cc cc 48 83 ec 18 31 c9 ba ff 07 00
[Mon Feb 19 12:04:58 2024] RSP: 0018:ffffc90001cd7d18 EFLAGS: 00010286
[Mon Feb 19 12:04:58 2024] RAX: 0000000000000000 RBX: ffff888186d57000 RCX: 0000000000000027
[Mon Feb 19 12:04:58 2024] RDX: 0000000000000002 RSI: ffffffff820d7e01 RDI: 00000000ffffffff
[Mon Feb 19 12:04:58 2024] RBP: ffff888186d57180 R08: 0000000000000000 R09: ffffffff82245f10
[Mon Feb 19 12:04:58 2024] R10: 00007fffffffffff R11: ffffffff82966af6 R12: 000000000000002e
[Mon Feb 19 12:04:58 2024] R13: ffff88847ad070b8 R14: ffff88847ad070ec R15: 0000000000000026
[Mon Feb 19 12:04:58 2024] FS: 0000000000000000(0000) GS:ffff88885fc40000(0000) knlGS:0000000000000000
[Mon Feb 19 12:04:58 2024] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Feb 19 12:04:58 2024] CR2: 0000145c47d4e690 CR3: 000000000420a000 CR4: 00000000003506e0
[Mon Feb 19 12:04:58 2024] Call Trace:
[Mon Feb 19 12:04:58 2024] <TASK>
[Mon Feb 19 12:04:58 2024] ? __warn+0xab/0x122
[Mon Feb 19 12:04:58 2024] ? report_bug+0x109/0x17e
[Mon Feb 19 12:04:58 2024] ? nfserrno+0x45/0x51 [nfsd]
[Mon Feb 19 12:04:58 2024] ? handle_bug+0x41/0x6f
[Mon Feb 19 12:04:58 2024] ? exc_invalid_op+0x13/0x60
[Mon Feb 19 12:04:58 2024] ? asm_exc_invalid_op+0x16/0x20
[Mon Feb 19 12:04:58 2024] ? nfserrno+0x45/0x51 [nfsd]
[Mon Feb 19 12:04:58 2024] ? nfserrno+0x45/0x51 [nfsd]
[Mon Feb 19 12:04:58 2024] nfsd_rename+0x368/0x3d0 [nfsd]
[Mon Feb 19 12:04:58 2024] nfsd4_rename+0x61/0x8f [nfsd]
[Mon Feb 19 12:04:58 2024] nfsd4_proc_compound+0x43f/0x575 [nfsd]
[Mon Feb 19 12:04:58 2024] nfsd_dispatch+0x1a9/0x262 [nfsd]
[Mon Feb 19 12:04:58 2024] svc_process_common+0x332/0x4df [sunrpc]
[Mon Feb 19 12:04:58 2024] ? ktime_get+0x35/0x49
[Mon Feb 19 12:04:58 2024] ? nfsd_svc+0x2b6/0x2b6 [nfsd]
[Mon Feb 19 12:04:58 2024] ? nfsd_shutdown_threads+0x5b/0x5b [nfsd]
[Mon Feb 19 12:04:58 2024] svc_process+0xc7/0xe4 [sunrpc]
[Mon Feb 19 12:04:58 2024] nfsd+0xd5/0x155 [nfsd]
[Mon Feb 19 12:04:58 2024] kthread+0xe7/0xef
[Mon Feb 19 12:04:58 2024] ? kthread_complete_and_exit+0x1b/0x1b
[Mon Feb 19 12:04:58 2024] ret_from_fork+0x22/0x30
[Mon Feb 19 12:04:58 2024] </TASK>
[Mon Feb 19 12:04:58 2024] ---[ end trace 0000000000000000 ]---
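Side note on the warning itself: errno 103 on Linux is ECONNABORTED, which is why nfsd flags it as "non-standard" — it has no NFS wire-protocol equivalent. A quick way to decode such values, using Python's errno table as a lookup tool:

```shell
# Decode the "non-standard errno: -103" from the nfsd warning above.
# On Linux, 103 is ECONNABORTED ("Software caused connection abort").
python3 -c 'import errno, os; print(errno.errorcode[103], "-", os.strerror(103))'
# -> ECONNABORTED - Software caused connection abort
```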