
About interwebtech

  • Rank
    Advanced Member
  • Birthday 05/03/1958
  • Location
    Everett, WA
  • Personal Text
    The obstacle is the path.

  1. cache_dirs appears not to be working properly with the latest 6.7 stable. Opening a cached share in windows exploder almost always lights up a drive on the array, which delays the display of folders until it's spun up. I've tried adjusting "Cache pressure of directories on system" from 10 down to 1, but no change in behavior. Logs attached: tower-diagnostics-20190523-1807.zip
  2. • StarTech.com 12U Adjustable Depth Open Frame 4 Post Server Rack with Casters/Levelers and Cable Management Hooks 4POSTRACK12U Black
     • 2x StarTech.com 1U Adjustable Mounting Depth Vented Rack Mount Shelf
     • CyberPower OR1500LCDRM1U 1U Rackmount UPS System
     • NORCO 4U Rack Mount 24 x Hot-Swappable SATA/SAS 6G Drive Bays Server Rack mount RPC-4224
     • EVGA Supernova 850 G3, 80 Plus Gold 850W Modular Power Supply 220-G3-0850-X1
     • ASRock EP2C612 WS Motherboard
     • 2x Intel Xeon E5-2620 v3 Six-Core Haswell Processor 2.4GHz LGA 2011-3 CPU
     • 2x Intel LGA 2011-3 Cooling Fan/Heatsink
     • 8x Crucial 8GB Single DDR4 2133 MT/s (PC4-2133) CL15 SR x4 ECC Registered DIMM CT8G4RFS4213 (64GB)
     • 2x Samsung 970 EVO 1TB - NVMe PCIe M.2 2280 SSD (MZ-V7E1T0BW)
     • 2x QNINE M.2 NVME SSD to PCIe adapter
     • LSI Logic LSI00244 SAS 9201-16i 16-Port 6Gb/s SAS/SATA Controller Card
     • 1x NORCO Computer Parallel (reverse breakout) Cable (C-SFF8087-4S)
     • 4x 10Gtek Internal Mini SAS SFF-8087 Cable, 0.5 Meter
     • 2x Gigabit network adapters bonded to a single interface
     • Unraid OS Pro 6.x
     • 1TB RAID1 Cache Pool
     • 2x 8TB parity
     • 15x HDD array @ 100TB
  3. I need to venture out of the Prereleases & General groups more often. First I heard of MERCH! Bought coffee mug, t-shirt & magnet. w00t!
  4. Oddly enough I saw no delay this time on syncing. I was Chicken Little on the last RC: "The install has hung! The install has hung!". lol
  5. rc6 to rc7. PS: I make a habit of spinning up all drives before running the upgrade. It speeds up the syncing process.
  6. I must be very impatient today. lol
  7. I will try leaving it be. I've never had it pause for a long time that I recall.
  8. I get this far and it halts. No errors on console. Tried 3 times from Tools | Update OS. tower-diagnostics-20190405-0117.zip
  9. Updated from rc5. All systems nominal. Nothing to report.
  10. I stopped/restarted the array but the errors are still listed on Main. Also, Fix Common Problems alerted me to the error state on the 3 disks:

      Event: Fix Common Problems - Tower
      Subject: Errors have been found with your server (Tower).
      Description: Investigate at Settings / User Utilities / Fix Common Problems
      Importance: alert
      **** disk8 (ST8000AS0002-1NA17Z_Z8406M0L) has read errors ****
      **** disk9 (ST4000DM000-1F2168_Z3024WY8) has read errors ****
      **** disk10 (ST4000DM000-1F2168_Z3024WMZ) has read errors ****

      Fresh set of diags and screenie attached. tower-diagnostics-20190305-1617.zip
  11. It's complaining about those 3 disks that threw errors on spinup for the monthly parity check (OP above). Here is the full email:

      Event: Unraid Status
      Subject: Notice [TOWER] - array health report [FAIL]
      Description: Array has 18 disks (including parity & cache)
      Importance: warning
      Parity - ST8000VN0002-1Z8112_ZA124ASG (sdc) - standby [OK]
      Parity2 - ST8000VN0002-1Z8112_ZA12BHMW (sdd) - standby [OK]
      Disk 1 - ST8000VN0022-2EL112_ZA17V13V (sdb) - standby [OK]
      Disk 2 - ST6000DX000-1H217Z_Z4D04L2A (sde) - standby [OK]
      Disk 3 - ST8000VN0022-2EL112_ZA17SPGS (sdf) - standby [OK]
      Disk 4 - ST6000DM001-1XY17Z_Z4D23K9N (sdg) - standby [OK]
      Disk 5 - ST8000AS0002-1NA17Z_Z840J4R8 (sdh) - standby [OK]
      Disk 6 - HGST_HDN724040ALE640_PK1334PCKDKRPX (sdi) - standby [OK]
      Disk 7 - HGST_HDN724040ALE640_PK1334PCKAX1MX (sdn) - standby [OK]
      Disk 8 - ST8000AS0002-1NA17Z_Z8406M0L (sdo) - standby (disk has read errors) [NOK]
      Disk 9 - ST4000DM000-1F2168_Z3024WY8 (sdp) - standby (disk has read errors) [NOK]
      Disk 10 - ST4000DM000-1F2168_Z3024WMZ (sdq) - standby (disk has read errors) [NOK]
      Disk 11 - ST8000VN0022-2EL112_ZA179JR6 (sdj) - standby [OK]
      Disk 12 - WDC_WD80EMAZ-00WJTA0_7SJNBMRU (sdk) - standby [OK]
      Disk 13 - WDC_WD80EMAZ-00WJTA0_7SJNBNVU (sdl) - standby [OK]
      Disk 14 - WDC_WD80EMAZ-00WJTA0_7HJZ25AF (sdm) - standby [OK]
      Cache - Samsung_SSD_970_EVO_1TB_S467NF0K603458F (nvme0n1) - active 22 C [OK]
      Cache 2 - Samsung_SSD_970_EVO_1TB_S467NF0K602897J (nvme1n1) - active 23 C [OK]
      Parity is valid
      Last checked on Mon 04 Mar 2019 08:54:16 AM PST (yesterday), finding 0 errors.
      Duration: 1 day, 30 minutes, 42 seconds. Average speed: 90.7 MB/s

      I ran a 2nd parity check with corrections turned on that completed without error (see the last line of the email). I thought that would clear the errors being reported. Diags and Main screen cap attached. tower-diagnostics-20190305-0921.zip
  12. Followup several days later... how do I clear the "FAIL" moniker on emails? I ran a correcting Parity check after the one referenced above to verify there were no remaining errors but the FAIL still appears on emails. Last check completed on Mon 04 Mar 2019 08:54:16 AM PST (yesterday), finding 0 errors. Notice [TOWER] - array health report [FAIL] (email)
  13. Not sure if this is a beta issue or something more general, so I will err on the side of it not being related to the beta. Last night around 12:30, the monthly parity check launched. I immediately got an email warning that the array had errors:

      Event: Unraid array errors
      Subject: Warning [TOWER] - array has errors
      Description: Array has 3 disks with read errors
      Importance: warning
      Disk 8 - ST8000AS0002-1NA17Z_Z8406M0L (sdo) (errors 128)
      Disk 9 - ST4000DM000-1F2168_Z3024WY8 (sdp) (errors 128)
      Disk 10 - ST4000DM000-1F2168_Z3024WMZ (sdq) (errors 128)

      The parity check continued and logged via the GUI that it made corrections to fix 128 errors on each of the 3 disks (all next to each other). Diagnostics attached. tower-diagnostics-20190301-1944.zip
  14. Thanks for fixing that. Now my 2x970 EVO NVMe are all good.
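The cache_dirs behavior discussed in post 1 can be approximated with a short sketch. The plugin essentially keeps directory metadata in RAM by walking the share trees on a loop, and the "Cache pressure" setting maps to the kernel's vm.vfs_cache_pressure tunable. The share path and interval below are illustrative assumptions, not the plugin's actual configuration:

```shell
# Rough sketch of what cache_dirs does (paths and interval are
# illustrative assumptions, not the plugin's real settings).
# Walking a directory tree pulls its metadata into the kernel's
# dentry/inode cache, so listing folders later needn't spin up a disk.

warm_dirs() {
  # -type d walks directory entries only; no file data is read.
  find "$1" -type d -print > /dev/null 2>&1
}

# Lowering vfs_cache_pressure makes the kernel less eager to evict
# that cached metadata (1 = keep almost everything; default is 100).
# Requires root:
#   sysctl -w vm.vfs_cache_pressure=1

# Illustrative loop over a hypothetical share:
#   while true; do warm_dirs /mnt/user/Media; sleep 30; done
```

If a drive still spins up despite a low cache pressure, the usual suspect is memory pressure evicting the cache between walks, which matches the symptom described in post 1.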
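The pre-upgrade spin-up habit from post 5 can also be scripted. This is a generic sketch built on the assumption that reading a little data from each raw device wakes a sleeping drive; the /dev/sd[a-z] glob is an assumption to adapt to your own device names (the health report above shows sdb through sdq), and the Unraid GUI's spin-up button does the same job:

```shell
# Hedged sketch: wake array drives before an upgrade by reading a
# small amount of data from each device, so the sync isn't delayed
# by staggered spin-ups. Device names are assumptions; adjust them.

spin_up() {
  # Reading 1 MiB from the start of the device forces a sleeping
  # drive to spin up before the real I/O begins.
  dd if="$1" of=/dev/null bs=1M count=1 2>/dev/null
}

# Spin all drives up in parallel (illustrative glob):
#   for d in /dev/sd[a-z]; do spin_up "$d" & done; wait
```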
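When an array health report like the one quoted in post 11 arrives, the flagged drives can be pulled out mechanically rather than by eye. A minimal sketch, assuming the email body has been saved to a text file; the "[NOK]" marker matches the line format of the report above:

```shell
# Minimal sketch: list only the drives an Unraid health report flags
# as having problems. Assumes the report body is saved to a file;
# healthy lines end in "[OK]", flagged ones in "[NOK]".

failing_disks() {
  grep '\[NOK\]$' "$1"
}

# Example (hypothetical filename):
#   failing_disks health-report.txt
```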