interwebtech

Members
  • Content Count

    1129
  • Joined

  • Last visited

Community Reputation

5 Neutral

About interwebtech

  • Rank
    Advanced Member
  • Birthday 05/03/1958

Converted

  • Gender
    Male
  • Location
    Everett, WA
  • Personal Text
    The obstacle is the path.

Recent Profile Visitors

1187 profile views
  1. interwebtech

    Unraid OS version 6.7.0-rc7 available

    Updated from rc6 to rc7. PS: I make a habit of spinning up all drives before running the upgrade; it speeds up the syncing process (a small script sketch of that spin-up is at the end of this list).
  2. interwebtech

    Unraid OS version 6.7.0-rc7 available

    I must be very impatient today. lol
  3. interwebtech

    Unraid OS version 6.7.0-rc7 available

    I will try leaving it be. I've never had it pause for a long time that I recall.
  4. interwebtech

    Unraid OS version 6.7.0-rc7 available

    I get this far and it halts. No errors on console. Tried 3 times from Tools | Update OS. tower-diagnostics-20190405-0117.zip
  5. interwebtech

    Unraid OS version 6.7.0-rc6 available

    Updated from rc5. All systems nominal. Nothing to report.
  6. interwebtech

    [6.7.0-rc5] Read Disk Errors on Parity Check start

    That did it. Thanks.
  7. interwebtech

    [6.7.0-rc5] Read Disk Errors on Parity Check start

    I stopped/restarted the array but the errors are still listed on Main. Also, Fix Common Problems alerted me to the error state on the 3 disks.

    Event: Fix Common Problems - Tower
    Subject: Errors have been found with your server (Tower).
    Description: Investigate at Settings / User Utilities / Fix Common Problems
    Importance: alert
    **** disk8 (ST8000AS0002-1NA17Z_Z8406M0L) has read errors ****
    **** disk9 (ST4000DM000-1F2168_Z3024WY8) has read errors ****
    **** disk10 (ST4000DM000-1F2168_Z3024WMZ) has read errors ****

    Fresh set of diags and screenie attached. tower-diagnostics-20190305-1617.zip
  8. interwebtech

    [6.7.0-rc5] Read Disk Errors on Parity Check start

    It's complaining about those 3 disks that threw errors on spinup for the monthly parity check (OP above). Here is the full email:

    Event: Unraid Status
    Subject: Notice [TOWER] - array health report [FAIL]
    Description: Array has 18 disks (including parity & cache)
    Importance: warning
    Parity - ST8000VN0002-1Z8112_ZA124ASG (sdc) - standby [OK]
    Parity2 - ST8000VN0002-1Z8112_ZA12BHMW (sdd) - standby [OK]
    Disk 1 - ST8000VN0022-2EL112_ZA17V13V (sdb) - standby [OK]
    Disk 2 - ST6000DX000-1H217Z_Z4D04L2A (sde) - standby [OK]
    Disk 3 - ST8000VN0022-2EL112_ZA17SPGS (sdf) - standby [OK]
    Disk 4 - ST6000DM001-1XY17Z_Z4D23K9N (sdg) - standby [OK]
    Disk 5 - ST8000AS0002-1NA17Z_Z840J4R8 (sdh) - standby [OK]
    Disk 6 - HGST_HDN724040ALE640_PK1334PCKDKRPX (sdi) - standby [OK]
    Disk 7 - HGST_HDN724040ALE640_PK1334PCKAX1MX (sdn) - standby [OK]
    Disk 8 - ST8000AS0002-1NA17Z_Z8406M0L (sdo) - standby (disk has read errors) [NOK]
    Disk 9 - ST4000DM000-1F2168_Z3024WY8 (sdp) - standby (disk has read errors) [NOK]
    Disk 10 - ST4000DM000-1F2168_Z3024WMZ (sdq) - standby (disk has read errors) [NOK]
    Disk 11 - ST8000VN0022-2EL112_ZA179JR6 (sdj) - standby [OK]
    Disk 12 - WDC_WD80EMAZ-00WJTA0_7SJNBMRU (sdk) - standby [OK]
    Disk 13 - WDC_WD80EMAZ-00WJTA0_7SJNBNVU (sdl) - standby [OK]
    Disk 14 - WDC_WD80EMAZ-00WJTA0_7HJZ25AF (sdm) - standby [OK]
    Cache - Samsung_SSD_970_EVO_1TB_S467NF0K603458F (nvme0n1) - active 22 C [OK]
    Cache 2 - Samsung_SSD_970_EVO_1TB_S467NF0K602897J (nvme1n1) - active 23 C [OK]
    Parity is valid
    Last checked on Mon 04 Mar 2019 08:54:16 AM PST (yesterday), finding 0 errors.
    Duration: 1 day, 30 minutes, 42 seconds. Average speed: 90.7 MB/s

    I ran a 2nd parity check with corrections turned on that completed without error (see the last line of the email). I thought that would clear the errors being reported. Diags and Main screen cap attached. tower-diagnostics-20190305-0921.zip
  9. interwebtech

    [6.7.0-rc5] Read Disk Errors on Parity Check start

    Follow-up several days later: how do I clear the "FAIL" moniker on emails? I ran a correcting parity check after the one referenced above to verify there were no remaining errors, but the FAIL still appears on emails. Last check completed on Mon 04 Mar 2019 08:54:16 AM PST (yesterday), finding 0 errors. Notice [TOWER] - array health report [FAIL] (email)
  10. interwebtech

    [6.7.0-rc5] Read Disk Errors on Parity Check start

    Not sure if this is a beta issue or something more general, so I will err on the side of it not being related to the beta. Last night around 12:30, the monthly parity check launched. I immediately got an email warning that the array had errors:

    Event: Unraid array errors
    Subject: Warning [TOWER] - array has errors
    Description: Array has 3 disks with read errors
    Importance: warning
    Disk 8 - ST8000AS0002-1NA17Z_Z8406M0L (sdo) (errors 128)
    Disk 9 - ST4000DM000-1F2168_Z3024WY8 (sdp) (errors 128)
    Disk 10 - ST4000DM000-1F2168_Z3024WMZ (sdq) (errors 128)

    The parity check continued, and the GUI logged that it made corrections to fix 128 errors on each of the 3 disks (all next to each other). Diagnostics attached. tower-diagnostics-20190301-1944.zip
  11. Thanks for fixing that. Now my 2x970 EVO NVMe are all good.
  12. interwebtech

    Dashboard response time slows down

    Adding my report here rather than in a new thread. I have been experiencing page-load delays in Chrome. I thought it might be an errant extension, but I have disabled the ad blocker and spell checker (Grammarly). I believe it predates the 6.7.0 series but can't recall how far back. I have been using Firefox to access the webUI, as 20-second delays between pages (I can see everything but no clicks are allowed until it finishes loading) are a PIA. I fired up Dev Tools today to get some screens of the offenders under the Network tab. Looks like "var", DashboardApps.php, & Notify.php are the ones causing the delay on Main. On moving to the Settings page, it's var & Notify.php again. Some screen grabs from DevTools below (a rough command-line timing sketch for those requests is at the end of this list).
  13. interwebtech

    [6.7.0 rc1] GUI bug

    Verified fixed here. Rebooted into RC2 and the message ends up as "Array Started" as expected.
  14. interwebtech

    [6.7.0 rc1] GUI bug

    That cleared the message. It now displays "Array Started".
  15. interwebtech

    Speedtest.net for unRAID 6.1+

    Thanks for the fix. All good now.
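
A quick aside on the "spin up all drives before upgrading" habit from the rc7 post above (item 1): if you would rather script the spin-up than click Spin Up on Main, a minimal sketch is below. It is only an assumption that the array disks appear as /dev/sdX and that the script runs as root; the device pattern is a placeholder, not anything Unraid-specific, so adjust it to match what Main shows for your array.

```python
#!/usr/bin/env python3
"""Read a few KB from each array device so spun-down drives wake up.

Hypothetical helper, not an Unraid tool: the /dev/sd[a-z] pattern is a
placeholder and should be adjusted to match the devices shown on Main.
Must run as root to read raw devices.
"""
import glob

ARRAY_DEVICES = sorted(glob.glob("/dev/sd[a-z]"))  # assumption: array disks are /dev/sdX

for dev in ARRAY_DEVICES:
    try:
        with open(dev, "rb") as disk:
            disk.read(4096)  # a small raw read is enough to trigger spin-up
        print(f"spun up {dev}")
    except OSError as err:
        print(f"could not read {dev}: {err}")
```

Running something like this right before Tools | Update OS has the same effect as the manual spin-up described in the post.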
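And on the dashboard slowdown report (item 12): a rough way to check whether those slow requests are server-side rather than a browser or extension issue is to time them from outside the browser. The sketch below is assumption-heavy: the host name and paths are placeholders to be replaced with the exact slow-request URLs copied from the DevTools Network tab, and it does no login/session handling, so a redirect to the login page will return quickly and prove nothing.

```python
#!/usr/bin/env python3
"""Time individual webGUI requests from outside the browser.

Sketch only: BASE and PATHS are placeholders; paste the exact slow-request
URLs from the DevTools Network tab.
"""
import time
import urllib.request

BASE = "http://tower"  # assumption: server reachable by this name
PATHS = [
    "/Dashboard",  # placeholder: replace with the slow URLs from DevTools
    "/Settings",
]

for path in PATHS:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(BASE + path, timeout=60) as resp:
            resp.read()
            status = resp.status
    except OSError as err:  # urllib errors and timeouts subclass OSError
        status = f"error: {err}"
    print(f"{path}: {time.monotonic() - start:.1f}s ({status})")
```

If the same 20-second stall shows up here, the delay is on the server side and not caused by Chrome or an extension.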