Everything posted by JorgeB

  1. It doesn't, but try writing directly to the array to see if the speed is the same or better.
  2. Back up any data still on that disk, format it, and restore the data; or move the data from that disk to other array disks if there's space, format it, and continue using it normally.
  3. Parity doesn't help with filesystem corruption, it helps when a disk fails.
  4. IMHO there's not much point in rebuilding, just format to create a new filesystem. Yes, but I would recommend trying one thing at a time or you won't know what fixed it, if it does indeed fix it.
  5. Replace/swap both cables (power + SATA) on disk7 and try again.
  6. Is the speed the same if you write to cache or directly to the array with turbo write enabled?
  7. That's not normal; either there's a hardware problem somewhere or the filesystem still had issues. You can re-format the disk (after backing up) to make sure the filesystem is recreated from scratch, and if it happens again on the same disk then it's likely hardware related.
  8. Latest one is 20.00.07.00.
  9. You have a cache pool, so all shares will be shown as protected, despite the current cache pool not being redundant; Unraid currently doesn't check for that, and if there's a pool it assumes it's using the default raid1 profile.
  10. Downgrade to v6.8.3 and see if it helps; it's unlikely your problem is caused by v6.9.
  11. - Tools -> New Config -> Retain current configuration: All -> Apply
      - Assign any new data and/or cache drives you want
      - Start array to begin parity sync
      - New drives added will need to be formatted, unless already valid Unraid devices
  12. This is likely the problem, RAID controllers are not recommended for Unraid, but please post the diagnostics: Tools -> Diagnostics
  13. See if any of these helps: https://lime-technology.com/forums/topic/72837-error-log-filled-to-100-in-1day-47min-with-this/?do=findComment&comment=669775
  14. Exactly what version? Some earlier p20 releases have known issues with CRC errors.
  15. Do a couple of consecutive parity checks without rebooting and post new diags, but first you need to fix this error spamming the log (and then reboot):

      Apr 28 04:43:38 Tower nginx: 2020/04/28 04:43:38 [error] 3684#3684: *1298377 connect() to unix:/var/tmp/HomeAssistantCore.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.157, server: , request: "GET /dockerterminal/HomeAssistantCore/token HTTP/1.1", upstream: "http://unix:/var/tmp/HomeAssistantCore.sock:/token", host: "tower", referrer: "http://tower/dockerterminal/HomeAssistantCore/"
      Apr 28 04:44:28 Tower nginx: 2020/04/28 04:44:28 [error] 3684#3684: *1298470 connect() to unix:/var/tmp/HomeAssistantCore.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.157, server: , request: "GET /dockerterminal/HomeAssistantCore/ws HTTP/1.1", upstream: "http://unix:/var/tmp/HomeAssistantCore.sock:/ws", host: "tower"
  16. LSI HBAs can only trim SSDs with DRAT/RZAT. I would expect the 860 PRO to support that, since the 860 EVO does, but I never confirmed it.
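One way to check this yourself: `hdparm -I` reports the drive's TRIM determinism. The helper below is a minimal sketch, not a confirmed result for the 860 PRO; the device name and the sample line are assumptions for illustration (the capability strings are the ones hdparm prints).

```shell
#!/bin/sh
# Classify a drive's TRIM capability from `hdparm -I` output read on stdin.
# LSI HBAs need "Deterministic read ZEROs after TRIM" (RZAT) or
# "Deterministic read data after TRIM" (DRAT) to pass TRIM through.
classify_trim() {
  out=$(cat)
  case "$out" in
    *"Deterministic read ZEROs after TRIM"*) echo "RZAT" ;;
    *"Deterministic read data after TRIM"*)  echo "DRAT" ;;
    *"TRIM supported"*)                      echo "TRIM (non-deterministic)" ;;
    *)                                       echo "no TRIM" ;;
  esac
}

# On a real system (device name /dev/sdb is an assumption):
#   hdparm -I /dev/sdb | classify_trim
# Illustrative sample line for an RZAT-capable SSD:
printf '%s\n' "   *    Deterministic read ZEROs after TRIM supported" | classify_trim
```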
  17. It's expected; you should keep any vdisk outside the array for best performance. You can have it on a cache pool so it still remains protected.
  18. I would start there; if stable you can try 2133MHz, but make sure there are no sync errors during a parity check even if it seems stable. That's one of the possible side effects of overclocking RAM with Ryzen, even if memtest doesn't detect any errors.
  19. For me, up to low double digits of reallocated sectors are just "a few". If the drive fails an extended SMART test or gives read errors on a parity check it should be replaced now, independent of how many reallocated sectors there are.
  20. If it doesn't happen again it was most likely an unclean shutdown; if there are more in the future there's likely a hardware problem somewhere.
  21. A few reallocated sectors can be OK, if they remain stable, and as long as there are no read errors now.
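To watch whether the count stays stable over time, the raw value of SMART attribute 5 can be pulled from `smartctl -A` output. A minimal sketch; the sample line's values are made up for illustration, and on a real system you would pipe `smartctl -A /dev/sdX` (device name is an assumption) instead:

```shell
#!/bin/sh
# Extract the raw Reallocated_Sector_Ct (SMART attribute 5) from a
# `smartctl -A` attribute table; the raw value is the last field of the row.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12'
printf '%s\n' "$sample" | awk '/Reallocated_Sector_Ct/ {print $NF}'
```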
  22. May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443264
      May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443272
      May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443280
      May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443288
      May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443296
      May 1 14:28:33 Tower kernel: md: recovery thread: P corrected, sector=2965791232
      May 1 14:28:33 Tower kernel: md: recovery thread: P corrected, sector=2965791240

      P.S.: On a non-correcting check it will be logged as "incorrect" instead of "corrected".
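These corrections are easy to tally straight from the syslog. A sketch using a few of the lines from the excerpt above (the grep pattern is the literal kernel message; for a non-correcting check you would search for "P incorrect" instead, per the P.S.):

```shell
#!/bin/sh
# Count parity-check corrections in a syslog excerpt.
log='May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443264
May 1 10:32:59 Tower kernel: md: recovery thread: P corrected, sector=2012443272
May 1 14:28:33 Tower kernel: md: recovery thread: P corrected, sector=2965791232'
printf '%s\n' "$log" | grep -c 'P corrected'
```

On a live server the same count would come from `grep -c 'P corrected' /var/log/syslog`.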