JorgeB · Moderator · 67,540 posts · 707 days won

Everything posted by JorgeB

  1. See if you can get the diagnostics from the GUI or by typing "diagnostics" in the console (there's a short sketch of the console route after this list).
  2. @darckhart your issue might be related to this: https://forums.unraid.net/bug-reports/prereleases/690-beta-30-pre-skylake-intel-cpus-stuck-at-lowest-pstate-r1108/?do=findComment&comment=11255
  3. This helps sometimes with that Marvell controller, though best bet for the future would be to replace it.
  4. Diags after rebooting don't help much, try this and post that log after a crash.
  5. You're running out of RAM, possibly because of one of the docker containers; try enabling them one at a time.
  6. Look for any ATA and/or sync errors (there's a grep sketch after this list).
  7. No need to bump. You said everything was normal in safe mode, and the diags are consistent with that; if it isn't, please post diags showing that.
  8. Please post the diagnostics: Tools -> Diagnostics
  9. That's a great find, for now you can just add that line to the go file.
  10. The file is cached to RAM during the first transfer, then the next transfer(s) are done from RAM, so they're faster (see the drop_caches sketch after this list).
  11. Then this is most likely the bottleneck; you should get better speeds from SSD to SSD (and note that, contrary to popular belief, not all SSDs are capable of 500MB/s+, especially for sustained writes).
  12. First run xfs_repair (see the sketch after this list), then delete.
  13. This suggests the RAM cache is interfering (see the drop_caches sketch after this list); what are the devices in use, both source and destination, e.g. SSD to SSD, disk to array, other?
  14. These issues aren't easy to diagnose. I would start by running memtest for a few hours, ideally at least 24, and if there are no errors run a couple more checks.
  15. Those mitigations were already in v6.8.3; my thinking is that it could be some new mitigation or change specific to this kernel. It's also not a bad idea to retest with v6.8 if possible to confirm it's really a beta issue.
  16. There are constant ATA errors on disk1; that by itself shouldn't cause sync errors, but if nothing else it will make the check much slower. Replace the cables and run another check, then post new diags.
  17. Parity checks are single threaded, so one core stuck at 100% will bottleneck, but yeah, that CPU should be enough for the array size, unless, as mentioned, some kernel mitigation is making a large difference.
  18. Diags show high CPU utilization, suggesting that is the problem, or at least part of it; maybe the latest kernel vulnerability mitigations have an impact.
  19. Fragmentation, yes; the increased write amplification is not so good, since it can reduce the SSD's life.
  20. Having COW enabled has some disadvantages for VMs, like increased fragmentation and write amplification; that's why LT disabled it by default for those shares, but I still prefer to have it enabled for data integrity (see the chattr sketch after this list).
  21. Yes, first finish the rebuild, when that's done restart the array in maintenance mode and post the output of: reiserfsck --check /dev/md2
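
For item 1, a minimal sketch of grabbing the diagnostics from the console; the "diagnostics" command is the one mentioned above, while the output path shown is an assumption and can differ between Unraid versions:

    # run from the local console or an SSH session
    diagnostics
    # the command writes a zip archive to the flash drive; assumed location:
    ls -lh /boot/logs/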
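
For items 6 and 16, a rough way to look for ATA errors in the syslog from the console; the grep pattern is only an illustration, not an exhaustive filter:

    # show kernel ATA exception/error lines from the current syslog
    grep -iE 'ata[0-9]+(\.[0-9]+)?: (exception|error|failed)' /var/log/syslog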
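
For items 10 and 13, a sketch of taking the RAM cache out of the picture before re-testing a transfer; drop_caches is the standard Linux mechanism, and the paths below are placeholders:

    # flush dirty data and drop the page cache so the next read comes from disk, not RAM
    sync
    echo 3 > /proc/sys/vm/drop_caches
    # repeat the copy and compare speeds (placeholder paths)
    rsync --progress /mnt/cache/testfile /mnt/disk1/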
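
For item 12, a sketch of running xfs_repair on an array disk; the device name (md1) is an assumption, and this is normally done with the array started in maintenance mode:

    # dry run first, reporting problems without modifying anything
    xfs_repair -n /dev/md1
    # then the actual repair, verbose
    xfs_repair -v /dev/md1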
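
For item 20, a sketch of checking and disabling COW on a btrfs directory with chattr; the path is a placeholder, and note that +C only affects files created after the attribute is set:

    # list current attributes ('C' means copy-on-write is disabled)
    lsattr -d /mnt/cache/domains
    # disable COW for files created in this directory from now on (placeholder path)
    chattr +C /mnt/cache/domains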