Everything posted by JorgeB

  1. You have an ACPI error spamming the log that makes it very difficult to look for anything else:
     Jun 5 21:22:23 Goliath kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
     Jun 5 21:22:24 Goliath kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-390)
     Jun 5 21:22:24 Goliath kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-514)
     Jun 5 21:22:24 Goliath kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
     Jun 5 21:22:27 Goliath kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-390)
     Jun 5 21:22:27 Goliath kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-514)
     Jun 5 21:22:27 Goliath kernel: ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
     See here how to fix it, then please post a new log:
  2. Try rebooting in safe mode; if that doesn't work, please post the diagnostics: Tools -> Diagnostics
  3. Please use the dedicated plugin support thread:
  4. Yeah, that really looks like the typical Marvell controller problem: it's dropping the disks. Your best bet is to replace it with one of the recommended controllers.
  5. You could always unassign the disk and check the emulated disk data before rebuilding, but I see no reason for it to be any different than the current one.
  6. Maybe a problem with the LSI; I find it odd that it would overheat so fast.
  7. And are you sure that's the problem? I have some LSIs without direct cooling, just general case airflow, without any issues, and overheating should never make your server reboot anyway.
  8. LT checks this forum and then decides what features to implement/prioritize; you don't need to do anything more.
  9. We don't recommend Marvell for Unraid, and the 9230/9235 appear to be the worst offenders. Please post the diagnostics: Tools -> Diagnostics.
  10. You need to check the exit status after running xfs_repair (you'll need to use the CLI to run it):
      echo $?
      1 = corruption was detected
      0 = no corruption detected
      There's a short sketch of this after the list.
  11. CRC errors are usually the result of a bad SATA cable; replace it. They are corrected errors (the transfer is retried), so they won't corrupt data. There's a sketch after the list showing how to check the counter.
  12. Correct, about 1600MB/s usable after PCIe overhead, which is still enough for most spinners, even with all ports in use (the rough math is sketched after the list).
  13. I'm sorry, but without the logs from when the problem started (they start over after any reboot) I don't have any other ideas about what could have happened.
  14. The only way to know with xfs_repair is to check the exit code, or to always run it without -n (see the exit-status sketch after the list).
  15. No, Unraid can still use it for parity calculation despite being unmountable; only if a disk is disabled (red x) can it not be used for parity calculation. Not before rebuilding disk 5, since you only have one parity. Also, that type of "invalid partition" error shouldn't happen out of the blue; it suggests something changed the MBR of the disk, and that should not happen during normal use, though you're not the first this has happened to.
  16. You can do it with Unraid, you just need sg3_utils; this FAQ entry shows how to install them.
  17. It will link at x4 instead of x8, but it might be a non-issue depending on how many and what type of devices you will have there.
  18. Too much time spent on the forums looking at diags
  19. Tools -> New Config (but without parity the data from the old disk obviously won't be rebuilt)
  20. The HBA should work fine in the x4 slot, just with limited bandwidth.
  21. Really looks like a disk problem; can you run another extended test?
  22. You're having issues with the main SATA controller (which is handled by ASMedia on those AMD chipsets). Those errors are quite common on Ryzen-based boards, especially under load; there are some reports that upgrading to v6.9-b1 helps, due to the much newer kernel.
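
A minimal sketch of the exit-status check from items 10 and 14, assuming the filesystem is on an array device such as /dev/md1 (a placeholder, use your own device and run it from the CLI with the array started in maintenance mode):

     # Check the filesystem without modifying it (-n = no-modify mode)
     xfs_repair -n /dev/md1

     # Exit status of the previous command:
     # 0 = no corruption detected, 1 = corruption was detected
     echo $?

If corruption is reported, run xfs_repair again without -n to actually repair it.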
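
For item 11, a quick way to check the counter is the UDMA_CRC_Error_Count SMART attribute; a minimal sketch, with /dev/sdb as a placeholder for the affected disk:

     # Print the SMART attributes and filter for the CRC counter (attribute 199)
     smartctl -A /dev/sdb | grep -i crc

The raw value is a lifetime counter that never resets, so after replacing the cable just confirm it stops increasing.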
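
A rough check of the ~1600MB/s figure from item 12, assuming the controller sits on a PCIe 2.0 x4 link (an assumption, the original thread isn't quoted here):

     4 lanes x 5 GT/s x 8/10 (8b/10b encoding) = 16 Gbit/s = 2000 MB/s raw
     2000 MB/s minus packet/protocol overhead leaves roughly 1600 MB/s usable

Divided across all ports that still leaves more bandwidth per disk than most spinners can sustain.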