JorgeB

Moderators
  • Posts

    67,737
  • Joined

  • Last visited

  • Days Won

    708

Everything posted by JorgeB

  1. The diags are from after rebooting, so we can't see what happened, but most likely, as already mentioned, it was a controller issue. Try using it in a different PCIe slot if available, and if there are more errors grab the diags before rebooting (see the diagnostics sketch after this list).
  2. There are errors on 8 disks, which suggests a controller issue, assuming they share one. Diags, if you didn't reboot yet, might give more info; a quick way to check which controller the disks are on is sketched after this list.
  3. Jan 8 13:04:10 Beowulf kernel: EDAC MC0: 1 CE memory read error on CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x17fa6b offset:0x600 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 ha:0 channel_mask:1 rank:1) The server is reporting memory issues; the System/IPMI event log in the BIOS might show more info for the affected DIMM (it can also be read from the console, see the ipmitool sketch after this list).
  4. Please use the existing plugin support thread:
  5. Try the below; if it doesn't help, look for a BIOS update or try using different PCIe slots if possible.
  6. Unlikely if used with the same controller.
  7. And FYI for the future: the parity swap procedure must be completed without any interruption, or it will abort and you'll need to start over from the beginning.
  8. If the parity copy completed successfully it would be, but it's not a bad idea to make sure.
  9. Looks like some compatibility issue, stick with v6.9.1 for now or upgrade to v6.10-rc2.
  10. The LSI is on very old firmware; try updating. You can check if the disks are being detected in the HBA BIOS during boot (CTRL + C), or check the firmware version from the console as sketched after this list.
  11. See if disabling spin down helps.
  12. Disable array auto-start (Settings > Disk Settings); you can then start the array and the VMs won't auto-start, then check the devices being passed through (see the lspci/IOMMU sketch after this list).
  13. The problem is the other pool member; it looks like it dropped offline earlier, probably due to the controller issues. Run a scrub on the pool (sketched after this list) and check that there aren't uncorrectable errors; it's also a good idea to check this.
  14. Please use the existing docker support thread:
  15. Check the filesystem on disk2 (see the xfs_repair sketch after this list), then you can re-sync the parities.
  16. https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/ Some workarounds are discussed there, mostly disabling NFS if not needed, or you can change everything to SMB; it can also be caused by Tdarr if you use that.
  17. Not sure, I don't use it; I guess it depends on the quantity and size of the files. They should compress a lot, but if size is not a problem...
  18. Forgot to mention: there shouldn't be, but if there's an "all data on this device will be deleted" warning on any of the pool devices, don't start the array; in that case reboot first, and the warning should then be gone.
  19. The syslog starts over after a reboot; you can enable the syslog server and post that log after a crash (see the sketch after this list).
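
The command sketches below cover some of the items above; they are minimal examples that assume console or SSH access to the server, so adjust device names and paths for your own setup.

For item 1, grabbing the diagnostics from the console before rebooting; the diagnostics command collects the logs into a zip on the flash drive (the /boot/logs location shown is the usual default, confirm it on your install).

```bash
# From the Unraid console or an SSH session, before rebooting:
diagnostics            # collects logs and SMART reports into a zip
ls -lt /boot/logs/     # the newest *-diagnostics-*.zip is the one to attach
```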
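
For item 2, a quick way to confirm whether the disks with errors hang off the same controller; PCI addresses and device names will differ on your hardware.

```bash
# Disks that share a controller show the same PCI address in their by-path link
ls -l /dev/disk/by-path/ | grep -v part
# List the storage controllers themselves to see what each PCI address is
lspci | grep -iE 'sata|sas|raid'
```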
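
For item 3, besides the BIOS event viewer, the same IPMI event log can be read from the booted OS, assuming ipmitool is available (on Unraid it is usually installed via a plugin) and the board has a local BMC.

```bash
# Load the IPMI kernel drivers if they aren't already loaded
modprobe ipmi_devintf
modprobe ipmi_si
# Human-readable System Event Log; look for ECC/memory entries naming the DIMM slot
ipmitool sel elist
```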
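
For item 10, the HBA firmware and BIOS versions can also be checked from the console with the LSI flash utilities, assuming the one matching your HBA generation is available (they can be downloaded from Broadcom).

```bash
# SAS2-generation HBAs (e.g. 9211-8i / SAS2008):
sas2flash -list
# SAS3-generation HBAs (e.g. 9300-8i / SAS3008):
sas3flash -list
# Both print the current firmware and BIOS versions for each adapter found
```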
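
For item 12, once the array is started without the VMs, the devices intended for passthrough can be cross-checked from the console; a minimal sketch:

```bash
# List PCI devices with their [vendor:device] IDs to match against the VM template
lspci -nn
# Show which IOMMU group each device belongs to (devices in a group move together)
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```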
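
For item 13, a minimal sketch of the scrub and the error counters, assuming a btrfs pool mounted at /mnt/cache (adjust the mount point to your pool name).

```bash
btrfs scrub start /mnt/cache    # start the scrub in the background
btrfs scrub status /mnt/cache   # re-run until finished; uncorrectable errors show here
btrfs dev stats /mnt/cache      # per-device read/write/corruption error counters
```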
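
For item 15, a minimal sketch of a console filesystem check, assuming disk2 is XFS and the array is started in maintenance mode (the GUI check on the disk2 page does the same thing); the md device name differs between releases, /dev/md2 on older ones and /dev/md2p1 on newer ones.

```bash
# Dry run first: report problems without changing anything
xfs_repair -n /dev/md2     # use /dev/md2p1 on newer releases
# If problems are reported, run it again without -n to actually repair
xfs_repair /dev/md2
```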
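
For item 19, the syslog server and "mirror syslog to flash" are enabled in Settings > Syslog Server; after a crash the saved log can be read from the flash drive. The paths below are the usual defaults and should be treated as assumptions for your configuration.

```bash
# With "Mirror syslog to flash" enabled, the running log is copied here:
tail -n 100 /boot/logs/syslog
# With the local syslog server enabled, one file per source IP is written
# into the folder chosen in the settings:
ls -l /boot/logs/
```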