Everything posted by JorgeB

  1. No evidence it wasn't correct, just that there were errors again in the next check. This was common, for example, for some users with a SAS2LP controller and certain disks: after every check there were the same 5 sync errors.
  2. Lots of call traces; start by running memtest. There are also a lot of OOM errors, so you need to limit resources.
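     If the OOM errors come from a Docker container, one common way to cap its memory is Docker's --memory flag (a sketch; the container name and limit below are illustrative):
         # add to the container's Extra Parameters in the template, e.g. --memory=4g
         # or apply to a running container from the console:
         docker update --memory=4g --memory-swap=4g binhex-plex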
  3. This sometimes helps with dropping NVMe devices: some NVMe devices have issues with power states on Linux. On the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append initrd=/bzroot":
     nvme_core.default_ps_max_latency_us=0
     e.g.:
     append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
     Reboot and see if it makes a difference.
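     For reference, the default boot entry in the syslinux configuration would end up looking roughly like this (stock Unraid layout; if your append line already has other options, keep them on the same line):
         label Unraid OS
           menu default
           kernel /bzimage
           append initrd=/bzroot nvme_core.default_ps_max_latency_us=0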
  4. This is usually the NIC, cable, switch, etc.; see if you can try different ones.
  5. Doesn't look like it. Use the disk paths and it will give you an I/O error for any corrupt file, then restore those from backups if available.
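     A quick way to surface those errors is to read every file on the disk and discard the output (a sketch; /mnt/disk1 is an example path, repeat for each disk you want to check):
         # any corrupt file will show up as a "cat: ...: Input/output error" message
         find /mnt/disk1 -type f -exec cat {} + > /dev/null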
  6. With 6.9+ you don't need those commands in the go file, remove everything; you just need to generate keys and copy them to /boot/config/ssh/root. https://forums.unraid.net/topic/51160-passwordless-ssh-login/?do=findComment&comment=1086269
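     Roughly like this from a client machine (the hostname and filename below are assumptions, check the linked post for the exact layout; note this overwrites any existing authorized_keys):
         # generate a key pair on the client if you don't already have one
         ssh-keygen -t ed25519
         # copy the public key to the Unraid flash drive so it persists across reboots
         scp ~/.ssh/id_ed25519.pub root@tower:/boot/config/ssh/root/authorized_keys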
  7. You can't do that. You can use the new 8TB for parity, then add the old parity drive to the array.
  8. Possibly a flash drive problem. Also make sure UEFI boot is enabled for Unraid: there should be an EFI folder on the flash drive; if it's named EFI- rename it to EFI.
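     You can rename it on any computer, or from the Unraid console (a sketch, assuming the flash drive is mounted at /boot as usual):
         mv /boot/EFI- /boot/EFI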
  9. Rather strange: the pool was working correctly without any errors logged before this boot, then out of the blue it started detecting checksum and other errors on two different devices. I suspect it might be related to some of the known issues that still exist with raid5/6; the most serious of these are fixed starting with kernel 5.20. It could be a pain, but I would suggest copying all the data to a different place and then re-formatting.
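     If you go that route, something like rsync preserves permissions and timestamps while copying (a sketch; the source and destination paths below are just examples):
         rsync -avh --progress /mnt/poolname/ /mnt/disk1/pool_backup/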
  10. Start by running a single stream iperf test in both directions to check network bandwidth.
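      For example, with iperf3 installed on both ends (the IP below is illustrative):
          # on the server (e.g. the Unraid box)
          iperf3 -s
          # on the client, test both directions (single stream is the default)
          iperf3 -c 192.168.1.100
          iperf3 -c 192.168.1.100 -R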
  11. Do you mean you cannot boot? If so, where does it stop?
  12. Something strange is going on there.
  13. It's not logged as a disk problem and disk5 looks healthy. Cancel the rebuild, replace the cables or change the slot for disk5, and try again.
  14. Diags are from after rebooting, so there's not much to see; if it happens again grab them before rebooting.
  15. No segfaults in the diags posted. Run memtest and see if you get any more.
  16. Aug 3 11:25:40 Tower kernel: md: import_slot: 4 empty
      Aug 3 11:25:40 Tower kernel: md: import_slot: 6 empty
      The array was started without disks 4 and 6 assigned; this makes Unraid emulate those disks. They were re-assigned after an array stop:
      Aug 3 11:34:51 Tower kernel: md: import_slot: 4 replaced
      Aug 3 11:34:51 Tower kernel: md: import_slot: 6 replaced
      And that will make a rebuild required.
  17. That suggests a problem with a controller or disk; unfortunately there's no easy way to tell which without testing one by one.
  18. Yes, both are good and fast devices. The MX500 has a firmware issue with false pending sectors, but that can easily be "solved" by not monitoring that attribute.
  19. Do you mean just restoring the backup to a different device made Unraid start to crash again?
  20. Enable the syslog server and post that together with the complete diagnostics after a crash.