JorgeB

Moderators
  • Posts

    67,755
  • Joined

  • Last visited

  • Days Won

    708

Everything posted by JorgeB

  1. Cache1 dropped offline, check/replace cables then post new diags after array start.
  2. No, it happened before:

     Jan 22 11:34:40 Tower emhttpd: copy: disk1 to disk29 running
     ...
     Jan 22 22:09:12 Tower kernel: kernel BUG at fs/buffer.c:3351!
     ...
     Jan 23 10:23:58 Tower emhttpd: copy: disk1 to disk29 completed

     If you reboot you'll just need to start the copy over, then see if it completes without a crash this time.
  3. There was a kernel crash in the middle of the copy operation, but according to the syslog it completed; not sure if the crash is related or not. Try refreshing the GUI, and if the rebuild option still doesn't appear, I would reboot and start over.
  4. The amount of writes reported in the GUI is basically meaningless; it can vary wildly with the device/controller used. You need to check the SSD's SMART report, then check it again after 24 hours to see the actual writes.
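On many SATA SSDs the relevant counter is SMART attribute 241 (Total_LBAs_Written), read with `smartctl -A /dev/sdX`. A minimal sketch of the 24-hour delta calculation, using made-up readings (the attribute unit is commonly 512-byte sectors, but check your drive's datasheet):

```shell
# Hypothetical raw values of attribute 241, read 24 hours apart
before=1503238553   # made-up reading at hour 0
after=1503763865    # made-up reading 24h later
# Many SATA SSDs count this attribute in 512-byte units (assumption)
bytes=$(( (after - before) * 512 ))
echo "$(( bytes / 1024 / 1024 )) MiB written in 24h"
```

Comparing this delta against the drive's rated endurance is what tells you whether the write load is actually a concern.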
  5. It's what btrfs is reporting, and I bet that it's correct, but if it isn't, it's not an Unraid problem; at most it could be a btrfs issue, and you'd need to report it, for example on the btrfs mailing list or their IRC channel.
  6. So that confirms the problem is network related, could be NIC, cable, NIC driver, etc.
  7. Do it the other way around since the problem is only in reading.
  8. Run a single stream iperf test to rule out network issues.
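A sketch of such a test, assuming iperf3 is installed on both machines (the hostname is a placeholder):

```shell
# On the server side (e.g. the Unraid box):
iperf3 -s
# On the client, run a single-stream test (iperf3's default) for 30 seconds:
iperf3 -c tower.local -t 30
# Add -R to reverse direction and test the other path:
iperf3 -c tower.local -t 30 -R
```

A single stream matters here because parallel streams (`-P`) can mask per-connection problems such as retransmits on one flaky link.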
  9. It's logged more like a connection/power issue, but since it failed in a different slot it's likely a disk problem.
  10. Problem with the HBA:

      Jan 24 09:33:38 unRaid kernel: mpt3sas_cm0: SAS host is non-operational !!!!

      Make sure it's well seated and sufficiently cooled; you can also try a different PCIe slot if available, and failing that, try a different HBA.
  11. Diags you posted didn't show a rebuild, but yeah, in that case you should replace it.
  12. Once a device gets disabled it needs to be rebuilt; just changing cables/slot won't fix anything. You can rebuild and see if the problem occurs again, and if it does, replace the disk.
  13. You can generate a new UUID with:

      xfs_admin -U generate /dev/sdX1

      P.S. Next time you might want to post the diags; you'd get an answer sooner.
  14. If you mean using an existing pool in a new server, you just need to assign all the pool members and start the array; the existing pool will be imported.
  15. Check this: https://forums.unraid.net/topic/103938-69x-lsi-controllers-ironwolf-disks-disabling-summary-fix/?do=getNewComment
  16. If you don't need graphics use the x16 slot; x4 used with a PCIe 3.0 HBA still has plenty of bandwidth for 8 drives. Just keep in mind that slot goes through the DMI, so bandwidth is shared with the remaining SATA ports, etc.
  17. Not surprisingly, the GUI is showing the correct usage: btrfs reports around 165GiB used (or 177GB), and that is the actual used space; as mentioned, du isn't reliable with btrfs.
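The 165GiB/177GB pair is the same amount of space in binary vs decimal units; a quick shell check of the conversion (the `btrfs filesystem usage` path is an example):

```shell
# btrfs filesystem usage /mnt/cache   # the reliable way to query used space
# GiB counts 1024^3 bytes, GB counts 1000^3 bytes:
gib_used=165
gb_used=$(( gib_used * 1024 * 1024 * 1024 / 1000 / 1000 / 1000 ))
echo "${gib_used} GiB = ${gb_used} GB"
```

`du` walks file sizes and misses btrfs metadata and copy-on-write accounting, which is why it disagrees with what the filesystem itself reports.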
  18. There's clearly a hardware issue there; start by running memtest, and if that doesn't find any issues, the board/controller would be my next suspect.
  19. Unlikely; the GUI should be reporting what btrfs reports as used, but please post the diagnostics to confirm.