Everything posted by JorgeB

  1. First make sure that's your actual problem; 72H seems like a lot, it could be some controller bottleneck or other config issue, difficult to say without more info.
  2. The fastest way is to run multiple disk-to-disk copy sessions with rsync or something similar, without parity of course; I can usually get around 400MB/s sustained for an initial server sync, and it could be even faster without using SSH. For a single disk copy you'll get 100 to 200MB/s depending on the disks used (there's an rsync sketch after this list).
  3. No, maintenance mode won't attempt to mount the disks.
  4. Please post the diagnostics (downloaded after array start).
  5. It's perfectly safe, why wouldn't it be? It will just limit the available bandwidth (you can check the negotiated link speed with the lspci sketch after this list); I tested the bandwidth of some HBAs in different slots here:
  6. Cache1 dropped offline, check/replace cables then post new diags after array start.
  7. No, it happened before:
     Jan 22 11:34:40 Tower emhttpd: copy: disk1 to disk29 running
     ...
     Jan 22 22:09:12 Tower kernel: kernel BUG at fs/buffer.c:3351!
     ...
     Jan 23 10:23:58 Tower emhttpd: copy: disk1 to disk29 completed
     If you reboot you'll just need to start over with the copy, and see if there's no crash this time.
  8. There was a kernel crash in the middle of the copy operation, but according to the syslog it completed; not sure if the crash is related or not. Try refreshing the GUI, and if the rebuild option still doesn't appear I would reboot and start over.
  9. The amount of writes reported in the GUI is basically meaningless since it can vary wildly with the device/controller used; you need to check the SSD SMART report, and then check it again after 24H, to see the actual writes (see the smartctl sketch after this list).
  10. It's what btrfs is reporting, and I bet it's correct, but even if it isn't it's not an Unraid problem; at most it could be a btrfs issue, and you'd need to report it, for example on the btrfs mailing list or their IRC channel.
  11. So that confirms the problem is network related, could be NIC, cable, NIC driver, etc.
  12. Do it the other way around since the problem is only in reading.
  13. Run a single-stream iperf test to rule out network issues (see the iperf3 sketch after this list).
  14. It's logged more like a connection/power issue, but since it failed in a different slot it's likely a disk problem.
  15. Problem with the HBA:
      Jan 24 09:33:38 unRaid kernel: mpt3sas_cm0: SAS host is non-operational !!!!
      Make sure it's well seated and sufficiently cooled, you can also try a different PCIe slot if available, failing that try a different HBA.
  16. The diags you posted didn't show a rebuild, but yeah, in that case you should replace it.
  17. Once a device gets disabled it needs to be rebuilt; just changing cables/slot won't fix anything. You can rebuild and see if the problem occurs again, and if it does, replace the disk.
  18. You can generate a new UUID with:
      xfs_admin -U generate /dev/sdX1
      (there's a fuller sketch after this list)
      P.S. next time you might want to post the diags, you'd get an answer sooner.
  19. If you mean using an existing pool in a new server, you just need to assign all the pool members and start the array; the existing pool will be imported.
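
For the disk-to-disk copies mentioned in post 2, a minimal sketch of one rsync session, assuming the disks are mounted at the usual /mnt/diskN paths and parity is not assigned yet; the disk numbers are only placeholders:

    # copy the full contents of disk1 onto disk2, preserving attributes and showing progress
    rsync -avX --progress /mnt/disk1/ /mnt/disk2/
    # run more sessions in parallel in other terminals for other disk pairs, e.g.
    # rsync -avX --progress /mnt/disk3/ /mnt/disk4/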
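
On the PCIe slot question in post 5, a quick way to see how much bandwidth the slot actually gives the HBA is to compare the link capability with the negotiated link; a minimal sketch, assuming an LSI/Broadcom HBA (PCI vendor ID 1000), run as root:

    # LnkCap shows what the card supports, LnkSta shows what the slot negotiated
    lspci -vv -d 1000: | grep -E 'LnkCap:|LnkSta:'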
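
For the SSD writes check in post 9, a minimal sketch, assuming the device is /dev/sdX (a placeholder); the relevant attribute name varies by model, e.g. Total_LBAs_Written or Host_Writes on SATA drives and "Data Units Written" on NVMe:

    # SATA SSD: note the writes attribute now, then again after 24H and compare
    smartctl -A /dev/sdX
    # NVMe SSD: same idea, check "Data Units Written" in the health log
    smartctl -A /dev/nvme0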
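
For the single-stream iperf test in post 13, a minimal sketch, assuming iperf3 is installed on both ends and 192.168.1.100 is the server's IP (a placeholder):

    # on the server (e.g. the Unraid box)
    iperf3 -s
    # on the client: one stream (the default), 30 seconds
    iperf3 -c 192.168.1.100 -t 30
    # repeat with -R to test the reverse direction
    iperf3 -c 192.168.1.100 -t 30 -R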
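
For the UUID change in post 18, a slightly fuller sketch, assuming the filesystem is on /dev/sdX1 (a placeholder) and is not mounted:

    # check the current UUID first
    xfs_admin -u /dev/sdX1
    # write a new random UUID, then mount the disk again
    xfs_admin -U generate /dev/sdX1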