Everything posted by JorgeB

  1. Yes, strange, according to the syslog the link is up:

     Dec 15 10:38:08 Bifrost kernel: e1000e 0000:00:1f.6 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

     And there's nothing else related to eth0 after that.
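     If it helps, a quick sanity check from the console (assuming the interface is still named eth0) is to see what the kernel currently reports for the port:

     ethtool eth0
     ip -s link show eth0

     The first shows the negotiated speed/duplex and "Link detected", the second shows error and drop counters for the interface.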
  2. Yes, drives added later will need to be cleared, but since they will be empty that's not a problem.
  3. No, but there could be issues with the shucked drive if its size is not the same as it was in the enclosure, and/or if it's hardware encrypted by the enclosure, as some WD USB drives are.
  4. They still don't show up for me, and I never created any rule.
  5. Worth trying, though it seems unlikely to me; I suspect it's more likely some sort of compatibility issue with the WD disks and the expander/LSI combo.
  6. Strange issue, the expander is being correctly detected:

     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: Enclosure HP HP SAS EXP Card 2.06 PQ: 0 ANSI: 5
     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: set ignore_delay_remove for handle(0x000e)
     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: SES: handle(0x000e), sas_addr(0x500143801001d6e5), phy(36), device_name(0x0000000000000000)
     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: enclosure logical id (0x500143801001d6e5), slot(0)
     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: qdepth(254), tagged(1), scsi_level(6), cmd_que(1)
     Dec 15 06:02:42 Legion kernel: scsi 9:0:4:0: Attached scsi generic sg1 type 13

     But the 4 disks are failing to initialize, maybe a timeout issue?

     Dec 15 06:02:42 Legion kernel: mpt2sas_cm0: log_info(0x31110101): originator(PL), code(0x11), sub_code(0x0101)
     ### [PREVIOUS LINE REPEATED 3 TIMES] ###
     Dec 15 06:02:42 Legion kernel: end_device-9:0:0: add: handle(0x000a), sas_addr(0x500143801001d6c4)
     Dec 15 06:02:42 Legion kernel: mpt2sas_cm0: log_info(0x31110101): originator(PL), code(0x11), sub_code(0x0101)
     ### [PREVIOUS LINE REPEATED 3 TIMES] ###
     Dec 15 06:02:42 Legion kernel: end_device-9:0:1: add: handle(0x000b), sas_addr(0x500143801001d6c5)
     Dec 15 06:02:42 Legion kernel: mpt2sas_cm0: log_info(0x31110101): originator(PL), code(0x11), sub_code(0x0101)
     ### [PREVIOUS LINE REPEATED 3 TIMES] ###
     Dec 15 06:02:42 Legion kernel: end_device-9:0:2: add: handle(0x000c), sas_addr(0x500143801001d6c6)
     Dec 15 06:02:42 Legion kernel: mpt2sas_cm0: log_info(0x31110101): originator(PL), code(0x11), sub_code(0x0101)
     ### [PREVIOUS LINE REPEATED 3 TIMES] ###
     Dec 15 06:02:42 Legion kernel: end_device-9:0:3: add: handle(0x000d), sas_addr(0x500143801001d6c7)

     Do you have a different brand disk/device you could use to test?
  7. I posted a link above on how to update it from inside Unraid.
  8. Disk1 is also failing, which means single parity can't help; you can try using ddrescue on both failing disks.
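     A minimal ddrescue sketch for cloning a failing disk to a replacement of equal or larger size (the device names and mapfile path below are placeholders, double-check them before running, the copy overwrites the target):

     ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue-disk.map
     ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue-disk.map

     The first pass copies the good areas and records the bad ones in the mapfile, re-running with -r3 then retries the bad sectors a few times.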
  9. Unfortunately no call traces logged there, so not much to see.
  10. Dec 14 19:53:53 Skynet kernel: xhci_hcd 0000:29:00.3: remove, state 1
      Dec 14 19:53:53 Skynet kernel: usb usb6: USB disconnect, device number 1
      Dec 14 19:53:53 Skynet kernel: usb 6-2: USB disconnect, device number 2
      Dec 14 19:53:53 Skynet kernel: md: disk5 write error, sector=41632536

      Device disconnected, we don't recommend USB devices for array or pools.
  11. Yes, not sure what happened to disk1, you'd need to back up and re-format, or ask for help e.g. on the xfs mailing list.
  12. Looks like a BIOS issue, either a bug or it's not correctly configured; look for a BIOS update and re-check the config.
  13. The HBA is being correctly detected and initialized, but it's on very old firmware; you can upgrade to see if that helps, it seems unlikely to fix this, but it should still be upgraded.
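      As a rough sketch, assuming an LSI SAS2 HBA with sas2flash and the firmware/BIOS files copied to the flash drive (the filenames below are just examples), the update from the Unraid console would look something like:

      sas2flash -listall
      sas2flash -o -f 2118it.bin -b mptsas2.rom

      The first command confirms the controller and its current firmware version, the second flashes the new firmware and boot ROM.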
  14. A 250MB/s average speed is impossible with the currently assigned array devices; it's most likely the result of the 8TB rebuild and an old bug where the speed shown is calculated against the full parity size instead of the actual size rebuilt.
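      As a made-up example of how that bug inflates the number: rebuilding 8TB in about 16 hours is a real average of roughly 8,000,000 MB / 57,600 s ≈ 139 MB/s, but if the GUI divides a (hypothetical) 14TB parity size by the same elapsed time it shows about 14,000,000 MB / 57,600 s ≈ 243 MB/s.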
  15. It's difficult to say for sure, it might or might not; it also depends on the USB hardware, whether it respects write barriers or not, and if it doesn't and writes are lost the pool will go south quickly.
  16. It should still list the USB device (without UEFI in front) if CSM boot is enabled.
  17. It's in single mode, since it crashed before the balance started, and it will likely crash again if you try to balance again; my recommendation is still to back up and restore. No.
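      If you want to confirm the current profile you can check from the console (the mount point below is an example, adjust it to your pool):

      btrfs filesystem df /mnt/cache

      The Data and Metadata lines will show single vs. raid1.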
  18. It's not recommended to use USB drives as array or pool members, but it's allowed.
  19. Btrfs crashed during the initial balance, which suggests there was some existing corruption in that filesystem; best bet is to backup and re-format the pool, there are some recovery options here if needed. You need to reboot, likely you'll need to force it, and if the pool crashes again on the next mount, reboot again and try the recovery options before wiping it.
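      As a general sketch of common btrfs recovery steps (the device and destination paths below are placeholders; copy the data off rather than writing anything back to the pool):

      mkdir -p /x
      mount -o ro,usebackuproot /dev/sdX1 /x        # read-only mount using an older tree root
      btrfs restore -v /dev/sdX1 /mnt/disk1/restore # if it won't mount at all, pull the files directly off the device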
  20. I hid the FAQ entry about that when Unraid started supporting multiple pools, here it is: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=462135
  21. Yes. Tom mentioned concerns over some buggy UEFI BIOSes, which should be irrelevant by now.
  22. It's detecting a hardware issue, not sure if it's a serious one or not, but if it's restarting it probably is.