Everything posted by JorgeB

  1. It's an expander backplane, you can see the back of the expander chip and also the sticker with the SAS address, but it could be the SAS1 model, and those are limited to 2TB disks. You should contact Supermicro; since the serial is visible they should be able to identify the model.
  2. The array is XFS; btrfs is much more sensitive to bad RAM. That said, if bad RAM is the problem you'll also get data corruption on the array, just undetected.
  3. Not stock, though if notifications are enabled you get warnings whenever any of the monitored attributes changes or a SMART test fails, and you can easily run a script with the User Scripts plugin to do that; there's a sketch after this list.
  4. Disks look OK. The diags are from after rebooting, so we can't see what happened, but multiple disk errors are usually a power/connection/controller problem. One thing you should do is update the LSI to the latest firmware, since it's on a very old one (see the sketch after this list); then, if it happens again, grab diags before rebooting.
  5. This is normal with SAS devices; SMART on the GUI only works correctly with SATA, but you can read it from the command line (see the sketch after this list). Diagnostics taken before rebooting might give some clues.
  6. This type of read error can be intermittent, i.e., the disks can work today and fail tomorrow.
  7. Looks like Veeam doesn't work with everything: https://forums.veeam.com/vmware-vsphere-f24/xfs-reflinks-enable-not-recognized-t73927.html (see the sketch after this list for checking whether an XFS filesystem has reflinks enabled).
  8. Problem is that this device wasn't decrypted after the reboot:

     Aug 30 20:40:37 Server emhttpd: import 32 cache device: (sdc) SanDisk_SSD_PLUS_240GB_184302A005B3

     It's strange, since the device was there, but because it wasn't decrypted it can't be used by btrfs, so it was as if the device wasn't present and the pool balanced to single. Not sure why this is happening; if you have a spare, try replacing that SSD with a different one, and if it still happens it's likely a bug (see the sketch after this list for checking the decryption state).
  9. This is a RAID controller and not recommended for Unraid, but it might work if you initialize/export the disks. Note that all disks are detected by the controller:

     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:0:0: added RAID HP H240 controller SSDSmartPathCap- En- Exp=1
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:1:0: masked Direct-Access HITACHI HUS723020ALS641 PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:2:0: masked Direct-Access ATA WDC WD20EARX-008 PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:3:0: masked Direct-Access HITACHI HUS723020ALS641 PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:4:0: masked Direct-Access ATA WDC WD20EURX-63T PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:5:0: added Direct-Access ATA WDC WD20EURX-64H PHYS DRV SSDSmartPathCap- En- Exp=1
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:6:0: masked Direct-Access ATA WDC WD20EURX-63T PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:7:0: masked Direct-Access ATA ST8000VN0022-2EL PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:8:0: masked Direct-Access ATA WDC WD20EURX-64H PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:9:0: added Direct-Access ATA WDC WD30EURX-63T PHYS DRV SSDSmartPathCap- En- Exp=1
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:10:0: added Direct-Access ATA WDC WD30EURX-63T PHYS DRV SSDSmartPathCap- En- Exp=1
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:11:0: masked Direct-Access ATA CT500MX500SSD1 PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:12:0: masked Direct-Access ATA Samsung SSD 840 PHYS DRV SSDSmartPathCap- En- Exp=0
     Aug 30 17:14:50 UnDragon kernel: hpsa 0000:0d:00.0: scsi 7:0:13:0: masked Enclosure HP Gen8 ServBP 12+2 enclosure SSDSmartPathCap- En- Exp=0

     Looks like the ones being detected by Unraid are the ones with Exp=1 at the end. I never used any of these controllers, but maybe Exp means export; check the controller BIOS for options (and see the ssacli sketch after this list).
  10. Unfortunately there's nothing logged about the crash:

      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:31:37 Tower kernel: w83795 0-002f: Failed to set bank to 128, err -6
      Aug 30 07:46:37 Tower kernel: microcode: microcode updated early to revision 0x1f, date = 2018-05-08

      Mostly the log spam mentioned by trurl; the last entry is from after the crash.
  11. The extended SMART test failed on both disks with a read error; this is a disk problem and can't be controller/cable related (see the sketch after this list for running and reading the test).
  12. It's not a software problem; the NIC is not being detected at the hardware level, as if there were no NIC there at all.
  13. I don't know, I only use Mellanox, but before we can even check whether there's a driver, the NIC must be detected at the hardware level, and it's not.
  14. If you mean the array is not auto-starting after boot, it's because auto-start is disabled; enable it in Settings -> Disk Settings (there's also a note after this list on checking the setting on the flash drive).
  15. The NIC is not being detected by Linux/Unraid in the device list, so this is not a software issue (see the lspci sketch after this list); try a different slot if available, or check if the card works in a different computer.
  16. Unfortunately there's nothing logged that points to what's causing the crashes. I'd suggest downgrading to the previous version you were running, to confirm whether the crashing stops and rule out, for example, a hardware problem.
  17. Do you mean the unassigned device? If yes, please use the existing plugin support thread:
  18. This usually indicates a drive error; xfs_repair won't work if there are read errors. You can try cloning the disk with ddrescue and then running xfs_repair on the clone (see the sketch after this list).
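
A few command-line sketches for the items above. For item 3, this is roughly what a User Scripts script could look like for warning when monitored SMART attributes change; the attribute IDs, the state path on the flash drive, and the notify helper path are my assumptions, adjust to taste.

```
#!/bin/bash
# Sketch: warn when monitored SMART attributes change between runs.
NOTIFY=/usr/local/emhttp/webGui/scripts/notify  # assumed path of Unraid's notification helper
STATE=/boot/config/smart-state                  # hypothetical state dir on the flash drive
mkdir -p "$STATE"

for dev in /dev/sd?; do
  # IDs 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector); field 10 is RAW_VALUE
  now=$(smartctl -A "$dev" 2>/dev/null | awk '$1==5 || $1==197 {print $1"="$10}')
  f="$STATE/$(basename "$dev")"
  if [ -f "$f" ] && [ "$now" != "$(cat "$f")" ]; then
    "$NOTIFY" -s "SMART change on $dev" -d "was: $(cat "$f") now: $now" -i warning
  fi
  echo "$now" > "$f"
done
```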
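For item 4, LSI firmware is normally checked and flashed with Broadcom's sasXflash tools; a minimal sketch assuming a SAS3 HBA in IT mode (the firmware filename is hypothetical, use the package that matches your exact controller, and sas2flash for SAS2 HBAs).

```
# Show adapter, firmware and BIOS versions currently flashed
sas3flash -list

# Flash a newer IT-mode image (filename is hypothetical)
sas3flash -o -f 9300_8i_IT.bin
```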
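For item 5, SAS SMART data can still be read from the command line even though the GUI page is SATA-oriented; a quick sketch (device name is an example).

```
# SAS drives report health via SCSI log pages rather than ATA attributes,
# which is why the GUI page comes up mostly empty; smartctl reads both
smartctl -a /dev/sdb                # full report; works for SAS and SATA
smartctl -H -d scsi /dev/sdb        # health self-assessment only
smartctl -l error -d scsi /dev/sdb  # SCSI error counter log
```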
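For item 7, a sketch for checking whether an XFS filesystem was created with reflink support (the mount point is an example).

```
# reflink=1 in the xfs_info output means the filesystem supports reflinks
xfs_info /mnt/disk1 | grep -o 'reflink=[01]'

# Functional test: a forced reflink copy fails where reflinks aren't supported
touch /mnt/disk1/src
cp --reflink=always /mnt/disk1/src /mnt/disk1/dst
```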
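For item 8, if the pool degrades again, checking whether the device was actually unlocked should narrow it down; a sketch, with the mapper name being an assumption.

```
# A pool member that was never decrypted won't have a dm-crypt mapping
ls /dev/mapper/
cryptsetup status sdc1   # mapping name is an assumption

# And btrfs will list the pool with a device missing
btrfs filesystem show
```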
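For item 9, besides the controller BIOS, HPE's ssacli tool can show how the controller presents each drive and, on models that support it, switch to HBA mode so disks pass straight through; I've never used these controllers, so treat this sketch as a pointer, not a recipe (the slot number is an example).

```
# Show controllers and how each physical drive is being presented
ssacli ctrl all show config

# On controllers that support it, HBA mode passes every disk straight
# through instead of masking them
ssacli ctrl slot=0 modify hbamode=on
# a reboot is normally required after switching modes
```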
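For item 11, this is how to run and read the extended test with smartctl (device name is an example).

```
smartctl -t long /dev/sdc   # start the extended self-test (takes hours)
smartctl -a /dev/sdc        # check the self-test log once it's done
# "Completed: read failure" with an LBA listed is a media error on the
# disk itself, which cabling or the controller can't cause
```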
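For item 14, to my knowledge the auto-start flag is also visible on the flash drive, though the GUI is the supported way to change it; the file location is an assumption.

```
# Inspect the current auto-start setting (file location assumed)
grep startArray /boot/config/disk.cfg
# startArray="yes" means the array starts automatically after boot
```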
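For items 12, 13 and 15, this is how to confirm whether a NIC is detected at the hardware level at all.

```
# If the card doesn't show up here, no driver can help; it's a
# hardware/slot/BIOS problem rather than a software one
lspci -nn | grep -iE 'ethernet|network'

# Kernel messages sometimes show a card that appears and then drops off
dmesg | grep -iE 'eth[0-9]|network'
```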
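For item 18, a sketch of the clone-then-repair sequence (device names are examples; the clone target must be at least as large as the source).

```
# Clone the failing disk first; the map file lets the copy resume and
# retry around bad sectors
ddrescue -f /dev/sdb /dev/sdc /boot/ddrescue.map

# Then repair the filesystem on the clone, never on the failing original
xfs_repair /dev/sdc1   # Unraid data disks keep the filesystem on partition 1
```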