JorgeB
Moderators · 67,831 posts · 708 days won

Everything posted by JorgeB

  1. I'm also using the NUT plugin. That's from the archived notifications.
  2. I get both:
     10-08-2022 15:35 UPS Status Alert [TOWER11] - UPS On battery The UPS is on battery
     10-08-2022 15:35 UPS Status Notice [TOWER11] - UPS Online The UPS is back on line
     Note that the "back to normal" notification is a notice, not an alert or warning, so you need to have notifications enabled for those too. AFAIK there's no notification for low battery shutdown.
  3. A new config would keep all data if the disks are being passed through completely, but you need to know the actual disk positions. You could then test; if it doesn't work you can always go back to VMware. Going to move this to the virtualizing Unraid subforum, you might get more help there from other VMware users.
  4. You should not have started a rebuild. In any case, the SSD assigned as disk9 dropped offline; stop the array, unassign disk9, then check the filesystem on the emulated disk9 (see the sketch for item 4 after this list).
  5. You can also get the diags from the console, see the link above for how to do it (and the sketch for item 5 after this list), but if all is well now it doesn't matter; get them if it happens again.
  6. Yes, perfectly safe to upgrade now. @Squid, is that warning still there, or maybe is he using an old release?
  7. Diags only show logged info until August 14th; reboot and post new diags after array start.
  8. Those are normal if you want to keep using the S3 sleep plugin; they're from the controllers waking up.
  9. It means you are using the raid0 profile for data (the default is raid1): better performance but no redundancy. Metadata is still raid1 for redundancy; metadata (and system) take very little space compared with data, so it's good to keep them raid1. You can confirm the current profiles from the console (see the sketch for item 9 after this list).
  10. Those are from the SATA port multiplier connected to disks 1, 2 and 12; that Marvell controller uses one for some of the ports. Leave disk2 there for now and move disks 1 and 12 to the enclosure. Ideally you'd replace that controller with a recommended non-Marvell one, or just move all those drives to the enclosure, but performance might not be the best.
  11. Aug 26 12:18:49 BlackIce kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Aug 26 12:18:49 BlackIce kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
      Macvlan call traces are usually the result of having dockers with a custom IP address and will end up crashing the server. Upgrading to v6.10 and switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
  12. BTRFS error (device nvme0n1p1): block=2530556395520 write time tree block corruption detected
      This is usually a sign of bad RAM, and there was also some data corruption found before, so start by running memtest.
  13. Assuming the disks are being completely passed through and just the names changed, you can do a new config and re-assign all disks to their original slots.
  14. There are multiple profiles; run a balance to convert to the one you want to use, the default is raid1 for redundancy (see the sketch for item 14 after this list).
  15. Try updating to v6.11.0-rc4 and running xfs_repair again; it comes with newer xfsprogs (see the sketch for item 15 after this list).
  16. Doesn't look like a disk problem. Try updating the LSI firmware to the latest (see the sketch for item 16 after this list); if errors persist, try connecting disk3 to the onboard SATA, you can swap it with another disk.
  17. NICs are failing to initialize:
      Aug 28 16:18:42 HearnServer kernel: ixgbe 0000:05:00.0: enabling device (0000 -> 0002)
      Aug 28 16:18:42 HearnServer kernel: ixgbe 0000:05:00.0: Adapter removed
      Aug 28 16:18:42 HearnServer kernel: ixgbe: probe of 0000:05:00.0 failed with error -5
      Aug 28 16:18:42 HearnServer kernel: ixgbe 0000:05:00.1: enabling device (0000 -> 0002)
      Aug 28 16:18:42 HearnServer kernel: ixgbe 0000:05:00.1: Adapter removed
      Aug 28 16:18:42 HearnServer kernel: ixgbe: probe of 0000:05:00.1 failed with error -5
  18. Looks more like a power/connection problem, check/replace cables and see if all the errors like these go away:
      Aug 28 18:03:29 Tower kernel: ata1.00: status: { DRDY }
      Aug 28 18:03:29 Tower kernel: ata1.00: failed command: READ FPDMA QUEUED
      Aug 28 18:03:29 Tower kernel: ata1.00: cmd 60/00:f0:20:0f:04/04:00:1e:00:00/40 tag 30 ncq dma 524288 in
      Aug 28 18:03:29 Tower kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      Aug 28 18:03:29 Tower kernel: ata1.00: status: { DRDY }
      Aug 28 18:03:29 Tower kernel: ata1: hard resetting link
      Aug 28 18:03:35 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
      Aug 28 18:03:38 Tower webGUI: Successful login user root from 192.168.1.83
      Aug 28 18:03:39 Tower kernel: ata1: COMRESET failed (errno=-16)
      Aug 28 18:03:39 Tower kernel: ata1: hard resetting link
      Aug 28 18:03:45 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
      Aug 28 18:03:49 Tower kernel: ata1: COMRESET failed (errno=-16)
      Aug 28 18:03:49 Tower kernel: ata1: hard resetting link
      Aug 28 18:03:55 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
      Aug 28 18:04:08 Tower kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
      Aug 28 18:04:08 Tower kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:e0 (SECURITY FREEZE LOCK) filtered out
      Aug 28 18:04:08 Tower kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:e0 (DEVICE CONFIGURATION OVERLAY) filtered out
      Aug 28 18:04:08 Tower kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:e0 (SECURITY FREEZE LOCK) filtered out
      Aug 28 18:04:08 Tower kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:e0 (DEVICE CONFIGURATION OVERLAY) filtered out
      Aug 28 18:04:08 Tower kernel: ata1.00: configured for UDMA/100
  19. You are using USB, which is not recommended for array/pool disks; one of the reasons is that device names can change. To fix it for now you need to do a new config and check "parity is already valid" before array start.
  20. Those errors look kvm related. Enable the syslog server, don't run any VMs, and post that log together with the complete diagnostics after a crash.
  21. If the drive is not detected there's not much chance of copying any data.
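
Sketch for item 4 — checking the filesystem on the emulated disk from the console. A minimal sketch, assuming disk9 is XFS and the array is started in maintenance mode; the device name depends on the Unraid release (/dev/md9 on older releases, /dev/md9p1 on newer ones).

    # Read-only check first (-n makes no changes):
    xfs_repair -n /dev/md9
    # If problems are reported, run the actual repair:
    xfs_repair /dev/md9
    # -L (zeroing the log) is a last resort and can lose recent metadata changes.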
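Sketch for item 5 — grabbing the diagnostics from the console, assuming a stock install where the diagnostics command is available:

    # Run from the local console or an SSH session:
    diagnostics
    # The resulting zip is saved on the flash drive (under /boot/logs on a stock install):
    ls /boot/logs/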
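Sketch for item 9 — confirming the current btrfs profiles from the console. Assumes the pool is mounted at /mnt/cache; substitute your pool's mount point.

    # Shows the profile per chunk type, e.g. Data, RAID0 / Metadata, RAID1:
    btrfs filesystem df /mnt/cache
    # More detail, including per-device allocation:
    btrfs filesystem usage /mnt/cache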
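Sketch for item 14 — converting a pool to the raid1 profile with a balance. Again assumes the pool is mounted at /mnt/cache; a convert rewrites every allocated chunk, so expect it to take a while on a well-filled pool.

    # Convert both data and metadata chunks to raid1:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
    # Check progress from another shell:
    btrfs balance status /mnt/cache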
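Sketch for item 15 — confirming which xfsprogs version is in use, since the advice hinges on v6.11.0-rc4 shipping a newer one:

    # Prints the xfsprogs version that xfs_repair was built from:
    xfs_repair -V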
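Sketch for item 16 — checking the current LSI firmware before updating. This assumes a SAS3 (93xx-series) controller with the sas3flash utility on hand; SAS2 (92xx-series) cards use sas2flash instead, and the actual flashing procedure depends on the exact card and firmware package, so follow the vendor instructions for that part.

    # Identify the LSI chip:
    lspci | grep -i lsi
    # List adapters with their current firmware and BIOS versions:
    sas3flash -list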