JorgeB
Moderators
  • Posts: 67,884
  • Days Won: 708

Everything posted by JorgeB

  1. Possibly. Try backing it up and re-formatting it; if it's the same after that, replace it.
  2. If it's taking that long it won't generate them; get at least the syslog:
     cp /var/log/syslog /boot/syslog.txt
     Then attach it here.
  3. Difficult to say more without the v6.11.3 diags, other than that the pool was also showing issues on v6.11.1. Besides the corruption already mentioned, there's also this:
     Nov 11 11:20:46 Tower kernel: BTRFS error (device nvme2n1p1): incorrect extent count for 6093506871296; counted 8236, expected 8224
     It looks like it's just a log tree problem, so it might be fixable by zeroing it, but make sure the pool is backed up before trying. With the array stopped:
     btrfs rescue zero-log /dev/nvme2n1p1
     Then start the array; if the pool mounts, run a scrub and post new diags when it ends (a sketch of the full sequence follows below).
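     A minimal sketch of that sequence from the console, assuming the pool device is /dev/nvme2n1p1 (as in the error above) and the pool is mounted at /mnt/cache; the mount point is an assumption, substitute your pool's name, and the array itself is started/stopped from the Main page:
         # with the array stopped, clear the btrfs log tree on the pool device
         btrfs rescue zero-log /dev/nvme2n1p1
         # start the array from the GUI; if the pool mounted, scrub it
         btrfs scrub start /mnt/cache
         # check progress and the final error count
         btrfs scrub status /mnt/cache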
  4. Disk 4 is failing and needs to be replaced before you can sync parity. Since there's no parity, you could copy everything you can to another disk, or use ddrescue to clone it and try to recover as much data as possible (rough sketch below).
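     A rough ddrescue sketch, with all names hypothetical: /dev/sdX is the failing disk, /dev/sdY the replacement (double-check both before running, the target gets overwritten), and it assumes ddrescue is installed, which it isn't on a stock Unraid install:
         # first pass: grab the easy areas, keep a map file on the flash drive so the run can be resumed
         ddrescue -f -n /dev/sdX /dev/sdY /boot/ddrescue.map
         # second pass: retry the bad areas a few times
         ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map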
  5. SMART looks OK, but wait for the test result. If it passes, do a new config with the old disk5, re-sync parity, then try the replacement again.
  6. Suggest you see here for better pool monitoring, so you are notified if it happens again.
  7. You can use the Unraid Discord channel; you're likely to find someone there willing to answer your questions.
  8. Look for a BIOS update, but if that doesn't help, booting with CSM should not create any issues.
  9. If it happens again, enable the syslog server and post that after a crash.
  10. IPMI console should still work. Do you get a blank screen during/right after boot, or only after some minutes? By default the console is blanked after a few minutes.
  11. Looks like the 2nd device is not part of the pool; please post the diagnostics.
  12. Are these shucked drives? If yes, they might have the 3.3V issue; you can test by connecting one of them using a Molex-to-SATA adapter.
  13. If you are using v6.11.2, upgrade to v6.11.3; if not, please post the diagnostics.
  14. Not with that partition layout. Some disks have been known to damage the partition table after a power cut, don't ask me why but I've seen it before. Unraid uses the 1st partition only, and it requires the disk to have a single partition that uses the full disk capacity; your disk has multiple partitions and the 1st one is 512MB, hence the "invalid partition layout" problem. As mentioned, see if the UD plugin can mount the disk, it doesn't care about the layout as long as the partitions are valid, which by the looks of the fdisk output they might not be (see the sketch below for how to check).
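     To compare the layouts yourself, a quick check from the console; /dev/sdX is a placeholder, use the device letter shown on the Main page:
         # print the partition table: a disk partitioned by Unraid shows a single
         # partition spanning essentially the whole device
         fdisk -l /dev/sdX
         # lsblk gives a quick overview of the partition sizes as well
         lsblk /dev/sdX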
  15. Nope, the issue was only when partitioning/formatting new disks.
  16. Click on the pool and scroll down to the "Scrub" section.
  17. Simultaneous errors on so many disks are usually a power/connection or controller problem. I'm not seeing anything in the log about the controller, so power down, check all connections, and power back up; if it's an external enclosure, check the power/connection to it as well.
  18. That disk doesn't have the required Unraid partition layout and will never mount in the array; you can see if it mounts with the UD plugin.
  19. OK, check filesystem on disk1 (without -n), then unassign disk2 and see if it mounts with UD (order as shown on the last diags); a sketch of the disk1 check follows below.
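     The usual route for the disk1 check is the GUI: stop the array, start it in maintenance mode, click the disk and run the filesystem check without the -n flag. A rough console equivalent, assuming disk1 is XFS and the maintenance-mode device is /dev/md1 (both are assumptions, adjust to your setup; on newer releases the device may be /dev/md1p1):
         # run the repair for real, i.e. without -n (no-modify mode)
         xfs_repair -v /dev/md1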