
JorgeB

Moderators
  • Posts: 67,600
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. The one you had is supported; you deleted the diags, but IIRC it was a SAS2008 or 2308 chip. It's a weird issue: there's a user with two identical controllers and only one of them works.
  2. Unraid should still wipe the disk and reformat it with the correct starting sector. A 2048 starting sector won't cause issues with parity, but it might cause mounting issues in the future. Also, if you try to replace a disk using the standard sector-64 start with one of the same size that has a 2048 starting sector, it won't be possible, because that partition is smaller. So, to avoid any future issues, I would recommend wiping those disks and letting Unraid format them.
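A quick way to see where a partition starts is to parse `fdisk -l` output. This is a minimal sketch: the sample output below is hypothetical and inlined so the snippet is self-contained; on a live server you would run `fdisk -l /dev/sdX` directly (Unraid normally starts rotational array-disk partitions at sector 64).

```shell
# Hypothetical sample of `fdisk -l /dev/sdb` output; on a real server,
# feed the actual command output into the same awk filter.
fdisk_out='Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 7814037134 7814035087  3.6T Linux filesystem'

# Print the starting sector of each partition line.
printf '%s\n' "$fdisk_out" | awk '$1 ~ /^\/dev\// { print $1, "starts at sector", $2 }'
# → /dev/sdb1 starts at sector 2048
```

A start of 2048 on a rotational array disk is the case described above, where wiping and letting Unraid reformat is the safe fix.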
  3. Enable mover logging, run the mover, and post new diags.
  4. Not that I'm aware of, diagnostics might give some clues.
  5. It might help a little, but you should still be able to get close to line speed without it, so that's not the problem.
  6. I don't see any issues with the cache filesystem other than it being completely full; you just need to move/delete some data. The docker image might need to be recreated after that is done.
  7. Disks are not mounting because of read errors:

     Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653888
     Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653896
     Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653904
     ...
     Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548952
     Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548960
     Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548968
     ...
     Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953024
     Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953032
     Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953040

     etc. Check what those disks have in common (controller, expander, cables); there's likely a problem with one of those.
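When several disks error at once, tallying the read errors per disk makes the shared-hardware pattern obvious. A small sketch, with sample log lines inlined so it runs anywhere; on the server you would grep the real syslog instead:

```shell
# Sample syslog excerpt (one line per disk here for brevity); on a live
# system: grep 'read error' /var/log/syslog
log='Mar 27 15:05:58 Media kernel: md: disk13 read error, sector=8590653888
Mar 27 15:05:58 Media kernel: md: disk14 read error, sector=8590548952
Mar 27 15:05:59 Media kernel: md: disk16 read error, sector=8592953024'

# Count errors per disk: extract the diskNN token, sort, then tally.
printf '%s\n' "$log" | grep -o 'disk[0-9]*' | sort | uniq -c
```

If the affected disks all sit behind the same controller or expander, that component is the first suspect.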
  8. Also see here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  9. Are you formatting the disks before adding to the array?
  10. Clearly a network problem: you're getting under 10% of the expected performance. Since it's low with both NICs it's unlikely to be a NIC problem. Are both using the same switch, or is the 2.5GbE link direct? Also try a different source computer if available.
  11. Diags are from before starting the array; you can post new ones, or wait for more issues and post then.
  12. It's happening to disk9 now, so it looks like a slot problem, or the cables, since it's not clear to me whether disk9 is using the same cables disk4/7 were.
  13. You need to replace X with the correct letter, and since it's an NVMe device it would be: btrfs check /dev/nvme1n1p1
  14. Not that I can see, only that corruption was detected.
  15. Also, a scrub only checks data and metadata checksums; it doesn't look for filesystem corruption. You can check for that with the pool offline by typing: btrfs check /dev/sdX1 Note that if errors are found, running btrfs check in repair mode (btrfs check --repair) is considered dangerous and should only be done if advised.
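Since btrfs check must run with the pool offline, a small guard before running it can save a mistake. A minimal sketch (it only echoes the command rather than running it; /dev/nvme1n1p1 is the device named earlier in this thread, adjust for your system):

```shell
# Refuse to proceed if the device still appears in /proc/mounts.
dev=/dev/nvme1n1p1
if grep -q "^$dev " /proc/mounts 2>/dev/null; then
    echo "refusing: $dev is still mounted"
else
    # Read-only check only; --repair is dangerous and only if advised.
    echo "would run: btrfs check $dev"
fi
```

On Unraid the pool is offline when the array is stopped, which is when the read-only check is safe to run.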
  16. Sometimes rebooting can temporarily help, but you'll likely run into issues again.
  17. Correct, the cache filesystem corrupted first; it's then normal for that to cause issues with any images stored on it.
  18. It means the SSD reached its rated endurance (TBW); it doesn't mean it's failing.
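You can read the wear indicator from SMART data. A sketch with hypothetical sample output inlined (the values are made up for illustration); on the server you would run `smartctl -A` against the actual device:

```shell
# Hypothetical excerpt of `smartctl -A /dev/nvme0` output for an NVMe SSD.
smart='Percentage Used:                    100%
Data Units Written:                 1,234,567 [632 TB]'

# Extract the wear field; 100% means the rated TBW has been written,
# not that the drive has failed.
printf '%s\n' "$smart" | awk -F': *' '/Percentage Used/ { print "wear:", $2 }'
# → wear: 100%
```

Drives routinely keep working past 100%, but it's a reasonable trigger to start watching the SMART error counters.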
  19. No, but it's weird: only non-rotational devices should have the partition start on sector 2048. The disks were formatted by Unraid, correct? If you haven't rebooted since formatting, please post the diags.
  20. Logs are from after rebooting, so there's not much to see; if it happens again, try this.
  21. Logs are spammed with these:

      Mar 26 08:19:40 Bender kernel: DMAR: [DMA Read] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 06] PTE Read access is not set
      Mar 26 08:19:40 Bender kernel: DMAR: DRHD: handling fault status reg 2
      Mar 26 08:19:40 Bender kernel: DMAR: [DMA Read] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 06] PTE Read access is not set
      Mar 26 08:19:40 Bender kernel: DMAR: DRHD: handling fault status reg 2
      Mar 26 08:19:40 Bender kernel: DMAR: [DMA Read] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 06] PTE Read access is not set
      Mar 26 08:19:40 Bender kernel: DMAR: DRHD: handling fault status reg 2

      Can't see anything else; please reboot and post new diags after array start.
  22. If you formatted the cache and it's already corrupt, that suggests an underlying hardware issue; post new diags just so we can see the actual error.