Everything posted by JorgeB

  1. Check the filesystem on disk5: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui If it's fixed and the emulated disk mounts and the data looks correct, you can rebuild on top; I'd recommend replacing/swapping the cables first, just to rule them out in case it happens again on the same disk (a command-line sketch follows this list).
  2. Please post the diagnostics: Tools -> Diagnostics (a command-line alternative is sketched after this list).
  3. I think maybe @Hoopster was thinking of reallocated sectors? A single pending sector is a problem, unless it's a "false positive", i.e. there are no read errors and no failed SMART test. If you're having disk read errors then the disk likely has bad sectors, so any data there can't be read, the parity sync will be invalid, and if any other disk fails it can't be rebuilt correctly, so you should replace any such disk. It's also a good idea to post the diagnostics (Tools -> Diagnostics) so we can have a better look, and you can run an extended SMART test on that disk to confirm whether or not it's failing (see the smartctl sketch after this list).
  4. There's another shfs memory-related report, but in that case it even happens on a clean install without any dockers/VMs running; can you confirm that in your case it only happens with Docker enabled? It's strange that only two users have this issue, especially since it's not even reproducible the same way.
  5. Keyboard should be immediately available, even before the menu appears, so you can for example enter the BIOS.
  6. That's OK but this one was really a disk problem, not cable related.
  7. Yes, there's not enough space on the target disk; you can use a disk larger than 4TB (mounted with UD, not in the array).
  8. With a little luck the parity sync will finish without error, but disk1 should then be replaced ASAP.
  9. According to the diags it's completely stalled; there's an ATA error for disk1 and it's not reporting SMART data. Power down, check/replace the cables on disk1, and try again.
  10. It might happen mostly under more load, or even a specific type of load.
  11. I wouldn't say the motherboard is bad, since this happens with multiple Ryzen models, so it's possibly a kernel/compatibility issue. You can also try v6.9-beta1, which uses a much newer kernel; if it's still the same then a different model board might help, if you're lucky.
  12. The problem is that the Marvell controller is failing to identify the disks; this is a known issue with Marvell and the IOMMU enabled, though the 9230 is usually the more affected model. If the workaround above didn't work, it should work with the IOMMU disabled (if you don't need it; see the sketch after this list):
      Apr 2 10:26:05 Rack kernel: ata9.00: qc timeout (cmd 0xec)
      Apr 2 10:26:05 Rack kernel: ata9.00: failed to IDENTIFY (I/O error, err_mask=0x4)
      Apr 2 10:26:05 Rack kernel: ata10.00: qc timeout (cmd 0xec)
      Apr 2 10:26:05 Rack kernel: ata10.00: failed to IDENTIFY (I/O error, err_mask=0x4)
      Apr 2 10:26:05 Rack kernel: ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Apr 2 10:26:05 Rack kernel: ata10: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Apr 2 10:26:05 Rack kernel: DMAR: DRHD: handling fault status reg 102
      Apr 2 10:26:05 Rack kernel: DMAR: [INTR-REMAP] Request device [04:00.0] fault index 1b [fault reason 38] Blocked an interrupt request due to source-id verification failure
      Apr 2 10:26:05 Rack kernel: DMAR: DRHD: handling fault status reg 202
      Apr 2 10:26:05 Rack kernel: DMAR: [INTR-REMAP] Request device [04:00.0] fault index 1b [fault reason 38] Blocked an interrupt request due to source-id verification failure
      Apr 2 10:26:16 Rack kernel: ata9.00: qc timeout (cmd 0xec)
      Apr 2 10:26:16 Rack kernel: ata9.00: failed to IDENTIFY (I/O error, err_mask=0x4)
      Apr 2 10:26:16 Rack kernel: ata9: limiting SATA link speed to 3.0 Gbps
      Apr 2 10:26:16 Rack kernel: ata10.00: qc timeout (cmd 0xec)
      Apr 2 10:26:16 Rack kernel: ata10.00: failed to IDENTIFY (I/O error, err_mask=0x4)
      Apr 2 10:26:16 Rack kernel: ata10: limiting SATA link speed to 3.0 Gbps
      Apr 2 10:26:16 Rack kernel: ata9: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
      Apr 2 10:26:16 Rack kernel: ata10: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
      Apr 2 10:26:16 Rack kernel: DMAR: DRHD: handling fault status reg 302
      Apr 2 10:26:16 Rack kernel: DMAR: [INTR-REMAP] Request device [04:00.0] fault index 1b [fault reason 38] Blocked an interrupt request due to source-id verification failure
      Apr 2 10:26:16 Rack kernel: DMAR: DRHD: handling fault status reg 402
      Apr 2 10:26:16 Rack kernel: DMAR: [INTR-REMAP] Request device [04:00.0] fault index 1b [fault reason 38] Blocked an interrupt request due to source-id verification failure
      Apr 2 10:26:46 Rack kernel: ata9.00: qc timeout (cmd 0xec)
      Apr 2 10:26:46 Rack kernel: ata9.00: failed to IDENTIFY (I/O error, err_mask=0x4)
      Apr 2 10:26:46 Rack kernel: ata10.00: qc timeout (cmd 0xec)
      Apr 2 10:26:46 Rack kernel: ata10.00: failed to IDENTIFY (I/O error, err_mask=0x4)
  13. Possibly a flash drive issue, but it's difficult to say without at least the syslog; most likely a general support issue rather than a bug, so closing this for now.
  14. There's a problem with the SATA controller; this is unfortunately rather common with Ryzen boards:
      Apr 3 13:13:07 Holt kernel: ahci 0000:01:00.1: Event logged [IO_PAGE_FAULT domain=0x0000 address=0x0800000000000000 flags=0x0010]
      Disabling the IOMMU and/or a BIOS update might help (see the sketch after this list).
  15. That would suggest another issue, for example a disabled disk, but I can only guess without the diags.
  16. There was a recent request for the same thing, so you can add to that; the more positive comments a feature request gets, the more likely it is to be implemented by LT.
  17. The larger of the two values will take precedence; 500GB is a lot to leave free on the array disks, so you could set the Plex share to 50GB as well.
  18. Minimum free space for the PlexMedia share is set to 500GB, which is more than is available on the cache, so writes go directly to the array.
  19. There's nothing to be moved; what's the name of the share you're trying to move from?
  20. If you have no SATA ports, I'd recommend either getting a 2-port controller like an ASMedia or trying a different model SSD.
  21. You do; try to confirm which of those containers is causing this, then you can post on the docker support thread.
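
For item 1, a minimal command-line sketch of the filesystem check, assuming disk5 is formatted XFS and maps to /dev/md5 (device naming can differ between releases); the webGui procedure in the linked wiki page is the recommended route, and either way the array must be started in maintenance mode:

    # dry run: report problems without writing any changes
    xfs_repair -n /dev/md5
    # if problems are found, run the actual repair
    xfs_repair /dev/md5

If xfs_repair asks for the -L option (zeroing the log), make sure you understand the implications before using it.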
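
For item 2, the same diagnostics archive can also be generated from a terminal or SSH session, assuming a release that includes the diagnostics script:

    # writes a zip file to /boot/logs on the flash drive
    diagnostics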
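
For item 3, a sketch of running the extended SMART test from the command line, assuming the disk in question is /dev/sdX (substitute the real device; the same test can be started from the disk's page in the webGui):

    # start the extended (long) self-test; it runs in the background on the disk
    smartctl -t long /dev/sdX
    # after the estimated runtime, check the test result and the sector counters
    smartctl -a /dev/sdX | grep -Ei 'self-test|Current_Pending_Sector|Reallocated_Sector'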
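
For items 12 and 14, one common way to disable the IOMMU when the BIOS doesn't offer the option is to add the matching kernel parameter to the append line of the flash drive's syslinux configuration (editable from the flash device page in the webGui or directly on the flash drive); this is only a sketch and the exact file contents vary per system, with intel_iommu=off for the Intel/DMAR case in item 12 and amd_iommu=off for the Ryzen board in item 14:

    # /boot/syslinux/syslinux.cfg (default boot entry), then reboot
    label Unraid OS
      menu default
      kernel /bzimage
      append intel_iommu=off initrd=/bzroot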