Everything posted by JorgeB

  1. According to the diags you have 4:

     06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612]
             Kernel driver in use: ahci
             Kernel modules: ahci
     07:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612]
             Kernel driver in use: ahci
             Kernel modules: ahci
     08:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612]
             Kernel driver in use: ahci
             Kernel modules: ahci
     09:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612]
             Kernel driver in use: ahci
             Kernel modules: ahci

     If you really only have one this could be part of the problem.
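     If you want to double-check from the console how many SATA controllers the system actually sees, something like this works (filtering on the vendor:device IDs from the output above):

         lspci -nnk -d 1b21:0612    # list only the ASM1062 controllers with their driver info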
  2. Strange, ASMedia controllers aren't usually affected by IOMMU, does the same happen if you use another of the ASMedia controllers you have installed?
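     To see which IOMMU group each controller ended up in, the usual sysfs loop works; a minimal sketch:

         for d in /sys/kernel/iommu_groups/*/devices/*; do
             g=${d#*/iommu_groups/}; g=${g%%/*}    # group number from the path
             printf 'IOMMU group %s: ' "$g"
             lspci -nns "${d##*/}"                 # PCI address is the last path component
         done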
  3. Stop all VMs and the VM service, change the domains share to cache="prefer" and run the mover, then post new diags.
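     Once the mover finishes you can confirm everything left the array (assuming the share is named domains):

         ls -lR /mnt/disk1/domains 2>/dev/null    # should be empty or gone
         ls -lR /mnt/cache/domains                # vdisks should all be here now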
  4. The 6- or 10-port controller you're using is in fact a 2-port controller with SATA port multipliers; they are a known problem, and you'll need to replace it to get rid of those timeout errors.
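     Port multipliers usually show up in the kernel log, so you can confirm what's there before replacing anything; a quick check:

         dmesg | grep -i 'port multiplier\|pmp'    # libata logs PMP devices when they attach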
  5. As mentioned above, the correct procedure here would have been a new config (Tools -> New Config), no need to re-sync parity, but note:
  6. Next time please post the diags instead. Wait for the extended SMART test to finish; it will be the best indicator of the current disk health.
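     If you'd rather do it from the console, the extended test can be started and monitored with smartctl (sdX is a placeholder for the actual device):

         smartctl -t long /dev/sdX    # start the extended self-test
         smartctl -a /dev/sdX         # check "Self-test execution status" and the self-test log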
  7. Where are the vdisks located? You're using the /mnt/user path and the domains share exists on both cache and disk1.
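     You can check from the console which copies exist, e.g.:

         ls -lhR /mnt/cache/domains /mnt/disk1/domains    # vdisks showing up in both places mean the share is split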
  8. You can post the diagnostics during a check, but note there's a known issue with parity check speed in some cases with v6.8.x, especially for larger arrays.
  9. It still needs to be formatted; it doesn't need to be cleared. If the drive was cleared it means the preclear didn't work correctly, and if you want you can post about it in the appropriate support thread (preclear plugin, preclear docker, etc.).
  10. -Unassign both disabled disks
      -Start array
      -Stop array
      -Re-assign both disks (to their original slots)
      -Start array to begin parity sync/data rebuild
      If there are any issues grab diags before rebooting or shutting down.
  11. I don't know what type of case you're using, but as long as there's some airflow around them it should be fine, e.g. server cases usually have fans at the front for fresh air intake and at the back for exhaust; that's enough to create some airflow, depending on the fans used and the speed they rotate at. Seems unnecessary to me in this case.
  12. Damn! You're using 5 LSI controllers! 😛 Everything looks fine so far, all are in IT mode and using the latest firmware, no errors for now. Make sure there's some airflow over the controllers, especially if they are stacked together, and with that many some of them must be.
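     If the sas2flash utility is on the box (an assumption; it comes with LSI's SAS2 tools, not with Unraid), the firmware of every controller can be listed in one go:

         sas2flash -listall    # one line per adapter with its firmware version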
  13. If the status is the same as in the last screenshot you need to re-enable both disks; you can do both at the same time: https://wiki.unraid.net/Troubleshooting#Re-enable_the_drive
  14. Check the filesystem on the emulated disk2. Remove -n or nothing will be done, and if it asks for -L use it. If the repair is successful and the data looks good, rebuild to a new or the old disk.
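     From the console, with the array started in maintenance mode, the sequence would look something like this (md2 corresponds to disk2; on recent releases the device is md2p1):

         xfs_repair -n /dev/md2    # check only, reports problems but fixes nothing
         xfs_repair /dev/md2       # actual repair
         xfs_repair -L /dev/md2    # only if it asks for it; zeroes the log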
  15. My solution would be to replace the known problematic controller; other than that I can't help.
  16. Run again without -n, or nothing will be done.
  17. Likely, if the appdata folder is recovered without corruption. You should have a backup of that; look for the CA Appdata Backup plugin. The docker image itself can be easily recreated.
  18. Please post the diagnostics: Tools -> Diagnostics
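     They can also be grabbed from the console; the zip ends up in the logs folder of the flash drive:

         diagnostics    # writes /boot/logs/<server>-diagnostics-<date>.zip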
  19. Disk1 is failing, and you don't have parity so a rebuild is not possible. xfs_repair won't normally repair a disk with bad sectors; you can clone it with ddrescue, then run xfs_repair on the clone and copy everything you can from it. P.S. Using USB for array devices is not recommended.
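     A minimal ddrescue run, assuming sdX is the failing disk and sdY the replacement (the mapfile lets you stop and resume):

         ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map      # first pass, skips bad areas quickly
         ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map  # go back and retry the bad areas 3 times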
  20. The disk isn't the problem, it's the filesystem; as mentioned above you need to do a filesystem check.
  21. When this happens again, post the syslog.
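     Note the syslog doesn't survive a reboot, so grab it while the problem is visible, e.g.:

         cp /var/log/syslog /boot/syslog-$(date +%Y%m%d).txt    # copy to flash so it survives the reboot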