JorgeB

Moderators
  • Posts

    67,726
  • Joined

  • Last visited

  • Days Won

    708

Everything posted by JorgeB

  1. Enable the syslog server and post that log after a crash.
  2. If this is true the disk should be mostly OK, assuming no other writes to the array; there will always be some writes to parity due to mounting and unmounting the other filesystems. To keep other options open if needed, post new diags after trying to mount the affected disk with the UD plugin.
  3. There are two invalid disks with single parity, so they can't be emulated. Your best option is to do a new config; if the filesystems are still OK they should mount, or at least be fixable.
  4. Can't really help with the command as I don't have SAS drives; maybe Google can help.
  5. The disks are formatted with Type 2 protection, you need to remove that first:
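The Type 2 protection above can typically be removed with sg_format from the sg3_utils package; a sketch, assuming the drive is visible as a SCSI generic device (/dev/sgX is a placeholder for your actual device):

```shell
# Confirm the current protection type first
# (look for "Protection type: 2" in the output)
sg_readcap --long /dev/sgX

# Low-level format with protection information disabled (--fmtpinfo=0).
# WARNING: this destroys ALL data on the drive and can take many hours.
sg_format --format --fmtpinfo=0 /dev/sgX
```

The format runs in the background on the drive itself; progress can be polled with sg_requests or by re-running sg_readcap afterwards.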
  6. It's logged as a disk problem, but these can be intermittent, and since the extended SMART test passed the disk is OK for now; keep monitoring.
  7. That's the link width, and that's OK; what's degraded is the speed, or generation (PCIe 1.0/2.0/3.0). Sometimes there's an "Auto" option; it will depend on the BIOS.
  8. As mentioned in the release notes you need to re-assign your cache, but IMHO you should try v6.9.1, and the cache will come back if you do.
  9. You'll need to clear the FCP error, then grab new diags after it comes up again, without rebooting.
  10. If no errors are found during memtest, run a couple of consecutive parity checks and post new diags, all without rebooting.
  11. Did you reboot after getting the error? Not seeing any issues so far.
  12. It would, IMHO not much point in creating one for v6.9.2 since it's no longer being developed, but if the issue remains in v6.10-rc2 you should create one in the per-release section.
  13. It was true with SAS2 expanders and HBAs, and AFAIK there are no exceptions; with SAS3 there are. LSI, for example, uses DataBolt to improve performance with slower-linking SAS2/SATA3 devices, and PMC-Sierra uses a similar feature. Some numbers in the thread below: https://forums.unraid.net/topic/41340-satasas-controllers-tested-real-world-max-throughput-during-parity-check/
  14. No, check the BIOS; there's usually a PCIe link speed setting. If there isn't one, or it's already correctly configured, look for a BIOS update.
  15. There could be some data corruption, if those read errors coincided with data on the rebuilt disk.
  16. This looks more like a general support issue, possibly RAM related; please post the diagnostics so we can check the hardware used.
  17. If you are passing through any devices to the VM, or if they are bound, then depending on the way you have them bound, the hardware IDs can and likely will change when adding new hardware. So if you were, for example, passing through device 01:00.0 that was a GPU before, it can be a different device after adding (or removing) some hardware, so check those.
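One way to check whether the addresses moved is to list the PCI devices before and after the hardware change and compare; a sketch (the 01:00.0 address is just the example from the post above):

```shell
# List all PCI devices with vendor:device codes and bus addresses
lspci -nn

# Show what currently sits at a specific address, e.g. the old GPU slot
lspci -nn -s 01:00.0
```

If the device at that address is no longer the GPU, update the VM's passthrough/binding configuration to the new address before starting it.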
  18. As mentioned, du output is meaningless with btrfs; it can show more, it can show less.
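For figures that account for btrfs features like shared extents and compression, the filesystem's own tools are more reliable than plain du; a sketch, with example paths:

```shell
# Plain du walks files and can double-count or miss shared extents
du -sh /mnt/cache/share

# btrfs-aware accounting: reports shared vs exclusive extent usage
btrfs filesystem du -s /mnt/cache/share

# Overall space picture, including metadata and allocation profiles
btrfs filesystem usage /mnt/cache
```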
  19. A valid xfs filesystem is still detected on mount:

      Nov 30 11:49:00 Tower kernel: XFS (md9): Mounting V5 Filesystem
      Nov 30 11:49:00 Tower kernel: XFS (md9): Internal error rhead->h_magicno != cpu_to_be32(XLOG_HEADER_MAGIC_NUM) at line 2886 of file fs/xfs/xfs_log_recover.c. Caller xlog_valid_rec_header+0x17/0x11a [xfs]

      Though it results in a crash, I still find it strange that xfs_repair isn't finding a superblock, even if it wasn't 100% correct. The only solutions I see now are restoring from backups, if available, or posting to the xfs mailing list and asking for help.
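Since the failure above happens during log recovery (xlog_valid_rec_header), one avenue, if xfs_repair can be made to locate a superblock, is the usual repair sequence, with md9 taken from the log above:

```shell
# Dry run first: report what would be repaired without writing anything
xfs_repair -n /dev/md9

# Last resort: zero the corrupt log so repair can proceed.
# This discards any metadata updates that were still in the log.
xfs_repair -L /dev/md9
```

-L should only be used once a dry run and a backup (or disk image) have been considered, since zeroing the log can lose recently written metadata.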