Everything posted by JorgeB

  1. Pool filesystem is corrupt, the best bet to fix it is to back up and re-format the pool.
  2. Post the output of:
     cat /proc/interrupts
     The SASLP tends to like IRQ16; if it's getting disabled during high load it's a problem (see the IRQ check sketch after this list).
  3. Please don't cross post; if you need help, post the diagnostics in your other thread linked below, and also say if you have a backup of the vdisk.
  4. Yes, rebooting will fix it, it's this issue:
  5. Missing the partition, it's nvme0n1p1.
  6. It is, did you already try xfs_repair (see the sketch after this list), or do you prefer to go with the rebuild?
  7. Cache filesystem is corrupt, there are some recovery options here.
  8. If it doesn't work you can still try to rebuild it, so don't do anything that could invalidate parity.
  9. Depends mostly on how long the sync ran and the filesystem used; start with a filesystem check, you'll need to know which one it was, the default is xfs (see the xfs_repair sketch after this list).
  10. Very unlikely that a new config will help with that, but just check "parity is already valid" before array start; nothing will be overwritten, assuming the assignments are correct.
  11. Stop the array, unassign the remaining cache device, start array to make Unraid "forget" current cache config, stop array, reassign all cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), start array, post new diags after array start.
  12. Yes, as long as the NICs are supported.
  13. Btrfs detected data corruption on both devices:
      Mar 16 11:12:23 Sanctuary kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 65, gen 0
      Mar 16 11:12:23 Sanctuary kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 46, gen 0
      This is usually a RAM issue, start by running memtest; since the filesystem was also affected, if a problem is found the best bet after fixing it is to back up and re-format the pool (see the btrfs stats sketch after this list).
  14. Likely a device problem, there's no sign of the second NVMe device in the log.
  15. Disks look fine, most likely another issue; diags taken after the problem occurs might give some clues.
  16. Only one NVMe device is being detected by Linux, this is not an Unraid issue, though strange if the BIOS is detecting both, try booting with just one of them, in the same M.2 slot.
  17. Iperf only tests the LAN; it could be an issue with the NIC, router, cable, etc., sometimes also a problem with the NIC driver or options (see the iperf3 sketch after this list).
  18. You can, but it's probably better to wait a few days or a couple of weeks and then run another one; just make sure there are no unclean shutdowns in between, or a few errors are normal.
  19. If it's read-only the mover might not work; you can still move the data manually using your favorite tool (see the rsync sketch after this list). P.S. if btrfs keeps getting corrupted you likely have some hardware issue, like bad RAM; xfs is more resilient to hardware issues, but if there's a problem the data will still get corrupted, you just won't know about it.
  20. You can use the mover to move all the data from the pool to the array, create new pool(s) and move data back: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923
  21. It is a disk problem, and it can also occur with CMR drives; a full disk write might help, and if it does, an Unraid rebuild will do it. That test has nothing to do with contiguous data, and SMR by itself will not slow down reads on a normally working disk.
  22. Mar 15 22:31:44 backup kernel: REISERFS (device md2): Remounting filesystem read-only
      Check the filesystem on disk2 (see the reiserfsck sketch after this list); once that's fixed you should convert to xfs.
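
For item 2, a minimal sketch of how to spot a disabled IRQ 16; the driver name (mvsas for the SASLP) and the syslog path are assumptions about a typical Unraid setup:

    # Show interrupt counters; look for the line for IRQ 16 and which drivers share it
    # (the SASLP normally shows up under the mvsas driver - an assumption here)
    grep -E 'CPU|^ *16:' /proc/interrupts

    # The kernel logs "Disabling IRQ #16" when it gives up on a misbehaving interrupt
    grep -i 'disabling irq' /var/log/syslog

A hit from the second command during heavy disk activity would match the problem described in the post.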
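
For items 6 and 9, a minimal xfs_repair sketch, assuming the array is started in maintenance mode and that disk1 is the one being checked; on recent Unraid releases the device may be the partition (e.g. /dev/md1p1) instead:

    # Dry run: -n reports problems without modifying anything
    xfs_repair -n /dev/md1

    # If the dry run output looks sane, run the real repair (no -n)
    xfs_repair /dev/md1

Running it against the md device (rather than the raw sdX device) keeps parity in sync with any changes the repair makes.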
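
For item 13, a sketch of checking the btrfs error counters and verifying the pool after memtest passes; /mnt/cache is an assumed mount point for the pool:

    # Per-device error counters - the "corrupt" values match the syslog lines quoted above
    btrfs device stats /mnt/cache

    # Scrub re-reads all data and metadata and verifies checksums; -B stays in the foreground
    btrfs scrub start -B /mnt/cache

    # Once the hardware issue is fixed, reset the counters so any new corruption stands out
    btrfs device stats -z /mnt/cache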
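
For item 17, a basic iperf3 sketch to measure raw LAN throughput; the server IP is a placeholder:

    # On the Unraid server
    iperf3 -s

    # On a client machine on the same LAN (substitute the server's IP)
    iperf3 -c 192.168.1.100

    # Test the reverse direction as well (server transmits, client receives)
    iperf3 -c 192.168.1.100 -R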
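
For item 19, a sketch of moving the data off a read-only pool by hand; the source and destination paths are examples only:

    # Copy everything from the pool to a folder on the array, preserving attributes
    rsync -avh /mnt/cache/ /mnt/disk1/cache_backup/

    # Quick comparison to confirm nothing was missed before re-formatting the pool
    diff -rq /mnt/cache/ /mnt/disk1/cache_backup/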
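
For item 22, a reiserfsck sketch under the same assumptions as the xfs_repair example (array in maintenance mode, md2 per the quoted log line):

    # Read-only check first
    reiserfsck --check /dev/md2

    # Only if the check explicitly recommends it, run the suggested repair, e.g.:
    reiserfsck --fix-fixable /dev/md2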