Everything posted by JorgeB

  1. Yes, they can make Unraid crash.
  2. Yes. Don't see much point in doing that instead.
  3. Yes, for disk3 make sure the contents look correct, and also look for a lost+found folder, before rebuilding on top of the old disk.
  4. The only things you care about for now are the emulated disks; emulated disk3 is mounting now, so it can be rebuilt. Use -L and post the output (see the example commands after the list).
  5. IIRC that was about enabling TRIM support for the array. At this time, and since TRIM is not supported, you can use any SSD, except the V300 due to unrelated issues; AFAIK it's the only model that should be avoided, and it's been discontinued for years now, so it's likely not a problem anyway.
  6. I see that later in the diags disk3 shows as mounted; since you mentioned it didn't and the diags initially showed that, I didn't check until the end. In that case, post the output of xfs_repair just for disk4.
  7. btrfs check --repair should be used with much care, as it might make things even worse. A btrfs filesystem can quickly go bad with RAM issues, and Ryzen with overclocked RAM is known to corrupt data; you should fix that first, and if it were me I would then back up and recreate that filesystem (see the read-only check commands after the list).
  8. The first check was non-correcting, so the second one was expected to find errors. That said, Ryzen with above-spec RAM is known to corrupt data; fix that and run another correcting check, and if it finds errors run a non-correcting check without rebooting and post new diags.
  9. You don't even need to do a new config, just unassign parity and start the array.
  10. Start by running memtest; it would also be good to post the complete diagnostics.
  11. Start the array in maintenance mode and post the output of:
      xfs_repair -v /dev/md3
      xfs_repair -v /dev/md4
  12. The GUI will show the actual used space, except for some pool configurations, like a pool with an odd number of devices in RAID1. Also, du isn't reliable with btrfs, which I assume is what the pool is using (see the btrfs-aware commands after the list); see if this helps get the space back: https://forums.unraid.net/topic/51703-vm-faq/?do=findComment&comment=557606
  13. As mentioned, it doesn't look like the devices are the problem.
  14. The problem is not just the SSD; there are multiple timeout errors with various devices, and also some read errors with e.g. disk3. This is usually a power/connection problem, but it could also be an enclosure/backplane issue.
  15. It looks more like a connection/power problem; there are also multiple timeout errors with the cache device. Also make sure to check this.
  16. The GUI and df, for example, will show the correct stats for that pool; not all tools show correct stats with btrfs. There are currently about 198GiB (or 213GB) used.
  17. Once it detects one error you can stop; then test one DIMM at a time.
  18. Yes, that's the default, but it can be changed: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  19. You can just power off and replace the devices; when you power back on, that pool will be empty. Assign the new device, start the array, and format the new device.
  20. Btrfs is detecting data corruption:
      Jan 16 08:50:46 ZEUS kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 52, gen 0
      Jan 16 08:50:46 ZEUS kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 56, gen 0
      This is usually a RAM issue; start by running memtest (the counters above can also be checked with the commands after the list).
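
A minimal sketch of the xfs_repair commands referenced in items 4 and 11, assuming the array is started in maintenance mode and the disks in question are disk3 and disk4 (adjust the md device numbers to match your disks):

    # verbose check/repair of the emulated disks (array must be in maintenance mode)
    xfs_repair -v /dev/md3
    xfs_repair -v /dev/md4

    # if xfs_repair refuses to run because of a dirty log, -L zeroes the log;
    # this can discard the most recent metadata changes, so only use it when needed
    xfs_repair -Lv /dev/md3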
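
For item 7, a sketch of the read-only checks that are safer to run before ever resorting to btrfs check --repair; the device and mount point here (/dev/sdX1 and /mnt/cache) are placeholders, substitute your own pool:

    # read-only metadata check; the filesystem must not be mounted
    btrfs check --readonly /dev/sdX1

    # scrub verifies data checksums on a mounted filesystem and reports corruption
    btrfs scrub start -B /mnt/cache
    btrfs scrub status /mnt/cache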
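
For items 12 and 16, btrfs-aware tools report space more reliably than du; a sketch assuming the pool is mounted at /mnt/cache:

    # overall totals, which df reports correctly for btrfs
    df -h /mnt/cache

    # btrfs' own accounting, split by data/metadata and by RAID profile
    btrfs filesystem usage /mnt/cache
    btrfs filesystem df /mnt/cache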
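
For item 20, the per-device error counters quoted in the log can also be read directly, and reset once the underlying problem (e.g. bad RAM) has been fixed; a sketch assuming the pool is mounted at /mnt/cache:

    # show write/read/flush/corruption/generation error counters per device
    btrfs dev stats /mnt/cache

    # zero the counters after the cause of the corruption has been addressed
    btrfs dev stats -z /mnt/cache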