Everything posted by JorgeB

  1. And in case I wasn't clear, that's what happened to you: you had issues with 4 disks, but only 2 got disabled because you have dual parity; with single parity just one would have been disabled.
  2. See if those disks can be mounted with the UD plugin; it won't care about the partition layout/MBR info like Unraid does, though of course it still needs a valid partition/filesystem to work.
  3. Correct. I don't know of any link/documentation describing this, but that's how it works.
  4. No, I would wait for the next scheduled one. Unraid will only disable one disk with single parity, or two disks with dual parity. If there are errors on more disks, for example due to a controller issue, you just need to fix the issue and reboot/power back on; the disabled disk(s) will need to be rebuilt, like you had to do, while the other ones will recover immediately after boot.
  5. Isn't it rebuilding? If there were any issues you need to post new diags.
  6. Is the M.2 device SATA or NVMe? If SATA it will share one of the 6 SATA ports.
  7. Currently no array-assigned device can be trimmed, regardless of the filesystem used.
  8. I'd keep it; as mentioned, it's common with those drives, possibly a firmware issue.
  9. Make sure you've enabled UEFI support in Unraid, either when using the USB tool or by manually renaming EFI- to EFI on the flash drive (see the sketch below).
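     A minimal sketch of the manual rename, assuming the flash drive is mounted at /boot (the mount point on a running server; adjust the path if you do it from another computer):

     import os

     flash = "/boot"                    # assumed mount point of the Unraid flash drive
     src = os.path.join(flash, "EFI-")
     dst = os.path.join(flash, "EFI")

     # Renaming EFI- to EFI enables UEFI boot; renaming it back disables it.
     if os.path.isdir(src):
         os.rename(src, dst)
         print("UEFI boot enabled (EFI- renamed to EFI)")
     else:
         print("EFI- folder not found; UEFI may already be enabled")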
  10. The logs are missing a lot of data due to spam, but disk1 dropped offline; check/replace the cables and post new diags after array start.
  11. It's a known issue. Since v6.2 Unraid accepts partitions starting on sector 2048 for SSDs, so if there's already a partition starting on that sector it will be used instead of creating one starting on sector 64, the default for disks.
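      If you want to confirm where an existing partition starts, a rough sketch that reads the starting sector from sysfs (sdb is a placeholder device name; assumes a Linux system):

      # Read the starting sector of the first partition on a given disk from sysfs.
      dev = "sdb"  # placeholder, change to the disk in question
      with open(f"/sys/block/{dev}/{dev}1/start") as f:
          start = int(f.read().strip())

      print(f"/dev/{dev}1 starts on sector {start}")
      # 2048 is the SSD-style alignment Unraid accepts since v6.2;
      # 64 is the default starting sector Unraid creates for spinning disks.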
  12. Possible, yes, but I would guess it's unlikely.
  13. Unfortunately I can't see anything relevant logged before the crashes; this usually points to a hardware issue. Start by running memtest and/or try swapping some components if available, like PSU, board, RAM, etc.
  14. 2.2TiB = 2.42TB. Not necessarily: https://github.com/shundhammer/qdirstat/blob/master/README.md
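      For reference, the unit conversion behind those numbers (TiB is base-2, TB is base-10):

      # Convert 2.2 TiB (base-2) to TB (base-10).
      tib = 2.2
      bytes_total = tib * 1024**4        # 1 TiB = 1024^4 bytes
      tb = bytes_total / 1000**4         # 1 TB  = 1000^4 bytes
      print(f"{tib} TiB = {tb:.2f} TB")  # -> 2.2 TiB = 2.42 TB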
  15. Look for a lost+found folder on that disk; if it exists, check its contents, as there might be some lost/partial files there.
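      A quick sketch for listing whatever was recovered, assuming the disk is mounted at /mnt/disk1 (adjust to the disk that was repaired):

      import os

      path = "/mnt/disk1/lost+found"  # assumed mount point, adjust to the affected disk
      if os.path.isdir(path):
          # Repair tools typically name recovered files after their inode numbers,
          # so listing names and sizes gives an idea of what ended up there.
          for entry in sorted(os.scandir(path), key=lambda e: e.name):
              print(f"{entry.stat().st_size:>12}  {entry.name}")
      else:
          print("no lost+found folder, nothing was recovered")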
  16. According to the diags the domains share is set to shareUseCache="only"; try toggling it to something else, then back to prefer.
  17. If you don't get an answer from someone who's already tried it, you can always use a trial Unraid license.
  18. Could be this: https://forums.unraid.net/bug-reports/prereleases/69x-610x-intel-i915-module-causing-system-hangs-with-no-report-in-syslog-r1674/?do=getNewComment&d=2&id=1674
  19. Yes, that was inevitable. Both emulated disks are mounting, so you can rebuild on top (with dual parity you can rebuild both at the same time): https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  20. Since the disk still mounts you don't need to use that; you can't write to it but you can still read it, so just copy the data normally using your favorite tool.
  21. If it's filesystem corruption it's not usually a device problem.
  22. The LSI didn't like waking up; there's even a driver crash besides the many timeout errors. You should reboot first, which will clear the errors on the two still-enabled disks and the LSI issue, then start the array to see if the emulated disks are mounting and post new diags.
  23. Jan 18 20:44:37 Enterprise kernel: BTRFS info (device md1): bdev /dev/md1 errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
      Jan 18 20:44:40 Enterprise kernel: BTRFS info (device md2): bdev /dev/md2 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
      Btrfs is detecting data corruption on multiple disks; this is usually the result of RAM issues, and Ryzen with RAM above the maximum officially supported speed is known to cause data corruption in some cases. As for disk 2, there's also filesystem corruption, probably resulting from the same issues; the best bet is to back up and re-format the disk, ideally after fixing the hardware problem.
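      If you want to check those counters outside the syslog, a hedged sketch wrapping btrfs device stats (run it against each suspect disk's mount point; /mnt/disk1 is just an example):

      import subprocess

      # Print the btrfs error counters (write/read/flush/corruption/generation)
      # for a mounted filesystem; non-zero corruption counts match the syslog above.
      mount = "/mnt/disk1"  # example mount point, adjust per disk
      result = subprocess.run(["btrfs", "device", "stats", mount],
                              capture_output=True, text=True, check=True)
      print(result.stdout)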
  24. Not a good sign; I would, especially if you can get a new one, since refurbished RMA drives are a crapshoot.