JorgeB

Moderators
  • Posts

    67,432
  • Joined

  • Last visited

  • Days Won

    706

Everything posted by JorgeB

  1. Yes. A sync error in this case could only be detected on parity2, and if detected it would be corrected (and logged in the history). If there's a read error on parity or any other device, parity2 would be used to correct the error(s) and continue the rebuild, so there's no corruption. The parity check after a rebuild is mostly to confirm that the rebuilt disk can be read back correctly; it's an optional step, and IMHO there's not much reason to do it unless you don't trust your hardware. Just wait for the next scheduled check, that's what I do, but parity2 being checked during the rebuild can't help with this anyway.
  2. Unlikely to be a power issue then. Not really; try with a single DIMM/channel in use, and if that doesn't help, trying a different board would be my next move.
  3. Checksum error on the docker image:

     May 1 11:29:50 Titan kernel: BTRFS warning (device loop2): csum failed root 5 ino 274 off 16384 csum 0x5fa31edb expected csum 0xcfa287d9 mirror 1

     You should re-create it, but this might be the result of a hardware problem, like bad RAM, especially if it happens again in the near future.
  4. If it's just one port and the other drives are still on the RAID controller it won't make any difference.
  5. Though 30/40MB/s is still slow, you're using slower TLC-based SSDs (the WD Green is TLC; I've never heard of a Palit SSD, but it should be similarly low end, and even if it isn't, the pool will be limited by the slower device), so they will never perform great. I recommended 3D TLC-based models: 860 EVO, MX500, WD Blue 3D, etc.
  6. Maybe a power issue, do you have another PSU you could try?
  7. Impossible, except for the first few GBs that will be cached to RAM, unless you have an SSD based array.
  8. https://forums.unraid.net/topic/48707-additional-scripts-for-userscripts-plugin/?do=findComment&comment=850539
  9. Then 50/60MB/s is normal and about the max you can get with the default writing mode.
  10. Also check the FAQ link for max RAM speeds, you're currently overclocking your RAM.
  11. loop2 is usually where the docker image is mounted, you can confirm with: df | grep docker
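     As a variant of the `df | grep docker` check above, a small sketch that looks up the backing device for the docker image mountpoint directly in /proc/mounts (the `/var/lib/docker` path and the `docker_dev` helper name are assumptions for illustration, not part of the original post):

     ```shell
     # Sketch: confirm which device backs the docker image mount.
     # /var/lib/docker is the usual mountpoint on Unraid (assumption).
     docker_dev() {
         # Print the backing device for a given mountpoint from /proc/mounts
         awk -v mp="$1" '$2 == mp { print $1 }' /proc/mounts
     }

     docker_dev /var/lib/docker   # on Unraid this is typically /dev/loop2
     ```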
  12. You're welcome, just make sure to ask for help in the future if you have any doubts; this data loss could have been easily avoided.
  13. Two other devices dropped offline:

     Apr 25 19:47:52 unraid kernel: ata16: SATA link down (SStatus 0 SControl 300)
     Apr 25 19:47:57 unraid kernel: ata16: hard resetting link
     Apr 25 19:47:58 unraid kernel: ata8: SATA link down (SStatus 0 SControl 300)
     Apr 25 19:47:58 unraid kernel: ata8.00: disabled
     Apr 25 19:47:58 unraid kernel: ata16: SATA link down (SStatus 0 SControl 300)
     Apr 25 19:47:58 unraid kernel: ata16.00: disabled

     You're using a 6 or 10 port controller that is in fact a 2 port controller with SATA port multipliers; these controllers are a known source of multiple issues and should be replaced.
  14. Unfortunately there's nothing on the syslog about the crash, you can try booting in safe mode but there's almost nothing installed and this looks more like a hardware issue.
  15. Very unlikely that's a disk problem, most likely culprit is still the SATA cable, you can also try a different SATA port.
  16. This is a Marvell issue with the Linux kernel, it can't be fixed by LT, but it can start working again (or not) in a future kernel.
  17. Copy the config folder from the trial to the new flash drive and register that one.
  18. Cache fs is OK but it's full; the docker image ran out of space, and that's what's causing the problems.
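     A quick way to spot this condition from the console is to check the use% of the docker image mount; a minimal sketch, assuming `/var/lib/docker` is the mountpoint and a hypothetical `usage_pct` helper (neither is from the original post):

     ```shell
     # Sketch: warn when the docker image mount is nearly full.
     usage_pct() {
         # Print the use% (digits only) for a mountpoint, empty if not mounted
         df --output=pcent "$1" 2>/dev/null | tail -1 | tr -dc '0-9'
     }

     pct=$(usage_pct /var/lib/docker)
     if [ -n "$pct" ] && [ "$pct" -ge 90 ]; then
         echo "docker image nearly full: ${pct}%"
     fi
     ```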
  19. You likely saw this and have a script running to check for errors; note that the original script generates a notification if the array is stopped, i.e. /mnt/cache doesn't exist. See the current one, it will check that the mountpoint exists first.
  20. Like mentioned already the raid controller is likely the main issue, HBAs or just regular SATA ports are recommended for Unraid.
  21. 👍 Note that reiserfs has not been recommended for years now; you should convert to xfs.