Everything posted by JorgeB

  1. Looks to me like some confusion caused by removing one of the cache devices, which was later mounted and unmounted by UD; rebooting should clear the issue.
  2. Most recent Ryzen BIOSes add a power supply idle control setting that usually solves the hanging issue without completely disabling C-states; look for "Power Supply Idle Control" (or similar) and set it to "typical current idle" (or similar).
  3. I assume the cache pool was just the SSDs, not the NVMe device that is formatted XFS? If so, and if you haven't yet: start the array once with no cache devices assigned, then stop the array, re-assign both cache devices, and start the array again; unless something else happened it will use the old cache pool.
  4. I can reproduce these on one of my servers; it looks like something is making the read error stats increase without an actual error. I doubt it's a btrfs problem since it only happens with array disks, and at least for me only when copying from one array disk to another; if I copy to a Windows desktop there are no errors.
  5. Self-repair wouldn't work on the array drives, since each disk is an independent filesystem; it would work on the cache pool for redundant configs, and btrfs has the same self-repair features as ZFS (see the scrub sketch after this list), though there's no doubt ZFS is more mature and stable.
  6. The Marvell controller stopped responding. Marvell controllers are known to be problematic by themselves; a Marvell controller with a port multiplier is just asking for trouble. I would recommend getting an LSI HBA instead.
  7. It will double the available bandwidth: for SAS2/SATA3 that's about 2200MB/s vs 4400MB/s usable (4 lanes at roughly 550MB/s each vs 8 lanes when dual-linked); whether it's an advantage depends on the devices connected.
  8. Refurbished, almost no chance you'll get a new one.
  9. OK, there was some suspicion that this specific boot problem could be related to very low RAM, but 3GB should be more than enough.
  10. I didn't follow everything you did, and your log is very difficult to analyze since it's spammed with various unrelated errors. I did see the cache filesystem was corrupt, and I also see you're using a Marvell 9230 controller; those controllers are a known problem with Linux, they tend to drop disks without reason, and btrfs is particularly sensitive to dropped disks, so corruption beyond repair is possible, even likely if it keeps happening. I suggest you replace that controller with, for example, an LSI HBA. Also, btrfs check --repair should only be used if told to by a btrfs maintainer or as a last resort, since many times it can do more harm than good (see the recovery sketch after this list); more info about btrfs recovery here.
  11. No, this was tested with v6.6.6
  12. ECC can go bad, but it won't corrupt data: if it can correct the error it will and the system works normally; if it can't, it halts the system to avoid data corruption. At least that's how it's supposed to work 🙂
  13. I don't have a Ryzen CPU, but that BIOS setting seems to fix the locking-up issue with Unraid/Linux.
  14. In the BIOS look for "Power Supply Idle Control" (or similar) and set it to "typical current idle" (or similar).
  15. I can now confirm TRIM works with SAS3 models and current Unraid, at least it does on a 9300-8i, but like with all LSI HBAs it only works on SSDs with RZAT or DRAT, e.g.:

      OK:
      hdparm -I /dev/sdc | grep TRIM
         * Data Set Management TRIM supported (limit 8 blocks)
         * Deterministic read ZEROs after TRIM

      Not OK:
      hdparm -I /dev/sdb | grep TRIM
         * Data Set Management TRIM supported (limit 8 blocks)
  16. Just installing it won't help, but read the help for sdparm or google it and you might be able to get it working; see the spin-down sketch after this list.
  17. IIRC hdparm can't spin down SAS disks; you need sdparm, which is not currently included with Unraid.
  18. Not currently, it's a known issue, they might in the future.
  19. That's weird; try rebooting, and if it's still the same post diags, maybe something will be visible there.
  20. From what I gather most believe that's a non-issue: while possible, both for ZFS and btrfs, you'd need a hash collision for that, and the chances of that happening are extremely low, see for example here: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ I use and strongly recommend ECC for anyone who cares about data integrity, but you're still better protected against data corruption with ZFS or btrfs without ECC than you would be on a non-checksummed filesystem.
  21. It's a btrfs limitation with raid5/6: used space will show correctly, but available space will not account for parity; since it still decreases with used space it's not much of a problem (see the usage sketch after this list). Also note that while you can use raid5/6, and I use it myself, it's still less stable than the other profiles, and a UPS is strongly recommended because of the write hole issue that affects most raid5/6 implementations.
  22. Looks more like a connection problem with the SSD; replace the cables, and if problems persist it could be the SSD itself.
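
For the self-repair mentioned in post 5, a minimal sketch of running a scrub on a redundant btrfs pool; /mnt/cache is Unraid's usual cache mount point, adjust for your pool:

    # start a scrub: reads all data and metadata, repairing bad copies from a good mirror
    btrfs scrub start /mnt/cache
    # check progress and any corrected errors
    btrfs scrub status /mnt/cache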
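
For post 10, a hedged sketch of the safer recovery steps to try before btrfs check --repair; /dev/sdX1, /mnt/recovery and /mnt/restore are placeholders for your device and two empty directories:

    # read-only check, reports problems without writing anything to the device
    btrfs check --readonly /dev/sdX1
    # try mounting read-only with the backup root, to copy data off first
    mount -o ro,usebackuproot /dev/sdX1 /mnt/recovery
    # if it won't mount at all, pull files straight off the device instead
    btrfs restore -v /dev/sdX1 /mnt/restore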
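
For posts 16 and 17, a sketch of spinning a SAS disk down and back up with sdparm, assuming it's installed and /dev/sdX is the SAS device:

    # send a SCSI STOP command to spin the disk down
    sdparm --command=stop /dev/sdX
    # spin it back up
    sdparm --command=start /dev/sdX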
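
And for post 21, the command line gives a better picture of a raid5/6 pool's space than the reported free number; a sketch assuming the pool is mounted at /mnt/cache:

    # per-profile breakdown of allocated vs. used space, more informative than df here
    btrfs filesystem usage /mnt/cache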