Everything posted by JorgeB

  1. That's not correct. Please post the diagnostics (Tools -> Diagnostics), though this is likely unrelated to this release, so it's probably best to post in the general support forum.
  2. Yep, it's Marvell:

       01:00.0 Non-Volatile memory controller [0108]: Sandisk Corp WD Black NVMe SSD [15b7:5001]
               Subsystem: Marvell Technology Group Ltd. Device [1b4b:1093]
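     For reference, you can check this yourself with lspci (the vendor:device ID 15b7:5001 is taken from the output above):

       # -d filters by vendor:device, -v adds the Subsystem line
       lspci -nnv -d 15b7:5001 | grep -E 'Non-Volatile|Subsystem'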
  3. mlx4 is the Mellanox driver; maybe try without that NIC for a while.
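     A quick way to confirm the driver is actually in use before pulling the card (a minimal check, nothing Unraid-specific):

       # is the Mellanox mlx4 module loaded?
       lsmod | grep mlx4
       # which kernel driver is each NIC bound to?
       lspci -k | grep -A3 -i ethernet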
  4. Cache is 99% full. After freeing some data, run a balance, since the pool is also fully allocated (though it might not be after some space is freed): https://lime-technology.com/forums/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
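     If you're not familiar with running a balance from the command line, a minimal sketch (assuming the pool is mounted at /mnt/cache, the usual Unraid mount point):

       # show allocated vs. actually used space
       btrfs filesystem usage /mnt/cache
       # reclaim data chunks that are 75% full or less; adjust the filter as needed
       btrfs balance start -dusage=75 /mnt/cache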
  5. WD Black NVMe devices do, at least the initial models did; we can confirm if you post the diagnostics.
  6. You can use btrfs on the backup server and make snapshots before syncing; that's what I do on all my backup servers.
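     A minimal sketch of the idea (paths are just examples, assuming the backup share is a btrfs subvolume):

       # read-only snapshot, named by date, taken before the sync
       btrfs subvolume snapshot -r /mnt/backup/data /mnt/backup/snaps/data-$(date +%Y%m%d)
       # then sync as usual, e.g. with rsync
       rsync -a --delete /mnt/source/ /mnt/backup/data/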
  7. Looks to me like some confusion because you removed one of the cache devices and that device was later mounted and unmounted by UD; rebooting should clear the issue.
  8. Most recent Ryzen BIOSes added a power supply idle control setting that usually solves the hanging issue without completely disabling C-states; look for "Power Supply Idle Control" (or similar) and set it to "typical current idle" (or similar).
  9. I assume the cache pool was just the SSDs, not the NVMe device that's formatted with xfs? If yes, and if you haven't yet: start the array one time with no cache devices assigned, then stop the array, re-assign both cache devices and start the array again. Unless something else happened, it will use the old cache pool.
  10. I can reproduce these on one of my servers. It looks like something is making the read error stats increase without an actual error. I doubt it's a btrfs problem, since it only happens with array disks, and at least for me only when copying from one array disk to another; if I copy to a Windows desktop there are no errors.
  11. Self-repair wouldn't work on the array drives, since each disk is an independent filesystem; it would work on the cache pool for redundant configs. btrfs has the same self-repairing features as zfs, though there's no doubt zfs is more mature and stable.
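     On a redundant cache pool the repair is done by a scrub, e.g. (assuming the pool is mounted at /mnt/cache):

       # read all data, verify checksums, repair bad copies from the good ones
       btrfs scrub start /mnt/cache
       # check progress and results
       btrfs scrub status /mnt/cache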
  12. The Marvell controller stopped responding. Marvell controllers are known to be problematic by themselves; a Marvell controller with a port multiplier is just asking for trouble. I would recommend getting an LSI HBA instead.
  13. It will double the available bandwidth: for SAS2/SATA3 it's 2200MB/s vs 4400MB/s usable. Whether it's an advantage depends on the devices connected.
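     A rough sketch of where those numbers come from (assuming a SAS2 wide port, 4 lanes at 6Gb/s each):

       single link: 4 x 6Gb/s = 24Gb/s raw -> ~2400MB/s after 8b/10b encoding, ~2200MB/s usable
       dual link:   8 x 6Gb/s = 48Gb/s raw -> ~4800MB/s after 8b/10b encoding, ~4400MB/s usable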
  14. Refurbished, almost no chance you'll get a new one.
  15. OK, there was some suspicion that this specific boot problem could be related to very low RAM, but 3GB should be more than enough.
  16. I didn't follow everything you did, and your log is very difficult to analyze since it's spammed with various unrelated errors. I did see the cache filesystem was corrupt, and I also see you're using a Marvell 9230 controller. Those controllers are a known problem with Linux: they tend to drop disks without a reason, and btrfs is particularly sensitive to dropped disks, so corruption beyond repair is possible, even likely if it keeps happening. I suggest you replace that controller with, for example, an LSI HBA. Also, btrfs check --repair should only be used if told to by a btrfs maintainer or as a last resort, since many times it can do more harm than good; more info about btrfs recovery here.
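     If you want to inspect the filesystem first, the read-only check makes no changes (a sketch, assuming the cache device is /dev/sdX1 and the filesystem is unmounted):

       # read-only by default, safe to run
       btrfs check /dev/sdX1
       # --repair is the last-resort option mentioned above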
  17. No, this was tested with v6.6.6.
  18. ECC can go bad, but it won't corrupt data: if it can correct the error it will, and the system works normally; if it can't, it halts the system to avoid data corruption. At least that's how it's supposed to work :)
  19. I don't have a Ryzen CPU, but that BIOS setting seems to fix the lockup issue with Unraid/Linux.
  20. In the BIOS look for "Power Supply Idle Control" (or similar) and set it to "typical current idle" (or similar).
  21. I can now confirm trim works with SAS3 models and current Unraid, at least it does on a 9300-8i, but like with all LSI HBAs it only works on SSDs with RZAT or DRAT, e.g.:

      OK:
      hdparm -I /dev/sdc | grep TRIM
           *    Data Set Management TRIM supported (limit 8 blocks)
           *    Deterministic read ZEROs after TRIM

      Not OK:
      hdparm -I /dev/sdb | grep TRIM
           *    Data Set Management TRIM supported (limit 8 blocks)
  22. Just installing it won't help, but read the help for sdparm or google it and you might be able to get it working.
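     As an illustration of basic sdparm usage only, since I don't know exactly which setting you're after (WCE, the write cache enable bit, is just an example field; /dev/sdb is a placeholder):

       # read the current write cache setting
       sdparm --get=WCE /dev/sdb
       # change it; add --save to persist across power cycles
       sdparm --set=WCE /dev/sdb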