Everything posted by JorgeB

  1. Do a new config, unassign the disks you want to remove from the array, create the new pools and assign them there; parity will need to be re-synced after array start.
  2. Without the answer to the last question it's difficult to say, but I'm not very hopeful.
  3. As mentioned before, the call traces suggest a problem with the docker network; if the linked threads didn't help, disable all dockers then start enabling them one by one to see if you can find the culprit.
  4. Sorry, don't know, never used it, you should ask in the existing docker support thread:
  5. OK, thanks, I remember reading it only supported 24, and the Adaptec that uses the same PMC chip also only mentions 24 devices. In my experience it's closer to 6000MB/s; even LSI mentions 6400MB/s as the maximum usable. Assuming the OP is using SATA3 devices (not SAS3) it also means the PMC expander chip has something equivalent to LSI's Databolt, or max bandwidth would be limited to about 4400MB/s with a dual link (see the rough numbers below). In my tests I could get around 5600MB/s with a PCIe x8 HBA and a Databolt-enabled LSI expander, so while that kind of technology certainly helps, there is always some extra overhead and it's not exactly the same as true SAS3 performance. As for the original question, the only x16 HBA I know of is the LSI 9405W-16i/e, though it's probably not worth the investment for this, and there's a risk that performance won't improve by much (if at all) unless using SAS3 devices.
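     To put rough numbers on that dual-link ceiling (an approximation, assuming ~550MB/s usable per 6Gb/s SATA3 lane and an 8-lane dual link):

         # 8 lanes, each negotiating down to SATA3 device speed when the expander
         # has no Databolt-style buffering
         echo "$(( 8 * 550 )) MB/s"   # ~4400 MB/s ceiling without Databolt
         # with the expander-to-HBA links running at SAS3 rates, the limit moves
         # to the HBA/PCIe side instead (~6000-6400 MB/s usable)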
  6. You can't directly replace with smaller devices, you could add new members and then remove the other ones, but if I understood correctly you don't have enough ports, in that case you could follow this to backup cache to array, replace pool, then restore data.
  7. It means parity doesn't match the parity calculated from the array's data devices, but with data corruption the problem can be anywhere: the data could have been written already corrupt, the parity could be wrong, or just the calculation at that time was wrong.
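     For context, single parity is a bitwise XOR across the data disks, so a sync error just means the stored parity no longer matches that XOR; a toy example from the console (the byte values are made up):

         # hypothetical byte at the same offset on three data disks
         d1=0xA5; d2=0x3C; d3=0x0F
         printf 'expected parity byte: 0x%02X\n' $(( d1 ^ d2 ^ d3 ))
         # if the byte stored on the parity disk differs, the check reports a sync error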
  8. If there are still errors on consecutive checks you basically need to rule out the hardware involved; RAM is still a good candidate even without memtest finding errors, but it could also be the board/CPU or a disk. I would start by using just one DIMM at a time since it's the easiest thing to rule out.
  9. If everything is copied you just use the format button then restore the data back.
  10. Share split level overrides allocation method, set share(s) to split all.
  11. Button is next to array start/stop buttons.
  12. Based on the call trace, see if this applies to you: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/ P.S. You also need to fix the filesystem on the cache device (example below).
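     A minimal sketch of checking it from the console, assuming the cache device is btrfs and is not mounted (the device name below is illustrative; there is also a filesystem check option in the GUI with the array in maintenance mode):

         # read-only check first; only attempt repairs after the data is backed up
         btrfs check --readonly /dev/sdb1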
  13. Reboot, force if needed, but you appear to be having multiple issues.
  14. If the question is unrelated you should start a new thread.
  15. No, it's a filesystem problem, you just need to re-format on v6.9, after all data is backed up.
  16. Make sure the disks are formatted, if it's not that please post the diagnostics.
  17. Any important data there should be copied to another disk(s); you can use cp or Midnight Commander for example, just remember to copy from disk to disk, not disk to share, e.g. from /mnt/disk1 to /mnt/disk2 (see the sketch below).
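     A hedged example from the console (the share folder name is illustrative; adjust the disk numbers to your case):

         # copy a folder from disk1 to disk2, preserving permissions and timestamps
         mkdir -p /mnt/disk2/Movies
         cp -a /mnt/disk1/Movies/. /mnt/disk2/Movies/
         # or do the same interactively with Midnight Commander
         mc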
  18. That was the most likely result; still, since it was aborting when creating the free space cache, I had some hopes that it could help. A newer kernel can detect previously undetected corruptions; you can downgrade back to v6.8.x, back up all the data on that disk, then upgrade, format and restore the data.
  19. It's up to you; if it's an automatic check due to an unclean shutdown you can cancel it, since if errors were found you'd need to run a correcting check anyway, and if there weren't you can run one later.
  20. It will for sure until you re-start the array.
  21. With Unbalance it's normal, since it's copying from one array disk to another array disk; turbo write doesn't help with that.
  22. No, turbo write should be as fast as your slowest disk at that position, assuming there are no controller bottlenecks; I can write at 200MB/s+ to some of my servers.
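     If you want to measure what a disk can actually sustain, a rough test from the console (illustrative path; writes and then deletes a 4GiB file on disk1):

         dd if=/dev/zero of=/mnt/disk1/speedtest.bin bs=1M count=4096 oflag=direct status=progress
         rm /mnt/disk1/speedtest.bin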