Everything posted by JorgeB

  1. Try clicking on the double arrows in the upper right of the UD page. That should trigger udev to refresh the disk status.
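     If the button doesn't respond, something similar can be done from the console (a sketch; that the UD refresh maps to a udev trigger is my assumption):
         # ask udev to re-scan block devices and re-run its rules
         udevadm trigger --subsystem-match=block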
  2. You don't need to do a new config; just downgrade back to v6.12.8. v6.12.10 should be out soon with a fix.
  3. Those SSDs won't be able to maintain high sustained write speeds, and note that trim is not supported for array devices. I would recommend creating a raidz zfs pool instead; it will perform much better. You can use an old flash drive to fulfill the current array device requirement.
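     For reference, a raidz pool created by hand would look something like the sketch below (pool name and device names are placeholders; on Unraid you would normally create the pool from the GUI instead):
         # raidz1 pool from three SSDs; names are examples only
         zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
         # unlike array devices, a zfs pool supports trim
         zpool trim tank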
  4. You can try one of them first, or post in all.
  5. P.S. As a workaround for now, if you need to import a degraded pool, give it the number of slots it should have; that won't trigger the bug, e.g. for a pool that originally had three devices, set three slots even though one is now missing.
  6. Pool looks OK now. It was doing a balance, which would explain all the activity; I assume there's no more now? See here for better pool monitoring, so you'd get notified if a device drops again: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
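     If it's a btrfs pool, the balance progress can also be checked from the console (a sketch; /mnt/cache stands in for the actual pool mount point):
         # shows whether a balance is still running, and how far along it is
         btrfs balance status /mnt/cache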
  7. In that case it's very unlikely that they would all be reversed or bad.
  8. Enable the syslog server and post that after a crash; it may catch something.
  9. Understandable, I was just trying to make it easier to reproduce, because I wasn't able to at first, but I've now managed to do it; the difference was the number of slots for the new pool. Thanks for the report, it's a corner case, but still a bug.
  10. I do see the flash drive is being wiped, though I'm not sure why. The pool you are trying to mount is degraded, missing a device, and that device is now part of a different pool, which may be creating some confusion. Let me try to see if I can duplicate it. You are just starting the array, correct, not doing anything else? Also please post a screenshot of Main and the output of zpool import before array start.
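      For reference, running it with no arguments only lists pools that are available for import; it doesn't import anything:
          # safe to run before array start; lists importable pools and their state
          zpool import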
  11. Suggest posting in the container support thread/Discord:
  12. Reassign the old disk and upgrade parity first; if the old disk is no longer available or already disabled, you can do a parity swap.
  13. If you have a different PC you can try swapping some parts, like the PSU for example. If the server has multiple RAM sticks, try with just one; if it's the same, try a different stick. That will basically rule out the RAM.
  14. IIRC NFSv2 is no longer supported, only v3 or v4; try to see which version the Synology is using.
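      One way to check which version was actually negotiated, from any client that has the share mounted (a sketch, not Synology-specific):
          # prints mounted NFS shares with their options, including vers=
          nfsstat -m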
  15. That looks more like a network problem, and I'm still not seeing anything relevant in the logs. You do need to check the filesystem on disk1, but that's unrelated to this issue.
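      For an XFS disk1 that check would look roughly like this, with the array started in maintenance mode (the md device name is an assumption and varies by Unraid version):
          # read-only check first; -n makes no changes
          xfs_repair -n /dev/md1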
  16. Check the driver that Mint is using; if the driver is the same it should perform the same. Also post the complete diagnostics.
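      The same command works on both Mint and Unraid for comparing drivers (assuming a PCIe NIC):
          # lists the NIC and the kernel driver currently bound to it
          lspci -k | grep -iA3 ethernet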
  17. If you still need help please post the diagnostics to confirm the controller.
  18. This means one of the devices dropped offline in the past. Post new diagnostics.
  19. Try with a new flash drive using a stock install, no key needed; that should confirm whether it's a flash drive or server problem.
  20. failed to start daemon: write /var/lib/docker/volumes/metadata.db: no space left on device
      Try recreating the docker image: https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
      Also see below if you have any custom docker networks: https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
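      Before recreating it you can confirm the image really is full (assumes the image is loop-mounted at the default /var/lib/docker):
          # shows size and usage of the mounted docker image
          df -h /var/lib/docker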