Everything posted by JorgeB

  1. It should also work with CSM boot, but it's failing to initialize:
     Jan 19 05:21:06 Tower kernel: nvme nvme0: I/O 25 QID 0 timeout, disable controller
     Jan 19 05:21:06 Tower kernel: nvme nvme0: Device shutdown incomplete; abort shutdown
     Jan 19 05:21:06 Tower kernel: nvme nvme0: Removing after probe failure status: -4
  2. A balance will still run if you click on it; balance isn't only used to convert profiles. To check the current profile, click on cache and see this section; in your case both should be raid1:
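The same check can be done from the command line. A minimal sketch, assuming the pool is mounted at /mnt/cache (the default on Unraid); the mount point is the only assumption here:

```shell
# Show the current btrfs allocation profiles for the pool.
# For a redundant two-device pool, both Data and Metadata should report RAID1.
btrfs filesystem df /mnt/cache

# If a profile needs converting, a balance with convert filters does it, e.g.:
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
```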
  3. Yep, I recommend always having checksums. I use btrfs and still create manual checksums with corz; for some situations they can still be useful. Yep. No, the problem can really be on parity; again, there's no way of knowing without checksums. See first reply.
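For anyone without a Windows box to run corz on, the same idea can be sketched with the standard md5sum tool that ships with Unraid/Linux; the /tmp paths and file names below are purely illustrative:

```shell
# Manual checksum workflow: create a manifest once, verify against it later.
mkdir -p /tmp/cksum-demo && cd /tmp/cksum-demo
echo "important data" > file1.txt

# Create a checksum manifest covering every matching file in the directory
md5sum *.txt > manifest.md5

# Later, verify the files against the manifest; md5sum prints "OK" per file
# and exits non-zero if anything no longer matches (e.g. silent corruption).
md5sum -c manifest.md5
```

Keeping the manifest alongside the data means a later parity mismatch can be attributed to a specific file rather than guessed at.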
  4. If you're using v6.9-rc it's a known issue; you can just click on "spin up all" to get the temps for now.
  5. It can be, even for newer models, possibly board dependent.
  6. Yes (assuming parity is valid), the data is still there, it's just the partitions that are invalid. Correct, and to remove you just unassign the device.
  7. I think it's your best bet; it has worked many times before in similar situations, and you don't have much to lose. Just don't rebuild on top if the emulated disk doesn't mount correctly.
  8. Disable all dockers, then enable them one by one, making sure each is stable before enabling the next, and so on.
  9. This is the first thing you should do.
  10. Problem appears to be network related, try to simplify network config as much as possible.
  11. Forgot to mention, it might degrade performance if using a SAS2/SATA3 expander, since device link speed affects total link bandwidth.
  12. Since you can't reproduce I'm going to close this for now, please re-open or create a new report in the future if necessary.
  13. Then difficult to say, but unlikely to be a disk problem, controller or PSU are most likely candidates.
  14. NetApp shelves can cause this; they often change disk IDs and can also change the partition. Good call on the controller, it should not be used with >2TB drives.
  15. That's not the correct way of doing it, next time ask for help first.
  16. It means parity is out of sync; you need to run a correcting check. Without checksums (or btrfs) you can only correct parity.
  17. And how did you do it? Don't remember you asking for help and that's not a standard procedure.
  18. This can happen when moving from/to RAID controllers that don't use the standard partition layout. You should be able to fix it by rebuilding all the disks one at a time, since Unraid will recreate the partition: stop the array, unassign one of those disks, start the array, and the emulated disk should mount and show the correct data. If yes, and only if yes, reassign the disk to rebuild on top, then repeat for the remaining disks.
  19. Enable mover logging to see what it's doing.
  20. Search for 10GbE peer to peer videos, the principle is the same.
  21. Please post the diagnostics before rebooting.
  22. Kingston DTSE9 USB 2.0, but not the newer USB 3.0 2nd gen.
  23. Since there's no redundancy in the pool there's no other way of replacing it, and if the device is failing the operation might also fail; in that case you'd need to destroy and re-create the pool.
  24. One thing that can help with the high load is to map as many dockers as possible to use disk shares instead of user shares.
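The reason this helps: user shares (/mnt/user/...) go through Unraid's FUSE layer, which adds per-I/O CPU overhead, while disk shares (/mnt/diskN/... or /mnt/cache/...) bypass it. A hedged sketch of the two mappings; the container and path names are hypothetical:

```shell
# Higher-load mapping, routed through the FUSE user-share layer:
#   docker run -v /mnt/user/appdata/myapp:/config myapp

# Lighter-load mapping, pointing the same data at the disk share directly
# (only safe when you know which disk/pool actually holds the files):
#   docker run -v /mnt/cache/appdata/myapp:/config myapp
```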