Everything posted by JorgeB

  1. Because your disks (and most, if not all) can't read that fast over the entire surface, what's reported is the average speed; 153MB/s for the last check seems about right.
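As a rough sanity check, the reported average is just disk size divided by check duration. A minimal shell sketch with hypothetical figures (a 12TB parity check taking 21h48m, chosen only to illustrate the arithmetic; these are not this user's numbers):

```shell
# Hypothetical figures, not taken from any diagnostics:
disk_mb=$((12 * 1000 * 1000))        # 12TB parity disk, in MB
duration_s=$((21 * 3600 + 48 * 60))  # 21h48m check duration, in seconds
# Average speed = size / time; prints "153 MB/s" for these numbers
awk -v b="$disk_mb" -v t="$duration_s" 'BEGIN { printf "%.0f MB/s\n", b/t }'
```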
  2. That won't be easy, and it's not something I can help with; you can make the request here: https://forums.unraid.net/forum/53-feature-requests/
  3. It can't be, that's an impossible speed; it can only be from a rebuild (or clear) of a disk smaller than parity (it's an old bug). The billions of errors are likely the result of the parity swap, another bug; as long as the next checks find 0 errors you're fine.
  4. Please use the existing docker support thread:
  5. I wouldn't say 100% is normal, but a high dashboard load during a transfer is normal because of the i/o load; unless Unraid/the GUI becomes unresponsive during it, it's nothing to worry about.
  6. I would cancel and run it after the update.
  7. LSISAS2008: FWVersion(20.00.02.00)
     This is one of the problem firmware releases, update to 20.00.07.00.
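The firmware version above comes from the mpt2sas line logged at boot. A minimal shell sketch that pulls it out of such a line; the sample log line is an assumption based on the snippet above, and the check only flags 20.00.02.00, the one release the reply confirms is problematic:

```shell
# Sample mpt2sas boot log line (format assumed for illustration)
line='mpt2sas_cm0: LSISAS2008: FWVersion(20.00.02.00), ChipRevision(0x03)'
# Extract the version inside FWVersion(...)
fw=$(echo "$line" | sed -n 's/.*FWVersion(\([0-9.]*\)).*/\1/p')
case "$fw" in
  20.00.02.00) echo "$fw is a problem release, update to 20.00.07.00" ;;
  *)           echo "$fw not flagged" ;;
esac
```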
  8. There's no driver in Unraid for that NIC. You can make a feature request; LT is usually good at adding those, if a compatible driver is available.
  9. If it's the same with the docker stopped (not the dashboard load, which is normally high, but the server turning unresponsive), I don't have many more ideas. You could try without encryption just to test, though it shouldn't be an issue with your CPU.
  10. Diags might show some clues about the issue.
  11. Best bet IMHO is to use ddrescue to clone disk1 to a new disk, then try to rebuild disk8 using the clone as disk1. There will likely be some (or a lot of) data corruption on the rebuilt disk, mostly depending on how successful ddrescue is. You can also run reiserfsck on the cloned disk1 before or after rebuilding; no point in running it on the failing disk1, it will abort due to i/o errors.
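A minimal ddrescue sketch of that clone step; the device names and mapfile path are placeholders, so triple-check which disk is source and which is destination before running anything like this:

```shell
# /dev/sdX = failing disk1, /dev/sdY = new (same size or larger) disk.
# The mapfile lets ddrescue resume and tracks the unreadable areas.
ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map
# Optional second pass: retry the bad areas a few more times
ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map
```

The mapfile is what makes the second pass possible: ddrescue skips everything already recovered and hammers only the sectors it previously marked bad.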
  12. Enable this then post that log after a crash together with the complete diagnostics.
  13. There is/was an issue with the mover plugin that starts the mover at array stop; looks to me like that's what happened here.
  14. It means the docker image is corrupt; delete and recreate it.
  15. Top doesn't show i/o load; the dashboard does.
  16. Known issue with some Dell servers, some workarounds below:
  17. Try this:
      • if the Docker/VM services are using the cache pool, disable them
      • unassign all cache devices
      • start the array to make Unraid "forget" the current cache config
      • stop the array
      • reassign all cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device)
      • re-enable Docker/VMs if needed
      • start the array and post new diags after array start
  18. Problem is that disk8 is already disabled, so there will be some data loss.
  19. There are also some LSI firmware releases with known issues with that; what firmware are you using? Or post diags.
  20. That's good, though like trurl mentioned that's a lot of errors for an unclean shutdown, did you by chance do a parity swap recently?
  21. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
      P.S. there are also some checksum errors detected by btrfs, run a scrub.
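A minimal sketch of that scrub step, assuming the btrfs pool is mounted at /mnt/cache (the usual Unraid cache mount point; adjust the path for a named pool):

```shell
btrfs scrub start /mnt/cache   # start the scrub in the background
btrfs scrub status /mnt/cache  # check progress and the error summary
btrfs dev stats /mnt/cache     # cumulative per-device error counters
```

A scrub verifies every checksum on the pool; uncorrectable errors it reports mean the affected files need to be restored from backup.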
  22. That looks more like it's related to the below, see if it applies to you:
      https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  23. It's possible, but first the pool has to be converted to raid1; after that, just follow these instructions.