Everything posted by JorgeB

  1. It won't hurt. No, you can have a redundant pool, but you would need another device.
  2. First, I would not recommend using an NVMe device in the disk array, since writes are going to be limited by parity; you can use it as a UD device or, with the new beta, in its own "pool". The log is being spammed with unrelated errors and I don't see where the NVMe dropped, but this can sometimes help, since some NVMe devices have issues with power states on Linux: on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (top right) and add this to your default boot option, after "append" and before "initrd=/bzroot":

     nvme_core.default_ps_max_latency_us=0

     Reboot and see if it makes a difference.
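     For reference, a minimal sketch of what the default boot entry in /boot/syslinux/syslinux.cfg ends up looking like with that parameter added (the label and the rest of the lines shown are the stock Unraid defaults; your append line may carry additional options):

        label Unraid OS
          menu default
          kernel /bzimage
          append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot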
  3. You need to run xfs_repair without -n, or nothing will be done; if it asks for it, use -L as well.
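     A minimal sketch, assuming the affected disk is disk1 so the device is /dev/md1 (adjust the number to your disk, and run it with the array started in maintenance mode):

        xfs_repair -v /dev/md1     # actual repair, note no -n
        xfs_repair -vL /dev/md1    # only if the first run asks you to zero the log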
  4. Check all connections, including power.
  5. One of your cache devices dropped offline previously:

     Jul 21 17:19:45 UNRAID kernel: BTRFS info (device nvme0n1p1): bdev /dev/sde1 errs: wr 508143002, rd 191759497, flush 5698788, corrupt 0, gen 0

     You need to run a scrub and check that all errors are corrected, see here for more info; you'll also need to re-create the docker image.
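     A minimal sketch of the scrub itself, assuming the pool is mounted at the standard /mnt/cache:

        btrfs scrub start /mnt/cache      # start the scrub
        btrfs scrub status /mnt/cache     # check progress and the error counts
        btrfs dev stats /mnt/cache        # per-device error counters (can be reset with -z once resolved)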
  6. The other way to resolve this is to rebuild one drive at a time with the new controller, so Unraid can re-create the partitions correctly.
  7. Best bet is to back up the cache data and re-format the pool.
  8. Parity can't usually help with filesystem corruption, and it won't if it's 100% in sync, but because of the HBA problem it might not be, so you can try: stop the array, unassign disk5, start the array, and if disk5 is still unmountable run xfs_repair on the emulated disk (it's also md5). If that doesn't work, your best bet is to ask for help on the xfs mailing list or to use a data recovery program like UFS Explorer.
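     A sketch of that repair step, assuming the array is started in maintenance mode so /dev/md5 exists but is not mounted:

        xfs_repair -v /dev/md5    # repairs the emulated disk5; add -L only if it asks for it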
  9. You assume right. WD helium models are all CMR; any WD 8TB or larger model is also CMR, at least for now.
  10. Make sure the BIOS is up to date; the option is called "Power Supply Idle Control" or similar, and if you can't find it, disable C-States globally. You're also overclocking the RAM; respect the maximum officially supported speeds for your configuration.
  11. Possibly a bug, unless the enable cache setting is now also in another place. Edit /boot/config/shares.cfg and change shareCacheEnabled="no" to shareCacheEnabled="yes".
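     If you prefer doing it from the command line, a rough example assuming the file and value above (back up the file first):

        cp /boot/config/shares.cfg /boot/config/shares.cfg.bak
        sed -i 's/shareCacheEnabled="no"/shareCacheEnabled="yes"/' /boot/config/shares.cfg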
  12. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  13. That setting is no longer there; it's set on a share-by-share basis.
  14. You should disable the mover logging so it doesn't spam the syslog.
  15. Disk looks fine; most likely an issue with the SASLP, since they are known to drop drives without reason. It could also be a cable/connection issue.
  16. Can't find them; either I missed them or they are not there, since several hours are missing because of all the log spam. The disk looks healthy; I would recommend replacing/swapping cables to rule them out and rebuilding on top.
  17. The first thing you need to do is make sure your server is not internet-facing. Now let me see if I can find the disk errors in the middle of all those login attempts.
  18. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  19. Try it on a different controller if possible, like the onboard SATA ports.
  20. Please post the diagnostics: Tools -> Diagnostics
  21. Yes, if the filesystem was fixed. You can also reboot to confirm there are no more errors.
  22. The problem starts first with the cache filesystem, then the docker image, because it's on the cache. Possibly there's something using the cache at that time, like the mover.
  23. There's filesystem corruption on the cache pool; the best bet is to back up any important data there and re-format the pool.