Everything posted by JorgeB

  1. Did you by any chance save the diagnostics before rebooting?
  2. Sorry to reply to this locked thread, but this was just posted on the btrfs mailing list; you can probably also see it in the df source code, but just in case it helps: https://lore.kernel.org/linux-btrfs/[email protected]/T/#t Forgot to mention: we already knew used space on the GUI is not correct for any raid5/6 pool, but it was recently reported in the forum that a 3-device raid1 pool also reports used space incorrectly, possibly the same for any raid1 pool with an odd number of devices, and df shows it correctly. Both report free space incorrectly, though I would think that part is likely to be fixed by btrfs in the future.
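     If you want to compare the two views on your own pool, a quick check from the console (a minimal sketch; /mnt/cache is just an example mount point):

     btrfs filesystem usage /mnt/cache
     df -h /mnt/cache

     The btrfs output breaks down allocated and used space per profile, which helps show where the two numbers diverge.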
  3. It might; if it doesn't, back up the current flash, recreate it from scratch, and restore the config folder from the backup.
  4. Changed Status to Closed. Changed Priority to Other.
  5. iotop is not part of Unraid, and it works for me with beta25, so you likely have a missing dependency. Try reinstalling Python 2.7 from NerdPack; if you still have issues after that, your best bet is the NerdPack support thread.
  6. Are you sure the problem is on the Unraid side, i.e., have you tried with different source computers?
  7. The WD60EZAZ is SMR, and while these drives generally perform OK with Unraid when writing sequentially, you might be hitting the SMR wall; if the rebuild starts fast and then slows down after a few minutes, that's likely it.
  8. Then you likely have a hardware problem.
  9. This means there were lost writes sometime in the past, and it's usually a fatal error; you need to re-format the cache. If there's important data, here are some recovery options; btrfs restore is likely the best for this. P.S. Next time please create your own thread, since this is unrelated to the OP.
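     A minimal sketch of the recovery, assuming the pool device is /dev/sdX1 and you're restoring to an array disk (run it with the pool unmounted):

     btrfs restore -v /dev/sdX1 /mnt/disk1/restore

     -v just lists the files as they are copied; the destination folder is an example and can be any location with enough free space.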
  10. There have been some reports of issues with non-QVL RAM; does it run stable with the QVL kit only?
  11. A btrfs docker image or vdisks on a btrfs filesystem can, and usually do, have very high write amplification, but a btrfs docker image on an xfs filesystem should not be a problem. The recently released v6.9-beta25 also mostly fixes this problem for btrfs filesystems, though btrfs will always have more writes than xfs due to being a COW filesystem.
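     If you want to measure the write amplification yourself, one way is to sample the sectors-written counter for the cache device twice and compare (a sketch; sdb is just an example device, and the 7th field of the stat file is sectors written):

     awk '{print $7 * 512 / 1024 / 1024 " MiB written"}' /sys/block/sdb/stat

     Run it, wait a few minutes, run it again, and the difference is what actually hit the device.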
  12. Yes, the docker image is still btrfs, but it shouldn't cause high writes on an xfs filesystem; though if you want, the latest beta allows xfs for the docker image.
  13. That's the docker image; if you have a cache you can move it there.
  14. Doesn't make much sense; it certainly isn't a general problem, and I don't remember ever seeing it before. It might be hardware related; can you try booting Unraid on a different PC?
  15. That shouldn't happen; it just disables the power states.
  16. This is a very strange issue; there are errors even during boot, before any plugins are loaded. If you can, disable that NIC and try an add-on one; you can also try the latest beta, which is currently more verbose with SMB errors.
  17. You can still rebuild one at a time, but obviously data will only be available as you do it.
  18. It won't hurt. No, you can have a redundant pool, but you would need another device.
  19. First, I would not recommend using an NVMe device in the disk array, as writes are going to be limited by parity; you can use it as a UD device or, with the new beta, in its own "pool". The log is being spammed with unrelated errors and I don't see where the NVMe dropped, but this can sometimes help: some NVMe devices have issues with power states on Linux. On the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right), and add this to your default boot option, after "append" and before "initrd=/bzroot":

     nvme_core.default_ps_max_latency_us=0

     Reboot and see if it makes a difference.
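     For reference, a sketch of what the default boot entry would look like after the edit (your syslinux.cfg may differ slightly):

     label Unraid OS
       menu default
       kernel /bzimage
       append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot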
  20. You need to run xfs_repair without -n or nothing will be done; if it asks for it, use -L as well.
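     A sketch of the commands, assuming disk1 is the affected disk and the array is started in maintenance mode (the GUI's check filesystem option runs the same tool):

     xfs_repair -v /dev/md1

     And only if it refuses to run and asks for the log to be zeroed:

     xfs_repair -vL /dev/md1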
  21. Check all connections, including power.
  22. One of your cache devices dropped offline previously:

     Jul 21 17:19:45 UNRAID kernel: BTRFS info (device nvme0n1p1): bdev /dev/sde1 errs: wr 508143002, rd 191759497, flush 5698788, corrupt 0, gen 0

     You need to run a scrub and check that all errors are corrected; see here for more info. You'll also need to re-create the docker image.
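     A sketch of the scrub, assuming the pool is mounted at /mnt/cache:

     btrfs scrub start /mnt/cache
     btrfs scrub status /mnt/cache
     btrfs dev stats /mnt/cache

     scrub status shows progress and any uncorrectable errors; note the dev stats counters persist across reboots until you reset them with btrfs dev stats -z.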
  23. The other way to resolve this is to rebuild one drive at a time with the new controller, so Unraid can re-create the partitions correctly.