Everything posted by JorgeB

  1. Can't move posts to the bug reports section (or don't know how to) but copied it there.
  2. I believe df was used before but stopped being used due to unreliable values; on newer kernels it should work reliably, and better than the current method in some situations. There are frequent posts on the forum from users with two different-size devices in a pool who run out of space because free space is incorrectly reported, e.g. for a pool made of 32GB + 64GB devices with the default raid1 profile, usable space will be around 32GB, the GUI reports 47GB, and df reports it correctly. Also, starting with even newer kernels, like the one in v6.9-beta1, free space is correctly reported for raid5/6 profiles as well, e.g. a pool of four 64GB devices using raid6 shows the correct value with df (see the sketch after this list). Please change this for v6.9; with multiple pools it will likely affect more users.
  3. Posted this here since it's the version being developed, but note that this bug is also present on v6.8.3. As mentioned in the title, the very nice pool direct-convert options in the GUI are based on the number of pool slots instead of the actual number of devices, so if you have a 2-device pool (or even just 1 device) but have 4 pool slots selected you'll get the option to convert to raid5/6/10; you can then press balance, but it will fail to convert to any invalid option (see the sketch after this list).
  4. Upgrade to v6.8.3, since it includes a newer xfsprogs, and run xfs_repair again (see the sketch after this list).
  5. The WD60EFAX is SMR; not that it's of much concern with Unraid, but only the older WD60EFRX is CMR.
  6. Diags after rebooting don't help much; if it keeps happening, set up the syslog server/mirror feature.
  7. Same issue as this one; rebooting will fix it. It's not quite clear what the underlying cause is, possibly a fuser bug.
  8. The main suspect would be the overclocked RAM; respect the maximum officially supported RAM speed.
  9. Looks more like a power/connection issue; swap/replace BOTH cables (or the slot) and rebuild on top.
  10. You're also getting read errors on disk5; start by updating the LSI firmware, since all P20 releases except the latest one (20.00.07.00) have known issues (see the sketch after this list).
  11. Max theoretical bandwidth for a PCIe 2.0 x1 link is 500MB/s; max usable bandwidth is around 400MB/s (see the calculation after this list).
  12. There are some weird SMART errors on the parity disk; you should run an extended SMART test (see the sketch after this list).
  13. Changed Status to Closed. Changed Priority to Other.
  14. Please reboot and post new diags.
  15. I'm sorry, I have the bad habit of sometimes reading posts too fast; in this case I misread the quoted post, and that's what my suggestion was based on. Glad you found the way to rebuild them.
  16. Please post the complete diagnostics, but looking at the syslog it appears parity is failing.
  17. It could be, if replacing the SATA cables doesn't fix them; you're still getting ATA errors on 3 disks, so try replacing those first, and make sure they are good quality cables.
  18. Cable/port won't make any difference (unless there's a problem with the power; bad power can in some cases cause pending/reallocated sectors). A preclear or a full write might or might not mark/reallocate all the bad sectors, difficult to say.
  19. Re: BTRFS error. Looks like it.
  20. Yep, though rsync over ssh won't be very fast; it also depends on the CPU and whether it can assist with hardware AES encryption (see the sketch after this list). rsync -av /source root@ip:/dest
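
A minimal sketch of the free-space comparison described in item 2, assuming a two-device 32GB + 64GB raid1 pool mounted at /mnt/cache (the mount point is only an example):

    df -h /mnt/cache                   # on newer kernels df reports the ~32GB that is actually usable
    btrfs filesystem usage /mnt/cache  # shows raw device space vs. the estimated usable free space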
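
For item 3, profile conversion is done through a btrfs balance with convert filters, which is presumably what the GUI triggers; run manually against a hypothetical 2-device pool at /mnt/cache, the kernel rejects profiles that need more devices (raid6 needs at least 3), which is why the balance fails:

    btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/cache   # rejected on a 2-device pool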
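
For item 4, a typical way to re-run the repair, assuming the affected disk is disk1 and the array is started in maintenance mode (the device name is only an example):

    xfs_repair -n /dev/md1   # read-only dry run first, to see what it would change
    xfs_repair -v /dev/md1   # actual repair; going through /dev/mdX keeps parity in sync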
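
For item 10, the controller's current firmware can be checked from the console before flashing; sas2flash is the LSI/Broadcom utility normally used with these HBAs (assuming it is available on the system):

    sas2flash -listall   # lists attached SAS2 controllers with their firmware version (e.g. P20 / 20.00.07.00)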
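
For item 11, the numbers follow from the link's line rate; a quick check of the arithmetic (treating the usable figure as roughly 20% below theoretical, per the values above):

    echo '5 * 8 * 1000 / 10 / 8' | bc   # 5 GT/s * 8/10 (8b/10b encoding) / 8 bits per byte = 500 MB/s
    echo '500 * 8 / 10' | bc            # minus roughly 20% packet/protocol overhead -> ~400 MB/s usable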
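
For item 12, the extended test can also be started from the console; /dev/sdX is a placeholder for the parity disk:

    smartctl -t long /dev/sdX   # start the extended (long) self-test; it runs in the background
    smartctl -a /dev/sdX        # view the SMART attributes and the self-test log once it finishes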
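
For item 20, the basic form is the command shown in the post; if the CPU lacks AES acceleration, forcing a cheaper ssh cipher sometimes helps (paths, IP, and cipher choice are only examples):

    rsync -av /source root@ip:/dest
    rsync -av -e 'ssh -c chacha20-poly1305@openssh.com' /source root@ip:/dest   # lighter cipher for CPUs without AES-NI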