Everything posted by JorgeB

  1. See the first couple of posts in the UD thread; they have the answers to your questions, but if you still need help please post there.
  2. The latest version appears to be 5.6.0; follow the instructions on the link above. The mailing list is here: https://xfs.org/index.php/XFS_email_list_and_archives
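     To check which xfsprogs version is currently installed before upgrading, a quick sketch from the Unraid console:
        xfs_repair -V    # prints the installed xfs_repair/xfsprogs version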
  3. The problem is the filesystem, not the disk; rebuilding from parity won't help since it will rebuild the same filesystem. This looks like an xfs_repair bug, since it seems unable to fix it.
  4. Note that for this you'd need to back up first; you can't format and rebuild from parity.
  5. It appears it's still failing. Start the array again; if the filesystem is still corrupt you'll need to either ask for help on the xfs mailing list, manually upgrade xfsprogs to the latest version (not the one on the link) and try again, wait for a newer Unraid release, or re-format the disk. An example repair run is sketched below.
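     A minimal sketch of re-running the repair after upgrading xfsprogs, assuming the array is started in maintenance mode and the affected disk is md1 (adjust the device to your system):
        xfs_repair -n /dev/md1   # check-only pass, reports problems without changing anything
        xfs_repair /dev/md1      # actual repair; add -L only if it asks you to zero the log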
  6. This is more a request than a bug report. Newer kernels support btrfs raid1c3 (3 copies) and raid1c4 (4 copies). Currently a pool converted to raid5 or raid6 using the GUI options will use raid1 for metadata (and rightly so, since it's not recommended to use raid5/6 for metadata); the problem for a raid6 pool is that redundancy won't be the same for data and metadata, as warned in the log: "kernel: BTRFS warning (device sdb1): balance: metadata profile raid1 has lower redundancy than data profile raid6", i.e., the pool's data chunks can tolerate two missing devices but metadata can only tolerate one. With the new raid1 profiles this doesn't need to happen anymore, so when converting a pool to raid6 the metadata should be converted to raid1c3 (see the sketch below). Note that if the user downgrades to an older Unraid release the pool won't mount, but it can always be converted before downgrading. P.S. you could also add convert options to raid1c3/c4 for data with 3/4 devices; not sure that would be used by many, but always nice to have the option if it's not too much trouble.
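     A minimal sketch of what that conversion could look like from the command line, assuming the pool is mounted at /mnt/cache (adjust the path to your pool):
        btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache
        btrfs filesystem df /mnt/cache   # confirm the new data/metadata profiles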
  7. Can't move posts to the bug reports section (or don't know how to) but copied it there.
  8. I believe it was used before but due to unreliable values it stopped being used; on newer kernels it should work reliably, and better than the current method in some situations. There are frequent posts on the forum from users running out of space on a pool made of two different size devices, because free space is incorrectly reported. E.g., for a pool made of 32GB + 64GB devices with the default raid1 profile, usable space will be around 32GB, yet the GUI reports 47GB, while df reports it correctly. Also, starting with even newer kernels, like the one in v6.9-beta1, free space is correctly reported for raid5/6 profiles as well, e.g. for a pool made of four 64GB devices using raid6. Please change this for v6.9; with multiple pools it will likely affect more users.
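     A minimal sketch of comparing the two reports, assuming the pool is mounted at /mnt/cache (adjust the path):
        btrfs filesystem usage /mnt/cache   # per-profile allocation and estimated free space
        df -h /mnt/cache                    # free space as reported by statfs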
  9. Posted this here since it's the version being developed, but note that this bug is also present on v6.8.3. As mentioned in the title, the very nice pool direct convert options in the GUI are based on the number of pool slots instead of the actual number of devices, so if you have a 2 device pool (or even just 1 device) but have 4 pool slots selected you'll get the option to convert to raid5/6/10; you can then press balance but it will fail to convert to any invalid option.
  10. (Duplicate of item 8 above.)
  11. Upgrade to v6.8.3 since it includes a newer xfsprogs and run xfs_repair again.
  12. The WD60EFAX is SMR, not that it's of much concern with Unraid, but only the older WD60EFRX is CMR.
  13. Diags after rebooting don't help much; if it keeps happening, set up the syslog server/mirror feature.
  14. Same issue as this one; rebooting will fix it. It's not quite clear what the underlying cause is, possibly a fuser bug.
  15. Main suspect would be the overclocked RAM; respect the max officially supported RAM speed.
  16. Looks more like a power/connection issue; swap/replace BOTH cables (or the slot) and rebuild on top.
  17. You're also having read errors on disk5; start by updating the LSI firmware, as all p20 releases except the latest one (20.00.07.00) have known issues.
  18. Max theoretical bandwidth for a PCIe 2.0 x1 link is 500MB/s, max usable bandwidth is around 400MB/s.
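     For reference, a rough sketch of where that figure comes from: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so only 8 of every 10 bits carry data:
        # 5 GT/s x 8/10 encoding = 4 Gbit/s = 500 MB/s theoretical per lane
        echo $(( 5000 * 8 / 10 / 8 ))   # -> 500 (MB/s); real-world usable is around 400 MB/s after protocol overhead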
  19. Some weird SMART errors on the parity disk; you should run an extended SMART test.
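     A minimal sketch, assuming the parity disk is /dev/sdb (adjust the device to your system):
        smartctl -t long /dev/sdb   # start the extended (long) self-test
        smartctl -a /dev/sdb        # check the self-test log and attributes once it completes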
  20. Changed Status to Closed. Changed Priority to Other.
  21. Please reboot and post new diags.