Everything posted by JorgeB

  1. Yep, that's why I mentioned it. Glad you found the issue.
  2. It does appear to be a disk issue; you can run an extended SMART test to confirm, since this type of read error can be intermittent. If the disk fails the test or you get more read errors, replace it.
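     A minimal sketch of running the extended test from the console, assuming smartmontools is installed and /dev/sdX is a placeholder for the actual device:

        smartctl -t long /dev/sdX   # start the extended (long) self-test
        smartctl -a /dev/sdX        # check the self-test log once it completes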
  3. It's below the balance and scrub sections of every btrfs filesystem; just click on the device on the Main page and scroll down.
  4. That's a problem with the onboard controller, quite common with some Ryzen boards, usually under load. Look for a BIOS update, and if that doesn't help, use an add-on controller or a different board.
  5. As mentioned, adding slots is not a problem. Why not? You just need to check the "I want to do this" (or similar) box next to the array start button.
  6. That is a big deal; the only acceptable number of sync errors is 0. Except for one, the sync errors started right at the 8TB mark, which suggests the initial sync was not correct, or that you did a parity swap; that sometimes triggers a pesky bug that doesn't correctly zero out the remainder of the drive past the initial parity size.
  7. That happens when you remove a device; it's not a problem when you add one. I don't see how, because the UUID comes from the filesystem and it was the same.
  8. But v6.9.2 is not affected by the bug, and it won't let you do that anyway, so it's fine.
  9. The only logical explanation is that at some point in the past they were both assigned to the same pool, but I would need the syslog to confirm, and only if the server hasn't rebooted since it happened.
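     If the syslog is still available, one way to look is a keyword search; this is only a sketch, assuming the standard /var/log/syslog location and that the relevant messages mention btrfs or the pool:

        grep -i -e btrfs -e pool /var/log/syslog   # look for past pool assignments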
  10. If you can't see it, start moving the data outside the pool, to the array for example; vdisks are a usual suspect, since they can grow if not trimmed (a trim sketch follows below).
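     As a sketch, assuming a Linux guest, trimming can be done from inside the VM with the standard fstrim tool:

        fstrim -av   # trim all mounted filesystems that support discard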
  11. As suspected, both devices are in the same pool; the easiest way to fix it is the procedure below. Since I don't know which Unraid version you're using, make sure you don't change the number of cache slots when removing the pool device, as there's a bug in some releases.
     - Stop the array
     - Unassign both pool devices
     - Start the array
     - Stop the array
     - Assign both devices to the same pool, the one where you want the existing data to be; there can't be an "all data will be deleted" warning in front of any member
     - Start the array
     - Stop the array
     - Unassign the device you want to remove from that pool
     - Start the array
     - Wait for the btrfs balance to finish (a sketch for checking its status follows below); when it does, stop the array
     - You can now assign the removed device to the other pool
     - Start the array; there will be an option to format that pool, and once that's done you can use it
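     To check whether the balance is still running, this sketch assumes the pool is mounted at /mnt/cache (substitute the real pool name):

        btrfs balance status /mnt/cache   # shows progress, or "No balance found" when done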
  12. du (and other tools) isn't reliable with btrfs; this will be the actual used space.
  13. That's not what I need; post the output of btrfs fi usage -T /mnt/name_of_pool for both pools. If that's enough, fine; if not, sorry, but I'm going to ask file by file as I need them.
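     For example, assuming the two pools are named cache and pool2 (substitute the real names; "fi" is the accepted shorthand for "filesystem"), the invocations would be:

        btrfs filesystem usage -T /mnt/cache
        btrfs filesystem usage -T /mnt/pool2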
  14. Probably both devices are still in the same pool; please post the diagnostics to confirm.
  15. Possibly, but with USB and older disks in the array it might not be; you can run the DiskSpeed docker to check disk performance.
  16. No, it's a duplicate of disk2, so after wiping it there might still be issues with parity, because there's only one btrfs array disk; that's likely also the reason parity is only showing 4TB. Post new diags after fixing sdl and rebooting.
  17. The good news is that the crashing is gone; how is the speed? The pool issues are the result of a duplicate filesystem on the unassigned disk (sdl); I guess that's the old parity? Wiping, reformatting, or disconnecting it should fix that after rebooting.
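     As a sketch of the wiping option, assuming the duplicate filesystem really is on /dev/sdl, wipefs can clear the signatures; double-check the device name first, as this is destructive:

        wipefs -a /dev/sdl   # remove all filesystem signatures from the disk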