JorgeB (Moderator)
Everything posted by JorgeB

  1. Do you mean newer kernel/tools than this beta? df is still working correctly for me:

     Filesystem      Size  Used Avail Use% Mounted on
     /dev/sdf1       1.4T  3.6M  930G   1% /mnt/cache

     And now the newer btrfs tools in beta29 also support raid5/6:

     btrfs fi usage -T /mnt/cache
     Overall:
         Device size:           1.36TiB
         Device allocated:     17.06GiB
         Device unallocated:    1.35TiB
         Device missing:          0.00B
         Used:                288.00KiB
         Free (estimated):    930.15GiB  (min: 700.11GiB)
         Data ratio:               1.50
         Metadata ratio:           2.00
         Global reserve:        3.25MiB  (used: 0.00B)
         Multiple profiles:          no
  2. df reports the correct used and free space for every possible combination (AFAIK) except raid1 with an odd number of devices, but that's a btrfs bug and should be fixed in the near future.
  3. Me too; it would be even better if it was done the same way df does it, but I already made a case for that and apparently it wasn't considered good enough.
  4. There was no parity. That's not the problem. The problem is that Unraid rejects the disk after a reboot, and worse, the partition becomes invalid and the disk can't be mounted again. It's easy to reproduce if you follow the steps I outlined above.
  5. You can only do it with a disabled disk; if it wasn't already disabled you'd need to disable it first. All of that info is in the linked procedure.
  6. You can use the invalid slot command if you have a spare disk of the same size or larger; I can post the instructions for that. If you can't do it in the GUI, disable array auto-start by editing disk.cfg on your flash drive (config/disk.cfg) and changing startArray="yes" to startArray="no".
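A sketch of that disk.cfg edit (the config/disk.cfg path and the startArray key are from the post; `disable_autostart` is just a hypothetical helper name, and the script keeps a backup before touching anything):

```shell
#!/bin/sh
# Hypothetical helper: flip startArray="yes" to startArray="no" in an
# Unraid disk.cfg so the array won't auto-start on the next boot.
disable_autostart() {
    cfg="$1"
    cp "$cfg" "$cfg.bak"                                  # keep a backup first
    sed -i 's/^startArray="yes"/startArray="no"/' "$cfg"  # flip the flag
    grep '^startArray' "$cfg"                             # show the new value
}

# On the Unraid flash drive this would be:
# disable_autostart /boot/config/disk.cfg
```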
  7. Best would be to have an appdata backup you could restore; it won't be easy to restore from lost+found folders.
  8. By using the new config with the "trust parity" option, assuming nothing has changed on the array since you added the new parity; still, parity may not be 100% in sync because the array was mounted without it, even if no data was changed.
  9. There's no magic way; you'd need to check folder by folder. The "file" command might help identify some of the files.
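To run the "file" command mentioned above over a whole lost+found tree (the disk path and the `identify_recovered` name are just examples):

```shell
#!/bin/sh
# Print "path: detected type" for every recovered file, so JPEGs, videos,
# archives, etc. can be sorted back into place despite their numeric names.
identify_recovered() {
    find "$1" -type f -exec file {} +
}

# Example (adjust the disk path to the one that was checked):
# identify_recovered /mnt/disk1/lost+found
```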
  10. You shouldn't have done that; it's not possible to successfully sync parity with a failing disk, and now the old parity won't be 100% in sync. It's still your best bet, though: do a new config with it, check "parity is already valid", then do the parity swap.
  11. You need to use the parity swap procedure.
  12. It's not a parity check, it's a parity sync; parity is still invalid. Wait for that to finish before doing any heavy I/O on the array.
  13. Post the diags from before rebooting instead.
  14. Great, you should create a thread for updates/support here: https://forums.unraid.net/forum/61-plugin-support/
  15. How to reproduce:
      - Have a non-rotational device assigned to the array with the old partition scheme.
      - Wipe it manually or using the new erase function.
      - Start the array and re-format with the new alignment.
      - Reboot (or make any other array change) and you'll get a "wrong" device, since Unraid is still expecting the old size.
      - You'll need to do a new config to re-accept the device, but it will be unmountable, so any data copied since will be lost.
  16. By default Linux uses up to 20% of free RAM for write cache; this can be adjusted manually or, for example, with the Tips and Tweaks plugin.
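That 20% figure is the kernel's vm.dirty_ratio default; a quick sketch of inspecting the relevant sysctls (tuning them is what a plugin like Tips and Tweaks automates):

```shell
#!/bin/sh
# Show the current write-cache ceilings: dirty_ratio is the hard limit
# (% of reclaimable RAM), dirty_background_ratio is where background
# writeback starts flushing to disk.
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# To lower the ceiling manually (root required), e.g.:
# sysctl -w vm.dirty_ratio=10
```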
  17. Try this: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601
  18. If the data in lost+found is still recognizable, move it back to the correct place.