JorgeB

Moderators

  • Content Count: 32059
  • Joined
  • Last visited
  • Days Won: 379

JorgeB last won the day on June 19

JorgeB had the most liked content!

Community Reputation: 3843 (Hero)

About JorgeB

  • Rank: Advanced Member
  • Gender: Male

Recent Profile Visitors: 16519 profile views
  1. Parity isn't valid, so it can't help; also note that parity generally can't help with filesystem corruption anyway. If btrfs restore is not working with the old disk (not the emulated disk), there isn't a great chance of recovery, but the btrfs maintainers might still be able to help.
  2. Unfortunately I can't help more; you can try IRC or the mailing list as mentioned in the FAQ.
  3. Yes. You can't have the same filesystem mounted twice, and after it's wiped/formatted it's no longer the same filesystem; the UUID will be different.
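     For reference, a quick way to confirm the UUIDs differ after the format (the device names below are only examples, adjust to your disks):
        # print filesystem type and UUID; the re-formatted disk will show a new UUID
        blkid /dev/sdb1
        blkid /dev/sdc1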
  4. You can't mount both at the same time, but you can stop the array and mount the UD disk.
  5. I forgot that since some time ago you can't change the fs on emulated disks; this was done to avoid users thinking that they can use it to change the filesystem of a disk while keeping the data. You can do this instead: start the array in maintenance mode, then type: wipefs -a /dev/md1 Then re-start the array in normal mode and format the disk.
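     Roughly, the whole sequence from the console looks like this (md1 corresponds to disk1, adjust the number to the disk you're wiping):
        # with the array started in maintenance mode:
        wipefs -a /dev/md1    # remove all filesystem signatures from the emulated disk
        # then stop the array, start it in normal mode and format the disk from the GUI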
  6. I wanted to see the diags from when the disk got unmountable, before rebooting, but unless you saved them they will be lost. This error means writes to the device were lost, very likely because of using USB, which as mentioned is not recommended for array drives. btrfs restore might be able to recover some data, option 2 here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
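     A rough sketch of what option 2 looks like from the console (the device and destination path are placeholders; the destination must be a separate disk with enough free space):
        # copy whatever is recoverable off the damaged btrfs filesystem without mounting it
        btrfs restore -v /dev/sdX1 /mnt/disks/destination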
  7. Stop the array, click on the disk and change the filesystem to a different one, start the array, format, then go back to xfs and format again.
  8. Yes, you need to format the emulated disk before mounting the other one. It will be slow and the array will be unprotected during the procedure, but you should have backups of anything important anyway, so it's a way to avoid buying a new disk.
  9. That's not how you add a new disk, and by doing that you invalidated parity. Please post the diagnostics: Tools -> Diagnostics
  10. Those look more like an xfs bug to me; you'd need to post on the xfs mailing list (or re-create that filesystem by formatting the disk).
  11. Possibly, but it would likely affect more disks if that were the case. Did you check the firmware? The only way to be sure would be to test with another one. It should be fine; you can also run a non-correcting check.
  12. IIRC raid0 with an odd number of equal-size drives reports space correctly; the problem here is likely the different-size drives. The free space reading will be less wrong as the pool gets filled, and you can use the full space, which as mentioned is 3TB in this case.
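     To see how the space is really being allocated you can run (the mount point is just an example):
        # shows allocated vs. unallocated space per profile, more meaningful than df for btrfs
        btrfs filesystem usage /mnt/cache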
  13. Looks like a corrected RAM error. Check the system/IPMI event log; there should be more info there on the affected DIMM, then remove/replace it.
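     If ipmitool is available you can also dump the event log from the console (just a sketch, the BMC's web interface shows the same info):
        # list the IPMI system event log; corrected ECC entries usually name the DIMM slot
        ipmitool sel elist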
  14. Edit config/share.cfg on the flash drive and change: shareCacheEnabled="no" to shareCacheEnabled="yes" Then reboot.
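     If you prefer to do it from the console, something like this should work (assuming the flash drive is mounted at /boot, which is the default):
        # flip the setting in place, then reboot for it to take effect
        sed -i 's/shareCacheEnabled="no"/shareCacheEnabled="yes"/' /boot/config/share.cfg
        reboot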