Everything posted by JorgeB

  1. Appears to be this issue; disabling NFS (if not needed) is reported to help.
  2. Yeah, while a few bugs are still normal, there should be no show stoppers, and nothing that would put the data at risk. I updated one of my main servers today, and based on these past few hours, the NVMe device that was writing about 3TB per day with v6.8 (without any optimizations) is now on course to write less than 200GB per day. The release notes mention how to re-partition the device(s); backup/restore would depend on your current backups, but you can always move the cache data to the array and then restore it, as mentioned in the FAQ for the cache replacement procedure.
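     For reference, one way to track how much an NVMe device is writing (the device name below is just an example, adjust it to your system) is to check the Data Units Written SMART attribute from the console and compare it over 24 hours:
     smartctl -A /dev/nvme0 | grep "Data Units Written"
     Each data unit is 512,000 bytes, so the difference between two readings gives the bytes written over that period.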
  3. Stop the array, unassign the parity drive you want to remove, start the array, stop the array again, re-assign that drive as a data device, then start the array to begin clearing; after the clear is done the drive needs to be formatted before use.
  4. Wrong free space is a btrfs bug, wrong used space is an Unraid bug, so you'd need to wait for a newer kernel that includes a btrfs fix; the wrong used space is in some cases already a known issue and should be fixed in a future Unraid release. You can always see the correct stats by typing on the console: btrfs fi usage -T /mnt/cache
  5. Yep, df is also reporting the wrong free space, so it's a btrfs issue. df does report the correct used space, unlike Unraid, but that's a known issue, since Unraid currently calculates the used space by subtracting free space from total capacity.
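     As an illustration (the numbers are made up): if a pool has 1TB total capacity and btrfs misreports the free space as 800GB, Unraid will show 1TB - 800GB = 200GB used, regardless of how much space is actually allocated, so a wrong free value drags the used value along with it.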
  6. There was a recent change in how free space is reported with btrfs; unfortunately it doesn't appear to work correctly for a 3-device raid1 pool. I wonder if df reports it correctly, please post the output of: df -hH | grep cache
  7. The diags are from after rebooting, so we can't see what happened, but SMART looks fine, so it's most likely not a disk problem. You can replace/swap the cables to rule them out in case it happens again.
  8. Since the extended SMART test passed, the drive is OK for now.
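     For reference (sdX is a placeholder for the actual device), an extended test can be started and checked from the console, or from the disk's SMART section in the GUI:
     smartctl -t long /dev/sdX
     smartctl -a /dev/sdX
     The second command shows the result in the self-test log once the test finishes.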
  9. You have filesystem corruption on disk1, so you need to check the filesystem, and since that was spamming the logs it's difficult to look for other issues; post new diags after fixing that, when the issue re-occurs.
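     As a sketch (assuming disk1 is XFS, the Unraid default): with the array started in maintenance mode you can run a read-only check from the console, then repeat without -n to actually repair; the device may be /dev/md1 or /dev/md1p1 depending on the Unraid version, and running the check from disk1's GUI page avoids having to guess:
     xfs_repair -n /dev/md1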
  10. Those messages end up filling the syslog. If by external you mean USB disks, we don't recommend those for pool/array devices, but it's not necessarily a problem; it's impossible to say if there are issues without the diags, since the ones posted just have the NFS spam.
  11. du is not reliable with btrfs; the GUI will show the correct used/free space (with equal-size devices). If you have Windows VMs, this should help to recover some space.
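      If you want a btrfs-aware view of what is using the space (the path is just an example), btrfs has its own du command that accounts for shared and exclusive extents, which plain du does not:
      btrfs filesystem du -s /mnt/cache/*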
  12. Sorry, no, but if you search the forum you should find some recommendations.
  13. That many errors could be explained, for example, by writing to the emulated disk after it got disabled and before doing the new config; there could be other explanations, but again I would need the previous diags.
  14. Whatever name you choose for the pool, it will be mounted under /mnt by that name: a pool named cache will still be at /mnt/cache, and a pool named newpool will be at /mnt/newpool.
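      For example (the pool name here is just illustrative), after starting the array you can confirm the mount point from the console with: df -h /mnt/newpool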
  15. That alone doesn't explain 500K sync errors; a few errors yes, but not that many, so something is missing.
  16. So this wasn't correct: And again:
  17. Best bet is to back up any cache data, re-format, and restore the data.
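      As a rough sketch (the paths are just examples, adjust them to your setup): copy the cache contents to an array disk or share, re-format the pool, then copy them back; you can also use the mover as described in the cache replacement FAQ.
      rsync -avh /mnt/cache/ /mnt/disk1/cache_backup/
      rsync -avh /mnt/disk1/cache_backup/ /mnt/cache/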