JorgeB
Moderators · 67,771 posts · 708 days won

Everything posted by JorgeB

  1. Problem with the controller: you're using SATA port multipliers, which are not recommended. You can reboot and try again, but ideally stop using the port multiplier.
  2. It's usually an issue with the board/slot and device combo; it might work better in a different slot if one is available.
  3. Deleting the old partition(s) is enough.
  4. You can do a new config to re-assign the data drives, then check "parity is already valid" before starting the array. Parity1 remains valid after changing slots, but parity2 does not.
  5. Please use the existing docker support thread:
  6. Should be fine, but it's not something I have experience with.
  7. Krusader won't show used space reliably with btrfs. Vdisks/images can grow over time if not trimmed, so either enable trim or use the mover to move the data to the array and then back to cache; any sparse files will become smaller again. Also see this if you have any Windows VM. There are also reports that running btrfs defrag helps recover space, but don't do that if you use snapshots.
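     For example, assuming the pool is mounted at /mnt/cache (adjust the path to your pool name):

       fstrim -v /mnt/cache                          # trim unused blocks on the pool
       btrfs filesystem defragment -r -v /mnt/cache  # optional, can recover space from grown images; skip this if you use snapshots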
  8. This usually happens when the drives already have a partition starting on sector 2048: Unraid will format the drive but keep the existing partition, since that starting sector is now valid for SSDs. It should never happen with a new or completely wiped disk.
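     If you want to completely wipe a disk before assigning it, something like this works, assuming the disk is /dev/sdX and holds no data you want to keep (this is destructive):

       wipefs -a /dev/sdX     # /dev/sdX is a placeholder; removes all partition/filesystem signatures
       blkdiscard /dev/sdX    # optional, SSDs only; discards the whole device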
  9. Changed Status to Closed. Changed Priority to Other.
  10. This forum is for Unraid bug reports; copied to the general support forum, please continue the discussion there:
  11. As mentioned, first try again with rc4 after re-formatting the pool.
  12. Let me expand a little on this: nothing I've seen so far suggests an Unraid/btrfs problem. Btrfs corruption among Unraid users is not that uncommon, though most of the time it's caused by hardware issues. I subscribe to the btrfs mailing list and AFAIK there are no unexplained corruption issues with recent kernels; also, no one else using Unraid with rc4 has complained so far, and many users have btrfs pool(s). "Data corruption detected" still makes me think this was hardware related, but it's not certain. Re-format the pool and see if it happens again quickly; if it does, go back to the last Unraid release that was stable for you and confirm the problem doesn't still happen.
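      After re-formatting you can check the pool for new corruption with a scrub, assuming it's mounted at /mnt/cache (adjust for your pool name):

        btrfs scrub start /mnt/cache    # verifies checksums of all data and metadata
        btrfs scrub status /mnt/cache   # shows progress and any error counts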
  13. The HBA is bound to VFIO-PCI. Note that if that's an old bind, the IDs can change when you add new hardware, so it needs to be corrected or removed.
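      You can check the HBA's current IDs against the old bind with lspci, for example (the LSI grep and the 01:00.0 address are just examples, use your HBA's values):

        lspci -nn | grep -i lsi    # current PCI address and [vendor:device] IDs of the HBA
        lspci -nnk -s 01:00.0      # 01:00.0 is an example address; shows which driver is currently bound (vfio-pci or the HBA driver)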
  14. The log is spammed by the mover and because of that there's time missing; disable mover logging and post new diags next time.
  15. Unlikely, but do what I mentioned before: re-format the pool and monitor for more errors.
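      To monitor for new errors, assuming the pool is mounted at /mnt/cache:

        btrfs device stats /mnt/cache    # per-device read/write/corruption counters, they should all stay at 0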
  16. Likely the remaining disks are partitioned on sector 2048 instead of Unraid's default sector 64. You can check with fdisk -l /dev/sdX.
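      For example (replace sdX with each of the remaining disks):

        fdisk -l /dev/sdX | grep '^/dev/'    # the Start column should show 64 for an Unraid-created partition, 2048 otherwise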
  17. Yep, and the test failed, so it should be replaced.
  18. SMART is showing some issues; you should run an extended SMART test.
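      With smartctl, assuming the disk is /dev/sdX:

        smartctl -t long /dev/sdX    # starts the extended self-test, it runs in the background on the disk
        smartctl -a /dev/sdX         # when it finishes, check the self-test log and the attributes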
  19. The disk dropped offline; reboot/power cycle to see if it comes back and post the diagnostics.
  20. Please post the diagnostics
  21. There's no SMART report for disk9; check cables/reboot and post new diags.
  22. You can do a direct device replacement if you are on the latest v6.10; it was broken in v6.9. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480419
  23. This is a bad idea: these controllers are known to change the MBR/delete the start sectors of the disks. Assuming parity is valid, you should be able to have Unraid rebuild both array disks, one at a time, and the partitions will be correctly recreated, like below: https://forums.unraid.net/topic/84717-solved-moving-drives-from-non-hba-raid-card-to-hba/?do=findComment&comment=794399 As for the cache device, see if it mounts with UD, in case just the MBR was changed; if the partition was damaged it won't mount, and there's no option to rebuild that one.
  24. Used and free will be correct; total won't be for pools with different-size devices.
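      To get the real figures straight from btrfs, assuming the pool is mounted at /mnt/cache:

        btrfs filesystem usage /mnt/cache    # shows allocated/used space and an estimated free that accounts for the profile and device sizes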