
JorgeB

Moderators
  • Posts: 67,588
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. Correct. Make sure you double-check the assignments; most importantly, don't assign any data disk to a parity slot.
  2. I forgot that most disks use 16 bits to store that value, so the maximum is 65,535 hours before the counter wraps and starts over. The test passed, so the disk is fine for now.
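To illustrate the wraparound, here is a minimal shell sketch (the hour figures are hypothetical, chosen only to show the arithmetic):

```shell
# A 16-bit SMART attribute can only hold values 0..65535,
# so after 65,535 hours the counter wraps back to zero.
true_hours=83019            # hypothetical true total running hours
max=65536                   # number of distinct 16-bit values (2^16)
raw=$(( true_hours % max )) # what the wrapped attribute would report
echo "$raw"                 # prints 17483
```

So a drive past the wrap point reports a much smaller power-on-hours value than its true age.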
  3. I use mostly Supermicro cables and they have been reliable.
  4. Probably not what you want to hear, but IMHO your best bet would be to replace that Marvell controller with one of the recommended models.
  5. Very possibly, but beyond my knowledge; you can likely find some help for that using #btrfs on freenode IRC.
  6. But you unassigned two cache devices and started the array, so they were both wiped:

     Mar 2 18:40:43 Sol emhttpd: shcmd (1232): /sbin/wipefs -a /dev/sde1
     ...
     Mar 2 18:40:43 Sol emhttpd: shcmd (1234): /sbin/wipefs -a /dev/sdf1

     And as mentioned, you can't remove two devices at the same time and keep the pool; it needs to be one device at a time (assuming a raid1 or raid10 pool).
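For reference, a minimal sketch of removing devices one at a time from a mounted raid1/raid10 btrfs pool (device names and mount point are placeholders; on Unraid this is normally done through the GUI rather than at the command line):

```shell
# Remove the first device; btrfs rebalances the remaining copies
# before the command returns.
btrfs device delete /dev/sdf1 /mnt/cache
# Only after the first delete finishes, and only if enough devices
# remain to satisfy the raid profile, remove the next one:
btrfs device delete /dev/sde1 /mnt/cache
```

Unassigning both at once skips this step, which is why the pool was wiped instead.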
  7. Not quite clear what you mean by this, did the pool have data? You can't remove two devices at once and maintain data.
  8. No, this is still a discard operation, just done automatically asynchronously to avoid a performance hit.
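For context, asynchronous discard on btrfs corresponds to the discard=async mount option (available since kernel 5.6); a minimal sketch, assuming the pool is mounted at /mnt/cache (path is a placeholder):

```shell
# Enable asynchronous discard on an already-mounted btrfs pool:
mount -o remount,discard=async /mnt/cache
# Verify the active mount options:
grep /mnt/cache /proc/mounts
```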
  9. It's logged as a disk problem; you should run an extended SMART test.
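A minimal sketch of running an extended test with smartctl (the device name is a placeholder):

```shell
# Start an extended (long) offline self-test; it runs in the
# background and smartctl prints an estimated completion time.
smartctl -t long /dev/sdX
# After it finishes, check the result in the self-test log:
smartctl -l selftest /dev/sdX
```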
  10. No need to update the pools, you just need to update Unraid to v6.9; after that you can disable the TRIM plugin (if all the pools are btrfs), though it won't break anything to leave it installed.
  11. Please post the diagnostics: Tools -> Diagnostics.
  12. Those errors are normal after waking up from sleep, you can ignore.
  13. You don't lose the cache by upgrading, or at least you shouldn't; you lose it by going back to v6.8. If after the upgrade there's no cache pool, post new diags.
  14. Not sure what you mean, you just need to re-assign the devices.
  15. Current power-on hours:

      9 Power_On_Hours  -O--CK  001  001  000  -  83019

      SMART tests done:

      SMART Extended Self-test Log Version: 1 (1 sectors)
      Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline  Completed without error  00%        17432            -
      # 2  Short offline     Completed without error  00%        7574
  16. Go to Tools -> New config, then re-assign all the disks as they should be, check "parity is already valid" before array start, and start the array.
  17. That is in the release notes; it will happen every time you go from v6.9 back to v6.8. Just stop the array and re-assign all cache pool devices; there can't be an "all data on this device will be deleted" warning for any cache device. Then start the array.
  18. You might not be able to repair the filesystem on a failing drive. You can try cloning it with ddrescue and then repairing the filesystem on the clone, but note that ddrescue is not optimized for flash devices.
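A minimal ddrescue sketch (device names and map file are placeholders; the failing disk is the source, and the destination must be the same size or larger):

```shell
# First pass: copy everything readable, skipping bad areas quickly (-n);
# -f is required to write to a block device, and the map file records
# progress so the run can be resumed.
ddrescue -f -n /dev/sdX /dev/sdY rescue.map
# Second pass: retry the bad areas up to 3 times.
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map
# Then run the filesystem repair (e.g. xfs_repair) against the clone.
```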
  19. That's your best option.
  20. Disk help: As long as the disk is empty, you can browse it using the GUI or midnight commander, for example.
  21. Try a different one, or swap drives around and see if the issue remains with the port or drive.
  22. Moved from prereleases since it's still present in v6.9.0.
  23. That is usually a flash drive problem, backup and recreate it.
  24. Disk help: XFS uses a few GBs for metadata.
  25. This suggests there's still a connection problem.