JorgeB

Moderators
  • Posts: 67,411
  • Joined
  • Last visited
  • Days Won: 705

Everything posted by JorgeB

  1. Because Tom did it on purpose. I also don't agree it's the best option, and I complained about it.
  2. I meant when you were getting the "too many wrong" disks error, to see what you were doing wrong in the parity swap procedure. No, the problem is that some backplanes still have 3.3V on the SATA ports despite being powered by Molex connectors (which don't carry 3.3V).
  3. I wouldn't call it a bug since @limetech did it on purpose, but yeah, making a change so that behavior only takes place when "auto" is selected would IMHO be much better, especially since the "auto" setting has existed for so long and currently does nothing.
  4. Just to be clear, the array auto-starting with a missing array device was fixed some time ago, but it will still auto-start with a missing cache pool device.
  5. Errors start as soon as the disk is initialized. It could be a connection problem (replace/swap the cables) or it could be the controller; Marvell controllers are not recommended for Unraid.
  6. Correct, any existing data on the parity device will be deleted, data devices won't be touched.
  7. Yes, I also remember asking for that in one of the release threads. IIRC it would need to be changed by LT since it's an emhttp function: the array doesn't auto-start if an array device is missing, but it does for a missing pool device. If there's a feature request for that, it might be a good idea to bump it.
  8. Yes, if it's a new config; if it asks for a new encryption passphrase, just enter the current one.
  9. But you can't add the drives to an existing server; it requires a new server or a new config for an existing array.
  10. We'd need diags to see what the problem was.
  11. No, it was just at risk; I see no pool-related errors.
  12. Like suspected, this is a smartctl issue; there was another user with a similar issue before. It's not reporting any temp for those drives:

      === START OF READ SMART DATA SECTION ===
      SMART Health Status: OK
      Current Drive Temperature: 0 C
      Drive Trip Temperature: 0 C

      v6.8.3 uses smartctl v7.1 and v6.8.2 uses v7.0, so you'll need to wait for a smartctl fix, or if it takes a while LT can downgrade back to v7.0 for now on upcoming release(s).
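As a quick illustration (not part of the original post), the temperature line can be pulled out of smartctl-style text with awk. The sketch below feeds in the quoted output instead of running a real `smartctl -a /dev/sdX` call, which would need actual hardware:

```shell
# Feed the (buggy) output quoted above instead of querying a real drive.
smart_output='=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Current Drive Temperature: 0 C
Drive Trip Temperature: 0 C'

# Extract the value after "Current Drive Temperature: ".
temp=$(printf '%s\n' "$smart_output" | awk -F': ' '/Current Drive Temperature/ {print $2}')
echo "$temp"   # prints "0 C" - a zero reading means smartctl failed to read the sensor
```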
  13. Please post the diagnostics, but this looks like a smartctl issue, not an Unraid one. Also please change the priority from urgent; see the definitions on the right side.
  14. I did a quick test and a direct replacement without the old drive should work, but you need to disable array auto-start, or the array will start automatically with the single cache device (and convert to the single profile) before you can add the new one. Still, this is not well tested, so proceed with care.
  15. It's possible you could do a direct replacement without the old drive, but IIRC this could sometimes not work correctly, and I can't test at the moment.
  16. Like mentioned, you can, but for a direct replacement the old one needs to be connected; if you start without one of them the pool will be converted to the single profile, and you can then add another device.
  17. To make a direct replacement you'd need both old and new devices connected, which might not be an option with M.2 devices. You can still remove one device, convert the pool to single, then add another device and reconvert to raid1, but before starting make sure your pool is redundant because of this bug; if you want, post diags to confirm.
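For reference, the remove/convert/add/reconvert sequence described above maps onto btrfs commands roughly as sketched below. The mount point and device names are placeholder assumptions, and the script only prints the plan rather than touching a filesystem, since the real commands need a live pool:

```shell
# Placeholder names - adjust to your pool's mount point and devices.
POOL=/mnt/cache
OLD=/dev/sdx1
NEW=/dev/sdy1

# Build the planned steps as text instead of running them on a live pool.
# -f is required on the first balance because it reduces metadata redundancy.
plan="btrfs balance start -f -dconvert=single -mconvert=single $POOL
btrfs device remove $OLD $POOL
btrfs device add $NEW $POOL
btrfs balance start -dconvert=raid1 -mconvert=raid1 $POOL"

printf '%s\n' "$plan"
```

Only reconvert to raid1 once the new device has been added, or the balance will fail for lack of a second device.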
  18. Try running in safe mode for a week or two.
  19. Because that feature was only introduced in v6.8.0.
  20. SMART test failed, disk needs to be replaced.
  21. Syslog is filled with checksum errors on cache; this means data is corrupt and btrfs will give an I/O error when trying to copy/move those files, so you'll need to delete and replace them. It's a good idea to run memtest; bad RAM is the #1 source of data corruption.
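To gauge how many files are affected, the syslog can be grepped for the checksum failures. A small sketch using a made-up two-line excerpt in the typical btrfs log format (a real syslog would name your actual device and inodes):

```shell
# Made-up excerpt in the typical btrfs checksum-error format.
log='BTRFS warning (device sdc1): csum failed root 5 ino 257 off 4096 csum 0x8941f998 expected csum 0x3a1b2c4d mirror 1
BTRFS warning (device sdc1): csum failed root 5 ino 258 off 0 csum 0x12345678 expected csum 0x9abcdef0 mirror 1'

# Count distinct inodes with failed checksums; on a live pool,
# `btrfs inspect-internal inode-resolve <ino> <mountpoint>` maps an
# inode number back to a file path so you know what to delete.
count=$(printf '%s\n' "$log" | grep -o 'ino [0-9]*' | sort -u | grep -c 'ino')
echo "$count"   # 2 corrupt inodes in this excerpt
```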
  22. Because of this bug your cache pool isn't redundant; see the instructions there to fix it.
  23. You can use Unraid to replace one disk at a time, or two if you have dual parity; the parity disk(s) need to be upgraded first, obviously.
  24. It's fine to use SSDs in the array, at least most of them; I remember an old Kingston model that would give a few sync errors after a power cycle, but it's the only one I know about. The main issue currently is that SSDs in the array can't be trimmed, so they might lose some write performance/endurance. I also recommend using a faster SSD for parity. I've been using a small SSD array for some time and it's working well so far; I don't notice any performance issues even without trim. I use regular SSDs for the data devices and an NVMe device for parity.