Everything posted by JorgeB

  1. It just means the pool needs to be redundant, for example raid1; it won't work for a single-profile or raid0 pool. The diags are from after rebooting, so assuming nvme1n1 was the other pool member, with the array stopped type:

       btrfs-select-super -s 1 /dev/nvme1n1p1

     Then, without starting the array, post the output of:

       btrfs fi show

     P.S. btrfs is detecting data corruption on multiple devices; unless these are old errors that were never reset, you likely have a RAM problem.
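     A quick way to tell whether those corruption counters are old is to check the per-device error stats; a minimal sketch, assuming the pool is mounted at /mnt/cache (adjust the mount point to yours):

       # show per-device error counters for the mounted pool
       btrfs dev stats /mnt/cache
       # once the cause is fixed (e.g. bad RAM), reset the counters so new errors stand out
       btrfs dev stats -z /mnt/cache

     If the counters keep increasing after a clean memtest, the corruption is current rather than historical.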
  2. If you didn't reboot yet, post the diags; the pool might still be salvageable.
  3. If it's a raid1 pool, just stop the array, unassign the device you want to remove, and start the array.
  4. The problem was before this boot, and the logs start over after every boot; we'd need the diags from after it happens again.
  5. Enable the syslog server and post that after a crash.
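     On Unraid this is done in the GUI under Settings > Syslog Server (optionally mirroring the syslog to the flash drive so it survives a crash). For reference only, a generic rsyslog forwarding rule on a Linux client looks roughly like this (IP and port are examples):

       # /etc/rsyslog.d/remote.conf - forward everything to a remote syslog server via UDP
       *.* @192.168.1.10:514

     The point is the same either way: get the log off the crashing machine so the last messages before the crash are preserved.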
  6. The move to the onboard SATA was mostly because that type of error is logged more clearly there. They do look like a power/connection problem, but since you already replaced the cables and the same happens with different controllers, it suggests a disk problem. Any way you can replace that disk with a different one?
  7. No issues so far, let's see how it goes. Post new diags if it starts erroring out, and don't forget to replace/swap the power cable/slot to rule that out also.
  8. What do you mean by split? The pool was converted to raid1, so now it's redundant; that's expected behavior.
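     If the confusion is about the reported space, a minimal check, assuming the pool is mounted at /mnt/cache:

       # with raid1 every block is stored on two devices, so usable space is roughly half of the raw total
       btrfs fi usage /mnt/cache

     The Data and Metadata lines should show RAID1, and the "Free (estimated)" value already accounts for the duplication.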
  9. Yes. You will be able to add more vdevs to an existing pool.
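     For reference, a sketch of what adding a vdev looks like with plain zfs tools (pool name and device names are examples; on Unraid this would be handled through the GUI):

       # add a second mirror vdev to an existing pool, expanding its capacity
       zpool add tank mirror /dev/sdd /dev/sde
       # confirm the new vdev is part of the pool
       zpool status tank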
  10. That's how it always worked with the array: parity is just bits, it doesn't care about the file systems. https://wiki.unraid.net/Parity#How_parity_works 2 devices for a mirror, 3 minimum for raidz; this is for pools, not the unRAID parity array.
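     To illustrate "parity is just bits", a toy example in bash arithmetic (the byte values are made up): the parity byte is simply the XOR of the data bytes, whatever file system they belong to, and any single missing byte can be rebuilt from the others.

       # three data bytes from three data disks
       D1=0xB5; D2=0x4A; D3=0x3C
       # parity is the bitwise XOR of all data bytes
       P=$(( D1 ^ D2 ^ D3 ))
       printf 'parity     = 0x%02X\n' "$P"
       # if disk 2 is lost, its byte is recovered from parity plus the surviving data
       printf 'rebuilt D2 = 0x%02X\n' $(( P ^ D1 ^ D3 ))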
  11. The same disk failing on different controllers suggests a disk problem; did you use a different power cable (or backplane slot) also? In the last diags the disk is connected to an LSI, and you have two; connect it to the onboard SATA and post new diags.
  12. Yes, then reboot and it should be resolved.
  13. That suggests you have a container mapping to that, creating that path.
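     A quick way to find the culprit, a sketch assuming the path in question is /mnt/user/example (substitute the real one):

       # list every container's volume mappings and grep for the path that keeps reappearing
       docker ps -q | xargs docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} -> {{.Destination}}  {{end}}' | grep -i '/mnt/user/example'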
  14. You can use zfs in the array without that limitation, because every array device is a single filesystem (without raidz, obviously). For pools nothing can be done, since it's a zfs limitation; btrfs is more flexible, though not as robust. Unfortunately you rarely can have everything.
  15. Yeah, if the firmware is up to date, it could be a bad controller.
  16. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  17. It wasn't before, but for security reasons it's been like that since 6.9 or thereabouts.
  18. Can you post a link to where you saw that? You can only remove a device from a redundant pool, the pool is then automatically converted to single (if there's only one device remaining).
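     Roughly what the GUI does under the hood when the last redundancy partner is removed from a btrfs raid1 pool, a sketch assuming the pool is mounted at /mnt/cache and /dev/sdX1 is the device being dropped (hypothetical names):

       # convert data and metadata to the single profile, since only one device will remain
       btrfs balance start -dconvert=single -mconvert=single /mnt/cache
       # then remove the device from the pool
       btrfs device remove /dev/sdX1 /mnt/cache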
  19. Start the array; if the emulated disk6 mounts and the contents look correct, we usually recommend rebuilding. The other option would be a new config and a correcting parity check after; it will take the same time.