Everything posted by JorgeB

  1. No, but a few sync errors after a parity check would be expected if the errors are bad/frequent enough.
  2. Feb 24 02:08:38 tmedia kernel: BTRFS info (device sdf1): forced readonly
     Corruption on the cache filesystem; the best bet is to back up and re-format.
  3. Cache filesystem is corrupt, you should back up and re-format, but if this keeps happening it could point to a hardware problem. This is the docker image: Unraid can't write to it, just a consequence of the pool filesystem going read-only. See the sketch below for one way to do the backup.
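     A minimal recovery sketch, assuming the pool is mounted at /mnt/cache and /mnt/disk1/cachebackup is used as a temporary target (both paths are examples, adjust to your system):
       # per-device error counters; non-zero write/corruption counts confirm the problem
       btrfs dev stats /mnt/cache
       # copy everything off while the filesystem is still readable
       rsync -av /mnt/cache/ /mnt/disk1/cachebackup/
       # after re-formatting the pool from the GUI, copy the data back
       rsync -av /mnt/disk1/cachebackup/ /mnt/cache/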
  4. Yes, let that one finish; you can then try with just one of them first, but doing a new config should be enough.
  5. Diags are from after rebooting, so we can't see what happened; SMART looks mostly fine for both disks.
  6. Yes, but a new config should fix that, no rebuild needed.
  7. Do you mean those disks didn't mount due to invalid partition layout? I would assume just removing the interposers wouldn't cause an issue other than the name change.
  8. OK, I didn't look at everything initially: you have a second HBA, so there won't be an issue with the disks connected there, nor with those on the onboard SATA controller; the disks on the NetApp will have their names changed, so a new config will be required.
  9. Not only are they not needed, they will prevent some functionality; you'll need to move the disks back a little without them. I've never used NetApp enclosures, but there should be holes in the caddy for that. If the device names change you'll need to go to Tools -> New Config, reassign all the disks in their original positions, check "parity is already valid" and start the array.
  10. On a second look, they might not change, you'll need to try.
  11. Forgot to mention: besides not being able to run SMART tests, it's also not possible to get drive temps and SMART attributes without removing them.
  12. Disks are being seen as SAS:
        === START OF INFORMATION SECTION ===
        Vendor:               ST8000DM
        Transport protocol:   SAS (SPL-3)
      Interposers are the little boards at the back of the caddies (pictured in the original post). If you remove them the disks will be seen as SATA; note that doing this will require a new config in Unraid, since all the device names will change.
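      To check how a disk is being presented, smartctl shows the transport in the identify output (/dev/sdf is just a placeholder):
        # a disk behind an interposer reports a SAS transport protocol
        smartctl -i /dev/sdf | grep -i 'transport'
        # a native SATA attachment reports a SATA version line instead
        smartctl -i /dev/sdf | grep -i 'sata version'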
  13. If you're using SATA disks with interposers, remove them and they will be seen as regular SATA devices, assuming a true HBA is being used.
  14. Disks changed names because they were moved from the RAID controller, and for the same reason they fail to mount in the array, since they don't conform to the partition layout Unraid requires; those are two of the reasons we don't recommend RAID controllers. You have two options. Assuming parity is still valid, you can rebuild one disk at a time so that the correct partition layout is recreated, like this: https://forums.unraid.net/topic/84717-moving-drives-from-non-hba-raid-card-to-hba/ The other option is to mount the disks with UD, since it won't care about the incorrect partition layout, and copy the data to new disks in the array (a sketch of that option below).
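      A sketch of the copy option, assuming the old disk is mounted by UD at /mnt/disks/olddisk and the target is array disk1 (both names are examples):
        # copy data preserving attributes; trailing slashes matter with rsync
        rsync -avX /mnt/disks/olddisk/ /mnt/disk1/
        # quick sanity check that nothing was missed
        diff -rq /mnt/disks/olddisk/ /mnt/disk1/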
  15. There appear to be 3 failing disks with single parity, so some data loss is expected; disk2 is failing for sure, disk1 might also be. Run an extended SMART test on disks 1 and 3 and post new diags once they are done.
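      The extended test can be started from each disk's GUI page, or from the console; /dev/sdX is a placeholder for the device in question:
        # start the extended (long) self-test; it runs in the drive's background
        smartctl -t long /dev/sdX
        # check progress and the final result later
        smartctl -l selftest /dev/sdX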
  16. You should also disable NVMe RAID, in case that's why no SATA controller is being detected; it will also allow the NVMe device to be detected.
  17. RAID is enabled for NVMe devices; curiously, no Intel SATA controller is being detected by Linux, only a two-port ASMedia controller, which I assume corresponds to these ports (pictured in the original post). Any disks connected there should work.
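      A generic way to see which storage controllers Linux actually detects, from the console:
        # PCI class 0106 is AHCI/SATA
        lspci -d ::0106
        # broader view of anything storage related
        lspci | grep -iE 'sata|ahci|raid|nvme'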
  18. With WD disks it's good practice to monitor these attributes:
        ID# ATTRIBUTE_NAME          FLAGS  VALUE WORST THRESH FAIL RAW_VALUE
          1 Raw_Read_Error_Rate     POSR-K 200   200   051    -    997
        200 Multi_Zone_Error_Rate   ---R-- 100   253   000    -    0
      A non-zero raw value is never a good sign, especially if it keeps climbing. This one is from the failed disk; the other ones are still at 0, so they should be good for now.
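      A quick way to watch just those two attributes (/dev/sdX is a placeholder):
        # raw values are the last column; both should stay at 0 on a healthy disk
        smartctl -A /dev/sdX | grep -E 'Raw_Read_Error_Rate|Multi_Zone_Error_Rate'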
  19. Yes. Wait for the extended test results, but if they just did a parity check they should be OK for now.
  20. It's logged as a disk issue and the SMART test failed, so yes, the disk needs to be replaced.
  21. Yes, that's for the internal docker network.