JorgeB (Moderators)
  • Posts: 67,684
  • Days Won: 707
Everything posted by JorgeB

  1. Weird error. Try physically disconnecting the bad SSD (sdd) from the server, boot back up, start the array, and post new diags.
  2. Repeat the first part of the above so Unraid will forget the pool again, but now assign only the good device (sde) and start the array; if it doesn't mount, post new diags.
  3. It's still likely RAM or another hardware issue. Try with just two DIMMs, testing both pairs alone, since that's an easy test to do, and see if it's any better.
  4. Because of the Netapp they appear as SAS, so it's the same as if they really were SAS, and it works.
  5. There is no valid btrfs filesystem on either of those devices, try this:

     btrfs-select-super -s 1 /dev/sdX1

     Do it on both devices, replacing X with the correct letter. If the command completes on one or both, make sure array auto-start is disabled, then reboot and post new diags before starting the array.
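     The recovery command above can be run for both devices in one go; a minimal dry-run sketch, where sdX1 and sdY1 are placeholders for the two partitions:

```shell
# RUN defaults to echo, so the commands are only printed (a dry run);
# set RUN= to actually execute them against the real partitions.
RUN="${RUN:-echo}"
# btrfs-select-super copies backup superblock #1 over the damaged primary
for dev in /dev/sdX1 /dev/sdY1; do
  $RUN btrfs-select-super -s 1 "$dev"
done
```

     Keeping the dry run as the default means nothing touches the disks until the placeholder letters have been replaced and RUN cleared deliberately.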
  6. The disk dropped offline, so there's no SMART report. The first thing to do is power cycle the server to see if the disk comes back online; if yes, post new diags. If the disk is healthy, it might be better to do a new config instead of a rebuild, since parity is suspect.
  7. This is usually the result of bad RAM or some other kernel memory corruption issue.
  8. First you need to fix this:

     Aug 22 17:25:13 Atlas kernel: md: disk10 read error, sector=228586584
     Aug 22 17:25:13 Atlas kernel: md: disk10 write error, sector=228589648
     Aug 22 17:25:13 Atlas kernel: md: disk10 write error, sector=228589656
  9. The disk has a SMART attribute flagged as failing now, so it should have been replaced already, but run an extended SMART test to confirm.
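     The extended test can also be started and checked from the console with smartctl; a dry-run sketch, with sdX standing in for the disk in question:

```shell
RUN="${RUN:-echo}"   # dry run by default; set RUN= to really run it
$RUN smartctl -t long /dev/sdX   # start the extended (long) self-test
# After the reported polling time has passed, re-read the attributes
# and the self-test log to see whether the test completed without error:
$RUN smartctl -a /dev/sdX
```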
  10. Just replace or swap with another and see if the problem goes away/follows the cable.
  11. It should, just assign the previous data disks to data slots.
  12. No point in trying to repair the emulated disks; Unraid can't emulate two disks with single parity. After the new config they should mount; if they don't, post new diags.
  13. Both disks look fine, but there are no complete long SMART tests to confirm.
  14. Not without seeing the diags from when it happened; the previous fs on the disk should also not be an issue.
  15. Those are normal with SAS devices; it's a GUI issue. You can still download the SMART report to check that all looks good.
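     As an alternative to downloading the report, the same information can be pulled on the console; a dry-run sketch, with sdX as a placeholder for the SAS drive:

```shell
RUN="${RUN:-echo}"   # dry run by default; set RUN= to execute
# SAS drives report health via log pages rather than ATA attributes,
# which is why the GUI attribute view looks odd; smartctl -x prints
# all available information, including the error counter logs.
$RUN smartctl -x /dev/sdX
```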
  16. Aug 23 10:35:11 Server kernel: sd 7:0:0:0: Power-on or device reset occurred
      Aug 23 10:35:11 Server kernel: sd 7:0:2:0: Power-on or device reset occurred

      This is happening to multiple devices; it's usually a power/connection problem.
  17. Yes, and that would work, but that's not what you did: you needed to first start the array with only the remaining cache device assigned, then, after the balance completed, stop the array to add the new device.
  18. I'm sorry, but I don't understand why you did it if you knew it wasn't going to work. Diags grabbed before shutting down the server would be most helpful, but if those are not available:
      - Boot the server and stop the array if it's set to auto-start.
      - If the Docker/VM services are using the cache pool, disable them.
      - Unassign all cache devices, then start the array to make Unraid "forget" the current cache config.
      - Stop the array and reassign both original cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device).
      - Start the array, then grab and post the diags.
  19. For a default raid1 pool, Data, Metadata and System should all be using the raid1 profile, instead of single for Metadata and System as in the example above.
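     If Metadata and System did end up on the single profile, they can be converted back with a balance; a dry-run sketch, assuming the pool is mounted at /mnt/cache:

```shell
RUN="${RUN:-echo}"   # dry run by default; set RUN= to execute
# Convert the Metadata and System chunks to raid1; -f is needed because
# btrfs refuses to explicitly operate on system chunks without force.
$RUN btrfs balance start -f -mconvert=raid1 -sconvert=raid1 /mnt/cache
# Verify: Data, Metadata and System should now all show RAID1
$RUN btrfs filesystem df /mnt/cache
```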
  20. Aug 23 07:33:12 simnas shfs: fuse internal error: unable to unhash node: 114943

      This is what caused the problem, though I can't tell you what caused the error; maybe some file was moved or removed at an inconvenient time.
  21. Rebooting should fix it.