
JorgeB

Moderators · 67,736 posts · 708 days won

Everything posted by JorgeB

  1. According to the SMART report the test is still running; let it finish, but based on the log this looks more like a connection/power problem.
  2. Filesystem      Size  Used Avail Use% Mounted on
     /dev/md1        5.5T  143G  5.4T   3% /mnt/disk1
     /dev/md2        5.5T   46G  5.5T   1% /mnt/disk2
     /dev/md3        5.5T   39G  5.5T   1% /mnt/disk3
     /dev/md4        5.5T   39G  5.5T   1% /mnt/disk4
     /dev/sdb1       448G  444G   13M 100% /mnt/cache
     There's some data on disk1, but not much, and the cache is full. Other than that it's probably gone; Unraid never deletes data, but there are some ways you can lose some.
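A quick way to reproduce that Use% figure programmatically, as a minimal sketch: on an Unraid box you would point it at the pool mount (e.g. /mnt/cache); "/" is used below only so the example runs anywhere.

```python
import shutil

def usage_percent(path: str) -> int:
    """Approximate the Use% column df reports: used space over total."""
    total, used, _free = shutil.disk_usage(path)
    return round(100 * used / total)

# "/" stands in for /mnt/cache so the sketch is runnable outside Unraid.
pct = usage_percent("/")
if pct >= 99:
    print(f"pool is effectively full ({pct}%), move data to the array")
else:
    print(f"pool at {pct}% used")
```

df rounds up rather than to nearest, so the two figures can differ by one percent; for spotting a 100%-full cache that doesn't matter.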
  3. You should run another full pre-clear to see if the issues are solved, and if there are any more doubts with the plugin please use the existing plugin support thread:
  4. Doesn't look like it; the cache is completely full, which would cause problems with Docker/VMs. You need to move most of that data to the array, but other than that the array itself is basically empty.
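Unraid's built-in mover normally handles relocating cached shares to the array; the sketch below only illustrates the idea (move every file to the same relative path under a different root). The paths in the demo are throwaway temp directories; on a real server they would be something like /mnt/cache/share and /mnt/disk1/share (hypothetical names).

```python
import os
import shutil
import tempfile

def move_tree(src_root: str, dst_root: str) -> int:
    """Move every file under src_root to the same relative path under dst_root."""
    moved = 0
    for dirpath, _dirs, files in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target = os.path.join(dst_root, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            shutil.move(os.path.join(dirpath, name), os.path.join(target, name))
            moved += 1
    return moved

# Demo with throwaway directories standing in for the cache pool and an array disk.
cache = tempfile.mkdtemp()
array = tempfile.mkdtemp()
os.makedirs(os.path.join(cache, "share"))
with open(os.path.join(cache, "share", "file.bin"), "wb") as f:
    f.write(b"\0" * 1024)
print(move_tree(cache, array))  # prints 1: one file moved
```

In practice you would just set the affected shares to move to the array and run the mover from the GUI; hand-moving is only needed when the pool is too full for normal operation.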
  5. This could be caused by bad RAM, try running memtest.
  6. If that's from the automatic parity check it's non-correcting, so you need to run a correcting check.
  7. Enable the syslog server and post that after a crash together with the diagnostics.
  8. You're using a RAID controller; those are not recommended, but if you still want to use it you need to enable JBOD/RAID0 mode for all devices.
  9. Data on disk1 should be all there; on disk4 there could be some data loss, depending on how long the rebuild ran. You should never have attempted a rebuild with another disk disabled and single parity.
  10. Diags show disk4 is invalid while the screenshot shows green, which is strange. In any case you started to rebuild disk4 with disk1 disabled, which is not possible, but then canceled, so the damage might not be that bad. Since both disks look healthy you can do a new config to re-configure the array; disk1 should mount immediately, disk4 may or may not mount and might need a filesystem check.
  11. Again sounds like a board/BIOS problem, try the new BIOS.
  12. You don't need an 18TB disk; a small SAS disk (and a SAS-compatible HBA) would do it. Not sure if there's an easier way.
  13. Yep. Yes, no need to re-format, just assign as parity when done.
  14. The partition is not damaged, so it's likely missing the correct Unraid signature. The easiest way to solve this, assuming parity is valid, is to rebuild the disk on top; Unraid will recreate the partition. To test, unassign the disk and start the array: the emulated disk should mount. If it does and the contents look correct, rebuild on top by re-assigning the old disk.
  15. This means the partition is not valid for Unraid. Post the output of: fdisk -l /dev/sdj
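What that command reveals is where each partition starts, which is the usual tell for an invalid Unraid partition. The sketch below parses fdisk -l text and extracts the start sector per partition; the sample output is hypothetical (made up for illustration, including the sizes), and the expectation that a 4K-aligned Unraid data partition starts at sector 64 is an assumption based on Unraid's default partition format.

```python
import re

# Hypothetical fdisk -l output for illustration only; a healthy 4K-aligned
# Unraid data disk would typically show one partition starting at sector 64.
SAMPLE = """\
Disk /dev/sdj: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Device     Start         End     Sectors  Size Type
/dev/sdj1     64 11721045134 11721045071  5.5T Linux filesystem
"""

def partition_starts(fdisk_output: str) -> dict:
    """Map each partition device to its start sector from fdisk -l text."""
    starts = {}
    for line in fdisk_output.splitlines():
        m = re.match(r"(/dev/\S+)\s+(\d+)\s", line)
        if m:
            starts[m.group(1)] = int(m.group(2))
    return starts

print(partition_starts(SAMPLE))  # {'/dev/sdj1': 64}
```

In a support thread you would just paste the raw fdisk output; a start sector other than the expected one, or a missing/extra partition, is what makes the disk unmountable for Unraid.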
  16. Any hardware can go bad at any time.
  17. By this I meant the controller was not recommended, not that it was the reason for the lockups, but as mentioned it's still a good idea to replace it. Hardware-related lockups are most often caused by the PSU, board, or RAM; unfortunately there's no easy way to diagnose other than swapping some parts around.
  18. Very unlikely that controller would cause lockups, but it's still a good idea to replace it.
  19. You can do a new config to re-enable disk4 and rebuild parity, but since disk10 appears to be failing there might be read errors, making parity not 100% valid.