JorgeB (Moderators)
  • Posts: 67,737
  • Days Won: 708

Everything posted by JorgeB

  1. If the files were deleted, the best bet is to use a file recovery app like UFS Explorer. The most common causes are the user share copy bug, being hacked, a misconfigured docker/app, etc. That wouldn't help with the actual problem, i.e., the cache being full, but it wouldn't cause any issues with the array data; those actions only affect the cache filesystem.
  2. According to the SMART report the test is still running; let it finish, but based on the log it looks more like a connection/power problem.
  3. Filesystem      Size  Used Avail Use% Mounted on
     /dev/md1        5.5T  143G  5.4T   3% /mnt/disk1
     /dev/md2        5.5T   46G  5.5T   1% /mnt/disk2
     /dev/md3        5.5T   39G  5.5T   1% /mnt/disk3
     /dev/md4        5.5T   39G  5.5T   1% /mnt/disk4
     /dev/sdb1       448G  444G   13M 100% /mnt/cache

     There's some data on disk1, but not much, and the cache is full. Other than that it's probably gone; Unraid never deletes data, but there are some ways you can lose some.
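A full cache pool like the one above can be caught early. A minimal sketch, not from the original post (the mount point and threshold are example values), that warns when a pool's usage crosses a limit:

```shell
#!/bin/sh
# Hypothetical helper: warn when a mount point is nearly full.
# MOUNT and THRESHOLD are illustrative; on Unraid the cache pool
# would typically be /mnt/cache.
MOUNT="/"
THRESHOLD=90

# df --output=pcent prints the Use% column; strip the header,
# spaces, and the % sign to get a bare number.
USE=$(df --output=pcent "$MOUNT" | tail -1 | tr -d ' %')

if [ "$USE" -ge "$THRESHOLD" ]; then
  echo "WARNING: $MOUNT is ${USE}% full"
else
  echo "OK: $MOUNT is ${USE}% full"
fi
```

A script like this could be run from cron (or the User Scripts plugin) so the cache filling up is noticed before docker/VMs start failing.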
  4. You should run another full pre-clear to see if the issues are solved, and if there are any more doubts with the plugin please use the existing plugin support thread:
  5. Doesn't look like it; the cache is completely full, and this will cause problems with docker/VMs. You need to move most of that data to the array, but other than that the array itself is basically empty.
  6. This could be caused by bad RAM, try running memtest.
  7. If that's from the auto parity check it's non-correcting, so you need to run a correcting check.
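The correcting check is normally started from the GUI, but Unraid's array operations can also be driven from the console via mdcmd, where, to my understanding, the CORRECT/NOCORRECT argument decides whether sync errors are written back or only counted. A hedged sketch that only echoes the command rather than running it (mdcmd exists only on an Unraid system):

```shell
#!/bin/sh
# Sketch only: builds the mdcmd invocation for a parity check.
# CORRECT writes parity fixes; NOCORRECT is a read-only check.
# We echo instead of executing, since mdcmd is Unraid-specific.
MODE="CORRECT"
CMD="/usr/local/sbin/mdcmd check $MODE"
echo "Would run: $CMD"
```

On a live server you would run the command itself (or simply start the check from Main with "Write corrections to parity" ticked).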
  8. Enable the syslog server and post that after a crash together with the diagnostics.
  9. You're using a RAID controller; those are not recommended, but if you still want to use it you need to enable JBOD/RAID0 mode for all devices.
  10. Data on disk1 should be all there; on disk4 there could be some data loss, depending on how long the rebuild ran. You should never have attempted a rebuild with another disk disabled and single parity.
  11. Diags show disk4 is invalid while the screenshot shows green, which is strange. In any case, you started to rebuild disk4 with disk1 disabled, which is not possible, but then canceled, so the damage might not be that bad. Since both disks look healthy you can do a new config to re-configure the array; disk1 should mount immediately, disk4 may or may not mount, and it might need a filesystem check.
  12. Again sounds like a board/BIOS problem, try the new BIOS.
  13. You don't need an 18TB disk, a small SAS disk (and a SAS compatible HBA) would do it, not sure if there's an easier way.
  14. Yep. Yes, no need to re-format, just assign as parity when done.
  15. The partition is not damaged, so likely it's missing the correct Unraid signature. The easiest way to solve this, assuming parity is valid, is to rebuild the disk on top; Unraid will recreate the partition. To test, unassign the disk and start the array: the emulated disk should mount. If it does and the contents look correct, rebuild on top by re-assigning the old disk.
  16. This means the partition is not valid for Unraid, post the output of: fdisk -l /dev/sdj
  17. Any hardware can go bad at any time.
  18. By this I meant the controller was not recommended but was not the reason for the lockups; still, like mentioned, it's a good idea to replace it. Hardware-related lockups are most often caused by the PSU, board, or RAM; unfortunately there's no easy way to diagnose other than starting to swap some parts around.
  19. Very unlikely that controller would cause lockups, but it's still a good idea to replace it.