Everything posted by JorgeB

  1. Run it again without -n; if it asks for it, use -L.
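     A minimal sketch, assuming this is xfs_repair on an array disk in maintenance mode (md1 is a placeholder for the actual disk number):
         xfs_repair -v /dev/md1     # without -n, so repairs are actually written
         xfs_repair -vL /dev/md1    # only if it refuses to run and asks for -L to zero the log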
  2. You can rebuild on top of the old disk; just make sure the emulated disk is mounting and the contents look correct before doing it.
  3. If the pool is redundant, you can remove one cache device and let the pool balance, then add the other one; you just can't do a direct replacement.
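     To confirm the pool really is redundant before removing anything, something like this (assuming a btrfs pool mounted at /mnt/cache):
         btrfs filesystem df /mnt/cache    # Data and Metadata should show a redundant profile like RAID1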
  4. The flash drive dropped offline; make sure you're using a USB 2.0 port.
  5. Older Unraid releases will show wrong stats when using different size devices, but still only 250GB will be usable with the default raid1 profile. Yes, if you're still on v6.8.x or older; cache replacement is broken on v6.9.x and newer, though you can still remove one device, then add the other one.
  6. You can, but it's unlikely to help, since only a few disks had an issue; see if they have something in common, like a power cable/splitter.
  7. Looks like a controller problem:
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: outstanding cmd: midlevel-0
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: outstanding cmd: lowlevel-0
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: outstanding cmd: error handler-48
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: outstanding cmd: firmware-33
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: outstanding cmd: kernel-0
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: Controller reset type is 3
         Oct 21 12:48:28 tobor-server kernel: aacraid 0000:01:00.0: Issuing IOP reset
         Oct 21 12:49:49 tobor-server kernel: aacraid 0000:01:00.0: IOP reset failed
         Oct 21 12:49:49 tobor-server kernel: aacraid 0000:01:00.0: ARC Reset attempt failed
     If possible, use one of the recommended controllers, like an LSI HBA. You also have filesystem corruption on multiple disks.
  8. See if you can get the diags, or at least the syslog, using the console; if not, you'll need to force a reboot.
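     From the local console, something like this (assuming a release where the built-in diagnostics command is available; the syslog copy is a fallback):
         diagnostics                          # writes a zip to /boot/logs on the flash drive
         cp /var/log/syslog /boot/syslog.txt  # fallback if diagnostics won't complete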
  9. If it's still crashing in maintenance mode then there are other issues, and the filesystem corruption is a result of the crashing, not the cause.
  10. So then you'd need to set the VM timeout to something like 5 minutes and the general timeout to 6 minutes, or just shut down the VMs manually before rebooting/stopping the array; that's what I do.
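      A minimal sketch of the manual route, using the libvirt tools that ship with Unraid and a hypothetical VM named Win10:
          virsh list --all        # list defined VMs and their current state
          virsh shutdown Win10    # request a clean ACPI shutdown of the guest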
  11. You should always start a new thread, but since you're here: there were simultaneous errors on 5 disks, disks 1 through 5; disk2 got disabled because it was the first one to get a write error, but it could have happened to any of the 5. This is usually a power/connection problem.
  12. Sorry, I misread; yes, that's the latest one, and it's whatever is on the LSI site.
  13. This is a SAS1 backplane, half the bandwidth of SAS2, and it will also likely have issues with drives larger than 2.2TB. Fill up the front backplane first, then connect both cables from one HBA to the back backplane to check if it supports dual link; you can confirm with the output of:
          cat /sys/class/sas_host/host#/device/port-#\:0/sas_port/port-#\:0/num_phys
      Replace all the #s with the correct host number; if you don't know it, post the diags. If the output is 4 it means single link, 8 means dual link.
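      For example, with a hypothetical host number of 1 substituted in:
          cat /sys/class/sas_host/host1/device/port-1\:0/sas_port/port-1\:0/num_phys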
  14. The Unraid driver is constantly crashing during the parity check; this happens with some hardware. Best bet is to try v6.10-rc1, the newer kernel might help.
  15. Doesn't look like a disk problem; you can run an extended SMART test to confirm.
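      A minimal sketch, assuming /dev/sdb is a placeholder for the disk in question:
          smartctl -t long /dev/sdb    # start the extended self-test
          smartctl -a /dev/sdb         # check progress and the result once it finishes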
  16. You need to increase the general timeout; they can't be the same or the second one will kick in. Still, it looks like 180 seconds wasn't enough for the VMs to shut down; do they shut down if you stop the array? If yes, post new diags after doing that.