Everything posted by JorgeB

  1. Log is spammed with these:

         May 8 03:41:00 Predator kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs ...
         May 8 03:41:30 Predator kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

     Difficult to look for other issues, but the disk appears to be offline; at least there's no SMART report for disk9, so swap cables/slot with another disk and try again.
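     To confirm from the console whether the disk is responding at all, a minimal sketch (assuming /dev/sdX as a placeholder for disk9's actual device node):

         # request the full SMART report; a dropped/offline disk will
         # fail to respond or return no data
         smartctl -a /dev/sdX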
  2. Any particular reason you're running 6.8.0-rc1? Update to the latest stable release and see if it makes a difference; if not, post new diags.
  3. Yes, you can't re-add existing devices like that; there would be an "any data on these devices will be lost" warning next to them, and starting the array wipes them. A 4-disk pool can't mount with 2 missing devices; the correct way of fixing the pool would be what I wrote above.
  4. I moved this here because you're virtualizing Unraid and that's not officially supported. It would probably work better if you pass through the controller to Unraid, but I'll leave it to anyone with VMware experience to give you more help.
  5. You don't need to do a new config; just unassign parity, start/stop the array, then re-assign parity.
  6. If there were read errors, yes, it should be replaced; you can also run an extended SMART test to confirm.
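     From the console the extended test would look something like this (a sketch; /dev/sdX is a placeholder for the disk in question):

         # start an extended (long) self-test; it runs in the background
         smartctl -t long /dev/sdX
         # once it finishes, read back the result in the self-test log
         smartctl -a /dev/sdX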
  7. You need to have proper cooling; how would you do a parity check or rebuild otherwise?
  8. Any cleared or pre-cleared device needs to be formatted before use.
  9. The UPS is reporting that it's on battery; AFAIK there's nothing you can do on the Unraid side.
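     If you want to see exactly what the UPS is reporting, and assuming the built-in apcupsd support is what's driving it, you can query it from the console:

         # dump the UPS status variables; STATUS shows ONLINE vs ONBATT
         apcaccess status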
  10. Please post the diagnostics: Tools -> Diagnostics
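      If the GUI isn't reachable, the same archive can be generated from the console (this writes a zip to the logs folder on the flash drive):

          diagnostics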
  11. You'll destroy the pool doing that. For the future, the correct procedure would be:
      • Stop the array; if Docker/VM services are using the cache pool, disable them.
      • Unassign all cache devices.
      • Start the array to make Unraid "forget" the current cache config.
      • Stop the array and reassign all cache devices.
      • Re-enable Docker/VMs if needed, then start the array.

      Alternatively you can also do a new config and reassign all devices, then check "parity is already valid" before starting the array.
  12. Re-sync parity; you can also replace/swap cables just to rule them out if it happens again.
  13. Disk is OK; replace/swap cables and try again, and post full diags if there are still issues.
  14. Please post the diagnostics: Tools -> Diagnostics
  15. I would recommend connecting the SSDs to the onboard SATA ports, but change the controller to AHCI, it's currently set to IDE. You can also check this out for better pool monitoring.
  16. Disk3 dropped offline so there's no SMART report (there also wasn't one in the original diags); check the cables to see if it comes back online and post new diags. Also, you're still using SASLP controllers; those can go wrong at any time and cause multiple-disk issues.
  17. Yes it does, please post the full diags if you haven't rebooted yet.
  18. You'd need to run chkdsk on the unassigned device itself, not from inside the VM: connect the device to a Windows desktop, or boot your server with a Win10 install flash drive and use the command line.
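      From an elevated command prompt it would be along these lines (X: is a placeholder for whatever drive letter Windows assigns to the device):

          rem scan the volume and fix file-system errors
          chkdsk X: /f
          rem or additionally locate bad sectors and recover readable data
          chkdsk X: /r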
  19. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988
  20. Also, I would recommend using a Linux-native fs for the unassigned device that contains the vdisk; any special reason you're using NTFS?
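      If you do move it to a Linux-native file system, XFS is the usual choice on Unraid; a sketch from the console (sdX1 is a placeholder, and this wipes the partition, so move the vdisk off first):

          # format the unassigned partition as XFS -- destroys existing data
          mkfs.xfs /dev/sdX1

      The Unassigned Devices plugin can do the same from the GUI.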
  21. Disk looks fine but you should run a long test to confirm.
  22. You should have saved the diags before rebooting, or we can't see what happened. For the current situation, first fix the file system on disk3; if that's successful, rebuild both disks, which you can do at the same time since you have dual parity.
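      To check the file system, start the array in maintenance mode and run the check from disk3's GUI page, or from the console, a sketch assuming disk3 is XFS (use the md device so parity stays in sync; on recent releases it may appear as /dev/md3p1):

          # check and repair the file system on disk3 in maintenance mode
          xfs_repair -v /dev/md3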