Everything posted by JorgeB

  1. That looks like a board issue. First make sure the slots don't share the same PCIe lanes; if they do, try different slots if available.
  2. Lots of crashing, but the reason isn't easy for me to see; it looks memory related to me.
  3. Assuming the files weren't in use, there are likely duplicate files; enable mover logging, run the mover, and post diags after.
  4. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  5. It's normal that when you add or remove hardware the IDs will change.
  6. The NVMe device is being passed through to the Windows 10 VM; edit the XML and remove that.
  7. Before proceeding, let me start over: the disks are in a different order compared to before, and you also mentioned parity didn't survive, yet one disk was assigned as parity. Is one of these disks supposed to be the old parity?
  8. This is very strange. Difficult for me to say but it's possible.
  9. Disk7 dropped offline; it looks more like a power/connection problem. Power down, check/replace the cables, and post new diags.
  10. That's normal if there's an error with an array or pool disk.
  11. Install the Unassigned Devices plugin and mount the WD via SMB.
  12. In that case, the best way to find the culprit might be to shut down all services and containers, then start enabling them one by one.
  13. I can confirm this issue is fixed on the latest internal test release, so v6.11.4 should be out soon with the fix for this.
  14. As long as the LSI is in IT mode it's plug and play.
  15. Check the filesystem on disk2; assuming all the disks were xfs, click on disks 3 and 4, change the filesystem from auto to xfs, then post new diags after array start.
  16. Without rebooting since the last diags, but with the array stopped, type:
      wipefs -a /dev/nvme1n1p1
      Then start the array and post new diags.
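A hedged sketch of the wipefs step in post 16 above, with a guard so it only runs if the target partition actually exists; `wipefs -a` is destructive, so confirm the device name before running it.

```shell
# Sketch of the wipefs step above (post 16). The device name comes from
# the diagnostics; wipefs -a erases every filesystem signature it finds,
# so double-check the target first.
DEV=/dev/nvme1n1p1
if [ -e "$DEV" ]; then
    wipefs "$DEV"       # no flags: read-only, just lists the signatures found
    wipefs -a "$DEV"    # -a: erase all signatures so the device is clean
fi
```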
  17. One of your pool devices dropped offline in the past:
      Nov 10 23:34:03 Tower kernel: BTRFS info (device nvme1n1p1): bdev /dev/nvme0n1p1 errs: wr 644541091, rd 578222484, flush 1832217, corrupt 0, gen 0
      Start by running a correcting scrub on the pool to see if all the errors are correctable.
  18. Best bet is to use the existing docker support thread:
  19. Please post the complete diagnostics when the load starts increasing.
  20. Install the Tips and Tweaks plugin and set vm.dirty_background_ratio to 1 and vm.dirty_ratio to 2; that's been known to help on servers with a lot of RAM.
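The plugin settings in post 20 above map to two kernel sysctls. This sketch reads the current values (no root needed) and shows, commented out, the equivalent manual change.

```shell
# The plugin settings above correspond to these kernel tunables.
# Reading them requires no privileges:
cat /proc/sys/vm/dirty_background_ratio  # % of RAM dirtied before background writeback starts
cat /proc/sys/vm/dirty_ratio             # % of RAM dirtied before writing processes are forced to flush
# As root, the equivalent of the plugin change would be:
# sysctl -w vm.dirty_background_ratio=1
# sysctl -w vm.dirty_ratio=2
```

Lower values make the kernel flush dirty pages sooner, so less data sits in RAM waiting to be written, which smooths out the long stalls large-RAM servers can see.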
  21. The log is filled with nginx-related errors. Make sure you don't have an Android device with a browser window open on the GUI; that's known to cause issues when the device is sleeping.
  22. Diags are from v6.11.1, but btrfs is detecting data corruption on one of the cache devices:
      Nov 11 11:20:45 Tower kernel: BTRFS info (device nvme2n1p1): bdev /dev/nvme2n1p1 errs: wr 0, rd 0, flush 0, corrupt 9, gen 0
      Start by running a scrub.
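The scrubs suggested in posts 17 and 22 above can also be started from the command line. The mount point below is an assumption (Unraid pools usually mount under /mnt/<poolname>); adjust it to match your setup.

```shell
# Sketch of the scrub step above, assuming the pool is mounted at
# /mnt/cache. A scrub reads every block, verifies checksums, and rewrites
# any bad copies from a good one where redundancy allows.
POOL=/mnt/cache
if mountpoint -q "$POOL"; then
    btrfs scrub start -B "$POOL"   # -B: run in the foreground and print a summary
    btrfs scrub status "$POOL"     # totals, plus any uncorrectable errors
fi
```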
  23. With a SATA port multiplier, either included in the controller or external. I see what's going on: your board includes a 2-port ASMedia controller, so the one you added has a port multiplier, and because of that it should be replaced.