Everything posted by steve1977

  1. Got it, thanks! On a positive note, if this setup works fine, could I add the Hyper M.2 card in the first slot and get the full x16 (i.e., four NVMe drives)? Or would that be an issue, as the on-board NVMe may also require 4 lanes, adding up to 29 instead of the 28 available lanes? My rough count is in the sketch below. When you say severely bottlenecked, what are the implications? Are the array and parity impacted? Would it be unstable or just a bit slower? Lastly, I saw the new CPUs for X299 will feature 48 instead of 44/28 lanes. Any thoughts on whether this would require a new mobo as well? I am thinking of upgrading and wondering whether to wait for Cascade Lake-X with 48 lanes or just go with the current 44-lane CPUs.
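A minimal sketch of my lane math, assuming the GPU and HBA drop to x8/x8 and each NVMe drive really takes 4 CPU lanes (not confirmed; the on-board M.2 slots may hang off the chipset instead):

```sh
# Hypothetical lane budget on the 28-lane i7-7800X; the slot widths are my
# assumptions, not confirmed numbers.
echo $(( 4 * 4 ))        # Hyper M.2 with four NVMe drives = 16 lanes
echo $(( 16 + 8 + 8 ))   # Hyper x16 + GPU x8 + HBA x8 = 32, already over 28
```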
  2. Just ran it, but the output is not fully clear to me. Please see the results: https://pastebin.com/s8N6q3As
  3. I have now moved my HBA to slot 3. The GPU stays in slot 2. Slot 1 is empty for now. Everything seems to be working. Does this mean I am now running at x16/x8 or at x8/x1? Do the HBA and GPU benefit from x16 over x8? I did some googling and it seems the difference is negligible for GPUs; I didn't find any info for HBAs. I am considering upgrading the CPU to get all three cards working. I could upgrade to a CPU with 44 lanes, or wait a few months, as Cascade Lake seems to feature 48 lanes. With 48 lanes, would I be able to run all 3 cards at x16?
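In case it helps: I believe the actually negotiated width can be checked from the console with lspci. The bus ID below is a placeholder; take the real one from the first command:

```sh
# Find the bus IDs of the HBA and the GPU, then query their PCIe link status.
# LnkCap = maximum width the device supports, LnkSta = currently negotiated width.
lspci | grep -Ei 'lsi|nvidia'
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```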
  4. Thanks. Still a bit cryptic to me. I assume the slots are counted starting from the CPU? If so, my raid card is currently in the first, my Nvidia in the second, and the third one is empty. In order to use the Hyper card with 4 M.2 disks, I'd need to plug it into the first slot? If so, could the Nvidia or the raid card function in the x1 slot? Even if I upgrade the CPU, I could not use the Hyper card in the third slot (only x8)? Can I change which slot uses how many lanes, i.e., from 16/16/8 to 16/8/16? Thanks!!!
  5. Thanks. How did you arrive at the figure of 4 spare lanes?
  6. Agree, this reads like a bug. I am facing the same issue: I no longer have the USB device and my VM is unusable. Is there a fix?
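The only workaround I can think of (untested; the VM name below is a placeholder) is removing the stale USB entry from the VM definition by hand:

```sh
# Open the VM definition in an editor and delete the stale USB passthrough
# entry, i.e. the whole <hostdev mode='subsystem' type='usb'>...</hostdev>
# block that references the vendor/product ID of the device that is gone.
virsh edit Windows10   # "Windows10" = your VM's name, see "virsh list --all"
```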
  7. I'm considering buying the card. Am I limited by the number of lanes? My setup is described here: https://forums.unraid.net/topic/83122-running-out-of-lanes/?tab=comments#comment-770691
  8. Thanks for your help. Let me report back. I made some changes to the config, but nothing worked. I have now installed a dedicated fan for the raid card and things seem to work, so it may indeed have been a cooling issue. I'll keep monitoring, but this may have been solved.
  9. I am not clear on how the number of mobo/CPU lanes impacts performance, but I hope to find some help from the community. My setup: X299-A with an i7 7800X. I have two M.2 disks on board, an M1015 raid controller in the first PCIe slot, and an Nvidia 1060 Ti in the second. I am now thinking of adding an Asus Hyper M.2 X16 Card V2 to the third PCIe slot for yet another four M.2 disks. I read different messages about the feasibility and potential limitations due to the number of lanes, but didn't understand much. Maybe someone can shed some light on it and provide me with some advice? My naive math is in the sketch below. Thanks a lot for your help!
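For what it's worth, my naive worst case, assuming every card negotiated its full width (the widths are my guesses, not datasheet numbers):

```sh
# The i7-7800X only exposes 28 CPU lanes, so this clearly cannot all fit.
echo $(( 8 + 16 + 16 ))   # M1015 x8 + GPU x16 + Hyper M.2 x16 = 40 lanes wanted
```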
  10. Trying to connect an external disk to a VM, but not successful:

error: Failed to attach device from /tmp/libvirthotplugusb.xml
error: internal error: unable to execute QEMU command 'device_add': failed to open host usb device 2:2

Any thoughts?
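What I tried from the command line to narrow it down (the VM name and the vendor/product IDs below are placeholders):

```sh
# "2:2" in the error should be bus 002, device 002; confirm with lsusb and
# note the disk's vendor:product ID.
lsusb

# Hand-written hostdev snippet; replace the IDs with the ones lsusb reports.
cat > /tmp/usbdisk.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
EOF

# Try the hotplug directly, bypassing the GUI.
virsh attach-device MyVM /tmp/usbdisk.xml --live
```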
  11. Thanks. The issue does not seem to be related to the plugin per se. Below is what I see when connecting it in the VM settings:

internal error: qemu unexpectedly closed the monitor: 2019-08-31T16:40:56.431383Z qemu-system-x86_64: -device pcie-pci-bridge,id=pci.5,bus=pci.1,addr=0x0: Bus 'pci.1' not found
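To see whether the pci.1 bus that the bridge expects actually exists in the definition (VM name again a placeholder):

```sh
# List the PCI controllers defined for the VM; the bridge above needs a
# controller with index 1 to be present.
virsh dumpxml MyVM | grep -A1 "controller type='pci'"
```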
  12. Trying to connect an external disk to a VM, but not successful:

error: Failed to attach device from /tmp/libvirthotplugusb.xml
error: internal error: unable to execute QEMU command 'device_add': failed to open host usb device 2:2

Any thoughts?
  13. Ok, I changed some cabling and which controller the drives connect to. I am rebuilding the parity. I now see errors on disk 13, but no disk disabled (yet). Rebuild in progress. Shall I stop the array or see whether it completes? tower-diagnostics-20190830-1111.zip
  14. Failed again. Any new insights from the log on which disk is failing this time together with disk 12? tower-diagnostics-20190830-0444.zip
  15. Oh... I am getting paranoid about seeing errors in the GUI... I checked it again and indeed it is not showing errors now. No clue what I missed. Let's wait and see whether things are OK now. I have now moved four disks to a second raid card. I had planned for those four to come from the on-board controller, but I may have made a mistake and moved them from one raid card to the other instead.
  16. The parity and disk 13 errors must somehow be related. It's always the same sequence: one disk triggers errors (the Unraid GUI is fine), followed by disk 12 showing errors (and disk 12 being disabled). It seems though that I did something wrong, as I'd wanted to move parity away from the on-board controller, which may have a faulty port. I must have done this wrong. Let me check and correct it.
  17. Seems this didn't lead to any improvement. Disk 12 yet again failed after moving some disks to a second raid card. Which disks do the errors originate from this time? I can double-check whether those have been moved to the second raid card. tower-diagnostics-20190827-1422.zip
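In parallel I am watching the syslog live to catch which ata link errors out first (the grep pattern is just my guess at the relevant lines):

```sh
# Follow the syslog during the rebuild; ata errors usually name the link
# (ataN) that maps to the failing port or cable.
tail -f /var/log/syslog | grep -iE 'ata[0-9]+|error|exception'
```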
  18. Disk 12 is the real issue. Disk 13 never disables / disappears / ejects. However, the errors from disk 13 seem to trigger disk 12 to disable. It is 100% consistent / replicable: disk 12 always disables, while disk 13 only shows errors in the diagnostics (not the GUI). After switching ports, parity has issues in the log (not in the GUI), but yet again it leads to disk 12 being disabled. I still believe it is related to the raid card, either overheating or just running with more HD capacity than the card can handle in a stable manner. Let me explore further to rule out all options. I am now trying to add a second raid card and then move four disks from the on-board controller over to it. This would address the concern of a potentially faulty port.
  19. Second thought: the parity disk shows errors first, and it is not on the raid card but on the on-board controller. Besides cables, what could cause the issue? Suspicious that it is always the same HD failing (now parity). Any idea what the reason could be, besides heat?
  20. Thanks. This implies that the power connection is fine, but the sata cable is the troublemaker. I had changed the cable, so this should not be the issue. Probably an overheating issue then?
  21. I have now switched the power cable (but not the sata one). Curious what is making trouble now... Disk 13 or the parity? tower-diagnostics-20190827-1326.zip
  22. Thanks. So the issue is not disk 13. We are getting closer to narrowing it down. Let me switch only the power cable; this way, we will know whether the issue is the power or the sata cable. It could be that 12 and 13 share the same power cable. I need to check this later. The PSU per se is highly unlikely to be the issue: I changed it twice before, and the current one is quite new, generously sized, and quite a good one.
  23. What about now? tower-diagnostics-20190826-2345.zip
  24. Here we go with the diagnostics log after switching both the sata and the power cable. Let's see whether disk 13 is still causing trouble. If so, it must be the disk itself rather than the controller or cables. tower-diagnostics-20190826-1518.zip
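To help separate cable trouble from disk trouble, I am also checking the SMART counters (the device name is a placeholder; rising CRC errors usually point at the cable or link, reallocated/pending sectors at the disk itself):

```sh
# UDMA_CRC_Error_Count rising = bad sata cable/connector/link;
# Reallocated_Sector_Ct or Current_Pending_Sector rising = the disk itself.
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending|UDMA_CRC'
```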