Jason Z

Members
  • Posts: 12

  1. So I recently bought StarTech's PCIe 4-port USB 3.0 card with four dedicated controllers (part # PEXUSB3S44V) and have been able to pass the controllers through to a VM; however, inside the VM the USB hubs run into a Code 10 error, and if anything is plugged into the USB ports when booting unRAID I get errors. Unfortunately I seem not to be the first one with this problem on this card, and no one has a solution. I have done the usual: fresh images, fresh VM, updated drivers, updated firmware, etc. So I was wondering if anyone has a 4-controller PCIe USB 3.0 card that they are using and know works. I know the Sonnet Allegro Pro works, but that has been discontinued and replaced with a two-controller solution, and I need four separate controllers. Thanks
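Whether each of a card's controllers can be passed through cleanly depends on how they land in IOMMU groups on the host. As a rough sketch (assuming a Linux host with the IOMMU enabled; on other machines the sysfs path simply won't exist), you can enumerate the groups like this:

```python
#!/usr/bin/env python3
# Sketch: list IOMMU groups and the PCI devices in each, to check whether a
# multi-controller USB card (like the PEXUSB3S44V) exposes its controllers in
# separate groups, each individually passable to a VM. Assumes a Linux host
# with the IOMMU enabled; otherwise the sysfs directory is absent.
from pathlib import Path

def iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict:
    """Map IOMMU group number -> sorted list of PCI addresses in that group."""
    groups = {}
    base = Path(root)
    if not base.is_dir():  # IOMMU disabled, or not a Linux host
        return groups
    for group_dir in base.iterdir():
        devices = sorted(d.name for d in (group_dir / "devices").iterdir())
        groups[int(group_dir.name)] = devices
    return groups

if __name__ == "__main__":
    for num, devs in sorted(iommu_groups().items()):
        print(f"group {num}: {', '.join(devs)}")
```

If all four controllers show up in one group, they can only be assigned to a single VM together, which is one common cause of trouble with multi-controller cards.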
  2. What I did find was that the number of PCIe slots isn't strictly limited by the CPU's fixed lane count: the motherboard I mentioned earlier (https://www.asus.com/us/Motherboards/WS-X299-SAGE/) has two PLX chips, which effectively multiply the available PCIe lanes so you can run something like four x16 slots on a single 44-lane CPU. And even with boards like the one you listed, you still have a physical space requirement for the GPUs, and unless you go to single-slot-width GPUs you won't be able to fit more.
  3. I have watched that video, as well as Linus's. They don't even mention the motherboard. Thanks for your help; I'm sure I'll figure this PCIe lane thing out in a bit. I'm not in a huge rush right now.
  4. So I would assume this would be too close: GPUs at 1x16 + 3x8, plus the 4 from the USB controllers, so 44 out of 44 lanes would be used. I read that CPUs have their own lanes that aren't PCIe for onboard SATA, so that should be fine. What else are PCIe lanes used for? Also, since I plan to also boot Windows from a drive, say I have an M.2 SSD that isn't used in any of the VMs and only when I boot directly to Windows: would that be taking lanes? And the opposite: would the idle GPUs be taking lanes on a straight Windows boot? I honestly, for the life of me, am having a hard time finding good documentation on lane usage...
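The lane math above can be written out as a quick sanity check. The device widths below are the ones from the thread (1x16 + 3x8 GPUs plus an x4 USB card on a 44-lane CPU); they are assumptions to adjust, not a statement of what any given board actually allocates:

```python
# Quick PCIe lane budget check for a 44-lane CPU (i9-9960X class).
# Widths are the thread's numbers; real boards may allocate differently.
CPU_LANES = 44

devices = {
    "GPU 1": 16,
    "GPU 2": 8,
    "GPU 3": 8,
    "GPU 4": 8,
    "USB 3.0 controller card": 4,  # the 4-controller card sits in one x4 slot
}

used = sum(devices.values())
print(f"{used} of {CPU_LANES} CPU lanes used, {CPU_LANES - used} left")
# An M.2 NVMe SSD typically wants another x4, which would put this over
# budget unless it hangs off the chipset instead of the CPU.
```

With these numbers the budget comes out to exactly 44 of 44, which is why adding a CPU-attached NVMe drive on top forces something else to drop to a narrower link.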
  5. The problem isn't the motherboard, as far as I understand; these Intel chips only support 44 lanes... I'm not sure how other people do this with Intel unless I am missing something, as the motherboard I looked at supports x16 times four.
  6. I also figured I would upgrade to the i9-9980XE, although it doesn't add more lanes.
  7. How close are you able to cut the PCIe lanes? I was looking at a new mobo, https://www.asus.com/us/Motherboards/WS-X299-SAGE/, which would have the GPU slots for four cards, and the U.2 ports for the USB extensions. However, it looks like it doesn't support anything but 16 lanes in slot 1? That, and an M.2 SSD, plus the bridge is an extra 8 lanes. Do you need to save lanes for other hard drives and SSDs? I'm not used to worrying about lanes since I have only ever bothered with single-GPU setups. I can ditch the M.2 if I need to, but it is a huge upgrade over SATA ports...
  8. I'm open to changing motherboards, especially if I can fit another GPU in there; cost is really not too important, as I have around 20 grand to play with as far as budget goes. And I didn't know a lot of these things existed before this, so it looks like I'm back off to researching motherboards. https://ark.intel.com/products/189123/Intel-Core-i9-9960X-X-series-Processor-22M-Cache-up-to-4-50-GHz- Intel says the max this processor supports is 44 lanes, which is also what my motherboard supports; that's where I got 44 from. Again, many thanks, I really appreciate it.
  9. Many thanks, I'll definitely look into that PLX bridge; that may allow us to use four VR clients at once. Do you know how that PLX bridge works with PCIe lane usage? I know the i9-9960X has 44 lanes, and you already end up getting limited to x8 on some graphics cards with four of them. Do you know if you would be capable of running that bridge, four GPUs, and an M.2 SSD at the same time? I would assume all four graphics cards would run at x8, which should be fine on PCIe 3.0? Like I said, I know a little, but a lot of this is new to me.
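One way to think about a PLX bridge: it doesn't add CPU lanes, it fans a fixed uplink out to more downstream slots and time-shares the uplink's bandwidth among them. A rough sketch of that oversubscription, with illustrative numbers (not taken from any particular switch's datasheet):

```python
# Illustrative PLX-switch oversubscription math; the port counts and widths
# are assumptions for the sake of example, not datasheet figures.
PCIE3_GB_S_PER_LANE = 0.985  # roughly 985 MB/s usable per PCIe 3.0 lane

uplink_lanes = 16       # lanes the switch takes from the CPU
downstream_slots = 4    # slots the switch offers
slot_lanes = 16         # link width each downstream slot negotiates

uplink_bw = uplink_lanes * PCIE3_GB_S_PER_LANE
per_slot_link_bw = slot_lanes * PCIE3_GB_S_PER_LANE
# If all four GPUs transfer at once, each effectively gets a quarter of the uplink:
per_slot_worst_case = uplink_bw / downstream_slots

print(f"each slot links at ~{per_slot_link_bw:.1f} GB/s "
      f"but shares ~{uplink_bw:.1f} GB/s of uplink "
      f"(~{per_slot_worst_case:.1f} GB/s each under simultaneous load)")
```

So each slot can negotiate a full-width link, but sustained simultaneous traffic from all four devices is still capped by the single uplink; for gaming workloads that mostly burst rather than stream, that often matters less than the raw numbers suggest.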
  10. I have already found some PCIe cards with separate USB controllers, as the VR sets will all be the same. Wish they didn't take up a PCIe slot, though; I'm pretty short on those. How hard was the BIOS ROM passthrough? And is it possible to set all of this up without real internet, maybe with some sort of router for IP addresses or something? In case you haven't been able to tell, this stuff is kind of out of my realm of study. The main PC with all the graphics cards will not have access to the company's internet, and we can't get access since that would require a specific image on a specific pre-built PC... yuck... But I will be able to take it home to set things up initially; after that it should be able to operate without a real internet connection. Thank you!
  11. Oh, OK, so you physically boot the computer from the USB drive; I thought you had the computer running Windows and then booted the software to set up the other VMs. Do you have any experience passing through the last GPU to a VM? From what I have seen it is kind of complicated for NVIDIA cards, at least for me.
  12. I'm new to this; I'm a mechanical engineer who happened to be put into a situation where we need to be able to run at least three instances of a VR LAN-based game created in Unreal. I have experience building computers, so I am not worried about that part; it's the VM/VR side that I am new to. We need LAN because this computer will not have access to the internet except when I set it up at home; it is not allowed on the company network, darn red tape. So this is what I was thinking, and I would like some feedback if there are inherent flaws in my game plan.
      Build:
      CPU: i9-9960X
      Motherboard: ASUS TUF X299 Mark 1
      RAM: Corsair Vengeance RGB Pro 128GB (8x16)
      Storage: 1 TB M.2, 2 TB SSD
      Video cards: RTX 6000, 2x RTX 2080 Ti
      PCIe: 4-individual-controller USB 3.0 card
      Execution: Pass the two 2080 Tis through to VMs and set up the last instance on the main computer. I would pass that one through to a VM as well, but I have read that it is a pain to get the last graphics card into a VM since the CPU doesn't have built-in graphics. So I guess I'm asking: can you still game on the host PC while the two VMs also run games? I'm also not sure about the storage; is it easier to buy another two SSDs and have them purely for the VMs? And what is the best way to connect these game instances?
      Notes: The RTX 6000 is for other work applications; I know the GTX series is cheaper, but the OpenGL and driver support for some applications is worth it. Any tips/comments or general help is very much appreciated, as I have not used unRAID or really any VM software before.