
About jordanmw

  1. I guess I will get a diagnostics dump and upload it when I get home. Has anyone had experience with the WD 1 TB Black NVMe drives with X399 and Unraid?
  2. No shell commands work and the GUI becomes unresponsive. I haven't tried tailing the syslog yet, but I have removed one drive and got the same result. If I remove both, I am back to normal and the array starts. I should also mention that when I first installed them, it made me reassign the cache drives, but the array still failed to start.
  3. After adding two 1 TB Western Digital NVMe drives to my motherboard, all operations seem to freeze the OS. I added these drives to pass through to a couple of Windows 10 VMs, but everything I do after adding them causes my rig to freeze. Any assistance would be appreciated. I can start in maintenance mode, but any formatting of the drives or starting the array freezes the system and requires a hard reboot. My BIOS and Unraid both recognize the drives, but I can't get past initializing the array without a full freeze requiring a reboot. I can't even collect log files when it freezes.
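     Since the logs are lost on every hard reboot, one workaround is to copy the live syslog somewhere persistent (the flash drive) before triggering the freeze. A minimal sketch, assuming the usual Unraid paths of `/var/log/syslog` and `/boot` (adjust if yours differ):

     ```shell
     # Save a timestamped copy of the syslog to persistent storage so it
     # survives a hard reboot. Paths default to the usual Unraid locations
     # (an assumption -- override with arguments if yours differ).
     save_syslog() {
       log="${1:-/var/log/syslog}"
       dest="${2:-/boot/logs}"
       mkdir -p "$dest" || return 1
       stamp=$(date +%Y%m%d-%H%M%S)
       cp "$log" "$dest/syslog-$stamp.txt"
     }

     # On the server, run save_syslog right before starting the array,
     # then pull the saved copy off the flash drive after the reboot.
     ```

     Running it from an SSH session in a loop just before starting the array would capture the last entries written before the lockup.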
  4. I believe that you can migrate, but you may run into licensing re-activation issues in Windows. Spaceinvaderone on this forum has tons of info about the migration process, among other things.
  5. I moved off of SoftXpand for exactly the same reason. Everything will work perfectly, as if multiple PCs were there. I am running a 4-gamers-1-CPU setup for exactly the same game collections. No idea about migrating what you have.
  6. jordanmw

    X399 Motherboard recommendation.

    I have an X399 Taichi and a 4-GPU gaming rig set up with GPU/USB passthrough. Works pretty well.
  7. jordanmw

    Help with VR VM Gaming Build

    You can always look at getting ribbon cable extenders for the slots, as long as they are still enabled. I have seen a few cases that have a totally separate area for the GPUs, with cable extenders for the slots. Or just go water cooling for them; I have seen a few people go that route.
  8. jordanmw

    Help with VR VM Gaming Build

    If I had your budget, I wouldn't even screw around with consumer-level parts; I'd just go with a dual-socket Xeon rig with tons of PCIe slots and be done with it. Something like this: https://www.asus.com/Motherboards/WS-C621E-SAGE/ I mean, why would you be looking at anything less? You would have enough slots and lanes to increase capacity in the future, when you decide that even 4 VR rigs isn't enough. It seems like a waste of effort to plan lanes out meticulously when you could grab 2 CPUs and get all the advantages of the extra lanes and separate dies.
  9. jordanmw

    Help with VR VM Gaming Build

    Just follow their lead: www.pcworld.com/article/3222652/gaming/how-we-hosted-a-star-trek-vr-party.amp.html
  10. jordanmw

    Help with VR VM Gaming Build

    Anything attached will take lanes. I did a similar setup with a separate Windows drive to boot to, but found that I never used it past the burn-in stability testing when first setting it up, so it was a waste of a drive. I don't know that much about PCIe lanes either, but I know it really can impede your speed if you don't have the setup just right when using all lanes.
  11. jordanmw

    Help with VR VM Gaming Build

    Yeah... Threadripper was my answer: https://silentpc.com/articles/performance-and-pci-express-bus-lanes I need the extra lanes and didn't have a lot of other options without going dual socket. That is your other option: dual Xeons! If I had the budget you do, I wouldn't think twice.
  12. jordanmw

    Help with VR VM Gaming Build

    You should really plan on 2x16 and 2x8 for your motherboard's PCIe requirements. Look at the Asus workstation boards. Plan on not cutting it that close.
  13. jordanmw

    Help with VR VM Gaming Build

    My board with my CPU says this: the processor includes 60 PCIe lanes; PHYs of 16 lanes may each have a maximum of 8 PCIe ports (x1, x2, x4, x8, x16). Note that 48 lanes are dedicated to multiple GPUs, with the other 12 lanes for I/O. So I am taking all 48 lanes for the multi-GPU setup and using all 12 for the other I/O. It is a tight setup and fully maxed out for I/O, but it works pretty flawlessly. I actually wish I had just spent the extra cash for a 1950X, which has 16 cores. Extra cores come in handy as you realize how many games are now optimized for 4 cores or more.
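    The lane accounting above is worth sanity-checking before buying parts. A throwaway helper (the 60/48/12 split is from my board; plug in your own numbers):

    ```shell
    # Subtract each consumer's lane count from the CPU's total and report
    # what's left. First argument is total lanes, the rest are allocations.
    lane_budget() {
      total=$1
      shift
      used=0
      for n in "$@"; do
        used=$((used + n))
      done
      echo $((total - used))
    }

    # Four GPU slots at x16/x8/x16/x8 plus 12 lanes of I/O on a 60-lane CPU:
    lane_budget 60 16 8 16 8 12   # prints 0 -- fully maxed out
    ```

    A negative result means the board will be downshifting links somewhere.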
  14. jordanmw

    Help with VR VM Gaming Build

    I got an X399 Taichi motherboard with a 1920X, and the combo of motherboard/CPU determines how many lanes you get. I have 3x M.2, 1x U.2, and 4 PCIe slots that can run x16/x8/x16/x8, so it works great for me. I have 12 CPU cores with SMT enabled. I have 2 of the M.2 slots populated with 1 TB WD Black drives and use my U.2 to connect to a PLX adapter: https://www.microsatacables.com/u2-sff8639-to-pcie-4-lane-adapter-sff-993-u2-4l I literally have every port on the motherboard filled except 1 M.2 slot, which is disabled because I used the U.2 port. Your board may have a similar limitation, but with that many cores, probably not as likely.
  15. jordanmw

    Help with VR VM Gaming Build

    Yeah, no internet shouldn't be an issue. I actually have a steamcache and a dedicated server for some of my games running on the same machine, so even when the internet goes down, clients can still update games and connect to that dedicated server. You will need a management IP address, but it doesn't really need internet access unless you want to download updates for plugins that you install.

    The only other concern is management. You will likely want a router or switch, and a separate computer for management that can connect to that router/switch. That is the best way, since you can run headless with the 3rd GPU taken by the third VM. Otherwise, you can prevent the 3rd machine from booting when Unraid starts and do management tasks with the 3rd GPU in GUI mode; then, when you are done with management, you boot the third machine and manage from a web browser in any running VM. You will want an IP range; just keep Unraid and all 3 machines on the same range.

    My board has 4 PCIe slots, so I had to use the U.2-to-PLX bridge as a PCIe x4 slot for my USB card. All 4 slots are holding GPUs. The BIOS for the GPU is not usually a big deal: just remove some header info from a dumped ROM and boom. Not sure about the 2000 series though.
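    On the "remove some header info" step: people usually do this in a hex editor, trimming everything before the actual expansion-ROM body. A rough sketch that automates it by cutting at the first 0x55AA ROM signature — an assumption that may not hold for every dump (some dumps contain that byte pair earlier), so verify the output in a hex editor before using it:

    ```shell
    # Trim the vendor header from a dumped GPU ROM by discarding everything
    # before the first 0x55AA expansion-ROM signature. Requires GNU grep
    # with PCRE support (-P). Usage: strip_rom_header dumped.rom trimmed.rom
    strip_rom_header() {
      in=$1
      out=$2
      # -a: treat binary as text, -b: print byte offset, -o: matched part only
      off=$(LC_ALL=C grep -aboP '\x55\xAA' "$in" | head -n1 | cut -d: -f1)
      [ -n "$off" ] || return 1            # no signature found: bail out
      tail -c +"$((off + 1))" "$in" > "$out"   # keep from the signature onward
    }
    ```

    The trimmed file should then start with bytes 55 AA, which is what passthrough expects to see at offset 0.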