jordanmw — Members · 288 posts · Days Won: 2

Everything posted by jordanmw

  1. Yeah- no internet shouldn't be an issue. I actually have a steamcache and a dedicated server for some of my games running on the same machine, so even when the internet goes down, clients can still update games and connect to that dedicated server. You will need a management IP address, but it doesn't really need internet access unless you want to download updates for plugins you install. The only other concern is management- you will likely want a router or switch and a separate computer for management that can connect to it. That is the best way, since you can run headless with the 3rd GPU taken by the third VM. Otherwise, you can prevent the 3rd machine from booting when unraid starts and do management tasks with the 3rd GPU in GUI mode; when you are done with management, you boot the third machine and manage from a web browser in any running VM. Just keep unraid and all 3 machines on the same IP range. My board has 4 PCI-E slots, so I had to use a U.2-to-PLX bridge in the x4 PCI-E slot for my USB card- all 4 slots are holding GPUs. The GPU bios is not usually a big deal- just remove some header info from a dumped ROM and BOOM. Not sure about the 2000 series though.
  2. I have 4 nvidia cards, one passed through to each of my 4 gaming VMs. The last one required a bios ROM to pass through, but worked fine otherwise. The other thing you will probably have to look at is the number of USB controllers you can break into separate IOMMU groups to pass to each machine for the VR headsets. I got an Allegro card with 4 controllers- works great for USB 3.0 passthrough.
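For reference, supplying a dumped vBIOS to a passed-through GPU is done with a `<rom>` element inside the hostdev entry of the VM's XML. A minimal sketch- the PCI address and ROM path below are placeholders for your own system:

```xml
<!-- Sketch of a GPU hostdev entry that points QEMU at a dumped vBIOS.
     The PCI address and ROM file path are placeholders. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/vbios/gtx960-dump.rom'/>
</hostdev>
```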
  3. Did anyone see this development with NUMA and TR processors? Check this out- could be something to try with unraid for those having NUMA issues. https://level1techs.com/article/unlocking-2990wx-less-numa-aware-apps I love that he mentions the Celeron 300A in this- really takes me back to the age of slockets, Peltier TEC coolers, and overclocking....
  4. I am also looking forward to seeing this added to unraid releases.
  5. +1 wish everyone with a TR would light up this thread....
  6. I had a lot better luck removing the isolation on CPUs and changing my RAM settings back to auto in the bios. I have 4 VMs that are gaming machines plus a virtual game server, and I am not running into any stuttering while playing, or while doing SMB activity during play. Granted, I have an ASRock X399 instead of an MSI, but they seem similar enough. I also have 64GB of RAM, so maybe that is helping with bandwidth- but when I isolated cores in addition to pinning them, I got similar results. It was only after throwing out isolation and the manual RAM settings that I got good results. I did do the EPYC change and the MSI fix, but that was all.
  7. I ran into all sorts of issues with certain games on SMB shares. Had me trying all sorts of solutions for different games- once I passed through a drive or attached a VHDX on an SMB share, things worked properly. It was really hard to troubleshoot, since games would fail with different errors and some of them would work even when others wouldn't. Tried reinstalling various frameworks etc., but only direct-attached storage worked for me.
  8. Yeah- passed through to Windows 10. I did try to add the drives this weekend and they were recognized- but my array wouldn't start and it lost the cache drives. I am thinking it was related to the drive assignment within linux- maybe it replaced my sdd and sde devices that are set as cache. Anyone know how I can prevent that? I also noticed that my array would lock up completely if I tried any operations on them, like a format. Going to try one at a time and see what that gets me- really a PITA having to take all the video cards out when swapping drives in and out. Wish there were bios options to disable each slot individually.
  9. Do you know what controller chip it uses?
  10. I appreciate the feedback- looks like you have a similar setup. The M.2 drives are the WD Black 2280s and sit in the M.2 slots on the same board you have. The only thing I am worried about now is that other people have had issues with M.2 drives other than Samsung. I am really hoping those drives work- then I may just pass them through to the other 2 machines and be done with it, but I can't help but think that I should move all the machines to those drives for the speed increase.
  11. Anyone have some experience with drive config with this SSD combo?
  12. I just grabbed 2 Western Digital Black m.2 1TB drives and it seems I might have gotten the same controller in them- does anyone know what the process looks like to get these things going?
  13. Looking for some advice for storage setup. Here is the situation: X399 Taichi, 64GB RAM, 4x GTX 960 with 4 gaming VMs configured and 1 game server VM. For drives I currently have:
      - 2x 512GB Plextor (RAID0), currently set up as cache
      - 2x 3TB WD Purple, set up as disk 1 and parity (couple of dockers, nothing else)
      - 2x 1TB Samsung 850 SSD, passed through to 2 of the gaming VMs as game storage drives
      - 2x 1TB WD Black M.2 drives- not configured
      So after getting really bad load times for games, I am looking for the best way to optimize all 4 machines. The 2 that currently have the 1TB SSDs passed through have no loading-time issues, so passthrough might just be what I do with the other 2 machines, but I'm looking for direction. Anyone have any advice on the best way to set this all up to optimize speed for all 4 machines? Should I shift my VMs to one of the M.2 drives- or use both for cache? 2 of the VMs mount a VHDX file on SMB and those are really slow to load. If you had this hardware, and my goal of 4 fast gaming machines and one game server, what would you do?
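The passthrough approach that fixed load times on the first two VMs is normally done with a block-device disk entry in the VM XML, referencing the drive by its stable by-id path rather than sdX. A sketch- the drive serial in the path is a placeholder:

```xml
<!-- Sketch: pass a whole SSD to a VM via its persistent /dev/disk/by-id
     path. The serial suffix in the path is a placeholder. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_1TB_SERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Using the by-id path avoids the sdX reshuffling problem mentioned earlier in the thread, since those names are stable across reboots.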
  14. I think checking bios options should be your first step. Make sure those slots are set to x16 or x8 and enabled. It's strange for neither one to show in the IOMMU groupings. Have you tried loading another OS to verify that they work there? The first thing I did with my threadripper build was install Windows 10 with no activation and burn the machine in overnight to verify everything was recognized and working with no thermal issues. That helps a lot to eliminate hardware issues so you don't have to wonder during your unraid config. I have the ACS downstream patch enabled too- I think that may have helped as well.
  15. Even with the bios set to channel mode and NUMA all set up within the VM, I am still getting memory used from both nodes- 20ish MB? Not sure what is going on there- I can decrease the RAM to account for the 20 MB but it still grabs the same amount:
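For context, binding a VM's memory to a single node is done with a numatune element in the VM's XML; a minimal sketch, assuming the VM's pinned cores live on node 0:

```xml
<!-- Sketch: force guest memory allocation onto NUMA node 0 so the VM
     doesn't pull pages from the remote die. Node number is illustrative. -->
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```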
  16. So, should they be pinned- or isolated, or both for my gaming VMs?
  17. Does anyone know if the next RC or version will include some of this stuff, and maybe lstopo and hwloc? I saw lstopo mentioned.
  18. I love my Taichi X399 also- passing GPUs in all 4 slots with no issues. Haven't tried bumping the multiplier yet though.
  19. So how exactly should I be adding this tweak? Is it in the XML of the individual machines? Where do I add it, and what should I add for a 1920x?
  20. yeah, I'm seeing
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='1'/>
      </cpu>
      when assigning. I'm sure I read something about this though
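The tweak being asked about swaps that host-passthrough block for a custom EPYC model so the Windows guest sees a sane cache/SMT topology on Threadripper. A hedged sketch- the core/thread counts here are just an example for a 4-core, 8-thread VM:

```xml
<!-- Sketch of the EPYC cache tweak for Threadripper guests: emulate an
     EPYC CPU with topoext so the guest sees proper cache and SMT
     topology. Topology counts are illustrative. -->
<cpu mode='custom' match='exact' check='none'>
  <model fallback='forbid'>EPYC</model>
  <topology sockets='1' cores='4' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>
```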
  21. Trying to optimize my setup- maybe you guys will have some suggestions. Here is the setup: 1920X with 4x GTX 960, set up as 4 gaming machines and 1 game server. The performance is decent, but it seems like it could be better. Here is my current config- what changes do you think I should make to optimize it? I also saw all the info about the EPYC cache tweaks and am wondering if that is something I should do as well. Can someone chime in with their best guess? Brown is the game server- all other colors are individual machines, with the same color graphics card as the CPUs.
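For pinning on a 1920X (12 cores, 24 threads), the hyperthread sibling of core N is typically CPU N+12 under Linux, so each VM usually gets cores paired with their siblings. A sketch of one 4-vCPU VM's cputune block- the specific CPU numbers are placeholders:

```xml
<!-- Sketch: pin 4 vCPUs to two physical cores plus their HT siblings on
     a 1920X (sibling of core N is N+12). CPU numbers are illustrative;
     keep a VM's cores on one die to avoid cross-node memory traffic. -->
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='14'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='15'/>
  <emulatorpin cpuset='0,12'/>
</cputune>
```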