neogenesisrevo

Members
  • Content Count: 8
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About neogenesisrevo
  • Rank: Newbie
  1. I understand the basic idea, but I'm not sure how to optimize. I guess my confusion is with how emulatorpin and iothreadpin work. My i7-5820k is a 6-core, 12-thread CPU. I'd like to split the cores between 2 VMs, and for now I'd like to optimize for gaming. So I've isolated CPUs 2,3,4,5,8,9,10,11 in Syslinux and I am currently assigning 4 to each VM. This part is pretty straightforward. Now, with the remaining 4 threads (0,1,6,7), I'm a bit confused about how to best utilize them. Should emulatorpin be set to 0,6 in both VMs and the iothreadpin cpuset to 1,7? Should iothreadpin and emulatorpin simply share all 4 remaining threads? Or should each VM have separate threads defined for emulatorpin and iothreadpin? Should the iothreadpin cpuset actually come from threads already assigned to the VM? So if VM 1 has 4,10,5,11, should iothreadpin be cpuset='4,10'? Please help.
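     Not a definitive answer, just a minimal sketch of how the relevant pinning section can look inside one VM's <domain> XML, assuming VM 1 keeps isolated host threads 2,8,3,9, the emulator threads sit on the non-isolated pair 0,6, and a single iothread sits on 1,7 (all cpuset values below are only the example numbers from this thread):

     <vcpu placement='static'>4</vcpu>
     <iothreads>1</iothreads>
     <cputune>
       <!-- 1:1 pinning of the guest's 4 vCPUs to the isolated host threads -->
       <vcpupin vcpu='0' cpuset='2'/>
       <vcpupin vcpu='1' cpuset='8'/>
       <vcpupin vcpu='2' cpuset='3'/>
       <vcpupin vcpu='3' cpuset='9'/>
       <!-- QEMU emulator threads stay off the isolated CPUs -->
       <emulatorpin cpuset='0,6'/>
       <!-- iothread #1 handles disk I/O; pin it to the other non-isolated pair -->
       <iothreadpin iothread='1' cpuset='1,7'/>
     </cputune>

     For the iothreadpin to actually do anything, the vDisk's <driver> line also needs iothread='1' so the disk is serviced by that iothread. The second VM would get its own copy of this block with its own vcpupin lines; it can reuse the same emulatorpin/iothreadpin cpusets, since those host threads are shared with the host anyway.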
  2. Questions: 1) In your opinion, what is the best way to utilize a 512GB SSD and a 512GB NVMe drive? (I'm guessing SSD for cache, but I'm not sure what the use for the NVMe would be.) 2a) Is there any way to set up partitions on my NVMe and pass the partitions through to various VMs instead of the whole device? 2b) What about using the NVMe as a disk share, or sharing the disk using something like SMB, NFS, etc.? My knowledge of all this is very limited, but my logic is that since the drive is physically in the computer, performance shouldn't be affected.

     Full System:
     Samsung 960 PRO 512GB - NVMe (currently unassigned, formerly my cache disk)
     Samsung 850 PRO 512GB - SSD (Array Disk 3)
     1TB - HDD (Array Disk 4)
     2TB - HDD (Parity)
     128GB - SSD (Array Disk 1)
     40GB - SSD (Array Disk 2)
     4x GTX 1080 Ti
     48GB DDR4 3000MHz
     i7-5820k (28 PCIe lanes)
     ASRock X99 Extreme4 motherboard

     A bit more info: I've read that using an NVMe disk for cache is a waste. After watching Spaceinvader One's NVMe passthrough video I finally removed it as my cache, and it is currently unassigned. I thought I could set up 3 partitions on the disk and, instead of passing through the whole device, pass the partitions through individually to different VMs, but sadly this is not the case. Ideally, I'd like to use the NVMe disk to boot 2 VMs and also use it to share my Steam library folder between the two VMs. There is one particular game I play that (I believe) would greatly benefit from the extra speed provided by the NVMe disk, which would result in shorter loading screens.
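     Just a hedged sketch (this is not the whole-device NVMe passthrough from the video): a single partition can generally be handed to a VM as a raw virtio block device by editing that VM's XML, which keeps good speed without giving the VM the entire controller. The /dev/nvme0n1p2 path below is only a placeholder for whatever partition you create:

     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='none' io='native'/>
       <!-- placeholder partition; a /dev/disk/by-id/... path is safer across reboots -->
       <source dev='/dev/nvme0n1p2'/>
       <target dev='vdb' bus='virtio'/>
     </disk>

     One caveat: a plain NTFS partition attached this way can only be used read-write by one running VM at a time, so sharing a Steam library between two simultaneously running VMs still points toward a network share (SMB/NFS) rather than attaching the same partition to both.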
  3. I believe I tried this already, but it just caused crashes. I was thinking of something like giving each VM only 6 "pinned" CPUs, say VM A gets CPUs 0-5 and VM B gets 6-11, but then telling each VM it has 12 CPUs. The point would be 6 dedicated CPUs for each VM, plus 6 more for each whenever those are free. IDK if this can be done, as my knowledge and experience with VMs is mostly based on the top 5-10 results of a Google search... lol.
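     For what it's worth, here is roughly what that idea looks like in libvirt XML: advertise 12 vCPUs but constrain them to 6 host threads with a cpuset instead of hard 1:1 pins (the numbers are just this example's, and there is no guarantee this avoids the lock-ups):

     <!-- VM A: 12 vCPUs, all allowed to float only over host CPUs 0-5 -->
     <vcpu placement='static' cpuset='0-5'>12</vcpu>

     <!-- VM B would use cpuset='6-11' with the same vCPU count -->

     To let a VM actually borrow the other half when it is idle, the cpuset would have to cover all 12 host threads on both VMs, at which point you are relying entirely on the host scheduler to arbitrate whenever both guests get busy.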
  4. I have a host machine with 48GB of RAM and 12 logical cores. I have 2 VMs that are used constantly, so I've given each of them 6 cores and about 14GB of RAM. Sometimes, though, one of the 2 VMs isn't being used, and having 6 cores sitting idle is such a waste. How can I set up my system to give either VM all 12 cores (and maybe a lot more RAM) when they are available, but split the cores evenly when both VMs need them? I've done some reading on libvirt.org, tried Googling "Dynamic CPUs in KVM", and tried a few things, but there always seems to be a problem. I've installed virt-manager and tried a topology of 12 sockets, 1 core and 1 thread. Both VMs 'function' fine until I start running major tasks on both of them; then they just lock up. I've reverted back to just assigning 6 cores to each VM via the Unraid VM panel for now, so posting my XML 'as is' wouldn't be all that helpful.
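     In case the topology itself is part of the problem: a 12-sockets/1-core layout looks nothing like the real 5820k, so a hedged alternative sketch for a 6-vCPU guest would be to present one socket with 3 hyperthreaded cores (mirroring half of the host) via the <cpu> element:

     <vcpu placement='static'>6</vcpu>
     <cpu mode='host-passthrough'>
       <!-- 1 socket x 3 cores x 2 threads = 6 vCPUs, matching half of a 6c/12t 5820k -->
       <topology sockets='1' cores='3' threads='2'/>
     </cpu>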
  5. Hahaha, thanks man! That's exactly what they meant.
  6. Hey everyone! After months of testing and learning, I finally managed to have 2 of my 4 1080 Tis run in SLI. The information on how to do this has actually been online for a while, but it's a bit scattered about (at least that was my experience).

     Overview of Steps
     1) Achieve GPU pass-through
     2) Mod the Nvidia drivers to allow SLI in our VM
     3) Use Nvidia Profile Inspector to get much better performance

     Edit Update
     After applying a few other VM optimizations, specifically CPU pinning, my SLI performance DRASTICALLY improved. My FPS in SLI went from the mid 40s to 70+ (I used a few different benchmarks, such as the Unigine benchmarks, and also my personal experience playing ESO). When I started trying to get SLI to work in Unraid, I noticed that just passing through 2 GPUs to a single VM already resulted in a very noticeable gain in performance. I am still tinkering with Nvidia Profile Inspector, so things might change. If they do, I will post an update.

     GPU pass-through
     My VM options:
     Bios: OVMF
     Machine: Q35.1
     SATA for the ISO drivers and VirtIO for the primary vDisk
     Follow the instructions in this Spaceinvader One video. Afterwards, pass through your 2 GPUs and they should appear in Windows.

     Nvidia Drivers Mod
     Note: If you have any difficulties with this next part, you are better off asking for help on the DifferentSLIAuto forum thread.
     To my understanding, motherboard manufacturers must license the right to allow SLI on their boards from Nvidia. The reason we haven't been able to achieve SLI in Unraid is that our VM's "motherboard" info simply doesn't qualify as an Nvidia-approved motherboard for SLI. Luckily, there has been a hack available for a while that enables SLI on any motherboard and with any GPUs (i.e., the GPUs don't even need to be the same model).
     This is what worked for me. I used Nvidia driver version 430.86; if you use the same version, then these instructions SHOULD work for you.
     1) Install the Nvidia drivers. The original method/program used by DifferentSLIAuto no longer works with the latest Nvidia drivers (driver versions 4xx and on). We have two choices: go with the old method and an older driver, or mod newer drivers manually. The latter is what I did, and what I'll be describing.
     2) Download DifferentSLIAuto version 1.7.1.
     3) Download a hex editor (I used HxD).
     4) Copy nvlddmkm.sys from C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_b49751b9038af669 to your DifferentSLIAuto folder. (NOTE: if you are not using driver version 430.86, the nvlddmkm.sys file you must modify will be located somewhere else, and you must find it yourself by going to Device Manager > Display adapters > YOUR CARD.)
     5) Mod the copied nvlddmkm.sys file by opening it in the hex editor. Here are the changes for driver 430.86:
        Address: [OLD VALUE] -> [NEW VALUE]
        000000000027E86D: 84 -> C7
        000000000027E86E: C0 -> 43
        000000000027E86F: 75 -> 24
        000000000027E870: 05 -> 00
        000000000027E871: 0F -> 00
        000000000027E872: BA -> 00
        000000000027E873: 6B -> 00
        Save and exit the hex editor.
     6) In your DifferentSLIAuto folder, right-click and edit install.cmd. Replace all instances of "nv_dispi.inf_amd64_7209bde3180ef5f7" with the location of our original nvlddmkm.sys file; in our case this is "nv_dispi.inf_amd64_b49751b9038af669". install.cmd will take the modified copy we added to the folder and replace the original file found at the location we specify here. Use this video for reference, but note that in the video the driver version is different than ours, so they replace nv_dispi.inf_amd64_7209bde3180ef5f7 with nv_dispi.inf_amd64_9ab613610b40aa98 instead of nv_dispi.inf_amd64_b49751b9038af669.
     7) Move your DifferentSLIAuto folder to the root of your C:\ drive.
     8) Set UAC to the lowest setting (OFF) in Control Panel\All Control Panel Items\Security and Maintenance.
     9) Run cmd.exe as admin and enter:
        bcdedit.exe /set loadoptions DISABLE_INTEGRITY_CHECKS
        bcdedit.exe /set NOINTEGRITYCHECKS ON
        bcdedit.exe /set TESTSIGNING ON
     10) Restart your computer into safe mode with networking enabled (video showing how to do it quickly using Shift + Restart).
     11) Within the DifferentSLIAuto folder located at C:\, run install.cmd as admin. After only a few seconds, the CMD window text should all be green, indicating that all is well!
     12) Open up your Nvidia Control Panel; under 3D Settings it should now say "Configure SLI, Surround, PhysX". Click that option, select Maximize 3D performance under SLI Configuration, and that's it!

     Nvidia Profile Inspector
     The default settings in the Nvidia Control Panel really suck. After FINALLY getting SLI to work, I was getting only 40 FPS in SLI when I had been getting 100+ FPS before enabling SLI. I was about ready to give up when I came across Nvidia Profile Inspector! By changing a few settings with it, I was finally able to get great SLI results (70 FPS). Keep in mind that I have only been changing settings in Profile Inspector for a few hours, so I'm sure there are many optimizations still to be made; hopefully we can figure it out as a community.
     Run Nvidia Profile Inspector. I recommend the following settings for now for the _GLOBAL_DRIVER_PROFILE (Base Profile):
     1 - Compatibility
     SLI compatibility bits: 0x02C00005
     SLI compatibility bits (DX10 + DX11): 0x080000F5
     5 - Common
     Power management mode: Prefer maximum performance
     Thread optimization: On
     6 - SLI
     NVIDIA predefined number of GPUs to use on SLI rendering mode: 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
     NVIDIA predefined number of GPUs to use on SLI rendering mode (on DirectX 10): 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
     NVIDIA predefined SLI mode: play with both 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR and 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
     NVIDIA predefined SLI mode on DirectX 10: play with both 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR and 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
     SLI rendering mode: try 0x00000000 SLI_RENDERING_MODE_AUTOSELECT, 0x00000002 SLI_RENDERING_MODE_FORCE_AFR, or 0x00000003 SLI_RENDERING_MODE_FORCE_AFR2
     MAKE SURE TO HIT APPLY CHANGES IN THE TOP RIGHT-HAND CORNER.

     Next we will make some changes in Control Panel > Nvidia Control Panel.
     Nvidia Control Panel > Manage 3D Settings > Global Settings
     Power management mode: Prefer maximum performance
     SLI rendering mode: start by leaving this alone, then make it match your Nvidia Profile Inspector settings (so if you are trying 0x00000002 AFR, set this to Force alternate frame rendering 1, and if you are trying 0x00000003 AFR2, set this to Force alternate frame rendering 2).
     And that's it! Keep in mind the settings above are far from the best and are only a starting point for us. It is probably best to find individual game profiles for each title and go from there. I will be googling "Nvidia Profile Inspector <insert game here>" for a while and trying different settings out. Make sure you change the "NVIDIA predefined number of GPUs" settings to TWO if you change profiles, because in my experience it was defaulting to FOUR (this may be because I have 4 physical cards installed on the motherboard, so if someone else gets different results, please let me know).

     SOME CLOSING THOUGHTS
     I did some additional research which led me to open up my motherboard manual. I discovered that in my case the PCIe slots change speed depending on a number of factors (for example, with a 28-lane CPU, my PCIe 3.0 slots PCIE1/PCIE3/PCIE5 stop all running at x16 and instead run at x16/x8/x4; and if that wasn't a big enough kick in the nuts, since I have an M.2 SSD in my M.2 slot, my PCIE5 slot doesn't function at all). All in all, this was a fun adventure for me, and I really hope this information helps people who are interested in trying SLI via VMs!
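     For anyone who wants to see what the end result looks like in the VM's XML rather than the GUI, below is a stripped-down sketch of two passed-through GPUs, each with its video function and its HDMI audio function. The PCI addresses are placeholders; use the real ones for your own cards (lspci on the host shows them):

     <!-- GPU 1: placeholder host address 03:00.0 (video) and 03:00.1 (audio) -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source><address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/></source>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source><address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/></source>
     </hostdev>
     <!-- GPU 2: placeholder host address 04:00.0 (video) and 04:00.1 (audio) -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source><address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/></source>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source><address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/></source>
     </hostdev>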
  7. Hey everyone! After months of testing and learning, I finally managed to have 2 of my 4 1080 Tis run in SLI. The information on how to do this has actually been online for a while, but it's a bit scattered about (at least that was my experience).

     Overview of Steps
     1) Achieve GPU pass-through
     2) Mod the Nvidia drivers to allow SLI in our VM
     3) Use Nvidia Profile Inspector to get much better performance

     A Few Quick Notes
     When I started trying to get SLI to work in Unraid, I noticed that just passing through 2 GPUs to a single VM already resulted in a very noticeable gain in performance. I was expecting fully configured SLI to boost performance further, but for the most part I was disappointed. In most cases I found SLI to result in either a decrease in performance or a negligible improvement outside of some benchmarking programs. To get things going in the right direction I had to tinker a bit with Nvidia Profile Inspector, but all I've managed to achieve so far is to break even with plain 2x GPU pass-through. I am still tinkering with Nvidia Profile Inspector, so things might change. If they do, I will post an update.

     (The step-by-step details for GPU pass-through, the Nvidia driver mod, and the Nvidia Profile Inspector settings are the same as in the post above.)
     SOME CLOSING THOUGHTS
     I did some additional research which led me to open up my motherboard manual. I discovered that in my case the PCIe slots change speed depending on a number of factors (for example, with a 28-lane CPU, my PCIe 3.0 slots PCIE1/PCIE3/PCIE5 stop all running at x16 and instead run at x16/x8/x4. If that wasn't a big enough kick in the nuts, since I have an M.2 SSD in my M.2 slot, my PCIE5 slot doesn't function at all, and the cherry on top is that I just found out, and I quote this straight from the manual, that "PCIE2 (PCIe 2.0 x16 slot) is used for PCI Express x4 lane width cards." What does that even mean? I guess it means: we're advertising this slot as x16, but it's really just x4.) All in all, this was a fun adventure for me, and I really hope this information helps people who are interested in trying SLI via VMs!
  8. How can I spin up the array and boot up my VMs from the console screen? I'm guessing this is what people refer to as 'headless'? I'm still pretty new to all this and I want to learn, but I can't find a list of console commands anywhere. Thank you.
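     A small example in case it helps: starting the array itself is normally done from the web GUI, but once the array and the VM service are running, VMs can be controlled from the console or over SSH with virsh, the libvirt command-line tool. "Windows 10" below is just a placeholder for whatever your VM is named in virsh list:

     virsh list --all             # show every defined VM and whether it is running
     virsh start "Windows 10"     # boot a VM by its name
     virsh shutdown "Windows 10"  # ask the guest to shut down cleanly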