neogenesisrevo

Everything posted by neogenesisrevo

  1. Can anyone else shed some light on this? Which log should I be checking to figure out what's delaying startup?
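For anyone hunting the same delay, here's where I'd start looking (these assume stock libvirt log locations; substitute your own VM name):

```shell
# Per-VM QEMU log, named after the VM (stock libvirt path)
tail -n 50 "/var/log/libvirt/qemu/Linux Mint 20.1.log"

# The libvirt daemon log and kernel messages can also show slow device init,
# e.g. VFIO resets taking a long time on a passed-through controller
tail -n 50 /var/log/libvirt/libvirtd.log
dmesg | grep -i vfio
```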
  2. <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='3'>
       <name>Linux Mint 20.1</name>
       <uuid>499fdc7f-ceb1-6232-cf5d-85ce7a191ded</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="mint.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>12058624</memory>
       <currentMemory unit='KiB'>12058624</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='4'/>
         <vcpupin vcpu='1' cpuset='16'/>
         <vcpupin vcpu='2' cpuset='5'/>
         <vcpupin vcpu='3' cpuset='17'/>
         <vcpupin vcpu='4' cpuset='6'/>
         <vcpupin vcpu='5' cpuset='18'/>
         <vcpupin vcpu='6' cpuset='7'/>
         <vcpupin vcpu='7' cpuset='19'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/499fdc7f-ceb1-6232-cf5d-85ce7a191ded_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Linux Mint 20.1/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <alias name='pci.6'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <alias name='pci.7'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:eb:90:7d'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Linux Mint 20.1/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/Data/1080ti.rom'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x1532'/>
             <product id='0x0064'/>
             <address bus='5' device='2'/>
           </source>
           <alias name='hostdev3'/>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x1532'/>
             <product id='0x0227'/>
             <address bus='5' device='3'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
  3. I don't have any storage connected to the USB controller. The problem happens even if nothing is attached to the USB slots. Would adding "boot order" to a USB controller do anything?
  4. Hello everyone, I have a USB controller that I'm passing through to my VMs. Whenever I do, the VM takes a really long time (over 5 minutes) to get past the TianoCore boot screen. Once it does, everything runs normally. This doesn't happen on my Windows 10 VM. I've tried 2 different versions of Linux Mint and Ubuntu, but the problem persists. I'm not sure which logs I should be looking at to try and find the problem. Any help would be appreciated.
  5. I really hope someone can help me. I wanted to mention that the server does appear to be receiving the login request, but for some reason it is being dropped. I don't have the slightest idea what could be causing this. Is there some sort of option that needs to be enabled to allow access from outside the network? If my memory serves me right, that is the case with things like SQL or maybe SSH, but I'm not entirely sure. If anyone has any suggestions, please let me know.
  6. Let's get this part out of the way first: I know that FTP isn't secure and I shouldn't expose my computer to the internet, that a VPN among other things would help, and so on. I tried setting up both pure-ftpd and crushftp9. I am able to access the FTP server locally, but for some reason I can't get access from outside my network. I'm forwarding ports 9921, 9443, & 2222 from my router to my host as suggested, and the docker container is also matching the ports. Is there a setting I'm missing? I tried using https://ftptest.net/ and I'm getting the following:
     Reply: 257 "/" PWD command successful.
     Status: Current path is /
     Command: TYPE I
     Reply: 200 Command ok : Binary type selected.
     Command: PASV
     Reply: 227 Entering Passive Mode (69,202,243,35,39,21)
     Command: MLSD
     Error: Could not establish data connection: Connection refused
     Check that is part of your passive mode port range. If it is outside your desired port range, you have a router and/or firewall that is modifying your FTP data. Make sure that this port is open in your firewall. The port needs to be forwarded in your router. In some cases your ISP might block that port; in that case, configure the server to use a different passive mode port range. Contact your ISP for details on which ports are blocked.
     tower-diagnostics-20201021-2349.zip
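For reference, the PASV reply above encodes the data-connection port in its last two numbers (the first four are the server IP), so you can work out exactly which port the test was refused on:

```shell
# PASV reply: 227 Entering Passive Mode (69,202,243,35,39,21)
# -> server IP 69.202.243.35, data port = p1 * 256 + p2
p1=39
p2=21
echo $((p1 * 256 + p2))
```

That computed port is the one that has to fall inside the server's passive port range and be forwarded on the router, in addition to the control ports.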
  7. Hey, this might be silly for me to even ask, but during passthrough are you keeping VNC enabled? I have the exact same issue when I have both VNC & a GPU enabled for a VM; by disabling VNC, everything just works for me. Edit: Also, I just reviewed your setup and it's very similar to mine, only I have 4 1080 Tis. I had this exact problem on two different systems, both due to chipset limitations. With my 1st system, my m.2 slot was directly connected to one of my PCIe slots and both couldn't function at the same time (weirdly enough it was one of the PCIe slots in the middle, which made me even more confused). My 2nd system was advertised as having multiple PCIe 4.0 slots, but I later found out that they couldn't all run at PCIe 4.0 speeds at the same time. My 4th PCIe slot, holding my last card, runs at only PCIe 1.0 when the other 3 are in use, which produced a black screen when I tried to pass it through, effectively making my 4th 1080 Ti little more than a brick.
  8. Would it be possible to let a VM detect which motherboard chipset is on the host? This would come in very handy for getting something like SLI working, which is only enabled if the driver is being run on Nvidia-authorized motherboards. I might be wrong about this, but I vaguely remember something like this existing for CPUs. (Also, yes, I know SLI is crap, but I'm just passing the time.)
  9. I'm passing my Logitech headset through to my Ubuntu VM, but it sounds terrible; scratchy is probably how I would describe it. I found similar threads from a few years back, and the solution then was to buy a sound card. I was checking in to see if USB headsets work properly now. I tried changing the USB controller setting to 3.0 (tried both options), but no difference.
  10. Unraid Version: 6.9. With the new version of Unraid supporting additional pools, in combination with a few other features that I haven't been taking advantage of, I thought this would be a good time to rethink how I'm currently using the storage devices on my server. I would like to run RAID0 on at least 2 of the SSDs, if not 3, but I also want to make sure they are put to a task that will benefit from the speed increase. I also have an NVMe just sitting around at the moment not really doing a whole lot. The server will primarily have 2 VMs running most of the time (although I have a few others I will be switching to). The 1st VM, running Windows 10, is mainly used for work, mostly remote access to my offsite office. The other VM, running Ubuntu, will mostly be used for programming, gaming, etc. You could call that 2nd VM the "main" VM. Additionally, I do have a few Docker containers such as Emby, but they are rarely used. How can I best utilize all the drives to maximize performance? I don't mind complex setups, so feel free to suggest utilizing the 9p protocol, device passthrough, etc. I look forward to hearing suggestions. Bonus points for anything cool, unique, or different.
      Setup:
      1x 1TB HDD
      3x 512GB SSDs via SATA
      1x 512GB SSD via NVMe
      (Also one additional HDD that will be dedicated to parity, so no need to consider that)
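If anyone does suggest the 9p route: for what it's worth, the guest side of a virtio-9p share is just a one-line mount once the share is defined in the VM's XML (the share tag 'steam' and the mount point here are placeholders, not from my actual config):

```shell
# Guest side: mount a virtio-9p share exported by the VM definition.
# 'steam' must match the <target dir='...'/> tag given to the share (placeholder here).
mkdir -p /mnt/steam
mount -t 9p -o trans=virtio,version=9p2000.L steam /mnt/steam
```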
  11. I understand the basic idea, but I'm not sure how to optimize. I guess my confusion is with how emulatorpin and iothreadpin work. My i7-5820k is a 6-core, 12-thread CPU. I'd like to split the cores between 2 VMs, and for now I'd like to optimize for gaming. So I've isolated CPUs 2,3,4,5,8,9,10,11 in Syslinux and I am currently assigning 4 to each VM. This part is pretty straightforward. Now, with the remaining 4 threads (0,1,6,7), I'm a bit confused about how to best utilize them. Should emulatorpin be set to 0,6 in both VMs and iothreadpin's cpuset to 1,7? Should iothreadpin and emulatorpin simply share all 4 remaining threads? Or should each VM have separate cores defined for emulatorpin and iothreadpin? Should the iothreadpin cpuset actually come from threads already assigned to the VM? So if vm1 has 4,10,5,11, should iothreadpin have cpuset='4,10'? Please help.
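For what it's worth, the first option (emulator threads on 0,6 and I/O threads on 1,7 for both VMs, keeping all of them off the isolated vCPU cores) is easy to experiment with from the host using virsh, so you don't have to keep editing the XML. The VM names below are placeholders, and iothreadpin assumes the domain defines at least one iothread (<iothreads>1</iothreads>):

```shell
# Keep emulator threads and I/O threads on the un-isolated pairs 0,6 and 1,7.
# 'vm1'/'vm2' are placeholder domain names; --config persists the change.
virsh emulatorpin vm1 0,6 --config
virsh iothreadpin vm1 1 1,7 --config
virsh emulatorpin vm2 0,6 --config
virsh iothreadpin vm2 1 1,7 --config
```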
  12. Questions:
      1) In your opinion, what is the best way to utilize a 512GB SSD and a 512GB NVMe drive? (I'm guessing the SSD for cache, but I'm not sure what the use for the NVMe would be.)
      2a) Is there any way to set up partitions on my NVMe and pass through the partitions to various VMs instead of the whole device?
      2b) What about using the NVMe as a disk share, or sharing the disk using something like SMB, NFS, etc.? My knowledge of all this is very limited, but my logic is that since the drive is physically in the computer, performance shouldn't be affected.
      Full System:
      Samsung 960 PRO 512GB - NVMe (currently unassigned, formerly my cache disk)
      Samsung 850 PRO 512GB - SSD (Array disk 3)
      1TB - HDD (Array disk 4)
      2TB - HDD (Parity)
      128GB - SSD (Array disk 1)
      40GB - SSD (Array disk 2)
      4x GTX 1080 Ti
      48GB DDR4 3000MHz
      i7-5820k (28 PCIe lanes)
      ASRock X99 Extreme4 motherboard
      A bit more info: I've read that using an NVMe disk for cache is a waste. After watching Spaceinvader One's NVMe passthrough video I finally removed it as my cache, and it is currently unassigned. I thought I could set up 3 partitions on the disk and, instead of passing through the whole device, pass the partitions through individually to different VMs, but sadly this is not the case. Ideally, I'd like to use the NVMe disk to boot 2 VMs and also use it to share my Steam library folder between the two VMs. There is one particular game I play that (I believe) would greatly benefit from the extra speed provided by the NVMe disk, resulting in shorter loading screens.
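On the SMB option in 2b), the "is performance affected?" question is easy to test directly: export the NVMe as a share on the host and mount it inside a Linux guest. Server name, share name, and credentials below are placeholders:

```shell
# Guest side: mount an SMB share exported by the host (all names are placeholders).
mkdir -p /mnt/steam
mount -t cifs //tower/steam /mnt/steam -o username=me,password=secret,vers=3.0

# Rough sequential-read check against the mounted share
dd if=/mnt/steam/somefile of=/dev/null bs=1M count=1024
```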
  13. I believe I tried this already, but it just caused crashes. I was thinking something like: maybe give each VM only 6 "pinned" CPUs, so VM A gets CPUs 0-5 and VM B gets 6-11, but then tell each VM it has 12 CPUs. The point would be 6 dedicated CPUs for each VM, plus 6 more for each whenever those are free. I don't know if this can be done, as my knowledge and experience with VMs is mostly based on the top 5-10 results of a Google search, lol.
  14. I have a host machine with 48GB of RAM and 12 logical cores. I have 2 VMs that are used constantly, so I've given each of them 6 cores and about 14GB of RAM. Sometimes, though, 1 of the 2 VMs isn't being used, and having 6 cores sitting idle is such a waste. How can I set up my system to give either VM all 12 cores (and maybe a lot more RAM) when they are available, but split up the cores evenly when both VMs need them? I've done some reading on libvirt.org, tried Googling "Dynamic CPUs in KVM", and tried a few things, but there always seems to be a problem. I've installed virt-manager and tried a topology with 12 sockets, 1 CPU, and 1 thread. Both VMs 'function' fine until I start running any major tasks on both of them; then they'll just lock up. I reverted back to just assigning 6 cores to each VM via the Unraid VM panel for now, so posting my XML 'as is' wouldn't be all that helpful.
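One middle ground I've seen suggested (a sketch only; I haven't verified it avoids the lock-ups, and the VM names are placeholders): leave both VMs with 6 vCPUs, but let those vCPUs float across all 12 logical CPUs and use cpu_shares to weight them when both are busy, instead of hard-splitting the cores:

```shell
# Let each VM's 6 vCPUs float across all 12 logical CPUs...
for vcpu in 0 1 2 3 4 5; do
  virsh vcpupin vm1 "$vcpu" 0-11 --config
  virsh vcpupin vm2 "$vcpu" 0-11 --config
done

# ...and give both equal scheduler weight when they contend.
virsh schedinfo vm1 --config --set cpu_shares=1024
virsh schedinfo vm2 --config --set cpu_shares=1024
```

The idea is that an idle VM costs nothing, so the busy VM effectively gets the whole CPU; when both are busy, the kernel scheduler splits time by the share values.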
  15. Hahaha, thanks man! That's exactly what they meant.
  16. Hey everyone! After months of testing and learning, I finally managed to have 2 of my 4 1080 Tis run in SLI. The information on how to do this has actually been online for a while, but it's a bit scattered about (at least that was my experience).
      Overview of Steps
      1) Achieve GPU passthrough
      2) Mod the Nvidia drivers to allow SLI in our VM
      3) Use Nvidia Profile Inspector to get much better performance
      Edit Update
      After utilizing a few other VM optimizations, specifically CPU pinning, my SLI performance DRASTICALLY improved. My FPS in SLI went from the mid 40s to 70+ (I used a few different benchmarks, such as the Unigine benchmarks, and also personal experience playing ESO). When I started trying to get SLI to work in Unraid, I noticed that just passing through 2 GPUs to a single VM already resulted in a very noticeable gain in performance. I am still tinkering with Nvidia Profile Inspector, so things might change. If they do, I will post an update.
      GPU passthrough
      My VM options:
      Bios: OVMF
      Machine: Q35.1
      SATA for ISO drivers and VirtIO for the primary vDisk
      Follow the instructions in this Spaceinvader One video. Afterwards, pass through your 2 GPUs and they should appear in Windows.
      Nvidia Drivers Mod
      Note: If you have any difficulties with this next part, you are better off asking for help on the DifferentSLIAuto forum thread. To my understanding, motherboard manufacturers must license the right to allow SLI on their boards from Nvidia. The reason we haven't been able to achieve SLI in Unraid is that our VM's "motherboard" info simply doesn't qualify as an Nvidia-approved motherboard for SLI. Luckily, there has been a hack available for a while that allows SLI to be enabled on not only any motherboard but also with any GPUs (i.e., the GPUs don't even need to be the same model). This is what worked for me: I used Nvidia driver version 430.86. If you use the same version, then these instructions SHOULD work for you.
      Install Nvidia Drivers
      The original method/program used by DifferentSLIAuto no longer works for the latest versions of the Nvidia drivers (driver versions 4xx and on). We have two choices: go with the old method and use an older driver, or mod newer drivers manually. The latter is what I did, and what I'll be describing:
      Download DifferentSLIAuto version 1.7.1.
      Download a hex editor (I used HxD).
      Copy nvlddmkm.sys from C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_b49751b9038af669 to your DifferentSLIAuto folder. (NOTE: if you are not using driver version 430.86, then the nvlddmkm.sys file you must modify will be located somewhere else and you must find it yourself by going to Device Manager > Display adapters > YOUR CARD.)
      Mod the copied nvlddmkm.sys file by opening it in the hex editor. Here are the changes for driver 430.86:
      Address: [OLD VALUE] [NEW VALUE]
      000000000027E86D: 84 C7
      000000000027E86E: C0 43
      000000000027E86F: 75 24
      000000000027E870: 05 00
      000000000027E871: 0F 00
      000000000027E872: BA 00
      000000000027E873: 6B 00
      Save and exit the hex editor.
      In your DifferentSLIAuto folder, right click and edit install.cmd. Replace all instances of "nv_dispi.inf_amd64_7209bde3180ef5f7" with the location of our original nvlddmkm.sys file; in our case this is "nv_dispi.inf_amd64_b49751b9038af669". The install.cmd file will modify the copy we added to the folder and replace the original one found at the location we specify here. Use this video for reference, but please note that in the video the driver version is different than ours, and they replace nv_dispi.inf_amd64_7209bde3180ef5f7 with nv_dispi.inf_amd64_9ab613610b40aa98 instead of nv_dispi.inf_amd64_b49751b9038af669.
      Move your DifferentSLIAuto folder to the root of your C:\ drive.
      Set UAC to the lowest setting (OFF) in Control Panel\All Control Panel Items\Security and Maintenance.
      Run cmd.exe as admin and enter:
      bcdedit.exe /set loadoptions DISABLE_INTEGRITY_CHECKS
      bcdedit.exe /set NOINTEGRITYCHECKS ON
      bcdedit.exe /set TESTSIGNING ON
      Restart your computer into safe mode with networking enabled (there's a video showing how to do it quickly using Shift + Restart).
      Within the DifferentSLIAuto folder located at "C:\", run install.cmd as admin. After only a few seconds the CMD window text should all be green, indicating that all is well! Open up your Nvidia Control Panel, and under 3D Settings it should now say "Configure SLI, Surround, PhysX". Click that option, then under SLI Configuration select "Maximize 3D performance", and that's it!
      Nvidia Profile Inspector
      The default settings in the Nvidia Control Panel really suck. After FINALLY getting SLI to work, I was getting only 40 FPS in SLI when I had been getting 100+ FPS prior to enabling it. I was about ready to give up when I came across Nvidia Profile Inspector! By changing a few settings with Nvidia Profile Inspector, I was finally able to get great SLI results (70 FPS). Keep in mind that I've only been changing settings in Profile Inspector for a few hours, so I'm sure there are many optimizations still to be made; hopefully we can figure it out as a community. Run Nvidia Profile Inspector. I recommend the following settings for now for the _GLOBAL_DRIVER_PROFILE (Base Profile).
      Nvidia Profile Inspector Settings:
      1 - Compatibility
      SLI compatibility bits: 0x02C00005
      SLI compatibility bits (DX10 + DX11): 0x080000F5
      5 - Common
      Power management mode: Prefer maximum performance
      Thread optimization: On
      6 - SLI
      NVIDIA predefined number of GPUs to use on SLI rendering mode: 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
      NVIDIA predefined number of GPUs to use on SLI rendering mode (on DirectX 10): 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
      NVIDIA predefined SLI mode: PLAY WITH BOTH: 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR & 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
      NVIDIA predefined SLI mode on DirectX 10: PLAY WITH BOTH: 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR & 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
      SLI rendering mode: Try: 0x00000000 SLI_RENDERING_MODE_AUTOSELECT, 0x00000002 SLI_RENDERING_MODE_FORCE_AFR, 0x00000003 SLI_RENDERING_MODE_FORCE_AFR2
      MAKE SURE TO HIT "APPLY CHANGES" IN THE TOP RIGHT-HAND CORNER.
      Next we will make some changes in Control Panel > Nvidia Control Panel.
      Nvidia Control Panel
      Manage 3D Settings > Global Settings
      Power management mode: Prefer maximum performance
      SLI rendering mode: start by leaving this alone, and then make it match your Nvidia Profile Inspector settings (so if you are trying 0x00000002 AFR, set this to "Force alternate frame rendering 1", and if you are trying 0x00000003 AFR2, set this to "Force alternate frame rendering 2").
      And that's it! Now keep in mind the settings above are far from the best, and are only a starting point for us. It is probably best to find individual game profiles for each title and go from there. I will be googling "Nvidia Profile Inspector <insert game here>" for a while and trying different settings out. Make sure you change the "NVIDIA predefined number of GPUs" settings to TWO if you change profiles, because in my experience it was defaulting to FOUR (this may be because I have 4 physical cards installed on the motherboard, so if someone else gets different results, please let me know).
      SOME CLOSING THOUGHTS
      I did some additional research, which led me to open up my motherboard manual. I discovered that in my case, my motherboard's PCIe slots change speed depending on a number of factors. For example, with a 28-lane CPU, some of my PCIe 3.0 slots (PCIE1/PCIE3/PCIE5) STOP functioning at x16 and instead run at x16/x8/x4. And if that wasn't a big enough kick in the nuts, since I have an m.2 SSD in my m.2 slot, my PCIE5 slot doesn't function at all. All in all, this was a fun adventure for me, and I really hope this information helps people who are interested in trying SLI via VMs!
  17. Hey everyone! After months of testing and learning, I finally managed to have 2 of my 4 1080 Tis run in SLI. The information on how to do this has actually been online for a while, but it's a bit scattered about (at least that was my experience).
      Overview of Steps
      1) Achieve GPU passthrough
      2) Mod the Nvidia drivers to allow SLI in our VM
      3) Use Nvidia Profile Inspector to get much better performance
      A Few Quick Notes
      When I started trying to get SLI to work in Unraid, I noticed that just passing through 2 GPUs to a single VM already resulted in a very noticeable gain in performance. I was expecting fully configured SLI to boost performance further, but for the most part I was disappointed. In most cases I found SLI to result in either a decrease in performance or a negligible improvement outside of some benchmarking programs. To get things going in the right direction I had to tinker a bit with Nvidia Profile Inspector, but all I've managed to achieve so far is breaking even with plain 2x GPU passthrough. I am still tinkering with Nvidia Profile Inspector, so things might change. If they do, I will post an update.
      GPU passthrough
      My VM options:
      Bios: OVMF
      Machine: Q35.1
      SATA for ISO drivers and VirtIO for the primary vDisk
      Follow the instructions in this Spaceinvader One video. Afterwards, pass through your 2 GPUs and they should appear in Windows.
      Nvidia Drivers Mod
      Note: If you have any difficulties with this next part, you are better off asking for help on the DifferentSLIAuto forum thread. To my understanding, motherboard manufacturers must license the right to allow SLI on their boards from Nvidia. The reason we haven't been able to achieve SLI in Unraid is that our VM's "motherboard" info simply doesn't qualify as an Nvidia-approved motherboard for SLI. Luckily, there has been a hack available for a while that allows SLI to be enabled on not only any motherboard but also with any GPUs (i.e., the GPUs don't even need to be the same model).
      This is what worked for me: I used Nvidia driver version 430.86. If you use the same version, then these instructions SHOULD work for you.
      Install Nvidia Drivers
      The original method/program used by DifferentSLIAuto no longer works for the latest versions of the Nvidia drivers (driver versions 4xx and on). We have two choices: go with the old method and use an older driver, or mod newer drivers manually. The latter is what I did, and what I'll be describing:
      Download DifferentSLIAuto version 1.7.1.
      Download a hex editor (I used HxD).
      Copy nvlddmkm.sys from C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_b49751b9038af669 to your DifferentSLIAuto folder. (NOTE: if you are not using driver version 430.86, then the nvlddmkm.sys file you must modify will be located somewhere else and you must find it yourself by going to Device Manager > Display adapters > YOUR CARD.)
      Mod the copied nvlddmkm.sys file by opening it in the hex editor. Here are the changes for driver 430.86:
      Address: [OLD VALUE] [NEW VALUE]
      000000000027E86D: 84 C7
      000000000027E86E: C0 43
      000000000027E86F: 75 24
      000000000027E870: 05 00
      000000000027E871: 0F 00
      000000000027E872: BA 00
      000000000027E873: 6B 00
      Save and exit the hex editor.
      In your DifferentSLIAuto folder, right click and edit install.cmd. Replace all instances of "nv_dispi.inf_amd64_7209bde3180ef5f7" with the location of our original nvlddmkm.sys file; in our case this is "nv_dispi.inf_amd64_b49751b9038af669". The install.cmd file will modify the copy we added to the folder and replace the original one found at the location we specify here. Use this video for reference, but please note that in the video the driver version is different than ours, and they replace nv_dispi.inf_amd64_7209bde3180ef5f7 with nv_dispi.inf_amd64_9ab613610b40aa98 instead of nv_dispi.inf_amd64_b49751b9038af669.
      Move your DifferentSLIAuto folder to the root of your C:\ drive.
      Set UAC to the lowest setting (OFF) in Control Panel\All Control Panel Items\Security and Maintenance.
      Run cmd.exe as admin and enter:
      bcdedit.exe /set loadoptions DISABLE_INTEGRITY_CHECKS
      bcdedit.exe /set NOINTEGRITYCHECKS ON
      bcdedit.exe /set TESTSIGNING ON
      Restart your computer into safe mode with networking enabled (there's a video showing how to do it quickly using Shift + Restart).
      Within the DifferentSLIAuto folder located at "C:\", run install.cmd as admin. After only a few seconds the CMD window text should all be green, indicating that all is well! Open up your Nvidia Control Panel, and under 3D Settings it should now say "Configure SLI, Surround, PhysX". Click that option, then under SLI Configuration select "Maximize 3D performance", and that's it!
      Nvidia Profile Inspector
      The default settings in the Nvidia Control Panel really suck. After FINALLY getting SLI to work, I was getting only 40 FPS in SLI when I had been getting 100+ FPS prior to enabling it. I was about ready to give up when I came across Nvidia Profile Inspector! By changing a few settings with Nvidia Profile Inspector, I was finally able to get "reasonable" SLI results. Keep in mind that I've only been changing settings in Profile Inspector for a few hours, so I'm sure there are many optimizations still to be made; hopefully we can figure it out as a community. Run Nvidia Profile Inspector. I recommend the following settings for now for the _GLOBAL_DRIVER_PROFILE (Base Profile).
      Nvidia Profile Inspector Settings:
      1 - Compatibility
      SLI compatibility bits: 0x02C00005
      SLI compatibility bits (DX10 + DX11): 0x080000F5
      5 - Common
      Power management mode: Prefer maximum performance
      Thread optimization: On
      6 - SLI
      NVIDIA predefined number of GPUs to use on SLI rendering mode: 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
      NVIDIA predefined number of GPUs to use on SLI rendering mode (on DirectX 10): 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
      NVIDIA predefined SLI mode: PLAY WITH BOTH: 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR & 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
      NVIDIA predefined SLI mode on DirectX 10: PLAY WITH BOTH: 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR & 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
      SLI rendering mode: Try: 0x00000000 SLI_RENDERING_MODE_AUTOSELECT, 0x00000002 SLI_RENDERING_MODE_FORCE_AFR, 0x00000003 SLI_RENDERING_MODE_FORCE_AFR2
      MAKE SURE TO HIT "APPLY CHANGES" IN THE TOP RIGHT-HAND CORNER.
      Next we will make some changes in Control Panel > Nvidia Control Panel.
      Nvidia Control Panel
      Manage 3D Settings > Global Settings
      Power management mode: Prefer maximum performance
      SLI rendering mode: start by leaving this alone, and then make it match your Nvidia Profile Inspector settings (so if you are trying 0x00000002 AFR, set this to "Force alternate frame rendering 1", and if you are trying 0x00000003 AFR2, set this to "Force alternate frame rendering 2").
      And that's it! Now keep in mind the settings above are far from the best, and are only a starting point for us. It is probably best to find individual game profiles for each title and go from there. I will be googling "Nvidia Profile Inspector <insert game here>" for a while and trying different settings out. Make sure you change the "NVIDIA predefined number of GPUs" settings to TWO if you change profiles, because in my experience it was defaulting to FOUR (this may be because I have 4 physical cards installed on the motherboard, so if someone else gets different results, please let me know).
      SOME CLOSING THOUGHTS
      I did some additional research, which led me to open up my motherboard manual. I discovered that in my case, my motherboard's PCIe slots change speed depending on a number of factors. For example, with a 28-lane CPU, some of my PCIe 3.0 slots (PCIE1/PCIE3/PCIE5) STOP functioning at x16 and instead run at x16/x8/x4. If that wasn't a big enough kick in the nuts, since I have an m.2 SSD in my m.2 slot, my PCIE5 slot doesn't function at all. And the cherry on top is that I just found out (and I quote this straight from the manual) that "PCIE2 (PCIe 2.0 x16 slot) is used for PCI Express x4 lane width cards."????? What does that even mean? I guess it means: we're advertising this slot as x16, but it's really just x4. All in all, this was a fun adventure for me, and I really hope this information helps people who are interested in trying SLI via VMs!
  18. How can I spin up the array and boot up my VMs from the console screen? I'm guessing this is what people refer to as 'headless'? I'm still pretty new to all this and I want to learn, but I can't find a list of console commands anywhere. Thank you.
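In case it helps later readers: starting the array itself is Unraid-specific, but once the array is up, the VM half of this is covered by stock virsh commands from the console (the VM name below is just the one from my XML; use your own):

```shell
# List every defined VM, running or not, then start one by name.
virsh list --all
virsh start "Linux Mint 20.1"
```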