Here comes the plot twist: unRAID is my first touch with Linux distros. I was inspired by the Linus Tech Tips video about running two rigs in one tower, and I decided I could try that. It also justified buying a second Gigabyte G1 980 Ti, as long as I at least tried.
Anyhow. Like I said, I had zero experience tinkering with Linux or virtual machines before, so Google and this forum have been very useful. I didn't have much time last week, as I'm searching for a project for my engineer's thesis (energy technology here) while working in my first profession.
I did, however, manage to get a stable VM and played around a bit, as in played games. I don't know what I did differently this time; maybe I just got lucky? The host froze once while I was in the Windows VM and I had to hard reset it, but the VM started successfully after the reboot with no crashes.
I have a PCIe USB 3.0 card coming, which should help and gives me hot-pluggable USB ports. I'll try to pass it through to the "primary" VM. It has two ports on the rear bracket, and I can route the case front ports through it, so four ports in total. Just enough for mouse, keyboard, a USB headset, and one left over for hot-plugging drives/sticks.
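For anyone trying the same thing: the usual way to keep the host from grabbing the card is to bind it to vfio-pci at boot. A minimal sketch, assuming an ASMedia-based card; the `1b21:1142` ID here is just a placeholder, so check your own card's vendor:device ID first:

```shell
# Find the card's vendor:device ID (look for the USB controller line):
#   lspci -nn | grep -i usb
# Then add vfio-pci.ids to the unRAID boot line in /boot/syslinux/syslinux.cfg:
append vfio-pci.ids=1b21:1142 initrd=/bzroot
```

After a reboot the controller should show up as available for passthrough in the VM template, and every port on it (including front-panel ports routed to its internal header) follows the VM.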
About PCIe slots: the Rampage IV Black Edition has four full-size PCIe slots, x16/x8/x16/x8. I had to sacrifice the first x16 for the host GPU, which is always the first GPU; there's no iGPU on LGA 2011 CPUs. The third slot (the second x16) is empty, so I have enough spacing for airflow to the upper 980 Ti.
As for disabling PCIe slots, I don't know how LGA 2011-v3 boards do it, but my board has mechanical on/off switches for this. Since the second and fourth slots are x8 only, I can't run x16/x16 SLI, but x8/x8 only costs 1-2% performance anyway.
Oh, about the host crashing. I've actually changed two things. First, I got rid of the RAID array and replaced it with a single 500 GB 850 Evo. The unRAID cache is now 3x256 GB in a pool with RAID 0 data / RAID 1 metadata. Having all drives in AHCI mode might make a difference, even though the RAID wasn't part of unRAID. Now I worry about having one SSD on the chipset controller and two on a separate controller. I might change it later so all the cache drives are on the ASMedia controller and the HDDs go on the chipset.
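For the curious, unRAID's cache pool is btrfs, and the raid0-data / raid1-metadata layout corresponds to the btrfs profile conversion below. This is just a sketch of what the layout means (unRAID sets it up for you when you build the pool); `/mnt/cache` is the usual unRAID cache mount point:

```shell
# Convert an existing multi-device btrfs pool to raid0 data / raid1 metadata.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# Verify the resulting profiles:
btrfs filesystem df /mnt/cache
```

The trade-off: raid0 data stripes for speed but any single drive failure loses the pool, while raid1 metadata at least keeps the filesystem structure duplicated across drives.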
Second change: last time I pinned cores 0-5 to the first VM, 6-9 to the second, and 10-11 to the host. Now the host gets 0-1, the first VM gets 2-7, and the second (the one I've been playing on) gets 8-11.
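For reference, the new pinning for the first VM looks roughly like this in the VM's libvirt XML (a sketch only; unRAID's VM template generates this for you when you tick the cores, and the exact surrounding XML is assumed):

```xml
<!-- First VM: 6 vCPUs pinned to host cores 2-7, leaving 0-1 free for unRAID -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
```

Keeping core 0 (and its sibling) off the VMs seems to be the common advice, since the host and its interrupt handling tend to live there; that may well be why this layout feels more stable than the old one.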