I finally had the time to try this, and a few blue screens of despair later, it WORKS! Even without applying any of the optimizations I tried earlier in this topic, the overhead that was causing the stuttering is gone. Thanks!
--
I'll describe the process I followed here in case it helps someone else.
I first tried to pass the drive through with this method: SSD Passthrough (I had to change the ata- prefix to nvme-, everything else stayed the same), but I noticed no real difference because, as you suggested, the entire controller needs to be passed through (I've put a sketch of that per-drive XML a bit further down, for reference). So I followed this method instead: NVME controller passthrough, including not stubbing the controller but instead using the hostdev XML provided in the video description, with a few differences:
1- I used MiniTool Partition Wizard to migrate the OS, selecting "copy partitions without resize" so the recovery partition wouldn't get stretched unnecessarily, and immediately afterwards I extended the C: partition, leaving about 10% as overprovisioning.
2- With the most recent Unraid version it seems the modified Clover isn't necessary: you either stub the controller in the syslinux configuration (see the sketch below) or add the hostdev to the VM XML and click update, then you specify the boot order by adding <boot order='1'/> after the source, so that it looks something like this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
  </source>
  <boot order='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
Then the device should be visible and selectable inside the GUI editor. Simply select "none" as the primary vdisk location, update again, check that the boot order is still there inside the XML, and boot the VM up.
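If you go the syslinux route instead, the stub is just the controller's vendor:device ID added to the boot line in /boot/syslinux/syslinux.cfg. A minimal sketch, assuming a Samsung NVMe controller with ID 144d:a808 (yours will differ; get the actual ID from Tools > System Devices):

label Unraid OS
  kernel /bzimage
  append vfio-pci.ids=144d:a808 initrd=/bzroot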
I had to reboot a few times: after the first blue screen telling me "no boot device" or something like that, I went into the Windows recovery options, selected the "boot recovery" option (not sure if that's the correct name because my interface isn't in English), rebooted twice more, and it worked. I just had to reinstall my Nvidia drivers again, don't ask me why.
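For reference, here's roughly what the per-drive passthrough I tried first looks like in the VM XML. This is only a sketch: the id under /dev/disk/by-id/ is made up, so list that directory and use your drive's actual entry:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_1TB_S0XXXXXXXXXXXXX'/>
  <target dev='hdc' bus='virtio'/>
</disk>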
--
With my configuration, since I wanted to pass through the same SSD that the vdisk lived on, I had to move the vdisk to another disk with Krusader and then select the new location inside the GUI editor. Don't repeat my mistake: make TWO copies on the other drive, one as a backup, because something can simply go wrong and corrupt your vdisk.
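If you'd rather do the move from the command line than Krusader, something like this works. The paths here are only examples (assuming the vdisk sits in the usual domains share), so adjust them to your shares and pools:

# copy the vdisk to its new home, plus a second copy as a backup
cp /mnt/cache/domains/Windows10/vdisk1.img /mnt/disk1/domains/Windows10/vdisk1.img
cp /mnt/cache/domains/Windows10/vdisk1.img /mnt/disk1/backups/vdisk1.img.bak
# verify all copies match before deleting the original
md5sum /mnt/cache/domains/Windows10/vdisk1.img \
       /mnt/disk1/domains/Windows10/vdisk1.img \
       /mnt/disk1/backups/vdisk1.img.bak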
--
It works with the NVMe drives, and now I want to try this method with the SATA SSDs too. The problem is that isolating the SATA controller in its own IOMMU group isn't that easy.
With the second-to-last stable BIOS for my X399 Aorus, F3g (F3j was bugged as hell), it simply isn't possible, even with the ACS override patch enabled: the SATA controllers are always grouped with something else. After updating to the latest F10 BIOS with the new AGESA, it seems to be feasible.
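To check the grouping after each BIOS or ACS override change, I run the usual sysfs loop that gets posted around (nothing Unraid-specific, it only needs lspci):

# list every IOMMU group and the PCI devices inside it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done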
The obstacle I'm trying to overcome now is figuring out which SATA controller I need to pass through without messing everything up.
I installed a plugin from Community Apps to run scripts inside the GUI, then ran this script: iommu script that I found on reddit, to try to work out which SATA controller I need to pass through.
It seems that right now every SATA drive sits behind the same SATA controller, but I'll try moving them to different connectors later.
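In case that reddit link ever dies: the same kind of mapping can be pulled straight from sysfs, since each block device's path contains the PCI address of the controller it sits behind. A minimal sketch:

# print the full sysfs path of every sdX disk; the controller's PCI
# address is the last 0000:bb:dd.f segment before the ataX/hostX part
for d in /sys/block/sd?; do
    echo "${d##*/} -> $(readlink -f "$d")"
done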
I'll keep this topic updated!