So I ran a few more tests last night and found that my original assumption was incorrect. When I started the server yesterday, I was surprised to see that one of the controllers was showing a Code 10 error again. I had previously thought this was cleared up by rebooting the hypervisor.
I then ran a series of reboot, shutdown, and restart tests using a couple of different USB flash drives on both passed-through controllers. I hit one odd situation where Windows may not have polled the flash drive yet (the drive showed as present, but opening it immediately after boot gave a "please insert a disk" message or something similar), but no other problems at all. My working theory now is that if a flash drive is attached to a passed-through controller, unraid tries to do something with it on startup, which leaves the controller in a strange state. In theory the controller should simply reset; in practice, maybe that doesn't always work.
I didn't run many hypervisor reboot tests, so I'm still not sure I've nailed it down. I would expect that adding the devices to the syslinux.cfg file under pci-stub.ids would prevent unraid from ever touching them directly, but I didn't get a chance to try that either.
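For anyone wanting to try the pci-stub approach, here's a rough sketch of what the syslinux.cfg entry would look like. The vendor:device IDs below (8086:1e31, 1b21:1042) are placeholders for illustration; you'd substitute the actual IDs of your USB controllers as reported by lspci -nn.

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci-stub.ids=8086:1e31,1b21:1042 initrd=/bzroot
```

With pci-stub.ids set, the kernel binds those devices to the stub driver at boot instead of loading the normal xhci/ehci driver, so unraid should never initialize the controller (or any flash drive plugged into it) before the VM claims it.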