Shima

  1. Wow, this worked! I tried just about everything else, and this just worked.
  2. I do believe what you are experiencing is a CPU bottleneck. The trouble with the v1 and v2 E5s is that while they support PCIe 3.0, they do not fully utilise the GPU for whatever reason. I use the E5-2687W v2, which runs at 3.4 GHz per core, and while the higher clock speed does help, the best results I had were with a slower GPU (1050 Ti). There is still stuttering, but I believe that is caused by the dual-CPU setup: because the RAM is split between the CPUs, there are additional bottlenecks whenever data is exchanged between memory assigned to different CPUs. The performance looks good if you only watch the framerate, but the experience is not as good as the max framerate suggests. For example, I see a max framerate of 90-120 fps with lows dropping to 1.2 fps, and while the lows are only in the 1-2% range, which may not seem like a lot, they are still easy to spot. Apparently it is not as bad on newer CPUs, but I bet you would get the same results running bare metal.
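On dual-socket boards like this, the usual mitigation for that cross-CPU memory traffic is to pin the VM's vCPUs and memory to a single socket in the libvirt XML. A minimal sketch (the core numbers and node IDs are illustrative, not taken from my config):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Pin every vCPU to cores on the first socket, so guest memory
       accesses never have to cross to the other CPU's RAM -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<numatune>
  <!-- Allocate guest RAM from the same NUMA node as the pinned cores -->
  <memory mode='strict' nodeset='0'/>
</numatune>
```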
  3. Just looking at your features, I see that the vendor name is set to 'none' and the hypervisor state is not set to hidden. Try something like that; it did me a world of good when I hid Hyper-V and changed the vendor name to 'whatever', seriously. Also, are you using the VNC card for the Windows installation, or the passthrough output of your GPU? It made a difference for me when I used the display connected to the GPU that I passed through.
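For reference, the libvirt features block being described looks roughly like this (a sketch; the vendor string is an arbitrary placeholder):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <!-- Spoof the hypervisor vendor string that the NVIDIA driver checks -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```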
  4. Hi. Out of curiosity, are you thinking of something like Vast.ai launched in a VM? I was thinking of running it on my Unraid server some time back, either running the processing while the VMs were down or using mining GPUs like the P104-100 with an Asus Turbo 1080 BIOS passed to the VM, but I never got to it in the end. I would be curious to see how this would work.
  5. Right, so I am not sure how to tag this as solved, but I worked it out. I am unsure exactly what helped, so let me tell you what I did step by step.

     As I only want to pass through the NVIDIA cards, I installed the AMD card in slot 2 (slot 1 is a PCIe 8x slot on this motherboard) and moved all the NVIDIA cards to slots 4, 5 and 6, leaving slot 3 empty. The reasoning was that since I have identical cards (same vendor, same model, same manufacturer), this may be somewhat confusing to the system. I am not quite sure of the logic, as my Linux and KVM knowledge is not that great, but it seemed to make a difference.

     At the same time, I recreated the VM from scratch using the OVMF BIOS (SeaBIOS would not power the card, which has no PCIe 6-pin connector; I assume it was not initiating power to the GPU through the PCIe slot or something), added a few lines to the XML, and went WITHOUT a VNC GPU. First I edited the Hyper-V features to hide the hypervisor and, following some info I found on Russian forums, added the vendor tag to the features. Then I edited the GPU BIOS and removed the header as in the SpaceInvader One video (do bear in mind I had done this previously with no effect), but I did not need it in the end, so the BIOS is stored but unused. Before even starting the VM, I added those bits to the XML as per the instructions in the SpaceInvader One videos: specifically, I added multifunction='on' and changed the sound card to slot='0x05' function='0x1'.

     I installed the OS with a keyboard passed through to the VM and a monitor connected, installed the GPU driver (old version 375.63) from a pendrive I had also passed through, and restarted the VM. This worked! I now have a VM with a dummy plug connected to the GPU and working GPU passthrough, and I can remote into it to my liking. I hope this helps someone facing the same problem.
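The multifunction edit described above pairs the GPU's video and audio functions on one virtual slot. A sketch of the relevant hostdev entries (the PCI addresses here are examples, not my exact ones):

```xml
<!-- GPU video function: placed at slot 0x05, function 0x0, multifunction on -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
<!-- GPU audio function: same virtual slot, function 0x1 -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
```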
  6. So, more updates. I tried the driver patching, but the patch gives me an error, so it will not work, and I do not have enough PowerShell skill to debug it. I used the older 375.63 drivers: same issue. Enabled Above 4G Decoding: same issue. Always Code 43. I have tried several different cards in several different slots (I have only 3 slots available per CPU, but a limitless supply of GTX 1050 Tis); I think I have tried everything already. So I guess the question is: how do I prevent my VM from being recognised as a VM by the driver itself? I really do not want to buy a Quadro GPU, as I have plenty of GTX cards lying about, but I would really like to get those 1050s working without building multiple setups, and have them virtualised.
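The way this is usually approached in libvirt (beyond hiding KVM and spoofing the Hyper-V vendor string) is to stop advertising the hypervisor CPUID bit to the guest entirely. A sketch, assuming a host-passthrough CPU model:

```xml
<cpu mode='host-passthrough' check='none'>
  <!-- Disable the CPUID hypervisor flag so the guest (and the driver)
       no longer sees that it is running under a hypervisor -->
  <feature policy='disable' name='hypervisor'/>
</cpu>
```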
  7. So, after rebooting the VM, adding an Xbox controller via USB passthrough, and connecting the monitor, the NVIDIA display went down, and it remains down with error Code 43. I'm stuck. This is the current setup. Please help.
  8. Right! I fixed it! I disabled Hyper-V, recreated the VM, moved the drivers to a flash drive that I passed through to the VM, installed the GPU driver prior to installing other devices, and it worked! Damn, this was hard. I was picking up clues across the forum, but managed to do it. I did not have to use an external custom BIOS; it worked with the one already on the card as well :). Now installing the VirtIO drivers, and I will give a reboot a go.
  9. Hi guys. I'm new here, and I have been reviewing the posts but did not find an answer that solves this. I have just started my adventure with Unraid. I tried it in the past with some minor success, but this time around I am just unable to get the GPU working in my VM. The setup is a Fujitsu Celsius R930 with dual Xeon E5-2687W v2 and 32 GB DDR3 ECC. So far I have installed 4 different GPUs in the box (3 different GTX 1050 Tis and an RX 560). The issue is that while the GPU is passed through to the VM, it is turned off by the OS (Windows 10). I have tried this on several versions of Unraid (6.6.7, 6.7, 6.8) and always hit the same issue. The VM XML is below. I have the latest driver installed; so far I have started Unraid without UEFI, modified the GPU BIOS (as per the video from SpaceInvader One), used one of his BIOS files, tried several scripts from the forum without any result, reinstalled the drivers, and started the VM as both Q35 and i440fx (different versions). I also tried assigning different CPU cores from different CPUs, etc. When I do this with my RX 560, the VM crashes once the driver is installed. The cards are single-slot Inno3D GTX 1050 Tis and an Asus Strix 560 4GB. The only thing I see here is:

     2020-01-05 11:49:47.249+0000: Domain id=1 is tainted: high-privileges
     2020-01-05 11:49:47.249+0000: Domain id=1 is tainted: host-cpu

     I will post the logs from the VM below, but no errors were seen there. Are there any other logs I can find that you guys would need to help me? Can anyone help me? Here are my IOMMU groups:
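For anyone wanting to compare, an IOMMU groups listing like the one above can be produced with a small script along these lines (a sketch; it prints only PCI addresses, which you can feed to `lspci -nns` for device names):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI addresses of the devices in it.
# Reads the standard sysfs location by default; a different base directory
# can be passed as the first argument (useful for testing).
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for devdir in "$base"/*/devices; do
        [ -d "$devdir" ] || continue
        group=$(basename "$(dirname "$devdir")")
        printf 'IOMMU group %s:\n' "$group"
        for dev in "$devdir"/*; do
            # Each entry is a PCI address like 0000:03:00.0;
            # run `lspci -nns <address>` to resolve the device name.
            printf '\t%s\n' "$(basename "$dev")"
        done
    done
}

list_iommu_groups
```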