Tauro

Members
  • Posts: 13

Tauro's Achievements

Noob (1/14), 0 Reputation

  1. My main vdisk works fine: 120 GB on an SSD for Windows 10. My secondary vdisk (700 GB) is on my array, and all my programs and games are on it, because I want a safe place where I can keep the apps and move them to another VM if one fails. Since a network share doesn't work well for this and iSCSI isn't integrated, I chose a second vdisk. Now things like this happen: Windows reports a corrupted Recycle Bin that it can't delete, and there is no access to the Uplay folder; it is corrupted, size 0 KB. If I switch to another Windows 10 VM there are no problems, but that is a SeaBIOS + i440fx machine and the boot takes 4 minutes. I've reinstalled Windows 10 on an OVMF + i440fx machine: same problems. What is the problem? Why do I have no access even as admin? There was a similar problem in the past: my first try was with a VM share, and after some things happened and I made a new VM, I couldn't access the Steam library.
  2. I have the same issue. I tried it with 4 different GPUs, each with two GPU BIOS dumps (via terminal and via GPU-Z without the NV header), but that was on the X99 platform; maybe it is a problem specific to my mainboard. Try to dump the vBIOS of your graphics card, and read carefully, because every step and clue is important!
  3. For the last three weeks I struggled with Unraid. I tested every setting that matters to me, and that boot/BIOS screen appeared several times. There are so many problems with Unraid, but I don't think there is really an alternative. If I change something randomly in the VM settings, it destroys a working VM config, because it changes the bus='0x00' or slot='0x00' IDs, and I think that changes the boot priority or something. After I reorganized it, one PCI device wasn't working, but once that was fixed it worked fine. In the past days I figured out that I only have to create a new VM and attach the existing vdisk.img to it, and you must use the same machine type and BIOS! For me that works.
  4. Maybe this could help: 1. Duplicate the VM's vdisk1.img to another folder. 2. Create a new VM. 3. Select the same machine type (i440fx-2.5) and the same BIOS (OVMF), same version, then add the duplicated image as the primary vDisk; before you create the VM, add all the other devices too. Make sure all settings are right, then you can finish and run it. I hope this works!
  5. Thank you, SpaceInvaderOne, for your script and video! Here's the original Manjaro Linux logo.
  6. OMG, after reinstalling Windows at least 7 times with different versions! I googled the GTX 970, and one guy suggested using the onboard GPU, so I thought I should test it again with a "newer" graphics card in PCIe slot 1, because the old HD 5570 doesn't show the UEFI. With the HD 6850 the UEFI showed up. The next step was to reconfigure the VM config so that there was no BIOS dump in it and it had the right GPU (GTX 970). After starting the VM I installed the VirtIO serial driver in Device Manager again, because it was a new VM. Windows immediately refreshed and it was 1080p. Reinstalling the newest Nvidia driver was no problem now. The next step is to get it working with a single graphics card, because it currently only runs at PCIe 8x. I'm trying to dump the BIOS via the terminal; maybe that will work. The trial after that is to get the HD 6850 working with a BIOS dump; maybe the GTX 970 has a specific problem. And sorry for my bad English. EDIT #1: I think there is a problem with the mainboard, because I did the dump via the terminal and both dumps work as badly as the GPU-Z export without the header. Yesterday I tested the HD 6850 and there wasn't even a signal with it as a single card. In my dad's PC there is still my old GTX 660; that is the next and last card I'll test. EDIT #2: It didn't work. The best results for a Windows 10 VM, as recommended, are with OVMF + i440fx. For Arch Linux, OVMF + Q35 worked fine. And I fixed an issue that was my own fault: the NTP/clock in my VM was wrong because I forgot to change it in Unraid.
  7. As I said in a previous post, I had already dumped it in the past; OK, I didn't mention that I deleted the header. With SeaBIOS and i440fx it says 'Booting from Hard Disk... boot failed', then it boots from DVD, then black screen, even with the one-core tip from Spaceinvader. Now I'll try SeaBIOS with Q35. EDIT: OK, the same error occurred.
  8. OK, thanks. I tested it and it works, but now the graphics are the problem. I've tried to reinstall the driver over five times: first directly from the Nvidia download, then uninstalled via Device Manager and installed Nvidia again with a reboot, then via the Windows driver search. Now I think I'll test it with SeaBIOS. 800x600 px isn't great ^^
  9. I still have default UEFI settings, now with better cooling and CSM on, and for now it's stable. The problem is that after I start the VM the boot messages appear and then a black screen, even with a new VM. Are there any suggestions about i440fx/Q35 and SeaBIOS/OVMF? And I don't have ECC; unfortunately X99 doesn't support it.
  10. Yeah, thank you for your help! I had tried it with default BIOS settings (with the virtualization options on), but the same POST code error appeared [FF]. First I looked for a short or a burnt smell, but nothing. I swapped the graphics card; it isn't that. Now I am testing the RAM with MemTest86, planned for 2 hours. Pass 1 of 4 is done and I'm at 91% of the second pass with no errors; I think I'll stop it soon? With my last overclock under Windows I never, or at least really rarely, had problems.
  11. There are no settings to prefer the GTX 970. I can only set PCIEX16_1 to GEN2 or bandwidth, and there are options for MCTP, ACS Control, DMI and ASPM Support (L1 is disabled). CSM/Fast Boot are disabled. Ah, there's a GPU Post page: I see that my GTX is running at x1 native, and I have to switch it to slot 3. After this I boot into Windows (not the VM) and dump the BIOS with GPU-Z, so I don't need a second GPU for that. I removed the NVFlash header. Now I'm testing the passthrough VM, first with Windows 10. EDIT: The existing Windows 10 VM (i440fx-3.1, OVMF) gets stuck at the TianoCore splash screen. EDIT #2: The first time after deleting all old VMs and making a new one, I see a UEFI interactive shell with something like 'Press ESC in 1 sec. to skip startup.nsh, or any other key to continue.' EDIT #3: Passthrough finally works, with SeaBIOS and Windows 10! But the next problem has appeared: the entire Unraid server crashes randomly; the last action was installing the Nvidia driver... Could this be a stability issue?
  12. OK, thank you, now I have a clue what to do next. Yes, I had put in the ATI card because it didn't work with only one GPU. So the next step is to look for the UEFI setting for the preferred GPU and to dump the GTX 970 BIOS.
  13. Hi guys, I've been trying for several days to run VMs with GPU passthrough, but it doesn't work... I'm running Unraid 6.7.0 in UEFI mode. My hardware is an Asus X99-A/USB 3.1 with an [email protected]. PCIe slot #1 holds an ATI Radeon HD 5570, I think (it's really old), and PCIe slot #2 holds an NVIDIA GALAX GTX 970 EXOC. I've seen, I think, all the videos about GPU passthrough on YouTube: all the amazing vids from Spaceinvader One, Linus Tech Tips and finally even b00bytrap, thanks for that. I've tried with and without a GPU BIOS dump file; by the way, the GPU BIOS is from TechPowerUp, from GALAX, also known as KFA2 (same size and same company). I've tried PCIe ACS override set to Multi and to off, the ACS settings in the UEFI, and installing the OS with VNC first and switching to GPU passthrough afterwards. If I pass through the GTX 970 there's always a black screen or a VM logo. And I don't know why, but as far as I can tell the HD 5570 is in slot #1, yet when I reboot the server there is only output on the GTX 970, wth? I tested Manjaro with the HD 5570 and there was a boot menu where I could boot into the live desktop, but after loading it gets stuck at the last step; I searched the web for that issue, but nothing works. My goal is to use my main PC as both a NAS and a PC. I want to use GNU/Linux Manjaro, and for things that don't work there I'll use Windows 10. Installation always works with VNC. The last thing I tested: I installed Windows 10 with VNC, activated RDP, passed through the GTX 970, installed the Nvidia driver, restarted the VM, and the whole Unraid server froze. I hope you guys can help me! Thanks!
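Post 3's observation about bus='0x00' and slot='0x00' refers to the PCI address elements in the VM's libvirt domain XML. As a hedged illustration only (the file path and address values here are hypothetical; your XML will differ), an entry looks roughly like this:

```xml
<!-- Example only: a vdisk's PCI address in the libvirt domain XML.
     If the VM manager regenerates these bus/slot values when you edit
     the VM, the guest can see its devices in a different order. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
```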
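The "PCIe ACS override: Multi" setting from post 13 corresponds to a kernel boot parameter, which on Unraid ends up on the append line of syslinux.cfg on the flash drive. A hedged illustration only; the rest of the append line is whatever your flash drive already has, and the exact value depends on which override mode you pick (downstream, multifunction, or both):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction initrd=/bzroot
```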
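The duplication steps in post 4 come down to copying the image and pointing a new VM at it. A minimal sketch; on Unraid the real paths would be under /mnt/user/domains/, but the demo uses temp directories as stand-ins so it can run anywhere:

```shell
# Demo of "duplicate the vdisk, then attach it to a new VM" (post 4).
# Temp dirs stand in for /mnt/user/domains/<vm name>/ paths.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Stand-in for the existing vdisk1.img: a 1 GiB sparse file.
truncate -s 1G "$SRC/vdisk1.img"

# --sparse=always keeps the copy as small on disk as the original image.
cp --sparse=always "$SRC/vdisk1.img" "$DST/vdisk1.img"

# Show the allocated (not apparent) size of the copy.
ls -s "$DST/vdisk1.img"
```

After the copy, create the new VM with the same machine type (e.g. i440fx-2.5) and the same BIOS (OVMF), then set its primary vDisk to the duplicated image.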
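The "dump via terminal" method mentioned in posts 2 and 6 usually means reading the card's ROM through sysfs. A minimal sketch, assuming a Linux host with root access; the PCI address and output path in the example call are hypothetical, and the card must not be in use by a VM while you dump it:

```shell
# Sketch of a vBIOS dump through sysfs (run as root on the Unraid host).
# $1 = PCI address of the GPU (find it with: lspci -nn | grep -i vga)
# $2 = output file for the ROM image
dump_vbios() {
    dev="/sys/bus/pci/devices/$1"
    echo 1 > "$dev/rom"      # make the ROM readable
    cat "$dev/rom" > "$2"    # copy the video BIOS out
    echo 0 > "$dev/rom"      # switch it back off
}

# Hypothetical example call; do not run while a VM owns the card:
# dump_vbios 0000:02:00.0 /boot/vbios/gtx970.rom
```

As the posts note, dumping the primary (boot) GPU this way often fails or produces a bad image, which is why a second card in slot 1 helps.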
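Removing the NVFlash header (posts 2, 7 and 11) comes down to cutting everything before the first 0x55AA option-ROM signature. A sketch of that idea, run here on a tiny fake dump so it is self-contained; on a real GPU-Z export you would point it at your .rom file (GNU grep with PCRE support is assumed):

```shell
# Build a tiny fake dump: 16 bytes of stand-in header, then the
# 0x55AA signature that a valid option ROM starts with.
printf 'FAKE-NV-HEADER!!' > dump.rom
printf '\x55\xAAVIDEO' >> dump.rom

# Byte offset of the first 0x55AA signature (GNU grep, PCRE mode).
OFF=$(grep -aboP '\x55\xAA' dump.rom | head -n1 | cut -d: -f1)

# Keep everything from the signature onward.
dd if=dump.rom of=vbios.rom bs=1 skip="$OFF" 2>/dev/null
```

In practice you would still verify the result in a hex editor, as the SpaceInvader One video shows; this sketch only illustrates the cut.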