Nebrius

Members
  • Posts: 42
  • Joined
  • Last visited

Everything posted by Nebrius

  1. Hello saber1, thank you very much! It worked great for me too!
  2. In Windows Disk Management I now see an additional 30 GB shown as "Unallocated". But when I right-click on drive C, the option "Extend Volume..." is greyed out. Is there still a way to extend "C" specifically? Many thanks, Nebrius
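For context: the usual reason "Extend Volume..." is greyed out is that the unallocated space is not directly adjacent to the C: partition (often a small recovery partition sits between them). If the space is adjacent, a hedged sketch of the standard fix uses diskpart from an elevated Windows prompt — note these are Windows console commands, not a Unix shell, and the disk/partition numbers below are examples that must be checked with `list disk` / `list partition` first:

```text
diskpart
list disk
select disk 0
list partition
select partition 3
extend
```

If a recovery partition does sit between C: and the free space, Disk Management cannot extend C: directly; the blocking partition would have to be moved or removed first (third-party partition tools can do this).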
  3. Unraid now shows the capacity as 100 GB, but the allocation still says 70 GB. Inside the Windows VM I also still have only 70 GB. How can I bring the allocation up to 100 GB as well, so that 100 GB is available inside the VM? Many thanks, Nebrius
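For reference, growing the vdisk on the Unraid side just means extending the image file, since the vdisk here is raw (see the fdisk output elsewhere in this thread). A minimal sketch with example paths (not the actual domain path) — the VM must be shut down first:

```shell
# Grow a raw vdisk image. /tmp/vdisk1.img is a stand-in path,
# not the real vdisk under /mnt/user/domains/.
IMG=/tmp/vdisk1.img

truncate -s 70G "$IMG"     # stand-in for the existing 70 GiB vdisk
truncate -s 100G "$IMG"    # grow it to 100 GiB
# equivalent: qemu-img resize -f raw "$IMG" 100G

stat -c %s "$IMG"          # prints the new size in bytes
```

Windows will still show a 70 GB partition afterwards: the extra space only appears as unallocated until the partition is extended inside the guest.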
  4. Ahhhh, cool! How blind can you be? Thanks a lot!
  5. Hello everyone, I set up a dedicated Windows 10 VM for CAD applications. Now I notice that the disk size I chose (70 GB) is not enough for my purposes, and I am looking for a way to enlarge the existing VM to, say, 100 GB. Unfortunately I could not find a matching post here in the forum. Is what I am planning possible at all? How would I go about it? I would really like to avoid a complete reinstallation. Many thanks and best regards, Nebrius
  6. Hello alturismo and everyone reading along, I "strayed" into the English-language part of the forum and was wonderfully supported there. After the detailed back and forth I now also understand what you, alturismo, were trying to tell me; I just didn't get it at first... My problem is solved and I have learned a lot again! 😍 Unable to pass-through graphics board to Win10-VM Best regards, Nebrius
  7. Thank you very much for the wonderful explanation, now even I understand it. With this knowledge my graphics card now also runs in the desired slot. 😍 How can I show my appreciation?
  8. Ok, thank you. I want to try whether the GPU also works in the former PCIe slot. For that I have to adjust the bus addresses in the XML file. As I understand it, this information lives in the <hostdev></hostdev> part of the XML, specifically in the <address type='pci' domain='0x0000' bus='0x0X' ....> tag, is that right? So I would only have to change the number after "bus"? The current working XML (2nd PCIe slot) looks like this:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/Geforce_RTX2070.rom'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
</hostdev>

Based on our conversation here, I adjusted the XML for working in the 1st PCIe slot:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/Geforce_RTX2070.rom'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
</hostdev>

Can I give it a try? Am I on the right track?
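One detail worth hedging here: in the working XML above, all four guest-side addresses of the card share a single bus (bus='0x05'), as functions 0x0–0x3 with multifunction='on' on function 0. Splitting the guest-side addresses across several buses generally does not form one multifunction device. A small sketch of rewriting the guest-side bus consistently with sed — the file name and the target bus 0x09 are examples, and the guest-side lines are the ones carrying type='pci' (the host-side <source> addresses stay untouched):

```shell
# Rewrite only the guest-side bus of a multifunction passthrough
# device; host-side source addresses (no type='pci') are left alone.
XML=/tmp/hostdev-snippet.xml
cat > "$XML" <<'EOF'
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
<address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
EOF

# move every guest-side address from bus 0x05 to bus 0x09 in one go
sed -i "s/type='pci' domain='0x0000' bus='0x05'/type='pci' domain='0x0000' bus='0x09'/" "$XML"
cat "$XML"
```

The same substitution applied to all four <hostdev> entries keeps the functions together on whatever guest bus is chosen.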
  9. Hello ghost82, thank you very much, I was able to start the VM and install the NVIDIA driver! But I still have some questions. You said: When I re-insert isolcpus=2-15 it still works; when I put pci=nommconf back into syslinux, the VM again gives no signal. What does pci=nommconf do? What do I need it for, or can I leave it out?
  10. Can I put the graphics card back in the first slot? PCIEX16_2 shares bandwidth with two other slots.
  11. Ok, thank you!!! I'll try copying, that should be faster. Which XML should I give a try?
  12. Sorry, you answered faster than me...

root@tiger:~# fdisk -l "/mnt/user/domains/Windows 10 Test/vdisk1.img"
Disk /mnt/user/domains/Windows 10 Test/vdisk1.img: 70 GiB, 75161927680 bytes, 146800640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@tiger:~#
  13. With the change to SeaBIOS I again see nothing (no signal).
  14. Ok, it still looks the same... I installed Windows with OVMF in the VM template, the machine type is Q35-5.1 and the vDisk bus is VirtIO. My other Win10 VM, which uses the same vDisk but VNC as its graphics card, is still working.
  15. Hey, when I start the VM now, I see the message below on the monitor connected via DisplayPort: my opnsense has not assigned an address.
  16. Ok, still the same: no TianoCore logo, just no signal. tiger-diagnostics-20220201-1718.zip
  17. Thank you again! Still no remote connection and no signal on the monitor. tiger-diagnostics-20220201-1707.zip What's next, a new motherboard?
  18. No, it says:

VM creation error
XML error: Attempted double use of PCI Address 0000:04:00.0

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10 Test</name>
  <uuid>2edc8daf-caf0-4d2e-70ea-c7ab8cb5bac1</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='14'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/2edc8daf-caf0-4d2e-70ea-c7ab8cb5bac1_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Test/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xf'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='10' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:00:05:7f'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/Geforce_RTX2070.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>
  19. Thank you for your quick response! 1. to 3. work fine. 4. done. 5. I dumped it with the script from Spaceinvader One; there is no NVIDIA header (it starts with U²). 6. Still no video output. 7. No Remote Desktop connection possible... 10. I physically changed the PCIe slot of the GPU and tried to start the VM again as you described. But then I get an error: Execution error: Device 0000:01:00.2 not found: could not access /sys/bus/pci/devices/0000:01:00.2/config: No such file or directory. Here are the diagnostics: tiger-diagnostics-20220201-1506.zip Optional: I don't believe in god, so that's not my way
  20. Hello all, I have set up a Windows 10 VM in Unraid. It seems to run without any problems via VNC. But when I try to pass a second graphics card through (GeForce RTX 2070), I get the message "guest has not initialized the display (yet)" via VNC. I have also connected a monitor to the graphics card, via HDMI as well as via DisplayPort. On both monitors I get the message "no signal".

I have set up the iGPU in the mainboard UEFI BIOS as the primary graphics card because I do not want to run Unraid headless and want to use the iGPU for that purpose (maybe important to know). HVM and IOMMU are activated. In the VM Manager, "PCIe ACS override" is set to "Both" and "VFIO allow unsafe interrupts" is set to Yes. I activated the ACS override because it separates the IOMMU groups better. I have bound IOMMU groups 13, 14, 15 and 16 (all belonging to the RTX 2070) to VFIO via "System devices". However, I have already tried the different setting options (Disabled, Downstream, Multi-function, Both, each in combination with "VFIO allow unsafe interrupts" Yes/No). In all cases the connected monitor says "no signal".

If the graphics card is displayed under Tools -> System Devices, can I assume that it was correctly recognised by Unraid, or do I perhaps need to change something in the UEFI BIOS? I have assumed so far that the VM's output should appear in parallel via VNC and on the monitor attached to the graphics card? If I specify only the GeForce RTX 2070 as the graphics card in the VM, I also have no signal on the monitor. I have read out the ROM of the graphics card (vbios, 130.6 kb) with the help of a script from Spaceinvader One. VIDEO GUIDE - How to Easily Dump a vBIOS from any GPU directly from the Server for passthrough

I have already tried everything I can think of and I am not getting anywhere. Maybe it is not possible to pass the graphics card through to a Windows 10 VM with my hardware combination (GeForce RTX 2070, Asus ROG STRIX B460-H Gaming)? Can anyone give me tips on how to narrow down my problem in a structured way? For a beginner there are an unmanageable number of knobs to turn, but I feel like I have already tried them all. Here is the "diagnostics" file, if needed: tiger-diagnostics-20220201-1321.zip Someone gave me the hint to change something manually in the XML view of the VM, but I could not cope with it. Many thanks and best regards, Nebrius

P.S.: I have already posted my problem in the German part of the forum, but hope to reach a larger number of experts here.
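Checking how the host actually groups the devices can be done directly from the Unraid shell. A small sketch — the function name is made up for this example, and on a machine without IOMMU enabled the listing is simply empty:

```shell
# Print every PCI device per IOMMU group, e.g. to confirm that the
# RTX 2070's functions (groups 13-16 above) really are isolated.
list_iommu_groups() {
  # the sysfs root can be overridden so the function is testable
  # against a fake tree as well
  local root="${1:-/sys/kernel/iommu_groups}"
  local dev
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue
    # group number is the directory two levels above the device link
    printf 'group %s: %s\n' "$(basename "${dev%/devices/*}")" "$(basename "$dev")"
  done | sort
}

list_iommu_groups
```

If a GPU function shows up in a group together with unrelated devices, that group cannot be passed through cleanly without the ACS override.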
  21. With the help of a script from Spaceinvader One I was now able to read out the vbios (130.6 kb). Unfortunately, the monitor still stays black (no signal)... VIDEO GUIDE - How to Easily Dump a vBIOS from any GPU directly from the Server for passthrough
  22. Now I have additionally entered vendor_id value='2D76A8B352F1' in the XML view and added <kvm> <hidden state='on'/> </kvm> <ioapic driver='kvm'/> . Unfortunately also without success. I am at a loss...
  23. Hello alturismo, thank you very much for your answer! ACS override is on because I think I saw it done that way in various YouTube videos... But I have also already tried the different setting options (Disabled, Downstream, Multi-function, Both, each in combination with "VFIO allow unsafe interrupts" Yes/No). In all cases the connected monitor says "no signal". If the graphics card is displayed under Tools -> System Devices, can I assume that it was correctly recognised by Unraid, or do I perhaps need to change something in the UEFI BIOS? I have assumed so far that the VM's output should appear in parallel via VNC and on the monitor attached to the graphics card? You write "adjust the slot for audio". Where can I find the correct slot (bus)? Maybe I have a problem with the ROM of the graphics card? I downloaded various ROMs from TechPowerUp, adjusted them (deleted everything up to U²) and tried them out; after all, I cannot start Windows with the card to read out the card's BIOS... Best regards, Nebrius
  24. Hello everyone, I have set up a Windows 10 VM in Unraid. Via VNC it seems to run without problems. But when I then try to pass a second graphics card through (GeForce RTX 2070, or alternatively an Asus GT710), I get the message "guest has not initialized the display (yet)" via VNC. I have already tried everything I can think of and am not getting anywhere... I set up the iGPU in the BIOS as the primary graphics card because I do not want to run Unraid headless and want to use the iGPU for that. HVM and IOMMU are activated. In the VM Manager, "PCIe ACS override" is set to Both and "VFIO allow unsafe interrupts" to Yes. I have bound IOMMU groups 13, 14, 15 and 16 (all belonging to the RTX 2070) to VFIO via "System devices". Here is the "diagnostics" file, if needed: tiger-diagnostics-20220130-1907.zip I probably don't need the NVIDIA drivers, since I "only" want to pass the card through to VMs? I would be happy about any approaches to solving my problem! Best regards and thanks in advance, Nebrius