Posts posted by bastl

  1. 2 hours ago, ChrisV said:

    1. Power-hungry is relative: my 2200G has a TDP of 65W, exactly that of the potential Ryzen 1700. At idle the two shouldn't be too far apart, should they? --> That would mean my mobo stays usable too, since it's first-gen Ryzen.

    I'd almost advise against the first generation. Sure, you can get those chips dirt cheap, but for the same money you can already get the second generation, which offers better RAM support and is more power-efficient while delivering more performance. All Ryzen CPUs benefit from faster RAM, and for me that pretty much rules out the first generation, since it had a lot of problems there. And with a BIOS update your board even supports the third generation, which added quite a bit more on top. Once the 5000 series becomes available, prices for the 3xxx parts will drop quickly too. I wouldn't rush into buying a 1700. Especially in games, the difference between 1st and 2nd gen, and then again to the 3rd, is substantial.

     

    2 hours ago, ChrisV said:

    that could be tested beforehand, since my wife has one of those 1030s in her machine.

    That thought only occurred to me after I had already submitted my post; maybe you could borrow a card from somewhere to test with.

     

    2 hours ago, ChrisV said:

    My question about long-term experience is still unanswered.

    I've been using VMs on Unraid as my daily driver for 3 years now. I started with a TR4 1950X for over 2 years and have been on a 3960X for the last six months. Granted, that's not a regular Ryzen, but the chiplets, the RAM behavior and the Unraid support are practically identical. The 1950X had just as many RAM problems at launch as the 1000 series did, if not more, because of quad channel and more components on the board. After the first month of fiddling around and exchanging ideas with the forum here, everything ran fine: 2 VMs, each with its own GPU and USB controller plus a passed-through NVMe. The 2000 series was no reason for me to upgrade, despite more available cores and better support. Unfortunately AMD dropped support for 3000-series Threadrippers on TR4, so I ended up moving to a new platform. What can I say: basically everything ran out of the box from day one, and virtualization support has improved yet again. Splitting individual devices into separate IOMMU groups is now possible without an extra patch. Many users have reported the same improvement on 3rd-gen Ryzen. Passing devices through to VMs is now much easier than it was on the first generation. The only problems I had in all that time came from me repeatedly testing Unraid's RC versions. I reported the problems reproducibly here in the forum and, bam, the next stable release included a fix. Oh, and a Windows update once broke a driver for a passed-through USB controller, but that would have happened without Unraid too. Windows being Windows 😬

     

    One more general note about your board: unfortunately you only have ONE slot for a GPU, plus 2 PCIe x1, 1 NVMe and 4 SATA. There's basically no room for expansion. Trust me, once you've gotten a taste of everything you can do with Unraid (Docker, VMs, data hoarding), you'll very quickly want/need another slot for an HBA, more NVMe, or additional SATA ports for even more drives.

     

    The point that has come up a few times, "a dedicated GPU for Unraid itself", is basically irrelevant. Unraid is designed to run headless. The only case where it makes sense is if you run a Plex Docker and need an extra GPU for video decoding, and even then it's questionable whether everyone needs that. Who streams 5-10 videos in parallel in one household? Limetech itself says that running Unraid with a GPU to display a GUI only makes sense for troubleshooting and isn't really recommended. And even that works: if I pick the corresponding boot entry, Unraid starts directly on the first GPU and shows me a desktop with a browser. But honestly, you don't need it. I haven't accessed it once in 3 years. Besides, it renders at an extremely low resolution and is a pain to use. Opening the GUI in a browser over the network, or from inside a VM, is far more practical.

     

  2. From an energy-saving perspective, merging the two makes little sense in your case, I think, if you want to install a more power-hungry CPU at the same time. As a NAS your server would keep running around the clock and would therefore draw more power.

     

    1 hour ago, ChrisV said:

    1. What about passthrough of the Ryzen's Vega 8? Does that work, or do I absolutely need a dedicated graphics card? Casual gaming should remain possible (various Anno titles and CS:GO with my wife) - maybe a GT1030 or GTX1050... just to give you a sense of my requirements.

    As far as I know, passthrough of AMD's integrated Vega isn't possible, though I'm not 100% sure. Dedicated Vega cards alone cause enough problems; Nvidia is definitely preferable. A dedicated GPU would definitely raise power consumption, which you should keep in mind if the box runs all day. Your 400W PSU should be sufficient for an upgrade as long as you don't overdo it on the GPU side. A 1050 Ti should work without problems.

     

    For a CPU upgrade, though, make sure your board actually supports the CPU. The 3000 series should still be fine; for the 5000 series the question is whether the board will get another BIOS update. Some B450 boards are supposed to get support early next year, it's just not clear which boards exactly. The first two Ryzen generations had their share of problems with Unraid, especially with PCIe passthrough, and RAM support wasn't optimal either. You should read up on that as well.

     

    Apart from that, running a VM with a dedicated GPU should be possible. I have a 1080 Ti and a 1050 Ti each passed through to its own VM, although on a Threadripper 3960X. TRX40 is pretty easy when it comes to passthrough: most devices are cleanly separated into individual IOMMU groups, which is the prerequisite for passing a device through to a VM. B450 is more limited in PCIe lanes, and depending on how the PCIe slots are populated, your GPU may not be separated into its own IOMMU group but lumped together with other devices, which makes passthrough nearly impossible. Your current B450 has only one slot for a GPU, so the troubleshooting option of trying a different slot is practically unavailable to you; on some configurations passthrough doesn't work in the first PCIe slot.

     

    Unfortunately nobody can or will give you a 100% guarantee that everything will work the way you want it to.

     

  3. 5 hours ago, SteelCityColt said:

    Where am I going wrong?! 

    As I said earlier, I also have issues similar to yours when trying to pass through the onboard audio controller. Unraid will kinda freeze and I have to reset the server. That was the first issue I stumbled across when I got my TRX40. The sound card issues are fixed for me since I noticed the sound card itself is split into 2 USB audio devices, which I'm able to pass through to different VMs and run both at the same time.

     

    GPU passthrough never was an issue for me. 2 GPUs for 2 VMs, rock stable from day one.

     

    Onboard USB controller passthrough showed similar issues, with Unraid becoming unresponsive - but not with all controllers on my board. I think one of the two Starship ones caused the issues for me. The 3.1 controller passes through fine and is the only one I use and need. The FLR warning/error produced by the other controller will be fixed in newer Linux kernels. Maybe check out the Unraid 6.9 beta and test it, or use an FLR-patched kernel. I think there is a custom kernel in one of the Unraid forum threads which includes the FLR patch and the Vega patch.

  4. 16 hours ago, Vaggeto said:

    That makes sense on the audio. Do you mind sharing the code to prevent unRAID from using the audio controller that you are checking as a USB audio device? (Or is it the xen-pciback.hide= code?)

    The PCI IDs in my syslinux.config are for the audio controller itself, not for the USB audio device.


     

    1022:1487 = onboard audio controller


    1b21:2142 = onboard USB 3.1 controller

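    For context, this is roughly what those stub entries look like in an Unraid syslinux.cfg; a sketch only, your label and initrd line may differ:

        label Unraid OS
          menu default
          kernel /bzimage
          append vfio-pci.ids=1022:1487,1b21:2142 initrd=/bzroot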

     

    16 hours ago, Vaggeto said:

    For whatever reason when I try to pass my controllers, the VM won't load. They are in their own IOMMU groups and everything. I posted here about it.

    I do have one additional controller on my motherboard, but it is the only controller with USB2 ports so I don't want to pass it through even if I could since I'd prefer for my unRAID USB flash drive to be on a USB2 port.

    I have a kinda similar issue with one of the other USB controllers and an extra PCIe card, like you. The FLR issue should be fixed in newer kernels. Unraid 6.9 should come with a fix for this.

  5. 21 hours ago, Vaggeto said:

    So are you considering isolating these sound cards and passing them through as "Other PCI Devices" the way you normally would a sound card, or just using the standard "USB Devices" checkbox in the VM config to pass them through?

    I only have the checkbox set for one of the USB audio devices in my VM. The "Starship/Matisse HD Audio Controller" itself I don't use at all, but I have an entry for it in the syslinux config to prevent Unraid from using it, just in case. I've had no issues with this so far; everything is working fine.

     

    21 hours ago, Vaggeto said:

    Now, passing a USB controller has been extremely difficult. Can you describe how you were able to pass through the USB controller? (Or are you just passing through individual USB devices?) I can pass through individual devices, but not the controller, which would be greatly preferred for hotswapping etc.

    For my main VM I also have an entry for one of the USB controllers in my syslinux config. In my case it's an "ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller" on the Aorus Xtreme. A multi-port USB hub from Anker with a card reader and a couple of USB ports sits on my desk and is connected to the ASMedia controller.

     

    The following entry is from my XML for that controller. The bus address might be different for you. Keep in mind, it only works if the controller is in its own IOMMU group, separated from other devices.

     


        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x46' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev3'/>
          <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
        </hostdev>

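    If you want to verify the grouping first, a one-liner over the standard sysfs paths (not Unraid-specific) lists every PCI device by IOMMU group:

        # print each PCI device together with the IOMMU group it belongs to
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d%/devices/*}
            echo "group ${g##*/}: $(lspci -nns ${d##*/})"
        done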

  6. 7 minutes ago, Jaster said:

    Should I create a subvol there or can I use "the whole thing"?

    A subvol is a requirement for snapshots to work. Just formatting a drive with BTRFS is not enough. I think there are a couple of things you're not understanding correctly about BTRFS and its features. Maybe I wasn't clear enough.

     

    Source and target both have to be formatted with BTRFS.

    Snapshots are differences between 2 subvolumes.

    A subvolume is presented to a user as a share/folder.

     

    A small piece of advice: please read JorgeB's full post again for a better understanding of how this BTRFS feature works and what the differences between reflinks and snapshots are.
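    As a rough illustration of the difference (hypothetical paths, both on the same BTRFS filesystem):

        # reflink: copies a single file, sharing its extents until one side is modified
        cp --reflink=always /mnt/cache/VMs/vdisk1.img /mnt/cache/VMs/vdisk1_clone.img

        # snapshot: an instant copy of a whole subvolume, sharing all of its extents
        btrfs subvolume snapshot /mnt/cache/VMs /mnt/cache/VMs_snap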

     

     

  7. 12 hours ago, Jaster said:

    Should I still use the domains share or should I use custom share for the used images?

    How would/should I set it up?

    I'm not sure what Unraid will change in the future, so you'd better not touch the default "domains" share. In the advanced settings of the VM manager you can point it to a new path for your VMs. This is how I did it.

     

    example from my setup for my main "VM share":

    btrfs subvolume create /mnt/cache/VMs

    This is the first step if you want to move ahead with BTRFS snapshots. It creates the subvolume for your VMs, in my case on the cache. At the same time a user share with the same name is created; you can find it under Shares next to all the shares you already have, and you can configure it like any other share. Simply move your VMs to this new share, adjust the paths in the XML, and that's it. The next step is to create a read-only snapshot (see a couple of posts above), in my case "/mnt/cache/VMs_backup", which also creates a new share on the cache and serves as the base for future snapshots. Anything that changes in "/mnt/cache/VMs" compared to "/mnt/cache/VMs_backup" will be included in a new snapshot.
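    That read-only base snapshot is a single command (the same one used in a later post below):

        # create the read-only snapshot all future incremental sends are based on
        btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup
        sync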

     

    12 hours ago, Jaster said:

    I'm wondering if I need to set this up file by file or if I can script it somehow on a folder/share basis...?

    Each BTRFS subvolume is handled as its own share; as soon as you create a subvolume, a user share is created by Unraid. Technically you can create a subvol for each VM, but then you need an extra script for each VM too, and with a couple of VMs that can get messy. It's easier to have, say, a "productive VMs" subvolume holding all the important VMs you regularly want to back up with 1-2 scripts, and a second path for test VMs in case you play with VMs a lot. In my case, as I said earlier, my main 5 VMs sit on the cache drive and another unassigned 500GB SSD hosts a couple of VMs I use for testing only. When creating a new VM I only have to adjust the path of the vdisk to include it in or exclude it from the snapshots, and that's it.

     

    13 hours ago, Jaster said:

    Create a script that creates a new "root backup" every Sunday and creates increments from those on a daily basis.

    Keep in mind, each initial snapshot needs the same amount of space on the target as all the source VMs you want to back up. Let's say you have 5 VMs, each with 100GB allocated, that you want to back up. With your idea you'd have to transfer 500GB of data every Sunday, plus the changes during the week. The changes might not be that big, but with just the Sunday backups kept over 5 weeks you'd need 2500GB on your RAID 10 SSD cache alone. That way you'll wear out your SSDs really fast. Better to use a spinner as the target.
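    To keep an eye on how much of the pool the accumulating snapshots actually consume, BTRFS has a built-in report:

        # shows allocated vs. free space on the pool
        btrfs filesystem usage /mnt/cache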

  8. 19 hours ago, Masterwishx said:

    My Windows VM works when booting for the first time after Unraid starts, but if I close it I need to reboot Unraid for the VM to work again.

    Maybe someone can help?

    Do you have any devices passed through to the VM, such as a GPU, USB or network controller? Some devices can't reset correctly when the VM is shut down or rebooted, and only a server restart helps. Newer AMD GPUs, for example, are the most commonly affected.

  9. 2 hours ago, Jaster said:

    I'm planning a single NVMe drive for the VMs and using the cache (BTRFS RAID) as the backup location.

    Keep in mind, the only requirement for the snapshot feature is that both source and target are formatted BTRFS. You can use the cache drive as the target, sure, but keep in mind you can't simply copy those backups from there to, say, the XFS-formatted array; Mover won't work if that's what you're planning. The cache isn't the best solution for this. Imagine your backups filling up the cache drive until it's full, preventing Docker from working or causing issues when transferring files over your network to a share that uses that cache. Once the Unraid 6.9 build with multi cache pool support is released, it might be a good option to have a second pool used only for backups, for example.

     

    2 hours ago, Jaster said:

    If I'd like to copy a specific snapshot, could I just "copy" (e.g. cp or krusader) or do I need to do some kind of restore?

    To restore, you simply copy the files you need from a snapshot to wherever you want them - overwriting a broken vdisk, for example, or copying to another unassigned device for tests with a new VM. Whether you use cp or Krusader is up to you; both will work. A restore is basically a simple file copy.
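    For example (hypothetical snapshot and VM names), restoring a vdisk is nothing more than:

        # copy the vdisk out of a snapshot back over the broken one
        cp /mnt/disks/VMs_backup_hdd/VMs_backup_offline_20201101/Win10/vdisk1.img /mnt/cache/VMs/Win10/vdisk1.img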

     

    2 hours ago, Jaster said:

    Are the deltas created from the initial snapshot or from the previous?

    The snapshots are differential and all based on the initial read-only snapshot. You can delete all snapshots between the first one and your latest one, or keep some in between; it doesn't matter. What's essential is that you keep the first one. Over time each snapshot will use more and more space, because the changes relative to the first one keep growing. At some point you'll have to recreate a fresh, up-to-date initial read-only snapshot.
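    That parent relationship is expressed with the -p flag on btrfs send; a sketch with hypothetical names:

        # only the data that differs from the parent snapshot is transferred
        btrfs send -p /mnt/cache/VMs_backup /mnt/cache/VMs_snap_20201101 | btrfs receive /mnt/disks/VMs_backup_hdd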

  10. @Jaster Sorry, I linked you the wrong thread. Here is the one I use for my snapshots. Method 2 is what I use.

    I take the initial snapshot by hand every 2-3 months. For that snapshot I shut down all my VMs so they're in a safe shutdown state.

    1. Create a read-only snapshot of my VMs share. This share isn't the default "domains" share created by Unraid; it's already a BTRFS subvol on my cache, created as described in the thread from JorgeB, and it hosts all my VMs.

    # create readonly snapshot
    btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup
    sync

     

    2. Send/receive the initial snapshot copy to the target drive mounted at "VMs_backup_hdd". This process will take some time, since all my vdisks get transferred.

    btrfs send /mnt/cache/VMs_backup | btrfs receive /mnt/disks/VMs_backup_hdd
    sync

    3. After that I have 2 scripts running. The first script runs every Sunday; it checks whether VMs are running, shuts them down if so, and takes a snapshot named "VMs_backup_offline_" with the current date appended.

    #!/bin/bash
    #backgroundOnly=false
    #arrayStarted=true
    # snapshots live directly under /mnt/cache, next to the VMs subvolume;
    # assumes offline snapshots from previous runs already exist
    cd /mnt/cache
    sd=$(echo VMs_backup_off* | awk '{print $1}')   # oldest offline snapshot
    ps=$(echo VMs_backup_off* | awk '{print $2}')   # previous offline snapshot

    if [ "$ps" == "VMs_backup_offline_$(date '+%Y%m%d')" ]
    then
        echo "There's already a snapshot from today"
    else
        for i in `virsh list | grep running | awk '{print $2}'`; do virsh shutdown $i; done

        # Wait until all domains are shut down or the timeout has been reached.
        END_TIME=$(date -d "300 seconds" +%s)

        while [ $(date +%s) -lt $END_TIME ]; do
            # Break while loop when no domains are left.
            test -z "`virsh list | grep running | awk '{print $2}'`" && break
            # Wait a little, we don't want to DoS libvirt.
            sleep 1
        done
        echo "shutdown completed"
        virsh list | grep running | awk '{print $2}'
        btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d')
        for i in `virsh list --all --autostart|awk '{print $2}'|grep -v Name`; do virsh start $i; done
        sync
        # incremental send against the previous snapshot, then prune the oldest one
        btrfs send -p /mnt/cache/$ps /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
        if [[ $? -eq 0 ]]; then
            /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS send/receive finished" -d "Script executed" -m "$(date '+%Y-%m-%d %H:%M') Info: BTRFS VM offline snapshot to HDD completed successfully"
            btrfs sub del /mnt/cache/$sd
            #btrfs sub del /mnt/disks/VMs_backup_hdd/$sd
        else
            /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS send/receive failed" -d "Script aborted" -m "$(date '+%Y-%m-%d %H:%M') Info: BTRFS VM offline snapshot to HDD failed"
        fi
    fi

    4. The second script runs daily and snapshots the VMs as "VMs_backup_online_" with the date, no matter whether they are running or not. Keep in mind: if you have to restore a snapshot of a VM that was running at the time the snapshot was taken, it will be in a "crashed" state. I haven't had any issues with that so far, but there could be situations, e.g. databases running inside a VM, that might break because of this. That's why I take the weekly snapshots with all my VMs turned off. Just in case.

    #!/bin/bash
    #description=
    #arrayStarted=true
    #backgroundOnly=false
    # snapshots live directly under /mnt/cache, next to the VMs subvolume;
    # assumes online snapshots from previous runs already exist
    cd /mnt/cache
    sd=$(echo VMs_backup_onl* | awk '{print $1}')   # oldest online snapshot
    ps=$(echo VMs_backup_onl* | awk '{print $2}')   # previous online snapshot

    if [ "$ps" == "VMs_backup_online_$(date '+%Y%m%d')" ]
    then
        echo "There's already a snapshot from today"
    else
        btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_online_$(date '+%Y%m%d')
        sync
        # incremental send against the previous snapshot, then prune the oldest one
        btrfs send -p /mnt/cache/$ps /mnt/cache/VMs_backup_online_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
        if [[ $? -eq 0 ]]; then
            /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS send/receive finished" -d "Script executed" -m "$(date '+%Y-%m-%d %H:%M') Info: BTRFS VM online snapshot to HDD completed successfully"
            btrfs sub del /mnt/cache/$sd
            #btrfs sub del /mnt/disks/VMs_backup_hdd/$sd
        else
            /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS send/receive failed" -d "Script aborted" -m "$(date '+%Y-%m-%d %H:%M') Info: BTRFS VM online snapshot to HDD failed"
        fi
    fi

     

    I haven't automated the deletion of old snapshots. I monitor the target drive, and when it's getting full I delete some old ones. The first command lists all the snapshots, the second deletes a specific one. Don't delete the initial read-only snapshot as long as you have differential snaps building on it.

    btrfs sub list /mnt/disks/VMs_backup_hdd
    
    btrfs sub del /mnt/disks/VMs_backup_hdd/VMs_backup_offline_20181125

    If you have to restore a vdisk, simply go into the specific folder and copy the vdisk of that VM back to its original share on the cache. The XML and NVRAM files for the VMs aren't backed up by this, only the vdisks. To back up those files you can use the app "Backup/Restore Appdata", which backs up the libvirt.img, for example.

     

    EDIT:

    Forgot to mention: I use a single 1TB NVMe cache device formatted with BTRFS, and a single old spinning-rust 1.5TB HDD as an unassigned device is the target for the snapshots. Nothing special, no BTRFS RAID involved.

  11. @Jaster Have you tried using BTRFS snapshots to back up your VMs? Source and target both need to be on BTRFS. I have my cache drive set up as BTRFS and an unassigned drive, also formatted BTRFS, as the target for my backups. The initial backup transfers the first snapshot and takes some time; every new snapshot only transfers the differential (changed or new) data and is much quicker than a fresh copy of the VMs. Have a look into the following:

     

     

  12. 15 hours ago, [email protected] said:

    Yes, I can see the screen if I attach a monitor.

    What am I missing?

    You aren't missing anything. If you add VNC as a GPU in your VM config it will always be the first GPU, and in most cases passing through a physical GPU alongside it causes issues. Most users use VNC only for the initial setup of the VM. Either use your monitor and pass through a mouse and keyboard to control the VM directly on your Unraid box, or use remote software like RDP or AnyDesk to control it from another PC on your network.

     

     

  13. 13 hours ago, MichaelBernasconi said:

    Threadripper 1920X

    Make sure you optimize your VM, especially for first-gen TR4. Try not to use cores from different dies, for example; you can find a couple of tips in the forums on how to improve gaming performance. Whether that helps or not, though, the game in its current state isn't really well optimized. I've seen streams with really beefy hardware where it runs in the low-20fps range, just like in your case.
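    As a sketch of what die-local pinning looks like in the VM's XML, with made-up host CPU numbers (check your real topology with lscpu -e and keep vCPUs and their SMT siblings on one die):

        <vcpu placement='static'>8</vcpu>
        <cputune>
            <!-- hypothetical mapping: 4 cores plus their SMT siblings, all from die 0 -->
            <vcpupin vcpu='0' cpuset='0'/>
            <vcpupin vcpu='1' cpuset='1'/>
            <vcpupin vcpu='2' cpuset='2'/>
            <vcpupin vcpu='3' cpuset='3'/>
            <vcpupin vcpu='4' cpuset='12'/>
            <vcpupin vcpu='5' cpuset='13'/>
            <vcpupin vcpu='6' cpuset='14'/>
            <vcpupin vcpu='7' cpuset='15'/>
        </cputune>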

  14. Today I got an info from "Fix Common Problems" that the container "letsencrypt" is deprecated. So far so good; I had already read a couple of weeks ago that you guys had to change the container's name, but I never changed my setup until today.

     

    What I did so far:

    1. stopped the letsencrypt container

    2. backed up the config folder in appdata (copied it to a new folder called swag)

    3. edited the old "letsencrypt" container

    4. changed the name to swag

    5. switched to the "linuxserver/swag" repo

    6. adjusted the config path to the new folder

    7. started the swag container

    8. adjusted "trusted_proxies" in the Nextcloud config.php in /appdata/nextcloud/www/nextcloud/config to swag (see the snippet below)
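    For reference, step 8 boils down to one entry in config.php; a sketch, assuming the container name resolves as the proxy host:

        // /appdata/nextcloud/www/nextcloud/config/config.php (excerpt)
        'trusted_proxies' => ['swag'],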

     

    Did I miss something?

  15. @mathieuhantz The usual way is to set up the VM without GPU passthrough via VNC, install the virtio drivers for the network, and make sure everything works. Enabling RDP or setting up remote software like TeamViewer or AnyDesk is also a good idea, because as soon as you use GPU passthrough, VNC should be disabled. Otherwise VNC will always be the primary GPU and the passed-through one won't work.

     

    The "PCIe ACS Override" option has nothing to do with passing through USB devices like the 3 showed in your screen. It is needed for splitting up groups for PCIe passthrough in case you have a USB controller itself grouped with other devices and you want to use the full controller inside the VM, same for GPUs or network controllers.
