Koenig

Report Comments posted by Koenig

  1. 1 hour ago, dlandon said:

    You are mounting the same remote share using CIFS and NFS and not sharing either one with SMB or NFS locally:

    May  2 19:34:04 Unraid unassigned.devices: Mounting Remote Share '//192.168.8.25/Backups'...
    May  2 19:34:04 Unraid unassigned.devices: Mount SMB share '//192.168.8.25/Backups' using SMB default protocol.
    May  2 19:34:04 Unraid unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/unassigned.devices/credentials_Backups' '//192.168.8.25/Backups' '/mnt/remotes/NAS_Backups'
    May  2 19:34:04 Unraid kernel: Key type dns_resolver registered
    May  2 19:34:04 Unraid kernel: Key type cifs.idmap registered
    May  2 19:34:04 Unraid kernel: CIFS: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 on mount.
    May  2 19:34:04 Unraid kernel: CIFS: Attempting to mount \\192.168.8.25\Backups
    May  2 19:34:04 Unraid unassigned.devices: Successfully mounted '//192.168.8.25/Backups' on '/mnt/remotes/NAS_Backups'.
    May  2 19:34:04 Unraid unassigned.devices: Device '//192.168.8.25/Backups' is not set to be shared.
    
    
    May  2 19:34:04 Unraid unassigned.devices: Mounting Remote Share 'NAS.KOENIG.SE:/mnt/user/Backups'...
    May  2 19:34:04 Unraid unassigned.devices: Mount NFS command: /sbin/mount -t 'nfs4' -o rw,noacl 'NAS.KOENIG.SE:/mnt/user/Backups' '/mnt/remotes/NFS_NAS_Backups'
    May  2 19:34:04 Unraid kernel: NFS: Registering the id_resolver key type
    May  2 19:34:04 Unraid kernel: Key type id_resolver registered
    May  2 19:34:04 Unraid kernel: Key type id_legacy registered
    May  2 19:34:04 Unraid nfsrahead[37996]: setting /mnt/remotes/NFS_NAS_Backups readahead to 128
    May  2 19:34:04 Unraid unassigned.devices: Successfully mounted 'NAS.KOENIG.SE:/mnt/user/Backups' on '/mnt/remotes/NFS_NAS_Backups'.
    May  2 19:34:04 Unraid unassigned.devices: Device 'NAS.KOENIG.SE:/mnt/user/Backups' is not set to be shared.

    Both will provide local mount points that differ only in name - '/mnt/remotes/NAS_Backups' and '/mnt/remotes/NFS_NAS_Backups'. Why are you doing this? One mount point with either protocol will accomplish the same thing; both mount points will have the same information from the remote share. Use one protocol or the other on a remote share, not both.
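
    As a quick check (a sketch using the mount points from the log above), findmnt shows that both local paths resolve to the same remote share, just over different protocols:

    findmnt /mnt/remotes/NAS_Backups      # SOURCE: //192.168.8.25/Backups, FSTYPE: cifs
    findmnt /mnt/remotes/NFS_NAS_Backups  # SOURCE: NAS.KOENIG.SE:/mnt/user/Backups, FSTYPE: nfs4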

     

    While UD permits this, I can't say it is a good idea, and it is probably why you are seeing the CIFS errors. I'm not sure how each protocol would respond to local /mnt/remotes/... file changes that end up getting reflected in the remote share, i.e. writes to both mount points.

     

    I'm honestly thinking that UD should probably prevent this scenario.

    Yeah, sorry about that. The NFS share was an attempt to get rid of the messages in the log, but I couldn't get write permissions on it (not being familiar with how Linux network shares work), so it is not being used. This is not how it has been set up in the long run, just for the last couple of days - and it did not lessen or increase the amount of those messages in the log.

     

    But I agree with you, it is not meant to be that way. I have been using the SMB share for years, though.


  2. I'm still getting a lot of this in my log:

    May  3 07:40:54 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3047123
    May  3 07:56:07 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3110621
    May  3 08:03:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3143452
    May  3 08:23:18 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3224409
    May  3 08:34:51 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3272533
    May  3 08:35:23 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:3274724

     

    Any idea what could be causing it?

    unraid-diagnostics-20230503-1011.zip

  3. 2 hours ago, Frank1940 said:

    One of the problems with SMB is that it can take a long time for the configuration to stabilize. See this section of an MS document for details:

     

    https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc737661(v=ws.10)?redirectedfrom=MSDN#registration-and-propagation

     

    Make sure that you have one of your Unraid servers on 24/7 and force that server to be the Local Master. See this code (to be added to the SMB Extras section of SMB Settings) to do that:

     

    [global]
    preferred master = yes
    os level = 255

     

    It does not always solve the problem when a new computer is started up, but it seems to help...

     

    Let me add one more thing. With peer-to-peer networking, which Unraid uses in its basic configuration, any computer may be a client, a server, or both simultaneously. It all depends on how you set up each computer!

    Both servers are on 24/7; the NAS has an uptime of 44 days as of now - there was a power outage back then, otherwise it would have an uptime dating back to the last OS upgrade.

    I already have the NAS set as master, but I did not have the "os level = 255" line - I will add it.
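
    For reference, Samba's nmblookup (included with Unraid) can show which machine currently holds the local master browser role; the "-" argument searches for all master browsers:

    # List the hosts answering as master browser on the local network
    nmblookup -M -- -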

     

    They both act as client and server.

    On "NAS" I have a share "Backups", which is then mounted on the other server, just named "Unraid".

    On "Unraid" I have a share "ISOs", which is then mounted on NAS; I thought it a good way of sharing the storage of ISOs for VM deployment.

     

    I have had this setup for at least 3 years now - it should be 4 in a month or so - but I have never before seen the log filling up with that message. I cannot be sure, but I would bet good money it was not happening even in RC2, given the time that has passed since then and the fact that I do check the logs intermittently.

  4. 2 hours ago, dlandon said:

    You should set up static IP addresses on your servers.  On your 'nas' server you are mounting a remote share on the Unraid server:

    Mar  6 10:58:13 NAS unassigned.devices: Mounting Remote Share '//UNRAID/ISOs'...
    Mar  6 10:58:13 NAS unassigned.devices: Mount SMB share '//UNRAID/ISOs' using SMB default protocol.
    Mar  6 10:58:13 NAS unassigned.devices: Mount SMB command: /sbin/mount -t 'cifs' -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,credentials='/tmp/unassigned.devices/credentials_ISOs' '//UNRAID/ISOs' '/mnt/remotes/UNRAID_ISOs'

    The problem is that the IP address of Unraid can change, and I'm not sure CIFS can handle that cleanly.

     

    On the 'nas' server, you have not done any network configuration:

    # Generated network settings
    USE_DHCP="yes"
    IPADDR=
    NETMASK=
    GATEWAY=
    BONDING="yes"
    BRIDGING="yes"

     


    I have all clients on my network set to use DHCP, and then I manage all IPs and networking from my pfSense router.

    They have had the same IP since the day they first connected to my network.

    Again, I do not know if this is the best way to go about it, but I started doing it this way long before I got my first Unraid server and it has been working well. As I mentioned earlier, if this is a bad way to do it, I'm open to suggestions.
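
    For comparison, a static configuration in Unraid's network.cfg would look roughly like this (a sketch only - the addresses are assumptions taken from the logs in this thread, and a pfSense DHCP reservation achieves a stable address too):

    # Generated network settings
    USE_DHCP="no"
    IPADDR="192.168.8.25"
    NETMASK="255.255.255.0"
    GATEWAY="192.168.8.1"
    BONDING="yes"
    BRIDGING="yes"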

  5. 14 hours ago, dlandon said:

    I just looked at your log again and see this. This might be your problem, depending on whether this device needs to be visible on the network:

    Apr 15 21:11:13 Unraid unassigned.devices: Successfully mounted '//192.168.8.25/Backups' on '/mnt/remotes/NAS_Backups'.
    Apr 15 21:11:13 Unraid unassigned.devices: Device '//192.168.8.25/Backups' is not set to be shared.

     

    You mean it is not set to be shared?

     

    I don't want to share it from the server where I use Unassigned Devices to mount it; it is just supposed to be mounted so I can map it in the "Duplicati" docker.
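
    (For illustration, that mapping amounts to passing the UD mount point into the container - a sketch, where the container path /backups and the image name are assumptions, not my actual template:)

    docker run -d --name duplicati \
      -v /mnt/remotes/NAS_Backups:/backups:rw,slave \
      linuxserver/duplicati

    The rw,slave propagation mirrors the "RW/Slave" access mode generally recommended for Unassigned Devices mounts mapped into containers, so a remount on the host propagates into the container.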

    On the other server (the backup server, "NAS") it is set to be shared, and from there it is visible on the network.

     

    Perhaps there's a better way of doing this, but this way has worked for at least 3 years now; I'm open to suggestions if there's a better way to go about it.

     

    EDIT: Both servers are Unraid, but the "backup" server is on the latest stable version.

    I usually wait a good while before updating that one.

     

    EDIT2: Attached fresh diagnostics.

     

    EDIT3: Attached diagnostics from "NAS" as well.

    unraid-diagnostics-20230419-1014.zip

    nas-diagnostics-20230419-1021.zip

  6. 9 minutes ago, dlandon said:

    It looks like it may be a networking issue:

    Apr 17 05:31:41 Unraid kernel: eth0: renamed from veth8c6b04b
    Apr 17 05:52:53 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8441542
    Apr 17 05:53:24 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8443211
    Apr 17 05:53:56 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8444803
    Apr 17 05:54:27 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8446309
    Apr 17 05:54:58 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8447662
    Apr 17 05:56:32 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8451591
    Apr 17 05:58:06 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8455817
    Apr 17 05:58:37 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8457093
    Apr 17 05:59:09 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8458260
    Apr 17 06:00:11 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8460857
    Apr 17 06:01:45 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:8465455
    Apr 17 06:17:34 Unraid kernel: br-379290cfc013: port 1(veth87da490) entered disabled state
    Apr 17 06:17:34 Unraid kernel: veth48d86f5: renamed from eth0
    Apr 17 06:17:34 Unraid kernel: br-379290cfc013: port 1(veth87da490) entered disabled state
    Apr 17 06:17:34 Unraid kernel: device veth87da490 left promiscuous mode
    Apr 17 06:17:34 Unraid kernel: br-379290cfc013: port 1(veth87da490) entered disabled state
    Apr 17 06:17:35 Unraid kernel: br-379290cfc013: port 1(vethee73dfc) entered blocking state
    Apr 17 06:17:35 Unraid kernel: br-379290cfc013: port 1(vethee73dfc) entered disabled state
    Apr 17 06:17:35 Unraid kernel: device vethee73dfc entered promiscuous mode
    Apr 17 06:17:35 Unraid kernel: br-379290cfc013: port 1(vethee73dfc) entered blocking state
    Apr 17 06:17:35 Unraid kernel: br-379290cfc013: port 1(vethee73dfc) entered forwarding state
    Apr 17 06:17:35 Unraid kernel: eth0: renamed from veth4507586

     


    Might very well be, but I've not seen them before, that I can remember...

    But now they are very frequent. I run "Duplicati" as a docker and back up some shares and appdata (different shares on different days) to another Unraid server; I have been doing that for at least a couple of years and can't remember anything like this filling up the log.

  7. I'm getting a lot of these:

    Apr 18 15:35:49 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4468347
    Apr 18 15:36:20 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4492680
    Apr 18 15:43:07 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4647123
    Apr 18 15:48:56 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4751101
    Apr 18 15:58:03 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4894247
    Apr 18 15:58:34 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4915435
    Apr 18 16:03:48 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:4989147
    Apr 18 16:04:50 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5040925
    Apr 18 16:05:22 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5059100
    Apr 18 16:09:37 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5093655
    Apr 18 16:10:09 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5115018
    Apr 18 16:10:40 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5139303
    Apr 18 16:11:11 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5166674
    Apr 18 16:15:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5216088
    Apr 18 16:16:58 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5256386
    Apr 18 16:22:48 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5362106
    Apr 18 16:23:51 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5404597
    Apr 18 16:24:23 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5429570
    Apr 18 16:29:43 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5500655
    Apr 18 16:31:17 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5564601
    Apr 18 16:35:34 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5608888
    Apr 18 16:36:37 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5651233
    Apr 18 16:37:08 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5675616
    Apr 18 16:37:39 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:5690590
    Apr 18 16:43:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close interrupted close
    Apr 18 16:43:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close cancelled mid failed rc:-9

     

    Since upgrading to RC3 and transferring files between my servers.

     


    unraid-diagnostics-20230418-1720.zip

  8. 20 hours ago, hot22shot said:

     

    Got my 5W back. PCIe ACS override was set to disabled; I put it back to "both" so that powertop could do its magic.

     


    It has to be added to the default entry in your syslinux.cfg. You can edit it by clicking on your flash drive in the Main dashboard.
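
    As a minimal sketch (stock Unraid paths assumed; only the amd_pstate parameter is added), the edited default entry might look like:

    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot amd_pstate=passive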

    A question on that: does "amd_pstate=passive" not work without ACS override enabled?

     

    I don't have it enabled, and if I were to enable it, it would probably mess up my hardware passthroughs.

     

    I tried to Google it, but I couldn't really find any definitive answer, so I'm going to ask you since you seem to know about this: I have an AMD 3970X - would I benefit at all from adding "amd_pstate=passive" to syslinux.cfg?

  9. 15 minutes ago, JonathanM said:

    This.

    I haven't tried it, but shouldn't it be possible to use the mover to avoid doing a complete wipe?

    Something like this: 

    change share setting (e.g. Use cache: Prefer → Yes) --> run mover to move files to the array --> take the array offline --> reformat the cache --> change the share setting back --> run mover to move files back to cache

  10. 2 hours ago, Kilrah said:

     

    Macinabox VMs have nvram and virsh refuses to remove them unless you add --nvram to the command.

     

    So

    virsh undefine --nvram MacinaboxCatalina
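
    You can confirm the definition is gone afterwards with:

    virsh list --all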

     

    Maybe Unraid could add it to the "remove VM" command.

    Thank you!

     

    Now I have my VM-tab back.

     

  11. I ran into the problem of all VMs disappearing from the VM tab, but I can still see and edit them on the dashboard - all except one: when I try to edit my macOS VM, the edit page is just blank.

    I cannot remove the VM either, which I could with some of the others.

     

    I ran the update check before I updated, and it said I had one incompatible plugin, gpu-statistic, which I then removed before updating.

     

    What would be my next step to get my VM tab back?

     

    unraid-diagnostics-20230322-0535.zip

  12. 12 hours ago, bigbangus said:

     

    Are you sure the GTX 760 isn't supported?

     

    I think it's on this list from Nvidia as a Kepler GPU:

    https://nvidia.custhelp.com/app/answers/detail/a_id/5204/~/list-of-kepler-series-geforce-desktop-gpus

    No, when you put it like that it seems I'm wrong.

     

    I was looking at which cards were supported in the latest driver (as it is a rather new feature) and the 760 was missing, so I assumed it was an older GPU, but my assumption was wrong.

     

    However, there's also this: https://www.tomshardware.com/news/nvidia-end-support-kepler-gpu-windows7-windows-8-august-31 - and the article about VM support is dated 29-09-2021.

     

    Grey area perhaps?

  13. This release solved an issue for me that I thought was a hardware/BIOS error: when passing one of my Nvidia cards to a Windows VM, the VM after a while lost connection to the monitor(s), and the VM sort of became unresponsive, so I had to force-close it.

    (The "sort of" is because, if the VM was set to do some video encoding, it still finished the encoding if I just let it be, even hours after it became unresponsive; I determined this by seeing that the resulting file was still growing after the machine went unresponsive.)

     

    But passing it through to a Linux machine worked just fine.

     

    This issue must have appeared when 6.10 went from beta to RC, because I've had it for a long time, long before RC2, but it worked initially; I think my first install on this machine was a beta, due to it being a Ryzen 3970X system.

  14. On 11/7/2021 at 3:19 AM, bigbangus said:

    Fixed it. The solution was just to upgrade my Win10 VM Nvidia driver, which was from 2020/09. It turns out that since 465.89 (2021/03) Nvidia has allowed virtual machines. For whatever reason 6.9.2 was letting me get away with the old driver and 6.10 wasn't.

    Your card is not supported for passthrough in the Nvidia drivers; only Kepler or newer cards are supported: https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/geforce-gpu-passthrough-for-windows-virtual-machine-(beta)

     

    So I think you are bound to test all the old workarounds/hacks to get it to work.

     

  15. 29 minutes ago, testdasi said:

     

    2. If you have many dockers with static IP (i.e. (1) doesn't fix it for you), then:
      • If you have multiple NICs, connect another NIC (different from the one on the VM bridge), don't give it an IP (important - otherwise chances are your VM's access to the shares will go through the router, i.e. be limited to gigabit), add it to a different bridge (e.g. br1) and use that instead of br0 for your docker static IPs.

     

    @Koenig: see above.

     


    I do have a Gigabyte Aorus Xtreme with dual 10GbE NICs, so I might just test this suggestion, thank you. I had some other plans for the second NIC, but those come further into the future, and perhaps this issue will be ironed out by then.

  16. 10 minutes ago, testdasi said:

    So I got curious and did some more testing with the "unexpected GSO type" bug.

     

    Changing the VM machine type to 5.0 (Q35 or i440fx) fixes the issue when there is a small number of dockers with static IP on the same bridge as the VM (e.g. a br0 custom network).

     

    In my particular server, "small number" is about 2-3 dockers.

    In my tests (with a reboot in between), the error happened after the 3rd or 4th docker with a static IP was started (not sure why it was sometimes the 3rd, sometimes the 4th).

    The error seems to depend strictly on the number of dockers, not the number of VMs. I started 5 VMs with 2 static-IP dockers and got no error, but 1 VM + 5 dockers guaranteed an error.

     

    So, in conclusion:

    1. If you only have a few dockers with static IP, try changing the machine type to 5.0 first. That may fix it without the need for any further tweaks.
    2. If you have many dockers with static IP (i.e. (1) doesn't fix it for you), then:
      • If you have multiple NICs, connect another NIC (different from the one on the VM bridge), don't give it an IP (important - otherwise chances are your VM's access to the shares will go through the router, i.e. be limited to gigabit), add it to a different bridge (e.g. br1) and use that instead of br0 for your docker static IPs.
      • If you only have a single NIC, you will have to change your VM network to virtio-net (both changes are sketched below).
        • I think VLANs also fix it, but I have had no need for that, so I have no interest in testing.
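
    For reference, the two XML changes above look like this in a VM definition (forms taken from the XML examples elsewhere in this thread; the surrounding values are assumptions):

    <!-- Option 1: machine type 5.0, in the <os> element -->
    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>

    <!-- Single-NIC fallback: virtio-net, in the <interface> element -->
    <model type='virtio-net'/>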

     

    And I think the errors are harmless, other than flooding the syslog. When I started with 6.9.0, I didn't realise there were errors until my syslog filled up, and there was no other issue at all.

     

    @Koenig: see above.

     


    Well, I only have the one docker with a static IP....

  17. I just answered in another thread about the "unexpected GSO type" issue, where someone claimed that using Q35-5.0 would solve it; I'll just paste that post here:

     

    "I just tried yesterday with 2 newly created VM Q35-5.0 machines (Windows 10), on Beta25 and I still get the "unexpected GSO type" flooded in my logs when i use "virto" so I don't see how using Q35-5.0 would be a solution.

    The only way I get rid of that in the logs is to use "virtio-net", with its severely diminished performance.

     

    Edit: 

     

    Just tried again, still the same results; attaching my diagnostics if you wish to see for yourself."

    unraid-diagnostics-20200821-0848.zip

  18. On 8/11/2020 at 9:57 PM, testdasi said:

    Yep, 6.9.0 should bring improvement to your situation. But as I said, you need to wipe the drive in 6.9.0 to reformat it back to 1MiB alignment, and needless to say that makes the drive incompatible with Unraid versions before 6.9.0.

    Essentially: back up, stop the array, unassign, blkdiscard, assign back, start and format, restore the backup. Besides backing up and restoring from backup, the middle part of the process took 5 minutes.

     

    I expect LT to provide more detailed guidance regarding this, perhaps when 6.9.0 enters RC, or at least when 6.9.0 becomes stable.

    Not that 6.9.0-beta isn't stable. I did see some bug reports, but I personally have only seen the virtio / virtio-net thingie, which was fixed by using the Q35-5.0 machine type (instead of 4.2). No need to use virtio-net, which negatively affects network performance.

     


    PS: I've been running iotop for 3 hours and am still averaging about 345 MB/hr. We'll see if my daily housekeeping affects it tonight.

    Not that this is really the thread to address this, but anyway:

    I just tried yesterday with 2 newly created Q35-5.0 VMs (Windows 10) on Beta 25, and I still get "unexpected GSO type" flooding my logs when I use "virtio", so I don't see how using Q35-5.0 would be a solution.

    The only way I get rid of that in the logs is to use "virtio-net", with its severely diminished performance.

     

    Edit: 

     

    Just tried again, still the same results; attaching my diagnostics if you wish to see for yourself.

    unraid-diagnostics-20200821-0848.zip

  19. 1 hour ago, fxp555 said:

    I think this is a known issue: Go to Settings -> Docker -> Disable Docker -> Apply -> Enable Docker -> Apply.

    Edit: Now that I realize you already recreated docker.img, I am not so sure. You could try disabling and enabling VMs as well.

     

    Yes, it doesn't help.

     

    I also have another issue, or perhaps it is related: when I try to add a custom network for Dockers (192.168.8.0/24, gateway: 192.168.8.1, DHCP: not set), I lose all network connectivity to Unraid.
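
    (For context, a custom Docker network like that corresponds roughly to the following CLI sketch - the macvlan driver and the parent interface are assumptions based on how Unraid created custom br0 networks at the time:)

    docker network create -d macvlan \
      --subnet=192.168.8.0/24 --gateway=192.168.8.1 \
      -o parent=br0 br0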

    If I reboot the machine via the console, the console says "ipv4 not set" after the reboot, but if I log in and run "ifconfig eth0" I can see that it gets the correct IP (I have set it to always get the same IP via the DHCP server on my LAN), yet I still cannot reach the web GUI.

     

    EDIT: A change to the network settings seems to have solved it. Not that any real change was made - I just toggled a random setting back and forth to get the update button to become enabled, pressed update, and then it was solved....

  20. 49 minutes ago, sub6 said:

    It seems to be Code 43 as well. Everything worked out of the box for me under 6.9 beta 1, and now I am stuck. Windows deactivates the GPU and I cannot activate it like @Koenig did. The Nvidia driver is ignored. Here is my XML; maybe someone can find something.

     

    I tried some things - for example, deactivating Hyper-V and hiding KVM (as shown in the XML), and changing to VFIO bind via the System Devices tool rather than via Syslinux, which I did not get to work.

     

    
    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='15' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Windows 10</name>
      <uuid>bef0ca49-2fa1-9db7-1389-d312cb633419</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>10</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='7'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='8'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='9'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='10'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='11'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-5.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/bef0ca49-2fa1-9db7-1389-d312cb633419_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='off'/>
          <vapic state='off'/>
          <spinlocks state='off'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' dies='1' cores='5' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSADB30857W' index='2'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='sata0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:24:6e:d2'/>
          <source bridge='virbr0'/>
          <target dev='vnet0'/>
          <model type='virtio-net'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-15-Windows 10/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/user/isos/AsusStrix_GTX970.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
      <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>
    </domain>

     


    Somehow it seems you have read my solution but misunderstood it or something.

     

    From the looks of your XML, you should start from the top of my solution post.