• Unraid OS version 6.9.0-beta22 available


    limetech

    Welcome (again) to 6.9 release development!

     

    This release is hopefully the last beta before we move to the -rc phase.  The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements.  With that in mind, the obligatory disclaimer:

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    That said, here's what's new in this release...

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, each containing up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
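
    After upgrading you can verify the migration from a console session.  This is just a quick sanity check, assuming the flash device is mounted at /boot as usual:

    ls -l /boot/config/disk.cfg.bak /boot/config/pools/cache.cfg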

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order (a short example follows the list):

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
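
    For illustration, here is a rough sketch of what that merging looks like from the command line.  The share name "Media" is hypothetical, and the assigned pool here is "cache":

    ls /mnt/cache/Media   # contents on the pool assigned to the share
    ls /mnt/disk1/Media   # contents on array disk1
    ls /mnt/user/Media    # the merged view of the share, built in the order listed above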

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: let's say you have a 2-device btrfs pool.  This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, but not necessarily at the block level.  Now let's say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool.  You now have two 1-device btrfs pools.  Upon array Start a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.
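
    For reference, here is roughly what that step amounts to; the device name below is a placeholder, and you should never run the second command against a device whose data you want to keep:

    wipefs /dev/sdX1      # with no options: just list the filesystem signatures on the device
    wipefs -a /dev/sdX1   # erase all signatures; the device will no longer mount as part of the old pool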

     

    Language Translation

    A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications HAS to be up to date to install languages.  Versions of CA prior to 2020.05.12 will not even load on this release.  As of this writing, the current version of CA is 2020.06.13a.  See also here.

     

    Each language pack exists in a public Unraid organization GitHub repo.  Interested users are encouraged to clone and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    Unfortunately, none of the out-of-tree drivers compile with this kernel.  In particular, these drivers are omitted:

    • Highpoint RocketRaid r750
    • Highpoint RocketRaid rr3740a
    • Tehuti Networks tn40xx

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    Also now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.
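
    For context, checking those boxes replaces the older manual method of binding devices to vfio-pci at boot.  A rough sketch of that manual method, with placeholder vendor:device IDs, is an append line in syslinux.cfg such as:

    append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot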

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  For example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
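
    If you prefer the console, the same XML can also be edited with libvirt's virsh tool; the VM name below is a placeholder:

    virsh edit "Windows 10"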

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     

    Version 6.9.0-beta22 2020-06-16

     

    Caution! This is beta software, consider using on test servers only.

     

    Base distro:

    • aaa_base: version 14.2
    • aaa_elflibs: version 15.0 build 23
    • acl: version 2.2.53
    • acpid: version 2.0.32
    • apcupsd: version 3.14.14
    • at: version 3.2.1
    • attr: version 2.4.48
    • avahi: version 0.8
    • bash: version 5.0.017
    • beep: version 1.3
    • bin: version 11.1
    • bluez-firmware: version 1.2
    • bridge-utils: version 1.6
    • brotli: version 1.0.7
    • btrfs-progs: version 5.6.1
    • bzip2: version 1.0.8
    • ca-certificates: version 20191130 build 1
    • celt051: version 0.5.1.3
    • cifs-utils: version 6.10
    • coreutils: version 8.32
    • cpio: version 2.13
    • cpufrequtils: version 008
    • cryptsetup: version 2.3.3
    • curl: version 7.70.0
    • cyrus-sasl: version 2.1.27
    • db48: version 4.8.30
    • dbus: version 1.12.18
    • dcron: version 4.5
    • devs: version 2.3.1 build 25
    • dhcpcd: version 8.1.9
    • diffutils: version 3.7
    • dmidecode: version 3.2
    • dnsmasq: version 2.81
    • docker: version 19.03.11
    • dosfstools: version 4.1
    • e2fsprogs: version 1.45.6
    • ebtables: version 2.0.11
    • eject: version 2.1.5
    • elvis: version 2.2_0
    • etc: version 15.0
    • ethtool: version 5.7
    • eudev: version 3.2.5
    • file: version 5.38
    • findutils: version 4.7.0
    • flex: version 2.6.4
    • floppy: version 5.5
    • fontconfig: version 2.13.92
    • freetype: version 2.10.2
    • fuse3: version 3.9.1
    • gawk: version 4.2.1
    • gd: version 2.2.5
    • gdbm: version 1.18.1
    • genpower: version 1.0.5
    • getty-ps: version 2.1.0b
    • git: version 2.27.0
    • glib2: version 2.64.3
    • glibc-solibs: version 2.30
    • glibc-zoneinfo: version 2020a build 1
    • glibc: version 2.30
    • gmp: version 6.2.0
    • gnutls: version 3.6.14
    • gptfdisk: version 1.0.5
    • grep: version 3.4
    • gtk+3: version 3.24.20
    • gzip: version 1.10
    • harfbuzz: version 2.6.7
    • haveged: version 1.9.8
    • hdparm: version 9.58
    • hostname: version 3.23
    • htop: version 2.2.0
    • icu4c: version 67.1
    • inetd: version 1.79s
    • infozip: version 6.0
    • inotify-tools: version 3.20.2.2
    • intel-microcode: version 20200609
    • iproute2: version 5.7.0
    • iptables: version 1.8.5
    • iputils: version 20190709
    • irqbalance: version 1.6.0
    • jansson: version 2.13.1
    • jemalloc: version 4.5.0
    • jq: version 1.6
    • keyutils: version 1.6.1
    • kmod: version 27
    • lbzip2: version 2.5
    • lcms2: version 2.10
    • less: version 551
    • libaio: version 0.3.112
    • libarchive: version 3.4.3
    • libcap-ng: version 0.7.10
    • libcgroup: version 0.41
    • libdaemon: version 0.14
    • libdrm: version 2.4.102
    • libedit: version 20191231_3.1
    • libestr: version 0.1.11
    • libevent: version 2.1.11
    • libfastjson: version 0.99.8
    • libffi: version 3.3
    • libgcrypt: version 1.8.5
    • libgpg-error: version 1.38
    • libgudev: version 233
    • libidn: version 1.35
    • libjpeg-turbo: version 2.0.4
    • liblogging: version 1.0.6
    • libmnl: version 1.0.4
    • libnetfilter_conntrack: version 1.0.8
    • libnfnetlink: version 1.0.1
    • libnftnl: version 1.1.7
    • libnl3: version 3.5.0
    • libpcap: version 1.9.1
    • libpciaccess: version 0.16
    • libpng: version 1.6.37
    • libpsl: version 0.21.0
    • librsvg: version 2.48.7
    • libseccomp: version 2.4.3
    • libssh2: version 1.9.0
    • libssh: version 0.9.4
    • libtasn1: version 4.16.0
    • libtirpc: version 1.2.6
    • libunistring: version 0.9.10
    • libusb-compat: version 0.1.5
    • libusb: version 1.0.23
    • libuv: version 1.34.0
    • libvirt-php: version 0.5.5
    • libvirt: version 6.4.0
    • libwebp: version 1.1.0
    • libwebsockets: version 3.2.2
    • libx86: version 1.1
    • libxml2: version 2.9.10
    • libxslt: version 1.1.34
    • libzip: version 1.7.0
    • lm_sensors: version 3.6.0
    • logrotate: version 3.16.0
    • lshw: version B.02.17
    • lsof: version 4.93.2
    • lsscsi: version 0.31
    • lvm2: version 2.03.09
    • lz4: version 1.9.1
    • lzip: version 1.21
    • lzo: version 2.10
    • mc: version 4.8.24
    • miniupnpc: version 2.1
    • mpfr: version 4.0.2
    • nano: version 4.9.3
    • ncompress: version 4.2.4.6
    • ncurses: version 6.2
    • net-tools: version 20181103_0eebece
    • nettle: version 3.6
    • network-scripts: version 15.0 build 9
    • nfs-utils: version 2.1.1
    • nghttp2: version 1.41.0
    • nginx: version 1.16.1
    • nodejs: version 13.12.0
    • nss-mdns: version 0.14.1
    • ntfs-3g: version 2017.3.23
    • ntp: version 4.2.8p14
    • numactl: version 2.0.11
    • oniguruma: version 6.9.1
    • openldap-client: version 2.4.49
    • openssh: version 8.3p1
    • openssl-solibs: version 1.1.1g
    • openssl: version 1.1.1g
    • p11-kit: version 0.23.20
    • patch: version 2.7.6
    • pciutils: version 3.7.0
    • pcre2: version 10.35
    • pcre: version 8.44
    • php: version 7.4.7 (CVE-2019-11048)
    • pixman: version 0.40.0
    • pkgtools: version 15.0 build 33
    • pm-utils: version 1.4.1
    • procps-ng: version 3.3.16
    • pv: version 1.6.6
    • qemu: version 5.0.0
    • qrencode: version 4.0.2
    • reiserfsprogs: version 3.6.27
    • rpcbind: version 1.2.5
    • rsync: version 3.1.3
    • rsyslog: version 8.2002.0
    • samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704)
    • sdparm: version 1.11
    • sed: version 4.8
    • sg3_utils: version 1.45
    • shadow: version 4.8.1
    • shared-mime-info: version 2.0
    • smartmontools: version 7.1
    • spice: version 0.14.1
    • sqlite: version 3.32.2
    • ssmtp: version 2.64
    • sudo: version 1.9.0
    • sysfsutils: version 2.1.0
    • sysvinit-scripts: version 2.1 build 31
    • sysvinit: version 2.96
    • talloc: version 2.3.1
    • tar: version 1.32
    • tcp_wrappers: version 7.6
    • tdb: version 1.4.3
    • telnet: version 0.17
    • tevent: version 0.10.2
    • traceroute: version 2.1.0
    • tree: version 1.8.0
    • ttyd: version 20200606
    • usbredir: version 0.7.1
    • usbutils: version 012
    • utempter: version 1.2.0
    • util-linux: version 2.35.2
    • vbetool: version 1.2.2
    • vsftpd: version 3.0.3
    • wget: version 1.20.3
    • which: version 2.21
    • wireguard-tools: version 1.0.20200513
    • wsdd: version 20180618
    • xfsprogs: version 5.6.0
    • xkeyboard-config: version 2.30
    • xorg-server: version 1.20.8
    • xterm: version 356
    • xz: version 5.2.5
    • yajl: version 2.1.0
    • zlib: version 1.2.11
    • zstd: version 1.4.5

    Linux kernel:

    • version 5.7.2
    • CONFIG_WIREGUARD: WireGuard secure network tunnel
    • CONFIG_IP_SET: IP set support
    • CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors
    • enabled additional hwmon native drivers
    • enabled additional hyperv drivers
    • firmware added:
    • BCM20702A1-0b05-180a.hcd
    • out-of-tree driver status:
    • igb: using in-tree version
    • ixgbe: using in-tree version
    • r8125: using in-tree version
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)

    Management:

    • AFP support removed
    • Multiple pool support added
    • Multi-language support added
    • avoid sending spinup/spindown to non-rotational devices
    • get rid of 'system' plugin support (never used)
    • integrate PAM
    • integrate ljm42 vfio-pci script changes
    • webgui: turn off username autocomplete in login form
    • webgui: Added new display setting: show normalized or raw device identifiers
    • webgui: Add 'Portuguese (pt)' key map option for libvirt
    • webgui: Added "safe mode" one-shot safemode reboot option
    • webgui: Tabbed case select window
    • webgui: Updated case icons
    • webgui: Show message when too many files for browsing
    • webgui: Main page: hide Move button when user shares are not enabled
    • webgui: VMs: change default network model to virtio-net
    • webgui: Allow duplicate containers different icons
    • webgui: Allow markdown within container descriptions
    • webgui: Fix Banner Warnings Not Dismissing without reload of page
    • webgui: Network: allow metric value of zero to set no default gateway
    • webgui: Network: fix privacy extensions not set
    • webgui: Network settings: show first DNSv6 server
    • webgui: SysDevs overhaul with vfio-pci.cfg binding
    • webgui: Icon buttons re-arrangement
    • webgui: Add update dialog to docker context menu
    • webgui: Update Feedback.php
    • webgui: Use update image dialog for update entry in docker context menu
    • webgui: Task Plugins: Providing Ability to define Display_Name




    User Feedback

    Recommended Comments



    58 minutes ago, J89eu said:

    Anyone getting Kernel Security Check Failure when trying to boot from Q35 5.0 with this update on a Windows 10 VM?

    Are you using a Ryzen processor?

    4 minutes ago, J89eu said:

    Yes I am

    Add this to the end of your VM's XML, before the last domain line.

    It's a QEMU 5.0 bug that affects Windows VMs.

    <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>

     

    3 hours ago, david279 said:

    Add this to the end of your VM's XML, before the last domain line.

    It's a QEMU 5.0 bug that affects Windows VMs.

    
    <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>

     

    I got the dreadful error code 43 after updating to this beta and applying this...

    What to do now, it was sort of my work@home-production machine with some software from work installed, Autocad, Magicad and some other stuff.

     

    Is there a way to get rid of the error 43? ( have dumped BIOS ) and I have the part of the .xml where you specify vendor ID, the graphics card in question is a RTX 2070, pretty please?

     

    My guess would be that something in those arguments tipped the driver off...

     

    EDIT: Yes pretty sure something in there breaks Nvidia passthrough, tried it on my other VM with a GT 1030 passed through with the exact same result, error code 43....

     

    EDIT2: Attaching my diagnostics if it could help someone to solve this.

    unraid-diagnostics-20200624-2210.zip

    Edited by Koenig
    Update

    > AFP support has been removed.

    Please don't do this! I have multiple older Macs that connect (this is required due to the work I do and the need to support OLD apps that only run on older Macs running older MacOS versions), and SMB has serious bugs (data loss level of bugs) on these older machines. AFP support is THE biggest reason I'm super happy to have gone with Unraid. Totally understandable to hide it for the majority of users, but don't remove it completely.

    24 minutes ago, jeremyn said:

    > AFP support has been removed.

    Please don't do this! I have multiple older Macs that connect (this is required due to the work I do and the need to support OLD apps that only run on older Macs running older MacOS versions), and SMB has serious bugs (data loss level of bugs) on these older machines. AFP support is THE biggest reason I'm super happy to have gone with Unraid. Totally understandable to hide it for the majority of users, but don't remove it completely.

    AFP is Apple's own protocol, originally developed for Classic Mac OS, and it has been widely supported by networked devices, including storage such as NAS.  But its use has been deprecated for six years now, since OS X 10.9 Mavericks.  The last release, AFP 3.4, was over seven years ago.

     

    Time to move on from hardware that old.. 

     

    2 hours ago, jeremyn said:

    > AFP support has been removed.

    Please don't do this! I have multiple older Macs that connect (this is required due to the work I do and the need to support OLD apps that only run on older Macs running older MacOS versions), and SMB has serious bugs (data loss level of bugs) on these older machines. AFP support is THE biggest reason I'm super happy to have gone with Unraid. Totally understandable to hide it for the majority of users, but don't remove it completely.

    You may want to consider sticking with the 6.8.3 version of Unraid which still  has AFP support.  You might also want to consider reducing that server to strictly a NAS box.  (A lot of the Unraid releases have been for general security issues that are usually a very minor issue for a basic NAS box.)


    maybe I missed this, but can different cache pools be mixed formats, as in main cache btrfs, second pool xfs? And if so, when pooled with xfs does it span data or stripe? Looking at using a pool for a backup copy of data on a few drives, with the accessibility xfs provides for recovery.

     

    thanks

    18 minutes ago, 1812 said:

    maybe I missed this, but can different cache pools be mixed formats, as in main cache btrfs, second pool xfs?

    Yes.

     

    22 minutes ago, 1812 said:

    when pooled with xfs does it span data or stripe?

    Every pool is an independent filesystem, and xfs "pools" can only be single device.  If a share exists on different pools, data will be merged together by shfs when accessing that share.

    Just now, johnnie.black said:

    Yes.

     

    Every pool is an independent filesystem, and xfs "pools" can only be single device.  If a share exists on different pools, data will be merged together by shfs when accessing that share.

    so to make sure I understand, if I want to use 3 disks using xfs, I'll need to create 3 xfs pools each with a single disk, and then create a share that specifies those pools, and Unraid will do the rest, correct?

    Just now, 1812 said:

    so to make sure I understand, if I want to use 3 disks using xfs, I'll need to create 3 xfs pools each with a single disk, and then create a share that specifies those pools, and Unraid will do the rest, correct?

    The use cache setting that controls the Mover can only be set to one pool.  Though a share can still exist on different pools, you'd need to use the disk shares directly, or change the use cache setting before use, to use the "pool" you want.

    4 hours ago, johnnie.black said:

    The use cache setting that controls the Mover can only be set to one pool.  Though a share can still exist on different pools, you'd need to use the disk shares directly, or change the use cache setting before use, to use the "pool" you want.

    Not only does it control the mover, but it controls which pool new files get written to. There is no way to have an individual user share set to use multiple pools.

     

    Like any user share, if files for the share (top level folder) exist on multiple pools, then those files would be included for any reads. But for writing new files and mover, only one pool can be selected per user share.

     


    Maybe this will clarify.

     

    In addition to the original cache pool named "cache" (2x500 btrfs raid1), I also have a cache pool that I have named "fast" (1x256 xfs).

     

    Each user share has an additional setting to select the cache pool for that user share. In this screenshot, I have selected my "fast" pool for the share named DVR with Use cache as Prefer so it can overflow.

    [Screenshot: share settings for the DVR share, showing the "fast" pool selected with Use cache set to Prefer]

     


    @Koenig - those lines of XML you quoted did not work for me. In fact, they did not make any difference that I could notice whether they were included or not. I solved this problem by doing two things:

     

    1) Changed the "CPU" XML

    2) Used the Tools -> System Devices page to enable passthrough of my graphics card, instead of manually overriding as a kernel parameter in the syslinux config.

     

    Below is my post in another forum with how I solved it and a couple of references that helped me to get there.

     

    https://forums.serverbuilds.net/t/guide-remote-gaming-on-unraid/4248/130?u=dangerous25

    On 6/24/2020 at 2:41 PM, Koenig said:

    I got the dreadful error code 43 after updating to this beta and applying this...

    What to do now, it was sort of my work@home-production machine with some software from work installed, Autocad, Magicad and some other stuff.

     

    Is there a way to get rid of the error 43? ( have dumped BIOS ) and I have the part of the .xml where you specify vendor ID, the graphics card in question is a RTX 2070, pretty please?

     

    My guess would be that something in those arguments tipped the driver off...

     

    EDIT: Yes pretty sure something in there breaks Nvidia passthrough, tried it on my other VM with a GT 1030 passed through with the exact same result, error code 43....

     

    EDIT2: Attaching my diagnostics if it could help someone to solve this.

    unraid-diagnostics-20200624-2210.zip

    I have the same error, did you manage to fix it?


    so, i broke it.

     

    I attempted to add 2 pools (I had previously added 2 simple single and double device pools and removed them with no issue).  I made a new pool called backup with mixed file formats on the disks, assuming it would format them.  Server hung on mounting disks.

     

    Diags attached.server-diagnostics-20200627-1509.zip

     

    —edit

     

    I was able to get it to shut down via the webGUI.  Upon reboot I was greeted with the option to format the disks in the pool.

    Edited by 1812
    8 minutes ago, loomitz said:

    I have the same error, did you manage to fix it?

    Yes.

     

    I got rid of these lines: 

    <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>

    And changed the cpu mode from host-passthrough to host-model and removed the line about cache, so the cpu section looks like this:

    <cpu mode='host-model' check='none'>
        <topology sockets='1' dies='1' cores='4' threads='2'/>
        <feature policy='require' name='topoext'/>
    </cpu>

    I also added a line to run kvm hidden:

        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vpindex state='on'/>
          <synic state='on'/>
          <reset state='on'/>
          <vendor_id state='on' value='1234567890ab'/>
          <frequencies state='on'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>

    The part about hyper-v I had before, but I thought it might be good to have here for reference.

     

    I have also, since earlier, stubbed all the devices in the actual IOMMU group and then passed them through with the multifunction option, like this:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x21' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x21' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x21' slot='0x00' function='0x2'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x21' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
        </hostdev>

    No bios passed since it is a secondary card.

     

    I still got the error code 43 when booting the VM, but going into the device manager and deleting the card with error 43 and then doing a rescan of hardware got it back to functioning state again.

     

    I hope someone else will be helped by this, it has taken me many hours of googling, reading and many newly created VM's to get to this solution.

     

    On the plus side I'm also seeing a rather large increase in performance for the VM by changing to host-model instead of host-passthrough.  I was under the assumption that passthrough was the best option.  (It might not be the change that did the uptick in performance, it might as well be the updated KVM; I can't really be sure, since this core and my processor won't boot in passthrough mode, but going from the previous core running passthrough to this core running host-model I'm seeing a significant uptick in performance.)

    39 minutes ago, 1812 said:

    so, i broke it.

     

    I attempted to add 2 pools (I had previously added 2 simple single and double device pools and removed them with no issue).  I made a new pool called backup with mixed file formats on the disks, assuming it would format them.  Server hung on mounting disks.

     

    Diags attached.server-diagnostics-20200627-1509.zip

     

    —edit

     

    I was able to get it to shut down via the webGUI.  Upon reboot I was greeted with the option to format the disks in the pool.

    This is fixed in next release.


    Got the same Problem with my Windows VM. "kernel security check failure"

     

    @Koenig Your suggestions helped me get the VM up and running again. But now I am stuck with 800x600 px resolution. Somehow the Nvidia driver got deactivated and I can't get it installed. Using GeForce Experience I am trying to install the driver, but during the process the VM suddenly stops and reboots. Any ideas?


    I'm not sure raid5 in pools is working properly. I gave it the following disk sizes: 4,4,4,3,4,3,4 = 26 TB. Ran a raid 5 balance, shows:

     

    Data, RAID5: total=6.00GiB, used=1.75MiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=1.00GiB, used=112.00KiB
    GlobalReserve, single: total=3.25MiB, used=0.00B

     

    but the web gui on the main tab shows 26TB free…. 

     

    raid 0,1,10 all end up with useable space as expected. 

     

    ---

     

    edit

     

    Also, clicking spin down underneath the pool doesn't seem to work. This new pool has nothing using it. Same issue with another single disk pool, no spin down, even when using the spin down button at the bottom of the web gui.

    Edited by 1812

    It seems to be the Code 43 as well. Everything worked out of the box for me under 6.9 beta 1, and now I am stuck. Windows deactivates the GPU and I cannot activate it like @Koenig did. The Nvidia driver is ignored. Here is my XML, maybe someone can find something.

     

    I tried some things, like for example deactivating Hyper-V and hiding KVM (as shown in the XML), and changed to VFIO bind via the System Devices tool as well as via Syslinux, which I did not get to work.

     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='15' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Windows 10</name>
      <uuid>bef0ca49-2fa1-9db7-1389-d312cb633419</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>10</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='7'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='8'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='9'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='10'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='11'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-5.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/bef0ca49-2fa1-9db7-1389-d312cb633419_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='off'/>
          <vapic state='off'/>
          <spinlocks state='off'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' dies='1' cores='5' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSADB30857W' index='2'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='sata0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:24:6e:d2'/>
          <source bridge='virbr0'/>
          <target dev='vnet0'/>
          <model type='virtio-net'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-15-Windows 10/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/user/isos/AsusStrix_GTX970.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
      <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>
    </domain>

     

    Edited by sub6
    2 hours ago, Koenig said:

    Yes.

     

    I got rid of these lines: [...]

    I hope someone else will be helped by this, it has taken me many hours of googling, reading and many newly created VM's to get to this solution.

    It works! Thanks a lot, my graphics card is now detected and working.

    49 minutes ago, sub6 said:

    It seems to be the Code 43 as well. Everything worked out of the box for me under 6.9 beta 1, and now I am stuck. Windows deactivates the GPU and I cannot activate it like @Koenig did. The Nvidia driver is ignored. Here is my XML, maybe someone can find something.

     

    I tried some things, like for example deactivating Hyper-V and hiding KVM (as shown in the XML), and changed to VFIO bind via the System Devices tool as well as via Syslinux, which I did not get to work.

     

    
    [...]

     

     

    Somehow it seems you have read my solution but misunderstood it or something.

     

    From the looks of your xml you should start from the top of my solution post.


    @Koenig I updated my XML. Still using Syslinux vfio-bind for the PCI devices. Do I need to change that? I am not sure about the part with the <hostdev> in your post. With the updated XML I started the VM, went to the Device Manager, deleted the GPU, searched for changes, and a few moments later the VM stopped and rebooted.

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='2'>
      <name>Windows 10</name>
      <uuid>bef0ca49-2fa1-9db7-1389-d312cb633419</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>10</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='7'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='8'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='9'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='10'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='11'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-5.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/bef0ca49-2fa1-9db7-1389-d312cb633419_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vpindex state='on'/>
          <synic state='on'/>
          <reset state='on'/>
          <vendor_id state='on' value='1234567890ab'/>
          <frequencies state='on'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>
      <cpu mode='custom' match='exact' check='full'>
        <model fallback='forbid'>EPYC-IBPB</model>
        <vendor>AMD</vendor>
        <topology sockets='1' dies='1' cores='5' threads='2'/>
        <feature policy='require' name='x2apic'/>
        <feature policy='require' name='tsc-deadline'/>
        <feature policy='require' name='hypervisor'/>
        <feature policy='require' name='tsc_adjust'/>
        <feature policy='require' name='clwb'/>
        <feature policy='require' name='umip'/>
        <feature policy='require' name='stibp'/>
        <feature policy='require' name='arch-capabilities'/>
        <feature policy='require' name='ssbd'/>
        <feature policy='require' name='xsaves'/>
        <feature policy='require' name='cmp_legacy'/>
        <feature policy='require' name='perfctr_core'/>
        <feature policy='require' name='clzero'/>
        <feature policy='require' name='wbnoinvd'/>
        <feature policy='require' name='amd-ssbd'/>
        <feature policy='require' name='virt-ssbd'/>
        <feature policy='require' name='rdctl-no'/>
        <feature policy='require' name='skip-l1dfl-vmentry'/>
        <feature policy='require' name='mds-no'/>
        <feature policy='require' name='pschange-mc-no'/>
        <feature policy='disable' name='monitor'/>
        <feature policy='require' name='topoext'/>
        <feature policy='disable' name='svm'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSADB30857W' index='2'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='sata0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:24:6e:d2'/>
          <source bridge='virbr0'/>
          <target dev='vnet0'/>
          <model type='virtio-net'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 10/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/user/isos/AsusStrix_GTX970.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>

     

     

    Edited by sub6

    @Koenig Now it works! I changed the vfio-bind from Syslinux to VFIO at boot via the System Devices Tool. Did this only for the GPU. The Onboard USB Controller is still done via the Syslinux Config. 



