Everything posted by billington.mark

  1. Do you get the same error if you change <model type='virtio'/> to <model type='e1000'/> (line 102)?
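     For context, the model element sits inside the interface definition; a minimal sketch (the bridge name and MAC are placeholders, not taken from your config):
     <interface type='bridge'>
       <mac address='52:54:00:aa:bb:cc'/>
       <source bridge='br0'/>
       <model type='e1000'/>
     </interface>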
  2. Personally, I'd get a PCIe USB3 card and pass that through to each VM... less headache than dealing with the limitations of passing individual USB ports.
  3. If you want to pass the entire SSD to the VM, the easiest way to achieve this would be:
     - Take a system image backup in Windows 10 (Control Panel > Backup and Restore > Windows 7 Backup > System Image Backup) and save the backup to a share somewhere on Unraid.
     - On completion, shut down the VM.
     - Change your VM XML (or use the GUI) to pass through the SSD, and add the Windows 10 installation ISO and the VirtIO driver ISO to the VM as well.
     - Start the VM, boot to the ISO, go to the recovery options, load the VirtIO network and storage controller drivers, then restore the system image to the passed-through SSD.
     If you fail miserably, you can reattach the original vdisk to the VM and boot up again.
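     For reference, passing a whole disk in the XML looks something like this (a rough sketch; the by-id path is a placeholder for your actual SSD):
     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='none'/>
       <source dev='/dev/disk/by-id/ata-YOUR_SSD_SERIAL'/>
       <target dev='hdc' bus='virtio'/>
     </disk>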
  4. You can't pass through individual ports; you'd need to pass through the entire device. You could probably create some bindings on the ports outside of the VM in Unraid, then reference those bindings on a virtual adapter in the VM (see the sketch below)... although with this being a pfSense VM, it's probably worthwhile to pick up a dedicated NIC from eBay... not expensive, and it makes this a lot less complicated!
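     A loose sketch of that bridge approach, assuming you've bridged the physical port to br1 in Unraid's network settings (br1 is a placeholder name):
     <interface type='bridge'>
       <source bridge='br1'/>
       <model type='virtio'/>
     </interface>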
  5. VirtIO drivers are built into Ubuntu, so use virtio for your vdisk (see the example below).
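     A minimal example of a virtio vdisk definition (the image path here is a placeholder):
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source file='/mnt/user/domains/Ubuntu/vdisk1.img'/>
       <target dev='vda' bus='virtio'/>
     </disk>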
  6. A couple of things to take this a step or two further...
     - Auto-tick the CPU thread pair when you choose CPUs.
     - NUMA node management, so memory is assigned from the same NUMA node as the selected CPU(s) (dual-CPU system problems!).
     - Building on the second point... a flag if you're passing hardware which isn't on PCIe lanes attached to the selected CPU(s)?.. not sure if that's even possible though? (A rough sketch of the NUMA side is below.)
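     For reference, the NUMA pinning such a feature would generate can already be done by hand in the XML; a sketch, assuming the VM should live on node 0 and CPUs 0/16 are a thread pair on that node:
     <numatune>
       <memory mode='strict' nodeset='0'/>
     </numatune>
     <cputune>
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='16'/>
     </cputune>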
  7. Post your IOMMU groups (Tools > System Devices). Sounds like you have more than one device in IOMMU group 2. If that's the case, you'll need to enable the PCIe ACS Override patch in Settings > VM Manager (enable advanced settings; reboot after changing) to break up the IOMMU groups so you can pass that device through.
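     If I remember right, all that toggle does under the hood is add a kernel boot parameter along these lines to the append line in syslinux.cfg (shown only to illustrate what the setting changes; don't edit it by hand unless you know why):
     append pcie_acs_override=downstream initrd=/bzroot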
  8. If you stub the USB PCIe add-on card and have that passed through to the VM, devices plugged into it will only be detected by that VM; Unraid won't see the USB HDD.
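     Stubbing is done with a kernel parameter in syslinux.cfg keyed to the card's vendor:device ID (the ID below is just an example; use the one shown for your card in Tools > System Devices, and note older setups used pci-stub.ids= instead):
     append vfio-pci.ids=1912:0014 initrd=/bzroot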
  9. Is the virtual BIOS you're using in the VM newer than the one that's shipped with Unraid? The shipped version is quite old and doesn't support booting from NVMe PCIe devices. The drive will be detected in Windows setup fine, but you won't be able to boot from it. You can download a newer version from https://www.kraxel.org/repos/jenkins/edk2/. You want the edk2.git-ovmf-x64-x-xxxxxxxx.xxxxx.xxxxxxxx.noarch.rpm file. Open it with 7-Zip and drill down until you get to the edk2.git\ovmf-x64\ folder, pull out the OVMF-pure-efi.fd and OVMF_VARS-pure-efi.fd files, and put them on an accessible share somewhere, preferably a cache-only share (be sure to do both). Edit your VM XML to reference the new virtual BIOS like this (replace what's there at the moment):
     <loader readonly='yes' type='pflash'>/path/to/new/BIOS/OVMF-pure-efi.fd</loader>
     <nvram>/path/to/new/BIOS/OVMF_VARS-pure-efi.fd</nvram>
     After that, the NVMe device will be available to select and set as the default boot device. You set the boot order inside the VM, not in the XML. I highly doubt it's an issue with your physical motherboard if it's only affecting VMs when they reboot.
  10. Do a full Windows backup (listed as Backup and Restore (Windows 7) in Server 2016) and have it back off to a network share, USB, external HDD, etc... basically anything you can get access to on the new machine. Boot the Windows Server 2016 setup on the new machine, be it VM or physical, and do a restore rather than a fresh installation. Off the top of my head, you select "Repair my computer" rather than pressing "Install now", and there's an option to restore from a backup.
  11. Just remove this part: <boot dev='hd'/> I tend to just set the boot device order in the actual VM in the OVMF BIOS (press F2 on boot, I think?). There you can re-order boot devices as you would on a normal full-fat PC BIOS. As far as I'm aware, you can't set the boot order in the XML when one of your boot devices is a passed-through PCIe device, like you can when you use VirtIO disks. EDIT: post your VM's XML.
  12. @gridrunner This explains the 'no windows logo' 'issue'. https://github.com/tianocore/edk2/commit/6e5e544f227f031d0b45828b56cec5668dd1bf5b Also mentioned here: https://www.reddit.com/r/VFIO/comments/5yws5u/win10_logo_replaced_by_tianocore_logo_during_boot/
  13. This is much neater. I don't have the 'no Windows logo' issue on the version I'm running... If you want the version I'm using, drop me a PM.
  14. I've found that didn't make any difference after booting the VM (may have been fixed since I last tested, though?). I've always had to manually edit the boot order in the actual VM by invoking the OVMF setup (Del / F2, I think?) after the first boot.
  15. You can boot to Windows with a passed-through NVMe drive without using Clover at all; I've been doing this for a while, since the first 6.3 RC. Bear in mind this assumes you're doing a fresh install. The only reason you can't do this natively is that Unraid ships with an older OVMF bootloader for VMs, which predates NVMe support. We'll just be downloading and referencing a different bootloader to add support for direct NVMe booting.
      So basically, build the VM (don't start it yet) and pass through the NVMe device. For example, this entry:
      IOMMU group 41 [144d:a802] 81:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01)
      would be:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
        </source>
      </hostdev>
      Download the OVMF firmware from https://www.kraxel.org/repos/jenkins/edk2/. You want the edk2.git-ovmf-x64-x-xxxxxxxx.xxxxx.xxxxxxxx.noarch.rpm file. Open it with 7-Zip and drill down until you get to the edk2.git\ovmf-x64\ folder, pull out the OVMF-pure-efi.fd and OVMF_VARS-pure-efi.fd files, and put them on an accessible share somewhere, preferably a cache-only share (be sure to do both). In the XML below, I've put them in /mnt/user/VMData/Windows10VM.
      Reference the new files in the XML (replace the default values):
      <os>
        <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
        <loader readonly='yes' type='pflash'>/mnt/user/VMData/Windows10VM/OVMF-pure-efi.fd</loader>
        <nvram>/mnt/user/VMData/Windows10VM/OVMF_VARS-pure-efi.fd</nvram>
        <boot dev='hd'/>
      </os>
      Boot the VM and go to the boot menu. Change the boot order so the DVD drive is top and the NVMe drive is second, then save and boot from the Windows installation ISO. The NVMe drive should show as an available installation target in Windows setup. Run through the installation as you normally would and you're done.
  16. Some interesting info here regarding virtualisation on Ryzen: https://www.redhat.com/archives/vfio-users/2017-March/msg00005.html Seems some patches will be introduced to get things working nicely in the future. Worth mentioning that any patches applied will need to wait for LimeTech to implement them in updates, whether that's to the kernel, libvirt, or QEMU...
  17. Yes, avoid the Marvell ports. It's a known issue that they go a little crazy if you have virtualization enabled. It's discussed here:
  18. Have you tried with SeaBIOS rather than OVMF?
  19. SeaBIOS works great with non-UEFI graphics cards passed through, and when you don't need to boot from anything other than a VirtIO controller (anything where bus='virtio' in your XML for the disk you're booting from). OVMF comes into its own when you want to boot from passed-through devices like SATA controllers or PCIe NVMe devices. You can still pass these through with SeaBIOS, but you'll not be able to use them as a boot device. Performance-wise, if you can boot from either with your VM setup, there's no difference.
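     For what it's worth, the difference in the XML is just the <os> block: an OVMF VM carries the pflash loader lines (as in the posts above), while a SeaBIOS VM simply omits them and libvirt falls back to the default SeaBIOS ROM. A sketch of the SeaBIOS case (machine type is an example):
     <os>
       <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
     </os>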
  20. The dmidecode command returns EP2C602-4L/D16. I'll have a read through and see what I can come up with. Mark
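     (For anyone following along, the board name can be pulled with something like: dmidecode -s baseboard-product-name)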
  21. Hi guys, I don't seem to have the option for fan control in the plugin settings; the enable drop-down is disabled. My fans and RPMs are showing correctly when running ipmi-sensors -t fan. Any ideas where to start with this? I have an ASRockRack EP2C602-4L/D16.
  22. The ability to spoof vendor_id as a Hyper-V enlightenment, to fool the Nvidia drivers into loading, is only available in libvirt 1.3.3 and up (see the snippet below). These are the versions in 6.3-RC6: libvirt 2.4.0, QEMU 2.7.0.
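     The relevant XML looks something like this (the value string is arbitrary; a sketch rather than a definitive config):
     <features>
       <hyperv>
         <vendor_id state='on' value='none'/>
       </hyperv>
     </features>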
  23. Ah! Right. Windows 7 on QEMU... Windows 7 doesn't really perform well in a VM, but there's another tweak you can apply to the <cpu> part of your XML. In your VM XML, change:
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='1' threads='2'/>
      </cpu>
      to:
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='1' threads='2'/>
        <feature policy='disable' name='hypervisor'/>
      </cpu>
      Source: http://vfio.blogspot.co.uk/2016/10/how-to-improve-performance-in-windows-7.html
      Bear in mind that if you make any changes to the VM in the GUI after making this change in the XML, it will disappear and you'll need to reapply it manually.