Budget Ryzen 3900X Build



I am planning on building a 3900X system as cheaply as possible. I have an old gaming PC with an i7 2600K CPU and a GTX 1060 GPU that I want to replace with a virtual Windows 10 machine.

 

I will be using the Win10 VM for general work and casual gaming.

I would also like to run some more VMs for testing.

10-15 containers like plex, pi-hole, rutorrent, vscode...

 

Is there any reason not to go with a B450 motherboard? I will not do any high-end gaming or similar. Do you have any other suggestions regarding my choice of components?

 

Components

CPU: AMD Ryzen 9 3900X with stock cooler

Motherboard: Some cheap B450 motherboard, like the Asus ROG Strix B450-F Gaming or similar.

RAM: Some cheap 32GB (2x16GB) 3200MHz DDR4 (planning to upgrade to 64GB in the future)

M.2 SSD: Intel 660p Series M.2 2280 SSD 1TB

PSU: Fractal Design Edison M 750W (Gold)

Storage HDD: 1 x 8TB (Seagate Exos 7E8 ST8000NM0055, 256MB cache), and planning to buy a few more in the future.

Enclosure: Unknown 

GPU: GTX 1060 (Already owned)

Link to comment
4 hours ago, Cliff said:

I am planning on building a 3900X system as cheaply as possible. I have an old gaming PC with an i7 2600K CPU and a GTX 1060 GPU that I want to replace with a virtual Windows 10 machine.

 

I did a very similar setup: 3900X, 32 GB DDR4, Gigabyte B450 Aorus Pro WiFi, 1070. I wish I had gone for an X570 board for the extra PCIe lanes. Go that route if you can.

 

4 hours ago, Cliff said:

I will be using the Win10 VM for general work and casual gaming.

I would also like to run some more VMs for testing.

10-15 containers like plex, pi-hole, rutorrent, vscode...

 

This will work; however, there are a few things that held me back.

 

First, isolate the CPU cores you want to assign to the Win10 VM. This is a must; the downside is that these isolated cores can then not be used by anything else.

I run 3 other VMs besides the 6-core/12-thread VM for gaming. I also isolated the cores for these other VMs, but allow them to share, as these VMs are not that busy.
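
For reference, on older Unraid versions core isolation is done by adding isolcpus to the append line in /boot/syslinux/syslinux.cfg (I believe newer versions expose an isolation option in the CPU Pinning settings). A minimal sketch - the core numbers are just an example mirroring the pinning shown later in this thread:

label Unraid OS
  menu default
  kernel /bzimage
  append isolcpus=5,7,9,11,17,19,21,23 initrd=/bzroot

After a reboot, Unraid's scheduler skips those cores, and only whatever you explicitly pin to them will run there.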

 

Even doing that, I was getting a lot of stutters and tracked it down to my dockers hitting the same core that unRaid was running my emulator for the VMs on (core 0, I think). So I forced all of my dockers to use the same 4 cores/8 threads. This was required in my case to get rid of the stutters. I also noticed that when Plex does a library scan, it really crushes unRaid on cores 0/12. I plan to move the emulator pin to a different core in the future, but have not done that yet.
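
If anyone wants to try moving the emulator pin, libvirt supports an <emulatorpin> element inside <cputune>; a sketch (the cpuset values are illustrative - pick cores your VMs and dockers are not using):

  <cputune>
    <vcpupin vcpu='0' cpuset='5'/>
    <vcpupin vcpu='1' cpuset='17'/>
    <emulatorpin cpuset='1,13'/>
  </cputune>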

 

This is a lot of work to get stable gaming performance, but at the moment I'm getting similar performance in the VM to my old gaming box (2600X, 1070). Maybe a shade better.

4 hours ago, Cliff said:

Is there any reason not to go with a B450 motherboard? I will not do any high-end gaming or similar. Do you have any other suggestions regarding my choice of components?

 

More PCIe lanes. I think it is 4 more; that does not seem like a lot, but it is. Right now I'm considering an upgrade to an X570 board so that I don't bandwidth-constrain my PCIe H310. If I only have my one graphics card in the x16 slot and my H310 in the other slot, I'm OK-ish. But as soon as I put a USB PCIe card or a dual-gigabit NIC in either of the other two slots, the second PCIe slot drops from x4 to x2, and that really hurts array performance. However, I kind of need the USB PCIe card for the VM, so I put it in when I'm not doing a parity check, and take it out, if I can, when I do.
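
If you want to verify what link width a slot has actually negotiated, lspci from the Unraid console shows it. The 03:00.0 address below is an example - substitute your own device's address from System Devices:

lspci -s 03:00.0 -vv | grep -E 'LnkCap|LnkSta'

which prints something like this (illustrative output):

  LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM not supported
  LnkSta: Speed 8GT/s, Width x2 (downgraded)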

 

Let me know if you have any more questions.

Link to comment

For your Windows VM, are you stubbing out a hard drive and running it bare metal like SpaceInvader One does, or do you have it running off an SSD or something?

Also, not to troll, but I guess I expected the 3900X to be able to surpass the 2600X. Is there really that much VM overhead, or did you just configure it to be equal? I know the 2600K is still somewhat relevant; I'm still running that in my main rig now.

Link to comment
1 hour ago, rxnelson said:

For your Windows VM, are you stubbing out a hard drive and running it bare metal like SpaceInvader One does, or do you have it running off an SSD or something?

Also, not to troll, but I guess I expected the 3900X to be able to surpass the 2600X. Is there really that much VM overhead, or did you just configure it to be equal? I know the 2600K is still somewhat relevant; I'm still running that in my main rig now.

 

Troll away :) Just kidding. I did not take it as such. 

 

I'm running an image file on an NVMe drive, but it is the only VM that runs on the NVMe, and I'm getting about 80% of the speed of the NVMe by doing this. Passing through an NVMe is not supported as of yet, from what I understand. When that works I'll switch over to a passed-through NVMe, but it's not that big of a deal.

 

As for the performance, I suspect it's better using 6 cores/12 threads on the 3900X vs the bare-metal 2600X, but as I'm using the VM for streaming games to other devices, I'm capped at 60 fps, and that is also what I got using the 2600X for the same task. There is overhead, but I really don't have a way to measure it. Once I got the system stable and running the way I want it, I never really did a lot of benchmarks.

 

If you are considering a move to a similar setup, it really depends on your needs and what experience you are willing to accept. This VM is also my Plex server, so I can use hardware transcoding from the 1070, and so far I've been able to get away with using it for both. I do run a few more VMs and a lot of dockers on the other 6 cores/12 threads, and this setup is currently replacing a dual-Xeon 2665 unRaid server and the 2600X gaming box. I think I was running around 400+ watts, maybe 500; I'm down to about 250 or so while gaming.

Link to comment

I have some questions. How much does memory speed matter if gaming is not a big priority? I was thinking about getting 2x16GB 3200MHz RAM. Is that a good choice?

 

Also, if I get a 1TB NVMe M.2 SSD, can I share the space with other VMs or Docker containers? Do I have to specify a disk size value for the Win10 VM, or can everything share the same disk?

 

Can't I use GPU transcoding if running a Plex docker?

 

Link to comment
2 hours ago, Cliff said:

I have some questions. How much does memory speed matter if gaming is not a big priority? I was thinking about getting 2x16GB 3200MHz RAM. Is that a good choice?

 

Also, if I get a 1TB NVMe M.2 SSD, can I share the space with other VMs or Docker containers? Do I have to specify a disk size value for the Win10 VM, or can everything share the same disk?

 

Can't I use GPU transcoding if running a Plex docker?

 

Officially, the Ryzen 3900X supports:

  • Single Rank 2 DIMMs DDR4-3200
  • Single Rank 4 DIMMs DDR4-2933
  • Double Rank 2 DIMMs DDR4-3200
  • Double Rank 4 DIMMs DDR4-2667

So your 2 DIMM 3200MHz theoretically should be ok.

However, anything above 2133MHz is an overclock. So if you run into instability running 24/7, drop it to 2667 first; if it's still unstable, do a memtest at 2133MHz.

If it can't run stable at 2667MHz, you theoretically have plenty of grounds for a return; just note that your replacement will likely not fare much better.

 

If you put the NVMe in the cache pool, for example, then your VMs and containers can share the space.

In that case, your VM will run on a vdisk, so you have to specify the size. The same goes for the docker image.
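
If you are curious, the vdisk is just an image file that the VM template creates for you. Making one by hand looks something like this (the path and size are examples):

qemu-img create -f raw /mnt/user/domains/Windows10/vdisk1.img 256G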

Note though that if you are after the highest and most consistent storage performance for your VM with the NVMe, you have to pass it through (i.e. in full) as a PCIe device. That means no sharing.

 

And do not get a QLC NVMe (e.g. the Intel 660p).

 

You can do Nvidia GPU hardware transcoding in the Plex docker. However, it means your VM cannot use that same GPU, and you have to run Unraid Nvidia (which is a community-built branch of the official Unraid).

If you only have one powerful GPU, then running Plex in your main VM is probably not a bad idea.

I personally have Plex as a docker with CPU software transcode since I don't need that many concurrent streams. When I need many concurrent transcodes, or when it's time-critical, I do it on my workstation VM with a GPU.

 

Link to comment
17 hours ago, Cliff said:

Thanks! What is the reason for not getting a QLC NVMe? And do you have any tips for other 1TB NVMes?

Long reason:

Current QLC tech uses an adaptive SLC cache (essentially using QLC cells like SLC cells). That means:

  • The amount of cache you have depends on how much free space you have.
  • When you run out of SLC cache, the drive reverts to writing QLC directly, i.e. slow.
  • The wear on the SSD cells used as SLC cache is quite a bit higher than normal.

What this translates to in real life (e.g. as an Unraid cache drive) is that it performs about the same as a SATA SSD on average (I'm talking about the Intel 660p specifically, as that's what I'm familiar with).
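
You can see this behaviour yourself: write sequentially past the SLC cache and watch the throughput fall off a cliff. A rough sketch with fio (the file path and size are examples - make the size larger than the drive's free-space-dependent cache):

fio --name=slc-test --filename=/mnt/cache/fio-test.bin --rw=write --bs=1M --size=64G --direct=1 --ioengine=libaio

Throughput starts at NVMe speeds and drops to QLC speeds once the cache runs out. Remember to delete the test file afterwards.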

Adding to that is the fact that the 660p cannot be passed through to a VM as a PCIe device due to Linux kernel conflicts.

 

Of course, if you run out of SATA ports and must add an SSD at the lowest possible cost, then it's not at all a bad idea.

I would just prefer actual NVMe performance and not a glorified SATA drive.

 

DRAM cache is another thing to look out for, as the lack of it is even (way) worse than QLC.

I would pick the 660p over a DRAM-less SSD any day.

 

For your use case (i.e. sharing an NVMe among several things, including vdisks), anything 3D TLC (e.g. Samsung V-NAND) with DRAM would be good.

If you want to pass it through as a PCIe device to a single VM, then you need to research the SSD's controller, as some don't like to be passed through or require special workarounds with limitations.

 

Link to comment
On 2/17/2020 at 6:35 AM, testdasi said:

Officially, the Ryzen 3900X supports:

  • Single Rank 2 DIMMs DDR4-3200
  • Single Rank 4 DIMMs DDR4-2933
  • Double Rank 2 DIMMs DDR4-3200
  • Double Rank 4 DIMMs DDR4-2667

So your 2 DIMM 3200MHz theoretically should be ok.

However, anything above 2133MHz is an overclock. So if you run into instability running 24/7, drop it to 2667 first; if it's still unstable, do a memtest at 2133MHz.

 

Hmmm... I will have to play around with this. I think I have 4 double-rank DIMMs (3200) in my system at the moment, and have just set it to AUTO in the BIOS. I just picked up an X470 board and my RAM is currently running at 2133.

 

If anyone is interested, I'll test this out over the next few days and post back. I'm not sure how I'd be able to measure any changes to performance, but I'd like to see if I can get the system stable at 3200.

 

I wonder if it would make more sense to run 32 GB @ 3200 vs 64 GB @ 2667? I guess it would depend on whether you need the extra RAM for VMs, etc.

Link to comment

I just got all my parts and am going to try to set up my replacement virtual Win10 machine today. But I have some more questions. I want my Windows 10 machine to use my NVMe as a boot drive and use my GTX 1060 graphics card. But where do I begin? I have a Windows 10 ISO and a valid licence. Do I have to edit any configuration files, or is it "plug and play" when I create the VM? And do I also have to pass through the 3.5mm audio and USB?

Link to comment
2 hours ago, Cliff said:

But where do I begin? I have a Windows 10 ISO and a valid licence. Do I have to edit any configuration files, or is it "plug and play" when I create the VM? And do I also have to pass through the 3.5mm audio and USB?

Where do you begin? Start with watching SpaceInvader One's tutorials on YouTube, particularly his VM playlist.

 

Link to comment
8 hours ago, Cliff said:

I just got all my parts and am going to try to set up my replacement virtual Win10 machine today. But I have some more questions. I want my Windows 10 machine to use my NVMe as a boot drive and use my GTX 1060 graphics card. But where do I begin? I have a Windows 10 ISO and a valid licence. Do I have to edit any configuration files, or is it "plug and play" when I create the VM? And do I also have to pass through the 3.5mm audio and USB?

 

As @testdasi said, go watch SpaceInvader One's videos on YouTube. Follow his instructions to a T and get the base VM up and running. You don't really need to pass through the NVMe to the VM to get close to bare-metal performance.

 

Get used to setting up VMs on unraid, and once you have that down pat, move on to passthrough. Get your video card passed through and working. Once you can do that reliably, you can look at passing through the NVMe if you still want to.

Link to comment

Thanks, but the post where the SpaceInvader video was linked says that it is now outdated.

 

When looking at "System Devices", the NVMe drive is listed under SCSI Devices: /dev/nvme0n1

But I can't get anything other than the unraid bootup to show on the monitors. What am I doing wrong?

 

I also tried changing the boot order so that the install ISO started before the NVMe, but that made no difference.

 

[Attached screenshots: virt1.JPG, virt2.JPG, nvme1.JPG, nvme2.JPG]

Link to comment
21 minutes ago, Cliff said:

Thanks, but the post where the SpaceInvader video was linked says that it is now outdated.

 

When looking at "System Devices", the NVMe drive is listed under SCSI Devices: /dev/nvme0n1

But I can't get anything other than the unraid bootup to show on the monitors. What am I doing wrong?

 

I also tried changing the boot order so that the install ISO started before the NVMe, but that made no difference.

 

OK, two things. If you are going to use the NVMe as bare metal (passthrough), you have to do it from the XML view. I'd recommend against that unless you know what you are doing.

 

If you do it manually, you have to give it a full path: at the end of the path you have up top, add win10vm.img, and then you should be asked for the size. Set it in G (i.e. 256G).

 

See if that works.

Link to comment
1 hour ago, Cliff said:

Thanks, but the post where the SpaceInvader video was linked says that it is now outdated.

 

When looking at "System Devices", the NVMe drive is listed under SCSI Devices: /dev/nvme0n1

But I can't get anything other than the unraid bootup to show on the monitors. What am I doing wrong?

 

I also tried changing the boot order so that the install ISO started before the NVMe, but that made no difference.

Are you looking to use the NVMe exclusively for the VM? Is it showing up under Unassigned Devices on the Main page?

If the answers are YES and YES then:

  • Go to the Apps page, look for the VFIO PCI Config plugin and install it.
  • Settings -> VFIO PCI Config -> Tick the box next to your NVMe (144d:a808) -> click BUILD VFIO-PCI.CFG
  • Reboot and then go back to the VM template, the drive should now show up under the Other PCI Device section.
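
For reference, all the plugin does is write a small file to the flash drive that tells Unraid to bind the device to vfio-pci at boot. Don't hand-write it - the exact format depends on the plugin/Unraid version, so use the BUILD button - but it ends up looking roughly like this (hypothetical example for the NVMe above):

/boot/config/vfio-pci.cfg:
BIND=0000:03:00.0|144d:a808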

 

Link to comment

Thanks for the help, but I don't understand how to get the VM to show up on any monitor. I have connected my unraid GPU (GTX 1060) to a monitor, and during boot I see the unraid info.

But when I select the GTX 1060 on the "create VM" page, I never get any output from the VM. I was thinking it would show the Windows 10 install menu from the Win10 ISO so that I could install it to the NVMe.

 

Edit:

I tried the same settings but selected disk=none and picked the NVMe that showed up after using the plugin. I selected VNC instead of my GPU. Now I could install Windows 10 to the NVMe, but after the installer reboots I only get some kind of shell with info like BLK1, BLK2...

 

XML:

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='7'>
  <name>Windows 10</name>
  <uuid>dc13e351-67f1-ac18-481b-7e7287651c92</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='5'/>
    <vcpupin vcpu='1' cpuset='17'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <vcpupin vcpu='3' cpuset='19'/>
    <vcpupin vcpu='4' cpuset='9'/>
    <vcpupin vcpu='5' cpuset='21'/>
    <vcpupin vcpu='6' cpuset='11'/>
    <vcpupin vcpu='7' cpuset='23'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/dc13e351-67f1-ac18-481b-7e7287651c92_VARS-pure-efi.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:9d:e6:5c'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-7-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='2'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='sv'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0603'/>
        <product id='0x00f2'/>
        <address bus='1' device='2'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

[Attached screenshot: boot.JPG]

Link to comment
6 hours ago, Cliff said:

Ok, tried changing the BIOS to SeaBIOS and now I could install and boot Windows using VNC. I will try to edit and pass through the GPU again after work. But how do I pass through the onboard audio to the Win10 VM? Or do I need to buy a USB audio card?

Start a new template with Q35 and OVMF. Don't use SeaBIOS unless you have to.

 

Resolving your boot problem is pretty simple.

In the xml, remove this line:

<boot dev='hd'/>

Then look for the section corresponding to your NVMe (e.g. the part below - notice the bus='0x03' between <source> and </source>; that bus corresponds to the "03:00.0" of your NVMe SSD, which is how you identify it in the XML):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>

And add this below </source>:

<boot order='1'/>

i.e. the section will become like this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>

That's it. No need to resort to SeaBIOS.

 

 

For your onboard sound card: notice the green "+" sign on the left-hand side of the Sound Card section of the VM template? Click on that and it will allow you to add another sound card; then just select the entry in the drop-down list corresponding to your onboard audio.

If it doesn't work, i.e. it gives you an error when starting the VM, you need to use the VFIO-PCI.cfg plugin to tick the onboard sound card, rebuild the VFIO-PCI.cfg and reboot (i.e. do the same thing as with your NVMe, except the audio device shows up under the Sound Card section instead of the Other PCI Devices section).

 

Note: the first sound card you select should be your GPU's HDMI audio. The onboard sound card should be the second one. That's to ensure you don't forget the GPU HDMI audio, which must be passed through together with the GPU.
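
Once added, the onboard audio ends up in your XML as just another hostdev entry, something like this (the 0a:00.4 source address is typical for a B450/X570 board's onboard audio - yours may differ):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x4'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>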

 

 

Link to comment

Thanks for all the help. I will try your suggestions later today.

I noticed that I can boot Windows directly from the NVMe if I change the boot order in the BIOS, so that's good, I guess.

But there is some strange behaviour with my current configuration: after I change the graphics to the 1060, add the VirtIO drivers ISO and try to start the VM, it crashes the unraid webui and I have to reboot to be able to display the webui again.

Link to comment
24 minutes ago, Cliff said:

But there is some strange behaviour with my current configuration: after I change the graphics to the 1060, add the VirtIO drivers ISO and try to start the VM, it crashes the unraid webui and I have to reboot to be able to display the webui again.

Nothing strange about it: you have only one GPU.

Once it's passed through to the VM, Unraid doesn't have any GPU to work with.

The GPU will not be returned to Unraid after being passed through.

 

If you want to use the Unraid (boot) GUI as well as the VM, then you need two GPUs.

Alternatively, access the GUI from another computer via the network.

Link to comment

OK, I tried again, but every time I try to start the VM, unraid crashes/stops responding and I have to reboot the entire server.

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>3e8d5e3f-3d1d-b9a9-237e-4572615cabdc</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='5'/>
    <vcpupin vcpu='1' cpuset='17'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <vcpupin vcpu='3' cpuset='19'/>
    <vcpupin vcpu='4' cpuset='9'/>
    <vcpupin vcpu='5' cpuset='21'/>
    <vcpupin vcpu='6' cpuset='11'/>
    <vcpupin vcpu='7' cpuset='23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/3e8d5e3f-3d1d-b9a9-237e-4572615cabdc_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:57:20:f5'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x4'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0603'/>
        <product id='0x00f2'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x093a'/>
        <product id='0x2510'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

Link to comment

I am. I have even tried removing both audio and GPU just to get the system to boot. I think all the PCI devices are from when I installed Win10 and the installer created a lot of partitions on the drive.

 

Log file:

-blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/071ebadd-1633-9d12-ca34-a9fab4b8fc78_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-4.2,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu host,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=none \
-m 16384 \
-overcommit mem-lock=off \
-smp 9,sockets=1,cores=9,threads=1 \
-uuid 071ebadd-1633-9d12-ca34-a9fab4b8fc78 \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=33,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:86:a7:05,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=38,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=3 \
-vnc 0.0.0.0:0,websocket=5700 \
-k en-us \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device vfio-pci,host=0000:03:00.0,id=hostdev0,bus=pci.3,addr=0x0 \
-device usb-host,hostbus=1,hostaddr=3,id=hostdev1,bus=usb.0,port=1 \
-device usb-host,hostbus=1,hostaddr=6,id=hostdev2,bus=usb.0,port=2 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-02-26 17:55:07.185+0000: Domain id=4 is tainted: high-privileges
2020-02-26 17:55:07.185+0000: Domain id=4 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
libusb: error [udev_hotplug_event] ignoring udev action bind

 

Link to comment
