SpaceInvaderOne

**VIDEO GUIDE** The best way to install and set up a Windows 10 VM as a daily driver or a gaming VM (Part 1 and Part 2)

70 posts in this topic


Hi, guys. This is the first part of a two-part video about setting up a Windows 10 KVM VM in unRAID (second part in a day or two, if work lets me!). The first part deals with setting up the VM correctly for use as a 'daily driver'; the second part covers passing through hardware to turn it into a gaming VM.

The first part covers:

  1. Downloading a Windows 10 ISO.
  2. Where to buy a Windows 10 Pro license for $20.
  3. How to assign resources and correctly pin your CPUs.
  4. How to install the virtio drivers, including the QXL graphics driver.
  5. How to remove or block the Windows 10 data mining / phoning home with Anti-Beacon.
  6. How to install multiple useful programs with Ninite.
  7. Using Splashtop for good-quality remote viewing.
  8. How to install a virtual sound card to have sound in Splashtop/RDP etc.
  9. Using mapped drives and symlinks to get the most out of the array.
  10. Windows tweaks for VM compatibility.
  11. General tips.

Hope you find it useful :)

The best way to install and set up a Windows 10 VM as a daily driver or a gaming VM

Below is the second part of the two-part video about setting up a Windows 10 KVM VM in unRAID. The second part deals with passing through hardware, and with potential problems and their solutions, showing you how to turn it into a gaming VM. Hope you find it useful :)

 


You suggest only setting up a small vdisk for the VM, yet you choose 70 GB. I normally set mine up as only 30 GB, and I still have 18 GB free after getting Win10 N Pro 1703 Creators Update all set up. Yes, Creators Update 1703 N only uses 12 GB after install. Why do you want so much free space?

 

You are suggesting a different install process than what is in the wiki. Adding the other virtio drivers later is faster, and it works well. Should we be modifying the wiki for your new, faster install process?

 

This will be a great help for the new unRAIDers who want to try a Win10 VM. All that is left to do is create an automatic script that does all this for us without intervention...

 


Yeah, I consider 70 GB quite small; I have known people to set up 500 GB vdisks! 70 GB is more space than I will fill up on the disk.

Windows recommends a minimum of 20 GB of free space, but I think it's useful to have 10-20 GB more than that for temporary files, the desktop, etc. I often have video/image files of up to 10 GB, and I may work on these on the desktop before putting them on the array. Also, the vdisk size will differ from the file size on the disk until the vdisk is filled.
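That last point is just how sparse files behave, and you can see it with any plain sparse file; a quick illustration (the path is only an example):

```shell
# A raw vdisk starts out sparse: huge apparent size, almost no real usage.
truncate -s 70G /tmp/demo-vdisk.img

ls -lh /tmp/demo-vdisk.img   # apparent size: 70G
du -h /tmp/demo-vdisk.img    # actual space used: ~0 until data is written

rm /tmp/demo-vdisk.img
```

Only as the guest writes data does the file consume real space on the cache drive.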

Good idea about the script :)


Great video! You nailed the VM provisioning. I actually picked up some tips for Windows from this video. I especially like the Windows 10 'phone home' removal software. This is the kind of tool that helps the new unRAID user. Great job!

 

Looking forward to chapter 2.


Excellent guide, I am a fan of your video series.

I am aware of how to set up a Windows 10 VM, but as a catch-all, one-stop shop with the nuances explained and extra tips, I will no doubt refer to your video again in future.

As someone who is currently looking at the overheads of running a 4K gaming machine as a VM, I am also looking forward to chapter 2!


Great job on the videography, pacing, and content! Truly outstanding!!

 

A few questions tangential to the technical content:

1 - Many users run Docker containers for downloading as well as Plex, with Plex being pretty resource-intensive at certain times. And most people would have 4 cores, not 8, and 16 GB of RAM is probably most common. For a user that wants a basic Windows VM (non-gaming), how would you recommend provisioning CPU and RAM? Is there a minimum recommended Windows config that won't slow down Plex?

 

2 - I've always thought that splitting a core between host and VM could be a good thing. For example, if you have a VM with one thread from each of two cores, and unRAID owned the others, unRAID would still have access to all of the cores for transcoding, a good thing if the Windows VM is often idle. Why the recommendation to pin complete cores to VMs and not share them, in essence taking them out of the game even if they're lightly used much of the time?

 

Thanks again for this and your other videos! Great resources for unRAID users!! I plan to use this one and the one on online backups in the next few weeks, after completing my current drive upgrade cycle.

 

(#ssdindex - see first post in thread)

 

Thanks again! 

On 2017-5-31 at 11:50 AM, bjp999 said:


Hi @bjp999, thanks, glad you like the videos :)

 

Yes, I think the most common CPU and RAM setup most users would have is 4 cores (8 threads) and 16 GB of RAM.

When assigning resources to VMs and Docker containers, it is as much an art as a science. It is best to experiment to find what works best for you, while following certain principles. After the second part of this video I am releasing a video about server, Docker container and VM tuning, a lot of which is based on @dlandon's excellent Tips and Tweaks plugin

and his post here.

 

Plex obviously will use more CPU when transcoding streams; direct play (non-transcoded) doesn't affect CPU usage very much.

By default, each Docker container can access the whole of the server's resources as the host OS sees fit, and that includes all cores of the CPU. This is the case even if you have pinned the vCPUs in the VM. Pinning the vCPUs in the VM just tells the VM it can only use those cores for its processes; it doesn't stop the host OS from being able to use them. So even pinning 3 out of 4 cores for the VM will not slow down Plex that much unless the VM starts doing some heavy number crunching. It really isn't much different from running a Plex server on a bare-metal Windows machine: Plex would work fine, but if you started rendering video files in video-editing software, it would affect Plex, as each process has to "wait its turn" for the CPU.

Now, you can isolate cores from the host OS by adding isolcpus= followed by the thread numbers to your boot configuration. This stops the host system using those threads, and you can then assign them to VMs or Docker containers manually. So yes, that would take them 'out of the game', as they would then only be used by whatever is pinned to them.

The reason it is best not to split hyperthreaded cores between a VM and the host is to avoid context switching between hyperthreads. This wouldn't be so much of a problem for a VM that you don't game on or watch video on.

So anyway, if you just wanted a basic Windows 10 VM for non-gaming use, I would pin just one core to the VM, for example core 4 (3,7). That would limit the VM to that core. I would then pin Plex to the other 3 cores (1-3) using --cpuset-cpus=0,4,1,5,2,6. That would keep the two processes off the same cores.
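As a footnote on the pairing point: you can check which thread numbers are hyperthread siblings on your own CPU straight from sysfs (standard on any Linux host, including unRAID), so you know which pairs to keep together when pinning:

```shell
# For each logical CPU, print its hyperthread sibling list.
# e.g. "cpu0: siblings 0,4" means threads 0 and 4 share one physical core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "cpu${cpu##*cpu}: siblings $(cat "$cpu"/topology/thread_siblings_list)"
done
```

On a 4-core/8-thread CPU this typically prints pairs like 0,4 / 1,5 / 2,6 / 3,7, which is where the "core 4 (3,7)" notation comes from.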

 


Thanks for these. Just set up a gaming VM using the videos as a guide, and the process went without a hitch!

3 hours ago, driph said:


You're welcome. Glad it helped :)


Just watched part 2 last night. Great video, as always!

 

A couple questions / comments:

 

1 - I know that unRAID prefers core 0, so as a result I had avoided pinning VM cores there. In an earlier incarnation of unRAID VMs a couple of years ago (Xen), there were cases where core 0 would get overloaded and unRAID would starve and fail. Is that a concern with KVM? Assuming no, I will start using core 0 for VMs. (I had not been assigning Dockers by core or load, but may start to look at that.)

 

2 - We continue to recommend that cores be pinned together with their hyperthreaded twins. There are cases where multiple cores cause problems and I can only pin one logical core. Does it matter whether I pin core 0 or its twin? What is the negative of assigning one VM to one core, and another VM to its twin? (I know it's not recommended; I just don't understand exactly why, and whether you had done any testing.)

 

3 - Very interesting approach of having multiple Docker containers linked to the same appdata. (I had done this when moving to a different Docker image for the same app.) Makes sense to have a "high transcode" configuration for Plex! You'd want to update the two in tandem (although you could update one, check it to make sure it is working properly, and only then update the other). And be careful not to let them run together, as you said.

 

Thanks!


Thanks. Super helpful video. I have a few questions:

 

1) For what kind of updates do I need to reduce the cores to 1? I did so when installing the Win10 VM, but I'm not sure whether this is required again for the "normal" daily Windows patches and updates, or for the upcoming Creators Update?

 

2) Why do you suggest a small vdisk? Is it per se better to map network drives within the VM for file storage? I am a heavy user of iTunes. Is there any difference or preferred method between having the iTunes library within the vdisk (VM) or on a mapped network drive?

 

3) I am about to start a macOS High Sierra VM (thanks for the guidance in the other thread). When it comes to vdisk size, do I need to start with a large 1 TB vdisk, or is there also some way to do this with a small 70 GB vdisk? It looks like macOS lacks the capability to auto-mount network disks at startup, and the Photos library apparently cannot live on a non-Mac-native file system. So I'd better have it all inside the vdisk?


1 - After the initial install, there is no reason to reduce the core count for normal updates. If you ever have to boot from the ISO image again, it might be needed.

 

2 - This is personal preference to a degree, but the best practice is to keep the vdisk lean and store your data on the array or on an unassigned device, so the vdisk contains the OS and installed programs only. The vdisk is then easier to back up, and all your data is externalized, with the important parts parity-protected.

 

3 - Not a Mac guy, but I'd experiment with options to externalize. You can create a second vdisk if desired. For my Windows VM, I wanted to keep my TEMP folder off the SSD, so I created a vdisk on an unassigned spinner. (Windows was not happy with the TEMP folder on a network drive, even when mapped to a drive letter.) But if you want the iTunes library on your OS vdisk, you can certainly do that. I just like to be able to compress and back up my image file, and having a 1 TB image would be completely impractical for me.
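On the compress-and-back-up point, a minimal sketch, assuming the VM is shut down first; the paths are only examples, adjust for your own shares:

```shell
# Hedged sketch: compress a copy of the OS vdisk for backup.
# Run only while the VM is shut down; paths are examples.
gzip -k -9 /mnt/cache/domains/Windows10/vdisk1.img      # writes vdisk1.img.gz alongside
mv /mnt/cache/domains/Windows10/vdisk1.img.gz /mnt/user/backups/
```

gzip -k keeps the original image in place, and a mostly-empty vdisk compresses down to very little, which is exactly why a lean OS-only image is so much easier to back up than a 1 TB one.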

 

Here is my setup:

C: - vdisk mounted on SSD cache (backed up periodically)

D: - vdisk mounted on unassigned device (temp folder)

E: - mapped to spinner unassigned device

F: - mapped to unassigned SSD - used for VMware VMs I run under Windows.

 

I don't map array disks to drive letters, but can easily write to the array when needed.


As an added tip, I would also consider using direct passthrough of your SSD/NVMe drive to the Windows 10 VM. In the past, whenever I provisioned a W10 VM using the VirtIO SCSI driver and a raw image, I ran into random audio 'crackle and pop' issues, even when using the MSI utility. Passing through an NVMe drive meant I could also use the Samsung NVMe driver for optimal performance.


Thanks for your replies.

 

Besides the inconvenience for backups, is there any other strong reason not to have a 1 TB vdisk on the cache drive? Does it negatively impact performance or bring a higher chance of corruption?

 

I doubt I can get two vdisks to work with macOS. And you are right that macOS doesn't allow me to put my iPhoto library on a mapped network drive (same as what you describe for your TEMP folder).

 

My SSD/NVMe is the cache disk. It carries two vdisks (one is the VM for macOS and one the VM for Win10). Is this not a good setup? With my setup, I don't think I can even pass through the SSD/NVMe, as the vdisks are on it?


Thanks for the very useful videos.

 

I'm not sure if this was addressed in the videos, but for the gaming VM, what are the physical connections? The second video still seems to show the VM running under Splashtop. Would the gaming be done via Splashtop? That seems unlikely to produce near-bare-metal performance.

 

The way I'm thinking about it, for Splashtop or, say, RDP, I just need a PC that is on the same network as the unRAID server (or can "see" it over the Internet). But the monitor of the PC I'm using would be connected to whatever graphics device that PC has, and is therefore totally independent of any graphics card I have in the server. So if I want to use the VM for gaming, would I need a separate monitor, mouse and keyboard, all connected directly to the server via passed-through devices?

 

I'm sure I'm missing something quite fundamental here, but I hope someone can explain what needs to be done.

 


OK, I think I get it now. I took a look at various N-gamers-1-CPU videos, and I see I need the physical connections for the "real" VM, and not the remote desktop one, if that makes sense.


Hello,

 

I've just spent about five hours trying to get the card working via a BIOS dump (both the old guide with a dump via SSH and the new one using GPU-Z or a web download), and I'm not able to get a picture out of the HDMI port to my monitor.

 

I have 2x 1070 cards. When I put the first card in the first motherboard slot, it won't work; when I put the second card in the second slot, it works and I get a picture on the monitor...

 

Is there any way to get it working in the first slot, please?

 

Thanks


Here is a vBIOS I dumped a while ago from a 1070. It should work fine for you; please give it a try.

If it still doesn't work, please post your XML (with the vBIOS added to it) so I can take a look, and also post your IOMMU groups, please.

gtx1070.dump
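For anyone wanting to dump their own card's vBIOS from the unRAID console, a rough sketch via sysfs; the PCI address 0000:09:00.0 is just an example (check yours with lspci), the card must not be in use by a VM, and this can fail for the primary boot GPU, which is why dumping from a second slot is sometimes needed:

```shell
# Rough sketch: read a GPU's vBIOS out of sysfs (run as root, card idle).
# Replace 0000:09:00.0 with your own card's address from lspci.
cd /sys/bus/pci/devices/0000:09:00.0
echo 1 > rom                  # make the ROM readable
cat rom > /tmp/gtx1070.dump   # copy the vBIOS out
echo 0 > rom                  # put it back
```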


Thanks for the BIOS dump, but the same monitor that normally shows the boot screen/unRAID login and such in text mode turns off as soon as I start the Win10 VM with the linked dump file.

 

I assume my dump is right, as I have 10x Zotac 1070 cards, so it should have been done right.

 

The cards have their own groups even with "Enable PCIe ACS Override" set to No:

 

IOMMU group 0:	[1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 1:	[1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 2:	[1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 3:	[1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 4:	[1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 5:	[1022:1453] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 6:	[1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 7:	[1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 8:	[1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 9:	[1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 10:	[1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
IOMMU group 11:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 12:	[1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
[1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
[1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
[1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
[1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
[1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
[1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6
[1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
IOMMU group 13:	[1022:43b9] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43b9 (rev 02)
[1022:43b5] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b5 (rev 02)
[1022:43b0] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b0 (rev 02)
[1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1b21:1343] 06:00.0 USB controller: ASMedia Technology Inc. Device 1343
[8086:1539] 07:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 14:	[10de:1b81] 09:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
[10de:10f0] 09:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
IOMMU group 15:	[10de:1b81] 0a:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
[10de:10f0] 0a:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
IOMMU group 16:	[1022:145a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
IOMMU group 17:	[1022:1456] 0b:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
IOMMU group 18:	[1022:145c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 19:	[1022:1455] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
IOMMU group 20:	[1022:7901] 0c:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 21:	[1022:1457] 0c:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller

 

This setup is with 2 cards (2card1070a.xml).


 

This is what I'm getting in the VM log:

2018-01-01T23:41:56.045938Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124c830, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.045958Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124c838, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.045978Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124c839, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.045997Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124c83a, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.046017Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124c83b, 0xff,1) failed: Device or resource busy
2018-01-01T23:41:56.046037Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7c4, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046048Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7c8, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046062Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7cc, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046073Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7d0, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046093Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7d0, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046108Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7d8, 0xff2f2f2fff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046128Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7e0, 0xff9f9f9fffafafaf,8) failed: Device or resource busy
2018-01-01T23:41:56.046143Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7e8, 0xff000000ff0f0f0f,8) failed: Device or resource busy
2018-01-01T23:41:56.046163Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7f0, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046178Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d7f8, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046198Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d800, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046213Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d808, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046233Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d810, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046248Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d818, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046268Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d820, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046283Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d828, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046303Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d830, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046323Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d838, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.046344Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d839, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.046364Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d83a, 0x0,1) failed: Device or resource busy
2018-01-01T23:41:56.046383Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124d83b, 0xff,1) failed: Device or resource busy
2018-01-01T23:41:56.046404Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7c4, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046414Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7c8, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046429Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7cc, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046439Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7d0, 0xff000000,4) failed: Device or resource busy
2018-01-01T23:41:56.046459Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7d0, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046474Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7d8, 0xffdfdfdfff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046494Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7e0, 0xffffffffffffffff,8) failed: Device or resource busy
2018-01-01T23:41:56.046509Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7e8, 0xff000000ff9f9f9f,8) failed: Device or resource busy
2018-01-01T23:41:56.046529Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7f0, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046544Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e7f8, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046595Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e800, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046612Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e808, 0xff000000ff000000,8) failed: Device or resource busy
2018-01-01T23:41:56.046633Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x124e810, 0xff000000ff000000,8) failed: Device or resource busy

 

2card1070a.xml

Zotac1070mini.rom

Zotac1070mini-mod.dump


Seems like there is an issue with the OS install itself I'm using; I tried another USB with a new install and it's working right away.

 


OK, it seems I found the root cause:

 

When I set the USB flash (unRAID) to boot as UEFI, it gives this error and GPU passthrough doesn't work at all; as soon as I disable it, the GPU works fine...

 

---

 

But now I'm facing another problem: the Win VM can't see unRAID shares. I'm able to see devices on the network, but not shares, even when I type the address manually, which works from other devices...

 

This is never ending :( 

