mackid1993's Achievements


Rookie (2/14)



  1. OK, so you can change your CPU to this emulated model and everything will work, but there is a performance hit. Here is the XML; you just have to edit the topology to match your core count. I did this on a 12700K, so I gave it 20 cores. I no longer run this due to the performance issues.

     <cpu mode='custom' match='exact' check='partial'>
       <model fallback='allow'>Skylake-Client-noTSX-IBRS</model>
       <topology sockets='1' dies='1' cores='20' threads='1'/>
       <feature policy='disable' name='hypervisor'/>
       <feature policy='require' name='vmx'/>
       <feature policy='disable' name='mpx'/>
     </cpu>
  2. Thanks @SimonF! Any idea if we'll see it before the end of the year?
  3. No problem. I just want to caution again against Virtiofs on Windows at this time; it's known to cause lockups. We are hoping that when Limetech upgrades QEMU to 7.2 or newer it will be more stable. I recently set up a VLAN on my server and bound a secondary virtual NIC to it, so SMB traffic between my VM and Unraid stays within the server and doesn't depend on my home network.
  4. Any news when we'll see a bump for qemu? I'm hoping we can really use Virtiofs soon on Windows.
  5. I scripted the setup here:

     sc stop VirtioFsSvc
     ping
     sc config VirtioFsSvc start=demand
     ping
     cmd /c ""C:\Program Files (x86)\WinFsp\bin\fsreg.bat" virtiofs "C:\Program Files\Virtio-Win\VioFS\virtiofs.exe" "-t %%1 -m %%2""
     echo Confirm data was properly entered into HKLM\SOFTWARE\WOW6432Node\WinFsp\Services\virtiofs
     pause

     This is the mount script I use:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsJ tag1 J:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsl tag2 l:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsM tag3 m:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsS tag4 s:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsT tag5 T:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsU tag6 U:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsV tag7 V:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsY tag8 Y:

     Beware of the current issues with Virtiofs: we are waiting for QEMU to be upgraded to 7.2, which should patch a non-paged pool memory leak that causes Windows to lock up.

     Edit: I forgot to add my unmount script for anyone who may need it:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsJ
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsl
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsM
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsS
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsT
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsU
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsV
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsY

     Just save any of these into a batch file (obviously modify with your tags and preferred drive letters) and run it.
I like to set my mount script to run as a scheduled task every hour, just in case something breaks and it unmounts.
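For anyone who wants to create that hourly task from the command line rather than through the Task Scheduler GUI, a rough sketch using the built-in schtasks tool (the task name and the script path C:\scripts\virtiofs-mount.bat are placeholders; point it at wherever you saved your own mount batch file):

```shell
REM Re-run the virtiofs mount script every hour, so drives come back
REM automatically if something breaks and they unmount.
REM Task name and script path below are placeholders -- adjust to taste.
schtasks /create /tn "VirtioFS Remount" /tr "C:\scripts\virtiofs-mount.bat" /sc hourly /ru SYSTEM
```

Running it as SYSTEM keeps a console window from popping up every hour; drop /ru SYSTEM if you'd rather it run under your own account.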
  6. I think I figured this out. First I created a VLAN called br0.2, and Unraid was assigned the IP. I modified my SMB extras and added:

     bind interfaces only = yes
     interfaces = lo br0 br0.2

     Then I added a second NIC to my VM and bound it to br0.2. On my Windows VM I mapped my shares using \\\share. It seems the traffic is now staying within Unraid! If anyone sees this I'd appreciate any feedback; I hope I got this figured out right.
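For anyone following along, the drive-mapping step inside the VM can be sketched like this (10.0.2.10 and "yourshare" are made-up examples; substitute whatever IP Unraid got on the VLAN and a real share name):

```shell
REM Map an Unraid share via the VLAN-bound interface so SMB traffic
REM stays inside the server instead of crossing the physical network.
REM The IP and share name are placeholders -- use your own.
net use Z: \\10.0.2.10\yourshare /persistent:yes
```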
  7. Pardon my ignorance; I'm hoping someone could guide me here or point me in the right direction. What is the best way to set up the networking between my Windows VM and Unraid? Is there a way to keep traffic between my Unraid shares and the Windows VM inside the Unraid server, so bandwidth isn't bottlenecked by my gigabit router? I know the virtual NIC Unraid uses is 10 Gbit, so I'm wondering how to configure things so traffic internal to the server gets the full 10 Gbit of throughput. For example, if I access a file on one of my Unraid shares mounted over SMB, I don't want the rest of my network to be a bottleneck. Can this be done with a VLAN? I just don't know how to do it, and I'm hoping there is a guide somewhere that can help. Thanks!!
  8. FYI, this is supposed to be fixed in the new version of QEMU, which we are supposed to get in 6.13. It's a memory leak: if you watch Task Manager, the non-paged pool will slowly balloon until Windows runs out of memory.
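If you'd rather track the leak numerically than eyeball Task Manager, Windows ships a performance counter for it; a minimal sketch:

```shell
REM Sample the non-paged pool size every 60 seconds, 10 samples total.
REM A steadily climbing value while virtiofs is mounted points to the leak.
typeperf "\Memory\Pool Nonpaged Bytes" -si 60 -sc 10
```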
  9. I got around to doing a bit more testing and there definitely is a performance hit from this.
  10. Any word as to when we'll see QEMU 7.2? I'm dying to use this feature in Windows to overcome the SMB bottleneck.
  11. I'm not really seeing a difference. I also ran comparisons with Geekbench, and the performance difference seems negligible; I actually got a lower score with host passthrough than with the emulated CPU, which was probably just chance. If I'm right, maybe this could be added as an option in the GUI down the line. I have WSL working fine, and my server is sitting at idle right now with the VM running in this configuration. I'm not home right now, so it's got to be 80 degrees Fahrenheit in my apartment with the AC off, and my CPU is currently sitting at 37 degrees Celsius. From my testing I see no difference other than nested virtualization working. If there is anything I can do to help test performance, let me know! Edit: If this turns out to be a viable solution, it may be helpful to make a more visible thread for other users. I've seen a bunch of threads on this issue, and it would help to consolidate the discussion in one place.
  12. My CPU usage is fairly low. I also ran Geekbench, and performance was close to the bare-metal 12700K baseline. Edit: To test, I rolled everything back and ran Geekbench again; I actually got a lower score with the default XML for some reason.
  13. Sorry to dig up this old thread; I've been following it for a while and was able to find a solution for my i7 12700K with its help. Try modifying this section of your XML:

      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
      </cpu>

      Change it to this, but be sure to keep your own <topology> line so the core count doesn't change:

      <cpu mode='custom' match='exact' check='partial'>
        <model fallback='allow'>Skylake-Client-noTSX-IBRS</model>
        <topology sockets='1' dies='1' cores='20' threads='1'/>
        <feature policy='disable' name='hypervisor'/>
        <feature policy='require' name='vmx'/>
        <feature policy='disable' name='mpx'/>
      </cpu>

      I also had to add <feature policy='disable' name='mpx'/>, which wasn't in the superuser post; I was getting an error when booting my VM, so some modification may be needed depending on CPU model. With this I was able to get WSL2 working! I'm not sure whether it will greatly impact performance. I'm hoping someone more knowledgeable like @SimonF may be willing to comment, as I've found him to be all-knowing when it comes to Unraid and VMs!
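If you prefer the CLI over the Unraid GUI's XML view, the same edit can be applied with libvirt's virsh (the domain name "Windows11" is a placeholder for whatever your VM is called):

```shell
# Open the domain XML in $EDITOR; libvirt validates the XML on save.
virsh edit Windows11

# After restarting the VM, confirm the custom CPU model took effect.
virsh dumpxml Windows11 | grep -A2 "<cpu"
```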
  14. Thanks for the update. Hopefully we don't have to wait super long.
  15. It must be a 6.12 bug. Personally it's not causing an issue for me.