VirtioFS Support Page

9 minutes ago, johnsanc said:

Wow, nice find. That perfectly explains the behavior I was seeing as well with my VMs.

 

My attempt to create a Windows 10 VM would just sit on a blank screen randomly most of the time. Lo and behold, memory backing on my other VM was the culprit. Since I don't use virtiofs due to my unexplained performance issues, I'm just going to remove the memory backing config altogether. It seems to do more harm than good unless you need it.

Can you try installing the Tips and Tweaks plugin and setting these values lower? Let me know if it helps you with memory backing on.

The help text said to set them to 2 and 3 for gaming and streaming VMs, and that lower values are better if you have a lot of RAM, which my 64 GB probably qualifies as. I won't know for another couple of hours if this has worked.

 

The command cat /proc/buddyinfo is helpful for checking fragmentation. It shows free blocks of various sizes, ranked from smallest to largest, left to right.

If you have a lot of small blocks and few large ones, it indicates fragmentation, based on the research I was doing. The Normal zone is most relevant to VMs.
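To make that output a bit easier to interpret, here is a small awk sketch that sums the free memory in the Normal zone and counts the large (2 MB and up) blocks. The sample line and its counts below are made up for illustration; on a live system, run the same awk program against /proc/buddyinfo instead of the here-doc.

```shell
# Columns after the zone name are free-block counts for orders 0..10,
# i.e. 4 KB, 8 KB, ..., 4 MB blocks on x86-64.
awk '$4 == "Normal" {
    total_kb = 0
    for (i = 5; i <= NF; i++)
        total_kb += $(i) * 4 * 2 ^ (i - 5)        # 4 KB pages per block
    printf "Normal zone free: %d KB, >=2MB blocks: %d\n", total_kb, $(NF - 1) + $NF
}' <<'EOF'
Node 0, zone   Normal   120   80   40   20   10    5    2    1    1    0    0
EOF
```

If the last couple of counts are zero while the first few are large, the free memory is badly fragmented even though plenty of it is technically free.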

 

Edited by mackid1993

I can confirm that after letting my system sit for quite a while, setting

vm.dirty_background_ratio

to 2%

and

vm.dirty_ratio

to 3% solves this memory-related issue and improves VM performance when using memory backing.
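For anyone wanting to try the same thing, applying those values at runtime looks like this (a sketch; the 2/3 values are just what worked in this thread, so tune them for your own RAM size and workload):

```shell
# Lower the dirty-page writeback thresholds (percent of RAM) at runtime.
# These do not persist across a reboot; add the commands to your go file
# (or set them via Tips and Tweaks) to keep them.
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=3

# Confirm the active values
sysctl vm.dirty_background_ratio vm.dirty_ratio
```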

 

@SimonF, this issue may affect more than just me, as @johnsanc was seeing. After a memory-backing-enabled VM is up for a while, other VMs won't start properly. I also personally noticed slowdowns on my main VM that seem related to this. Perhaps Unraid should automatically modify these values when memory backing is in use. Is this something that can be done for 6.13 to improve VirtioFS support further?


I'm not sure if that setting did the trick or not, but either way it sounds like those defaults were not appropriate for my use case.

 

I tried making the changes you listed and let it sit several hours. I started my Windows 10 VM and it hung at the boot screen... so no noticeable change. I have 128 GB of RAM; I originally allocated 96 GB to my Windows 11 VM (with memory backing) and 8 GB to a Windows 10 VM. I dropped my Win11 VM down to 64 GB and now Win10 boots without any issues.

 

So I guess even though I had plenty of memory to go around, something wasn't playing nice. Not sure if maybe I needed to reboot my Win11 VM after changing the two settings you mentioned for it to take effect. Maybe I'll try out a few other memory combinations tomorrow to see if I can reproduce the hanging behavior of my Win10 VM.

 

 

4 minutes ago, johnsanc said:

I'm not sure if that setting did the trick or not, but either way it sounds like those defaults were not appropriate for my use case.

 

I tried making the changes you listed and let it sit several hours. I started my Windows 10 VM and it hung at the boot screen... so no noticeable change. I have 128 GB of RAM; I originally allocated 96 GB to my Windows 11 VM (with memory backing) and 8 GB to a Windows 10 VM. I dropped my Win11 VM down to 64 GB and now Win10 boots without any issues.

 

So I guess even though I had plenty of memory to go around, something wasn't playing nice. Not sure if maybe I needed to reboot my Win11 VM after changing the two settings you mentioned for it to take effect. Maybe I'll try out a few other memory combinations tomorrow to see if I can reproduce the hanging behavior of my Win10 VM.

 

 

Try rebooting your server to clear out the RAM or running 

echo 3 > /proc/sys/vm/drop_caches

and

echo 1 > /proc/sys/vm/compact_memory

in a terminal. Then let it sit with the 96 GB Win 11 VM for a few hours and see if your Win 10 VM will start.

 

I had this exact behavior: memory fragmentation made it so that when I reduced the RAM on my Win 11 VM from 32 GB to 16 GB, my other VMs would start. I think after you make that change in Tips and Tweaks you still need to clear out everything that built up and never got released, either by running those commands or by rebooting.

5 hours ago, mackid1993 said:

I can confirm that after letting my system sit for quite a while, setting

vm.dirty_background_ratio

to 2%

and

vm.dirty_ratio

to 3% solves this memory-related issue and improves VM performance when using memory backing.

 

@SimonF, this issue may affect more than just me, as @johnsanc was seeing. After a memory-backing-enabled VM is up for a while, other VMs won't start properly. I also personally noticed slowdowns on my main VM that seem related to this. Perhaps Unraid should automatically modify these values when memory backing is in use. Is this something that can be done for 6.13 to improve VirtioFS support further?

Do you know the default values set on your system?

 

My defaults are

 

root@computenode:~# cat   /proc/sys/vm/dirty_ratio 
20
root@computenode:~# cat   /proc/sys/vm/dirty_background_ratio 
10
root@computenode:~# 

6 hours ago, SimonF said:

Do you know the default values set on your system?

 

My defaults are

 

root@computenode:~# cat   /proc/sys/vm/dirty_ratio 
20
root@computenode:~# cat   /proc/sys/vm/dirty_background_ratio 
10
root@computenode:~# 

They were 20 and 10 as well, but after letting it sit another 10 hours it got slow again. It almost seemed like that made it last longer. ☹️


I let my system sit a bit longer and took:

   <source type='memfd'/>
   <access mode='shared'/>
out of the XML. At this point the non-VirtioFS VMs don't boot at all. As soon as I add the shared memory backing back in, they boot right away.

It seems that when memory backing is in use on one VM, it needs to be enabled on all VMs; otherwise, after the memory usage settles in, other VMs will not start.
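For reference, the element being added or removed here is libvirt's memory-backing block; inside the domain XML it looks like this (a sketch of just that element):

```xml
<!-- inside <domain>: shared memfd backing, which virtiofs requires -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
```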

 

Edit:

Testing further, I set vm.min_free_kbytes to 1% of my installed memory. I added:

sysctl -w vm.min_free_kbytes=671089

to my go file and rebooted to clear everything out. It was very low before; I'm hoping this addresses some of the fragmentation issues. Regardless, enabling memfd and shared on the other VMs fixed everything, but I don't believe it gets to the root of the problem.
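A quick way to compute that 1% figure for any box: take the MemTotal line from /proc/meminfo and divide by 100. The MemTotal value below is a sample for illustration; on a live system, point the awk at /proc/meminfo itself.

```shell
# Print 1% of installed RAM in kB, a candidate value for vm.min_free_kbytes
awk '/^MemTotal/ {printf "vm.min_free_kbytes=%d\n", $2 / 100}' <<'EOF'
MemTotal:       65949064 kB
EOF
```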

Edited by mackid1993

I'm getting a weird issue in Windows 11 where executables will not run from a VirtioFS-mounted drive when mounting it with the script from the recommended post. Instead, Windows shows a network error stating that it cannot access the file path. This only happens with some executables, though, not all. I have tried to follow the directions on the GitLab page to see if I can get it working that way, but the commands always return "access denied".

 

In one case, a file gave an error that Windows cannot access the temporary version of the executable in the %appdata%\local\temp folder, and another gave a "ShellExecuteEx failed; code 1203" error. But in most cases the error just says that Windows cannot access that file path. This does not happen if the executables are run from the vdisk.

 

Does anyone have any theories as to what might be going on?

 

Edit: just checked, error 1203 is "ERROR_NO_NET_OR_BAD_PATH: The network path was either typed incorrectly, does not exist, or the network provider is not currently available. Please try retyping the path or contact your network administrator." That's basically the same as the other issues: Windows can't access the file. When using network shares, I've had to use a registry edit that allows the system to see the shares, which is necessary to run executables. Could there be something similar going on here?

Edited by sixkittens
On 4/28/2024 at 11:09 AM, mackid1993 said:

One thing to note: some Docker containers may try to steal your hugepages! Apache and Postgres are two that will; if you have those installed, it would be prudent to assign a little extra RAM to prevent a condition where the VM won't start.

 

Check current free hugepages like this:

 

grep -i hugepages /proc/meminfo
HugePages_Total:   16384
HugePages_Free:    10216
HugePages_Rsvd:      175
HugePages_Surp:        0
Hugepagesize:       2048 kB

 

It looks like you can also just use this to set aside/adjust allocated memory for hugepages

 

virsh allocpages 2M 8192
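To put those meminfo numbers in more familiar units, a small awk pass converts free hugepages to MB (the sample values are reproduced from the output above; on a live system, run the awk against /proc/meminfo):

```shell
awk '/^HugePages_Free/ {free = $2}
     /^Hugepagesize/   {kb = $2}
     END {printf "Free hugepage memory: %d MB\n", free * kb / 1024}' <<'EOF'
HugePages_Total:   16384
HugePages_Free:    10216
HugePages_Rsvd:      175
HugePages_Surp:        0
Hugepagesize:       2048 kB
EOF
```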

 

Edited by jortan
12 hours ago, sixkittens said:

I'm getting a weird issue in Windows 11 where executables will not run from a VirtioFS-mounted drive when mounting it with the script from the recommended post. Instead, Windows shows a network error stating that it cannot access the file path. This only happens with some executables, though, not all. I have tried to follow the directions on the GitLab page to see if I can get it working that way, but the commands always return "access denied".

 

In one case, a file gave an error that Windows cannot access the temporary version of the executable in the %appdata%\local\temp folder, and another gave a "ShellExecuteEx failed; code 1203" error. But in most cases the error just says that Windows cannot access that file path. This does not happen if the executables are run from the vdisk.

 

Does anyone have any theories as to what might be going on?

 

Edit: just checked, error 1203 is "ERROR_NO_NET_OR_BAD_PATH: The network path was either typed incorrectly, does not exist, or the network provider is not currently available. Please try retyping the path or contact your network administrator." That's basically the same as the other issues: Windows can't access the file. When using network shares, I've had to use a registry edit that allows the system to see the shares, which is necessary to run executables. Could there be something similar going on here?

I don't think that's going to work. I tried to run some Steam games off a VirtioFS mount and they don't work, which makes sense: it isn't an NTFS drive, it's FUSE, and it's not really optimized for running programs. It may also be a permissions issue; at least with my script, Virtiofs.exe is running as SYSTEM.

Edited by mackid1993

I created a Windows 10 23H2 VM with a virtiofs share using Spaceinvader One's guide successfully, but I ran into an issue trying to run HyperSpin from the share. So I shut down the VM and removed the virtiofs share from the VM's GUI. Now my VM starts up with Automatic Repair because it blue-screens with Critical Process Died. Does anyone have any idea how to fix this so the VM boots back into Windows?

On 5/3/2024 at 3:28 AM, drtechnolust said:

I created a Windows 10 23H2 VM with a virtiofs share using Spaceinvader One's guide successfully, but I ran into an issue trying to run HyperSpin from the share. So I shut down the VM and removed the virtiofs share from the VM's GUI. Now my VM starts up with Automatic Repair because it blue-screens with Critical Process Died. Does anyone have any idea how to fix this so the VM boots back into Windows?

Maybe try booting into a Windows installer and uninstalling WinFsp from the command line to disable virtiofs. You'll have to search around for the exact commands, but I believe DISM can uninstall programs. A blue screen is an issue with Windows, and I personally have never seen virtiofs cause one. I suggest keeping regular backups to make recovering from issues like this easier. I use Macrium Reflect Home Edition to take images of my VM and keep an ISO of its rescue media, with the VirtIO drivers baked in, in /mnt/user/isos so I can easily revert my VM to my last backup if I break something.

On 5/1/2024 at 1:25 AM, jortan said:

 

Check current free hugepages like this:

 

grep -i hugepages /proc/meminfo
HugePages_Total:   16384
HugePages_Free:    10216
HugePages_Rsvd:      175
HugePages_Surp:        0
Hugepagesize:       2048 kB

 

It looks like you can also just use this to set aside/adjust allocated memory for hugepages

 

virsh allocpages 2M 8192

 

So you wouldn't want to do this in Unraid. There are many ways to set hugepages, but setting the argument in the Syslinux config makes it persistent across every boot. You could add virsh allocpages to your go file and that should work as well, but I think adding the arguments to the Syslinux config is cleaner, especially since nothing in Unraid persists unless it is stored on the flash drive.
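As a sketch of what that Syslinux entry can look like (the label, kernel, and initrd lines mirror Unraid's stock syslinux.cfg; hugepagesz and hugepages are standard kernel parameters, and 8192 × 2 MB reserves 16 GB here, so size the count to your own VMs):

```
label Unraid OS
  menu default
  kernel /bzimage
  append hugepagesz=2M hugepages=8192 initrd=/bzroot
```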

 

Moreover I wouldn't suggest using the default settings for Virtiofs because of the performance issues that build up as the memory usage settles in. Enabling hugepages makes a huge performance difference.

1 hour ago, mackid1993 said:

So you wouldn't want to do this in Unraid. There are many ways to set hugepages, but setting the argument in the Syslinux config makes it persistent across every boot. You could add virsh allocpages to your go file and that should work as well, but I think adding the arguments to the Syslinux config is cleaner, especially since nothing in Unraid persists unless it is stored on the flash drive.

 

Moreover I wouldn't suggest using the default settings for Virtiofs because of the performance issues that build up as the memory usage settles in. Enabling hugepages makes a huge performance difference.

 

Agreed; my point was that you can see the available hugepages with the above command and then adjust if needed without an immediate reboot, i.e. if you are testing something, a Docker container stole your hugepages, etc.

10 minutes ago, jortan said:

 

Agreed; my point was that you can see the available hugepages with the above command and then adjust if needed without an immediate reboot, i.e. if you are testing something, a Docker container stole your hugepages, etc.

Oh that's a good point. When I get up tomorrow I'll add a note about that in my guide and mention you. Thanks!


@jortan I took a look at virsh allocpages and I wouldn't suggest using it unless it's done right after a fresh boot. I have 128 GB of RAM in my server and, as a test, tried to allocate another 4096 2 MB pages; it failed, likely due to fragmentation. I could drop caches and compact, but I'm going to stand by setting this at boot, and either allocating a little extra for Docker containers and capping the memory of offending containers, or finding a way in a particular container's config to disable hugepages.

 

It seems the majority of Docker containers that use hugepages by default are database engines like Postgres and phpMyAdmin. I read Apache was another culprit, but I'm not sure if that was someone using Apache with MySQL, which would make more sense than Apache on its own.

 

I'd say if you aren't running a database, containers stealing hugepages shouldn't be an issue for you, and if you are running a database, it's probably smart to allocate hugepages for it so it performs better! It's a win-win; hugepages rock!
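On the Postgres point: if you'd rather keep a database container away from the hugepage pool you reserved for VMs, PostgreSQL exposes this directly in postgresql.conf (a sketch; 'try' is the default, which is what grabs host hugepages opportunistically):

```
# postgresql.conf -- hugepage behavior for the server's shared memory
# 'try' (default) uses host hugepages when available; 'off' never does
huge_pages = off
```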

Edited by mackid1993
On 5/1/2024 at 9:43 AM, mackid1993 said:

I don't think that's going to work. I tried to run some Steam games off a VirtioFS mount and they don't work, which makes sense: it isn't an NTFS drive, it's FUSE, and it's not really optimized for running programs. It may also be a permissions issue; at least with my script, Virtiofs.exe is running as SYSTEM.

I was able to load games off a mounted share with no issue, but the read speed might make it less attractive for bigger games (BG3, for example).

 

Lately my Windows 11 VM has been using 100% of one of the cores on my system (core 1) for reasons I can't figure out. btop just shows qemu with no other clues. I unmounted the VFS share and the same thing happened, so it's not that at least.

5 minutes ago, sage2050 said:

I was able to load games off a mounted share with no issue, but the read speed might make it less attractive for bigger games (BG3, for example).

 

Lately my Windows 11 VM has been using 100% of one of the cores on my system (core 1) for reasons I can't figure out. btop just shows qemu with no other clues. I unmounted the VFS share and the same thing happened, so it's not that at least.

If you remove the virtiofs shares, and also <source type='memfd'/> <access mode='shared'/> from <memoryBacking> in your XML, does it go away?

If so, you may want to see if hugepages help.

I was having one core randomly spiking, and my mainboard temp spiking as well, and since I switched to hugepages it seems to have stopped. It's super weird, but from what I've seen I wouldn't use Virtiofs without hugepages.

 

You also generally don't want your lowest CPU core pinned to the VM, Unraid uses that.

Edited by mackid1993
7 minutes ago, sage2050 said:

That does seem like it cleared it up, thanks for that. I'll go ahead and add the share back and try out hugepages tonight.

That should help you. It makes a big difference and is quite easy to set up. Just make sure you have enough memory to reserve.


Got hugepages set up and the CPU issues are completely gone. I had some Docker containers using some of the memory, apparently; I had to guess at the size until I saw the command above and resized accordingly. Is there any way to tell with certainty which containers are the culprits? I have a small Postgres running, but it doesn't account for all the reservations.

11 hours ago, sage2050 said:

Got hugepages set up and the CPU issues are completely gone. I had some Docker containers using some of the memory, apparently; I had to guess at the size until I saw the command above and resized accordingly. Is there any way to tell with certainty which containers are the culprits? I have a small Postgres running, but it doesn't account for all the reservations.

Trial and error. Allocate enough just for the VM, start containers one by one, then try to start the VM. Whichever ones make the VM not run are using hugepages. If you can, post them here and let us know; I'll update my post.
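One way to make that trial and error a little quicker: a tiny helper (hypothetical, just a wrapper around awk) that prints HugePages_Free, so you can note the value before and after starting each container. A drop in the number means that container took hugepages.

```shell
# Reads /proc/meminfo by default; pass a file path (e.g. a saved sample)
# to test it offline.
free_hp() {
    awk '/^HugePages_Free/ {print $2}' "${1:-/proc/meminfo}"
}
```

Usage: run `free_hp`, start a suspect container, run `free_hp` again, and compare the two numbers.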

  • 3 weeks later...
On 5/5/2024 at 1:42 AM, mackid1993 said:

Maybe try booting into a Windows installer and uninstalling WinFsp from the command line to disable virtiofs. You'll have to search around for the exact commands, but I believe DISM can uninstall programs. A blue screen is an issue with Windows, and I personally have never seen virtiofs cause one. I suggest keeping regular backups to make recovering from issues like this easier. I use Macrium Reflect Home Edition to take images of my VM and keep an ISO of its rescue media, with the VirtIO drivers baked in, in /mnt/user/isos so I can easily revert my VM to my last backup if I break something.

@mackid1993 Firstly, thank you for all that you do; you have made the VirtioFS setup a breeze!

 

In your comment you mentioned Macrium Reflect. I'm currently trying to back up my VM to a folder that is mounted via your virtiofs script. The problem I'm having is that when Macrium attempts to write to the mapped drive, it fails with "error - system cannot find the path specified". When I return to Windows Explorer, the mapped drive is no longer there. If I re-run the virtiofs script the drive reappears, but it fails again if I try to back up via Macrium.

 

I can create and open files within the mapped drive as normal, but it seems Macrium cannot use the virtiofs mapping.

Can you share any knowledge on this? Much appreciated in advance.

