VirtioFS Support Page


Recommended Posts

Rust binary in /usr/bin

Bash in /usr/libexec

 

Attached php file in /usr/local/emhttp/plugins/dynamix.vm.manager/scripts

 

chmod +x for bash and php.

 

path in XML should be /usr/libexec/virtiofsd

 

Should work on 6.12 but I have not tested on that release.
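For anyone wiring this up by hand, a libvirt `<filesystem>` stanza using that path might look like the sketch below (the share names are placeholders of mine, not from the post, and the VM also needs shared memory backing for virtiofs):

```xml
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <binary path='/usr/libexec/virtiofsd'/>
  <source dir='/mnt/user/example-share'/>
  <target dir='example-share'/>
</filesystem>
```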

 

virtiofsd.php

Link to comment
5 hours ago, SimonF said:

Rust binary in /usr/bin

Bash in /usr/libexec

 

Attached php file in /usr/local/emhttp/plugins/dynamix.vm.manager/scripts

 

chmod +x for bash and php.

 

path in XML should be /usr/libexec/virtiofsd

 

Should work on 6.12 but I have not tested on that release.

 

virtiofsd.php 2.09 kB · 1 download

Tested and working on 6.12. If this helps anyone I added the following to my go file (/boot/config/go).

Rust virtiofsd and virtiofsd.php are in /boot/virtiofsd and Simon's bash file is in /boot/virtiofsd/bash

 

#copy virtiofsd bash wrapper
mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
cp /boot/virtiofsd/bash/virtiofsd /usr/libexec/virtiofsd 
chmod +x /usr/libexec/virtiofsd 

#copy php wrapper
cp /boot/virtiofsd/virtiofsd.php  /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php
chmod +x /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php

#copy rust virtiofsd
cp /boot/virtiofsd/virtiofsd /usr/bin/virtiofsd
chmod +x /usr/bin/virtiofsd

 

I have a parity check running; once it finishes in a few hours I'm going to reboot my server. If I made a typo I'll update this post. Everything was good after a reboot.
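The copy-and-chmod steps above can be wrapped in a small helper so a failed copy doesn't leave a half-installed wrapper (a sketch of my own; `install_exec` is my name, not from the post):

```shell
#!/bin/bash
# install_exec SRC DST: copy SRC to DST and mark it executable,
# keeping any existing DST as DST.old (as the mv in the go file does).
# Fails loudly if SRC is missing instead of silently installing nothing.
install_exec() {
  local src="$1" dst="$2"
  if [ ! -f "$src" ]; then
    echo "missing: $src" >&2
    return 1
  fi
  if [ -e "$dst" ]; then
    mv "$dst" "$dst.old"   # preserve the stock file
  fi
  cp "$src" "$dst" && chmod +x "$dst"
}
```

With that in place, each copy section becomes a single call, e.g. `install_exec /boot/virtiofsd/virtiofsd /usr/bin/virtiofsd`.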

Edited by mackid1993
  • Like 1
Link to comment
22 hours ago, mackid1993 said:

Do you have any cores isolated from the host and pinned to your VM? I found core isolation makes a huge performance difference for me, even if it's only a couple of cores, and I still let the VM access the rest that aren't isolated.

 

Also just to throw out the obvious things, have you checked for any BIOS updates for your motherboard?


Thanks for trying to help, but yeah, I do have cores isolated from the host and pinned to the VM. It's one of the first things I adjusted. I even tried different combinations of cores isolated and pinned, with the same results.

BIOS updates are another issue... I tried that weeks ago, and any BIOS update past the one I'm currently on simply doesn't work correctly: I cannot access UEFI at all. Oddly enough, the firmware update must have worked since I can boot into my Windows NVMe just fine; I just cannot access UEFI to change anything to get Unraid to boot from USB... so that's a no-go. In case you were curious: https://forum.asrock.com/forum_posts.asp?TID=32460&PID=114573

Link to comment
16 minutes ago, johnsanc said:


Thanks for trying to help, but yeah, I do have cores isolated from the host and pinned to the VM. It's one of the first things I adjusted. I even tried different combinations of cores isolated and pinned, with the same results.

BIOS updates are another issue... I tried that weeks ago, and any BIOS update past the one I'm currently on simply doesn't work correctly: I cannot access UEFI at all. Oddly enough, the firmware update must have worked since I can boot into my Windows NVMe just fine; I just cannot access UEFI to change anything to get Unraid to boot from USB... so that's a no-go. In case you were curious: https://forum.asrock.com/forum_posts.asp?TID=32460&PID=114573

Idk, this sounds like an issue with your specific hardware not with Virtiofs or Unraid.

 

Edit:

 

@johnsanc As a side note, both times I built a PC with an Asrock board it had weird UEFI issues and trouble booting. My old desktop had an Asrock board with UEFI issues; at one point I had to swap the BIOS chip because the ME region got corrupted.

 

I built a machine for a family member with an Asrock board that at times would just refuse to boot. After those experiences I now stick with Gigabyte and Asus. Asrock isn't great based on my experience, so I'm not surprised you flashed your BIOS and can't access it now. You may need to go on eBay and buy a new BIOS chip; they are very easy to change on most boards.

 

I also wonder, are you using an HBA or onboard SATA for your drives? If onboard SATA I'd wonder if something is up with your motherboard.

Edited by mackid1993
Link to comment

I'm seeing over 200 MB/s reading from my array and, depending on the transfer, 600-800 MB/s NVMe to NVMe. SMB can be faster with NVMe, but having no network overhead is nice.

 

So performance isn't great for NVMe; this is CrystalDiskMark on my appdata share, which is a Samsung 980 Pro. It's clearly not optimized for speed. It can certainly saturate an Unraid array though.

[Image: CrystalDiskMark benchmark screenshot]

Edited by mackid1993
added nvme benchmarks
Link to comment
On 3/24/2024 at 2:10 PM, mackid1993 said:

Are you running it with @jch's shell script. That made a difference for me.

Yeah, this is using the scripts posted. Same speed using the old and new version of virtiofs, ~650 MB/s. Can someone post a CrystalDiskMark of their VM maxing out an NVMe? Does the fact that I'm running a vdisk instead of bare metal affect my speeds?

Edited by sage2050
Link to comment
4 hours ago, sage2050 said:

Yeah, this is using the scripts posted. Same speed using the old and new version of virtiofs, ~650 MB/s. Can someone post a CrystalDiskMark of their VM maxing out an NVMe? Does the fact that I'm running a vdisk instead of bare metal affect my speeds?

I shared a CrystalDiskMark of my NVMe; scroll up one post. It's not comparable to bare metal or NVMe passthrough, but it's fast enough for most use cases and will certainly max out an Unraid array. Virtiofs (at least on Windows) is not optimized for performance; this is per the virtio-win devs.

Link to comment
On 4/1/2024 at 9:48 PM, mackid1993 said:

Tested and working on 6.12. If this helps anyone I added the following to my go file (/boot/config/go).

Rust virtiofsd and virtiofsd.php are in /boot/virtiofsd and Simon's bash file is in /boot/virtiofsd/bash

 

#copy virtiofsd bash wrapper
mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
cp /boot/virtiofsd/bash/virtiofsd /usr/libexec/virtiofsd 
chmod +x /usr/libexec/virtiofsd 

#copy php wrapper
cp /boot/virtiofsd/virtiofsd.php  /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php
chmod +x /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php

#copy rust virtiofsd
cp /boot/virtiofsd/virtiofsd /usr/bin/virtiofsd
chmod +x /usr/bin/virtiofsd

 

I have a parity check running; once it finishes in a few hours I'm going to reboot my server. If I made a typo I'll update this post. Everything was good after a reboot.

The 6.13 implementation is slightly changed: /usr/libexec/virtiofsd is now a symlink to the /usr/local/sbin/virtiofsd wrapper. The rest is the same, so there's no functional change, just where the files are.

Link to comment
On 4/6/2024 at 11:05 AM, SimonF said:

The 6.13 implementation is slightly changed: /usr/libexec/virtiofsd is now a symlink to the /usr/local/sbin/virtiofsd wrapper. The rest is the same, so there's no functional change, just where the files are.

OK. Before I update to 6.13 I'm going to take everything out of my go file regardless, so it should all just work, right?

Link to comment
  • 2 weeks later...
On 4/8/2024 at 3:57 PM, SimonF said:

Yes

Hey @SimonF I decided to upgrade to 6.13 beta1 but noticed your php wrapper and bash script aren't there. Are those going to be in beta2? Just curious so I can comment those lines out when I upgrade. I modified my go file to not copy rust virtiofsd, but it looks like I still have to manually copy your bash script and php wrapper for now.

Link to comment
3 hours ago, mackid1993 said:

Hey @SimonF I decided to upgrade to 6.13 beta1 but noticed your php wrapper and bash script aren't there. Are those going to be in beta2? Just curious so I can comment those lines out when I upgrade. I modified my go file to not copy rust virtiofsd, but it looks like I still have to manually copy your bash script and php wrapper for now.

Yes, not in beta 1; it will be in the next version.

  • Like 1
Link to comment

Hey @SimonF I found a pretty interesting bug. It's been driving me nuts for days; it only happens after 8-12 hours of server uptime, and it's related to memory backing.

 

My configuration is my main Windows 11 VM with Virtiofs enabled (several mounts) with 32 GB of RAM assigned. I then have 2 test VMs without Virtiofs or memory backing, one is the Windows 11 Dev Channel and the other is the Canary channel each with 8GB of RAM. All VMs have access to all 20 threads of my 12700k. I recently set these two Dev and Canary VMs up which is when I noticed the problem. This is happening in 6.12.10 and 6.13 beta 1 so it has nothing to do with Unraid version or your php/bash script or even Virtiofsd. I even found that my memory was failing memtest and after replacing it the issue still occurs. I believe it has to do with memory backing being enabled on a VM.

 

What happens is that after the server has been up for a while, 8-12 hours or so, with my memory-backing-enabled VM booted, I can boot a single VM with 8 GB of RAM, but when I go to boot the second VM with 8 GB of RAM it hangs on boot. At worst it has even crashed QEMU after my server was up for a while. At one point I even had all cores on my CPU pegged at 100%. What I found was that when I made an identical copy of my main Win 11 VM without any Virtiofs or memory backing, the issue goes away. Moreover, when it's happening and I run pkill virtiofsd it doesn't clear up right away, which tells me it's a QEMU bug and not a virtiofsd bug.

 

Interestingly, if I drop the memory on my main Win 11 VM with memory backing/virtiofs to 16 GB, the issue also clears up. This entire time my server has at least 14 GB of RAM free, so it's not like I'm out of memory; I have 64 GB. If I drop the two 8 GB VMs to 4 GB each, they boot immediately. Nothing interesting is in the logs from what I can tell. It's just a super weird edge-case bug; I'm not sure how to open a report or where to open it.

 

 

Edited by mackid1993
Link to comment
4 minutes ago, mackid1993 said:

This entire time my server will have at least 14 GB of RAM free so it's not like I'm out of memory,

Perhaps out of unfragmented memory. Some operations require contiguous blocks, and over time more and more addresses can be tied up and unable to be reallocated, even if the total amount free is plenty. 
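One way to check for this (my own illustration, not from the thread) is `/proc/buddyinfo`: each column counts free blocks of a given order, so plenty of total free memory but near-zero counts in the high orders means the free RAM is fragmented. A sketch that sums the high-order free memory, run here against a sample line:

```shell
#!/bin/bash
# Columns in /proc/buddyinfo are counts of free blocks of order 0..10,
# where an order-i block is 4 KiB * 2^i. Lots of order-0 pages but few
# high-order blocks is the fragmentation described above.
sample="Node 0, zone   Normal  4096  2048  1024   512   128    32     8     2     0     0     0"

high_order_kib() {
  awk '{
    kib = 0
    for (i = 0; i <= 10; i++) {
      n = $(NF - 10 + i)              # count of free order-i blocks
      if (i >= 5) kib += n * 4 * 2^i  # only blocks of 128 KiB and up
    }
    print kib
  }'
}

echo "$sample" | high_order_kib   # prints 7168 for the sample line
```

On a live server, `high_order_kib < /proc/buddyinfo` would print one total per zone.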

Link to comment
Just now, JonathanM said:

Perhaps out of unfragmented memory. Some operations require contiguous blocks, and over time more and more addresses can be tied up and unable to be reallocated, even if the total amount free is plenty. 

Oh! I never thought of that! Is there a way I can test for that? A command I can run? If so I'll increase the ram in my server to mitigate.

Link to comment
2 minutes ago, mackid1993 said:

Oh! I never thought of that! Is there a way I can test for that? A command I can run? If so I'll increase the ram in my server to mitigate.

After a minute of googling (as in, no real research) I found this which may or may not do something in Unraid, haven't tried it, so use at your own risk, it was billed as a "linux" solution.

Quote

echo 3 > /proc/sys/vm/drop_caches

and

echo 1 > /proc/sys/vm/compact_memory

Apparently this (a) clears speculative data that was cached and (b) consolidates the in-use memory.

 

If you are game to try this, execute at a point where the VM would fail to launch.

 

To repeat, I HAVE NO CLUE IF THIS WILL DO BAD THINGS TO UNRAID. 
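The two commands can be combined into one cleanup script (a sketch of my own; it only prints the plan, since writing to /proc/sys needs root and drops useful caches):

```shell
#!/bin/bash
# The cleanup sequence from the post above, with a sync first as a
# common precaution so dirty pages hit disk before caches are dropped.
# This version only prints each step; swap the echo for eval
# (as root, at your own risk) to actually run them.
cleanup_cmds=(
  "sync"
  "echo 3 > /proc/sys/vm/drop_caches"
  "echo 1 > /proc/sys/vm/compact_memory"
)
for cmd in "${cleanup_cmds[@]}"; do
  echo "plan: $cmd"
  # eval "$cmd"
done
```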

Link to comment
5 minutes ago, JonathanM said:

After a minute of googling (as in, no real research) I found this which may or may not do something in Unraid, haven't tried it, so use at your own risk, it was billed as a "linux" solution.

Apparently this (a) clears speculative data that was cached and (b) consolidates the in-use memory.

 

If you are game to try this, execute at a point where the VM would fail to launch.

 

To repeat, I HAVE NO CLUE IF THIS WILL DO BAD THINGS TO UNRAID. 

Holy crap! That worked. I ran those commands and those VMs boot right away! Is this something that can be done inside Unraid to prevent this, or should I just add these commands to a user script?

Edited by mackid1993
Link to comment
3 minutes ago, mackid1993 said:

Holy crap! That worked. I ran those commands and those VMs boot right away! Is this something that can be done inside Unraid to prevent this, or should I just add these commands to a user script?

It will kill performance if run too often, as caching data is what speeds many things along. As a clean up tool run when performance isn't a priority, or before starting a task, it should be fine.

 

It does prove to some extent that you are over committing the memory you have for optimum performance, so more RAM would help if you really need to reserve that much RAM for VM use.

 

I would try reducing the VM RAM allocations and see if it hurts or helps the VM performance. RAM caching by the host is one of the things that can really speed up a VM, and if you deny the host that RAM it can hurt the VM speed.

Link to comment
Just now, JonathanM said:

It will kill performance if run too often, as caching data is what speeds many things along. As a clean up tool run when performance isn't a priority, or before starting a task, it should be fine.

 

It does prove to some extent that you are over committing the memory you have for optimum performance, so more RAM would help if you really need to reserve that much RAM for VM use.

 

I would try reducing the VM RAM allocations and see if it hurts or helps the VM performance. RAM caching by the host is one of the things that can really speed up a VM, and if you deny the host that RAM it can hurt the VM speed.

You just saved my sanity, I thought something was up with my motherboard. So I guess I can't push my RAM to 84% usage without having performance issues?

Since figuring this out revealed my RAM was bad, I had to run to Microcenter and buy a kit, but I have an RMA in to Corsair right now for my original RAM. I may just put that in my server and call it a day now that I know what the problem is.

 

Thanks so much for your help!!! 😊

Link to comment
2 minutes ago, mackid1993 said:

So I guess I can't push my RAM to 84% usage without having performance issues?

Depends. There are other things you can tweak with regards to memory, cache pressure and such, and honestly Unraid is tuned for best performance with smaller amounts of RAM and may not make the best use of more than 64GB of RAM.

 

I don't have the luxury of owning any systems with more than 32GB right now, so I must leave hands on research as an exercise for the reader.

Link to comment
16 minutes ago, JonathanM said:

Depends. There are other things you can tweak with regards to memory, cache pressure and such, and honestly Unraid is tuned for best performance with smaller amounts of RAM and may not make the best use of more than 64GB of RAM.

 

I don't have the luxury of owning any systems with more than 32GB right now, so I must leave hands on research as an exercise for the reader.

Well thanks so much for your help! When I get my RMA back from Corsair and have an extra 64 GB on hand I'll see if it solves the issue. If not throwing those commands on a crontab to run once a day at like 3 am probably isn't a bad idea.

Link to comment

I think this may have something to do with the Disk Cache settings in Tips and Tweaks. I lowered vm.dirty_background_ratio and vm.dirty_ratio. I have a feeling Virtiofs is caching dirty pages to memory and they are building up and causing this issue. Does that make sense to anyone who knows more than me?
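For anyone wanting to try the same tweak outside the plugin, the equivalent sysctl keys look like this (the values here are illustrative, not a recommendation; the stock kernel defaults are 10 and 20). Lowering them makes the kernel start flushing dirty pages to disk sooner, so fewer accumulate in RAM:

```
# sysctl.conf-style fragment; apply for the current boot (as root) with
#   sysctl -w vm.dirty_background_ratio=5 vm.dirty_ratio=10
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
```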

Edited by mackid1993
Link to comment

Wow, nice find. That perfectly explains the behavior I was seeing as well with my VMs.

 

My attempt to create a Windows 10 VM would just sit on a blank screen randomly most of the time. Lo and behold, memory backing on my other VM was the culprit. Considering I don't use virtiofs due to my unexplained performance issues, I'm just going to remove the memory backing config altogether. It seems to do more harm than good unless you need it.

Edited by johnsanc
Link to comment
