VirtioFS Support Page



On 9/10/2023 at 11:59 PM, benfishbus said:

The virtiofs support is working great with my Ubuntu VM. I just had one small problem, which I managed to solve with a workaround: Ubuntu could not automount an Unraid share with the tag "media" via fstab. I don't know exactly why. dmesg in the VM looked like this:

 

[   24.644594] systemd[1]: Mounting /mnt/unraid/ben...
[   24.647784] systemd[1]: Mounting /mnt/unraid/media...
[   24.669726] virtiofs virtio0: virtio_fs_setup_dax: No cache capability
[   24.679994] virtiofs virtio1: virtio_fs_setup_dax: No cache capability
[   24.685471] virtio-fs: tag </media> not found

 

I tried additional shares, and all of them worked - but not the "media" share. I tried changing the order of the shares in fstab - same result. If I ran "sudo mount -av" at a command prompt, Ubuntu would mount the "media" share with no problem - but never at boot.

 

The workaround was to specify the "media" share manually in the VM config, using some other tag like "xmedia" - and suddenly Ubuntu had no problem automounting the share.
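
For reference, a matching fstab entry for the renamed tag would look something like this (the /mnt/unraid/media mount point is taken from the dmesg output above; treat this as a sketch rather than the exact line from that system):

xmedia    /mnt/unraid/media    virtiofs    rw,relatime    0    0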

 

Now if only I didn't have to resort to XML or virt-manager to add a virtual sound card to a VM in Unraid...

If you let me know what is required I can see if it can be added to the template.

27 minutes ago, mackid1993 said:

Any news about the new Virtio drivers for Windows? Do they happen to resolve any issues with Virtiofs?

I haven't checked for a while; it looks like 0.1.240 dropped three weeks ago, but I have not looked at them. I cannot recreate the memory issue, so I would not be able to validate whether they fix it, but I suspect it is more likely to be a QEMU issue. 8.1.1 is released now; not sure whether that or 8.1.0 will be in the release.

 

16 minutes ago, SimonF said:

If you let me know what is required I can see if it can be added to the template.

Regarding sound? Not sure if this is what you're asking for, but virt-manager lists three models for me to choose from: "AC97", "HDA (ICH6)", and "HDA (ICH9)". I don't know if these are the options everyone sees, or just what I get based on my host hardware.

 

I have always had success with HDA (ICH9). Virt-manager adds this to the XML:

<sound model="ich9">
  <address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
</sound>
<audio id='1' type='spice'/>
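
If you want to avoid opening virt-manager entirely, the same snippet can be pasted into the <devices> section of the VM definition from the Unraid console; a sketch, assuming the VM is named "Windows 11":

virsh edit "Windows 11"

virsh validates the XML on save and keeps the old definition if the edit is invalid, so a typo won't leave the VM undefined.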

 

Can't remember whether sound actually worked in a SPICE session, or if I had to turn on/install RDP in the guest and connect that way...

 

 


Having an issue with a virtiofs share on a Debian machine where, when attempting to check out a git repository, it fails every time with the following error:
 

Cloning into 'app'...
remote: Enumerating objects: 265423, done.
remote: Counting objects: 100% (4860/4860), done.
remote: Compressing objects: 100% (1261/1261), done.
remote: Total 265423 (delta 3750), reused 4510 (delta 3498), pack-reused 260563
Receiving objects: 100% (265423/265423), 508.88 MiB | 10.80 MiB/s, done.
Resolving deltas: 100% (195111/195111), done.
fatal: failed to read object 62f9e4fb63e2fc6be70d98e4dcf158a735a00bf9: Stale file handle
fatal: remote did not send all necessary objects

 

Simply changing the share from virtiofs to 9p and updating my fstab resolves the issue, though of course at the cost of performance. For reference, the virtiofs fstab entry was:

 

development     /mnt/development                    virtiofs rw,relatime    0       0
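
For comparison, a 9p version of the same entry would be roughly the following; this is a sketch with a typical 9p-over-virtio option set, not the exact line that was used here:

development     /mnt/development     9p    trans=virtio,version=9p2000.L,rw,relatime    0    0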

 

EDIT

 

On a hunch, since this share doesn't really benefit from being backed up, I changed it to an exclusive share. The issue is now resolved and performance is good. I'm guessing that since it wasn't an exclusive share (cache primary, array secondary), files created in /mnt/user/development were getting shifted over to /mnt/cache/development and the references went stale.
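
For context, a virtiofs export shows up in the VM's XML as a <filesystem> element along these lines (a sketch mirroring this share's path and tag; the exact XML Unraid generates may differ):

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/mnt/user/development'/>
  <target dir='development'/>
</filesystem>

The <source dir> determines whether guest I/O goes through the /mnt/user FUSE layer or directly to the pool path, which is why the exclusive-share change matters here.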

Edited by tenpaiyomi

For me, besides the memory leak (which I can live with if I must), I also have an issue where specific "drives" crash when Backblaze tries to access too many files from them, it seems. The drive then gives me I/O errors. Any idea where that may come from?

I'm using the Rust version 1.7.0 (I want to update to 1.8.0 when I find the time). Removing

  <memoryBacking>
    <nosharepages/>
    <source type='memfd'/>
    <access mode='shared'/>
  </memoryBacking>

won't work, as Unraid will then complain.

 

I was able to fix this by updating the Virtio drivers and WinFSP.

Edited by kadajawi
  • 2 weeks later...

If I try to add virtiofs/9p to an existing Ubuntu VM, it hangs here for a few minutes:

 

[screenshot: VM console hung partway through boot]

 

It does eventually boot but the network doesn't come up. I can mount the passed through directory from within the guest but networking is dead. If I manually bring the interface up it doesn't get an IP address.

 

If I remove virtiofs from the VM config it goes back to normal and networking works. Any ideas?

On 10/19/2023 at 6:24 PM, mackid1993 said:

Any news about a QEMU bump for 6.13? Does anyone have any insider knowledge as to whether it is happening?

I have been informed it is currently at QEMU 8.1.0 and libvirt 9.7.0.

 

It is likely to be higher before the public release, as there is a CVE fix for 9p in 8.1.2.

5 hours ago, L0k1 said:

If I try to add virtiofs/9p to an existing Ubuntu VM, it hangs here for a few minutes:

 

[screenshot: VM console hung partway through boot]

 

It does eventually boot but the network doesn't come up. I can mount the passed through directory from within the guest but networking is dead. If I manually bring the interface up it doesn't get an IP address.

 

If I remove virtiofs from the VM config it goes back to normal and networking works. Any ideas?

I have tried Debian and Ubuntu and cannot reproduce the error. Can you post diagnostics?

8 hours ago, SimonF said:

I have tried Debian and Ubuntu and cannot reproduce the error. Can you post diagnostics?

Sure, diagnostics attached. You'll find a bunch of old junk in there (the setup has gone through quite a few changes over the years; I've moved a few VMs and libvirt is complaining that I haven't updated the definitions). The VM in question is CasaOS, running on Ubuntu jammy.

 

 

Edit: I've just gone through the diagnostics myself as I was concerned they might contain passwords; they do, so I've removed them.

Edited by L0k1
Removed diagnostics as they contain sensitive info.
20 minutes ago, L0k1 said:

Sure, diagnostics attached. You'll find a bunch of old junk in there (the setup has gone through quite a few changes over the years; I've moved a few VMs and libvirt is complaining that I haven't updated the definitions). The VM in question is CasaOS, running on Ubuntu jammy.

unraid-diagnostics-20231022-1638.zip (281.51 kB)

Looks like you are using virbr1, which will NAT. Any reason why you are not using a vhost or bridge interface, depending on your config?

 

<interface type="bridge">
  <mac address="XX"/>
  <source bridge="virbr1"/>
  <model type="virtio-net"/>
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>

 

3 minutes ago, SimonF said:

Looks like you are using virbr1, which will NAT. Any reason why you are not using a vhost or bridge interface, depending on your config?

 


 

 

virbr1 is a bridge I created to connect VMs directly to my pfSense VM; there shouldn't be any NAT. This is its definition:

 

[screenshot: virbr1 bridge definition]

 

The CasaOS VM connects to the pfSense VM through this so I can firewall it and keep it off my main network.


To rule that out as the problem I've moved the VM to vhost0 and still get the same result. It was using a static IP, but I've changed it to DHCP so I can switch between networks. Without virtiofs it gets an IP and connectivity; enabling virtiofs in the config kills the guest's networking.

 

Edit: I've fixed it. Enabling virtiofs changes the network interface name: it was enp1s0, but when I use virtiofs it becomes enp3s0. Working now.
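
One way to make the guest immune to that kind of rename is to match the NIC by MAC address instead of by name, for example with a netplan file roughly like the sketch below (the file name and MAC are placeholders; the real MAC is in the VM's <interface> block):

# /etc/netplan/01-vm-nic.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    lan0:
      match:
        macaddress: "52:54:00:xx:xx:xx"
      set-name: lan0
      dhcp4: true

After editing, netplan apply (or a reboot) gives the NIC the same name regardless of which PCI slot it lands in.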

 

What's the recommended fstab syntax to have it auto mount? 

Edited by L0k1
14 minutes ago, L0k1 said:

To rule that out as the problem I've moved the VM to vhost0 and still get the same result. It was using a static IP, but I've changed it to DHCP so I can switch between networks. Without virtiofs it gets an IP and connectivity; enabling virtiofs in the config kills the guest's networking.

Not sure what to suggest at this point. Have you tried allocating more memory to the VM, since virtiofs uses shared memory? Not sure it should make a difference.

 

 

13 minutes ago, SimonF said:

Not sure what to suggest at this point. Have you tried allocating more memory to the VM, since virtiofs uses shared memory? Not sure it should make a difference.

 

 

 

Please see my edit. I probably wouldn't have run into this problem had I set it up from the start, but adding it to an existing VM changes the name of the interface. Thanks for looking.

 

I've only been able to mount it with 

mount -t 9p -o trans=virtio ...

rather than

mount -v -t virtiofs ...

but I guess that doesn't matter?

4 minutes ago, L0k1 said:

 


 

I've only been able to mount it with 

mount -t 9p -o trans=virtio ...

rather than

mount -v -t virtiofs ...

but I guess that doesn't matter?

The mount type will depend on the template setting.

 

[screenshot: the mount mode setting in the VM template]
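
For reference, the two template modes map to mount commands roughly like these, with the tag and mount point as placeholders:

# virtiofs mode
mount -t virtiofs mytag /mnt/mytag
# 9p mode
mount -t 9p -o trans=virtio,version=9p2000.L mytag /mnt/mytag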


Thanks, I'm pretty sure I chose virtiofs, but I've enabled/disabled it so many times while troubleshooting that I must have picked 9p by mistake.

 

Everything is working now (well, except the VNC console seems to have broken now that I have it set to virtiofs, but that's not overly important and I'll look into it another time).

  • 3 weeks later...

I decided to give virtiofs another go since it's been a while and I have since upgraded to Windows 11. I noticed that while directory listings are faster than with Samba shares, the transfer speed is pretty bad: I can only get about 50 megabytes/second. Is there any configuration change I can try to improve performance?

On 4/29/2023 at 11:25 AM, mackid1993 said:

I also want to draw attention to this Reddit post. I observed the exact same issue described with the non-paged pool leak. I used poolmon to track it down and it pointed to refs.sys. The non-paged pool will just slowly grow and grow until the VM runs out of memory.

 

 

Oh cool, someone actually read my Reddit post. It would have been even cooler if one of you guys had left a comment linking back to this thread for further discussion 😀. So here we are seven months later, and I have more data to share.

 

I am 99% sure this has nothing to do with refs.sys. If you do a case-insensitive search for mmdi, you will see the actual string it finds in refs.sys: MmDisableModifiedWriteOfSection. So it's not really a match for that pool tag.
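
For anyone repeating this, the search in question is just a string scan over the driver binaries, along the lines of the following (the path is the stock Windows drivers directory):

findstr /s /i /m "Mmdi" C:\Windows\System32\drivers\*.sys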

 

There's a thread on Super User that describes more methods for hunting down these non-paged pool leaks. Note where it says:

 

Quote

If the pooltag only shows Windows drivers or is listed in the pooltag.txt ("C:\Program Files (x86)\Windows Kits\8.1\Debuggers\x64\triage\pooltag.txt"), you have to use xperf to trace what causes the usage.

 

Mmdi is found in pooltag.txt, so you actually have to use xperf and WPA for further debugging. Following the method described there, I captured a snapshot of memory growth, opened it in WPA, loaded symbols, and expanded the Mmdi pool to find stack references to winfsp-x64.dll and virtiofs.exe. So there's the smoking gun: one of these drivers is the culprit.
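
The capture itself follows the standard xperf pool-tracing recipe, roughly the commands below; the flags here are reproduced from memory of that method, so verify them against xperf -help stackwalk before relying on them:

xperf -on PROC_THREAD+LOADER+POOL -stackwalk PoolAlloc+PoolFree -BufferSize 1024
rem ...let the non-paged pool grow, then stop the trace and open the .etl in WPA:
xperf -d pool.etl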

 

I upgraded to the latest versions of WinFSP (2.0) and Virtio-win guest tools (0.1.240) and the leak is still active.

 

 

 

findstr mmdi.jpg

xperf wpa nonpaged mmdi.jpg

viofs driver versions.jpg

On 11/15/2023 at 7:41 AM, christophocles said:

 

 

I upgraded to the latest versions of WinFSP (2.0) and Virtio-win guest tools (0.1.240) and the leak is still active.

 

So it's probably not the driver per se, but something on the QEMU end.

