VirtioFS Support Page



9 minutes ago, mackid1993 said:

@SimonF I was hoping this post I wrote could be highlighted higher up in the thread. I shared some scripts for mounting multiple Unraid shares as drive letters and made everything pretty much copy and paste. I also added instructions for setting it up with Task Scheduler.

 

Done, added as a recommendation.

On 1/10/2024 at 5:14 PM, mackid1993 said:

The new VirtioFS Windows driver can be downloaded here. To use it you have to enable driver test-signing mode from an admin command prompt with

bcdedit /set testsigning on

 

I also went into certmgr and added viofs.cat to Trusted Root Certification Authorities and Trusted Publishers.
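For anyone who prefers the command line over the certmgr GUI, the same trust can be added with certutil. This is a hedged sketch, not from the original post: it assumes you have first exported the driver's signing certificate from viofs.cat (open the file, View Signature, then export the certificate) to a file named viofs.cer.

```shell
:: Run from an elevated Command Prompt.
:: viofs.cer is the certificate exported from viofs.cat (assumption).
certutil -f -addstore "Root" viofs.cer
certutil -f -addstore "TrustedPublisher" viofs.cer
```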

 

If the watermark bothers you, this little tool makes it go away: https://winaero.com/download-universal-watermark-disabler/

 

It's a useful stopgap until this fix ships with the next VirtIO driver release, per the virtio-win team.

 

Once the new drivers are released, uninstall the watermark-removal tool and turn test signing back off:

 

bcdedit /set testsigning off
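A quick way to confirm the change took effect (a hypothetical check, not part of the original instructions; a reboot is required after toggling the flag):

```shell
:: Elevated Command Prompt: show the testsigning state of the current boot entry.
:: "Yes" means test mode is on; "No" or no output means it is off.
bcdedit /enum {current} | findstr /i testsigning
```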

 

It seems that after further testing the virtio-win team fixed the leak: https://github.com/virtio-win/kvm-guest-drivers-windows/pull/1022

 

I made a copy of these drivers, so if they ever disappear, ping me and I can share a new link.

 

@SimonF and @christophocles thank you for all of your help with this!

Been running for about 48 hours now with this fix, zero memory leaks that I can detect. Thanks so much for posting it here!

3 minutes ago, sazrocks said:

Been running for about 48 hours now with this fix, zero memory leaks that I can detect. Thanks so much for posting it here!

No problem, we should have official drivers very soon! If you're interested you can also use my scripts to mount your shares as individual drive letters rather than just one share mounted as Z:.
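For reference, mapping a single share to a drive letter can be sketched with WinFsp's launcher, which is what per-share mounting builds on. The service suffix, mount tag, and drive letter below are placeholders for illustration, not mackid1993's actual script:

```shell
:: Map the virtiofs mount tag "mnt_media" to drive Y: via the WinFsp launcher.
:: The tag must match the <target dir="..."> tag in the VM's XML (assumption).
"C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsY mnt_media Y:

:: To unmount the same instance later:
"C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsY
```

Repeating the start line with a different tag, suffix, and letter gives one drive letter per share.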

 


It's great to finally have this bug squashed! Thanks @SimonF and @mackid1993 for your assistance reporting the bug to the right people.

 

Now that my Win10 VM is stable, I am moving on to solving my next VirtioFS issue. I also have a Linux VM that accesses these same shares, and it has a completely different problem. After a few hours, one of the shares quits responding, and any process that attempts to read data from that share hangs indefinitely. I can't remount the share and I can't reboot normally; the only thing I can do is force power off the VM and restart it. Looking through the various log files, I can't figure out what is going on. It only happens on the Linux guest; I have never seen it happen on the Windows guest.

 

I reported this issue on the virtiofsd gitlab site.  Has anyone else experienced this?  If so, please post a comment on this thread:

 

https://gitlab.com/virtio-fs/virtiofsd/-/issues/133

 

50 minutes ago, christophocles said:

It's great to finally have this bug squashed! Thanks @SimonF and @mackid1993 for your assistance reporting the bug to the right people.

 

Now that my Win10 VM is stable, I am moving on to solving my next VirtioFS issue. I also have a Linux VM that accesses these same shares, and it has a completely different problem. After a few hours, one of the shares quits responding, and any process that attempts to read data from that share hangs indefinitely. I can't remount the share and I can't reboot normally; the only thing I can do is force power off the VM and restart it. Looking through the various log files, I can't figure out what is going on. It only happens on the Linux guest; I have never seen it happen on the Windows guest.

 

I reported this issue on the virtiofsd gitlab site.  Has anyone else experienced this?  If so, please post a comment on this thread:

 

https://gitlab.com/virtio-fs/virtiofsd/-/issues/133

 

Have you tried the Rust version of virtiofsd?

Edit: See here 

 

6 minutes ago, mackid1993 said:

Have you tried the Rust version of virtiofsd?

Edit: See here 

 

 

Yes, Tumbleweed has shipped the Rust version for a few months now. I switched from C to Rust about a year ago to solve the issue of running out of file handles; the Rust version added the --inode-file-handles option, which avoids hitting the file-handle limit. Currently I'm running virtiofsd (Rust) version 1.7.2, which is the current version in the Tumbleweed repo. The GitLab site has releases up to 1.9.0, which I haven't tried yet, but the commenter above me on the issue thread is using that version.
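For anyone hitting the same file-handle limit, the relevant flag looks roughly like this. The socket and directory paths are made up for illustration:

```shell
# Rust virtiofsd using file handles instead of long-lived O_PATH file
# descriptors, which avoids exhausting the open-fd limit on large shares.
/usr/libexec/virtiofsd \
    --socket-path /run/virtiofsd/share.sock \
    --shared-dir /mnt/storage/share \
    --cache auto \
    --inode-file-handles=prefer
```

`prefer` falls back to file descriptors where file handles aren't supported; `mandatory` fails instead of falling back.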

3 hours ago, christophocles said:

 

Yes, Tumbleweed has shipped the Rust version for a few months now. I switched from C to Rust about a year ago to solve the issue of running out of file handles; the Rust version added the --inode-file-handles option, which avoids hitting the file-handle limit. Currently I'm running virtiofsd (Rust) version 1.7.2, which is the current version in the Tumbleweed repo. The GitLab site has releases up to 1.9.0, which I haven't tried yet, but the commenter above me on the issue thread is using that version.

I personally avoid Linux as a desktop OS. I'm more of a Windows guy. Most of my Unraid server besides the storage portion is done in the Windows VM. That's why VirtioFS was so important to me. I prefer to run as much as I can under Windows and leave a few hands off things like Plex and Roon to Docker. Since Unraid manages the storage I have no need for Windows Server, so I just run Windows 11 Pro.

 

Hopefully the virtio-fs team can fix that issue for you.

5 minutes ago, mackid1993 said:

I personally avoid Linux as a desktop OS. I'm more of a Windows guy. Most of my Unraid server besides the storage portion is done in the Windows VM. That's why VirtioFS was so important to me. I prefer to run as much as I can under Windows and leave a few hands off things like Plex and Roon to Docker. Since Unraid manages the storage I have no need for Windows Server, so I just run Windows 11 Pro.

 

Hopefully the virtio-fs team can fix that issue for you.


Yeah I used to be primarily a Windows guy, but I’ve weaned myself off of it over the past couple years.  I run Linux on the bare metal (as you do with Unraid) and I run Windows for the software that requires it.  Right now, that’s limited to Adobe products and Backblaze.  My storage is all managed with OpenZFS on the host OS.  I run other sandboxed services in a Linux guest VM.  VirtioFS is cross-platform and is what enables me to share the storage across multiple VMs while maintaining some level of sandboxing for applications.

 

 I’m sure we will be able to figure this one out.  I know others have the same issue and have reported it recently.  I’ve been living with it like this for about a year now.  It’s not as serious as the virtiofs-win memory leak, which made my Windows VM completely unusable.  One of the virtiofs threads seems to get deadlocked after a while, forcing me to reboot the guest.  I’m not sure if it’s happening in the guest, on the host virtiofs process, or in qemu.  But I do have the workaround of increasing the thread pool size which reduces the frequency of needing to reboot the guest.


Hi there,

 

I'm running into some permission trouble using VirtioFS in a Linux VM.

The VM (called "workstation") is set up with VirtioFS, and I was able to mount the share on the Linux guest without any trouble; I added it to fstab to make it permanent.
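For comparison, a typical virtiofs fstab entry looks like this; the tag name "shared" is an assumption and must match the mount tag in the VM definition:

```shell
# /etc/fstab — mount the virtiofs tag "shared" at /mnt/shared
shared  /mnt/shared  virtiofs  defaults  0  0
```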

 

VM:

[Screenshot: VM VirtioFS share configuration]

 

fstab on client linux machine:

[Screenshot: fstab entry on the Linux guest]

 

But whenever I create a new folder or file in the mounted directory "shared" on the Linux machine, it is created with the wrong user permissions.

 

[Screenshot: newly created files owned by the wrong user in the guest]

 

On the host, the same files show up as owned by yet another user:

[Screenshot: ownership of the same files as seen on the host]

 

Is there a way to set other default permissions (in the fstab?), whenever something is created in the mounted folder?

Security is not relevant in this case so 777 would be ok.
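Not from the original thread, but for a "security doesn't matter" setup one host-side option is a default ACL on the exported directory, so newly created files inherit wide-open permissions regardless of which UID creates them. The path is an example:

```shell
# Run on the host against the directory exported via virtiofs.
# First open up existing files/dirs, then set *default* ACL entries
# (-d), which are inherited by anything created afterwards.
setfacl -R -m u::rwx,g::rwx,o::rwx /mnt/user/shared
setfacl -R -d -m u::rwx,g::rwx,o::rwx /mnt/user/shared
```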

 

Any help is highly appreciated!

 


I'm trying out the new drivers to see if the VM freeze issue is resolved. However... is there a way to make the performance any better? Directory listings with tens of thousands of files are significantly faster than SMB shares, but actual transfer speeds are terrible: I am getting about 45 MB/s with VirtioFS vs. 275 MB/s with Samba shares.

Do I have something misconfigured? I don't understand all the hype over virtiofs if this kind of performance is normal. 

15 hours ago, johnsanc said:

I'm trying out the new drivers to see if the VM freeze issue is resolved. However... is there a way to make the performance any better? Directory listings with tens of thousands of files are significantly faster than SMB shares, but actual transfer speeds are terrible: I am getting about 45 MB/s with VirtioFS vs. 275 MB/s with Samba shares.

Do I have something misconfigured? I don't understand all the hype over virtiofs if this kind of performance is normal. 

Have you tried replacing virtiofsd with the newer Rust version? The C implementation was removed from QEMU after 7.2.

 

https://gitlab.com/virtio-fs/virtiofsd/-/releases
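Swapping in a release build is roughly the following; the exact asset name and the path inside the archive vary per release, so take them from the releases page above rather than from this sketch:

```shell
# On the host: back up the packaged binary, then install the release build.
cp /usr/libexec/virtiofsd /usr/libexec/virtiofsd.bak
unzip virtiofsd.zip                               # asset downloaded from the releases page
install -m 0755 virtiofsd /usr/libexec/virtiofsd  # path inside the zip may differ
/usr/libexec/virtiofsd --version                  # confirm the new version is in place
```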

10 hours ago, johnsanc said:

Yes I have; the Rust version made no significant difference. Speeds are still 45-60 MB/s. On the plus side, the VM seems stable and hasn't locked up with the 100% CPU utilization issue.

Note: I opened an issue here for more visibility beyond these forums: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/1039

I don't have this issue. This is a transfer from my Unraid array to the NVMe drive that my VM boots from. I get similar results with a share-to-share transfer as well: 100+ MB/s.

[Screenshot: file transfer running at roughly 100 MB/s]

14 hours ago, johnsanc said:

@mackid1993 - Thanks for sharing. Your speeds are much better than mine, but still pretty poor compared to SMB. Were you transferring to/from spinning disks, or from an SSD cache pool to your VM? I'm curious what speeds you get between a fast Unraid cache drive and your VM.

This was spinning disks to NVMe; NVMe to NVMe would be much quicker. What version of WinFsp are you on?


@mackid1993 - I am using the latest WinFsp: 2.0.23075. It's possible I have something wrong or misconfigured in my setup, but I'm not sure what that might be. I followed all the directions here and got it up and running properly. I am using your launcher script and that also works just fine; the performance is just lackluster. For my setup, my SMB speeds are about what I would expect to/from my SSD cache.

1 hour ago, johnsanc said:

@mackid1993 - I am using the latest WinFsp: 2.0.23075. It's possible I have something wrong or misconfigured in my setup, but I'm not sure what that might be. I followed all the directions here and got it up and running properly. I am using your launcher script and that also works just fine; the performance is just lackluster. For my setup, my SMB speeds are about what I would expect to/from my SSD cache.

I'm not sure but I can confirm SMB was much faster. So it's not just you.

6 minutes ago, johnsanc said:

So Yan confirmed the Windows drivers aren't optimized for performance... but that doesn't really explain the drastic difference between @mackid1993's results and mine (400 MB/s vs. my 60 MB/s).

My Windows 11 VM runs off an NVMe drive bound to vfio and passed through; I don't use a vdisk. Maybe it's just system performance. I also give my VM 20 threads and 32 GB of RAM.
