VirtioFS Support Page



2 minutes ago, johnsanc said:

Thanks for the suggestion. Same issue though, ~40-60 MiB/s and a single CPU thread gets pegged to 100%. With emulated CPUs, however, the performance graph in Windows doesn't lock up. I think we just need more people trying this out with different hardware configurations. So far everyone I have asked uses Intel CPUs and it seems to work as expected.

Maybe check again with the next release of Unraid, which has a newer QEMU and Linux kernel.

Link to comment
  • 2 weeks later...
On 3/4/2024 at 3:17 PM, mackid1993 said:

You can also try the newer Rust version of Virtiofsd: https://gitlab.com/virtio-fs/virtiofsd/-/jobs/artifacts/main/download?job=publish

To install it, SFTP to your server, create a folder in /boot called virtiofsd, and copy the executable there.
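
A minimal sketch of those steps from the Unraid terminal (the archive name is just a placeholder; adjust it to whatever file you actually downloaded):

mkdir -p /boot/virtiofsd
cd /boot/virtiofsd
# hypothetical name for the downloaded artifact archive; the release page may also offer a bare binary
unzip /boot/virtiofsd-artifact.zip
ls -l /boot/virtiofsd   # confirm the virtiofsd executable is present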

Then stop all VMs using Virtiofs.

Then open a terminal and type:

nano /boot/config/go

Then add the following to your go file:

#replace Virtiofsd with Rust version
mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
cp /boot/virtiofsd/virtiofsd /usr/libexec/virtiofsd 
chmod +x /usr/libexec/virtiofsd 

Then either run the 3 commands manually or reboot your server.

 

This may or may not help, but the version of virtiofsd that ships with Unraid is very old, so it's worth a try.

 

 

The arguments for the Rust version of virtiofsd diverge from the original version that Unraid has bundled, so certain XML settings are not properly passed (e.g. cache). For absolute stability, I found the Rust version to be superior since it has various options for handling submounts and much better handling of host-side file changes via cache=never.

 

For others looking for this functionality, what I did was use the latest version of the Rust virtiofsd (compiled version available at https://gitlab.com/virtio-fs/virtiofsd/-/releases) stored at /root/.virtiofsd/virtiofsd (though you can put it anywhere, just modify the script), and combine it with this script (stored at /root/.virtiofsd/virtiofsd.sh) that passes the --fd parameter properly:

#!/bin/bash

# process -o option but ignore it because unraid generates the command for us
VALID_ARGS=$(getopt -o o -l fd: -- "$@")
if [[ $? -ne 0 ]]; then
  exit 1;
fi

eval set -- "$VALID_ARGS"
while [ : ]; do
  case "$1" in
    --fd )
      FD="$2"
      shift 2
      ;;
    -o )
      shift 1
      ;;
    -- ) 
      shift; 
      break 
      ;;
    * )
      shift;
      ;;
  esac
done

# https://gitlab.com/virtio-fs/virtiofsd
/root/.virtiofsd/virtiofsd \
	--fd="$FD" \
	--shared-dir="/mnt/<YOUR SHARE DIR HERE>" \
	--xattr \
	--cache="never" \
	--sandbox="chroot" \
	--inode-file-handles="mandatory" \
	--announce-submounts

 

The relevant excerpt from your XML config is below. Note that this setup ignores most of the XML parameters; make argument changes directly in `virtiofsd.sh` above. Through testing, you do need to keep the `<target ... />` element, as that's the handle the Windows driver will be looking for:

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/root/.virtiofsd/virtiofsd.sh' xattr='on'>
    ...
  </binary>
  ...
  <target dir='YOUR HANDLE HERE'/>
  <alias name='fs0'/>
  ...
</filesystem>

 

These changes resulted in effectively native file transfer speeds between VM and host (~200 MB/s), and it seems to no longer have any memory issues or weird file locking issues. I do 100GB+ transfers between the VM and host daily (large and small files), so I feel good about the current setup (I also followed the instructions on upgrading to the latest v248 drivers in Windows). Using this setup allows me to create a synthetic directory on the host that only contains the folders I want to pass through to the VM and consolidates them under a single drive in the VM. The --announce-submounts argument was critical in doing this in a stable way.
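
One way to build such a synthetic directory is with bind mounts; a minimal sketch (example paths, adapt to your shares):

#!/bin/bash
# Minimal sketch (example paths): consolidate selected host folders under one
# directory and share only that directory via virtiofsd. Each bind mount shows
# up as a submount, which is why --announce-submounts matters here.
SYNTH=/mnt/synthetic-share
mkdir -p "$SYNTH"/Music "$SYNTH"/Downloads
mount --bind /mnt/user/Music "$SYNTH/Music"
mount --bind /mnt/user/Downloads "$SYNTH/Downloads"
# Then point the wrapper script at the consolidated directory:
#   --shared-dir="/mnt/synthetic-share"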

Edited by jch
Link to comment
4 hours ago, jch said:

 

The arguments for the Rust version of virtiofsd diverge from the original version that Unraid has bundled ... The --announce-submounts argument was critical in doing this in a stable way.

my vm won't start with this method.

 

internal error: virtiofsd died unexpectedly

Link to comment
5 hours ago, jch said:

 

The arguments for the Rust version of virtiofsd diverge from the original version that Unraid has bundled ... The --announce-submounts argument was critical in doing this in a stable way.

Have you found a documented best setup anywhere for the Rust version? It is included in the next release, so I can look at making changes.

 

Looking at libvirt, they have not made any changes to support the new options. There is an open issue, but it's at least a year old.

Link to comment
On 3/14/2024 at 11:03 AM, jch said:

 

The arguments for the Rust version of virtiofsd diverge from the original version that Unraid has bundled ... The --announce-submounts argument was critical in doing this in a stable way.

So I had a chance to test this with multiple shares and I have to say this is an incredible speed improvement with Rust virtiofs. @SimonF this should really be implemented in 6.13.

 

Here is what I did.

 

First I created a directory on my flash:

/boot/virtiofsd

In the directory I placed the rust version of virtiofsd and a shell script for each share I want to mount. In my case I have:

appdata.sh
archives.sh
backup.sh
communityapplicationsappdatabackup.sh
downloads.sh
movies.sh
music.sh
software.sh
tv.sh

 

Note: Make these scripts using nano on the server. They must be formatted for UNIX. Making them in Notepad or Notepad++ on Windows will save them with MS-DOS (CRLF) line endings and virtiofsd will die when booting the VM.
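
If you do end up with a Windows-formatted script, you can check and fix the line endings from the Unraid terminal; for example (dos2unix may or may not be present on your install, the sed line works either way):

file /boot/virtiofsd/music.sh    # reports "CRLF line terminators" if it is DOS-formatted
sed -i 's/\r$//' /boot/virtiofsd/music.sh    # strip the carriage returns in place
# dos2unix /boot/virtiofsd/music.sh          # alternative, if dos2unix is available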

 

Each shell script is exactly what @jch shared, but modified for each individual share I want to mount in Windows and pointed at virtiofsd in /usr/libexec.

Here is an example for my Music share; all of the other shares are the same:

 

#!/bin/bash

# process -o option but ignore it because unraid generates the command for us
VALID_ARGS=$(getopt -o o -l fd: -- "$@")
if [[ $? -ne 0 ]]; then
  exit 1;
fi

eval set -- "$VALID_ARGS"
while [ : ]; do
  case "$1" in
    --fd )
      FD="$2"
      shift 2
      ;;
    -o )
      shift 1
      ;;
    -- ) 
      shift; 
      break 
      ;;
    * )
      shift;
      ;;
  esac
done

# https://gitlab.com/virtio-fs/virtiofsd
/usr/libexec/virtiofsd \
        --fd="$FD" \
        --shared-dir="/mnt/user/Music" \
        --xattr \
        --cache="never" \
        --sandbox="chroot" \
        --inode-file-handles="mandatory" \
        --announce-submounts

Change --shared-dir= and rinse and repeat for each individual share with a new shell script.
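
If you have a lot of shares, a small loop can generate the per-share scripts instead of editing each one by hand. A sketch assuming you save the script above as /boot/virtiofsd/template.sh with --shared-dir="SHARE_DIR_PLACEHOLDER" in place of the real path (the template name and placeholder are just illustrative):

#!/bin/bash
# Generate one wrapper script per share from a template (illustrative names).
cd /boot/virtiofsd
for share in appdata Archives Backup Downloads Movies Music Software TV; do
    out="$(echo "$share" | tr '[:upper:]' '[:lower:]').sh"    # e.g. Music -> music.sh
    sed "s|SHARE_DIR_PLACEHOLDER|/mnt/user/$share|" template.sh > "$out"
    chmod +x "$out"
done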

 

Next I modified my go file /boot/config/go

#replace Virtiofsd with Rust version
mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
cp /boot/virtiofsd/virtiofsd /usr/libexec/virtiofsd 
chmod +x /usr/libexec/virtiofsd 

#copy mount scripts for each virtiofs share
#appdata
cp /boot/virtiofsd/appdata.sh /usr/libexec/appdata.sh
chmod +x /usr/libexec/appdata.sh
#archives
cp /boot/virtiofsd/archives.sh /usr/libexec/archives.sh
chmod +x /usr/libexec/archives.sh
#backup
cp /boot/virtiofsd/backup.sh /usr/libexec/backup.sh
chmod +x /usr/libexec/backup.sh
#communityapplicationsappdatabackup
cp /boot/virtiofsd/communityapplicationsappdatabackup.sh /usr/libexec/communityapplicationsappdatabackup.sh
chmod +x /usr/libexec/communityapplicationsappdatabackup.sh
#downloads
cp /boot/virtiofsd/downloads.sh /usr/libexec/downloads.sh
chmod +x /usr/libexec/downloads.sh
#movies
cp /boot/virtiofsd/movies.sh /usr/libexec/movies.sh
chmod +x /usr/libexec/movies.sh
#music
cp /boot/virtiofsd/music.sh /usr/libexec/music.sh
chmod +x /usr/libexec/music.sh
#software
cp /boot/virtiofsd/software.sh /usr/libexec/software.sh
chmod +x /usr/libexec/software.sh
#tv
cp /boot/virtiofsd/tv.sh /usr/libexec/tv.sh
chmod +x /usr/libexec/tv.sh

 

I appended the above to the go file so this is persistent upon boot. This renames the original virtiofsd to virtiofsd.old, copies the Rust version to /usr/libexec, and then copies each of my mount scripts to /usr/libexec and makes them executable.

 

Now for the XML, this part is easy. For each mount, replace

 

<binary path='/usr/libexec/virtiofsd' xattr='on'>

 

with

 

 <binary path='/usr/libexec/script.sh' xattr='on'>

 

This is what my finished XML looks like for my environment:

 

<filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/appdata.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/appdata'/>
      <target dir='appdata'/>
      <alias name='fs0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/archives.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Archives'/>
      <target dir='Archives'/>
      <alias name='fs1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/backup.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Backup'/>
      <target dir='Backup'/>
      <alias name='fs2'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/communityapplicationsappdatabackup.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/CommunityApplicationsAppdataBackup'/>
      <target dir='CommunityApplicationsAppdataBackup'/>
      <alias name='fs3'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/downloads.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Downloads'/>
      <target dir='Downloads'/>
      <alias name='fs4'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/movies.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Movies'/>
      <target dir='Movies'/>
      <alias name='fs5'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/music.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Music'/>
      <target dir='Music'/>
      <alias name='fs6'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/software.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/Software'/>
      <target dir='Software'/>
      <alias name='fs7'/>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='1024'/>
      <binary path='/usr/libexec/tv.sh' xattr='on'>
        <cache mode='always'/>
        <sandbox mode='chroot'/>
      </binary>
      <source dir='/mnt/user/TV'/>
      <target dir='TV'/>
      <alias name='fs8'/>
      <address type='pci' domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
    </filesystem>

 

Once done, reboot the server so the go file moves all of the files and sets permissions.

 

This is a huge improvement over the stock config. Thanks for this @jch. Hopefully we can get these settings implemented in 6.13. 

Edited by mackid1993
Added warning about using nano to create the scripts
Link to comment
20 minutes ago, johnsanc said:

Just tried this method. I also get the same message:

virtiofsd died unexpectedly: No such process


The log says:

libvirt:  error : libvirtd quit during handshake: Input/output error

 

Did you make the shell script in nano on the server? If you made it in Windows and it isn't UNIX-formatted, it won't work.

 

Also where are you placing rust virtiofsd?

 

Did you remember to run chmod +x ./script.sh?

 

If you create the shell scripts on the server, copy and paste (and modify) what I provided into the go file, and modify the XML properly, it should work. You do have to reboot.

Edited by mackid1993
Link to comment

Doh, you are right. I just removed the ^M characters at the end of each line and it worked.

Speed is still terrible for me though, 60-70 MB/s max. A single vCPU gets utilized to 100% according to the Unraid dashboard.

Edited by johnsanc
Link to comment
1 minute ago, johnsanc said:

Doh, you are right. I just removed the ^M characters at the end of each line and it worked.

Speed is still terrible for me though. 60-70 MB/s max.

From NVMe on the server to the NVMe VM boot drive I was getting 1+ GB/s. I'm not sure why your system behaves this way.

Edited by mackid1993
Link to comment
8 minutes ago, johnsanc said:

No idea. So far I've seen no one report any results using AMD though. So hopefully someone else can try. The CPU behavior is very weird. I followed every guide to the letter and everything else works perfectly fine.

Maybe open an issue report on GitLab. I would test in a Linux guest first to verify that it isn't a Windows driver issue; if it's isolated to Windows, open a report with virtio-win.
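
For reference, testing from a Linux guest is straightforward since the virtiofs driver is in the mainline kernel. A minimal sketch run inside the guest (the tag must match the <target dir='...'/> value in the VM's XML; "Music" here is just an example):

mkdir -p /mnt/virtiofs-test
mount -t virtiofs Music /mnt/virtiofs-test
# quick sequential write test, flushed to the share before dd reports a speed
dd if=/dev/zero of=/mnt/virtiofs-test/ddtest.bin bs=1M count=2048 conv=fsync status=progress
rm /mnt/virtiofs-test/ddtest.bin
umount /mnt/virtiofs-test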

Edited by mackid1993
Link to comment
On 3/16/2024 at 2:31 AM, mackid1993 said:

So I had a chance to test this with multiple shares and I have to say this is an incredible speed improvement with Rust virtiofs. ... This is a huge improvement over the stock config. Thanks for this @jch. Hopefully we can get these settings implemented in 6.13.

I am looking to see how this can be implemented. The Rust version in the next release is stored in /usr/bin and we have a symlink from /usr/libexec to the new binary, so we could change the symlink to a wrapper that does the additional processing. I have also asked the libvirt team if they have a timescale for implementing this in the XML.

 

Looking at the old args we should be able to build new args as follows.

 

Old --fd=45 -o source=/mnt/cache,cache=always,sandbox=chroot,xattr

 

/usr/libexec/virtiofsd \
        --fd="$FD" \ # From fd=
        --shared-dir="/mnt/user/Music" \ # From source=
        --xattr \ # From xattr
        --cache="never" \ # From cache=
        --sandbox="chroot" \ # From sandbox=
        --inode-file-handles="mandatory" \ # Add as default
        --announce-submounts # Add as default
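
For illustration only, here is roughly what such a wrapper could look like in bash -- a sketch of the old-to-new argument mapping, not the actual Unraid preprocessor:

#!/bin/bash
# Sketch: translate the old-style invocation
#   virtiofsd --fd=45 -o source=/mnt/cache,cache=always,sandbox=chroot,xattr
# into Rust virtiofsd arguments. Illustrative only.
NEW_ARGS=()
while [[ $# -gt 0 ]]; do
  case "$1" in
    --fd=*) NEW_ARGS+=("$1"); shift ;;
    -o)
      IFS=',' read -ra OPTS <<< "$2"
      for opt in "${OPTS[@]}"; do
        case "$opt" in
          source=*)  NEW_ARGS+=("--shared-dir=${opt#source=}") ;;
          sandbox=*) NEW_ARGS+=("--sandbox=${opt#sandbox=}") ;;
          xattr)     NEW_ARGS+=("--xattr") ;;
          cache=*)   ;;    # ignored: cache is forced to never below
        esac
      done
      shift 2 ;;
    *) shift ;;
  esac
done
exec /usr/bin/virtiofsd "${NEW_ARGS[@]}" \
  --cache=never --inode-file-handles=mandatory --announce-submounts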

 

@jch Is there a reason for cache="never"?

Link to comment
11 hours ago, SimonF said:

I am looking to see how this can be implemented; the Rust version in the next release is stored in /usr/bin and we have a symlink from /usr/libexec to the new binary. ... @jch Is there a reason for cache="never"?

I'm curious why we are disabling caching. It's supposed to improve performance, is it not?

Link to comment
On 3/14/2024 at 1:06 PM, SimonF said:

Have you found a documented best setup anywhere for the Rust version? It is included in the next release, so I can look at making changes.

 

Looking at libvirt, they have not made any changes to support the new options. There is an open issue, but it's at least a year old.

 

They are all documented here: https://gitlab.com/virtio-fs/virtiofsd

EDIT: Oops, I misread your question. No, I haven't found anything that clearly outlines the "best" setup. The document I linked has some guidelines (e.g. they were very adamant that `--announce-submounts` is extremely important if you're passing through a mount). Given it's an open-source project, it's probably worth reaching out to the maintainers, especially if these will become default arguments in Unraid.

 

On 3/17/2024 at 3:50 PM, mackid1993 said:

I'm curious why we are disabling caching. It's supposed to improve performance, is it not?

 

So, I think this is a personal choice, but what I've found is that if you enable caching you have to be very careful about modifying the files on the host while the guest is running (at least when I was doing extensive testing a year ago). I have a project that concurrently accesses the files the VM has access to, but if your VM's files aren't touched by any other process on the host then it's probably safe to re-enable cache. I believe `auto` in the Rust virtiofsd implementation does some checking for changes, but with older versions I was seeing very strange file corruption, especially if mover moves the file. I disabled it because I couldn't get to the bottom of it and everything seems "fast enough" without it (I get near-native writes anyway onto the NVMe drive that I shared).

 

If you want to re-enable cache, I'd suggest going with "auto" (the default for Rust virtiofsd) and then testing to make sure there isn't any file corruption under the following circumstances:

- actively write to a file on the guest, then try to move the file on the host (to simulate mover)

- open the file on guest (to hit the cache), then modify the file on host, then make sure those changes propagate to the guest

- open the file on guest (to hit the cache), then rename the file on host and make sure those changes propagate to the guest

There might be more test cases but those were the ones causing issues with older versions.
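
A crude way to script the second check (example paths; run the first two commands on the host and the last one inside the guest, then compare the hashes):

# on the Unraid host: modify a file the guest already has open/cached
echo "host change $(date)" >> /mnt/user/Music/cache-test.txt
sha256sum /mnt/user/Music/cache-test.txt

# inside the guest (Linux shown; on Windows use Get-FileHash) -- the hash should match the host's
sha256sum /mnt/virtiofs-music/cache-test.txt    # hypothetical guest mount point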

 

Also thanks for expanding my steps to be more detailed :).

 

Edited by jch
Link to comment
16 minutes ago, jch said:

 

They are all documented here: https://gitlab.com/virtio-fs/virtiofsd ... If you want to re-enable cache, I'd suggest going with "auto" (the default for Rust virtiofsd) and then testing to make sure there isn't any file corruption.

 

Thanks, I'm not going to go that far; I'll take your word for it. It is crazy fast now. Given the FUSE layer that Unraid has, maybe it's best to leave cache disabled, especially given your point about the mover running.

Link to comment

I also got this up and running with the Rust version. No change in speeds (still maxing out gen3 hardware), but the big win is that it resolved my stability issue. Previously the drive would drop out and not be accessible until after a reboot; restarting the service wouldn't work. I've been using this for a week now, abusing it with reads/writes. I finally feel confident using it after trying/testing VirtioFS for quite some time in Unraid. I'm using Intel Xeon v2s.

Link to comment

The folder I've mounted is directly on the cache pool (i.e. bypassing FUSE via exclusive access or referencing /mnt/cache/<folder>/) and is on a pair of Samsung 870 EVO 2TB SSDs in an encrypted mirrored BTRFS configuration. My CrystalDiskMark results from the guest VM (this is with other concurrent writes going on -- was too lazy to do a real dedicated test) are below. It's fast enough for me, and like @Shadowplay highlighted, the stability is phenomenal now; I don't see any of the issues I previously had with virtiofsd.

[CrystalDiskMark results screenshot]

Link to comment

Upgraded to 6.12.9 - performance is even worse now, with the same behavior of a single vCPU being used to 100% during a file transfer. Only about 45 MiB/s. Need more people to try this with AMD; so far I believe literally everyone else who has tried this is using Intel.

Link to comment
1 hour ago, johnsanc said:

Upgraded to 6.12.9 - performance is even worse now, with the same behavior of a single vCPU being used to 100% during a file transfer. Only about 45 MiB/s. Need more people to try this with AMD; so far I believe literally everyone else who has tried this is using Intel.

Do you have any cores isolated from the host and pinned to your VM? I found core isolation makes a huge performance difference for me, even if it's only a couple of cores and I still let the VM access the rest of the cores that aren't isolated.

 

Also just to throw out the obvious things, have you checked for any BIOS updates for your motherboard?

Link to comment
On 3/16/2024 at 2:31 AM, mackid1993 said:

So I had a chance to test this with multiple shares and I have to say this is an incredible speed improvement with Rust virtiofs. ... This is a huge improvement over the stock config. Thanks for this @jch. Hopefully we can get these settings implemented in 6.13.

I am adding support for the next release.

 

my pre-process is here https://github.com/unraid/webgui/pull/1650/files#diff-4c8d5cce21f2c78e645df54b1784ac87676c0af710e0cb0c941e277f8cc823d9

 

This bash script needs to be added to /usr/libexec/ and made executable. The preprocessor above also has to be executable.

 

root@computenode:/mnt/user/appdata# cat  /usr/libexec/virtiofsd 
#!/bin/bash

eval exec /usr/bin/virtiofsd $(/usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php "$@")
root@computenode:/mnt/user/appdata# 

 

It takes the options sent through to the command at VM start.

 

root      1143     1  0 09:10 ?        00:00:00 /usr/bin/virtiofsd --syslog --inode-file-handles=mandatory --announce-submounts --fd=46 --shared-dir=/mnt/cache --cache=never --sandbox=chroot --xattr
root      1147     1  0 09:10 ?        00:00:00 /usr/bin/virtiofsd --syslog --inode-file-handles=mandatory --announce-submounts --fd=46 --shared-dir=/mnt/user/isos --cache=never --sandbox=chroot --xattr
root     21072     1  0 09:13 ?        00:00:00 /usr/bin/virtiofsd --syslog --inode-file-handles=mandatory --announce-submounts --fd=42 --shared-dir=/mnt/user/domains3 --cache=never --sandbox=chroot --xattr

 

The Rust binary needs to be in /usr/bin/virtiofsd.

 

Cache is set to never; the other values are taken from the -o options.

 

The following additional switches are added.

 

--syslog --inode-file-handles=mandatory --announce-submounts

 

but you can override these by creating /etc/libvirt/virtiofsd.opt, with each option on a new line.
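
For illustration, creating that override file from the terminal might look something like this (the switches listed are examples only; per the description above, one option per line):

# hypothetical override file contents -- adjust the switches to what you actually want
cat > /etc/libvirt/virtiofsd.opt << 'EOF'
--syslog
--announce-submounts
EOF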

Link to comment
9 hours ago, SimonF said:

I am adding support for the next release. My pre-processor is here: https://github.com/unraid/webgui/pull/1650/files#diff-4c8d5cce21f2c78e645df54b1784ac87676c0af710e0cb0c941e277f8cc823d9 ... you can override these by creating /etc/libvirt/virtiofsd.opt, with each option on a new line.

I'll test this out. Do I have to make any changes in my XML, or do I just ensure I have the Rust binary and your bash script in /usr/libexec?

Edit: or do I have to be on 6.13 to test this?

Edited by mackid1993
Link to comment
