• [6.9.0-beta35] virtio-fs fails to start


    Popog
    • Minor

    Trying to create any VM with a simple virtio-fs mount fails

     

    Basic XML:

    <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs'/>
        <binary path='/usr/libexec/virtiofsd' />
        <source dir='/mnt/user/shared'/>
        <target dir='shared_mount'/>
    </filesystem>

    QEMU logs:

    Quote

    qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=shared_mount,bus=pci.5,addr=0x0: Failed to write msg. Wrote -1 instead of 12.
    qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=shared_mount,bus=pci.5,addr=0x0: vhost_dev_init failed: Operation not permitted
    shutting down, reason=failed

    virtiofsd logs:

    Quote

    [ID: 00029648] virtio_session_mount: Waiting for vhost-user socket connection...
    [ID: 00029648] virtio_session_mount: Received vhost-user socket connection
    [ID: 00000001] pivot_root(., .): Invalid argument

     

    stefanha (QEMU person for this feature) said it was most likely because pivot_root doesn't work well under initramfs. QEMU 5.2 might mitigate this issue by adding a flag to use chroot instead (-o sandbox=chroot).

     

    Defaulting to this behavior would require creating a wrapper script like

        #!/bin/sh
        exec /usr/libexec/virtiofsd -o sandbox=chroot "$@"
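
    If you want to try this ahead of an official fix, a minimal sketch of creating such a wrapper by hand (the name virtiofsd-wrapper and its location are my assumptions, and since Unraid rebuilds its root filesystem from the flash drive on every boot, this would have to be re-run after each reboot):

        # Create a wrapper that forces chroot sandboxing instead of pivot_root,
        # which fails on Unraid's initramfs-style root filesystem.
        printf '#!/bin/sh\nexec /usr/libexec/virtiofsd -o sandbox=chroot "$@"\n' \
            > /usr/libexec/virtiofsd-wrapper
        chmod +x /usr/libexec/virtiofsd-wrapper   # libvirt must be able to execute it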

     

    stefanha also suggested the unraid devs may want to contact the virtio-fs developers in #virtio-fs on Freenode IRC or the [email protected] mailing list (e.g. to see if there are any changes that can be made to unraid to fix pivot_root).




    User Feedback

    Recommended Comments

    Aw, so I'm not the only one having issues. I thought it was just me; I've been trying to make it work for some time but it's over my head. If I run that script, would that potentially solve the issue? (Knowing I would have to run it at each reboot.)

    Link to comment

    You would have to point your xml to that script as the binary, i.e.

    <binary path='/usr/libexec/virtiofsd-custom-script' />

    where /usr/libexec/virtiofsd-custom-script is that one-liner that forwards to the actual virtiofsd.

     

    Also you'd need to upgrade Unraid to QEMU 5.2, or at least upgrade the virtiofsd binary, since 5.1 doesn't support that flag.

     

    I have not tried any of these proposed workarounds (no idea how to even upgrade QEMU inside of unraid), so who knows if they actually resolve everything, but at least they should get you to the next error.

    Link to comment

    I compiled QEMU 5.2 myself and it works great with your custom script.
    It is a lot faster than 9p and it also "works" with a Windows VM.

    For Linux VMs this works perfectly fine, and I haven't had any of the issues that I had with 9p or NFS.

    Link to comment

    After building a custom QEMU 5.2 slackpkg and implementing the workaround described above, I was also able to use virtiofs to pass through directories on my Unraid host to my VMs. However, determining the correct compilation options for QEMU was a time-consuming, iterative process. I reached out to @limetech and they confirmed that QEMU 6.0 will be included in Unraid 6.10, which is coming "soon". For future readers of this thread: if you are not in immediate need of this functionality, I would recommend waiting for Unraid 6.10. If you cannot wait, I have a few notes that may help you get this working.
     

    1. Use a "full Slackware current" VM as your build machine. The Slackware current kernel is slightly ahead of Unraid 6.9.2 at the time of this writing (5.10.39 vs 5.10.28) but the QEMU package it produces is compatible. I downloaded my Slackware current install ISO from AlienBOB here.
    2. The QEMU 5.2.0 source code is available at https://www.qemu.org/download/#source (direct download link here).
    3. The QEMU 5.2.0 build scripts have a bug which results in incorrect options being passed to the linker. In order to build QEMU 5.2.0, you will need to apply a patch which can be found here (this may be resolved in QEMU 6.0).
    4. You might need to download and install a few additional slackpkgs onto your build VM to compile QEMU. I needed libseccomp-2.3.2, spice, and spice-protocol. The spice packages were not available on pkgs.org, so I rebuilt them from source on my build machine. (Note: these packages are already available on Unraid 6.9.2.)
    5. A QEMU slackbuild script can be found on slackbuilds.org here. The QEMU build args that worked for me are included below. I also set the environment variables VERSION=5.2.0 and TARGETS="x86_64-softmmu,x86_64-linux-user". Additionally, I commented out the line "make config-all-devices.mak config-all-disas.mak" and removed a few nonexistent files from the cp command near the end of the build script.
      CXXFLAGS="$SLKCFLAGS" \
      ./configure \
        --prefix=/usr \
        --libdir=/usr/lib${LIBDIRSUFFIX} \
        --sysconfdir=/etc \
        --localstatedir=/var \
        --docdir=/usr/doc/$PRGNAM-$VERSION \
        --enable-system \
        --enable-kvm \
        --disable-debug-info \
        --enable-virtiofsd \
        --enable-virtfs \
        --enable-jemalloc \
        --enable-nettle \
        --enable-vnc \
        --enable-seccomp \
        --enable-spice \
        --enable-libusb \
        --audio-drv-list=" " \
        --disable-gtk \
        --disable-snappy \
        --disable-libdaxctl \
        --disable-sdl \
        --disable-sdl-image \
        --disable-virglrenderer \
        --disable-vde \
        --disable-vte \
        --disable-opengl \
        $with_vnc \
        $targets

       

    6. The procedure above produces a QEMU 5.2.0 slackpkg which can be installed on Unraid. One final note: Unraid 6.9.2 includes glibc-2.32, while QEMU 5.2.0 depends on glibc-2.33; a slackpkg for glibc-2.33 can be obtained here.
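
    Putting steps 5 and 6 together, a rough sketch of the build-and-install flow (the exact SlackBuild invocation and package filenames are assumptions on my part; yours will differ depending on build tags):

        # On the Slackware build VM: run the tweaked SlackBuild from slackbuilds.org.
        VERSION=5.2.0 TARGETS="x86_64-softmmu,x86_64-linux-user" ./qemu.SlackBuild

        # On the Unraid host: bring glibc up to 2.33 first, then install the new QEMU package.
        upgradepkg glibc-2.33-x86_64-1.txz
        installpkg qemu-5.2.0-x86_64-1_SBo.tgz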

     

    It's important to emphasize that I do not know the compilation options used to build QEMU for the official Unraid distribution so it's very possible that the QEMU package produced by the procedure above is missing some features that are present in the pre-installed QEMU. As such, I would caution against taking this route and suggest waiting for Unraid 6.10 unless you are truly in dire need. In the meantime I hope this helps others who may find themselves in the same situation that I was in!

    Edited by pants
    • Like 3
    • Thanks 1
    Link to comment

    EDIT: Holy stinky dog-poo! Okay kids, this all works using *exactly* the below XML (whatever your naming convention); just make sure that when you create the script, you specify it with the shebang...

    Script: 

    #!/bin/sh
    exec /usr/libexec/virtiofsd -o sandbox=chroot "$@"
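
    (An assumption on my part, since it isn't spelled out above: the script also needs to be executable, or libvirt won't be able to launch it:)

        chmod +x /usr/libexec/virtiofsd-script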

     

    XML:

      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>    
    ----
        <filesystem type='mount' accessmode='passthrough'>
          <driver type='virtiofs'/>
          <binary path='/usr/libexec/virtiofsd-script'/>
          <source dir='/mnt/user/domains'/>
          <target dir='unraidshare'/>
        </filesystem>

    After you get it running, mount with the usual mount commands, in my case:

    sudo mount -t virtiofs unraidshare /mnt/unraiddata/
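
    (If you also want the mount to come back on guest reboots, my understanding is that a virtiofs line in the guest's /etc/fstab of roughly this shape works, reusing the same tag and mount point:)

        unraidshare  /mnt/unraiddata  virtiofs  defaults  0  0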

     

    ---------See above!!! What follows are notes of my earlier progress...--------


    This be an old thread, but relevant once again as we are now in Unraid 6.10-rc4. I have created a file creatively called `virtiofsd-script` and input it into my xml as follows:

      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>    
    ----
        <filesystem type='mount' accessmode='passthrough'>
          <driver type='virtiofs'/>
          <binary path='/usr/libexec/virtiofsd-script'/>
          <source dir='/mnt/user/domains'/>
          <target dir='unraidshare'/>
        </filesystem>

    But I am still having no joy. 

     

    To plagiarize myself from the Unraid Discord, I have the following:

    Quote

    To note, I have added a file with ```exec /usr/libexec/virtiofsd -o sandbox=chroot "$@"``` in it, resulting in:

    Execution error
    internal error: virtiofsd died unexpectedly

     

    Without that file, or just not pointing the `<binary>` path at it, I get the longer error of:

     

    internal error: qemu unexpectedly closed the monitor:
    2022-04-10T21:24:51.687704Z qemu-system-x86_64: -device {"driver":"vhost-user-fs-pci","id":"fs0","chardev":"chr-vu-fs0","tag":"unraidshare","bus":"pci.1","addr":"0x0"}: Failed to write msg. Wrote -1 instead of 12.
    2022-04-10T21:24:51.687766Z qemu-system-x86_64: -device {"driver":"vhost-user-fs-pci","id":"fs0","chardev":"chr-vu-fs0","tag":"unraidshare","bus":"pci.1","addr":"0x0"}: vhost_backend_init failed: Protocol error

    Any help or insight I could get would be awesome! Happy, of course, to play around with different things to try to get it working as requested/recommended as well. It's just for a blockchain node, so nothing that can't be replaced, and I would really like that performance increase!

     

    Edited by omninewb
    • Like 1
    • Thanks 1
    Link to comment

    This fix is not working in 6.9.2 for me.

     

    2022-04-30T05:12:46.555460Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=web,bus=pci.0,addr=0x3: Failed to write msg. Wrote -1 instead of 12.
    2022-04-30T05:12:46.555510Z qemu-system-x86_64: -device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=web,bus=pci.0,addr=0x3: vhost_dev_init failed: Operation not permitted

     

    Link to comment
    On 4/11/2022 at 1:36 AM, omninewb said:

    EDIT: Holy stinky dog-poo! Okay kids, this all works using *exactly* the below XML; just make sure that when you create the script, you specify it with the shebang... […]

     

    A thousand thanks to you! Got my virtiofs working with this. I'd kinda lost hope of getting it working when I tried before. 🥳

    Link to comment

    Late to the party with this - but is this so we can access UNRAID shares directly without using SMB/network shares from a Windows VM? If so, can someone kindly detail the steps I need to perform to do this? Thanks.

    Link to comment
    On 4/10/2022 at 11:36 PM, omninewb said:

    EDIT: Holy stinky dog-poo! Okay kids, this all works using *exactly* the below XML; just make sure that when you create the script, you specify it with the shebang... […]

     

    Hi,

    Is this still working for you and can you confirm which version of Unraid you are currently using?

     

     

    Link to comment
    On 4/10/2022 at 11:36 PM, omninewb said:

    EDIT: Holy stinky dog-poo! Okay kids, this all works using *exactly* the below XML; just make sure that when you create the script, you specify it with the shebang... […]

     

    You don't need to use a script anymore.

     

    <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs' queue='1024'/>
        <source dir=''/>
        <target dir=''/>
        <binary path='/usr/libexec/virtiofsd' xattr='on'>
            <sandbox mode='chroot'/>
            <cache mode='always'/>
            <lock posix='on' flock='on'/>
        </binary>
    </filesystem>
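
    (These per-option elements need a reasonably recent libvirt. If you edit the domain XML by hand rather than through the GUI form, you can have libvirt schema-check it before starting the VM; a sketch, with the domain XML path and name assumed:)

        # Re-define the domain, validating the XML against the libvirt schema first.
        virsh define /etc/libvirt/qemu/MyVM.xml --validate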

     

    Edited by SimonF
    Link to comment

    @SimonF

     

    Since Unraid uses QEMU 6.2 in the latest version, this should work, shouldn't it?

     

    For me adding your snippet causes a bunch of errors, for example:

    unsupported configuration: 'virtiofs' requires shared memory

     

    Before, it kinda worked (I could at least add the XML configuration and save it), but when I tried to mount it in Debian the system completely froze, and it also broke other mounts. Unraid VMs behave very weirdly sometimes.

    Link to comment
    7 hours ago, unifiedmamba said:

    @SimonF Since Unraid uses QEMU 6.2 in the latest version, this should work, shouldn't it? For me adding your snippet causes a bunch of errors... […]

    Which OS version are you running?

    Link to comment
      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>    

    @SimonF adding this to the config fixed the error in Unraid, but I am not able to use the mount inside my Debian VM.

    Whenever I try to cd or even ls the folder it's mounted to, the shell completely freezes, via SSH and even VNC. It seems it does not get mounted properly.

     

    I can't even umount it.

    Edited by unifiedmamba
    Link to comment

    I have been testing with 6.10.3 and it works with both 9P and virtiofs.

     

    6.11 only seems to work at present with 9P.

     

    I am working with Limetech to see why.

     

    This is for a GUI update; you will still need to manually add

      <memoryBacking>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>  

    But I am looking at options to add this also.

     

    [Screenshot of the updated VM settings GUI]

    Link to comment
    4 hours ago, unifiedmamba said:
    @SimonF adding this [the memoryBacking section] to the config fixed the error in Unraid, but I am not able to use the mount inside my Debian VM... […]

    This is fixed in the next release, but you need to manually add the memoryBacking options. Mounting and access are fixed.

    Edited by SimonF
    Link to comment
    19 hours ago, SimonF said:

    This is fixed in the next release, but you need to manually add the memoryBacking options. Mounting and access are fixed.

    Wish I knew this was borked before I updated :/

    Link to comment

    @SimonF But what's the reason for this mount freezing my whole shell? 

    When mounting it (even with -v) it throws no errors.

     

    And what is the fix for this that is implemented in the next version? Knowing Unraid this could probably take a few months :D

     

     

    Link to comment
    17 minutes ago, unifiedmamba said:

    @SimonF But what's the reason for this mount freezing my whole shell? […]

    I think it is a QEMU/kernel version issue; versions are being bumped to the latest.

    Link to comment
    On 10/1/2022 at 6:45 PM, unifiedmamba said:

    @SimonF But what's the reason for this mount freezing my whole shell? […]

    6.11.1 has been released now.

    Link to comment

    Got it working with a Windows VM; not that hard to implement. The only problem is it's worse than SAMBA. Does it matter that I have a passed-through 10G NIC?

    SAMBA SPEED - 500-650MB/s
    VIRTIO-FS SPEED - 100-120MB/s

    Large or small files, it seems that is the maximum I can get; any way to tune this?

    Regards,
    Dhen

    Edited by dhendodong
    Link to comment
    21 hours ago, dhendodong said:

    Got it working with a Windows VM... the only problem is it's worse than SAMBA... any way to tune this? […]

    I've not looked at performance. You will need to make the changes in the XML editor; try removing the cache line in the binary statement to see if that improves performance.

     

    <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs' queue='1024'/>
        <source dir=''/>
        <target dir=''/>
        <binary path='/usr/libexec/virtiofsd' xattr='on'>
            <sandbox mode='chroot'/>
            <cache mode='always'/>
            <lock posix='on' flock='on'/>
        </binary>
    </filesystem>
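
    (For reference, a sketch of the same stanza with the cache line removed, i.e. the variant suggested above; untested on my side:)

        <filesystem type='mount' accessmode='passthrough'>
            <driver type='virtiofs' queue='1024'/>
            <source dir=''/>
            <target dir=''/>
            <binary path='/usr/libexec/virtiofsd' xattr='on'>
                <sandbox mode='chroot'/>
                <lock posix='on' flock='on'/>
            </binary>
        </filesystem>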

     

    Edited by SimonF
    Link to comment

    Thank you very much for your work @SimonF!

    So far it seems to work pretty well; however, bonnie++ doesn't want to run on the mounted drive, but I'll get that sorted out.

    I had to remove the "cache mode" part since I don't want it to write to the cache pool.

    Without having measured with any tool, the performance of my Nextcloud instance seems to have improved quite a bit.

    Edited by unifiedmamba
    Link to comment
    57 minutes ago, unifiedmamba said:

    Thank you very much for your work @SimonF! […] I had to remove the "cache mode" part since I don't want it to write to the cache pool.

    Cache mode here is not the Unraid cache pool but an in-memory cache.

    Link to comment



