christophocles

  1. Yeah, I used to be primarily a Windows guy, but I've weaned myself off it over the past couple of years. I run Linux on the bare metal (as you do with Unraid) and run Windows only for the software that requires it; right now that's limited to Adobe products and Backblaze. My storage is all managed with OpenZFS on the host OS, and I run other sandboxed services in a Linux guest VM. VirtioFS is cross-platform, and it's what lets me share the storage across multiple VMs while keeping some level of sandboxing between applications. I'm sure we will be able to figure this one out; I know others have the same issue and have reported it recently. I've been living with it like this for about a year now. It's not as serious as the virtiofs-win memory leak, which made my Windows VM completely unusable. One of the virtiofs threads seems to get deadlocked after a while, forcing me to reboot the guest. I'm not sure whether it's happening in the guest, in the host virtiofsd process, or in QEMU, but I do have a workaround: increasing the thread pool size reduces how often I need to reboot the guest (see the virtiofsd sketch after this list).
  2. Yes, Tumbleweed has shipped the Rust version for a few months now. I switched from the C version to the Rust version about a year ago to solve the issue of running out of file handles: the Rust rewrite added the --inode-file-handles option, which avoids hitting the file-handle limit (it appears in the sketch after this list). Currently I'm running virtiofsd (Rust) version 1.7.2, which is the current version in the Tumbleweed repo. The GitLab site has releases up to 1.9.0; I haven't tried that yet, but the commenter above me on the issue thread is using that version.
  3. It's great to finally have this bug squashed! Thanks @SimonF @mackid1993 for the assistance reporting the bug to the right people. Now that my Win10 VM is stable, I am moving on to solving my next VirtioFS issue. I also have a Linux VM that accesses these same shares, and it has a completely different problem: after a few hours, one of the shares quits responding, and any process that attempts to read data from that share hangs indefinitely. I can't remount the share and I can't reboot normally; the only thing I can do is Force Power Off the VM and restart it. Looking through the various log files, I can't figure out what is going on (the commands I use to gather data are sketched after this list). It only happens on the Linux guest; I have never seen it happen on the Windows guest. I reported this issue on the virtiofsd GitLab site. Has anyone else experienced this? If so, please post a comment on this thread: https://gitlab.com/virtio-fs/virtiofsd/-/issues/133
  4. I added more supporting info and I also submitted a bug report for WinFsp. https://github.com/winfsp/winfsp/issues/534
  5. It's not limited to Unraid. I have the issue with openSUSE Tumbleweed as the host, and I have not seen any change across multiple version upgrades; currently my QEMU version is 8.1.2. I'm also not sure how it could be caused by QEMU, as that runs on the host. The memory leak is inside the Windows guest OS, so it seems to me that it has to be caused by either the virtio-win kernel driver or WinFsp (a way to watch the pool growth from inside the guest is sketched after this list).
  6. Oh cool, someone actually read my reddit post. It would have been even cooler if one of you guys had left a comment linking back to this thread for further discussion 😀. So here we are 7 months later, and I have more data to share. I am 99% sure this has nothing to do with refs.sys. If you do a case-insensitive search for mmdi, you will see the actual string it finds in refs.sys: MmDisableModifiedWriteOfSection. So it's not really a match for that pool tag name. There's a thread on Super User that describes more methods for hunting down these non-paged pool leaks. Note where it says that Mmdi is found in pooltag.txt, so you actually have to use xperf and WPA for further debugging. Following the method described there, I captured a snapshot of the memory growth (the capture commands are sketched after this list), opened it in WPA, loaded symbols, and expanded the Mmdi pool tag to find stack references to winfsp-x64.dll and virtiofs.exe. So there's the smoking gun: one of these two drivers is the culprit. I upgraded to the latest versions of WinFsp (2.0) and the virtio-win guest tools (0.1.240), and the leak is still active.
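
For posts 1 and 2 above, here is a minimal sketch of a standalone Rust virtiofsd invocation that combines the two options discussed; the socket path, shared directory, and thread-pool value are placeholders for illustration, not tuned recommendations (check virtiofsd --help for your version):

```
# Sketch only: paths and values below are example placeholders.
# --thread-pool-size: a larger pool is the workaround for the stuck-thread
#   issue; it reduces how often the guest needs a reboot.
# --inode-file-handles: added in the Rust rewrite; tracks inodes via file
#   handles instead of open descriptors, avoiding the file-handle limit.
/usr/libexec/virtiofsd \
    --socket-path=/run/virtiofsd/vm1.sock \
    --shared-dir=/tank/shares \
    --thread-pool-size=64 \
    --inode-file-handles=mandatory
```

As I understand it, --inode-file-handles=mandatory requires the daemon to have CAP_DAC_READ_SEARCH, which is normally the case when it runs as root.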
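For the hang in post 3, these are generic Linux debugging steps (not virtiofs-specific) that I would use to capture state before forcing the VM off; they assume sysrq is enabled in the guest:

```
# Inside the Linux guest: dump all blocked (D-state) tasks to the kernel
# log, then read it back. The stack of the hung reader shows where it is
# stuck (e.g. waiting on a virtiofs request that never completes).
echo w | sudo tee /proc/sysrq-trigger
sudo dmesg | tail -n 80

# On the host: confirm the virtiofsd process for this VM is still alive,
# listing its threads. The [v] trick keeps grep from matching itself.
ps -eLf | grep [v]irtiofsd
```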
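To watch the leak from post 5 live inside the Windows guest, poolmon from the WDK can filter on a single pool tag (Mmdi is the tag identified in post 6); the flag spelling below is from memory of the poolmon docs, so verify it against your WDK version:

```
:: Elevated prompt inside the Windows guest, WDK installed.
:: /i filters to the Mmdi tag, /b sorts by bytes. If the Bytes column
:: climbs steadily while virtiofs shares are in use, the leak is active.
poolmon /iMmdi /b
```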
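And for anyone who wants to reproduce the post 6 analysis, this is roughly the capture, assuming the Windows Performance Toolkit is installed; treat the exact flags as a sketch and check them against xperf -help in your WPT version:

```
:: Elevated prompt inside the Windows guest, Windows Performance Toolkit.
:: Trace pool allocations/frees for the Mmdi tag with call stacks.
xperf -on PROC_THREAD+LOADER+POOL -stackwalk PoolAlloc+PoolFree -PoolTag Mmdi
:: Let the leak grow for a few minutes, then stop and merge the trace:
xperf -d pool.etl
:: Open pool.etl in WPA, load symbols, and expand the Mmdi tag to see
:: which module (winfsp-x64 or virtiofs) the allocating stacks pass through.
```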