johnsanc

Members
  • Posts

    307
  • Joined

  • Last visited

Everything posted by johnsanc

  1. I spoke too soon. Froze again and CPUs pegged. Also the 0/12 CPU is not allocated to the VM...
  2. Just a quick update - So far so good with 16GB allocated to the VM (25% of my total RAM in the system). The VM has been up longer than it ever has been so far with using virtiofs. @VegChan - Can you share your setup and findings as well so far? I'm curious if you were also using either 32GB+ and/or 50%+ of your system RAM allocated to VM.
  3. Here's the basic info of the host machine:

     M/B: ASRock X570 Creator
     BIOS: American Megatrends Inc. Version P2.40, dated 04/13/2020
     CPU: AMD Ryzen 9 3900X 12-Core @ 3800 MHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 768 KiB, 6 MB, 64 MB
     Memory: 64 GiB DDR4 Multi-bit ECC (max. installable capacity 128 GiB)
     Network: eth0: 1000 Mbps, full duplex, mtu 1500
     Kernel: Linux 5.19.14-Unraid x86_64

     And I've attached my VM XML to this post. Note I changed my memory allocation from 32 GB down to 16 GB, because someone else I was chatting with recently mentioned they were having stability issues with anything 32 GB and above. So far the VM has been running for about an hour and a half with 16 GB without any issues yet. I'll let you know how it goes. johnsanc-win10vm.xml
  4. Posting here for visibility since it seems like a few people are having issues with VirtioFS and Windows VMs... It looks like the inclusion of virtiofs configuration in the VM XML may result in Windows 10 VM freezes. When this occurs the VM is completely unresponsive and frozen, there is nothing useful in any logs, and all CPUs allocated to the VM become pegged at 100%. Usually this occurs within 30 minutes to 2 hours of the VM being started. I've confirmed that disabling the VirtioFS service in Windows has no impact on the freezing, nor does I/O load seem to have any impact. Not sure if it matters, but I initially had 32GB of my 64GB of memory allocated to the VM.
  5. Yes, same here. Unfortunately I cannot find anything useful in any logs that even gives a clue. Perhaps @SimonF has a suggestion for where to look, or maybe he can reproduce the issue as well?
  6. Thanks for confirming I’m not the only one. I just checked, and all the CPUs assigned to that VM are pegged as well once the freeze occurs. I tried disabling the VirtIO-FS service in Windows and it still freezes, so that sounds more like a host problem. Perhaps some configuration issue in the XML, or a bug? I’m open to suggestions, since virtiofs is so much faster than SMB shares for my use cases. I would love it if we could get this VM freeze issue resolved.
  7. Posted in General the other day, but figured this is a better place to ask, along with more details... so apologies for the double post, as it doesn't look like I can move a topic. I upgraded to UnRAID 6.11.1 and everything went smoothly. However, my Windows 10 VM just freezes without any apparent errors as far as I can tell. There is nothing suspicious that I can see in the Windows Event Viewer logs, and I don't see anything different in my VM log compared to when it was working fine. I used to have issues after I upgraded to 6.11.0 where the network connection to the VM would drop, but I resolved this by upgrading the virtio drivers to the ones from the latest stable ISO. This issue with 6.11.1 is different though, and I can tell the VM is completely frozen. I usually use RDP for accessing the VM, but if I plug a monitor into the graphics card I have passed through to the VM, I can see the time on the Windows lock screen is frozen. The only major change I made was enabling the virtiofs feature. I've attached my diagnostics in case anyone can take a peek to see what the issue might be. I am not really sure where to look. My VM froze around 11:29 PM ET on 10/9/22 and again at 10:53 AM ET on 10/10/22. Basically, it looks like the VM will not stay up for more than 2-3 hours without completely freezing and requiring me to force stop it.
I did notice that if I try to do a regular stop once it's frozen, the libvirt log shows the following:

2022-10-10 14:57:38.384+0000: 520: error : qemuAgentSend:759 : Guest agent is not responding: Guest agent not available for now
2022-10-10 14:58:14.162+0000: 519: error : qemuAgentSend:759 : Guest agent is not responding: Guest agent not available for now
2022-10-10 14:58:56.600+0000: 521: error : qemuAgentSend:759 : Guest agent is not responding: Guest agent not available for now
2022-10-10 14:59:27.392+0000: 521: warning : qemuDomainObjTaintMsg:6464 : Domain id=2 name='Windows 10' uuid=8dd0454f-e9f4-6e94-3bc2-93cbd146abcb is tainted: custom-ga-command
2022-10-10 14:59:32.393+0000: 521: error : qemuAgentSend:759 : Guest agent is not responding: Guest agent not available for now

Any ideas? tower-diagnostics-20221010-1112.zip
  8. I upgraded to UnRAID 6.11.1 yesterday and everything went smoothly. However, my Windows 10 VM appears to have stability issues and just freezes without any apparent errors as far as I can tell. There is nothing suspicious that I can see in the Windows Event Viewer logs, and I don't see anything different in my VM log compared to when it was working fine. I used to have issues after I upgraded to 6.11.0 where the network connection to the VM would drop, but I resolved this by upgrading the virtio drivers to the ones from the latest stable ISO. This issue with 6.11.1 is different though, and I can tell the VM is completely frozen. I usually use RDP for accessing the VM, but if I plug a monitor into the graphics card I have passed through to the VM, I can see the time on the Windows lock screen is frozen. The only major change I made was enabling the virtiofs feature. I've attached my diagnostics in case anyone can take a peek to see what the issue might be. I am not really sure where to look. tower-diagnostics-20221009-1805.zip
  9. Thanks for confirming, just for reference I also reported that here: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/839
  10. Make sure you don't have another network drive or something mapped to Z:; the Virtio-FS service does not handle drive letter conflicts. See my thread here for more details / troubleshooting Virtio-FS with Windows: Also, potential bug report here for the service restart issue I was encountering: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/839
  11. Did that resolve the restart issue for you? Or even with the dependencies you still have the restart issue?
  12. One other thing to note... the documentation says to register the service with a dependency, but I already had the VirtIO-FS service active before enabling any of this, and it had no dependencies. So, to modify that service for good measure, I ran this from an elevated command prompt:

      sc config VirtioFsSvc depend="WinFsp.Launcher/VirtioFsDrv"

      It didn't seem to resolve the broken user on restart though.
  13. OK, so I figured out how to break it... stop your VirtIO-FS service and restart it. It will behave improperly with that broken S-1-5-0 user. Can you confirm? I think yesterday I was running into issues because I was stopping and starting services and changing drive letters. But it looks like if you let it start up automatically and don't stop it, it works correctly.
  14. Well, this is bizarre... last night my VM froze with nothing in the logs or event viewer, so I force restarted it today. I just did another check, and I can see that it does work correctly, exactly as you show in your screenshot above. Yesterday I was getting everything with that S-1-5-0 user. I have no idea what would have changed to make it work differently today...
  15. I am currently using a local account with administrator privileges and it does not work for me. Are you sure you can copy a folder of files to the mounted drive? If so, how do the permissions of the copied folder/files look on the host machine after you copied them? I hate windows permissions.
  16. Yes I am currently just mounting /mnt/user/ for now to keep it simple until the signed drivers support the mount tag. I changed my registry key to a different drive letter though to avoid conflicts with my other mapped network drives. That worked fine. Info here on how to do that: https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/VirtIO-FS:-Shared-file-system#options So far everything seems to work great and the performance for me at least is MUCH better than SMB with a ton of small files, but the permissions issues are currently a show stopper.
  17. Excellent, thanks for the update. Yeah, it just took me a few minutes to realize the mount tag parameter wasn't supported yet. Hopefully we can get to the bottom of the permissions issues... otherwise this is basically unusable for basic Windows use cases. I'm not sure if it's a virtiofs issue or a WinFsp issue, or if there's some special configuration or way to make it play nice. Also, just to clarify: for my results above I used an extra SMB configuration to force the user/permissions that I want, and it's worked great for years. I am hoping we can do something like that with virtiofs/WinFsp.

      [global]
      security = USER
      guest account = nobody
      public = yes
      guest ok = yes
      map to guest = bad user
      force user = nobody
      force group = users
      force directory mode = 0777
      force create mode = 0666
      create mask = 0666
  18. Another update - Permissions are also an issue, and I haven't quite figured that one out yet. I tried a few basic tests, like copying a folder of files from the VM to the mounted drive, and it did not work: it created the folder with 775 permissions, but then none of the files within the folder could be copied. If anyone has any ideas I'm all ears. A few tests I did:
      - create folder with SMB: nobody/users, 777
      - create file with SMB: nobody/users, 666
      - create folder with virtiofs: nobody/users, 775
      - create file with virtiofs: nobody/users, 664
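The 775/664 results from virtiofs line up with standard POSIX mode creation, while the SMB 777/666 results come from the forced modes in my Samba config. A quick sketch of that arithmetic (the umask value here is my assumption about the host-side default, not something I've confirmed from virtiofsd):

```python
# Base modes typically requested for new directories and files,
# before the process umask is applied.
BASE_DIR, BASE_FILE = 0o777, 0o666

umask = 0o002  # assumed default umask of the host-side process

dir_mode = BASE_DIR & ~umask    # matches the virtiofs folder result
file_mode = BASE_FILE & ~umask  # matches the virtiofs file result

print(oct(dir_mode), oct(file_mode))
```

If that assumption holds, it would explain why virtiofs gives 775/664 where the SMB share (with `force directory mode = 0777` / `force create mode = 0666`) gives 777/666.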
  19. Apparently this is a very new feature that hasn't made its way into any of the standard driver ISOs yet. https://github.com/virtio-win/kvm-guest-drivers-windows/pull/804 I guess I'll stick with a single share for now to avoid the headaches.
  20. So it works correctly if I mount a single share, but if multiple are included it seems to map a random share to the Z: drive. How can you properly map multiple shares? I tried the instructions here, but whenever I try to start the virtiofs service the console looks OK, but then I see errors in the event viewer.

      Console command used, where "games" is a mount tag specified in the XML:
      "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsY games Y:

      Event Viewer error:
      virtiofs: The service VirtIO-FS has failed to start (Status=c0000001).

      Has anyone got this to work correctly for multiple shares?
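For anyone scripting this for several shares, the launchctl invocation above follows a per-share pattern: unique instance name, mount tag from the XML, free drive letter. A small sketch generating those command lines (the share names and drive letters are just examples, and each letter must be genuinely free, since the service doesn't resolve conflicts):

```python
# Sketch: build one WinFsp launchctl command per virtiofs mount tag.
# The mount tags and drive letters below are hypothetical examples.
LAUNCHCTL = r'"C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe"'

shares = {"games": "Y:", "media": "X:"}  # mount tag -> free drive letter

def start_command(tag: str, drive: str) -> str:
    instance = f"viofs{drive[0]}"  # unique instance name per mounted share
    return f"{LAUNCHCTL} start virtiofs {instance} {tag} {drive}"

commands = [start_command(tag, drive) for tag, drive in shares.items()]
for cmd in commands:
    print(cmd)
```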
  21. EDIT: I was able to get this to work by removing a network drive I had mapped to Z: Apparently this service does not manage any drive letter conflicts correctly.
  22. Posting this here because I don't want to clutter up the announcement thread... but how do you get virtiofs to work with a Windows 10 VM? The form composer created this for the settings:

      <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs' queue='1024'/>
        <binary path='/usr/libexec/virtiofsd' xattr='on'>
          <cache mode='always'/>
          <sandbox mode='chroot'/>
          <lock posix='on' flock='on'/>
        </binary>
        <source dir='/mnt/user/'/>
        <target dir='shares'/>
        <alias name='fs0'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </filesystem>

      I tried manually adding this to the XML, which got around the startup error:

      <memoryBacking>
        <nosharepages/>
        <source type='memfd'/>
        <access mode='shared'/>
      </memoryBacking>

      But now what? I have WinFsp installed, and in Windows I can see there is a "VirtIO-FS Service". If I start that service nothing seems to happen. I expected to see a new drive letter mounted once the service starts. Are there additional steps required? @SimonF - Any thoughts?
  23. I noticed something peculiar. Sometimes files modified over an SMB share are written with an Access timestamp that displays as 12:00:00 AM when viewing the file's properties. When I check the timestamp on Unraid, it appears these files are set to Sep 13 30828. I am able to reproduce this fairly consistently if I modify the file shortly after its creation time, within about a minute. What is causing this invalid access timestamp to be written? Is this a known bug? Is there any way to resolve it? There are a few Windows apps I use that do not handle this scenario properly. Any insight is appreciated!
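For what it's worth, that "Sep 13 30828" date doesn't look random: it's exactly where the maximum signed 64-bit Windows FILETIME (100 ns ticks since 1601-01-01) lands, which suggests something is writing 0x7FFFFFFFFFFFFFFF as the access time. A back-of-envelope check (pure arithmetic, since Python's datetime can't represent year 30828):

```python
# FILETIME counts 100 ns ticks since 1601-01-01 (UTC).
MAX_FILETIME = 2**63 - 1                   # largest signed 64-bit value

seconds = MAX_FILETIME // 10**7            # 100 ns ticks -> whole seconds
years = seconds / (365.2425 * 24 * 3600)   # average Gregorian year length

year = 1601 + int(years)
print(year)  # lands in year 30828, matching the bogus timestamp on Unraid
```

So this is likely a sentinel/overflow value being passed through rather than a real timestamp.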
  24. @JorgeB - Thanks so much for that link. I've read that post before, but after re-reading it carefully and checking, I see now that my 9207-8i + RES2SV240 is actually capable of supporting 16 drives at 275 MB/s. For some reason I had the diagrams for the LSI 2008 chipset and their associated speeds stuck in my head. So that being said, it looks like some potential options are:

      12 drives:
      - Change my current setup to single link for the 12 drives in my main enclosure using my existing 9207-8i + RES2SV240
      - Route a SAS cable to my other enclosure
      - Add another RES2SV240 for another 12 drives using single link
      - This should result in at most a slight bottleneck, but still very acceptable speeds without needing an extra HBA

      16 drives:
      - Add a 9207-8e + RES2SV240 in dual link for 275 MB/s for the additional 16 drives

      20 drives:
      - Add a 9207-8e + RES2SV240 in dual link for 275 MB/s for 16 of the drives
      - Route a cable from the internal expander out to the new enclosure for the additional 4 drives (since I'm only using 12 currently with a dual-link setup)

      So it sounds like getting a new HBA / expander pair would be a good way to go if I didn't want to sacrifice any potential speed from my current setup.
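The 275 MB/s figure falls out of the usable throughput of a SAS2 x4 wide port. A quick sanity check, assuming the commonly quoted ~2200 MB/s of real-world throughput per 4-lane 6 Gb/s link (an approximation, not a measurement from my setup):

```python
PER_LINK_MBPS = 2200   # approx. real-world MB/s per SAS2 x4 link (assumed)
links = 2              # dual link between HBA and expander
drives = 16

per_drive = PER_LINK_MBPS * links / drives
print(per_drive)       # 275.0 MB/s per drive with 16 drives on dual link

# Single link with 12 drives, as in the first option:
single_link_per_drive = PER_LINK_MBPS * 1 / 12   # the "slight bottleneck"
print(single_link_per_drive)
```

That ~183 MB/s per drive on single link with 12 drives is why the first option is only a mild bottleneck for spinning disks.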
  25. Yes, I should have clarified - I am only interested in the HBA / expander. My current setup is an Antec 1200 with 4x 3x5 drive cages and it works beautifully. It wasn't the most cost-effective storage solution for a lot of drives, but it was able to grow with my array over the years. And frankly, it just looks way better than a rack IMO. My goal is to create another matching tower just for the extra drives, using the same drive cages I use for my main enclosure. The problem is that I only have 1 PCIe 4.0 x16 slot available to use. So here's what I've deduced so far:
      - 12 drives = 9207-8e + RES2SV240 (dual link, basically mirroring what I have now internally; single link may also work with very little bottleneck)
      - 16 drives = not sure
      - 20 drives = not sure