Everything posted by mackid1993

  1. I can confirm that after letting my system sit for quite a while, setting vm.dirty_background_ratio to 2% and vm.dirty_ratio to 3% solves this memory-related issue and improves VM performance when memory backing is in use (see the sysctl sketch after this list). @SimonF, this issue may affect more than just me, as @johnsanc was seeing the same thing: after a memory-backing-enabled VM has been up for a while, other VMs won't start properly. I also noticed slowdowns on my main VM that seem related to this. Perhaps Unraid should set these values automatically when memory backing is in use. Is this something that can be done for 6.13 to improve VirtioFS support further?
  2. Can you try installing the Tips and Tweaks plugin and setting these values lower, and let me know if it helps with memory backing on? The help text says to set them to 2 and 3 for gaming and streaming VMs, and that lower values are better if you have a lot of RAM, which my 64 GB probably qualifies as. I won't know for another couple of hours whether this has worked. The command cat /proc/buddyinfo is helpful for checking fragmentation: it shows counts of free blocks of various sizes, smallest to largest, left to right (see the buddyinfo example after this list). Lots of small blocks and few large ones indicate fragmentation, based on the research I was doing. The Normal zone is the most relevant one for VMs.
  3. I think this may have something to do with the Disk Cache settings in Tips and Tweaks. I lowered vm.dirty_background_ratio and vm.dirty_ratio. I have a feeling Virtiofs is caching dirty pages to memory, and they build up and cause this issue. Does that make sense to anyone who knows more than me?
  4. Well, thanks so much for your help! When I get my RMA back from Corsair and have an extra 64 GB on hand, I'll see if it solves the issue. If not, putting those commands in a crontab to run once a day at around 3 am probably isn't a bad idea (a hedged cron sketch follows this list).
  5. You just saved my sanity; I thought something was up with my motherboard. So I guess I can't push my RAM to 84% usage without performance issues? While figuring this out I found that my RAM was bad, so I had to run to Micro Center and buy a kit, but I have an RMA in with Corsair right now for the original RAM. I may just put that in my server and call it a day now that I know what the problem is. Thanks so much for your help!!! 😊
  6. Holy crap! That worked. I ran those commands and those VMs boot right away! Is this something that can be done inside Unraid to prevent this, or should I just add these commands to a user script?
  7. Oh! I never thought of that! Is there a way I can test for that? A command I can run? If so, I'll increase the RAM in my server to mitigate it.
  8. Hey @SimonF, I found a pretty interesting bug. It's been driving me nuts for days: it only happens after 8-12 hours of server uptime, and it appears to be related to memory backing. My configuration is my main Windows 11 VM with Virtiofs enabled (several mounts) and 32 GB of RAM assigned, plus two test VMs without Virtiofs or memory backing, one on the Windows 11 Dev channel and the other on the Canary channel, each with 8 GB of RAM. All VMs have access to all 20 threads of my 12700K. I recently set up the Dev and Canary VMs, which is when I noticed the problem. It happens on both 6.12.10 and 6.13 beta 1, so it has nothing to do with the Unraid version, your php/bash script, or even virtiofsd. I even found that my memory was failing memtest, and after replacing it the issue still occurs. I believe it has to do with memory backing being enabled on a VM. What happens is that after the server has been up for 8-12 hours or so with my memory-backing-enabled VM booted, I can boot one VM with 8 GB of RAM, but the second 8 GB VM hangs on boot. At worst it has even crashed QEMU after the server was up for a while, and at one point all cores on my CPU were pegged at 100%. When I made an identical copy of my main Win 11 VM without Virtiofs or memory backing, the issue went away. Moreover, when it's happening, running pkill virtiofsd doesn't clear it up right away, which tells me it's a QEMU issue rather than a virtiofsd one. Interestingly, if I drop the memory on my main Win 11 VM (with memory backing/Virtiofs) to 16 GB, the issue also clears up. The whole time my server has at least 14 GB of RAM free, so it's not that I'm out of memory; I have 64 GB. If I drop the two 8 GB VMs to 4 GB each, they boot immediately. Nothing interesting is in the logs as far as I can tell. It's just a super weird edge-case bug, and I'm not sure how or where to open a report.
  9. Memtest revealed I had bad RAM. Seems like that was the issue. Edit: It wasn't! It's a memory fragmentation issue somehow caused by memory backing being enabled on one VM for VirtioFS.
  10. Could having XMP enabled be the issue here? I just disabled it; I won't know for 12 hours... but does this sound like an XMP issue to anyone?
  11. This ended up happening again. It seems to go away when I reboot my server, and then after 12+ hours of uptime I can't start a second or third VM. My primary VM has 32 GB of RAM allocated, my server has 64 GB, and the other two VMs have 8 GB each. It's very strange: everything is fine up until about 12 hours, and then the other two VMs won't start. Dialing the RAM on my primary VM back to 16 GB resolves everything. Does anyone know why this is?
  12. Hey @SimonF, I decided to upgrade to 6.13 beta 1 but noticed your php wrapper and bash script aren't there. Are they going to be in beta 2? Just curious, so I can comment those lines out when I upgrade. I modified my go file to not copy the Rust virtiofsd, but it looks like I still have to manually copy your bash script and php wrapper for now.
  13. Thanks to @smdion on Discord. It wasn't core isolation; it turns out I had way too many things pinned to cores 0/1, which was causing Unraid to lock up. Unpinning my VMs from cores 0 and 1 seems to have stopped everything from locking up.
  14. I disabled core isolation and this problem went away. Anyone know why this happens?
  15. I have a main Windows 11 VM that has a few isolated cores, but all cores in my system are pinned to it. I also have a secondary Windows 11 VM running the latest 24H2 build for testing; that one only has non-isolated cores pinned to it. The second VM will not boot if my primary VM is running. I'm wondering if two VMs can't share the same cores, or if there is a setting I have to change to allow this (see the virsh sketch after this list)? The second VM is only used briefly for testing, so I'm not worried about its performance. Thanks.
  16. OK. I mean, before I update to 6.13 I'm going to take everything out of my go file regardless, so it should all just work, right?
  17. I shared a CrystalDiskMark result for my NVMe; scroll up one post. It's not comparable to bare metal or NVMe passthrough, but it's fast enough for most use cases and will certainly max out an Unraid array. Virtiofs (at least on Windows) is not optimized for performance; this is per the virtio-win devs.
  18. I'm seeing over 200 MB/s reading from my array and, depending on the transfer, 600-800 MB/s NVMe to NVMe. SMB can be faster with NVMe, but having no network overhead is nice. So performance isn't great for NVMe; this is CrystalDiskMark on my appdata share, which is on a Samsung 980 Pro. It's clearly not optimized for speed, but it can certainly saturate an Unraid array.
  19. Idk, this sounds like an issue with your specific hardware, not with Virtiofs or Unraid. Edit: @johnsanc As a side note, both times I built a PC with an ASRock board it had weird UEFI issues and trouble booting. My old desktop had an ASRock board with UEFI issues; at one point I had to swap the BIOS chip because the ME region got corrupted. I also built a machine for a family member with an ASRock board that at times would simply refuse to boot. After those experiences I now stick with Gigabyte and Asus. ASRock isn't great in my experience, so I'm not surprised you flashed your BIOS and can't access it now. You may need to go on eBay and buy a new BIOS chip; they are very easy to change on most boards. I also wonder: are you using an HBA or onboard SATA for your drives? If onboard SATA, I'd wonder whether something is up with your motherboard.
  20. @SimonF Does this look right to you?
  21. Tested and working on 6.12. If this helps anyone, I added the following to my go file (/boot/config/go). The Rust virtiofsd and virtiofsd.php are in /boot/virtiofsd, and Simon's bash file is in /boot/virtiofsd/bash:

      # copy virtiofsd bash wrapper
      mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
      cp /boot/virtiofsd/bash/virtiofsd /usr/libexec/virtiofsd
      chmod +x /usr/libexec/virtiofsd
      # copy php wrapper
      cp /boot/virtiofsd/virtiofsd.php /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php
      chmod +x /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php
      # copy rust virtiofsd
      cp /boot/virtiofsd/virtiofsd /usr/bin/virtiofsd
      chmod +x /usr/bin/virtiofsd

      I have a parity check running; once it finishes in a few hours I'm going to reboot my server. If I made a typo I'll update this post. Everything was good after a reboot.
  22. I'll test this out. Do I have to make any changes to my XML, or do I just ensure I have the Rust binary and your bash script in /usr/libexec? Edit: or do I have to be on 6.13 to test this?
  23. Do you have any cores isolated from the host and pinned to your VM? I found core isolation makes a huge performance difference for me, even if it's only a couple of cores and I still let the VM access the rest that aren't isolated. Also, just to throw out the obvious: have you checked for any BIOS updates for your motherboard?
  24. Just wondering if the updates to the kernel/QEMU in 6.13 are going to make this possible on newer Intel chips for things such as running WSL in Windows VMs. I know it's been an issue for a while and I'm curious if 6.13 will have fixes for this. Thanks!
  25. I have seen an nvme to nvme transfer go much faster over SMB, but Virtiofs has so many advantages that I don't really care.
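
Sysctl sketch (referenced from post 1): a minimal way to apply the writeback thresholds discussed above. The values 2 and 3 come from the Tips and Tweaks help text quoted in post 2; applying them directly with sysctl is an assumption here rather than the plugin's own mechanism, and whether they suit a given system depends on RAM size and workload.

    # Lower the dirty-page writeback thresholds discussed in post 1.
    sysctl -w vm.dirty_background_ratio=2
    sysctl -w vm.dirty_ratio=3
    # One way (an assumption, not an official Unraid setting) to make this
    # persist across reboots is appending the same two lines to /boot/config/go.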
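Buddyinfo example (referenced from post 2): what the fragmentation check looks like in practice. The numbers below are illustrative only, not taken from any system in this thread.

    cat /proc/buddyinfo
    # Illustrative output for one node:
    # Node 0, zone   Normal   4312   2101    876    310     88     12      3      1      0      0      0
    # The eleven columns are counts of free blocks of order 0 through 10,
    # i.e. 4 KB up to 4 MB with 4 KB pages. Many small blocks on the left with
    # near-zero counts on the right is the fragmentation pattern described above.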
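Cron sketch (referenced from post 4): the exact commands mentioned in posts 4 and 6 are not quoted in this thread, so the entry below only assumes they were the usual cache-drop and memory-compaction writes; substitute whatever actually resolved the issue. Scheduling the same thing daily through the User Scripts plugin would be equivalent.

    # Root crontab line to run nightly at 3 am. The commands are an assumption,
    # not a quote from the thread.
    0 3 * * * sync && echo 1 > /proc/sys/vm/compact_memory && echo 1 > /proc/sys/vm/drop_caches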
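Virsh sketch (referenced from post 15): a quick way to see whether two VMs are pinned to overlapping host cores. The VM names are placeholders; libvirt does allow multiple VMs to share pinned cores, so overlap alone should not stop the second VM from booting.

    # Show the vCPU-to-host-CPU pinning for each VM (names are examples).
    virsh vcpupin "Windows 11"
    virsh vcpupin "Windows 11 Test"
    # Compare the CPU affinity columns: overlapping sets mean the VMs share cores.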