Everything posted by mackid1993

  1. Thanks to @smdion on Discord. It wasn't core isolation; it turns out I had way too many things pinned to cores 0/1, which was causing Unraid to lock up. Unpinning my VMs from cores 0 and 1 seems to have stopped everything from locking up.
  2. I disabled core isolation and this problem went away. Anyone know why this happens?
  3. I have a main Windows 11 VM that has all of the cores in my system pinned to it, a few of which are isolated from the host. I also have a secondary Windows 11 VM running the latest 24H2 build for testing; it only has non-isolated cores pinned to it. That VM will not boot if my primary VM is running. I'm wondering if two VMs can't share the same cores, or if there is a setting I have to change to allow this? The second VM is only used briefly for testing, so I'm not worried about its performance. Thanks.
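     In general libvirt does allow two guests to pin to the same host cores; overlapping pins just mean the guests compete for those cores. A minimal sketch for checking and adjusting pinning from the Unraid console (the VM names and core numbers here are hypothetical):

         # List the current vCPU-to-host-core pinning for each guest
         virsh vcpupin "Win11-Main"
         virsh vcpupin "Win11-Test"

         # Repin vCPU 0 of the test VM to host cores 4-5 on the running guest only
         virsh vcpupin "Win11-Test" 0 4-5 --live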
  4. OK. I mean, before I update to 6.13 I'm going to take everything out of my go file regardless, so it should all more or less just work, right?
  5. I shared a CrystalDiskMark run of my NVMe; scroll up one post. It's not comparable to bare metal or NVMe passthrough, but it's fast enough for most use cases and will certainly max out an Unraid array. Virtiofs (at least on Windows) is not optimized for performance; this is per the virtio-win devs.
  6. I'm seeing over 200 MB/s reading from my array and, depending on the transfer, 600-800 MB/s NVMe to NVMe. SMB can be faster with NVMe, but having no network overhead is nice. So performance isn't great for NVMe; this is CrystalDiskMark on my appdata share, which is a Samsung 980 Pro. It's clearly not optimized for speed, but it can certainly saturate an Unraid array.
  7. Idk, this sounds like an issue with your specific hardware, not with Virtiofs or Unraid. Edit: @johnsanc As a side note, both times I built a PC with an ASRock board it had weird UEFI issues and trouble booting. My old desktop had an ASRock board with UEFI issues; at one point I had to swap the BIOS chip because the ME region got corrupted. I also built a machine for a family member with an ASRock board that at times would just refuse to boot. After those experiences I now stick with Gigabyte and Asus. ASRock isn't great based on my experience, so I'm not surprised you flashed your BIOS and can't access it now. You may need to go on eBay and buy a new BIOS chip; they are very easy to change on most boards. I also wonder, are you using an HBA or onboard SATA for your drives? If onboard SATA, I'd wonder if something is up with your motherboard.
  8. @SimonF Does this look right to you?
  9. Tested and working on 6.12. If this helps anyone, I added the following to my go file (/boot/config/go). The Rust virtiofsd binary and virtiofsd.php are in /boot/virtiofsd, and Simon's bash wrapper is in /boot/virtiofsd/bash:

         # copy virtiofsd bash wrapper
         mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.old
         cp /boot/virtiofsd/bash/virtiofsd /usr/libexec/virtiofsd
         chmod +x /usr/libexec/virtiofsd

         # copy php wrapper
         cp /boot/virtiofsd/virtiofsd.php /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php
         chmod +x /usr/local/emhttp/plugins/dynamix.vm.manager/scripts/virtiofsd.php

         # copy rust virtiofsd
         cp /boot/virtiofsd/virtiofsd /usr/bin/virtiofsd
         chmod +x /usr/bin/virtiofsd

     I have a parity check running; once it finishes in a few hours I'm going to reboot my server. If I made a typo I'll update this post. Everything was good after a reboot.
  10. I'll test this out. Do I have to make any changes in my XML, or do I just ensure I have the Rust binary and your bash script in /usr/libexec? Edit: Or do I have to be on 6.13 to test this?
  11. Do you have any cores isolated from the host and pinned to your VM? I found core isolation makes a huge performance difference for me, even when it's only a couple of cores and the VM still has access to the rest that aren't isolated. Also, just to throw out the obvious: have you checked for any BIOS updates for your motherboard?
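      A quick sketch for confirming on the host what is actually isolated (these are standard Linux sysfs/procfs paths, nothing Unraid-specific assumed):

          # CPUs isolated from the host scheduler (empty output means none are isolated)
          cat /sys/devices/system/cpu/isolated

          # The kernel command line should contain the matching isolcpus= entry
          cat /proc/cmdline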
  12. Just wondering if the updates to the kernel/QEMU in 6.13 are going to make this possible on newer Intel chips for things such as running WSL in Windows VMs. I know it's been an issue for a while and I'm curious if 6.13 will have fixes for this. Thanks!
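      Since WSL2 inside a guest needs nested virtualization, a quick host-side check (a sketch assuming an Intel CPU; substitute kvm_amd on AMD) is whether the KVM module has nesting enabled:

          # "Y" (or "1" on older kernels) means nested virtualization is enabled on the host
          cat /sys/module/kvm_intel/parameters/nested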
  13. I have seen an NVMe-to-NVMe transfer go much faster over SMB, but Virtiofs has so many advantages that I don't really care.
  14. Ok that makes sense. I appreciate your help. Sorry for bothering you. 😊
  15. @dlandon This file has the shutdown sequence. It took about 4 minutes to shut down. syslog-previous
  16. I rebooted my server to repro the issue. My unclean shutdown timeout is set to 420 seconds for issues just like this, thankfully! Attached is my diagnostics zip. Thanks!! I guess UD doesn't expect the SMB share to be coming from within the house lol! apollo-diagnostics-20240327-1208.zip
  17. Unfortunately umount -f or umount -l at array stop doesn't seem to work as a workaround. It seems to be a bug with UD. Fortunately my timeout is so high that it eventually unmounts. It just takes 3 minutes or so.
  18. Thanks so much! For now I made a user script that runs on array stop: umount -f /path/to/mount.
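      A minimal sketch of that kind of script, scheduled with the User Scripts plugin to run when the array stops (the mount point below is hypothetical; adjust it to the share's actual UD mount path):

          #!/bin/bash
          # Force-unmount the VM-hosted SMB share before UD stalls waiting for the (already stopped) VM
          MOUNTPOINT="/mnt/remotes/10.0.0.2_share"   # hypothetical UD remote mount point
          if mountpoint -q "$MOUNTPOINT"; then
              umount -f "$MOUNTPOINT" || umount -l "$MOUNTPOINT"
          fi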
  19. That is a Windows VM hosted on my Unraid server that powers down when the system shuts down but before the SMB share is unmounted.
  20. I'll update this post in a moment. Looking at the syslog, it seems to hang on:

          Mar 27 11:21:12 Apollo unassigned.devices: Unmounting All Devices...
          Mar 27 11:21:34 Apollo unassigned.devices: Remote server '10.0.0.2' port '445' is not open; server apears to be offline.

      then after a while it says:

          Mar 27 11:23:02 Apollo kernel: CIFS: VFS: \\10.0.0.2 has not responded in 180 seconds. Reconnecting...

      Edit: It seems to force unmount and then shut down gracefully; it just delays shutdown. I set my unclean shutdown timeout really high to avoid dirty shutdowns. Maybe I'll just add a user script to umount -f on array stop.
  21. Hey @dlandon, my SMB shares from my VM were working great, but the problem I'm running into now is that when I shut my server down, the VM shuts down but the SMB shares don't seem to want to automatically unmount. Is there anything I can do, possibly a user script, to unmount these?
  22. @dlandon Setting a 180 second delay worked great. I was also able to use the device script to easily start up the related docker container on mount so it sees the SMB share properly. Thanks for your help!
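      For reference, a sketch of what such a UD device script can look like (the container name is hypothetical, and this assumes the script receives the event in UD's ACTION variable as its default script template does):

          #!/bin/bash
          case "$ACTION" in
            'ADD')
              # Share was just mounted: start the container that depends on it
              docker start my-container
              ;;
            'UNMOUNT')
              # Share is about to be unmounted: stop the container first
              docker stop my-container
              ;;
          esac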
  23. Yeah, it seems that it doesn't automount when the array starts because the VM hasn't fully booted yet. Is there a workaround to delay the automount, or to automount via a script, e.g. using User Scripts?
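      A sketch of the delayed-automount idea as a User Scripts job run at array start (the VM IP matches the earlier posts, but the share name is hypothetical, and the rc.unassigned mount call should be verified against the UD documentation):

          #!/bin/bash
          # Wait up to ~5 minutes for the VM's SMB port to open, then ask UD to mount the remote share
          for i in $(seq 1 60); do
              if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/445' 2>/dev/null; then
                  /usr/local/sbin/rc.unassigned mount '//10.0.0.2/share'
                  exit 0
              fi
              sleep 5
          done
          echo "SMB on 10.0.0.2 never came up; skipping mount" >&2
          exit 1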