johnsanc

Members
  • Posts: 302
Everything posted by johnsanc

  1. Apologies. Updated diagnostics attached. tower-diagnostics-20231102-1947.zip
  2. I was going to post a diagnostics zip to another forum post, but it was taking forever to build the zip. I noticed the web UI was printing error lines from Dynamix File Integrity (export file not found errors). My syslogs are about 50 MB in size because of this. After I noticed this I changed my settings to not print file integrity errors to the syslog. It's now been over an hour and I can see the zip is still being created. How can I force this process to stop?
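     For anyone hitting the same thing, here is a rough sketch of how I'd try to stop it from a terminal. This is only a guess: I'm assuming the collector shows up as a process with "diagnostics" in its command line, so check what ps actually reports before killing anything.

       # look for the diagnostics collector among running processes
       ps aux | grep -i '[d]iagnostics'
       # if a matching PID shows up, ask it to stop (only fall back to kill -9 if it ignores this)
       kill <PID>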
  3. Attached is an older diagnostics zip from a couple weeks ago. I tried downloading a new one, but it was taking forever due to literally hundreds of thousands of error lines from Dynamix File Integrity not finding export files. Also, this other post of mine may be related to the VM issue: tower-diagnostics-20231016-1916.zip
  4. Any pointers on how to properly isolate it? I have it bound here, and vfio-pci.cfg contains:

       BIND=0000:2f:00.1|1022:149c 0000:2f:00.3|1022:149c 0000:33:00.0|10de:1e84 0000:33:00.1|10de:10f8 0000:33:00.2|10de:1ad8 0000:33:00.3|10de:1ad9

     Long ago I had something in my go file... do I need to do something there?
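     For context, the sort of thing I had in my go file years ago looked roughly like this (reconstructed from memory, so treat it only as a sketch; the device IDs are just the same ones from my vfio-pci.cfg above):

       # legacy-style binding from /boot/config/go, before vfio-pci.cfg was a thing
       /sbin/modprobe vfio-pci ids=10de:1e84,10de:10f8,10de:1ad8,10de:1ad9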
  5. I am cleaning up my array and switched to BLAKE3. I want to ensure ALL files have a BLAKE3 hash and ideally wipe out all the older hashes since I have no use for them. What is the best way to do this? I noticed that the Remove action doesn't actually remove old hashes, it only seems to remove the timestamps + whatever hash is currently selected. Is this a bug or by design? Based on my current observations it seems the only way to start from a clean slate is to do a Clear and Remove on every disk for every hashing method.
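     If the plugin can't do it, I assume a manual fallback would be stripping the old hashes directly as extended attributes. Sketch only: I'm assuming the plugin stores its hashes as user.* xattrs, the attribute name below is a guess (check a real file with getfattr first), and the path is just an example.

       # see which attributes are actually present on one file
       getfattr -d "/mnt/disk1/some/file"
       # then strip an obsolete attribute (e.g. an old SHA-256 hash) across a whole disk
       find /mnt/disk1 -type f -exec setfattr -x user.sha256 {} + 2>/dev/null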
  6. I have an issue that has been present, I believe, with every 6.12.x release. My Windows VM will not start properly unless it's first shut down using a Force Stop command. Restarting reboots the VM, but it will not start properly. A graceful shutdown and then restarting boots the VM, but it does not start properly either. There is zero information in the logs. This VM does use one of my graphics cards that is passed through. The only thing I noticed is that when I plug in a monitor to see what's going on, the times it doesn't boot properly I see a big Unraid logo and then the screen goes black. When it does work properly after a Force Stop and restart, it goes straight to a Windows spinner icon and loads up just fine. Any idea what could be causing this? It's really annoying and I'm not comfortable always force stopping the VM. It makes Windows updates very nerve-wracking.
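     For now the only reliable sequence is a force stop followed by a fresh start. From a terminal that amounts to roughly this (a sketch, using the VM name as it appears in my logs, and assuming Force Stop in the webUI is equivalent to virsh destroy):

       # hard-stop the guest, then start it again
       virsh destroy "Windows 10"
       virsh start "Windows 10"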
  7. Anyone have any ideas on this? I also noticed that with 6.12.3 my Windows 10 VM starts, but there is no graphics output nor does remote desktop work. I have to force stop it and restart it in order for it to work properly. I never had these issues prior to 6.12.x. Was there some kind of change in the binding process that could be causing this?
  8. @SimonF - Not sure if this helps or not, but here is my windows 10 VM log: 2023-06-28 13:48:31.461+0000: Starting external device: TPM Emulator /usr/bin/swtpm socket --ctrl 'type=unixio,path=/run/libvirt/qemu/swtpm/1-Windows 10-swtpm.sock,mode=0600' --tpmstate dir=/var/lib/libvirt/swtpm/691a4041-8a00-7085-a6e4-0ba8106d2f44/tpm2,mode=0600 --log 'file=/var/log/swtpm/libvirt/qemu/Windows 10-swtpm.log' --terminate --tpm2 2023-06-28 13:48:31.503+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 6.1.34-Unraid, hostname: Tower LC_ALL=C \ PATH=/bin:/sbin:/usr/bin:/usr/sbin \ HOME='/var/lib/libvirt/qemu/domain-1-Windows 10' \ XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.local/share' \ XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.cache' \ XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-1-Windows 10/.config' \ /usr/local/sbin/qemu \ -name 'guest=Windows 10,debug-threads=on' \ -S \ -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Windows 10/master-key.aes"}' \ -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \ -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/691a4041-8a00-7085-a6e4-0ba8106d2f44_VARS-pure-efi-tpm.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \ -machine pc-q35-7.1,usb=off,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \ -accel kvm \ -cpu host,migratable=on,topoext=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \ -m 36864 \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":38654705664}' \ -overcommit mem-lock=off \ -smp 16,sockets=1,dies=1,cores=8,threads=2 \ -uuid 691a4041-8a00-7085-a6e4-0ba8106d2f44 \ -display none \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=36,server=on,wait=off \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=localtime \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device '{"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"}' \ -device '{"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"}' \ -device '{"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"}' \ -device '{"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"}' \ -device '{"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"}' \ -device '{"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"}' \ -device '{"driver":"pcie-root-port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"}' \ -device '{"driver":"pcie-root-port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"}' \ -device '{"driver":"pcie-root-port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x2"}' \ -device '{"driver":"pcie-root-port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"}' \ -device 
'{"driver":"pcie-pci-bridge","id":"pci.11","bus":"pci.1","addr":"0x0"}' \ -device '{"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pcie.0","addr":"0x7"}' \ -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.2","addr":"0x0"}' \ -blockdev '{"driver":"file","filename":"/mnt/user/domains/iso/Windows_10_Pro-20170304.iso","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw","file":"libvirt-3-storage"}' \ -device '{"driver":"ide-cd","bus":"ide.0","drive":"libvirt-3-format","id":"sata0-0-0","bootindex":1}' \ -blockdev '{"driver":"file","filename":"/mnt/user/domains/drivers/virtio-win-0.1.225.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \ -device '{"driver":"ide-cd","bus":"ide.1","drive":"libvirt-2-format","id":"sata0-0-1"}' \ -blockdev '{"driver":"file","filename":"/mnt/cache/domains/loaders/spaces_win_clover.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \ -device '{"driver":"ide-hd","bus":"ide.2","drive":"libvirt-1-format","id":"sata0-0-2","bootindex":2,"write-cache":"on"}' \ -netdev tap,fd=37,id=hostnet0 \ -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:1f:a0:a3","bus":"pci.3","addr":"0x0"}' \ -chardev pty,id=charserial0 \ -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \ -chardev socket,id=charchannel0,fd=35,server=on,wait=off \ -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \ -chardev 'socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/1-Windows 10-swtpm.sock' \ -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \ -device '{"driver":"tpm-tis","tpmdev":"tpm-tpm0","id":"tpm0"}' \ -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \ -audiodev '{"id":"audio1","driver":"none"}' \ -device '{"driver":"vfio-pci","host":"0000:33:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0"}' \ -device '{"driver":"vfio-pci","host":"0000:33:00.1","id":"hostdev1","bus":"pci.5","addr":"0x0"}' \ -device '{"driver":"vfio-pci","host":"0000:2f:00.1","id":"hostdev2","bus":"pci.6","addr":"0x0"}' \ -device '{"driver":"vfio-pci","host":"0000:32:00.0","id":"hostdev3","bus":"pci.7","addr":"0x0"}' \ -device '{"driver":"vfio-pci","host":"0000:33:00.2","id":"hostdev4","bus":"pci.8","addr":"0x0"}' \ -device '{"driver":"vfio-pci","host":"0000:33:00.3","id":"hostdev5","bus":"pci.9","addr":"0x0"}' \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on char device redirected to /dev/pts/0 (label charserial0)
  9. Is that not in the diagnostics I posted? Note I also tried not auto-starting any VMs and the issue still occurs during boot.
  10. @JorgeB - Yes the monitor is connected to the crappy GT710 that was not bound. When I removed the RTX2070 from being bound, the GT710 worked and I could see the terminal get to the login prompt. For some reason passing through one graphics card caused the other to stop working. This was not an issue with any prior unraid release.
  11. @JorgeB - Apologies, I misspoke. Yes, that is my RTX 2070 Super I have for my Windows VM. I do want to pass that through. I have another card, a crappy little GT710, just for console access, which is not passed through. Not sure why my display no longer works with this GT710 and stops at that line. It works fine on 6.11.5.
  12. I just noticed this in the syslog, which is the line directly after where my local display freezes. What is this? I am not passing through that graphics card or anything, and I have it specifically for local terminal access in case there is an issue with SSH.

       Jun 27 13:30:40 Tower kernel: Console: switching to colour dummy device 80x25
       Jun 27 13:30:40 Tower kernel: vfio-pci 0000:33:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
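     If it's useful for debugging, this is a quick way to double-check which kernel driver actually owns each card (just a sketch; the PCI address is the one from the syslog line above):

       # list VGA devices and the driver currently bound to each
       lspci -nnk | grep -i -A 3 vga
       # or check one device directly
       readlink /sys/bus/pci/devices/0000:33:00.0/driver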
  13. OK so I confirmed it's not a keyboard issue. The screen just freezes completely and I never get to the login screen. I can even sometimes see the cursor frozen in a "displayed" state depending on when the screen freezes. Also, I changed my docker settings from macvlan to ipvlan and my server still crashed. It was not even up for a single day this time. Can you see anything in my diagnostics that explains either of these issues? I'm hesitant to stay on 6.12.x since this does not seem stable compared to 6.11.5, which would be up for months at a time without any issues. tower-diagnostics-20230627-1333.zip
  14. Thanks, I'll try switching to ipvlan. Regarding the terminal, it just does nothing and is cut off as shown in the photo. I'll try a different keyboard just to make sure it's not an issue with the wireless dongle I normally use.
  15. I upgraded to v6.12 last week. Last night I had an issue where all of a sudden unRAID was completely inaccessible and neither the web GUI nor SSH worked. I checked the monitor that is locally connected and noticed that the terminal window had no command prompt and seemed to just hang during processing of USB devices. I hard rebooted the server since I had no other option and noticed the terminal hangs up in the same place again, but SSH and everything else works fine. What is causing this, and why did it not occur with previous versions? I've attached my diagnostics. Any thoughts are appreciated. tower-diagnostics-20230625-1205.zip
  16. I set up br1 with the same route as I had for br0. Using br1 then seemed to work... I'm definitely not an expert with the networking settings, but if anyone knows why this issue occurred in the first place I'd love to know.
  17. Yes that works but then I cannot access via RDP from my laptop, so changing to virbr0 is not a good solution. Is there a suggested way to do this? I don't understand why this issue only occurs with 6.12 and downgrading reverts the behavior to work correctly for my current configuration.
  18. I was running 6.11.5 and upgraded to 6.12. The upgrade went smoothly, but none of my VMs could access the internet. I could view the VMs with VNC and access my dockers that have a web GUI from within those VMs, but the VMs themselves could not reach the internet or the default gateway. My VMs use br0. I tried downgrading to 6.11.5 and the issue was resolved. I've attached diagnostics from 6.12 before downgrading: tower-diagnostics-20230617-1431.zip The only thing that stood out to me was this in the libvirt log:

       2023-06-17 18:30:05.122+0000: 20194: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
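     In case anyone wants to compare on their own setup, this is a rough sketch of the checks I'd run from the host and from inside a VM to separate a bridging/routing problem from a guest problem (the IPs are just examples, substitute your own gateway):

       # on the Unraid host: confirm br0 has an address and the default route goes through it
       ip addr show br0
       ip route
       # from inside a VM: test the gateway first, then something outside
       ping -c 3 192.168.1.1
       ping -c 3 8.8.8.8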
  19. FYI: GordonJ just released 3.6.0, which should fix several Linux-specific issues, especially with DatVault.
  20. Nope, it looks like an unresolved issue with qemu and memoryBacking. All we can do is wait, I guess. I should add that the issues above also occurred when I was trying to use VirtioFS.
  21. The issue is precisely this: https://gitlab.com/qemu-project/qemu/-/issues/1270 And it has not been corrected. Until there is an update to that issue, I'd say this entire feature should be avoided completely for most people.
  22. I have mentioned this several times before, but the lockups most people are having that peg all CPU cores are directly related to the presence of the memory backing config:

       <memoryBacking>
         <source type='memfd'/>
         <access mode='shared'/>
       </memoryBacking>

     Please try to reproduce with JUST this config in place and nothing related to virtioFS.
  23. I mentioned this before, but for me it locks up just with the memory backing config and nothing else related to virtioFS. I don’t think the lockups are related to virtioFS at all. Focus should be directed elsewhere.
  24. Has there been any progress on resolving the lockup issues due to memory backing? This feature is great but completely unreliable for many until that is resolved.
  25. Wow, it's fantastic to see someone made a Docker container for RomVault. You may want to join the RomVault Discord as well for issues with Linux or any other general questions about app usage. Although RV was not specifically created for Linux, many users do use it in various Linux VMs.