Everything posted by mackid1993

  1. Just another update: it seems 16GB of RAM is a must for Backblaze, at least with the amount of data I have. I've now decided to try ballooning the RAM, so I set the min to 4096 MB and the max to 16384 MB. Now to wait 24 hours again and see if it locks up on me. So far Task Manager is confused, but things are stable and there is much more memory left for unRAID itself.
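     For reference, in the libvirt domain XML the min/max values set in the Unraid VM manager correspond roughly to <currentMemory> and <memory>, and ballooning needs a virtio memballoon device. A minimal sketch of the relevant elements, using the 4096/16384 MB values from above (everything else in the domain XML is omitted):

     ```xml
     <!-- Sketch of the elements involved in memory ballooning.
          16384 MB max and 4096 MB min taken from the post, converted to KiB. -->
     <memory unit='KiB'>16777216</memory>
     <currentMemory unit='KiB'>4194304</currentMemory>
     ...
     <memballoon model='virtio'/>
     ```

     Note that with `<memballoon model='none'/>` (as in the XML posted further down) the guest cannot balloon at all.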
  2. VirtioFS with memory backing seems much more stable in 6.12-rc2. I have almost a day of uptime and I'm backing up to Backblaze in my Windows VM with no issues. I found this article (https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/VirtIO-FS:-Shared-file-system#multiple-virtio-fs-instances) helpful for mounting multiple VirtioFS shares with different drive letters. I have Task Scheduler run a batch script with elevated permissions at boot, and each individual share is set as a Mount Point in the VM manager. First, stop and disable the VirtioFS service:

     sc stop VirtioFsSvc
     sc config VirtioFsSvc start=demand

     Then run the following command, substituting the location of your virtiofs.exe; this makes the necessary changes to the registry:

     "C:\Program Files (x86)\WinFsp\bin\fsreg.bat" virtiofs "<path to the binary>\virtiofs.exe" "-t %1 -m %2"

     Then you can mount your different mount points, as set in the VM manager, with this command:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsY mount_tag0 Y:

     My completed batch file that I run as admin with Task Scheduler looks like this:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsJ Archives J:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsl Downloads l:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsM Music m:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsS Software s:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsT TV T:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsU Movies U:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsV Backup V:
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" start virtiofs viofsY CommunityApplicationsAppdataBackup Y:

     To unmount a drive, run this command:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsY
     My unmount script looks like this:

     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsJ
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsl
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsM
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsS
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsT
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsU
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsV
     "C:\Program Files (x86)\WinFsp\bin\launchctl-x64.exe" stop virtiofs viofsY

     So far this works great with Backblaze Personal Backup under Windows. I have 22 hours of uptime backing up with no lockups yet. I found it helpful to limit the number of threads Backblaze uses to 8, which is the recommended value, and to give my VM 16 GB of RAM.
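     Registering the mount script so Task Scheduler runs it elevated at logon can also be done from the command line with schtasks. This is only a sketch: the script path and task name below are assumptions, not from the original setup — adjust them to wherever you saved your batch file.

     ```batch
     :: Sketch: create a scheduled task that runs the mount script with
     :: highest privileges at logon. Path and task name are placeholders.
     schtasks /create /tn "MountVirtioFS" /tr "C:\Scripts\mount-virtiofs.bat" /sc onlogon /rl highest /f
     ```

     The /rl highest switch is what gives the script the elevated permissions the launchctl-x64.exe calls need.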
  3. I'm running Windows 11. WinFsp is 2022.2 which was the latest version I could find. With the 229-1 drivers I managed over a day of uptime with no crashing so far. Before it would have completely locked up after about 10-12 hours.
  4. So far my Win 11 VM has been up 7 hours and 30 minutes without crashing yet with the newest 229-1 drivers. Hopefully we finally have stability now.
  5. I was getting it on Win 11 Pro. I finally gave up and removed the VirtIOFS stuff from my XML since it was unusable.
  6. Has anyone found a solution to the VMs freezing? Mine locks up after about 12 hours of use.
  7. @Djoss It looks like there is a new version of Cloudberry with some new features. Would you be able to publish an update to the container? Thanks!!
  8. I ended up having to restore from a flash backup. Something got corrupted. When I plugged a monitor into my server I got to the Unraid boot screen, but none of the options worked except for Memtest. When I tried to select one, the 5-second countdown just started over again and the OS wouldn't load. I was able to grab a backup thanks to the My Servers plugin, which got me back up and running painlessly, and I was also able to update to RC6. Thanks!!
  9. Thank you for this post. I ran the update at work over the VPN. Once I get home I'll try disconnecting and reconnecting the flash drive! Hopefully this is the issue!! I'll report back later on.
  10. I just upgraded from RC5 and my server didn't come back up either. I'll have to investigate further in a little bit when I get home, I hope my flash wasn't corrupted!
  11. Thank you for your help. I decided to give up on this, I think it's probably something with the card being so old.
  12. So I downgraded back to stable 6.9.2 and recreated the VM from scratch. Despite this I'm still getting code 43 in Windows. Below is my current XML and I am attaching the diagnostics as suggested. Thank you!

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='3'>
       <name>Windows 11-3</name>
       <uuid>02d88f87-ad52-0031-ad90-036af4d858db</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>7</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='4'/>
         <vcpupin vcpu='2' cpuset='1'/>
         <vcpupin vcpu='3' cpuset='5'/>
         <vcpupin vcpu='4' cpuset='6'/>
         <vcpupin vcpu='5' cpuset='3'/>
         <vcpupin vcpu='6' cpuset='7'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/02d88f87-ad52-0031-ad90-036af4d858db_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='7' threads='1'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/disks/vm/Windows 11-3/vdisk1.img' index='3'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Win11_English_x64v1.iso' index='2'/>
           <backingStore/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <alias name='ide0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.215-2.iso' index='1'/>
           <backingStore/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <alias name='ide0-0-1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:22:d9:4f'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/2'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/2'>
           <source path='/dev/pts/2'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Windows 11-3/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/isos/vbios/gtx770.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>

     bertha-diagnostics-20220331-1319.zip
  13. Thank you, I will post a syslog when I have access to my server again later on today.
  14. I'm having some issues passing my older GTX 770 GPU through to a Windows 11 VM on Unraid 6.10.0-RC4. I've watched the Space Invader One video many times and have done everything mentioned. I've tried multiple edited vBIOSes from TechPowerUp. I also modified the XML as Space Invader One describes, and I bound the GPU's IOMMU groups in Tools -> System Devices. I am at a loss as to why I still get a Code 43 error in Windows. I'm embedding my XML below in the hope that someone can help me. Thank you!!

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='4'>
       <name>New Windows 11</name>
       <uuid>9925d82c-19d8-f222-d50d-993f9ac1819f</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>7</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='4'/>
         <vcpupin vcpu='2' cpuset='1'/>
         <vcpupin vcpu='3' cpuset='5'/>
         <vcpupin vcpu='4' cpuset='6'/>
         <vcpupin vcpu='5' cpuset='3'/>
         <vcpupin vcpu='6' cpuset='7'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/9925d82c-19d8-f222-d50d-993f9ac1819f_VARS-pure-efi-tpm.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='7' threads='1'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/disks/vm/New Windows 11/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:82:f9:37'/>
           <source bridge='br0'/>
           <target dev='vnet3'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/2'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/2'>
           <source path='/dev/pts/2'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-New Windows 11/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <tpm model='tpm-tis'>
           <backend type='emulator' version='2.0' persistent_state='yes'/>
           <alias name='tpm0'/>
         </tpm>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/isos/vbios.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
  15. I noticed that when I upgraded from 6.9.2 to 6.10-RC4, many of my Docker containers were throwing various errors. I had to remove and reinstall a few of them to correct the problem. I had issues with Cloudberry Backup, Sonarr (linuxserver), Radarr (linuxserver), Krusader (ich777) and Plex (linuxserver). In addition, for Plex I had to correct the permissions on the appdata folder, as it could not read the database.
  16. I had recycle bin installed, but decided I wanted to stop using it. I removed the plugin but noticed that when I delete something over SMB it still seems to go to the recycle bin despite the plugin being removed. Is there any cleanup I need to do after removing the plugin?
  17. It is, thank you so much for fixing that!
  18. I'm having this issue as well. I was able to downgrade the docker to the previous release and it doesn't happen there. Is it possible to fix this with an update? Thanks!!
  19. I was previously having the autostart issue; the following workaround has temporarily resolved it for me:
     1. SSH into unRAID as root.
     2. Determine the container ID, which is returned as a string:
        docker ps -aqf "name=CrashPlan"
     3. Run the following with that ID:
        docker update --restart=always containerID
     Hopefully this helps someone else who is having this issue. Kudos to the developer of this great docker; hopefully we can find a more permanent solution to this annoying but minor bug.
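     The two lookup-and-update steps above can be combined into one line with command substitution. A sketch, assuming the container is named CrashPlan and docker is on the PATH:

     ```shell
     # Look up the CrashPlan container ID and set its restart policy
     # to "always" in a single command (run as root on the unRAID host).
     docker update --restart=always "$(docker ps -aqf name=CrashPlan)"
     ```

     The --restart=always policy makes the Docker daemon restart the container on exit and start it when the daemon itself starts, which is what works around the autostart bug.
     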