sublimejackman

Members
  • Posts: 13
  • Joined
  • Last visited

  1. This worked great for me. One caveat: I did have to boot the freshly cloned drive on a bare-metal PC one time before it would boot in Unraid as a VM. So if you are getting an 'INACCESSIBLE BOOT DEVICE' blue screen, try booting the guest VM on bare metal once first.
  2. I had a similar issue after updating to 6.12.3 (linked below). For me, it was that I was passing through the GPU but not its audio portion. It wasn't an issue until I updated to 6.12.3; I guess there were some changes in this release. A sketch of what passing both functions looks like is at the end of this list. Hope it helps!
  3. I will remove that entry; next time I take the array down and start it, I shall test. Thanks!! Sorry, I should have added these to the things I tried:
     - Uninstalling and reinstalling all NVIDIA drivers
     - Tried a clean install of Windows Server 2022; same error in Windows
     - No errors when booting from an image of the VM on bare metal (restored to a different PC); the GPU works fine
     - A different PCI-e card works in the slot the GPU is in now
     - PSU confirmed to be delivering power
  4. Hey all. Hopefully someone can help, as this leaves me sort of dead in the water. After updating to 6.12.3, I can no longer pass through my GPU to my Win 10 VM, and autostarting any VMs gives me an error. Unfortunately, I didn't notice these issues/pay much attention to them until after I had started a lengthy disk swap that involves replacing 4 disks (one at a time), so I am unable to roll back to 6.10.x. Attached are the diagnostics. The two issues are:
     - Windows 10 no longer recognizes my GPU (GTX 1060); it shows in an error state in Windows Device Manager (screenshot attached)
     - Every time I stop and start the array, none of my VMs autostart and I get a virtio error
     Neither of these were issues in 6.10.x; everything worked fine one minute, then immediately after the update these issues started. Things I've tried:
     - Deleting the VM and remaking it
     - Using VNC as the primary display and making the GPU secondary
     - Trying to pass the GPU to my other Windows Server VM
     - GPU tested in a separate PC and works fine
     - Uninstalling and reinstalling all NVIDIA drivers
     - Tried a clean install of Windows Server 2022; same error in Windows
     - No errors when booting from an image of the VM on bare metal (restored to a different PC); the GPU works fine
     - A different PCI-e card works in the slot the GPU is in now
     - PSU confirmed to be delivering power
     Any help would be much appreciated! Thanks!! tower-diagnostics-20230812-1154.zip
  5. Were the service folks able to figure it out? I'm dead in the water here too...
  6. Completely dead in the water here. I didn't notice that the GPU wasn't passing through until after I started adding a new drive, so I'm unable to roll back. My only use case for Unraid requires GPU support within VMs; my server is effectively useless.
  7. Same issue here. After updating, I got a PCI error when booting my Windows VM. I'm no longer able to pass through my GPU to any VMs.
  8. SAME! Completely broken; I can no longer use my GPU in VMs. This completely breaks my use case for Unraid.
  9. Thanks. I just ended up running chmod on all the lower directories in the share. It seems to have only impacted about 1/3 of the subdirectories, not all of them. Looking at the log, the permissions were changed when I updated to 6.10.1; I just didn't notice until we did our monthly offsite, at which point we had updated to 6.10.2. So it's definitely an issue with 6.10.1 that lingers into 6.10.2. No tool in the web GUI would change the subdirectories' permissions in the share.
  10. I'm at a work stoppage here with the same issue in 6.10.2. I have a share for security camera backups and I cannot access it over SMB in any environment. I get permission denied for any user, including read-only. We can't go much longer without access to our backups.
  11. Thanks!! Yeah, that's what I thought. Maybe a tape drive is in my future. I'm using an Intel Z390 (being an old man, it was hard to move to a Threadripper; in retrospect...) and one of my gigabit bonds is over Thunderbolt 3 (due to my lack of faith in Intel's built-in RJ45), plus two video cards, two SATA PCI-e cards, and the front USB 3.0 for the Unraid thumb drive.
  12. I keep offline backups of my server on a bunch of USB 3.0 HDDs. Basically, I 3-2-1 the HDDs and rotate them to an offsite location. The problem comes up when the Windows 10 VM is already running and I want to use one of those drives in it. Without taking the VM down, I go into "edit" and then check the new disk in the list of devices at the bottom. Every time I do this I get an error message; however, the drive is available in Windows 10. The big problem is that if I take the VM offline and then back online for any reason while that drive is no longer plugged in, I cannot start the VM. Since each drive is offsite 1/3 of the time, if this happens with a drive that I don't have on hand, I have to delete that VM and create a new one. This has happened so many times that I just keep a copy of the working XML and make a new one. It's more of a nuisance, but I was wondering if anyone has any insight into why this is happening? Thanks!!! (See the note right after the XML below.)

```xml
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='4'>
  <name>Windows 10</name>
  <uuid>redacted</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>24641536</memory>
  <currentMemory unit='KiB'>24641536</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>14</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='9'/>
    <vcpupin vcpu='3' cpuset='2'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='11'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='12'/>
    <vcpupin vcpu='8' cpuset='5'/>
    <vcpupin vcpu='9' cpuset='13'/>
    <vcpupin vcpu='10' cpuset='6'/>
    <vcpupin vcpu='11' cpuset='14'/>
    <vcpupin vcpu='12' cpuset='7'/>
    <vcpupin vcpu='13' cpuset='15'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/093dfbb6-48ed-1881-69e7-706db34f9cf2_VARS-pure-efi.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='7' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='redacted'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0c45'/>
        <product id='0x7403'/>
        <address bus='1' device='6'/>
      </source>
      <alias name='hostdev4'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1038'/>
        <product id='0x1824'/>
        <address bus='1' device='5'/>
      </source>
      <alias name='hostdev5'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source startupPolicy='optional'>
        <vendor id='0x1058'/>
        <product id='0x25a3'/>
        <address bus='2' device='15'/>
      </source>
      <alias name='hostdev6'/>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
```
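A note on the rotating-drive problem in post 12: the relevant detail is already visible on the last USB hostdev in the XML above. libvirt's startupPolicy='optional' attribute on the <source> element lets the VM start even when that USB device is absent, whereas a USB hostdev without it can block startup when the device is missing. A minimal sketch of the pattern, reusing the vendor/product IDs of the external drive from the XML above:

```xml
<!-- With startupPolicy='optional', the VM can still start while this
     drive is offsite; without the attribute, a missing device can keep
     the VM from starting at all. IDs copied from the XML above. -->
<hostdev mode='subsystem' type='usb' managed='no'>
  <source startupPolicy='optional'>
    <vendor id='0x1058'/>
    <product id='0x25a3'/>
  </source>
  <address type='usb' bus='0' port='4'/>
</hostdev>
```

Applying the same attribute to the other removable-drive hostdev entries should let the VM start regardless of which backup disk happens to be on hand.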
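And on the GPU audio point from post 2: consumer GPUs like the GTX 1060 expose their HDMI audio as a second PCI function, and both functions generally need to be passed through together. A minimal sketch, assuming the GPU is the 01:00.0 device already present in the XML above and that its audio function sits at 01:00.1 (the .1 address is an assumption; check System Devices or lspci for the real one):

```xml
<!-- Sketch: pass through both PCI functions of the GPU. Guest slot 0x04
     mirrors the GPU's guest slot in the XML above; pick a slot not used
     by any other device in your own config. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- GPU video function on the host -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- multifunction='on' so the guest sees one card with two functions -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- GPU HDMI audio function on the host (assumed to be function 0x1) -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <!-- Same guest slot, function 0x1 -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
</hostdev>
```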