ug2215

Everything posted by ug2215

  1. I am having this same issue after making some hardware changes. Running 6.5.0. The libvirt log is:

       2018-03-18 18:10:55.949+0000: 4078: info : libvirt version: 4.0.0
       2018-03-18 18:10:55.949+0000: 4078: info : hostname: Chimera
       2018-03-18 18:10:55.949+0000: 4078: error : virNetSocketNewListenTCP:343 : Unable to resolve address '127.0.0.1' service '16509': Address family for hostname not supported

     chimera-diagnostics-20180318-1321.zip
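     In case it helps anyone hitting the same error: "Address family for hostname not supported" from virNetSocketNewListenTCP generally means the address lookup for 127.0.0.1 failed, which usually points at the loopback interface or local name resolution rather than libvirt itself. A quick sanity check might look like the following (generic Linux commands, nothing unRAID-specific; just a sketch of what I'd look at):

       ip addr show lo              # confirm the loopback interface is up and carries 127.0.0.1
       getent hosts 127.0.0.1       # confirm the resolver can handle the loopback address
       grep 127.0.0.1 /etc/hosts    # confirm localhost is still mapped in /etc/hosts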
  2. I was unable to find them in the plugins directory. It turns out that even if /boot/ is gone, you can still change the "autostart" preference, so I was able to prevent them from coming up on boot and breaking things. I was able to resolve the issue by changing the PCI address of a passed-through USB controller. The one I wanted to pass through is adjacent to the one that hosts the boot media, and I just needed to increment the address by one to grab the correct controller again. The new NIC came in at 4, pushing everything above it up by one: the two USB controllers had been 7 and 8 but became 8 and 9. Thank you to trurl; that suggestion triggered the thought about USB passthrough. While doing that I thought: well, it's a little tricky to move this USB device because I pass through so many ports; if I pick the wrong one it will... do about what I'm seeing... Aha!
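     For anyone in the same spot, a rough way to re-check where the controllers landed and fix the VM definition (the VM name here is just a placeholder):

       lspci | grep -i usb     # see which bus addresses the USB controllers now sit at after the new card shifted things
       virsh edit <vm-name>    # then point the <hostdev> <source><address .../></source> at the controller's new bus/slot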
  3. I tried a different port to no avail, but in doing so I think I realized what is happening. I have my VMs set up for passthrough of USB controllers. I think that when I added a new PCIe device, the NIC, it came into the ordering at a new value and changed the addresses of the existing passed-through PCIe devices. So, new question: how can I change VMs to not autostart from the command line? I cannot find their XML files on disk (a sketch of what I'd try is at the end of this post). My hypothesis, then, is that unRAID is booting successfully but then passing through the USB controller hosting the boot device. The VM that received it fails to start up because its PCI devices are not as expected, and it releases the controller; but /boot/ "blanks out" because the flash drive was pulled, and it comes back as a different drive device in /dev/sd*.
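     What I'd try from the CLI, assuming unRAID tracks VM autostart through libvirt's own autostart flag (I haven't confirmed where the GUI stores it):

       virsh list --all                       # list all defined VMs
       virsh autostart --disable <vm-name>    # clear the libvirt autostart flag for a VM
       virsh dumpxml <vm-name>                # view a VM's definition, since I can't find the XML on disk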
  4. Howdy,
     This problem was triggered when I added a PCIe network card, requiring a full reboot. The issue appears to be that my boot drive has moved from /dev/sda to /dev/sdd, yet unRAID keeps trying to mount /dev/sda1 to /boot even though it successfully boots, apparently from /dev/sdd1. I cannot figure out how to resolve this.
     After a basically successful boot, the array is mounted and the license key is recognized. However, VMs leveraging libvirt will not start, and I see this error message in the web interface on the VM panel:

       Warning: parse_ini_file(/boot/config/domain.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt_helpers.php on line 441

     If I log on via SSH, I can see that /boot/ is empty:

       root@Chimera:~# ls /boot/ -l
       total 0

     And /etc/mtab reflects the misunderstanding of the boot device's location (*** for emphasis, not present in actual file):

       root@Chimera:~# cat /etc/mtab
       proc /proc proc rw 0 0
       sysfs /sys sysfs rw 0 0
       tmpfs /var/log tmpfs rw,size=128m,mode=0755 0 0
       ******/dev/sda1 /boot vfat rw,noatime,nodiratime,umask=0,shortname=mixed 0 1******
       /mnt /mnt none rw,bind 0 0
       /dev/md1 /mnt/disk1 btrfs rw,noatime,nodiratime 0 0
       /dev/nvme0n1p1 /mnt/cache btrfs rw,noatime,nodiratime 0 0
       shfs /mnt/user0 fuse.shfs rw,nosuid,nodev,noatime,allow_other 0 0
       shfs /mnt/user fuse.shfs rw,nosuid,nodev,noatime,allow_other 0 0
       /dev/loop0 /var/lib/docker btrfs rw 0 0
       /dev/loop1 /etc/libvirt btrfs rw 0 0

     In fact, there is no /dev/sda:

       root@Chimera:~# ls /dev/sd*
       /dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1

     The correct configuration is, to some degree, reflected in /etc/fstab:

       root@Chimera:~# cat /etc/fstab
       /dev/disk/by-label/UNRAID /boot vfat auto,rw,exec,noatime,nodiratime,umask=0,shortname=mixed 0 1
       root@Chimera:~# ls -l /dev/disk/by-label/UNRAID
       lrwxrwxrwx 1 root root 10 Oct 24 18:51 /dev/disk/by-label/UNRAID -> ../../sdd1

     I can successfully mount /dev/sdd1 and see my still-intact configuration files:

       root@Chimera:~# cd /tmp/
       root@Chimera:/tmp# mkdir boot
       root@Chimera:/tmp# mount /dev/sdd1 boot/
       root@Chimera:/tmp# ls boot/
       System\ Volume\ Information/ changes.txt* license.txt* packages/ bzimage* config/ make_bootable.bat* previous/ bzroot* ldlinux.c32* make_bootable_mac* syslinux/ bzroot-gui* ldlinux.sys* memtest* syslog.txt*

     If I stop the array, the web interface begins to complain that I am not registered, because it cannot find the license key in /boot/. With the array stopped, I can unmount /dev/sda1 from /boot/ and mount /dev/sdd1 to /boot/:

       root@Chimera:/tmp# umount /boot/
       root@Chimera:/tmp# mount /dev/sdd1 /boot/
       root@Chimera:/tmp# ls /boot/
       System\ Volume\ Information/ bzroot* changes.txt* ldlinux.c32* license.txt* make_bootable_mac* packages/ syslinux/ bzimage* bzroot-gui* config/ ldlinux.sys* make_bootable.bat* memtest* previous/ syslog.txt*

     After doing this, the web interface stops complaining that there is no valid license key; it can now verify my licensed status because /boot/ is intact.
     Unfortunately, if I restart the array, /boot/ goes empty again:

       root@Chimera:/tmp# ls /boot/ -l
       total 0

     This is particularly odd because mtab still reflects a correct mounting (*** for emphasis, not present in actual file):

       root@Chimera:/tmp# cat /etc/mtab
       proc /proc proc rw 0 0
       sysfs /sys sysfs rw 0 0
       tmpfs /var/log tmpfs rw,size=128m,mode=0755 0 0
       /mnt /mnt none rw,bind 0 0
       ******/dev/sdd1 /boot vfat rw 0 0******
       /dev/md1 /mnt/disk1 btrfs rw,noatime,nodiratime 0 0
       /dev/nvme0n1p1 /mnt/cache btrfs rw,noatime,nodiratime 0 0
       shfs /mnt/user0 fuse.shfs rw,nosuid,nodev,noatime,allow_other 0 0
       shfs /mnt/user fuse.shfs rw,nosuid,nodev,noatime,allow_other 0 0
       /dev/loop0 /var/lib/docker btrfs rw 0 0
       /dev/loop1 /etc/libvirt btrfs rw 0 0

     I have, of course, tried running filesystem repairs using both Windows and fsck, to no avail. The problem seems to be unRAID mounting the wrong location to /boot/, but I can't figure out how to change its mind. Please advise.
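     For completeness, since /etc/fstab already points at the by-label path, the mount can be exercised manually by label rather than by /dev/sdX name; a sketch of that stopgap, assuming the flash drive keeps its UNRAID label and is still visible to the host (it did not address the underlying cause in my case):

       root@Chimera:~# blkid | grep UNRAID                        # confirm which partition currently carries the UNRAID label
       root@Chimera:~# umount /boot 2>/dev/null                   # drop whatever is (or isn't) mounted there
       root@Chimera:~# mount /dev/disk/by-label/UNRAID /boot      # remount the flash by label, matching fstab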
  5. Interesting suggestion to use a single vCPU. I do not have any update. I regularly reboot my two Windows VMs to avoid the lockups. I find that sometimes the whole machine begins performing poorly, and that I can avoid the hard power cycle by rebooting the Windows VMs manually before the machine locks up. I have not tried again to install Windows on 6.2; I am still using installations created on 6.1.9.
  6. I am still experiencing this issue. It still requires physically power-cycling the machine. FYI, in general you can shut down VMs from the command line with:

       virsh list
       virsh shutdown <numberOfVM>
       virsh destroy <numberOfVM>

     Of course, that didn't work in this case:

       error: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainGetBlockInfo)
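     When even virsh destroy times out on the state change lock, the only thing I know of short of a power cycle is killing the stuck QEMU process directly; a sketch, with the caveat that if the process is stuck in uninterruptible I/O wait (which seems to be what's happening here) even this may not work:

       ps aux | grep qemu-system-x86    # find the PID of the hung guest
       kill <pid>                       # then kill -9 <pid> as a last resort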
  7. I got my motherboard back and so far have not had the issue again. I did make a few changes to preempt problems:
     1) Changed machine type to pc-i440fx-2.5
     2) Upgraded to RC 2.3
     3) Ensured the VMs are using the latest virtio drivers, 0.1.118-2
     4) Freed up some RAM; I had been running at peaks of 90% consumption, now down to 75%
     I'm not convinced it's gone forever, but I haven't had it recur yet. My Windows VM configs:

     <domain type='kvm' id='8'>
       <name>Gaming</name>
       <uuid>f8e306f5-4f9c-e700-e985-727cf78d591a</uuid>
       <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking> <nosharepages/> <locked/> </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune> <vcpupin vcpu='0' cpuset='4'/> <vcpupin vcpu='1' cpuset='5'/> <vcpupin vcpu='2' cpuset='10'/> <vcpupin vcpu='3' cpuset='11'/> </cputune>
       <resource> <partition>/machine</partition> </resource>
       <os> <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> </os>
       <features> <acpi/> <apic/> </features>
       <cpu mode='host-passthrough'> <topology sockets='1' cores='2' threads='2'/> </cpu>
       <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/vdisks/Gaming/vdisk1.img'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>
         <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/ISOs/Win10_1511_1_English_x64.iso'/> <backingStore/> <target dev='hda' bus='ide'/> <readonly/> <boot order='2'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk>
         <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/ISOs/virtio-win-0.1.118-2.iso'/> <backingStore/> <target dev='hdb' bus='ide'/> <readonly/> <alias name='ide0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk>
         <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller>
         <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller>
         <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller>
         <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller>
         <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller>
         <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller>
         <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller>
         <interface type='bridge'> <mac address='52:54:00:09:a1:9e'/> <source bridge='br0'/> <target dev='vnet3'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface>
         <serial type='pty'> <source path='/dev/pts/4'/> <target port='0'/> <alias name='serial0'/> </serial>
         <console type='pty' tty='/dev/pts/4'> <source path='/dev/pts/4'/> <target type='serial' port='0'/> <alias name='serial0'/> </console>
         <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Gaming/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel>
         <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x04d9'/> <product id='0xfa50'/> <address bus='3' device='3'/> </source> <alias name='hostdev3'/> </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x06a3'/> <product id='0x8000'/> <address bus='3' device='2'/> </source> <alias name='hostdev4'/> </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x2433'/> <product id='0xb200'/> <address bus='3' device='4'/> </source> <alias name='hostdev5'/> </hostdev>
         <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </memballoon>
       </devices>
     </domain>

     <domain type='kvm' id='14'>
       <name>Media</name>
       <uuid>07a9d082-742c-ccea-83cd-9df33be75e6e</uuid>
       <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata>
       <memory unit='KiB'>6291456</memory>
       <currentMemory unit='KiB'>6291456</currentMemory>
       <memoryBacking> <nosharepages/> <locked/> </memoryBacking>
       <vcpu placement='static'>2</vcpu>
       <cputune> <vcpupin vcpu='0' cpuset='3'/> <vcpupin vcpu='1' cpuset='9'/> </cputune>
       <resource> <partition>/machine</partition> </resource>
       <os> <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> </os>
       <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor id='none'/> </hyperv> </features>
       <cpu mode='host-passthrough'> <topology sockets='1' cores='1' threads='2'/> </cpu>
       <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/vdisks/Media/vdisk1.img'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk>
         <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/ISOs/Win10_1511_1_English_x64.iso'/> <backingStore/> <target dev='hda' bus='ide'/> <readonly/> <boot order='2'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk>
         <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/ISOs/virtio-win-0.1.118-2.iso'/> <backingStore/> <target dev='hdb' bus='ide'/> <readonly/> <alias name='ide0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk>
         <controller type='usb' index='0' model='nec-xhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller>
         <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller>
         <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller>
         <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller>
         <interface type='bridge'> <mac address='52:54:00:b2:4a:48'/> <source bridge='br0'/> <target dev='vnet4'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface>
         <serial type='pty'> <source path='/dev/pts/5'/> <target port='0'/> <alias name='serial0'/> </serial>
         <console type='pty' tty='/dev/pts/5'> <source path='/dev/pts/5'/> <target type='serial' port='0'/> <alias name='serial0'/> </console>
         <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Media/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel>
         <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x046d'/> <product id='0xc52b'/> <address bus='3' device='17'/> </source> <alias name='hostdev2'/> </hostdev>
         <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon>
       </devices>
     </domain>
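     To double-check that the machine-type change actually took effect after editing, something like this should show it (standard virsh; "Gaming" is just my domain name):

       virsh dumpxml Gaming | grep machine
       # expected line: <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>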
  8. I upgraded my installation to 6.2 RC2 after getting the VMs installed using 6.1.9. I think the same essential issue cropped up again, in that the Windows VMs began performing very badly, and when I went to investigate I found the qemu-system-x86 processes chewing up all of the CPU cores assigned to those VMs. I was unable to shut one of them down and powered off the whole machine. When I went to restart, my motherboard decided to die on me for the second time, so I have not been able to resume diagnosis. So, I was able to temporarily bypass the issue by using 6.1.9 to install the Windows VMs, but a similar issue appears to have emerged after I installed 6.2. For this reason I think this is an unRAID 6.2 issue with Windows in general, not only with installation. I haven't been looking for other reports since my mobo died, for the obvious reason that I can't do anything about it until the RMA process is complete.
  9. I reverted to 6.1.9; we'll see if the lockup on VM shutdown recurs. I had attempted to disable autostart of my VMs, reinstall 6.2 beta 22, and activate the VMs again, but that failed to restore the VM tab.
  10. After installing the beta, but before rebooting, my 2 Windows VMs with GPUs were both able to shut down without locking the whole system.
     I did make some CPU assignment changes recently; I isolated most of the CPU cores used by VMs so that unRAID would not use them.
     After rebooting into beta 21 I'm totally broken because I didn't disable autostart for my VMs. Specifically, I can't start up the VM options in unRAID, I get no error messages or other helpful information, and I can't find the configuration files for my VMs on the box, so I can't fix it (a guess at where the definitions might live is at the end of this post).
     I also don't see my NVMe SSD, as I did when running the beta previously; it doesn't show up in lspci anymore either. INLINE EDIT: This issue was due to a bad motherboard; the PCIe slot the NVMe SSD was in was failing intermittently.
     Some background info: My GPUs are an Nvidia GT 710 and an Nvidia GTX 950, with an Nvidia GT 9500 in slot one for the host. I seem to have the issue that I must leave a GPU for the host to be bound to. I started out attempting to use the beta, but was not able to install Windows VMs successfully, and reverted to 6.1.9.
     My thread for this beta issue is at: http://lime-technology.com/forum/index.php?topic=49042.msg470409#msg470409
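     If libvirt itself is running, the domain definitions should still be reachable through it; a sketch of what one could try, assuming the standard libvirt paths apply (on unRAID, /etc/libvirt appears to be loop-mounted from the libvirt image, per the /etc/mtab output in an earlier post):

       virsh list --all                        # do the domains still exist as far as libvirt is concerned?
       ls /etc/libvirt/qemu/                   # libvirt's usual location for persistent domain XML
       virsh dumpxml <vm-name> > /tmp/vm.xml   # save a copy of a definition for backup or editing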
  11. Howdy, I am trying to install Windows 10 Pro as a VM on 6.2.0-beta21. The installer can be started and proceeds until it is switching from "Installing updates" to "Finishing up"; screenshot attached. The failed state is:
     The VNC session to the VM continues working fine; I cannot do anything, but the mouse moves around.
     The web interface stops responding altogether.
     SSH logon still works; running top shows high CPU usage for "wa" (waiting for I/O).
     top also shows CPU usage matching the "wa" usage for qemu-system-x86.
     top also shows a "load average" above 15.0, all indicating lots of I/O load.
     Even running "reboot" from the CLI will not break the lock; the system must be power cycled.
     I have tried leaving the system running for many hours; the state does not change. Note: I have actually changed hardware and see the same behavior; I recently built a new PC and am planning to use unRAID, and I got this behavior first on my previous (lesser) hardware. Now a confession: my entire array is SSDs, disk and parity. Dockers and SMB shares work fine; I haven't tried installing Linux in a VM yet. My understanding of the risks around an all-SSD array doesn't include this type of scenario. What logs would be helpful for this? I found the /var/log/libvirt/ directory but didn't find anything useful (see the note on logs at the end of this post). EDIT: I switched to 6.1.9 and was able to install Win10 successfully on my VMs. I have lost NVMe support for now, but expect it will return when the 6.2.0 update goes gold. VMconfig.txt
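     Note on logs: the things I'd capture next time the hang occurs (paths are the usual unRAID/libvirt locations; treat this as a sketch rather than a checklist from the devs):

       cat /var/log/syslog                          # unRAID's main system log
       dmesg | tail -n 100                          # recent kernel messages, e.g. blocked-task or I/O errors
       cat /var/log/libvirt/qemu/<vm-name>.log      # per-VM QEMU log, more useful than the top of /var/log/libvirt/
       iostat -x 2                                  # confirm which device the "wa" time is piling up on (if sysstat is available)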