
Community Reputation: 3 Neutral

About huntastikus


  1. Here's one I just threw together; I really like how it looks on my page. Original: How it looks on the page:
  2. You guys rock as always!!! I have no issues with a 1050 and Emby.
  3. https://www.linuxserver.io/donate/ Doing the same right now! Thank you for your time and efforts!
  4. I was able to solve the MariaDB hostname issue by adding a DNS entry for the MariaDB container; however, now I am stuck at the Redis hostname not being found in /etc/hosts. There is no variable to set this; according to run.sh it is statically set. Has anybody gotten past this?
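For reference, one common way to handle a hard-coded hostname like this (an assumption about this particular container stack, not a documented fix from the linuxserver.io team) is to inject a static /etc/hosts entry with docker-compose's extra_hosts, so a statically referenced name such as redis resolves to the container's IP. The service name and addresses below are placeholders.

```yaml
# Hypothetical docker-compose fragment: map the hard-coded hostnames
# "redis" and "mariadb" to fixed container IPs so /etc/hosts lookups succeed.
services:
  app:
    image: example/app            # placeholder for the actual container
    extra_hosts:
      - "redis:172.18.0.10"       # example IP of the Redis container
      - "mariadb:172.18.0.11"     # same trick as the MariaDB DNS entry
```

The equivalent for a plain `docker run` is the `--add-host redis:172.18.0.10` flag.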
  5. Hello, this plugin is awesome, thank you for your hard work! I am experiencing an issue when rebooting a VM with a passed-through GPU. Here is the setup: iGPU -> Unraid console, Titan XP -> VM, 1050 -> Emby Docker. Everything is working as it should, and I can encode and use the VM at the same time, but when the VM reboots, Unraid completely freezes up. The web GUI stops responding, and nothing will launch in the console (like htop). I have to power-cycle the server to get things going again. Is anybody else experiencing issues like this? I am using the latest version (rc4).
  6. Just tested VM reboot, worked without issues. Thanks for the help!
  7. I actually installed that plugin prior to it being released (I was a dummkopf and installed it based on a post), and I am wondering if, now that it's released, that may have created issues. I will post results. BTW, thank you for the fast response!
  8. I was not using the custom Unraid build; I just removed the plugin. I will reboot Unraid tonight and test a VM reboot as well.
  9. Ugh, sorry, I forgot; it's attached to the main post.
  10. Since updating to 6.7 rc2, when I reboot my VM that has hardware passed through to it (GPU, Bluetooth adapter, WiFi adapter, keyboard/mouse), the whole system locks up. The web interface stops responding; I can log into the console of the server, but nothing seems to want to run. Even launching something as simple as htop just gives a blank window. I think last night there was a Windows update that made the VM in question restart automatically, and I woke up to the whole server being completely unresponsive. I can reboot all the other VMs without issues. Is there anything I can post to help resolve this? Below is the configuration for my VM.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='3'>
  <name>LivingRoom</name>
  <uuid>e5e78a51-21d5-72c4-b55a-10959226b828</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>12582912</memory>
  <currentMemory unit='KiB'>12582912</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='21'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='22'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='23'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='24'/>
    <emulatorpin cpuset='0,20'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e5e78a51-21d5-72c4-b55a-10959226b828_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/LivingRoom/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/LivingRoom/vdisk2.img'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:4c:dc:06'/>
      <source bridge='br1.2'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-LivingRoom/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/isos/vbios/Edited.NVIDIA.TitanXPascal.12288.160711.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
        <address bus='2' device='5'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0b05'/>
        <product id='0x1786'/>
        <address bus='2' device='8'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x0b05'/>
        <product id='0x17cb'/>
        <address bus='2' device='7'/>
      </source>
      <alias name='hostdev4'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

athena-diagnostics-20190213-0723.zip
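When the host wedges like this after a guest reboot, a usual first step is to check whether the passed-through GPU actually rebound to vfio-pci and whether the kernel logged IOMMU/vfio errors around the reset. A minimal diagnostic sketch (the PCI address 03:00.0 is taken from the hostdev entries in the XML above; run these on the Unraid host console, ideally over SSH so the output survives a GUI hang):

```shell
# Show which kernel driver currently owns the passed-through GPU
# (03:00.0 matches the <hostdev> source address in the XML above);
# "Kernel driver in use: vfio-pci" is the expected healthy state.
lspci -nnk -s 03:00.0

# Scan recent kernel messages for vfio/IOMMU complaints that often
# accompany a host lockup during a guest reset.
dmesg | grep -iE 'vfio|iommu|dmar' | tail -n 20
```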
  11. 200 GB. I am running 3 VMs, including a gaming VM, a pfSense firewall, and a VM I use for remotely administering stuff. I also run a bunch of containers; the main ones are Emby and Nextcloud.
  12. Turned out that somehow the ISO I copied to the share got corrupted... I copied it again and everything works... SMH
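A corrupted copy like this can be caught before the first boot attempt by comparing checksums of the source ISO and the copy on the share. A small sketch (the /tmp paths are stand-ins for the real ISO locations, e.g. something under /mnt/user/isos/; the demo files below simulate an ISO so the commands are runnable anywhere):

```shell
# Simulate a source ISO and its copy (stand-ins for the real paths).
printf 'iso-contents' > /tmp/src.iso
cp /tmp/src.iso /tmp/dst.iso

# Compare SHA-256 checksums; a mismatch means the copy is corrupted.
a=$(sha256sum /tmp/src.iso | awk '{print $1}')
b=$(sha256sum /tmp/dst.iso | awk '{print $1}')
if [ "$a" = "$b" ]; then
    echo "checksums match"
else
    echo "copy is corrupted"
fi
```

Running the same two `sha256sum` commands against the original download and the file on the share would have flagged the bad copy immediately.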
  13. I just purchased Unraid and I spun up pfSense without any issues. Now I am trying to set up a Windows 10 VM and I am getting this: I have been finding others who had similar issues, but I couldn't find any with this EXACT message. I already tried: http://lime-technology.com/wiki/index.php/UnRAID_OS_version_6_Upgrade_Notes#My_OVMF_VM_doesn.27t_boot_correctly Could someone help me?