danofun

Members
  • Posts: 37
Everything posted by danofun

  1. ZFS, and it’s not even close. Implementing ZFS will most likely require adjustments to the current array setup. Adjustments would likely be made with a second array in mind... Cheers to the team for their hard work and for seeking community input!
  2. I've had some success with allowing local access to WireGuard and its attached containers. I'm running this via docker-compose, but it should work in unRAID's GUI as well. Here's what I've added.

Wireguard container:
- Add an environment variable LAN_NETWORK and populate it with your LAN (e.g. 192.168.1.0/24).
- In the wg0.conf config file, add the following to the PostUp and PostDown lines:

PostUp = ip route add $LAN_NETWORK via $(ip route | awk '/default/ {print $3}') dev eth0
PostDown = ip route del $LAN_NETWORK via $(ip route | awk '/default/ {print $3}') dev eth0

You can then designate specific containers to utilize the WireGuard connection. In the container we want to use with WireGuard:
- Remove all ports, as we will now be connecting to this container via the WireGuard container.
- In docker-compose we'll add a network_mode: service:wireguard line. As for unRAID's GUI, there are a couple of discussions here and here regarding the container network.
- Your attached container needs to start after WireGuard, so add the following:

depends_on:
  - wireguard

Here is a sample docker-compose.yml showing NZBGet routing through WireGuard:

version: '3.6'
services:
  # WireGuard
  wireguard:
    image: linuxserver/wireguard
    container_name: wireguard
    restart: always
    ports:
      - 51820:51820/udp
      # nzbget
      - 6789:6789
    volumes:
      - /mnt/user/appdata/wireguard:/config
      - /lib/modules:/lib/modules
    environment:
      LAN_NETWORK: 192.168.1.0/24
      PGID: 100
      PUID: 99
      TZ: America/New_York
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0

  # NZBGet Usenet (NZB) downloader
  nzbget:
    image: linuxserver/nzbget
    container_name: nzbget
    restart: always
    network_mode: service:wireguard
    volumes:
      - /mnt/user/appdata/nzbget:/config
      - /mnt/user/media:/media
    environment:
      PGID: ${PGID}
      PUID: ${PUID}
      TZ: ${TZ}
    depends_on:
      - wireguard

One item of note: I was unable to make the kill switch work. Any help would be greatly appreciated.
root@8de63d4b329d:/# iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
Unable to access interface: No such device
iptables v1.6.1: mark: bad mark value for option "--mark", or out of range.
Try `iptables -h' or 'iptables --help' for more information.
root@8de63d4b329d:/#
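My best guess at this failure (an inference, not verified in this container): %i is a wg-quick placeholder that only gets substituted with the interface name when the command runs from wg0.conf's PostUp/PostDown hooks. Typed at an interactive shell it stays literal, so `wg show %i fwmark` fails with "No such device" and the empty command substitution leaves --mark without a value. A sketch of the kill switch expressed as wg0.conf lines instead (modeled on the upstream wg-quick example; ip6tables lines would mirror these):

```
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
```

To test interactively, substitute the actual interface name (e.g. wg0) for %i.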
  3. tv_grab_zz_sdjson_sqlite is still failing and can be fixed by including perl-lwp-useragent-determined in the build. I've submitted pull request #117 to solve this issue.
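Since the fix named in the pull request is a one-line build change, in Dockerfile terms it would look roughly like the following (assuming an Alpine-based image, which linuxserver containers typically are; adjust the package manager command if the base differs):

```dockerfile
# Install the Perl module tv_grab_zz_sdjson_sqlite needs at runtime
RUN apk add --no-cache perl-lwp-useragent-determined
```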
  4. Linuxserver.io team, thank you for so diligently maintaining and improving your docker containers! Great stuff! I've written an NZB post-processing script to trigger your beets container after a successful music download. With beets configured to notify Plex, I finally have fully automated music downloads! The script triggers a beets import via an ssh command which is sent to the beets docker container's host system. The ssh command can be authenticated via password or ssh key with an optional passphrase. Any chance of adding openssh-client to your nzbget container? https://github.com/danofun/beets-nzbget-script
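As a rough illustration of the approach (not the actual script — see the repo above for that), an NZBGet post-processing hook of this shape could send the import command over ssh. The host name, container name, and paths below are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch of an NZBGet post-processing hook that asks the
# docker host to run a beets import.  All names here are placeholders.
BEETS_HOST="${BEETS_HOST:-tower.local}"          # docker host running beets
IMPORT_DIR="${NZBPP_DIRECTORY:-/media/music/in}" # set by NZBGet on real runs
CMD="docker exec beets beet import -q $IMPORT_DIR"

# DRY_RUN=1 (the default here) only prints what would be executed.
if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: ssh nobody@$BEETS_HOST $CMD"
else
    # Key-based auth assumed (optionally with a passphrase via ssh-agent).
    ssh -o BatchMode=yes "nobody@$BEETS_HOST" "$CMD"
fi
```

A real NZBGet script would also check the NZBPP_TOTALSTATUS variable and return NZBGet's post-process exit codes; this sketch omits that for brevity.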
  5. Hi danofun, I just read your Sierra guide. Great guide. Wow, I wish I'd read it before making my video guide for Sierra!! Great that you have found how to stop Sierra lagging with the 1/4-speed problem. I said in my video I bet someone would fix this soon, and you already had!! Anyway, I have now added the patched Clover files into the files I linked in the description of the video and reuploaded them to fix this. Thank you. One thing I saw in your XML file:

<os>
  <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
  <loader readonly='yes' type='rom'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <boot dev='hd'/>
</os>

I see you are mapping the unified OVMF.fd file as ROM. I guess this is just so it's easier for people to copy and paste the XML without having to worry about creating the nvram, which gets created based off the UUID on VM creation. But from what I have read, we should never map the OVMF.fd file as ROM even though libvirt does support it. We should use writable pflash so the firmware binary is central but all our OVMF VMs can keep a private varstore. I didn't know Apple had dropped the Core 2 Duo devices. Typical Apple, making forced obsolescence. Makes me think that my 2012 Retina MacBook Pro will be pushed to obsolescence in the next few years!! Love the icon you've done for Sierra... awesome.

gridrunner, the positive feedback is greatly appreciated. Your video guides will be invaluable to the community. The Sierra upgrade piece exists mainly to outline the differences at this point between Sierra and previous OS installs. Those who truly deserve the credit are the unRAID crew for this amazing platform, archedraft and peter_sm for getting the OS X ball rolling, and dreadkopp at InsanelyMac for putting together the patched version of Clover. Hopefully we can get these or similar resulting patches included in the mainline Clover code. I'll attempt to resurrect and troubleshoot this ticket.
I had indeed included the unified OVMF.fd in the XML solely for ease of VM creation. To your point, it is probably poor form to do so, and I've updated the XML accordingly. Cheers!
  6. Look some pages back. I think you must change one parameter in the XML. Gus, you must change the machine type to q35-2.5 in 6.2 for it to work:

<os> <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>

I've done that and I still can't get it to boot. It won't even boot with the install ISO.

Try changing core2duo to Penryn.
  7. gridrunner's guide should, for the most part, work for Sierra as well. Apple's Core 2 Duo devices were cut for Sierra, so we'll have to change our emulation to Penryn. Enoch's latest version seems to work well, but Clover is a little bit of a mess with Penryn: with the QEMU flag enabled in config.plist the VM runs at 1/4 speed, and without the flag at ~10x speed. dreadkopp over at InsanelyMac posted a patched Clover 3578 with the appropriate fixes, but they have not yet been merged into Clover. Therefore, you'll have to use dreadkopp's Clover version for now. You can follow the Clover ticket here. I did not attempt an upgrade, but here's how I installed a new instance of OVMF Sierra with Clover EFI.

Create Sierra Image
1. On a working Mac/Hackintosh, download Sierra from the App Store.
2. Create an 8GB file named sierra_usb:
   mkfile -n 8g sierra_usb
3. Mount the file as a disk. After this command you should see where the image is mounted; in this example it was mounted at /dev/disk2:
   hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount sierra_usb
4. Partition your virtual disk. First we will create a GPT partition map:
   gpt create /dev/disk2
5. Use Disk Utility to erase your virtual disk, with the name Untitled.
6. Create the installer virtual disk from the installer app:
   sudo /Applications/Install\ macOS\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/Untitled --applicationpath /Applications/Install\ macOS\ Sierra.app --nointeraction

Install dreadkopp's patched Clover to the Virtual Install Disk
1. Open the installer and choose the install disk as the installation location.
2. Choose to customize and select "Install for UEFI booting only".
3. Select a theme for Clover.
4. Under Drivers64UEFI, select DataHubDxe-64 AND OsxAptioFix2Drv.
5. Complete the install action, then open config.plist and make sure your resolution matches unRAID's OVMF resolution:
   <key>ScreenResolution</key>
   <string>800x600</string>
6. Copy your SMBIOS settings from a previous Clover install or use Clover Configurator's SMBIOS wizard. In this example, iMac14,1.

Unmount and Move Virtual Disk Image to unRAID
1. Unmount the install disk:
   diskutil unmount /dev/disk2
2. Move the virtual disk image to an unRAID share. For this example, /mnt/user/domains/macOS/sierra_usb.

Installation
1. Create a virtual disk where we will install Sierra. In this example it's a 90GB virtual disk:
   qemu-img create -f raw /mnt/user/domains/macOS/vdisk1.img 90G
2. On unRAID, create a new custom VM. NOTE: You'll need to add in Apple's key per usual. This example is for a GPU passthrough VM, so modify for your GPU BIOS or delete this line altogether. I've also passed through an entire USB controller (00:1d.0).

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>macOS</name>
  <uuid>cf5aa9c4-c70c-4b00-bb27-2125bc8fcedc</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="macOS" icon="/mnt/user/domains/macOS/OSX-10.12.png"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/cf5aa9c4-c70c-4b00-bb27-2125bc8fcedc_VARS-pure-efi.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Penryn</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/macOS/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/macOS/sierra_usb'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:00:20:30'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='e1000-82545em'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </interface>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=OSX_KEY'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/mnt/user/domains/macOS/Powercolor.R9270.2048.131105.rom'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=00.2'/>
  </qemu:commandline>
</domain>

3. Start the VM, and at the Clover boot screen select Install macOS.
4. Once in the installer, open Disk Utility and partition the 90GB virtual disk.
5. Quit Disk Utility and install as usual.
6. After the initial installation is finished the VM will reboot; again select Install macOS.
7. The install will resume and reboot once again.
8. At the Clover boot screen, select your Sierra partition.
9. Complete the Sierra installation wizard and arrive at the desktop.

Post Installation
1. Open the installer and choose the Sierra disk as the installation location.
2. Choose to customize and select "Install for UEFI booting only".
3. Select a theme for Clover.
4. Under Drivers64UEFI, select DataHubDxe-64 AND OsxAptioFix2Drv.
5. Complete the install action, then open config.plist and make sure your resolution matches unRAID's OVMF resolution:
   <key>ScreenResolution</key>
   <string>800x600</string>
6. Copy your SMBIOS settings from a previous Clover install or use Clover Configurator's SMBIOS wizard. In this example, iMac14,1.
7. Shut down the VM.
8. In unRAID, edit the macOS VM XML and remove the install disk:
   <disk type='file' device='disk'>
     <driver name='qemu' type='raw'/>
     <source file='/mnt/user/domains/macOS/sierra_usb'/>
     <backingStore/>
     <target dev='hda' bus='sata'/>
   </disk>
9. Start the VM and you should boot to the Clover boot screen.

NOTES:
- I was able to get HDMI audio working with the attached HDMIAudio.kext.
- The patched Clover version is 3578, a version where kext injection is broken (this was fixed in 3585). Until the fixes are added to Clover builds, or we get an updated patched version of Clover, we'll have to install kexts to /System/Library/Extensions.

EDIT 2016-12-21: email4nickp has patched Clover 3923. It seems to work well after isolating and assigning CPUs to the VM.
Alter your syslinux configuration to include the CPUs you'd like to isolate:

append isolcpus=2,3 initrd=/bzroot

and add an emulator pin to your VM configuration:

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <emulatorpin cpuset='0'/>
</cputune>

HDMIAudio.kext.zip
  8. Same error here since today's nzbget docker container update. EDIT: It appears unrar and 7za are in the /app directory. An alternate temporary solution is to change Settings -> unpack -> UnrarCmd to /app/unrar and SevenZipCmd to /app/7za
  9. Same error here since today's nzbget docker container update.
  10. Seems as though this has already been dockerized. https://github.com/onedr0p/manage-this-node
  11. saarg, thank you for creating and maintaining these dockers. The addition of XMLTV is greatly appreciated!! I'm a Schedules Direct user (tv_grab_na_dd), and earlier this year Gracenote services were discontinued, so some URLs have changed. The docker's included xmltv-utils 0.5.63 does not include these changes (they were added in xmltv-utils 0.5.66). Therefore, running tv_grab_na_dd --configure results in a 500 error:

tv_grab_na_dd --configure --config-file /nonexistent/.xmltv/tv_grab_na_dd.conf

returns

Service description 'http://docs.tms.tribune.com/tech/tmsdatadirect/schedulesdirect/tvDataDelivery.wsdl' can't be loaded: 500 Can't connect to docs.tms.tribune.com:80 (Connection timed out)

Adding the line below to the docker's /etc/hosts fixes the error:

54.85.117.227 docs.tms.tribune.com webservices.schedulesdirect.tmsdatadirect.com

Any chance of adding this to the docker? Thanks again for your contributions!
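Since edits to /etc/hosts inside a container are lost when the image updates, one way to make the mapping stick (my suggestion, untested with this template) is to supply it at container creation. Docker's --add-host flag writes the entry into the container's /etc/hosts; on unRAID it could go in the template's Extra Parameters field:

```
--add-host=docs.tms.tribune.com:54.85.117.227 --add-host=webservices.schedulesdirect.tmsdatadirect.com:54.85.117.227
```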
  12. @TorchRedRob I'm also experiencing very poor performance using WebGrab+Plus and schedules direct. Any chance of sharing the XMLTV docker?
  13. Great work! I'm a happy Linuxserver nzbget and sonarr user and am grateful for your attention to detail and contribution to the community! Since you're taking requests, I'll throw another one out there, Emby. I've made the switch from Plex and hope to never look back... Keep up the great work!
  14. I've been experiencing the same issue since OVMF was added as an option.
  15. It may be prudent to note in the OP which version of OE is being pulled (latest stable, latest beta/RC, legacy stable, user choice, etc.)... especially for those that are using a shared mysql database. Also, how are OE upgrades handled? Nice work! John

This... Also, when there are multiple stable/RC builds, will we be able to choose which one or will we be locked into whichever one is the current unRAID OE VM?

Taking a moment to address a few questions on the OE VM.

Kernel panics from VM (not host OS) when trying to start
Tough to diagnose because the VM image is based on 6.0.0 beta 2 from OpenELEC, which is based on Kodi 15.0 beta 2 (Isengard). This means that the OS and the primary application are BOTH in a beta state, therefore issues relating to VM boot-up could be due to problems in either of those layers, on top of the potential for an issue relating to the virtual machine / GPU assignment itself. A few things you can try here:
1 - Toggle "advanced" view on the VM page, switch the VM from SeaBIOS to OVMF, and see if that changes anything.
2 - Do not assign the audio device to the VM (just the graphics) and see if it boots.
3 - Try assigning an audio device other than the HDMI audio on the graphics device (assign an on-board audio device).
4 - Try utilizing a port on the card OTHER than HDMI (e.g. DisplayPort, VGA, or DVI).

RE: Updating the VM
The VM template is configured in such a way that the image itself attaches in a read-only state. This allows you to create multiple OpenELEC instances off of a single image file. As new updates are released, we will make new versions of the VM available for download from within the web interface. You can then toggle to these new versions while pointing to the same config path, so an update is really as simple as editing an existing VM, changing the version field, then downloading the new image and clicking update.
Stable / RC Builds
When 6.0.0 stable is released for OpenELEC, it's our intention to maintain pace with the current releases only, not necessarily RCs or betas. We are just starting our initial template implementation using the OE RC.

@JonP I've tried all of your above suggestions. After completely removing the VM and associated files and choosing OVMF, I'm receiving a "Failed to start Xorg Server" error. Suggestions 2-4 yield either the same error or a kernel panic when using SeaBIOS.

@johnodon I've been successfully running Windows and Ubuntu KVM VMs for quite some time using both SeaBIOS and OVMF. All of your feedback has been greatly appreciated.

Ok, you'll want to try modifying the XML to pass it your GPU ROM manually (see the wiki manual in my signature for a guide on this).

After editing the XML file, the OE VM is booting properly. Your help has been greatly appreciated.

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>OpenELEC</name>
  <uuid>4eae880e-07f3-e670-40bb-7e9899ce542c</uuid>
  <metadata>
    <vmtemplate name="OpenELEC" icon="openelec.png" os="openelec" openelec="5.95.2_1"/>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/vm/OpenELEC-unRAID.x86_64-5.95.2_1.img'/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/appdata/OpenELEC/'/>
      <target dir='appconfig'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:82:84:a6'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/OpenELEC.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1d.0'/>
  </qemu:commandline>
</domain>
  16. But have you been successfully passing through a GPU to one or both of those? Or are you connecting to them via VNC/RDP? Can you also copy/paste your XML for the problematic OE VM?

Successful GPU passthrough on both Windows and Ubuntu. In fact, I'm posting this from a Windows 8.1 VM on a passed-through AMD Radeon R9 270. Here is the template-generated XML for the OE VM.

<domain type='kvm'>
  <name>Openelec</name>
  <uuid>f4338f06-c190-0529-2aa4-5578bd244da2</uuid>
  <metadata>
    <vmtemplate name="OpenELEC" icon="openelec.png" os="openelec" openelec="5.95.2_1"/>
  </metadata>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/vm/OpenELEC-unRAID.x86_64-5.95.2_1.img'/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/appdata/OpenELEC/'/>
      <target dir='appconfig'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:d1:e8:d4'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Openelec.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  17. I appreciate the lightning fast response! Here is the diagnostics file. lianli-diagnostics-20150721-1229.zip
  18. It may be prudent to note in the OP which version of OE is being pulled (latest stable, latest beta/RC, legacy stable, user choice, etc.)... especially for those that are using a shared mysql database. Also, how are OE upgrades handled? Nice work! John

This... Also, when there are multiple stable/RC builds, will we be able to choose which one or will we be locked into whichever one is the current unRAID OE VM?

Taking a moment to address a few questions on the OE VM.

Kernel panics from VM (not host OS) when trying to start
Tough to diagnose because the VM image is based on 6.0.0 beta 2 from OpenELEC, which is based on Kodi 15.0 beta 2 (Isengard). This means that the OS and the primary application are BOTH in a beta state, therefore issues relating to VM boot-up could be due to problems in either of those layers, on top of the potential for an issue relating to the virtual machine / GPU assignment itself. A few things you can try here:
1 - Toggle "advanced" view on the VM page, switch the VM from SeaBIOS to OVMF, and see if that changes anything.
2 - Do not assign the audio device to the VM (just the graphics) and see if it boots.
3 - Try assigning an audio device other than the HDMI audio on the graphics device (assign an on-board audio device).
4 - Try utilizing a port on the card OTHER than HDMI (e.g. DisplayPort, VGA, or DVI).

RE: Updating the VM
The VM template is configured in such a way that the image itself attaches in a read-only state. This allows you to create multiple OpenELEC instances off of a single image file. As new updates are released, we will make new versions of the VM available for download from within the web interface. You can then toggle to these new versions while pointing to the same config path, so an update is really as simple as editing an existing VM, changing the version field, then downloading the new image and clicking update.
Stable / RC Builds
When 6.0.0 stable is released for OpenELEC, it's our intention to maintain pace with the current releases only, not necessarily RCs or betas. We are just starting our initial template implementation using the OE RC.

@JonP I've tried all of your above suggestions. After completely removing the VM and associated files and choosing OVMF, I'm receiving a "Failed to start Xorg Server" error. Suggestions 2-4 yield either the same error or a kernel panic when using SeaBIOS.

@johnodon I've been successfully running Windows and Ubuntu KVM VMs for quite some time using both SeaBIOS and OVMF. All of your feedback has been greatly appreciated.
  19. Whenever I have this happen to me, it is usually because I am moving a passed-through GPU from one VM to another and I did not shut down one of the VMs properly. I think it keeps the vid card from resetting. My only fix has been to reboot the server. John

Thank you for the response, John. Unfortunately, I'm experiencing the same issue after a server reboot. Are any users successfully using the OpenELEC VM?
  20. The OpenELEC VM seems to be created properly in the web GUI, but it is panicking on startup. All other VMs continue to work properly.

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>OpenELEC</name>
  <uuid>0369e63e-a7eb-56f2-6ca6-3105b3870648</uuid>
  <metadata>
    <vmtemplate name="OpenELEC" icon="openelec.png" os="openelec" openelec="5.95.2_1"/>
  </metadata>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/vm/OpenELEC-unRAID.x86_64-5.95.2_1.img'/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/cache/appdata/OpenELEC/'/>
      <target dir='appconfig'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:80:5f:d9'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/OpenELEC.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>
  </qemu:commandline>
</domain>
  21. mythweb seems to be working just fine on my end. Ability to view listings and schedule recordings is working correctly.
  22. Like a few others, I am able to configure MythTV and then run mythfilldatabase from the docker via RDP to use an HDHomeRun Prime. After running /usr/bin/mythbackend from the terminal, I can watch and record TV in Kodi, both on external devices and on Kodibuntu/Windows VMs with a passed-through GPU. Fantastic work @sparklyballs!!!