
Bungy

Community Developer
  • Posts

    375
  • Joined

  • Last visited

Everything posted by Bungy

  1. Try changing the VPN address to uk-london.privateinternetaccess.com Sent from my Nexus 5X using Tapatalk
  2. Either one will work, so let's go with the tower's IP address.
  3. What is your LAN network IP address and subnet mask? Also try adding the parameter: ENABLE_PRIVOXY=no
  4. The docker is still supported; it's just that my time for support is now more limited. Can you post your full config? My guess is that there is a parameter that isn't set, most likely LAN_NETWORK. If you're used to looking at docker-compose setups, here's my config. You can probably easily tell which environment variables you'll need to change to get it to work.

     nzbget:
       restart: always
       image: jshridha/docker-nzbgetvpn
       privileged: true
       volumes:
         - /mnt/disks/external/Downloads:/data
         - /mnt/disks/external/appdata/nzbgetvpn/config:/config
         - /etc/localtime:/etc/localtime
       environment:
         - VPN_ENABLED=yes
         - VPN_USER=USERNAME
         - VPN_PASS=PASSWORD
         - VPN_REMOTE=us-east.privateinternetaccess.com
         - VPN_PORT=1198
         - VPN_PROV=pia
         - VPN_PROTOCOL=udp
         - ENABLE_PRIVOXY=no
         - LAN_NETWORK=192.168.1.0/24
         - STRONG_CERTS=no
       ports:
         - "6789:6789"
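For anyone reusing the compose snippet above: once it is saved into a docker-compose.yml (under whatever top-level keys your compose file version expects), bringing the service up is just the usual compose workflow. This is a generic sketch, not part of the original post; the service name nzbget matches the config above.

```shell
# Start (or recreate) the nzbget service in detached mode.
docker-compose up -d nzbget

# Tail the logs to confirm the VPN tunnel came up before using the web UI.
docker-compose logs -f nzbget
```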
  5. It depends which container you're using. I don't believe you can if you're using Linuxserver's container; they generally design their containers as single-container deployments. You can use external databases when using Clue's container. Check here for documentation: https://github.com/clue/docker-ttrss
  6. I want to say I saw the same problem, but my memory may be failing me. Try shutting down the machine, removing all but one RAM stick, and trying again. That worked for me on my 6GB system.
  7. Same problem for me. I'm going to rely on you to downgrade the unraid version. My server is a couple hundred miles away from me and I don't want to risk it going down and needing to be there in person. Let me know how it goes. Good luck.
  8. Quoting the reply I received: "Hey Bungy, thanks for your help so far. I've booted a Win10 and installed Hyper-V manager successfully through 'Programs and Features'. However, I still can't seem to boot a VM in Hyper-V; it errors with the same error message as I originally had. Would you be able to try and see if your setup boots a VM in Hyper-V? For me it doesn't want to boot anything, no matter if I select a physical disk or create a new vhd. Google doesn't give me much except the same problems using VMware as the main hypervisor. Edit: I'm afraid this leads back to the thread/bug that jonp is referencing: https://bugzilla.kernel.org/show_bug.cgi?id=106621 They specifically state: 'This, together with -cpu host,-hypervisor,+vmx, will allow Hyper-V to be installed. It will however not allow to start these Virtual Machines.' I'm really curious as to whether your setup can boot the VM in Hyper-V, Bungy." My Sunday got away from me and I couldn't get started on this. I'll try to boot a VM tonight. While you wait for my progress, you can try downgrading unRAID to one of the earlier 6.3 release candidates and see if it works there. I think I remember it working in rc3. Hopefully I'll have some news for you soon.
  9. Win10 should work also. I originally tried OVMF and couldn't even get Hyper-V enabled without using PowerShell. It worked once I switched to SeaBIOS. I'll give Windows 10 a shot with the above XML and let you know how that works out.
  10. Already done. You should just need to update your container and all will work.
  11. I just tried this out for myself and it worked on unRAID 6.3.2. No errors when enabling Hyper-V in Server Manager. Below is my XML for your reference:

<domain type='kvm' id='2'>
  <name>Server2016_2</name>
  <uuid>c0803c6e-8ce7-4efd-8143-cbc328512d12</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows Server 2016" icon="windows.png" os="windows2016"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='24'/>
    <vcpupin vcpu='3' cpuset='25'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/domains/Server2016_2/Server2016.qcow2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/Virtio/virtio-win-0.1.126.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:49:a9:ad'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Server2016_2/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
  12. I think my previous instructions may have been incorrect. I think we need to modify the kernel parameters on boot to enable nested virtualization in unRAID 6.3+. Modify /boot/syslinux/syslinux.cfg so that

     append initrd=/bzroot

     becomes

     append kvm-intel.nested=1 kvm-amd.nested=1 initrd=/bzroot

     You can then check that nested virtualization is enabled using:

     cat /sys/module/kvm_intel/parameters/nested

     I'm sure the command is similar for AMD machines (kvm_amd instead of kvm_intel).
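The verification step above can be wrapped in a tiny POSIX shell helper so it degrades gracefully when the module isn't loaded. This is only a sketch: nested_status is a made-up name, and the two paths in the trailing comment are the standard parameter files for the Intel and AMD KVM modules.

```shell
# Report whether nested virtualization is enabled, given the path to a
# KVM module's "nested" parameter file. Prints enabled/disabled/unknown.
nested_status() {
    param_file="$1"
    if [ ! -r "$param_file" ]; then
        # Module not loaded (or file unreadable) - can't tell.
        echo "unknown"
        return 1
    fi
    # Newer kernels report Y/N, older ones 1/0.
    case "$(cat "$param_file")" in
        Y|1) echo "enabled" ;;
        *)   echo "disabled" ;;
    esac
}

# On the unRAID host you would run:
#   nested_status /sys/module/kvm_intel/parameters/nested   # Intel
#   nested_status /sys/module/kvm_amd/parameters/nested     # AMD
```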
  13. I'll try to post my XML soon. I won't be in front of a computer for a few days.
  14. Negative. I have definitely gotten it to work and was even able to use RemoteFX in the nested VM.
  15. I also noticed I had to use SeaBIOS instead of OVMF. I'm not sure if that is still the case or if it's specific to my setup.
  16. Nested virtualization was disabled in 6.3rc9 and 6.3 stable. It can be re-enabled by adding these lines to the top of your /boot/config/go script:

     echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
     echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
  17. Wow sooooo official. Hopefully that doesn't go on my permanent record!
  18. Did you read the moderator comments? They explicitly state that the container is deprecated.
  19. Sorry, but the openhab docker is deprecated now. I haven't pushed an update to it in a LONG time as I've switched to home-assistant for my home automation needs. I find it to be a much easier platform to configure and develop on. If you're set on using openhab, your best bet is to use the official openhab docker found here (https://hub.docker.com/r/openhab/openhab/). There isn't an unraid template for it, but I'm sure one can be created with little effort.
  20. Migrating mysql dockers may be difficult as linuxserver likely uses a different directory structure. Your best bet is to use mysqldump to dump the sql tables and then import them back into the new docker. Also, just FYI, it's actually the official mysql docker. I'm not the author of the container.
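The dump-and-restore flow described above can be sketched as a pair of commands. To be clear, this is an illustration only: the container names, password, and dump filename are placeholders, not anything from an actual setup.

```shell
# Dump all databases out of the old container (placeholder names/credentials).
docker exec old-mysql mysqldump -uroot -pSECRET --all-databases > all-dbs.sql

# Import the dump into the new container once it's up and initialized.
docker exec -i new-mysql mysql -uroot -pSECRET < all-dbs.sql
```

Dumping to SQL and re-importing sidesteps any differences in on-disk data directory layout between the two images, which is exactly why it's the safer migration path here.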
  21. Quoting: "You can pass through block devices to guest VMs without using iSCSI. In fact, it's super easy to do. First, log in via SSH or Telnet and type this command: v /dev/disk/by-id Locate the disk you wish to pass through and copy its name. Now go create your VM and, under vDisk location, select 'Manual'; in the path field, type /dev/disk/by-id/ and then paste in the name of the disk you wish to pass through. You can optionally include the partition, so you can have one disk with 3 partitions and pass through different partitions to different VMs if you want. What does iSCSI gain you over this method?" It gains me nothing for my use case. Your method is likely faster, I just didn't know about it! I'm actually very excited to try that out.
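The path pasted into the "Manual" vDisk field always has the same shape, so it can be illustrated with a throwaway helper. Both the helper name by_id_path and the disk id below are made up for the example; the real names come from listing /dev/disk/by-id on the host.

```shell
# Build the stable by-id path for a disk name as listed under
# /dev/disk/by-id on the unRAID host.
by_id_path() {
    printf '/dev/disk/by-id/%s\n' "$1"
}

# Example with a fabricated disk id; append a suffix like "-part1"
# to pass through a single partition instead of the whole disk.
by_id_path "ata-EXAMPLE_SERIAL_123"
# -> /dev/disk/by-id/ata-EXAMPLE_SERIAL_123
```

Using the by-id path rather than /dev/sdX matters because the sdX letters can change between reboots, while the by-id names are stable.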
  22. For those interested in trying out iSCSI and figuring out if it works for you, I have a working install with the proper activated kernel modules. I'm currently using targetcli in a docker and it works great. There is no GUI, so you have to be comfortable with setting up the IQNs, ACLs, LUNs, portals, etc. manually. I'm currently using iSCSI to give guest VMs block-level access to drives without having to pass through drive controllers. One thing to keep in mind is that you cannot create block-level LUNs for drives that are already mounted. If you want to use those drives, you'll have to use fileio.
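For orientation, a manual targetcli setup of the kind described above follows a fixed sequence: backstore, target IQN, LUN, ACL. The outline below is a rough sketch rather than a copy-paste recipe; the backstore name, both IQNs, and the device path are placeholders, and a real session would also need portal and authentication settings for your network.

```shell
# Create a block backstore from an unmounted drive (placeholder device).
targetcli /backstores/block create name=vmdisk0 dev=/dev/sdX

# Create the iSCSI target (placeholder IQN).
targetcli /iscsi create iqn.2017-01.local.tower:vmdisk0

# Export the backstore as a LUN under the target's first portal group.
targetcli /iscsi/iqn.2017-01.local.tower:vmdisk0/tpg1/luns create /backstores/block/vmdisk0

# Allow a specific initiator via an ACL (placeholder initiator IQN).
targetcli /iscsi/iqn.2017-01.local.tower:vmdisk0/tpg1/acls create iqn.2017-01.local.client:initiator

# Persist the configuration.
targetcli saveconfig
```

For a drive that is already mounted, the first step would use /backstores/fileio with a backing file instead of /backstores/block, as noted above.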
  23. All, I was able to recompile the unRAID kernel for 6.3rc4 to include the necessary headers to get targetcli working on unRAID. I'm still in the very early stages of getting things working and learning about iSCSI targets. I'm a bit lost on what benefit you guys are expecting from having unRAID as an iSCSI target. I understand the difference between shares and block-level access, but the use cases I've seen so far don't seem like they'd gain much from iSCSI. I believe the main argument is performance, however the performance would still be bottlenecked by the parity write speeds. I believe I'm missing something fundamental, so if anybody can educate me, I'd appreciate it.
  24. Fantastic! Glad that worked for you!
  25. Make sure that the file has the permissions 0640 and is owned by uid:gid 33:65534. To set those permissions, use the following commands:

     chmod 0640 config.php
     chown 33:65534 config.php

     After you set those permissions, it's likely that you cannot edit the file through Samba anymore. Instead, use nano to edit the file:

     nano config.php

     Nano is built into unRAID and is a very easy-to-use terminal-based text editor. It may be useful to google a quick nano tutorial; it's easy to use, but isn't straightforward your first time using it.
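After running the chmod/chown above, it's easy to double-check the result before retrying the app. The helper name check_perms is made up for this sketch; it relies on GNU stat's -c format flags, which is what unRAID (being Linux-based) ships.

```shell
# Print the octal mode and numeric uid:gid of a file, e.g. "640 33:65534".
check_perms() {
    stat -c '%a %u:%g' "$1"
}

# On the unRAID box you would run, e.g.:
#   check_perms config.php    # expect: 640 33:65534
```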