UnspokenDrop7

  1. I "solved" my issue with the WebUI. After reading some of the latest posts regarding the memory leak issues I decided to install a fresh container. After shutting down my old container I renamed the appdata folder. Then I changed the container repository from linuxserver/unifi-controller:LTS to linuxserver/unifi-controller:5.11.50-ls40 (as proposed) and installed my new container. Now I could reach the WebUI! I then restored a backup from a year ago (2019-05), yes, a year ago. At first everything was good: I signed in with my usual user/pass and started looking around (no changes made!), and suddenly I lost connection to the WebUI. I could not get it back. I fail to see the logic here. A new, fresh container and a backup from a year ago, well before my issues started. First working, then not. Anyway, I removed the appdata folder and started from a clean install again. This time I did all, or at least all necessary, configuration manually to get my WiFi back online. Now it has been working for at least 24 h. Hopefully this helps the rest of you having this issue... maybe you will have better luck with your backup, if you have one.
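     In command form, the rename and tag change described above look roughly like this. This is only a sketch: the .bak suffix is an arbitrary example, and the appdata path is taken from the docker run commands further down.
       mv /mnt/user/appdata/unifi-controller /mnt/user/appdata/unifi-controller.bak   # keep the old config aside
       docker pull linuxserver/unifi-controller:5.11.50-ls40                          # pinned tag instead of :LTS
     After that, point the repository field of the Unraid template at the pinned tag and start the container again.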
  2. Aha, I think I understand you now! You mean the container seems to be fixed or hardcoded to use TCP 8080? That should be no problem for me. I have no other Docker container using port 8080, nor any other service running on my Unraid host using this port. I never have. On the other hand, I have never used 8080 to access the WebUI. The "instructions" (see attached image) say I should use 8443, so I have always used 8443. Anyway, just to make sure: if I switch to bridged mode and map host port 8082 to container port 8080, I get another issue. My browser gives me SSL_ERROR_RX_RECORD_TOO_LONG.
     Command:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='unifi-controller' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '3478:3478/udp' -p '8082:8080/tcp' -p '8443:8443/tcp' -p '8880:8880/tcp' -p '8843:8843/tcp' -p '10001:10001/udp' -v '/mnt/user/appdata/unifi-controller':'/config':'rw' 'linuxserver/unifi-controller:LTS'
     7b17630e25bbc111b6c73049741287990777930e4ab791bdac1890762a9cbda3
     The command finished successfully!
     Again, if I look at the "instructions", it says port 8080 is used for "device and controller communications". So to my understanding 8080 should not be used for the WebUI. Or am I missing something here?
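     A rough way to see the difference between the two ports from a shell, using the same placeholder address as the bookmark mentioned in the next post: container port 8080 (mapped to host 8082 here) speaks plain HTTP for device/controller communications, so pointing https:// at it is typically exactly what produces SSL_ERROR_RX_RECORD_TOO_LONG, while the WebUI is served over HTTPS on 8443.
       curl -I  http://v.x.y.z:8082/    # plain HTTP inform port (container port 8080)
       curl -kI https://v.x.y.z:8443/   # HTTPS WebUI port (-k skips the self-signed certificate check)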
  3. I have used this Docker container for years and I have always reached the WebUI via TCP 8443. I'm 100% sure of this, as I have a bookmark in my browser pointing to https://v.x.y.z:8443/ Anyway, I tried to access the WebUI using 8080, but there is no answer on this port either. I wish I could have a look at the web server configuration of the Docker container, but I'm not sure where to even start looking... or whether I'd be able to read the configuration once I find it. Maybe I would at least be able to verify which port it is listening on.
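     A sketch of how this could be checked from the Unraid console. Since the container normally runs with --net='host' (see the docker run command in the next post), the listening ports show up on the host itself; the system.properties path is only an assumption based on the /config volume mapping.
       netstat -tlnp | grep -E '8080|8443'    # or: ss -tlnp | grep -E '8080|8443'
       grep -i port /mnt/user/appdata/unifi-controller/data/system.properties   # the controller's own port settings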
  4. Hi all, I just noticed the web UI is not working anymore; browsing times out. I have not made any changes, and I'm not sure if there have been any updates of the container recently. Otherwise the container seems to work, and I can access the CLI. I need some help on how to proceed with the troubleshooting.
     Command:
     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='unifi-controller' --net='host' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'UDP_PORT_3478'='3478' -e 'TCP_PORT_8080'='8080' -e 'TCP_PORT_8443'='8443' -e 'TCP_PORT_8880'='8880' -e 'TCP_PORT_8843'='8843' -e 'UDP_PORT_10001'='10001' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/appdata/unifi-controller':'/config':'rw' 'linuxserver/unifi-controller:LTS'
     b0695b8cfaeea76cb8ea11d7829e008d96e482a8ec1f925cc602c54a9f46f560
     The command finished successfully!
     Container log:
     [s6-finish] sending all processes the KILL signal and exiting.
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 10-adduser: executing...
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 10-adduser: executing...
     usermod: no changes
     -------------------------------------
     [linuxserver.io ASCII-art banner]
     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/donate/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 99-custom-scripts: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-scripts: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
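     As a first troubleshooting pass, a minimal sketch using only standard Docker commands (no claim that this pinpoints the cause):
       docker logs -f unifi-controller    # watch for Java/MongoDB errors after "[services.d] done."
       docker top unifi-controller        # check whether the controller's Java process is actually running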
  5. Wonderful, that's the solution! Thank you very much binhex!
  6. I have the same issue: the default Extractor and Label plugins do not stick as activated after a restart of the container. The above workaround did not work for me, but I'm unsure I actually got it right. @PieQuest, do you really mean the data folder, not the config folder? Anyway, has anyone found a solution to this? I moved from linuxserver's Deluge to this one as I had other problems with that one... the WebGUI only worked if I stuck to version 1.3.15 (I think it was). docker-install.txt docker-log.txt
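     For reference, Deluge records activated plugins in core.conf under the "enabled_plugins" key; a sketch of how to check whether the setting survives a restart (the appdata path is a placeholder for whatever your container maps to /config):
       # with the container stopped, see whether the plugins are still listed as enabled
       grep -A4 '"enabled_plugins"' /mnt/user/appdata/<deluge-container>/core.conf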
  7. I have no idea, sorry. Thanks, but I got a 1709 from work. No fix, or even a clue about what the cause could be. I did some tests with 1709 and older win-virtio drivers, but the problem persists. I have also upgraded to version 6.5.3 of UnRAID, but the problem persists. I should mention that I have not kept track of all possible tests that could be done combining different versions of Windows 10, virtio drivers and UnRAID to see if any specific combination would work for me. At the end of the day it was just easier for me to buy some new hardware (MB, RAM, CPU) and reuse an old PSU and chassis I had.
  8. It has never been like this before, it's really weird. I have now created a new Windows 10 1803 VM and tested how it behaves with and without GPU passthrough (using my GTX 1060 this time). And it's the same! I have attached what it looks like in Device Manager; you can clearly see that there are two different network adapters depending on whether I use the built-in unRAID VNC or GPU passthrough. Don't mind the warning for my GTX 1060, it's probably because I have not pointed the VM to the correct VBIOS rom file. The only changes from earlier installations are that I now install Windows 10 1803 and use virtio-win-0.1.141. I will see if I can find some time to do another Windows 10 1803 installation using the previous virtio drivers I used and, if possible, to find a Windows 10 1709 to install. Another thing would be to upgrade unRAID from 6.5.0 to 6.5.2.
  9. Yes, and I was able to RDP and ping the VM when I used VNC. But when I switched to GPU passthrough I could neither ping nor RDP to the machine. This made me believe that it was not booting correctly when using GPU passthrough. But what I discovered using a monitor connected to the GPU was that it actually was booting just fine. So I signed in to Windows and could see that the VM was using a dynamic IP address (10.220.0.226) assigned by my DHCP server, despite the fact that I had assigned it a static IP (10.220.0.13, which is not the same as unRAID) earlier when I connected via VNC. So I changed the network adapter back to my static configuration, and when I applied it I got a message that there was already another adapter using this configuration (this adapter was not visible), with the option to remove the "old" adapter's configuration and use it for this "new" adapter. Now I was able to ping and RDP to my VM when using GPU passthrough. So my conclusions are: every time I booted the machine and used VNC it got my static IP settings; every time I booted the machine and used GPU passthrough it got dynamic IP settings from my DHCP server, since Windows used another adapter. I did not realize this, so I tried to ping and RDP to the static IP even when I used GPU passthrough... but that was of course not working. How this could have happened I do not know; it would be nice to know, but I'm satisfied that it was solved.
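     For anyone who wants to rule this out up front, a quick sketch for checking that the VM definition keeps the same NIC (and MAC) regardless of the graphics setting; the domain name matches the log further down:
       virsh dumpxml DESKTOP-001 | grep -A4 '<interface'   # the MAC address should be identical for the VNC and passthrough configurations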
  10. OK, but this was not my case. I did assign the machine its own IP address, not the same as unRAID. I was able to ping and RDP to the machine when using the built-in unRAID VNC. That would not have worked if there was an IP address conflict. I was also able to surf the Internet and access my file shares on my unRAID host, which also would not have been possible with an IP address conflict. It seems to me the VM used one NIC when I configured it for VNC (my static IP settings) and another for GPU passthrough (this one was default and used DHCP). It may be important to know why this happened, but for now I'm just glad that it was solved. Thanks for your input! Hopefully this will be useful for someone (maybe myself :)) who experiences a similar issue in the future.
  11. I was able to borrow a monitor from work. Using this I was able to see that the VM actually booted just fine with GPU passthrough. So I logged on and looked up the IP configuration and could see that the network adapter was configured to use DHCP, despite the fact that I had configured it with static settings. So I changed it back to my static settings and then got a notification that there was another adapter using these settings, and I was given the option to remove the other adapter's settings and use them for this adapter. Now I was able to ping and connect to the machine using RDP. So, for some reason that is not clear to me, the machine used one network adapter when I was using the built-in unRAID VNC and another when I used GPU passthrough. It was working all the time; it just didn't occur to me that the machine might have a different IP address when using GPU passthrough.
  12. I upgraded my Windows 10 VM to version 1803 just recently, and during the necessary reboot the VM stopped working. It seems to start (at least unRAID seems to think so) but I cannot reach it anymore; it doesn't respond to ping or RDP (the Windows firewall is not the problem, it's switched off). If I switch from GPU passthrough to just VNC it works, and I can ping and RDP to it again. So I have tried to start over from a fresh install of Windows 10 1803, but it's the same. I have rebooted the unRAID host, but no change. Here is the log from one of many attempts to boot up the VM with GPU passthrough, and then doing a "force stop" 20 minutes later:
      2018-06-04 13:26:01.476+0000: starting up libvirt version: 4.0.0, qemu version: 2.11.1, hostname: nas-001
      LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name guest=DESKTOP-001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-DESKTOP-001/master-key.aes -machine pc-i440fx-2.11,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/e10cd70f-7874-b372-aca3-3291910b1870_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid e10cd70f-7874-b372-aca3-3291910b1870 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-DESKTOP-001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehc=pci.0,addr=0x5,romfile=/mnt/user/vms/gpu-roms/asus-gtx-1070.rom -device vfio-pci,host=01:00.0,id=hostdev1,bus=pci.0,addr=0x6 -device vfio-pci,host=02:00.1,id=hostdev2,bus=pci.0,addr=0x8 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 -msg timestamp=on
      2018-06-04 13:26:01.476+0000: Domain id=1 is tainted: high-privileges
      2018-06-04 13:26:01.476+0000: Domain id=1 is tainted: host-cpu
      2018-06-04T13:26:01.561925Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/0 (label charserial0)
      2018-06-04T13:46:27.662074Z qemu-system-x86_64: terminating on signal 15 from pid 10026 (/usr/sbin/libvirtd)
      2018-06-04 13:46:29.063+0000: shutting down, reason=destroyed
      Is there anyone else out there having the same experience? And maybe a solution? Thanks, Kristofer
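      As the posts above eventually showed, the VM was actually booting and had simply come up on a different adapter and IP. A rough sketch for checking that from the unRAID console (the MAC prefix shown is libvirt's default, not a value from this setup):
        virsh domiflist DESKTOP-001     # lists the VM's NIC and its MAC address
        ip neigh | grep -i 52:54:00     # after some traffic to/from the VM, look for that MAC and note which IP it answers on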
  13. Hi, how did it go? Did you manage to reinstall UnRAID and restore all your configuration successfully? If so, how did you do it, step by step?
  14. OK, instead of using the latest stable virtio drivers (0.1.102) I downloaded the newest build (0.1.118). Still could not find my drive. But then I noticed that unRAID 6.2 was released yesterday, so I updated (from 6.1.9) and voila, now I can find my drive again, still using the latest virtio drivers. I cannot figure out how I was able to install a Win 10 VM in the beginning... maybe some smaller update broke something... anyway, this is how I made it work again. Hope it may help someone.
  15. Hi, I realize that this thread is solved... and maybe I should open a new thread? But I have the exact same issue. My first Windows 10 Pro setup worked just fine. Then I deleted the VM and now I'm about to set up a new one. When I reach the step "Where do you want to install Windows?" there is no disk available, just as expected. So I follow the instructions at http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Installing_a_Windows_VM: "... You will need to load the following drivers in the following order: Balloon, NetKVM, vioserial, viostor (be sure to load this one last). For each driver that needs to be loaded, you will navigate to the driver folder, then the OS version, then the amd64 subfolder (never click to load the x86 folder). After each driver is loaded, you will need to click the Browse button again to load the next driver. After loading the viostor driver, your virtual disk will then appear to select for installation and you can continue installing Windows as normal...." But when I have completed the last driver load, still no disk shows up. I have tried a refresh, but still no disk. I have tried using both SeaBIOS and OVMF, still no disk. I also tried the different machine types, still no disk. What could be the cause of this?
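      A sketch of what could be checked from the unRAID console while the installer shows no disk; the vdisk path and VM name are placeholders, not values from this setup:
        qemu-img info /mnt/user/domains/<vm-name>/vdisk1.img    # does the vdisk exist and have a sane size/format?
        virsh dumpxml <vm-name> | grep -A5 '<disk'              # which bus the disk is attached to; viostor only exposes disks on the virtio bus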