arifer

Members · 18 posts

  1. Adding the following snippet under the "up" and "down" lines in /usr/local/emhttp/plugins/compose.manager/scripts/compose.sh settled the problem @Agent531C raised about updating the web UI label. I get the same result whether I spin a stack down, edit it, and spin it back up, or edit a stack and then recreate it with the "compose up" button in the web GUI.

     docker compose -f "$2" -p "$3" ps -a | awk '{if (NR!=1) {printf("%s.\"%s\"", sep, $1); sep=", "}}' | xargs -0 -I {} jq 'del({})' "$DOCKER_JSON" > "$DOCKER_JSON.tmp" && mv "$DOCKER_JSON.tmp" "$DOCKER_JSON"
     docker compose -f "$2" -p "$3" up -d 2>&1

     The following snippet is a cheap and dirty way of forcing unRAID to delete cached icons and retrieve them again from the `net.unraid.docker.icon` label. I placed it under the same "up" and "down" lines in the file above.

     docker compose -f "$2" -p "$3" ps -a | awk '{if (NR!=1) {print $1}}' | xargs -I {} find "$DOCKER_IMAGES" "$UNRAID_IMAGES" -name {}.png -delete

     The variables used in the snippets are declared at the top of the file:

     DOCKER_MANAGER=/usr/local/emhttp/state/plugins/dynamix.docker.manager
     DOCKER_JSON=$DOCKER_MANAGER/docker.json
     DOCKER_IMAGES=$DOCKER_MANAGER/images
     UNRAID_IMAGES=/var/lib/docker/unraid/images
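     To make the first pipeline easier to follow: for a stack whose containers are named foo and bar (hypothetical names for illustration), the awk stage builds a jq path list, so the command that ultimately runs is equivalent to:

     # Drop the stack's entries from docker.json; the temp file keeps jq from
     # reading a file the shell redirection has already truncated.
     jq 'del(."foo", ."bar")' "$DOCKER_JSON" > "$DOCKER_JSON.tmp" && mv "$DOCKER_JSON.tmp" "$DOCKER_JSON"

     unRAID then rebuilds the deleted entries on the next page load once the stack is back up.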
  2. The problem is indeed in the OS's docker manager plugin. It stores all the information in RAM and only supports changes through the XML template file. A quick and easy fix would be to delete `/usr/local/emhttp/state/plugins/dynamix.docker.manager/docker.json` and have it rebuilt on reload every time you change the compose file, as shown below. Another approach is to include a small script within this plugin that removes the service entries from the same file every time `docker-compose down` is executed: use `docker compose ps -a` to get the names of all the containers in a stack, pipe those into `jq 'del(.CONTAINER_NAME)' /usr/local/emhttp/state/plugins/dynamix.docker.manager/docker.json` to remove the entries, and then let the server rebuild the file once the stack is spun up again.
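     The first approach needs nothing more than removing the cached file; the path is the one quoted above:

     # Delete the cached container state; unRAID rebuilds it on the next reload
     rm -f /usr/local/emhttp/state/plugins/dynamix.docker.manager/docker.json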
  3. I can't believe that I never bothered to check if there was a repository for this! I'll put up a PR for it. Thank you ^^
  4. I've put together a small patch to dockerMan to get icons from compose labels working.

     Solution: `/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php` on unRAID needs to be changed. The catch is that changes to this file do not survive reboots, so it has to be re-patched on every boot. You can always use the User Scripts plugin to edit the file on reboot, at least until LimeTech fixes it fully or improves on it.

     Refresh your Docker page and the icon should now show up. If it does not, you may need to clear your browser cache. One limitation of this patch is that the icon does not change if you change the value of the "net.unraid.docker.icon" label in the compose file. To force an update, you have to delete the previous icon files. Supposing the service name is "foo-service", two icons will have been created:

     /var/lib/docker/unraid/images/foo-service-icon.png
     /usr/local/emhttp/state/plugins/dynamix.docker.manager/images/foo-service-icon.png

     Delete both files and refresh your page to force unRAID to pull the new icons.
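     To force the refresh from a shell, using the two paths listed above ("foo-service" being a placeholder for your own service name):

     # Delete both cached icons, then refresh the Docker page in the browser
     rm -f /var/lib/docker/unraid/images/foo-service-icon.png \
           /usr/local/emhttp/state/plugins/dynamix.docker.manager/images/foo-service-icon.png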
  5. Ah I see, I never knew about that. Thank you for enlightening me! I find that all my stacks are written to a file titled `name` in lower case instead of being capitalised. This is also explicitly expressed in the `exec.php` file.

     Adding a stack:

     file_put_contents("$folder/name",$stackName);

     Changing the name of a stack:

     file_put_contents("$compose_root/$script/name",trim($newName));
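     A quick way to see this on a live system is to list the recorded name for each stack; a sketch, assuming $compose_root points at the plugin's stack folders as the variable in `exec.php` suggests:

     # Print each stack folder alongside the name stored in its "name" file
     for dir in "$compose_root"/*/; do
         echo "$dir -> $(cat "$dir/name")"
     done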
  6. Hi there, I'm loving this plugin and can't wait for the next update with icon support. I looked through the shell scripts in the `/usr/local/emhttp/plugins/compose.manager/event/` directory and found a few oddities that may need some looking at. The first is the hard-coding of the project name as "$dir/Name", which will not match any stack unless it happens to be named "Name". The second is the nested ifs, which can be flattened and are easier to read that way. The final oddity is that none of these scripts are actually called anywhere, so I'm guessing the auto-start feature is not in full force yet. In the spoiler below, I'm providing a diff of the changes to the shell scripts that I think address the first two problems; a sketch of the idea follows. Please feel free to use them!
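     The actual diff is in the spoiler, but the gist of the first fix might look something like this (a sketch only; the stack folder path is assumed, not taken from the plugin):

     # Read the real project name from each stack folder's "name" file
     # instead of hard-coding the literal "Name".
     for dir in /boot/config/plugins/compose.manager/projects/*/; do
         [ -f "$dir/name" ] || continue    # skip folders without a name file
         project=$(cat "$dir/name")
         echo "Would handle stack: $project"
     done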
  7. I only have a Home Assistant virtual machine running on my server, with unRAID v6.9.2 and libvirt v6.5.0. It's been working fine for the most part. However, after waking up this morning, I found that the virtual machine had disappeared. I checked the webGUI and saw a big yellow banner in the Virtual Machines tab that said "Libvirt Service failed to start". I know I should have taken a screenshot of the page before proceeding, but I forgot to capture it. I turned virtual machine support off and on in the settings menu, but nothing changed. I then checked the logs and they were clean, no error messages whatsoever. The last approach I took was to reboot the whole system. I rebooted, started up the array, re-enabled VM support in the menu, and now it's working fine. Checking the logs, there is one entry that is suspiciously foreboding:

     2021-10-01 02:21:44.048+0000: 12337: info : libvirt version: 6.5.0
     2021-10-01 02:21:44.048+0000: 12337: info : hostname: Babel
     2021-10-01 02:21:44.048+0000: 12337: warning : networkNetworkObjTaint:5292 : Network name='default' uuid=a67b7b9b-bef4-488b-b243-c9e5f391a3b1 is tainted: hook-script
     2021-10-01 02:21:44.923+0000: 12337: warning : qemuDomainObjTaint:6075 : Domain id=1 name='HomeAssistant' uuid=78b2f542-3fd7-039d-f5a5-7866d93e49d6 is tainted: high-privileges
     2021-10-01 02:21:44.923+0000: 12337: warning : qemuDomainObjTaint:6075 : Domain id=1 name='HomeAssistant' uuid=78b2f542-3fd7-039d-f5a5-7866d93e49d6 is tainted: host-cpu
     2021-10-01 02:21:45.764+0000: 12321: error : virNetSocketReadWire:1826 : End of file while reading data: Input/output error

     The last line raises an EOF error. My first guess is that it might have something to do with my libvirt.img file being hosted on my cache drive, but then I would expect a similar issue with my docker containers, which run from the same cache drive, and that is not the case! I'm posting this here in the hope that someone out there may shed some light on this issue.
  8. After it initialises, the issue should sort itself out. I believe I faced a permissions issue in my appdata share; I fixed it through the Docker New Permissions tool, and my reboots from then on had no hiccups.

     Sent from my XQ-AS72 using Tapatalk
  9. I faced this issue once and managed to fix it by turning off Docker and VMs in the settings before starting up the array. I suggest you stop the array and try that to see if it works.

     Sent from my XQ-AS72 using Tapatalk
  10. Thank you for the hard work. The connection is finally going through, and I can see that I'm online and accessible from the internet now. Actually, the My Servers page seems to retrieve the connection information a lot faster than it used to! This is great! I'll mark this thread as closed.
  11. I got back and saw this in the debug logs for `unraid-api`:

      ws:closed 1006 closed automatically, restarting
      ws:closed 1006 closed automatically, restarting
      ☁️ RELAY:DISCONNECTED
      ws:closed 1006 closed automatically, restarting
      ws:closed 1006 closed automatically, restarting
      ws:closed 1006 closed automatically, restarting
      ws:closed 1006 closed automatically, restarting
      ws:closed 1006 closed automatically, restarting
      ☁️ RELAY:DISCONNECTED
      ⌨️ INTERNAL:CONNECTED
      Error: WebSocket is not open: readyState 0 (CONNECTING)
          at WebSocket2.send (/usr/local/bin/unraid-api/node_modules/graceful-ws/dist/graceful-ws.js:1649:17)
          at GracefulWebSocket.send (/usr/local/bin/unraid-api/node_modules/graceful-ws/dist/graceful-ws.js:2458:39)
          at GracefulWebSocket.<anonymous> (/usr/local/bin/unraid-api/dist/index.js:32254:55)
          at GracefulWebSocket.emit (events.js:315:20)
          at GracefulWebSocket.EventEmitter.emit (domain.js:467:12)
          at WebSocket2.<anonymous> (/usr/local/bin/unraid-api/node_modules/graceful-ws/dist/graceful-ws.js:2479:12)
          at WebSocket2.onOpen (/usr/local/bin/unraid-api/node_modules/graceful-ws/dist/graceful-ws.js:1109:20)
          at WebSocket2.emit (events.js:315:20)
          at WebSocket2.EventEmitter.emit (domain.js:467:12)
          at WebSocket2.setSocket (/usr/local/bin/unraid-api/node_modules/graceful-ws/dist/graceful-ws.js:1566:14)

      Not sure if this helps, but I hope it sheds some light on why the connection keeps failing to stay up.
  12. Actually, it seems that it's starting to fail to connect again. I'm getting a flurry of reconnection errors in the debug log:

      libvirt: No changes detected.
      libvirt: No changes detected.
      ☁️ RELAY:Too Many Requests:RECONNECTING:NOW
      ☁️ RELAY:Too Many Requests:RECONNECTING:NOW
      ☁️ RELAY:429:Too Many Requests:RECONNECTING:30_000
      ...
      libvirt: No changes detected.
      libvirt: No changes detected.
      ☁️ RELAY:Too Many Requests:RECONNECTING:NOW
      ☁️ RELAY:Too Many Requests:RECONNECTING:NOW
      libvirt: No changes detected.
      ☁️ RELAY:Too Many Requests:RECONNECTING:NOW
  13. I updated to v2021.09.15.1853 last night. While I was out today, I noticed that I could not access the server from the internet. When I got home, it first raised an error that GraphQL was offline, so I tried `unraid-api restart` to get it running, since I had encountered this issue before. However, it kept failing to connect to the mothership, no matter how long I waited or how many times I tried. I referenced this post and re-installed the plugin twice, but that made no difference either. Once all the usual methods had failed, I tried to debug the issue. I ran `unraid-api --debug restart` and, lo and behold, a connection to the mothership was made and sustained. Right now I'm running that in a tmux session (see the sketch below) so that it stays online for as long as the server does. It also gives me a good log of what's happening in case something fails.
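      For anyone wanting to reproduce the workaround, a minimal sketch of the tmux setup (the session name unraid-api is my own choice, nothing the plugin requires):

      # Start a detached session that keeps the debug instance running
      tmux new-session -d -s unraid-api 'unraid-api --debug restart'

      # Re-attach later to read the debug output
      tmux attach -t unraid-api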
  14. @iker 's solution worked for me on SWAG. I also had to include the following lines in the configuration.yaml file for the Hassio VM:

      http:
        use_x_forwarded_for: true
        trusted_proxies:
          - XXX.XXX.XXX.XXX  # IP address of my unRAID box

      If you're having trouble finding the right IP address, try accessing the web app through the site you set up the CNAME for, like homeassistant.domain.host, then search your Home Assistant logs under "Configuration >> Logs" for the corresponding entry; the IP address it shows is what you need to insert into the code block above.
  15. @bigbangus You need to add the following lines to the configuration.yaml file:

      http:
        use_x_forwarded_for: true
        trusted_proxies:
          - 172.XX.XX.XX  # the SWAG docker IP address

      You can always find the SWAG docker IP address in the port mappings column of the Docker tab.
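      If the web GUI isn't handy, the same address can be pulled from the command line; a sketch assuming the container is named swag:

      # Print the container's IP address on each network it is attached to
      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' swag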