TheBrian

Members
  • Posts

    50
  • Joined

  • Last visited

Everything posted by TheBrian

  1. BepInEx requires that this directory and these libraries be removed. V+ is just one mod (of thousands) that experiences this problem, because the issue isn't with the mod but with BepInEx. This also isn't really a bug, but more of a hygiene/clean-up issue: BepInEx no longer requires these libraries, and if they exist, the game tries to load them but fails, simply because they've been deprecated and should no longer be used. I would trigger off of BepInEx being enabled, which would allow your container to work with any mod.
  2. If BepInEx is enabled, the unstripped_corlib directory should be removed. V+ requires BepInEx, so if this is the only mod you're supporting, then I suppose yes, you can trigger off of V+ being enabled.
  3. They're no longer needed. If they're seen, the game crashes.
  4. Delete the 'unstripped_corlib' directory.
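The cleanup described in the posts above could be sketched roughly like this. Everything here is an assumption for illustration: the directory layout, the BEPINEX flag, and the function name are mine, not taken from any particular container.

```shell
#!/bin/bash
# Hypothetical sketch: remove the deprecated unstripped_corlib directory
# when BepInEx is enabled. BepInEx no longer uses these libraries; if they
# exist, the game tries to load them and fails.
cleanup_corlib() {
    local server_dir="$1"
    if [ -d "${server_dir}/unstripped_corlib" ]; then
        rm -rf "${server_dir}/unstripped_corlib"
        echo "removed deprecated unstripped_corlib from ${server_dir}"
    fi
}

# Example: trigger off of BepInEx being enabled (works with any mod, not
# just V+). SERVER_DIR and BEPINEX are assumed environment variables.
if [ "${BEPINEX:-false}" = "true" ]; then
    cleanup_corlib "${SERVER_DIR:-/serverdata/serverfiles}"
fi
```

Keying the check on BepInEx (rather than a specific mod like V+) matches the suggestion in post 1: any modded container gets the cleanup.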
  5. I found my temporary root password at "/mnt/cache/appdata/<my container data dir>/config/initial_root_password"
  6. I figured it out. This isn't a CA or unRAID issue; it's OCI compatibility with DockerHub. Hopefully this can help someone else.

My debugging process:

- I grepped through the entire directory /usr/local/emhttp/plugins/dynamix.docker.manager/include for "not available", which is the HTML label shown in the Unraid Docker UI. To prove to myself that I found the right label, I added the "!" you see in the screenshot above.
- That label is inside a case/switch condition in DockerContainers.php: 0 = "up-to-date", 1 = "update ready", 2 = "rebuild ready", else "not available". My containers were returning a NULL value, which results in the default/else ("not available") message.
- The condition evaluates the variable $updateStatus, which is set from the $info['updated'] array element. That element is set by DockerClient.php, which is where most of my debugging occurred.

In /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, I added the following to the getRemoteVersionV2 function to assist in debugging:

    $file = '/tmp/foo.txt';
    file_put_contents($file, PHP_EOL . $image . '(' . $manifestURL . ')' . ' ', FILE_APPEND);

This sent me on a reverse-engineering journey and helped me create a little test script:

    #!/bin/bash
    repo="theoriginalbrian/phvalheim-server"
    TOKEN=$(curl --silent "https://auth.docker.io/token?scope=repository:$repo:pull&service=registry.docker.io" | jq -r '.token')
    curl --header "Accept: application/vnd.docker.distribution.manifest.v2+json" --header "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/$repo/manifests/latest"

The output of this little test script:

    {"errors":[{"code":"MANIFEST_UNKNOWN","message":"OCI manifest found, but accept header does not support OCI manifests"}]}

A bit of research landed me in the DockerHub API documentation. In short, OCI images do not contain the SHA256 image digest (at least in the location DockerHub expects). This led me to look at how I'm building my images.

My dev system is just a simple Rocky 8.6 (EL) VM, which runs podman. By default, podman builds in the OCI format. Simply passing "--format=docker" to my podman build command solved the issue.

Results:

- After clicking "check for updates":
- After clicking "apply update":
- After clicking "apply update" and "check for updates":

TL;DR: add "--format=docker" to your docker/podman build command.

-Brian
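As a rough sketch of the failure mode above: the Unraid update check requests the Docker v2 manifest media type, so an image pushed with an OCI manifest comes back MANIFEST_UNKNOWN. The helper function and the sample JSON below are mine for illustration, not Unraid's code or a real registry response.

```shell
#!/bin/bash
# Hypothetical helper: classify a manifest by the mediaType string in its
# JSON body. The sample below is hand-written, not a real registry reply.
manifest_format() {
    case "$1" in
        *"application/vnd.oci.image.manifest.v1+json"*)           echo "oci" ;;
        *"application/vnd.docker.distribution.manifest.v2+json"*) echo "docker" ;;
        *)                                                        echo "unknown" ;;
    esac
}

sample='{"mediaType":"application/vnd.oci.image.manifest.v1+json"}'
if [ "$(manifest_format "$sample")" = "oci" ]; then
    # The fix from the post: force the Docker manifest format at build time,
    # e.g.  podman build --format=docker -t theoriginalbrian/phvalheim-server .
    echo "OCI manifest: rebuild with --format=docker"
fi
```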
  7. You're likely having the same issue. I use other containers from CA that I didn't author and also have this issue.
  8. I'm wondering if someone can help with a published Unraid+Docker app problem. I've built 2 containers and published them to CA. One of the containers (phvalheim-server) shows updates "not available". The other (intel-gpu-telegraf) works fine. The .xml templates are very similar, and I can't seem to figure out how to fix this. I'm getting complaints from people using Unraid that this new container (phvalheim-server) is showing "not available" on everyone's installation, including mine and two of my test systems. The only search results I find are related to all containers having this issue, which is DNS-related or a bug prior to 6.9. I'm on 6.11.5, and all versions since creating this container have had this issue.

Template for phvalheim-server: https://github.com/brianmiller/docker-templates/blob/master/phvalheim-server/phvalheim-server.xml
Template for intel-gpu-telegraf: https://github.com/brianmiller/docker-templates/blob/master/intel-gpu-telegraf/intel-gpu-telegraf.xml

Here is a screenshot showing the issue. My other container works fine. Any help is appreciated. -Brian
  9. I'm seeing the same issue with one of my containers that I built. I have another that has a similar unraid docker template that doesn't have this issue.
  10. Bummer... I just upgraded to 6.11.2 last night and attempted a new disk install... same issue: unsupported partition layout. While it's easy to say downgrade, for those of us with a mature system it's not so easy to just shut down. I'll be waiting for 6.11.3.
  11. Same question here... we just installed it in two locations and love it, but noticed the UUD dashboard is waaaay behind. Should we continue using the container? It's pretty sweet...
  12. Understood. V+ is a joke of assumptions and dependencies. BepInEx is critical, though. I'll say, I'm nearing punting on everyone's containers and spinning my own because of this. I agree with you: the containers should allow for modding, but not have assumptions built in.
  13. I did this. Your container downloads and installs BepInEx, but infinite-loops and never starts, with the error:

    symbol lookup error: /serverdata/serverfiles/doorstop_libs/libdoorstop_x64.so: undefined symbol: dlopen

This is a vanilla/fresh container with only BepInEx enabled. It never starts.
  14. For those that are having issues with this container and Valheim. I punted and gave this container a shot: https://hub.docker.com/r/lloesche/valheim-server. All is well now. This container runs all mods without issues. I'm not sure why or how ICH777's container behaves differently, but it certainly does. I suspect it's the BepInEx pack that is being automatically downloaded from Thunderstore (5.4.1901). The container I linked above that works says "BepInEx.Preloader 5.4.19.0" in the logs.
  15. The latest issue appears to be that BepInEx 5.4.1901 simply doesn't run on Linux. All 19 containers are in an infinite loop with this error:

    symbol lookup error: /serverdata/serverfiles/doorstop_libs/libdoorstop_x64.so: undefined symbol: dlopen

This version of BepInEx works fine on Windows clients and servers, though.
  16. I'll give it a shot, but the link you provided goes nowhere. Some sort of AWS authentication attempt via Github... I've been testing with this: https://valheim.thunderstore.io/package/denikson/BepInExPack_Valheim/ And the latest release for BepInEx's github which is very much behind...
  17. You're correct. My start scripts were forcing V+ to be installed again. Thank you. I guess we wait. -Brian
  18. Bummer... this isn't a ValheimPlus issue either. The problem exists with or without ValheimPlus enabled.
  19. After the Valheim update and after updating the container, all of my worlds are still down. We have 19 containers. All are throwing a segfault after the game and container were updated:

    Connecting anonymously to Steam Public...OK
    Waiting for client config...OK
    Waiting for user info...OK
    Update state (0x3) reconfiguring, progress: 0.00 (0 / 0)
    Update state (0x61) downloading, progress: 9.97 (16872508 / 169315037)
    Update state (0x61) downloading, progress: 61.41 (103971985 / 169315037)
    Update state (0x81) verifying update, progress: 95.44 (161595153 / 169315037)
    Success! App '896660' fully installed.
    ---Prepare Server---
    ---Found old save directory... Moving saves directory to new location...!---
    Waiting for user info...OK
    Update state (0x3) reconfiguring, progress: 0.00 (0 / 0)
    Update state (0x61) downloading, progress: 9.97 (16872508 / 169315037)
    Update state (0x61) downloading, progress: 61.41 (103971985 / 169315037)
    Update state (0x81) verifying update, progress: 95.44 (161595153 / 169315037)
    Success! App '896660' fully installed.
    ---Prepare Server---
    ---Found old save directory... Moving saves directory to new location...!---
    ---ValheimPlus enabled!---
    ---ValheimPlus Version Check---
    ---ValheimPlus not found, downloading and installing v0.9.9.8...---
    ---Successfully downloaded ValheimPlus v0.9.9.8---

Then it loops and keeps throwing:

    ---Update Check for Valheim enabled, running automatically every 60 minutes.---
    /opt/scripts/start-server.sh: line 254: 77 Segmentation fault ${SERVER_DIR}/valheim_server.x86_64 -name "${SRV_NAME}" -port ${GAME_PORT} -world "${WORLD_NAME}" -password "${SRV_PWD}" -public ${PUBLIC} ${GAME_PARAMS} > /dev/null

After the first attempt, the worlds directory has been renamed to worlds_local, which tells me the container "patch" is working, but the game is still hosed. -Brian
  20. Same issue for me. It used to work fine.

    Python version: 3.9.7 (default, Nov 24 2021, 21:15:59) [GCC 10.3.1 20211027]
    Python main interpreter initialized at 0x14ca7d482bf0
    python threads support enabled
    your server socket listen backlog is limited to 100 connections
    your mercy for graceful operations on workers is 60 seconds
    mapped 145840 bytes (142 KB) for 1 cores
    *** Operational MODE: single process ***
    running "exec:/usr/bin/python3 ./manage.py collectstatic --noinput" (pre app)...
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    0 static files copied to '/app/netbox/netbox/static', 240 unmodified.
    running "exec:/usr/bin/python3 ./manage.py remove_stale_contenttypes --no-input" (pre app)...
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    0 static files copied to '/app/netbox/netbox/static', 240 unmodified.
    running "exec:/usr/bin/python3 ./manage.py remove_stale_contenttypes --no-input" (pre app)...
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    running "exec:/usr/bin/python3 ./manage.py clearsessions" (pre app)...
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    running "exec:/usr/bin/python3 ./manage.py invalidate all" (pre app)...
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    Unknown command: 'invalidate'
    Type 'manage.py help' for usage.
    command "/usr/bin/python3 ./manage.py invalidate all" exited with non-zero code: 1
    Wed Dec 15 14:52:40 2021 - FATAL hook failed, destroying instance
    SIGINT/SIGQUIT received...killing workers...
    /app/netbox/netbox/./netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/./netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x14ca7d482bf0 pid: 295 (default app)
    *** uWSGI is running in multiple interpreter mode ***
    spawned uWSGI master process (pid: 295)
    spawned uWSGI worker 1 (pid: 314, cores: 1)
    [uwsgi-daemons] spawning "/usr/bin/python3 ./manage.py rqworker" (uid: 99 gid: 100)
    /app/netbox/netbox/netbox/settings.py:57: UserWarning: The CACHE_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
    /app/netbox/netbox/netbox/settings.py:61: UserWarning: The RELEASE_CHECK_TIMEOUT configuration parameter was removed in v3.0.0 and no longer has any effect.
      warnings.warn(
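Reading the log above, the fatal line is the pre-app hook running manage.py invalidate all: that command came from django-cacheops, which NetBox dropped in v3.0, so the hook now fails and uwsgi destroys the instance ("FATAL hook failed"). If the container's uwsgi config still carries that hook, removing it should let startup proceed. A sketch of what that might look like; the file name and the exact surrounding hook lines are assumptions, not the container's actual config:

```ini
; uwsgi config (path/name assumed) -- pre-app hooks run before the app starts
hook-pre-app = exec:/usr/bin/python3 ./manage.py collectstatic --noinput
hook-pre-app = exec:/usr/bin/python3 ./manage.py remove_stale_contenttypes --no-input
hook-pre-app = exec:/usr/bin/python3 ./manage.py clearsessions
; Remove on NetBox >= 3.0: the cacheops 'invalidate' command no longer exists
; hook-pre-app = exec:/usr/bin/python3 ./manage.py invalidate all
```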
  21. Works for me Thanks for the work you put into this. Don't get me wrong, I absolutely love containers, but I have to say, I'm old and enjoy dedicated Linux VMs--it's so much faster for getting something new up and running or even modifying an existing system, but prone to so much error. I appreciate you keeping the main .jar in the config directory, allowing people to make persistent changes--unless you're doing some sort of evil CRC/MD5 checking and replacing anything we change. People like you make containerization worth it. Nicely done, I hope you keep it up! -Brian
  22. This is fantastic to hear, much less work for me! I suppose my kids are a bit impatient. What do you think of a config flag to let the end-user control when the .jar is updated? The Valheim container does it quite well, but I'm not sure of the logistics regarding the posting of the Minecraft .jar. It looks like the posted URL is likely dynamic and it may be a pain to scrape the downloads page with confidence. -Brian