
CJW

Posts posted by CJW

  1. I used to have GitLab-CE running fine.  Then the big update that changed the storage backend broke things.  I spent a while fiddling with that but eventually decided to say screw it and start fresh.  All my repos are copied to GitHub as well, so I didn't lose any data.

     

    So, starting with a completely fresh install of the GitLab-CE Docker container, v16.7.0, things seemed to work.  Got it installed, got root set up, added a regular user, and had that user create a blank repo and add an SSH key, which I validated with a simple ssh connection test.
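
    For reference, the checks I mean look like the following (the hostname, username, and repo name here are placeholders for my actual setup):

    # Confirm the key is accepted; GitLab answers "ssh -T" with a welcome message
    ssh -T git@gitlab.example.lan
    # Welcome to GitLab, @myuser!

    # The kind of operation that then fails with "remote: Internal API unreachable"
    git clone git@gitlab.example.lan:myuser/test-repo.git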

     

    I just can't push or clone anything.  Every attempt to interact with the GitLab instance fails with "remote: Internal API unreachable" on the command line.

     

    I don't know jack about how this software works.  It's too complicated.  But poking around in the logs I see things that make no sense on a clean install that shows no health issues in the Monitoring tab.  The following line shows up multiple times whenever I do a push/clone operation:

     

    [puma/puma_stderr.log]

    2023-12-27 23:17:46 +0000 Rack app ("GET /api/v4/internal/allowed" - (127.0.0.1)): #<ThreadError: deadlock; recursive locking> Error reached top of thread-pool: stack level too deep (SystemStackError)

     

    Anybody got any ideas?  I've about had it with the "improvements" to this container.  Something that used to "just work" has now become an aggravating time suck.

     

    Thanks.

  2. SOLVED:  Got some help from phobos on Discord.  I am using SQLite3, and somehow the sync database (/data/owncloud.db-shm) got chowned to root, so the nextcloud user couldn't perform db transactions.  A simple 'chown abc:abc' on the offending file fixed it.
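
    For anyone hitting the same thing, the fix amounted to roughly this (container name and paths match my setup; 'abc' is the user my container runs as, and the -wal/-shm files are SQLite's companion files):

    # see which of the SQLite files ended up owned by root
    docker exec nextcloud ls -l /data/owncloud.db /data/owncloud.db-wal /data/owncloud.db-shm
    # hand the offending file back to the container user
    docker exec nextcloud chown abc:abc /data/owncloud.db-shm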

     

    --------------------

    I am suddenly having problems with my NextCloud container v27.0.2.  The GUI keeps throwing Internal Server Errors, and the log is filled with variations on the following error:

     

    PDOException: SQLSTATE[HY000]: General error: 8 attempt to write a readonly database in /app/www/public/3rdparty/doctrine/dbal/src/Driver/PDO/Statement.php:101
    Stack trace:
    #0 /app/www/public/3rdparty/doctrine/dbal/src/Driver/PDO/Statement.php(101): PDOStatement->execute()
    #1 /app/www/public/3rdparty/doctrine/dbal/src/Connection.php(1155): Doctrine\DBAL\Driver\PDO\Statement->execute()
    #2 /app/www/public/lib/private/DB/Connection.php(295): Doctrine\DBAL\Connection->executeStatement()
    #3 /app/www/public/3rdparty/doctrine/dbal/src/Query/QueryBuilder.php(354): OC\DB\Connection->executeStatement()
    #4 /app/www/public/lib/private/DB/QueryBuilder/QueryBuilder.php(280): Doctrine\DBAL\Query\QueryBuilder->execute()
    #5 /app/www/public/lib/private/AppConfig.php(332): OC\DB\QueryBuilder\QueryBuilder->execute()
    #6 /app/www/public/lib/private/AllConfig.php(227): OC\AppConfig->deleteKey()
    #7 /app/www/public/lib/base.php(751): OC\AllConfig->deleteAppValue()
    #8 /app/www/public/lib/base.php(1180): OC::init()
    #9 /app/www/public/cron.php(43): require_once('...')
    #10 {main}

     

    The variations are different PHP source files hitting the readonly database, but the error is always the same.

     

    Can anybody tell me what I need to do to fix this?

     

    Thanks.

  3. I realize this may not exactly be an Unraid question, but I am asking it in case other Unraid users have dealt with something similar.

     

    I have about 8 docker containers on a custom network with static IPs on my local LAN.  The Unraid server and some of the containers are reachable (web browser, netcat, ping, &c.).  Others, though, are timing out in the browser and showing "Destination host unreachable" to netcat or lost packets to ping.  All of the containers are visible to a VM running on the Unraid server, so the routing only seems to be a problem from elsewhere on the LAN.
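
    The kinds of checks I am running from another machine on the LAN look like this (the IPs are placeholders for my static assignments):

    ping -c 3 192.168.1.50      # a container that responds normally
    ping -c 3 192.168.1.51      # a container that shows lost packets
    nc -vz 192.168.1.51 80      # connection attempt fails for the same container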

     

    Any ideas?  Thanks.

  4. I just installed :latest today on 6.8.0-rc5 (WireGuard installed but not configured or running) and am having problems similar to those above: no Web UI, and my UniFi Android app can't see the controller.  I have tried both Bridge and br0 with a dedicated IP address.  Same issues with :LTS.

     

    I tried netcat on the exposed ports.  There's something listening on 8080 and 8880, but it doesn't respond to HTTP.  Connection is refused on 8081, 8443, 8843, and 10001.
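
    Concretely, the probes looked like this (the controller IP is a placeholder for the dedicated address I assigned):

    nc -vz 192.168.1.200 8080              # connects, so something is listening
    curl -v http://192.168.1.200:8080/     # but no HTTP response ever comes back
    nc -vz 192.168.1.200 8443              # connection refused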

     

    The log has a couple of errors in it.  The first appears to be logging related, but the second is an IO error that might be more relevant.

     

    Quote

    2019-11-11 01:52:33,270 db-server ERROR Recovering from StringBuilderEncoder.encode('[2019-11-10T20:52:33,269] <db-server> WARN db - Unknown error, restarting mongo without logging to verify error
    ') error: org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to stream logs/server.log org.apache.logging.log4j.core.appender.AppenderLoggingException: Error writing to stream logs/server.log
    at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:263)
    at org.apache.logging.log4j.core.appender.FileManager.writeToDestination(FileManager.java:261)
    at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.writeToDestination(RollingFileManager.java:219)
    at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:293)
    [...]
    at com.ubnt.service.C.voidsuper.Ôo0000(Unknown Source)
    at com.ubnt.service.C.voidsuper.o00000(Unknown Source)
    at com.ubnt.service.C.voidsuper$1.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)
    Caused by: java.io.IOException: Input/output error
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:326)
    at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:261)
    ... 39 more

     

  5. 7 hours ago, Squid said:

    Curious if this script https://forums.unraid.net/topic/48707-additional-scripts-for-userscripts-plugin/?page=9&tab=comments#comment-683480 will return different values (run through user scripts)

    Just ran it, and got similar results (this is several days newer than the previous screenshot):

    Ampache Size: 1.4G Logs: 78.0kB
    CUPS Size: 732M Logs: 367.0B
    HandBrake Size: 252.7M Logs: 112.6kB
    mongo Size: 355M Logs: 20.7kB
    nextcloud Size: 186M Logs: 2.7kB
    openvpn-as Size: 209M Logs: 13.0kB
    pihole Size: 483M Logs: 2.7GB
    plex Size: 400M Logs: 4.1kB
    RDP-Calibre Size: 1.4G Logs: 3.1MB

    Now, the script did take quite a while to churn out the pihole entry...but that could just be counting log size.  I really suspect that pihole is the culprit, but I have no idea where the extra storage usage could be.

  6. On 11/9/2018 at 1:29 PM, bonienl said:

    Unraid version 6.6.5 has a function to calculate container sizes and associated usage.

     

    When the "writable" section becomes large, it is usually an indication that a path is set wrong and data is written inside the container.

    You can control the "log" file sizes by enabling log rotation in the Docker settings.

     

    I am still looking at the same problem.  I have a 40GB docker.img, and I am getting high usage warnings.  As of a couple of days ago, I was getting usage warnings of 33GB and I took the attached screenshot.  The sizes in the screenshot add up to no more than 10GB, even with nearly 5GB of log usage on pihole.  None of my writables are big.  There is no diagnostic information I have yet found that comes close to identifying 30+GB of docker usage, so I have no idea what to fix.

     

    I have repeatedly verified all my path mappings, I have run in-container disk usage analytics, and I have run docker container usage analytics.  I still have over 20GB of reported usage that is completely unaccounted for by any method available to me.

    [attached screenshot: dockers-20181110.png]

  7. I asked this earlier in this thread, but I will ask again:  How can one determine which container is causing the excessive usage?  I have used the "docker images" command and all the images together add up to less than 6GB, while unRAID is telling me that I am at 83% usage (33GB)  of my 40GB docker image.  I have bashed into each and every container I use and run "du -shx /" to get a usage report on the root (meaning docker.img loop) partition, and the sizes agree with the "docker images" output.
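
    For the record, the per-container check I ran is roughly equivalent to this (the container name is just one example from my list):

    # total image sizes as Docker reports them
    docker images
    # usage of one container's root filesystem, without crossing mount points (-x)
    docker exec -it plex du -shx /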

     

    So how exactly am I supposed to troubleshoot this 33GB image usage when all the diagnostics available to me are telling me that I am only using 6GB?

     

    Thanks.

  8. Sometime in the last day or two, my NextCloud instance has stopped working.  Desktop clients can't connect and the web UI comes up with a blank page.  The only feedback I am seeing is "500 Internal Server Error" through the desktop clients or when manually mounting the share via DAVFS to a Linux box (even with verbose turned on).  Nothing shows up in the main docker container log, and when I shell into the container I am finding nothing in the nginx error log and nothing in the php logs.
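
    For completeness, the manual mount I used to reproduce the 500 looks roughly like this (hostname and mount point are placeholders, and it assumes davfs2 is installed on the client):

    # mount the Nextcloud WebDAV endpoint from a Linux box
    sudo mount -t davfs https://cloud.example.lan/remote.php/webdav /mnt/nextcloud-dav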

     

    I use NextCloud a lot and this is quite troublesome.  I appreciate any help or any suggestion on where to look for additional error information.  Thanks!

  9. 12 hours ago, nuhll said:

    how?

     

    Was that question to me?

     

    You can change the target version in the "Edit" page (where you set up all your parameters).  The default container is "diginc/pi-hole:latest" (I think; I am not looking at the Unraid screen right now, but the ':latest' part is the tag we want to change).  To find the available tags, you go to the Docker Hub page for the container.  That's linked in an earlier post in this thread, and can be found off the Community Applications listing for any given container.  In this case, it is https://hub.docker.com/r/diginc/pi-hole/.

     

    So, go to that page and click the "Tags" nav link at the top.  Based on the change dates, I figured out that ':latest' must refer to ':debian_v3.3.1' (or newer) and that the version I was running prior to the last update had to be ':debian_v3.3'.

     

    The last step is to edit the container, change the ':latest' tag to ':debian_v3.3', and click Apply.  The container will reinstall based on the older version.
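
    In other words, the repository field on the Edit page goes from the first line below to the second (using the image name from the template):

    diginc/pi-hole:latest
    diginc/pi-hole:debian_v3.3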

     

    I guess I will follow newer releases and try them every once in a while to see if the problem is solved.  In the meantime, v3.3 seems to work well enough for me.

  10. On 7/19/2018 at 8:35 PM, CJW said:

    Also having problems after the last update, with pihole docker quitting shortly after startup.

     

    ETA:  During the brief period it is up, I have thrown several dig queries at it ('dig google.com @pihole') and they resolve just fine, so I am not sure what the "DNS resolution is currently unavailable" error means.

     

    I have manually downgraded to the debian_v3.3 tag and I no longer have sudden crashes.  I don't know what the problem is with later versions, but I have had the same problem (crashing after only a couple of minutes) with v3.3.1, v3.3.1-1, and prerelease.

  11. Also having problems after the last update, with pihole docker quitting shortly after startup.

     

    ETA:  During the brief period it is up, I have thrown several dig queries at it ('dig google.com @pihole') and they resolve just fine, so I am not sure what the "DNS resolution is currently unavailable" error means.

     

    Here are the logs:

     

    Quote

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] 01-resolver-resolv: applying...
    [fix-attrs.d] 01-resolver-resolv: exited 0.
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 20-start.sh: executing...
    ::: Starting docker specific setup for docker diginc/pi-hole
    + [[ admin == '' ]]
    + pihole -a -p admin admin
    [✓] New password set
    Using custom DNS servers: 192.168.0.4 & 192.168.0.5
    DNSMasq binding to custom interface: br0
    Added ENV to php:
    "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
    "ServerIP" => "192.168.0.166",
    "VIRTUAL_HOST" => "192.168.0.166",
    Using IPv4
    dnsmasq: syntax check OK.
    ::: Testing DNSmasq config: ::: Testing lighttpd config: Syntax OK
    ::: All config checks passed, starting ...
    ::: Docker start setup complete
    [✗] DNS resolution is currently unavailable
    [cont-init.d] 20-start.sh: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

     

  12. 10 minutes ago, Squid said:

    Could be runaway logging, downloads being saved into the image, etc

     

    Couple of entries in the docker FAQ about this topic

     

    Based on the earlier info in this thread, I did read the FAQ and look into those issues.  None of my dockers should be downloading anything that I am aware of (no torrent or TV clients or the like); I have not transcoded anything in Plex and I have my /transcode directory mapped from the array, anyway; and the reason I bashed into each running container was to look at the virtual filesystems for any signs of excess logging.

     

    My understanding (and please correct me if I am wrong) is that those logs would show up in the container filesystem.  So, if the container logs are not stored in "[CONTAINER]/var/log", how do I find out what may be doing excessive logging?
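
    Is the right place to look the per-container json logs on the host?  Something like this, assuming the default json-file logging driver:

    du -sh /var/lib/docker/containers/*/*-json.log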

  13. I just started getting warnings about high docker.img utilization and found this thread.  The problem is that I have no idea which container might be the culprit.  My docker.img is 24G.  I have 9 containers installed and "docker ps -s" shows usage of no more than 50MB (5GB virtual).  I have bashed into each running docker and run "du -h -d 1 -x /" and I am getting sizes that all agree with the virtual usage from the "docker ps -s" command.  I am getting warnings for about 75% usage, which would be about 18GB, so I have no idea where the other 10+GB of usage is coming from.  I have also run Container Cleanup and have, to the best of my knowledge, no orphaned containers.

     

    I would be happy to fix whatever container is causing the problems, but I have no idea at this point how to figure out where that problem is.  Can the docker.img file be mounted somehow to look into it and see what the actual usage is?
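
    What I have in mind is something like the following, though I have not tried it yet (the path is where I believe docker.img lives by default, and presumably the Docker service would need to be stopped first):

    mkdir -p /mnt/dockerimg
    mount -o loop,ro /mnt/user/system/docker/docker.img /mnt/dockerimg
    du -h -d 1 /mnt/dockerimg
    umount /mnt/dockerimg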

     

    Help, please!
