  • Docker containers want to update, but don't in the end?


    urbanracer34
    • Minor

    So I appear to be having a problem with Docker containers, specifically LinuxServer ones, but they told me it is an unRAID issue and "not just us." I chatted with someone from LinuxServer in private, and they said it is an issue with "Update all containers."

     

    The containers will say there is an update ready, but applying the update does not do anything.

     

    I tried manually updating a container; same result.

    gibson-diagnostics-20190829-1841.zip



    User Feedback

    Recommended Comments



    Yeah, not just LSIO, though it would look that way because they have so many containers that people use.

     

    Pihole is another that doesn't update on mine.  The rest are ok.

    4 hours ago, rsuplido said:

    Anyone know if it was caused by the Community Applications plugin update on 8/27?

    Quite impossible actually.


    This looks to be an issue on Docker's side (but I could be completely out to lunch -> wouldn't be the first time).

     

    It doesn't appear that they are reporting the proper SHA for the manifests remotely:

     

    Manifest URL: https://registry-1.docker.io/v2/linuxserver/radarr/manifests/latest
    Token URL: https://auth.docker.io/token?service=registry.docker.io&scope=repository%3Alinuxserver%2Fradarr%3Apull
    HTTP/1.1 200 OK
    Content-Length: 1788
    Content-Type: application/vnd.docker.distribution.manifest.v2+json
    Docker-Content-Digest: sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327
    Docker-Distribution-Api-Version: registry/2.0
    Etag: "sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327"
    Date: Sat, 31 Aug 2019 02:17:43 GMT
    Strict-Transport-Security: max-age=31536000
    
    
    Remote Digest: sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327
    Update status: Image='linuxserver/radarr:latest', Local='sha256:eec8bb0287a5cb573eb5a14a5c2e1924ad7be66f89ea5e2549440defdafba02b', Remote='sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327', Status='false'

    Now if I pull the image (i.e. delete the container and reload it), in theory the digest SHA should be

    4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327

    but it's not, if I'm reading everything correctly:

    docker images --digests --no-trunc
    REPOSITORY                    TAG                 DIGEST                                                                    IMAGE ID                                                                  CREATED             SIZE
    linuxserver/radarr            latest              sha256:eec8bb0287a5cb573eb5a14a5c2e1924ad7be66f89ea5e2549440defdafba02b   sha256:cae2333bf55d51bbf508c333bdbbc70978d41baf0a23207de8bc3251afb73e6d   3 days ago          550MB
    

     

    Therefore an update shows as being available...
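    The comparison that produces that false positive can be simulated offline with the two digests quoted above (no docker daemon or network needed; the variable names below are just illustrative, not taken from the actual plugin code):

    ```shell
    # Simulate the update check using the two digests from this post.
    # LOCAL is what `docker images --digests` reports (the manifest-list digest);
    # REMOTE is the Docker-Content-Digest returned for the single manifest.
    LOCAL='sha256:eec8bb0287a5cb573eb5a14a5c2e1924ad7be66f89ea5e2549440defdafba02b'
    REMOTE='sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327'
    if [ "$LOCAL" = "$REMOTE" ]; then
      STATUS='true'    # up-to-date
    else
      STATUS='false'   # the GUI flags an update, even right after a fresh pull
    fi
    echo "Update status: Status='$STATUS'"   # prints: Update status: Status='false'
    ```

    Since the two values can never be equal (one is a list digest, one a manifest digest), the check reports an update forever.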

     

     

    IIRC, this issue pops up every year or so with Docker.


    I have the same issue with all of my containers now, LSIO and others. I also have issues tracing (tracert) hub.docker.com, which is hosted on amazonaws. Their Twitter support says it is all fine on their end, so I'm not sure what to make of it. I also cannot update Nextcloud through the web interface, as the package is not downloading (again from amazonaws, as far as I can tell). A VPN to different regions does not help either.

    2 hours ago, Seige said:

    I also have issues tracing (tracert) hub.docker.com which is hosted by amazonaws.

    At what point is your traceroute dying? I am able to traceroute without an issue, and it's only one hop from my firewall to Amazon. I am on the east coast of the United States; not sure where you're located. Sounds like it could be a regional issue.


    I am in Europe. It seems more reliable today; the timeout now happens once I reach AWS in Seattle, which I presume is their firewall.


    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that has always been the case, or if it's the result of some recent change to the Docker Hub API. I'm also not sure whether it's intentional or a bug.

     

    This causes an issue since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is

                    /**
                     * Step 4: Get Docker-Content-Digest header from manifest file
                     */
                    $ch = getCurlHandle($manifestURL, 'HEAD');
                    curl_setopt( $ch, CURLOPT_HTTPHEADER, [
                            'Accept: application/vnd.docker.distribution.manifest.v2+json',
                            'Authorization: Bearer ' . $token
                    ]);

    which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from docker hub, which of course do not match.

     

    Changing the Accept header to the list MIME type ('application/vnd.docker.distribution.manifest.list.v2+json') stops the false update reports for these containers. However, it then reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest when no list is available, and the v1 manifest digest doesn't match the v2 manifest digest.

     

    If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub will fall back correctly to the v2 manifest, and the digests now match the local output both for containers using plain manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.

     

                    /**
                     * Step 4: Get Docker-Content-Digest header from manifest file
                     */
                    $ch = getCurlHandle($manifestURL, 'HEAD');
                    curl_setopt( $ch, CURLOPT_HTTPHEADER, [
                            'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
                            'Authorization: Bearer ' . $token
                    ]);
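    For anyone wanting to double-check the corrected header by hand, here is a hedged sketch. It only prints the curl invocation (a dry run); $TOKEN is a placeholder for a bearer token you would fetch from the token URL shown earlier in the thread:

    ```shell
    # Build (but do not run) the HEAD request with the corrected Accept header.
    MANIFEST_URL='https://registry-1.docker.io/v2/linuxserver/radarr/manifests/latest'
    ACCEPT='application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json'
    CMD="curl -sI -H 'Accept: $ACCEPT' -H 'Authorization: Bearer \$TOKEN' '$MANIFEST_URL'"
    echo "$CMD"
    # Running the printed command with a real token should return a
    # Docker-Content-Digest that matches the local `docker images --digests` output.
    ```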

     

    8 hours ago, bluemonster said:

    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that has always been the case, or if it's the result of some recent change to the Docker Hub API. I'm also not sure whether it's intentional or a bug.

    ......

    I made the change suggested above and my containers are now updating as expected. Thanks!


    After making the change and doing a [Check for Updates], all containers correctly report 'up-to-date'.

    10 hours ago, bluemonster said:

    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case, or is the result of some recent change on the Docker hub API. Also not sure if it's intentional or a bug.

     

    This causes an issue since in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is

    
                    /**
                     * Step 4: Get Docker-Content-Digest header from manifest file
                     */
                    $ch = getCurlHandle($manifestURL, 'HEAD');
                    curl_setopt( $ch, CURLOPT_HTTPHEADER, [
                            'Accept: application/vnd.docker.distribution.manifest.v2+json',
                            'Authorization: Bearer ' . $token
                    ]);

    which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest as reported by the local docker commands to the individual manifest digests as retrieved from docker hub, which of course do not match.

     

    Changing the Accept header to the list mime type: 'application/vnd.docker.distribution.manifest.list.v2+json' causes it to no longer consistently report updates available for these containers. Doing this however reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest if the list is not available and the digest for the v1 manifest doesn't match the digest for the v2 manifest.

     

    If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json' docker hub will fallback correctly to the v2 manifest, and the digests now match the local output for both containers using straight manifests and those using manifest lists. Until docker hub inevitably makes another change.

     

    
                    /**
                     * Step 4: Get Docker-Content-Digest header from manifest file
                     */
                    $ch = getCurlHandle($manifestURL, 'HEAD');
                    curl_setopt( $ch, CURLOPT_HTTPHEADER, [
                            'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
                            'Authorization: Bearer ' . $token
                    ]);

     

    Can you do a PR on https://github.com/limetech/webgui/tree/master/plugins so the devs can see it and make the change for 6.8?

    10 hours ago, bluemonster said:

    If the Accept header is instead changed to

    @bluemonster excellent find. Tested it, and it's working fine.

    I made a PR with your correction; it will be available in the next version. Thanks!


    Great to see a potential fix. I was getting the same issue, but thought it was my change in DNS settings :)


    Containers showing an update as available before the fix:

    pihole/pihole
    raymondmm/tasmoadmin
    linuxserver/letsencrypt
    linuxserver/oscam
    library/influxdb
    linuxserver/duckdns
    homeassistant/home-assistant
    library/telegraf
    linuxserver/sonarr
    linuxserver/transmission

     

    Containers showing an update as available after the fix:
    spants/mqtt
    emby/embyserver
    grafana/grafana
    jonmaddox/harmony-api
    linuxserver/code-server


    Thank you for the input. This seems to have fixed it for now. I presume this change to the PHP file is not persistent across reboots?


    Would anyone be kind enough to provide a mini-tutorial on how to edit the file above? I see it's basically just that one line of code, but... do I need to stop the array to do it, etc.?


    I just opened the terminal (SSH/telnet works too), ran cd /usr/local/emhttp/plugins/dynamix.docker.manager/include and then nano DockerClient.php. The line of code is about halfway into the file. You do not need to stop the array or do anything else.
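    If editing in nano feels error-prone, the same one-line change can be scripted with sed. A minimal sketch, demonstrated on a throwaway stand-in file (on a real server the target would be /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php; DockerClient.demo.php below is hypothetical):

    ```shell
    # Create a stand-in file containing the original Accept line from the plugin.
    FILE='DockerClient.demo.php'
    cat > "$FILE" <<'EOF'
                            'Accept: application/vnd.docker.distribution.manifest.v2+json',
    EOF
    OLD='application/vnd.docker.distribution.manifest.v2+json'
    NEW='application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json'
    # Idempotent: only patch if the list MIME type is not already present.
    grep -q 'manifest\.list\.v2+json' "$FILE" || sed -i "s|$OLD|$NEW|" "$FILE"
    grep 'Accept:' "$FILE"
    ```

    Running it a second time makes no further changes, so it is safe to re-run (for example, after a reboot reverts the file).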

     

    57 minutes ago, darcon said:

    I just opened the terminal (SSH/telnet works too), ran cd /usr/local/emhttp/plugins/dynamix.docker.manager/include and then nano DockerClient.php. The line of code is about halfway into the file. You do not need to stop the array or do anything else.

     

    By manually doing this fix now, do we POTENTIALLY create side-effect issues when the fix is officially pushed as an Unraid update? Any known reason this wouldn't be advisable to do immediately?

     

    Edit: once I make the changes above, what do I type to get the editor to save the file? Sorry for the silly questions... I follow what needs changing, and kind of why, but... facepalm lol

    Edited by blaine07


    Thanks for the fix @bluemonster !

     

    Here is a bash script that will automatically implement the fix on 6.7.2 (and probably earlier, although I'm not sure how much earlier):
      https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab

     

    It is meant to be run using the User Scripts plugin, although that isn't required.

    Note that you need to re-run the script after every reboot.

    Remember to uninstall the script after you upgrade to Unraid 6.8

    More details in the script comments. 

    Edited by ljm42

    By manually doing this fix now, do we POTENTIALLY create side-effect issues when the fix is officially pushed as an Unraid update? Any known reason this wouldn't be advisable to do immediately?

     

    The change does not persist through reboots, so it won't cause any lasting harm. Once Unraid gets updated, we'll reboot and everything will be back to normal.

    Edited by darcon

    Cool, I ran that script and it fixed it for Docker Hub. It did not fix the similar problem I have on Quay.io with the oauth2_proxy container.

    11 hours ago, bonienl said:

    @bluemonster excellent find. Tested it, and working fine.

    I made a PR with your correction, it will be available in the next version. Thanks

    Is there a chance of a hotfix for this issue, or is the next version due very soon?

    44 minutes ago, local.bin said:

    Is there a chance of a hotfix for this issue, or is the next version due very soon?

    This issue results in many false positives, but containers which have a true update are still updated correctly.

    The -rc release of version 6.8 is imminent, just a little bit more patience 😊

     





