rara1234

Members • Posts: 52

Posts posted by rara1234

  1. Read the FAQ entry. Nothing to worry about.

     

     

    Sent from my SM-T560NU using Tapatalk

     

    There's a better fix. This post has the fix, which worked perfectly for me without having to reinstall everything:

    https://github.com/docker/distribution/issues/1439#issuecomment-237999672

    The existing layers may be an important piece of the puzzle. I think what's happening is that one of the existing layers was downloaded by an older version of Docker that did not save tar-split metadata. The migration to content-addressability computed an ID for the layer, but this doesn't match the ID that comes directly from the tar. The migration also prepopulated the mapping between the layer digest and the artifact digest in /var/lib/docker/image/*/distribution. Pulling another image that uses the affected layer will try to use this layer digest from the migration, but that digest turns out to be wrong, so the pull fails.

     

    If this is correct, deleting /var/lib/docker/image/*/distribution should fix your problem pulling maxpowa/npomf:latest. This is just a cache, so it should be safe to remove (removing it will just cause extra data to be transferred during pushes and pulls until the cache is repopulated).
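
    On unRAID that boils down to something like this over SSH (a minimal sketch - I'm assuming /etc/rc.d/rc.docker is the Docker service script on your build, and stopping Docker first is just me being cautious, not something the linked post requires):

    # stop the Docker service before touching its metadata
    /etc/rc.d/rc.docker stop
    # remove the migration-populated digest cache; it gets rebuilt on the next pull/push
    rm -rf /var/lib/docker/image/*/distribution
    # bring Docker back up
    /etc/rc.d/rc.docker start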

  2. Try as I might, I could never replicate the issue, so I couldn't ever play around with other possible solutions.  The FAQ entry is a solution, and only costs you a couple of minutes in download time.  But if you're going to try anything, you would delete the container and image and then install it again.

     

    Just got home from work, and wanted to try something simpler before I killed my docker.img and reinstalled all dockers.

     

    Clicked on the app

    remove

    also remove image

    yes delete it

     

    Reinstalled in CA; works perfectly now.

     

    Hopefully they do edit the second post on the announcement and list the issues people are hitting and their simple fixes.

    Thanks again for pointing me to that. I missed it in the FAQ when I had a quick look over it last night.

     

     

    This post has the fix, which worked perfectly for me without having to reinstall everything: https://github.com/docker/distribution/issues/1439#issuecomment-237999672. You can just log in using SSH and clear the cache (details below):

     

    The existing layers may be an important piece of the puzzle. I think what's happening is that one of the existing layers was downloaded by an older version of Docker that did not save tar-split metadata. The migration to content-addressability computed an ID for the layer, but this doesn't match the ID that comes directly from the tar. The migration also prepopulated the mapping between the layer digest and the artifact digest in /var/lib/docker/image/*/distribution. Pulling another image that uses the affected layer will try to use this layer digest from the migration, but that digest turns out to be wrong, so the pull fails.

     

    If this is correct, deleting /var/lib/docker/image/*/distribution should fix your problem pulling maxpowa/npomf:latest. This is just a cache, so it should be safe to remove (removing it will just cause extra data to be transferred during pushes and pulls until the cache is repopulated).

     

  3. What do I do when I see 'layers from manifest don't match image configuration' during a docker app installation?

     

    I have a theory as to why this is actually happening; unfortunately, I am unable to replicate the issue, so I cannot test the theory.

     

    As to the solution, you will need to delete your docker.img file and recreate it.  You can then reinstall your docker apps through Community Applications' Previous Apps section or via the my* templates.  Your apps will come back exactly the same as before, with no adjustment of the volume mappings, ports, etc. required.

     

    This post has the fix, which worked perfectly for me without having to reinstall everything:

    https://github.com/docker/distribution/issues/1439#issuecomment-237999672

    The existing layers may be an important piece of the puzzle. I think what's happening is that one of the existing layers was downloaded by an older version of Docker that did not save tar-split metadata. The migration to content-addressability computed an ID for the layer, but this doesn't match the ID that comes directly from the tar. The migration also prepopulated the mapping between the layer digest and the artifact digest in /var/lib/docker/image/*/distribution. Pulling another image that uses the affected layer will try to use this layer digest from the migration, but that digest turns out to be wrong, so the pull fails.

     

    If this is correct, deleting /var/lib/docker/image/*/distribution should fix your problem pulling maxpowa/npomf:latest. This is just a cache, so it should be safe to remove (removing it will just cause extra data to be transferred during pushes and pulls until the cache is repopulated).

  4. Just trying to set up CUPS.  Specifically want it for Google Cloud Print to my networked printer (Ricoh Aficio SG3110DN).

     

    I have set up an app password in my Google account, but I get this:

     

    *** Running /etc/my_init.d/config.sh...
    grep: /config/cloudprint/Service State: No such file or directory
    Google authentication failed.
    *** /etc/my_init.d/config.sh failed with status 1
    
    *** Killing all processes...

     

    Any ideas?

     

    OK, so I have CUPS working to share the printer - that's working fine. It's just the Google Cloud Print part I'm struggling with.

     

    Yeah, I have the same issue. I think it's because the script uses an outdated way to authenticate to Google. I raised a defect on the GitHub repo: https://github.com/gfjardim/docker-containers/issues/56

  5. I have Emby (emby/embyserver:latest) running in one container, and Sonarr (linuxserver/sonarr:latest) is also installed. When I try to configure a "connection" between the two so that Sonarr notifies Emby of a new episode, I can't get it to save; it just reports "Unable to send test message: The request timed out". However, when I look in Emby's notification centre, I see "Test from Sonarr 7 minutes ago Success! MediaBrowser has been successfully configured!". I tried entering the Sonarr container and making the request with curl:

    curl -X POST -H "Content-Type: application/json" -d '{}' "http://192.168.1.5:8096/emby/emby/Library/Series/Updated?api_key=<key>"

    and it works. I was able to add a "webhook" type connection without a problem though, and that does trigger an update to the library.

  6. ... but not using functionality which is important to my family on a daily basis for months at a time isn't a viable approach...

    How did your family get along before docker?

     

    If I needed to I could go back to that approach while I tried to troubleshoot individual dockers on my unRAID.

     

    I know what you mean - but I suspect the world might fall apart without Thomas and Friends, Peppa Pig and regular updates on the comings and goings of the Kardashians ;)

     

    Previously I had a separate machine running all of these apps. I don't have it any more. I could set these up in a VM I suppose, but again, I'm sure there is some way to identify where the allocated storage has gone; we just don't know it yet.

     

    If I get time over Christmas I will try rebuilding the docker image and adding containers back one at a time. The problem is that it works fine for months, then I get 30-50% growth (on a 10GB image) in a day or two, completely at random.

  7. ...If your docker image is growing it is absolutely because of a misbehaving container or a misconfigured container.

     

    I'd be happy to concede that if I could show it - but not using functionality which is important to my family on a daily basis, for months at a time, isn't a viable approach. Without doing that, how can I figure out which one it is? I mean, there must be some better way (i.e. some systematic/analytical approach using logs, tools to interrogate the system, etc.) to narrow it down to the container(s) causing this?  :(

  8. Sorry - I updated my answer after you'd replied to add that:

    1. I don't know how to figure out which container is causing it without disabling them one by one for 3 months at a time (the average time it takes for the growth to become noticeable in my case), and

    2. I would have expected that a badly configured app within a container shouldn't be able to permanently consume image storage, i.e. the container should reuse the storage it has already claimed rather than consuming ever more image storage and never releasing it.

  9. Thanks for the helpful reply. I have transcoding disabled in Emby. I guess the issue is that I simply don't know how to narrow it down, e.g. how to work out which container is causing the problem without disabling them one by one and waiting 3 months - but that's clearly not an option! And as far as I can tell, when storage is released within a container it should be released in the image too, which apparently isn't happening, and that's where I think the defect lies - badly configured apps within containers shouldn't be able to permanently consume storage...

  10. You haven't really given us enough information to reproduce. Most specifically, you haven't given us any details about how you have any of those dockers configured. I suspect your problem is related to the way you have configured one or more of your dockers, and it likely is the same for most in this thread.

     

    Not really a defect report, so I have merged it back into this thread where you can give us more details.

     

    That's not really very useful, and although I know you're trying to be helpful by keeping the forums clear of junk, with 4,833 views on this thread this is clearly an issue for a large number of people. It would be really great if you could help figure out how to analyse or troubleshoot it, rather than closing defect reports with an "it's your fault" response without even letting Lime Tech see the issue report. What, specifically, do you think we need? A specific log file? The output of a specific command? Can you actually assist with this issue, or highlight it to someone who can? It's already been dragging on for months.

     

    I think the main things that I note are:

    1. Whatever docker containers everyone here is using, interrogating utilisation within the image file (e.g. with the commands sketched below) shows that consumed space is well below the docker allocated storage reported by unRAID.

    2. For many people, things appear stable and then suddenly rocket up 20-40% within a few hours.

    3. There are a few containers which appear to be commonly used, but no container which everyone is using.
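
    By "interrogating utilisation" I mean something along these lines over SSH (a minimal sketch, assuming unRAID's loopback-mounted docker.img is at /var/lib/docker):

    # per-container writable-layer sizes, as reported by docker itself
    docker ps -a --size
    # on-disk usage inside the image, to compare against what unRAID reports
    du -h --max-depth=1 /var/lib/docker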

     

    What would you propose the next steps are to progress this issue?

     

  11. unRAID OS Version: 6.1.3

     

    Description: I keep getting warnings about "Event: Docker high image disk utilization". Typically, docker runs fine for a few weeks, then suddenly these start, and the reported utilization goes up from 70% to 100% in a few days. If it gets to 100%, the webgui crashes. There are no entries in the logs, and docker.log contains only this:

     

    time="2015-11-18T07:12:18.238256056Z" level=fatal msg="Error starting daemon: Unable to open the database file: unable to open database file"

     

    How to reproduce: Just using docker, with the following apps:

    needo/couchpotato:latest

    gfjardim/crashplan:latest

    gfjardim/cups:latest

    emby/embyserver:latest

    pinion/docker-mylar:latest

    linuxserver/sabnzbd:latest

    linuxserver/sonarr:latest

     

     

    Expected results: Docker image utilization does not approach 100% over time

     

    Actual results: Docker image utilization approaches 100% and the image file needs to be destroyed and recreated to recover service.

     

    Other information: this post has had over 4,400 views, so clearly others are also seeing it: https://lime-technology.com/forum/index.php?topic=42654.45
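
    One way to capture when the growth actually happens would be to log utilisation periodically, e.g. (a rough sketch, assuming docker.img is mounted at /var/lib/docker and that /boot is the persistent flash drive; the log path is just an example):

    # append a timestamped usage line every hour
    while true; do
        echo "$(date) $(df -h /var/lib/docker | tail -1)" >> /boot/docker-usage.log
        sleep 3600
    done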

  12. Adding the dbus change also fixed the discovery error for me. Now, however, when I go to add the discovered printer I see this:

    Add Printer OKI_DATA_CORP_C531 Error

     

    Unable to get list of printer drivers:

     

    Broken pipe

     

    Restarting the container fixes it. After adding the printer, I sent a test print and now see this error:

    processing since

    Sat 03 Oct 2015 11:25:16 AM UTC

    "Unable to locate printer "oki-c531-2e94ba.local"."

  13. So by trawling through 20+ pages, I came across a reference to ".ui_info". Long story short, there's an additional step missing from the first post, which is: on your local client, update the .ui_info file.

    • Locate the file. On my Windows installation it's at c:\ProgramData\CrashPlan\.ui_info; on OS X it's at /Library/Application Support/CrashPlan/.ui_info.

    • Replace the GUID (the long string of letters and numbers) with "unRaid", so that the entire line looks like this:

    4243,unRaid

     

    And that's it. Apparently Code42 introduced a security feature where the client and server need to share a unique identifier, to prevent people from controlling other people's servers.
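
    If you'd rather make the edit from a terminal, something along these lines should do it (illustrative only - the path is the OS X one from above, and the attached .bak suffix keeps a backup of the original file):

    # keep the leading port number, swap everything after the comma for the unRaid token
    sed -E -i.bak 's/^([0-9]+),.*/\1,unRaid/' "/Library/Application Support/CrashPlan/.ui_info"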

  14. I'm getting annoyed by the temperature warnings. The WD Red drives I'm slowly replacing my older Green drives with have a rated operating temperature of up to 70C [1], and my older WD Green drives up to 60C [2]. While I'm aware that lower temperature correlates with longer life, at this time I don't really care :)

    The system sends me emails at (I think) 50 degrees. How can I change these thresholds?

     

    ref:

    [1] http://www.wdc.com/global/products/specs/?driveID=1324&language=1

    [2] http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf
