
Leaderboard


Popular Content

Showing content with the highest reputation since 08/20/19 in Report Comments

  1. 18 points
    It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure if that's always been the case or whether it's the result of some recent change to the Docker Hub API. I'm also not sure if it's intentional or a bug.

    This causes an issue because in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is:

      /**
       * Step 4: Get Docker-Content-Digest header from manifest file
       */
      $ch = getCurlHandle($manifestURL, 'HEAD');
      curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Accept: application/vnd.docker.distribution.manifest.v2+json',
        'Authorization: Bearer ' . $token
      ]);

    which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest, as reported by the local docker commands, to the individual manifest digest retrieved from Docker Hub, and of course the two do not match.

    Changing the Accept header to the list MIME type, 'application/vnd.docker.distribution.manifest.list.v2+json', stops it from constantly reporting updates for these containers. However, it then reports updates for every container that does not use a manifest list, because the call falls back to a v1 manifest when no list is available, and the v1 manifest digest doesn't match the v2 manifest digest.

    If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub falls back correctly to the v2 manifest, and the digests now match the local output both for containers using plain manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.

      /**
       * Step 4: Get Docker-Content-Digest header from manifest file
       */
      $ch = getCurlHandle($manifestURL, 'HEAD');
      curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
        'Authorization: Bearer ' . $token
      ]);
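    To see the difference by hand, a curl session along these lines should reproduce it (the image name is only an example, and the token extraction with sed is just a quick illustration, not the plugin's code):

      # Illustrative only: compare the digest Docker Hub reports under the two Accept headers.
      REPO="linuxserver/radarr"

      # Anonymous pull token for the repository
      TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
        | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

      # Stock header: returns the digest of the individual v2 manifest
      curl -sI -H "Authorization: Bearer ${TOKEN}" \
        -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        "https://registry-1.docker.io/v2/${REPO}/manifests/latest" | grep -i docker-content-digest

      # Combined header: returns the manifest list digest, which matches docker images --digests locally for multi-arch images
      curl -sI -H "Authorization: Bearer ${TOKEN}" \
        -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json" \
        "https://registry-1.docker.io/v2/${REPO}/manifests/latest" | grep -i docker-content-digest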
  2. 11 points
    Thanks for the fix @bluemonster! Here is a bash script that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot. Remember to uninstall the script after you upgrade to Unraid 6.8. More details are in the script comments.
  3. 6 points
    Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, however, have to check for updates manually once to clear out the old update-available status. If you are running @ljm42's patch script, you can safely remove it; the Auto Update will not install the patch once 6.8+ is released.
  4. 5 points
    We may have got to the bottom of this. Please try the new version 6.7.3-rc3, available on the next branch.
  5. 3 points
    There was a kernel driver internal API change a few releases back that I missed, and md/unraid was doing something that's no longer valid. I noticed this, put a fix into the upcoming 6.8, and gave it to someone who could reproduce the corruption. It has been running far longer than it ever did before, so I think it is safe for wider testing. I back-ported the change to 6.7.3-rc3 and also updated to the latest 4.19 kernel patch release, because, why not?
  6. 3 points
    I am from Europe. It seems to be more reliable today; the timeout now happens once I reach AWS in Seattle, which I presume is their firewall.
  7. 3 points
    Just to highlight, because you're staff/limetech/closely associated: what you mention above ISN'T what this thread is about, it's a side discussion. The issue here is that under 6.7, if I am streaming a movie and then initiate a large file copy on another PC, the whole system comes to a grinding halt. Video stops, the copy slows, it's unworkable. From my tests, the freeze kicks in when a copy/transfer reaches the 2.5-3 GB mark. It would be handy if LT could acknowledge the thread and the issue.
  8. 2 points
    There is an rc4, but it doesn't fix the slow array problem.
  9. 2 points
    You should never paste random code from the Internet into your computer without understanding what it does... but that aside, if you open up a terminal and paste in the following line:

      wget https://gist.githubusercontent.com/ljm42/74800562e59639f0fe1b8d9c317e07ab/raw/387caba4ddd08b78868ba5b0542068202057ee90/fix_docker_client -O /tmp/dockfix.sh; sh /tmp/dockfix.sh

    then the fix should be applied until you reboot.
  10. 2 points
    @coblck You have to edit DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include) and change it as shown in the post I linked for you:

      nano /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php

    Scroll down until you see "DOCKERUPDATE CLASS". Below it you should find "Step 4: Get Docker-Content-Digest header from manifest file". Change the following line from

      'Accept: application/vnd.docker.distribution.manifest.v2+json',

    to

      'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',

    and you're done. A server restart will reset that file. If you don't feel confident editing a file, wait for the next Unraid update and ignore the docker update hints. If there is a real update for a docker container and you run the update, it will install.
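    If you'd rather not edit the file by hand, a one-liner along these lines should make the same change (this is only a sketch, not the official script, and it assumes the stock v2-only Accept line is still unmodified):

      # Sketch only: swap the stock Accept header for the combined list+v2 header.
      # Assumes DockerClient.php still contains the original v2-only line; harmless to re-run.
      sed -i "s|'Accept: application/vnd.docker.distribution.manifest.v2+json',|'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',|" \
        /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php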
  11. 2 points
    I know it's being looked at, hopefully will be fixed for 6.8
  12. 2 points
    Also, any users who created what should be a redundant pool on v6.7.0 should convert metadata to raid1 now, since even after this bug is fixed any existing pools will remain as they were. Use:

      btrfs balance start -mconvert=raid1 /mnt/cache

    To check if it's using the correct profile type:

      btrfs fi usage -T /mnt/cache

    Example of a v6.7-created pool. Note that while data is raid1, metadata and system are single profile, i.e. some part of the metadata is on each device and will be incomplete if one of them fails; all chunk types need to be raid1 for the pool to be redundant:

                         Data       Metadata   System
      Id  Path           RAID1      single     single     Unallocated
      --  ---------      ---------  ---------  ---------  -----------
       2  /dev/sdg1      166.00GiB    1.00GiB          -    764.51GiB
       1  /dev/sdi1      166.00GiB    1.01GiB    4.00MiB    764.50GiB
      --  ---------      ---------  ---------  ---------  -----------
          Total          166.00GiB    2.01GiB    4.00MiB      1.49TiB
          Used           148.08GiB  555.02MiB   48.00KiB
  13. 1 point
    Posted the same thing on the unRAID Users & Help page on FB, but here's my status: I upgraded to the new release 2 days ago, on 9/19/2019.

    I have a Sonarr container. I rebuilt the DB from scratch last night, using the path /mnt/user/... The DB is already corrupt, with Sonarr logs showing "yesterday" as the time.

    I have a Radarr container. I'm using the last good backup I could find, with the path /mnt/disk*/... That DB is also corrupt; the first error shows at 3:51 PM today (CST), so it made it a little longer.

    If there's anything else I can do to help or test... *please* let me know. I'd love to get this resolved. Attached diagnostics file: drogon-diagnostics-20190920-2109.zip
  14. 1 point
    Yes. -rc4 doesn't fix the issue; during testing I was fooled by caching. I thought I had accounted for that, but it was late at night. The Linux block layer has undergone significant changes over the last few releases, and I've had to do a lot of re-learnin'.
  15. 1 point
    It looks like it's from the Netdata Docker container.
  16. 1 point
    Actually, even for appdata on the cache there are two possible setups:
    1. map directly to /mnt/cache/xxx
    2. map to the user share /mnt/user/xxx with the share set to "Use cache disk: Only"
    Only by testing both setups can we isolate whether the issue is fuse-related or not.
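    For example, the same container could be run twice with only the /config mapping changed; the container names, paths, and image below are purely illustrative:

      # Illustrative only: identical container, two different /config mappings.
      # Setup 1: direct cache path, bypassing the /mnt/user FUSE layer
      docker run -d --name sonarr-direct \
        -v /mnt/cache/appdata/sonarr:/config \
        linuxserver/sonarr

      # Setup 2: user share path (share set to cache-only), which goes through /mnt/user FUSE
      docker run -d --name sonarr-fuse \
        -v /mnt/user/appdata/sonarr:/config \
        linuxserver/sonarr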
  17. 1 point
    Here's my take on the situation. The SQL thing has been an issue for a LONG time, but only under some very hard-to-pin-down circumstances. The typical fix was just to make sure the SQL database file was on a direct disk mapping instead of the user share fuse system. It seems to me like the SQL software is too sensitive to timing: it gives up and corrupts the database when a transaction takes too long. Fast forward to the 6.7.x release, and it's not just the fuse system, it's the entire array that is having performance issues. Suddenly, what was a manageable issue with SQL corruption becomes an issue for anything but a direct cache mapping. So I suspect fixing this concurrent access issue will help with the SQL issue for many people as well, but I think the SQL thing will ultimately require changes that are out of Unraid's direct control, possibly some major changes in the database engine; after all, it has been an issue in the background for years.
  18. 1 point
  19. 1 point
    Can confirm CA Auto Update resolves everything.
  20. 1 point
    I did run into this situation at first, but thought I had it worked out in the posted fix (the trick was supplying both MIME types in preference order). I use one of the containers you mentioned as having the issue post-patch, but don't see the behavior here. Did you edit the file manually, or did you use the script @ljm42 posted? If you edited it manually, would you mind pasting the exact line after the edit? It's very possible a typo in the right spot could cause this.
  21. 1 point
    I saw somewhere that version 6.8 is coming soon... but I cannot remember where (email, blog, somewhere), and I couldn't find anything about it here in the forums. I'm stuck at 6.6.7 until the sqlite issue is fixed; I cannot go back to fighting databases daily. If fixing this has been moved to less urgent... then I guess I will stay there (6.6.7). My needs are very small, and it does everything I need. I do think it sucks that we went from lots of discussion, asking for diagnostics, etc... and then nothing. Zip. Zilch.
  22. 1 point
    Thanks @bluemonster and @ljm42 That fix and User Script worked to fix my Unraid 6.7.2 Docker Update issue
  23. 1 point
    Thanks great work 👍 @bluemonster & @ljm42
  24. 1 point
    Thank you for the fix, bluemonster. Thanks for the script, ljm42.
  25. 1 point
    By manually doing this fix now, do we POTENTIALLY create side-effect issues when the fix is officially pushed as an update to Unraid? Any known reason this wouldn't be advisable to do immediately?

    The change does not persist through reboots, so it won't cause any lasting harm. Once Unraid gets updated we'll reboot and everything will be back to normal.
  26. 1 point
    By manually doing this fix now, do we POTENTIALLY create side-effect issues when the fix is officially pushed as an update to Unraid? Any known reason this wouldn't be advisable to do immediately?

    Edit: once I make the changes above, what do I type or do to get the terminal to save the file? Sorry for the silly questions... I follow what needs to be changed and kind of why, but... facepalm lol
  27. 1 point
    I made the change suggested above and my containers are now updating as expected. Thanks
  28. 1 point
    This looks to be an issue on docker's side (but I could be completely out to lunch -> wouldn't be the first time). It doesn't appear that they are reporting the proper sha for the manifests remotely.

      Manifest URL: https://registry-1.docker.io/v2/linuxserver/radarr/manifests/latest
      Token URL: https://auth.docker.io/token?service=registry.docker.io&scope=repository%3Alinuxserver%2Fradarr%3Apull

      HTTP/1.1 200 OK
      Content-Length: 1788
      Content-Type: application/vnd.docker.distribution.manifest.v2+json
      Docker-Content-Digest: sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327
      Docker-Distribution-Api-Version: registry/2.0
      Etag: "sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327"
      Date: Sat, 31 Aug 2019 02:17:43 GMT
      Strict-Transport-Security: max-age=31536000

      Remote Digest: sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327
      Update status: Image='linuxserver/radarr:latest', Local='sha256:eec8bb0287a5cb573eb5a14a5c2e1924ad7be66f89ea5e2549440defdafba02b', Remote='sha256:4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327', Status='false'

    Now if I pull the image (i.e. delete the container and reload it), in theory the digest sha should be 4c08590e968819112de4533298c2859d128afb449a704301bcfd647dfe38e327, but it's not, if I'm reading everything correctly:

      docker images --digests --no-trunc
      REPOSITORY           TAG      DIGEST                                                                    IMAGE ID                                                                  CREATED      SIZE
      linuxserver/radarr   latest   sha256:eec8bb0287a5cb573eb5a14a5c2e1924ad7be66f89ea5e2549440defdafba02b   sha256:cae2333bf55d51bbf508c333bdbbc70978d41baf0a23207de8bc3251afb73e6d   3 days ago   550MB

    Therefore an update shows as being available... IIRC, this issue pops up every year or so with docker.
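    For what it's worth, the digest the local engine has recorded for a tag (the 'Local' value above) can also be read directly with docker inspect; the repository and tag here simply mirror the example above:

      # Print the repo digest the local Docker engine associates with the tag,
      # which should line up with the 'Local' value in the update status.
      docker inspect --format '{{index .RepoDigests 0}}' linuxserver/radarr:latest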
  29. 1 point
  30. 1 point
    It's not only LSIO dockers:
      pihole/pihole
      raymondmm/tasmoadmin
      linuxserver/letsencrypt
      linuxserver/oscam
      library/influxdb
      linuxserver/duckdns
      homeassistant/home-assistant
      library/telegraf
      linuxserver/sonarr
      linuxserver/transmission
  31. 1 point
    Unraid 6.8 is a kernel version? -just kidding, as always expect news soon(tm)
  32. 1 point
    So far nothing has been found that fixes this and @limetech continues to be silent.
  33. 1 point
    OK, yes we had sort of established that, but this is confirmation then. Thanks.
  34. 1 point
    Everything bar the copy. The video I was playing on the HTPC stalled, doing an Unraid directory listing on my PC was very slow with the top bar slowly filling, and the copy that I triggered via Krusader seemed to be slowed as well. I really wish the Lime boys would at least acknowledge that they are aware of this issue.
  35. 1 point
  36. 1 point
    After pulling my hair out for the last week looking for what I originally assumed was probably a network issue, I found this thread, which describes the issue I'm having exactly. My system is a dual Xeon 2650 setup with 96GB of RAM, dual LSI 2008 SAS2 cards, and two cache drives in RAID1 connected to the onboard SATA controller (Intel C600/X79 chipset). Mover is currently configured to run hourly, as my cache drives are relatively small at 120GB for the number of users in my household (8). I was already planning to jump to a 1TB NVMe drive, but I guess I may need to seriously consider downgrading, as my wife's identical twin lives with us, which means WAF x2 is a major issue! 😱 Is there anything major to look out for when downgrading?
  37. 1 point
    What I do to check the Sonarr DB is periodically go through the logs, filtering out everything but the errors. If there is corruption in the DB you will see the "malformed" message there. Once I have gone through a log set, I clear the logs as well. Sonarr initially still seems to work properly when in fact the DB has only a few corruptions; once the number of corruptions increases, Sonarr starts showing slow responsiveness, until it reaches a point where you can't even get to the landing page.

    Anyway, when doing manual backups, unless you do them from inside Sonarr, where I assume the DB is paused for the process, I think it is better to stop the Docker container altogether. I also started going through the SQLite site for additional information, and I would suggest deleting any existing .db-wal files before restoring from a manual backup, as they contain pending transactions. Unless you back everything up so you can overwrite the .db-wal files with the exact same set from when the DB was backed up, they might cause a problem when restarting, since SQLite will probably try to apply the pending changes; more so if the .db-wal files are corrupt or have already been partially applied.
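    If you'd rather check the database directly instead of waiting for "malformed" messages in the logs, something along these lines works from the shell (the container name and path are only examples, and it assumes the sqlite3 binary is available on your system):

      # Sketch only: check a Sonarr database for corruption from the host.
      # Stop the container first so the DB isn't being written to mid-check;
      # adjust the container name and path to your actual setup.
      docker stop sonarr
      sqlite3 /mnt/cache/appdata/sonarr/sonarr.db "PRAGMA integrity_check;"
      docker start sonarr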
  38. 1 point
    Installed the RC this morning. I should be able to give a report tonight as to whether it helps.
  39. 1 point
    <FacePalm> You totally baited me. I feel like I got rick rolled.
  40. 1 point