Everything posted by kamhighway

  1. @PhAze: Thank you for your post. I can believe that more than one thing can cause docker.img utilization to ramp up. In my case, setting Plex to transcode from the cache drive seems to have solved most of the problem. The img is still growing, albeit at a much slower rate. I did not know about the possibility of a "dangling image," so I'll look for that the next time I see a big jump in utilization of docker.img. It seems we are part of a very small group experiencing this problem, which makes me wonder what I'm doing differently from most other users.
  2. The container does shrink, but it does not seem to free up the space in docker.img. Either we need to figure out how to shrink docker.img when a docker is done using space in /tmp, or we need to move /tmp outside of docker.img by mapping it to the cache drive.
  3. The /tmp folder in the dockers may be empty now, since the apps do a pretty good job of cleaning up after themselves. I suspect that what is happening is that utilization of docker.img goes up when a dockerized app is using /tmp, but after the app cleans up /tmp, utilization of docker.img does not go down. This would explain why the size of the containers does not change even though docker.img utilization increases. If I am right, then mapping every docker's /tmp to the cache drive would be an easy solution.
  4. The problem is most evident when a dockerized app uses /tmp for big files such as transcoding in Plex. Unraring in sab may be another good example. In the sab docker, try mapping /tmp to /mnt/cache/appdata/sabnzbd/tmp. I am considering doing this for all my dockers so they can all use /tmp without utilizing docker.img. I'd be curious to know if this works for you as I am not 100% sure I've diagnosed the primary issue correctly.
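For anyone who wants to try this, here is roughly what the mapping looks like as a plain docker run command. The host paths and the image name are just examples for whatever your setup already uses; on unRAID you would normally add it as an extra volume mapping on the container's edit page instead:
     # example only: map the container's /tmp to a folder on the cache drive
     docker run -d --name sabnzbd \
       -p 8080:8080 \
       -v /mnt/cache/appdata/sabnzbd:/config \
       -v /mnt/cache/appdata/sabnzbd/tmp:/tmp \
       binhex/arch-sabnzbd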
  5. Update: It's been about 4 days now since I changed my Plex settings to perform transcoding in RAM and the amount of space utilized in my docker.img has changed only slightly. This result is consistent with my theory that allowing Plex to transcode inside its container was the problem. I don't think the problem is limited to Plex. Any container that runs an app that uses lots of tmp space is likely to exhibit the same problem. Maybe it should be standard practice to map every docker's /tmp directory to a share on the cache drive or to ram. Now that I'm fairly sure the Plex docker was my problem, here's how we could collaborate to be sure. If you are using a Plex docker, please post your answers to the following questions: 1) Which Plex Docker? Needo, Linux.io, BinHex, etc. 2) Did you follow JonP's instructions to set Plex to transcode either in RAM, Cache, or a Share? 3) Do you know for sure that someone watches content that Plex has to transcode? 4) Does your utilization of docker.img increase in big jumps (hundreds of MB/day)?
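If it helps with question 4, a one-liner along these lines (run from cron or by hand once a day) will record utilization over time. The log path on the flash drive is just an example; /var/lib/docker is where unRAID loop-mounts docker.img:
     # append a timestamped usage line for the docker.img loop device to a log
     echo "$(date) $(df -h /var/lib/docker | tail -n 1)" >> /boot/docker-usage.log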
  6. @TexasDave, I never get the message that disk utilization has returned to normal.
  7. Then that's probably where it's mounted. Like I said, I'm at work so I couldn't check. @Squid Thanks again. I'll monitor the size of the directories at /var/lib/docker. The next time I see a big jump in utilization of docker.img, I should now be able to see what is growing. This is a big help.
  8. @squid Is docker.img mounted to /var/lib/docker/unraid or one level up at /var/lib/docker? It looks to me like the containers are one level up at /var/lib/docker. At that level I see: /containers of size 768, and /graph of size 16520. Does this sound right to you?
root@Media:/var/lib/docker# ls -l
total 20
drwx------ 1 root root    20 Sep 22 16:17 btrfs/
drwx------ 1 root root   768 Sep 29 10:49 containers/
drwx------ 1 root root 16520 Sep 27 15:06 graph/
drwx------ 1 root root    32 Sep 22 16:16 init/
-rw-r--r-- 1 root root  5120 Sep 29 10:49 linkgraph.db
-rw------- 1 root root   604 Sep 29 10:49 repositories-btrfs
drwx------ 1 root root     0 Sep 27 15:06 tmp/
drwx------ 1 root root     0 Sep 22 16:16 trust/
drwxrwxrwx 1 root root   112 Oct 1 12:03 unraid/
-rw-rw-rw- 1 root root    71 Oct 1 12:03 unraid-autostart
-rw-rw-rw- 1 root root   166 Sep 29 10:49 unraid-update-status.json
drwx------ 1 root root   256 Sep 27 14:43 volumes/
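Since docker.img is loop-mounted at /var/lib/docker, something like this should show which of those top-level directories is actually taking the space (assuming the stock mount point):
     # per-directory usage inside docker.img, smallest to largest
     du -h --max-depth=1 /var/lib/docker | sort -h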
  9. @squid, thanks for your help. This has been the most progress I've had in understanding what is going on in a week. At that location, I see a /tmp folder that contains dozens of files with names like tmp-1018085421.url which mc reports to be of size 714424 (I don't know what units this is in). However, I see that /community.applications is of size 60, so I'm guessing the other file is very much bigger. The oldest tmp file seems to correspond to the date I last rebuilt docker.img. Here is a list of what's in /tmp inside docker.img. root@Media:/tmp# ls -l total 50472 drwxrwxrwx 3 root root 60 Sep 23 07:29 community.applications/ drwx------ 2 root root 40 Sep 24 16:59 mc-root/ drwxr-xr-x 4 root root 80 Sep 23 07:27 notifications/ drwxr-xr-x 2 root root 200 Sep 28 23:16 plugins/ -rw-rw-rw- 1 root root 0 Sep 28 10:00 preclear_assigned_disks1 -rw-rw-rw- 1 root root 1680 Sep 28 10:00 preclear_report_sda -rw-rw-rw- 1 root root 168 Sep 28 10:00 preclear_stat_sda -rw-rw-rw- 1 root root 10 Sep 28 10:00 read_speedsda -rw-rw-rw- 1 root root 4571 Sep 28 10:00 smart_finish_sda -rw-rw-rw- 1 root root 4571 Sep 27 09:55 smart_mid_after_zero1_sda -rw-rw-rw- 1 root root 144 Sep 27 09:55 smart_mid_pending_reallocate_sda -rw-rw-rw- 1 root root 4571 Sep 26 23:02 smart_mid_preread1_sda -rw-rw-rw- 1 root root 4574 Sep 26 09:22 smart_start_sda -rw-rw-rw- 1 root root 705595 Sep 23 13:47 tmp-1001038311.url -rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-1018085421.url -rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1022176830.url -rw-rw-rw- 1 root root 711853 Sep 25 09:58 tmp-1026715570.url -rw-rw-rw- 1 root root 714424 Sep 28 09:40 tmp-1037033344.url -rw-rw-rw- 1 root root 711853 Sep 27 05:01 tmp-1041957111.url -rw-rw-rw- 1 root root 711853 Sep 25 09:57 tmp-104351457.url -rw-rw-rw- 1 root root 705595 Sep 23 11:43 tmp-1079397565.url -rw-rw-rw- 1 root root 714424 Sep 27 15:08 tmp-1114330130.url -rw-rw-rw- 1 root root 714554 Sep 28 23:16 tmp-1115295570.url -rw-rw-rw- 1 root root 714424 Sep 27 15:09 tmp-1115779883.url -rw-rw-rw- 1 root root 714554 Sep 29 10:48 tmp-1184253690.url -rw-rw-rw- 1 root root 714938 Oct 1 08:22 tmp-1244214065.url -rw-rw-rw- 1 root root 714424 Sep 28 02:27 tmp-124625526.url -rw-rw-rw- 1 root root 705595 Sep 23 11:07 tmp-1257712804.url -rw-rw-rw- 1 root root 711853 Sep 26 22:48 tmp-1262417929.url -rw-rw-rw- 1 root root 705595 Sep 23 11:26 tmp-1266159711.url -rw-rw-rw- 1 root root 705595 Sep 23 11:07 tmp-1303099892.url -rw-rw-rw- 1 root root 705595 Sep 23 11:17 tmp-1304736983.url -rw-rw-rw- 1 root root 714424 Sep 27 15:07 tmp-1320279264.url -rw-rw-rw- 1 root root 714424 Sep 27 14:44 tmp-137839263.url -rw-rw-rw- 1 root root 711853 Sep 26 14:04 tmp-1394325280.url -rw-rw-rw- 1 root root 714424 Sep 27 14:47 tmp-1475139423.url -rw-rw-rw- 1 root root 714554 Sep 29 10:48 tmp-1478797206.url -rw-rw-rw- 1 root root 711853 Sep 26 22:58 tmp-1502409291.url -rw-rw-rw- 1 root root 714554 Sep 28 10:04 tmp-1527394120.url -rw-rw-rw- 1 root root 714554 Sep 28 23:16 tmp-1548075922.url -rw-rw-rw- 1 root root 711853 Sep 26 22:47 tmp-1565010257.url -rw-rw-rw- 1 root root 714554 Sep 28 10:05 tmp-1584110921.url -rw-rw-rw- 1 root root 705595 Sep 23 07:30 tmp-160169679.url -rw-rw-rw- 1 root root 714424 Sep 27 15:21 tmp-1661075416.url -rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1672534634.url -rw-rw-rw- 1 root root 705595 Sep 23 11:43 tmp-1701087208.url -rw-rw-rw- 1 root root 714554 Sep 28 15:39 tmp-170252017.url -rw-rw-rw- 1 root root 708172 Sep 24 22:10 tmp-1731846235.url -rw-rw-rw- 1 root root 714554 Sep 29 10:49 
tmp-1776699683.url -rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1794736320.url -rw-rw-rw- 1 root root 705595 Sep 23 11:17 tmp-1798316926.url -rw-rw-rw- 1 root root 705595 Sep 23 11:23 tmp-1867079391.url -rw-rw-rw- 1 root root 714938 Sep 30 18:02 tmp-1892408965.url -rw-rw-rw- 1 root root 711853 Sep 27 09:44 tmp-1992924892.url -rw-rw-rw- 1 root root 714424 Sep 27 15:06 tmp-2004304119.url -rw-rw-rw- 1 root root 705595 Sep 23 07:34 tmp-2058323394.url -rw-rw-rw- 1 root root 705595 Sep 23 07:44 tmp-2087387426.url -rw-rw-rw- 1 root root 714424 Sep 27 14:44 tmp-2088655953.url -rw-rw-rw- 1 root root 714554 Sep 28 15:39 tmp-2136823997.url -rw-rw-rw- 1 root root 714554 Sep 29 10:50 tmp-2143980462.url -rw-rw-rw- 1 root root 705595 Sep 23 10:56 tmp-238162114.url -rw-rw-rw- 1 root root 714424 Sep 27 15:07 tmp-244130472.url -rw-rw-rw- 1 root root 711853 Sep 26 11:22 tmp-297073847.url -rw-rw-rw- 1 root root 714424 Sep 27 14:37 tmp-307181750.url -rw-rw-rw- 1 root root 714554 Sep 29 13:18 tmp-319612520.url -rw-rw-rw- 1 root root 708172 Sep 24 05:56 tmp-364760669.url -rw-rw-rw- 1 root root 711853 Sep 26 14:23 tmp-381722333.url -rw-rw-rw- 1 root root 705595 Sep 23 07:29 tmp-45712661.url -rw-rw-rw- 1 root root 705595 Sep 23 07:37 tmp-46253092.url -rw-rw-rw- 1 root root 714554 Sep 28 23:17 tmp-489865132.url -rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-49543768.url -rw-rw-rw- 1 root root 705595 Sep 23 07:30 tmp-532269974.url -rw-rw-rw- 1 root root 711853 Sep 27 09:48 tmp-54267583.url -rw-rw-rw- 1 root root 714554 Sep 29 06:36 tmp-569650762.url -rw-rw-rw- 1 root root 714554 Sep 29 07:31 tmp-595762458.url -rw-rw-rw- 1 root root 705595 Sep 23 07:31 tmp-626615610.url -rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-741555393.url -rw-rw-rw- 1 root root 711853 Sep 26 11:20 tmp-771294825.url -rw-rw-rw- 1 root root 705595 Sep 23 07:36 tmp-793902811.url -rw-rw-rw- 1 root root 714554 Sep 28 10:05 tmp-870656813.url -rw-rw-rw- 1 root root 708172 Sep 24 13:29 tmp-87708617.url -rw-rw-rw- 1 root root 714424 Sep 27 14:43 tmp-926217946.url -rw-rw-rw- 1 root root 714424 Sep 27 14:37 tmp-9295676.url -rw-rw-rw- 1 root root 714938 Sep 30 19:15 tmp-933241343.url -rw-rw-rw- 1 root root 705595 Sep 23 11:22 tmp-970283086.url drwx------ 2 root root 60 Sep 26 09:22 tmux-0/ -rw-rw-rw- 1 root root 263768 Sep 27 09:55 zerosda Couple of questions: 1) Do you have any idea what is generating these tmp files? 2) If I did not rebuild docker.img, would these tmp files get purged from docker.img on their own?
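On question 1, one way to find out would be to check whether any process still has those files (or /tmp itself) open. lsof and fuser should both work if they are installed; the file name below is just one taken from the listing above:
     # show any processes with files under /tmp open
     lsof +D /tmp
     # or check one suspicious file directly
     fuser -v /tmp/tmp-1018085421.url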
  10. Thanks Squid. Is that where docker.img is mounted in unRAID's filesystem?
  11. The Community Applications plugin stores some files in it. Total space is ~2 MB. This is interesting. So we can confirm that there are things in docker.img other than the containers. Is there any way to see what is in docker.img?
  12. Plex may not be the only app that temporarily uses large amounts of disk space. Any media server that does transcoding is likely to store big temporary files in the container. I see you have Serviio installed. I don't have any experience with Serviio, but if it does transcoding, it could have the same problem I think Plex has. In my case, I suspect Plex is the problem because the amount of disk space utilized in docker.img sometimes jumps by 1 or 2 GB a day, and sometimes it is relatively stable for days at a time. 1 to 2 GB/day is way more than a container's log files would account for. And I noticed that the growth spurts seem to coincide with the days my daughter watched a lot of anime on her iPad. I don't know why cAdvisor does not show the size of the Plex container growing. Maybe cAdvisor does not count temp files as part of the container's size? Maybe cAdvisor is reporting the size of the image, since that would be the container size when the container is first started. My container sizes, as reported by cAdvisor, have been constant while my docker.img utilization continues to climb. Questions for Lime Technology: 1) When a docker app creates temporary files in the container, does the utilization of docker.img increase? 2) When a docker app deletes temporary files in the container, does the utilization of docker.img decrease? 3) What else, besides containers, is stored in docker.img?
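While waiting for an answer from Lime Technology, a crude experiment along these lines should settle questions 1 and 2. The container name is just an example here (use whatever your Plex container is called on the Docker tab), and the test file is throwaway:
     # utilization before
     df -h /var/lib/docker
     # write a 1 GB temporary file inside the container (container name is an example)
     docker exec PlexMediaServer dd if=/dev/zero of=/tmp/space-test bs=1M count=1024
     df -h /var/lib/docker
     # delete it and see whether the space comes back
     docker exec PlexMediaServer rm /tmp/space-test
     df -h /var/lib/docker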
  13. @trurl Yes, it had no effect on the utilization of docker.img.
  14. @ikosa I used cadvisor as well. Like you, the size of my containers never changes. I conclude that either cadvisor is reporting to us the size of the images -- which AFAIK should not change -- or there is something in docker.img besides the containers that is growing. AFAIK Lime has not said whether there is anything else in docker.img. Perhaps Lime could say whether there is supposed to be anything else in docker.img.
  15. @jimbobulator If this turns out to be the problem, I think you could redirect /transcode to your cache drive. My theory is that if you don't redirect /transcode to someplace outside of the container, then the container starts to use up space in docker.img. From JonP's post, there are two changes you need to make for the redirect: the first is on the docker settings page, and the second is made in Plex's web interface, changing its server settings to use /transcode.
  16. I think the problem may be with Plex dockers. JonP wrote a post about moving the transcode directory to RAM: http://lime-technology.com/forum/index.php?topic=37553.0 JonP's post got me thinking about where my Plex docker stores its transcode files if I don't point /transcode to RAM as JonP suggests. If they are stored in docker.img, it would explain why utilization of docker.img can sometimes jump by a gigabyte a day and then go for long periods growing much more slowly. Utilization will jump whenever someone is watching something that Plex needs to transcode. I'm going to follow JonP's advice and have /transcode point to /tmp and see if that stops the increasing utilization of docker.img. To the others dealing with the same issue: 1) Are you running a Plex docker? 2) If yes, do you let Plex transcode your media, or does Plex always just play the file directly?
  17. Could someone who is using this docker successfully try something for me? Go to a picture and try to enter a comment. Does the browser go to full screen when you press the spacebar while in the comment field? I've tried this from a Mac using Safari, Firefox, and Chrome. I also tried it from Win7 using Chrome and Firefox. If PhotoShow did not go to full screen upon pressing the spacebar in a comment field, it would be a perfect tool to get my relatives to add what they know about the people in the pictures in my library. If you have PhotoShow working, please tell me how you fixed this problem. TIA
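To make the first change concrete, the mapping would look something like this if you ran the container by hand (on unRAID it goes in the Plex template's volume mappings instead; the host paths are just examples, and the container-side path has to match whatever directory you enter in Plex's transcoder settings for the second change):
     # example only: map the container's /transcode to a folder on the cache drive
     docker run -d --name plex \
       -v /mnt/cache/appdata/plex:/config \
       -v /mnt/cache/appdata/plex/transcode:/transcode \
       needo/plex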
  18. @B_Sinn3d By my count, there are at least 5 of us that have this problem. LT and the docker authors don't seem to have this issue so there must be a solution. For now, I've given my docker.img 50GB and I still have to recreate it roughly once a week. I'm looking at moving the dockers off my unraid server until a solution is found.
  19. @ikosa, Backups could be another source of utilization. Sonarr does schedule automatic backups, but I don't know where they go. The backup directory should also probably be mapped to a directory on the cache drive.
  20. @ikosa, This problem would be the same with any container that generates log files. I don't know of an easy way to look inside the running containers to see which ones are growing inside of docker.img.
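One thing that might help: if the docker command on unRAID supports it (I believe it does), docker ps can report each running container's writable-layer size, which is exactly the part that grows inside docker.img:
     # show running containers with the size of their writable layer
     docker ps --size
Whichever container's SIZE column keeps climbing between checks is the one writing inside docker.img.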
  21. @jimbobulator Here are the docker images on my machine:
REPOSITORY                     TAG     IMAGE ID       CREATED         VIRTUAL SIZE
binhex/arch-couchpotato        latest  7de43ac6e0dc   10 days ago     628.6 MB
binhex/arch-sonarr             latest  14b30ef80694   5 weeks ago     905.4 MB
binhex/arch-moviegrabber       latest  f333212ec60a   6 weeks ago     518.2 MB
google/cadvisor                latest  175221acbf89   11 weeks ago    19.92 MB
gfjardim/logitechmediaserver   latest  465b1e79d3c8   4 months ago    615.9 MB
needo/plex                     latest  8906416ebf13   4 months ago    603.9 MB
gfjardim/btsync                latest  69d6ec367640   5 months ago    297.7 MB
needo/mariadb                  latest  566c91aa7b1e   14 months ago   590.6 MB
I note that we have several in common: binhex/arch-couchpotato, binhex/arch-sonarr, and needo/plex. This provided me the clue to focus on these three containers. I've noticed that Sonarr seems to have a problem downloading torrents lately, and the timing coincides with when I got the first message regarding utilization of the docker.img file. It could be that Sonarr is writing error messages related to failed torrents into a log file that is continually growing inside of docker.img. This could explain why sometimes the utilization jumps quickly and sometimes it grows relatively slowly -- on days that Sonarr tries to download a lot of torrents, it would generate a lot more error log entries than on days when there is nothing on Sonarr's calendar to download. This would also explain why the problem only surfaced last week. Since we never commit the docker image with the error logs, the size of the image as reported by the docker images command never changes. One way to fix this is to find the directory the Sonarr container uses to write its log files and map that to a directory on the cache drive. I don't know enough about docker and Sonarr to find the right directory, but maybe binhex could help us here. It may be that a container's directories for log files should always be mapped to a directory on the cache drive to prevent them from filling up docker.img.
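On the log-file theory: besides whatever the apps write inside their containers, Docker itself keeps each container's console output in a JSON log under /var/lib/docker/containers, and those logs also live inside docker.img. Something along these lines should show whether any of them is getting big (paths assume the stock setup):
     # list Docker's per-container JSON logs, largest last
     du -h /var/lib/docker/containers/*/*-json.log | sort -h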
  22. Running unRAID 6.1.0. I got the warning message "Docker high image disk utilization" this morning. The first warning said utilization was 72%; about an hour later a second warning said utilization was 73%. From Settings->Docker I see this:
Label: none uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 16.49GiB
devid 1 size 25.00GiB used 19.04GiB path /dev/loop0
btrfs-progs v4.1.2
From cAdvisor I see this:
google/cadvisor latest 175221acbf890310cc61dc3d 19.00 MiB 7/1/2015, 5:06:45 PM
gfjardim/btsync latest 69d6ec3676409cd60299b773 283.96 MiB 3/27/2015, 5:36:30 AM
binhex/arch-moviegrabber latest f333212ec60ad6a58ab45984 494.22 MiB 8/7/2015, 9:14:54 AM
needo/mariadb latest 566c91aa7b1e209ddd41e5b0 563.20 MiB 7/11/2014, 4:53:52 AM
needo/plex latest 8906416ebf13bada755e356a 575.99 MiB 5/1/2015, 6:24:20 AM
gfjardim/logitechmediaserver latest 465b1e79d3c88e69ab4c7cda 591.59 MiB 5/18/2015, 8:57:01 AM
binhex/arch-couchpotato latest 7de43ac6e0dc047fbbccc125 599.44 MiB 9/9/2015, 7:24:48 AM
binhex/arch-sonarr latest 14b30ef806943549665a558f 863.42 MiB 8/10/2015, 4:12:11 AM
The only thing I can think of to do is wait a few hours and then check cAdvisor again to see which container has increased in size. Can anyone offer any better suggestions for diagnosing the problem?
Update: I just got another warning saying that utilization is now 74%, so in 90 minutes utilization of the docker.img file increased by 1%. The cAdvisor data shows that all of the containers are exactly the same size as before:
google/cadvisor latest 175221acbf890310cc61dc3d 19.00 MiB 7/1/2015, 5:06:45 PM
gfjardim/btsync latest 69d6ec3676409cd60299b773 283.96 MiB 3/27/2015, 5:36:30 AM
binhex/arch-moviegrabber latest f333212ec60ad6a58ab45984 494.22 MiB 8/7/2015, 9:14:54 AM
needo/mariadb latest 566c91aa7b1e209ddd41e5b0 563.20 MiB 7/11/2014, 4:53:52 AM
needo/plex latest 8906416ebf13bada755e356a 575.99 MiB 5/1/2015, 6:24:20 AM
gfjardim/logitechmediaserver latest 465b1e79d3c88e69ab4c7cda 591.59 MiB 5/18/2015, 8:57:01 AM
binhex/arch-couchpotato latest 7de43ac6e0dc047fbbccc125 599.44 MiB 9/9/2015, 7:24:48 AM
binhex/arch-sonarr latest 14b30ef806943549665a558f 863.42 MiB 8/10/2015, 4:12:11 AM
Settings->Docker now shows:
Label: none uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 17.00GiB
devid 1 size 25.00GiB used 19.04GiB path /dev/loop0
btrfs-progs v4.1.2
Can I conclude that the containers are not growing? Is there anything else in docker.img that could be growing?
Update 2: It's been about 3 hours and now utilization is at 76%. According to cAdvisor there has been no change in size for any of the containers. However, Settings->Docker now shows this:
Label: none uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 17.42GiB
devid 1 size 25.00GiB used 20.04GiB path /dev/loop0
btrfs-progs v4.1.2
Update 3: Updated to unRAID 6.1.2. docker.img utilization continued to increase. When utilization hit 80%, I deleted docker.img and rebuilt it following the directions in the sticky post in this forum. After rebuilding, Settings->Docker now shows this:
Label: none uuid: 1c009fb0-4cb4-4574-8ee3-3a08847d4754
Total devices 1 FS bytes used 3.28GiB
devid 1 size 25.00GiB used 6.04GiB path /dev/loop0
btrfs-progs v4.1.2
Although docker.img is now much less fully utilized, it is still growing, so I don't think this solved the problem.
Update 4: Left things alone overnight. In the morning Settings->Docker showed:
Total devices 1 FS bytes used 5.48GiB
devid 1 size 25.00GiB used 8.04GiB path /dev/loop0
cAdvisor says each container is the same size as it was yesterday. I don't know what else could be growing. I have now stopped each of the containers to see if utilization continues to rise even with all containers stopped.
Update 5: With all containers stopped, there has been no change in utilization of docker.img for the past 2 hours. Going to start one container at a time, starting with needo/plex.
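To make the one-container-at-a-time test easier to read, I'm thinking of logging the btrfs usage every few minutes while each container runs, roughly like this (the log path on the flash drive is just an example):
     # record docker.img usage every 5 minutes with a timestamp
     while true; do
       echo "=== $(date) ===" >> /boot/docker-growth.log
       btrfs filesystem df /var/lib/docker >> /boot/docker-growth.log
       sleep 300
     done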
  23. RE: Photo management Installing your PhotoShow docker could not have been easier. However, when I want to leave a comment about a photo, I notice that hitting the space bar a second time in the comments field makes the picture zoom to full screen. Were it not for this behavior, PhotoShow would be my first choice over Piwigo and Lychee -- your other photo management dockers. I am curious to know if anyone else gets the same behavior, so I can rule out the possibility that something else on my computer is creating this problem. Even though it is not yet working well for me, I truly appreciate your efforts to make dockers out of so many applications. Best regards, Kamhighway Update: Found the PhotoShow demo site at http://www.photoshow-gallery.com/demo/ Log in as anonymous/password. Find a picture and enter a comment. When you type a second space character, the demo site goes into full screen mode. So it appears that this is a problem with PhotoShow, not with the docker.
  24. Very nice! Thank you for making this available to the community.