cambriancatalyst Posted November 12, 2020 (edited)

Good afternoon,

Sometime this afternoon my Docker service began failing. I did not attempt to upgrade anything on my Unraid machine; it seems to have happened out of the blue. Please see the output below, and please let me know if I can provide any additional information that would help in troubleshooting. Thank you very much to anyone with the time to respond.

Quote:
Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 681
Couldn't create socket: [111] Connection refused
Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 853
Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 681
Couldn't create socket: [111] Connection refused
Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 919
No Docker containers installed

Edited November 12, 2020 by cambriancatalyst: added the "No Docker containers installed" message
trurl Posted November 12, 2020 Share Posted November 12, 2020 12 minutes ago, cambriancatalyst said: provide any additional information You should always Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread. Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

Please find it attached below. Thank you!

tower-diagnostics-20201112-1244.zip
JorgeB Posted November 12, 2020 Share Posted November 12, 2020 Cache pool is completely allocated, you need to run a balance, see here, you'll also need to recreate the docker image once it's fixed. P.S. you should update to latest beta and the docker image is very large, probably something configured. Quote Link to comment
trurl Posted November 12, 2020 Share Posted November 12, 2020 1 minute ago, JorgeB said: need to recreate the docker image once it's fixed. Why have you allocated 100G for docker.img? 20G is usually much more than enough, but I see you have already used 39 of the 100G. I have 17 dockers and they are using less than half of 20G docker.img Making docker.img larger won't fix problems with filling and corrupting it. It will only make it take longer to fill. And your docker.img is indeed corrupt. You will have to recreate it (set it to use only 20G) and reinstall your dockers using the Previous Apps feature. But, reinstalling your dockers won't be enough since you obviously have one or more of you docker applications misconfigured. The usual reason for filling docker.img is an application writing to a path that isn't mapped to Unraid storage. Typical mistakes are specifiying a path within the application using a different upper/lower case than in the mappings (Linux is case-sensitive so /downloads is different from /Downloads), or specifiying a relative path (not beginning in /). Probably the best idea after you get your cache fixed is to recreate docker.img at 20G, and instead of reinstalling your containers, see if we can figure out what you have done wrong one application at a time. Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

I've run the balance, and the cache was able to successfully relocate some fifty-odd chunks. If I delete the image and restart Docker, will it automatically generate a new image? Once I have the new image up, I'll go through each application individually (eyeing the configuration for each) to see if there's anything off with my container mappings. Is there anything else I should do to try to identify the container causing the data leak?

I plan on updating to the latest beta as soon as I have Docker back to normal. Is it advised that I update first?

Also, a bit unrelated, but I've been using the vfio-pci plugin for isolating USB IOMMU groups for passthrough in the beta (without problems), despite it showing as unsupported in Fix Common Problems. Should I stop doing that?

Thank you both very much for all the support!
JorgeB Posted November 12, 2020 Share Posted November 12, 2020 12 minutes ago, cambriancatalyst said: If I delete the image and restart docker will it automatically generate a new image? See here. Quote Link to comment
trurl Posted November 12, 2020 Share Posted November 12, 2020 17 minutes ago, cambriancatalyst said: see if there's anything off with my container mappings. The problem is often a path specified within the application not matching a mapping. Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

Oh, alright, so I would just have to avoid the application in that case.

I've got Docker back up and running, but I do seem to be getting a weird message in my log over and over again. Would one of you be able to tell me if this is serious? I apologize for my ignorance.

Quote:
Nov 12 14:19:48 Tower kernel: BTRFS info (device loop2): no csum found for inode 913 start 36864
Nov 12 14:19:48 Tower kernel: BTRFS warning (device loop2): csum failed root 4475 ino 913 off 36864 csum 0xca69d581 expected csum 0x00000000 mirror 1
Nov 12 14:19:48 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 10479583232 wanted 555074 found 555064
Nov 12 14:19:48 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 10479583232 wanted 555074 found 555064

Thank you to both of you, again.
trurl Posted November 12, 2020 Share Posted November 12, 2020 1 hour ago, trurl said: You should always Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread. Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

Apologies, I thought the previous one might suffice. Please see below.

tower-diagnostics-20201112-1424.zip
trurl Posted November 12, 2020 Share Posted November 12, 2020 Doesn't look like you deleted the corrupt docker.img Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

I deleted the docker image again and am receiving the log output below. Please also see the attached screenshot of the docker volume info, and the diagnostics. Thank you again for your help.

Quote:
Nov 12 14:48:56 Tower kernel: BTRFS info (device loop2): no csum found for inode 913 start 36864
Nov 12 14:48:56 Tower kernel: BTRFS warning (device loop2): csum failed root 4475 ino 913 off 36864 csum 0xca69d581 expected csum 0x00000000 mirror 1
Nov 12 14:48:58 Tower kernel: verify_parent_transid: 10 callbacks suppressed
Nov 12 14:48:58 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 10479583232 wanted 555074 found 555064
Nov 12 14:48:58 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 10479583232 wanted 555074 found 555064
Nov 12 14:49:01 Tower kernel: btrfs_lookup_bio_sums: 2 callbacks suppressed
Nov 12 14:49:01 Tower kernel: btrfs_print_data_csum_error: 2 callbacks suppressed
[the same "no csum found", "csum failed", and "parent transid verify failed" messages for device loop2 repeat every second through 14:49:02]

tower-diagnostics-20201112-1450.zip
trurl Posted November 12, 2020 Share Posted November 12, 2020 According to that screenshot and diagnostics (and previous diagnostics now that I look again), docker.img is now /dev/loop4, don't know why /dev/loop2 is still hanging around. Maybe try rebooting. Quote Link to comment
cambriancatalyst Posted November 12, 2020 (Author)

Okay, thanks for confirming. I'll have to wait for a couple of new drives to finish preclearing and will then reboot the system. I'll report back here, or submit a new post, if I encounter problems after rebooting. Thank you both again for all your help!