Squid Posted March 30, 2018

42 minutes ago, Uncledome said: Just hangs right after "Starting diagnostic collection".

Curious about something, if you haven't already rebooted. Can you open another terminal and run:

cd /
ls

There's going to be a temporary folder in there named (in your case) Tower-diagnostics-Today'sDate&Time. Can you then run:

ls THATFOLDERNAME/system

and post the output?
Uncledome Posted March 30, 2018

9 minutes ago, Squid said: Curious about something if you haven't already rebooted. …

Sure. Output of cd / and ls:

root@Tower:~# cd /
root@Tower:/# ls
bin/  boot/  dev/  etc/  home/  init@  lib/  lib64/  mnt/  proc/  root/  run/  sbin/  sys/  tmp/
tower-diagnostics-20180330-0840/  tower-diagnostics-20180330-0841/  tower-diagnostics-20180330-0848/
usr/  var/

and output of ls on the diagnostics folder:

root@Tower:/# ls tower-diagnostics-20180330-0848/
config/  logs/  qemu/  shares/  smart/  system/  unRAID-6.5.0.txt
root@Tower:/#

I am currently still waiting for your answer before forcing the server down again.
bonienl Posted March 30, 2018

Can you do:

tree /tower-diagnostics-20180330-0848
Uncledome Posted March 30, 2018

1 minute ago, bonienl said: Can you do tree /tower-diagnostics-20180330-0848

Sure:

root@Tower:/# tree tower-diagnostics-20180330-0848/
tower-diagnostics-20180330-0848/
├── config
├── logs
├── qemu
├── shares
├── smart
├── system
│   ├── df.txt
│   ├── lscpu.txt
│   ├── lsmod.txt
│   ├── lsof.txt
│   ├── lspci.txt
│   ├── lsscsi.txt
│   ├── lsusb.txt
│   ├── memory.txt
│   ├── ps.txt
│   ├── top.txt
│   └── vars.txt
└── unRAID-6.5.0.txt

6 directories, 12 files
root@Tower:/#
Squid Posted March 30, 2018

What do these do?

df -h

and

ifconfig -a -s

Both commands should complete instantaneously. Where diagnostics is hanging up might prove important to bonienl's investigation.
Uncledome Posted March 30, 2018

Just now, Squid said: What does this do? df -h and ifconfig -a -s …

df -h does nothing; it just shows an empty line without text, which cannot be exited with CTRL+C, so I have to restart my terminal.

ifconfig -a -s shows:

root@Tower:~# ifconfig -a -s
Iface      MTU    RX-OK    RX-ERR RX-DRP RX-OVR TX-OK    TX-ERR TX-DRP TX-OVR Flg
bond0      1500   23636415 0      38     0      21457397 0      0      0      BMPmRU
br0        1500   5307429  0      998    0      4803019  0      0      0      BMRU
docker0    1500   1528385  0      0      0      1729053  0      0      0      BMRU
erspan0    1450   0        0      0      0      0        0      0      0      BM
eth0       1500   23579910 0      38     0      21457397 0      0      0      BMsRU
eth1       1500   56505    0      0      0      0        0      0      0      BMsRU
gre0       1476   0        0      0      0      0        0      0      0      O
gretap0    1462   0        0      0      0      0        0      0      0      BM
ip_vti0    1364   0        0      0      0      0        0      0      0      O
lo         65536  158225   0      0      0      158225   0      0      0      LRU
sit0       1480   0        0      0      0      0        0      0      0      O
tunl0      1480   0        0      0      0      0        0      0      0      O
veth3121   1500   1190196  0      0      0      1389978  0      0      0      BMRU
veth4e02   1500   0        0      0      0      4767     0      0      0      BMRU
veth6f3e   1500   3166     0      0      0      7943     0      0      0      BMRU
veth78c8   1500   11491    0      0      0      16615    0      0      0      BMRU
vethb6b7   1500   0        0      0      0      4722     0      0      0      BMRU
vethcf79   1500   3373     0      0      0      8134     0      0      0      BMRU
vethdad5   1500   4939     0      0      0      10547    0      0      0      BMRU
vethec2c   1500   0        0      0      0      4785     0      0      0      BMRU
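A df that wedges like this usually means one mounted filesystem has stopped responding, since df stats every mount point before printing anything. As a hedged sketch (assuming GNU coreutils df, and assuming the culprit is a network mount), you can exclude network filesystem types or put a deadline on the command:

```shell
# Skip network filesystem types so a dead NFS/CIFS mount can't hang df.
# (-x / --exclude-type is a GNU coreutils option.)
df -h -x nfs -x nfs4 -x cifs

# Or give the full df a deadline so it cannot wedge the terminal:
timeout 5 df -h || echo "df timed out -- likely a hung network mount"
```

If the excluded run returns instantly while the plain df hangs, that points squarely at one of the network mounts.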
Squid Posted March 30, 2018

Does this also hang the terminal?

df /var/lib/docker
Uncledome Posted March 30, 2018

1 minute ago, Squid said: Does this also hang the terminal? df /var/lib/docker

No, this works:

root@Tower:~# df /var/lib/docker
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/loop3      20971520 6844000  12667136  36% /var/lib/docker
bonienl Posted March 30, 2018

df /dev/loop2

(assuming loop2 is assigned to docker)
Uncledome Posted March 30, 2018

1 minute ago, Squid said: it'll be loop3

Yea, it is, and I've posted the results right before bonienl.
bonienl Posted March 30, 2018

I'd like to know if any of the generated files have no content. Can you do:

ls -l /tower-diagnostics-20180330-0848/system
Uncledome Posted March 30, 2018

1 minute ago, bonienl said: I like to know if any of the generated files have no content …

There you go:

root@Tower:~# ls -l /tower-diagnostics-20180330-0848/system
total 192
-rw-rw-rw- 1 root root     0 Mar 30 08:48 df.txt
-rw-rw-rw- 1 root root  1341 Mar 30 08:48 lscpu.txt
-rw-rw-rw- 1 root root  2701 Mar 30 08:48 lsmod.txt
-rw-rw-rw- 1 root root  4950 Mar 30 08:48 lsof.txt
-rw-rw-rw- 1 root root 17032 Mar 30 08:48 lspci.txt
-rw-rw-rw- 1 root root  1854 Mar 30 08:48 lsscsi.txt
-rw-rw-rw- 1 root root   698 Mar 30 08:48 lsusb.txt
-rw-rw-rw- 1 root root   252 Mar 30 08:48 memory.txt
-rw-rw-rw- 1 root root 52961 Mar 30 08:48 ps.txt
-rw-rw-rw- 1 root root 40168 Mar 30 08:48 top.txt
-rw-rw-rw- 1 root root 51030 Mar 30 08:48 vars.txt
Squid Posted March 30, 2018

For giggles, this should scroll through a ton of stuff and end back at the terminal:

du /var/lib/docker

Does it stop anywhere? I don't need the whole million lines from it, just where it stops.
Uncledome Posted March 30, 2018

2 minutes ago, Squid said: For giggles, this should scroll through a ton of stuff … Just where it stops.

These are the last few lines before it stops:

0        /var/lib/docker/unraid/ca.backup2.datastore
0        /var/lib/docker/unraid
0        /var/lib/docker/containerd/daemon/io.containerd.content.v1.content/ingest
0        /var/lib/docker/containerd/daemon/io.containerd.content.v1.content
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/active
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/view
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/snapshots
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.overlayfs/snapshots
0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.overlayfs
648      /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/734d4b9da9e20a4b96828270d6c3f6d7ffc10e3dcf3c8554247673e76b344dd4
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/19647dee6de0be1ca75840af5a88b29aea3aa6799b455ee1995094b51f83a2f0
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/60ddf788b24a00021d4034e37ec760f34467a1c9179437836e43704ce1e445cc
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/9f4716af81f9e00f5e94c53ff602c56902e88c05fbd6f22e9e2296c8e43ff7dd
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a3ecdae744a46beea66d9caec718aec884a4b0793f67f5e5fe17a83f1318fa08
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/dcf1c167e8731cfdeb87f2081e66eff6ec1812f9a5ee3d4c15f3caae0f39906d
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/68e3ad2825e29640a8d090fe0fd3090a3353d3f66cee11ceb042b7fddad8f7ef
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/43349a50c0d6c50d4987fc88996c481cedfce67c2532fe1b44be8d237b4bb380
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7e44a412169eee513eafd0523eed2eabf216ce7d578640174a3ccb211b9e7cfa
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8294edae393ac797db55b6060f621bc1f9efade4645888c2e4c10f6ddf8bdb93
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby
0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux
648      /var/lib/docker/containerd/daemon
648      /var/lib/docker/containerd
0        /var/lib/docker/tmp
0        /var/lib/docker/runtimes
39355848 /var/lib/docker
root@Tower:~#
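If the suspicion is that du wanders into an unresponsive bind-mounted share, du's -x flag (stay on the starting filesystem) is a quick way to test that theory. A self-contained sketch against a scratch directory; on the server you would point it at /var/lib/docker instead:

```shell
# Build a small scratch tree so the example runs anywhere.
scratch=$(mktemp -d)
mkdir -p "$scratch/sub"
echo data > "$scratch/sub/file"

# -x keeps du on the filesystem it started on, so it cannot descend
# into a mount point (e.g. a remote share) below the starting directory.
du -x "$scratch"

rm -rf "$scratch"
```

If `du -x /var/lib/docker` completes while the plain du hangs, the hang is inside a mounted path under the docker tree rather than in the docker files themselves.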
Squid Posted March 30, 2018

That throws one theory out. How about:

df -t tmpfs
Uncledome Posted March 30, 2018

1 minute ago, Squid said: Threw one theory out how about df -t tmpfs

There you go:

root@Tower:~# df -t tmpfs
Filesystem     1K-blocks Used Available Use% Mounted on
tmpfs              32768  252     32516   1% /run
tmpfs           16461524    0  16461524   0% /dev/shm
cgroup_root         8192    0      8192   0% /sys/fs/cgroup
tmpfs             131072  828    130244   1% /var/log
shm                65536    0     65536   0% /var/lib/docker/containers/29665f5836321c13963824e87a964e6f041a7654c5d5a56d6878fc35bf8e7760/shm
shm                65536    0     65536   0% /var/lib/docker/containers/9f4716af81f9e00f5e94c53ff602c56902e88c05fbd6f22e9e2296c8e43ff7dd/shm
shm                65536    0     65536   0% /var/lib/docker/containers/034adf9c55653bb881968ca8d44355d9bdcb91702daabe289ca5fd0ed82a64d3/shm
shm                65536    4     65532   1% /var/lib/docker/containers/a3ecdae744a46beea66d9caec718aec884a4b0793f67f5e5fe17a83f1318fa08/shm
shm                65536    4     65532   1% /var/lib/docker/containers/dcf1c167e8731cfdeb87f2081e66eff6ec1812f9a5ee3d4c15f3caae0f39906d/shm
shm                65536    8     65528   1% /var/lib/docker/containers/68e3ad2825e29640a8d090fe0fd3090a3353d3f66cee11ceb042b7fddad8f7ef/shm
shm                65536    0     65536   0% /var/lib/docker/containers/1a2df515f75397057758aad73897792dc94411ac3aa98b7ae0bfe343fb70c366/shm
shm                65536    0     65536   0% /var/lib/docker/containers/43349a50c0d6c50d4987fc88996c481cedfce67c2532fe1b44be8d237b4bb380/shm
shm                65536    4     65532   1% /var/lib/docker/containers/7e44a412169eee513eafd0523eed2eabf216ce7d578640174a3ccb211b9e7cfa/shm
shm                65536    0     65536   0% /var/lib/docker/containers/8294edae393ac797db55b6060f621bc1f9efade4645888c2e4c10f6ddf8bdb93/shm
bonienl Posted March 30, 2018

Can you also post the syslog file (/var/log/syslog)? Thanks for helping out.
Uncledome Posted March 30, 2018

Just now, bonienl said: Can you also post the syslog file (/var/log/syslog)? …

Sure, I've attached the syslog file because it's too much to quote.

syslog
bonienl Posted March 30, 2018

13 minutes ago, Uncledome said: i've attached the syslog file

Do any of your containers have a folder mapping to the remote share you set up with Unassigned Devices?
Uncledome Posted March 30, 2018

1 minute ago, bonienl said: Does any or your containers have a folder mapping to the remote share …

Yes, multiple containers should have a mapping in their config. Off the top of my head those are: Plex, Sabnzbd, Sonarr, Radarr, JDownloader, Deluge, and maybe a few others (all my media is stored on the remote share).
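One way to confirm exactly which containers bind that share, rather than going from memory, is to ask Docker for each container's mounts. A sketch, with two stated assumptions: the docker CLI is available, and the Unassigned Devices share is mounted somewhere under /mnt/disks (adjust the grep to your actual mount point):

```shell
# Print every running container's name and host-side mount sources,
# then filter for the assumed unassigned-devices prefix /mnt/disks.
if command -v docker >/dev/null 2>&1; then
    for c in $(docker ps -q); do
        docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' "$c"
    done | grep /mnt/disks || echo "no running container maps /mnt/disks"
else
    echo "docker CLI not found"
fi
```

Any container listed here holds the remote share open, which matters if the share itself is what is hanging.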
pwm Posted March 30, 2018

Just curious. The command below is likely to show one or more processes that are in state "D" (8th column of ps output):

ps aux

If you find any process with state "D", what is the content of:

ls -l /proc/<pid>/fd/

where <pid> is the PID (second column of ps output) of the process in state D?

Regarding remote shares: they can sometimes hang badly. It is sometimes possible to recover with

umount -l <share>

but that doesn't always solve the problem; often a reboot will be required.
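The manual steps pwm describes can be rolled into one loop: find every D-state PID, then dump its open file descriptors. A sketch, assuming a procps-style ps on Linux:

```shell
# For each process in uninterruptible sleep (state "D"), print its PID,
# command name, and open file descriptors. A process blocked on a dead
# network mount often shows the offending path among its fds.
for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
    echo "== PID $pid ($(cat /proc/$pid/comm 2>/dev/null)) =="
    ls -l "/proc/$pid/fd/" 2>/dev/null
done
```

If there are no D-state processes at the moment you run it, the loop simply prints nothing.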
Uncledome Posted March 30, 2018

3 minutes ago, pwm said: Just curios. The command below is likely to show one or more processes that are in state "D" …

There are multiple PIDs with status D. I tried the ls -l with one of them:

root@Tower:~# ls -l /proc/26457/fd/
total 0
lrwx------ 1 root root 64 Mar 30 11:47 0 -> socket:[811239]
l-wx------ 1 root root 64 Mar 30 11:47 1 -> pipe:[857897]
lrwx------ 1 root root 64 Mar 30 11:47 2 -> /dev/null
lrwx------ 1 root root 64 Mar 30 11:47 3 -> socket:[863915]
root@Tower:~#

As for the remote mapping, I never had any issues with it before. Prior to unRAID (which I am currently trialing) I used an Ubuntu server for my Docker containers (hosted on a XenServer Free Edition server), where I mapped the remote share in fstab and passed it through to the containers. First time having these issues in my life.
bonienl Posted March 30, 2018

Could you try again with unRAID 6.5.1-rc2 installed?

In the syslog this is reported:

Mar 30 08:38:06 Tower kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000038
Mar 30 08:38:06 Tower kernel: IP: tcp_push+0x4e/0xee

After this a call trace is recorded and you get the famous "connection timed out" message you mentioned earlier. From this point on, communication hangs.

The above should be resolved in rc2, which has the kernel patch for the TCP bug.
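For anyone hitting the same wall, the oops is easy to check for before and after the upgrade. A sketch, assuming the standard /var/log/syslog location; the grep pattern matches the lines quoted above:

```shell
# Look for the kernel oops bonienl quoted; fall back to a friendly
# message when the pattern (or the syslog file itself) is absent.
grep -n 'BUG: unable to handle kernel NULL pointer' /var/log/syslog 2>/dev/null \
    || echo "no NULL pointer oops found (or no syslog at this path)"

# Record the running kernel, to compare before/after the 6.5.1-rc2 upgrade.
uname -r
```

If the grep comes back with matches dated after the upgrade, the kernel patch did not cover your case and it is worth reporting.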