sekrit Posted May 7, 2019

My network connection keeps failing. While trying to track down the problem, I found these entries in syslog.txt:

May 5 12:30:48 7960UNRAID rc.inet1: ip link set eth0 master bond0 down
May 5 12:30:49 7960UNRAID kernel: bond0: Enslaving eth0 as a backup interface with a down link
May 5 13:00:30 7960UNRAID kernel: eth0: renamed from vethe4b16c9
May 5 20:00:02 7960UNRAID kernel: vethe4b16c9: renamed from eth0
May 5 20:00:23 7960UNRAID kernel: eth0: renamed from vethcbfbc2f

Does any of this point to what could be causing the problem?
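A side note for anyone reading along: the "eth0: renamed from vethXXXX" messages are typically the kernel logging virtual ethernet (veth) pairs being moved into and out of Docker container network namespaces, so they often coincide with containers starting and stopping rather than with a hardware fault. Below is a minimal sketch of filtering such events out of a saved syslog; the sample file and its path are stand-ins, not the poster's actual log:

```shell
# Sample data standing in for the real syslog (path is an assumption)
cat > /tmp/syslog_sample.txt <<'EOF'
May  5 12:30:49 7960UNRAID kernel: bond0: Enslaving eth0 as a backup interface with a down link
May  5 13:00:30 7960UNRAID kernel: eth0: renamed from vethe4b16c9
EOF

# Count bonding/veth events; on a live system you could additionally
# inspect /proc/net/bonding/bond0 for the bonding driver's view of
# each slave interface.
grep -cE 'bond0|veth' /tmp/syslog_sample.txt   # prints 2
```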
Frank1940 Posted May 7, 2019

I would suggest that you post the Diagnostics file from this server. If you lose access to the WebGUI, log in via the console and type diagnostics on the command line. That will write the Diagnostics file to the logs folder on your flash drive. Upload that entire file with your next post.
sekrit Posted May 7, 2019 Author

12 hours ago, Frank1940 said:
I would suggest that you post the Diagnostics file from this server. If you lose access to the WebGUI, log in via the console and type diagnostics on the command line. That will write the Diagnostics file to the logs folder on your flash drive. Upload that entire file with your next post.

It seems to be a complete freeze. I go to log in as root (I haven't changed the login since I'm still working out bugs)... and I get no activity from the keyboard. So, what do you advise? Should I reboot and then do something, or pull the USB without rebooting? Please note that I had run "troubleshoot mode" from CA (some plugin or another, LOL... the one for alerts and fixes), which is where I got those initial syslog lines. However, it produces a zip file with a ton of log and other files in it. Which file should I post?
Frank1940 Posted May 7, 2019

Post all of the files. If you have multiple files that look similar, sort them by date and post only the recent ones that appear to be related to the problem. One (or more) is probably a zipped file. Do NOT unzip it.
sekrit Posted May 7, 2019 Author

3 minutes ago, Frank1940 said:
Post all of the files. If you have multiple files that look similar, sort them by date and post only the recent ones that appear to be related to the problem. One (or more) is probably a zipped file. Do NOT unzip it.

Here we go: 7960unraid-diagnostics-20190506-0510.zip
Frank1940 Posted May 7, 2019

You should also have a file named FCPsyslog_tail.txt in the logs directory. Please post that one as well.
sekrit Posted May 7, 2019 Author

2 hours ago, Frank1940 said:
You should also have a file named FCPsyslog_tail.txt in the logs directory. Please post that one as well.

Here you go, good Sir. FCPsyslog_tail.txt
Frank1940 Posted May 8, 2019

OK, I requested that you post these files because they are always needed when you have issues. (Snippets of syslogs are generally useless for figuring out what is going on.) I was hoping that a real guru would step up to help you. Since that has not happened, I will point out some things I noticed.

From what I can see in the diagnostics file, your network connection (the Cat5 cable) is plugged into the eth1 port, not the eth0 port. It appears that this is actually working correctly. (I am surmising this is the reason for the messages in your snippet.)

First, you have a problem with one of your Docker apps. Look at the docker.txt file in the logs folder of the diagnostics file. You should probably stop Docker from starting until you can figure out what is going on there.

You also appear to have four SSDs installed on this server. What are they being used for? Have you checked their file systems to make sure they are OK? (One of these SSDs, according to its SMART data, has had a very large number of dirty shutdowns.)
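The dirty-shutdown count mentioned above can be read from SMART data with smartctl (from the smartmontools package). A sketch under stated assumptions: the device name is hypothetical, the exact attribute wording varies by drive ("Unsafe Shutdowns" on NVMe, often "Unexpected_Power_Loss" on SATA), and the sample text below stands in for real smartctl output so the filter can be shown working:

```shell
# On the real server you would run something like (device name is an
# assumption -- substitute your SSD):
#   smartctl -A /dev/nvme0 | grep -i 'unsafe shutdown'
#
# Sample NVMe SMART output standing in for a real device:
cat > /tmp/smart_sample.txt <<'EOF'
Power Cycles:                       143
Unsafe Shutdowns:                   87
Media and Data Integrity Errors:    0
EOF

# Pull just the unsafe-shutdown line
grep -i 'unsafe shutdown' /tmp/smart_sample.txt
```

A steadily climbing unsafe-shutdown count on one drive lines up with the cold restarts described later in the thread.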
sekrit Posted May 8, 2019 Author (edited)

30 minutes ago, Frank1940 said:
OK, I requested that you post these files because they are always needed when you have issues. (Snippets of syslogs are generally useless for figuring out what is going on.) I was hoping that a real guru would step up to help you. Since that has not happened, I will point out some things I noticed. From what I can see in the diagnostics file, your network connection (the Cat5 cable) is plugged into the eth1 port, not the eth0 port. It appears that this is actually working correctly. (I am surmising this is the reason for the messages in your snippet.) First, you have a problem with one of your Docker apps. Look at the docker.txt file in the logs folder of the diagnostics file. You should probably stop Docker from starting until you can figure out what is going on there. You also appear to have four SSDs installed on this server. What are they being used for? Have you checked their file systems to make sure they are OK? (One of these SSDs, according to its SMART data, has had a very large number of dirty shutdowns.)

First of all, thank you so very much for your time and effort. I appreciate it greatly.

NIC ports: On the motherboard (ASUS Rampage VI Extreme), eth0 is a 10GbE port and eth1 is a 1GbE port.

Docker: A number of my system freezes have happened while attempting transfers within Krusader (trying to back up my WD My Cloud data). So if I can't transfer data, I'm totally hosed.

SSDs: 1 SATA (storage) and 3 NVMe (1 cache, 1 storage, 1 unassigned, passed through to Windows 10; it also works when not running Unraid).

Dirty shutdowns: After the system hangs, I have had no choice but to do a cold restart.

Edited May 8, 2019 by sekrit