razierklinge

Members
  • Content Count: 6
  • Community Reputation: 0 Neutral
  • Rank: Newbie
  1. I did a replace and did not start a new device. This worked on Ubuntu on my home PC. I then created a VM on my Unraid server using the same Ubuntu version and got stuck in the same place the CrashPlan docker does: trying to re-sign in after going through the replace-device prompts. The logs show the same errors I posted earlier in the thread, with connections being dropped/closed. So there is definitely something about how Unraid is configured that is breaking CrashPlan.
  2. What I ended up doing was creating an Ubuntu VM on my regular home PC and installing CrashPlan there. At first it seemed like I was running into the same problem during the sign-in process, but after I restarted the CrashPlanEngine I was able to log in successfully and the backup started running. I suppose this means something in my Unraid server's configuration is causing the inability to connect/back up. Looking at @Spritzup's message above, I realized I have a PiHole docker running that maps port 443. Is this a commonality between those of us having issues connecting to CrashPlan? It just seems strange that this setup ran for a while without any problems until recently. (A quick check for what is actually bound to port 443 is sketched after these posts.)
  3. This is what support said. Output of the command:

     /mnt/user/appdata/CrashPlanPRO/log/app.log:OS = Linux (4.19.56, amd64)
     /mnt/user/appdata/CrashPlanPRO/log/service.log:[08.30.19 22:38:57.446 INFO main up42.common.config.ServiceConfig] ServiceConfig:: OS = Linux
     /mnt/user/appdata/CrashPlanPRO/log/service.log:[08.30.19 22:38:57.831 INFO main om.code42.utils.SystemProperties] OS=Linux (4.19.56, amd64)

     (A grep along the lines of the one sketched after these posts would produce output like this.)
  4. Too late. I contacted them last week, and without any prompting from me the rep said he could see I'm running the app on an unsupported version of Linux and that I would need to run it from a supported OS before he could provide any support.
  5. So I'm seeing lots of stuff like this:

     [08.30.19 23:09:58.735 INFO erTimeoutWrk e42.messaging.peer.PeerConnector] PC:: Cancelling connection attempt due to timeout - pending connection=PendingConnection[timeout(ms) = 30000, startTime = Fri Aug 30 23:09:23 MST 2019, remotePeer = RemotePeer-[guid=4200, state=CONNECTING]; Session-null, pendingChannel = com.code42.messaging.network.nio.NioNetworkLayer$1@57340d4d]
     [08.30.19 23:09:58.737 INFO erTimeoutWrk 42.messaging.network.nio.Context] Channel became inactive. closedBy=THIS_SIDE, reason='Connect cancelled', channel=Context@1942087335[channelState=0], remote=[[4200@216.17.8.4:443(server)], transportPbK=X509.checksum(be4b2a71961d23dabb77ea3851a2fc8d)]
     [08.30.19 23:10:23.738 INFO re-event-2-4 abre.SabrePendingChannelListener] SABRE::Channel connect failed for guid 42, cause=io.netty.channel.ConnectTimeoutException: connection timed out: /162.222.41.12:443
     [08.30.19 23:10:23.739 INFO re-event-2-4 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. closedBy=THIS_SIDE, channel=[id: 0xe810da0f], sessionState=RemotePeer-[guid=42, state=CONNECTING]; Session-[localID=916848385999105934, remoteID=0, closed=false, remoteIdentity=ENDPOINT, local=null, remote=null]
     [08.30.19 23:11:13.740 INFO erTimeoutWrk e42.messaging.peer.PeerConnector] PC:: Cancelling connection attempt due to timeout - pending connection=PendingConnection[timeout(ms) = 30000, startTime = Fri Aug 30 23:10:38 MST 2019, remotePeer = RemotePeer-[guid=4200, state=CONNECTING]; Session-null, pendingChannel = com.code42.messaging.network.nio.NioNetworkLayer$1@2259404e]
     [08.30.19 23:11:13.740 INFO erTimeoutWrk 42.messaging.network.nio.Context] Channel became inactive. closedBy=THIS_SIDE, reason='Connect cancelled', channel=Context@1972117142[channelState=0], remote=[[4200@216.17.8.47:4282]]
     [08.30.19 23:12:13.743 INFO erTimeoutWrk e42.messaging.peer.PeerConnector] PC:: Cancelling connection attempt due to timeout - pending connection=PendingConnection[timeout(ms) = 30000, startTime = Fri Aug 30 23:11:43 MST 2019, remotePeer = RemotePeer-[guid=4200, state=CONNECTING]; Session-null, pendingChannel = com.code42.messaging.network.nio.NioNetworkLayer$1@2a52b0f]
     [08.30.19 23:12:13.743 INFO erTimeoutWrk 42.messaging.network.nio.Context] Channel became inactive. closedBy=THIS_SIDE, reason='Connect cancelled', channel=Context@2116878626[channelState=0], remote=[[4200@216.17.8.48:4282]]

     Here's the odd thing. I wiped out my install and even recreated docker.img. When I did all that I was able to log in and say I wanted to replace an existing backup. It said it would pull my settings and have me log in again, and THAT is when things go to shit. It sits on "Connecting..." indefinitely and those messages above are what I see in the logs. If I kill the docker and restart it, I am unable to log in and get "Unable to sign in cannot connect to server" messages. The only time I can log in is if I wipe out my container config and start fresh. I've gone into the container console and tried pinging the IPs it's unable to connect to, and that seems to work fine. I'm very confused why this is suddenly broken. (A TCP-level connectivity check for those IPs and ports is sketched after these posts, since ping alone doesn't exercise them.)
  6. My CrashPlan has been saying "unable to connect to destination" for weeks, and I have no clue what changed. It had been fine for months without any interaction. Any ideas what I could check? I've tried restarting my whole server, restarting the docker itself, and doing "rz, restart" in the internal CrashPlan console window. Nothing has made a difference.
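
On the port-443 overlap suspected in post 2: a quick way to confirm what is actually bound to 443 on the host and which containers publish it. This is a minimal sketch assuming the standard docker CLI and the ss utility are available on the Unraid host; nothing here is specific to this particular setup.

    # List every container with its published ports; look for anything mapping host port 443
    docker ps --format '{{.Names}}\t{{.Ports}}'

    # Show which host process is listening on 443 (e.g. the PiHole web UI or a reverse proxy)
    ss -tlnp | grep ':443 '

Note that the CrashPlan failures in the logs are outbound connections to remote port 443, which would not normally collide with a local listener on 443, but the check at least confirms or rules out the overlap.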
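
The support output quoted in post 3 looks like filename-prefixed grep matches from the CrashPlan log directory. Something along these lines would reproduce it; the exact command support ran was not shown, so this is an assumption:

    # Search the CrashPlan logs for the detected OS string;
    # with multiple files, grep prefixes each match with its file name
    grep -E 'OS ?= ?Linux' /mnt/user/appdata/CrashPlanPRO/log/*.log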
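
On the ping result in post 5: ping only proves ICMP reachability, while the log entries are TCP connect timeouts to ports 443 and 4282, so it is worth testing TCP specifically from inside the container. A sketch assuming nc (netcat) is installed there, with a plain-bash fallback; the IPs and ports are taken from the log excerpt above.

    # Try a TCP connection to the destinations the log shows timing out
    # (-z: just connect, -v: verbose, -w 5: five-second timeout)
    nc -zv -w 5 216.17.8.4 443
    nc -zv -w 5 216.17.8.47 4282

    # Fallback if nc is not present: bash's /dev/tcp pseudo-device
    timeout 5 bash -c 'cat < /dev/null > /dev/tcp/162.222.41.12/443' && echo open || echo 'blocked or timed out'

If these time out while the same commands succeed from the Ubuntu VM on the home PC, the problem sits in the Unraid host's networking (routing, firewall, or docker network) rather than in CrashPlan itself.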