[CONTAINER] CrashPlan & CrashPlan-Desktop



For the benefit of the rest:

 

1) Log into your unRAID server

2) Log into the CrashPlan docker: docker exec -it CrashPlan bash

3) Go to /etc/my_init.d

4) vi 01_config.sh

 

${TCP_PORT_4242} on line 46 needs to be replaced with ${SERVICE_PORT}

${TCP_PORT_4243} on line 47 needs to be replaced with ${BACKUP_PORT}

 

And indeed, then it works! Just restart the docker.
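
For anyone who'd rather script those edits than open vi, here's a minimal sketch run from the unRAID console (the container name and file path are taken from the steps above; the sed approach is my own, not part of the official fix, and the change lives inside the container, so it won't survive re-creating it):

# Patch the stale variable names inside the running container;
# -i.bak keeps a backup copy of the original script.
docker exec CrashPlan sed -i.bak \
    -e 's/${TCP_PORT_4242}/${SERVICE_PORT}/g' \
    -e 's/${TCP_PORT_4243}/${BACKUP_PORT}/g' \
    /etc/my_init.d/01_config.sh

# Restart so the corrected init script runs
docker restart CrashPlan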

 

Thanks for finding the bug. Container updated.

 

Thanks for the quick fix! I pulled in the updated container and it worked fine. One question: how do I update the WebUI link? It still points to the CrashPlan website. I saw somewhere that I can change it manually, but the new 6.2.1 Docker GUI doesn't have the WebUI variable exposed and I'm not sure how to do it on my own. Detailed instructions would be greatly appreciated.

Link to comment


Thanks, gfjardim - updated container resolves connectivity issues.

Link to comment

Mmmm... I wonder if these two methods of increasing the CrashPlan RAM allocation differ in any way...

 

If you can access the CP GUI, then by all means use the GUI to change the memory usage.

 

But some of us were stuck in a crash loop, where the app GUI wouldn't stay up long enough to make changes.

Link to comment


1. On the Docker Edit screen, make sure it is in Advanced View mode (if it's in Basic, click to switch to Advanced)

2. The proper default WebUI URL is: http://[IP]:[PORT:4280]/vnc.html?autoconnect=true&host=[IP]&port=[PORT:4280]

3. If you have modified the WEB_PORT variable to a port other than 4280, the port also needs to be changed in the URL above

4. Click the Apply button to save your edits
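
A worked example, in case step 3 isn't clear (the 4380 value is hypothetical; substitute whatever you set WEB_PORT to). If WEB_PORT were changed to 4380, the WebUI field would become:

http://[IP]:[PORT:4380]/vnc.html?autoconnect=true&host=[IP]&port=[PORT:4380]

unRAID fills in [IP] with the server's address and [PORT:4380] with the host port mapped to that container port.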

 

[Screenshot: Docker Edit screen in Advanced View showing the WebUI field]

Link to comment

 


Thank you!

Link to comment


Hoopster!! This is badass, man, thank you so much. No more hassling with the ID file.

Link to comment

So the fix is (a scripted equivalent follows the list):

  • shut down the CP docker
  • SSH to your server
  • type: cd /mnt/cache/appdata/CrashPlan/bin (or wherever your CP appdata files are)
  • type: nano run.conf
  • on the line that starts with SRV_JAVA_OPTS, change "-Xmx1024m" to "-Xmx2048m"
  • press CTRL-O to save, then CTRL-X to exit
  • type: exit
  • start the CP docker
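
The scripted equivalent, as a sketch (it assumes the same appdata path and default 1024 MB heap as the list above; adjust both to your setup):

# Stop the container, double the CrashPlan engine's Java heap,
# then start it again. -i.bak keeps a backup of run.conf.
docker stop CrashPlan
sed -i.bak 's/-Xmx1024m/-Xmx2048m/' /mnt/cache/appdata/CrashPlan/bin/run.conf
docker start CrashPlan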

 

Wow! I knew that my backup set (200k photos = 1TB, plus other stuff) was going to require additional memory - it did when it was running on my Win7 machine - but I figured I had some time to work out how to do that while CP got started.

 

Changing this a few minutes ago has made a world of difference in performance!

Link to comment

Go back to page 59 in this thread and start reading from there. Lots of users have had VNC black-screen issues. For many, it was fixed by a CrashPlan docker update. One user reported that Kaspersky antivirus was the problem; as soon as he turned it off, he was able to connect with VNC with no problems. Others found system settings that helped. It seems there is not always a single, simple cure for some of these strange issues.

 

I had the same problem once and then it just went away after who-knows-what update to the unRAID OS, Docker engine, container, or some other component. Not too comforting, I know. I, like everyone else, want a quick, sure cure for issues like this.

Link to comment

I'm getting exactly the same thing on multiple attempts at initial/clean builds (a black screen when attempting to connect via VNC). It seems the current docker has a defect of some kind for new installs.

 

Anyone else have any thoughts as to why I am just getting a black screen when trying to open the UI via unRAID or VNCing straight into it using TightVNC?

Link to comment

I finally got it to come up. I ended up removing the template and the image a couple of times, and then FINALLY it worked. It took me around a dozen tries to get it working. Kinda strange, IMO. There was a post saying this issue was fixed, but I guess it still lingers with a few system configs.

Link to comment

Where did the desktop docker go? I cannot update it anymore; it says it no longer exists.

 

The CrashPlan-Desktop docker no longer exists. Desktop (WebGUI via built-in VNC) functionality is now integrated with the CrashPlan server engine in the CrashPlan docker. You can connect via the built-in VNC or a stand-alone VNC client on port 4239, e.g. [server IP]::4239.
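
For example (192.168.1.100 is a placeholder for your server's IP; any VNC client that accepts the host::port form, such as TigerVNC's vncviewer, should work):

# Stand-alone VNC client straight to the built-in VNC server
vncviewer 192.168.1.100::4239

# Or the browser route via noVNC on the web port
http://192.168.1.100:4280/vnc.html?autoconnect=true&host=192.168.1.100&port=4280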

Link to comment

Only just noticed that my CrashPlan hasn't been working for the past few days. When I connect via VNC, it appears to be in a reboot cycle. The log shows this...

 

Openbox-Message: Unable to find a valid menu file "/var/lib/openbox/debian-menu.xml"
ERROR: openbox-xdg-autostart requires PyXDG to be installed



./run: line 20: 2334 Killed $JAVACOMMON $SRV_JAVA_OPTS -classpath "$TARGETDIR/lib/com.backup42.desktop.jar:$TARGETDIR/lang" com.backup42.service.CPService > /config/log/engine_output.log 2> /config/log/engine_error.log

/opt/startapp.sh: line 22: 2488 Killed ${JAVACOMMON} ${GUI_JAVA_OPTS} -classpath "./lib/com.backup42.desktop.jar:./lang:./skin" com.backup42.desktop.CPDesktop > /config/log/desktop_output.log 2> /config/log/desktop_error.log

 

Repeated over and over.

 

Any ideas?
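
One hedged guess: "Killed" with no stack trace is usually the kernel's OOM killer taking out the Java processes, which is exactly what the run.conf memory bump earlier in this thread addresses. To check from the unRAID console:

# Look for OOM-killer activity around the time of the crash loop
dmesg | grep -iE 'killed process|out of memory'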

Link to comment

No amount of uninstalling/reinstalling will get rid of this black screen!

 

Have tried all suggestions in this thread but constant black screen!  :'(

 

Uninstalled & re-installed another 20 times or so today to try to get this working! Any pointers on diagnosing this, please?

 

Many thanks

Link to comment


This is at the end of my logfile, any ideas?

 

[10.18.16 00:52:48.097 INFO  ystemWatcher p42.service.peer.CPCConnectCheck] Connect to CPC after 0ms

[10.18.16 00:52:48.206 INFO  ystemWatcher up42.service.peer.PeerController] Attempting to connect to CPC. delay=60000

[10.18.16 00:52:48.461 INFO  e-listen-1-1 .handler.ChannelExceptionHandler] SABRE:: Uncaught IOException in channel pipeline! msg=not an SSL/TLS record: 8063000000412d31383738327c636f6d2e636f646534322e6d6573736167696e672e736563757269747$

[10.18.16 00:52:48.465 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x2f47079b, L:/192.168.0.33:58800 ! R:/216.17.8.52:443]

[10.18.16 00:52:48.469 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x2f47079b, L:/192.168.0.33:58800 ! R:/216.17.8.52:443]), remotePeer=$

[10.18.16 00:52:48.470 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.52:443. connectFuture.isDone=true, channel=[id: 0x2f47079b, L:/192.168.0.33:58800 ! R:/216.17.8.52:443]

[10.18.16 00:52:50.398 WARN  e-listen-1-2 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x4b0e7874, L:0.0.0.0/0.0.0.0:33562 ! R:/216.17.8.51:443]

[10.18.16 00:52:50.399 INFO  e-listen-1-2 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x4b0e7874, L:0.0.0.0/0.0.0.0:33562 ! R:/216.17.8.51:443]), remotePee$

[10.18.16 00:52:50.399 INFO  e-listen-1-2 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.51:443. connectFuture.isDone=true, channel=[id: 0x4b0e7874, L:0.0.0.0/0.0.0.0:33562 ! R:/216.17.8.51:443]

[10.18.16 00:52:52.399 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x7d69e29b, L:0.0.0.0/0.0.0.0:51224 ! R:/216.17.8.48:443]

[10.18.16 00:52:52.400 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x7d69e29b, L:0.0.0.0/0.0.0.0:51224 ! R:/216.17.8.48:443]), remotePee$

[10.18.16 00:52:52.400 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.48:443. connectFuture.isDone=true, channel=[id: 0x7d69e29b, L:0.0.0.0/0.0.0.0:51224 ! R:/216.17.8.48:443]

[10.18.16 00:52:54.396 INFO  e-listen-1-2 .handler.ChannelExceptionHandler] SABRE:: Uncaught IOException in channel pipeline! msg=not an SSL/TLS record: 8063000000412d31383738327c636f6d2e636f646534322e6d6573736167696e672e736563757269747$

[10.18.16 00:52:54.397 WARN  e-listen-1-2 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0xa008bf2f, L:/192.168.0.33:52478 ! R:/216.17.8.47:443]

[10.18.16 00:52:54.398 INFO  e-listen-1-2 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0xa008bf2f, L:/192.168.0.33:52478 ! R:/216.17.8.47:443]), remotePeer=$

[10.18.16 00:52:54.398 INFO  e-listen-1-2 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.47:443. connectFuture.isDone=true, channel=[id: 0xa008bf2f, L:/192.168.0.33:52478 ! R:/216.17.8.47:443]

[10.18.16 00:52:56.398 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x49f272c8, L:0.0.0.0/0.0.0.0:46166 ! R:/216.17.8.11:443]

[10.18.16 00:52:56.399 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x49f272c8, L:0.0.0.0/0.0.0.0:46166 ! R:/216.17.8.11:443]), remotePee$

[10.18.16 00:52:56.399 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.11:443. connectFuture.isDone=true, channel=[id: 0x49f272c8, L:0.0.0.0/0.0.0.0:46166 ! R:/216.17.8.11:443]

[10.18.16 00:52:58.401 INFO  e-listen-1-2 .handler.ChannelExceptionHandler] SABRE:: Uncaught IOException in channel pipeline! msg=not an SSL/TLS record: 8063000000412d31383738327c636f6d2e636f646534322e6d6573736167696e672e736563757269747$

[10.18.16 00:52:58.402 WARN  e-listen-1-2 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0xbed47c54, L:/192.168.0.33:58750 ! R:/216.17.8.8:443]

[10.18.16 00:52:58.403 INFO  e-listen-1-2 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0xbed47c54, L:/192.168.0.33:58750 ! R:/216.17.8.8:443]), remotePeer=R$

[10.18.16 00:52:58.403 INFO  e-listen-1-2 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.8:443. connectFuture.isDone=true, channel=[id: 0xbed47c54, L:/192.168.0.33:58750 ! R:/216.17.8.8:443]

[10.18.16 00:53:00.402 INFO  e-listen-1-1 .handler.ChannelExceptionHandler] SABRE:: Uncaught IOException in channel pipeline! msg=not an SSL/TLS record: 8063000000412d31383738327c636f6d2e636f646534322e6d6573736167696e672e736563757269747$

[10.18.16 00:53:00.403 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x41e0ba7b, L:/192.168.0.33:60626 ! R:/216.17.8.7:443]

[10.18.16 00:53:00.403 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x41e0ba7b, L:/192.168.0.33:60626 ! R:/216.17.8.7:443]), remotePeer=R$

[10.18.16 00:53:00.403 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.7:443. connectFuture.isDone=true, channel=[id: 0x41e0ba7b, L:/192.168.0.33:60626 ! R:/216.17.8.7:443]

[10.18.16 00:53:02.405 WARN  e-listen-1-2 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0xf8f32dcb, L:0.0.0.0/0.0.0.0:37998 ! R:/216.17.8.4:443]

[10.18.16 00:53:02.405 INFO  e-listen-1-2 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0xf8f32dcb, L:0.0.0.0/0.0.0.0:37998 ! R:/216.17.8.4:443]), remotePeer$

[10.18.16 00:53:02.406 INFO  e-listen-1-2 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.4:443. connectFuture.isDone=true, channel=[id: 0xf8f32dcb, L:0.0.0.0/0.0.0.0:37998 ! R:/216.17.8.4:443]

[10.18.16 00:53:04.398 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x73494c8f, L:0.0.0.0/0.0.0.0:53298 ! R:/216.17.8.3:443]

[10.18.16 00:53:04.399 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x73494c8f, L:0.0.0.0/0.0.0.0:53298 ! R:/216.17.8.3:443]), remotePeer$

[10.18.16 00:53:04.399 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.3:443. connectFuture.isDone=true, channel=[id: 0x73494c8f, L:0.0.0.0/0.0.0.0:53298 ! R:/216.17.8.3:443]

[10.18.16 00:53:06.401 INFO  e-listen-1-2 .handler.ChannelExceptionHandler] SABRE:: Uncaught IOException in channel pipeline! msg=not an SSL/TLS record: 8063000000412d31383738327c636f6d2e636f646534322e6d6573736167696e672e736563757269747$

[10.18.16 00:53:06.402 WARN  e-listen-1-2 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x41ed03b6, L:/192.168.0.33:33128 ! R:/216.17.8.55:443]

[10.18.16 00:53:06.403 INFO  e-listen-1-2 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x41ed03b6, L:/192.168.0.33:33128 ! R:/216.17.8.55:443]), remotePeer=$

[10.18.16 00:53:06.403 INFO  e-listen-1-2 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at /216.17.8.55:443. connectFuture.isDone=true, channel=[id: 0x41ed03b6, L:/192.168.0.33:33128 ! R:/216.17.8.55:443]

[10.18.16 00:53:08.404 WARN  e-listen-1-1 handler.AppProtocolStartListener] SABRE:: Channel failed to reach state for app protocol start. channel=[id: 0x8ba8b158, L:0.0.0.0/0.0.0.0:33130 ! R:central.crashplan.com/216.17.8.55:443]

[10.18.16 00:53:08.405 INFO  e-listen-1-1 .handler.ChannelLifecycleHandler] SABRE:: Channel became inactive. ctx=ChannelHandlerContext(CHANNEL_LIFECYCLE_HANDLER, [id: 0x8ba8b158, L:0.0.0.0/0.0.0.0:33130 ! R:central.crashplan.com/216.17.$

[10.18.16 00:53:08.405 INFO  e-listen-1-1 .network.sabre.SabreNetworkLayer] SABRE:: Cancelling connect to 42 at central.crashplan.com/216.17.8.55:443. connectFuture.isDone=true, channel=[id: 0x8ba8b158, L:0.0.0.0/0.0.0.0:33130 ! R:centr$

[10.18.16 00:53:11.580 INFO  MQ-Peer-0    saging.security.SecurityProvider] SP:: Session secured in 707 ms. RemotePeer-[guid=42, SERVER]; PeerConnectionState-[state=CONNECTING, mode=PRIVATE, currentAddressIndex=0, layer=2/2(Peer::NIO), $

[10.18.16 00:53:12.033 INFO  MQ-Peer-2    ervice.peer.PeerVersionValidator] WE have an old version, localVersion=1435813200480 (2015-07-02T05:00:00:480+0000), remoteVersion=1460005200540 (2016-04-07T05:00:00:540+0000), remoteGuid=42

[10.18.16 00:53:12.131 INFO  ystemWatcher up42.service.peer.PeerController]  Connected to CPC after 23sec

 

 

Link to comment

Just updated my unRAID CrashPlan machine to unRAID 6.2.1 and the CrashPlan docker. I was able to get it installed and running without too much issue.

 

I am having one problem, however. Whenever CrashPlan is running and I try to do a parity check, it is ungodly slow. I go from 127 MB/s to about 30 MB/s for a full parity check. As soon as I stop the CrashPlan docker, the parity speed picks back up.

 

I am running this on an older HP ProLiant N40L MicroServer with 4GB of RAM. It used to run unRAID 5 with a CrashPlan plugin, and I did not have any of these parity speed issues while CrashPlan was running.

 

I don't want to go back to unRAID 5 and a plugin, so any pointers or suggestions would be much appreciated.

 

I will be trying the unRAID tunables tester script with the CrashPlan docker running to see what I come up with, but figured I would throw this out there in case anyone has seen this before.
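
One blunt workaround to try in the meantime (a sketch, just my suggestion: docker pause freezes the container's processes so they stop competing for RAM and I/O, without losing CrashPlan's state):

# Freeze CrashPlan for the duration of the parity check...
docker pause CrashPlan
# ...run the parity check from the unRAID webGUI, then resume:
docker unpause CrashPlan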

Link to comment

Unraid 6.2.2 here, brand new install of the Crashplan Docker using the Community Apps plugin.

Black screen.

 

 

 

If it helps any:

*** Running /etc/my_init.d/00_config.sh...

 

Current default time zone: 'Etc/UTC'

Local time is now: Sun Oct 23 00:27:33 UTC 2016.

Universal Time is now: Sun Oct 23 00:27:33 UTC 2016.

 

groupmod: GID '0' already exists

usermod: UID '0' already exists

usermod: no changes

*** Running /etc/my_init.d/01_config.sh...

*** Running /etc/rc.local...

*** Booting runit daemon...

*** Runit started as PID 53

_XSERVTransSocketOpenCOTSServer: Unable to open socket for inet6

_XSERVTransOpen: transport open failed for inet6/Tower1:1

 

_XSERVTransMakeAllCOTSServerListeners: failed to open listener for inet6

 

_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.

 

 

Xvnc TigerVNC 1.7.0 - built Sep 8 2016 10:39:22

Copyright © 1999-2016 TigerVNC Team and many others (see README.txt)

See http://www.tigervnc.org for information on TigerVNC.

Underlying X server release 11400000, The X.Org Foundation

 

Initializing built-in extension VNC-EXTENSION

Initializing built-in extension Generic Event Extension

Initializing built-in extension SHAPE

Initializing built-in extension MIT-SHM

Initializing built-in extension XInputExtension

Initializing built-in extension XTEST

Initializing built-in extension BIG-REQUESTS

Initializing built-in extension SYNC

Initializing built-in extension XKEYBOARD

Initializing built-in extension XC-MISC

Initializing built-in extension XINERAMA

Initializing built-in extension XFIXES

Initializing built-in extension RENDER

Initializing built-in extension RANDR

Initializing built-in extension COMPOSITE

Initializing built-in extension DAMAGE

Initializing built-in extension MIT-SCREEN-SAVER

Initializing built-in extension DOUBLE-BUFFER

Initializing built-in extension RECORD

Initializing built-in extension DPMS

Initializing built-in extension X-Resource

Initializing built-in extension XVideo

Initializing built-in extension XVideo-MotionCompensation

Initializing built-in extension GLX

 

Sat Oct 22 20:27:33 2016

vncext: VNC extension running!

vncext: Listening for VNC connections on all interface(s), port 4239

vncext: created VNC server for screen 0

Warning: could not find self.pem

 

Using local websockify at /opt/novnc/utils/websockify/run

Starting webserver and WebSockets proxy on port 4280

Oct 22 20:27:34 Tower1 syslog-ng[61]: syslog-ng starting up; version='3.5.6'

 

WebSocket server settings:

- Listen on :4280

- Flash security policy server

- Web server. Web root: /opt/novnc

- No SSL/TLS support (no cert file)

- proxying from :4280 to localhost:4239

 

 

Navigate to this URL:

 

http://Tower1:4280/vnc.html?host=Tower1&port=4280

 

Press Ctrl-C to exit

 

 

Openbox-Message: Unable to find a valid menu file "/var/lib/openbox/debian-menu.xml"

ERROR: openbox-xdg-autostart requires PyXDG to be installed

 

 

 

/opt/startapp.sh: line 21: /config/log/desktop_output.log: Permission denied

DownstairsPC.home - - [22/Oct/2016 20:27:48] 192.168.1.4: Plain non-SSL (ws://) WebSocket connection

DownstairsPC.home - - [22/Oct/2016 20:27:48] 192.168.1.4: Version hybi-13, base64: 'False'

 

DownstairsPC.home - - [22/Oct/2016 20:27:48] 192.168.1.4: Path: '/websockify'

DownstairsPC.home - - [22/Oct/2016 20:27:48] connecting to: localhost:4239

 

Sat Oct 22 20:27:48 2016

Connections: accepted: 127.0.0.1::57556

SConnection: Client needs protocol version 3.8

 

SConnection: Client requests security type None(1)

VNCSConnST: Server default pixel format depth 24 (32bpp) little-endian rgb888

VNCSConnST: Client pixel format depth 24 (32bpp) little-endian rgb888

 

Sat Oct 22 20:28:31 2016

Connections: closed: 127.0.0.1::57556 (Clean disconnection)

EncodeManager: Framebuffer updates: 2

EncodeManager: Tight:

EncodeManager: Solid: 2 rects, 786.592 kpixels

EncodeManager: 32 B (1:98324.8 ratio)

EncodeManager: Bitmap RLE: 1 rects, 160 pixels

EncodeManager: 68 B (1:9.58824 ratio)

EncodeManager: Total: 3 rects, 786.752 kpixels

EncodeManager: 100 B (1:31470.4 ratio)
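
A hedged observation on the log above: the "Permission denied" writing /config/log/desktop_output.log would kill the desktop app before it draws anything, which is consistent with a black screen. If that's the cause, resetting ownership of the appdata share is one thing to try (the path and the nobody:users owner, uid/gid 99:100, are unRAID conventions; adjust to your setup):

# Give the container's user write access to its appdata again
chown -R nobody:users /mnt/cache/appdata/CrashPlan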

 

Link to comment

Is this unsupported now? :-( Will the plugin work instead?

 

I think this says it all ;)

Guys,  I'm back and will start to work soon.

 

***

 

Mine is working (6.2.2) and was freshly installed due to the docker changes in 6.1.9 -> 6.2; however, I only use it to back up from the unRAID server to the cloud.

 

I also get a black VNC screen from my seldom-used Windows 10 machine after the Anniversary Update, but access from my primary Elementary OS machine is fine.

 

The parity check speed is slower WHILE CrashPlan is backing up to the cloud, but not noticeably slower when CrashPlan is idle.

 

...

 

Most of the issues people have been having, I do not know how to help with (black VNC, not being able to back up TO the unRAID CrashPlan from other computers, etc.).

 

...

 

This docker/plugin is, to me, the MOST important added functionality in unRAID, and I hope it does not go unsupported. The second most important added functionality is the preclearing of HDDs!

 

These are both gfjardim's babies.

Link to comment
