Jagadguru

Members
  • Posts: 97
  • Joined
  • Last visited

Jagadguru's Achievements

Apprentice (3/14)

Reputation: 10

  1. How does unRAID recognize its array disks? I want to switch back and forth seamlessly between running unRAID bare metal and virtualizing it under Windows with VMware. The quandary is that the disks have a different ID when passed through, so they are not recognized as part of the array. I figured out that for the cache disks unRAID just looks for a link in /dev/disk/by-id that matches the ID in disk.cfg. That's easy for me to script (see the sketch below). But doing the same with array disks and the IDs in super.dat does not work. What is it looking for?
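     A minimal Python sketch of the cache-disk matching idea; the /boot/config/disk.cfg path is standard on unRAID, but the cacheId key name is an assumption on my part, so check your own disk.cfg:

     import os
     import re

     # Pull the cache disk ID out of disk.cfg ("cacheId" is an assumed key
     # name for illustration; verify it against your actual disk.cfg).
     cfg = open("/boot/config/disk.cfg").read()
     match = re.search(r'cacheId="?([^"\r\n]+)"?', cfg)
     cache_id = match.group(1) if match else None

     # Scan /dev/disk/by-id and report the link whose name carries that ID.
     for name in os.listdir("/dev/disk/by-id"):
         target = os.path.realpath(os.path.join("/dev/disk/by-id", name))
         if cache_id and cache_id in name:
             print(f"cache disk match: {name} -> {target}")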
  2. Stopping the docker container for a few days fixed it. It seems I have to do that every 28 days, when the maintenance happens.
  3. How do I turn off backup but still let Crashplan run so it can complete the local maintenance? Otherwise it does maintenance for a little bit, then backs up for a little bit, then does synchronization for 9 hrs, in a loop. This has been going on for 25 days. Thanks again for the docker!
  4. I turned NFS off entirely and migrated the affected shares to SMB. I also re-enabled CA backup to shut down containers and restart them after backup, which I had been hesitant to do because of this bug. After doing that a month ago, the bug has not popped up since.
  5. Yes, after further investigation it is definitely the crash above that is causing me to lose connectivity to Crashplan's server. The crash breaks the messaging system with Code42, and then because of that TLS breaks down with this error:

     [04.21.21 19:08:52.440 INFO re-event-2-3 .handler.ChannelExceptionHandler] SABRE:: Decoder issue in channel pipeline! msg=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. cause=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. Closing ctx=ChannelHandlerContext(EXCEPTION, [id: 0x6ddd77d8, L:/172.17.0.8:43432 - R:/162.222.41.249:4287])

     And this problem started immediately after I was forced to enable 2FA. I thought it was just that the old version didn't work with 2FA, so I upgraded the container, but then this happened.
  6. Started the CrashplanPRO container up today and unfortunately it is still happening. It's now getting to 77%, though. Here is the history:

     I 04/14/21 05:46PM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
     I 04/14/21 05:46PM [CrashPlan Docker Internal] Scanning for files to back up
     I 04/14/21 05:58PM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 179 files (360.10MB) found
     I 04/18/21 10:19PM Code42 started, version 8.6.0, GUID 858699052691327104
     I 04/18/21 10:49PM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files to back up
     I 04/19/21 10:41AM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files completed in 11.9 hours: 800,863 files (3.20TB) found
     I 04/19/21 10:41AM [CS Domains] Scanning for files to back up
     I 04/19/21 10:52AM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
     I 04/19/21 10:52AM [CrashPlan Docker Internal] Scanning for files to back up
     I 04/19/21 11:04AM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 180 files (363.10MB) found

     The crash sometimes seems to coincide with the finish of a scan for files. Now it says "Waiting for connection," although I can ping the destination server just fine from within the container. service.log.0 also keeps repeating these "pending restore canceled" lines:

     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::stopRestore(): idPair=858699052691327104>41, selectedForRestore=false, event=STOP_REQUESTED canceled=false
     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::Not selected for restore
     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::0 pending restore canceled
  7. Yes, it is still doing it, but I noticed that there is destination maintenance running at the same time, sometimes showing in the status line. So I have decided to shut down CrashplanPRO for four days to give maintenance a chance to run unfettered.
  8. Thank you for this image, it has worked great for years. After upgrading to the latest docker and deleting the cache, however, Crashplan synchronizes with the destination server up to 54% or so, then this exception appears in the log and synchronization starts over at 0%, in a loop:

     [04.12.21 13:00:03.736 WARN er1WeDftWkr4 ssaging.peer.PeerSessionListener] PSL:: Invalid connect state during sessionEnded after being connected, com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
     STACKTRACE:: com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
         at com.code42.messaging.peer.ConnectionStateMachine.setState(ConnectionStateMachine.java:106)
         at com.code42.messaging.peer.ConnectionStateMachine.updateState(ConnectionStateMachine.java:248)
         at com.code42.messaging.peer.RemotePeer.lambda$updateStateFromEvent$0(RemotePeer.java:415)
         at com.code42.messaging.peer.RemotePeer.updateState(RemotePeer.java:468)
         at com.code42.messaging.peer.RemotePeer.updateStateFromEvent(RemotePeer.java:415)
         at com.code42.messaging.peer.RemotePeer.onSessionEnded(RemotePeer.java:563)
         at com.code42.messaging.peer.PeerSessionListener.sessionEnded(PeerSessionListener.java:133)
         at com.code42.messaging.SessionImpl.notifySessionEnding(SessionImpl.java:239)
         at com.code42.messaging.mde.ShutdownWork.handleWork(ShutdownWork.java:27)
         at com.code42.messaging.mde.UnitOfWork.processWork(UnitOfWork.java:163)
         at com.code42.messaging.mde.UnitOfWork.run(UnitOfWork.java:147)
         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
         at java.base/java.lang.Thread.run(Unknown Source)
  9. Here is my qemu args line:

     <qemu:arg value='-cpu'/>
     <qemu:arg value='IvyBridge,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,+vmx,check'/>

     And I am able to run Nox Player without a problem on my Catalina VM.
  10. Add +vmx and make sure KVM nested virtualization is on (a quick way to check is sketched below). Works for me on Intel.
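      A minimal Python check, assuming an Intel host with the kvm_intel module loaded (the sysfs path is standard; use kvm_amd instead on AMD):

      # Check whether nested virtualization is enabled for KVM on an Intel host.
      # Older kernels report "Y"/"N", newer ones "1"/"0"; to turn it on, reload
      # the module with: modprobe -r kvm_intel && modprobe kvm_intel nested=1
      with open("/sys/module/kvm_intel/parameters/nested") as f:
          value = f.read().strip()
      print("nested virtualization:", "on" if value in ("Y", "1") else "off")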
  11. Unfortunately, the one of my three servers that this bug occurs on is a "production" server, so it's hard to do debugging because it can't have much downtime. But it happened again, and there are a lot of "kernel: tun: unexpected GSO type" errors. Here are my diagnostics: cs-diagnostics-20210315-2045.zip
  12. Ok, thanks for answering. Next time I will post over there.