Jagadguru

Members
  • Posts

    97
  • Joined

  • Last visited

Everything posted by Jagadguru

  1. How does unRAID recognize its array disks? I want to switch back and forth seamlessly between running unRAID bare metal and virtualizing it under Windows with VMware. The quandary is that the disks have a different id when passed through, so they are not recognized as part of the array. I figured out that for the cache disks unRAID just looks for a link in /dev/disk/by-id that matches the id in disk.cfg. That's easy for me to script. But doing the same for array disks with the ids in super.dat does not work. What is it looking for?
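     For the cache-disk case, here is a minimal sketch of the kind of check I mean (the cfg path and the "cacheId" key name are my assumptions; verify them against your own disk.cfg before relying on this):

     ```shell
     #!/bin/bash
     # Sketch: verify that the cache device id recorded in disk.cfg still
     # resolves under /dev/disk/by-id. The key name "cacheId" and the
     # default path below are assumptions; adjust for your unRAID release.
     cfg=${1:-/boot/config/disk.cfg}

     # pull the value out of a line like: cacheId="ata-Samsung_SSD_860_..."
     get_cfg_id() {
       sed -n 's/^cacheId="\(.*\)"/\1/p' "$1"
     }

     if [ -f "$cfg" ]; then
       id=$(get_cfg_id "$cfg")
       if [ -n "$id" ] && [ -e "/dev/disk/by-id/$id" ]; then
         echo "cache disk $id is visible to unRAID"
       else
         echo "cache disk '$id' not found under /dev/disk/by-id" >&2
       fi
     fi
     ```

     A script like this could run before array start and create the matching symlink when the passed-through id differs.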
  2. Stopping the docker container for a few days fixed it. It seems I have to do that every 28 days, when the maintenance happens.
  3. How do I turn off backup but let Crashplan keep running so it can complete the local maintenance? Otherwise it does maintenance for a little while, then backs up for a little while, then synchronizes for 9 hours, in a loop. This has been going on for 25 days. Thanks again for the docker!
  4. I turned NFS off entirely and migrated the affected shares to SMB. I also re-enabled CA backup to shut down containers and restart them after backup, which I had been hesitant to do because of this bug. After doing that a month ago, the bug has not popped up since.
  5. Yes, after further investigation it is definitely the crash above that is causing me to lose connectivity to Crashplan's server. The crash breaks the messaging system with Code42, and then because of that TLS breaks down with this error:

     [04.21.21 19:08:52.440 INFO re-event-2-3 .handler.ChannelExceptionHandler] SABRE:: Decoder issue in channel pipeline! msg=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. cause=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. Closing ctx=ChannelHandlerContext(EXCEPTION, [id: 0x6ddd77d8, L:/172.17.0.8:43432 - R:/162.222.41.249:4287])

     And this problem started immediately after I was forced to enable 2FA. I thought it was just that the old version didn't work with 2FA, so I upgraded the container, but then this
  6. Started the CrashplanPRO container up today and unfortunately it is still happening. It's now getting to 77% though. Here is the history:

     I 04/14/21 05:46PM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
     I 04/14/21 05:46PM [CrashPlan Docker Internal] Scanning for files to back up
     I 04/14/21 05:58PM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 179 files (360.10MB) found
     I 04/18/21 10:19PM Code42 started, version 8.6.0, GUID 858699052691327104
     I 04/18/21 10:49PM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files to back up
     I 04/19/21 10:41AM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files completed in 11.9 hours: 800,863 files (3.20TB) found
     I 04/19/21 10:41AM [CS Domains] Scanning for files to back up
     I 04/19/21 10:52AM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
     I 04/19/21 10:52AM [CrashPlan Docker Internal] Scanning for files to back up
     I 04/19/21 11:04AM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 180 files (363.10MB) found

     The crash sometimes seems to coincide with the finish of a file scan. Now it says "Waiting for connection," although I can ping the destination server just fine from within the container. service.log.0 also keeps logging cancelled pending restores:

     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::stopRestore(): idPair=858699052691327104>41, selectedForRestore=false, event=STOP_REQUESTED canceled=false
     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::Not selected for restore
     [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::0 pending restore canceled
  7. Yes, it is still doing it, but I noticed that there is destination maintenance running at the same time, sometimes showing in the status line. So I have decided to shut down CrashplanPRO for four days to give maintenance a chance to run unfettered.
  8. Thank you for this image, it has worked great for years. After upgrading to the latest docker image and deleting the cache, however, Crashplan synchronizes with the destination server up to 54% or so, then this exception appears in the log and synchronization starts over at 0% again, in a loop:

     [04.12.21 13:00:03.736 WARN er1WeDftWkr4 ssaging.peer.PeerSessionListener] PSL:: Invalid connect state during sessionEnded after being connected, com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
     STACKTRACE:: com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
     at com.code42.messaging.peer.ConnectionStateMachine.setState(ConnectionStateMachine.java:106)
     at com.code42.messaging.peer.ConnectionStateMachine.updateState(ConnectionStateMachine.java:248)
     at com.code42.messaging.peer.RemotePeer.lambda$updateStateFromEvent$0(RemotePeer.java:415)
     at com.code42.messaging.peer.RemotePeer.updateState(RemotePeer.java:468)
     at com.code42.messaging.peer.RemotePeer.updateStateFromEvent(RemotePeer.java:415)
     at com.code42.messaging.peer.RemotePeer.onSessionEnded(RemotePeer.java:563)
     at com.code42.messaging.peer.PeerSessionListener.sessionEnded(PeerSessionListener.java:133)
     at com.code42.messaging.SessionImpl.notifySessionEnding(SessionImpl.java:239)
     at com.code42.messaging.mde.ShutdownWork.handleWork(ShutdownWork.java:27)
     at com.code42.messaging.mde.UnitOfWork.processWork(UnitOfWork.java:163)
     at com.code42.messaging.mde.UnitOfWork.run(UnitOfWork.java:147)
     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
     at java.base/java.lang.Thread.run(Unknown Source)
  9. Here is my qemu args line:

     <qemu:arg value='-cpu'/>
     <qemu:arg value='IvyBridge,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,+vmx,check'/>

     And I am able to run Nox Player without a problem on my Catalina VM.
  10. Add +vmx and make sure kvm nested is on. Works for me on Intel.
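     If nested KVM is off, checking and enabling it on an Intel host looks roughly like this (a sketch, not unRAID-specific instructions; on AMD the module is kvm_amd instead of kvm_intel):

     ```shell
     # Check whether nested virtualization is enabled for kvm_intel
     # (prints Y or 1 when it is on)
     cat /sys/module/kvm_intel/parameters/nested

     # Make it persistent, then reload the module so it takes effect
     # (stop all VMs first, or the module removal will fail)
     echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
     modprobe -r kvm_intel && modprobe kvm_intel
     ```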
  11. Unfortunately, the one of my three servers that this bug occurs on is a "production" server, so it's hard to do debugging because it can't have much downtime. But it happened again, and there are a lot of "kernel: tun: unexpected GSO type" errors. Here are my diagnostics: cs-diagnostics-20210315-2045.zip
  12. Ok, thanks for answering. Next time I will post over there.
  13. Is there a way to get sound from a V4L2 USB capture device into debian-buster-nvidia? Or does unRAID just not have the necessary drivers? Video is no problem, but I have been working on sound for some time. The sound part of the capture just does not show up. It works in a VM.
  14. No, it works with Nvidia cards with a Kepler core, like my GT 710. Most (newer) Nvidia cards don't work at all, to say nothing of sound.
  15. I am using your Debian-Nvidia-buster to run OBS, stream and encode 24/7. It works great. Much lighter than a virtual machine. I have tried both. The quote comes from https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/
  16. Do you guys happen to know if I can have two Nvidia Dockers, one running Shinobi face detection and the other NVENC transcoding, on the same P2000 GPU at the same time? Edit: I found this online: "Separate from the CUDA cores, NVENC/NVDEC run encoding or decoding workloads without slowing the execution of graphics or CUDA workloads running at the same time." But when I start one process (docker), it kicks the other out.
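     To be clear about the setup, here is a sketch of the two containers sharing one card via the NVIDIA container runtime (the image and container names are placeholders, not my actual templates):

     ```shell
     # Both containers target the same physical GPU. Use "all", or a
     # specific UUID taken from: nvidia-smi -L
     GPU_ID=all

     docker run -d --runtime=nvidia \
       -e NVIDIA_VISIBLE_DEVICES="$GPU_ID" \
       -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
       --name shinobi my-shinobi-image        # placeholder image name

     docker run -d --runtime=nvidia \
       -e NVIDIA_VISIBLE_DEVICES="$GPU_ID" \
       -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
       --name transcoder my-transcoder-image  # placeholder image name
     ```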
  17. I cannot automatically back up my dockers now because of this bug. It's risky to stop/start dockers unless I am there to restart the server in case this bug pops up.
  18. @ich777 I figured out how to get the unRAID GUI on one of my servers. I just have to run this script after every Nvidia Docker finishes starting:

      #!/bin/bash
      killall slim
      export DISPLAY=:0
      /usr/bin/slim
  19. I would like to be able to use the unRAID GUI at the same time as this. My Supermicro motherboard has built-in VGA. At the moment, the VGA driver doesn't load. Is there any way?
  20. Wow, what a great blog post and explanation! Thanks a lot. Now I may be able to get full functionality out of MacOS with compatible WiFi.
  21. Has anyone ever tried using anything like this PCI-E Express 3-Port 1X Multiplier Riser Card? https://www.walmart.com/ip/PCI-E-Express-3-Port-1X-Multiplier-Riser-Card-Mining-Cable/269154426 My computer only has one port free (besides the GPU port) for MacOS, and I would like to put in a fenvi T919 (a BCM94360CD-based 802.11ac PCIe WiFi/BT 4.0 card with native Airport support, Continuity, and Handoff) in addition to the USB controller card I'm already passing through. How do these multiplier cards pull off sharing just one PCIe lane? All of the expansion cards I want to use in the entire machine are just 1x. My system should be in the signature (Dell Optiplex 9020).
  22. Ok, I removed the balloon device from the VM definition XML and ran my memory eater script again. This time, when memory was filled up, the system killed the memory-eater process just like it should.
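      For reference, the "memory eater" was nothing fancy; a minimal stand-in (an approximation, not the exact script I used) could look like this:

      ```shell
      #!/bin/bash
      # Stand-in for the memory-eater script mentioned above: keeps
      # appending 1 MiB blocks to a shell variable until the kernel's
      # OOM killer terminates the process.
      mb_block() { head -c 1048576 /dev/zero | tr '\0' 'x'; }  # emit 1 MiB of 'x'

      # Uncomment to actually exhaust memory -- only inside a throwaway VM:
      #   buf=""
      #   while :; do buf+=$(mb_block); done
      ```

      With the balloon device removed, running this should end with the OOM killer reaping the script rather than the whole guest hanging.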