Jagadguru

Everything posted by Jagadguru

  1. I had this working in unRAID 6.10.0-rc1. Since then I only run my Windows VM bare metal, because the WSL2 performance difference is like night and day. Anyway, here is my XML. It says Windows 10 but it's actually Windows 11. Yes, enabling nested=1 was necessary.

     <domain type='kvm'>
       <name>Windows 10 Pro</name>
       <uuid>5d8e4c80-d4cf-d855-d82b-e7b0c3a4d8d6</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>11718750</memory>
       <currentMemory unit='KiB'>11718750</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='8'/>
         <vcpupin vcpu='2' cpuset='1'/>
         <vcpupin vcpu='3' cpuset='9'/>
         <vcpupin vcpu='4' cpuset='2'/>
         <vcpupin vcpu='5' cpuset='10'/>
         <vcpupin vcpu='6' cpuset='3'/>
         <vcpupin vcpu='7' cpuset='11'/>
         <vcpupin vcpu='8' cpuset='4'/>
         <vcpupin vcpu='9' cpuset='12'/>
         <vcpupin vcpu='10' cpuset='5'/>
         <vcpupin vcpu='11' cpuset='13'/>
         <vcpupin vcpu='12' cpuset='6'/>
         <vcpupin vcpu='13' cpuset='14'/>
         <vcpupin vcpu='14' cpuset='7'/>
         <vcpupin vcpu='15' cpuset='15'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-6.0'>hvm</type>
         <loader readonly='yes' type='pflash'>/mnt/user/domains/Windows 10 Pro/OVMF_CODE.fd</loader>
         <nvram>/mnt/user/domains/Windows 10 Pro/OVMF_VARS.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='8' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows/Window11_Insiders.iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.141-1.iso'/>
           <target dev='hdb' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10 Pro/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-to-pci-bridge'>
           <model name='pcie-pci-bridge'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='scsi' index='0' model='lsilogic'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:e3:02:d1'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <tpm model='tpm-tis'>
           <backend type='emulator' version='2.0'/>
         </tpm>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'/>
     </domain>
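     In case it helps anyone, here is a minimal sketch of how nested=1 can be enabled for the KVM module on an Intel host from the unRAID shell (persisting it across reboots, e.g. via the go file, is up to your setup):

         #!/bin/bash
         # Check whether nesting is already enabled; prints Y (or 1) if so.
         cat /sys/module/kvm_intel/parameters/nested

         # Reload the module with nesting enabled. Shut down all VMs first,
         # since the module cannot be removed while any VM is running.
         modprobe -r kvm_intel
         modprobe kvm_intel nested=1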
  2. I actually bought the monitor from Linus's video, and I can say it is absolutely great for displaying a Mac VM, a Windows VM, and the unRAID GUI at the same time, or in any combination. It has revolutionized my life with testing and setting up passthrough VMs. Before, it was intolerable to always be switching inputs and hoping to catch the boot screens.
  3. Yes, the nested VMs are as you said. I used the video guide I linked in my last message. Very roughly: I used Windows 11 for both VMs, inner and outer, and passed the GPU through to the outer one. First you issue the PowerShell commands mentioned in the video in the Windows host VM to tell whether your GPU is suitable (it probably is, because support is broad). Then you install the Hyper-V VM with all the settings shown in the video. The most crucial part, though, is that you have to copy your exact graphics driver file set for the installed GPU from the outer Windows to the inner Windows and place it all in the right directories. This is explicit in the video. Using the inner VM's own drivers, installed or otherwise, will not work. Then you run the script included in the YouTube description, start the VM, and voilà: GPU acceleration in a nested VM.
  4. I just wanted to let everyone know that I have successfully tested Nvidia GPU-P on Windows on unRAID, with an RTX 3060 passed through. I used this guide video. WSL2 also works with my Intel 10700K and Gigabyte Z490 VISION G. WSLg works, but not with hardware acceleration; I wonder why that could be? It works bare metal with the same Windows 11 installation.
  5. How does unRAID recognize its array disks? I want to switch back and forth seamlessly between running unRAID bare metal and virtualizing it under Windows with VMware. The quandary is that the disks have a different id when passed through, so they are not recognized as part of the array. I figured out that for the cache disks unRAID just looks for a link in /dev/disk/by-id that matches the id in disk.cfg. That's easy for me to script (see the sketch below). But doing the same with array disks and the ids in super.dat does not work. What is it looking for?
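     Here is the kind of sketch I mean for the cache-disk case. The key name cacheId in /boot/config/disk.cfg is an assumption from my own file; check yours for the exact variable:

         #!/bin/bash
         # Sketch: check that the cache-disk id recorded in disk.cfg still has
         # a matching link under /dev/disk/by-id. "cacheId" is an assumed key.
         want=$(grep -o 'cacheId="[^"]*"' /boot/config/disk.cfg | cut -d'"' -f2)
         if ls /dev/disk/by-id/ | grep -q -- "$want"; then
             echo "cache disk $want found"
         else
             echo "cache disk $want missing" >&2
         fi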
  6. Stopping the docker container for a few days fixed it. It seems I have to do that every 28 days, when the maintenance happens.
  7. How do I turn off backup but let CrashPlan keep running to complete the local maintenance? Otherwise it does maintenance for a little bit, then backs up for a little bit, then does synchronization for 9 hours, in a loop. This has been going on for 25 days. Thanks again for the docker!
  8. I turned NFS off entirely and migrated the affected shares to SMB. I also re-enabled CA Backup to shut down containers and restart them after backup, which I had been hesitant to do because of this bug. Since doing that a month ago, the bug has not popped up.
  9. Yes, after further investigation it is definitely the crash above that is causing me to lose connectivity to CrashPlan's server. The crash breaks the messaging system with Code42, and because of that TLS breaks down with this error:

     [04.21.21 19:08:52.440 INFO re-event-2-3 .handler.ChannelExceptionHandler] SABRE:: Decoder issue in channel pipeline! msg=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. cause=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. Closing ctx=ChannelHandlerContext(EXCEPTION, [id: 0x6ddd77d8, L:/172.17.0.8:43432 - R:/162.222.41.249:4287])

     And this problem started immediately after I was forced to enable 2FA. I thought it was just that the old version didn't work with 2FA, so I upgraded the container, but then this happened.
  10. Started the CrashPlanPRO container up today and unfortunately it is still happening. It's now getting to 77%, though. Here is the history:

      I 04/14/21 05:46PM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
      I 04/14/21 05:46PM [CrashPlan Docker Internal] Scanning for files to back up
      I 04/14/21 05:58PM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 179 files (360.10MB) found
      I 04/18/21 10:19PM Code42 started, version 8.6.0, GUID 858699052691327104
      I 04/18/21 10:49PM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files to back up
      I 04/19/21 10:41AM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files completed in 11.9 hours: 800,863 files (3.20TB) found
      I 04/19/21 10:41AM [CS Domains] Scanning for files to back up
      I 04/19/21 10:52AM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
      I 04/19/21 10:52AM [CrashPlan Docker Internal] Scanning for files to back up
      I 04/19/21 11:04AM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 180 files (363.10MB) found

      The crash sometimes seems to coincide with the finish of a file scan. Now it says "Waiting for connection," although I can ping the destination server just fine from within the container. service.log.0 also keeps saying a pending restore was cancelled:

      [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::stopRestore(): idPair=858699052691327104>41, selectedForRestore=false, event=STOP_REQUESTED canceled=false
      [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::Not selected for restore
      [04.19.21 19:58:18.081 INFO 3362_BckpSel tore.BackupClientRestoreDelegate] BC::0 pending restore canceled
  11. Yes, it is still doing it, but I noticed that destination maintenance is running at the same time, sometimes showing in the status line. So I have decided to shut down CrashPlanPRO for four days to give maintenance a chance to run unfettered.
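      Concretely, that just means stopping the container from the unRAID shell and starting it again a few days later (the container name below is as it appears on my server):

          docker stop CrashPlanPRO    # let destination maintenance run unfettered
          # ...four days later...
          docker start CrashPlanPRO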
  12. Thank you for this image, it has worked great for years. After upgrading to the latest docker and deleting the cache, however, CrashPlan synchronizes with the destination server up to 54% or so, then this exception appears in the log and synchronization starts over at 0%, in a loop:

      [04.12.21 13:00:03.736 WARN er1WeDftWkr4 ssaging.peer.PeerSessionListener] PSL:: Invalid connect state during sessionEnded after being connected, com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
      STACKTRACE:: com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
        at com.code42.messaging.peer.ConnectionStateMachine.setState(ConnectionStateMachine.java:106)
        at com.code42.messaging.peer.ConnectionStateMachine.updateState(ConnectionStateMachine.java:248)
        at com.code42.messaging.peer.RemotePeer.lambda$updateStateFromEvent$0(RemotePeer.java:415)
        at com.code42.messaging.peer.RemotePeer.updateState(RemotePeer.java:468)
        at com.code42.messaging.peer.RemotePeer.updateStateFromEvent(RemotePeer.java:415)
        at com.code42.messaging.peer.RemotePeer.onSessionEnded(RemotePeer.java:563)
        at com.code42.messaging.peer.PeerSessionListener.sessionEnded(PeerSessionListener.java:133)
        at com.code42.messaging.SessionImpl.notifySessionEnding(SessionImpl.java:239)
        at com.code42.messaging.mde.ShutdownWork.handleWork(ShutdownWork.java:27)
        at com.code42.messaging.mde.UnitOfWork.processWork(UnitOfWork.java:163)
        at com.code42.messaging.mde.UnitOfWork.run(UnitOfWork.java:147)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
  13. Here is my qemu args line:

      <qemu:arg value='-cpu'/>
      <qemu:arg value='IvyBridge,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,+vmx,check'/>

      And I am able to run NoxPlayer without a problem on my Catalina VM.
  14. Add +vmx and make sure kvm nested is on. Works for me on Intel.
  15. Unfortunately, the one server (of three) that this bug occurs on is a "production" server, so it's hard to do any debugging because it can't have much downtime. But it happened again, and there are a lot of "kernel: tun: unexpected GSO type" errors. Here are my diagnostics: cs-diagnostics-20210315-2045.zip
  16. Ok, thanks for answering. Next time I will post over there.
  17. Is there a way to get sound from a V4L2 USB capture device into debian-buster-nvidia? Or does unRAID just not have the necessary drivers? Video is no problem, but I have been working on sound for some time; the sound part of the capture just does not show up. It works in a VM.
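      A quick way to check, assuming alsa-utils is installed inside the container, is whether ALSA can see the capture card's audio interface at all:

          arecord -l      # lists ALSA capture devices; the card should show up here
          ls /dev/snd/    # the container also needs the host's sound devices mapped in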
  18. No, it only works with Nvidia cards with a Kepler core, like my GT 710. Most (newer) Nvidia cards don't work at all, to say nothing of sound.
  19. I am using your Debian-Nvidia-buster to run OBS, streaming and encoding 24/7. It works great, and it is much lighter than a virtual machine; I have tried both. The quote comes from https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/
  20. Do you guys happen to know whether I can have two Nvidia dockers, one running Shinobi face detection and the other doing NVENC transcoding, on the same P2000 GPU at the same time? Edit: I found this online: "Separate from the CUDA cores, NVENC/NVDEC run encoding or decoding workloads without slowing the execution of graphics or CUDA workloads running at the same time." But when I start one process (docker), it kicks the other out.
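      For context, this is the sort of setup I mean, as a sketch (the image names are hypothetical placeholders; the GPU UUID comes from nvidia-smi -L on the host):

          nvidia-smi -L    # find the GPU UUID
          # Point both containers at the same GPU via the Nvidia runtime:
          docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=GPU-<uuid> shinobi-image
          docker run -d --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=GPU-<uuid> transcode-image
          nvidia-smi       # in theory both processes should then be listed under the same GPU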
  21. I cannot automatically back up my dockers now because of this bug. It's risky to stop/start dockers unless I am there to restart the server in case the bug pops up.
  22. @ich777 I figured out how to get the unRAID GUI working on one of my servers. I just have to run this script after every Nvidia docker finishes starting:

      #!/bin/bash
      # Kill the running slim instance and relaunch it on the local display
      # so the unRAID GUI comes back.
      killall slim
      export DISPLAY=:0
      /usr/bin/slim