Jagadguru

Posts posted by Jagadguru

  1. I had this working in unRAID 6.10.0-rc1. Since then, I only run my Windows VM bare metal, because the WSL2 performance difference is like night and day. Anyway, here is my XML.

     

    It says Windows 10, but it's actually Windows 11. Yes, enabling nested=1 was necessary.

     

    <domain type='kvm'>
      <name>Windows 10 Pro</name>
      <uuid>5d8e4c80-d4cf-d855-d82b-e7b0c3a4d8d6</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>11718750</memory>
      <currentMemory unit='KiB'>11718750</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>16</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='8'/>
        <vcpupin vcpu='2' cpuset='1'/>
        <vcpupin vcpu='3' cpuset='9'/>
        <vcpupin vcpu='4' cpuset='2'/>
        <vcpupin vcpu='5' cpuset='10'/>
        <vcpupin vcpu='6' cpuset='3'/>
        <vcpupin vcpu='7' cpuset='11'/>
        <vcpupin vcpu='8' cpuset='4'/>
        <vcpupin vcpu='9' cpuset='12'/>
        <vcpupin vcpu='10' cpuset='5'/>
        <vcpupin vcpu='11' cpuset='13'/>
        <vcpupin vcpu='12' cpuset='6'/>
        <vcpupin vcpu='13' cpuset='14'/>
        <vcpupin vcpu='14' cpuset='7'/>
        <vcpupin vcpu='15' cpuset='15'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-6.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/mnt/user/domains/Windows 10 Pro/OVMF_CODE.fd</loader>
        <nvram>/mnt/user/domains/Windows 10 Pro/OVMF_VARS.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Windows/Window11_Insiders.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.141-1.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Windows 10 Pro/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-to-pci-bridge'>
          <model name='pcie-pci-bridge'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xc'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='scsi' index='0' model='lsilogic'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:e3:02:d1'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <tpm model='tpm-tis'>
          <backend type='emulator' version='2.0'/>
        </tpm>
        <audio id='1' type='none'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'/>
    </domain>
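    On the nested=1 point: before booting the VM, you can sanity-check that the host's KVM module actually has nesting turned on. This is a small sketch of mine (not an unRAID tool); it assumes the usual sysfs parameter paths for the Intel and AMD KVM modules:

```python
from pathlib import Path

# Paths where the KVM modules expose the nested flag (Intel vs AMD).
NESTED_PARAM_PATHS = [
    "/sys/module/kvm_intel/parameters/nested",
    "/sys/module/kvm_amd/parameters/nested",
]

def nested_enabled(value: str) -> bool:
    """Interpret the sysfs parameter value: newer kernels report 'Y'/'N',
    older ones '1'/'0'."""
    return value.strip() in ("Y", "y", "1")

def host_nested_enabled() -> bool:
    """Return True if either KVM module reports nested virtualization on."""
    for p in map(Path, NESTED_PARAM_PATHS):
        if p.exists() and nested_enabled(p.read_text()):
            return True
    return False
```

    Newer kernels report Y/N and older ones 1/0, which is why the helper accepts both.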

  2. 8 hours ago, ghost82 said:

    It will be something like this:

    https://www.youtube.com/watch?v=D9u1vX-pvLs

     

    I actually bought the monitor in Linus's video, and I can say it is absolutely great for displaying a Mac VM, a Windows VM, and the unRAID GUI at the same time, or in any combination.

     

    It revolutionized my life with testing and setting up passthrough VMs. Before, it was intolerable, always switching inputs and hoping to catch the boot screens.

  3. 5 hours ago, hyellow said:

    Hi,

     

    I read this thread and I was wondering how you have managed it. So you are saying that you have successfully shared a GPU in Windows on unRAID?

    So unRAID > Windows VM > Hyper-V > Windows VM inside Hyper-V?

    I am trying to achieve the same, but with no luck. The VM inside Hyper-V just gives me a black screen.

     

    Can you share how you have managed it?

    Yes, the nested VMs are as you said. I used the video guide I linked in my last message. Very roughly: I used Windows 11 for both VMs, inner and outer, and passed the GPU through to the outer one.

     

    You first run the PowerShell commands mentioned in the video in the Windows host VM to tell whether your GPU is suitable (it probably is, because support is broad). Then you install the Hyper-V VM with all the settings shown in the video.

     

    The most crucial part, though, is that you have to copy the exact graphics driver file set for the installed GPU from the outer Windows to the inner one and place it all in the right directories. This is explicit in the video. Using the inner VM's own drivers, installed or otherwise, will not work.

     

    Then you run the script included in the YouTube description, start the VM, and voila! GPU acceleration in a nested VM.
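    To illustrate the driver-copy step, here is a rough sketch of my own (not the script from the video): it picks out the NVIDIA driver folders from the outer VM's DriverStore FileRepository and copies them over. The 'nv_dispi' prefix and the paths are assumptions to adapt to your card:

```python
import shutil
from pathlib import Path

def matching_driver_dirs(repo_entries, prefix="nv_dispi"):
    """Pick out the DriverStore FileRepository folders that belong to the
    GPU driver. 'nv_dispi' is the usual NVIDIA display-driver prefix;
    adjust it for your card/vendor."""
    return sorted(e for e in repo_entries if e.lower().startswith(prefix))

def copy_driver_set(host_repo: Path, guest_repo: Path, prefix="nv_dispi"):
    """Copy each matching driver folder from the outer VM's DriverStore
    (e.g. C:/Windows/System32/DriverStore/FileRepository) to the same
    location in the inner VM (reached via a mounted VHDX or a share)."""
    for name in matching_driver_dirs((p.name for p in host_repo.iterdir()), prefix):
        shutil.copytree(host_repo / name, guest_repo / name, dirs_exist_ok=True)
```

    Follow the video for the exact file list; this only shows the folder-matching idea.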

  4. I just wanted to let everyone know that I have successfully tested Nvidia GPU-P on Windows on unRAID with an RTX 3060 passed through.

     

    I used this guide video. 

     

     

    Also, WSL2 works with my Intel 10700K and Gigabyte Z490 VISION G.

    WSLg works, but not with hardware acceleration. I wonder why that could be?

    It works bare metal with the same Windows 11 installation.

  5. How does unRAID recognize its array disks? I want to switch seamlessly back and forth between running unRAID bare metal and virtualizing it under Windows with VMware. The quandary is that the disks have a different id when passed through, so they are not recognized as part of the array.

     

    I figured out that for the cache disks, unRAID just looks for a link in /dev/disk/by-id that matches the id in disk.cfg. That's easy for me to script.
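    That matching step might be scripted roughly like this (a hypothetical sketch of mine; it assumes the id from disk.cfg appears verbatim in the by-id link name):

```python
from pathlib import Path

def find_device_by_id(by_id_entries, disk_id):
    """Given the link names under /dev/disk/by-id and the id string from
    disk.cfg, return the matching link name, or None if nothing matches."""
    for name in by_id_entries:
        if disk_id in name:
            return name
    return None

def resolve(disk_id, by_id_dir="/dev/disk/by-id"):
    """Follow the matching symlink to the real block device (e.g. /dev/sdb)."""
    entries = [p.name for p in Path(by_id_dir).iterdir()]
    name = find_device_by_id(entries, disk_id)
    return Path(by_id_dir, name).resolve() if name else None
```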

     

    But doing the same for the array disks with the ids in super.dat does not work. What is it looking for?

  6. Just received this from Code42 Support:

     

    Quote

    I see that the archive was recently (~10 days ago) in maintenance, and that it's likely being impacted by a post-maintenance synchronization issue we're working to fix at this time.

    I have taken corrective action and signed you out of the app on the computer. Please sign back in and monitor the synchronization - we should see it complete this time.

    This has nothing to do with the 2FA implementation, as that is only used for website login. There was an app released shortly thereafter, which is the cause of the problem. Apologies for any frustration or inconvenience this had caused.

     

  7. Yes, after further investigation, it is definitely the crash above that is causing me to lose connectivity to CrashPlan's server. The crash breaks the messaging system with Code42, and then, because of that, TLS breaks down with this error:

     

    [04.21.21 19:08:52.440 INFO  re-event-2-3 .handler.ChannelExceptionHandler] SABRE:: Decoder issue in channel pipeline! msg=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. cause=com.code42.messaging.MessageException: Message exceeded maximum size. Size=20975080. Closing ctx=ChannelHandlerContext(EXCEPTION, [id: 0x6ddd77d8, L:/172.17.0.8:43432 - R:/162.222.41.249:4287])

     

    And this problem started immediately after I was forced to enable 2FA. I thought it was just that the old version didn't work with 2FA, so I upgraded the container, but then this happened.

  8. Started the CrashPlanPRO container up today, and unfortunately it is still happening. It's now getting to 77%, though. Here is the history:

     

    I 04/14/21 05:46PM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
    I 04/14/21 05:46PM [CrashPlan Docker Internal] Scanning for files to back up
    I 04/14/21 05:58PM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 179 files (360.10MB) found
    I 04/18/21 10:19PM Code42 started, version 8.6.0, GUID 858699052691327104
    I 04/18/21 10:49PM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files to back up
    I 04/19/21 10:41AM [CS Everything except Google Drive live mount, Trash and domains Backup Set] Scanning for files completed in 11.9 hours: 800,863 files (3.20TB) found
    I 04/19/21 10:41AM [CS Domains] Scanning for files to back up
    I 04/19/21 10:52AM [CS Domains] Scanning for files completed in 12 minutes: 9 files (90GB) found
    I 04/19/21 10:52AM [CrashPlan Docker Internal] Scanning for files to back up
    I 04/19/21 11:04AM [CrashPlan Docker Internal] Scanning for files completed in 12 minutes: 180 files (363.10MB) found

     

    The crash sometimes seems to coincide with the finish of a Scan for files. 
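    To line crash times up against scan completions, the history format above can be parsed; a quick sketch of mine:

```python
import re
from datetime import datetime

# Matches lines like:
# I 04/19/21 10:41AM [CS Domains] Scanning for files completed in 12 minutes: ...
LINE_RE = re.compile(
    r"^I (\d{2}/\d{2}/\d{2} \d{2}:\d{2}[AP]M) \[(?P<set>[^\]]+)\] "
    r"Scanning for files completed"
)

def scan_completion_times(history_lines):
    """Return (timestamp, backup set) pairs for every completed scan,
    so they can be compared against crash times in service.log."""
    out = []
    for line in history_lines:
        m = LINE_RE.match(line)
        if m:
            ts = datetime.strptime(m.group(1), "%m/%d/%y %I:%M%p")
            out.append((ts, m.group("set")))
    return out
```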

     

    Now it says "Waiting for connection," although I can ping the destination server just fine from within the container.  

     

    The service.log.0 also keeps saying that pending restores were canceled:

    [04.19.21 19:58:18.081 INFO  3362_BckpSel tore.BackupClientRestoreDelegate] BC::stopRestore(): idPair=858699052691327104>41, selectedForRestore=false, event=STOP_REQUESTED canceled=false
    [04.19.21 19:58:18.081 INFO  3362_BckpSel tore.BackupClientRestoreDelegate] BC::Not selected for restore
    [04.19.21 19:58:18.081 INFO  3362_BckpSel tore.BackupClientRestoreDelegate] BC::0 pending restore canceled

  9. Thank you for this image, it has worked great for years.

     

    After upgrading to the latest Docker image and deleting the cache, however, CrashPlan synchronizes with the destination server up to 54% or so, and then this exception appears in the log and synchronization starts over at 0% in a loop.

     

    [04.12.21 13:00:03.736 WARN  er1WeDftWkr4 ssaging.peer.PeerSessionListener] PSL:: Invalid connect state during sessionEnded after being connected, com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
    STACKTRACE:: com.code42.peer.exception.InvalidConnectStateException: RP:: Illegal DISCONNECTED state attempt, session is open RemotePeer-[guid=41, state=CONNECTED]; Session-[localID=1002437243897076021, remoteID=1002437243745158716, layer=Peer::Sabre, closed=false, expiration=null, remoteIdentity=STORAGE, local=172.17.0.6:45100, remote=162.222.41.249:4287]
            at com.code42.messaging.peer.ConnectionStateMachine.setState(ConnectionStateMachine.java:106)
            at com.code42.messaging.peer.ConnectionStateMachine.updateState(ConnectionStateMachine.java:248)
            at com.code42.messaging.peer.RemotePeer.lambda$updateStateFromEvent$0(RemotePeer.java:415)
            at com.code42.messaging.peer.RemotePeer.updateState(RemotePeer.java:468)
            at com.code42.messaging.peer.RemotePeer.updateStateFromEvent(RemotePeer.java:415)
            at com.code42.messaging.peer.RemotePeer.onSessionEnded(RemotePeer.java:563)
            at com.code42.messaging.peer.PeerSessionListener.sessionEnded(PeerSessionListener.java:133)
            at com.code42.messaging.SessionImpl.notifySessionEnding(SessionImpl.java:239)
            at com.code42.messaging.mde.ShutdownWork.handleWork(ShutdownWork.java:27)
            at com.code42.messaging.mde.UnitOfWork.processWork(UnitOfWork.java:163)
            at com.code42.messaging.mde.UnitOfWork.run(UnitOfWork.java:147)
            at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.base/java.lang.Thread.run(Unknown Source)

     

     

  10. 9 hours ago, mSedek said:

    It does not work. As soon as I pass the +vmx flag, the network card stops working, and that video seems outdated and overcomplicated for something that should be simple.

     

    Anyway, I don't think this is about whether I am able to run nested VMs, as I don't have any issue running nested VMs in Windows or Linux; my problem is with macOS, so my first question remains unanswered.

    Here is my qemu args line: 

     <qemu:arg value='-cpu'/>
     <qemu:arg value='IvyBridge,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,+vmx,check'/>

    And I am able to run Nox Player without a problem on my Catalina VM.

  11. On 3/28/2021 at 11:43 AM, mSedek said:

    I'm trying to use the Android emulator from Android Studio, as I develop Android apps. Under Linux and Windows VMs I can do it no problem, but macOS reports "YOUR CPU DOES NOT SUPPORT VT-X". I'm running my VMs on a Threadripper.

     

    Any tip to make it work? I'm passing these QEMU params:

     

    
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='************************'/>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='Skylake-Server,vendor=GenuineIntel,+hypervisor,+invtsc,kvm=on,+fma,+avx,+avx2,+aes,+ssse3,+sse4_2,+popcnt,+sse4a,+bmi1,+bmi2'/>
      </qemu:commandline>

     

    Add +vmx and make sure KVM nested is on. Works for me on Intel.
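    A quick way to confirm the host CPU actually exposes VT-x before editing the XML; this little helper of mine just scans /proc/cpuinfo text for a flag:

```python
def cpu_has_flag(cpuinfo_text: str, flag: str) -> bool:
    """Check /proc/cpuinfo contents for a CPU flag such as 'vmx' (Intel VT-x)
    or 'svm' (AMD-V). macOS guests need vmx visible, hence the '+vmx' arg."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return flag in line.split(":", 1)[1].split()
    return False
```

    On the unRAID host, call it with the contents of /proc/cpuinfo.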

     

  12. Is there a way to get sound from a V4L2 USB capture device into debian-buster-nvidia? Or does unRAID just not have the necessary drivers? Video is no problem, but I have been working on sound for some time; the sound part of the capture just does not show up. It works in a VM.

  13. 12 hours ago, ich777 said:

    Theoretically it should work just fine since I do the same in my Nvidia-Debian-Buster container but that's a little different since I'm doing 3D rendering and HW encoding in one container with Steam Link.

     

    I tested this also with Plex and Jellyfin where I started one transcode in the Plex and one transcode in the Jellyfin container at the same time and they both work just flawlessly.

     

    Good information! Can you give me the source for this?

    I am using your Debian-Nvidia-Buster to run OBS, streaming and encoding 24/7. It works great, and it's much lighter than a virtual machine; I have tried both. The quote comes from https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/

  14. Do you guys happen to know if I can have two Nvidia Dockers, one running Shinobi face detection and the other NVENC transcoding, on the same P2000 GPU at the same time?

     

    Edit:

    I found this online:

    Separate from the CUDA cores, NVENC/NVDEC run encoding or decoding workloads without slowing the execution of graphics or CUDA workloads running at the same time.

     

    But when I start one process (Docker container), it kicks the other out.
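    For what it's worth, here is a sketch I use to see which processes are on the GPU at a given moment, via nvidia-smi's query interface (note it lists compute contexts; pure NVENC sessions may not show up here):

```python
import subprocess

def gpu_processes(output=None):
    """Return (pid, process_name) pairs for processes holding the GPU.
    If 'output' is None, ask nvidia-smi; otherwise parse the given CSV
    text (handy for testing without a GPU)."""
    if output is None:
        output = subprocess.check_output(
            ["nvidia-smi", "--query-compute-apps=pid,process_name",
             "--format=csv,noheader"],
            text=True,
        )
    procs = []
    for line in output.strip().splitlines():
        if line:
            pid, name = (f.strip() for f in line.split(",", 1))
            procs.append((int(pid), name))
    return procs
```

    If both containers are truly sharing the card, both should appear in the list while running.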

  15. Has anyone ever tried using anything like this PCI-E Express 3 Port 1X Multiplier Riser Card? https://www.walmart.com/ip/PCI-E-Express-3-Port-1X-Multiplier-Riser-Card-Mining-Cable/269154426

     

    My computer only has one slot free (besides the GPU slot) for macOS, and I would like to put in a fenvi T919 (a native BCM94360CD PCIe WiFi/BT 4.0 card that supports macOS Continuity and Handoff, 802.11ac, 1750Mbps, 5GHz/2.4GHz MIMO with Beamforming+) in addition to the USB controller card I'm already passing through.

     

    How do these multiplier cards pull off sharing just one PCIe lane?

     

    All of the expansion cards in this machine that I want to use are just 1x.

     

    My system should be in the signature (Dell Optiplex 9020)