
LittelD

Members
  • Posts

    18
  • Joined

  • Last visited

Posts posted by LittelD

  1. Hi, 

     

I need some help.

     

It should really be a simple thing, but I think I can't see the forest for the trees anymore.

     

I'm trying to mount an Unraid share on a Linux VM. It works insofar as I can see the folder, but I'm missing the permissions to create and delete files, etc.

     

This is the command I use:

    sudo mount -t cifs -o rw,vers=3.0,username=USER,password=PW //192.168.178.200/SHAREORDNER /home/ubuntuuser/ordner1/MOUNTORDNER
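The missing write access is typically an ownership issue on the client side rather than a share problem: without `uid`/`gid` options, a CIFS mount presents the files as owned by root. A sketch of the same mount with ownership mapped to the local user; the uid/gid value `1000` and the mode masks are assumptions, check yours with `id`:

```shell
# Sketch: same mount as above, plus ownership/permission options (values
# assumed) so the local Ubuntu user can create and delete files.
sudo mount -t cifs \
  -o rw,vers=3.0,username=USER,password=PW,uid=1000,gid=1000,file_mode=0664,dir_mode=0775 \
  //192.168.178.200/SHAREORDNER /home/ubuntuuser/ordner1/MOUNTORDNER
```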


When I try to add an Unraid share to the VM via 9p/VirtioFS instead, the VM suddenly has no network connection anymore.

     

I've been getting nowhere for hours and don't know what else to search for, since everything always leads to the same result.

     

     

Thanks a lot in advance for the help.

  2. 2 minutes ago, ich777 said:

    There is an option for that for sure because another user here enabled it successfully on his Dell server.

     

    Something with large PCI address space or something like that.

Well, this is not a Dell server, it's just an OptiPlex 7020, and there's nothing in the BIOS that sounds even remotely like that, or anything whose purpose I couldn't figure out.

  3. 33 minutes ago, ich777 said:

    Here is the error:

    Jan 10 12:28:22 UnraidTower kernel: NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
    Jan 10 12:28:22 UnraidTower kernel: NVRM: BAR1 is 0M @ 0x0 (PCI:0000:01:00.0)

     

This seems pretty much BIOS related. Please double-check that you've enabled Above 4G Decoding (this is surely called something very different in the BIOS on your Dell server), and check for an option about Resizable BAR support and enable it.

     

The driver should work fine; the only thing that is preventing it from working is currently your BIOS. Hope that helps. :)

     

    Please let me know how that option is called on the Dell servers if you find it.

Well, sadly there is no option in the BIOS for this...

     

Guess the journey ends here for now.

     

Thanks a lot for your support.

  4. 2 hours ago, ich777 said:

    Me too about the Tesla cards because they are Datacenter cards in general but the K series is confirmed working fine.

     

    From what I see the Tesla M40 24GB card uses the same driver as the consumer cards when you select the Cuda 12.0 Toolkit so in theory it should work just fine.

     

    Can you please post your Diagnostics with the driver installed so that I can see how your System is set up and what is maybe preventing it from being recognized?

Thank God!!!! And here I thought I was too stupid to read and understand basic stuff :D

     

Here are my diag files:

    unraidtower-diagnostics-20230110-1211.zip

  5. On 1/5/2023 at 12:43 AM, ich777 said:

    Tesla and Quadro cards are already supported by the plugin.

Hi, I'm kind of confused.

     

I've read multiple times that Tesla cards are supported, and also multiple times that they are not supported.

     

I own a Tesla M40 24GB card, and that one does not seem to work with your plugin. Am I doing something wrong, or is this one not supported? As far as I could see, this card is not in the list of supported graphics cards, but no Tesla seems to be in that list at all.

     

Is there a way I could get the drivers running manually?

     

     

Thanks a lot for your efforts.

  6. 1 minute ago, mickr777 said:

Ah, I just saw you purchased a Tesla M40. Worst case, you might have to create a Windows or Linux VM, pass through the GPU, install the GPU driver in the VM, and then install InvokeAI in it using their installer.

     

    https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.5

Noooo, passing through the card doesn't seem to be that easy either, hahahaha.

As Germans would say... out of the frying pan into the fire :D

  7. 35 minutes ago, mickr777 said:

    Did you install the unraid nvidia driver plug in?

    Also good to install NVTOP and GPU Statistics plugins with it too

     

     

Also, in your my-invokeai.xml, change only the port value at the end like this and leave the rest at 9090, if you're using a different default port:

    <Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">7790</Config>

     

Yeah, well, as far as I found out, Tesla cards are not supported by the plugin. Trying to find some other way :(

My M40 arrived... but I'm still getting an error :D

    docker run
      -d
      --name='InvokeAI'
      --net='bridge'
      -e TZ="Europe/Berlin"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="UnraidTower"
      -e HOST_CONTAINERNAME="InvokeAI"
      -e 'HUGGING_FACE_HUB_TOKEN'='xxxxxxxxxxxxxxxxxxx'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:7790]/'
      -l net.unraid.docker.icon='https://i.ibb.co/LPkz8X8/logo-13003d72.png'
      -p '7790:7790/tcp'
      -v '/mnt/cache/appdata/invokeai/invokeai/':'/InvokeAI/':'rw'
      -v '/mnt/cache/appdata/invokeai/userfiles/':'/userfiles/':'rw'
      -v '/mnt/user/appdata/invokeai/venv':'/venv':'rw'
      --gpus all 'invokeai_docker'
    7db9xxxxxxxxxxxxx0eb5327xxxxxx
    docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

     

Ay ay, it doesn't seem to be easy with my config :D
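For reference, the `could not select device driver "" with capabilities: [[gpu]]` error means Docker has no NVIDIA runtime registered to satisfy `--gpus all`. On Unraid, the Nvidia Driver plugin registers a runtime named `nvidia`, and containers request the GPU through it via environment variables rather than `--gpus`. A hedged smoke test along those lines; the CUDA image tag is an assumption:

```shell
# Sketch: request the GPU via the nvidia runtime + env variables
# (the Unraid convention) instead of --gpus all, and run nvidia-smi
# to confirm the card is visible inside a container.
docker run --rm \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```

If that works, the same `--runtime=nvidia` and variables can be added to the InvokeAI container in place of `--gpus all`.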

  9. Hi everyone,

     

I just ordered a Tesla M40 24GB card to play/have fun with (not games). While lurking and collecting information around the net, I came across someone who set up vGPU on Proxmox with this card.

     

    https://blog.zematoxic.com/06/03/2022/Tesla-M40-vGPU-Proxmox-7-1/

     

I was wondering if this is possible on Unraid too?

As far as I've read around the forum, vGPU is kind of a hot topic, mostly because it involves consumer cards and unlocking functions that weren't made for them. And since no one wants to annoy the big Nvidia or AMD, this doesn't really get developed.

     

But thanks to falling prices for Tesla cards, this should be interesting now. These at least aren't limited by Nvidia, and they were made for this kind of use case.

     

(Yeah, I know, the M40 I bought isn't supported by Nvidia for vGPU, see here: https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html, but that's why I'm asking about Tesla cards in general.)

     

Does anyone have information on how to make this work? Since a Tesla P4 is around 120 euros and a P40 only 250 euros, it's quite interesting to set up multiple VMs with these cards to run decent graphical stuff over VNC (don't care about games).

  10. 1 hour ago, mickr777 said:

Yes, you can run from the CPU, but it is extremely slow. To do this, in my-invokeai.xml from the guide,

    change

    <ExtraParams>--gpus all</ExtraParams>

    to

    <ExtraParams/>

     

Sorry, not working.

     

I'm getting the following error, then the Docker container suddenly stops:

     

    venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
      return torch._C._cuda_getDeviceCount() > 0
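One observation on that traceback: `hipGetDeviceCount` and `hipErrorInvalidDevice` come from HIP, i.e. a ROCm (AMD) build of PyTorch, which would also explain why an NVIDIA card is never picked up. A sketch for checking which build the container's venv actually holds; the container name and `/venv` path follow the `docker run` command posted above, but are still assumptions:

```shell
# Sketch: a ROCm build of torch prints a version like 1.13.1+rocm5.2,
# a CUDA build something like 1.13.1+cu117.
docker exec InvokeAI /venv/bin/python -c "import torch; print(torch.__version__)"
```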

     

  11. Hi,

     

I have an error I can't get rid of by myself.

     

    System :

Unraid 6.9.2

Mainboard: Dell Inc. 08WKV3

CPU: Xeon E3-1231 v3 (no iGPU)

GPU: AMD FirePro W4100

RAM: 16GB

     

     

I have a Windows 10 VM and I want to pass the GPU through to it, so Unraid will run headless.

Normally I can start the VM, run Windows, and everything works. But when I reboot inside the VM, my Unraid system crashes and the web interface stops responding. The same mostly happens when I force-stop the VM.

     

I found the following topic describing kind of the same problem:

    https://forums.unraid.net/topic/91319-solved-vm-start-upshutdown-crashes-unraid/

     

I tried adding pcie_no_flr=1022:149c,1022:1487 to the config file, but it had no effect (I don't have those IDs in the system devices list anyway?!?).

I then also added the device IDs of the AMD card to try, but still no effect.

     

    Currently the config file looks like this.

     

    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_no_flr=1022:149c,1022:1487,1002:aab0,1002:682c pcie_acs_override=multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
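A side note on the `pcie_no_flr` values: the parameter expects `vendor:device` ID pairs, which are printed in brackets at the end of each `lspci -nn` line (not PCI addresses). A small sketch of pulling the pair out, using a hard-coded sample line as a stand-in for real `lspci -nn` output, which is machine-specific:

```shell
# Sample lspci -nn line (stand-in -- run `lspci -nn` yourself for real output):
line='03:00.0 VGA compatible controller [0300]: AMD/ATI Cape Verde GL [FirePro W4100] [1002:682c]'
# The last bracketed hex pair is the vendor:device ID that pcie_no_flr wants;
# the class code [0300] has no colon, so the pattern skips it.
id=$(echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n1 | tr -d '[]')
echo "$id"   # -> 1002:682c
```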

     

     

This is the config file of my VM:

     

    <domain type='kvm'>
      <name>Windows_10</name>
      <uuid>f31c45c8-d216-ebd2-25e2-25dbb91a6744</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>6291456</memory>
      <currentMemory unit='KiB'>6291456</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='5'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='6'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/f31c45c8-d216-ebd2-25e2-25dbb91a6744_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='2' threads='2'/>
        <cache mode='passthrough'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Windows.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/vms/Windows_10/20220214_0127_vdisk1.img'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:df:0b:f0'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

     

Can anyone help me get rid of this problem, or give me hints on what to search for?

     

    Thanks in advance
