LittelD

Members
  • Posts: 18
Everything posted by LittelD

  1. Hi guys, how can I use this Docker container with a Tesla M40 GPU headless? Since the card is not connected to a display, Steam crashes. With an AMD GPU it works fine.
  2. Hi, I need some help. It should really be a simple thing, but I think I can't see the forest for the trees anymore. I am trying to mount an Unraid share in a Linux VM. It works to the point that I can see the folder, but I am missing the permissions to create and delete files etc. This is the command I use: sudo mount -t cifs -o rw,vers=3.0,username=USER,password=PW //192.168.178.200/SHAREORDNER /home/ubuntuuser/ordner1/MOUNTORDNER When I try to add an Unraid share to the VM via 9p/VirtioFS instead, the VM suddenly has no network connection anymore. I have been stuck on this for hours and don't know what else to search for, since everything leads to the same result. Many thanks in advance for the help.
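Missing create/delete rights on a CIFS mount are often an ownership problem on the client side: without explicit uid/gid options, the mounted files belong to root. A minimal sketch of the same mount with ownership options added; the uid/gid value 1000 and the credentials file path are assumptions, adjust to your own user:

```shell
# Sketch only, not a verified fix: map the mounted files to the local user
# so create/delete work. Check your uid/gid with `id`. Keeping the
# credentials in a root-only file avoids putting the password on the
# command line (file contains `username=USER` and `password=PW` lines).
sudo mount -t cifs \
  -o rw,vers=3.0,credentials=/root/.smbcreds,uid=1000,gid=1000,file_mode=0664,dir_mode=0775 \
  //192.168.178.200/SHAREORDNER /home/ubuntuuser/ordner1/MOUNTORDNER
```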
  3. Well, this is not a Dell server, it's just an OptiPlex 7020, and there was nothing in the BIOS that sounded even remotely like that, or anything I couldn't understand or didn't know what it did exactly.
  4. Well, sadly there is no option in the BIOS for this... I guess the journey ends here for now. Thanks a lot for your support.
  5. I'm really sorry, I forgot that... here it is, now after the install: unraidtower-diagnostics-20230110-1230.zip
  6. Thank God!!!! And I thought I was just too stupid to read and understand basic stuff. Here are my diag files: unraidtower-diagnostics-20230110-1211.zip
  7. Hi, I'm kind of confused. I have read multiple times that Tesla cards are supported, and also multiple times that they are not supported. I own a Tesla M40 24GB card, and it seems not to be working with your plugin. Am I doing something wrong, or is this card not supported? As far as I can see, it is not in the list of supported graphics cards, but no Tesla seems to be in that list at all. Is there a way I could get the drivers running manually? Thanks a lot for your efforts.
  8. Noooo, passing through the card seems not to be that easy either hahahaha. Germans would say: "vom Regen in die Traufe" (out of the frying pan into the fire).
  9. Yeah well, as far as I have found out, Tesla cards are not supported by the plugin. Trying to find some other way.
  10. My M40 arrived... but I'm still getting an error:

      docker run -d --name='InvokeAI' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="UnraidTower" -e HOST_CONTAINERNAME="InvokeAI" -e 'HUGGING_FACE_HUB_TOKEN'='xxxxxxxxxxxxxxxxxxx' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:7790]/' -l net.unraid.docker.icon='https://i.ibb.co/LPkz8X8/logo-13003d72.png' -p '7790:7790/tcp' -v '/mnt/cache/appdata/invokeai/invokeai/':'/InvokeAI/':'rw' -v '/mnt/cache/appdata/invokeai/userfiles/':'/userfiles/':'rw' -v '/mnt/user/appdata/invokeai/venv':'/venv':'rw' --gpus all 'invokeai_docker'
      7db9xxxxxxxxxxxxx0eb5327xxxxxx
      docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

      Ay ay, seems not to be easy with my config.
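The error `could not select device driver "" with capabilities: [[gpu]]` usually means Docker's NVIDIA integration is not active, so `--gpus all` has nothing to hand the request to. On Unraid, containers normally select the NVIDIA runtime explicitly and expose the GPU via environment variables instead; a hedged sketch, assuming the Nvidia Driver plugin actually recognizes the card (the GPU UUID below is a placeholder):

```shell
# Sketch only, not a verified fix: replace `--gpus all` with the NVIDIA
# runtime and the visibility variables Unraid containers typically use.
# 'GPU-xxxxxxxx' is a placeholder; the real UUID is shown in the
# Nvidia Driver plugin settings page.
docker run -d --name='InvokeAI' \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx' \
  -e NVIDIA_DRIVER_CAPABILITIES='all' \
  ... # remaining -p/-v/-e arguments exactly as in the original command
```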
  11. Hi everyone, I just ordered a Tesla M40 24GB card to play/have fun with (not games). While lurking and collecting information around the net, I came across someone who did vGPU on Proxmox with this card: https://blog.zematoxic.com/06/03/2022/Tesla-M40-vGPU-Proxmox-7-1/ I was wondering if this is possible on Unraid too? As far as I have read around the forum, vGPU is kind of a hot topic, because it mostly involves consumer cards and activating functions that were not made for them. And since no one wants to annoy the big Nvidia or AMD, this doesn't really get developed. But thanks to falling prices for Tesla cards, this should be interesting now. These at least aren't limited by Nvidia and were made for exactly these kinds of use cases. (Yeah, I know my M40 isn't supported by Nvidia for vGPU, see here: https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html, but that's why I'm asking about Tesla cards in general.) Does anyone have information on how to make this work? Since a Tesla P4 is about 120 euros and a P40 only 250 euros, it is quite interesting to set up multiple VMs with these cards to run decent graphical stuff over VNC (don't care about games).
  12. Thanks a lot, somehow it didn't work. But I have ordered a Tesla M40; I will wait and try again then.
  13. Sorry, not working. I get the following error and then the Docker container suddenly stops:

      venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
      return torch._C._cuda_getDeviceCount() > 0
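An init failure like the one in this post can be probed before the app starts, so the container falls back to CPU with a readable message instead of dying mid-run. A minimal sketch; `pick_device` is a hypothetical helper, not part of InvokeAI, and it assumes nothing beyond an optional PyTorch install:

```python
# Hedged sketch: choose "cuda" only when a usable GPU is actually present,
# otherwise fall back to "cpu" instead of crashing on HIP/CUDA init errors.
import importlib.util

def pick_device() -> str:
    # If torch is not installed at all, CPU is the only option.
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch
    try:
        # is_available() returns False (or raises) when the runtime cannot
        # initialize, e.g. hipErrorInvalidDevice on an unsupported card.
        if torch.cuda.is_available() and torch.cuda.device_count() > 0:
            return "cuda"
    except RuntimeError:
        pass
    return "cpu"

print(pick_device())
```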
  14. Can you tell me what I need to do to get it running with a FirePro W4100?
  15. Thank you, it seems to work now. I can start, restart, and force stop, and the Unraid system itself does not crash...
  16. I checked both devices under Tools → System Devices and clicked on 'Bind to VFIO at boot'. I guess that's what you mean by isolate/attach? How do I attach both devices as a multifunction device to the VM?
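For reference, attaching a GPU together with its audio function as one multifunction device is normally done in the VM's XML by giving both hostdev entries the same guest bus/slot, different functions, and multifunction='on' on function 0. A hedged sketch only; the host addresses 03:00.0/03:00.1 are placeholders, use the ones from your own System Devices page:

```xml
<!-- Sketch only: host addresses 0x03 slot 0x00 are placeholders. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <!-- GPU: guest function 0, marked as the multifunction root -->
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <!-- Audio function: same guest bus/slot, function 1 -->
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
```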
  17. Hi, I have an error I can't get rid of by myself.

      System: Unraid 6.9.2
      Mainboard: Dell Inc. 08WKV3
      CPU: Xeon E3-1231 v3 (no iGPU)
      GPU: AMD FirePro W4100
      RAM: 16GB

      I have a Windows 10 VM and I want to pass the GPU through to it, so Unraid will run headless. Normally I can start the VM, run Windows, and everything works. But when I reboot inside the VM, my Unraid system crashes and the web interface no longer responds. Mostly the same happens when I force stop the VM. I found the following topic with roughly the same problem: https://forums.unraid.net/topic/91319-solved-vm-start-upshutdown-crashes-unraid/ I tried adding pcie_no_flr=1022:149c,1022:1487 to the config file, but it had no effect (I don't have these in the system device list anyway?!). For testing I also added the device IDs of the AMD card, but still no effect. Currently the config file looks like this:

      label Unraid OS
        menu default
        kernel /bzimage
        append pcie_no_flr=1022:149c,1022:1487,1002:aab0,1002:682c pcie_acs_override=multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

      This is the config file of my VM:

      <domain type='kvm'>
        <name>Windows_10</name>
        <uuid>f31c45c8-d216-ebd2-25e2-25dbb91a6744</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>6291456</memory>
        <currentMemory unit='KiB'>6291456</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='5'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='6'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/f31c45c8-d216-ebd2-25e2-25dbb91a6744_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='2' threads='2'/>
          <cache mode='passthrough'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/Windows.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
            <target dev='hdb' bus='sata'/>
            <readonly/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/vms/Windows_10/20220214_0127_vdisk1.img'/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0x13'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:df:0b:f0'/>
            <source bridge='br0'/>
            <model type='virtio-net'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='1'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>
      Can anyone help me get rid of this problem, or give me hints on what to search for? Thanks in advance.