
cholzer

Members
  • Content Count

    47
  • Joined

  • Last visited

Community Reputation

1 Neutral

About cholzer

  • Rank
    Advanced Member


  1. Just want to let you know that everything went well!
  2. @StevenD thanks a lot! There's no special configuration required, right?
  3. Hi guys! I have a rather scary task ahead of me. 😬 My current config:
     - Core i5 system on an Asus mainboard, 8GB RAM
     - LSI 9200-8i (IT mode)
       - 1x 8TB Ironwolf parity
       - 4x 6TB Ironwolf data
     - 1x 120GB SSD cache (no Dockers, no VMs)
     Now I want to move only my Ironwolf HDDs and the Unraid USB stick to a new system:
     - Xeon E5-2620, Supermicro X10SRi-F, 32GB RAM
     - LSI 9300-8i (IT mode)
     - 1x 500GB SSD cache
     Am I correct that this should basically be "plug and play"? Unraid should boot up and the array should be present (minus the cache SSD, which I will replace with a new one). Anything special that I have to pay attention to? 😅 Thanks in advance!
  4. This is the vbios I used (and removed the header from): https://www.techpowerup.com/vgabios/213099/asus-rtx2070super-8192-190623 The other Asus 2070 Super cards on Techpowerup are Strix models; I do not have a Strix, I have this one (the picture matches as well). I don't know why I get a black screen with OVMF but not with SeaBIOS, but that is what is happening on my rig. 😅 What is confusing me now is that when I use the keyboard (which is passed through to the VM), I can't control the VM; instead the Unraid terminal shows up again. I have ordered a USB PCIe card now so I can pass that entire card through to the VM and connect mouse and keyboard to it. Passing through one of the two mainboard USB controllers sadly did not work.
  5. Thanks for your reply! I downloaded the BIOS from Techpowerup and removed the header with HxD (a scripted version of that edit is sketched below). Ten seconds ago I got it to work! I must use SeaBIOS; with OVMF it does not work:
     - OVMF + i440fx-4.2 -> black screen
     - OVMF + Q35-4.2 -> black screen
     - SeaBIOS + i440fx-4.2 -> works
     - SeaBIOS + Q35-4.2 -> works
     Next issue is that as soon as I use the keyboard I passed through to the VM, the Unraid terminal comes back. 😅
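A minimal sketch of the header-removal step described above, assuming the Techpowerup dump carries a vendor header in front of the actual ROM image, which must begin with the 0x55AA option-ROM signature. The file names are taken from the posts; everything else is illustrative, and the output should be verified in a hex editor such as HxD before use:

     #!/usr/bin/env python3
     # Trim everything before the first 0x55AA option-ROM signature,
     # mirroring the manual HxD edit described in the post above.
     data = open("Asus.RTX2070Super.8192.190623.rom", "rb").read()
     offset = data.find(b"\x55\xaa")  # the real ROM image starts here
     if offset < 0:
         raise SystemExit("no 0x55AA ROM signature found")
     with open("Asus.RTX2070Super.8192.190623_noHeader.rom", "wb") as out:
         out.write(data[offset:])
     print(f"stripped {offset} header bytes")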
  6. Making my first baby steps with Win10 VMs in Unraid. I'm trying to pass through an ASUS RTX 2070 Super to the VM, but while the VM does boot, I only get a black screen. The IOMMU group of the RTX 2070:
     [10de:1e84] 08:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
     [10de:10f8] 08:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
     [10de:1ad8] 08:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
     [10de:1ad9] 08:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
     I have added the last two to the syslinux config (vfio-pci.ids=10de:1ad8,10de:1ad9) as mentioned here: https://wiki.unraid.net/Unraid_6/Frequently_Asked_Questions#I.27m_having_problems_passing_through_my_RTX-class_GPU_to_a_virtual_machine
     This is my VM XML:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Windows 10</name>
       <uuid>c1f234d5-f238-9111-c751-6ae64addbfaa</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>9437184</memory>
       <currentMemory unit='KiB'>2097152</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>1</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/c1f234d5-f238-9111-c751-6ae64addbfaa_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='1' threads='1'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows.iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
           <target dev='hdb' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xf'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:72:6b:0d'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
           </source>
           <rom file='/mnt/user/domains/vbios/Asus.RTX2070Super.8192.190623_noHeader.rom'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x2'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>
     I also tried to "group" all four devices of the RTX 2070, I tried with and without the vbios, and I tried with "append iommu=pt pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1" (and rebooted every time, of course!), but I always get a black screen. Anyone have an idea what I'm doing wrong? (A quick way to verify the vfio binding is sketched below.) With VNC as the GPU the VM works fine. System is a Ryzen 3800X on an Asus Crosshair VIII.
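Before digging deeper, it may help to confirm on the Unraid host that all four functions of the card at 08:00.x really are bound to vfio-pci when the VM starts. A minimal sketch that reads the kernel's sysfs driver links, assuming the PCI address from the post above (the loop bounds and output format are illustrative):

     #!/usr/bin/env python3
     # Print the kernel driver bound to each function of the GPU at 0000:08:00.
     # All four should report 'vfio-pci' for a clean passthrough.
     import os

     SLOT = "0000:08:00"  # bus/slot of the RTX 2070 Super from `lspci -nn`

     for fn in range(4):  # functions .0 (VGA) through .3 (USB Type-C UCSI)
         link = f"/sys/bus/pci/devices/{SLOT}.{fn}/driver"
         driver = os.path.basename(os.readlink(link)) if os.path.islink(link) else "none"
         print(f"{SLOT}.{fn} -> {driver}")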
  7. That would be awesome! Do you also need a Docker container in Unraid that the RPi sends its commands to?
  8. Is that "www.home-assistant.io"? As far as I understand, you use the mobile app to launch the VM, not a physical push button, is that right?
  9. Origin, Uplay and the Epic launcher did not like that the last time I tried, which is why I'd like to go a different route. I suppose I could use the Unassigned Devices plugin, pass a disk through to the VM, and share it from there to my LAN. I don't need any parity for the game library.
  10. Hi everyone! I'd like to start a VM with a physical push button on the case of the PC (I suppose an Arduino will be required?). Does anyone here use something like that? So far I could only find very old topics on the internet about projects that tried to achieve this, but those I found were abandoned or (according to comments) don't work anymore. (A rough sketch of one possible approach follows below.) Thanks in advance!
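One way this could work, sketched under assumptions rather than as a tested recipe: a Raspberry Pi (no Arduino needed) watches a case button on a GPIO pin and starts the VM over SSH with virsh, which is available on the Unraid host, instead of going through a dedicated Docker container. The hostname 'tower', GPIO pin 17, and key-based SSH authentication are all assumptions:

     #!/usr/bin/env python3
     # Raspberry Pi side: on button press, ask the Unraid host to start the VM.
     import subprocess
     from signal import pause

     from gpiozero import Button  # standard RPi GPIO library

     button = Button(17)  # button wired between GPIO17 and GND

     def start_vm():
         # virsh refuses to start an already-running domain, so repeated
         # presses are harmless; requires key-based SSH auth to the host.
         subprocess.run(["ssh", "root@tower", 'virsh start "Windows 10"'], check=False)

     button.when_pressed = start_vm
     pause()  # keep the script alive, waiting for presses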
  11. Hi everyone! I've been using Unraid for over a year now and I'm super happy with it! An issue that is annoying me more and more is the insane size of PC games and their patches. Currently I copy the install folder of a game from my main rig to the other three gaming PCs to avoid re-downloading the entire game, or its stupidly large patches, on every client (I don't have fast internet). While that works, it does not fix the issue that on my main rig I frequently have to wait 60 minutes for a patch to download before I can actually play (again, slow internet). And the main rig must be running whenever my son wants to grab a game from it for his PC. So my idea is to utilize the VM feature of Unraid to deal with that. Goal:
      - have a Windows VM where all launchers are installed, which keeps all my games up to date
      - expose the launchers' install folders to the LAN so that clients can pull the install dir of each game
      What is the best way to achieve that? I'm especially puzzled by how to have this VM store the game library on the array, as game launchers don't accept network shares/drives as the game library location (one possible workaround is sketched below). Thanks in advance! :)
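One possible workaround, offered as an assumption rather than a tested recipe: give the VM a second vdisk whose image file lives on the array. Windows sees it as a local disk, which the launchers accept, while the image itself sits on parity-protected storage. In VM XML like the passthrough post above, that would be an extra <disk> element along these lines (the path, image name, and target dev are illustrative; libvirt assigns the PCI address on its own):

     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <!-- image file stored on the array -->
       <source file='/mnt/user/domains/Windows 10/games.img'/>
       <!-- appears in Windows as a local virtio disk -->
       <target dev='hdd' bus='virtio'/>
     </disk>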
  12. Thank you for the very detailed explanation! I will follow it to the letter!
  13. Thanks itimpi! I thought that I couldn't use the array during the rebuild, as writing new files to the array would change the parity that the rebuild depends on. Good to know that this is not the case! So best practice is to just remove the 6TB drive, put in the 8TB drive, and rebuild?