cholzer

Posts posted by cholzer

  1. 15 hours ago, testdasi said:

    How did you pick the vbios?

    There are 4 different versions on Techpowerup.

     

    Booting the VM under UEFI (i.e. OVMF) should not affect your ability to pass it through.

    I still think you got the wrong vbios.

    This is the vbios I used (and removed the header from): https://www.techpowerup.com/vgabios/213099/asus-rtx2070super-8192-190623

     

    The other Asus 2070 Super cards on Techpowerup are Strix models; I do not have a Strix, I have this one (the picture matches as well). :)

    I don't know why I get a black screen with OVMF but not with SeaBIOS, but that is what is happening on my rig. 😅

     

    What is confusing me now is that when I use the keyboard (which is passed through to the VM), I can't control the VM; instead the Unraid terminal shows up again.

    I have now ordered a USB PCIe card to pass through to the VM in its entirety, and will connect mouse and keyboard to it. Sadly, passing through one of the two onboard USB controllers did not work.

  2. 9 minutes ago, testdasi said:

    How did you get your vbios?

    Your XML doesn't seem out of place, so the number one most likely candidate is a wrong vbios, particularly if it was downloaded from Techpowerup.

    Thanks for your reply!

    I downloaded the vbios from Techpowerup and removed the header with HxD.
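
    In case it helps anyone else: instead of eyeballing it in HxD every time, the header strip can be scripted. This is just a rough Python sketch based on my understanding that the usable image starts at the first 0x55AA option-ROM signature - verify the result in a hex editor before using it:

    import sys

    # usage: python3 strip_vbios.py dumped.rom stripped.rom
    # Everything before the first 0x55AA PCI option-ROM signature is
    # assumed to be the vendor header that Techpowerup dumps carry.
    src, dst = sys.argv[1], sys.argv[2]
    data = open(src, 'rb').read()
    offset = data.find(b'\x55\xaa')
    if offset < 0:
        sys.exit('no 0x55AA signature found - is this really a vbios dump?')
    with open(dst, 'wb') as f:
        f.write(data[offset:])
    print(f'removed {offset} header bytes, wrote {dst}')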

    10 seconds ago I just got it to work! :)

    I must use SeaBIOS; with OVMF it does not work.

    OVMF+i440fx-4.2 -> black screen
    OVMF+Q35-4.2 -> black screen
    SeaBIOS+i440fx-4.2 -> works
    SeaBIOS+Q35-4.2 -> works

     

    The next issue is that as soon as I use the keyboard I passed through to the VM, the Unraid terminal comes back. 😅

  3. Taking my first baby steps with Win10 VMs in Unraid.

    I'm trying to pass through an ASUS RTX 2070 Super to the VM, but while the VM does boot, I only get a black screen.

    The IOMMU group of the RTX 2070:

    [10de:1e84] 08:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
    [10de:10f8] 08:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
    [10de:1ad8] 08:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
    [10de:1ad9] 08:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)

    I have added the last two to the syslinux config (vfio-pci.ids=10de:1ad8,10de:1ad9) as mentioned here: https://wiki.unraid.net/Unraid_6/Frequently_Asked_Questions#I.27m_having_problems_passing_through_my_RTX-class_GPU_to_a_virtual_machine
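
    For reference, the relevant boot entry in my syslinux config ends up looking roughly like this (only the vfio-pci.ids part was added; the rest is the stock Unraid entry, so yours may differ):

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=10de:1ad8,10de:1ad9 initrd=/bzroot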

    This is my VM XML:

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Windows 10</name>
      <uuid>c1f234d5-f238-9111-c751-6ae64addbfaa</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>9437184</memory>
      <currentMemory unit='KiB'>2097152</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>1</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/c1f234d5-f238-9111-c751-6ae64addbfaa_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='1' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Windows.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:72:6b:0d'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/domains/vbios/Asus.RTX2070Super.8192.190623_noHeader.rom'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x2'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

    I also tried to "group" all 4 devices of the RTX2070, I tried with and without the vbios, I tried with "append iommu=pt pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1" (and rebooted every time OFC! :) ) but I always get a blackscreen.

    Does anyone have an idea what I'm doing wrong? With VNC as the GPU the VM works fine. :)

    System: Ryzen 3800X, ASUS Crosshair VIII.

  4. 22 hours ago, LiableLlama said:

    I actually use a pushbutton switch to power on/off my virtual machine. If you've got an RPi I can share my code with you (wouldn't be too hard to replicate for an Arduino). I use an RPi Zero that connects to WiFi and just sends a request to my Unraid server.

     

    That would be awesome! :)
    Do you also need a Docker container in Unraid for the RPi to send its command to?
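
    In the meantime, here is how I imagine it would look on the Pi - just a sketch (untested!), assuming key-based SSH access to the Unraid box, a momentary button wired between GPIO17 and GND, and host/VM names adjusted to your own setup:

    import subprocess
    from signal import pause
    from gpiozero import Button

    VM_NAME = 'Windows 10'   # libvirt domain name on the Unraid box
    UNRAID = 'root@tower'    # hypothetical host name - adjust to yours

    def start_vm():
        # virsh ships with Unraid; 'start' is harmless if the VM already runs
        subprocess.run(['ssh', UNRAID, f'virsh start "{VM_NAME}"'])

    button = Button(17)
    button.when_pressed = start_vm
    pause()  # keep the script alive, waiting for button presses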

  5. 1 hour ago, dodgypast said:

    Map an unraid share as a drive and then point steam at that drive.

     

    I've done it with Steam; YMMV with other clients.

    Origin, Uplay, and the Epic launcher did not like that the last time I tried, which is why I'd like to go a different route.

    I suppose I could use the Unassigned Devices plugin, pass a disk through to the VM, and share it from there to my LAN. I don't need any parity for the game library.
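
    If I go that route, the client side would presumably just be the usual mapped drive pointing at the VM's share, something like this (share and host names are hypothetical):

    net use S: \\GAMESVM\games /persistent:yes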

  6. Hi everyone!

     

    I'd like to start a VM with a physical push button on the case of the PC (I suppose an Arduino will be required?).

     

    Does anyone here use something like that?

    So far I have only found very old topics on the internet about projects that tried to achieve this, and those were abandoned or (according to the comments) no longer work.

    Thanks in advance! :)

  7. Hi everyone!

    I've been using unraid for over a year now and I'm super happy with it!

    An issue that annoys me more and more is the insane size of PC games and their patches.

    Currently I copy a game's install folder from my main rig to the other three gaming PCs to avoid re-downloading the entire game, or its stupidly large patches, on every client (I don't have fast internet).

     

    While that works, it does not fix the issue that on my main rig I frequently have to wait 60 minutes for a patch to download before I can actually play (again, slow internet). And the main rig must be running whenever my son wants to grab a game from it for his PC.


    So my idea is to utilize the VM feature of Unraid to deal with that.

    Goal:

    • have a Windows VM where all launchers are installed, keeping all my games up to date
    • expose the launchers' install folders to the LAN so that clients can pull a game's install dir

     

    What is the best way to achieve that?

    I'm especially puzzled by how to have this VM store the game library on the array, as game launchers don't accept network shares/drives as the game library location.

     

    Thanks in advance! :)

  8. 6 hours ago, PeteB said:

    Here's what I do when I replace a data drive:

    1. Run a parity check first before doing anything else

    2. Set the mover to not run by changing its schedule to a date well into the future. This will need to be undone after the array has been recovered.

    3. Take a screenshot of the state of the array so that you have a record of the disk assignments

    4. Ensure that any dockers which write directly to the array are NOT set to auto start

    5. Set the array to not autostart

    6. Stop all dockers

    7. Stop the array

    8. Unassign the OLD drive (ie: the one being replaced)

    9. Power down server

    10. Install the new drive

    11. Power on the server

    12. Assign the NEW drive into the slot where the old drive was removed

    13. Put a tick in the "Yes I want to do this" box and click Start.

     

    The array will then rebuild onto the new disk. Dockers that don't write directly to the array can be restarted.

     

    When the rebuild is complete, the mover, docker and array auto start configuration can be returned to their normal settings.

     

    NOTE: You CAN write to the array during a rebuild operation, but I elect not to do so, to ensure my parity remains untouched for the duration of the recovery. Reading from the array is fine as the device contents are emulated whilst the drive is being rebuilt.

     

    Thank you for the very detailed explanation! I will follow it to the letter! :D

  9. 23 minutes ago, itimpi said:

    You CAN use the array while a disk is being rebuilt, as the missing disk is emulated by the combination of the other disks plus parity. However, you will notice performance degradation - the amount being dependent on your hardware. Many users prefer not to use the array while a disk is being rebuilt, but that is a personal preference and not mandated by unRAID.


    Thanks itimpi!

     

    I thought that I couldn't use the array during the rebuild, as writing new files to the array would change the parity that is required for the rebuild.

    Good to know that this is not the case!

     

    So best practice is to just remove the 6TB drive, put in the 8TB drive and rebuild?

  10. Hello everyone!

     

    I am running 6.3.5 with:

    • Cache: 120GB SSD
    • Parity: 8TB
    • Data1: 6TB
    • Data2: 6TB
    • Data3: 6TB

     

    During the next 6 months I want to replace all data drives with 8TB models, and I will start with Data1 this week.

     

    If I just replace the drive and let the array rebuild, I can't use the NAS until the rebuild has finished, right?

     

    So my question is: what is the best practice for doing this with minimal downtime?

     

    Thanks in advance! :)

  11. 11 hours ago, Squid said:

    Just FYI: if you are not having problems, do not run FCP in this mode. On a properly running, stable system, if you want logs preserved after a reboot, use the Tips & Tweaks plugin instead.

     

    I had unRAID freeze after it had been running for about one week. The SMB shares and webGUI no longer responded, though I could still ping the unRAID box.

    I had to do a hard reboot of the system, which meant that all logs were gone.

     

    Troubleshooting mode should help me track down the source of this freeze (should it occur again within the next 14 days), right? Or should I use the Tips & Tweaks plugin instead? :)

  12. 29 minutes ago, pwm said:

    OK. Then it never emitted any critical error when it locked up.

     

    The only line it emitted after it had started up was the last part of the last line. After the login prompt, it emitted that it had stopped PID 7747. But that isn't an error message.

     

    But it's entirely possible for a machine to freeze without emitting error messages - it's just a question of whether some part of the Linux code has time to spot any error or not.

     

    Yeah it responded to a ping, but neither the webGUI nor SMB worked.


    The console did not respond to the keyboard either.

  13. Thanks a lot for this plugin! I have experienced an unRAID freeze, so this will be great for preserving the logs past the required forced reboot!

     

    Quote

    When running in this mode the syslog is continually captured to the flash drive, and a diagnostics dump is performed every 30 minutes

     

  14. 2 hours ago, pwm said:

    The really critical errors are normally emitted to the console so it's always interesting to have a monitor connected. Logging to console will work even if the file system layer has broken in the Linux kernel.

     

    The screenshot I provided in the first post does not help track down the cause of my unRAID freeze though, does it?

  15. 13 hours ago, johnnie.black said:

    The syslog starts over after every reboot, so unfortunately it doesn't tell much about any previous issues.

    Hey, thanks for your reply!

     

    So what you are saying is that whenever unRAID is restarted we lose all logs, right?

     

    Is there a (debug) option to prevent that? Otherwise, how does someone find the cause of freezes, crashes, and random reboots? :-/

  16. On 5/9/2017 at 12:49 AM, Rudder2 said:

    It works.  The complete path is /usr/local/sbin/powerdown -r and you must use it.

     

    Thank you both, CHBMB and Squid! I thought that was the problem, but nowhere online could I find that path.

     

    The working script

     

    [screenshot: the working powerdown user script]

     

    Just want to spell it out so the next person who looks for this will hopefully get it to work without asking. I'm all about self-help, but sometimes after days of searching and coming up short I eventually ask. I'd been researching this for three days before I got frustrated and asked for help.

     

    Thank you for both of your time so much!

     

    Rudder2

     

    Thanks! I just had my unRAID freeze after it worked nicely for an entire week. I will try scheduled restarts to see if that makes the system more stable. :)
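
    If anyone wants to copy the idea, my plan is a user script along these lines - just a sketch, and the schedule (a custom cron entry via the User Scripts plugin) is my assumption; pick a quiet hour for your own setup:

    #!/bin/bash
    # Weekly clean restart; scheduled via the User Scripts plugin
    # with a custom cron entry such as "0 5 * * 1" (Mondays, 05:00).
    /usr/local/sbin/powerdown -r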

  17. unRAID 6.3.5 has been working great for the last week. But today, while copying files onto the array, it suddenly froze.

     

    I could still ping it from the command prompt, but I could no longer access the webGUI or the SMB shares.

     

    I had to do a hard reboot to get unRAID up and running again. It has now started a parity check on its own.


    Attached are a screenshot that I took of the unRAID monitor and the logs. Does anyone have an idea what happened there?

    nas-diagnostics-20171202-1613.zip

    20171202_160759.jpg

  18. 2 hours ago, johnnie.black said:

    LSI2008-based controllers don't support TRIM on most SSDs; you should connect it to the onboard SATA ports. The onboard controller is currently set to IDE, though - enter the BIOS and change it to AHCI.

     

     

    Thanks!

     

    I have changed the SATA config on the mainboard to AHCI and connected the SSD cache to one of the SATA 6G connectors.
    The cache drive shows up and is working inside unRAID too.

     

    Trimming seems to work now too! :D

    Nov 25 11:40:42 NAS root: /mnt/cache: 105.6 GiB (113398353920 bytes) trimmed
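
    (I believe that line is the output of the scheduled TRIM job running fstrim; the same check can be run by hand from the console:)

    fstrim -v /mnt/cache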

    Thanks a lot!!!!

     

    But the log is flooded with this now:

    Nov 25 11:37:23 NAS root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token

    I already tried uninstalling and reinstalling the plugin, but it still throws that error.

     

    *edit*
    "wrong csrf_token" was caused by having a browser tab open from before I restarted the server. refreshed that and now the error is gone.