Posts posted by bastl

  1. The PCI Root Port Patch only works on Q35 VMs on newer builds, starting with 6.7_RC5. You need to insert the following QEMU arguments at the end of the XML, right before the closing </domain> tag. 

      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
    </domain>
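
    Note: for the qemu:commandline block to be accepted, the opening <domain> tag has to declare the QEMU namespace. If saving the XML throws an error, check that your first line looks like this:

      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>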

    For me, with this change, the Nvidia system settings report the correct PCIe link speed and benchmarks like 3DMark work now. Without it I always had system freezes when 3DMark started up and checked for system information, GPU-Z reported wrong speeds, and I guess there are a couple of other tools out there with problems as well. 

  2. @fluisterben Check if this works for you 

     

    I had no issues setting this up in a Mint VM. The XML looks like the following:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/downloads/dl'/>
          <target dir='dl'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </filesystem>
        <interface type='bridge'>
          <mac address='52:54:00:e0:e7:31'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
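
    Inside the guest the share then gets mounted via 9p; a minimal sketch, assuming the mount tag 'dl' from the <target> above and /mnt/dl as the mount point in the guest:

        # mount the virtio 9p share with tag 'dl'
        mount -t 9p -o trans=virtio,version=9p2000.L dl /mnt/dl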

     

  3. @fluisterben If you have custom edits inside the XML for your network, let's say changing the model of the emulated device, these changes will be lost if you change something else in the VM settings. Keep that in mind. As itimpi stated, increasing the vdisk size in the webUI doesn't change the filesystem size inside the VM; you have to expand the partition inside the VM yourself. Also worth mentioning, and documented in Unraid's wiki: never decrease the vdisk size, or you will break your VM if sectors of the vdisk that are in use get cut off. 
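
    If the guest is Linux, a rough sketch of the expand step, assuming the vdisk shows up as /dev/vda with an ext4 filesystem on the first partition (adjust device names and tools to your layout; growpart comes from cloud-utils, and on a Windows guest you'd extend the volume in Disk Management instead):

        # grow partition 1 to fill the enlarged vdisk
        growpart /dev/vda 1
        # grow the ext4 filesystem to match the new partition size
        resize2fs /dev/vda1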

  4. First of all, is there a reason why you isolate all of your cores except the first one? If you have any dockers on your server or any tasks running in the background, let's say for syncing your server or for backups, all these tasks will only run on the leftover cores, in your case 0 and 24. The better solution is to only isolate the cores you wanna use for a VM and leave the rest unisolated, to be handled by Unraid itself. 

    [screenshot]
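
    For reference, isolation ends up as the isolcpus kernel parameter in the syslinux configuration on the flash device. Assuming a 24-core/48-thread box with only 0 and 24 left over, the current line presumably contains something like the example below; to isolate only the VM cores you'd shrink those ranges accordingly.

        append isolcpus=1-23,25-47 initrd=/bzroot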

     

    Second thing: as far as I know it's not advised to have an SSD/NVMe as one of your array drives. For testing this might be ok, but for long-term use it isn't the best solution. Trim isn't supported on array drives, so your SSD will become noticeably slower over time, and from what I understand about the way parity works on Unraid, this won't become a feature in the near future. 

    [screenshot]

     

    Next thing: the SSD you pass through to the VM is a 32GB Transcend SSD, right? I don't know how old that thing is, but if it's one of the first-gen SSDs and was used a lot over the past years, that might be the reason why you see some stuttering. Also, 32GB isn't that much for a Windows install. Running an SSD close to its max capacity can also cause a decrease in performance. 

    [screenshot]

    [screenshot]

    You have defined your disk as virtio. Usually for a vdisk file that's ok, but if you wanna squeeze out a bit more performance and reduce the IO load on the host, SCSI is the better choice. Before you switch to SCSI you first have to install the driver in the VM, otherwise Windows won't be able to boot. First add a small dummy SCSI vdisk via the Unraid UI, let's say only 1G, start your VM and go to the device manager to install the SCSI driver from the virtio ISO. Shut down the VM and then switch your main disk from virtio to SCSI. The dummy vdisk isn't needed anymore. The part in the XML should look something like this; adjust it so it matches your config.

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='threads' discard='unmap'/>
          <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_1TB_S2RFNX0J606029L'/>
          <backingStore/>
          <target dev='hdd' bus='scsi'/>
          <alias name='scsi0-0-0-3'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
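
    One thing to double-check: a disk on the scsi bus needs a matching virtio-scsi controller in the XML. The UI normally adds it for you; if it's missing, it would look roughly like this (the index and PCI address here are placeholders):

        <controller type='scsi' index='0' model='virtio-scsi'>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </controller>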

    If you by any chance have a spinning HDD laying around, use that as the array drive and use the Samsung SSD as cache drive. There you have a lot of space for a couple of vdisks for VMs and can benefit from trim. 
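
    As a quick check that trim actually works on the cache device (assuming it's mounted at /mnt/cache):

        fstrim -v /mnt/cache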

     

     

    Next thing: try to reduce your emulator pins to only 2. I don't think more than 2 are useful. Select "6,30" and you should be ok.

    [screenshot]

     

     

    For the next thing I'm not exactly sure if an "exact" CPU definition is needed for an EPYC CPU. Usually this part is used for Threadripper CPUs to report the correct amount of CPU cache to the guest OS.

      <cpu mode='custom' match='exact' check='full'>
        <model fallback='forbid'>EPYC</model>
        <topology sockets='1' cores='12' threads='1'/>
        <feature policy='require' name='topoext'/>
        <feature policy='disable' name='monitor'/>
        <feature policy='require' name='hypervisor'/>
        <feature policy='disable' name='svm'/>
        <feature policy='disable' name='x2apic'/>
        <numa>
          <cell id='0' cpus='6-11' memory='8388608' unit='KiB'/>
        </numa>
      </cpu>
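
    Side note: if reporting the CPU cache correctly is the goal, newer libvirt versions can also do that directly with a <cache> element inside <cpu> (the passthrough mode requires the host-passthrough CPU mode, from what I've seen); just as a pointer, not something I've tested on EPYC:

        <cache mode='passthrough'/>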

     

    Another thing you can try to reduce the stutter that might be caused by disk IO is to specify one "iothread" and pin it to 2 cores like in my example. With this I got slightly better latency on disk access. The cores "8,24" are on the same die as the rest of the cores in my example and are not included in the passed-through cores. In your case use the cores "6,30": remove them from the VM and only set them as emulatorpin and iothreadpin. 

      <vcpu placement='static'>14</vcpu>
      <iothreads>1</iothreads>
      <cputune>
        <vcpupin vcpu='0' cpuset='9'/>
        <vcpupin vcpu='1' cpuset='25'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='26'/>
        <vcpupin vcpu='4' cpuset='11'/>
        <vcpupin vcpu='5' cpuset='27'/>
        <vcpupin vcpu='6' cpuset='12'/>
        <vcpupin vcpu='7' cpuset='28'/>
        <vcpupin vcpu='8' cpuset='13'/>
        <vcpupin vcpu='9' cpuset='29'/>
        <vcpupin vcpu='10' cpuset='14'/>
        <vcpupin vcpu='11' cpuset='30'/>
        <vcpupin vcpu='12' cpuset='15'/>
        <vcpupin vcpu='13' cpuset='31'/>
        <emulatorpin cpuset='8,24'/>
        <iothreadpin iothread='1' cpuset='8,24'/>
      </cputune>

     

  5. Short question: is the Vega card working after you reboot the VM that uses it? The Vega cards usually have a reset bug where you can't restart a VM and have it pick the card up again; usually you have to restart the whole server. I would be surprised if it's working for you. Are you sure your card is picked up correctly by the VM and shows no errors in the device manager? 

  6. Ok, so your CPU pinning looks fine and your GPU is on the same node as the cores you use for that VM. Perfect. What I noticed from your numactl output: you're using a lot of RAM and only have 1.3GB free in total. In general Unraid doesn't care which node the RAM is connected to, and this could also cause your issue. Try the following to strictly use the RAM from a specific node: add the <numatune> part from the example below to your XML underneath the </cputune> element. This should prevent Unraid from using RAM from the second node. Another thing you can try is to set a specific iothread and pin it to the same node as the VM, also shown in the example below. 

     

      <vcpu placement='static'>14</vcpu>
      <iothreads>1</iothreads>
      <cputune>
        <vcpupin vcpu='0' cpuset='9'/>
        <vcpupin vcpu='1' cpuset='25'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='26'/>
        <vcpupin vcpu='4' cpuset='11'/>
        <vcpupin vcpu='5' cpuset='27'/>
        <vcpupin vcpu='6' cpuset='12'/>
        <vcpupin vcpu='7' cpuset='28'/>
        <vcpupin vcpu='8' cpuset='13'/>
        <vcpupin vcpu='9' cpuset='29'/>
        <vcpupin vcpu='10' cpuset='14'/>
        <vcpupin vcpu='11' cpuset='30'/>
        <vcpupin vcpu='12' cpuset='15'/>
        <vcpupin vcpu='13' cpuset='31'/>
        <emulatorpin cpuset='8,24'/>
        <iothreadpin iothread='1' cpuset='8,24'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
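
    If strict mode ever prevents the VM from starting because node 0 runs out of free memory, 'preferred' is the softer variant: it favors node 0 but can fall back to the other node instead of failing.

      <numatune>
        <memory mode='preferred' nodeset='0'/>
      </numatune>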

    With "numastat qemu" you can check how the current RAM usage of all running VMs is looking. For some reasons the "strict" settings not always works and some users already reported that even with this setting Unraid tends to grab a couple megs from the other node as well. I myself couldn't figure out what causing this or how to prevent that. Maybe @limetech has an idea or can provide us a solution to prevent Unraid doing that. 

     

  7. 14 hours ago, xandyedgex said:

    <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='20'/>
        <vcpupin vcpu='2' cpuset='6'/>
        <vcpupin vcpu='3' cpuset='22'/>
        <vcpupin vcpu='4' cpuset='8'/>
        <vcpupin vcpu='5' cpuset='24'/>
        <vcpupin vcpu='6' cpuset='10'/>
        <vcpupin vcpu='7' cpuset='26'/>
        <vcpupin vcpu='8' cpuset='12'/>
        <vcpupin vcpu='9' cpuset='28'/>
        <vcpupin vcpu='10' cpuset='14'/>
        <vcpupin vcpu='11' cpuset='30'/>
        <emulatorpin cpuset='0,16'/>
    </cputune>

    Are you sure you're not assigning cores from both CPUs with these settings? Can you post a screenshot of this VM's settings where we can see the core pairings? 

     

    You have 2 CPUs, each with 8 cores / 16 threads. If cores 0-7 are from the first CPU, you are mixing in the second CPU if that one starts counting at 8 and goes up to 15. You have chosen core 4, which is in the lower half of all cores, and 30, which is close to the last core. To me this looks wrong. Emulatorpin is set to 0,16. If that is the first physical/logical core pair, I end up with the following scheme:

     

    CPU1:
    0   16
    1   17
    2   18
    3   19
    4   20
    5   21
    6   22
    7   23

    CPU2:
    8   24
    9   25
    10  26
    11  27
    12  28
    13  29
    14  30
    15  31

     

    EDIT:

    Please post a screenshot of the output of the following command:

     

    numactl --hardware
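
    Alternatively, lscpu can show which logical CPUs share a physical core:

    lscpu -e=CPU,CORE,SOCKET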

    EDIT2:

    Example from my Threadripper CPU. This is basically also a dual-CPU setup. The topology Unraid shows you in the UI might be different from mine; it's just to give you an idea. The marked cores in my config are all cores from the second die/CPU, used by one VM. 

     

    [screenshot]

    [screenshot]

     

  8. 8 hours ago, Rick Sanchez said:

    multi-tick boxes

    Not exactly sure what you mean by that? Most videos were done with an older Unraid version; the UI slightly changed over time.

    You have a 4-core/8-thread CPU. Cores 0-3 are the physical cores and the other 4 are the logical cores/hyperthreads. If you select a specific core for a VM, it's advised to also select the matching HT core. CPU 0/4 is the first core pair (always used by Unraid itself), CPU 1/5 is the second pair, and so on. 

     

    In the example below you can see I've given core pair 5 (CPU 4/20) and 6 (CPU 5/21) to a VM. In this case CPU 4 and 5 are the physical cores, 20 and 21 the logical ones. 

     

    [screenshot]

  9. You have to specify the driver for the vdisk by hand at the beginning of the installation process. At the stage where you have to select a drive to install the OS onto, you see an empty list, right? At the bottom you should see "Load driver". Click that, navigate to your mounted virtio-win-...iso, and select the driver for the type of vdisk bus you have selected in the Unraid settings. You should find them under viostor or vioscsi. You have to select the right OS and architecture. If 2k19 isn't listed, select the latest one, which should be 2k16, or try the W10 drivers; they might also work. Select amd64 for x64 and install the driver. After that you should see a disk in the list. If none of the drivers work, switch your vdisk bus type in the VM settings to SATA. 
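
    As a hypothetical example of what such a driver path on the mounted virtio ISO can look like (the drive letter and exact folder names depend on your ISO version):

        E:\viostor\2k16\amd64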

  10. @jbrodriguez Damn it, it always confuses me. Selecting 443 as port and logging in as root works. But how do I connect as a non-root user now? In the web UI the permission configuration for some dockers and VMs is set to "read" and "exec" for a specific user, but the login doesn't work for that user in the app. The username and password are the ones set up in Unraid's user settings, right? I remember it worked in older builds, besides some strange issues with the checkboxes jumping around; I reported that issue to you a couple weeks ago. Now the mini game is gone, but only root can log in. 🤨

     

    The permissions I've set are now visible in the description of that specific user in Unraid's user management. Not sure if it's supposed to be that way.

     

    [screenshot]

     

  11. @jbrodriguez I've updated the Unraid plugin and the Android app, and now I can't add a server in the app. 

     

    [screenshot]

     

    "Open webUI" leads me to the web interface as normal, accessible via IP or local domain name; the port automatically switches to 2379 in the browser, and login as root works. On my phone I put in the IP of Unraid, the port 2378 specified in the plugin, root and the password, and it doesn't add the server. Discover also brings up no server. Same when turning https on/off in the app or changing the port to 2379, and entering the local domain name gives the same issue. 

     

    Edit:

    The web interface is accessible from my phone.

  12. @ken-ji Thanks, it looks like it's working. I have another virbr up, used by a pfSense VM and a Windows VM. pfSense acts as a Tor proxy for the Windows VM, which has access to the internet but no access to the LAN services Unraid provides. The question is: how do I make the virbr persistent so it survives an Unraid restart? Is there a config file somewhere on the flash device where I have to put the bridge settings? I can't really find anywhere that virbr0 is configured.
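
    From the generic libvirt tooling I would expect something like the following to make a network persistent and autostarted (the file path and network name here are just placeholders), but I'm not sure how that plays with Unraid's flash-based config:

        # dump the currently running (transient) network definition
        virsh net-dumpxml <name> > /tmp/<name>.xml
        # define it persistently and have libvirt start it on boot
        virsh net-define /tmp/<name>.xml
        virsh net-autostart <name>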

  13. Some people report better performance on Q35, others don't see any difference. Some people have more success passing a device through to a VM with Q35; for others that machine type doesn't change anything. With AMD cards, for example, people get a black screen on i440fx with the newest drivers and only drivers from the end of last year work, while on Q35 there is no such issue.

     

    I can only speak for myself, and it feels a bit snappier in games with Q35 on the latest 6.7_RC5 version. I know RC6 is out already, but I haven't had the time to test a lot. With the RC4 version Limetech introduced a newer QEMU version with which you're now able to set the correct link speed of the slot your GPU is plugged into. With earlier QEMU versions the GPU driver reported the wrong link speed and some software behaved oddly. For me, for example, benchmarks like Firestrike always crashed at the beginning, where they check for hardware. With this new patched QEMU and an extra line in the XML, the Nvidia system info tool now reports the correct link speeds and Firestrike works. 

     

    Before you start switching over to Q35, make sure you have a backup of your VM. I always created a new VM with the machine type of my choice and pointed it to the already existing vdisk, or passed through the NVMe Windows was already installed on. If you're on the 6.7 RC builds you can check if the QEMU part inserted at the end gives you better performance. It's advised to set the link speed so it matches the speed of the slot the GPU is plugged into (the speed value is in GT/s, so 8 corresponds to PCIe Gen3; width is the lane count). In my case it looks like this at the end of the XML:

      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
    </domain>

    You can find the topic for the whole PCI root port patch discussion here:

     
