
bastl


Posts posted by bastl

  1. 4 minutes ago, Jeffarese said:

    This container is a full nextcloud container.

    Then this isn't an "official" unraid container. You might have to check for that specific container how to set up the paths correctly to have the DB on a different drive, if it's even possible to set it up that way.

     

    Edit:

     

    Wait a second. Maybe the @linuxserver.io team has already included it in one of the last versions? My setup has been running for almost 2 years now with MariaDB as an external DB.

  2. As long as you're not accessing or changing any files, the array drive shouldn't spin up. At least that's how it behaves for me. The config for the docker is in the appdata share, and the user share in HostPath2 sits on the array. I use MariaDB to handle the database for Nextcloud, also sitting in appdata on the cache drive. I guess most people configured it like this. Not sure how you set it up using MySQL or other databases. Also not sure if there even is a full Nextcloud container with an included DB. Most of the ones I know always need an external DB.

  3. Nice to see you fixed it. I could have run into the same issue with Windows 10. The first thing I do is disable everything in the privacy settings: no access to anything until you need it. But at some point you can forget there's a checkbox you ticked before to disable something. Back in the day, the main use of a firewall was to prevent access from the outside. These days, more and more, you have to configure firewalls to prevent stuff from leaving your network. I hate that shift.

    • Like 1
  4. @chron You'll see the best performance by choosing cores and RAM from the same node, not by mixing them. Preferred nodes in your case are node0 or node2. Limit your VM to only one node and its RAM, and isolate the cores from everything else, as you can see in my picture above. The cores used by my VM are isolated and only used by this specific VM. Don't use cores from node1+3 for VMs. It will work, sure, but you add an extra hop the data has to travel to reach the RAM, which adds latency and also slightly reduces speed.
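
    If you're not sure which logical CPUs sit on which node, the kernel exposes the mapping in sysfs; a quick sketch (on a non-NUMA box only node0 shows up, and `numactl --hardware` shows the same if installed):

    ```shell
    #!/bin/sh
    # Print the logical CPU list for every NUMA node.
    # Cores pinned in <cputune> should all come from one of these lists.
    for node in /sys/devices/system/node/node*; do
        printf '%s: cpus %s\n' "$(basename "$node")" "$(cat "$node/cpulist")"
    done
    ```

    Whatever list node0 reports is what you'd pin in `<cputune>` together with `nodeset='0'` in `<numatune>`.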

  5. 15 hours ago, chron said:

    <cputune>
        <vcpupin vcpu='0' cpuset='18'/>
        <vcpupin vcpu='1' cpuset='42'/>
        <vcpupin vcpu='2' cpuset='19'/>
        <vcpupin vcpu='3' cpuset='43'/>
        <vcpupin vcpu='4' cpuset='20'/>
        <vcpupin vcpu='5' cpuset='44'/>
        <vcpupin vcpu='6' cpuset='21'/>
        <vcpupin vcpu='7' cpuset='45'/>
        <emulatorpin cpuset='18-21'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>

    Are you sure the cores you're using are on node0? You have to set the correct node in numatune. The latency and speeds you're seeing indicate that you're using the wrong node. With "strict" you limit the VM to use RAM only from a specific node, in your case "0". The cores have to be from the same node to get the best performance.

     

    example from my setup:

      <cputune>
        <vcpupin vcpu='0' cpuset='9'/>
        <vcpupin vcpu='1' cpuset='25'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='26'/>
        <vcpupin vcpu='4' cpuset='11'/>
        <vcpupin vcpu='5' cpuset='27'/>
        <vcpupin vcpu='6' cpuset='12'/>
        <vcpupin vcpu='7' cpuset='28'/>
        <vcpupin vcpu='8' cpuset='13'/>
        <vcpupin vcpu='9' cpuset='29'/>
        <vcpupin vcpu='10' cpuset='14'/>
        <vcpupin vcpu='11' cpuset='30'/>
        <vcpupin vcpu='12' cpuset='15'/>
        <vcpupin vcpu='13' cpuset='31'/>
        <emulatorpin cpuset='8,24'/>
        <iothreadpin iothread='1' cpuset='8,24'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='1'/>
      </numatune>


     

    node0 = cores 0-7/16-23

    node1 = cores 8-15/24-31

  6. You basically did everything right. Shut down the VM, move/copy the vdisk, change the path. In general this works without any issues, except for one thing: if you have custom changes in the XML and you edit the VM config via the template instead of the XML, those old custom changes can get lost. The safe way is to change the path in the XML. Do you have a backup of your old XML you can use, changing only the path in it?

     

    Maybe your vdisk got corrupted during the copy process. Did the copy process finish before you started the VM again? Is the new drive a healthy one, or an old one you had lying around that might have some bad sectors?
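
    For reference, the vdisk path lives in the `<source>` element of the disk definition; after moving the image, that's the only line that has to change (the paths here are just examples, not your actual shares):

    ```xml
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- old location: <source file='/mnt/cache/VMs/win10/vdisk1.img'/> -->
      <source file='/mnt/disk2/VMs/win10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
    </disk>
    ```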

  7. 3 hours ago, Select25 said:

    Coming to my Unraid dashboard this morning, I saw that all of my Linuxserver dockers have an update. But every time I start the update, it looks like this.

     

    IMAGE ID [latest]: Pulling from linuxserver/nextcloud. 
    Status: Image is up to date for linuxserver/nextcloud:latest

    TOTAL DATA PULLED: 0 B

     

    Looks like there is nothing to update or download? After the update is "complete" the update notice stays away until I check for updates again. I already rebooted my server because I thought there was a problem with my network or internet connection, but that seems fine. Can anyone help me here?

    The same happened for me. DuckDNS, letsencrypt, mariadb, nextcloud, unifi-controller and duplicati show available updates, but nothing gets pulled. Bitwarden, Netdata, urbackup and krusader are without update notices. Selecting only one docker to update, 0 B of data is downloaded, and checking for updates again brings back the notice of an available update. Looks like something with the Linuxserver repo is broken @linuxserver.io

    • Upvote 1
  8. If the device you want to pass through isn't in its own IOMMU group, passthrough won't work. If you can't split the group up with ACS Override, there isn't really much you can do. Check your BIOS for an IOMMU setting and play around with it. Not all BIOSes have these options. Sometimes a BIOS update can help to get your groupings split.
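
    A quick way to see the groupings is to walk sysfs; a minimal sketch (it prints nothing if the kernel booted without IOMMU enabled):

    ```shell
    #!/bin/sh
    # List each PCI device together with its IOMMU group.
    # No output means IOMMU is off or unsupported on this boot.
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue        # glob didn't match: no groups at all
        group=${dev%/devices/*}          # strip ".../devices/<id>"
        printf 'IOMMU group %s: %s\n' "${group##*/}" "$(basename "$dev")"
    done
    ```

    A device is safe to pass through on its own only if nothing else (besides its bridges) shows up in the same group.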

  9. @Ernie11 Forget about the idea of playing games on a "virtual" GPU. It's an emulated GPU which may provide feature sets like OpenGL or Vulkan, but without any of the hardware acceleration a physical GPU has behind it. If you want to play games in a VM that need some horsepower, pass through a physical GPU.

  10. I've used the Ubuntu template to create a PopOS VM. I used 19.04 and changed nothing in the template, nor did I install anything. Below you can see that other resolutions like yours are available. The default VNC driver in this template is QXL. Check which one you are using.

     

    [screenshot: PopOS display settings showing the available resolutions]

     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='20'>
      <name>Pop</name>
      <uuid>6d039ddb-88c0-1e32-b457-b779ea549448</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='18'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='19'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/6d039ddb-88c0-1e32-b457-b779ea549448_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='1'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/VMs/Pop/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Linux/pop-os_19.04_amd64_nvidia_4.iso'/>
          <backingStore/>
          <target dev='hda' bus='sata' tray='open'/>
          <readonly/>
          <boot order='2'/>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0x14'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:cd:bd:8c'/>
          <source bridge='br0'/>
          <target dev='vnet3'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/3'>
          <source path='/dev/pts/3'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-20-Pop/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <alias name='input0'/>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'>
          <alias name='input1'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input2'/>
        </input>
        <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='de'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </memballoon>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>

     

  11. 1 hour ago, sec787 said:

    Unable to power on device, stuck in D3

    Also known as the AMD reset bug. Some people with 5xx or Vega cards have that issue. If you search the forums you will find a couple of reports. The card can only initialise once and can't be reset; in most cases only a server reboot fixes this. Some people reported that not passing through the audio part of the card works, for others using Q35 fixes it, and for some it worked after passing through a GPU BIOS for the card. Another guy reported that booting unraid in legacy mode instead of UEFI worked for him. Unfortunately there isn't a one-click solution.

  12. Just an idea. Back when I played around with Server 2008 R2, a non-activated install first told the user to activate, and after a certain number of days it shut itself down after a couple of hours of use. Your VM logs show the VM is shut down, not paused by a full cache disk or by the disk the vdisk sits on.

     

    Edit:

    I don't know if that only applies to specific server versions or if something has changed since then. I last played around with Windows Server versions 2-3 years ago.

  13. As you maybe already noticed, it's kind of hard to compare 2 systems when every spec is different. Memory speed and latency are a huge thing on the first-gen Ryzens. As testdasi already mentioned, the chiplet design and the communication between the chips is the next thing you have to factor in. Different cores for a VM can lead to different memory speeds/latencies. Next: which slot are you using for the GPU? Some aren't connected directly to the CPU. Limiting the PCIe lanes by using a slot wired to the chipset can also be an issue. 16 vs 8 lanes shouldn't matter, but only 4 lanes through the chipset, shared with other devices (USB, network, storage), will bottleneck the GPU.
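
    To see what a slot actually negotiated, you can compare LnkCap (what the device supports) against LnkSta (what it got); a rough sketch, assuming pciutils is installed (full capability output may require root):

    ```shell
    #!/bin/sh
    # Compare supported vs negotiated PCIe link speed/width per device.
    # A GPU showing "Width x4" in LnkSta despite "Width x16" in LnkCap
    # is sitting in (or wired as) a chipset/x4 slot.
    if command -v lspci >/dev/null 2>&1; then
        lspci -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' \
            || echo "no link info visible (try running as root)"
    else
        echo "lspci not available"
    fi
    ```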

  14. ACS Override is not something you set to gain performance. Its only use case is to split your IOMMU groupings to separate devices from each other. 30k vs 37k is a huge difference for the graphics score alone. With the overhead of virtualisation, 1k, maybe 2k, is what you can expect to lose. Disk IO, for example, shouldn't be the issue; benchmarks and game engines load most of their data at the start. Maybe the memory speed is what's causing the difference for you. Are you using the same DIMMs and the same XMP profile for both tests?
