
Posts posted by Squazz

  1. 1 hour ago, craigjl77 said:

    For GPU pass thru on Windoze I use Q35-5.1 as Machine Type and a BIOS set to SeaBios. Know this to be a recommendation of SpaceInvader1 and it works well for my VM's. Suggest that you give it a try 🙂

Interesting. Can you point to a concrete place where this is recommended?

As far as I know, SeaBIOS is only recommended for Linux and macOS?

2. For the past 3 years, I've been daily driving a Windows VM (see the linked topic for my setup).


I've had some problems here and there, but this week everything just went haywire.

I've started getting corrupted boots, every single time.

After tinkering with it for the past week, I've tried setting up a bunch of new VMs today, fetching the latest Windows ISO and basing my VMs on that.

After installing all of the updates and rebooting, the problems start. Sometimes the system just halts after trying to start Windows for half an hour. Other times it goes straight to the blue screen, telling me that the bootup is corrupted and needs to be repaired (which I'm never able to do).



    I'm trying to set up the machine with 

    CPU Mode: Host passthrough

    10 logical CPUs pinned (and isolated)

    Initial Memory: 30720

    Max Memory: 30720



    USB 2.0

    Primary vDisk Location: auto: /mnt/user/domains/Windows 10/vdisk1.img

    Primary vDisk Size: 110 GB

    Primary vDisk Type: Raw

    Primary vDisk Bus: VirtIO

    Passing through:

    - 1070 Ti

    - 1070 Ti audio device

    - onboard sound card

    - Stubbed USB 3.0 controller: AMD Family 17h (Models 00h-0fh) USB 3.0 Host Controller | USB controller (30:00.3)


    The share "Domains" is set to "Prefer : Cache"


    My cache pool is a single 500 GB SATA SSD: Samsung_SSD_860_EVO_500GB_S4XBNF1MA02769A - 500 GB (sdb)


    unRaid version: 6.9.2

    I'm running the latest BIOS version for my motherboard


    My Hardware

    CPU: Ryzen 7 1700 (65W)
    Motherboard: MSI MS-7A33 motherboard
    RAM: 16 GB (2x 8 GB 2400 MHz) + 32 GB (2x 16 GB 2400 MHz)

    HBA: LSI 9201-8i

    GPU: 1070 Ti


    What information can I provide to help you help me? I'm kind of stuck; I don't know what to do from here.

    Right now, I'm unable to use unRaid to host Windows VMs

  3. @Josh.5 unfortunately I don't really play games anymore. It's primarily work and video-consumption that's done on my computer.


    I've been trying to play some CS:GO, but FPS seems to cap at 100, and I too have experienced micro-stutters from time to time. In the end, competitive CS:GO through a VM was a no-go for me, so I have a bare-metal machine for that.


    Passing through the entire SSD and the entire USB controller seems to give the best results too.


    Regarding my XML, I see no harm in sharing it :)

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='3'>
      <name>Windows 10</name>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>
      <vcpu placement='static'>10</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='3'/>
        <vcpupin vcpu='1' cpuset='11'/>
        <vcpupin vcpu='2' cpuset='4'/>
        <vcpupin vcpu='3' cpuset='12'/>
        <vcpupin vcpu='4' cpuset='5'/>
        <vcpupin vcpu='5' cpuset='13'/>
        <vcpupin vcpu='6' cpuset='6'/>
        <vcpupin vcpu='7' cpuset='14'/>
        <vcpupin vcpu='8' cpuset='7'/>
        <vcpupin vcpu='9' cpuset='15'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
      </os>
      <features>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='10' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <devices>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/disk/by-id/ata-Samsung_SSD_840_EVO_120GB_S1D5NSBF425061E' index='2'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/Kasper/vdisk1.img' index='1'/>
          <target dev='hdd' bus='virtio'/>
          <alias name='virtio-disk3'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </disk>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <alias name='pci.7'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:d3:d6:26'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Windows 10/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2f' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2f' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x31' slot='0x00' function='0x3'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x30' slot='0x00' function='0x3'/>
          </source>
          <alias name='hostdev3'/>
          <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x08e4'/>
            <product id='0x017a'/>
            <address bus='1' device='3'/>
          </source>
          <alias name='hostdev4'/>
          <address type='usb' bus='0' port='1'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'/>
    </domain>
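
    One note from fighting with hand-edited XML: a single dropped closing tag makes libvirt reject the whole definition. A minimal sketch of a sanity check (plain Python standard library, nothing unRaid-specific; the trimmed `domain_xml` string is just an illustrative stand-in for a full dump like the one above):

    ```python
    import xml.etree.ElementTree as ET

    # Trimmed stand-in for a full libvirt domain definition.
    domain_xml = """<domain type='kvm'>
      <vcpu placement='static'>2</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='3'/>
        <vcpupin vcpu='1' cpuset='11'/>
      </cputune>
    </domain>"""

    # fromstring() raises xml.etree.ElementTree.ParseError on malformed XML,
    # which catches the "forgot a closing tag" class of editing mistakes.
    root = ET.fromstring(domain_xml)

    # Pull out the vCPU pinning as a quick readability check.
    pins = {pin.get('vcpu'): pin.get('cpuset') for pin in root.iter('vcpupin')}
    print(pins)  # {'0': '3', '1': '11'}
    ```

    Running the same parse over the full definition (e.g. the output of `virsh dumpxml`) is a cheap way to spot a broken edit before libvirt ever sees it.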


  4. I'm trying to get the Google Play Music Manager docker working.


    Installing the image and logging in the first time works like a charm.

    But after logging in, I get the following message:




    When looking in the log, I see the following:


    2019-12-27 11:43:03,245 +0100 ERROR TId 0x14b62f5df740 Error: Domain (1) code (400) label (Bad Request) url (https://accounts.google.com/o/oauth2/programmatic_auth?email=SomeEmail@gmail.com) [ExtendedWebPage.cpp:48 ExtendedWebPage::extension()]


    If I navigate to the URL, I get the following message.




    Can somebody else verify that the docker image is still functioning?

    Am I doing something wrong?

  5. On 5/9/2019 at 11:09 PM, tjustice86 said:

    How are you passing through your audio? I would like to use HDMI passthrough but can't seem to get it to work with my VM. The on board audio controller works but I would like to watch movies and even game at at least 5.1. Any ideas or pointers?


    MOBO: MSI B450 Gaming Pro Carbon AC

    CPU: AMD Ryzen 7 1700x

    GPU: XFX AMD Radeon rx 580 


    I will have to check my cstates and memory speed when I get home.

    Hi there, I'm SO SORRY for replying so late.


    Right now, I'm using a USB DAC for my audio. Works like a charm :)

    When at my computer, I'm only running a pair of headphones, no surround.

  6. 7 hours ago, Squid said:

    And there will never be.  Problem is that when reading the file from /mnt/user/share/..., unRaid will give you the version from the lowest numbered drive (cache, disk1, disk2, etc)  The version which you may be expecting, or which may be the latter version could be on the highest numbered drive.  Hard to say, and I don't want the plugin to attempt to guess what the user wants to actually do.


    You can manually clean them up via Krusader (if you map /mnt instead of /mnt/user in the template) by navigating to the appropriate drive(s) and inspect the files accordingly


    Should be noted that under normal circumstances, duplicated files do not happen under unRaid.

    I totally understand, and I only have this problem due to me using unBalance :/

    I was hoping there was a way to tell a program "if you find any dupes on diskX, delete them", so that I was in charge of which files were removed. Guess I'll have to do it the hard way :P

  7. On 5/5/2017 at 8:45 PM, jonathanm said:

    Very definitely possible, and can be quite problematic for someone who doesn't have a good handle on what a user share is.

    Consider the following.




    In this example, the file accessed at /mnt/user/myshare/test.txt will be the one on disk1, for all intents and purposes /mnt/disk2/myshare/test.txt doesn't exist in the user share system. However... if you delete /mnt/user/myshare/test.txt, then the one on disk2 will show up the next time the user share system is reloaded. It's quite possible to have 2 or more identically named files this way, but the contents not be equal. You could theoretically hide files from the user shares this way, and cause all sorts of confusion.


    Typically this kind of shenanigans occurs when someone with a poor grasp of how unraid works starts playing around with disk shares. That is one of the main reasons disk shares are disabled by default.


    In my case, this happened when using unBalance to scatter files from one disk to multiple others.

    As I see it, this docker can't help with looking for duplicates at the disk level, so does anyone know of another docker or app that can help us find and delete these duplicate files? :)
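
    For anyone wanting to script the disk-level duplicate hunt themselves, a rough sketch (the disk mount paths and share name are example values; it only reports duplicates, deleting is deliberately left to you):

    ```python
    from collections import defaultdict
    from pathlib import Path

    def find_disk_dupes(disk_roots, share):
        """Map each file's share-relative path to the disks it lives on,
        keeping only paths that appear on two or more disks."""
        seen = defaultdict(list)
        for root in disk_roots:
            base = Path(root) / share
            if not base.is_dir():
                continue  # this share has no files on this disk
            for f in base.rglob('*'):
                if f.is_file():
                    seen[f.relative_to(base)].append(root)
        return {path: roots for path, roots in seen.items() if len(roots) > 1}

    # Example call (illustrative unRaid disk mounts):
    # find_disk_dupes(['/mnt/disk1', '/mnt/disk2'], 'myshare')
    ```

    Note this only matches on path, exactly like the user-share overlap described above; as jonathanm points out, the contents can differ, so compare before deleting anything.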

  8. 1 hour ago, bonienl said:

    In Unraid 6.7 we do not suppress PHP warnings anymore.


    Please follow the advice of @johnnie.black and delete/recreate your docker image. You will need to re-install your containers, if you have CA installed, you can use its restore function to quickly get back all your containers.

    I'm not sure how to use the restore function. Where should I look?

  9. 3 minutes ago, johnnie.black said:
    Feb 10 16:21:58 NAS kernel: BTRFS warning (device loop2): csum failed root 5 ino 15295 off 0 csum 0x98f94189 expected csum 0xd349d779 mirror 1


    Docker image is corrupt, delete and recreate.

    How would I go about doing this? :)

  10. 6 minutes ago, CHBMB said:

    Well, you're using a release candidate, so probably should report it there.

    What version did you update from?

    I upgraded to the release candidate because I had the problem before on version 6.6.6.
    Upgrading to the release candidate, however, gave me an error message that I didn't have before.

  11. 5 hours ago, CHBMB said:

    Not without more context.  Need to see what you're doing to generate that error message.   To clarify, is it the docker service that won't start?  Or a docker container?  If it's the former, you need to post diagnostics, if it's the latter then some more info on the container etc

    My bad :) I wrote that post a little too fast ;)

    It's the service that won't start


    Diagnostics: nas-diagnostics-20190210-1626.zip

    Image where I got the error-message


  12. I get the following error message


    Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 654
    Couldn't create socket: [111] Connection refused
    Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 826

    Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 654
    Couldn't create socket: [111] Connection refused
    Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 890

    Does anyone know what that means and what I should do about it?
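
    For anyone else hitting this: both warnings boil down to nothing accepting connections on /var/run/docker.sock, i.e. the Docker daemon itself isn't running, so the web UI's PHP client gets errno 111 (Connection refused) back. A quick standard-library sketch of the same check (the path is the usual Docker socket location):

    ```python
    import socket

    def unix_socket_alive(path='/var/run/docker.sock'):
        """Return True if something accepts connections on the unix socket,
        False if the socket file is missing or nothing is listening ([111])."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except (ConnectionRefusedError, FileNotFoundError, PermissionError):
            return False
        finally:
            s.close()

    # On a box showing the warnings above, unix_socket_alive() returns False;
    # once the docker service is back up it returns True.
    ```

    The fix is therefore on the service side (in my case, the corrupt docker image), not in the PHP code that prints the warnings.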


  13. On 12/5/2017 at 2:36 PM, Frank1940 said:

    Apparently there is another round of problems with Windows 10 computers (which have updated recently) having difficulties connecting to an unRAID server using SMB.  You might want to look here for a possible solution.




    There is a possibility that the Samba group may have a fix for this in the near/distant(?) future.

    In 2019 this saved my day