
Posts posted by Max

  1. @Squid hey, something similar happened today as well. Everything was working fine and then, all of a sudden, my VM was stuck and all my Dockers had stopped. Only this time Unraid was reporting that updates were available for all my Dockers, which is weird. I think this issue happened yesterday at the same time as today, and my server is scheduled to auto-update all my Dockers and plugins at that time.

    Which leads me to believe it might have something to do with the CA Auto Update Applications plugin.

  2. 13 hours ago, Squid said:

    It appears that your cache drive dropped offline (or dropped dead), which is resulting in havoc. Reseat the cabling to it.

    If I had not run the Fix Common Problems plugin, I would have said that can't be the case, since I was still able to access all the data that was on my cache drive, and under Unraid's Main page all my drives showed active and normal. But once I ran it, I knew it had something to do with my cache drive, as the FCP plugin reported two errors, both about my cache drive:

    1. Error -- my cache was read-only or completely full

    2. Error -- Unraid was unable to write to docker.img (we can conclude that this error popped up because of the first one)

    My cache wasn't even half full at the time, so I thought that maybe Unraid wasn't detecting my cache drive's capacity properly. I rebooted my server, and it's been about 17 hours and 20 minutes since then; so far everything is working normally, as it should. All my Dockers are up and running, and my VMs are working again too.

    So I don't know what really happened.

    Fortunately, one thing is for sure: my cache didn't drop dead on me.😅
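    For anyone who hits the same thing: the two FCP complaints can be sanity-checked from the command line before rebooting. A small sketch (the /mnt/cache mount point is an assumption; adjust it to your pool's name):

```shell
#!/bin/sh
# Sketch: check whether a mount is nearly full or has gone read-only.
# /mnt/cache is the usual Unraid cache mount point (an assumption here).

cache_used_pct() {
  # print the used-space percentage (number only) for the fs containing $1
  df -P "$1" | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

cache_is_readonly() {
  # exit 0 if the mount at $1 is flagged read-only
  mount | grep " on $1 " | grep -q '(ro'
}

# usage on the server:
#   cache_used_pct /mnt/cache
#   cache_is_readonly /mnt/cache && echo "cache is read-only!"
```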

  3. Everything was working fine up until about half an hour ago. Then, all of a sudden, my VM was stuck, and I noticed that half of the Dockers that were running at the time had also stopped. When I tried launching them again I got execution error code 403, and now when I try to start my VM I get this error -- "Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ /etc/libvirt/hooks/qemu 'Windows 10 New' prepare begin -) unexpected exit status 126: libvirt: error : cannot execute binary /etc/libvirt/hooks/qemu: Input/output error"

    I'm attaching my diagnostics with this post.

    Please help me figure out how to fix this, and how I can prevent it from happening again in the future.

    unraid-diagnostics-20200116-0128.zip
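    (If it helps anyone else hitting the same message: the "Input/output error" means the kernel failed to read the hook file itself, which usually points at the filesystem underneath rather than at libvirt. A quick check that could be run before rebooting; a sketch, with the default hook path taken from the error text:)

```shell
#!/bin/sh
# Sketch: verify a libvirt hook script can actually be read and executed.
# An I/O error reading it points at the filesystem underneath, not libvirt.

check_hook() {
  hook="${1:-/etc/libvirt/hooks/qemu}"   # default path from the error above
  if [ ! -e "$hook" ]; then
    echo "missing"
  elif [ ! -x "$hook" ]; then
    echo "not executable"
  elif ! head -c 64 "$hook" >/dev/null 2>&1; then
    echo "unreadable (I/O error?)"
  else
    echo "ok"
  fi
}

# usage on the server:  check_hook /etc/libvirt/hooks/qemu
```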

  4. So guys, after some more testing, it looks like it has something to do with Unraid Nvidia. This is what I did: I uninstalled the Unraid Nvidia build and installed the stock Unraid build, and after a reboot both my GPUs started posting. GPU-Z showed my GTX 1070 Ti running in x16 mode when I ran the Windows VM through it, and when I ran the same VM through the GTX 750, GPU-Z showed it running in x4 mode. I couldn't test with both of them running a Windows VM at the same time, as I currently only have one Windows VM installed, but I tried one with the already-installed VM and one with a Windows installation VM, and both were posting at the same time.

    So it looks like there is something wrong with the Unraid Nvidia plugin. Do we need to do something special or different when using the Unraid Nvidia plugin with a Windows VM?

  5. 30 minutes ago, JWMutant said:

    Just out of interest, did you isolate the 1070 with the vfio-pci.ids= command before using it as the GPU for your VM?

    No, I'm not using the vfio-pci.ids or intel_iommu command. Do I need them?
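    (For anyone else reading: if you do end up needing it, vfio-pci.ids goes on the append line of the syslinux config on the flash drive. A sketch of what that looks like; the 10de:xxxx IDs are placeholders, get the real ones with `lspci -nn | grep -i nvidia`:)

```
# /boot/syslinux/syslinux.cfg (relevant section only)
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:xxxx,10de:xxxx initrd=/bzroot
```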

  6. 53 minutes ago, JWMutant said:

    Ok, I'll tackle this one thing at a time.

    The single biggest difference between SLI and CrossFire is that SLI has a minimum requirement on the board: two slots that are able to run at x8 speeds. CrossFire doesn't have that requirement; it is more than happy to run one card at x8 and one card at x4. The fact that you can't set the primary slot to x8 doesn't mean it will not automatically reduce to x8 speeds when it detects a second AMD card. There would be nothing stopping you from running SLI on a board that doesn't support it "if" it had two x8-minimum slots.

     

    As for the test you just performed: you can't use the 1070 for Unraid and at the same time use that card with a VM. You would need to set your VM to use VNC.

    Thanks for the info; I didn't know that about SLI and CrossFire.

    As for the test, even I wasn't sure whether it would work, but I tried it because it works with the iGPU. As I told you guys, I was using the iGPU with the Windows VM, and I had set the primary GPU to the iGPU, so Unraid was posting through it. As soon as I ran the Windows VM, the iGPU would switch to Windows and never go back to Unraid, even after shutting down the VM; the only way to return it to Unraid was to restart my server. And my GTX 1070 Ti was being used by Plex at the time, so I thought maybe it would work the same way. Since I wasn't sure, I then tried it with both the iGPU and the GTX 1070 Ti (iGPU for Unraid, GTX 1070 Ti for the Windows VM), and the result was the same.

    That's why I'm not sure it's a PCIe lane issue.

  7. 8 minutes ago, JWMutant said:

    If you're dead set on using that CPU with two cards, then this is basically what you need.

    But if you're going to throw that kind of money at a board, you're better off selling the MB, CPU and RAM and getting a small upgrade.

    I know it doesn't support SLI, but I'm just saying that it supports CrossFire, so it should be able to run GPUs in x8 mode. And I don't think that if I had AMD GPUs they would just magically run in x8 mode, or maybe they would.

    Anyway, I just tried disabling my iGPU and removing my GTX 750, and it's still the same: as soon as I run the VM on it, I get no signal. So it looks like it's something else entirely.

  8. 12 minutes ago, JWMutant said:

    You will still need to set the GPU in the x16 slot to run at x8. It's been a good many years since I played around with BIOS settings on an oldish board, so I'm not sure if you can set it to run at x8.

    Well, I just tried selecting x4 in the PCIe slot configuration (PCH) and it's still the same, and I can't even find any setting that would let me run my GTX 1070 Ti in x8 mode.

    This is weird; my motherboard supports CrossFire. How are people supposed to use it if one GPU alone takes up all the PCIe lanes? That would also mean I can't even add a PCIe-based NVMe drive.
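    One thing that can settle the lane question: Linux reports the negotiated link width directly, so it doesn't have to be guessed from the BIOS menus. A sketch (the 01:00.0 address is a placeholder for the GPU; find the real one with `lspci | grep VGA`):

```shell
#!/bin/sh
# Sketch: pull the negotiated PCIe link width out of lspci output.

link_width() {
  # print the first "xN" width token found on stdin
  grep -o 'Width x[0-9]*' | head -n 1 | sed 's/Width //'
}

# usage on the server (01:00.0 is a placeholder for the GPU's address):
#   lspci -vv -s 01:00.0 | grep LnkSta | link_width
```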

  9. 11 minutes ago, JWMutant said:

    If you can set, and that's a big "if", your primary display to run at x8, then yes, you might be in luck. However, the second GPU will run at 70% of its potential at best.

    I'm using the iGPU as my primary display.😅

  10. 11 minutes ago, JWMutant said:

    However, saying that, you could try what the specs say and be in luck.

     

     

    So if I'm getting it right, then if it works, my GTX 1070 Ti will be running in x8 mode and my GTX 750 will be running in x4 mode, right?

  11. 16 minutes ago, JWMutant said:

    Not enough pci lanes available?

    What MB and CPU are you using?

    It's an i7 4790K, and the mobo is a Gigabyte Z97-D3H.

    I know my CPU only supports 16 PCIe lanes, but my GPUs can run at x8; I'm not using them for gaming. I'm only going to use them for GPU transcoding (the GTX 1070 Ti, simply because it can transcode more formats) and for a Windows VM (the GTX 750). I'm only going to use the VM for really light work, which even an iGPU can easily handle (I can tell, because for the past couple of days it was running through the iGPU, an HD 4600).

    And I don't think PCIe gen 3 slots can be a bottleneck for either of them while running in x8 mode.

    Edit: I haven't attached any NVMe drives or PCIe cards, so my PCIe lanes are only occupied by those GPUs.

  12. Hey guys,

    I have been using a Windows 10 VM for the past couple of days through my iGPU (HD 4600) with the BIOS set to SeaBIOS, and it was working just fine. Today I finally decided that, now that everything is working, I should put my old GTX 750 in to use with my Windows VM, but my monitor is not getting any signal from it. I even tried my GTX 1070 Ti, which I was using for Plex GPU transcoding, but got the same result: no signal. So I guess there is something wrong with my XML file; if you guys could take a look at it and guide me in the right direction, I would really appreciate it.

    And A VERY HAPPY NEW YEAR TO YOU ALL!!!

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Windows 10</name>
      <uuid>e3982d45-50d5-0372-603c-67a01be84f7a</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='5'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='7'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Windows.iso'/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='ide' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:6c:d3:d2'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x046d'/>
            <product id='0xc52b'/>
          </source>
          <address type='usb' bus='0' port='2'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x3938'/>
            <product id='0x1031'/>
          </source>
          <address type='usb' bus='0' port='3'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

  13. 22 minutes ago, testdasi said:

    Changing IOMMU is done in the BIOS. Unraid doesn't have access to your mobo BIOS to change it.

     

    If you are 100% sure it is on in your BIOS (Enable / On is different from "Auto". VT-d is different from VT-x) then check your syslinux to see if you might have disabled it in the past manually.

    As you can see in this pic, both VT-x and VT-d are enabled. And could you explain the steps for checking syslinux?

    P_20191217_184603.jpg
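    (In the meantime, here is roughly how the syslinux side can be checked from a terminal; a sketch, with /boot/syslinux/syslinux.cfg being the stock Unraid location for the config:)

```shell
#!/bin/sh
# Sketch: two checks for the "IOMMU shows disabled" question.

iommu_override() {
  # print any intel_iommu=... token from syslinux config text on stdin
  grep -o 'intel_iommu=[^ ]*' | head -n 1
}

# usage on the server:
#   dmesg | grep -i -e DMAR -e IOMMU              # did VT-d come up at boot?
#   iommu_override < /boot/syslinux/syslinux.cfg  # was it overridden manually?
```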

  14. Okay, I noticed another strange thing now. As I was adding cache drives to my server, I found that the VM manager is not disabling (I saw in SpaceInvader One's video that we have to disable Docker and the VM manager before starting the mover, in order to transfer the docker and libvirt image files to the cache drive).

    Nothing happens when I select No from the dropdown menu and click Apply.

    I have tried different browsers on different devices; the same thing happens on all of them.

  15. So guys, recently I bought a new laptop, and ever since, I have been thinking about turning my gaming PC into my Unraid NAS (I already have an Unraid NAS; I'm just upgrading it to my gaming PC).

    Specs of my gaming PC:

    Mobo: Gigabyte Z97-D3H

    CPU: i7 4790K

    RAM: 16 GB

    GPU: GTX 1070 Ti

    Everything went fine; it's just that when I checked the system info, it is still showing IOMMU as disabled. That was fine for my old NAS, as it was running off some crappy H61 board with a G2020 and 4 gigs of RAM, but I don't know why it's still showing disabled now. Are there any settings in Unraid itself to enable it? Because as far as the BIOS goes, it is enabled; I have double-checked.

    Please help me figure this out.

  16. So guys, for a long time I have been thinking about giving this a try, so today I started using it, but I'm having some issues with Nvidia HW transcoding. Whenever it transcodes through HW transcoding (NVENC), I end up with a "no compatible stream available" error. If I disable it, it transcodes through my CPU just fine. Could you guys please help me with this?
