Max

Members
  • Content Count: 145
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Max
  • Rank: Advanced Member

  1. Thanks man, it's been working great ever since I reseated the SATA cable for my cache drive, which I should have done the very first time you told me to 😅. Stupid me.
  2. Okay, I will try that, as you know better. Sorry, I forgot to attach today's diagnostics report; I'm attaching it to this post. The only thing I don't get is, if it's the cable, how was it able to run fine and only crash while it was auto-updating my dockers? unraid-diagnostics-20200117-0007.zip
  3. @Squid hey, something similar happened today as well. Everything was working fine and then boom, all of a sudden my VM was stuck and all my dockers had stopped. Only this time Unraid was reporting that there were updates available for all my dockers, which is weird. But I realized this issue happened yesterday at the same time as today, and my server is scheduled to auto-update all my dockers and plugins at exactly that time, which leads me to believe it might have something to do with the CA Auto Update Applications plugin (there's a syslog check for this after the post list).
  4. If I had not run the Fix Common Problems plugin, I would have said that can't be, as I was still able to access all the data on my cache drive, and under Unraid's Main page all my drives showed active and normal. But since I did run it, I know it had something to do with my cache drive, as FCP reported two errors, both about the cache: 1. Error: my cache was read-only or completely full. 2. Error: Unraid was unable to write to docker.img (we can conclude this one popped up because of the first). My cache wasn't even half full at the time, so I thought maybe Unraid wasn't detecting the drive's capacity properly and rebooted the server. It's now been about 17 hours and 20 minutes since then, and so far everything is working normally, as it should: all my dockers are up and running and my VMs are working too. So I don't know what really happened, but fortunately one thing is for sure: my cache didn't drop dead on me. 😅 (There's a note on checking a read-only cache after the post list.)
  5. Everything was working fine up until about half an hour ago, then all of a sudden my VM was stuck, and I noticed that half the dockers running at the time had also stopped. When I tried launching them again I got execution error code 403, and now when I try to start my VM I get this error: "Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ /etc/libvirt/hooks/qemu 'Windows 10 New' prepare begin -) unexpected exit status 126: libvirt: error : cannot execute binary /etc/libvirt/hooks/qemu: Input/output error". I'm attaching my diagnostics with this post. Please help me figure out how to fix this and how to prevent it from happening again (there's a quick check for this error after the post list). unraid-diagnostics-20200116-0128.zip
  6. So guys, after some more testing it looks like it has something to do with Unraid Nvidia. This is what I did: I uninstalled the Unraid Nvidia build and went back to the stock Unraid build, and after a reboot both my GPUs started posting. GPU-Z showed my GTX 1070 Ti running in x16 mode when I ran the Windows VM through it, and when I ran the same VM through the GTX 750, GPU-Z showed it running in x4 mode. I couldn't test with both of them running a Windows VM at once, since I currently only have one Windows VM installed, but I tried one on the already-installed VM and one on a Windows installation VM, and both were posting at the same time. So it looks like there is something wrong with the Unraid Nvidia plugin. Do we need to do something special or different when using the Unraid Nvidia plugin together with a Windows VM???
  7. No, I'm not using vfio-pci.ids or the intel_iommu option. Do I need them?? (See the example append line after the post list.)
  8. Thanks for the info, I didn't know that about SLI and CrossFire. As for the test, even I wasn't sure whether it would work or not, but I tried it because it works with the iGPU. As I told you guys, I was using the iGPU with the Windows VM and had it set as the primary GPU, so Unraid was posting through the iGPU, and as soon as I ran the Windows VM it would switch to Windows. It would never go back to Unraid, even after shutting down the VM; the only way to return the iGPU to Unraid was to restart the server. My GTX 1070 Ti was being used by Plex, so I thought maybe it would work that way. Since I wasn't sure, I then tried it with both the iGPU and the GTX 1070 Ti, the iGPU for Unraid and the GTX 1070 Ti for the Windows VM, and the result was the same. That's why I'm not sure it's a PCIe lane issue.
  9. I know it doesn't support SLI, but I'm just saying it supports CrossFire, so it should be able to run GPUs in x8 mode, and I don't think AMD GPUs would just magically run in x8 mode if I had them. Or maybe they would. Anyway, I just tried disabling my iGPU and removed my GTX 750, and it's still the same: as soon as I run the VM on it I end up with no signal. So it looks like it's something else entirely.
  10. Well, I just tried selecting x4 in the PCIe slot configuration (PCH) and it's still the same, and I can't even find any setting that would let me run my GTX 1070 Ti in x8 mode. This is weird; my motherboard supports CrossFire, so how are people supposed to use it if one GPU alone takes up all the PCIe lanes? That would also mean I can't even add a PCIe-based NVMe drive. (The link-width check after the post list shows what the slots actually negotiated.)
  11. I'm using the iGPU as my primary display 😅
  12. So if I'm getting it right, then if it works, my GTX 1070 Ti will be running in x8 mode and the GTX 750 in x4 mode, right??
  13. It's an i7-4790K, and the mobo is a Z97-D3H. I know my CPU only supports 16 PCIe lanes, but my GPUs can run at x8; I'm not using them for gaming. I'm only going to use them for GPU transcoding (the GTX 1070 Ti, simply because it can transcode more formats) and for the Windows VM (the GTX 750), which I'll only use for really light work that even an iGPU can easily handle (I can tell, because for the past couple of days it was running through the iGPU, an HD 4600). And I don't think PCIe gen 3 slots can be a bottleneck for either of them while running in x8 mode. Edit: I haven't attached any NVMe drives or PCIe cards, so my PCIe lanes are only occupied by those GPUs.
  14. Hey guys, I have been using a Windows 10 VM for the past couple of days through my iGPU (HD 4600) with the BIOS set to SeaBIOS, and it was working just fine. Today I finally decided that, now that everything is working, I should put my old GTX 750 in it to use with the Windows VM, but my monitor is not getting any signal from it. I even tried my GTX 1070 Ti, which I was using for Plex GPU transcoding, but same result: no signal. So I guess there is something wrong with my XML file; if you guys could take a look at it and point me in the right direction, I would really appreciate it. And A VERY HAPPY NEW YEAR TO YOU ALL!!! Here's the XML:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Windows 10</name>
       <uuid>e3982d45-50d5-0372-603c-67a01be84f7a</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='5'/>
         <vcpupin vcpu='2' cpuset='3'/>
         <vcpupin vcpu='3' cpuset='7'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='2' threads='2'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:6c:d3:d2'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x046d'/>
             <product id='0xc52b'/>
           </source>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x3938'/>
             <product id='0x1031'/>
           </source>
           <address type='usb' bus='0' port='3'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>
  15. @itimpi @testdasi Finally!!! Guys, I finally fixed it, phewww!!! It was a simple case of the BIOS going bonkers: I just reflashed it and now IOMMU shows as enabled. Thanks for your suggestions, guys, and for at least trying to help me out. (There's an IOMMU sanity check after the post list.)
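
A note on post 3: the cleanest way to tie the crash to the auto-update schedule is to compare syslog timestamps against the plugin's cron entry. The paths below are assumptions about where Unraid keeps plugin cron files; adjust them for your install.

    # Show cron entries installed by plugins (location is an assumption)
    cat /boot/config/plugins/*/*.cron 2>/dev/null

    # Pull syslog lines from around the scheduled update window (here, just after midnight on Jan 17)
    grep 'Jan 17 00:' /var/log/syslog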
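
A note on post 4: a btrfs cache that hits device errors remounts itself read-only, which would produce exactly both FCP warnings even though the drive still reads fine. A minimal sketch of how to check, assuming a btrfs cache mounted at /mnt/cache (replace sdX with the real cache device):

    # Did the kernel force the filesystem read-only?
    grep -i 'read-only' /var/log/syslog

    # Per-device error counters; non-zero values point at the drive or its cabling
    btrfs dev stats /mnt/cache

    # Drive health, to rule out the disk itself
    smartctl -a /dev/sdX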
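
A note on the error in post 5: exit status 126 means the hook script exists but could not be executed, and the "Input/output error" says the read itself failed. On Unraid, /etc/libvirt is (as far as I know) backed by the libvirt image on the cache pool, so this points back at the cache device rather than at the script. A quick sketch:

    # Is the hook present and executable?
    ls -l /etc/libvirt/hooks/qemu

    # A read failure here reproduces the same I/O error
    head -n 5 /etc/libvirt/hooks/qemu

    # Kernel-level I/O errors for the underlying device
    dmesg | grep -i 'i/o error'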
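
A note on post 7: neither flag is mandatory, but on an Intel board VT-d/IOMMU must be active for passthrough, and binding the GPU to vfio-pci at boot stops the host from grabbing it first. A sketch of what the append line in /boot/syslinux/syslinux.cfg can look like; the 10de:1b82,10de:10f0 IDs are examples for a GTX 1070 Ti and its HDMI audio function, so confirm your own with lspci:

    # Example append line (IDs are illustrative; confirm them first):
    append intel_iommu=on vfio-pci.ids=10de:1b82,10de:10f0 initrd=/bzroot

    # Find the exact vendor:device IDs for your card and its audio function
    lspci -nn | grep -i nvidia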
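
A note on the lane discussion in posts 9, 10 and 13: the width a slot actually negotiated can be read straight from lspci instead of guessed from BIOS menus. The 01:00.0 address below is an example; use whatever address lspci reports for your GPU:

    # List GPUs with their PCI addresses
    lspci | grep -i vga

    # Show the negotiated link, e.g. "LnkSta: Speed 8GT/s, Width x8"
    lspci -vv -s 01:00.0 | grep -i lnksta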
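
A note on post 15: after a BIOS reflash it's worth confirming the IOMMU really came back and that the GPU sits in its own group before passing it through. The usual sketch:

    # Non-empty output here means the IOMMU is active
    dmesg | grep -i -e DMAR -e IOMMU

    # Print each IOMMU group with the devices it contains
    for d in /sys/kernel/iommu_groups/*/devices/*; do
      g=$(basename "$(dirname "$(dirname "$d")")")   # group number
      echo -n "IOMMU group $g: "
      lspci -nns "${d##*/}"
    done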