Max

Members
  • Posts: 262
Everything posted by Max

  1. Oyieee, that's weird. From what I've read, ASMedia controllers need some extra work even on a bare-metal hackintosh; @ghost82 even said he ended up getting an Inateck USB card because he couldn't make his ASMedia controller work. Supposedly controllers wired directly to the chipset are easier to work with than ASMedia ones, yet my Z97-D3H's USB ports are all connected to the chipset and they still don't work properly (they work, but at such a low speed that they might as well not work at all).
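     (For reference, a minimal way to see which USB controllers a board actually exposes and which IOMMU group each one sits in, since that largely decides how painful passthrough will be. This is a hedged sketch assuming a stock Unraid shell with lspci available; the grep filter is only illustrative.)

     # List every PCI device with its IOMMU group, then keep only the USB controllers.
     for dev in /sys/kernel/iommu_groups/*/devices/*; do
         group=${dev#/sys/kernel/iommu_groups/}; group=${group%%/*}
         printf 'IOMMU group %s: ' "$group"
         lspci -nns "${dev##*/}"
     done | grep -i usb

     (ASMedia controllers show up with an ASMedia vendor ID and chipset ones as Intel; a controller that sits alone in its group is the easy passthrough case.)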
  2. Did it work OOB, or did you use some special kext or SSDT of some kind to make it work?
  3. Has anyone ever been able to make their onboard USB controller work with a macOS VM? I never had to do anything to get my USB ports working when I used this machine as a bare-metal hackintosh, and as far as I know everyone here either uses a USB card or the hotplug plugin to get USB working with macOS.
  4. Ohh, thanks, that worked. Now I only have to figure out USB controller passthrough. I don't really care about HDMI audio as long as onboard audio is working.
  5. Yeah, I already figured that out through the Dortania OpenCore guide, but I didn't quite understand the data part. It says the value is in hex/data, and I don't know what to do with that.
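     (On the hex/data point: a data-type property in config.plist is just raw bytes, stored base64-encoded in the plist XML. As an illustration, assuming the property in question is a 4-byte little-endian layout ID of 1, matching alcid=1, the bytes 01 00 00 00 can be encoded from any shell:)

     # Encode the 4-byte little-endian value 1 the way a plist <data> field stores it.
     printf '\x01\x00\x00\x00' | base64    # prints AQAAAA==

     (Plist editors such as ProperTree accept the raw hex, 01000000, and do this encoding for you.)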
  6. Yeah, that worked. I knew vmxnet3 had a bad upload-speed problem, but I never expected it to be that bad.
  7. Nah, that didn't work. Yeah, I was manually hot-plugging. Why, you ask? Because I'm an idiot; I don't know why I didn't think of that 😅. After your suggestion I tried booting with alcid=1 and that worked, so onboard audio is working now. I just have to figure out how to make it permanent, i.e. without using it as a boot arg. One other problem I just noticed is that my upload speed is really pathetic: I have a 100 Mbps up/down connection, but I'm only getting 1-2 Mbps up, while down is fine at the full 100.
  8. Okay, I have attached my EFI folder and IOReg. For audio, I can't check the other ports right now: my 1070 Ti doesn't have a second HDMI port, and as for DisplayPort, I don't have a DisplayPort cable and can't even go out and get one because of the lockdown situation here. But even onboard audio is not being detected. And yeah, I thought about getting a USB card too, but the problem is PCIe lanes; I don't have enough of them, and soon I'll be getting an LSI card to attach more HDDs, so a USB card is not an option for me. If USB passthrough doesn't work, maybe I'll just use that plugin, but the problem with the plugin is that after passing my Logitech keyboard through it I have to restart the VM to make it work. I don't know why that happens, or why it only happens with that keyboard; other USB devices don't require a restart. EFI.zip ioreg.ioreg
  9. Guys, I need some help setting up a High Sierra VM on my server. I did the installation using Macinabox, then replaced the Clover files with OpenCore files I downloaded from @Leoyzen's repo and set up my XML using his guide. After installing the Nvidia web drivers, the only two things still not working are USB and audio. Not even HDMI audio is being detected, even though I have put the audio device on the same bus and slot (and of course a different function). As for USB: if I use the ports that are passed through, they work, but they are so slow that I literally have to hold a key for more than a second before it registers a key press, and the mouse cursor just jumps around. If I use the USB hotplug plugin instead, they work flawlessly (which is weird, or maybe not; maybe they work differently, I don't know).

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>MacinaboxHighSierra</name>
       <uuid>9af554f4-f63a-40ad-b744-394a0ca0cb2e</uuid>
       <description>MacOS High Sierra</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="MacOS" icon="/mnt/user/domains/MacinaboxHighSierra/icon/highsierra.png" os="HighSierra"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='5'/>
         <vcpupin vcpu='2' cpuset='3'/>
         <vcpupin vcpu='3' cpuset='7'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxHighSierra/ovmf/OVMF_CODE.fd</loader>
         <nvram>/mnt/user/domains/MacinaboxHighSierra/ovmf/OVMF_VARS.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='2' threads='2'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxHighSierra/Clover.qcow2'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxHighSierra/HighSierra-install.img'/>
           <target dev='hdd' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/MacinaboxHighSierra/macos_disk.img'/>
           <target dev='hde' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='4'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0x15'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0x16'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:5a:ad:8f'/>
           <source bridge='br0'/>
           <model type='vmxnet3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <rom file='/mnt/user/isos/vbios/Zotac.GTX1070Ti.8192.171016_1.rom'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <qemu:commandline>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='host,vendor=GenuineIntel,+hypervisor,+invtsc,kvm=on,+fma,+avx,+avx2,+aes,+ssse3,+sse4_2,+popcnt,+sse4a,+bmi1,+bmi2'/>
       </qemu:commandline>
     </domain>

     If anyone knows how to fix this, please help; I would really appreciate it.
  10. Okay, I fixed turbo getting disabled after every boot: Intel Turbo/AMD Performance Boost was set to disabled in the Tips and Tweaks plugin, and switching it to enabled fixed it. But my processor is still always running at full speed, between 4.3 and 4.4 GHz, even at idle.
  11. Intel SpeedStep was set to auto, so I tried setting it to enabled, but it's still not idling, so changing it didn't help. One more problem: turbo keeps getting disabled after rebooting, so I have to run "echo "0" > /sys/devices/system/cpu/intel_pstate/no_turbo" to enable it after every reboot. Is there any other command to permanently enable it? As for the Tips and Tweaks plugin, I already have it installed on the server.
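     (One way to make that setting survive reboots, assuming a stock Unraid install where /boot/config/go is executed on every boot; the Tips and Tweaks plugin can manage the same thing, so treat this as a sketch rather than the required fix:)

     # Re-apply the no_turbo tweak automatically at boot by appending it to Unraid's startup script.
     echo 'echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo' >> /boot/config/go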
  12. Nope, running stock. Forgot to mention my processor: it's an i7-4790K.
  13. Hey guys, today, just out of curiosity, I ran "grep MHz /proc/cpuinfo" and found that all my CPU cores are running at 4000 MHz all the time, even when no Dockers and no VMs are running. After some googling I tried "cat /sys/devices/system/cpu/intel_pstate/no_turbo", which returned "1", so I ran "echo "0" > /sys/devices/system/cpu/intel_pstate/no_turbo" to enable turbo. Now it turbos, but it's always running at 4400 MHz. So my problem is that it's not clocking down; it's not using idle clock speeds even when it's idle.
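     (When cores never clock down, it is worth checking which frequency driver and governor are actually in control. A hedged check, using the sysfs paths intel_pstate normally exposes:)

     # Show the active cpufreq driver and each core's governor; with intel_pstate,
     # 'powersave' lets cores drop to idle clocks while 'performance' keeps them pinned high.
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
     cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor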
  14. Would it be possible to add CPU clock speed support?? 😅
  15. @Bitz69 @j0nnymoe @CHBMB Now, I don't know how or why or what's different, but it's working now. I haven't made any changes to the network or anything, and no, I don't have 10 Gbit adapters and I don't run Pi-hole. Maybe resetting my fiber modem and Wi-Fi router is finally showing its effects.
  16. What should we do to fix this, then? I even tried resetting my fiber modem as well as my Wi-Fi router. Edit: one thing is different now; sometimes when I open the Unraid Nvidia page I get this error (repeated many times on the page): Warning: parse_ini_file(/tmp/mediabuild/description): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 60
  17. @Bitz69 @j0nnymoe Yeah, I've also been getting this error for the past 4-5 days, but I read an older post where someone reported the same error and @CHBMB replied that it had something to do with the DigitalOcean servers or Spaces, so I was waiting it out.
  18. Thanks man, it's been working great ever since I reseated the SATA cable for the cache drive, which I should have done the very first time you told me to 😅. Stupid me.
  19. Okay, I will try that; you know better. I'm sorry, I forgot to attach today's diagnostics report, so I'm attaching it to this post. The only thing I don't get is, if it's the cable, how was it able to run fine and only crash while it was auto-updating my Dockers? unraid-diagnostics-20200117-0007.zip
  20. @Squid Hey, something similar happened today as well. Everything was working fine, and boom, all of a sudden my VM was stuck and all my Dockers stopped. Only this time Unraid was reporting that there were updates available for all my Dockers, which is weird. I think this issue happened yesterday at the same time as today, and my server is scheduled to auto-update all Dockers and plugins at that time, which leads me to believe it might have something to do with the CA Auto Update Applications plugin.
  21. If I hadn't run the Fix Common Problems plugin, I would have said that can't be it, since I was still able to access all the data on my cache drive and, under Unraid's Main page, all my drives were active and normal. But since I did run it, I know it had something to do with my cache drive, because FCP reported two errors and both were about the cache drive: 1. my cache was read-only or completely full; 2. Unraid was unable to write to docker.img (we can conclude this second error popped up because of the first one). My cache wasn't even half full at the time, so I thought maybe Unraid wasn't detecting the cache drive's capacity properly and rebooted the server. It's been about 17 hours and 20 minutes since then, and so far everything is working normally, as it should: all my Dockers are up and running and my VMs are working too. So I don't know what really happened, but fortunately one thing is for sure: my cache didn't drop dead on me. 😅
  22. Everything was working fine up until about half an hour ago, then all of a sudden my VM got stuck and I noticed that half the Dockers that were running also stopped. When I tried launching them again I got execution error code 403, and now when I try to start my VM I get this error: "Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ /etc/libvirt/hooks/qemu 'Windows 10 New' prepare begin -) unexpected exit status 126: libvirt: error : cannot execute binary /etc/libvirt/hooks/qemu: Input/output error". I'm attaching my diagnostics with this post. Please help me figure out how to fix this and how I can prevent it from happening again. unraid-diagnostics-20200116-0128.zip
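     (An Input/output error on a file that normally executes points more at the underlying storage than at libvirt itself; on Unraid the libvirt image usually lives on the cache pool. A hedged first check, assuming a btrfs cache pool mounted at /mnt/cache:)

     # Look for recent I/O errors reported by the kernel, then check btrfs error counters on the cache pool.
     dmesg | grep -i 'i/o error' | tail -n 20
     btrfs device stats /mnt/cache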
  23. So guys, after some more testing it looks like it has something to do with Unraid Nvidia. Here's what I did: I uninstalled the Unraid Nvidia build and went back to stock Unraid, and after a reboot both my GPUs started posting. GPU-Z showed my GTX 1070 Ti running in x16 mode when I ran the Windows VM on it, and when I ran the same VM on the GTX 750, GPU-Z showed it running in x4 mode. I couldn't test with both of them running a Windows VM at the same time, since I currently only have one Windows VM installed, but I tried one on the already-installed VM and one on a Windows installation VM, and both posted at the same time. So it looks like there is something wrong with the Unraid Nvidia plugin. Do we need to do anything special or different when using the Unraid Nvidia plugin together with a Windows VM???
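     (For what it's worth, the negotiated PCIe link width can also be read from the Unraid host, so GPU-Z inside the VM isn't the only option. A sketch, with the 01:00.0 address taken from the XML posted earlier and nvidia-smi assumed present on the Unraid Nvidia build:)

     # Current vs. maximum link width as the host sees it.
     lspci -vv -s 01:00.0 | grep -iE 'lnksta|lnkcap'
     # The same information via the Nvidia driver, where available.
     nvidia-smi --query-gpu=name,pcie.link.width.current,pcie.link.width.max --format=csv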
  24. No, I'm not using vfio-pci.ids or the intel_iommu parameter. Do I need them??
  25. Thanks for the info; I didn't know that about SLI and CrossFire. As for the test, even I wasn't sure whether it would work, but I tried it because it works with the iGPU. As I told you guys, I was using the iGPU with the Windows VM: I set the primary GPU to the iGPU, so Unraid was posting through it, and as soon as I ran the Windows VM it would switch to Windows and never go back to Unraid, even after shutting the VM down; the only way to return the iGPU to Unraid was to restart the server. My GTX 1070 Ti was being used by Plex, so I thought maybe it would work the same way, but since I wasn't sure I then tried it with both the iGPU and the GTX 1070 Ti (iGPU for Unraid, GTX 1070 Ti for the Windows VM) and the result was the same. That's why I'm not sure it's a PCIe lane issue.