Everything posted by iphillips77

  1. Here are some tips for anyone having trouble getting HDMI audio to work with Nvidia cards, particularly anyone who, like me, had it working under Sierra but had it break under High Sierra.

Start by downloading HDMIAudio.kext 1.1 from here: https://www.dropbox.com/s/9xenemmfwa1ee7b/HDMIAudio-1.1.dmg?dl=0 You'll also need IORegistryExplorer from here: https://mac.softpedia.com/get/System-Utilities/IORegistryExplorer.shtml

Fire up IORegistryExplorer. In the search field, look for HDAU; that should narrow things down to just your audio card. Take a look at the properties on the right: you'll see vendor-id and device-id. Vendor-id should be <de 10 00 00> for any Nvidia card. Your device-id will be <xx xx 00 00>, where xxxx is your ID. Mine was <ba 0f 00 00>, so BA0F is my device-id; yours is probably different. Make a note of it.

Mount your EFI partition, and put HDMIAudio.kext in EFI/CLOVER/kexts/Other.

Next, fire up Clover Configurator and load your config.plist. In the "Kernel and Kext Patches" section, create a new entry under KextsToPatch. Use com.apple.driver.AppleHDAController as the kext name. Put DE101A0E in the "Find" column and DE10XXXX in the "Replace" column, replacing XXXX with your device-id; I put DE10BA0F there. Reboot, and cross your fingers.

This patches AppleHDAController on the fly, replacing one of the valid IDs (DE101A0E) with your card's ID. That was enough for me to get HDMIAudio.kext working as well as it did in 10.12 Sierra. Not sure if this works under Mojave; I'm waiting for new web drivers to be released before I upgrade further. However, I don't believe this kext edit would cause any problems if done properly under any macOS version. Good luck!
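For reference, this is roughly what the finished entry ends up looking like in the raw config.plist. A sketch only -- Clover Configurator writes this for you -- and the <data> values here are just the base64 encodings of the hex bytes DE101A0E (find) and DE10BA0F (replace, my device-id; yours will differ):

    <key>KernelAndKextPatches</key>
    <dict>
        <key>KextsToPatch</key>
        <array>
            <dict>
                <key>Comment</key>
                <string>Spoof HDMI audio device-id (BA0F in my case)</string>
                <key>Name</key>
                <string>com.apple.driver.AppleHDAController</string>
                <key>Find</key>
                <data>3hAaDg==</data>
                <key>Replace</key>
                <data>3hC6Dw==</data>
            </dict>
        </array>
    </dict>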
  2. Just in case anyone else has the same problem in the future -- and since I'm guessing I'm not the only person who keeps their server tucked away in the quiet of the basement, it wouldn't surprise me -- I've got it sorted. It was the HDMI cable all along.

I ended up picking up a "Luxe Series" active HDMI cable from Monoprice. I'm currently going from Mini DisplayPort on the Radeon R9 280, through a passive adapter to full-sized DisplayPort, through the Club3D DisplayPort to HDMI adapter, up through the floor and wall to my wall-mounted television. The cable is CL3 fire rated for in-wall use. So much for "all HDMI cables are the same"!

So far it seems solid: 4k@60Hz, and what appears to be 4:4:4. Not 100% sure, because it turns out this is an RGBW panel, but even with the extra stupid subpixel it's so much better than the 1080p display I was using before! Holding down Option while clicking "Scaled" in the Displays preference pane brings up all the scaled resolutions I could ever want (and some I don't), all at 60Hz.

Now if I could only get my sound to work. Off to the next problem! Thanks for your help, everyone.
  3. Thanks an awful lot for the suggestions, I'm getting closer. After some tweaking, I noticed that even though I was using the DisplayPort output, it was showing up as 'HDMI or DVI' in System Report. A little tinkering with the stock macOS framebuffer definitions and I've made some progress: I can now get 4K at 60Hz... for a few seconds. Then the screen blanks out and I get "snow". Not sure if it's actual snow, because that makes no sense, or something the TV generates to fake snow on signal loss. I'm using a fairly long HDMI cable, so that may be the problem.

Another issue I'm seeing: if I use a "scaled" display, the option for 60Hz vanishes, except for 1920x1080. I can get that in Retina at 60Hz, as well as unscaled 4K. Is this normal behaviour?
  4. First of all, I want to express my appreciation to Gridrunner et al for all the knowledge and experience shared here in the forums and elsewhere. I'm currently running two so-far rock-stable Sierra VMs: one toiling away headless running Indigo (my home automation server) and various other "important" things, and this one, which I'm using as my daily driver.

My daily, however, needs a bit of tweaking. I've got a pair of graphics cards in my server: a Radeon R9 280 and an Nvidia GTX 760. Both seem reasonably comparable in power, and both work pretty well when passed through to the Sierra VM. I do light gaming (console emulation mostly) on a Windows VM in the other room; either card is fine for that purpose, so I've got either at my disposal for the Mac.

I'm running a 43" Insignia 4K television as a monitor. It supports 4k@60Hz with 4:4:4 colour space, connected to the server via HDMI cable. On either card, the Mac displays 4K just fine, but I can't get a refresh rate higher than 30Hz. On either card. I've tried the HDMI ports on the cards, as well as an active DP to HDMI adapter (this one from Club 3D), which many people report success with, at least on genuine Apple machines.

I know this is an issue I'm likely going to need the help of the Hackintosh community for, but it has raised some questions regarding my Clover setup. I can't get the VM to boot successfully with a system definition any newer than MacPro2,1 -- pretty old -- and I'm having to specify a Penryn processor in the VM's XML definition. Nosing around in this thread makes me wonder if that's because of an issue with Clover that might since have been corrected.

Anyway, I'm a bit out of my depth here. Does anyone have any insights they might be able to share as to how to configure my system so my hardware appears as modern as it actually is, so I can eliminate that as a possible reason why 4k@60Hz screen modes might not be available? Thanks!
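For context, the CPU override I mean is the libvirt <cpu> element. A sketch -- same shape as the core2duo element in the El Capitan XML further down this page, just with the Penryn model name:

    <cpu mode='custom' match='exact'>
      <model fallback='allow'>Penryn</model>
    </cpu>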
  5. It's more likely a problem with Nvidia's utility. I had the same issue and was chasing my tail a bit. Try downloading lspci for Windows and see what it tells you. Nvidia's panel told me 1x no matter what I did, but lspci (and GPU-Z, I think?) showed the correct number of lanes.
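If you want to compare against the host's view, the negotiated link width shows up in lspci's verbose output. For example (07:00.0 is my GPU's bus address; substitute your own):

    lspci -vv -s 07:00.0 | grep -E 'LnkCap|LnkSta'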
  6. New weirdness. I thought I'd gotten things pretty much sorted out. I was watching a video in Kodi on my Windows VM in the living room, and went over to transcode a video on the Mac VM to save to my ancient iPad. As soon as I started the transcode and the Mac's CPU meter approached 100%, the Windows VM started to sputter and lag. I've given the Windows VM cores 0-3, the Mac VM uses 4-11, and unraid starts with isolcpus=0-7 in syslinux. Why would the Mac VM kill the Windows VM once it starts running something CPU-intensive like a video transcode? It shouldn't be touching those cores at all.
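One way to sanity-check where each VM's threads are actually landing (generic procps, run from the unraid console; the psr column is the host CPU each thread last ran on):

    ps -eLo pid,lwp,psr,comm | grep qemu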
  7. Have you tried swapping the two cards? Not all cards support PCI reset, which basically means they'll work the first time, but that's it. I had that problem with a USB card I was trying to use: I had to hard reboot unraid every time I needed to restart the VM. Try swapping them -- boot unraid off the 430 and pass the 210 to OpenELEC -- and see if you have any luck.
  8. Yeah, I figured they've been pretty busy. I'm going to send them an email momentarily just to have them give this thread a glance; I think there are some real issues here with CPU frequency governors and core assignments that could, at the very least, use some clarification.

I still haven't run latency tests to determine exactly what the pairings are, although from what I've read, Intel convention seems to be all physical cores followed by hyperthreaded cores. I.e., for a 4-core hyperthreaded CPU, the pairings are (0,4) (1,5) (2,6) (3,7); a 2-core would be (0,2) (1,3); a 6-core (0,6) (1,7), etc. (A quick way to confirm is sketched below.) But I haven't run the tests because I don't know what to do with the information. If I do an isolcpus=0-3,6-9 in syslinux, to give unraid core pairs (4,10) and (5,11), things don't act the way I think they should when I start setting governor profiles. Some cores won't even report their frequency with /usr/bin/cpufreq-info unless I give isolcpus totally sequential cores (isolcpus=0-7 in my case now).

So, in addition to CPU core pairs, I'm very curious as to what happens when we isolate those cores from host operations. How isolated are these cores? When I give unraid (4,10) and (5,11) to play with, and I go to the webgui dashboard, do cores (0,1),(2,3) correspond to (4,10),(5,11)? And in the same instance, if I do a cpufreq-info, does isolating cpus change that numbering assignment or what? And does any of this really make any difference? Haha
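For anyone who wants to check the actual pairings on their own CPU without latency tests, the kernel exposes them in sysfs; this should work from the unraid console:

    # print each cpu's hyperthread sibling pair as the kernel sees it
    for c in /sys/devices/system/cpu/cpu[0-9]*; do
        echo "$c: $(cat $c/topology/thread_siblings_list)"
    done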
  9. Scrapz, yep, it appears the 'ondemand' and 'conservative' governors have been deprecated for my CPU; all I have are 'performance' and 'powersave'. Also, I found some tools already installed in unraid to manage CPU frequency: /usr/bin/cpufreq-set, which allows you to set minimum and maximum frequencies for all cores or individually, as well as changing governors; /usr/bin/cpufreq-info, which gives the current settings; and /usr/bin/cpufreq-aperf, which seems to be a performance monitoring tool. Much easier than catting and echoing!
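For example, with cpufrequtils syntax (assuming unraid's build takes the same flags):

    cpufreq-set -c 4 -g performance   # set the 'performance' governor on cpu 4
    cpufreq-set -c 4 -d 4.3GHz        # raise cpu 4's minimum frequency
    cpufreq-info -c 4                 # confirm the new settings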
  10. That was the first thing I tried, actually, but it seems there are some differences between unRaid and Ubuntu when it comes to how CPU multipliers are handled. That file doesn't exist, so we can't change things that way.

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

returns "performance" and "powersave", and

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

is set to "powersave" by default. I'm giving this a try now...

    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

...for all cores: cpu0/cpufreq, cpu1/cpufreq, etc.
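A compact way to hit all cores at once -- same effect as repeating the echo for cpu0, cpu1, and so on:

    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done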
  11. Holy crap, I think I may have figured it out. It seems to possibly be a problem with the host not scaling the CPU frequency as efficiently/intelligently as the guest would like. Check out this shizz: http://unix.stackexchange.com/questions/64297/host-cpu-does-not-scale-frequency-when-kvm-guest-needs-it

So here's what I did to test, and I got immediate results.

    cd /sys/devices/system/cpu/cpu0/cpufreq

That's config info for cpu0; you can monkey with your CPU in here.

    cat scaling_max_freq

resulted in 4300000. So I thought I'd give this a try:

    echo 4300000 > scaling_min_freq

Basically what this does is set the minimum frequency to be the same as the maximum, so the core runs full tilt constantly. I did it for the other CPUs I was passing to the Windows VM as well. Went back to the VM, and noticed in CPU-Z that my CPU was now running at max frequency. At first glance, all my stuttering and slowdown problems were gone as well. I still have to do more testing, but in-emulator benchmarks have immediately improved 33%. This is exciting.

Now, I probably shouldn't leave things like this. But what I surmise was happening to me is that the hypervisor wasn't triggering a jump to the highest multiplier. It could if it wanted to, though, because Prime95 did it. So, what do we do with this information? It should be possible to change the frequency scaling rules, shouldn't it?
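To cover every pinned core in one go, something like this (cores 0-3 here are just the Windows VM's pins from my XML; substitute your own):

    # pin each passed-through core's minimum frequency to its maximum
    for n in 0 1 2 3; do
        d=/sys/devices/system/cpu/cpu$n/cpufreq
        cat $d/scaling_max_freq > $d/scaling_min_freq
    done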
  12. After a long week of banging my head against walls at work, I've got a couple of days off to bang my head against walls with this instead. Thanks for the suggestion, bungee91. I downloaded the script you linked to and managed to install netperf, but couldn't find a build of netserver that would work. Instead, I just tried some trial and error, but didn't manage to see any improvement. I'm going to ask around on the Dolphin forums as well to see if someone over there might know some way to improve things. It's very puzzling; I'm starting to think that unraid just isn't going to be able to do this. Holding out hope that 6.2 will help -- OVMF instead of SeaBIOS improved things a little for me -- but something here just doesn't add up.
  13. Thanks Scrapz, gave it a try but no dice. Playing around with PCIe settings in the BIOS now; I'm just about out of ideas.
  14. Does it give you that error if you start the VM from a cold boot? And I mean totally cold: physically turn the server off, then back on again. I was trying for a while to pass through a USB card that didn't support PCI reset, and it would work the very first time, but any subsequent time it would give me an error that the card couldn't be initialized. The only way to get it working again was a cold boot; a reboot wouldn't do the trick. Nothing you can do if that's the case.
  15. I really don't think it's a CPU problem. Like I said, CPU usage is in the 40% range (as indicated in Windows) and things are still stuttering. I've run Prime95 to rule out CPU usage being misreported: when stress testing, usage is pegged at 100%. I have tried giving it more cores, all cores, even fewer on a whim. No change.

The slowdowns are repeatable. They'll occur at the same point in a game map, for example: I'll load up Super Mario Galaxy, and if I walk to a certain place with the camera pointed in a certain direction, my FPS drops from 60 to 50, and stays at 50 if I don't move again. All the while I'm looking at my CPU meter never going above 40%, with nothing running in the background.

I've ruled out Dolphin as a culprit. I've tried both the DX and OpenGL backends and tweaked every setting, and this is the best I've been able to get it. I'm also getting bad GPU benchmark scores in 3DMark and Cinebench. It might not be a GPU issue, but it sure seems like it. I just don't know what to try next.
  16. Ahhh, scratch that idea. It must be a bug in Nvidia's control panel: both GPU-Z and lspci run under Windows show the same x16/x8 that lspci does from unraid.
  17. Crappy 3DMark scores, too. I didn't make a note of the score before closing the window, but I was getting 30fps-ish at 720p. Hmmmm, maybe a PCIe bus width issue? Loading up the Nvidia control panel and selecting System Information shows "Bus: PCI Express x 1". CPU-Z doesn't show anything in the Bus Width section, though. lspci -vv results:

    Subsystem: ZOTAC International (MCO) Ltd. Device 3265
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 32 bytes
    Interrupt: pin A routed to IRQ 47
    Region 0: Memory at fa000000 (32-bit, non-prefetchable) [size=16M]
    Region 1: Memory at f0000000 (64-bit, prefetchable) [size=128M]
    Region 3: Memory at f8000000 (64-bit, prefetchable) [size=32M]
    Region 5: I/O ports at c000 [size=128]
    Expansion ROM at fb000000 [disabled] [size=512K]
    Capabilities: [60] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee00498  Data: 0000
    Capabilities: [78] Express (v2) Endpoint, MSI 00
        DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
        DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
        LnkCap: Port #2, Speed 8GT/s, Width x16, ASPM L0s L1, Latency L0 <1us, L1 <4us
            ClockPM+ Surprise- LLActRep- BwNot-
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range AB, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
            Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
            Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
            EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest+
    Capabilities: [b4] Vendor Specific Information: Len=14 <?>
    Capabilities: [100 v1] Virtual Channel
        Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb: Fixed- WRR32- WRR64- WRR128-
        Ctrl: ArbSelect=Fixed
        Status: InProgress-
        VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
            Status: NegoPending- InProgress-
    Capabilities: [128 v1] Power Budgeting <?>
    Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
    Capabilities: [900 v1] #19
    Kernel driver in use: vfio-pci

Hmmm. LnkCap shows x16, LnkSta shows x8. (I've got two GPUs in here, and the 5820k is short on lanes, so I'm not surprised about the x8; as far as unraid's end is concerned, it's connected at x8.)

And looking here, http://www.linux-kvm.org/page/PCITodo, "Support for different PCI express link width/speed settings" is on their to-do list. Specifically: "Issue: QEMU currently emulates all links at minimal width and speed. This means we don't need to emulate link negotiation, but might in theory confuse guests for assigned devices." Although that page is undated, so I don't know if this is still the case...
  18. Hey everyone, I've posted a few times here getting my system up and everyone's been a great help, thanks. I'm putting aside part of my unraid server to use as a gaming and HTPC rig: a Core i7 5820k overclocked and stable at 4.5GHz. Everything's running, but unfortunately not smoothly enough for me to actually use. Here's what I've done:

- Isolated cores 0-7 (out of 12 total) from host operations using isolcpus=0-7 in syslinux
- Passed through cores 0-3 and 8GB RAM (out of 32 total) to the Windows machine
- Windows 10 VM image located on an unshared NVMe SSD (fast fast fast)
- Disabled xHCI in the BIOS to split apart the USB controllers, one being passed through to the Windows machine using <hostdev>
- Nvidia GTX760 + audio passed through to the Windows machine using <qemu:commandline>
- MSI stuff done (GTX760 and audio controller show negative IRQs in Device Manager; lspci -v -s shows MSI: Enable+)

DPC latency tests are generally good, under 1000us for the most part with the occasional spike. It was much, much worse, but enabling MSI on the GTX760 largely fixed that. System Interrupts in Resource Monitor seems a little high: it's averaging about 4% CPU right now, but last night during tinkering it was up around 10% at times.

I'm gaming with Dolphin, which is an emulator and generally CPU-bound. Running at 100%, 60FPS, my CPU usage hovers around 35-40%, so I've got plenty of overhead there. But I'm getting dips in framerate that I'm thinking are GPU-related... because what else could it be?

Oh, another weird thing that, who knows, may be related: I get weird mouse stuttering sometimes. Like the pointer gets stuck for a second, then boing, it's off on the other side of the screen, overshooting whatever I'm trying to click on. That's pretty frustrating too.

Here's my xml. Nothing weird. (Yeah, I haven't deleted the virtio drivers ISO part yet; I'd thought of that in a previous VM running Win7 with the same stuttering problems, and it didn't help.)

    <domain type='kvm' id='16' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Windows 10</name>
      <uuid>8cca1c77-5110-27f1-aa77-5386c6405f85</uuid>
      <metadata>
        <vmtemplate name="Custom" icon="windows7.png" os="windows7"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <memoryBacking>
        <nosharepages/>
        <locked/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='3'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='4' threads='2'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/nvme/vm_images/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/Misc/kvm/virtio-win.iso'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:c0:89:32'/>
          <source bridge='virbr0'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target port='0'/>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Games Machine.org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x1a' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </memballoon>
      </devices>
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=07:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=07:00.1,bus=root.1,addr=00.1'/>
      </qemu:commandline>
    </domain>

I'm out of ideas. Anyone know what I might be missing?
  19. I'm so tantalizingly close to being able to put my unraid server back down in the basement where it belongs... if only I could get a working USB solution for my two (1x Win7, 1x Mac OS X) VMs! I posted last week here http://lime-technology.com/forum/index.php?topic=43816.msg441303#msg441303 asking for any advice on my Mac-side USB problems, which are almost certainly a driver issue inside the VM. In the meantime, I bit the bullet and purchased a PCIe card advertised as Mac-compatible, and it looked like it had solved all my problems. Oh boy, was I wrong!

The card, an Orinoco 4-port Fresco Logic based card, passes through and works properly on the Mac side. But when I restart the Mac VM from inside the VM, the whole server locks up. Like, hard, refuses-telnet-connections locks up. If I just shut down the Mac VM, either from inside or with a force stop from unraid's webui, the server continues to run normally, but I can't start that VM again: it's unable to pass through the USB card a second time. And what's weirder, rebooting unraid does NOT allow me to start the VM again either. I have to actually hard reboot, power down the system and bring it up again cold.

Here are some things I saw in my syslog pertaining to the card. I'm guessing it's just bad firmware, card-won't-work kinda stuff that means I'm out $30 and have to just buy a different one, but I'm hoping there's something I can try to coax it into working.

    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: [1b73:1100] type 00 class 0x0c0330
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: reg 0x10: [mem 0xfb100000-0xfb10ffff 64bit]
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: reg 0x18: [mem 0xfb111000-0xfb111fff 64bit]
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: reg 0x20: [mem 0xfb110000-0xfb110fff 64bit]
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: supports D1
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: PME# supported from D0 D1 D3hot

So far, so good?

    Feb 8 06:34:48 Behemoth kernel: dmar: [Firmware Bug]: RMRR entry for device 09:00.0 is broken - applying workaround

...oh, maybe not. This line makes me think that maybe the card is buggered.

    Feb 8 06:34:48 Behemoth kernel: IOMMU: Setting identity map for device 0000:09:00.0 [0x36e33000 - 0x36e41fff]
    Feb 8 06:34:48 Behemoth kernel: pci 0000:09:00.0: Signaling PME through PCIe PME interrupt
    Feb 8 06:34:48 Behemoth kernel: pci-stub 0000:09:00.0: claimed by stub

(Yeah, I tried adding it as a pci-stub.ids entry in syslinux.cfg to see if that would help. It didn't.)

Now, when shutting down the VM, I spot this little nugget:

    Feb 8 06:42:50 Behemoth kernel: vfio-pci 0000:09:00.0: timed out waiting for pending transaction; performing function level reset anyway

Uh oh! That doesn't look good. Trying to restart the VM from this point gives us these lines pertaining to the card:

    Feb 8 15:44:01 Behemoth kernel: vfio-pci 0000:09:00.0: Refused to change power state, currently in D3
    Feb 8 15:44:01 Behemoth kernel: vfio-pci 0000:09:00.0: Refused to change power state, currently in D3
    Feb 8 15:44:01 Behemoth kernel: vfio-pci 0000:09:00.0: Refused to change power state, currently in D3
    Feb 8 15:44:02 Behemoth kernel: vfio-pci 0000:09:00.0: timed out waiting for pending transaction; performing function level reset anyway
    Feb 8 15:44:02 Behemoth kernel: vfio_cap_init: 0000:09:00.0 hiding cap 0xff
    ......(repeats 50 more times).......
    Feb 8 15:44:02 Behemoth kernel: vfio_ecap_init: 0000:09:00.0 hiding ecap 0xffff@0x100
    Feb 8 15:44:02 Behemoth kernel: vfio_ecap_init: 0000:09:00.0 hiding ecap 0xffff@0xffc
    ......(same with the ecap bits)......
    Feb 8 15:44:02 Behemoth kernel: genirq: Flags mismatch irq 18. 00000000 (vfio-intx(0000:09:00.0)) vs. 00000080 (ehci_hcd:usb1)
    Feb 8 15:44:03 Behemoth kernel: vfio-pci 0000:09:00.0: Refused to change power state, currently in D3
    Feb 8 15:44:04 Behemoth kernel: vfio-pci 0000:09:00.0: timed out waiting for pending transaction; performing function level reset anyway

And then this pops up in unraid:

    internal error: early end of file from monitor: possible problem:
    2016-02-08T20:44:02.800332Z qemu-system-x86_64: -device vfio-pci,host=09:00.0,bus=root.1,addr=00.2: vfio: Error: Failed to setup INTx fd: Device or resource busy
    2016-02-08T20:44:02.801472Z qemu-system-x86_64: -device vfio-pci,host=09:00.0,bus=root.1,addr=00.2: Device initialization failed
    2016-02-08T20:44:02.801500Z qemu-system-x86_64: -device vfio-pci,host=09:00.0,bus=root.1,addr=00.2: Device 'vfio-pci' could not be initialized

Any insights?
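For anyone wanting to try the same stub approach, the relevant kernel parameter goes on the append line in syslinux.cfg. A sketch (1b73:1100 is this card's vendor:device ID from the log above; the rest of the line is whatever your config already has):

    label unRAID OS
      kernel /bzimage
      append pci-stub.ids=1b73:1100 initrd=/bzroot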
  20. Read post #23 here: http://lime-technology.com/forum/index.php?topic=43924.msg436930#msg436930 It explains pretty clearly what you need to do. NVMe drives aren't supported natively in unraid yet (though it's apparently being worked on), so you need to do everything manually. It took me about five minutes and has so far been 100% reliable. Essentially, you'll be creating a mountpoint in /mnt, formatting the NVMe drive, and mounting it there yourself. You'll also need a line in your go script to mount it on reboots. The drive won't be shared, but you can create your VM image there just fine, and the speeds are incredible.
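Roughly, the manual steps look like this. A sketch only -- the device name and filesystem here are assumptions, so check yours with lsblk first:

    mkdir /mnt/nvme                    # create the mountpoint
    mkfs.xfs /dev/nvme0n1p1            # one-time format (destroys existing data!)
    mount /dev/nvme0n1p1 /mnt/nvme

    # and in the go script, so it survives reboots:
    mkdir -p /mnt/nvme && mount /dev/nvme0n1p1 /mnt/nvme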
  21. Thanks a lot for this guide. I've got a nearly perfect install of Yosemite running with GPU passthrough, HDMI audio, the Clover bootloader, working iMessage... almost everything I need! The only thing giving me trouble is passing through a USB controller.

I've got a single Bluetooth dongle working just fine; it pairs nicely with my Apple keyboard and trackpad, though there seems to be a very slight lag at times. I got the dongle working with:

    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0a12'/>
        <product id='0x0001'/>
        <address bus='4' device='15'/>
      </source>
      <alias name='hostdev0'/>
    </hostdev>

...even though I was under the impression that PCIRootUID=1 would prevent such a passthrough from working. But anyway, I don't want to do it like this; I want an entire controller for hot plugging, and I've purchased a USB PCIe card so I'll have enough controllers to go around. These are the controllers I've been trying to pass through:

    00:1a.0 USB controller: Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 (rev 05)
    09:00.0 USB controller: VIA Technologies, Inc. Device 3483 (rev 01)

Both seem to pass through successfully. The onboard Intel controller doesn't seem to be recognized by the Mac and doesn't show up in the System Profiler. The PCIe VIA controller is recognized fine and shows up in the System Profiler... but that's pretty much as far as we get. I've tried pretty much every USB device I can find, and most aren't recognized at all. I did manage to find one USB thumb drive that would appear in the device tree, but it wouldn't mount on the desktop. Refreshing the device tree with command-R seemed very slow as well. I've also tried the latest GenericUSBXHCI.kext, but the card doesn't show up at all when I do.

Any advice? Here's my xml in case there's anything there I'm doing wrong. Thanks in advance!

    <domain type='kvm' id='23' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>OS X El Capitan 10.11</name>
      <uuid>0ba39646-7ba1-4d41-9602-e2969a3fc26d</uuid>
      <metadata>
        <vmtemplate icon="/mnt/nvme/vm_images/extra/EC.png"/>
        <type>None</type>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='5'/>
        <vcpupin vcpu='2' cpuset='6'/>
        <vcpupin vcpu='3' cpuset='7'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
        <boot dev='hd'/>
        <bootmenu enable='yes'/>
      </os>
      <features>
        <acpi/>
      </features>
      <cpu mode='custom' match='exact'>
        <model fallback='allow'>core2duo</model>
      </cpu>
      <clock offset='utc'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>destroy</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/nvme/vm_images/Yosemite.img'/>
          <backingStore/>
          <target dev='hda' bus='sata'/>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='dmi-to-pci-bridge'>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
        </controller>
        <controller type='pci' index='2' model='pci-bridge'>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:00:20:20'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='e1000-82545em'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </interface>
        <hostdev mode='subsystem' type='usb' managed='yes'>
          <source>
            <vendor id='0x0a12'/>
            <product id='0x0001'/>
            <address bus='4' device='15'/>
          </source>
          <alias name='hostdev0'/>
        </hostdev>
        <memballoon model='none'>
          <alias name='balloon0'/>
        </memballoon>
      </devices>
      <seclabel type='none' model='none'/>
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=03:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=03:00.1,bus=root.1,addr=00.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=09:00.0,bus=root.1,addr=00.2'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-kbd'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-mouse'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='isa-applesmc,osk=********'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
      </qemu:commandline>
    </domain>
  22. Jon, thanks a lot for all your advice, both in this particular thread and in the forums in general. I ended up going the 5820k route, and after a few hiccups I've got things running nearly perfectly! I'm sending this message from a Mac OS Yosemite VM that's more or less 100% functional, complete with GPU and Bluetooth passthrough. It's amazing how the addition of KVM has turned unRAID from a useful appliance into the only computing solution I need.

I have one (maybe simple?) question that I've searched the forums for but can't find an answer to... surprisingly, maybe it's too obvious and I've just missed it, lol. How do I pass my shares through to a VM? Is there a way to do this in my VM's xml, or am I best to just use SMB or AFP?

Oh, another one. When numbering CPUs, I get that mine run from 0-11. Six of those will be physical and six logical. Do they alternate: 0 physical, 1 logical, 2 physical, etc.? And what would happen if I were to pass a single logical CPU to a VM? What's the difference between the two from a virtualization standpoint?
  23. Thanks Jon! That's very helpful. I'd pretty much settled on Haswell already, though I was slightly tempted by Skylake's improved single-threaded performance, which would help in emulation; but it sounds as though it's going to be the 5820k for certain.

As far as pinning logical CPUs goes: playing around with things right now, when editing a VM I can select either core 0, core 1, or both. Is this what you're getting at, making sure that I'm not selecting the same cores in multiple VMs? And how can I prevent unRaid from using those cores, so that things aren't slowed up in the VM when, for instance, Sick Beard decides it's time to start doing its thing? Performance of the gaming VM is critical; everything else can fight for what's left over.
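For anyone finding this later: what I eventually used for exactly this, per my posts above, was isolcpus on the append line in syslinux.cfg. A sketch, assuming cores 0-3 are the ones to reserve for VMs:

    label unRAID OS
      kernel /bzimage
      append isolcpus=0-3 initrd=/bzroot

The isolated cores are then free for pinning with <vcpupin>, and the host scheduler otherwise leaves them alone.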
  24. Hello everyone, and thanks for taking the time to read this message! Long-time unRaid user here, just starting to dabble with KVM and virtualization. My current setup doesn't support VT-d, so I haven't been able to do anything too complex yet, but I was pleasantly surprised at how easy unRaid makes setting up virtual machines! I've done a ton of research, but there are still a few questions I haven't found the answers to.

I'm looking at a rather large CPU upgrade, to give my server a bit more grunt and maybe consolidate a few machines into one centralized rig. In addition to my server, I've got an aging Mac Mini C2D that is very long in the tooth, a Windows gaming machine sporting an overclocked i5 2500k, and an old Atom-based Acer Revo that I use purely as an OpenELEC machine. So in my mind, I'll be running at least three guests: one Windows, one Mac OS X (I've got plenty of Hackintosh experience, so I'm confident I can get that running), and one OpenELEC. As far as hardware goes, I'm leaning towards the i7 5820k six-core Haswell-E, though I haven't completely ruled out the 6700k Skylake. GPUs will be a GTX760 for Windows, a Radeon HD5770 for Mac (both harvested from existing machines), and whatever cheap Nvidia card I can find for OpenELEC.

Now, I play a lot of games under emulation, specifically Gamecube and Wii games on Dolphin. I'm generally satisfied with Dolphin's performance on my 2500k, overclocked to about 4.3GHz. I'll possibly be overclocking the server a bit as well (conservatively, and with much torture testing beforehand to ensure it's 100% stable). Even factoring in 5-10% overhead for running under virtualization, the performance increase going from Sandy Bridge to Haswell-E would suggest that I'd be happy with Dolphin's performance in KVM... Am I right to assume as much? Are there ways to prioritize one guest over another, or to ensure that a guest is running on real cores, not virtual ones?

And my other question involves the behaviour of GPUs that are passed through to a guest. What happens to the GPU when a guest is suspended or saved? When it's resumed again, will the GPU reattach automatically? Is it possible to use the same GPU in multiple guests -- not simultaneously, obviously, but by stopping one and then resuming the other? Thanks!