iphillips77

Members
  • Posts: 24

  1. Here are some tips for anyone having trouble getting HDMI audio to work with nvidia cards, particularly those like me who had it working under Sierra but had it break under High Sierra.

     Start by downloading HDMIAudio.kext 1.1 from here: https://www.dropbox.com/s/9xenemmfwa1ee7b/HDMIAudio-1.1.dmg?dl=0 You'll also need IORegistryExplorer from here: https://mac.softpedia.com/get/System-Utilities/IORegistryExplorer.shtml

     Fire up IORegistryExplorer and search for HDAU. That should narrow things down to just your audio device. Take a look at the properties on the right: you'll see vendor-id and device-id. The vendor-id should be <de 10 00 00> for any nvidia card. Your device-id will be <xx xx 00 00>, where xxxx is your ID. Mine was <ba 0f 00 00>, so BA0F is my device-id. Yours is probably different; make a note of it.

     Mount your EFI partition and put HDMIAudio.kext in EFI/CLOVER/kexts/Other. Next, fire up Clover Configurator and load your config.plist. In the "Kernel and Kext Patches" section, create a new entry under KextsToPatch. Use com.apple.driver.AppleHDAController as the kext name, put DE101A0E in the "find" column, and DE10XXXX in the "replace" column, replacing XXXX with your device ID. I put DE10BA0F there. Reboot, and cross your fingers.

     This patches AppleHDAController on the fly, replacing one of the valid IDs (DE101A0E) with your card's ID. That was enough for me to get HDMIAudio.kext working as well as it did in 10.12 Sierra. Not sure if this works under Mojave -- I'm waiting for new web drivers to be released before I upgrade further -- but I don't believe this kext patch would cause any problems under any macOS version if done properly. Good luck!

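     If you'd rather edit config.plist by hand than use Clover Configurator, the entry lands under KernelAndKextPatches > KextsToPatch and looks roughly like this. (A sketch: the <data> values are just my find/replace bytes, DE101A0E and DE10BA0F, base64-encoded the way the raw plist stores them -- substitute your own device ID.)

        <key>KernelAndKextPatches</key>
        <dict>
            <key>KextsToPatch</key>
            <array>
                <dict>
                    <key>Comment</key>
                    <string>AppleHDAController: swap DE101A0E for my DE10BA0F</string>
                    <key>Name</key>
                    <string>com.apple.driver.AppleHDAController</string>
                    <key>Find</key>
                    <data>3hAaDg==</data> <!-- DE 10 1A 0E -->
                    <key>Replace</key>
                    <data>3hC6Dw==</data> <!-- DE 10 BA 0F -->
                </dict>
            </array>
        </dict>
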
  2. Just in case anyone else has the same problem in the future -- and since I'm guessing I'm not the only person out there who keeps their server tucked away in the quiet of the basement, that wouldn't surprise me -- I've got it sorted. It was the HDMI cable all along.

     I ended up picking up a "Luxe Series" active HDMI cable from Monoprice. I'm currently going from Mini DisplayPort on the Radeon R9 280, through a passive adapter to full-sized DisplayPort, through the Club3D DisplayPort to HDMI adapter, then up through the floor and wall to my wall-mounted television. The cable is CL3 fire rated for in-wall use. So much for "all HDMI cables are the same"!

     So far it seems solid: 4k@60hz, and what appears to be 4:4:4. I'm not 100% sure, because it turns out this is an RGBW panel, but even with the extra stupid subpixel it's so much better than the 1080p display I was using before! Holding down "Option" while clicking "Scaled" in the Displays preference pane brings up all the scaled resolutions I could ever want, and some I don't, all at 60hz.

     Now if I could only get my sound to work. Off to the next problem! Thanks for your help, everyone.

  3. Thanks an awful lot for the suggestions, I'm getting closer. After some tweaking, I noticed that even though I was using the DisplayPort output, it was showing up as "HDMI or DVI" in System Report. A little tinkering with the stock Mac OS framebuffer definitions and I've made some progress: I can now get 4K at 60Hz... for a few seconds. Then the screen blanks out and I get "snow". Not sure if it's actual snow, because that makes no sense, or something the TV generates itself when it loses the signal. I'm using a fairly long HDMI cable, so that may be the problem.

     Another issue I'm seeing: if I use a "scaled" display, the option for 60hz vanishes -- except for 1920x1080, which I can get in Retina at 60hz, as well as unscaled 4K. Is this normal behaviour?

  4. First of all, I want to express my appreciation to Gridrunner et al for all the knowledge and experience shared here in the forums and elsewhere. I'm currently running two so-far rock-stable Sierra VMs -- one toiling away headless running Indigo (my home automation server) and various other "important" things, and this one, which I'm using as my daily driver.

     My daily, however, needs a bit of tweaking. I've got a pair of graphics cards in my server -- a Radeon R9 280 and an nVidia GTX 760. Both seem reasonably comparable in power, and both seem to work pretty well when passed through to the Sierra VM. I do light gaming (console emulation mostly) on a Windows VM in the other room, and either card is fine for that purpose, so I've got either at my disposal for the Mac.

     I'm running a 43" Insignia 4k television as a monitor. It supports 4k @ 60hz with 4:4:4 colour space, connected to the server via HDMI cable. On either card, the Mac displays 4k just fine -- but I can't get a refresh rate higher than 30hz, on either card. I've tried the HDMI ports on the cards, as well as an active DP to HDMI adapter (this one from Club 3D), which many people report success with, at least on genuine Apple machines.

     I know this is an issue I'm likely going to need the help of the Hackintosh community for, but it has raised some questions about my Clover setup. I can't get the VM to boot successfully with a system definition any newer than MacPro2,1 -- pretty old -- and I'm having to specify a Penryn processor in the VM's XML definition. Nosing around in this thread makes me wonder if that's because of an issue with Clover that might since have been corrected.

     Anyway, I'm a bit out of my depth here. Does anyone have any insights they might be able to share as to how to configure my system so my hardware appears as modern as it actually is, so I can eliminate that as a possible reason why 4k@60hz screen modes might not be available? Thanks!

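     For reference, the Penryn workaround I'm talking about is the <cpu> block in the VM's libvirt XML -- roughly this (a sketch; the first block is what I'm stuck on now, the second is the standard host-passthrough mode I'd like to be able to boot with instead):

        <!-- what I'm using now: present an ancient Penryn so macOS boots -->
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>Penryn</model>
        </cpu>

        <!-- what I'd like to use: expose the real host CPU to the guest -->
        <cpu mode='host-passthrough'/>
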
  5. It's more likely a problem with nvidia's utility. I had the same issue and was chasing my tail a bit. Try downloading lspci for Windows and see what it tells you. Nvidia's tool told me 1x no matter what I did, but lspci (and GPU-Z, I think?) showed the correct number of lanes.

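     Something like this (a sketch -- the 01:00.0 address is made up, substitute your GPU's; the Windows port takes the same arguments):

        # find the GPU's bus address, then check the negotiated link width
        lspci | grep -i vga
        lspci -vv -s 01:00.0 | grep -i lnksta
        # you want to see something like "Speed 8GT/s, Width x16"
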
  6. New weirdness. I thought I'd gotten things pretty much sorted out. I was watching a video in Kodi on my Windows VM in the living room, and went over to transcode a video on the Mac VM to save to my ancient iPad. As soon as I started the transcode and the Mac's CPU meter approached 100%, the Windows VM started to sputter and lag. I've given the Windows VM cores 0-3, the Mac VM uses 4-11, and unraid starts with isolcpus=0-7 in syslinux. Why would the Mac VM kill the Windows VM once it starts running something CPU intensive like a video transcode? It shouldn't be touching those cores at all.

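     For reference, here's roughly what I've got (a sketch from memory; the vcpupin block is the libvirt equivalent of the core assignment described above, shown for the Windows VM -- the Mac VM pins 4-11 the same way):

        # syslinux.cfg: keep unraid off cores 0-7
        append isolcpus=0-7 initrd=/bzroot

        <!-- Windows VM libvirt XML: vcpus pinned to cores 0-3 -->
        <cputune>
          <vcpupin vcpu='0' cpuset='0'/>
          <vcpupin vcpu='1' cpuset='1'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='3'/>
        </cputune>
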
  7. Have you tried swapping the two cards? Not all cards support PCI reset, which basically means they'll work the first time but that's it. I had that problem with a USB card I was trying to use -- I had to hard reboot unraid every time I needed to restart the VM. Try swapping them: boot unraid off the 430 and pass the 210 to openelec, and see if you have any luck.

  8. Yeah, I figured they've been pretty busy. I'm going to send them an email momentarily just to have them give this thread a glance; I think there are some real issues here with CPU frequency governors and core assignments that, at the very least, could use some clarification.

     I still haven't run latency tests to determine exactly what the pairings are, although from what I've read, Intel convention seems to be all physical cores followed by hyperthreaded cores. I.e., for a 4-core hyperthreaded CPU, the pairings are (0,4) (1,5) (2,6) (3,7); a 2-core would be (0,2) (1,3); a 6-core (0,6) (1,7), and so on. But I haven't run the tests because I don't know what to do with the information.

     If I do an isolcpus=0-3,6-9 in syslinux, to give unraid core pairs (4,10) and (5,11), things don't act the way I think they should when I start setting governor profiles. Some cores won't even report their frequency with /usr/bin/cpufreq-info unless I give isolcpus totally sequential cores (isolcpus=0-7 in my case now).

     So, in addition to CPU core pairs, I'm very curious what happens when we isolate those cores from host operations. How isolated are these cores? When I give unraid (4,10) and (5,11) to play with, and I go to the webgui dashboard, does core (0,1),(2,3) correspond to (4,10),(5,11)? And in the same instance, if I do a cpufreq-info, does isolating cpus change that numbering assignment or what? And does any of this really make any difference? Haha

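     Incidentally, rather than running latency tests, the kernel will tell you the pairings directly (a sketch; these are standard sysfs paths, and the example output is what a 6-core/12-thread chip following that convention would report):

        # each line lists one cpu's sibling pair (so each pair shows up twice)
        cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
        # e.g. 0,6 / 1,7 / 2,8 / 3,9 / 4,10 / 5,11
        # or, if lscpu is available, as a table with a CORE column:
        lscpu -e
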
  9. Scrapz: yep, it appears the 'ondemand' and 'conservative' governors have been deprecated for my CPU. All I have are 'performance' and 'powersave'. Also, I found some tools already installed in unraid to manage CPU frequency: /usr/bin/cpufreq-set, which allows you to set minimum and maximum frequencies for all cores or individually, as well as changing governors; /usr/bin/cpufreq-info, which gives the current settings; and /usr/bin/cpufreq-aperf, which seems to be a performance monitoring tool. Much easier than catting and echoing!

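     A sketch of the equivalents of the earlier echo commands using those tools (the core number and frequency here are just examples; frequencies are in kHz):

        # set the governor on core 4
        cpufreq-set -c 4 -g performance
        # pin core 4's minimum frequency to its maximum
        cpufreq-set -c 4 -d 4300000 -u 4300000
        # confirm the current settings for core 4
        cpufreq-info -c 4
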
  10. That was the first thing I tried, actually, but it seems there are some differences between unRaid and Ubuntu when it comes to how CPU multipliers are handled. That file doesn't exist, so we can't change things that way.

      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors returns "performance" and "powersave", and cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor is set to "powersave" by default. I'm giving this a try now, for all cores (cpu0/cpufreq, cpu1/cpufreq, etc.):

      echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

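      And a loop version of "for all cores", so you don't have to echo into each cpuN directory by hand:

        # set every core's governor to performance, then verify
        for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
            echo performance > "$g"
        done
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
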
  11. Holy crap, I think I may have figured it out. It seems to possibly be a problem with the host not scaling the CPU frequency as efficiently/intelligently as the guest would like. Check out this shizz: http://unix.stackexchange.com/questions/64297/host-cpu-does-not-scale-frequency-when-kvm-guest-needs-it

      So here's what I did to test, and got immediate results. cd /sys/devices/system/cpu/cpu0/cpufreq -- that's the config info for cpu0; you can monkey with your CPU in here. "cat scaling_max_freq" returned 4300000, so I thought I'd give this a try:

      echo 4300000 > scaling_min_freq

      Basically what this does is set the minimum frequency to the same as the maximum, so the core runs full tilt constantly. I did it for the other CPUs I was passing to the Windows VM as well. Went back to the VM and noticed in CPU-Z that my CPU was now running at max frequency -- and at first glance, all my stuttering and slowdown problems were gone as well. I still have to do more testing, but in-emulator benchmarks have immediately improved 33%. This is exciting.

      Now, I probably shouldn't leave things like this. But what I surmise was happening is that the hypervisor wasn't triggering a jump to the highest multiplier. It could if it wanted to, though, because Prime95 did it. So, what do we do with this information? It should be possible to change the frequency scaling rules, shouldn't it?

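      The whole test as a loop, for anyone following along (a sketch; cores 0-3 here are just an example, use whichever cores you're passing to the VM):

        # pin min frequency to max on the VM's cores so they run full tilt
        for c in 0 1 2 3; do
            d=/sys/devices/system/cpu/cpu$c/cpufreq
            cat "$d/scaling_max_freq" > "$d/scaling_min_freq"
        done
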
  12. After a long week of banging my head against walls at work, I've got a couple of days off to bang my head against walls with this instead. Thanks for the suggestion, bungee91. I downloaded the script you linked to and managed to install netperf, but couldn't find a build of netserver that would work. Instead I just tried some trial and error, but didn't manage to see any improvement. I'm going to ask around on the Dolphin forums as well to see if someone over there might know some way to improve things. It's very puzzling. I'm starting to think that unraid just isn't going to be able to do this. I'm holding out hope that 6.2 will help -- OVMF instead of SeaBIOS improved things a little for me -- but something here just doesn't add up.

  13. Thanks Scrapz, gave it a try but no dice. Playing around with PCIe settings in the BIOS now; I'm just about out of ideas.

  14. Does it give you that error if you start the VM from a cold boot? And I mean totally cold: physically turn the server off, then back on again. I was trying for a while to pass through a USB card that didn't support PCI reset, and it would work the very first time, but on any subsequent attempt it would give me an error that the card couldn't be initialized. The only way to get it working again was a cold boot; a reboot wouldn't do the trick. Nothing you can do if that's the case.

  15. I really don't think it's a CPU problem. Like I said, CPU usage is in the 40% range (as indicated in Windows) and things are still stuttering. I've run Prime95 to rule out CPU usage being misreported: when stress testing, usage is pegged at 100%. I've tried giving it more cores, all cores, and even fewer on a whim. No changes.

      The slowdowns are repeatable. They'll occur at the same point in a game map, for example: I'll load up Super Mario Galaxy, and if I walk to a certain place with the camera pointed in a certain direction, my FPS drops from 60 to 50 -- and stays at 50 if I don't move again. All the while my CPU meter never goes above 40%, with nothing running in the background.

      I've ruled out Dolphin as a culprit. I've tried both the DX and OpenGL backends and tweaked every setting, and this is the best I've been able to get. I'm also getting bad GPU benchmark scores in 3DMark and Cinebench. It might not be a GPU issue, but it sure seems like it. I just don't know what to try next.