Everything posted by 1812

  1. like this?

     Fresco Logic FL1100 USB 3.0 Host Controller (sold as Inateck KT4006)
     Usage: USB disk access, USB 3 to Ethernet adapter, keyboard/mouse, Bluetooth dongles
     Rating: Good+. Good bandwidth, no aux power cable needed for the card; 1 of the 5 I ordered died within 2 weeks.
     Instructions: for OS X, plug and play through Sierra 10.12.2. Windows not tested, but the manufacturer claims Windows 7+ compatibility (comes with a disc of Windows drivers).

     666172-001 MNPA19-XTR HP 10Gb Ethernet Network Card (Mellanox ConnectX-2 EN)
     Usage: capable of moderate to heavy loads
     Rating: Excellent/Good if you need to cheaply move data faster than gigabit
     Instructions: no OS X support. Win10 is plug and play, but the card works better with the Mellanox drivers (downloadable from their website). Native unRaid support for the card in 6.2.4 (possibly earlier). I currently use a pair to back up one server to the other.

     GeForce GT 710
     Usage: good cheap general desktop card, light gaming, watching movies
     Rating: Good
     Instructions: plug and play in OS X through Sierra 10.12.2 as long as smbios is set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs; see the Clover sketch below). Did not try the Nvidia drivers.

     GeForce GT 730
     Usage: good cheap desktop card, light gaming, watching movies
     Rating: Good
     Instructions: same as the GT 710: plug and play in OS X through Sierra 10.12.2 with smbios set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs). Did not try the Nvidia drivers.

     GTX 760 SC
     Usage: mild video editing in FCPX, 3 simultaneous screens utilized
     Rating: Good+
     Instructions: plug and play in OS X through Sierra 10.12.2 as long as smbios is set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs). Using the Nvidia web drivers can be a challenge, as incremental updates to OS X can cause the drivers to no longer work with the card.
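     Since several of these cards depend on the smbios setting, here is roughly what that looks like in Clover's config.plist (a minimal sketch of just the relevant key, assuming a Clover-booted OS X vm; iMac14,2 is one of the two models mentioned above, and the surrounding keys and serial numbers are omitted):

     <key>SMBIOS</key>
     <dict>
         <!-- Clover generates the remaining SMBIOS fields to match this model -->
         <key>ProductName</key>
         <string>iMac14,2</string>
     </dict>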
  2. did you click "load shares" after entering your credentials? Let's assume you did. That means something is set up incorrectly on your Synology. I don't use admin to access my Synology for videos; as a safety precaution, you should use something else. On your Synology, create a new user (I use "video"), then give that user permission only to read or read/write the folder(s) your videos are stored in. (I only allow read access, since Plex doesn't need to write and I don't want to accidentally delete a video using Plex.) Then double check that it works by using your computer to mount the share over the network via SMB or NFS (whichever you have set up), in network discovery or whatever you use to dig around your network. Once you know that works, go back to your unRaid server and enter those credentials, click load shares, and your video folder(s) will be there. If your videos are in more than one overall source location, then each one will have to be added to the Plex container settings.
  3. If the share won't mount in unassigned devices when you click the mount button, then the docker can't use it either, because the login, password, or path is wrong. Also, the path looks funny to me. I have all my media on a Synology, and it maps by /mnt/disks/SynologyNasName/folder. If your Synology is in the same "workgroup", then search for servers in the workgroup instead of using the IP, and select from the dropdown. Then ensure your name/password are entered correctly. Once the window closes, if you've done it correctly, clicking mount will mount it; you can then take that path and add it to Plex by clicking "add another path". Once that is set, open Plex, add a library, and find the path in there. ---edit to add image of what the screen looks like when you add the mounted SMB connection in the Plex docker settings. You can set the SMB connection to auto-mount in unassigned devices in the future, in case you have to reboot your server.
  4. New test for audio/video stability and core pinning in OS X (for those who care). For this one, I used one of my other servers, with dual E5520 processors. Thread pairings as follows:

     Proc 1
     cpu 0 <===> cpu 8
     cpu 1 <===> cpu 9
     cpu 2 <===> cpu 10
     cpu 3 <===> cpu 11

     Proc 2
     cpu 4 <===> cpu 12
     cpu 5 <===> cpu 13
     cpu 6 <===> cpu 14
     cpu 7 <===> cpu 15

     Proc 2 isolated from unRaid.

     vm1: GT 730 gpu, assigned cores 4-7
     vm2: no gpu, Apple screen share, assigned cores 12-15
     emulator pin 0-3 for both vms
     vm disk images located on the same SSD

     This cpu assignment was selected because placing two vms onto the same physical cores (HT pairs) is generally accepted to cause audio/video issues.

     Result: when vm2 was running cinebench with its cores at 100%, vm1 had zero audio/video issues streaming YouTube, Netflix, or Blu-ray movies in VLC from a network server. No matter what activities I did, I could detect no slowdown and no lag. (A sketch of vm1's pinning is below.)

     Just for fun, I put both vms on exactly the same threads/cores:
     vm1: GT 730 gpu, assigned cores 4-7
     vm2: no gpu, Apple screen share, assigned cores 4-7

     YouTube videos and local Blu-rays streamed with few video quality issues, but audio was at times dirty and/or non-functional (to be expected), even when vm2 was only idling. Interestingly, when cinebench pushed the shared cores to 100% during Blu-ray playback, the video quality did not suffer at all, but as before, audio was less than desirable when it actually worked.

     So in my case, it is still fine (if not better, performance-wise) to put OS X vms on non-HT-paired cores, since it does not cause audio/video issues while retaining the benefit of higher performance when the other vm isn't 100% active. Why is it like this? I don't know if OS X is specifically immune to the audio/video issues that are said to occur in Windows 10 with this core assignment, if this particular gpu is less susceptible to audio noise with HT pinning, if it is because I am running on older enterprise equipment, or because only a single vm has a gpu (though people have reported gpu-less dockers causing problems on the HT cores of vms). ok, no more tests for a while...
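     For anyone replicating the first layout, vm1's pinning in libvirt XML would look roughly like this (a minimal sketch, assuming the thread pairing above; only the cputune section is shown):

     <cputune>
       <!-- vm1: physical cores 4-7 on Proc 2; vm2 sits on their HT siblings 12-15,
            so both vms deliberately share the same physical cores (the point of the test) -->
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='6'/>
       <vcpupin vcpu='3' cpuset='7'/>
       <!-- emulator threads kept on the non-isolated cores left to unRaid -->
       <emulatorpin cpuset='0-3'/>
     </cputune>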
  5. and the results are in....

     Details:
     HP DL380 G6 dual processor server
     Xeon X5670, 6-core HT processor, 12 logical cores per processor
     2nd processor used in testing, isolated from unRaid/dockers/etc
     vms on separate unassigned SSD drives
     Cinebench R15 (higher scores are better)
     no emulator pin set
     8GB ram on each vm
     clean, non-updated OS X Sierra (matching disk images)
     no gpu; vms accessed via Apple screen share
     unless stated otherwise, cpu cores were run at 100% for the benchmark, run 3x back to back
     no topology defined (I don't think OS X cares, based on some previous tests I ran which showed almost no difference)

     Thread pairings
     Proc 1
     cpu 0 <===> cpu 12
     cpu 1 <===> cpu 13
     cpu 2 <===> cpu 14
     cpu 3 <===> cpu 15
     cpu 4 <===> cpu 16
     cpu 5 <===> cpu 17
     Proc 2
     cpu 6 <===> cpu 18
     cpu 7 <===> cpu 19
     cpu 8 <===> cpu 20
     cpu 9 <===> cpu 21
     cpu 10 <===> cpu 22
     cpu 11 <===> cpu 23

     -----------------------------
     section 1
     all testing in this section on the following cores: vm1 6-11, vm2 18-23

     vm1 on 6 non-HT-paired vcpus, vm2 off (vm1 baseline score for non-HT pairs)
     521 527 526

     vm1 while vm2 idles on the HT sibling cores
     517 522 520

     vm1 at 100% with vm2 at 20-50% use on the HT sibling cores
     464 441 463

     vm2 while vm1 idles on the HT sibling cores
     497 514 515

     vm1 & vm2 at max 100% usage (simultaneous benchmark from here on down)
     vm1 314 vm2 309
     vm1 315 vm2 311
     vm1 317 vm2 315

     -------------------------
     section 2
     core configurations listed with each test

     vm1 on 6-8, 18-20 (vm1 baseline score for HT-paired cores; pinning sketched below)
     vm2 on 9-11, 21-23
     HT core pairs for each vm
     vm1 347 vm2 346
     vm1 345 vm2 345
     vm1 349 vm2 346

     vm1 on 6, 18, 7, 19, 8, 20
     vm2 on 9, 21, 10, 22, 11, 23
     mixed vcpu ordering of HT-paired cores
     vm1 346 vm2 345
     vm1 346 vm2 344
     vm1 348 vm2 346

     vm1 & vm2 sharing 2 cores (8, 20), utilizing only 10 total cores but 6 on each vm
     vm1 on 6-8, 18-20
     vm2 on 9, 10, 8, 21, 22, 20 (funny pairing to keep vcpu0 off the shared cores)
     vm1 286 vm2 286
     vm1 286 vm2 283
     vm1 288 vm2 285

     and just for fun, vm1 & vm2 on the same cores (6-11) at 100% utilization
     vm1 254 vm2 255
     vm1 254 vm2 257
     vm1 250 vm2 255

     --------------------------
     What does it mean?

     Using HT-paired cores scores a max of 349, vs a non-HT max of 522 with vm2 idle, meaning the non-HT assignment has about 50% more power in this setting. Even with 50% usage on vm2, vm1 only drops to 441, which is still about 27% more power than the HT-paired cores (which makes sense, actually). Even if vm2 is off, the max score of vm1 does not change when using HT pairs, so there is no possible performance gain regardless of what any other vm is doing or not doing, unlike with non-HT-paired assignments.

     Now, if both vms are using 100% of their cpu resources, then HT pairs are the clear winner, by about 10% over the non-HT assignment, which at best hit 317 (compared to 349). No video/audio testing was done in this situation because I only have 1 video card in this particular server.

     For many people none of this may apply. But for someone like me who needs cpu power, and not graphics/audio, on all vms, this has some merit and interest. I may drop in a card from one of my other servers and test for stability in the future. This actually explains quite a bit for me about why I thought my machines were wonky; it's more how I'm using them than the machines operating differently.
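     For reference, the section 2 HT-paired baseline for vm1 would be pinned something like this (a sketch only, matching the Proc 2 pairing table above; vm2 would mirror it on 9-11, 21-23):

     <cputune>
       <!-- three physical cores (6,7,8) plus their HT siblings (18,19,20) -->
       <vcpupin vcpu='0' cpuset='6'/>
       <vcpupin vcpu='1' cpuset='7'/>
       <vcpupin vcpu='2' cpuset='8'/>
       <vcpupin vcpu='3' cpuset='18'/>
       <vcpupin vcpu='4' cpuset='19'/>
       <vcpupin vcpu='5' cpuset='20'/>
     </cputune>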
  6. The cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores and non-paired vcpus in OS X? Maybe this will shed some light on whether my machines are just wonky, or if OS X is different from Windows in how it handles paired cpu cores.

     Yes, that is to be expected if the situation is like this. My cpu pairing is like this:
     cpu 0 <===> cpu 14
     cpu 1 <===> cpu 15
     cpu 2 <===> cpu 16
     cpu 3 <===> cpu 17
     cpu 4 <===> cpu 18
     cpu 5 <===> cpu 19
     cpu 6 <===> cpu 20
     cpu 7 <===> cpu 21
     cpu 8 <===> cpu 22
     cpu 9 <===> cpu 23
     cpu 10 <===> cpu 24
     cpu 11 <===> cpu 25
     cpu 12 <===> cpu 26
     cpu 13 <===> cpu 27

     So if I pinned 8 vcpus, 0,1,2,3,4,5,6,7, they would be spread over 8 separate cores and would get good performance, as those cores would not be doing anything else. But if I then set up another vm with 8 vcpus, 14,15,16,17,18,19,20,21, it would be sharing the same 8 physical cores, and when both machines run at once, performance would be bad. So over those 8 cores, the 2 vms should be:
     vm1: 0,1,2,3,14,15,16,17
     vm2: 4,5,6,7,18,19,20,21
     That way there is no overlap (see the sketch after this post). Hope that makes sense.

     Yes and no. If you have a bunch of cores, then you can run them non-HT and get good performance. If you don't, you get reduced performance. Just for kicks, I'm going to set up a couple of 4-core vms tonight on each other's pairs and run some simultaneous tests... I do expect degraded benchmarks, but I'm curious to know whether it's better to have 1 vm on its own HT pairs, or to let it share with another vm which may not be using the HT siblings at the same time, therefore lessening the performance hit vs using HT pairs... (sorry if that's confusing, still on a bunch of medicine... results in a few hours though...)
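     A sketch of that non-overlapping split in libvirt XML, assuming the cpu 0 <===> cpu 14 style pairing above (cputune sections only, one per vm):

     <!-- vm1: physical cores 0-3 plus their HT siblings 14-17 -->
     <cputune>
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='1'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='3'/>
       <vcpupin vcpu='4' cpuset='14'/>
       <vcpupin vcpu='5' cpuset='15'/>
       <vcpupin vcpu='6' cpuset='16'/>
       <vcpupin vcpu='7' cpuset='17'/>
     </cputune>

     <!-- vm2: physical cores 4-7 plus their HT siblings 18-21 -->
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='6'/>
       <vcpupin vcpu='3' cpuset='7'/>
       <vcpupin vcpu='4' cpuset='18'/>
       <vcpupin vcpu='5' cpuset='19'/>
       <vcpupin vcpu='6' cpuset='20'/>
       <vcpupin vcpu='7' cpuset='21'/>
     </cputune>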
  7. The cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores and non-paired vcpus in OS X? Maybe this will shed some light on whether my machines are just wonky, or if OS X is different from Windows in how it handles paired cpu cores.
  8. Good that you got it running. Want it faster?

     Where is the vm mounted? I see it's on a cache drive, but is that an SSD or a spinning disk? A spinning disk is much more sluggish. Even if it's an SSD cache, some believe that it is better for disk i/o to have the vm on an SSD mounted via unassigned devices. That's how I do it, but for other reasons.

     Also, cpu pinning.... What I've found is that the common idea of using HT cores with OS X vms doesn't give the best cpu performance. For example, I ran the following cinebench test this morning on 12 of my 24 cores (on 2 processors).

     Non-paired threads spanning 2 processors:
     <vcpupin vcpu='0' cpuset='12'/>
     <vcpupin vcpu='1' cpuset='13'/>
     <vcpupin vcpu='2' cpuset='14'/>
     <vcpupin vcpu='3' cpuset='15'/>
     <vcpupin vcpu='4' cpuset='16'/>
     <vcpupin vcpu='5' cpuset='17'/>
     <vcpupin vcpu='6' cpuset='18'/>
     <vcpupin vcpu='7' cpuset='19'/>
     <vcpupin vcpu='8' cpuset='20'/>
     <vcpupin vcpu='9' cpuset='21'/>
     <vcpupin vcpu='10' cpuset='22'/>
     <vcpupin vcpu='11' cpuset='23'/>
     Cinebench scores: 1081, 1104, 1121

     Paired HT threads on a single processor:
     <vcpupin vcpu='0' cpuset='6'/>
     <vcpupin vcpu='1' cpuset='7'/>
     <vcpupin vcpu='2' cpuset='8'/>
     <vcpupin vcpu='3' cpuset='9'/>
     <vcpupin vcpu='4' cpuset='10'/>
     <vcpupin vcpu='5' cpuset='11'/>
     <vcpupin vcpu='6' cpuset='18'/>
     <vcpupin vcpu='7' cpuset='19'/>
     <vcpupin vcpu='8' cpuset='20'/>
     <vcpupin vcpu='9' cpuset='21'/>
     <vcpupin vcpu='10' cpuset='22'/>
     <vcpupin vcpu='11' cpuset='23'/>
     Cinebench scores: 759, 757, 749

     Topology: in previous tests I've done with OS X, I didn't see any improvement or degradation from specifying the topology to the vm.

     Quite a bit of difference. The unRaid dashboard shows cpu usage at 100% on those cores while both tests run, but the results show about a 45% higher cpu benchmark score when not using HT cores for the vm assignment. I'm not saying the way I do it is right for your setup, but you could at least experiment and run your own tests.

     ALSO: isolate your vm cores from unRaid if you haven't already, and then set the emulator pin to one of the cores you left for unRaid. This makes sure that only the vm uses the cores assigned to it.

     Splashtop/TeamViewer may still use cpu to render even if you have a video card. I don't remember... someone else will have to chime in on that. But OS X screen share will use the gpu, though sadly with no sound...

     There may be some other gains from adjusting Clover settings. I am about to start a new topic on that, to compare notes with others and not clutter this one up so much.

     Thanks for the tips. I'm definitely going to do some playing around with it later tonight. I use my vms for building and developing C++ applications, so most of my build environments are on a spinning 2TB array disk with 2x250GB and 2x256GB cache drives in btrfs raid. My main OS (Win10), which I'm typing from now, has gpu passthrough and is stored directly on the raided cache drive. But I've never isolated any cores for the vms or used an unmounted drive, as my belief was that the vm would perform better with paired threads and a cache drive, or on the raided cache drive itself. The only thing I have done is opted to leave cores 0/6 free for unRaid. 1. Is it possible my other vms could benefit from core isolation as well? Also, you said that most recommend running the vm on an isolated drive which is in an unmounted state. 2. Wouldn't the vm benefit more from having an SSD array drive with raided SSD cache IF that SSD array drive was only being used for vms? Or from what I'm currently doing for performance on my main machine, storing on the raided cache drive?

     Maybe my machines are just weird with OS X, I don't know, but what I do know is that the scores are pretty solid. I use mine as a transcoding cluster for edited/produced video, and can confirm that a vm using non-HT-paired cores transcodes faster than one with HT cores. I haven't done the test with Win10 (I have a vm but don't use it often), but my guess is that Windows, being optimized for virtual environments, will perform better with HT pairs, as common usage dictates.

     1. All vms can benefit from not having to share cores with unRaid services and dockers. If you've used isolcpus in your syslinux.cfg and isolated everything but 0,6, then 0,6 are the only cores unRaid gets; the rest are clear for whatever vms you run. The drives aren't unmounted, they are mounted using the unassigned devices plugin. Essentially, if you put the disk image there, it doesn't have to contend with any other actions/file transfers/etc. that the cache drive may end up doing. (See the emulator pin sketch below.)

     2. An SSD cache will be faster than a spinning cache, which is way faster than a spinning array. If the SSD were in the array, it would still be limited by the speed of the parity drive reads/writes. With a spinning parity drive, that's 30-50ish MB/s writes, and probably less than normal SSD reads? Using turbo write you can hit parity-protected write speeds of 80-125MB/s. I don't know of anyone who has a wholly SSD-only array (someone around here does, I bet), but they'd be the ones to ask about SSD array performance for specific numbers.

     Many people just leave the vms on the cache drive, and it works fine. You also get the benefit of having a redundant copy if one of the paired drives fails (depending on the raid configuration of the cache pool). Otherwise, you have to back it up yourself or use one of the backup programs/scripts (which I don't use; I just move mine manually). But since I have a few heavy Plex/docker app users in the house, it's better for me to move my vms off the cache drive and onto an unassigned disk, where they get the full bandwidth available. (If this is rambling, sorry, I'm on quite a bit of medication for seasonal allergies and things...)
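     To make the emulator pin concrete: assuming cores 0 and 6 are the ones left to unRaid (with isolcpus in syslinux.cfg covering the rest), the vm-side piece is one extra line inside cputune (a sketch, not a full vm definition):

     <cputune>
       <!-- vcpupin lines for the vm's isolated cores go here, as above -->
       <!-- keep qemu's emulator threads on a core unRaid already owns -->
       <emulatorpin cpuset='0'/>
     </cputune>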
  9. Good that you got it running. Want it faster?

     Where is the vm mounted? I see it's on a cache drive, but is that an SSD or a spinning disk? A spinning disk is much more sluggish. Even if it's an SSD cache, some believe that it is better for disk i/o to have the vm on an SSD mounted via unassigned devices. That's how I do it, but for other reasons.

     Also, cpu pinning.... What I've found is that the common idea of using HT cores with OS X vms doesn't give the best cpu performance. For example, I ran the following cinebench test this morning on 12 of my 24 cores (on 2 processors).

     Non-paired threads spanning 2 processors:
     <vcpupin vcpu='0' cpuset='12'/>
     <vcpupin vcpu='1' cpuset='13'/>
     <vcpupin vcpu='2' cpuset='14'/>
     <vcpupin vcpu='3' cpuset='15'/>
     <vcpupin vcpu='4' cpuset='16'/>
     <vcpupin vcpu='5' cpuset='17'/>
     <vcpupin vcpu='6' cpuset='18'/>
     <vcpupin vcpu='7' cpuset='19'/>
     <vcpupin vcpu='8' cpuset='20'/>
     <vcpupin vcpu='9' cpuset='21'/>
     <vcpupin vcpu='10' cpuset='22'/>
     <vcpupin vcpu='11' cpuset='23'/>
     Cinebench scores: 1081, 1104, 1121

     Paired HT threads on a single processor:
     <vcpupin vcpu='0' cpuset='6'/>
     <vcpupin vcpu='1' cpuset='7'/>
     <vcpupin vcpu='2' cpuset='8'/>
     <vcpupin vcpu='3' cpuset='9'/>
     <vcpupin vcpu='4' cpuset='10'/>
     <vcpupin vcpu='5' cpuset='11'/>
     <vcpupin vcpu='6' cpuset='18'/>
     <vcpupin vcpu='7' cpuset='19'/>
     <vcpupin vcpu='8' cpuset='20'/>
     <vcpupin vcpu='9' cpuset='21'/>
     <vcpupin vcpu='10' cpuset='22'/>
     <vcpupin vcpu='11' cpuset='23'/>
     Cinebench scores: 759, 757, 749

     Topology: in previous tests I've done with OS X, I didn't see any improvement or degradation from specifying the topology to the vm.

     Quite a bit of difference. The unRaid dashboard shows cpu usage at 100% on those cores while both tests run, but the results show about a 45% higher cpu benchmark score when not using HT cores for the vm assignment. I'm not saying the way I do it is right for your setup, but you could at least experiment and run your own tests.

     ALSO: isolate your vm cores from unRaid if you haven't already, and then set the emulator pin to one of the cores you left for unRaid. This makes sure that only the vm uses the cores assigned to it.

     Splashtop/TeamViewer may still use cpu to render even if you have a video card. I don't remember... someone else will have to chime in on that. But OS X screen share will use the gpu, though sadly with no sound...

     There may be some other gains from adjusting Clover settings. I am about to start a new topic on that, to compare notes with others and not clutter this one up so much.
  10. I have a couple of 710 cards and a 730. I just feed them straight to OS X with no issues. I'll try to post ROMs soon.
  11. -bump for SAS support!- When my SAS drives are xfs, they spin down, but not when btrfs. Also, temp display? Also funny: disks show as spun down but are still usable....
  12. Thanks for the pointers. I made a bunch of changes: removed xvga from the xml, upgraded to Sierra, changed the smbios setting in Clover to match an older Mac, etc. And IT WORKS!!! Finally. It recognized the card, and without any boot arguments or installing web drivers, it worked. Phew, spent so many hours. Now I need to get audio working ;-) EDIT: Got hdmi audio working thanks to another one of your posts http://lime-technology.com/forum/index.php?topic=51915.msg524900;topicseen#msg524900 :-) Thanks so much

      I have spent tons of time getting mine working, playing with settings, breaking things, redoing things... but that was also mainly because I was learning how unRaid works AND how KVM works at the same time. It was completely worth it, and I've loved the learning experience. I couldn't have done it without many others' contributions and help on here, including gridrunner! Now ----> make a backup copy of your working disk image! Trust me, it saves time vs reinstalling again later. I keep a "base" image to use when creating a new vm, or for when I break mine messing around.
  13. I use 2 GT 710s and a GT 730 (710 variant) with almost no issues... https://lime-technology.com/forum/index.php?topic=54786.msg523314#msg523314 I use hostdev on them all. No Nvidia drivers, no boot args. But that is not your problem. (Sharing is caring, though.) What I use (from a running vm, so it has a few extra lines auto-populated):

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev0'/>
        <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
        </source>
        <alias name='hostdev1'/>
        <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
      </hostdev>

      Try removing "xvga=yes" from your xml; I don't see it in mine, for this card or my GTX 760. Look in the boot log for the GT 710, searching by device id and by its slot assignment, and see if it is showing any errors there, like a bios bug or similar. Also, try it in a different slot. That sometimes helps some folks.
  14. Yes, just change the primary vDisk location to the drive you want it on.
  15. I just played with this, creating 2 vms using 4 cores, and I think I found a possible issue for some. On my machine, I have 16 cores available, 12 of them isolated from unRaid, leaving 4 for server operations and dockers. I thought unRaid would populate the vm's xml to show what the core assignment was, much like it populates the address of a pci device if you leave it blank. Didn't happen. Looking at the dashboard, it shows all the vm activity bound to the 4 cores that unRaid has available to it. Just for kicks, I changed the core count of one vm to 8, and the other to 16. Same result: no activity on the isolated cores, only on the cores available to unRaid. Which I guess makes sense. I did not try to measure any performance differences, since I got distracted trying to find out which cores the vms were using. (I actually never made it through a full boot cycle, investigating this part first.) I'll get to that in a bit...
  16. how are you setting this up? never played with not pinning, but if you point me in the direction, I'm game.
  17. Not sure I would agree. Arguments can be made either way, and it depends on a specific user's needs. If the cache is big enough, it's one less thing to configure, one less drive to buy, and one less port to use. Using it on cache, the vm has to share drive bandwidth with any dockers that are running. I've noticed slight throughput gains using vms on unassigned devices. Nothing to write home about, but I think others have seen more. Is it worth the extra setup (as in, adding another drive, and then clicking a new destination for the image file in the vm setup)? That is up to each user. But if a problem arises on the vm's physical disk and it needs to be replaced, you don't have to stop the array to do so. And IF unRaid allows vms to run without the array being spun up in the future, my suspicion is that they will only run on mounted unassigned devices. But you are 100% right, arguments can be made either way. If you're using a cache pool with a few drives, then the vm is saved across several disks, and if there is a drive failure it is still operable. A benefit for some. Maybe we should do a poll, because my initial post was based on my impressions from the board. Could be interesting; I could be flat wrong!

      On the other hand, I don't actually run any VMs, and if I did, I don't have room for video cards or additional drives in my server.

      lol... they can be a pain. One of my graphics cards blocks 2 slots.
  18. Not sure I would agree. Arguments can be made either way, and it depends on a specific user's needs. If the cache is big enough, it's one less thing to configure, one less drive to buy, and one less port to use. Using it on cache, the vm has to share drive bandwidth with any dockers that are running. I've noticed slight throughput gains using vms on unassigned devices. Nothing to write home about, but I think others have seen more. Is it worth the extra setup (as in, adding another drive, and then clicking a new destination for the image file in the vm setup)? That is up to each user. But if a problem arises on the vm's physical disk and it needs to be replaced, you don't have to stop the array to do so. And IF unRaid allows vms to run without the array being spun up in the future, my suspicion is that they will only run on mounted unassigned devices. But you are 100% right, arguments can be made either way. If you're using a cache pool with a few drives, then the vm is saved across several disks, and if there is a drive failure it is still operable. A benefit for some. Maybe we should do a poll, because my initial post was based on my impressions from the board. Could be interesting; I could be flat wrong!
  19. A vm image on an unassigned SSD is considered best, but a cache drive is still better than the array. If you use auto for the location and have your /domains folder set on the cache drive, then the vm image will be put there (a sketch of pointing a vm at an unassigned disk follows below).
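      For illustration, pointing a vm at an image on an unassigned device looks roughly like this in the vm's XML (a sketch; the /mnt/disks path and image name are hypothetical examples, not defaults):

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <!-- hypothetical path to an image on a disk mounted by the unassigned devices plugin -->
        <source file='/mnt/disks/vm_ssd/MyVM/vdisk1.img'/>
        <target dev='hdc' bus='virtio'/>
      </disk>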
  20. I haven't wanted to install LibreElec until now. Is finding the game roms difficult?
  21. has worked on 3 nvidia cards for me, after I was pointed in the right direction

      This worked almost perfectly for me to get HDMI sound from my Radeon RX460! Thanks!! The only issue I have is that there seems to be a very minor crackle/pop every few seconds in the sound. It's easiest to hear if you listen to something without any variation; I find listening to a tone makes any crackles/pops immediately obvious. Ex: https://www.youtube.com/embed/TxHctJZflh8 Any thoughts on a cure? Later in the thread there's talk of isolating cpus and such. Would this cure a minor crackle/pop in the sound from HDMI?

      It wouldn't hurt, and it will make the vm perform a little better. How many cores does your system have, and how are you assigning them?
  22. downgrade to 6.2.4

      And that solves what exactly? 6.3 uses 6.2's OVMF as far as I know.

      It gets your vm working again, and it eliminates all the other things in the new release that you don't know about that could be causing the issue.