glennv

Everything posted by glennv

  1. Everywhere I read that OSX VMs can handle up to 64 vCPUs, and QEMU itself way above that, but whatever I try I cannot get anything above 32 cores to work. If I assign 40 vcores I have to remove the topology line, otherwise it won't boot at all above 32. But then, once booted, if I start anything serious or even a benchmark tool, it dies immediately with this error: malloc: nano zone abandoned because NCPUS mismatch. If I drop down to 32 all is fine. Any ideas? I am running off a Supermicro board with dual 2697v2 Xeons. This is the max that will work; it boots fast and all works as expected on any version of OSX I have tried, from Sierra up to Mojave:
     --- cut ---
     <vcpu placement='static'>32</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='28'/>
       <vcpupin vcpu='2' cpuset='5'/>
       <vcpupin vcpu='3' cpuset='29'/>
       <vcpupin vcpu='4' cpuset='6'/>
       <vcpupin vcpu='5' cpuset='30'/>
       <vcpupin vcpu='6' cpuset='7'/>
       <vcpupin vcpu='7' cpuset='31'/>
       <vcpupin vcpu='8' cpuset='8'/>
       <vcpupin vcpu='9' cpuset='32'/>
       <vcpupin vcpu='10' cpuset='9'/>
       <vcpupin vcpu='11' cpuset='33'/>
       <vcpupin vcpu='12' cpuset='10'/>
       <vcpupin vcpu='13' cpuset='34'/>
       <vcpupin vcpu='14' cpuset='11'/>
       <vcpupin vcpu='15' cpuset='35'/>
       <vcpupin vcpu='16' cpuset='16'/>
       <vcpupin vcpu='17' cpuset='40'/>
       <vcpupin vcpu='18' cpuset='17'/>
       <vcpupin vcpu='19' cpuset='41'/>
       <vcpupin vcpu='20' cpuset='18'/>
       <vcpupin vcpu='21' cpuset='42'/>
       <vcpupin vcpu='22' cpuset='19'/>
       <vcpupin vcpu='23' cpuset='43'/>
       <vcpupin vcpu='24' cpuset='20'/>
       <vcpupin vcpu='25' cpuset='44'/>
       <vcpupin vcpu='26' cpuset='21'/>
       <vcpupin vcpu='27' cpuset='45'/>
       <vcpupin vcpu='28' cpuset='22'/>
       <vcpupin vcpu='29' cpuset='46'/>
       <vcpupin vcpu='30' cpuset='23'/>
       <vcpupin vcpu='31' cpuset='47'/>
       <emulatorpin cpuset='0-1,24-25'/>
     </cputune>
     <cpu mode='host-passthrough' check='none'/>
     --- end cut ---
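     A minimal diagnostic sketch (not a fix), since the malloc message suggests the guest's CPU count is not what the allocator expects: compare what the macOS guest actually reports against what libvirt was told to provide. The VM name osx-sierra is a placeholder, not from the post above.
       # inside the macOS guest: what the OS thinks it has
       sysctl hw.ncpu hw.physicalcpu hw.logicalcpu hw.activecpu
       # on the Unraid host: what the domain XML actually assigns
       virsh dumpxml osx-sierra | grep -E 'vcpu|topology'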
  2. Seems it also scans for non-running VMs and their vdisks, apparently. I had 2 that were pointing to data on the array, but they were dummy templates and not running, so it was weird and easily overlooked. But never mind. It's not that important, I was just wondering if it was me.
  3. Nope, all my VM, libvirt, docker and appdata files are on a dedicated btrfs UD SSD mirror. So there is no reason for it to spin up the drives.
  4. What's up with that?? Does it do more than just list the VMs, maybe some actual file checks etc.? When I select my VM tab in the morning, it only comes up after 30 seconds or so. When investigating, it seems to spin up my array drives, which are enjoying their sleep. That should not happen, right? Regardless of whether the VMs on the list have anything on array drives or not, and even more so when they are off. Also, in my case I have only 3 active VMs (the rest are off) and these have all their files on a dedicated UD. Running on the latest rc4.
  5. Thank you, thank you, thank you, thank you!!!!!! All my latency issues on my OSX Sierra render VM are gone like the wind (I had huge issues with MIDI controller devices and with driven mouse/screen movements). Also, in pure speed I noticed a few fps increase on some specific DaVinci Resolve performance tests.
  6. PS: I tried some Mac fan control programs but they say no fans are detected on this computer. That's to be expected, as hacks need to go the BIOS route, but some at least display the fan speed. Not in this case.
  7. Thanks for your reply. As I only use Macs (hackintosh and VMs) I am very used to flashing the BIOS of these cards. As there is no overclocking software like on Windows, I tune the BIOS so it is hardcoded and does not rely on any drivers etc. That's the weird thing, as I would have expected the fan curve in the BIOS to be doing its thing regardless of whether the card is running in a physical hackintosh or in a VM hackintosh (exact same OS version and Nvidia drivers, btw). I have the PCI IDs already in there, so the card is not used by Unraid. If I stop the VM, the card fans stop, so it is definitely something in the VM that is overruling the card's BIOS. My original thought was that maybe it is not getting the info, hence my attempt to pass the dumped vBIOS, but that made no difference. Everything else works perfectly and the card performs on par with the same card in the physical hack. It's just the fans that are weird. Maybe it has something to do with the fact that no temp values are available in the VM either. Maybe they are needed, and if they are not available the driver overrules the card's fan curves or something. I am out of ideas.
  8. Flashed a new fan curve into the BIOS where the fans should only come on at 50C >> same thing, fans keep spinning from VM boot (the fan curve works fine in the hackintosh workstation). Passed the romfile to the VM >> same, fans keep spinning from VM boot. I am out of ideas. Anyone????????
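     For reference, this is roughly how a vBIOS romfile is dumped on the host (a sketch only; the PCI address 0000:03:00.0 and the output path are placeholders, and the card must not be bound to a VM or driver while dumping):
       cd /sys/bus/pci/devices/0000:03:00.0
       echo 1 > rom                              # make the ROM readable
       cat rom > /mnt/user/isos/gtx980ti.rom     # copy it out
       echo 0 > rom                              # lock it again
     The dumped file is then referenced from the hostdev section of the VM XML with a <rom file='/mnt/user/isos/gtx980ti.rom'/> element, which is what "passed the romfile to the VM" above refers to.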
  9. I have a GTX 980 Ti passed through to an OSX Sierra VM and everything runs perfectly, with performance as expected. But the fans seem to keep spinning at high speed even when nothing is running. If I stop the VM, the fans stop. With the same card in my hackintosh workstation, same OS, same GPU drivers, the card fans mostly idle unless heavily used. Any ideas?
  10. Yeah, I realised that looking at the logs, but I prefer slow over suddenly NOT working, if you get my point. This sort of thing should never suddenly lead to the whole docker not working; it should just warn, or at least the GUI should keep working with a warning to the user to correct any issues. Reminds me of my Windows days, before I moved to OSX, where on any given day you could come in and find stuff not working due to a busted auto update. I hear it's still the case with Windows 10. My internet business line is so fat (500 MBytes/s) I did not even notice it was slow, as slow was still fast enough. Will switch endpoint at some time in the future the proper way, as you suggested.
  11. In case, like me, you don't want to or have no time to mess with a previously fine working config and want to roll back to the last working image, just use this as your repository address in your docker settings: binhex/arch-delugevpn:1.3.15_14_gb8e5ebe82-1-14. The syntax is basically a colon plus the tag name of the image you want, which you can get from the docker repo website. That way you can go to any version available. edit: not sure how it can suddenly be an endpoint issue, as then it would also not work when I revert to an older version, would it? Will try it next week when I have more patience, as I am happy with the previous version.
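     As a command-line illustration of the same pinning (a sketch; the Unraid GUI does the pull for you once the tag is in the Repository field):
       # pull the exact tagged build instead of whatever "latest" currently points at
       docker pull binhex/arch-delugevpn:1.3.15_14_gb8e5ebe82-1-14
       # list the locally available versions of the image
       docker images binhex/arch-delugevpn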
  12. I have the same problem. Stuck in some retry loop. How do I revert to an older docker image to get this working again? edit: figured it out and reverted to the previous image, which works fine.
  13. You can get Plex to work again by pointing it to /mnt/cache/appdata instead of /mnt/user/appdata, assuming you have appdata on the cache. The same is true for a few other dockers as well, like Sonarr for example.
  14. 64 GB, mainly for a huge, memory-hungry OSX VM render node.
  15. Did you ever get this to work? I have the same issue. I was thinking that if the GPU is passed through it should not matter for the sensors whether it is in a VM or not, but apparently it is different. For my classic hackintosh it works fine, but I can't get this VM to show up. GPU temp is what I need, and ideally also frequency (to detect throttling), but temp is most important to see if my cooling works etc.
  16. I had that when I switched on direct I/O. To solve that specific case I needed to use direct assignments of /mnt/cache/appdata instead of /mnt/user/appdata in the docker containers for it to work again. Not all dockers were affected, but Deluge and Plex were.
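     As an illustration, the change is only in the host-path side of the container's /config mapping (a sketch with a hypothetical Plex run command; the exact image and container path depend on the template you use):
       # before: -v /mnt/user/appdata/plex:/config  (goes through the user share layer)
       # after:  -v /mnt/cache/appdata/plex:/config (points straight at the cache pool)
       docker run -d --name=plex --net=host \
         -v /mnt/cache/appdata/plex:/config \
         plexinc/pms-docker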
  17. With parity that seems normal, yes. Just did a test on mine, with all array (5) and parity (1) drives being 6TB IronWolfs, and I am getting 121 MB/s, so close enough.
  18. Sounds slow if you have no parity. My IronWolf 6TB drives do 150 MB/s easily and consistently.
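     A quick way to sanity-check the raw sequential speed of a single drive, outside of any parity or share overhead (a sketch; /dev/sdX is a placeholder for the disk in question, and both reads are non-destructive):
       # buffered and cached read timings straight from the device
       hdparm -tT /dev/sdX
       # or read the first 4 GB directly, bypassing the page cache
       dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct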
  19. Good, you have a starting point now. You don't set these in Unraid. You have to dig into the BIOS of your specific motherboard and make sure all the proper CPU frequency scaling states, including turbo, are activated. Then you can check with the provided commands on the Unraid command line whether it is working. Typically these things are under Advanced CPU Configuration, Advanced Power Management, etc. For Supermicro boards I found the following somewhere, but you have to check for your HP board. At least you have something to compare and play with.
      -------------- cut from Supermicro support -------
      Question: How do I enable Turbo mode to get the maximum Turbo mode speed on my X10DRi motherboard?
      Answer: Please make sure the following settings are correct:
      1. Please make sure all cores are enabled: Advanced >> CPU Configuration >> Core Enabled >> "0" to enable all cores.
      2. Under the BIOS setup go to Advanced >> CPU Configuration >> Advanced Power Management and make sure the settings are as follows:
         Power Technology >> Custom
         Energy Performance Tuning >> disable
         Energy Performance BIAS Setting >> performance
         Energy Efficient Turbo >> disable
      3. Then go to Advanced >> CPU Configuration >> Advanced Power Management >> CPU P State Control and make sure the settings are as follows:
         EIST (P-States) >> enable
         Turbo Mode >> enable
         P-state Coordination >> HW_ALL
      4. Then go to Advanced >> CPU Configuration >> Advanced Power Management >> CPU C State Control and make sure the settings are as follows:
         Package C-State Limit >> C0/C1 state
         CPU C3 Report >> disable
         CPU C6 Report >> enable
         Enhanced Halt State >> disable
      -----------
  20. The only thing I can think of (I am totally confused by all your testing results on what is what) is that the CPUs are not turboing. Check the following on the Unraid server to display the current frequencies, and check during the tests whether they reach max turbo speed or are stuck at stock speed:
         watch grep MHz /proc/cpuinfo
      To check which scaling driver is active on your cores (it should be intel_pstate):
         cat /sys/devices/system/cpu/cpufreq/policy*/scaling_driver
      And then check this one, the important one, which should be zero:
         cat /sys/devices/system/cpu/intel_pstate/no_turbo
      Some example output values from my system for the CPU pstate/turbo stuff:
         # cat /sys/devices/system/cpu/intel_pstate/turbo_pct
         36
         # cat /sys/devices/system/cpu/intel_pstate/max_perf_pct
         100
         # cat /sys/devices/system/cpu/intel_pstate/min_perf_pct
         35
         # cat /sys/devices/system/cpu/intel_pstate/num_pstates
         23
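      If no_turbo turns out to be 1, it can usually be flipped at runtime as a quick test (a sketch; whether it sticks across reboots depends on the BIOS settings above):
         # allow the intel_pstate driver to use turbo frequencies
         echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo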
  21. Htop starts counting CPU cores from 1, as seen in the screenshot, while Unraid's assignment starts from zero. It got me a few times as well, so I always check core utilisation on the Unraid dashboard instead.
  22. Thanks for confirming. Yeah, pretty crap. I did find a virtio-net driver on GitHub, but it was even slower. So I guess I need either a 10Gb switch (ouch) or, cheaper, just another 10Gb card. It is what it is......
  23. Well, I got my 10Gb card in, and even with the 10G card bridged to the VM, the vmxnet3 bridge in OSX only works at 1Gb speeds. That is crap. If I physically pass the port through to the VM and connect the 2 ports of the card together, with one for Unraid and one for the VM talking to each other, I get proper 10G speeds on the network and 400 MB/s read / about 250 MB/s write from my btrfs 2x SSD cache. But I need it as a bridge so I can access Unraid local shares the normal way and use the other port in the VM for a VM-to-workstation connection. So no matter whether the bridge is backed by a real 1Gb or 10Gb card, it is stuck at 1Gb speeds. Is there ANYONE who got a virtual bridged 10Gb connection up and running in OSX who could help me here?
  24. I can't seem to get this to work as advertised. In my OSX Sierra VM the speed seems limited to the actual network interface(s), not reported as a 10G link as mentioned. I am using the bridge device in the VM (with vmxnet3). I have 2x1Gb bonded on Unraid, available as br0, and I use that in the VM as the network adapter. Speeds are consistently around 200 MB/s if I copy data to a share on the dual Samsung SSD btrfs cache pool. I am copying a 100GB file to bypass the effect of the initial caching to memory, but even that goes at a steady 200 MB/s. What could possibly be causing this? Or does this only work on Windows/Linux VMs?? Internal file copies on the Unraid server run at proper SSD speeds minus btrfs overhead.
      edit: correction, I see the same slow speed when writing at the Unraid level, so it must be something weird with my btrfs raid 1. Will keep experimenting.....
      edit2: switching the array write mode to direct I/O solved my local copy speeds (before, writing to /mnt/user was about 50% of the speed of writing to /mnt/cache; now I get similar speeds). Writing from the VM to this share still suggests it is using the network and not some local bridging mechanism.
      edit3: seems to be a Mac thing, as from a Windows VM I get 10Gb speeds (initially, due to memory caching) saving to this same share. The Mac VM is always stuck at about 200 MB/s on the same share. Set the network speed manually to 10G instead of auto (which reports 1G), but no difference so far. Any tips welcome...
      edit4: I think my troubleshooting narrows down to the VMXNET3 driver/adapter showing only 1Gb, which according to all I have read should show 10Gb. I tested 3 OSX VMs, El Capitan, Sierra and High Sierra, all with the vmxnet3 driver, and all have the same problem, so it is not specific to one OSX release at least. Googling shows some mentions of this without much detail. Maybe Spaceinvader One has an idea.
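      One way to separate raw bridge throughput from disk and share overhead is a plain iperf3 test between guest and host (a sketch; it assumes iperf3 is installed on both ends, e.g. via a plugin on Unraid and Homebrew in the macOS guest, and 192.168.1.10 stands in for the Unraid IP):
         # on the Unraid host: start a listener
         iperf3 -s
         # inside the macOS VM: push traffic over the bridged interface for 30 seconds
         iperf3 -c 192.168.1.10 -t 30
         # reverse direction (host to guest) without swapping roles
         iperf3 -c 192.168.1.10 -t 30 -R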