fonts Posted January 9, 2017

So I'm currently playing with dynamic CPU allocation, i.e. not pinning processors to a specific VM. I currently have 1 VM running in a dynamic config, which is being used to download stuff (Windows) - 1 socket, 2 cores, 2 threads (4 vCPUs). I have another 4 VMs which I am going to convert over to dynamic allocation to see how it goes, but at the moment I can't see much of a performance decrease, and this way I can allocate as many processors as I like to each guest. Bear in mind these guests are not doing anything CPU intensive - my processor rarely goes over 6% usage (Xeon E5-2620), and I have 5 Dockers running at the same time, including Plex! Has anyone else played around with this? If so, what are your thoughts?
1812 Posted January 10, 2017

How are you setting this up? I've never played with not pinning, but if you point me in the right direction, I'm game.
machineshake123 Posted January 11, 2017

I am also very interested in this. Please share your method, thanks.
fonts Posted January 13, 2017 (Author)

Hi guys, really sorry for the delay. OK, so here we go...

A while ago I found a video on how to manage unRAID VMs with virt-manager (bottom of post). I watched the video and made the change, but hadn't touched it since. I finally connected to unRAID with Virtual Machine Manager and noticed that it did not have any settings to pin processors to guests. So I thought, OK, this is interesting...

I changed the CPU configuration model type to Hypervisor Default and set the processor count to 4. I then went back into unRAID, edited the XML for that guest, and removed the processor pinning:

<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
</cputune>

The reason you have to remove this section from the XML is that Virtual Machine Manager will not remove anything from your XML config - it will only add or modify things. That left me with the following processor section in my config:

<vcpu placement='static'>4</vcpu>

This gives me 4 virtual processors, but because of a limitation with Windows 7 I had to do some more magic, which was to split the processor up into threads and cores. This can also be done in virt-manager by selecting "Manually set CPU topology".

That's it. Now if you go back to unRAID and edit the guest, you will notice that no CPUs are ticked for pinning, so your VM's processor is no longer fixed to one core/thread - like a proper hypervisor such as ESX.

I have since changed all my guests to be dynamic. I haven't had any issues, so I'll continue to set up guests like this.

Please note: if you make any changes in unRAID outside of the edit-XML section, you will probably lose the changes above, and unRAID will probably try to make you pin a processor again.

Hope this helps. If anyone has any questions, let me know - I'm happy to help.

How to connect virt-manager to unRAID to help manage your VMs -
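To make the before/after concrete, here is a minimal sketch of the relevant parts of a libvirt domain XML - the pinned layout unRAID generates versus the dynamic layout left after removing <cputune>. The element names are standard libvirt; the 4-vCPU count and the 1-socket/2-core/2-thread topology are just example values matching this post, not anyone's exact config:

```xml
<!-- Pinned (unRAID default): each vCPU bound to one host core -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>

<!-- Dynamic: drop <cputune> entirely and let the host scheduler
     place vCPU threads. The topology element is the Windows 7
     workaround described above (Win7 caps visible sockets). -->
<vcpu placement='static'>4</vcpu>
<cpu>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>
```

With no <vcpupin> entries, the host kernel is free to schedule the 4 vCPU threads on any core it can see.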
1812 Posted January 13, 2017

I just played with this, creating 2 VMs using 4 cores each, and I think I found a possible issue for some. On my machine, I have 16 cores available, 12 of them isolated from unRAID, leaving 4 for server operations and Dockers. I thought unRAID would populate the XML for the VM showing you what the core assignment was, much like it populates the address of a PCI device if you leave it blank. Didn't happen. Looking at the dashboard, it shows all the VM activity being bound to the 4 cores that unRAID has available to it. Just for kicks, I changed the core count of one VM to 8, and the other to 16. Same result: no activity on the isolated cores, only on the cores available to unRAID. Which I guess makes sense. I did not try to measure any performance differences, since I got distracted trying to find out which cores the VMs were using. (I actually never made it through a full boot cycle - I investigated this part first.) I'll get to that in a bit...
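One way to check offline whether a guest still has pinning in its definition is simply to look for <vcpupin> entries in the domain XML. This isn't something from the thread - just a small Python sketch against a sample (made-up) domain definition, using only the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down libvirt domain XML with two pinned vCPUs.
DOMAIN_XML = """<domain>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
</domain>"""

def pinned_vcpus(xml_text):
    """Return a dict mapping vcpu id -> host cpuset for every <vcpupin>.

    An empty dict means no pinning: the guest is 'dynamic' in the
    sense used in this thread.
    """
    root = ET.fromstring(xml_text)
    return {p.get('vcpu'): p.get('cpuset')
            for p in root.findall('./cputune/vcpupin')}

print(pinned_vcpus(DOMAIN_XML))  # {'0': '0', '1': '1'}
```

On a live unRAID box you would feed this the output of the guest's XML view instead of a hard-coded string; the parsing logic is the same.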
fonts Posted January 13, 2017 (Author)

(replying to 1812's post above)

Yeah, you will need to remove any isolation so that unRAID has access to all the cores.
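For reference, the isolation 1812 describes is typically set with the isolcpus kernel parameter in unRAID's syslinux configuration. A sketch of the change, with hypothetical core numbers matching his 16-core/12-isolated layout - not an exact copy of anyone's config file:

```text
# /boot/syslinux/syslinux.cfg (append line) - cores 4-15 isolated:
append isolcpus=4-15 initrd=/bzroot

# To undo the isolation, remove the isolcpus= parameter and reboot:
append initrd=/bzroot
```

Once isolcpus is gone, the host scheduler (and therefore any unpinned VM) can use all cores again.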
fonts Posted January 13, 2017 (Author)

Thought I'd share this as well: http://take.ms/aiDY4

This is a screenshot of my processor's load from the past week. I wanted to point out that none of my cores have gone above 50.7% usage, and I am currently running 6 guests and 4 Dockers (including Plex).

I am also playing around with dynamic memory on one of the Windows guests. It seems to be working OK, but it allocates the max memory rather than starting at the min and bursting up (maybe this is a Windows 7 bug - I will do some more testing with this).
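For context on the dynamic memory experiment: the usual libvirt mechanism for this is the memory balloon, where <memory> sets the ceiling and <currentMemory> the actual allocation the guest can be ballooned between. A hedged sketch - the element names are standard libvirt, the sizes are examples, not the actual config from this post:

```xml
<memory unit='GiB'>8</memory>               <!-- maximum the guest may use -->
<currentMemory unit='GiB'>4</currentMemory> <!-- initial / ballooned-down size -->
<devices>
  <memballoon model='virtio'/>  <!-- guest needs the virtio balloon driver -->
</devices>
```

If the guest OS lacks the virtio balloon driver (common on a stock Windows 7 install), it will simply sit at the maximum allocation, which might explain the behaviour described above.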
DoeBoye Posted January 13, 2017

This sounds really cool! I'd love to hear some official input from Limetech as well! This would be the perfect solution for Windows installs that need more oomph for the rare task. If this turns out to be doable, a simple checkbox in the GUI to enable dynamic CPU allocation would be amazing!
machineshake123 Posted January 13, 2017

This is great, but could someone please try the latency monitor to see what the latency is like in the VMs? I imagine the latency will be much higher, since the cores are now dynamic and probably shared.
fonts Posted January 13, 2017 (Author)

(replying to machineshake123's post above)

This test was run while I was downloading; guest CPU was sitting around 7%.

Dynamic - http://take.ms/xlgfA
Pinned - http://take.ms/W4BCN

Now, I'm no expert on latency readings, but there isn't much difference between the two as far as I can see... am I wrong?
der8ertl Posted December 23, 2020

I know this is an old topic, but has anyone gotten into this more? Any further experiences? Doesn't VMware ESX also assign CPUs dynamically? Thanks in advance! d8