Everything posted by mrpj

  1. Ok, yes, I understand now. So assigning 8 cores but being able to scale those back when another VM starts. That's a very interesting idea and I didn't know that was possible. It would certainly be very useful. I had a similar idea a while back about an isolcpus capability post-boot, to be able to isolate and release CPUs to the host without a reboot. Having spoken to Limetech, this is something they have been investigating, but it isn't in Linux's capability set and as such would only be possible by manually manipulating some things in user space, so it isn't something they are currently working on. So yes, any info on how to implement what you are talking about here would be most welcome.

     I finally found some time to implement this for myself using Python: https://gist.github.com/patrickjahns/cfa90a39883206e18fdaccfd9d2809f0

     It can either be used via the command line or imported into other scripts. The class is provided with different profiles (defined as JSON) and allows switching between them. When switching the profile, the previous configuration is saved to a JSON file and can easily be restored.

     Examples:

        1) python vcpu.py --vcpumap vcpumap.json --profile default --ignored_domains ['pandora']
        2) python vcpu.py --vcpumap vcpumap.json --restore --ignored_domains ['pandora']

     In example 1) the profile "default" is applied to all VMs except the VM pandora. In example 2) the previous configuration is restored.

     vcpumap.json:

        {
          "profilename": {
            "default": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
            "vm1": {"all": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]},
            "vm2": {
              "0": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
              "1": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
            }
          }
        }

     The JSON allows for creating profiles. Each profile has a "default" pinning for all vCPUs. It can also contain per-VM entries (vm1, vm2); with these you define either a pinning for all ("all") vCPUs, or per vCPU (see vm2). This should be quite flexible and help anyone else with doing this.

     My setup: several VMs + 1 "GamestreamingVm" (uses NVIDIA game streaming). I use a Flask application providing a URL (http://ip/switchprofile/profile) to switch between defined profiles. Whenever my Shield (or Moonlight) connects, an AutoHotkey script on the gaming VM detects that "nvstreamer.exe" is running and calls http://ip/switchprofile/gaming, which loads the gaming profile. The script also recognizes disconnects ("nvstreamer.exe is not running") and calls http://ip/restoreprofile to restore the previous configuration.
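     A minimal sketch of such a Flask wrapper, in case anyone wants to wire this up the same way: it simply shells out to the vcpu.py CLI shown above. It is not the exact app described in the post, and the script path, vcpumap path and port are placeholders.

        # Minimal Flask wrapper around the vcpu.py CLI shown above.
        # VCPU_SCRIPT, VCPUMAP and the port are placeholder values.
        import subprocess
        from flask import Flask

        app = Flask(__name__)
        VCPU_SCRIPT = "/opt/vcpu/vcpu.py"      # placeholder path to the gist script
        VCPUMAP = "/opt/vcpu/vcpumap.json"     # placeholder profile definitions

        @app.route("/switchprofile/<profile>")
        def switch_profile(profile):
            # Apply the requested profile; vcpu.py saves the previous pinning itself.
            subprocess.run(["python", VCPU_SCRIPT, "--vcpumap", VCPUMAP,
                            "--profile", profile], check=True)
            return "switched to %s\n" % profile

        @app.route("/restoreprofile")
        def restore_profile():
            # Restore the pinning that was saved before the last switch.
            subprocess.run(["python", VCPU_SCRIPT, "--vcpumap", VCPUMAP,
                            "--restore"], check=True)
            return "restored\n"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)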
  2. Thanks for your thoughts. Actually, the Xeon E5-2670 has quite a number of cores, and the power should be plenty for using it both for work (GitLab + CI / staging area / low-traffic websites) and leisure (media server/Plex + gaming).

     Thanks for explaining your setup - it is however not what I am looking to achieve. In your setup you change the vCPUs, i.e. the virtual cores, of your VM. I don't want to change the core count of my VMs - I want to change which physical cores the vCPUs can utilize.

     Example setup: assume VM1 (2 vCPUs), VM2 (4 vCPUs), GamingVM (4 vCPUs) on an E5-2670 with 8 physical cores (I omit the HT cores for this example).

     When GamingVM is OFF: VM1 and VM2 are free to use any of the 8 physical cores for their work.
     When GamingVM is ON: VM1 and VM2 are only allowed to use physical cores 1-4; GamingVM is only allowed to use physical cores 5-8.

     There is no reason for having vCPU count == physical core count. You can (and should) overcommit resources to increase overall utilization.

     I'd like to avoid (re-)starting VMs. The reason, basically, is that some services should ideally run without downtime (web services, GitLab ...). If the "gaming VM" resources are not in use, I'd like to utilize them for CI slaves or other work-related tasks.

     As far as I could read, libvirt (and the virsh command line tool) allows changing NUMA values (in this case vcpupin) while the VM is live; see the virsh documentation. There are some limitations to changing live parameters: for example the vCPU count can be increased, but only up to a previously specified maximum - so essentially I could add/remove vCPUs - but I am not looking into changing the number of virtual cores/virtual CPUs. I'd rather scale horizontally (more VMs) than vertically (more vCPUs per VM).

     So the important part for me is to adjust CPU pinning/affinity (the physical cores that vCPUs are allowed to use) on live VMs when certain conditions are met, and not to change the number of vCPUs (or virtual cores). It would be great if there is already a tool to automate this process. Maybe someone else has already attempted something similar, or already has a working setup for this scenario? I guess this use case would also be something others here might be interested in.
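     To illustrate the live repinning mentioned above, here is a minimal snippet using the libvirt Python bindings, roughly equivalent to "virsh vcpupin <domain> <vcpu> <cpulist> --live". The domain name "vm1" and the CPU mask are example values only.

        # Change the physical-CPU affinity of one vCPU of a running domain.
        # "vm1" and the mask are example values; a 16-thread host is assumed.
        import libvirt

        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByName("vm1")

        # Allow vCPU 0 of vm1 to run only on host CPUs 0-3.
        cpumap = tuple(i < 4 for i in range(16))
        dom.pinVcpuFlags(0, cpumap, libvirt.VIR_DOMAIN_AFFECT_LIVE)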
  3. Hey, I just registered since the forum here seems to be the most active when it comes to discussing KVM and CPU pinning. My setup consists of an E5-2670 with 32 GB RAM, which I will be using as a KVM host with several VMs and LXC containers. Additionally, I plan on creating a VM utilizing GPU (PCI) passthrough for game streaming.

     With this setup I'd like to allocate resources more dynamically. By that I mean that, depending on whether the "game VM" is active or not, I want to change how the real CPU cores are allocated to the vCPUs in the setup.

     Example:
     Game VM OFF: every running VM has its own CPU pinning spread across cores 1-8 (1-16 with HT).
     Game VM ON: the game VM has exclusive right to cores 5-8; all other VMs are only allowed to use cores 1-4.

     Did anyone ever attempt something similar? Do you know if there is any tool for this (besides using libvirt / virsh)? I have two concepts in mind:

     1) Define two working sets / XML files per VM (one for "game VM on", one for "game VM off") and switch between them depending on the state.
     2) Have a tool read the current (CPU) configuration of the VMs before the "game VM" is started, limit the running VMs to cores 1-4, then start the game VM; after the game VM stops, restore the initial configuration.

     Any thoughts or ideas on this? How do you manage your CPU pinning with running VMs?
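     A rough sketch of concept 2) using the libvirt Python bindings, for anyone who wants to experiment with it: the domain name, backup file and CPU mask below are placeholders, and the core-to-thread numbering depends on the host topology.

        # Concept 2) sketch: snapshot the live pinning of all running VMs,
        # restrict them to the first four cores, and restore the snapshot later.
        # GAME_VM, BACKUP_FILE and RESTRICTED_MASK are placeholder values.
        import json
        import libvirt

        GAME_VM = "gamevm"                     # placeholder name of the game VM
        BACKUP_FILE = "/tmp/vcpu_backup.json"  # placeholder backup location
        # Cores 1-4 plus their HT siblings, assuming a 16-thread host where
        # threads 8-15 are the hyperthread siblings of cores 0-7.
        RESTRICTED_MASK = tuple(i in (0, 1, 2, 3, 8, 9, 10, 11) for i in range(16))

        def snapshot_and_restrict(conn):
            """Save the current pinning of all running VMs, then restrict them."""
            backup = {}
            for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
                if dom.name() == GAME_VM:
                    continue
                pins = dom.vcpuPinInfo(0)  # one boolean cpumap per vCPU
                backup[dom.name()] = [list(cpumap) for cpumap in pins]
                for vcpu in range(len(pins)):
                    dom.pinVcpuFlags(vcpu, RESTRICTED_MASK,
                                     libvirt.VIR_DOMAIN_AFFECT_LIVE)
            with open(BACKUP_FILE, "w") as fh:
                json.dump(backup, fh)

        def restore(conn):
            """Re-apply the pinning saved before the game VM was started."""
            with open(BACKUP_FILE) as fh:
                backup = json.load(fh)
            for name, pins in backup.items():
                dom = conn.lookupByName(name)
                if not dom.isActive():
                    continue
                for vcpu, cpumap in enumerate(pins):
                    dom.pinVcpuFlags(vcpu, tuple(cpumap),
                                     libvirt.VIR_DOMAIN_AFFECT_LIVE)

        if __name__ == "__main__":
            conn = libvirt.open("qemu:///system")
            snapshot_and_restrict(conn)  # run this before starting the game VM
            # ... start the game VM, play, shut it down ...
            restore(conn)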