mrpj

Members
  • Posts: 8
  • Joined
  • Last visited

Converted
  • Gender: Undisclosed

mrpj's Achievements
  • Noob (1/14)
  • Reputation: 0

  1. Ok, yes, I understand now. So assigning 8 cores but being able to scale those back when another VM starts. That's a very interesting idea and I didn't know that was possible. It would certainly be very useful. I had a similar idea a while back about an isolcpus capability post-boot, to be able to isolate and release CPUs to the host without a reboot. Having spoken to Limetech, this is something they have been investigating, but it isn't in Linux's capability set and as such would only be possible by manually manipulating some things in user space; it isn't something they are currently working on. So yes, any info on how to implement what you are talking about here would be most welcome.

    I finally found some time to implement this for myself using Python: https://gist.github.com/patrickjahns/cfa90a39883206e18fdaccfd9d2809f0

    It can either be used via the command line or imported into other scripts. The class is provided with different profiles (defined as JSON) and allows for switching between them. When switching the profile, the previous configuration is saved to a JSON file and can easily be restored.

    Examples:

       1) python vcpu.py --vcpumap vcpumap.json --profile default --ignored_domains ['pandora']
       2) python vcpu.py --vcpumap vcpumap.json --restore --ignored_domains ['pandora']

    In example 1) the profile "default" is applied to all VMs except the VM pandora. In example 2) the previous configuration is restored.

    vcpumap.json:

       {
         "profilename": {
           "default": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
           "vm1": {"all": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]},
           "vm2": {
             "0": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
             "1": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
           }
         }
       }

    The JSON allows for creating profiles; each profile has a "default" configuration for all vCPUs. It can also contain per-VM entries (vm1, vm2). With these you define either one pinning for all vCPUs ("all") or one per vCPU (see vm2). This should be quite flexible and help anyone else with doing this.

    My setup: several VMs + 1 "GamestreamingVM" (uses Nvidia gamestreaming). I use a Flask application providing a URL (http://ip/switchprofile/profile) to switch between the defined profiles. Whenever my Shield (or Moonlight) connects, an AutoHotkey script on the gaming VM detects that "nvstreamer.exe" is running and calls http://ip/switchprofile/gaming, which loads the gaming profile. The script also recognizes disconnects ("nvstreamer.exe" is not running) and calls http://ip/restoreprofile to restore the previous configuration.
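    For reference, the core libvirt mechanism behind this is small. A minimal sketch with libvirt-python (assumed installed; apply_profile and the flat one-map-per-domain profile layout are illustrative simplifications, not the gist's actual API):

       # Repin every running domain according to a profile dict whose values
       # are cpumaps: one boolean per host CPU. Assumes libvirt-python.
       import libvirt

       def apply_profile(profile, ignored=()):
           conn = libvirt.open('qemu:///system')
           try:
               for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
                   if dom.name() in ignored:
                       continue
                   cpumap = tuple(bool(b) for b in profile.get(dom.name(), profile['default']))
                   # pinVcpuFlags() with AFFECT_LIVE repins the running domain,
                   # one call per vCPU; vcpuPinInfo() tells us how many there are.
                   for vcpu in range(len(dom.vcpuPinInfo(libvirt.VIR_DOMAIN_AFFECT_LIVE))):
                       dom.pinVcpuFlags(vcpu, cpumap, libvirt.VIR_DOMAIN_AFFECT_LIVE)
           finally:
               conn.close()

       apply_profile({'default': [1, 1, 1, 1, 0, 0, 0, 0]}, ignored=('pandora',))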
  2. @bonkers I feel your pain - it took me 5 days of trying without a second card before I finally gave up and ordered another card, just to try. Two things to try for extracting:
    - blacklist all drivers (nvidia, nouveau etc.)
    - disable the framebuffer for all cards, so they definitely don't get initialized (add "video=efifb:off" to the kernel command line in grub)
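    A sketch of what those two changes might look like (the file paths assume a Debian-style layout, and the blacklist file name is arbitrary):

       # /etc/modprobe.d/blacklist-gpu.conf
       blacklist nouveau
       blacklist nvidia

       # /etc/default/grub (then run update-grub and reboot)
       GRUB_CMDLINE_LINUX_DEFAULT="quiet video=efifb:off"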
  3. virt-manager

    Thank you very much - I got stuck with the dbus error and couldn't find a solution while searching. Now it's working over here :-) One less VM to keep lying around on my laptop for something that simple. virt-manager does not replace virsh, but it makes adding/removing certain device parts easier - way easier.
  4. virt-manager

    I checked out the Win10 upgrade and also got bash running - installed virt-manager via apt-get install virt-manager. But whenever I try to run it, it never shows any window; it just returns to bash after 1-3 seconds. Mind sharing some more details on your setup? It would be great for me, since I wouldn't need to start an Ubuntu VM from my laptop for more in-depth administration. Did you need to install certain packages? An X server? Maybe you have a link to more information on this matter?
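    For anyone hitting the same wall: virt-manager is a graphical GTK application, so under Windows bash it needs an X server running on the Windows side before any window can appear. A sketch of that workaround (VcXsrv and Xming are examples; the DISPLAY value assumes a local X server on display 0, and user/kvmhost are placeholders):

       # In the bash (WSL) shell, with an X server such as VcXsrv or Xming
       # started on the Windows side:
       sudo apt-get install virt-manager
       export DISPLAY=localhost:0                      # point GTK at the Windows X server
       virt-manager -c qemu+ssh://user@kvmhost/system  # connect to the remote KVM host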
  5. virt-manager

    Can you elaborate a bit more? Are you running virt-manager "native on win10"? Which binaries did you use for this? virt-manager is the most feature-complete GUI for libvirt/qemu/kvm that there is; only the command line utility virsh offers more functionality. It makes creating and managing virtual machines far easier - but I only use it when I don't want to edit the XML directly, or when I don't know (yet) how to properly edit the XML.
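    For comparison, the virsh side of that XML workflow (the domain name vm1 is an example):

       virsh dumpxml vm1        # print the current domain XML
       virsh edit vm1           # open the XML in $EDITOR, validated on save
       virsh define vm1.xml     # load a hand-edited definition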
  6. Just reporting another success story with an Nvidia GeForce 1060 (Palit 1060 Dual). Initially I had a lot of trouble and could not get any picture on my monitor. After reading a lot on this forum and checking the VFIO reddit group and the vfio-users mailing list (which provided valuable resources and a lot of insight), I managed to get my setup working.

    Initially the only graphics card in my system was the GeForce 1060 - but no matter what I tried, I could not get a picture on my screen. Last week I ordered another graphics card (a Radeon HD 2600 Pro, for cheap) and put it into the machine, and just like magic the virtual machine would boot with a picture. At first I didn't bother to try again with just a single card, but today I followed the instructions in here one more time and dumped the VGA BIOS with the Radeon still attached. I removed the Radeon card, added the freshly dumped graphics card BIOS to the XML and tried one more time. This time, with only the Nvidia card in the PC, it works.

    @locutus2000 Try dumping the BIOS while another card is in the PC and the Nvidia card is not used (for console or anything else). If you don't have enough slots, try dumping the BIOS on a different machine.
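    The dump-and-attach steps look roughly like this (a sketch: the PCI address and file path are examples - take your card's address from lspci, and the card must be idle while dumping):

       # Dump the VBIOS via sysfs
       echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
       cat /sys/bus/pci/devices/0000:01:00.0/rom > /var/lib/libvirt/vbios/gtx1060.rom
       echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom

       <!-- then reference it in the domain XML, inside the GPU's <hostdev> element -->
       <rom file='/var/lib/libvirt/vbios/gtx1060.rom'/>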
  7. Thanks for your thoughts. Actually the Xeon E5-2670 has quite a number of cores, and the power should be plenty for using it both for work (GitLab + CI / staging area / low-traffic websites) and leisure (media server/Plex + gaming).

    Thanks for explaining your setup - it is however not what I am looking to achieve. In your setup you change the vCPUs, or virtual cores, of your VM. I don't want to change the core count of my VMs - I want to change which physical cores the vCPUs can utilize.

    Example setup: assuming VM1 (2 vCPUs), VM2 (4 vCPUs), GamingVM (4 vCPUs) and an E5-2670 with 8 physical cores (I omit the HT cores for this example):

    When GamingVM is OFF: VM1 and VM2 are free to use any of the 8 physical cores for their work.
    When GamingVM is ON: VM1 and VM2 are only allowed to use physical cores 1-4; GamingVM is only allowed to use physical cores 5-8.

    There is no reason to have vCPU count == physical core count. You can (and should) overcommit resources to increase overall utilization.

    I'd like to avoid (re-)starting VMs. The reason, basically, is that some services should ideally run without downtime (web services, GitLab ...). When the "gaming vm" resources are not in use, I'd like to utilize them for CI slaves or other work-related tasks.

    As far as I could read, libvirt (and the virsh command line tool) allows for changing NUMA values (in this case vcpupin) while the VM is live - see the virsh documentation and the sketch below. There are some limitations to changing live parameters: for example, the vCPU count can be increased, but only up to a previously specified maximum. So essentially I could add/remove vCPUs - but I am not looking into changing the number of virtual cores/virtual CPUs; I'd rather scale horizontally (more VMs) than vertically (more vCPUs per VM). So the important part for me is to adjust the CPU pinning/affinity (the physical cores that the vCPUs are allowed to use) on live VMs when certain conditions are met, and not to change the number of vCPUs (or virtual cores).

    It would be great if there is already a tool to automate this process. Maybe someone else has already attempted something similar or has a working setup for this scenario? I guess this use case might also be something others here are interested in?
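    To make the live-pinning part concrete, a virsh sketch of the "GamingVM is ON" state above (domain names mirror the example; note that host cores are 0-indexed, so cores 1-4 / 5-8 become 0-3 / 4-7):

       # Restrict VM1 and VM2 to host cores 0-3 and give GamingVM cores 4-7,
       # all without restarting anything (--live changes the running domain)
       for v in 0 1; do virsh vcpupin VM1 $v 0-3 --live; done
       for v in 0 1 2 3; do virsh vcpupin VM2 $v 0-3 --live; done
       for v in 0 1 2 3; do virsh vcpupin GamingVM $v 4-7 --live; done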
  8. Hey, I just registered since this forum seems to be the most active one discussing KVM and CPU pinning. My setup consists of an E5-2670 with 32G RAM, which I will be using as a KVM host with several VMs and LXC containers. Additionally I plan on creating a VM utilizing GPU (PCI) passthrough for gamestreaming.

    With this setup I'd like to allocate resources more dynamically. By that I mean that, depending on whether the "game vm" is active or not, I want to allocate the real CPU cores to the vCPUs in the setup.

    Example:
    Game VM OFF: every running VM has its own CPU pinning spread across cores 1-8 (1-16 with HT)
    Game VM ON: the game VM has exclusive rights to cores 5-8; all other VMs are only allowed to use cores 1-4

    Did anyone ever attempt something similar? Do you know if there is any tool for this (besides using libvirt/virsh)? I have two concepts in mind:
    1) define two working sets / XML files per VM (one for game VM on, one for game VM off) and switch between them depending on the state
    2) have a tool read the current (CPU) configuration of the VMs before the "game vm" is started, limit the running VMs to cores 1-4 and then start the game VM, and after the game VM stops, restore the initial configuration (see the sketch below)

    Any thoughts or ideas on this? How do you manage your CPU pinning with running VMs?
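    A minimal sketch of concept 2 with libvirt-python (assumed installed; squeeze/restore and the in-memory dict are illustrative - a real tool would persist the saved pinning to disk, as the vcpumap approach above does):

       # Remember the current pinning of every running domain, squeeze them
       # onto host cores 0-3, and restore later. Assumes libvirt-python.
       import libvirt

       HOST_CPUS = 8
       WORK_CORES = tuple(i < 4 for i in range(HOST_CPUS))  # cores 0-3 only

       def squeeze(conn):
           saved = {}
           for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
               pins = dom.vcpuPinInfo(libvirt.VIR_DOMAIN_AFFECT_LIVE)
               saved[dom.name()] = pins  # one cpumap tuple per vCPU
               for vcpu in range(len(pins)):
                   dom.pinVcpuFlags(vcpu, WORK_CORES, libvirt.VIR_DOMAIN_AFFECT_LIVE)
           return saved

       def restore(conn, saved):
           for name, pins in saved.items():
               dom = conn.lookupByName(name)
               for vcpu, cpumap in enumerate(pins):
                   dom.pinVcpuFlags(vcpu, cpumap, libvirt.VIR_DOMAIN_AFFECT_LIVE)

       conn = libvirt.open('qemu:///system')
       saved = squeeze(conn)   # ...then start the game VM...
       restore(conn, saved)    # ...after the game VM stops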