Switching between VMs conveniently



I run a Windows 10 VM and a macOS VM, and both share cores and the GPU, so obviously they don't run at the same time.

 

I switch between these somewhat frequently. Is it possible, by way of a script or otherwise, to shut down the currently running VM and start the other without having to go through the steps of bringing up the Unraid GUI, clicking the VMs tab, shutting down the running VM, and so on?

 

Double points if this can be done with my Android phone. Perhaps a shortcut?

 

I don't run in GUI mode and I typically have to do this with my phone via the browser. 

 

Any suggestions? I don't really know what's possible.

 

Thanks.

Link to comment

Easily done, at least the scripting part. The only issue could be if the Mac doesn't shut down cleanly when prompted by the host. I haven't tried it; what happens when you click on the VM in the GUI and select stop? Does the Mac stop cleanly like the Windows VM does?

 

Quick and easy is 2 scripts, but with more advanced scripting it could be done with 1 script. You may need to add logic between the shutdown and the start if the shutdown returns before the guest is completely down; I didn't test for that (see the sketch after the scripts below).

 

(first script)

virsh shutdown VM2

virsh start VM1

 

(second script)

virsh shutdown VM1

virsh start VM2
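 

If the shutdown does return early, something like this, untested, should do as that extra logic; it just polls virsh domstate until the guest reports shut off:

virsh shutdown VM2
# poll until the guest is fully down before starting the other VM
until [ "$(virsh domstate VM2)" = "shut off" ]; do
    sleep 2
done
virsh start VM1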

 

Once you have the script(s) set up, logging in with an SSH client on your phone and running the scripts should work fine. A quick Google search found SSH Button for Android; no experience with it, but the description says it does exactly what you want.
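 

For example, a single button could run something like this; the hostname and script path here are just placeholders for wherever you save the scripts:

ssh root@tower '/boot/config/scripts/switch_to_vm1.sh'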

Link to comment

@tjb_altf4 that'd be a slick setup. Given it's a macOS setup, @nlash is likely already using Clover or OpenCore in his macOS VM. Not exactly sure how I'd go about getting a Windows VM to show up there as well, but it should be doable. I'm thinking you'd have to add a second vdisk with the right EFI setup so it shows up in the bootloader?

Link to comment
5 hours ago, alturismo said:

@nlash as i liked the idea and i also use 2 VMs with the same hardware, i made myself a small script to cycle

LOL my script is more of a "Hulk! Smash!" approach.

virsh shutdown VM1
sleep 10
virsh start VM2
sleep 10
virsh start VM2
sleep 10
virsh start VM2

Since VM1 and VM2 have the same passed-through GPU, it's impossible to start both VMs at the same time.

So if the 1st virsh start comes before VM1 has fully shut down, it just errors out without doing anything; then the 2nd virsh start runs after 10s, and the 3rd after another 10s.

If VM2 has already started, any subsequent virsh start also errors out, so the extra attempts do nothing.

 

 

Link to comment

@testdasi nice ;) i just wanted 1 script for all, so here it doesn't matter which one is running, it will either turn vm1 off and vm2 on or vice versa ...

 

would be nicer to have it as a plugin so it would run through while you're live on vm1 ... as a user script dies when the machine turns off ... ;)

 

let's see when i find some time to work out how to add a plugin for that

Link to comment

I love this topic! It would be lovely to have a neat graphical bootloader start up every time a primary GPU-passthrough OS is shut down, so the server doesn't have to be restarted and you don't have to use the GUI separately... We also need the new kernel that best supports Ryzen/RX 5700 XT. Maybe this should become a real feature in Unraid for those of us who use Unraid as both their server and workstation.

Link to comment

as a final note: when used as a user script, starting the script in "background" always reboots here from vm1 to vm2 or vice versa, so it's actually a one-click solution.

 


 

the "bootloader" idea is nice, personally not needed here as i can always reboot from one vm to another which uses the same gpu ... in my sample nvidia, win 10 <-> ubuntu using both same gpu (GT1030) and same USB (onboard intel usb controller).

 

as they use different machine types (i440fx for Win vs. Q35 for Ubuntu), the dual-bootloader option above would probably fail here.

 

so a plugin with a "which VM to boot" option would be my goal if i follow this up, but currently i'm good as is.

Link to comment
  • 2 years later...
On 3/5/2020 at 8:27 AM, alturismo said:

@nlash as i liked the idea and i also use 2 VMs with the same hardware, i made myself a small script to cycle

 

 

 

Great thread that gave me exactly what I was looking for!

Just a reply to let people know that the proposed solution still works well. Personally, I just updated the

virsh shutdown "$vm1"

with

virsh dompmsuspend "$vm1" --target disk

in order to use hibernation and find my work where I left it. Works well for Windows, but not for a Linux distro, as I found out hibernation is not enabled by default nowadays (the installer does not even create a swap partition when using default settings oO). Anyway, that's an issue for another day :)
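 

So the switch ends up hibernating one VM and starting (or resuming) the other, roughly like this; a sketch using the variable names from the script in this thread, since a plain virsh start after a suspend-to-disk should resume the saved session:

virsh dompmsuspend "$vm1" --target disk   # hibernate VM1 to disk; it should then report as shut off
virsh start "$vm2"                        # a normal start resumes VM2 from its hibernation image, if it has one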

Link to comment
35 minutes ago, Darkman13 said:

in order to use hibernation and find my work where I left it. Works well for Windows,

nice idea, sadly not practical here with GPU passthrough in my VMs

 

the VMs like to freeze ... and even if not, the GPUs stay vfio-bound as the VM is not completely off, so i can't put them in persistence mode ... so the machine has higher power consumption than otherwise ... ;)

Link to comment
6 minutes ago, alturismo said:

nice idea, sadly not practical here with GPU passthrough in my VMs

 

the VMs like to freeze ... and even if not, the GPUs stay vfio-bound as the VM is not completely off, so i can't put them in persistence mode ... so the machine has higher power consumption than otherwise ... ;)

mmm... this shouldn't happen. If hibernation is set to disk, the VM should report as shut off and the GPU should be freed for other uses.

Did you enable suspend to disk in the XML?

Check this, might help:

https://forums.unraid.net/topic/130134-switching-from-gpu-passthrough-local-to-vnc-in-linux-pop/?do=findComment&comment=1184943
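 

For reference, suspend-to-disk in the domain XML is the <pm> block (the same one that shows up in the dump further down):

<pm>
  <suspend-to-mem enabled='yes'/>
  <suspend-to-disk enabled='yes'/>
</pm>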

 

Link to comment
18 minutes ago, alturismo said:

maybe a question first: if i trigger it from inside the VM, would that also trigger it?

 

Are you asking if it will work if you hibernate the VM from inside the guest instead of from the host? My reply is... I don't know :D

But several users reported it working with virsh commands; dompmsuspend and dompmwakeup are virsh commands to be issued from the host, and the guest requires the guest agent installed.
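 

i.e. from the host, assuming a VM named VM1 with the guest agent running, something like:

virsh dompmsuspend VM1 --target disk   # hibernate to disk; resume later with a normal virsh start VM1
virsh dompmwakeup VM1                  # for --target mem suspends, this is what wakes the guest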

 

here are the posts where I got some info:

https://www.reddit.com/r/VFIO/comments/568mmt/saving_vm_state_with_gpu_passthrough/

 

Link to comment
2 hours ago, ghost82 said:

Did you enable suspend to disk in the XML?

 

now i made a test run

 

i added it above the devices start tag, where it's also persistent (if i put it at the end inside the devices block it gets wiped out)

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='3'>
  <name>AlsPC</name>
...
....
.....
    <timer name='hypervclock' present='yes'/>
    <timer name='tsc' present='yes' mode='native'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='yes'/>
    <suspend-to-disk enabled='yes'/>
  </pm>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
...
....
.....
    </hostdev>
    <memballoon model='none'/>
    <!-- ######## HERE IT'S NOT PERSISTENT ######## -->
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

and the behaviour is as i know it: the VM goes to suspend, but for libvirt it is not "shut off"

 

i have 2 GPUs; as we see here, the GPU is not freed after suspend, and my hook scripts are not executed either, as it's not a "shutdown" for qemu

 

root@AlsServer:~# nvidia-smi
Sun Nov  6 13:00:56 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.56.06    Driver Version: 520.56.06    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   26C    P8     6W / 350W |      0MiB / 12288MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:06:00.0 Off |                  N/A |
|  0%   39C    P8     5W / 180W |      0MiB /  6144MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
root@AlsServer:~# virsh dompmsuspend AlsPC disk
Domain 'AlsPC' successfully suspended
root@AlsServer:~# nvidia-smi
Sun Nov  6 13:04:48 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.56.06    Driver Version: 520.56.06    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   26C    P8     6W / 350W |      0MiB / 12288MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
root@AlsServer:~#

 

and when i now wake the VM up, it starts but stalls: no GPU, no access, no ping ...

 


 

so either my setup doesn't like it ... or i misinterpreted your position from the other linked post, as it doesn't look like it's triggered properly

 


 

my closing tag for devices is different: as you can see above it's </devices> and not <devices/>

 

in case you have an idea, thanks ahead; for now i must say it doesn't work here in my use case / setup

Link to comment
4 hours ago, alturismo said:

nice idea, sadly not practical here with GPU passthrough in my VMs

Hope I can help.

I have a passed-through GPU (Nvidia 970) that I have put on both VMs (one Windows 10, and one Zorin OS, based on Ubuntu). They have broadly the same config, except the Windows one is on its own SSD. For the Linux one to hibernate properly I had to follow another topic and tweak around:

But I do not have the <pm>...</pm> lines in my config in order to hibernate the Windows VM. I remember that hibernation is not activated by default within the OS either: https://www.ubackup.com/windows-11/hibernate-mode-windows-11.html

After that, the qemu agent is able to shut the VM down to disk properly, and I can do a normal virsh start to start the VM.
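 

(If I remember right, hibernation can be switched back on from an elevated command prompt inside the Windows guest:)

powercfg /hibernate on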

Link to comment
1 hour ago, Darkman13 said:

I remember that hibernation is not activated by default within the OS either:

when i think about it, i guess i disabled hibernate in my windows machines ;) useless vdisk space ... ;)

 

so in the end we're talking about the normal hibernate feature of the guest OS, not some special host-side "hibernate to disk" feature; my fault ;) well, i'll give it another try after checking whether hibernate is still active on my win VMs and see what happens then.

 

Thanks for pointing that out @Darkman13

Link to comment
On 11/6/2022 at 1:21 PM, alturismo said:

i added it above the devices start tag, where it's also persistent (if i put it at the end inside the devices block it gets wiped out)

 

On 11/6/2022 at 1:21 PM, alturismo said:


 

my closing tag for devices is different: as you can see above it's </devices> and not <devices/>

Sorry, ignore this; the position you wrote is the right one!

 

As for the other issue, I'm sorry, I didn't try it myself but only reported some findings :(

 

Link to comment
On 11/7/2022 at 2:07 PM, ghost82 said:

As for the other issue, I'm sorry, I didn't try it myself but only reported some findings :(

 

 

On 11/6/2022 at 3:13 PM, Darkman13 said:

After that, the qemu agent is able to shut the VM down to disk properly, and I can do a normal virsh start to start the VM.

 

On 11/6/2022 at 4:49 PM, alturismo said:

when i think about it, i guess i disabled hibernate in my windows machines ;) useless vdisk space ... ;)

 

so, i enabled hibernate in windows again as a test scenario

 

and i can say it's looking fine so far; i'll test this over some period of time and see if it stays stable

 

as a note: after activating the hibernate button in windows again, it also works when triggered from inside the VM

 


 

it results in a "shutdown" for qemu, so it's running as it should be; virsh start then just wakes the VM up with all apps still open ;)

 


 

very nice, thanks again @ghost82 and @Darkman13; i had already dropped this completely as my experience with it was really bad ;)

Link to comment
  • 1 year later...
36 minutes ago, GeekFreak said:

Hey, i have got a solution for you: i was able to create a nice script with a webhook to switch between VMs easily

thanks, but i have had one here for some time already ... ;) simple but working, just to "cycle" between VMs sharing the same GPU passthrough

 

as a user script, run in background ... it will turn the running VM off and turn the other VM on (cycle)

 

#!/bin/bash

vm1="AlsPC"             ## Name of first VM
vm2="AlsPC_Linux"       ## Name of second VM

############### End config

vm_running="running"
vm_down="shut off"

vm1_state=$(virsh domstate "$vm1")
vm2_state=$(virsh domstate "$vm2")

echo "$vm1 is $vm1_state"
echo "$vm2 is $vm2_state"

if [ "$vm1_state" = "$vm_running" ] && [ "$vm2_state" = "$vm_down" ]; then
	echo "$vm1 is running, shutting down"
	virsh shutdown "$vm1"
	# poll until the guest is fully down before starting the other VM
	vm1_new_state=$(virsh domstate "$vm1")
	until [ "$vm1_new_state" = "$vm_down" ]; do
		echo "$vm1 $vm1_new_state"
		vm1_new_state=$(virsh domstate "$vm1")
		sleep 2
	done
	echo "$vm1 $vm1_new_state"
	sleep 2
	virsh start "$vm2"
	sleep 1
	vm2_new_state=$(virsh domstate "$vm2")
	echo "$vm2 $vm2_new_state"
elif [ "$vm2_state" = "$vm_running" ] && [ "$vm1_state" = "$vm_down" ]; then
	echo "$vm2 is running, shutting down"
	virsh shutdown "$vm2"
	# poll until the guest is fully down before starting the other VM
	vm2_new_state=$(virsh domstate "$vm2")
	until [ "$vm2_new_state" = "$vm_down" ]; do
		echo "$vm2 $vm2_new_state"
		vm2_new_state=$(virsh domstate "$vm2")
		sleep 2
	done
	echo "$vm2 $vm2_new_state"
	sleep 2
	virsh start "$vm1"
	sleep 1
	vm1_new_state=$(virsh domstate "$vm1")
	echo "$vm1 $vm1_new_state"
else
	echo "$vm1 is $vm1_state and $vm2 is $vm2_state, states don't match"
fi

 

Link to comment
On 12/24/2023 at 2:18 AM, alturismo said:

thanks, but i have had one here for some time already ... ;) simple but working, just to "cycle" between VMs sharing the same GPU passthrough

 

 

Oh, that's perfect. Mine works for anyone who wants to connect their Elgato Stream Deck and switch to whatever system. It's nice if you have multiple VMs you want to switch between; I have macOS, Ubuntu, and Windows that I switch between for software testing and such, and I can do that with the Stream Deck. It makes my life a little bit easier lol.

Link to comment
