VM snapshots



+1 to this feature request, specifically the ability to create and manage VM snapshots from the Unraid GUI. I recently switched over from a QNAP NAS, and having the ability to snapshot a VM either hot (running) or cold (shut down) has been an absolute lifesaver!

 

From my experience so far with Linux guests in Unraid, there is a bit more flexibility within the guest OS itself, but being able to manage snapshots from the Unraid GUI would save quite a bit of time, that's for sure!


Yes! And *online* snapshots would be a must. A snapshot should then also include the persisted vRAM state and probably some device state as well.
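(libvirt can already do something along these lines from the command line; just a rough sketch, where the domain name, disk target and paths are placeholders:)

# Sketch only, not verified on Unraid: external snapshot of a running VM
# that also saves the guest memory state to a file.
virsh snapshot-create-as MyVM presnap \
    --memspec file=/mnt/user/domains/MyVM/presnap.mem,snapshot=external \
    --diskspec vda,snapshot=external,file=/mnt/user/domains/MyVM/presnap-vda.qcow2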

 

A useful feature would also be the ability to set up a snapshot that a VM always starts from, discarding any changes made afterwards. The use case for this is something like an internet cafe, a demo OS, or a place to tinker with stuff that is otherwise impossible to undo. A discardable OS, where any damage a user may have done to the system just goes *poof* and rolls back to a known working state, would be extremely useful in a plethora of contexts. I'm thinking of schools, internet cafes as mentioned, some kind of public kiosk, or even something your kid will be unable to break.
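(Something close to this can already be approximated with a throwaway qcow2 overlay on top of a read-only base image; rough sketch, paths are made up:)

# Sketch only: keep a pristine base image and boot the VM from a
# disposable overlay that is recreated before every start.
qemu-img create -f qcow2 -b /mnt/user/domains/kiosk/base.qcow2 -F qcow2 \
    /mnt/user/domains/kiosk/throwaway.qcow2
# Point the VM's vdisk at throwaway.qcow2; deleting and recreating the
# overlay discards everything the user changed.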

 

It would also need a function to merge snapshots, and to "delete" snapshots (which really means to merge & persist them into the regular img file).
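(For external qcow2 snapshots that merge-and-persist step already exists at the CLI level; example with placeholder names:)

# Sketch only: fold an external snapshot overlay back into its base
# image, i.e. "delete" the snapshot but keep its changes.
qemu-img commit /mnt/user/domains/MyVM/snap-overlay.qcow2   # with the VM stopped
# Or, while the VM is running, via libvirt:
virsh blockcommit MyVM vda --active --verbose --pivot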

Edited by thany

There are two types of snapshots available in QEMU: internal and external.

 

Internal - Requires QCOW2 images. Doesn't support OVMF, but I have seen a workaround that changes the firmware loader from pflash to rom; snapshots then work, but you have to do a manual revert.

 

External - Only disk-only snapshots are supported, with no revert or delete options at present; a manual revert needs the VM to be stopped.
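(For anyone who wants to try the external type from the CLI, the rough shape is something like this; the domain and snapshot names are just examples:)

# Sketch only: disk-only external snapshot of a running VM.
virsh snapshot-create-as DebianSB pre-upgrade --disk-only --atomic
# New writes now go to a freshly created overlay file (named something
# like vdisk1.pre-upgrade); a manual revert means stopping the VM,
# pointing its XML back at the base image and discarding (or
# committing) the overlay.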

 

Which VM types are people looking to snapshot? Does anyone know if Proxmox supports snapshots with OVMF? I have asked the libvirt team if they have any idea on the timeline for full support for external snapshots.

 

Testing so far.

 

VM: SeaBIOS, 2 x QCOW2 disk files.

 

root@computenode:~# virsh domfsinfo DebianSB
 Mountpoint           Name   Type   Target
--------------------------------------------
 /                    vda1   ext4   hdc
 /media/snapb/disk2   vdb    ext4   hdd

root@computenode:~# virsh snapshot-create DebianSB
Domain snapshot 1672498342 created
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running

root@computenode:~# virsh snapshot-revert DebianSB
error: --snapshotname or --current is required

root@computenode:~# virsh snapshot-revert DebianSB --current

root@computenode:~# 
root@computenode:~# virsh snapshot-info DebainSB
error: failed to get domain 'DebainSB'

root@computenode:~# virsh snapshot-info  DebianSB
error: --snapshotname or --current is required

root@computenode:~# virsh snapshot-list  DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running

root@computenode:~# virsh snapshot-revert DebianSB  1672498342

root@computenode:~# virsh snapshot-list  DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running

root@computenode:~# virsh snapshot-list  DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
 1672498947   2022-12-31 15:02:27 +0000   running

root@computenode:~# virsh snapshot-revert DebianSB  1672498947
root@computenode:~# virsh destroy DebianSB
Domain 'DebianSB' destroyed

root@computenode:~# virsh snapshot-create  DebianSB
Domain snapshot 1672500774 created
root@computenode:~# virsh snapshot-list  DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
 1672498947   2022-12-31 15:02:27 +0000   running
 1672500774   2022-12-31 15:32:54 +0000   shutoff

 

VM: OVMF, 2 x QCOW2 disk files.

 

root@computenode:~# virsh snapshot-create DebianSO
error: Operation not supported: internal snapshots of a VM with pflash based firmware are not supported

root@computenode:~# # change pflash to rom
root@computenode:~# virsh snapshot-create DebianSO
Domain snapshot 1672502423 created
root@computenode:~# virsh snapshot-list DebianSO
 Name         Creation Time               State
---------------------------------------------------
 1672502423   2022-12-31 16:00:23 +0000   shutoff
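(For reference, the "change pflash to rom" step above is just an edit to the loader line in the domain XML, roughly like this; the OVMF path will vary:)

virsh edit DebianSO
#   change: <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
#   to:     <loader readonly='yes' type='rom'>/usr/share/OVMF/OVMF_CODE.fd</loader>
# Internal snapshots then work, but a revert has to be done by hand,
# e.g. qemu-img snapshot -a <name> on each disk with the VM stopped.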

 

Edited by SimonF

Hello everyone.
I would also love this feature.

 

That is just my own opinion, but my personal use case is being able to save a working configuration before making any kind of change I'm not so sure about, knowing I can always go back to a previous known working point, i.e. more like a restore point. Therefore, it's not a problem for me if I need to shut the VM down before taking the snapshot.

 

As I'm writing this, I'm not sure it's really a "snapshot" per se, where all state (including RAM) is captured. (I don't need the RAM state to be captured...)

More like a "disk backup" at a given point in time.
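(For that kind of cold, point-in-time restore point, even a plain copy of the vdisk while the VM is shut down already does the job; sketch with made-up paths:)

# Sketch only: cold "restore point" of a vdisk.
virsh shutdown MyVM                      # wait until it is fully off
cp /mnt/user/domains/MyVM/vdisk1.img /mnt/user/backups/MyVM/vdisk1.img.bak
# Or, for qcow2 images, a compacted copy:
qemu-img convert -O qcow2 vdisk1.qcow2 vdisk1_backup.qcow2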

 

Maybe those two use cases are different and one can be implemented faster than the other, especially if it covers the needs of the majority of people.

(Not saying we should forget users that need more elaborate functions (hot snapshots with RAM, etc...))

 

That is my own 2 cents about a feature that would indeed be great.

 

Have a good day.


I've been trying today with virsh and qemu-img to do this from the CLI, using SeaBIOS (does it matter?) and qcow2. But if I set the current snapshot with virsh for that VM and then do a dnf update or something else big, the data still gets written to the clean base image. The same goes for qemu-img: I created a snapshot, got no error, but no snapshot file was created afterwards:

 

root@towerpve:/mnt/user/domains/ol6# qemu-img snapshot -c vdisk1_202302161020.qcow2 vdisk1.qcow2

 

When I check if the snapshot is created:

 

root@towerpve:/mnt/user/domains/ol6# qemu-img info --backing-chain vdisk1.qcow2
 

image: vdisk1.qcow2
file format: qcow2
virtual size: 12 GiB (12884901888 bytes)
disk size: 4.3 GiB
cluster_size: 65536
Snapshot list:
ID        TAG               VM SIZE                DATE     VM CLOCK     ICOUNT
1         1676537842            0 B 2023-02-16 09:57:22 00:00:00.000          0
3         vdisk1_202302161020.qcow2      0 B 2023-02-16 10:22:03 00:00:00.000          0
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false

 

Confused.
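(Rereading the qemu-img man page, I think the catch is that "snapshot -c" takes a snapshot tag, not an output filename, so the command above created an internal snapshot named vdisk1_202302161020.qcow2 inside vdisk1.qcow2 rather than a new file, which is what the ID 3 entry in the list above shows. A separate snapshot file would be an external overlay, created roughly like this:)

# Sketch only: external snapshot as a separate overlay file; vdisk1.qcow2
# becomes the read-only backing file and the VM must be repointed at the
# overlay for new writes to land there.
qemu-img create -f qcow2 -b vdisk1.qcow2 -F qcow2 vdisk1_202302161020_overlay.qcow2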

I tried the virt manager docker but I couldn't find an option to make a snapshot in the first place.

 

I'm not sure how virsh or qemu-img work together with Unraid, but I'm using Xen at work, which just snapshots an entire VM no matter how many disk images are attached to it. The same goes for VirtualBox, which I'm using at home (on my workstation).

 

Native support for managing snapshots would be most welcome. Unraid is quite complete (and when it isn't, you can complete it yourself with plugins), but it kinda bothers me not having this feature. That's the reason why I (almost) moved to Proxmox with Unraid just as dfs inside it, but that would be counterproductive. I've been thinking about btrfs too, but I deliberately did not format the VM cache NVMe as btrfs; I don't want to go that way when QEMU itself can manage snapshots.

Edited by sjoerd
typos, added info
