
VM Snapshots



3 minutes ago, enJOyIT said:

Is this thread about snapshots only or backups too?

Happy for you to ask questions about backups, but there is no native backup in the system.

 

All the files are stored in the VM directory, including the memory as a .mem file, XML copies, etc. Snapshot config is stored in the libvirt directory under qemu/snapshotdb.

Link to comment

Example files for a snapshot: .running is the running XML, .mem is the memory dump, and qcow2 is the overlay.

root@computenode:~# ls /mnt/user/domains2/DebianQ35/
S20240525081529.running  memoryS20240525081529.mem  vdisk1.S20240525081529qcow2  vdisk1.img*
root@computenode:~# cat /etc/libvirt/qemu/snapshotdb/DebianQ35/snapshots.db 
{
    "S20240525081529": {
        "name": "S20240525081529",
        "parent": "Base",
        "state": "running",
        "desc": null,
        "memory": {
            "@attributes": {
                "snapshot": "external",
                "file": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/memoryS20240525081529.mem"
            }
        },
        "creationtime": "1716621329",
        "method": "QEMU",
        "backing": {
            "hdc": [
                "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2",
                "\/mnt\/user\/domains2\/DebianQ35\/vdisk1.img"
            ],
            "rhdc": [
                "\/mnt\/user\/domains2\/DebianQ35\/vdisk1.img",
                "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2"
            ]
        },
        "primarypath": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35",
        "disks": [
            {
                "@attributes": {
                    "name": "hdc",
                    "snapshot": "external",
                    "type": "file"
                },
                "driver": {
                    "@attributes": {
                        "type": "qcow2"
                    }
                },
                "source": {
                    "@attributes": {
                        "file": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2"
                    }
                }
            },
            {
                "@attributes": {
                    "name": "hda",
                    "snapshot": "no"
                }
            }
        ]
    }
}root@computenode:~# 
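
A quick way to confirm the overlay's backing chain yourself is qemu-img (using the paths from the listing above):

root@computenode:~# qemu-img info --backing-chain /mnt/user/domains2/DebianQ35/vdisk1.S20240525081529qcow2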

 

Link to comment

I have an NVMe cache drive I use to hold my VMs, very close to capacity. So when I go to create a snapshot, I get a storage error, as I have no extra space on the NVMe to store said snapshot. Is there the ability to store/save the snapshot to another storage area on my Unraid?

Link to comment

Are the snapshots provided by qcow2 format attributes or by the underlying filesystem?

 

Is there a way to export a snapshot to back up a VM?

 

I'm glad editing a VM is now done fully in the GUI. We can now expect no more bugs when switching between the GUI and XML, like the vdisk type switching from qcow2 to img.

Link to comment
7 minutes ago, caplam said:

Are the snapshots provided by qcow2 format attributes or by the underlying filesystem?

 

Is there a way to export a snapshot to back up a VM?

 

I'm glad editing a VM is now done fully in the GUI. We can now expect no more bugs when switching between the GUI and XML, like the vdisk type switching from qcow2 to img.

There are two options at present: QEMU, which creates a qcow2 overlay, and the ZFS filesystem. The default is QEMU. An option for Btrfs will be added at a future date.
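
For reference, the ZFS option works at the filesystem level rather than through an overlay file; conceptually it amounts to something like the following (an assumption on my part, using the dataset path from the example above):

zfs snapshot vmpoolzfs/domains2/DebianQ35@S20240525081529
zfs list -t snapshot vmpoolzfs/domains2/DebianQ35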

 

A snapshot is not a backup of the whole system. Will look to add support for a copy option to allow backups. Currently only commit and pull are implemented.
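
For context, commit and pull map onto libvirt's block operations; at the virsh level they look roughly like this (a sketch only, using the DebianQ35 domain and hdc disk from the example above):

virsh blockcommit DebianQ35 hdc --active --pivot   # merge the overlay down into the base image
virsh blockpull DebianQ35 hdc --wait               # pull backing data up into the overlay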

 

Copy will have an option to save to a different location, but it won't be until the next release.

Link to comment
1 hour ago, wil56k said:

I have an NVMe cache drive I use to hold my VMs, very close to capacity. So when I go to create a snapshot, I get a storage error, as I have no extra space on the NVMe to store said snapshot. Is there the ability to store/save the snapshot to another storage area on my Unraid?

No, it is not possible. The calculation is based on the full disk size, plus the memory size if the VM is running. You can turn off the free space check, but I would be careful doing that, as it may fill the disk.

 

Snapshots create an overlay file, so the VM will be reading from both if data is not in the overlay. This may create performance issues.
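
For illustration, the overlay is an ordinary qcow2 file whose backing file is the original image, along the lines of (a sketch; assumes the base vdisk is raw):

qemu-img create -f qcow2 -b /mnt/user/domains2/DebianQ35/vdisk1.img -F raw vdisk1.overlay.qcow2
# Reads for blocks not yet written to the overlay fall through to vdisk1.img.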

Link to comment
  • 3 weeks later...

Sorry if I missed this in another area. Trying to snapshot a Windows Server w/ QCOW disks. Getting this? I have migration enabled, although I'm not sure what that does. 
 

Execution error

Requested operation is not valid: cannot migrate domain: Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; 0000:10:00.0: VFIO migration is not supported in kernel; 0000:10:00.1: VFIO migration is not supported in kernel

 


Link to comment
7 minutes ago, dja said:

Sorry if I missed this in another area. Trying to snapshot a Windows Server w/ QCOW disks. Getting this? I have migration enabled, although I'm not sure what that does. 
 

Execution error

Requested operation is not valid: cannot migrate domain: Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; 0000:10:00.0: VFIO migration is not supported in kernel; 0000:10:00.1: VFIO migration is not supported in kernel

 


Is the VM running?

Link to comment
6 minutes ago, dja said:

Um... (embarrassed) Yes? :) 

I guess I'm used to the Hyper-V world where you can do it while running... is that not a thing? Or does it need to convert first?

I think running snapshots don't work if you have virtiofs enabled, or maybe GPUs passed through.

Link to comment

Ok, I tried a normal Win11 VM. Snapshotting works when it is powered off.

 

Getting an error when it is started. Surely this is supposed to work in either state? I have 'Migratable' set to 'on' for the VM.

 

[screenshot of the error]

Link to comment
10 minutes ago, dja said:

Ok, I tried a normal Win11 VM. Snapshotting works when it is powered off.

 

Getting an error when it is started. Surely this is supposed to work in either state? I have 'Migratable' set to 'on' for the VM.

 

[screenshot of the error]

Are you using virtiofs for shares?

Link to comment

So I'm having an issue trying to grab a snapshot of my Windows 11 VM. Is it possible to grab a snapshot of a VM using GPU Passthrough? Right now the error I keep getting is:

 

Requested operation is not valid: cannot migrate domain:

0000:03:00.0: VFIO migration is not supported in kernel;

0000:03:00.1: VFIO migration is not supported in kernel

Link to comment
5 hours ago, Lountz said:

So I'm having an issue trying to grab a snapshot of my Windows 11 VM. Is it possible to grab a snapshot of a VM using GPU Passthrough? Right now the error I keep getting is:

 

Requested operation is not valid: cannot migrate domain:

0000:03:00.0: VFIO migration is not supported in kernel;

0000:03:00.1: VFIO migration is not supported in kernel

Will look to see what is possible, but it may be dependent on driver and other component support.

 

It should work if you don't do the memory dump, but I have not tested it.
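
If you want to experiment at the command line, a disk-only snapshot (no memory dump) can be taken with something like this (a sketch, untested here as well; 'Windows11' is a placeholder domain name):

virsh snapshot-create-as Windows11 S20240801 --disk-only --atomic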

Link to comment
3 hours ago, dja said:

Is there any way for me to test this? 

I haven't tried any of the new options yet; you can download the new binary from the GitLab releases and replace the current file.

 


/usr/bin/virtiofsd
 

 

Additional options can be passed to virtiofsd. By default, not including the variables passed from libvirt, I add the following:

 

--syslog

--inode-file-handles=mandatory

--announce-submounts

 

You can create a file called /etc/libvirt/virtiofsd.opt with options. Add the lines above; each option should be on its own line.

 

 

--syslog

--inode-file-handles=mandatory

--announce-submounts

--migration-mode <MIGRATION_MODE>
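
For example, you could write the file like this (a sketch; the migration mode value shown is the documented default):

cat > /etc/libvirt/virtiofsd.opt <<'EOF'
--syslog
--inode-file-handles=mandatory
--announce-submounts
--migration-mode find-paths
EOF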

 

You may need to check the linked issue for any details regarding snapshots.

 

      --migration-mode <MIGRATION_MODE>
          Defines how to perform migration, i.e. how to represent the internal state to the destination, and how to obtain that representation.
          
          - find-paths: Iterate through the shared directory (exhaustive search) to find paths for all inodes indexed and opened by the guest, and transfer these paths to the destination.
          
          This parameter is ignored on the destination side.
          
          [default: find-paths]

      --migration-on-error <MIGRATION_ON_ERROR>
          Controls how to respond to errors during migration.
          
If any inode turns out not to be migrateable (either the source cannot serialize it, or the destination cannot open the serialized representation), the destination can react in different ways:
          
          - abort: Whenever any error occurs, return a hard error to the vhost-user front-end (e.g. QEMU), aborting migration.
          
          - guest-error: Let migration finish, but the guest will be unable to access any of the affected inodes, receiving only errors.
          
          This parameter is ignored on the source side.
          
          [default: abort]

      --migration-verify-handles
          Ensure that the migration destination opens the very same inodes as the source (only works if source and destination use the same shared directory on the same filesystem).
          
          This option makes the source attach the respective file handle to each inode transferred during migration.  Once the destination has (re-)opened the inode, it will generate the file handle on its end, and compare, ensuring that it has opened the very same inode.
          
          (File handles are per-filesystem unique identifiers for inodes that, besides the inode ID, also include a generation ID to protect against inode ID reuse.)
          
          Using this option protects against external parties renaming or replacing inodes while migration is ongoing, which, without this option, can lead to data loss or corruption, so it should always be used when other processes besides virtiofsd have write access to the shared directory.  However, again, it only works if both source and destination use the same shared directory.
          
          This parameter is ignored on the destination side.

      --migration-confirm-paths
          Double-check the identity of inodes right before switching over to the destination, potentially making migration more resilient when third parties have write access to the shared directory.
          
          When representing migrated inodes using their paths relative to the shared directory, double-check during switch-over to the destination that each path still matches the respective inode, and on mismatch, try to correct it via the respective symlink in /proc/self/fd.
          
          Because this option requires accessing each inode indexed or opened by the guest, it can prolong the switch-over phase of migration (when both source and destination are paused) for an indeterminate amount of time.
          
          This parameter is ignored on the destination side.

 

Link to comment
  • SimonF pinned this topic
  • 2 weeks later...
On 7/3/2024 at 12:52 PM, SimonF said:

No, it is not possible. The calculation is based on the full disk size, plus the memory size if the VM is running. You can turn off the free space check, but I would be careful doing that, as it may fill the disk.

 

Snapshots create an overlay file, so the VM will be reading from both if data is not in the overlay. This may create performance issues.

So a VM with a primary and a secondary drive will need free capacity equal to drive 1 + drive 2 + memory in order to take the snapshot?

Link to comment
1 hour ago, sobesjm said:

So a VM with a primary and a secondary drive will need free capacity equal to drive 1 + drive 2 + memory in order to take the snapshot?

Yes, that is correct, as the overlay could grow to the original disk size. You can uncheck the free space check if you don't think it will exceed the free space.
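
For example, a VM with a 50 GB primary vdisk, a 100 GB secondary vdisk, and 16 GB of RAM would need roughly 50 + 100 + 16 = 166 GB free while running. You can check what is actually free on the pool first, e.g. using the pool path from the earlier example:

df -h /mnt/vmpoolzfs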

Link to comment
  • 2 weeks later...

So I'm playing around with the Unraid VM manager in v7 beta 2.

[screenshot]

 

I have found a few bugs here and there from general use and would like to know more about the snapshot system.

 

Another small bug is removing older VMs...

 

[screenshots]

 

This was a VM where I chose to remove the VM and disk, but I can't remove this folder...

 

Another small bug: a new VM created with vdisk 1 doesn't build the vdisk. I was able to copy a vdisk over to other VMs: when Unraid makes a vdisk it is 500 MB, and the Ubuntu installer can't find the disk (missing block device)... but copy over a premade 20 GB vdisk from Unraid 6, and the same VM's Ubuntu installer can now see and use the vdisk...


For example, in testing, if the libvirt.img breaks, moves, or is gone, any VM with snapshots is lost. There may be a way to fix this, I don't know, but it is a concern when moving a VM from one Unraid to another... unless we merge the snapshot back into vdisk 1...

From my experience.

[screenshot]

 

I need to make two snapshots for a backup. If I want to revert to the base snapshot during testing, I would click the earlier point in the chain and click revert snapshot. Great!

I can't remove an older snapshot, as this will break the chain needed to boot and to keep my current work in the latest snapshot (hence two: one for the actual backup, one for running the new code)...

So my question is: what do block commit and block pull do, and what is the use case?

How would I merge my current chain into one disk image now that I have a finished project in testing?

On another note, a VM feature request that would be nice, since we have an excellent implementation of ejecting disks, would be a boot order option in the web GUI to change booting from the vdisk file over the disk drive file.

Thoughts, concerns, or am I misunderstanding how the VM snapshot implementation is supposed to work?

Bug report added here:

 

Edited by bmartino1
bug report
Link to comment
