enJOyIT Posted July 2
Is this thread about snapshots only, or backups too?
SimonF (Author) Posted July 2
3 minutes ago, enJOyIT said: Is this thread about snapshots only or backups too?
Happy for you to ask questions about backups, but there is no native backup in the system. All the files are stored in the VM directory, including the memory as a .mem file, XML copies, etc. The snapshot config is stored in the libvirt dir under qemu/snapshotdb.
SimonF (Author) Posted July 2
Example files for a snapshot: .running is the running XML, .mem is the memory dump, and the qcow2 file is the overlay.

root@computenode:~# ls /mnt/user/domains2/DebianQ35/
S20240525081529.running  memoryS20240525081529.mem  vdisk1.S20240525081529qcow2  vdisk1.img*
root@computenode:~# cat /etc/libvirt/qemu/snapshotdb/DebianQ35/snapshots.db
{
    "S20240525081529": {
        "name": "S20240525081529",
        "parent": "Base",
        "state": "running",
        "desc": null,
        "memory": {
            "@attributes": {
                "snapshot": "external",
                "file": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/memoryS20240525081529.mem"
            }
        },
        "creationtime": "1716621329",
        "method": "QEMU",
        "backing": {
            "hdc": [
                "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2",
                "\/mnt\/user\/domains2\/DebianQ35\/vdisk1.img"
            ],
            "rhdc": [
                "\/mnt\/user\/domains2\/DebianQ35\/vdisk1.img",
                "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2"
            ]
        },
        "primarypath": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35",
        "disks": [
            {
                "@attributes": {
                    "name": "hdc",
                    "snapshot": "external",
                    "type": "file"
                },
                "driver": {
                    "@attributes": {
                        "type": "qcow2"
                    }
                },
                "source": {
                    "@attributes": {
                        "file": "\/mnt\/vmpoolzfs\/domains2\/DebianQ35\/vdisk1.S20240525081529qcow2"
                    }
                }
            },
            {
                "@attributes": {
                    "name": "hda",
                    "snapshot": "no"
                }
            }
        ]
    }
}
root@computenode:~#
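Since snapshots.db is plain JSON, the chains it records can be inspected with a few lines of Python. The helper below is an illustrative sketch, not part of Unraid; it assumes the layout shown above, including the convention that each `r`-prefixed key (e.g. `rhdc`) is just the reversed copy of the matching disk's backing list.

```python
import json

# Sketch: summarize each snapshot in a snapshots.db file with its
# parent, state, memory dump, and qcow2 backing chain. Assumes the
# layout shown in the example above; "r"-prefixed backing keys
# (e.g. "rhdc") are treated as reversed duplicates and skipped.

def summarize_snapshots(db_text: str):
    db = json.loads(db_text)
    out = []
    for name, snap in db.items():
        chains = {disk: chain
                  for disk, chain in snap.get("backing", {}).items()
                  if not disk.startswith("r")}
        out.append({
            "name": name,
            "parent": snap.get("parent"),
            "state": snap.get("state"),
            "memory_file": snap.get("memory", {})
                               .get("@attributes", {})
                               .get("file"),
            "backing": chains,
        })
    return out
```

To use it against a real database you would read the file first, e.g. `summarize_snapshots(open("/etc/libvirt/qemu/snapshotdb/DebianQ35/snapshots.db").read())`.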
wil56k Posted July 3
I have an NVMe cache drive I use to hold my VMs, and it is very close to capacity. So when I go to create a snapshot I get a storage error, as I have no extra space on the NVMe to store said snapshot. Is there the ability to store/save the snapshot to another storage area on my Unraid?
caplam Posted July 3
Are the snapshots provided by qcow2 format attributes or by the underlying filesystem? Is there a way to export a snapshot to back up a VM? I'm glad editing a VM is now done fully in the GUI. Can we now expect there are no more bugs when switching between GUI and XML, like the vdisk type switching from qcow2 to img?
SimonF (Author) Posted July 3
7 minutes ago, caplam said: Are the snapshots provided by qcow2 format attributes or by the underlying filesystem? Is there a way to export a snapshot to back up a VM? …
There are two options at present: QEMU, which creates a qcow2 overlay, and the ZFS filesystem. The default is QEMU. An option for Btrfs will be added at a future date. A snapshot is not a backup of the whole system; I will look to add support for a copy option to allow backups. Currently only commit and pull are implemented. Copy will have an option to save to a different location, but it won't be until the next release.
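For readers unfamiliar with the two merge directions mentioned (commit and pull), here is a toy model of how qemu's block jobs collapse an overlay chain. The dicts stand in for image files keyed by cluster offset; this sketches the general qcow2 mechanism, not Unraid's actual implementation.

```python
# Toy model of the two merge directions for an external qcow2 snapshot.
# Each dict stands in for a disk image, mapping cluster offset -> data;
# an offset missing from the overlay is read from its backing file.
# Illustrative sketch of the general qemu mechanism, not Unraid code.

def read(chain, offset):
    """Read one cluster from a backing chain (chain[0] is the overlay)."""
    for image in chain:
        if offset in image:
            return image[offset]
    return None  # unallocated; a real qcow2 read returns zeros

def block_commit(overlay, base):
    """Commit: merge the overlay's new clusters DOWN into the base.

    Afterwards the overlay holds nothing the base lacks and can be
    deleted -- the snapshot is folded back into the base image."""
    base.update(overlay)
    overlay.clear()

def block_pull(overlay, base):
    """Pull: copy the base's clusters UP into the overlay.

    Afterwards the overlay is self-contained and no longer needs the
    base image underneath it."""
    for offset, data in base.items():
        overlay.setdefault(offset, data)
```

So commit keeps the base file (e.g. vdisk1.img) as the surviving image, while pull keeps the overlay; either way the chain ends up one image shorter.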
SimonF (Author) Posted July 3
1 hour ago, wil56k said: I have an NVMe cache drive I use to hold my VMs, very close to capacity. … Is there the ability to store/save the snapshot to another storage area on my Unraid?
No, it is not possible. The calculation is based on the full disk size, plus the memory size if the VM is running. You can turn off the free-space check, but I would be careful doing that as it may fill the disk. Snapshots create an overlay file, so the VM will be reading from both if data is not in the overlay, which may create performance issues.
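The sizing rule described above (the full virtual size of every snapshotted disk, plus guest memory when the VM is running) can be made concrete with a small helper. The function and its names are hypothetical, shown only to pin down the arithmetic.

```python
# Hedged sketch of the free-space calculation described above: an
# external snapshot reserves the full virtual size of each disk (the
# overlay could grow that large) plus the guest memory when the VM is
# running, since a live snapshot also writes a .mem dump.

GiB = 1024 ** 3

def space_required(disk_sizes_bytes, memory_bytes=0, running=False):
    """Bytes of free space the snapshot may need in the worst case."""
    required = sum(disk_sizes_bytes)
    if running:
        required += memory_bytes  # live snapshot adds the memory dump
    return required
```

For example, a running VM with a 30 GiB vdisk and 8 GiB of RAM would need roughly 38 GiB free on the target pool.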
dja Posted July 18
Sorry if I missed this in another area. I am trying to snapshot a Windows Server VM with qcow2 disks and getting the error below. I have migration enabled, although I'm not sure what that does.
Execution error
Requested operation is not valid: cannot migrate domain: Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.; 0000:10:00.0: VFIO migration is not supported in kernel; 0000:10:00.1: VFIO migration is not supported in kernel
SimonF (Author) Posted July 18
7 minutes ago, dja said: Sorry if I missed this in another area. Trying to snapshot a Windows Server w/ QCOW disks. …
Is the VM running?
dja Posted July 18
Um... (embarrassed) Yes? I guess I'm used to the Hyper-V world where you can do it while running... is that not a thing? Or does it need to convert first?
SimonF (Author) Posted July 18
6 minutes ago, dja said: Um... (embarrassed) Yes? I guess I'm used to the Hyper-V world where you can do it running... is that not a thing?
I think running snapshots don't work if you have virtiofs enabled, and maybe also with GPUs passed through.
dja Posted July 19
OK, I tried a normal Win11 VM. Snapshotting works when it is powered off; I get the error when it is started. Surely this is supposed to work in either state? I have 'Migratable' set to 'on' for the VM.
SimonF (Author) Posted July 19
10 minutes ago, dja said: OK. I tried a normal Win11 VM. It works to snapshot when powered off. Getting error when started. …
Are you using virtiofs for shares?
dja Posted July 19
2 minutes ago, SimonF said: Are you using virtiofs for shares?
Yup. Sorry, I do see you mentioned that... maybe that will be supported at some point?
SimonF (Author) Posted July 19
17 minutes ago, dja said: Yup. Sorry, I do see you mentioned that... maybe that will be supported at some point?
They started work on it last October, but it is not complete: https://gitlab.com/virtio-fs/virtiofsd/-/issues/136
Lountz Posted July 20
So I'm having an issue trying to grab a snapshot of my Windows 11 VM. Is it possible to grab a snapshot of a VM using GPU passthrough? Right now the error I keep getting is:
Requested operation is not valid: cannot migrate domain: 0000:03:00.0: VFIO migration is not supported in kernel; 0000:03:00.1: VFIO migration is not supported in kernel
SimonF (Author) Posted July 20
5 hours ago, Lountz said: So I'm having an issue trying to grab a snapshot of my Windows 11 VM. Is it possible to grab a snapshot of a VM using GPU Passthrough? …
I will look to see what is possible, but it may be dependent on driver and other component support. It should work if you don't do the memory dump, but I have not tested it.
SimonF (Author) Posted July 24
On 7/19/2024 at 8:29 PM, dja said: Yup. Sorry, I do see you mentioned that... maybe that will be supported at some point?
There has been a merge of code, but I have not tested the new version: https://gitlab.com/virtio-fs/virtiofsd/-/issues/136
dja Posted July 25
23 hours ago, SimonF said: There has been a merge of code, but have not tested the new version …
Is there any way for me to test this?
SimonF (Author) Posted July 25
3 hours ago, dja said: Is there any way for me to test this?
I have not tried any of the new options yet. You can download the new binary from the GitLab releases and replace the current file, /usr/bin/virtiofsd.

Additional options can be passed to virtiofsd; by default none are, other than the variables passed from libvirt. I add the following:

--syslog
--inode-file-handles=mandatory
--announce-submounts

You can create a file called /etc/libvirt/virtiofsd.opt with options; each option should be on a new line. You may need to check any details regarding snapshots in the issue link. The new migration options are documented as follows:

--migration-mode <MIGRATION_MODE>
Defines how to perform migration, i.e. how to represent the internal state to the destination, and how to obtain that representation.
- find-paths: Iterate through the shared directory (exhaustive search) to find paths for all inodes indexed and opened by the guest, and transfer these paths to the destination.
This parameter is ignored on the destination side. [default: find-paths]

--migration-on-error <MIGRATION_ON_ERROR>
Controls how to respond to errors during migration. If any inode turns out not to be migrateable (either the source cannot serialize it, or the destination cannot open the serialized representation), the destination can react in different ways:
- abort: Whenever any error occurs, return a hard error to the vhost-user front-end (e.g. QEMU), aborting migration.
- guest-error: Let migration finish, but the guest will be unable to access any of the affected inodes, receiving only errors.
This parameter is ignored on the source side. [default: abort]

--migration-verify-handles
Ensure that the migration destination opens the very same inodes as the source (only works if source and destination use the same shared directory on the same filesystem). This option makes the source attach the respective file handle to each inode transferred during migration. Once the destination has (re-)opened the inode, it will generate the file handle on its end and compare, ensuring that it has opened the very same inode. (File handles are per-filesystem unique identifiers for inodes that, besides the inode ID, also include a generation ID to protect against inode ID reuse.) Using this option protects against external parties renaming or replacing inodes while migration is ongoing, which, without this option, can lead to data loss or corruption, so it should always be used when other processes besides virtiofsd have write access to the shared directory. However, again, it only works if both source and destination use the same shared directory. This parameter is ignored on the destination side.

--migration-confirm-paths
Double-check the identity of inodes right before switching over to the destination, potentially making migration more resilient when third parties have write access to the shared directory. When representing migrated inodes using their paths relative to the shared directory, double-check during switch-over to the destination that each path still matches the respective inode, and on mismatch, try to correct it via the respective symlink in /proc/self/fd. Because this option requires accessing each inode indexed or opened by the guest, it can prolong the switch-over phase of migration (when both source and destination are paused) for an indeterminate amount of time. This parameter is ignored on the destination side.
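Put together, a /etc/libvirt/virtiofsd.opt built from the options above might look like the fragment below, one option per line. The last line is an assumption on my part: it is only meaningful once a binary with migration support is installed, and it is shown with its documented default value.

```
--syslog
--inode-file-handles=mandatory
--announce-submounts
--migration-mode=find-paths
```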
sobesjm Posted August 7
On 7/3/2024 at 12:52 PM, SimonF said: No it is not possible. Calculation is based on full disk and + memory size if VM is running. …
So a VM with a primary and a secondary drive will need free capacity equal to drive1 + drive2 + memory in order to take the snap?
SimonF (Author) Posted August 7
1 hour ago, sobesjm said: So a vm with a primary and secondary drive will need free capacity equal to drive1 + drive2 + memory in order to take the snap?
Yes, that is correct, as each overlay could grow to the original disk size. You can uncheck the free-space check if you don't think the snapshot will exceed the free space.
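That check can be approximated with the Python standard library: shutil.disk_usage reports the free bytes on the filesystem holding a path. The helper name and parameters below are hypothetical, sketching the drive1 + drive2 + memory rule rather than the GUI's actual code.

```python
import shutil

# Hedged sketch approximating the free-space check for a multi-disk VM:
# the sum of the full virtual disk sizes, plus memory if the VM is
# running, must fit on the filesystem that will hold the snapshot.
# Unchecking the GUI's free-space check corresponds to skipping this.

def can_snapshot(target_dir, disk_sizes_bytes, memory_bytes=0, running=False):
    """True if the worst-case snapshot size fits under target_dir."""
    needed = sum(disk_sizes_bytes) + (memory_bytes if running else 0)
    free = shutil.disk_usage(target_dir).free
    return free >= needed
```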
bmartino1 Posted August 18 (edited)
So I'm playing around with the Unraid VM system in v7 beta 2. I have found a few bugs here and there from general use and would like to know more about the snapshot system.

One small bug is removing older VMs. This was a VM where I chose to remove the VM and its disk, but I can't remove the folder.

Another small bug is that a new VM created with vdisk 1 doesn't build the vdisk properly. Unraid makes a 500 MB vdisk and the Ubuntu installer can't find the disk (missing block device), but if I copy over a premade 20 GB vdisk from Unraid 6, the same Ubuntu installer can see and use that vdisk.

Also, from testing: if, for example, the libvirt.img breaks, moves, or is lost, any VM with snapshots is lost too. There may be a way to fix this, I don't know, but it is a concern when moving a VM from one Unraid to another, unless we merge the snapshot back into vdisk 1.

From my experience I need to make two snapshots for a backup. If I want to revert to the base snapshot during testing, I click the earlier point in the chain and click revert snapshot. Great! But I can't remove an older snapshot, as this would break the chain needed to boot and keep my current work in the latest snapshot (hence two: one for the actual backup, one for the running new code).

So my question is: what do block commit and block pull do, and what is the use case? How would I merge my current chain into one disk image now that I have a finished project in testing?

On a side note, a VM feature request: since we have an excellent implementation of ejecting disks, it would be nice to add a boot-order option to the web GUI to change booting from the vdisk file over the disk drive file.

Thoughts, concerns, or am I misunderstanding how snapshots in VMs are supposed to work? Bug report added here:
Edited August 18 by bmartino1 (bug report)
bmartino1 Posted August 18 (edited)
These snapshots have been made with the default options (the machine is powered off). I do have the ZFS Master plugin and the snapshot plugin installed; I'm not sure if these are causing problems with snapshots and the other issues in the v7 beta.
Edited August 18 by bmartino1