danioj

unraid-autovmbackup: automate backup of virtual machines in unRAID - v0.4


## USE AT YOUR OWN RISK ##

# Live backup is experimental, so please don't test/use it on a daily-use Unraid box for now. That being said, I use this on my daily Unraid box without problems.

 

In order for this to work, make sure:

When using enable_backup_running_vm="1" you HAVE TO change your vdisk path in the VM's XML to point to /mnt/cache/somefolder or /mnt/disk1/somefolder instead of /mnt/user/somefolder.

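For example (folder and file names here are placeholders), the disk source line in the VM's XML changes from the FUSE user-share path to the direct cache path:

```xml
<!-- before: user-share path, not usable for live backup -->
<source file='/mnt/user/domains/MyVM/vdisk1.img'/>

<!-- after: direct cache path -->
<source file='/mnt/cache/domains/MyVM/vdisk1.img'/>
```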

 

A new script is attached. What has changed?

 

# - Fixed a case-sensitivity bug where an .iso file was flagged with a warning in the script log when the filename was xxx.ISO; only lowercase .iso was accepted (lines 1180 and 1921)
# - Experimental: added live backup (backup of a running VM). The script also checks whether the guest agent is installed inside the VM and, if so, adds an extra parameter to make sure we have a consistent disk state before taking the snapshot and backup. This backup does NOT include RAM state, so when the VM is restored it will be powered off, in the same state as if it had been forcibly powered off. The guest agent allows the guest to flush its I/O into a stable state before the snapshot is created, which allows use of the snapshot without having to perform an fsck or losing partial database transactions. When the guest agent is not installed, the script still creates a snapshot, which is still considered a safe snapshot. Can be enabled or disabled with enable_backup_running_vm="1" or "0"
# - Added quick settings at the top of the script for easy changes through the Unraid web GUI
# - Added comments in the script to show what has changed: look for "### Added:" and "### edited:"
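The live-backup flow described above boils down to a snapshot / copy / commit cycle. A minimal sketch, where the exact command forms and the "vda" disk target are assumptions rather than the attached script's verbatim code:

```shell
# Sketch of the live-backup cycle (assumed commands; the real script adds
# error handling, guest-agent detection, and path checks).
live_backup() {
  vm="$1"; disk="$2"; dest="$3"
  # External, disk-only snapshot; --quiesce needs the guest agent inside the
  # VM so the guest can flush its I/O to a consistent state first.
  virsh snapshot-create-as "$vm" snap.img --disk-only --atomic --quiesce --no-metadata
  # Writes now go to the .snap.img overlay, so the frozen base image can be
  # copied safely while the VM keeps running.
  rsync -a "$disk" "$dest"/
  # Merge the overlay back into the base image, pivot the VM onto it, and
  # remove the leftover snapshot file.
  virsh blockcommit "$vm" vda --active --wait --pivot
  rm -f "$disk.snap.img"
}
```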

 

All features and functionality from the original script are intact, so if you set enable_backup_running_vm="0" it is the same script as https://github.com/JTok/unraid-vmbackup v1.1.4 (2018-05-19).

 

## USE AT YOUR OWN RISK ##

 

 

 

 

Autovmbackup.sh.txt

On 2/24/2019 at 8:42 AM, Dikkekop said:

## USE AT YOUR OWN RISK ##

[…]

Autovmbackup.sh.txt

How does one install the guest agent?

 

EDIT: Never mind, found it on the virtio CD-ROM.

Edited by IamSpartacus


So, I just realised the pinned link at the start of this thread is 3 years old, as is Spaceinvader's video!

Nevertheless, this is all still good stuff.

 

I am, however, having issues getting it to back up a Linux VM.

It's loading its image from a converted image for my HassIO server.

The log output is -

2019-03-12 12:22 information: started attempt to backup HassIO to /mnt/user/AutoBackups
2019-03-12 12:22 warning: HassIO can not be found on the system. skipping vm.
2019-03-12 12:22 information: finished attempt to backup HassIO to /mnt/user/AutoBackups.

This is what the config looks like... any ideas why it can't find the VM?

 


 

 


This is what the config looks like... any ideas why it cant find the VM?


Looks like your VM is named HassOI in unraid template, not HassIO.



11 hours ago, Jorgen said:

 


Looks like your VM is named HassOI in unraid template, not HassIO.



 

Good point well made! $£$%

 

 

On 2/24/2019 at 7:42 AM, Dikkekop said:

I see delta sync is enabled by default. If I make 7 backups, then manually delete the first 6, will the 7th contain enough information to do a restore?

 

Very happy the original script has been updated/modified to allow vdisk exclusion. Thank you!


Could someone please explain the purpose of the OVMF nvram folder, and how I can tell which file in the nvram folder belongs to which VM? The names look like "c6021202-41d7-d4e5-01b1-a222c8fe2f19_VARS-pure-efi.fd".

-thanks


@tll002 The xxx-efi.fd files are the VM-specific UEFI/BIOS variable files; you can find a matching entry in each VM's XML.
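Each VM's XML (under /etc/libvirt/qemu/ on Unraid) carries an <nvram> element naming its VARS file, so you can map files to VMs by grepping for it. A small illustration with a sample fragment (on a live box you would grep the real XML files instead):

```shell
# On the host you could run:  grep -H '<nvram>' /etc/libvirt/qemu/*.xml
# Sample XML fragment for demonstration purposes:
xml="<os>
  <nvram>/etc/libvirt/qemu/nvram/c6021202-41d7-d4e5-01b1-a222c8fe2f19_VARS-pure-efi.fd</nvram>
</os>"
printf '%s\n' "$xml" | grep -o '<nvram>.*</nvram>'
```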


You guys are free to use my script. It's just my personal one, but it makes a good backup of the three main areas you need for a solid VM backup: the XML, the NVRAM (for UEFI/OVMF BIOS), and of course the vdisk image. I am not concerned with things like pruning old backups, so there is no logic for that. But it does check whether the VM is running; if it is, it shuts it down first, backs it up, then starts it up again. If it was already shut down, it backs it up and keeps going.

 

The logic just scans for names in /mnt/user/domains/, so it will back up every VM you have, one by one.

 

I use it with @Squid 's excellent user scripts plugin, and it works great.

 

Anyway, have at it if you wish; just set where you want your backup dir to be first: "backuplocation".

 

#!/bin/bash
# VM Backup

# Change the below to your specific values
backuplocation="/mnt/user/Backups/VM-Backup"

# Static variables
datestamp=" - $(date '+%Y_%m_%d_%I_%M_%p')"

# Execution
# Set temporary IFS to handle spaces in VM names
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

# Copy the XML, NVRAM, and vdisk(s) for a single VM
backup_vm_files() {
    D="$1"
    echo "Saving $D xml file"
    rsync -a "/etc/libvirt/qemu/$D.xml" "$backuplocation/$D$datestamp/xml/"
    if [ -d /etc/libvirt/qemu/nvram ]; then
        echo "Found OVMF nvram directory, blindly backing up (verify with XML)"
        rsync -a /etc/libvirt/qemu/nvram/* "$backuplocation/$D$datestamp/nvram/"
    fi
    echo "Saving $D vdisk image file"
    zip -jr "$backuplocation/$D$datestamp/vdisk.zip" "/mnt/user/domains/$D/"*
    chmod -R 777 "$backuplocation/$D$datestamp"
}

# Discover folders in the domains share, then back each one up
for D in $(ls /mnt/user/domains); do
    if [ ! -d "$backuplocation/$D$datestamp" ]; then
        echo "making folder for todays date $datestamp"
        mkdir -vp "$backuplocation/$D$datestamp"
    else
        echo "$backuplocation/$D$datestamp exists, continuing."
    fi
    # Stop the VM first if it's running, back it up, then start it again
    if [ "$(virsh domstate "$D")" = "running" ]; then
        echo "VM $D running, shutting down now"
        virsh shutdown "$D"
        echo "Waiting 20 seconds for $D to shutdown gracefully"; sleep 20
        backup_vm_files "$D"
        echo "Starting $D"
        virsh start "$D"
    else
        echo "$D was already stopped, backing up anyway"
        backup_vm_files "$D"
    fi
done

# Restore IFS to original value
IFS=$SAVEIFS
echo "Backups are complete"
exit

 


@tll002 The reason is the space in the name of your Unifi VM.

 

Edit:

Rename your VM to "Unifi_Video" in both the template and the script, and it should work. By the way, what is the "\n" part at the end of the name for? That might also be the cause of your issue.

Edited by bastl

@bastl like I stated in my original post, that was a test. I have updated the picture, which now shows that the "Unifi Video" VM gets backed up, but the "Uttorent_W" and "Uttorent_U" VMs still don't.
-Thanks for the help, any more suggestions?

The script is case sensitive; you have the second letter "t" capitalised the wrong way round between the script and your utorrent VM names.



On 2/24/2019 at 2:42 PM, Dikkekop said:

## USE AT YOUR OWN RISK ##

Autovmbackup.sh.txt

Is there a reason you don't have this on GitHub?

And a question: you do delta copying, but with the timestamps in the resulting copies, how does the delta work (which files are compared)?

Either way, thanks for the script. Someone should just update the TS, i.e. the original script on GitHub.

to /mnt/user/backup/vm
2019-05-24 17:32 information: debz can be found on the system. attempting backup.
2019-05-24 17:32 information: /mnt/user/backup/vm/debz exists. continuing.
2019-05-24 17:32 information: debz is running. can_backup_vm set to y enable_backup_running_vm is set to 1 debz will not shutdown
2019-05-24 17:32 information: debz is running. can_backup_vm set to y
2019-05-24 17:32 action: actually_copy_files is 1.
2019-05-24 17:32 action: can_backup_vm flag is y. starting backup of debz configuration, nvram, and vdisk(s).
sending incremental file list
debz.xml

sent 7,190 bytes received 35 bytes 14,450.00 bytes/sec
total size is 7,096 speedup is 0.98
2019-05-24 17:32 information: backup of debz xml configuration to /mnt/user/backup/vm/debz/20190524_1732_debz.xml complete.
debz.xml:5: namespace warning : xmlns: URI unraid is not absolute

^
sending incremental file list
721212e9-ebd2-5da3-4140-6a299e24765f_VARS-pure-efi.fd

sent 131,241 bytes received 35 bytes 262,552.00 bytes/sec
total size is 131,072 speedup is 1.00
2019-05-24 17:32 information: backup of debz nvram to /mnt/user/backup/vm/debz/20190524_1732_721212e9-ebd2-5da3-4140-6a299e24765f_VARS-pure-efi.fd complete.
2019-05-24 17:32 qemu-agent is installed and running, adding --quiesce parameter to make sure we have a consistent disk state before snapshot and backup. Allows the guest to flush its I/O into a stable state before the snapshot is created, which allows use of the snapshot without having to perform a fsck or losing partial database transactions.
Domain snapshot snap.img created for debz
'/mnt/cache/domains/debz/vdisk1.img.snap.img' -> '/mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img'
2019-05-24 17:32 information: backup of vdisk1.img.snap.img vdisk to /mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img complete.
Block commit: [100 %]
Successfully pivoted
sleeping for 5 seconds after merging snapshot to vdisk before deleting snapshot
Removing snapshot snap.img
2019-05-24 17:32 information: /mnt/user/isos/debian-9.8.0-amd64-netinst.iso of debz is an iso not a vdisk. skipping.
2019-05-24 17:32 action: vm_original_state is running. starting debz.
debz is running and enable_backup_running_vm = 1
2019-05-24 17:32 information: backup of debz to /mnt/user/backup/vm/debz completed.
2019-05-24 17:32 information: cleaning out backups older than 8 days in location ONLY if newer files exist in /mnt/user/backup/vm/debz/
2019-05-24 17:32 information: cleaning out backups over 9 in location /mnt/user/backup/vm/debz/
find: '/mnt/user/backup/vm/debz/*.qcow2': No such file or directory
find: '/mnt/user/backup/vm/debz/*.tar.gz': No such file or directory
2019-05-24 17:32 information: finished attempt to backup debz to /mnt/user/backup/vm.
2019-05-24 17:32 information: cleaning out logs over 1.
2019-05-24 17:32 information: removed '/mnt/user/backup/vm//20190524_1727_unraid-vmbackup.log'
2019-05-24 17:32 information: cleaning out error logs over 10.
find: '/mnt/user/backup/vm//*unraid-vmbackup_error.log': No such file or directory
2019-05-24 17:32 Stop logging to log file.

 

Edited by fluisterben

On 3/4/2019 at 8:57 PM, IamSpartacus said:

 

How does one install the guest agent?

 

# apt install qemu-guest-agent

for Debian/Ubuntu etc.
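The agent also needs a virtio-serial channel in the VM's XML so the host can reach it; Unraid's VM templates normally include one already. It looks like this (config sketch):

```xml
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```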

On 2/24/2019 at 2:42 PM, Dikkekop said:

## USE AT YOUR OWN RISK ##

[…]

Autovmbackup.sh.txt

My vdisk1.img keeps getting renamed to vdisk1.img.snap.img in its VM config, so at the next boot it doesn't boot because the image name is wrong. Something in your script is not renaming it back correctly, or doesn't do it in the right order.

Edited by fluisterben
still not working

On 5/27/2019 at 1:00 AM, fluisterben said:

My vdisk1.img in its VM config keeps getting renamed to /vdisk1.img.snap.img so at next boot it doesn't boot because the img name is wrong. Something in your script is not renaming correctly, or doesn't do it in the right order..

During the backup, that path is temporarily edited to point to the "temp" snapshot file. Changes in your VM are written to that "temp" file while the backup runs. When the backup is finished, it should point back to vdisk1.img. The moment it does this is when the script performs "virsh blockcommit" (in your log you will see this where it says: Block commit: [100 %] Successfully pivoted).

 

Can you replace the script with this one and try again: https://github.com/thies88/unraid-vmbackup/blob/master/script

 

I added two new points where the script checks for the .snap.img files and deletes them: one at the start of the backup and one after. I will look at editing the .xml file some other time. Maybe we can build a check that scans the XML for the .snap.img value and removes it.

 

Can you also upload a logfile (the output of the script)?
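A minimal sketch of such an XML check (the sed expression and the sample line are assumptions for illustration, not the script's actual code):

```shell
# Hypothetical cleanup: if a VM's XML still points at the snapshot file,
# strip the ".snap.img" suffix so the VM boots from the base vdisk again.
line="<source file='/mnt/cache/domains/debz/vdisk1.img.snap.img'/>"
fixed=$(printf '%s\n' "$line" | sed "s/\.snap\.img'/'/")
printf '%s\n' "$fixed"
```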

On 5/24/2019 at 5:21 PM, fluisterben said:

Is there a reason you don't have this on github ?

[…]
 

Domain snapshot snap.img created for debz '/mnt/cache/domains/debz/vdisk1.img.snap.img' -> '/mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img' 2019-05-24 17:32 information: backup of vdisk1.img.snap.img vdisk to /mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img complete.

 

Please edit your XML file and remove .snap.img (you are creating a backup of your snapshot). Check that your VM boots correctly, then try again with the provided script.


Does this have a way to limit the number of files kept?

(Or days, or any kind of limit that will prevent the backup disks from overflowing.)

 

 

On 10/8/2019 at 11:05 AM, NLS said:

Does this have a way to limit the number of files kept?

(or days or any kind of limit that will prevent the backup disks to overflow)

 

 

YES:

# default is 0. set this to the number of days backups should be kept. 0 means indefinitely.
number_of_days_to_keep_backups="0"
# default is 0. set this to the number of backups that should be kept. 0 means infinitely.
# WARNING: If VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.).
number_of_backups_to_keep="1"
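Day-based pruning like this typically comes down to a find with -mtime; a self-contained sketch in a throwaway temp directory (assumed to mirror the script's approach, not its exact command):

```shell
# Delete backup files older than N days; newer ones survive.
days=8
dir=$(mktemp -d)
touch -d "10 days ago" "$dir/20190101_old_vdisk1.img"
touch "$dir/20190524_new_vdisk1.img"
find "$dir" -type f -mtime +"$days" -delete
ls "$dir"
```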


Just got this running today. Great script!

 

One question: one of my VMs has a .vmdk extension instead of a .img, so it won't back up...

 

warning: /mnt/cache/vms/librenms/librenms.vmdk of LibreNMS is not a vdisk. skipping.

 

 

Is there a way to include different extensions?

4 hours ago, rorton said:

Is there a way to include different extensions?

A quick fix would be to rename the vdisk's file extension to .img and change the extension in the XML too. You can name the vdisk file however you like, as long as the format for the vdisk is set up correctly in the XML.

 

example:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
      <source file='/mnt/user/VMs/Mint/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

I can name the vdisk whatever I like as long as the type (in my case qcow2) is the correct one, and the VM won't have any issues. I can even call it vdisk1.vmdk and it will work. Unraid only checks whether the declared type matches the file you set up.
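To see what format a vdisk really is, check its header rather than its extension; on the host, `qemu-img info <file>` reports it. As a quick illustration that the filename is irrelevant (fabricated header bytes, purely for demonstration):

```shell
# qcow2 images always start with the magic bytes "QFI\xfb", whatever the
# filename says. Write that magic into a .vmdk-named temp file:
img=$(mktemp --suffix=.vmdk)
printf 'QFI\xfb' > "$img"
head -c 3 "$img"    # prints: QFI
```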


Ah, perfect, thanks.

 

so 

 

<devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='vmdk' cache='writeback'/>
      <source file='/mnt/cache/vms/librenms/librenms.vmdk'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>

 

So do I need to change my type to something other than vmdk? Or leave that, and just rename the file to have a .img extension and change the source file path above to match?

