IamSpartacus Posted February 2, 2018
How are others backing up and/or taking live snapshots of your KVM VMs for easy restore on the same or other KVM hosts?
Guest Posted February 2, 2018
SpaceInvaderOne did a video on a user script that backs up VMs. I think if your vdisks are qcow2 you can snapshot the VM in Linux virt-manager. Personally, I run the user script backup once a month.
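For anyone who wants the virt-manager snapshot route from the command line instead, libvirt's virsh can take and revert internal snapshots of qcow2 vdisks. A minimal sketch, assuming a VM named Win10 (the VM name and snapshot name are placeholders):

```shell
# Internal snapshots only work when the vdisk is qcow2; raw .img vdisks will fail here.
virsh snapshot-create-as Win10 pre-update --description "before Windows update"

# List existing snapshots, then roll back when needed
virsh snapshot-list Win10
virsh snapshot-revert Win10 pre-update
```

Reverting discards everything written to the vdisk after the snapshot was taken, so this is a rollback tool rather than a backup; the vdisk file itself still needs to be copied somewhere for a real backup.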
JorgeB Posted February 2, 2018
btrfs snapshots, then btrfs send/receive to another disk.
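Stripped of scheduling and error checking, that approach is just two commands per run. A hedged sketch, with /mnt/cache/VMs as the source subvolume and /mnt/disks/backup as the target disk (both paths and snapshot names are placeholders):

```shell
# First run: seed the backup disk with a full copy of a read-only snapshot
btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_20180202
btrfs send /mnt/cache/VMs_20180202 | btrfs receive /mnt/disks/backup

# Later runs: incremental send, using the previous snapshot as the parent (-p),
# so only the changed blocks cross to the backup disk
btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_20180203
btrfs send -p /mnt/cache/VMs_20180202 /mnt/cache/VMs_20180203 | btrfs receive /mnt/disks/backup
```

The -p incremental only works while the parent snapshot still exists, unmodified, on both the source and the destination, which is why the scripts later in this thread are careful about which old snapshots they delete.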
IamSpartacus Posted February 2, 2018 (Author)
5 minutes ago, johnnie.black said: btrfs snapshots and then btrfs send/receive to another disk.
I assume this is all done with scripts?
JorgeB Posted February 2, 2018
Quote: I assume this is all done with scripts?
Yep.
al_uk Posted February 14, 2018 (edited)
Hi Johnnie, I'm thinking of trying the btrfs send and receive, probably to an array drive. I'll upgrade from 6.3.5 to 6.4.1 first. Are there any more gotchas or issues since your guide below, from a year ago? I'm thinking of having two subvolumes on the target disk. One will be updated daily with the VMs running; the second will be updated weekly after a scripted VM shutdown. The weekly subvolumes will then be backed up offsite using CrashPlan. If either the source or target drive is encrypted, does that cause any additional problems? The source will be the SSD cache pool. Cheers, Al
Edited February 14, 2018 by al_uk
JorgeB Posted February 15, 2018
17 minutes ago, al_uk said: Are there any more gotchas or issues since your guide below, from a year ago?
No, I'm still using the same method.
17 minutes ago, al_uk said: If either the source or target drive is encrypted, does that cause any additional problems?
I don't use encryption, but I don't see a reason why it would cause issues.
al_uk Posted February 17, 2018
I've done some initial testing on this and it looks good. Johnnie, would you be willing to share your scripts, please, so I can use them as a starting point to get the snapshot rotation etc. in place, along with any error checking you've put in? Thanks.
al_uk Posted February 17, 2018 (edited)
On 07/01/2017 at 12:49 PM, johnnie.black said: The subvolume will look like a normal folder, so if it's a new VM create a new folder inside the subvolume with the vdisk (e.g. /mnt/cache/VMs/Win10/vdisk1.img); you can also move an existing vdisk there and edit the VM template.
I'm moving a 50GB VM across to the new subvolume called VMs on the same 2 x 1TB SSD cache pool, using MC RenMov, from /mnt/cache/appdata/KVM/vm1/vdisk.img to /mnt/cache/appdata/VMs/vm1/vdisk.img. The copy is taking a long time: it started at 200MB/s and quickly dropped to 40MB/s, and everything else on the server has slowed to a crawl. Is that normal?
Edited February 17, 2018 by al_uk
JorgeB Posted February 18, 2018
12 hours ago, al_uk said: Is that normal?
Yes. For all purposes the subvolume is a separate filesystem, so although you're doing a move it acts like a copy on the same disk.
13 hours ago, al_uk said: Johnnie, would you be willing to share your scripts, please, so I can use them as a starting point to get the snapshot rotation etc. in place, along with any error checking you've put in?
I know nothing about scripts; I can Google and take this or that that works for me, but the script I use is very crude. It works for me because I know its limitations, e.g. it assumes you already have two existing snapshots on the first run, or it won't work correctly when deleting the older snapshot. Still, I don't mind posting it; it might give some ideas, and a good laugh, to the forum members who do know how to write scripts.

I use the User Scripts plugin with two very similar scripts: one runs on a daily schedule and takes a snapshot with the VMs running, and a very similar one that I run manually, usually once a week, to take a snapshot with all the VMs shut down. Ideally, if there's a problem I would restore to an offline snapshot, but the online snapshots give more options in case I need something more recent.

#description=
#backgroundOnly=true

cd /mnt/cache
sd=$(echo VMs_Off* | awk '{print $1}')
ps=$(echo VMs_Off* | awk '{print $2}')

if [ "$ps" == "VMs_Offline_$(date '+%Y%m%d')" ]
then
  echo "There's already a snapshot from today"
else
  btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_Offline_$(date '+%Y%m%d')
  sync
  btrfs send -p /mnt/cache/$ps /mnt/cache/VMs_Offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/backup
  if [[ $? -eq 0 ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i normal -s "Send/Receive complete"
    btrfs sub del /mnt/cache/$sd
    btrfs sub del /mnt/disks/backup/$sd
  else
    /usr/local/emhttp/webGui/scripts/notify -i warning -s "Send/Receive failed"
  fi
fi
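The script above only covers the backup direction. Restoring isn't shown anywhere in this thread, but with this layout it can be sketched roughly as follows (the snapshot name and paths are assumptions matching the script's naming scheme); shut the VMs down first:

```shell
# Drop the damaged live subvolume...
btrfs subvolume delete /mnt/cache/VMs

# ...then recreate it from a backup snapshot. Omitting -r makes the
# new subvolume writable, ready for the VMs to use again.
btrfs subvolume snapshot /mnt/disks/backup/VMs_Offline_20180211 /mnt/cache/VMs
```

If the cache pool itself was lost, the snapshot would first have to be sent back to the rebuilt cache with btrfs send/receive before a writable snapshot can be taken from it locally.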
al_uk Posted February 18, 2018
Thanks, much appreciated. I'll use these to create my own bodged script :-) I'm more worried about the speed of my cache; I'd expect to be getting a copy rate of a few hundred MB/s. I'll create a separate thread for it.
al_uk Posted February 27, 2018
I have this working now to an unassigned drive in my main server: nightly snapshots with the VMs running, and weekly snapshots using a modified version of your script. It shuts down the VMs, snapshots them, restarts them, and then sends the snapshot to the destination drive. Thanks for your help, JB.
Has anyone got btrfs send and receive working to another unRAID server mounted via NFS through Unassigned Devices? A search suggests that NFS won't work, and that the other server needs to be connected via SSH, but I haven't worked out how to do that yet.
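On the NFS question: btrfs receive has to run on the machine that actually hosts the target btrfs filesystem, which is why receiving into an NFS mount won't work. The usual workaround is to pipe the send stream over SSH. A hedged sketch, with placeholder snapshot names and destination path and a hypothetical server called backupserver; key-based SSH login to the other server must already be set up:

```shell
# Incremental send piped to btrfs receive running on the remote unRAID box
btrfs send -p /mnt/cache/snapshots/VMs_20180220 /mnt/cache/snapshots/VMs_20180227 \
  | ssh root@backupserver "btrfs receive /mnt/disk1/snapshots"
```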
IamSpartacus Posted February 27, 2018 (Author)
1 hour ago, al_uk said: I have this working now to an unassigned drive in my main server ... It shuts down the VMs, snapshots them, restarts them and then sends the snapshot to the destination drive.
Would you mind sharing your script (either here or via PM)? This is exactly what I'm looking to do, but I haven't had the time yet to research the solution.
al_uk Posted February 27, 2018
Quote: I know nothing about scripts ... the script I use is very crude, it works for me because I know its limitations ...
I'll repeat JB's words because they apply to me as well! The script needs the first two snapshots already created. It shuts down any running VMs and tests that they are shut down (this bit was cribbed from somewhere else); it does not force a shutdown. It starts back up any VMs set to "autostart". It ran for the first time on Sunday, and I haven't had time to check that the snapshots created are correct. There's very little error checking in it.

cd /mnt/cache/appdata/snapshots
sd=$(echo VMs_Off* | awk '{print $1}')
ps=$(echo VMs_Off* | awk '{print $2}')

if [ "$ps" == "VMs_Offline_$(date '+%Y%m%d')" ]
then
  echo "There's already a snapshot from today"
else
  for i in `virsh list | grep running | awk '{print $2}'`; do virsh shutdown $i; done

  # Wait until all domains are shut down or the timeout has been reached.
  END_TIME=$(date -d "300 seconds" +%s)
  while [ $(date +%s) -lt $END_TIME ]; do
    # Break while loop when no domains are left.
    test -z "`virsh list | grep running | awk '{print $2}'`" && break
    # Wait a little, we don't want to DoS libvirt.
    sleep 1
  done
  echo "shutdown completed"
  virsh list | grep running | awk '{print $2}'

  btrfs sub snap -r /mnt/cache/appdata/VMs /mnt/cache/appdata/snapshots/VMs_Offline_$(date '+%Y%m%d')
  for i in `virsh list --all --autostart | awk '{print $2}' | grep -v Name`; do virsh start $i; done
  sync
  btrfs send -p /mnt/cache/appdata/snapshots/$ps /mnt/cache/appdata/snapshots/VMs_Offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/unRAIDBackup_BTRFS/snapshots
  if [[ $? -eq 0 ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i normal -s "Send/Receive complete"
    btrfs sub del /mnt/cache/appdata/snapshots/$sd
    #btrfs sub del /mnt/disks/unRAIDBackup_BTRFS/snapshots/$sd
  else
    /usr/local/emhttp/webGui/scripts/notify -i warning -s "Send/Receive failed"
  fi
fi
bphillips330 Posted July 27, 2018 (edited)
I know this is an old post. I have two VMs running on my cache. I don't think they are set up as qcow2, just .img. Is there a way I can check whether they are qcow2? Can I convert them to qcow2, or should I just reinstall? They are a Windows 10 VM and an Arch Linux VM. I found this reference from another post. Does this still work?
Edited July 27, 2018 by bphillips330
scorcho99 Posted August 7, 2018
.img is the default (only?) option when creating VMs through unRAID's web manager. You can convert them to qcow2. I would; it's a lot faster than reinstalling Windows.
btrfs snapshots are another level of snapshots, though:
- btrfs snapshots the filesystem that the virtual disks reside on
- qcow2 is a virtual disk format that supports snapshotting, among other things
- VMs themselves can do snapshots that combine information about the VM with a disk snapshot (like qcow2)
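Both the check and the conversion bphillips330 asked about can be done with qemu-img while the VM is shut down. A sketch, with placeholder vdisk paths:

```shell
# "file format: raw" means a plain .img; "file format: qcow2" supports internal snapshots
qemu-img info /mnt/user/domains/Win10/vdisk1.img

# Convert to qcow2 (-p shows progress); this writes a new file and leaves the original untouched
qemu-img convert -p -O qcow2 /mnt/user/domains/Win10/vdisk1.img /mnt/user/domains/Win10/vdisk1.qcow2
```

Afterwards, edit the VM template to point at the new qcow2 file (and set the vdisk type to qcow2) before deleting the old .img.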
boxer74 Posted August 7, 2018
I think for the above scripts to work, your VM images need to be stored on btrfs subvolumes. Correct?
JorgeB Posted August 7, 2018
4 minutes ago, aberg83 said: I think for the above scripts to work, your VM images need to be stored on btrfs subvolumes. Correct?
Correct.
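Setting up that subvolume is a one-time step. A sketch, assuming the cache pool is btrfs and using placeholder paths; note that, as discussed earlier in the thread, a subvolume acts like a separate filesystem, so moving an existing vdisk into it copies the data rather than just renaming it:

```shell
# Create the subvolume that the snapshot scripts will operate on
btrfs subvolume create /mnt/cache/VMs

# Move an existing VM folder in (slow: data is copied across the subvolume boundary)
mv /mnt/cache/domains/Win10 /mnt/cache/VMs/

# Confirm the new subvolume exists
btrfs subvolume list /mnt/cache
```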
Simora Posted January 7, 2019
I don't use btrfs for VM vdisk storage, so I wrote a script with parts of the posted scripts that will stop VMs and rsync directories rather than btrfs send/receive. Hopefully this helps anyone else trying to achieve the same.

#!/bin/bash
#
# This script will stop all Unraid VMs and rsync the specified src directories to
# the specified dst directory. All src directories will be base64 encoded with
# hostname and directory path to eliminate potential naming collisions and
# the need for character escapes. This will complicate restoration of
# backup data. The following illustrates what will be written and how to decode
# the base64 string.
#
# # echo $SRC
# /mnt/disks/src/domains/
# # echo $DST
# /mnt/user0/Backup/domains
# # hostname -f
# localhost
# # pwd
# /mnt/user0/Backup/domains
# # ls
# bG9jYWxob3N0Oi9tbnQvZGlza3Mvc3JjL2RvbWFpbnMvCg==/
# # echo "bG9jYWxob3N0Oi9tbnQvZGlza3Mvc3JjL2RvbWFpbnMvCg==" | base64 --decode
# localhost:/mnt/disks/src/domains/
#

# Array of source directories with trailing forward slash
declare -a SRC=(
  "/mnt/disks/src/domains/"
)

# Destination directory without trailing forward slash
DST="/mnt/user0/Backup/domains"

# Timeout in seconds for waiting for VMs to shut down before failing
TIMEOUT=300

# Stop all VMs
STOP() {
  for i in `virsh list | grep running | awk '{print $2}'`; do
    virsh shutdown $i
  done
}

# Start all VMs flagged with autostart
START() {
  for i in `virsh list --all --autostart | awk '{print $2}' | grep -v Name`; do
    virsh start $i
  done
}

# Wait for VMs to shut down
WAIT() {
  TIME=$(date -d "$TIMEOUT seconds" +%s)
  while [ $(date +%s) -lt $TIME ]; do
    # Break while loop when no domains are left.
    test -z "`virsh list | grep running | awk '{print $2}'`" && break
    # Wait a little, we don't want to DoS libvirt.
    sleep 1
  done
}

RSYNC() {
  rsync -avhrW "$1" "$2"
  STATUS=$?
}

QUIT() {
  exit
}

NOTIFY() {
  /usr/local/emhttp/webGui/scripts/notify \
    -i "$1" \
    -e "VM Backup" \
    -s "-- VM Backup --" \
    -d "$2"
}

NOTIFY "normal" "Beginning VM Backup"

STOP
WAIT

if [[ $(virsh list | grep running | awk '{print $2}') -ne 0 ]]; then
  NOTIFY "alert" "VMs Failed to Shutdown. Restarting VMs and Exiting."
  START
  QUIT
fi

for i in "${SRC[@]}"; do
  RSYNC "$i" "$DST/$(echo `hostname -f`:$i | base64)"
  if ! [[ $STATUS -eq 0 ]]; then
    NOTIFY "warning" "Rsync of $i returned exit code $STATUS."
  fi
done

START
NOTIFY "normal" "Completed VM Backup."
QUIT
Martsmac Posted October 11, 2020
Does this work on VMs that are using entire drives passed through? I have a VM that uses two physical SSDs, and I'd like to make a snapshot of the VM.
gurulee Posted December 10, 2020
Borg backup is an option. 👍😉
scorcho99 Posted December 15, 2020 (edited)
On 10/10/2020 at 8:14 PM, Martsmac said: Does this work on VMs that are using entire drives passed through?
No. You'd have to handle it in the guest somehow in that case, or perhaps some drive-imaging software could be used while the VM was offline. It's the main reason I don't use direct device drive passthrough personally.
Edited December 16, 2020 by scorcho99
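For the offline drive-imaging route mentioned above, a plain dd of the passed-through device is one option. A sketch, with /dev/sdX as a placeholder for the passed-through SSD and an assumed backup share path; the VM must be fully shut down, and the restore target must be at least as large as the original disk:

```shell
# Image the whole device, compressed, while the VM is offline
dd if=/dev/sdX bs=1M status=progress | gzip > /mnt/user/Backup/vm_ssd.img.gz

# Restore later (also with the VM offline):
# gunzip -c /mnt/user/Backup/vm_ssd.img.gz | dd of=/dev/sdX bs=1M
```

Unlike btrfs snapshots, this copies every block of the device each time, so it's slow and has no incremental option.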