VM Backup Plugin



Warning: parse_ini_file(/boot/config/plugins/vmbackup/vdisk-list.txt): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(501) : eval()'d code on line 55

 

 

Where are you seeing the warning, and what are you doing when it occurs?

 

Does anything happen after you get the warning?

 

Thanks,

JTok

 

 


Link to comment
6 hours ago, JTok said:

Where are you seeing the warning, and what are you doing when it occurs?

Does anything happen after you get the warning?

I got a notification of a failed backup from VM Backup when it tried to run a scheduled backup. When I go to VM Backup in Settings, this error is shown at the top of the VM Backup settings tab, right above Basic Settings.

Link to comment
16 hours ago, jpowell8672 said:

I got a notification of a failed backup from VM Backup when it tried to run a scheduled backup. When I go to VM Backup in Settings, this error is shown at the top of the VM Backup settings tab, right above Basic Settings.

Are you able to get the error message from the error log? It will be saved in the log folder inside your backup location.
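For example, if your backup location were /mnt/cache/backup, something like this would print the newest one (just a sketch; adjust the path to your own backup location):

# print the most recent vmbackup error log
ls -t /mnt/cache/backup/logs/*unraid-vmbackup_error.log | head -n 1 | xargs -r cat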

 

Thanks,

JTok

Link to comment

Just came to say thank you for the great script. I finally have a script that can back up without shutting down my VM 😃 I've tested it on a running Windows VM with snapshots, restored the result as a new VM, and the vdisk worked perfectly. Now I can create a VM once and clone it for future use instead of installing Windows over and over again.

 

And a little wish list, if that's OK, hehe:

1. Agree with those on the first page: a restore button (preferably with an option to restore to the current VM config, or as a new VM, like a template).

2. A cron job per VM, because some VMs I only want to back up once and some VMs I want to back up more often than others.

 

Other than that, this script is perfect 😃

Thanks again for the great script!

 

P.S. If you change a setting during a backup, it may not save properly; you have to wait for the current backup to finish and reload the page before you can actually change settings... at least that's what happened to me.

Link to comment
On 1/6/2020 at 10:33 PM, JTok said:

Are you able to get the error message from the error log? It will be saved in the log folder inside your backup location.

 

Thanks,

JTok


2020-01-05 07:00 information: Windows 10 can be found on the system. attempting backup.
2020-01-05 07:00 information: creating local Windows 10.xml to work with during backup.
2020-01-05 07:00 information: /mnt/cache/backup/Windows 10 exists. continuing.
2020-01-05 07:00 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. Windows 10 is running. can_backup_vm set to y.
2020-01-05 07:00 information: actually_copy_files is 1.
2020-01-05 07:00 information: can_backup_vm flag is y. starting backup of Windows 10 configuration, nvram, and vdisk(s).
sending incremental file list
Windows 10.xml

sent 7,392 bytes received 35 bytes 14,854.00 bytes/sec
total size is 7,292 speedup is 0.98
2020-01-05 07:00 information: copy of Windows 10.xml to /mnt/cache/backup/Windows 10/20200105_0700_Windows 10.xml complete.
sending incremental file list
6d8d5b31-ae31-9fdc-f498-fa9c73b2221a_VARS-pure-efi.fd

sent 131,241 bytes received 35 bytes 262,552.00 bytes/sec
total size is 131,072 speedup is 1.00
2020-01-05 07:00 information: copy of /etc/libvirt/qemu/nvram/6d8d5b31-ae31-9fdc-f498-fa9c73b2221a_VARS-pure-efi.fd to /mnt/cache/backup/Windows 10/20200105_0700_6d8d5b31-ae31-9fdc-f498-fa9c73b2221a_VARS-pure-efi.fd complete.
2020-01-05 07:00 information: extension for /mnt/user/isos/Win10_1903_V1_English_x64.iso on Windows 10 was found in vdisks_extensions_to_skip. skipping disk.
2020-01-05 07:00 information: extension for /mnt/user/isos/virtio-win-0.1.160-1.iso on Windows 10 was found in vdisks_extensions_to_skip. skipping disk.
2020-01-05 07:00 information: the extensions of the vdisks that were backed up are .
2020-01-05 07:00 information: vm_state is running. vm_original_state is running. not starting Windows 10.
2020-01-05 07:00 information: backup of Windows 10 to /mnt/cache/backup/Windows 10 completed.
2020-01-05 07:00 information: number of days to keep backups set to indefinitely.
2020-01-05 07:00 information: cleaning out backups over 3 in location /mnt/cache/backup/Windows 10/
2020-01-05 07:00 information: did not find any config files to remove.
2020-01-05 07:00 information: did not find any nvram files to remove.
/tmp/vmbackup/scripts/user-script.sh: line 2809: 😞 command not found
2020-01-05 07:00 information: did not find any image files to remove.
2020-01-05 07:00 information: removing local Windows 10.xml.
removed 'Windows 10.xml'
2020-01-05 07:00 information: pfsense can be found on the system. attempting backup.
2020-01-05 07:00 information: creating local pfsense.xml to work with during backup.
2020-01-05 07:00 information: /mnt/cache/backup/pfsense exists. continuing.
2020-01-05 07:00 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. pfsense is running. can_backup_vm set to y.
2020-01-05 07:00 information: actually_copy_files is 1.
2020-01-05 07:00 information: can_backup_vm flag is y. starting backup of pfsense configuration, nvram, and vdisk(s).
sending incremental file list
pfsense.xml

sent 7,574 bytes received 35 bytes 15,218.00 bytes/sec
total size is 7,477 speedup is 0.98
2020-01-05 07:00 information: copy of pfsense.xml to /mnt/cache/backup/pfsense/20200105_0700_pfsense.xml complete.
sending incremental file list
7ee726f1-17c6-3496-d3a2-cb1916f748eb_VARS-pure-efi.fd

sent 131,240 bytes received 35 bytes 262,550.00 bytes/sec
total size is 131,072 speedup is 1.00
2020-01-05 07:00 information: copy of /etc/libvirt/qemu/nvram/7ee726f1-17c6-3496-d3a2-cb1916f748eb_VARS-pure-efi.fd to /mnt/cache/backup/pfsense/20200105_0700_7ee726f1-17c6-3496-d3a2-cb1916f748eb_VARS-pure-efi.fd complete.
2020-01-05 07:00 information: extension for /mnt/user/isos/pfSense-CE-2.4.4-RELEASE-p3-amd64.iso on pfsense was found in vdisks_extensions_to_skip. skipping disk.
2020-01-05 07:00 failure: extension for /mnt/user/domains/pfsense/vdisk1.snap on pfsense is the same as the snapshot extension snap. disk will always be skipped. this usually means that the disk path in the config was not changed from /mnt/user. if disk path is correct, then try changing snapshot_extension or vdisk extension.
2020-01-05 07:00 information: extension for /mnt/user/domains/pfsense/vdisk1.snap on pfsense was found in vdisks_extensions_to_skip. skipping disk.
2020-01-05 07:00 information: the extensions of the vdisks that were backed up are .
2020-01-05 07:00 information: vm_state is running. vm_original_state is running. not starting pfsense.
2020-01-05 07:00 information: backup of pfsense to /mnt/cache/backup/pfsense completed.
2020-01-05 07:00 information: number of days to keep backups set to indefinitely.
2020-01-05 07:00 information: cleaning out backups over 3 in location /mnt/cache/backup/pfsense/
2020-01-05 07:00 information: did not find any config files to remove.
2020-01-05 07:00 information: did not find any nvram files to remove.
/tmp/vmbackup/scripts/user-script.sh: line 2809: 😞 command not found
2020-01-05 07:00 information: did not find any image files to remove.
2020-01-05 07:00 information: removing local pfsense.xml.
removed 'pfsense.xml'
2020-01-05 07:00 information: finished attempt to backup Windows 10, pfsense to /mnt/cache/backup.
2020-01-05 07:00 information: cleaning out logs over 1.
2020-01-05 07:00 information: removed '/mnt/cache/backup/logs/20200103_0819_unraid-vmbackup.log'.
2020-01-05 07:00 information: cleaning out error logs over 10.
find: '/mnt/cache/backup/logs/*unraid-vmbackup_error.log': No such file or directory
2020-01-05 07:00 information: did not find any error log files to remove.
2020-01-05 07:00 warning: errors found. creating error log file.
sending incremental file list
20200105_0700_unraid-vmbackup.log

sent 8,049 bytes received 35 bytes 5,389.33 bytes/sec
total size is 7,930 speedup is 0.98
2020-01-05 07:00 Stop logging to log file.
2020-01-05 07:00 Stop logging to error log file.
2020-01-05 07:00:02 Removed: /tmp/vmbackup/scripts/user-script.sh
2020-01-05 07:00:02 Removed: /tmp/vmbackup/scripts/user-script.pid
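(Note on the failure line above: per the script's own message, it usually means vdisk1's source path in the pfsense VM config still points at /mnt/user. A hedged way to check and fix it with the standard libvirt tools:)

# show which paths each of the VM's disks currently point at
virsh domblklist pfsense

# if a vdisk still shows /mnt/user/..., open the XML and change the
# <source file=...> entry to the real pool path (e.g. /mnt/cache/...)
virsh edit pfsense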
 

Link to comment
On 1/7/2020 at 9:50 AM, alien said:

P.S. If you change a setting during a backup, it may not save properly; you have to wait for the current backup to finish and reload the page before you can actually change settings... at least that's what happened to me.

I'll have to try to replicate that and see what is going on. Thanks for letting me know.

Link to comment
On 1/7/2020 at 9:28 PM, nextgenpotato said:

Can I suggest using zstd for compression? It will cut the time to compress the image by something like 10x.

Or lbzip2 if you have many cores; this will also make it much faster.

Or could we have options for these? Both are already included in Unraid 6.8 by default.

I looked into this a little today, but this is by no means conclusive. In my tests so far, I/O has been the biggest bottleneck, not the compression algorithm or the number of threads. So using the parity array vs. the cache array, or an unassigned device, is probably going to have the biggest effect on performance.
Honestly, all things being equal, I only saw about a 15-20% performance improvement with my test VM (though I understand there could be more pronounced differences with other use cases). I tested using zstd, lbzip2, and pigz.

That being said, since there are some performance improvements with a multi-threaded compression utility, I am looking into a good way to integrate one.
I suspect that, at least initially, I will stick with pigz because of backwards-compatibility issues, though I may look into adding an option for the other two later on.
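If anyone wants to compare on their own hardware, a rough test along these lines (example paths; not the plugin's actual code) will show whether the algorithm or the array is the bottleneck for you:

img=/mnt/cache/domains/test/vdisk1.img    # example vdisk path; use one of your own

# compare wall-clock time of the three compressors mentioned above
time pigz -c "$img" > /mnt/user/backup/vdisk1.img.gz
time lbzip2 -n "$(nproc)" -c "$img" > /mnt/user/backup/vdisk1.img.bz2
time zstd -T0 -c "$img" > /mnt/user/backup/vdisk1.img.zst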

Link to comment
41 minutes ago, jpowell8672 said:

I had snapshots enabled and it broke my pfSense VM, and I had to restore it manually. I'm going to disable the VM Backup plugin for now and wait for you to work out the bugs. Thanks @JTok

In what way did it break it? It is difficult to fix bugs if I don't know what happened.

 

Did you make sure to change the vdisk path in your VM config before using snapshots?

 

-JTok

Link to comment
1 hour ago, JTok said:

In what way did it break it? It is difficult to fix bugs if I don't know what happened.

 

Did you make sure to change the vdisk path in your VM config before using snapshots?

 

-JTok

I changed the backup location in VM Backup settings to /mnt/cache/ by accident. I had been up a long time and was tired, and made the mistake of not changing the path in the VM instead. pfSense does not natively support qemu-guest-agent at the moment; will snapshots still work OK?

Link to comment
Just now, Stupifier said:
2 hours ago, jpowell8672 said:
I had snapshots enabled and it broke my pfSense VM, and I had to restore it manually. I'm going to disable the VM Backup plugin for now and wait for you to work out the bugs. Thanks @JTok

Can't you just do the backup without using the snapshot feature?

I could, but I would rather not take down my outside network by having the pfSense VM go down during the backup.

Link to comment

Hello, I am confused as to how this functions. Does it work like Veeam Backup, where it takes a snapshot and then creates the backup (so there are two files during that period), and at the end of the backup writes the changes from the delta file back to the original disk, so we are left with just one disk file again? Or does it create additional disk files each time the backup runs and never write the changes back, so that each backup is a single disk image and we end up with multiple files over time?
 

Sorry if this has already been answered, but it was not clear to me.
 

P
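(For context: snapshot-style VM backups on unRAID are usually built on libvirt external snapshots plus a blockcommit, along these lines. This is a sketch of the general qemu workflow with made-up names (MyVM, vda, example paths), not necessarily this plugin's exact commands; but if the plugin follows the standard flow, the delta is merged back into the original disk after the copy, so you end up with a single disk file again.)

# 1. divert new writes into a temporary .snap overlay file
virsh snapshot-create-as MyVM backup-snap --disk-only --atomic --no-metadata

# 2. copy the now-quiescent base image to the backup location
cp -av --sparse=always /mnt/cache/domains/MyVM/vdisk1.img /mnt/user/backup/MyVM/

# 3. merge the overlay back into the base image and resume using it
virsh blockcommit MyVM vda --active --pivot

# 4. remove the leftover overlay file
rm -f /mnt/cache/domains/MyVM/vdisk1.snap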

Link to comment

First of all, thanks for your work! That's a really nice app.

 

I noticed that my CPU usage is very high while doing backups, and it's caused by high IOWAIT.

Is that OK, and just caused by low disk speeds (I'm using 4TB WD Reds with single parity)? Or is there any way to improve it?

 

CPU usage screenshot:

https://gyazo.com/bc542f0f658f30fe77e95618d29dad46

 

Thanks in advance!

Link to comment

Since updating to Unraid 6.8.1, the plugin leaves a process running indefinitely with high CPU usage. Contrary to what the screenshot below implies, it actually ran for 8 hours on my machine after the backup completed; I terminated it and manually started a backup to confirm it was the culprit, then took the screenshot. I'm fairly sure this did not happen before 6.8.1.

 

Hope this gets fixed eventually; loving the plugin!

 

 

Screenshot 2020-01-13 at 12.02.16.png

Link to comment

I have a syntax error that is displayed above Basic Settings:

Warning: syntax error, unexpected '(' in /boot/config/plugins/vmbackup/vdisk-list.txt on line 5 in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(501) : eval()'d code on line 55

 

I am able to change some settings, but clicking "apply" or "backup now" doesn't do anything.

I've had the same issue with 6.8.0 and 6.8.1.

Link to comment
On 1/8/2020 at 10:37 PM, JTok said:

I looked into this a little today, but this is by no means conclusive. In my tests so far I/O has been the biggest bottleneck, not the compression algorithm or number of threads. So using the parity array vs the cache array, or an unassigned device, is probably going to have the biggest effect on performance.
Honestly, all things being equal, I only saw about a 15-20% performance improvement with my test VM (though I understand that there could be more pronounced differences with other use cases). I tested using zstd, lbzip2, and pigz.

That being said, since there are some performance improvements with a multi-threaded compression utility, I am looking into a good way to integrate something.
I suspect, that at least initially, I will stick with pigz because of backwards compatibility issues. Though I may look into adding an option for the other two later on.

 

I assume most people host their VM image files on faster storage (i.e. SSD or NVMe cache or unassigned devices) and write their backups to the array.  The I/O performance bottleneck is mostly going to be with the array.  Currently, the script copies the image files from source to destination and then afterwards compresses them.  This results in writing uncompressed image files to the array, then reading uncompressed image files from the array, compressing them in memory, and finally writing the compressed result back to the array.  (i.e. READ from cache -> WRITE to array -> READ from array -> COMPRESS in memory -> WRITE to array)

 

So, what about an option to use inline (zstd) compression per image file and eliminating the entire post compression step?  This would mean that all reads go against the faster storage tier and are compressed in memory prior to writing to the slower array tier.  (READ from cache -> COMPRESS in memory -> WRITE to array).


Something like this (possibly with options to tweak the compression level and number of threads):

zstd -5 -T0 --sparse "$source" -o "$destination".zst

instead of this (and the later tar/compression step): 

cp -av --sparse=always "$source" "$destination"

 

In my testing, this dramatically reduces I/O and backup durations, and even results in slightly smaller archives (depending on compression levels chosen).
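Restoring would be a single inline step as well (again with example paths, and MyVM standing in for a real VM):

# decompress straight back to a sparse image file
zstd -d --sparse /mnt/user/backup/vdisk1.img.zst -o /mnt/cache/domains/MyVM/vdisk1.img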

 

Or, is there a reason the image files have to first be copied and later compressed?

 

I might be able to whip up a pull request if that would be helpful.

Link to comment
On 1/13/2020 at 6:07 AM, Blacksus said:

Since updating to Unraid 6.8.1, the plugin leaves a process running indefinitely with high CPU usage. Contrary to what the screenshot below implies, it actually ran for 8 hours on my machine after the backup completed; I terminated it and manually started a backup to confirm it was the culprit, then took the screenshot. I'm fairly sure this did not happen before 6.8.1.

 

Hope this gets fixed eventually; loving the plugin!

 

 

Screenshot 2020-01-13 at 12.02.16.png

That's going to be a fun one. I don't have 6.8.1 yet because I am using the nvidia plugin, but I'll see if I can figure out what is going on anyway and get back to you.

Link to comment
On 1/13/2020 at 7:45 AM, Dati said:

I have a syntax error that is displayed above Basic Settings:

Warning: syntax error, unexpected '(' in /boot/config/plugins/vmbackup/vdisk-list.txt on line 5 in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(501) : eval()'d code on line 55

 

I am able to change some settings, but clicking "apply" or "backup now" doesn't do anything.

I've had the same issue with 6.8.0 and 6.8.1.

Do you have parentheses in any of your VM paths? I've run into some issues with that.
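If you want to check quickly, something like this will print any lines of the cached vdisk list that contain them (the path is the one from your warning):

# list any lines with parentheses, with line numbers
grep -n '[()]' /boot/config/plugins/vmbackup/vdisk-list.txt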

Link to comment
42 minutes ago, sjerisman said:

I assume most people host their VM image files on faster storage (i.e. SSD or NVMe cache or unassigned devices) and write their backups to the array.  The I/O performance bottleneck is mostly going to be with the array.  Currently, the script copies the image files from source to destination and then afterwards compresses them.  This results in writing uncompressed image files to the array, then reading uncompressed image files from the array, compressing them in memory, and finally writing the compressed result back to the array.  (i.e. READ from cache -> WRITE to array -> READ from array -> COMPRESS in memory -> WRITE to array)

Sorry, I wasn't clear. My tests were from an SSD cache array to the Parity Array. So that's also the bottleneck I was referring to.

I was attempting to also point out that, to anyone interested in improving throughput, the biggest improvements will come from changing storage around to cut out the Parity Array. i.e. running the VMs on an NVMe unassigned device and backing up to an SSD cache array or vice versa.

 

You're right though, that order of operations does seem a bit excessive, doesn't it? lol

 

42 minutes ago, sjerisman said:

So, what about an option to use inline (zstd) compression per image file and eliminating the entire post compression step?  This would mean that all reads go against the faster storage tier and are compressed in memory prior to writing to the slower array tier.  (READ from cache -> COMPRESS in memory -> WRITE to array).

This seems viable, but there are some issues that I would need to handle related to backwards compatibility before switching compression algorithms outright.

 

42 minutes ago, sjerisman said:

Or, is there a reason the image files have to first be copied and later compressed?

I'm going from memory here, so possibly the details are wrong, but I believe it came down to being able to turn the VM back on sooner.

Essentially, I couldn't guarantee the speed of the system that unRAID would be running on, so I decided to compress after copying because it meant the VM might be able to be turned back on sooner (though I honestly can't remember if I tested this or not).

The logic was: turn off VM -> copy files -> turn on VM -> compress files, which would result in the VM being off for less time.

 

With snapshots, though, this is far less efficient, so I think it will be a good behavior to make configurable in the future.
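In sketch form, the two orderings look like this (the helper names are made up for illustration, not actual functions in the script):

# current behavior: defer compression so the VM is off for less time
stop_vm "$vm"
copy_vdisks "$vm" "$backup_dir"       # uncompressed copy to the array
start_vm "$vm"
compress_backup "$backup_dir"         # re-read from the array and compress

# snapshot/inline alternative: the VM never stops, so compressing
# during the copy adds no downtime and cuts out the extra array I/O
snapshot_vm "$vm"
compress_vdisks_inline "$vm" "$backup_dir"   # e.g. zstd straight to the array
commit_snapshot "$vm"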

 

Thanks for looking into this btw!

Link to comment
3 hours ago, JTok said:

Do you have parenthesis in any of your VM paths? I've run into some issues with that.

Sorry, what a stupid mistake! I hadn't read the error message correctly. I've just looked into the file /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php.

The problem was a parenthesis. It was my mistake!

Link to comment
