VM Backup Plugin



  • 2 weeks later...

Hi,

I'm using the awesome VM Backup plugin (it's great!👍). I have a question though: I'd like to compress the files (using the only option, Zstandard, which seems to give about a 30% improvement). However, in testing I can't see a way to extract the .zst file directly on Unraid. I tried the command line, Krusader, and Cloud Commander to no avail.

 

Any ideas?

Link to comment
28 minutes ago, halogen55 said:

Hi,

I'm using the awesome VM Backup plugin (it's great!👍). I have a question though: I'd like to compress the files (using the only option, Zstandard, which seems to give about a 30% improvement). However, in testing I can't see a way to extract the .zst file directly on Unraid. I tried the command line, Krusader, and Cloud Commander to no avail.

 

Any ideas?

The first Google search result for "extract zst" tells me to use the "unzstd" command in the terminal.
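
For example, either of these works from the Unraid terminal (the backup paths are illustrative):

unzstd /mnt/user/backups/vm/20200401_0400_vdisk1.img.zst
# writes vdisk1.img alongside the .zst and keeps the original by default

zstd -d -o /mnt/user/restore/vdisk1.img /mnt/user/backups/vm/20200401_0400_vdisk1.img.zst
# same thing, but with an explicit output path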

Link to comment
  • 3 weeks later...

When uninstalling this plugin, is it supposed to remove pigz/unpigz?

After doing so I was not able to update my Docker containers until I re-installed it (I didn't try restarting).

 

Note that when I went to reinstall, I got the following message, so the plugin installs its packages but is not shown as installed:

plugin: run failed: /bin/bash retval: 1
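
A quick way to check whether the binaries really went missing (Docker uses pigz/unpigz to decompress image layers when present, which would explain the failing container updates):

which pigz unpigz || echo "pigz/unpigz missing - docker image extraction will fail"
# reinstalling the plugin should restore them; manually running installpkg on a
# pigz .txz package (path depends on where you keep packages) is a hedged fallback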

 

Link to comment

I'm having a strange issue using this script and scheduling it, and I found that it's a problem related to cron under /var/spool/cron/crontabs/root

 

  • If I create a new VM backup entry, scheduled daily at 04:00 AM, it runs smoothly, and in the crontab I can see this entry:
# Job for VM Backup plugin windows-weekly:
0 4 * * 0 /usr/local/emhttp/plugins/vmbackup/runscript.php run_backup windows-weekly > /dev/null 2>&1
  • BUT after a server reboot, the crontab contains a modified version of the entry:
# Job for VM Backup plugin /boot/config/plugins/vmbackup/configs/windows-weekly/:
0 4 * * 0 /usr/local/emhttp/plugins/vmbackup/runscript.php run_backup /boot/config/plugins/vmbackup/configs/windows-weekly/ > /dev/null 2>&1

and the backup fails with this error message: 

Quote

unRAID-server: VM Backup plugin

cannot run /boot/config/plugins/vmbackup/configs/windows-weekly/

User script file does not exist. Exiting.

 

  • Trying to run it manually through SSH gives me the same error (/usr/local/emhttp/plugins/vmbackup/runscript.php run_backup /boot/config/plugins/vmbackup/configs/windows-weekly/ > /dev/null 2>&1)
  • Using the BACKUP NOW option inside the plugin, everything runs smoothly and without any issue

Is it a common problem? How can I fix it?
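
Until the root cause is fixed, one hedged workaround is to rewrite the mangled entry after each reboot (a sketch; adjust the config name to match yours):

(crontab -l | sed 's|run_backup /boot/config/plugins/vmbackup/configs/windows-weekly/|run_backup windows-weekly|') | crontab -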

Link to comment
30 minutes ago, M4st3r said:

I'm having a strange issue using this script and scheduling it, and I found that it's a problem related to cron under /var/spool/cron/crontabs/root

 

  • If I create a new VM backup entry, scheduled daily at 04:00 AM, it runs smoothly, and in the crontab I can see this entry:

# Job for VM Backup plugin windows-weekly:
0 4 * * 0 /usr/local/emhttp/plugins/vmbackup/runscript.php run_backup windows-weekly > /dev/null 2>&1
  • BUT after a server reboot, the crontab contains a modified version of the entry:

# Job for VM Backup plugin /boot/config/plugins/vmbackup/configs/windows-weekly/:
0 4 * * 0 /usr/local/emhttp/plugins/vmbackup/runscript.php run_backup /boot/config/plugins/vmbackup/configs/windows-weekly/ > /dev/null 2>&1

and the backup fails with this error message: 

 

  • Trying to run it manually through SSH gives me the same error (/usr/local/emhttp/plugins/vmbackup/runscript.php run_backup /boot/config/plugins/vmbackup/configs/windows-weekly/ > /dev/null 2>&1)
  • Using the BACKUP NOW option inside the plugin, everything runs smoothly and without any issue

Is it a common problem? How can I fix it?

For Unraid, the common recommendation is to NOT use crontab directly and instead install the User Scripts plugin, made specifically for Unraid. The User Scripts plugin is the preferred Unraid way of scheduling scripts.

You can find it easily in the Community Applications store.

Link to comment
41 minutes ago, Stupifier said:

For Unraid, the common recommendation is to NOT use crontab directly and instead install the User Scripts plugin, made specifically for Unraid. The User Scripts plugin is the preferred Unraid way of scheduling scripts.

You can find it easily in the Community Applications store.

Ok thanks!

 

But how can I use this VM Backup plugin with/inside the User Scripts plugin (already installed for rclone)?

Link to comment
Ok thanks!
 
But how can I use this VM Backup plugin with/inside the User Scripts plugin (already installed for rclone)?
I personally don't use the VM Backup Plugin yet... It is too beta for me.

Instead, I use the actual script the plugin is based on. I configure the script, put it into the User Scripts plugin, set a cron schedule on it from within User Scripts, and done.


https://forums.unraid.net/topic/46281-unraid-autovmbackup-automate-backup-of-virtual-machines-in-unraid-v04/?do=findComment&comment=814187

https://github.com/JTok/unraid-vmbackup/tree/v1.3.1
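
A minimal sketch of that approach, assuming you saved the configured script to the flash drive (the path is an example): create a new script in the User Scripts plugin whose body just calls it, then set its schedule to a custom cron expression.

#!/bin/bash
# User Scripts entry: run the configured unraid-vmbackup script
bash /boot/config/scripts/unraid-vmbackup.sh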
Link to comment
1 hour ago, Stupifier said:

I personally don't use the VM Backup Plugin yet... It is too beta for me.

Instead, I use the actual script the plugin is based on. I configure the script, put it into the User Scripts plugin, set a cron schedule on it from within User Scripts, and done.


https://forums.unraid.net/topic/46281-unraid-autovmbackup-automate-backup-of-virtual-machines-in-unraid-v04/?do=findComment&comment=814187

https://github.com/JTok/unraid-vmbackup/tree/v1.3.1

Ok, thx again, I'll try it out :)

Link to comment

Great plugin, works 100% for me. I am using it to back up my VMs so that Veeam can then come along and offload the backups to tape. Thank you!!

 

Can I ask - when you open the plugin you have 5 tabs: Settings, Upload Scripts, etc.

 

Could you create one for Running Jobs? Within it, would it be possible to display what the backup is doing, what percentage it's at, or even which VMs have been done, etc.?

 

Like a Job History tab within Veeam or Veritas, for example?

 

This may also be a good way to bring in the restore option you are talking about, as you could display all the past jobs in this window, with a restore option as well, maybe?

 

Many Thanks 

Edited by IKWeb
Link to comment

Could anyone explain to me why the first part of the VM backup process includes a copy operation of my previous backup img file to a newly created img file BEFORE my VM shuts down, and only then continues with the backup process? I'm just confused about what this first step is doing.

 

2020-04-28 13:35:20 information: copy of backup of /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200427_0401_spe-dc1_vdisk1.img vdisk to /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200428_1335_spe-dc1_vdisk1.img starting.
2020-04-28 13:42:24 information: copy of /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200427_0401_spe-dc1_vdisk1.img to /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200428_1335_spe-dc1_vdisk1.img complete.
2020-04-28 13:42:24 information: skip_vm_shutdown is false. beginning vm shutdown procedure.
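
One plausible reading, consistent with the disable_delta_sync and rsync_only options that show up elsewhere in this thread: the script seeds the new backup file from the previous backup while the VM is still running, so that after shutdown rsync only needs to transfer the blocks that changed since then. A sketch of that pattern (paths are illustrative, not the plugin's actual code):

cp /backups/20200427_vdisk1.img /backups/20200428_vdisk1.img    # seed from the last backup while the VM is up
virsh shutdown SPE-DC1                                          # then shut the guest down
rsync -a --inplace /mnt/user/domains/SPE-DC1/vdisk1.img /backups/20200428_vdisk1.img   # only deltas move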

 

Link to comment

The plugin is really interesting and I'm doing some experiments with it. It doesn't work at all in my case, I assume because my VMs are not stored in virtual disks but written directly to physical partitions using the /dev/disk/by-id/ syntax. I don't even know if this setup is supported by the plugin.

Here's my log:

2020-05-01 14:09:45 Starting VM Backup for default config.
2020-05-01 14:09:45 PID: 4501
2020-05-01 14:09:45 User script copied to /tmp/vmbackup/scripts/default/user-script.sh
2020-05-01 14:09:45 Running command: '/tmp/vmbackup/scripts/default/user-script.sh' >> '/tmp/vmbackup/scripts/default/20200501_140945_user-script.log' 2>&1
2020-05-01 14:09:45 information: official_script_name is user-script.sh. script file's name is user-script.sh. script name is valid. continuing.
2020-05-01 14:09:45 information: enabled is 1. script is enabled. continuing.
2020-05-01 14:09:45 information: backup_location is /mnt/user/backup_VM_jtok. this location exists. continuing.
2020-05-01 14:09:45 information: backup_location is /mnt/user/backup_VM_jtok. this location is writable. continuing.
2020-05-01 14:09:45 information: timestamp_files is 1. timestamp will be added to backup files.
2020-05-01 14:09:45 information: /mnt/user/backup_VM_jtok/logs/ exists. continuing.
2020-05-01 14:09:45 information: log_file_subfolder is /mnt/user/backup_VM_jtok/logs/. this location exists. continuing.
2020-05-01 14:09:45 information: log_file_subfolder is /mnt/user/backup_VM_jtok/logs/. this location is writable. continuing.
2020-05-01 14:09:45 Start logging to log file.
2020-05-01 14:09:45 information: send_notifications is 1. notifications will be sent.
2020-05-01 14:09:45 information: only_send_error_notifications is 0. normal notifications will be sent if send_notifications is enabled.
2020-05-01 14:09:45 information: unRAID VM Backup script is starting. Look for finished message.
2020-05-01 14:09:45 information: keep_log_file is 1. log files will be kept.
2020-05-01 14:09:45 information: number_of_log_files_to_keep is 1. this is probably a sufficient number of log files to keep.
2020-05-01 14:09:45 information: enable_vm_log_file is 0. vm specific logs will not be created.
2020-05-01 14:09:45 information: backup_all_vms is 0. only vms listed in vms_to_backup will be backed up.
2020-05-01 14:09:45 information: use_snapshots is 0. vms will not be backed up using snapshots.
2020-05-01 14:09:45 information: kill_vm_if_cant_shutdown is 0. vms will not be forced to shutdown if a clean shutdown can not be detected.
2020-05-01 14:09:45 information: set_vm_to_original_state is 1. vms will be set to their original state after backup.
2020-05-01 14:09:45 information: number_of_days_to_keep_backups is 0. backups will be kept indefinitely. be sure to set number_of_backups_to_keep to keep backups storage usage down.
2020-05-01 14:09:45 information: number_of_backups_to_keep is 0. an infinite number of backups will be kept. be sure to set number_of_days_to_keep_backups to keep backups storage usage down.
2020-05-01 14:09:45 information: inline_zstd_compress is 0. vdisk images will not be inline compressed.
2020-05-01 14:09:45 information: pigz_compress is 0. backups will not be post compressed.
2020-05-01 14:09:45 information: use_snapshots disabled, not adding snapshot_extension to vdisk_extensions_to_skip.
2020-05-01 14:09:45 information: snapshot_fallback is 0. snapshots will fallback to standard backups.
2020-05-01 14:09:45 information: pause_vms is 0. vms will be shutdown for standard backups.
2020-05-01 14:09:45 information: enable_reconstruct_write is 0. reconstruct write will not be enabled by this script.
2020-05-01 14:09:45 information: compare_files is 0. files will not be compared after backups.
2020-05-01 14:09:45 information: backup_xml is 1. vms will have their xml configurations backed up.
2020-05-01 14:09:45 information: backup_nvram is 1. vms will have their nvram backed up.
2020-05-01 14:09:45 information: backup_vdisks is 1. vms will have their vdisks backed up.
2020-05-01 14:09:45 information: start_vm_after_backup is 0. vms will not be started following successful backup.
2020-05-01 14:09:45 information: start_vm_after_failure is 0. vms will not be started following an unsuccessful backup.
2020-05-01 14:09:45 information: disable_delta_sync is 0. rsync will be used to perform delta sync backups.
2020-05-01 14:09:45 information: rsync_only is 0. cp will be used when applicable.
2020-05-01 14:09:45 information: actually_copy_files is 1. files will be copied.
2020-05-01 14:09:45 information: clean_shutdown_checks is 20. this is probably a sufficient number of shutdown checks.
2020-05-01 14:09:45 information: seconds_to_wait is 30. this is probably a sufficient number of seconds to wait between shutdown checks.
2020-05-01 14:09:45 information: keep_error_log_file is 1. error log files will be kept.
2020-05-01 14:09:45 information: number_of_error_log_files_to_keep is 10. this is probably a sufficient error number of log files to keep.
2020-05-01 14:09:45 information: started attempt to backup Win10-Ufficio1 to /mnt/user/backup_VM_jtok
2020-05-01 14:09:45 information: Win10-Ufficio1 can be found on the system. attempting backup.
2020-05-01 14:09:45 information: removing old local Win10-Ufficio1.xml.
removed 'Win10-Ufficio1.xml'
2020-05-01 14:09:45 information: creating local Win10-Ufficio1.xml to work with during backup.
2020-05-01 14:09:45 information: /mnt/user/backup_VM_jtok/Win10-Ufficio1 exists. continuing.
/tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript
2020-05-01 14:09:45 information: finished attempt to backup Win10-Ufficio1 to /mnt/user/backup_VM_jtok.
2020-05-01 14:09:45 information: cleaning out logs over 1.
2020-05-01 14:09:45 information: did not find any log files to remove.
2020-05-01 14:09:45 information: cleaning out error logs over 10.
find: '/mnt/user/backup_VM_jtok/logs/*unraid-vmbackup_error.log': No such file or directory
2020-05-01 14:09:45 information: did not find any error log files to remove.
2020-05-01 14:09:45 Stop logging to log file.
2020-05-01 14:09:45 Removed: /tmp/vmbackup/scripts/default/user-script.sh
2020-05-01 14:09:45 Removed: /tmp/vmbackup/scripts/default.pid

Here's the VM XML file:

<domain type='kvm' id='5'>
  <name>Win10-Ufficio1</name>
  <uuid>36ebe287-1a09-15af-c4c4-436fd24fc104</uuid>
  <metadata>
    <vmtemplate xmlns="http://unraid.net/xmlns" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='14'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='1' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-CT240BX500SSD1_1901E16A30A5-part1' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Win10_1803_Italian_x64.iso' index='2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:d6:15:7e'/>
      <source bridge='br0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/4'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/4'>
      <source path='/dev/pts/4'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-5-Win10-Ufficio1/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5902' autoport='yes' websocket='5702' listen='0.0.0.0' keymap='it'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
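
For what it's worth, the "bad array subscript" on line 424 is what bash reports when an associative array is indexed with an empty key, which fits a passthrough disk whose /dev/disk/by-id path never gets parsed into vdisk_path. A minimal repro (not the plugin's code):

#!/bin/bash
declare -A vdisk_types
vdisk_path=""                          # e.g. a /dev/disk/by-id passthrough device the parser skipped
echo "${vdisk_types["$vdisk_path"]}"   # -> bad array subscript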

 

Link to comment
  • 2 weeks later...

Hi!

I have the following problem with the plugin:

The "Number of days to keep backups:" field accepts no numeric entries except 0 (zero).
The field "Number of backups to keep:" is already set to default = '0' (zero).

Edited by JoergHH
Typo
Link to comment
30 minutes ago, JoergHH said:

Hi!

I have the following problem with the plugin:

The "Number of days to keep backups:" field accepts no numeric entries except 0 (zero).
The field "Number of backups to keep:" is already set to default = '0' (zero).

You probably have to set the "Number of backups to keep" value to something other than 0 first.

Edited by IamSpartacus
Link to comment
27 minutes ago, IamSpartacus said:

You probably have to set the "Number of backups to keep" value to something other than 0 first.

No, it doesn't matter what I put in the "Number of backups to keep:" field. It doesn't make any difference.

The problem is the field formatting.

By the way, it is probably not a browser problem. I have tried different ones, always with the same negative result.

 

Edit: Fun fact on the side: the field only accepts values outside the range 1 to 6.

Edited by JoergHH
Additional info
Link to comment
  • 2 weeks later...

Really appreciate this script/plugin.

 

I seem to get random snapshot failures - usually only a single VM out of many, and not always the same one. In around 50% of backup runs I get no failures at all. That being the case, I suspect that if the script were to retry taking the snapshot after a short delay, it would probably work. Not sure if this is related to my using ZFS for VM storage.

2020-05-21 06:02:14 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk1.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
2020-05-21 06:02:15 failure: snapshot command failed on vdisk1.snap for vmname.
2020-05-21 06:02:16 warning: snapshot_fallback is 1. attempting backup for vmname using fallback method.

Would it be possible to add some options to retry taking the snapshot before going to the fallback method?

 

Retry snapshots - yes/no

Number of times to retry - integer

Number of seconds between retries - integer

 

Or alternatively, make retrying snapshots the default behavior in the script?
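
Something like the following inside the script would cover all three options, assuming virsh disk-only snapshots are the underlying command (a sketch, not the plugin's actual code):

retries=3        # Number of times to retry
delay=10         # Number of seconds between retries
for attempt in $(seq 1 $retries); do
  if virsh snapshot-create-as "$vm" --name backup-snap --disk-only --atomic --no-metadata; then
    break
  fi
  echo "snapshot attempt $attempt of $retries failed, retrying in $delay seconds"
  sleep "$delay"
done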

 

PS: I've had a few instances where the script left a VM turned off and unable to start. First example:

 

The log indicated that the vdisk1 snapshot failed.

The VM wouldn't turn on, as the VM XML for vdisks 1, 2, and 3 was still pointing to .snap files.

vdisk1 had an orphaned .snap file (even though it was logged as failing).

.snap files for vdisk2 and vdisk3 had already been removed.

I ended up deleting the .snap file for vdisk1 and fixing the XML to point to the .qcow2 files for all three vdisks.

The VM started up fine (I probably lost changes to vdisk1 between the failed snapshot and shutdown, but wasn't too concerned about that).

I couldn't see anything in the logs that indicated what went wrong, other than the vdisk1 snapshot failure. I did end up with successful (fallback) backups of all three vdisks.

 

Log for that VM on that backup run:

2020-05-20 05:31:01 information: vmname can be found on the system. attempting backup.
2020-05-20 05:31:01 information: creating local vmname.xml to work with during backup.
2020-05-20 05:31:01 information: /mnt/disks/localbackup/vm/vmname exists. continuing.
2020-05-20 05:31:01 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. vmname is running. can_backup_vm set to y.
2020-05-20 05:31:01 information: actually_copy_files is 1.
2020-05-20 05:31:01 information: can_backup_vm flag is y. starting backup of vmname configuration, nvram, and vdisk(s).
2020-05-20 05:31:01 information: copy of vmname.xml to /mnt/disks/localbackup/vm/vmname/20200520_0500_vmname.xml complete.
2020-05-20 05:31:01 information: copy of /etc/libvirt/qemu/nvram/a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-efi.fd to /mnt/disks/localbackup/vm/vmname/20200520_0500_a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-
efi.fd complete.
2020-05-20 05:31:01 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk1.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
2020-05-20 05:31:01 information: qemu agent found. enabling quiesce on snapshot.
2020-05-20 05:31:18 failure: snapshot command failed on vdisk1.snap for vmname.
2020-05-20 05:31:18 warning: snapshot_fallback is 1. attempting backup for vmname using fallback method.
2020-05-20 05:31:18 information: skip_vm_shutdown is false. beginning vm shutdown procedure.
2020-05-20 05:31:18 infomration: vmname is running. vm desired state is shut off.
2020-05-20 05:31:19 information: performing 20 30 second cycles waiting for vmname to shutdown cleanly.
2020-05-20 05:31:19 information: cycle 1 of 20: waiting 30 seconds before checking if the vm has entered the desired state.
2020-05-20 05:31:49 information: vmname is shut off. vm desired state is shut off. can_backup_vm set to y.
2020-05-20 05:37:38 information: copy of /mnt/zfspool/vm/vmname/vdisk1.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk1.qcow2.zst complete.
2020-05-20 05:37:38 information: backup of /mnt/zfspool/vm/vmname/vdisk1.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk1.qcow2.zst complete.
2020-05-20 05:37:38 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk2.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
2020-05-20 05:37:38 information: qemu agent not found. disabling quiesce on snapshot.
2020-05-20 05:37:38 information: snapshot command succeeded on vdisk2.snap for vmname.
2020-05-20 05:38:48 information: copy of /mnt/zfspool/vm/vmname/vdisk2.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk2.qcow2.zst complete.
2020-05-20 05:39:03 information: backup of /mnt/zfspool/vm/vmname/vdisk2.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk2.qcow2.zst complete.
2020-05-20 05:39:08 information: commited changes from snapshot for /mnt/zfspool/vm/vmname/vdisk2.qcow2 on vmname.
2020-05-20 05:39:08 information: forcibly removed snapshot /mnt/zfspool/vm/vmname/vdisk2.snap for vmname.
2020-05-20 05:39:08 information: able to perform snapshot for disk /mnt/zfspool/vm/vmname/vdisk3.qcow2 on vmname. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
2020-05-20 05:39:09 information: qemu agent not found. disabling quiesce on snapshot.
2020-05-20 05:39:09 information: snapshot command succeeded on vdisk3.snap for vmname.
2020-05-20 05:47:56 information: copy of /mnt/zfspool/vm/vmname/vdisk3.qcow2 to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk3.qcow2.zst complete.
2020-05-20 05:47:56 information: backup of /mnt/zfspool/vm/vmname/vdisk3.qcow2 vdisk to /mnt/disks/localbackup/vm/vmname/20200520_0500_vdisk3.qcow2.zst complete.
2020-05-20 05:48:01 information: commited changes from snapshot for /mnt/zfspool/vm/vmname/vdisk3.qcow2 on vmname.
2020-05-20 05:48:01 information: forcibly removed snapshot /mnt/zfspool/vm/vmname/vdisk3.snap for vmname.
2020-05-20 05:48:01 information: extension for /mnt/user/isos/Windows Server 2019/en_windows_server_2019_x64_dvd_3c2cf1202.iso on vmname was found in vdisks_extensions_to_skip. skipping disk.
2020-05-20 05:48:01 information: extension for /mnt/user/isos/virtio-win-0.1.173-2.iso on vmname was found in vdisks_extensions_to_skip. skipping disk.
2020-05-20 05:48:01 information: the extensions of the vdisks that were backed up are qcow2.
2020-05-20 05:48:01 information: vm_state is shut off. vm_original_state is running. starting vmname.
2020-05-20 05:48:01 information: backup of vmname to /mnt/disks/localbackup/vm/vmname completed.
2020-05-20 05:48:01 information: number of days to keep backups set to indefinitely.
2020-05-20 05:48:01 information: cleaning out backups over 3 in location /mnt/disks/localbackup/vm/vmname/
2020-05-20 05:48:01 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vmname.xml' config file.
2020-05-20 05:48:01 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_a65cdc4d-0bcb-ef2f-0cd4-21e5bda55dfd_VARS-pure-efi.fd' nvram file.
2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk3.qcow2.zst' vdisk image file.
2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk2.qcow2.zst' vdisk image file.
2020-05-20 05:49:27 information: removed '/mnt/disks/localbackup/vm/vmname/20200517_0500_vdisk1.qcow2.zst' vdisk image file.
2020-05-20 05:49:27 information: did not find any vm log files to remove.
2020-05-20 05:49:27 information: removing local vmname.xml.

On another occasion, with a two-vdisk VM:

The vdisk1 snapshot failed.

The VM was backed up using the fallback method.

No orphaned snapshot files were left.

The VM XML for vdisk2 was left pointing to a .snap file, so the VM failed to start.

 

I simply updated the XML and the VM started up fine.
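
For reference, a hedged sketch of that manual recovery (the VM and disk names are placeholders):

virsh dumpxml vmname > /tmp/vmname.xml    # look for <source file='...vdisk2.snap'/> entries
# edit each <source file> back to the base .qcow2 path, then re-define the domain:
virsh define /tmp/vmname.xml
rm /mnt/zfspool/vm/vmname/vdisk2.snap     # only once nothing references the orphaned snap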

Link to comment

I just downloaded this plugin and like what I see. The only thing is that I have disks that are network drives.

Is there a way to use directories that aren't on the array itself? I want to use other network-attached disks that are connected to my Unraid server.

Link to comment
12 hours ago, mrtech213 said:

I just downloaded this plugin and like what I see. The only thing is that I have disks that are network drives.

Is there a way to use directories that aren't on the array itself? I want to use other network-attached disks that are connected to my Unraid server.

Do you want to use external storage as a backup destination?

 

If so, first get your external storage mounted in Unraid.

 

Then configure "Set backup location:" in VM Backup plugin accordingly.  Note the following caveats, you'll need to type the path manually or disable restrictive validation:

Quote

Folder location to save backups. Must be full path.

This should be an unassigned device, or a share you have already created.

Each VM will have a subfolder made for it in this location.

To change the dropdown menu from /mnt/user/ to /mnt/, disable restrictive validation.

Any typed path in /mnt/ will validate. If a different path is needed, disable restrictive validation.

edit: I haven't actually done this myself, but I can't see why it wouldn't work.
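
If you'd rather mount the remote share by hand instead of through Unassigned Devices, a minimal SMB mount sketch (server, share, and credentials are placeholders):

mkdir -p /mnt/remotes/nas_backups
mount -t cifs -o username=backupuser,password=secret //nas/backups /mnt/remotes/nas_backups
# then point "Set backup location:" at /mnt/remotes/nas_backups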

Edited by ConnectivIT
Link to comment
13 hours ago, ConnectivIT said:

Do you want to use external storage as a backup destination?

 

If so, first get your external storage mounted in Unraid.

 

Then configure "Set backup location:" in VM Backup plugin accordingly.  Note the following caveats, you'll need to type the path manually or disable restrictive validation:

edit: I haven't actually done this myself, but I can't see why it wouldn't work.

So I checked the settings, and restrictive validation is already disabled: https://gyazo.com/14e4c8ca16646ea2871e51c3e7af0b06

 

And this is what I'm seeing right now: https://gyazo.com/f69a63ce1d0ebc0ade6ede85268f36b2

 

Should I just manually input the network-attached share path that is added to my Unraid server?

EDIT----> Never mind, I was able to get the dropdown to show my network-attached drives. I think I'm all good now. Running the backup job right now.

UPDATED EDIT----> I got the backup running successfully to my network drives now. Thank you so much for the help :)

Edited by mrtech213
needed to add more info
Link to comment

I've been using this plugin to back up my VMs for a couple of weeks now, but unfortunately I've found that it is the cause of my server being unable to shut down.

 

My unRAID was unable to shut down and would freeze, forcing me to hard-kill the system and causing a parity check every time. I do not want to do this on an otherwise stable system.

 

Rolling back from 6.9.0-b1 to 6.8.3 didn't solve it.

Running in safe mode showed that everything worked, but I couldn't start my VMs due to the Unassigned devices plugin.

Uninstalling the VM Backup plugin solved the issue, and removed the error at startup.

Something in the VM Backup plugin is breaking the Hypervisor and messing with my bonded network connection.

 

From what I can tell, first there's the

"error: failed to connect to the hypervisor"
"error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory"

[screenshot]

 

After this, I seemed to be getting some kind of trace error when shutting down:

[screenshot]

 

More shutdown:

[screenshot]

 

And finally it stalls here forever (I've waited a day for this, and it didn't shut down) 😞

[screenshot]

 

Uninstalling the VM Backup plugin fixed the issue, and I can now shut down/reboot without stalling, crashing, or a parity check. The errors are gone as well.

It is a real shame, because I use this plugin daily (nightly).

 

Does anyone know why this is happening? I'd like to use this plugin.

Link to comment
18 hours ago, KptnKMan said:

I've been using this plugin to back up my VMs for a couple of weeks now, but unfortunately I've found that it is the cause of my server being unable to shut down.

 

My unRAID was unable to shut down and would freeze, forcing me to hard-kill the system and causing a parity check every time. I do not want to do this on an otherwise stable system.

 

Rolling back from 6.9.0-b1 to 6.8.3 didn't solve it.

Running in safe mode showed that everything worked, but I couldn't start my VMs due to the Unassigned devices plugin.

Uninstalling the VM Backup plugin solved the issue, and removed the error at startup.

Something in the VM Backup plugin is breaking the Hypervisor and messing with my bonded network connection.

 

From what I can tell, first there's the

"error: failed to connect to the hypervisor"
"error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory"

[screenshot]

 

After this, I seemed to be getting some kind of trace error when shutting down:

[screenshot]

 

More shutdown:

[screenshot]

 

And finally it stalls here forever (I've waited a day for this, and it didn't shut down) 😞

[screenshot]

 

Uninstalling the VM Backup plugin fixed the issue, and I can now shut down/reboot without stalling, crashing, or a parity check. The errors are gone as well.

It is a real shame, because I use this plugin daily (nightly).

 

Does anyone know why this is happening? I'd like to use this plugin.

 

I have the exact same issue; if you get to the bottom of it, please let us know!

Cheers,

Tim

Link to comment

Just clarifying my understanding here, but:

Quote

Option to use snapshots to backup VMs without shutting down.

be sure to install the qemu guest agent on VMs to enable quiescence, which will improve the integrity of backups.

the disk path in the VM config cannot be /mnt/user, but instead must be /mnt/cache or /mnt/diskX.

 

This means I need to go and modify all my vdisk XMLs to point to /mnt/cache, as opposed to the /mnt/user they are pointing to now (by default?)
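
If so, the change is one line per disk in each VM's XML, e.g. via virsh edit (a sketch; the domain name and paths are examples):

virsh edit Win10
#   before: <source file='/mnt/user/domains/Win10/vdisk1.img'/>
#   after:  <source file='/mnt/cache/domains/Win10/vdisk1.img'/>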

Link to comment
