VM Backup Plugin



The VM Backup Plugin is great and I really want to use it, but it causes several issues (see also the 2 posts above), so I had to uninstall it. Uninstalling fixed the following 3 issues for me:


1. the "error: failed to connect to the hypervisor" in console during startup (many others reported this error too in another thread, but it seams to work anyway)

2. Array stop not possible, stuck at "sync filesystem" => therefore no reboot/shutdown possible

3. Installing the habridge docker image (and probably others?) is not possible, with the following error:

Error: could not get decompression stream: fork/exec /usr/bin/unpigz: no such file or directory 
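For anyone hitting that third error: Docker shells out to unpigz to decompress image layers, so a quick first check is whether the binary is actually still on the system. A minimal sketch (`check_tool` is just a hypothetical helper name, not part of the plugin):

```shell
#!/bin/sh
# check_tool: report whether a required binary is on the PATH.
# (hypothetical helper, not part of the plugin)
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "present"
  else
    echo "missing"
  fi
}

# Docker needs unpigz to extract image layers; if something has clobbered
# /usr/bin, this will report "missing".
check_tool unpigz
```

On Unraid, /usr/bin/unpigz is normally provided alongside pigz, so if it reports missing, restoring the pigz package (or rebooting to a clean flash state) is the usual fix.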
2 minutes ago, subivoodoo said:

The VM Backup Plugin is great and I really want to use it, but it causes several issues (see also the 2 posts above), so I had to uninstall it. Uninstalling fixed the following 3 issues for me:


1. the "error: failed to connect to the hypervisor" in console during startup (many others reported this error too in another thread, but it seams to work anyway)

2. Array stop not possible, stuck at "sync filesystem" => therefore no reboot/shutdown possible

3. Installing the habridge docker image (and probably others?) is not possible, with the following error:


Error: could not get decompression stream: fork/exec /usr/bin/unpigz: no such file or directory 

Something messed up is definitely going on.

 

It would be great if the dev of this plugin had ANYTHING to say about this, please?

2 minutes ago, subivoodoo said:

The VM Backup Plugin is great and I really want to use it, but it causes several issues (see also the 2 posts above), so I had to uninstall it. Uninstalling fixed the following 3 issues for me:


1. the "error: failed to connect to the hypervisor" in console during startup (many others reported this error too in another thread, but it seams to work anyway)

2. Array stop not possible, stuck at "sync filesystem" => therefore no reboot/shutdown possible

3. Installing the habridge docker image (and probably others?) is not possible, with the following error:


Error: could not get decompression stream: fork/exec /usr/bin/unpigz: no such file or directory 

The plugin is Alpha/Beta....use the userscript version if you want something more stable.

2 hours ago, Stupifier said:

The plugin is Alpha/Beta....use the userscript version if you want something more stable.

1. I understood the BETA nature of this plugin before using it (please refer to the first line of the first post in this thread).

2. I have already been using the userscript version, since I found the plugin to be unstable.

3. It is important to report these issues so that:

   - others can know about the stability issues before using it, SO THAT THEY DON'T LOSE DATA FROM AN UNSTABLE SYSTEM.

   - The developer can work with those who want to find and test a fix.

 

@Stupifier Not to be rude, but your response helps nobody.

I have submitted my findings and report of what I've seen so far in this thread, so that the dev can help the people encountering strange unrelated errors and system instability.

If you have something constructive to add to the discussion, please do so.

1. I understood the BETA nature of this plugin before using it (please refer to the first line of the first post in this thread).
2. I have already been using the userscript version, since I found the plugin to be unstable.
3. It is important to report these issues so that:
   - others can know about the stability issues before using it, SO THAT THEY DON'T LOSE DATA FROM AN UNSTABLE SYSTEM.
   - The developer can work with those who want to find and test a fix.
 
@Stupifier Not to be rude, but your response helps nobody.
I have submitted my findings and report of what I've seen so far in this thread, so that the dev can help the people encountering strange unrelated errors and system instability.
If you have something constructive to add to the discussion, please do so.
My comments are constructive. Reminding people who visit this thread of the beta/alpha nature of the plugin is very important. In addition, it is equally important to remind people of the more stable alternative. I'm not trying to be an ass or anything. You'd be surprised how many people don't even read the comment directly above theirs before posting.

Until (if) the dev comes back, there isn't much to do here in the plugin's current form, unless another dev wants to pick it up.
5 hours ago, Stupifier said:

My comments are constructive. Reminding people who visit this thread of the beta/alpha nature of the plugin is very important. In addition, it is equally important to remind people of the more stable alternative. I'm not trying to be an ass or anything. You'd be surprised how many people don't even read the comment directly above theirs before posting.

Until (if) the dev comes back, there isn't much to do here in the plugin's current form, unless another dev wants to pick it up.

Agreed that a lot of people don't read up before posting. I'm not trying to be rude.

I also tried to post my findings to help the dev and others, and my followup was to hopefully get something out of the dev. So far, silence.

 

Personally, I was a little annoyed about how this server-breaking issue has not even been acknowledged by the plugin developer. I battled with this for some weeks and had to hard-reset my server a few times, which is NOT IDEAL for anyone (especially with cache-pool BTRFS corruption issues, which I've also personally experienced).

 

I get that this is a "free" plugin and all, but I wouldn't recommend anyone use it: it's marked as BETA but is really very, very unstable and seemingly untested, especially for a crucial solution such as VM backups. It's doing some weird stuff that I'm unsure about, and it should be marked as pre-alpha/alpha.

 

Lastly, I know they don't wish to step on people's toes, but a basic (not advanced) solution for this kind of thing (config/VM/Docker backup) really needs to be rolled into the Unraid core.

18 hours ago, subivoodoo said:

I also know that this plugin is Alpha/Beta... and I used the userscript version before I installed it... and I use it now.

I understand what you're saying, but there is a marked difference between ALPHA and BETA software. This is the former, and I would argue it's PRE-ALPHA given the issues encountered by numerous people. I'm just saying it's inappropriately labelled, which is dangerous and plays with people's data integrity. I know they take their own risk by installing it, but the BETA label is severely misleading.

 

18 hours ago, subivoodoo said:

As written before, it's important to report the findings otherwise this plugin will always remain Alpha/Beta.

Absolutely, total agreement. I also stated that very point.

However, for this plugin to move beyond its debatable BETA status, the dev needs to speak up and work with those willing to test.

So far, I haven't heard anything, and the GitHub account for this is quiet, to the point that the last dev commit was nearly 4 months ago.

 

On the basis of that alone, it's hard to recommend this plugin for use at all.


When a dev goes dark, it feels really bad to see posts like "he needs to speak up". Sorry... but it does.

 

It's been pretty clear for quite some time now that the dev has taken a leave from working on this. Permanent or temporary... I dunno, but everyone should just chill out.

 

I recognized his absence pretty early on. This is the reason why I've simply been redirecting people to the script version... nothing else. Report bugs... fine... but it is really bad to ask/insist/demand/whatever that he respond. His lack of response is ENOUGH for people to realize what I have for weeks now... just let it be.

 

And lastly, this is a VM Backup plugin in a pre-release phase. Absolutely NOBODY should be relying on this to secure their data anyway. As such, any use of this plugin is purely to HELP the developer out. That's it. No expectations of anything else. Beta, Alpha, Pre-Alpha....whatever...it is still pre-release. Don't rely on it.

1 hour ago, Stupifier said:

When a dev goes dark, it feels really bad to see posts like "he needs to speak up". Sorry... but it does.

 

It's been pretty clear for quite some time now that the dev has taken a leave from working on this. Permanent or temporary... I dunno, but everyone should just chill out.

 

I recognized his absence pretty early on. This is the reason why I've simply been redirecting people to the script version... nothing else. Report bugs... fine... but it is really bad to ask/insist/demand/whatever that he respond. His lack of response is ENOUGH for people to realize what I have for weeks now... just let it be.

I get what you're saying, but I'm not getting into a discussion about what "feels" right. I'm stating the facts.

Yeah, I can see why you and others would leave this plugin be, but it's still out there, released in the Community Apps selection.

 

1 hour ago, Stupifier said:

And lastly, this is a VM Backup plugin in a pre-release phase. Absolutely NOBODY should be relying on this to secure their data anyway. As such, any use of this plugin is purely to HELP the developer out. That's it. No expectations of anything else. Beta, Alpha, Pre-Alpha....whatever...it is still pre-release. Don't rely on it.

It's clearly been released and is in the Community Apps selection, which is the recommended way to find and install plugins and extensions, right alongside official releases of plugins and the like. It's MOST DEFINITELY RELEASED, which has nothing to do with its Alpha/Beta status.

Sorry, but that is the truth.

 

I know that I'm personally not RELYING on this for any of my data.

My gripe has clearly been that it makes Unraid unstable (in my case, MY Unraid server), and not just for me, which is unacceptable, and I want others to know the danger this plugin poses.

If it doesn't work or is buggy within its own domain... fine, no worries... but if it endangers your entire server and data by destabilising the whole system, causing startup and shutdown errors (and things in between), then it SHOULD at the very least be pulled or fixed.

That is not "relying on this to secure their data", its much more dangerous. What makes it worse is that it silently a problem, so just by installing it, an unraid system or array could be rendered unusable.

 

Anyway, I'm no super-dev myself, but I've developed things in the past, and I understand that if you release something, you should be expected to support it, which is why I rarely release anything and support everything that I do "release" online through ANY channel. Nobody is perfect, but this is the marked difference between those who try to be good developers (emphasis on "try") and those who don't.

 

Take the high road and judge my unpopular opinion if you like, but it is what it is and I'm not trying to be rude or sugar-coat anything.


I've reviewed this thread, and there's nothing per se that warrants any further moderation / comments on this plugin AT THIS TIME.

 

The error message on the screen during boot is a minor display aberration and completely harmless.

 

Array not able to stop -> this is a big deal, but a single reported instance doesn't justify any action being taken.

 

Server not turning off, displaying "Turning Off Swap" -> while uninstalling the plugin may have fixed the issue for you, it is hard to diagnose properly, as I also have a server which doesn't run any VMs yet has this same issue.

 

 


There are a few in this thread who have the same array/shutdown issue, but you've got a point that it may not be enough to pull the plugin immediately.

 

There is another thread on this forum about the startup message, and I cannot fathom how many people might be experiencing "can't shut down" issues. I know I searched the forum enough times and trawled through logs to reach my own conclusion.

 

Still, I think it's important to highlight this link so that others might find it.

I disagree that this doesn't warrant further discussion; it definitely does, though certainly less about the developer abandoning his project.

 

I've been trying to investigate the codebase to find out why these issues occur, and have already posted some of my own findings. I would encourage anyone with deeper knowledge of Unraid to chime in, as maybe this can be fixed and submitted as a pull request to the repo.


Great plugin. I'm having one problem, maybe a misconfiguration on my part.

 

I have two VMs I back up weekly:

[screenshot: weekly backup schedule for both VMs]

 

Both have the same problem: the primary vdisk from the previous backup isn't deleted, so there are always two, previous and current.

 

Debian

[screenshot: Debian backup folder showing two vdisks]

 

MacOS

[screenshot: MacOS backup folder showing two vdisks]

 

I've tried changing the number of days to 6 (thinking it might be a timing issue), but the plugin prevents it:

[screenshot: validation error on the days setting]

 

I've tried using number of backups instead of days, but the plugin prevents that as well:

[screenshot: validation error on the number-of-backups setting]

 

Anyone know where I'm going wrong?

  • 4 weeks later...
On 1/15/2020 at 12:31 PM, sjerisman said:

And, I repeated the same Windows 7 'real' VM test one more time, but this time used the SSD cache tier as the destination instead of the HDD...

 

With the old compression code, it took 1-2 minutes to copy the 18 GB image file from the NVMe UD over to the dual SSD cache, and then still took 13-14 minutes to further .tar.gz compress it down to 8.4 GB.  The compression step definitely seems CPU bound (probably single threaded) instead of I/O bound with this test.

 

With the new inline compression code, it still only took about 1-2 minutes to copy from the NVMe UD and compress (inline) over to the dual SSD cache and still produced a slightly smaller 8.2 GB output file.  The CPU was definitely hit harder, and probably became the bottleneck (over I/O), but I'm really happy with these results and would gladly trade off higher CPU for a few minutes for much lower disk I/O, much less disk wear, and much faster backups.

Very interesting. I'm wondering if this is an older-version thing or something, because at default compression levels pigz is better for compression speed and massively faster at decompression; the resulting size, however, is not as good.

 

I do believe this plugin could use some work on the descriptions of each setting, for example doing away with "gzip" and just referencing pigz, to avoid confusion as to whether it's multithreaded or not.

 

This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).
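For anyone wanting to sanity-check the numbers quoted in these comparisons, here's a tiny helper (the name `ratio_pct` is made up) that turns an original/compressed byte pair into an integer percentage, like the ratios the backup logs print:

```shell
#!/bin/sh
# ratio_pct: compressed size as an integer percentage of the original size.
# (illustrative helper, not part of the plugin)
ratio_pct() {
  orig="$1"; comp="$2"
  echo $(( comp * 100 / orig ))
}

# e.g. an 18 GiB image compressed to roughly 8.4 GiB:
ratio_pct 19327352832 9019431321
# prints 46
```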


Thanks for the great plugin. I believe I may have found a bug that is not too serious, or maybe I did something wrong. I am using version 2020.02.20 of the plugin on Unraid 6.9.22 beta.

 

I created a second config (in addition to "default"), which I used for a few days. I then decided to delete it, which I did using the "Remove configs" button on the "Manage Configs" tab. The config was then removed from the GUI pages. However, I kept getting Unraid notifications that the vmbackup plugin had failed for the config I removed. I checked my crontab and saw that the entry for my deleted config was still there. No harm done, because the cron entry just fails since the config no longer exists. For now, I've commented out the entry.

 

Hence unless I did something wrong, I believe that this version might not remove the cron entry when deleting configs. Let me know if you need any more info on this.
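If anyone else hits this, the stale line can also be filtered out of the crontab by hand until the plugin cleans up after itself. A hedged sketch: it assumes the plugin's cron lines mention both "vmbackup" and the config name, so verify the match with `crontab -l` before piping anything back in.

```shell
#!/bin/sh
# drop_stale_cron: filter out vmbackup cron lines that reference a deleted
# config name. (hypothetical helper; check the pattern with `crontab -l` first)
drop_stale_cron() {
  grep -v "vmbackup.*$1"
}

# Applied to the live crontab (left commented out so it isn't destructive):
#   crontab -l | drop_stale_cron "myconfig" | crontab -
```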

Thanks for the great plugin. I believe I may have found a bug that is not too serious, or maybe I did something wrong. I am using version 2020.02.20 of the plugin on Unraid 6.9.22 beta.
 
I created a second config (in addition to "default"), which I used for a few days. I then decided to delete it, which I did using the "Remove configs" button on the "Manage Configs" tab. The config was then removed from the GUI pages. However, I kept getting Unraid notifications that the vmbackup plugin had failed for the config I removed. I checked my crontab and saw that the entry for my deleted config was still there. No harm done, because the cron entry just fails since the config no longer exists. For now, I've commented out the entry.
 
Hence unless I did something wrong, I believe that this version might not remove the cron entry when deleting configs. Let me know if you need any more info on this.
Ok....I highly recommend using the terminal command line for rclone and completely ignoring the rclone plugin GUI section. Just type in "rclone config" to manage your config.

Also, use the Userscripts plugin for any cron stuff you're trying to do.

This is how most people use rclone.
On 12/19/2019 at 10:41 PM, Ruthalas said:

I second that request (for restore functionality).
No hurry, but that would be a valuable addition.

Restore functionality would be great. Until then, where can I find a manual restore procedure?

Thanks in advance.
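In the meantime, a manual restore is essentially "decompress the vdisk back into place and re-define the XML". A rough sketch follows; the paths and the `YYYYMMDD_HHMM_` timestamp prefix on backup names are assumptions based on the logs posted elsewhere in this thread, so adjust to what your backup share actually contains.

```shell
#!/bin/sh
# restore_name: strip the timestamp prefix and compression suffix from a
# backup file name to recover the original vdisk name. (hypothetical helper)
restore_name() {
  basename "$1" | sed -E 's/^[0-9]{8}_[0-9]{4}_//; s/\.(zst|gz)$//'
}

# By hand, with the VM stopped, a restore would look roughly like:
#   zstd -d -c /mnt/user/backups/VM/20200908_1406_elements.img.zst \
#     > "/mnt/nvme_mirror/VM/Elements/$(restore_name 20200908_1406_elements.img.zst)"
#   virsh define /mnt/user/backups/VM/20200908_1406_Elements.xml
```

The `virsh define` step only matters if the VM's definition was lost; if the VM still exists in the GUI, restoring the vdisk alone is enough.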

 


Is there someone who will be carrying this plugin forward? As of the new Unraid 6.9.0, this plugin is having major issues for me.

When I run a backup, it breaks the libvirtd service, so it cannot discover the VMs. They are still running, but I then have to shut down all the running VMs from within the guests, use lsof to close any remaining open files, and reboot the entire server to get the libvirtd service back up and manage my VMs again.

 

I have snapshots enabled and qemu-guest-agent installed on all the VMs.

Sometimes it gets through 1 or 2 VMs, sometimes none.

 

If a snapshot has been taken when it fails, it also doesn't remove the snap disk from the VM's XML, so after a reboot I have to manually edit the XML to get the VM working again (since the snap disk no longer exists).
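For recovering from that state by hand: if the overlay file still exists, `virsh blockcommit <vm> <dev> --active --pivot` should merge it back; if the overlay is already gone, the disk `<source>` in the XML has to be pointed back at the original image. A hedged sketch of that text edit, assuming the overlay is named like the original disk with a `.snap` extension (which matches the naming seen in logs in this thread):

```shell
#!/bin/sh
# fix_snap_source: rewrite a domain XML so disk sources ending in .snap
# point back at the original .img files. (illustrative helper; assumes
# <disk>.snap sits next to <disk>.img - verify against your own XML first)
fix_snap_source() {
  sed "s|\.snap'|.img'|g"
}

# By hand it would look roughly like:
#   virsh dumpxml Elements | fix_snap_source > /tmp/Elements.xml
#   virsh define /tmp/Elements.xml
```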

 

I see nothing in /var/log/libvirt/libvirtd.log.

Here are the 3 last backup logs from one of my scheduled configs

20200705_0000_unraid-vmbackup_error.log 20200706_0551_unraid-vmbackup_error.log 20200703_1700_unraid-vmbackup_error.log


Thanks for your work; I've been using the userscript for ages!

 

Some way for the plugin to ignore or handle disks in the VM XML that are addressed to hardware would be good. I have mostly vdisks, but if the plugin encounters a hardware-addressed disk in a VM, it seems to bomb out. Default settings, no snapshots.

  1. VM with only vdisks - did okay
  2. VM with only vdisks - did okay
  3. VM with vdisks and a hardware-addressed SSD bombed out with "/tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript"
  4. VM with only vdisks - didn't get done, as it came after the one that 'failed'

 

"2020-05-01 14:09:45 information: /mnt/user/backup_VM_jtok/Win10-Ufficio1 exists. continuing. /tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript 2020-05-01 14:09:45"

 

I think @blurp76 had the same issue, but I don't think he/she has vdisks as well.

 

Edit:

 

Just to follow up on myself: I tested by removing the hardware line within the XML for the SSD, then ran the backup again, and everything worked. The user script does the same thing, whereas the old version (really, really old - 2019, I think) worked okay.
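The "bad array subscript" strongly suggests the script is indexing its vdisk_types associative array with an empty key: a hardware-passed disk has no file path, and bash throws exactly that error for an empty associative-array subscript. A guard like the one below (a sketch only, with made-up names, not the plugin's actual line 424) would let the script skip such disks instead of dying:

```shell
#!/bin/bash
# Defensive lookup sketch. vdisk_types and vdisk_type_of are illustrative
# names; the real script does a similar lookup without the guard.
declare -A vdisk_types=( ["/mnt/vm/win10/vdisk1.img"]="raw" )

vdisk_type_of() {
  local vdisk_path="$1"
  # Hardware-addressed disks yield an empty or unknown path here;
  # skip them rather than hitting "bad array subscript".
  if [[ -z "$vdisk_path" || -z "${vdisk_types[$vdisk_path]+set}" ]]; then
    echo "skip"
  else
    echo "${vdisk_types[$vdisk_path]}"
  fi
}
```

The `${array[key]+set}` expansion tests for key existence without triggering the error, and the `-z "$vdisk_path"` check short-circuits before bash ever sees an empty subscript.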

Edited by Cadal
Attempted to verify the issue.
11 hours ago, Cadal said:

Thanks for your work; I've been using the userscript for ages!

 

Some way for the plugin to ignore or handle disks in the VM XML that are addressed to hardware would be good. I have mostly vdisks, but if the plugin encounters a hardware-addressed disk in a VM, it seems to bomb out. Default settings, no snapshots.

  1. VM with only vdisks - did okay
  2. VM with only vdisks - did okay
  3. VM with vdisks and a hardware-addressed SSD bombed out with "/tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript"
  4. VM with only vdisks - didn't get done, as it came after the one that 'failed'

 

"2020-05-01 14:09:45 information: /mnt/user/backup_VM_jtok/Win10-Ufficio1 exists. continuing. /tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript 2020-05-01 14:09:45"

 

I think @blurp76 had the same issue, but I don't think he/she has vdisks as well.

 

Edit:

 

Just to follow up on myself: I tested by removing the hardware line within the XML for the SSD, then ran the backup again, and everything worked. The user script does the same thing, whereas the old version (really, really old - 2019, I think) worked okay.

Seeing the same issue. I believe it worked at some point; it wasn't until this morning that I realized my backups weren't being created. I am going back to the manual virsh/rsync script I was running before this.

  • 2 weeks later...
On 7/6/2020 at 2:19 PM, Jarsky said:

Is there someone who will be carrying this plugin forward? As of the new Unraid 6.9.0, this plugin is having major issues for me.

 

Just checking in to see if anyone has got this working OK on 6.9.0 without it crashing libvirtd?

On 8/10/2020 at 11:46 PM, Jarsky said:

 

Just checking in to see if anyone has got this working OK on 6.9.0 without it crashing libvirtd?

I just ran into this issue. The log looks like it shut down the VMs just fine, but the one VM I have running with the QEMU agent installed (so it can perform live snapshots without shutting the VM down) went through what appears to be "OK", but libvirtd crashed right after?

  • 4 weeks later...
On 8/16/2020 at 12:05 AM, Darksurf said:

I just ran into this issue. The log looks like it shut down the VMs just fine, but the one VM I have running with the QEMU agent installed (so it can perform live snapshots without shutting the VM down) went through what appears to be "OK", but libvirtd crashed right after?

Turns out my backups complete fine when agentless, or when the VM is powered off.

The problem is when trying to do one with qemu-guest-agent so I can do a snapshot backup of a running VM. 

 

It then kills libvirtd. So the workaround for me, since this is just a home setup, is to allow the VMs to be shut down to complete the backup.

Not a fan of that, but it's better than killing my whole server.
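For anyone trying to pin down where libvirtd dies, it may help to run the snapshot step by hand instead of through the plugin. This sketch just builds the sort of virsh command a quiesced, disk-only live snapshot uses so it can be inspected first; the VM name, target dev, and overlay path are example values, and the exact flags the plugin passes may differ:

```shell
#!/bin/sh
# snapshot_cmd: print the virsh command for a quiesced, disk-only snapshot
# so it can be reviewed and run manually. (illustrative helper)
snapshot_cmd() {
  vm="$1"; dev="$2"; overlay="$3"
  echo "virsh snapshot-create-as $vm ${vm}.snap --disk-only --atomic --quiesce --no-metadata --diskspec ${dev},file=${overlay}"
}

snapshot_cmd Elements hdc /mnt/nvme_mirror/VM/Elements/elements.snap
```

`--quiesce` requires the guest agent to be reachable; if running the printed command by hand crashes libvirtd the same way, the bug is in libvirt/qemu on 6.9.0 rather than in the plugin's script.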

 

2020-09-08 14:07:15 information: Elements can be found on the system. attempting backup.
2020-09-08 14:07:15 information: creating local Elements.xml to work with during backup.
2020-09-08 14:07:15 information: /mnt/user/share/BACKUP/VMBackups/Elements exists. continuing.
2020-09-08 14:07:15 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. Elements is running. can_backup_vm set to y.
2020-09-08 14:07:15 information: actually_copy_files is 1.
2020-09-08 14:07:15 information: can_backup_vm flag is y. starting backup of Elements configuration, nvram, and vdisk(s).
sending incremental file list
Elements.xml

sent 6,545 bytes received 35 bytes 13,160.00 bytes/sec
total size is 6,455 speedup is 0.98
2020-09-08 14:07:15 information: copy of Elements.xml to /mnt/user/share/BACKUP/VMBackups/Elements/20200908_1406_Elements.xml complete.
2020-09-08 14:07:15 information: Elements does not appear to have an nvram file. skipping.
2020-09-08 14:07:15 information: able to perform snapshot for disk /mnt/nvme_mirror/VM/Elements/elements.img on Elements. use_snapshots is 1. vm_state is running. vdisk_type is raw
2020-09-08 14:07:15 information: qemu agent found. enabling quiesce on snapshot.
Domain snapshot Elements-elements.snap created
2020-09-08 14:07:15 information: snapshot command succeeded on elements.snap for Elements.
/mnt/nvme_mirror/VM/Elements/elements.img : 40.77% (21474836480 => 8754302341 bytes, /mnt/user/share/BACKUP/VMBackups/Elements/20200908_1406_elements.img.zst)
2020-09-08 14:07:54 information: copy of /mnt/nvme_mirror/VM/Elements/elements.img to /mnt/user/share/BACKUP/VMBackups/Elements/20200908_1406_elements.img.zst complete.
2020-09-08 14:07:54 information: backup of /mnt/nvme_mirror/VM/Elements/elements.img vdisk to /mnt/user/share/BACKUP/VMBackups/Elements/20200908_1406_elements.img.zst complete.
error: Disconnected from qemu:///system due to end of file
error: internal error: client socket is closed

 

