JTok

Everything posted by JTok

  1. Hello, long time no see. I am truly sorry to see so many of you have had an issue with this plugin, and it was not my intention to abandon it for as long as I have. Sadly, life had other plans (as it often does). I've recently found myself with time to tinker again, and as such I've released an update that does a few things to try and address some of the issues I am aware of. Unfortunately, I haven't been able to replicate many of the issues others are having, so my ability to test has been limited. I also can see that the operation of some of the advanced functions isn't imme…
  2. For what it is worth, the direct mount did not actually fix it for me, it just obfuscated it by making it so loop2/3 didn't show up in iotop. When I checked the SMART status I wound up with just as many writes as before, so I would be interested to hear your results. It seems like I might be an outlier here, so possibly I have a different issue affecting my setup.
  3. A note on this from my experience. I actually did this and it worked, but when I went to make changes to the dockers it failed and they would no longer start. I think I could have fixed it by clearing out the mountpoints, but I opted to just wipe the share and then re-add all the dockers from "my templates", which worked just fine. So, can confirm -- would not recommend. I did copy the libvirt folder though and have not noticed any ill effects (...yet. haha)
  4. It's funny you mention that. I haven't rolled the workaround back yet, but I'm starting to wonder if it isn't fixing the issue so much as obfuscating it. I'm planning on rolling the fix back later today/tonight to see what happens. I got some metrics by basically doing the same thing as your script (except manually). According to SMART reporting, my cache drives are writing 16.93GB/hr. That's even though, when I implemented the workaround, I also moved several VMs and a few docker paths to my NVMe drives to reduce the writes further. I'd be curious to know what o…
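     For anyone who wants to measure their own write rate the same way, this is roughly what I mean by doing it manually. A rough sketch only -- SMART attribute names and units vary by drive; mine report Total_LBAs_Written as attribute 241 in 512-byte sectors:
        # read total LBAs written, wait an hour, read again, then convert the delta to GB
        DEV=/dev/sdX   # replace with your cache device
        START=$(smartctl -A "$DEV" | awk '$1 == 241 {print $NF}')
        sleep 3600
        END=$(smartctl -A "$DEV" | awk '$1 == 241 {print $NF}')
        echo "scale=2; ($END - $START) * 512 / 1000000000" | bc   # GB written in the last hour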
  5. For anyone else that needs it, I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt. A little CYA: To reiterate what has already been said, this workaround is not ideal and comes with some big caveats, so be sure to read through the thread and ask questions before implementing. I'm not going to get into it here, but I used S1dney's same basic directions for the docker by making backups and copying files to folders in /boot/config/. Create a share called libvirt on the cache drive just lik…
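     One way to wire it up so the change survives a reboot is a couple of lines in /boot/config/go. Illustrative only -- not necessarily exactly what I did, and the patched rc.libvirt (pointed at the libvirt share instead of the loop-mounted image) is the real work and comes from following S1dney's directions:
        # back up the stock script, then drop the patched copy into place at boot
        cp /etc/rc.d/rc.libvirt /boot/config/rc.libvirt.stock
        cp /boot/config/rc.libvirt.patched /etc/rc.d/rc.libvirt
        chmod +x /etc/rc.d/rc.libvirt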
  6. That's definitely something I would be willing to implement at some point, but there is other functionality that I would like to ensure is working before I try to mess with it. Unfortunately, I cannot promise more than that right now.
  7. I am currently focusing on implementing a restore function next. Based on feedback, I have decided that in order for restore functionality to work, you will need to allow the plugin to convert the existing backup structure. This will be optional, but without it, you will need to perform restores manually. As development progresses, I will work out additional details and reach back out here for feedback.
  8. Some behind-the-scenes changes thanks to @sjerisman, including zstandard compression and updates to the logging. Make sure you read the help if you are going to be switching from legacy gzip compression to zstandard. I also made some changes to how configs are shown to users that should make it clearer how they work and what is currently being edited. Full change-log below.
     v0.2.1 - 2020/02/20 Pika Pika
     - merged changes from unraid-vmbackup script v1.3.1.
     - added seconds to logs.
     - added option to use zstandard compression.
     - …
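     To give a rough idea of what the two compression options boil down to (not the plugin's exact commands, just the general shape of gzip/pigz vs zstandard from a shell):
        # legacy gzip-style compression (pigz is parallel gzip)
        tar -I pigz -cf vdisk1.img.tar.gz vdisk1.img
        # zstandard compression using all cores
        tar -I 'zstd -T0' -cf vdisk1.img.tar.zst vdisk1.img
        # extracting the zstandard archive again
        tar -I zstd -xf vdisk1.img.tar.zst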
  9. I think I might have an idea. How are you having the backups cleaned? Are you using "Number of days to keep backups" or "Number of backups to keep"? Because it looks like you are using "Number of backups to keep". In order for that to properly function, the vdisks must be named in a specific way. If you expand the help, it should give the following info: "If a VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.)." -JTok
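     To illustrate why the naming matters: cleanup has to group the timestamped backup files by vdisk name and then trim each group. Roughly like this (a simplified illustration, not the plugin's actual code; it assumes the timestamped file naming and keeps the 3 newest backups per vdisk):
        keep=3
        for vdisk in vdisk1.img vdisk2.img; do
            # list newest first, then delete everything after the first $keep matches
            ls -1t /backup_dir/vm_name/*_"$vdisk" | tail -n +$((keep + 1)) | xargs -r rm -f
        done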
  10. @torch2k Hmmmm 🤔. Okay, just did some checking and it turns out docker looks for pigz by default. That said, when the plugin installs pigz, it does add /usr/bin/unpigz. Possibly your installation was corrupted somehow, since I just checked my dockers and two of them needed updating. I then ran Update All without issue. It seems like reinstalling the plugin could fix the issue for you... unless I'm missing something? -JTok
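     If you want to sanity-check before reinstalling, you can confirm from the Unraid console that the binaries are actually there (all of these should resolve if the plugin installed cleanly):
        command -v pigz
        command -v unpigz
        ls -l /usr/bin/unpigz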
  11. Will push a fix. Thx. Sent from my iPhone using Tapatalk
  12. It shouldn’t matter what the extension is, but I’ll create a qcow2 drive this afternoon and run some tests. Sent from my iPhone using Tapatalk
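     If anyone wants to check for themselves in the meantime: the disk format is stored in the image itself, not in the extension, which you can confirm with qemu-img (the path below is just an example from a typical domains share):
        # reports "file format: qcow2" (or raw) no matter what the file is named
        qemu-img info /mnt/user/domains/Windows10/vdisk1.qcow2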
  13. That's a good idea. I was a little worried about messing with people's backups, so that's why I was leaning towards the more hands-off approach. Though I suppose it would be easy enough to make it optional 🤔, and even include a button in Danger Zone to do it manually at a later time for anyone who opts out during the initial transition.
  14. Sorry, I was unclear. What I meant was that once you switch to the new structure, backups made under the old structure will need to be manually cleaned out until only new-structure backups exist. Backups made under the new structure would still be able to be cleaned out automatically. I will update the original post to clarify this. Sent from my iPhone using Tapatalk
  15. I am looking to get some feedback on the following: I'd like to change how the plugin saves backups, but it would be a breaking change. I want to change the timestamp from being part of the filename to being a sub-folder. Instead of this structure:
      /backup_dir/vm_name/20200204_1555_vm_name.xml
      /backup_dir/vm_name/20200204_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555_vdisk1.img
      /backup_dir/vm_name/20200204_1555_vdisk2.img
      /backup_dir/vm_name/20200205_1555_vm_name.xml
      /backup_dir/vm_name/20200205_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555…
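     For reference, the sub-folder version I have in mind would look something like this (a mock-up, the details may still change):
      /backup_dir/vm_name/20200204_1555/vm_name.xml
      /backup_dir/vm_name/20200204_1555/asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555/vdisk1.img
      /backup_dir/vm_name/20200204_1555/vdisk2.img
      /backup_dir/vm_name/20200205_1555/vm_name.xml
      ...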
  16. I'll have to make some changes to the script to handle that scenario, but it should be reasonably easy. I think I've got a general way to handle this that should work for other similar situations down the line. I'll add that to my list of things to do.
  17. Generally that should work, but if you run into issues, you may also need to replace the config and/or nvram.
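     For reference, a manual restore of the config and nvram is roughly the following (illustrative only -- the filenames follow the plugin's backup naming, the nvram path is where libvirt keeps it on my server, and the VM should be shut down first):
        # put the backed-up nvram file back where libvirt expects it
        cp /backup_dir/vm_name/20200204_1555_asdf564asd12sd5dfsd.fd /etc/libvirt/qemu/nvram/asdf564asd12sd5dfsd.fd
        # re-register the VM from the backed-up xml
        virsh define /backup_dir/vm_name/20200204_1555_vm_name.xml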
  18. Thanks. I'm having trouble tracking down what might be causing this issue, so I will need to find time to set up an NFS share in my test environment. I did notice there was a recent update to Unassigned Devices relating to NFS shares... it didn't seem like it would be related to your issue, but possibly it will correct it. I'm hoping to have an opportunity to set up a share for testing sometime in the next week, but I'm out of town the next two weekends, so it may take me longer than that if time doesn't permit during the week.
  19. Could you let me know what you have the following set to?
      Settings:
      - Enable Snapshots
      - Compress Backups
      Other Settings:
      - Compare files during backup
      - Disable delta syncs for backups
      - Only use rsync for backups
      Danger Zone:
      - Fall back to standard backups
      - Pause VMs instead of shutting down
  20. @dodgypast It looks like there is an issue with whatever data the script is pulling from the config. Are you able to DM/send me the config (xml file) for that VM? Thanks, JTok
  21. I see what you mean, and I'll try to find a way to make this clearer and/or more intuitive. That said, this is the design paradigm used by the rest of unRAID, so I don't think it is a good idea for me to make my plugin the exception, as that could cause further confusion. The functions of those buttons are defined by the plugin system, and while I've bypassed those default functions to implement them in a specific way, I kept the basic functionality the same for consistency across all the other plugins/forms. Here is a breakdown of how the buttons work by default in unRAID: The Default button resets the fo…
  22. I get what you're saying, but I don't want to make creating a profile necessary to get started... or going to any other tabs for that matter. I want it to be as close to working out of the box as possible, with as few fields as possible to fill in to get started, and all of them in the same place. To give you an idea of where my head is at, my design philosophy is that the main tab (i.e. Settings tab) is the only tab you actually need to have a basic functioning backup with reasonable settings. The other tabs are for more advanced configurations that a basic user really may no…
  23. @DZMM You create and manage the configs on the Manage Configs tab, but to edit them you choose the "current config" from the drop-down box on the Settings tab. From there you edit the config like normal. When you want to edit a different config, you go back to the drop-down box on the Settings tab. Also, the default config cannot be renamed or removed, and will not show up on the Manage Configs tab. So you already start with that config active, and from the Manage Configs tab you can add/edit additional configs. In retrospect, it isn't very obvious how it works, so I'll ma…
  24. Another big thanks to @sjerisman for adding this feature. Again, for anyone using the plugin, the features that get added here will make their way into the plugin before too long.
      v1.3.1 - 2020/01/21 So Say We All
      - added option to create a vm specific log in each vm's sub-folder.
      Script here: https://github.com/JTok/unraid-vmbackup/tree/v1.3.1
      -JTok
  25. My apologies for the long delay since the last release; for those that care about why, I'll leave it in the postscript. In this version there are some major changes in addition to the usual bug fixes. A big bug fix is that the plugin should now be able to handle parentheses in your VM names (thanks to squid for suggesting a solution to that). New features include the ability to have multiple configs so that you can run backups on different schedules, as well as the ability to run pre and post scripts with those configs. Compression now uses pigz for multi-threaded compression (th…
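     For the curious, parentheses in a VM name are exactly the kind of thing that breaks a shell script wherever the name isn't quoted carefully. A simplified example of the failure mode (not the plugin's actual code, and the path is just an example):
        vm_name="Windows 10 (test)"
        # unquoted, the shell splits the expansion into separate words, so the path falls apart
        # (and the parentheses cause outright syntax errors anywhere the string gets re-parsed, e.g. via eval)
        # ls /mnt/user/domains/$vm_name
        # quoted, the whole name is passed through as a single argument
        ls "/mnt/user/domains/$vm_name"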