JTok

Members
  • Content Count: 85
  • Joined
  • Last visited
  • Days Won: 1

JTok last won the day on December 19 2019

JTok had the most liked content!

Community Reputation: 41 Good

About JTok
  • Rank: Advanced Member
  • Birthday: August 25

Converted
  • Gender: Male
  • URL: https://github.com/JTok
  • Location: Chicago

  1. That's definitely something I would be willing to implement at some point, but there is other functionality that I would like to ensure is working before I try to mess with it. Unfortunately, I cannot promise more than that right now.
  2. I am currently focusing on implementing a restore function next. Based on feedback, I have decided that in order for restore functionality to work, you will need to allow the plugin to convert the existing backup structure. This will be optional, but without it, you will need to perform restores manually. As development progresses, I will work out additional details and reach back out here for feedback.
  3. Some behind-the-scenes changes thanks to @sjerisman, including zstandard compression and updates to the logging. Make sure you read the help if you are going to be switching from legacy gzip compression to zstandard. I also made some changes to how configs are shown to users that should make it clearer how they work and which one is currently being edited. Full changelog below.

     v0.2.1 - 2020/02/20 Pika Pika
     - merged changes from unraid-vmbackup script v1.3.1.
     - added seconds to logs.
     - added option to use zstandard compression.
     - added option to create a vm-specific log in each vm's subfolder.
     - added config drop-down selection to the top of each tab.
     - updated method used to determine cpu thread count.

     https://github.com/JTok/unraid.vmbackup/tree/v0.2.1

     -JTok
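Since the help warns about switching compression formats, here is a minimal sketch of how a backup script might choose between the two methods. This is an illustration under assumptions (zstd and pigz as the tools, and these particular flags), not the plugin's actual code:

```shell
# Sketch, not plugin code: prefer zstandard when available, otherwise
# fall back to pigz (parallel gzip). Flag choices are assumptions.
if command -v zstd >/dev/null 2>&1; then
  compressor="zstd -T0"   # -T0 = use all CPU threads
  extension=".zst"
else
  compressor="pigz"
  extension=".gz"
fi
echo "would compress with: $compressor (output $extension)"
# Note: .gz and .zst archives are not interchangeable; a backup must be
# decompressed with the tool that matches how it was created.
```

This is why reading the help before switching matters: existing .gz backups stay gzip archives, and only new backups are written as .zst.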
  4. I think I might have an idea. How are you having the backups cleaned? Are you using "Number of days to keep backups" or "Number of backups to keep"? It looks like you are using "Number of backups to keep". For that to function properly, the vdisks must be named in a specific way. If you expand the help, it gives the following info: "If a VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.)." -JTok
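To illustrate why the naming matters for "Number of backups to keep", here is a rough sketch of timestamp-prefix retention. It is an assumption about the general approach, not the plugin's actual cleanup code, and it only prints deletion candidates rather than removing anything:

```shell
# Sketch: with backups named <YYYYMMDD_HHMM>_vdisk1.img, sorting the
# names newest-first and skipping the first $keep entries yields the
# old backups that retention would remove.
workdir=$(mktemp -d)
cd "$workdir"
touch 20200203_1555_vdisk1.img 20200204_1555_vdisk1.img 20200205_1555_vdisk1.img
keep=2
to_delete=$(ls | sort -r | tail -n +"$((keep + 1))")
echo "would delete: $to_delete"
```

If the vdisk names did not follow a consistent pattern, backups of different vdisks could not be grouped and counted per disk this way.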
  5. @torch2k Hmmmm 🤔. Okay, I just did some checking, and it looks like Docker looks for pigz by default. That said, when the plugin installs pigz, it does add /usr/bin/unpigz. Possibly your installation was corrupted somehow, since I just checked my dockers and two of them needed updating. I then ran Update All without issue. It seems like reinstalling the plugin could fix the issue for you... unless I'm missing something? -JTok
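For anyone wanting to verify this on their own system, a quick check of the path mentioned above (the path comes from the post; the rest is just a sketch):

```shell
# Check whether the unpigz binary the plugin is said to install is
# present and executable at the expected path.
if [ -x /usr/bin/unpigz ]; then
  status="present"
else
  status="missing"
fi
echo "unpigz: $status"
```

If it reports missing, reinstalling the plugin should restore the binary.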
  6. Will push a fix. Thx. Sent from my iPhone using Tapatalk
  7. It shouldn’t matter what the extension is, but I’ll create a qcow2 drive this afternoon and run some tests. Sent from my iPhone using Tapatalk
  8. That's a good idea. I was a little worried about messing with people's backups, which is why I was leaning toward the more hands-off approach. Though I suppose it would be easy enough to make it optional 🤔, and even include a button in Danger Zone to run the conversion manually later if it's skipped during the initial transition.
  9. Sorry, I was unclear. What I meant was that once you switch to the new structure, backups made under the old structure will need to be manually cleaned out until only new-structure backups exist. Backups made under the new structure would still be cleaned automatically. I will update the original post to clarify this. Sent from my iPhone using Tapatalk
  10. I am looking to get some feedback on the following: I'd like to change how the plugin saves backups, but it would be a breaking change. I want to change the timestamp from being part of the filename to being a sub-folder.

      Instead of this structure:

      /backup_dir/vm_name/20200204_1555_vm_name.xml
      /backup_dir/vm_name/20200204_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555_vdisk1.img
      /backup_dir/vm_name/20200204_1555_vdisk2.img
      /backup_dir/vm_name/20200205_1555_vm_name.xml
      /backup_dir/vm_name/20200205_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555_vdisk1.img
      /backup_dir/vm_name/20200205_1555_vdisk2.img

      backups would have this structure:

      /backup_dir/vm_name/20200204_1555/vm_name.xml
      /backup_dir/vm_name/20200204_1555/asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555/vdisk1.img
      /backup_dir/vm_name/20200204_1555/vdisk2.img
      /backup_dir/vm_name/20200205_1555/vm_name.xml
      /backup_dir/vm_name/20200205_1555/asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555/vdisk1.img
      /backup_dir/vm_name/20200205_1555/vdisk2.img

      This should come with a few advantages:
      - restore functionality should be much easier, and therefore quicker, to implement.
      - the backup structure will be less cluttered.
      - files that are not part of a normal vm structure (such as in the case of SpaceInvaderOne's Macinabox) will be easier to handle.

      However, this will break the existing file cleaning functions, so you would need to manually clean out old backups until all of your backups were part of the new structure. Once you switch to the new structure, only new-structure backups would be cleaned by the plugin.

      With that in mind, here is my proposal: upon installation of whatever version contains this new structure, the first time you open the plugin page it will prompt you to choose which structure you want to use, with a warning message (and a link to further explanation). Don't worry if the plugin auto-updates before you choose a structure; it will continue to use the old structure, so your backups will still run even if you haven't picked one yet.

      Moving forward, development will focus on the new structure. The features that work with the old structure will stay and keep working, but will not be updated to support new functionality. This will almost certainly include restores and scenarios like Macinabox.

      This way, everything continues to work as it has in the past, and everyone gets time to transition at their own pace. Then, somewhere down the line, a discussion about removing the old structure entirely may take place if it becomes necessary.

      Thoughts? Concerns?

      Thanks,
      JTok
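For anyone curious what a one-time conversion between the two layouts could look like, here is a hypothetical sketch. The file names are the examples from the post; this is not the plugin's converter and does not handle every edge case:

```shell
# Hypothetical conversion from the old flat layout
# (<YYYYMMDD_HHMM>_<file>) to per-timestamp subfolders.
vm_dir=$(mktemp -d)   # stand-in for /backup_dir/vm_name
cd "$vm_dir"
touch 20200204_1555_vm_name.xml 20200204_1555_vdisk1.img \
      20200205_1555_vdisk1.img
for f in [0-9]*_*; do
  stamp=$(printf '%s' "$f" | cut -c1-13)   # YYYYMMDD_HHMM prefix
  rest=${f#"${stamp}"_}                    # original file name
  mkdir -p "$stamp"
  mv "$f" "$stamp/$rest"
done
ls */*
```

Grouping by the timestamp prefix like this is also what would make per-backup restores simpler: each subfolder is one complete, self-contained backup.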
  11. I'll have to make some changes to the script to handle that scenario, but it should be reasonably easy. I think I've got a general way to handle this that should work for other similar situations down the line. I'll add that to my list of things to do.
  12. Generally that should work, but if you run into issues, you may also need to replace the config and/or nvram.
  13. Thanks. I'm having trouble tracking down what might be causing this issue, so I will need to find time to set up an NFS share in my test environment. I did notice there was a recent update to Unassigned Devices relating to NFS shares... it didn't seem like it would be related to your issue, but possibly it will correct it. I'm hoping to have an opportunity to set up a share for testing sometime in the next week, but I'm out of town the next two weekends, so it may take longer than that if time doesn't permit during the week.
  14. Could you let me know what you have the following set to?

      Settings:
      • Enable Snapshots
      • Compress Backups

      Other Settings:
      • Compare files during backup
      • Disable delta syncs for backups
      • Only use rsync for backups

      Danger Zone:
      • Fall back to standard backups
      • Pause VMs instead of shutting down
  15. @dodgypast It looks like there is an issue with whatever data the script is pulling from the config. Are you able to DM/send me the config (xml file) for that VM? Thanks, JTok