JTok

Members
  • Content Count
    89
  • Days Won
    1

JTok last won the day on December 19 2019

JTok had the most liked content!

Community Reputation

48 Good

About JTok

  • Rank
    Advanced Member
  • Birthday August 25

Converted

  • Gender
    Male
  • URL
    https://github.com/JTok
  • Location
    Chicago

Recent Profile Visitors

1363 profile views
  1. I tried the vmbackup plugin from Community Applications, but it just does not want to trigger the backup.

     I have the VMs in the cache pool that is mapped to the appdata folder.

     P.S. It worked prior to upgrading to version beta25.

  2. For what it is worth, the direct mount did not actually fix it for me; it just obfuscated it by making it so loop2/3 didn't show up in iotop. When I checked the SMART status, I wound up with just as many writes as before, so I would be interested to hear your results. It seems like I might be an outlier here, so possibly I have a different issue affecting my setup.
  3. A note on this from my experience. I actually did this and it worked, but when I went to make changes to the dockers, it failed and they would no longer start. I think I could have fixed it by clearing out the mountpoints, but I opted to just wipe the share and then re-add all the dockers from "my templates", which worked just fine. So, can confirm -- would not recommend. I did copy the libvirt folder, though, and have not noticed any ill effects (...yet. haha)
  4. It's funny you mention that. I haven't rolled the workaround back yet, but I'm starting to wonder if it isn't fixing the issue so much as obfuscating it. I'm planning on rolling the fix back later today/tonight to see what happens. I got some metrics by basically doing the same thing as your script (except manually). According to SMART reporting, my cache drives are writing 16.93 GB/hr. This is even though, when I implemented the workaround, I also moved several VMs and a few docker paths to my nvme drives just to reduce the writes further. I'd be curious to know what o…
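
     For reference, the manual check described above can be scripted. A minimal sketch, assuming a SATA SSD that exposes the Total_LBAs_Written SMART attribute in 512-byte units (the device path and interval are placeholders):

       #!/bin/bash
       # Sample Total_LBAs_Written twice, an hour apart, and convert the
       # delta to GB written per hour (assumes 512-byte logical blocks).
       DEV=/dev/sdb   # hypothetical cache device
       lba_start=$(smartctl -A "$DEV" | awk '/Total_LBAs_Written/ {print $10}')
       sleep 3600
       lba_end=$(smartctl -A "$DEV" | awk '/Total_LBAs_Written/ {print $10}')
       echo "scale=2; ($lba_end - $lba_start) * 512 / 1024^3" | bc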
  5. For anyone else that needs it, I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt. A little CYA: to reiterate what has already been said, this workaround is not ideal and comes with some big caveats, so be sure to read through the thread and ask questions before implementing. I'm not going to get into it here, but I used S1dney's same basic directions for the docker by making backups and copying files to folders in /boot/config/. Create a share called libvirt on the cache drive just like…
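
     To make the shape of that adaptation concrete, a rough sketch of the idea only (not the actual steps; S1dney's thread has the real directions, and the paths here are assumptions):

       # Keep libvirt's data on a cache share and bind-mount it over the
       # usual mount point instead of looping libvirt.img on loop3.
       mkdir -p /mnt/cache/libvirt
       cp -a /etc/libvirt/. /mnt/cache/libvirt/   # one-time seed from the image
       mount --bind /mnt/cache/libvirt /etc/libvirt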
  6. That's definitely something I would be willing to implement at some point, but there is other functionality that I would like to ensure is working before I try to mess with it. Unfortunately, I cannot promise more than that right now.
  7. I am currently focusing on implementing a restore function next. Based on feedback, I have decided that in order for restore functionality to work, you will need to allow the plugin to convert the existing backup structure. This will be optional, but without it, you will need to perform restores manually. As development progresses, I will work out additional details and reach back out here for feedback.
  8. Some behind-the-scenes changes thanks to @sjerisman that include zstandard compression and updates to the logging. Make sure you read the help if you are going to be switching from legacy gzip compression to zstandard. I also made some changes to how configs are shown to users that should make it clearer how they work and what is currently being edited. Full change-log below.

     v0.2.1 - 2020/02/20 Pika Pika
     - merged changes from unraid-vmbackup script v1.3.1.
     - added seconds to logs.
     - added option to use zstandard compression.
     - …
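
     For anyone weighing the switch, these are roughly the kind of commands the two modes correspond to (illustrative only; the plugin's actual invocations and flags may differ):

       gzip -c vdisk1.img > vdisk1.img.gz        # legacy gzip
       zstd -T0 vdisk1.img -o vdisk1.img.zst     # zstandard, all cores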
  9. I think I might have an idea. How are you having the backups cleaned? Are you using "Number of days to keep backups" or "Number of backups to keep"? Because it looks like you are using "Number of backups to keep". In order for that to properly function, the vdisks must be named in a specific way. If you expand the help, it should give the following info: "If a VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.)." -JTok
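
     As an illustration of that naming requirement (paths are hypothetical), sequentially numbered vdisks enumerate cleanly like this:

       # List a VM's vdisks in the required vdisk<N>.img order.
       for n in $(seq 1 9); do
         f="/mnt/user/domains/vm_name/vdisk${n}.img"
         [ -e "$f" ] && echo "$f"
       done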
  10. @torch2k Hmmmm 🤔. Okay, just did some checking, and it looks like docker looks for pigz by default. That said, when the plugin installs pigz, it does add /usr/bin/unpigz. Possibly your installation was corrupted somehow, since I just checked my dockers and two of them needed updating. I then ran Update All without issue. It seems like reinstalling the plugin could fix the issue for you... unless I'm missing something? -JTok
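
      A quick way to verify the binaries are in place (assuming, per the above, that the plugin installs them under /usr/bin):

        # Both should resolve if the plugin's install completed cleanly.
        which pigz unpigz
        pigz --version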
  11. Will push a fix. Thx.
  12. It shouldn't matter what the extension is, but I'll create a qcow2 drive this afternoon and run some tests.
  13. That's a good idea. I was a little worried about messing with people's backups, so that's why I was leaning towards the more hands-off approach. Though I suppose it would be easy enough to make it optional 🤔, and even include a button in Danger Zone to do it manually at a later time if it's opted out of during the initial transition.
  14. Sorry, I was unclear. What I meant was that once you switch to the new structure, backups made under the old structure will need to be manually cleaned out until only new-structure backups exist. Backups made under the new structure would still be able to be cleaned out. I will update the original post to clarify this.
  15. I am looking to get some feedback on the following: I'd like to change how the plugin saves backups, but it would be a breaking change. I want to change the timestamp from being part of the filename to being a sub-folder. Instead of this structure:

      /backup_dir/vm_name/20200204_1555_vm_name.xml
      /backup_dir/vm_name/20200204_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555_vdisk1.img
      /backup_dir/vm_name/20200204_1555_vdisk2.img
      /backup_dir/vm_name/20200205_1555_vm_name.xml
      /backup_dir/vm_name/20200205_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555…
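
      If the conversion does happen, the mechanics are simple; a hypothetical sketch of moving old flat-named backups into per-timestamp sub-folders (not the plugin's actual code):

        #!/bin/bash
        # Move flat-named backups (YYYYMMDD_HHMM_<file>) into timestamp folders.
        cd /backup_dir/vm_name || exit 1
        for f in [0-9]*_*_*; do
          [ -f "$f" ] || continue
          ts=${f:0:13}            # e.g. 20200204_1555
          mkdir -p "$ts"
          mv "$f" "$ts/${f:14}"   # strip the "20200204_1555_" prefix
        done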