
JTok

Members
  • Content Count

    89
  • Joined

  • Last visited

  • Days Won

    1

JTok last won the day on December 19 2019

JTok had the most liked content!

Community Reputation

48 Good

About JTok

  • Rank
    Advanced Member
  • Birthday August 25

Converted

  • Gender
    Male
  • URL
    https://github.com/JTok
  • Location
    Chicago

Recent Profile Visitors

1130 profile views
  1. For what it is worth, the direct mount did not actually fix it for me; it just obfuscated the problem by keeping loop2/3 out of iotop. When I checked the SMART status I wound up with just as many writes as before, so I would be interested to hear your results. It seems like I might be an outlier here, so possibly I have a different issue affecting my setup.
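     If anyone else wants to watch the loop devices directly instead of relying on iotop, here is a rough sketch of what I mean. It assumes docker.img is on loop2 and libvirt.img is on loop3 (run losetup -a to confirm, since the numbering can differ per system). The seventh field of /sys/block/<dev>/stat is sectors written, at 512 bytes each:

       #!/bin/bash
       # Print MB written to the docker/libvirt loop devices since boot.
       # loop2/loop3 are assumptions; run `losetup -a` to confirm which
       # loop device backs which image on your system.
       for dev in loop2 loop3; do
         sectors=$(awk '{print $7}' /sys/block/$dev/stat)
         echo "$dev: $(( sectors * 512 / 1024 / 1024 )) MB written since boot"
       done

     Sampling that twice, an hour apart, gives a rough write rate per image.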
  2. A note on this from my experience: I actually did this and it worked, but when I went to make changes to the dockers it failed and they would no longer start. I think I could have fixed it by clearing out the mountpoints, but I opted to just wipe the share and then re-add all the dockers from "my templates", which worked just fine. So, can confirm: would not recommend. I did copy the libvirt folder though and have not noticed any ill effects (...yet. haha)
  3. It's funny you mention that. I haven't rolled the workaround back yet, but I'm starting to wonder if it isn't fixing the issue so much as obfuscating it. I'm planning on rolling the fix back later today or tonight to see what happens. I got some metrics by basically doing the same thing as your script (except manually). According to SMART reporting, my cache drives are writing 16.93GB/hr. That is even after moving several VMs and a few docker paths to my nvme drives when I implemented the workaround, just to reduce the writes further. I'd be curious to know what others are seeing.
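     For anyone who wants to reproduce the manual check, it was roughly the following (a sketch only; /dev/nvme0 is a placeholder, and the 512,000-byte unit applies to the NVMe "Data Units Written" counter. SATA SSDs report attribute 241 Total_LBAs_Written instead, and the LBA size there varies by vendor):

       #!/bin/bash
       # Sample the NVMe "Data Units Written" counter twice, an hour apart,
       # and print the write rate in GB/hr. /dev/nvme0 is a placeholder.
       read_units() { smartctl -A /dev/nvme0 | awk '/Data Units Written/ {gsub(",",""); print $4}'; }
       start=$(read_units)
       sleep 3600
       end=$(read_units)
       echo "$(( (end - start) * 512000 / 1000000000 )) GB written in the last hour"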
  4. For anyone else that needs it, I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt. A little CYA: to reiterate what has already been said, this workaround is not ideal and comes with some big caveats, so be sure to read through the thread and ask questions before implementing. I'm not going to get into it here, but I followed S1dney's same basic directions for the docker workaround by making backups and copying files to folders in /boot/config/.
     Create a share called libvirt on the cache drive, just like in the docker instructions.
     Edit rc.libvirt's start_libvirtd method as follows:

       start_libvirtd() {
         if [ -f $LIBVIRTD_PIDFILE ]; then
           echo "libvirt is already running..."
           exit 1
         fi
         if mountpoint /etc/libvirt &> /dev/null ; then
           echo "Image is mounted, will attempt to unmount it next."
           umount /etc/libvirt 1>/dev/null 2>&1
           if [[ $? -ne 0 ]]; then
             echo "Image still mounted at /etc/libvirt, cancelling cause this needs to be a symlink!"
             exit 1
           else
             echo "Image unmounted successfully."
           fi
         fi
         # In order to have a soft link created, we need to remove the /etc/libvirt directory or creating a soft link will fail
         if [[ -d /etc/libvirt ]]; then
           echo "libvirt directory still exists, removing it so we can use it for the soft link."
           rm -rf /etc/libvirt
           if [[ -d /etc/libvirt ]]; then
             echo "/etc/libvirt still exists! Creating a soft link will fail thus refusing to start libvirt."
             exit 1
           else
             echo "Removed /etc/libvirt. Moving on."
           fi
         fi
         # Now that we know that the libvirt image isn't mounted, we want to make sure the symlink is active
         if [[ -L /etc/libvirt && -d /etc/libvirt ]]; then
           echo "/etc/libvirt is a soft link, libvirt is allowed to start"
         else
           echo "/etc/libvirt is not a soft link, will try to create it."
           ln -s /mnt/cache/libvirt /etc/ 1>/dev/null 2>&1
           if [[ $? -ne 0 ]]; then
             echo "Soft link could not be created, refusing to start libvirt!"
             exit 1
           else
             echo "Soft link created."
           fi
         fi
         # convert libvirt 1.3.1 w/ eric's hyperv vendor id patch to how libvirt does it in libvirt 1.3.3+
         sed -i -e "s/<vendor id='none'\/>/<vendor_id state='on' value='none'\/>/g" /etc/libvirt/qemu/*.xml &> /dev/null
         # remove <locked/> from xml because libvirt + virtlogd + virtlockd has an issue with locked
         sed -i -e "s/<locked\/>//g" /etc/libvirt/qemu/*.xml &> /dev/null
         # copy any new conf files we don't currently have
         cp -n /etc/libvirt-/*.conf /etc/libvirt &> /dev/null
         # add missing tss user account if coming from an older version of unRAID
         if ! grep -q "^tss:" /etc/passwd ; then
           useradd -r -c "Account used by the trousers package to sandbox the tcsd daemon" -d / -u 59 -g tss -s /bin/false tss
         fi
         echo "Starting libvirtd..."
         mkdir -p $(dirname $LIBVIRTD_PIDFILE)
         check_processor
         /sbin/modprobe -a $MODULE $MODULES
         /usr/sbin/libvirtd -d -l $LIBVIRTD_OPTS
       }

     Add this code to the go file, in addition to the code for the docker workaround:

       # Put the modified libvirt service file over the original one to make it not use the libvirt.img
       cp /boot/config/service-mods/libvirt-service-mod/rc.libvirt /etc/rc.d/rc.libvirt
       chmod +x /etc/rc.d/rc.libvirt
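     After a reboot it is worth a quick sanity check that the symlink actually took (not part of the workaround itself, just how I would verify it):

       readlink /etc/libvirt          # expect: /mnt/cache/libvirt
       mountpoint /etc/libvirt        # expect: "is not a mountpoint"
       losetup -a | grep libvirt      # expect: no libvirt.img loop device listed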
  5. That's definitely something I would be willing to implement at some point, but there is other functionality that I would like to ensure is working before I try to mess with it. Unfortunately, I cannot promise more than that right now.
  6. I am currently focusing on implementing a restore function next. Based on feedback, I have decided that in order for restore functionality to work, you will need to allow the plugin to convert the existing backup structure. This will be optional, but without it, you will need to perform restores manually. As development progresses, I will work out additional details and reach back out here for feedback.
  7. Some behind the scenes changes thanks to @sjerisman that include zstandard compression and updates to the logging. Make sure you read the help if you are going to be switching from legacy gzip compression to zstandard. I also made some changes to how configs are shown to the users that should make it clearer how they work, and what is currently being edited. Full change-log below.
     v0.2.1 - 2020/02/20 Pika Pika
     - merged changes from unraid-vmbackup script v1.3.1.
     - added seconds to logs.
     - added option to use zstandard compression.
     - added option to create a vm specific log in each vm's subfolder.
     - added config drop-down selection to the top of each tab.
     - updated method used to determine cpu thread count.
     https://github.com/JTok/unraid.vmbackup/tree/v0.2.1
     -JTok
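     A note for anyone poking at the backup files by hand: the plugin drives the compression itself, but the two formats need different tools to unpack. The filenames and extensions below are just examples and may not match exactly what the plugin writes:

       # legacy gzip/pigz backup
       pigz -d vdisk1.img.gz        # or: gunzip vdisk1.img.gz

       # zstandard backup
       zstd -d vdisk1.img.zst       # or: unzstd vdisk1.img.zst

       # compressing by hand using all cores
       zstd -T0 vdisk1.img -o vdisk1.img.zst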
  8. I think I might have an idea. How are you having the backups cleaned? Are you using "Number of days to keep backups" or "Number of backups to keep"? Because it looks like you are using "Number of backups to keep". In order for that to properly function, the vdisks must be named in a specific way. If you expand the help, it should give the following info: "If a VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.)." -JTok
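     If you want to double-check a VM's vdisk names against that requirement, something like this works (the path and VM name are only examples; adjust them for your domains share):

       # list only the vdisks whose names end in a number before the .img extension
       ls /mnt/user/domains/Windows10/ | grep -E '[0-9]+\.img$'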
  9. @torch2k Hmmmm 🤔. Okay, I just did some checking and it appears docker looks for pigz by default. That said, when the plugin installs pigz, it does add /usr/bin/unpigz. Possibly your installation was corrupted somehow, since I just checked my dockers and two of them needed updates. I then ran Update All without issue. It seems like reinstalling the plugin could fix the issue for you... unless I'm missing something? -JTok
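     If anyone wants to verify that on their own box before reinstalling, a quick check is:

       command -v pigz unpigz    # both should print a path (e.g. /usr/bin/unpigz) if the plugin's install worked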
  10. Will push a fix. Thx. Sent from my iPhone using Tapatalk
  11. It shouldn’t matter what the extension is, but I’ll create a qcow2 drive this afternoon and run some tests. Sent from my iPhone using Tapatalk
  12. That's a good idea. I was a little worried about messing with people's backups, so that's why I was leaning towards the more hands-off approach. Though I suppose it would be easy enough to make it optional 🤔, and even include a button in Danger Zone to do it manually at a later time if it's opted out of during the initial transition.
  13. Sorry, I was unclear. What I meant was that once you switch to the new structure, backups made under the old structure will need to be manually cleaned out until only new-structure backups exist. Backups made under the new structure would still be cleaned out by the plugin. I will update the original post to clarify this. Sent from my iPhone using Tapatalk
  14. I am looking to get some feedback on the following: I'd like to change how the plugin saves backups, but it would be a breaking change. I want to change the timestamp from being part of the filename to being a sub-folder.
      Instead of this structure:
      /backup_dir/vm_name/20200204_1555_vm_name.xml
      /backup_dir/vm_name/20200204_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555_vdisk1.img
      /backup_dir/vm_name/20200204_1555_vdisk2.img
      /backup_dir/vm_name/20200205_1555_vm_name.xml
      /backup_dir/vm_name/20200205_1555_asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555_vdisk1.img
      /backup_dir/vm_name/20200205_1555_vdisk2.img
      Backups would have this structure:
      /backup_dir/vm_name/20200204_1555/vm_name.xml
      /backup_dir/vm_name/20200204_1555/asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200204_1555/vdisk1.img
      /backup_dir/vm_name/20200204_1555/vdisk2.img
      /backup_dir/vm_name/20200205_1555/vm_name.xml
      /backup_dir/vm_name/20200205_1555/asdf564asd12sd5dfsd.fd
      /backup_dir/vm_name/20200205_1555/vdisk1.img
      /backup_dir/vm_name/20200205_1555/vdisk2.img
      This should come with a few advantages:
      - implementing restore functionality should be much easier and therefore quicker.
      - the structure of backups will be less cluttered.
      - files not part of a normal vm structure (such as in the case of SpaceInvaderOne's Macinabox) will be easier to handle.
      However, this will break the existing file cleaning functions, so you would need to manually clean out old backups until all your backups were part of the new structure. Once you switch to the new structure, only new-structure backups would be cleaned by the plugin.
      So with that in mind, here is my proposal: upon installation of whatever version contains this new structure, the first time you open the plugin page it will prompt you to choose which structure you want to use, with a warning message (and a link to further explanation). Don't worry if the plugin auto-updates before you choose a structure; it will continue to use the old structure, so your backups will still run even if you haven't picked one yet.
      Moving forward, development will focus on the new structure. The features that are part of the old structure will stay and keep working, but will not be updated to support new functionality. This will almost certainly include restoring and scenarios like Macinabox.
      This way, everything continues to work as it has in the past, and everyone gets time to transition at their own pace. Then, somewhere down the line, a discussion regarding removing the old structure entirely may take place if it becomes necessary.
      Thoughts? Concerns?
      Thanks,
      JTok
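      For illustration, a manual conversion from the old structure to the new one might look something like the sketch below. This is not plugin functionality, just a hand-rolled example; the backup path is a placeholder, and it assumes the timestamp is always the first two underscore-separated fields, as in the examples above:

        #!/bin/bash
        # Move old-structure backups (20200204_1555_<file>) into timestamp
        # subfolders (20200204_1555/<file>). Run against a single VM folder, e.g.:
        #   ./convert.sh "/mnt/user/backups/vm_name"
        vm_dir="$1"
        for f in "$vm_dir"/[0-9]*_[0-9]*_*; do
          [ -f "$f" ] || continue
          name=$(basename "$f")
          stamp=$(echo "$name" | cut -d_ -f1-2)     # e.g. 20200204_1555
          rest=$(echo "$name" | cut -d_ -f3-)       # e.g. vdisk1.img or vm_name.xml
          mkdir -p "$vm_dir/$stamp"
          mv "$f" "$vm_dir/$stamp/$rest"
        done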
  15. I'll have to make some changes to the script to handle that scenario, but it should be reasonably easy. I think I've got a general way to handle this that should work for other similar situations down the line. I'll add that to my list of things to do.