JTok

Members

  • Content Count: 65
  • Days Won: 1 (last won the day on December 19, 2019)
  • Community Reputation: 32 (Good)
  • Rank: Advanced Member
  • Birthday: August 25
  • Gender: Male
  • URL: https://github.com/JTok
  • Location: Chicago

  1. I’m out of town this weekend and away from my server, so I’m not sure if there’s a better way (or if this is easily accomplished in unraid)... but off the top of my head, can you mount the share locally in unraid first, then back up to the mount point? Sent from my iPhone using Tapatalk
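For anyone wanting to try that idea, here is a dry-run sketch. The server name, share, mount point, and cifs options are all placeholder examples, not something tested on unraid; the mount commands are printed rather than executed, since mounting needs root and a reachable server.

```shell
#!/bin/sh
# Dry-run sketch: mount a remote share locally, then point the backup at it.
# //nas/backups and the cifs option are made-up examples.
MOUNTPOINT=/tmp/remote-backup-demo
mkdir -p "$MOUNTPOINT"

# Printed rather than executed: mounting requires root and a live server.
echo "mount -t cifs //nas/backups $MOUNTPOINT -o username=backupuser"
echo "# ...run the backup with $MOUNTPOINT as its destination..."
echo "umount $MOUNTPOINT"
```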
  2. Big fat thanks to @sjerisman for doing all the work to get a zstandard compression option added to the script! This is going to be significantly faster than the existing compression option, as well as much more efficient. However, it is not backwards compatible, so if you switch, you will want to manually trim any old compressed backups. For those of you currently using the plugin, this is coming, but it will take a bit longer. With the plugin being so new, there are some additional kinks to work out as well.

     v1.3.0 - 2020/01/15
     - Better than standard - added option to use zstd inline compression.

     Script here: https://github.com/JTok/unraid-vmbackup/tree/v1.3.0
     -JTok
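Conceptually, inline compression means compressing while the data is read, instead of copying first and compressing afterwards. A rough sketch of that single-pass step follows; the file is a dummy stand-in for a vdisk, not anything from the actual script, and it falls back to gzip where zstd isn't installed so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch of inline compression: one pass over the data, no copy-then-compress.
# /tmp/vdisk-demo-src.img is a dummy file standing in for a vdisk.
set -e
dd if=/dev/zero of=/tmp/vdisk-demo-src.img bs=1024 count=64 2>/dev/null

if command -v zstd >/dev/null 2>&1; then
    # -T0 uses all cores; the vdisk is compressed as it is read.
    zstd -q -f -T0 -o /tmp/vdisk-demo-src.img.zst /tmp/vdisk-demo-src.img
    echo "wrote /tmp/vdisk-demo-src.img.zst"
else
    # Fallback so the sketch still runs; gzip is single-threaded.
    gzip -cf /tmp/vdisk-demo-src.img > /tmp/vdisk-demo-src.img.gz
    echo "wrote /tmp/vdisk-demo-src.img.gz"
fi
```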
  3. @sjerisman I will take a look at the PR today, and follow up with you through there.
  4. It's not your fault. Honestly, it should work, but as far as I can tell it seems to be an issue with parse_ini_file in PHP. I'm trying to find a workaround, but have been unsuccessful so far. At the very least, I'll try to make it clear in a future update that they are causing an issue, at least until I can get it resolved.
  5. Sorry, I wasn't clear. My tests were from an SSD cache array to the parity array, so that's also the bottleneck I was referring to. I was also trying to point out that, for anyone interested in improving throughput, the biggest gains will come from rearranging storage to cut the parity array out of the path, i.e. running the VMs on an NVMe unassigned device and backing up to an SSD cache array, or vice versa. You're right though, that order of operations does seem a bit excessive, doesn't it? lol

     This seems viable, but there are some backwards-compatibility issues I would need to handle before switching compression algorithms outright.

     I'm going from memory here, so the details may be wrong, but I believe it came down to being able to turn the VM back on sooner. Since I couldn't guarantee the speed of the system unRAID would be running on, I decided to compress after the copy because it meant the VM might be able to be turned on sooner (though I honestly can't remember if I tested this). Essentially, the logic was that turn off VM -> copy files -> turn on VM -> compress files would result in the VM being off for less time. With snapshots, though, this is far less efficient, so I think it will be a good behavior to make configurable in the future. Thanks for looking into this, btw!
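The "compress after copy" ordering described above can be sketched like this; echo stands in for the virsh shutdown/start calls, and the VM name and paths are made-up examples rather than anything from the real script.

```shell
#!/bin/sh
# Sketch of "compress after copy": the VM is only down for the copy itself.
set -e
dd if=/dev/zero of=/tmp/vdisk-order-demo.img bs=1024 count=16 2>/dev/null

echo "virsh shutdown demo-vm    # downtime begins"
cp /tmp/vdisk-order-demo.img /tmp/vdisk-order-demo.img.bak
echo "virsh start demo-vm       # downtime ends before compression starts"

# Compression runs while the VM is already back up.
gzip -f /tmp/vdisk-order-demo.img.bak
echo "backup: /tmp/vdisk-order-demo.img.bak.gz"
```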
  6. Do you have parentheses in any of your VM paths? I've run into some issues with that.
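A quick way to spot the problem paths; the directory names here are made-up examples, not anything checked by the plugin itself.

```shell
#!/bin/sh
# Flag vdisk paths containing parentheses, the characters that caused issues.
for p in "/mnt/user/domains/win10/vdisk1.img" \
         "/mnt/user/domains/win10 (clone)/vdisk1.img"; do
    case $p in
        *\(*|*\)*) echo "problem: $p" ;;
        *)         echo "ok:      $p" ;;
    esac
done
```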
  7. That's going to be a fun one. I don't have 6.8.1 yet because I am using the nvidia plugin, but I'll see if I can figure out what is going on anyway and get back to you.
  8. In what way did it break it? It is difficult to fix bugs if I don't know what happened. Did you make sure to change the vdisk path in your VM config before using snapshots? -JTok
  9. I looked into this a little today, but this is by no means conclusive. In my tests so far, I/O has been the biggest bottleneck, not the compression algorithm or the number of threads, so whether you use the parity array, the cache array, or an unassigned device is probably going to have the biggest effect on performance. Honestly, all things being equal, I only saw about a 15-20% performance improvement with my test VM (though I understand the differences could be more pronounced for other use cases). I tested using zstd, lbzip2, and pigz.

     That being said, since there are some performance improvements with a multi-threaded compression utility, I am looking into a good way to integrate one. I suspect that, at least initially, I will stick with pigz because of backwards-compatibility issues, though I may look into adding an option for the other two later on.
  10. I'll have to try and replicate that and see what is going on. Thanks for letting me know.
  11. @jpowell8672 At first glance, it looks like it is not seeing any extensions for your vdisks. Can you confirm that they do have extensions, and if so, what are they?
  12. Are you able to get the error message from the error log? It will be saved in the log folder inside your backup location. Thanks, JTok
  13. Where are you seeing the warning, and what are you doing when it occurs? Does anything happen after you get the warning? Thanks, JTok Sent from my iPhone using Tapatalk
  14. @queueiz @Stupifier With cron, I have found it extremely difficult to validate every possible input it will accept because there are so many. By default, I use more restrictive validation of user input to reduce errors. Since I realize that can block valid input someone might want, there is an option in the Danger Zone tab to remove the validation. I will update the help text to clarify what kind of input is accepted by default.
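As an illustration of what a restrictive cron check might look like, here is a sketch where each of the five fields accepts *, a number, a range, a step, or a comma-separated list, while day names like MON are rejected. The regex is my own example, not the plugin's actual validation.

```shell
#!/bin/sh
# Sketch of restrictive cron validation: numeric fields only, no names.
FIELD='(\*|[0-9]+)(-[0-9]+)?(/[0-9]+)?'
CRON_RE="^($FIELD,)*$FIELD( ($FIELD,)*$FIELD){4}\$"

check() {
    if echo "$1" | grep -Eq "$CRON_RE"; then
        echo "valid:   $1"
    else
        echo "invalid: $1"
    fi
}

check "0 3 * * *"        # nightly at 03:00 -- accepted
check "*/15 0-6 * * 1,5" # steps, ranges, and lists -- accepted
check "0 3 * * MON"      # day names rejected by the strict default
```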
  15. I love the flexibility to manipulate the OS with plugins and other things from the community. I would be really interested in seeing snapshots implemented for KVM. Sent from my iPhone using Tapatalk