sjerisman

Members
  • Content Count: 10
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About sjerisman
  • Rank: Newbie


  1. Just a thought... What if any old backup files were converted to the new format after a plugin upgrade or after opting into the new naming convention? (i.e. read the existing filenames, strip the timestamps, and move the files into the new subfolders; a rough sketch of this idea follows the post list below)
  2. Yep, no problem. Results are definitely looking promising. More details on the other thread: Hopefully these changes will be integrated into this GUI plugin sometime next week.
  3. @Squid - Sorry up front if this has already been asked, but any thoughts on an option to use zstd compression instead of gzip? Here are some quick tests I did on two of my systems that show much improved speed and slightly smaller sizes (see also the zstd sketch after the post list below):
     System 1:
     > cd /mnt/user/appdata
     > du -d 0 -h .
     1.6G .
     > time tar -czf /mnt/user/UnRaidBackups/AppData.tar.gz *
     real 1m17.710s
     user 1m6.245s
     sys 0m6.219s
     > time tar --zstd -cf /mnt/user/UnRaidBackups/AppData.tar.zst *
     real 0m24.039s
     user 0m10.248s
     sys 0m5.330s
     > ls -lsah /mnt/user/UnRaidBack
  4. @JTok - Thanks for the shout-out, and thanks for reviewing and releasing my changes so quickly! I did this as much for my own benefit as anything else. To give everyone an idea of how much faster and more efficient this new inline compression option is, here are some results from one of my UnRaid servers: I currently have 4 (fairly small) VMs on this server (Win10, Win7, Arch Linux, and AsteriskNOW) running on an NVMe unassigned device and backing up directly to a dual-parity HDD array (bypassing the SSD cache) with TurboWrite enabled. This is runnin
  5. @JTok - I was able to do some more coding and testing on my open pull request: https://github.com/JTok/unraid-vmbackup/pull/23/files?utf8=✓&diff=split&w=1 Additional changes (a cleanup sketch follows the post list below):
     * I added seconds to the generated timestamps and logged messages for better granularity
     * I refactored the existing code that deals with removing old backup files (both time based and count based) to make it more consistent and easier to follow
     * I added support for removing old .zst archives (both time based and count based) using the refactored code above
     * I did a
  6. And, I repeated the same Windows 7 'real' VM test one more time, but this time used the SSD cache tier as the destination instead of the HDD... With the old compression code, it took 1-2 minutes to copy the 18 GB image file from the NVMe UD over to the dual SSD cache, and then still took 13-14 minutes to further .tar.gz compress it down to 8.4 GB. The compression step definitely seems CPU bound (probably single threaded) instead of I/O bound with this test. With the new inline compression code, it still only took about 1-2 minutes to copy from the NVMe UD and compress
  7. And here is another test that is closer to real world (and even more impressive)... I took a 'real' Windows 7 VM with a 20 GB raw sparse img file (18 GB allocated) and ran it through the old and new compression code. With the old compression code, it took 3-4 minutes to copy the 18 GB image file from an NVMe UD over to an HDD dual-parity array, and then another 14-15 minutes to .tar.gz compress it down to 8.4 GB. With the new inline compression code, it only took 2-3 minutes to copy from the NVMe UD and compress (inline) over to the HDD dual-parity array with
  8. Yep, that makes sense. I hadn't really considered backing up directly to the cache tier because for a lot of people that means their VMs and backups are on the same storage device(s) and it could fill up the cache quickly and add a lot of wear to the SSDs. In thinking about it, I agree that backing up to a share that has cache: Yes, cache: Prefer, or cache: Only would definitely help with the I/O performance bottleneck. But I think the script would still be doing things a bit inefficiently (including wearing out the cache faster) and would still be slower than inline compression.
  9. I assume most people host their VM image files on faster storage (i.e. SSD or NVMe cache or unassigned devices) and write their backups to the array, so the I/O performance bottleneck is mostly going to be the array. Currently, the script copies the image files from source to destination and then compresses them afterwards. This results in writing uncompressed image files to the array, then reading those uncompressed images back from the array, compressing them in memory, and finally writing the compressed result back to the array (an inline-compression sketch follows the post list below). (i.e. READ from cache -> WRITE to array -> READ from arr
  10. I am just about finished building a custom enclosure for my first unRAID build. I am naming this unRAID server 'Trogdor'... just because, no special reason. This unRAID server is primarily for Plex (docker), Transmission (docker), Jackett (docker), Sonarr (docker), Radarr (docker), Nextcloud (docker), and potentially a small handful of VMs (1x VoIP PBX, 1-2x Windows 10). Background: I have been using a custom-built Ubuntu/ZFS based server (virtualized on Hyper-V) for 8+ years, but am finally switching over to unRAID. While I really like ZFS overall, I fi
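Sketch for post 1: a minimal bash sketch of the migration idea, assuming old backups sit flat in the backup share with a leading timestamp in each filename. The backup_dir path, the timestamp format, and the per-VM folder layout are assumptions for illustration, not the plugin's actual naming convention or code.

    #!/bin/bash
    # Hypothetical one-time migration: strip an assumed leading
    # YYYYMMDD_HHMMSS_ timestamp and move each file into a per-VM subfolder.
    backup_dir="/mnt/user/UnRaidBackups"   # assumed location

    for f in "$backup_dir"/*_*; do
        [ -f "$f" ] || continue
        base=$(basename "$f")
        # remove the assumed timestamp prefix
        newname=$(echo "$base" | sed -E 's/^[0-9]{8}_[0-9]{4,6}_//')
        vmname="${newname%%.*}"            # crude guess at the VM name
        mkdir -p "$backup_dir/$vmname"
        mv -n "$f" "$backup_dir/$vmname/$newname"
    done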
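Sketch for post 3: the same gzip vs zstd comparison as a small script, plus a variant that pins the zstd thread count and compression level via tar's --use-compress-program (-I) option. Paths are just the examples from the post; adjust for your own shares, and the exact flags the plugin ends up using may differ.

    cd /mnt/user/appdata

    # current behaviour: gzip (effectively single threaded)
    time tar -czf /mnt/user/UnRaidBackups/AppData.tar.gz *

    # proposed: zstd via tar's built-in support
    time tar --zstd -cf /mnt/user/UnRaidBackups/AppData.tar.zst *

    # same thing, but with an explicit thread count (-T0 = all cores) and level
    time tar -I 'zstd -T0 -3' -cf /mnt/user/UnRaidBackups/AppData.tar.zst *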
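Sketch for post 5: a minimal illustration of the kind of time based and count based cleanup the refactor describes, extended to .zst archives. The variable names, the *.tar.zst pattern, and the per-VM folder layout are assumptions for the example, not the pull request's actual code.

    backup_dir="/mnt/user/UnRaidBackups/Win10"   # assumed per-VM folder
    keep_days=14      # time based retention
    keep_count=4      # count based retention

    # time based: delete archives older than keep_days
    find "$backup_dir" -maxdepth 1 -name '*.tar.zst' -mtime +"$keep_days" -delete

    # count based: keep only the newest keep_count archives
    ls -1t "$backup_dir"/*.tar.zst 2>/dev/null | tail -n +"$((keep_count + 1))" |
        while IFS= read -r old; do rm -f -- "$old"; done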
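Sketch for post 9: the copy-then-compress flow versus inline compression, with illustrative paths only (the source image location, destination share, and filenames are assumptions).

    src="/mnt/disks/nvme/domains/win7/vdisk1.img"   # VM image on fast storage
    dst="/mnt/user/UnRaidBackups/win7"              # share on the parity array

    # old flow: copy the raw image to the array, then compress it there
    # (array write + array read + array write)
    cp --sparse=always "$src" "$dst/vdisk1.img"
    tar -czf "$dst/vdisk1.img.tar.gz" -C "$dst" vdisk1.img && rm "$dst/vdisk1.img"

    # inline flow: read the image once from fast storage and write only the
    # compressed archive to the array (shown with zstd; -z would use gzip)
    tar --zstd -cf "$dst/vdisk1.img.tar.zst" -C "$(dirname "$src")" "$(basename "$src")"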