Monster1290

  1. +1, but I'm thinking about slightly different logic. In the share settings, add a checkbox "Sync cache with main pool" (or something like that) when the cache mode is either Only or Prefer. In normal operation, data is read from and written to the cache and synced to the main pool when the Mover runs. Obviously this would stretch the meaning of the word "move", so another action, "Sync", could be added alongside "Mover", with its own button, settings, etc. (the settings could include a checkbox to schedule syncing together with the Mover); a rough sketch of such a sync appears after this list. When the cache drive fails, data operations remain normal, but reads/writes would have to be manually redirected to the main pool. I think automatic redirection is not possible, because a process still working from the cache pool's state could damage data consistency if it were suddenly served data from the main pool. To restore normal operation, the user could switch to another cache pool (if additional cache pools exist) or replace the failed drive in the current cache pool. In both cases a sync must be performed right away to eliminate any uncertainty about where data should be read from and written to. I think this logic is the best from the user's perspective: you set up a couple of options, and in case of failure you just replace the failed drive, exactly like main pool operation. IMHO this feature would be very helpful, because some users don't want to protect data on the cache drive with another cache drive when the main pool already does the same.
  2. Confirmed, that worked for me. Before that, the variable was TERM=xterm-256color.
  3. Of course, I use the Backup/Restore Appdata plugin. It has already saved me once. I've also run into issues with BTRFS on docker.img a couple of times, and I found it painful to recreate all my docker containers (especially since I have 2 custom docker networks; recreating the networks themselves is sketched after this list), but of course it's not critical.
  4. Hello UnRAID community! At the beginning of this year I started using UnRAID as my main gaming-NAS rig. As a careful user, I started backing up the important files of the system, such as libvirt.img, docker.img, and the flash drive. I also wanted to back up my main Windows VM (who knows when it will break after another update). All existing solutions are primarily script-based, and generally they all just copy files to the array. But if you want to keep multiple backups of a VM, say for the last 3 days, you must keep multiple copies of the vdisks. In this example, 3 days x 70GB vdisk = 210GB of storage occupied by backup copies. That didn't sound great to me, so I searched for an existing solution that supports de-duplication and didn't find any. Then I decided to create my own. At that point I was already using the JTok script. Then I found the program "BorgBackup", checked it out, and found it useful for my purposes. Based on the JTok script and using BorgBackup, I wrote my own script a few months ago. I've debugged it, tested it for a few months, fixed a few minor bugs, and it's now ready to present to the community. You can find the script and instructions in my GitHub repository. In short: install "BorgBackup" from the Nerd Pack plugin, prepare a Borg repository in the shell (see the repository-preparation sketch after this list), copy the script into the User Scripts plugin, adjust the script options, set a schedule, and you're good to go. Hope you like it. This script has all the features of the JTok script plus a couple of my own:
       • Choose which VMs to back up
       • Back up running VMs using snapshots
       • De-duplication using borg
       • Full support for borg pruning logic on a per-VM basis
       • Much faster backup time after the first run
       • Versatile logging and notifications
     Below is the current state of my Borg repository. I currently have 4 VMs under backup. These VMs have 5 vdisks, 210GB in total. The repo contains 18 VM backups. As you can see, the repo occupies only 248GB, whereas without de-duplication it would be 1.38TB of raw space. That's 5.7 times less!
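
For post 1 above, a minimal sketch of what the proposed "Sync" action could do differently from the Mover: copy the cached data to the array instead of moving it, so the array always holds an up-to-date copy if the cache drive fails. The share name, pool path, and single target disk are assumptions for illustration, not actual Unraid settings.

```bash
#!/bin/bash
# Hypothetical "Sync" action for a cache-only / cache-prefer share:
# mirror the cached copy onto the main pool without removing it from the cache.
# Share name and target disk are example values.

SHARE="appdata"
CACHE_COPY="/mnt/cache/${SHARE}"
ARRAY_COPY="/mnt/disk1/${SHARE}"   # the real Mover spreads data across array disks

mkdir -p "${ARRAY_COPY}"
# -a preserves permissions and timestamps; --delete keeps the array copy identical to the cache
rsync -a --delete "${CACHE_COPY}/" "${ARRAY_COPY}/"
```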
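
For post 3 above, a minimal sketch of recreating custom docker networks before the containers that attach to them are recreated. The network names and subnets are hypothetical examples, not the ones from my setup.

```bash
#!/bin/bash
# Recreate the custom bridge networks first, then the containers can be
# recreated and attached to them. Names and subnets are placeholders.
docker network create --driver bridge --subnet 172.20.0.0/24 br-media
docker network create --driver bridge --subnet 172.21.0.0/24 br-services

# Example of attaching a recreated container to one of the networks:
# docker run -d --name some-container --network br-media some/image
```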
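
For post 4 above, a minimal sketch of the manual shell steps the post mentions: preparing a Borg repository, creating an archive, and pruning. The repository path, VM name, vdisk path, and retention values are assumptions for illustration; the actual script automates snapshotting running VMs, per-VM pruning, logging, and notifications.

```bash
#!/bin/bash
# Hypothetical paths -- adjust to your own setup.
REPO="/mnt/user/backups/vm-borg-repo"          # Borg repository on the array
VDISK="/mnt/user/domains/Windows10/vdisk1.img" # one vdisk of the VM to back up

# One-time: create the repository before the first scheduled run
borg init --encryption=repokey "$REPO"

# Back up the vdisk into a dated archive (the script backs up running VMs via snapshots)
borg create --stats --compression lz4 "$REPO::Windows10-{now:%Y-%m-%d}" "$VDISK"

# Per-VM pruning: keep only the last 3 daily archives whose names start with "Windows10-"
borg prune --prefix "Windows10-" --keep-daily 3 "$REPO"

# Show total vs. de-duplicated repository size
borg info "$REPO"
```

Because borg splits the vdisks into chunks and de-duplicates them, repeated backups of a mostly unchanged vdisk only add the changed chunks, which is how 18 backups of 210GB worth of vdisks can fit in roughly 248GB instead of 1.38TB.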