[solved] Install Duplicacy (install binary)?


walle


I have recently installed Unraid for the first time on my main server to replace Ubuntu. On that server I used Duplicacy to back up my data, and I plan to do the same on this server in order to have the same setup for all my Linux servers.

 

The issue is that I'm not sure how to install Duplicacy "the right way" on Unraid. The way I do it on my other Linux machines is to run this command:

sudo wget -O /usr/local/bin/duplicacy https://github.com/gilbertchen/duplicacy/releases/download/v2.1.0/duplicacy_linux_x64_2.1.0; sudo chmod 0755 /usr/local/bin/duplicacy

But I have a feeling that this command will only save the binary to RAM and I will need to do this again after the next reboot. Is there a way to add a binary to the flash drive and make sure that it ends up in /usr/local/bin/ on boot?

 

Before someone asks: I have searched Docker Hub, and it seems most of the images either ship an old version of Duplicacy or are missing key features, like defining more than one backup path and remote location.

Edited by walle
Solved

Ohh, that was easy. Thank you!

 

When does /boot/config/go trigger? What runs before /boot/config/go?

 

Another thing: is it possible to check if the array is running? If I want to back up the flash drive and store the backup in a share (to keep a local copy), I assume the array needs to be running? Then what happens if I try to save the backup to /mnt/users/my-backup-share/local/flash? I would prefer to abort the local backup if I can't/shouldn't save it to the share.

22 minutes ago, walle said:

When does /boot/config/go trigger? What runs before /boot/config/go?

 

It executes before the GUI even starts to load (the emhttp line is the GUI)

23 minutes ago, walle said:

Another thing, is it possible to check if the array is running?

Many ways. The easiest, though, is to check whether /mnt/user exists as a directory.
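That check is easy to script. A minimal sketch of it (the optional path argument is only there to make the helper testable; the real check is on /mnt/user):

```shell
#!/bin/sh
# array_started: succeed only when the Unraid user shares are mounted.
# The check is simply "does /mnt/user exist as a directory" -- the
# directory only appears once the array is up.
array_started() {
    [ -d "${1:-/mnt/user}" ]
}

# Abort a backup script early when the array is stopped:
if array_started; then
    echo "array is started"
else
    echo "array is stopped"
fi
```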

23 minutes ago, walle said:

If I want to backup the flash drive and store the backup in a share (to save as a local backup), I assume the array needs to be running?

/boot is always available. You can also back up the flash via Main, then Boot Device, then click on Flash (or on a schedule along with appdata via the CA Appdata Backup plugin).

25 minutes ago, walle said:

Then what happens if I try to save the backup to /mnt/users/my-backup-share/local/flash?

If the array isn't started, then depending upon how you're backing up, the directory /mnt/user/my-backup-share will either return an error because /mnt/user doesn't exist, or it will be created in RAM, which introduces a bunch of complications.


Thank you for the reply!

Makes sense. I'll just check whether the folder exists, and then I'll be able to do my remote backups.

19 minutes ago, Squid said:

You can also backup the flash via Main - Boot Device - Then click on Flash ( or scheduled along with appdata via the CA Appdata Backup plugin)

First of all, I like many of your plugins, and many of them are fantastic (I probably wouldn't have gone with Unraid without them). But what I don't like about your backup plugin is that the Docker containers must be stopped for the backup. That wouldn't work in my case, because I have SFTP containers that allow remote servers to save backups on my server. What is the reason for this?

 

If it's about concurrent access: Duplicacy has lock-free concurrent access. I can back up a running MySQL server without dumping to an SQL file first. In the restore tests I have done, this has worked flawlessly.


Like you said, stopping is all about concurrent access, and whether file A depends upon file B being consistent with whatever is going on in A. But I do give the option to not stop the apps via the advanced options.

I don't, however, begrudge anyone for not using my plugins (except for CA itself). They were all written to learn how to do something within CA itself.


  • 1 month later...

Please note that this is not a guide; it's just a short(-ish) explanation of how I'm currently using Duplicacy. I assume you are familiar with how Duplicacy works and are somewhat comfortable working in a terminal. I'm aware this could be done much more simply, e.g. by building a Docker container, which would make it more accessible to others. But in my case I needed a quick and dirty setup just to start doing backups again. I may turn this into a Docker container some day.

 

In my case, I have created a /boot/custom/bin/ folder where I save additional binaries like Duplicacy:

wget -O /boot/custom/bin/duplicacy https://github.com/gilbertchen/duplicacy/releases/download/v2.1.0/duplicacy_linux_x64_2.1.0

This is what I add to my /boot/config/go file:

## Copy Duplicacy binary
cp -f /boot/custom/bin/duplicacy /usr/local/bin/duplicacy
chmod 0755 /usr/local/bin/duplicacy

## Duplicacy backup
cp -rf /boot/custom/duplicacy /usr/local
chmod 0755 /usr/local/duplicacy/

/boot/custom/duplicacy is the folder where I save the backup preferences for each main folder I back up. I copy this folder to RAM in order to minimize wear on the flash drive, since Duplicacy uses the preferences folder to write temporary cache files.

 

Folders I backup (plus private shares):

  • /boot
  • /mnt/user/appdata
  • /mnt/user/system/libvirt

To add a folder to back up, I cd into it (eg. `cd /boot`) and run the duplicacy init command:

duplicacy init -pref-dir /boot/custom/duplicacy/boot my-snapshot-id /mnt/user/backup/duplicacy

To break the command down a bit:

  • /boot/custom/duplicacy/boot - the path to the preferences folder. I have a separate folder for each main backup folder.
  • /mnt/user/backup/duplicacy - my local backup share. This can be replaced with remote storage (see the Duplicacy documentation).

If you want to add remote storage, add filters, or make other adjustments in the preferences folder, do so before editing the .duplicacy file (eg. /boot/.duplicacy) to point it at the RAM location.

Example:

  • From: /boot/custom/duplicacy/boot
  • To: /usr/local/duplicacy/boot
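If I recall the -pref-dir layout correctly, the .duplicacy entry in the repository root is a plain text file containing the path to the preferences folder, so repointing it is a one-line write. A small helper sketch (verify the file format against your own setup first):

```shell
#!/bin/sh
# repoint_prefs REPO PREFDIR
# Rewrite REPO/.duplicacy so it points at PREFDIR. Assumes (as noted
# above) that `duplicacy init -pref-dir` leaves .duplicacy as a plain
# text file holding the preferences path.
repoint_prefs() {
    printf '%s\n' "$2" > "$1/.duplicacy"
}

# Example from the list above: move /boot's preferences to RAM.
# repoint_prefs /boot /usr/local/duplicacy/boot
```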

Do the same for the rest of the backup folders. After that, either run the commands from the go file manually or restart the server.

Test the backup by running the backup command (eg. `cd /boot; duplicacy backup -threads 1`) and the copy command for remote storage.

 

I use User scripts plugin to run the backup and copy commands nightly.
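For reference, a sketch of what that nightly User Scripts job could look like. The folder list matches my setup above, the "offsite" storage name is a placeholder for whatever you configured, and the guard follows the array check discussed earlier:

```shell
#!/bin/sh
# Nightly Duplicacy job for the User Scripts plugin (sketch only).
# "offsite" is a placeholder storage name; replace with your own.
nightly_backup() {
    mnt="${1:-/mnt/user}"
    # Refuse to run when the array is stopped (user shares missing).
    [ -d "$mnt" ] || { echo "array stopped, aborting"; return 1; }
    for repo in /boot /mnt/user/appdata /mnt/user/system/libvirt; do
        ( cd "$repo" &&
          duplicacy backup -threads 1 &&  # local backup first...
          duplicacy copy -to offsite )    # ...then replicate offsite
    done
}

# nightly_backup   # called by User Scripts on its schedule
```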

 

If I need to add additional remote storage, or make other changes that by mistake were saved to RAM instead of the flash drive, I run this command to sync the changes back to flash:

rsync -avh --exclude=logs/ --exclude=cache/ --exclude=.git/ /usr/local/duplicacy/ /boot/custom/duplicacy/

How to do this differently

Instead of adding each main folder to back up with the init command, it should be possible to just run it once at / and use filters to include and exclude folders/files. The reason I haven't tested (and don't want) that setup is that I need the flexibility of a separate snapshot ID for each folder, in order to control which remote backup locations get a backup of what. For example, I may want to send the /boot backup to Amazon S3 and to a friend's server, but I don't want to send my family videos to S3 because it would be too expensive.
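To make that concrete, here is roughly how per-folder snapshot IDs steer the copy step. The storage names (s3, friend), URLs, and snapshot ID below are all hypothetical, and I'm assuming duplicacy copy's -id option behaves as documented; treat this as a sketch, not a recipe:

```shell
# Storages attached once per repository with `duplicacy add`, e.g.:
#   duplicacy add s3 boot-id s3://my-bucket/duplicacy          (hypothetical URL)
#   duplicacy add friend boot-id sftp://friend-server/backups  (hypothetical URL)
# Then only the /boot snapshots are ever copied to those remotes;
# the repository holding family videos simply never targets s3.
cd /boot
duplicacy copy -to s3 -id boot-id
duplicacy copy -to friend -id boot-id
```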

 

Any questions @xhaloz?

  • 3 months later...

Hey @walle, just now seeing this. First I want to say thank you very much. This is a great breakdown!!! I am going to go the Google Drive route. Where does Duplicacy store the files being uploaded to offsite storage? Is there a temporary directory or something? Or is that the cache folder you were speaking of? My upload speed is slow, so I am concerned about filling up my RAM faster than it can upload to the cloud. What do your preferences look like in those folders?

 

Also, your script has some odd directories. One line says cp -f /boot/custom/bin/duplicacy /usr/local/bin/duplicacy

 

And the other says cp -rf /boot/custom/duplicacy /usr/local

 

Was this a mistake? I don't have a folder under /boot/custom/duplicacy, but I do have one under /boot/custom/bin/duplicacy according to your script.

I am going to run a test backup right now; I need to learn more about the .duplicacy preferences file. So you do separate backups for each folder... that makes sense. I don't want every single folder backed up, but I also don't want to be messing with filters to include/exclude items.

How much do I have to pay you for a Docker, hehe. After CrashPlan had its shenanigans, I found Duplicati, which is terrible IMO: a lot of database errors even after changing chunk sizes etc. I then wrote a custom script to perform a Borg backup and then rclone it up to Google Drive. While this is cool, I don't really like Borg storing files locally and then having to upload them to the cloud. If Duplicacy does all the same things (compression, encryption, etc.) and uploads directly to the cloud, I much prefer that. I'm just not too savvy with the custom binary stuff, but your breakdown helped a ton. I'll see what I can come up with and do some trial and error. All in all, it seems like a fantastic program.

Edited by xhaloz
Added words

Sorry about the late reply. I didn't have time to get back to you.

Quote

Where does duplicacy store the files being uploaded to an offsite storage?   Is there a temporary directory or something? Or is that the cache folder you were speaking of?  My upload speed is slow so I am concerned about filling up my ram faster than it can upload to a cloud.  What does your preferences look like in those folders?

You can read about the cache folder here: https://forum.duplicacy.com/t/cache-usage-details/1079

It uploads during the backup, and it only uploads chunks of the files: https://forum.duplicacy.com/t/chunk-size-details/1082

What I do is back up locally on the server first, then use the copy command to replicate to offsite storage. That's much more efficient than running the same backup separately for each offsite storage and uploading everything each time.

 

Quote

How much do I have to pay you for a docker hehe. 

I don't think you need to do that. Since my last post, Duplicacy has announced beta testing of their new Web UI client (https://forum.duplicacy.com/t/duplicacy-web-edition-0-2-10-beta-is-now-available/1606), and there are already Docker images for it that look promising (eg. https://hub.docker.com/r/saspus/duplicacy-web). I think the Web UI approach makes more sense for Unraid than using the CLI version, but it needs to become more stable before I dare to use it for my real backups. From what I can tell looking at some of the Docker images, all that needs to be done to get it working with Unraid is to create a Docker template (takes minutes) and test it.

 

One potential downside with the Web UI is that it will probably require a license. But looking at what the current GUI client costs ($20 the first year and $5 per year from year two onward, https://duplicacy.com/buy.html), and assuming the Web UI will have the same price, it will probably be worth it.

