[Support] Linuxserver.io - Duplicati


Recommended Posts

19 hours ago, Squid said:

It's in RAM. Personally, I just set /tmp to be a folder within appdata

Thanks! By the way, I tried backing up to an external drive today. FCP warned me that I had to set the backup directory to slave mode in order to avoid problems. What exactly is slave mode? Does it allow Unassigned Devices to unmount without Docker crashing or something?

 

Also, the backup is going really slow, like 20KB/s slow. This is an external drive with no issues hitting 100MB/s. I've checked htop and while the CPU is sitting somewhere at 50% utilization it's nowhere near bottlenecked. What could be the problem?

 

EDIT: Also, disaster recovery. Suppose tomorrow my UnRAID box dies without warning and fries everything in it. I have a list of encrypted files on my external drive. How can I recover? Do I just grab a new UnRAID box, install Duplicati on it, feed it my passwords, and let it restore from the cloud? Or is there something else I need to back up, like a configuration? I want to avoid backing up the entirety of appdata if I can.

Edited by Guest
Link to comment

Just registered to give a heads up that the current version of Duplicati (2.0.3.3_beta_2018_04_02) suffers from the upload/download throttling bug (https://forum.duplicati.com/t/throttle-upload-effects-restores/2632/4).

 

I was having a strange issue where I would get timeout errors during the "verifying backup" step of a backup, and timeout errors when I tried to do a recovery. Turns out, throttling the upload speed also affects download speed (and was also making the download stall completely). Removing the upload bandwidth limit has fixed the issue for me. Hopefully this saves other people the debugging hassle.

 

  • 2 weeks later...
On 9/29/2018 at 9:57 AM, DoomBot5 said:

Is it possible to add the Canary builds as tags for this? Duplicati added a lot of features since the last beta release, and there aren't any release builds.

 

Use duplicati/duplicati:canary as the repository instead of LSIO's in the docker settings for the plugin.


Duplicati Canary builds can be hit or miss; however stable they may seem at any given time, you really need to know what you're doing to get the best use out of them. I kind of doubt LSIO wants to get into that business for Unraid users, but it's not my call :)

Edited by planetix
Updated path
  • 2 weeks later...

I'm having an issue with the docker not starting for me. The log for the docker itself wasn't showing anything, but when I go into the Unraid log the following appears:

 

Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered blocking state
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered disabled state
Oct 20 12:06:19 SERVER kernel: device veth9567ac9 entered promiscuous mode
Oct 20 12:06:19 SERVER kernel: IPv6: ADDRCONF(NETDEV_UP): veth9567ac9: link is not ready
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered blocking state
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered forwarding state
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered disabled state
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered disabled state
Oct 20 12:06:19 SERVER kernel: device veth9567ac9 left promiscuous mode
Oct 20 12:06:19 SERVER kernel: docker0: port 13(veth9567ac9) entered disabled state

 

Any ideas? It had been working up until I had to force a reboot due to a mover hang.

 

EDIT

 

Fixed itself after another reboot.

Edited by tazire

I'm not sure how this is running for anyone else, but I'm finding it very unstable at the moment. I got it running and completed a backup to a B2 storage location, though with a few errors reported and after a couple of retries. I have a second backup to an external device attached to my router, and no matter what I do it just will not finish the backup; this seems to be caused by Duplicati crashing. The latest crash meant I couldn't restart, stop, or delete the docker. I'm currently stuck trying to get the damn thing to shut down using PuTTY, but no joy so far.

 

These are the last entries in the log before it stopped working.

 

[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

 

EDIT

Ended up having to hard-reset the server to get back to a somewhat workable state with this, unfortunately.

Edited by tazire
  • Statement: I cannot recommend using this docker.
  • Why? It crashed Unraid twice with a simple 1-folder 60GByte photo backup (no special gimmicks used) onto a local SMB share.
  • What happened? The backup process started and after a few minutes (like 10-12) the web interface died "lost server connection... retry in...". After that I was unable to stop the docker, therefore I could not stop the array and was in the end forced to cold reboot+parity check...
  • What I tried:

Searching for a stable, working alternative now; pretty disappointed with Unraid for not having a favored/built-in/well-proven/well-maintained solution for a simple local backup in place (no cloud). A little more disappointed that a docker running on Unraid can easily tear down the whole system into complete unresponsiveness. This is NOT what I would call a "stable" system.

3 minutes ago, Oppaunke said:
  • Statement: I cannot recommend using this docker.
  • Why? It crashed Unraid twice with a simple 1-folder 60GByte photo backup (no special gimmicks used) onto a local SMB share.
  • What happened? The backup process started and after a few minutes (like 10-12) the web interface died "lost server connection... retry in...". After that I was unable to stop the docker, therefore I could not stop the array and was in the end forced to cold reboot+parity check...
  • What I tried:

Searching for a stable, working alternative now; pretty disappointed with Unraid for not having a favored/built-in/well-proven/well-maintained solution for a simple local backup in place (no cloud). A little more disappointed that a docker running on Unraid can easily tear down the whole system into complete unresponsiveness. This is NOT what I would call a "stable" system.

Maybe it's your system. The only issues I've had are with 6.1 and the Unassigned Devices plugin freezing when it loses connection to a remote share.


Unraid version used is 6.6.3 stable. No fancy stuff installed, no crappy hardware, configuration all straightforward. In both cases there was no connection loss logged. I'm not willing to fiddle around with this and risk my data. A fundamental point of dockerization is to isolate its processes from the host system. So either the docker is faulty or the system is unstable. Since everything else works fine, I have my eyes on the docker.


A few questions with Duplicati.

First off, I get an update notification on the docker, click update, and let it do its thing, yet afterwards there's still a button showing an update available. Not sure why.

 

Also, how would I set Duplicati to just make a mirror of the stuff on the share in one encrypted AES zip file?

 

Finally, if I download the AES file and I know the backup password, could I open it using 7-Zip?

  • 2 weeks later...

I've had a pretty stable set of backups for about 8 months now. Aside from the incredibly slow restore process (browsing folders within the backup is painful!), I've been pretty happy with it.

 

However, I really wanted some of the newer features, such as improved performance during restores, so I switched to the canary build. I was not expecting my backup config to get wiped when I did this. Going back to the stable container returned the configs after some tweaking.

 

Is there a safe way I can move my configs over to the canary build?

Edited by Phastor
  • 2 weeks later...

Hey,

 

I have a backup config that backs up my Photographs folder from my array onto an external disk connected to my server. However, a lot of the time right now it just sits saying "Photographs: Verifying backend data ..." What is it actually doing? Will it actually finish? Is something wrong?

  • 3 weeks later...

I need access to put an asterisk in the Hostnames option to make my reverse proxy work again.
What I need looks like this:
[screenshot of the desired Hostnames setting]

 

But that is only possible if I can visit Duplicati via 127.0.0.1, how do I do that when it is inside this docker container?

This means that as of now I only see this:
[screenshot of what is currently shown]


To the devs:

You can fix the Reverse Proxy issue by editing the file /etc/services.d/duplicati/run to the following (i.e. adding --webservice-allowed-hostnames=*):

#!/usr/bin/with-contenv bash

cd /app/duplicati || exit

exec \
    s6-setuidgid abc mono Duplicati.Server.exe \
    --webservice-interface=any --server-datafolder=/config --webservice-allowed-hostnames='*'

See https://github.com/linuxserver/docker-duplicati/blob/master/root/etc/services.d/duplicati/run

For now I use a mounted run file with this change.
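For reference, the mount can be added as an extra volume mapping so the patched copy overrides the stock run file at container start. A minimal sketch of the docker invocation, assuming the usual LSIO template settings; the host path is just an example, and the host copy must be executable (chmod +x):

```shell
# Keep a patched copy of the run file on the host (hypothetical path),
# then bind-mount it read-only over the original inside the container.
docker run -d --name=duplicati \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/config \
  -v /mnt/user/appdata/duplicati/run-patched:/etc/services.d/duplicati/run:ro \
  linuxserver/duplicati
```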

Edit: The best solution is to make it an environment variable in the docker container.

Edited by JohanSF
On 12/2/2018 at 6:00 PM, JohanSF said:

I need access to put an asterisk in the Hostnames option to make my reverse proxy work again.
 

But that is only possible if I can visit Duplicati via 127.0.0.1, how do I do that when it is inside this docker container?
 


I have access and I can't change the Hostnames text box anyway. I put an FQDN in there and it doesn't save it.


Can Duplicati be used to back up the 'appdata' folder instead of the plugin CA Backup / Restore Appdata?

I want to upload the backups to Backblaze, and these features are really helpful: incremental backups and data deduplication.

Before starting the backup, is it mandatory to stop the running dockers?


Stopping dockers is a good idea prior to backing up appdata. Not all of them require it, but there are enough variables, particularly if you run a lot of dockers, to make it the safe thing to do.

 

You can just use CA to automate that part, then use Duplicati to back up CA's backup. You can disable compression in CA and have it retain only one copy if you'd rather manage the rest with Duplicati. It's not like the CA plugin takes up a lot of space or resources.


Didn't know it still tar'd the files.

 

Your best bet, then, is to leverage Duplicati's custom commands (in advanced options): --run-script-before, --run-script-after, --run-script-before-required and --run-script-timeout. You'll want to stop your dockers before it backs up appdata, then restart them afterwards. There are some suggestions around the forums on how to script that; do a search. The folks in the CA Appdata Backup thread might also be able to help, since I'm pretty sure that's exactly what it does.
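A minimal sketch of such a pre-stop script, assuming the docker CLI is reachable from wherever Duplicati runs it (e.g. with /var/run/docker.sock mapped in); the container names and the DOCKER_CMD override are hypothetical placeholders, not anything Duplicati provides:

```shell
#!/bin/bash
# Hypothetical pre-backup script for Duplicati's --run-script-before option:
# stops each named container so appdata is quiescent while the backup runs.

DOCKER="${DOCKER_CMD:-docker}"                    # set DOCKER_CMD=echo for a dry run
CONTAINERS="${BACKUP_CONTAINERS:-sonarr radarr}"  # hypothetical container names

stop_all() {
    for c in $CONTAINERS; do
        "$DOCKER" stop "$c"                       # stop each container in turn
    done
}

stop_all
# A zero exit status lets the backup proceed; with --run-script-before-required
# set, a non-zero status would abort the backup instead.
```

A matching --run-script-after script would run `docker start` over the same list once the backup finishes.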

Edited by planetix
  • 2 weeks later...

I just want to chime in to say that I installed Duplicati earlier today, configured a couple of backups, and tested them, and assuming the scheduler and retention work as expected, this is by far the most complete backup solution I ever dreamed of having in unRAID. You can back up to your array, to another Samba share, to a USB drive, or directly to the cloud, encrypted or not, and easily export your backup configs so that you can restore your data even if you lose your whole unRAID server to some catastrophe.

I don't know if it's perfect protocol-wise (could it have better encryption, better compression, better performance, and so on?) but it definitely does all I need and will bring peace of mind once properly set up and tested over time (the smart retention option is a very nice business-class feature too).


I have a problem with this container. It's been running fine for about a year now, backing up every 4 hours to a local network FTP server and to remote Backblaze.

 

Today, I added an SMB share, mounted via the Unassigned Devices plugin, and started backing up to that.

The share is on another Unraid server on the LAN. The share is mounted as Read/Write/Slave, and a username/password is used to log into the share.

 

When the backup is run (manually at this point), after about 5 minutes two CPU cores lock at 100% and Duplicati stops working. I cannot stop the docker; the only way out is to hard-reset the server.

 

Any ideas what could be wrong? Or is this a bug with Duplicati?

 

Another user about 10 posts before this one had the same issues with backing up to SMB shares.

Edited by jj_uk
