[Support] Linuxserver.io - Duplicati


Recommended Posts

I know, but I don't want to have it as part of the array; I want to use it as an external HDD, probably formatted NTFS. Is this possible?
Unassigned Devices... the clue is in the name: they're unassigned to the array. Not sure how well NTFS will work, though.

Sent from my LG-H815 using Tapatalk

Link to comment
  • 2 weeks later...

I am not a programmer, but I am trying to learn as I go. How do I run the Unassigned Devices mount/unmount scripts before and after a backup in Duplicati?

 

Currently I have the following under the advanced options in Duplicati:

--run-script-before=/mnt/user/appdata/duplicati/mount_bu_drive.sh
--run-script-after=/mnt/user/appdata/duplicati/unmount_bu_drive.sh

Those scripts above are as follows:

 

mount_bu_drive.sh

#!/bin/bash
 /usr/local/sbin/rc.unassigned mount /dev/sdg

 

unmount_bu_drive.sh

#!/bin/bash
 /usr/local/sbin/rc.unassigned umount /dev/sdg

 

 

What am I doing wrong?
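 

For reference, a slightly more defensive version of the mount script might look like the sketch below. It checks that the drive actually ended up mounted and exits non-zero if not; if I'm reading the Duplicati docs right, that exit code only aborts the backup when the option is given as --run-script-before-required rather than --run-script-before. The mount point /mnt/disks/backup is a placeholder - use whatever mount point Unassigned Devices gives the drive.

#!/bin/bash
# Sketch: mount the unassigned backup drive and verify it actually mounted.
# /dev/sdg and /mnt/disks/backup are placeholders for the real device and
# the mount point shown under Main -> Unassigned Devices.
DEVICE=/dev/sdg
MOUNTPOINT=/mnt/disks/backup

/usr/local/sbin/rc.unassigned mount "$DEVICE"

# Give the mount a moment, then check it really happened.
sleep 5
if ! grep -qs " $MOUNTPOINT " /proc/mounts; then
    echo "Backup drive is not mounted at $MOUNTPOINT" >&2
    exit 1
fi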

Edited by kimocal
spelling
Link to comment
13 minutes ago, kimocal said:

I am not a programmer, but I am trying to learn as I go. How do I run the Unassigned Devices mount/unmount scripts before and after a backup in Duplicati?

 

Currently I have the following under the advanced options in Duplicati:


--run-script-before=/mnt/user/appdata/duplicati/mount_bu_drive.sh
--run-script-after=/mnt/user/appdata/duplicati/unmount_bu_drive.sh

Those scripts above are as follows:

 

mount_bu_drive.sh


#!/bin/bash
 /usr/local/sbin/rc.unassigned mount /dev/sdg

 

unmount_bu_drive.sh


#!/bin/bash
 /usr/local/sbin/rc.unassigned umount /dev/sdg

 

 

What am I doing wrong?

 

these questions are somewhat outside what we support and are probably best suited to the unassigned devices support thread

Link to comment
2 minutes ago, sparklyballs said:

 

these questions are somewhat outside what we support and are probably best suited to the unassigned devices support thread

 

Am I pointing to the correct path/script in the Duplicati Advanced Options at least?

--run-script-before=/mnt/user/appdata/duplicati/mount_bu_drive.sh
--run-script-after=/mnt/user/appdata/duplicati/unmount_bu_drive.sh
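
One thing I'm not sure about: are the paths in the advanced options resolved inside the Duplicati container rather than on the host? If so, I'm guessing the scripts need to live somewhere that is actually mapped into the container - the appdata folder is normally mapped to /config in this image - so presumably the options would look more like:

--run-script-before=/config/mount_bu_drive.sh
--run-script-after=/config/unmount_bu_drive.sh

(And I suppose /usr/local/sbin/rc.unassigned lives on the unRAID host rather than inside the container, so the mount itself may have to happen on the host side - but that's one for the UD thread.)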


Link to comment

How far back is the version of Duplicati in the container? Looking under "About", I see a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the Docker container to be on the version that was released today?

 

There have been some huge performance improvements in the latest version that I've been waiting months for--namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it takes around 15 minutes to drill down into each individual directory. Getting this update will give me some peace of mind.

 

Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible use the in-app update function, or would that break the container?

 

Edit:

It's occurred to me that hitting "download now" may not be a self-updater, but may instead take me to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet, in case it actually is a self-updater and would break the container.

dupupdate.jpg

Edited by Phastor
Link to comment
1 hour ago, Phastor said:

How far back is the version of Duplicati in the container? Looking under "About", I see a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the Docker container to be on the version that was released today?

 

There have been some huge performance improvements in the latest version that I've been waiting months for--namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it takes around 15 minutes to drill down into each individual directory. Getting this update will give me some peace of mind.

 

Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible use the in-app update function, or would that break the container?

 

Edit:

It's occurred to me that hitting "download now" may not be a self-updater, but may instead take me to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet, in case it actually is a self-updater and would break the container.

dupupdate.jpg

 

 

our image is built using the latest beta or release available at the time of build, and we build our images on friday nights around 11pm uk time

we do not build canary releases, and the latest beta was released after our last build last friday night (uk time), so it will be picked up in the next build
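
once the new image is up, updating on unraid is just a case of hitting apply update on the docker page, or roughly this from the command line (the container name is whatever you called yours, and you'd recreate it from your template afterwards so the mappings are kept):

# pull the freshly built image, then recreate the container from the template
docker pull linuxserver/duplicati
docker stop duplicati && docker rm duplicati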

Link to comment
On 4/7/2018 at 9:14 PM, NeoMatrixJR said:

any way to specify if you want canary build?

I'm not really sure you'd want to run the canary build - based on their site, it sounds like an alpha/early-beta test stage. Personally, I wouldn't want my backups relying on that level of code.

 

On another note - it seems that really slow backups are just the way this works? I'm seeing the same 5-50KB/s speeds everyone else is reporting. Granted, I've got my compression set to max because I need to get more pics onto a drive than there's room for, but this is painful! I'm backing up to an external USB2 device and it's... slooooow.

Link to comment
On 9/14/2017 at 11:31 AM, Phastor said:

Trying to browse through folders under the Restore tab is painful. Every time you try to drill down into a folder, it takes nearly two minutes to think before it actually does it. It doesn't matter how large the folder is or how many files it contains. It does this with every single folder--and it seems to get longer with every additional snapshot taken.

 

Is this normal?

 

It appears that it is, but that one of the devs is planning on doing something about it:

https://forum.duplicati.com/t/optimizing-load-speed-on-the-restore-page/3185

Link to comment

CrashPlan has done away with their cheap "Home" plan, and now I have to pay quite a bit more for my backups. My plan is to dump them and roll my own off-site backups using duplicati. However, I'm a bit perplexed by the security aspects of it all.

 

I plan to build a new unRAID server and locate it at my in-laws' house. To connect to this new server (once it's off-site), I plan on using the ProFTP plugin on the backup server and having duplicati on my main server connect via FTP. Obviously, security is an issue. I noted a post in the ProFTP support thread with some pretty good directions on how to get SFTP set up, so if I go this way, I'll read through that.

 

My questions:

Does duplicati log in to the SFTP server each time it needs to start a backup?

If I create security certificates for access (which I believe was part of the process), what do I do with them so duplicati has access to them from the source server?

 

One of the other suggestions in the ProFTP support thread was to use a VPN for the connection. This seems to me an even better option - once the VPN is established, I can either mount the remote server using UD or run FTP (without the S) to keep things simple. Is there a mechanism within duplicati, or some sort of scripting I could use, to establish the VPN connection prior to the kickoff of a scheduled backup?
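
If duplicati's --run-script-before / --run-script-after advanced options (used earlier in this thread to mount a UD drive) are the way to do it, I imagine something like the sketch below could bring a tunnel up before a scheduled backup. This is only a rough idea: wg-quick, the wg-backup config name, and the peer address are all placeholders, and the script would need to run somewhere that actually has the VPN tooling available (possibly on the host rather than inside the container).

#!/bin/bash
# Sketch: bring up a WireGuard tunnel before the backup starts.
# "wg-backup" and 192.168.50.1 are placeholders for the real config name
# and the remote server's VPN address.
if ! wg show wg-backup > /dev/null 2>&1; then
    wg-quick up wg-backup || exit 1
fi

# Wait until the remote end answers before letting the backup run.
for i in $(seq 1 10); do
    ping -c 1 -W 2 192.168.50.1 > /dev/null 2>&1 && exit 0
    sleep 2
done

echo "VPN is up but the remote server is not answering" >&2
exit 1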

Link to comment
  • 2 weeks later...

Does anybody have duplicati backing up to their nextcloud webdav? I've been poking around trying to get it to connect, and even though I can successfully browse and create files with a webdav client, I am unable to get duplicati to successfully connect to webdav using the exact same host, port, directory, and credentials info that work with the webdav client.

 

I know this is a duplicati application issue and not related to the docker implementation, but I was hoping someone on here had successfully connected duplicati to a webdav destination and could share their method.
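
For anyone debugging the same thing, a quick way to confirm the WebDAV endpoint itself is reachable outside of Duplicati is a curl PROPFIND against it (the host, user, and folder below are placeholders; /remote.php/dav/files/USER/ is Nextcloud's standard WebDAV base):

# List the target folder over WebDAV; curl will prompt for the password.
curl -u backupuser -X PROPFIND -H "Depth: 1" \
    https://cloud.example.com/remote.php/dav/files/backupuser/duplicati/

If that works from the same box the container runs on but Duplicati still can't connect, the problem is most likely on the client side (proxy settings, TLS/Mono quirks) rather than the Nextcloud end.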

Link to comment
13 hours ago, jonathanm said:

Does anybody have duplicati backing up to their nextcloud webdav? I've been poking around trying to get it to connect, and even though I can successfully browse and create files with a webdav client, I am unable to get duplicati to successfully connect to webdav using the exact same host, port, directory, and credentials info that work with the webdav client.

 

I know this is a duplicati application issue and not related to the docker implementation, but I was hoping someone on here had successfully connected duplicati to a webdav destination and could share their method.

Got it working! The issue I ran into had nothing to do with unRAID, the Docker container, or really Duplicati or Nextcloud. It was a combination of nginx reverse proxy settings and the Mono version on Linux Mint. :D

Link to comment
  • 3 weeks later...

Not sure if this has been approached on here. Speed is an issue. I just switched to unRAID and had been running my own simple backup script using rsync. I like the way Duplicati does "snapshot" backups, so to speak. But SPEEEEED is an issue. Please help me if I am doing something wrong. I have 2.5TB of photos I am backing up to my external Seagate 5TB drive. It is an SMR drive, so I can expect a little slowdown.

I am using AES256 encryption.  

The processor is an i7-4790K, 4GHz, 4 cores.

It has taken over a week or two just to get ONE backup done of that data.   

I am thinking of going back to just making a LUKS container on the drive and rsyncing to that. What am I doing wrong for it to take over two weeks to back up 2.5TB? It's only backing up at something like 3.5KB/s.

Link to comment
  • 2 weeks later...

So I found the issue regarding SMB2 and have now changed the option, and I completely removed my old Duplicati container and appdata folder in order to start afresh. But now I'm having an issue where Duplicati errors when it tries to write to the network share.

 

  • Access to the path "/backups/duplicati-20180519T202052Z.dlist.zip.aes" is denied.

 

This didn't happen before I removed the container. I am using the path in the container as /mnt/disks/duplicati with R/W slave.

 

Any ideas?

 

Edit: I had accidentally changed something in the container variables; it's all working now.
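
For anyone who hits the same "access to the path is denied" error: a quick way to tell whether it's a permissions problem or a mapping problem is to test the write from inside the container, for example (the container name "duplicati" is whatever yours is called in the Docker tab):

# Check the backup path as the container actually sees it.
docker exec duplicati ls -ld /backups
docker exec duplicati touch /backups/.write-test && echo "write OK"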

Edited by Spies
Link to comment

Guys I have a question about this:

 

I set '/source' to '/mnt/user/' and changed this to RO slave so I can back up anything on the server.

 

If I need to restore,  I guess I'll have to change this back to RW. 

 

But what should the '/backup' container var be set to?

 

My backup destination is Backblaze's cloud storage...

 

Link to comment
On 5/28/2018 at 3:27 AM, jj_uk said:

But what should the '/backup' container var be set to?

 

I believe that's only needed if you want Duplicati on unRAID to be used as a backup location.

 

Most users will use some sort of cloud storage as their backup location, but you could back up (from another computer) to your unRAID server.
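
Roughly how the three container paths line up, as far as I understand the template (the host-side paths below are only examples, and on unRAID you'd set these in the Docker template rather than on the command line):

# /config  -> Duplicati's settings and local database (appdata)
# /source  -> what you want to back up (read-only is fine for backups)
# /backups -> only needed if this server is itself a backup destination
docker run -d --name=duplicati \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/config \
  -v /mnt/user:/source:ro \
  -v /mnt/user/backups:/backups \
  linuxserver/duplicati

For a Backblaze destination the /backups mapping isn't really used; and as noted above, you'd flip /source back to read-write when you need to restore.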

Link to comment
  • 3 weeks later...
  • 2 weeks later...
On 5/7/2018 at 5:42 PM, bphillips330 said:

Not sure if this has been approached on here. Speed is an issue. I just switched to unRAID and had been running my own simple backup script using rsync. I like the way Duplicati does "snapshot" backups, so to speak. But SPEEEEED is an issue. Please help me if I am doing something wrong. I have 2.5TB of photos I am backing up to my external Seagate 5TB drive. It is an SMR drive, so I can expect a little slowdown.

I am using AES256 encryption.  

The processor is an i7-4790K, 4GHz, 4 cores.

It has taken over a week or two just to get ONE backup done of that data.   

I am thinking of going back to just making a LUKS container on the drive and rsyncing to that. What am I doing wrong for it to take over two weeks to back up 2.5TB? It's only backing up at something like 3.5KB/s.

This is an ongoing issue with Duplicati. It's quite ridiculous. I have also heard that restores can take a couple of days on some large sets. However, I have a 2GB folder that I back up several times a day, and it works and restores perfectly (dedupes 2GB down to 90MB, too!). The block size, and the sheer number of files this program can create over time, is another issue.
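
If the pain point is the sheer number of remote files, the remote volume size can be raised with the advanced options - something like the lines below (the values are only examples; bigger volumes mean fewer files but more data to fetch per restore, and if I remember right the blocksize can only be chosen when the backup set is first created):

--dblock-size=200MB
--blocksize=1MB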

Edited by kilobit
Link to comment
  • 2 weeks later...

Hello.

 

I'm leaving CrashPlan Home in favor of Duplicati.

 

I've successfully run one backup from a user share to Microsoft OneDrive.

But after that, my server started to crash.

The web interface crashes, and there's no answer on ping.

 

It appears to happen when Duplicati is indexing files for another backup source (from another user share).

 

I've run CA Fix Common Problems, and it said there was a problem with Duplicati being mapped to unassigned devices without the slave option.

(Duplicati's TMP path is mapped to an unassigned device, and there is also a backup location on an unassigned device.)

I changed it to R/W Slave, but no difference.

 

I'm attaching the logs from Fix Common Problems run in troubleshooting mode.

 

Please let me know if anyone else has encountered the same problems, or if someone can spot anything strange in the logs.

FCPsyslog_tail.txt

datahall-diagnostics-20180707-2129 (1).zip

Link to comment
3 hours ago, linnkan said:

I changed it to R/W Slave, but no difference.

nor would it make a difference under this circumstance

3 hours ago, linnkan said:

strange from the logs.

Is Jul 7 ~ 9pm when the server stopped responding and you just noticed, or did you upload the incorrect files?

 

But, this appears to me to be your problem:

 

Overall:
    Device size:		 111.79GiB
    Device allocated:		 111.79GiB
    Device unallocated:		  56.00KiB
    Device missing:		     0.00B
    Used:			  33.20GiB
    Free (estimated):		  77.68GiB	(min: 77.68GiB)
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		  24.86MiB	(used: 0.00B)

Data,single: Size:110.78GiB, Used:33.10GiB
   /dev/sdd1	 110.78GiB

Metadata,single: Size:1.01GiB, Used:99.91MiB
   /dev/sdd1	   1.01GiB

System,single: Size:4.00MiB, Used:16.00KiB
   /dev/sdd1	   4.00MiB

Unallocated:
   /dev/sdd1	  56.00KiB

In English, your cache drive is full (even though it's not). I'm not a particular fan of btrfs-formatted cache drives if you're not running a cache pool (I prefer XFS in those circumstances, as there is far, far less trouble), but @johnnie.black will be able to help you get this back in order. It's more coincidence than anything that Duplicati appears to be causing this.
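
For reference, the usual way to get btrfs to release those fully allocated (but mostly empty) chunks is a filtered balance - a sketch below, assuming the cache is mounted at /mnt/cache - but wait for johnnie to confirm before running anything:

# Rewrite data chunks that are less than 75% used so their space is freed,
# then check that "Unallocated" has grown.
btrfs balance start -dusage=75 /mnt/cache
btrfs filesystem usage /mnt/cache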

 

Additionally, you really should heed this advice:

Jul  7 21:21:19 Datahall root: Fix Common Problems: Warning: Dynamix SSD Trim Plugin Not installed


 

You're just shooting yourself in the foot, performance-wise, without it.

 

Link to comment
