
[Support] Linuxserver.io - Duplicati


5 hours ago, Phastor said:

Where's a good place to map /tmp? Does it get large? Would I be able to map it to something like /user/appdata/duplicati/tmp without an issue?

 

I have mine mapped to /tmp

 

So both the host and the container use /tmp. If my understanding is correct, /tmp on the host is in your RAM, so make sure you have enough.

Edited by CrimsonTyphoon

(Yes, I quoted myself)

 

49 minutes ago, CrimsonTyphoon said:

 

I have mine mapped to /tmp

 

So both the host and the container use /tmp. If my understanding is correct, /tmp on the host is in your RAM, so make sure you have enough.

 

On second thought, don't map /tmp to /tmp until you hear from someone else. I am not an expert. I mapped my /tmp to a cache-only share. Can someone else chime in on whether it's okay to map /tmp to /tmp?


I came here looking for alternatives now that CrapPlan is bailing on home consumers.

 

SpaceInvader One's video on Duplicati is worth watching. In it he shows that the default /tmp folder could potentially fill up your "docker" image file, and he details how to remap it.

 

Since SpaceInvader One's video, LS have added a variable to the template to make this mapping easier: you just tell it where you want the tmp files stored. For the majority of users the cache should suffice, depending on the upload volume size.
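
For anyone doing it by hand rather than through the template, it's just an extra volume mapping on the container. Something along these lines (the host paths here are only an example, adjust them to your own shares):

docker run -d --name=duplicati \
  -e PUID=99 -e PGID=100 \
  -p 8200:8200 \
  -v /mnt/cache/appdata/duplicati/tmp:/tmp \
  -v /mnt/cache/appdata/duplicati:/config \
  -v /mnt/user:/source:ro \
  -v /mnt/user/backups:/backups \
  linuxserver/duplicati

The point is just that /tmp inside the container ends up on storage you chose instead of inside the docker image file.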


I decided to give this a try, now that I need to drop Crashplan before Oct 31.

 

Went with Backblaze B2, as I can back up 4 devices for less than Crashplan Business at $10 per device.

 

I followed SpaceInvader One's video and got it all set up, then did a couple of backup/restore tests to ensure it would work.

 

I have paused/restarted the backup without issue. It will take a while to complete as I only have 10 Mb/s upload.

 

I killed the docker, and upon restart the backup was not automatically resumed. It did report that it will start at the next scheduled time; I have the default of once per day.

 

It did not clean up the unsent dup-* files found on the cache drive where I mapped /tmp. That is a bit disconcerting; hopefully they will get cleaned up at some point. I do see it making new files after I started it again.

 

david


I have a top-level share called "backups" where I keep copies of my game server saves and the Veeam archives for my workstations. That share is set to cache-only, so based on what I heard here, I decided to put a tmp folder under it and map /tmp to that.

 

Next question:

 

I did a test run, and the first thing I noticed was the tons upon tons of 50 MB zip files going onto my external backup drive. I did some research and learned about remote volumes. From what I understand, the default size of these volumes was set to 50 MB with the idea that smaller sizes would be easier to move up and down to cloud services. Given that I will eventually fill my 8 TB and back it all up, that puts it in the ballpark of around 160,000 of these little guys sitting on that drive. I don't intend to push my backups to the cloud, and instead plan to stick with a couple of USB drives that I rotate so that I can have one off-site. With this in mind, should I set the remote volume size to something larger? The majority of what I'm backing up is audio and video, if that's a factor.

 

And like a couple of other people have stated here, I'm here now because of Crashplan deciding to crap on their home users.

Edited by Phastor


I am backing up to the cloud and changed mine to 100 MB without issue. It all comes down to how long it takes to manage a single file if it is huge. There were issues if the backup was larger than 4 TB, but I think those have been fixed.
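
If it helps, the setting is "Remote volume size" on the Options step of the backup job; under the hood I believe it's the dblock-size advanced option, so from the command line it would look roughly like this (option name from memory, double-check the Duplicati docs):

duplicati-cli backup "file:///backups/media" /source/media \
  --dblock-size=200MB \
  --passphrase="your-encryption-passphrase"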

 

 


I kept the volume size at 50 MB and did another test. The backup seems to be going really slowly, as it did about 30 GB in two hours.

 

I set it to 1 GB just for testing and got the same results.

 

I monitored the shares as the backup was happening, and it seems the slowness occurs while the zip files are being generated. It takes about 90 seconds to generate one of the 1 GB volumes. Transfer to the USB drive seems fine, as it only takes about 10 seconds to copy the file to the drive after it's generated.

 

I disabled compression, thinking that might speed things up. Most of my files are audio and video, so I won't benefit much from compression. I'm not really noticing any difference even after doing that.
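
(In case anyone wants to try the same thing: I believe the relevant setting is the zip compression level advanced option; the command-line equivalent would be roughly this, assuming I have the option name right.)

duplicati-cli backup "file:///backups/media" /source/media \
  --zip-compression-level=0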

 

Is there any way I can speed that up?

 

 

Edited by Phastor


What would be best practice for using Duplicati to back up from a remote computer?

 

2 hours ago, isvein said:

What would be best practice for using Duplicati to back up from a remote computer?

 

Site to site VPN


3 hours ago, jonathanm said:

Site to site VPN

And separate SSH users for the clients? Can't have them all use root for the Duplicati connection.

5 hours ago, isvein said:

And separate SSH users for the clients? Can't have them all use root for the Duplicati connection.

 

I also decided to try this to replace crashplan.

 

For my "crashplan friends", I just setup a FTP server on unraid.  The data is encrypted by the duplicati client so privacy is ok if not hardcore.  I created a bunch of FTP users and told them to use different root directory if they backup different PCs.


8 hours ago, Gog said:

 

I also decided to try this to replace crashplan.

 

For my "crashplan friends", I just setup a FTP server on unraid.  The data is encrypted by the duplicati client so privacy is ok if not hardcore.  I created a bunch of FTP users and told them to use different root directory if they backup different PCs.

That works :) I found out that every file and every folder on an unraid server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't. (It looks like it's only the SMB/AFP permissions that block access.)
But I also have a separate unraid server that is used only for backup, and since everything there is encrypted it's OK for friends and family to have access.

30 minutes ago, isvein said:

I found out that every file and every folder on an unraid server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't.

 

I've never used unraid's FTP server, but that is good (or should I say bad) to know.

19 hours ago, jonathanm said:

Site to site VPN

Looks like a site-to-site VPN, with the client connecting to the backup share over SMB/AFP, is the most secure option :)

Edited by isvein


This docker seems to create a public share named "HttpServer" whenever it starts, which the Fix Common Problems plugin then flags as a problem. Anyone know why this share is created and what its purpose is?

3 minutes ago, allanp81 said:

This docker seems to create a public share named "HttpServer" whenever it starts, which the Fix Common Problems plugin then flags as a problem. Anyone know why this share is created and what its purpose is?

 

I've not seen that myself. Perhaps if you attach your docker run command, some logs, and some config information we can see what's happening.


This is the output from updating the docker:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="duplicati" --net="bridge" -e TZ="Europe/London" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8200:8200/tcp -v "/mnt/cache/":"/tmp":rw -v "/mnt/disks/BUFFALO_External_HDD/Backup/duplicati/":"/backups":rw,slave -v "/mnt/user/P/":"/source":ro,slave -v "/mnt/cache/appdata/duplicati":"/config":rw linuxserver/duplicati
a1b4a215c1f22c23e12a7d5675ab3d7f83eb0a9590e97008c63faeb2bbc3840b

 

 

I'm not sure if this is what you mean by run command. If I bring up the log from its row under Docker, it shows me:

 


[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...

-------------------------------------
(linuxserver.io ASCII logo)

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing...
[cont-init.d] 30-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server has started and is listening on 0.0.0.0, port 8200

 

I have about 10 other dockers and don't seem to have this issue with any others.


It's because you're mounting /mnt/cache/ to /tmp 
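
Duplicati drops its temp files (which seems to include an HttpServer folder) into /tmp, and any top-level folder on the cache shows up as a user share in unraid. Point the /tmp volume at a folder inside the cache rather than at the whole cache drive, something like:

-v /mnt/cache/appdata/duplicati/tmp:/tmp

instead of

-v /mnt/cache/:/tmp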

 

 


Yep


6 hours ago, isvein said:

That works :) I found out that every file and every folder on an unraid server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't. (It looks like it's only the SMB/AFP permissions that block access.)
But I also have a separate unraid server that is used only for backup, and since everything there is encrypted it's OK for friends and family to have access.

 

Yeah, I figured that with proftpd restricting access to the backup directory it wasn't too bad, but it still leaves me a bit twitchy. I'm trying to switch to SFTP now, but I'm fighting with the public key authentication method. I'll post info if I figure it out.
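
The general shape of it, as far as I can tell, is: generate a key pair on the client, put the public half in the backup user's authorized_keys on the server, and point the backup job at the private half. Something like this (user and file names are just placeholders):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/duplicati_backup -N ""
ssh-copy-id -i ~/.ssh/duplicati_backup.pub backupuser@tower

Duplicati then gets told about the private key via its ssh-keyfile advanced option, if I'm reading the docs right.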


Maybe not within the scope of this thread, but does anyone know how I can change the TMP folder Duplicati uses on Windows?
It uses the default tmp folder and fills up my SSD :(

 

Edit: I changed the system TMP location; no need for Windows to use the SSD as tmp storage.
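
For anyone wanting to do the same, that's just the standard Windows user environment variables; from a command prompt it's something like the following (only newly started programs pick up the change):

setx TMP "D:\Temp"
setx TEMP "D:\Temp"

There also appears to be a tempdir advanced option in Duplicati itself if you'd rather not move the whole system TMP, but I haven't tried it.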

 

 

Edit 2: How small/big should the upload volume size be?

Edited by isvein
no need to create a new post
