[Support] Linuxserver.io - Duplicati


Recommended Posts

5 hours ago, Phastor said:

Where's a good place to map /tmp? Does it get large? Would I be able to map it to something like /user/appdata/duplicati/tmp without an issue?

 

I have mine mapped to /tmp

 

So the host and the container are both /tmp. If my understanding is correct, /tmp is in your RAM, so make sure you have enough

Edited by CrimsonTyphoon
Link to comment

(Yes, I quoted myself)

 

49 minutes ago, CrimsonTyphoon said:

 

I have mine mapped to /tmp

 

So the host and the container are both /tmp. If my understanding is correct, /tmp is in your RAM, so make sure you have enough

 

On 2nd thought, don't map /tmp to /tmp until you hear from someone else. I am not an expert. I mapped my /tmp to a cache only share. Can someone else chime in if it's okay to map /tmp to /tmp?
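
A quick way to check whether the host's /tmp is actually RAM-backed (tmpfs) before pointing the container at it:

# run on the unRAID host; if the Filesystem column shows tmpfs, /tmp lives in RAM
df -h /tmp
mount | grep ' /tmp '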

Link to comment

I came here looking for alternatives to CrapPlan bailing on home consumers.

 

SpaceInvader One's video on Duplicati is worth watching. In it he discovered that the default /tmp folder could potentially fill up your "docker" image file (and he details how to remap it).

 

Since SpaceInvader One's video, LS have added a variable to make this mapping easier: you just tell it where you want the tmp files stored. For the majority of users the cache should suffice, depending on the upload volume size.
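
In plain docker run terms it's just an extra volume mapping; a rough sketch with example host paths (adjust to your own cache share and keep the rest of the template as usual):

docker run -d --name=duplicati \
  -p 8200:8200 \
  -v /mnt/cache/appdata/duplicati:/config \
  -v /mnt/cache/appdata/duplicati/tmp:/tmp \
  -v /mnt/user/backups:/backups \
  linuxserver/duplicati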

Link to comment

I decided to give this a try, now that I need to drop Crashplan before Oct 31.

 

Went with Backblaze B2, as I can back up 4 devices for less than Crashplan Business at $10 per device.

 

I followed SpaceInvader One's video and got it all set up. Did a couple of backup/restore tests to ensure it would work.

 

I have paused/restarted the backup without issue.  It will take a while to complete as I only have 10 Mb/s upload.

 

I killed the docker and, upon restart, the backup was not automatically resumed.  It did report that it will start at the next scheduled time.  I have the default of once per day.

 

It did not clean up the unsent dup-* files found on the cache drive where I mapped /tmp.  That is a bit disconcerting.  Hopefully they will get cleaned up at some point.  I do see it making new files after I started it again.

 

david

Link to comment

I have a top level share called "backups" that I keep copies of my game server saves in and the Veeam archives for my workstations. That share is allocated to cache only, so based on what I heard here, I decided to put a tmp folder under that and map /tmp to that.

 

Next question:

 

I did a test run and the first thing I noticed were the tons upon tons of 50 MB zip files going onto my external backup drive. I did some research and learned about remote volumes. From what I understand, the default size of these volumes was set to 50 MB with the idea that smaller sizes would be easier to move up and down to cloud services. Given that I will eventually fill my 8 TB and am backing it all up, that puts it in the ballpark of around 160,000 of these little guys sitting on that drive. I don't intend to push my backups to the cloud, and instead plan to stick with a couple of USB drives that I rotate so that I can have one off-site. With this in mind, should I set the remote volume size to something larger? The majority of what I'm backing up is audio and video, if that's a factor.
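
(For reference, my understanding is that the "Remote volume size" field in the web UI corresponds to Duplicati's dblock-size option, so bumping it on the command line would look roughly like this; the paths and passphrase are made up:)

duplicati-cli backup file:///backups /mnt/user/media \
  --dblock-size=500MB \
  --passphrase=example-passphrase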

 

And, as a couple of other people have stated here, I'm here now because of Crashplan deciding to crap on their home users.

Edited by Phastor
Link to comment

I kept the volume size at 50 MB and did another test. The backup seems to be going really slowly; it did about 30 GB in two hours.

 

I set it to 1GB just for testing and got the same results.

 

I monitored the shares as the backup was happening and it seems that the slowness is occurring during the generation of the zip files. It takes about 90 seconds to generate one of the 1GB volumes. Transfer to the USB drive seems fine as it only takes about 10 seconds to copy the file to the drive after it's generated.

 

I disabled compression thinking that might speed things up. Most of my files are audio and video, so I won't benefit much from compression. I'm not really noticing any difference even after doing that.
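
In case anyone wants to try the same thing, my understanding is that turning compression off amounts to setting the zip compression level to zero; roughly, on the command line (target and source are just placeholders):

duplicati-cli backup file:///backups /source \
  --zip-compression-level=0 \
  --dblock-size=1GB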

 

Is there any way I can speed that up?

 

 

Edited by Phastor
Link to comment
5 hours ago, isvein said:

And separate SSH users for the clients? Can't have them all use root for the Duplicati connection.

 

I also decided to try this to replace crashplan.

 

For my "crashplan friends", I just setup a FTP server on unraid.  The data is encrypted by the duplicati client so privacy is ok if not hardcore.  I created a bunch of FTP users and told them to use different root directory if they backup different PCs.

  • Upvote 1
Link to comment
8 hours ago, Gog said:

 

I also decided to try this to replace crashplan.

 

For my "crashplan friends", I just setup a FTP server on unraid.  The data is encrypted by the duplicati client so privacy is ok if not hardcore.  I created a bunch of FTP users and told them to use different root directory if they backup different PCs.

That works :) I found out that every file and every folder on an unRAID server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't. (Looks like it's only the SMB/AFP permissions that block access.)
But I also have a separate unRAID server that is used only for backup, and since everything there is encrypted it's okay for friends and family to have access.

Link to comment
30 minutes ago, isvein said:

I found out that every file and every folder on an unRAID server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't.

 

I've never used unRAID's FTP server, but that is good (or should I say bad) to know.

Link to comment
3 minutes ago, allanp81 said:

This docker seems to create a public share named "HttpServer" whenever it starts, which the Fix Common Problems plugin then flags as a problem. Anyone know why this share is created and what its purpose is?

 

Not seen that myself. Perhaps if you attach your docker run command, some logs, and some config information, we can see what's happening.

Link to comment

This is the output from updating the docker:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="duplicati" --net="bridge" -e TZ="Europe/London" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8200:8200/tcp -v "/mnt/cache/":"/tmp":rw -v "/mnt/disks/BUFFALO_External_HDD/Backup/duplicati/":"/backups":rw,slave -v "/mnt/user/P/":"/source":ro,slave -v "/mnt/cache/appdata/duplicati":"/config":rw linuxserver/duplicati
a1b4a215c1f22c23e12a7d5675ab3d7f83eb0a9590e97008c63faeb2bbc3840b

 

 

I'm not sure if this is what you mean by the run command. If I bring up the log from the row under Docker, it shows me:

 

Quote

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...

-------------------------------------
_ _ _
| |___| (_) ___
| / __| | |/ _ \
| \__ \ | | (_) |
|_|___/ |_|\___/
|_|

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing...
[cont-init.d] 30-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server has started and is listening on 0.0.0.0, port 8200

 

I have about 10 other dockers and don't seem to have this issue with any others.

Link to comment
6 hours ago, isvein said:

That works :) I found out that every file and every folder on an unRAID server (I guess I should have checked this before) has all access (777). So if an FTP user connects to the server with FTP, they have access to all files even if they shouldn't. (Looks like it's only the SMB/AFP permissions that block access.)
But I also have a separate unRAID server that is used only for backup, and since everything there is encrypted it's okay for friends and family to have access.

 

Yeah, I figured with proftpd restricting access to the backup directory it wasn't too bad, but it still leaves me a bit twitchy.  I'm trying to switch to SFTP now, but I'm fighting with the public key authentication method.  I'll post info if I figure it out.
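
For anyone else attempting it, the general recipe I'm working from is the standard OpenSSH one (user name and paths are just examples):

# on the client machine: generate a key pair for Duplicati to use
ssh-keygen -t ed25519 -f ~/.ssh/duplicati_backup -N ""

# on the server: install the public key for the backup user
mkdir -p /home/backupuser/.ssh
cat duplicati_backup.pub >> /home/backupuser/.ssh/authorized_keys
chmod 700 /home/backupuser/.ssh
chmod 600 /home/backupuser/.ssh/authorized_keys

Then Duplicati should be able to use the private key for the SFTP connection (I believe the advanced option is called ssh-keyfile, but don't take my word for it).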

Link to comment

Maybe not in the scope of this thread, but does anyone know how I can change the TMP folder that Duplicati on Windows uses?
It uses the default tmp folder, and it fills up my SSD :(

 

Edit: I changed the system TMP location; no need for Windows to use the SSD as tmp storage.
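
(If anyone would rather not move the system-wide TMP: my understanding is that Duplicati also has its own tempdir advanced option, which can be added to the backup job to keep its temp files off the SSD. The path here is just an example:)

--tempdir=D:\DuplicatiTmp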

 

 

Edit 2: How small/big should the upload volume size be?

Edited by isvein
no need to create a new post
Link to comment
