[container] joch's s3backup

S3Backup

A simple way to back up important files to Amazon S3. Just mount the directories you want backed up under the /data volume, add the required environment variables for the S3 bucket and credentials, and the files will be backed up automatically. You can optionally set a lifecycle rule on the S3 bucket to automatically transition the uploaded files to the cheap Glacier or "infrequent access" storage classes.
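The transition mentioned above is configured with an S3 lifecycle rule rather than in the container itself. A minimal sketch of the JSON (the rule ID and the 30-day threshold are placeholders), which could be applied with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "archive-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Swap `GLACIER` for `STANDARD_IA` if you prefer the "infrequent access" class.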

 

 

Part of joch's docker templates: http://lime-technology.com/forum/index.php?topic=43480

 

Changelog

 

2015-10-22

  • Fix issue when using multiple parameters for S3CMDPARAMS

 

2015-10-21

  • Expose env S3CMDPARAMS for setting custom s3cmd parameters

 

2015-10-18

  • Initial release


Just to say that this works brilliantly, provided your account's access key / secret key does not contain the + symbol.

 

It took me ages to work out why it wasn't working, but I eventually saw that the + symbol had been converted somewhere along the line into a space character, resulting in an invalid command being sent to AWS.

 

I'm now happily transferring 20 GB of photos to AWS...


Google urlencode. When URLs are involved, a space is encoded as + and a + is decoded into a space. Other special characters are transformed as well, so they might need to be considered too.
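The effect is easy to reproduce in plain shell; here sed stands in for the decode step (the real decoder is whatever the web UI uses, which is an assumption):

```shell
# In application/x-www-form-urlencoded data, '+' means a space, so a
# urldecode pass turns a literal '+' inside a secret key into ' '.
key='abc+def'
decoded=$(printf '%s' "$key" | sed 's/+/ /g')
echo "$decoded"   # prints: abc def
```

Any key containing a literal + will come out of that step corrupted, which is exactly the symptom described above.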



Possibly, but that's not the point, especially when no encoding should be involved between a web-form POST request and the command-line instruction it builds.

 

The container's settings page gives you a textbox in which you enter your access key and another in which you enter your secret key. Everything I enter there should be passed through unchanged to the command line that launches the container. That wasn't the case, resulting in the container failing to run.

 

Can you tell me the appropriate place to raise a bug report for the following:

 

Containers may inadvertently fail to run due to an inappropriate urldecode call altering values entered in a container's config web form before the command-line instruction is executed, resulting in the container being passed incorrect values (ones that differ from those the user entered).

As an example, AWS randomly generates access and secret keys that may or may not contain + symbols. Where the key contains a + symbol, joch's s3backup container will fail to run, because the + is replaced by a space in the command-line instruction that the config page generates to start the container.


Guidelines for Defect Reports


eek,

Like you, I have installed joch's s3backup container. I also have a "+" in my AWS secret key field. I don't see any indication that the container has ever run, and the Docker log file is nearly empty (it has no error messages). What specifically did you change to finally get this container running? Thanks,

 

Kevin

 

 


Hey! Thanks for letting me know of this issue, and sorry for missing the replies! I will look into URL-encoding the key automatically before it's used within the container.


This looks like a great and cheap alternative to CrashPlan.

Do you recommend it for backing up 2 TB of media (photos, movies, music)?

 

And if someone has experience, what would the initial upload cost?


I use it in conjunction with CrashPlan, primarily for my photos, as a secondary cloud backup, so it's kind of a last-resort backup. The Glacier pricing structure is a bit complicated, so be sure to try the AWS pricing calculator before using it for large amounts of data, since it may cost a lot to retrieve the data if you exceed a certain threshold. Even sync requests cost money with Glacier, but my photos are arranged in yearly folders, which means I only need to sync the current year to Glacier; that reduces the cost significantly.

 

I currently pay around $1 per month to keep about 200 GB of data in Glacier while uploading a couple of gigabytes per week.

 

I use these parameters to sync once a week, at 03:00 every Sunday (I keep the photos on SD cards for at least a week anyway), and to use the reduced-redundancy S3 storage class before the files are moved to Glacier:

 

CRON_SCHEDULE=0 3 * * 0

S3CMDPARAMS=--reduced-redundancy

 

To answer your question, I think CrashPlan is great for general purpose backups, but Glacier is good for last-resort backups of really important things.


eek,

Like you, I have installed joch's s3backup container. I also have a "+" in my AWS secret key field. I don't see any indication that the container has ever run, and the Docker log file is nearly empty (it has no error messages). What specifically did you change to finally get this container running? Thanks,

 

Kevin

 

Sorry, never noticed the question until now.

 

I just regenerated my secret key until I got one without a + in it....


Wow, I did not know this option (regenerating the secret key) existed with AWS. I now have a new secret key (no plus sign) and will revisit the s3backup container to see if I can make it work. Thanks.

 

 

 


Can someone post a working Amazon IAM policy? I keep getting Access Denied errors. Thanks!


I'm getting the following error when trying to install via Community Apps. Any ideas?

 

Unable to find image 'bin:latest' locally

Pulling repository bin

Error: image library/bin:latest not found

 

 

The command displayed at the end is:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="S3Backup" --net="bridge" -e ACCESS_KEY="SOMETHING" -e SECRET_KEY="SOMETHING" -e S3PATH="s3://SOMETHING/" -e S3CMDPARAMS="--no-delete-removed" -e CRON_SCHEDULE="0 * * * *" -e TZ="America/New_York" -v "/mnt/user/main/s3backup":"/data":ro joch/s3backup

 

(something is obviously my key, secret, and bucket name)

 


http://lime-technology.com/forum/index.php?topic=40937.msg463355#msg463355

 

Presumably your secret key / access key / S3 path contains a space, so part of the value got parsed as the image name ("bin", which Docker then looks up as library/bin).

 

Upgrading to the latest CA and turning on auto updates for whatever plugins you choose will minimize these issues
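The failure mode above boils down to ordinary shell word splitting; a minimal sketch, with a made-up value standing in for the credential:

```shell
# A value containing a space, expanded unquoted, is split into two
# arguments - so a later token can be mistaken for the image name.
key="abc def"
set -- $key          # unquoted expansion: splits on the space
echo "$#"            # prints: 2
set -- "$key"        # quoted expansion: stays one argument
echo "$#"            # prints: 1
```

Quoting every expansion ("$key") is what keeps a space inside a credential from breaking the generated command line.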

 


Can someone post a working Amazon IAM policy? I keep getting Access Denied errors. Thanks!

 

Sorry for the late reply, but you can use something similar to the following:

 

{
  "Version": "2012-10-17",
  "Id": "Policy1459542796200",
  "Statement": [
    {
      "Sid": "Stmt1459542791970",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/mybackupuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "Stmt1459542791971",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/mybackupuser"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
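Strictly speaking, the above is a bucket policy (it names a Principal). If you would rather attach the permissions directly to the backup user in IAM, the equivalent identity-based policy simply drops the Principal element; a sketch using the same placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
```

The separate ListBucket statement is needed because that action applies to the bucket itself, not to the objects inside it.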

 

 


Thanks, I didn't have any spaces, but the update to the web UI seems to have fixed it.


Help needed,

 

I finally got the backup working after getting a key without any +'s in it. The backup ran once, and now a week later I am getting an SSL error:

"ERROR: SSL certificate verification failure: ('The read operation timed out',)"

 

Any thoughts?

 

Thank you!

Joe


Weird, I use it on multiple servers without any issues. Is the clock set correctly on your system?


Yes, the clock is set correctly. I just had some time to check my logs, and now I am getting:

 

/tmp/s3cmd.lock detected, exiting! Already running?

 

Any possibility of pointing me in the right direction? :-)

 

Thank you!
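For what it's worth, that message usually means an earlier run died without removing its lock file. A hypothetical sketch of such a lock-file guard (the demo path below is made up; the container's real lock is the /tmp/s3cmd.lock from the message):

```shell
# Hypothetical lock-file guard like the one producing the message above.
# Uses a demo path so it is safe to run anywhere.
LOCK="${TMPDIR:-/tmp}/s3cmd.lock.demo"
rm -f "$LOCK"                      # start clean for the demo
if [ -e "$LOCK" ]; then
    echo "$LOCK detected, exiting! Already running?"
else
    touch "$LOCK"
    echo "lock acquired, sync would run here"
    rm -f "$LOCK"                  # released when the run finishes
fi
```

If the real lock is stale, something like `docker exec S3Backup rm /tmp/s3cmd.lock` (with your container name) should clear it so the next scheduled run can proceed.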


I love this image.  I run the same image on my unRAID server, my Windows workstation and my wife's Mac laptop.

 

Suggest using the following instead of --reduced-redundancy.  Looks like that may be a deprecated option.

 

--storage-class REDUCED_REDUNDANCY

 

Can also use:

 

--storage-class STANDARD_IA

or

--storage-class STANDARD (the default)

 

http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

 

Cost vs features:

https://aws.amazon.com/s3/reduced-redundancy/

http://aws.amazon.com/s3/pricing/

 

EDIT: Note that if you set the schedule to something like every hour, obviously it will spin up the disks every time it runs.  In my case, I'd set it to every hour, and had a disk spin-down timeout of two hours, so the disks just stayed running.


I changed --reduced-redundancy to --storage-class REDUCED_REDUNDANCY and then sync started working again.

Great that you figured out the issue! :)


Hello Joch,

 

Any chance this could be altered to work with Amazon's new cloud storage? It is, after all, just a consumer-facing (and consumer-priced) S3 option.

 

Thanks

Bill


Amazon Drive does have an API, but it's not an AWS service, so it's not supported (yet?) by either awscli or s3cmd, which are the tools used in the various s3backup Docker images.

 

If you're thinking of something other than Amazon Drive, I couldn't find it.  All the searches I did for Amazon Cloud Storage brought me back to S3.  But Amazon Drive is the consumer-facing service that is similar to Dropbox or OneDrive.

