joch Posted October 18, 2015

S3Backup

A simple way to back up important files to Amazon S3. Just mount the directories you want backed up under the /data volume, add the required environment variables for the S3 bucket and credentials, and the files will be backed up automatically. You can optionally set a lifecycle policy on the S3 bucket to automatically move the uploaded files to the cheap Glacier or "infrequent access" storage classes.

Github: https://github.com/joch/docker-s3backup
Docker hub: https://hub.docker.com/r/joch/s3backup/
Part of joch's docker templates: http://lime-technology.com/forum/index.php?topic=43480

Changelog
2015-10-22 Fix issue when using multiple parameters for S3CMDPARAMS
2015-10-21 Expose env S3CMDPARAMS for setting custom s3cmd parameters
2015-10-18 Initial release
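For reference, the Glacier / "infrequent access" transition mentioned above is configured as an S3 lifecycle rule, not by the container itself. A minimal sketch using the aws CLI, assuming it is installed and configured; the bucket name and the 30-day threshold are placeholders:

    # Hypothetical lifecycle rule: move objects to Glacier 30 days after upload.
    cat > lifecycle.json <<'EOF'
    {
      "Rules": [
        {
          "ID": "archive-backups",
          "Filter": { "Prefix": "" },
          "Status": "Enabled",
          "Transitions": [
            { "Days": 30, "StorageClass": "GLACIER" }
          ]
        }
      ]
    }
    EOF
    aws s3api put-bucket-lifecycle-configuration \
      --bucket my-backup-bucket \
      --lifecycle-configuration file://lifecycle.json

Swap "GLACIER" for "STANDARD_IA" to use the infrequent-access class instead.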
eek Posted November 12, 2015

Just to say that this works brilliantly, provided your account's access key / secret key does not contain the + symbol. It took me ages to work out why it wasn't working, but I eventually saw that the + symbol had been transposed somewhere along the line into a space character, resulting in an invalid command being sent to AWS. I'm now happily transferring 20 GB of photos to AWS...
trurl Posted November 12, 2015

Google "urlencode". When URLs are involved, spaces are encoded into + and + is decoded into a space. Other things happen to other special characters, so they might need to be considered as well.
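trurl's point is easy to demonstrate. A quick illustration using Python's urllib with made-up key fragments (nothing from the container itself):

    python3 -c "import urllib.parse; print(urllib.parse.unquote_plus('AKIA+SECRET'))"
    # prints: AKIA SECRET   (the + became a space)
    python3 -c "import urllib.parse; print(urllib.parse.unquote_plus('AKIA%2BSECRET'))"
    # prints: AKIA+SECRET   (%2B decodes to a literal +)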
eek Posted November 13, 2015

Possibly, but that's not the point, especially when no encoding should be involved between a webform POST request building up a command-line instruction and executing it. The container's portal page gives you a textbox in which you enter your access key and another in which you enter your secret key. Everything I enter there should be carried over (without change) to the command line that launches the container. That wasn't the case, resulting in the docker failing to run. Can you tell me the appropriate place to raise a bug for the following:

Containers may inadvertently fail to run due to an inappropriate urldecode call transforming values entered in a container's config webform before the command-line instruction is executed, resulting in the container being passed incorrect values (ones that differ from those the user entered on the container's config webform). As an example, AWS randomly generates access and secret keys that may or may not contain + symbols. Where the key contains a + symbol, joch's s3backup container will fail to run because the + symbol is replaced by a space in the command-line instruction generated by the config page that is used to start the container.
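One untested workaround, assuming the webform's value is urldecoded exactly once before it reaches the command line: pre-encode the key before pasting it into the config field, so the decode step restores the original. The key below is made up:

    # Hypothetical: pre-encode the secret key so a single urldecode restores it
    python3 -c "import urllib.parse; print(urllib.parse.quote_plus('abc+def/ghi'))"
    # prints: abc%2Bdef%2Fghi  -- paste this encoded form into the field

If the value is not actually decoded (or is decoded more than once), this would make things worse, so treat it as a guess.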
trurl Posted November 13, 2015

Guidelines for Defect Reports
kgregg Posted January 14, 2016

eek, like you, I have installed joch's s3backup docker. I also have a "+" in my AWS secret key field. I don't see any indication that the docker has ever run, and the docker log file is nearly empty (it has no error msgs). What specifically did you change to finally get this docker running? Thanks, Kevin
joch Posted January 19, 2016

Hey! Thanks for letting me know of this issue, and sorry for missing the replies! I will look into URL-encoding the key automatically before it's used within the container.
JeffreyVrancken Posted January 26, 2016

This looks like a great and cheap alternative to CrashPlan. Do you recommend it for backing up 2 TB of media (photos, movies, music)? And if someone has experience, what would the initial upload cost?
joch Posted January 26, 2016

I use it in conjunction with CrashPlan, primarily for my photos as a secondary cloud backup, so it's kind of a last-resort backup. The Glacier pricing structure is a bit complicated, so be sure to try the AWS pricing calculator before using it for large amounts of data, since it may cost a lot to retrieve the data if you exceed a certain threshold. Even sync requests cost money on Glacier, but my photos are arranged in yearly folders. This means I only need to sync the current year to Glacier, which reduces the cost significantly. I currently pay around $1 per month to keep about 200 GB of data in Glacier while uploading a couple of gigabytes per week.

I use these parameters to sync once a week (I keep the photos on SD cards anyway for at least a week) and to use the reduced-redundancy S3 storage class before the files are moved to Glacier:

CRON_SCHEDULE=0 3 * * 0
S3CMDPARAMS=--reduced-redundancy

To answer your question, I think CrashPlan is great for general-purpose backups, but Glacier is good for last-resort backups of really important things.
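Put together from the settings above and the full command format shown later in this thread, a weekly photo backup would look roughly like this (keys, bucket, and paths are placeholders):

    docker run -d --name=S3Backup \
      -e ACCESS_KEY="YOUR_ACCESS_KEY" \
      -e SECRET_KEY="YOUR_SECRET_KEY" \
      -e S3PATH="s3://my-photo-bucket/2016/" \
      -e CRON_SCHEDULE="0 3 * * 0" \
      -e S3CMDPARAMS="--reduced-redundancy" \
      -v /mnt/user/photos/2016:/data:ro \
      joch/s3backup

CRON_SCHEDULE is standard cron syntax; "0 3 * * 0" fires at 03:00 every Sunday. The :ro on the volume mounts the data read-only, so the container can never modify the originals.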
spants Posted January 26, 2016

Read this before you switch over to Glacier: http://www.hashbackup.com/technical/glacier-eol
eek Posted January 28, 2016

Sorry, never noticed the question until now. I just regenerated my secret key until I got one without a + in it...
kgregg Posted January 28, 2016

Wow, did not know this option (regenerating the secret key) existed with AWS. I now have a new secret key (no plus sign) and will revisit the s3backup docker to see if I can make it work. Thanks.
NOX6 Posted March 17, 2016

Can someone post a working Amazon IAM policy? I keep getting Access Denied errors. Thanks!
Horn Posted May 2, 2016

I'm getting the following error when trying to install via community apps. Any ideas?

Unable to find image 'bin:latest' locally
Pulling repository bin
Error: image library/bin:latest not found

The command displayed at the end is:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="S3Backup" --net="bridge" -e ACCESS_KEY="SOMETHING" -e SECRET_KEY="SOMETHING" -e S3PATH="s3://SOMETHING/" -e S3CMDPARAMS="--no-delete-removed" -e CRON_SCHEDULE="0 * * * *" -e TZ="America/New_York" -v "/mnt/user/main/s3backup":"/data":ro joch/s3backup

(SOMETHING stands in for my key, secret, and bucket name)
Squid Posted May 2, 2016

http://lime-technology.com/forum/index.php?topic=40937.msg463355#msg463355

Presumably your secret key / access key / S3 path has a space in it, with "library" contained within it. Upgrading to the latest CA and turning on auto-updates for whatever plugins you choose will minimize these issues.
joch Posted May 2, 2016

Sorry for the late reply, but you can use something similar to the following:

{
  "Version": "2012-10-17",
  "Id": "Policy1459542796200",
  "Statement": [
    {
      "Sid": "Stmt1459542791970",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789:user/mybackupuser" },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "Stmt1459542791971",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789:user/mybackupuser" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
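Note that the JSON above is a bucket policy (it names a Principal), which is attached to the bucket itself. If you would rather attach the permissions directly to the IAM user, the Principal element is dropped. A rough sketch with the same placeholder names, assuming the aws CLI is configured with admin credentials:

    # Hypothetical: attach an equivalent inline policy to the backup user
    cat > s3backup-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::mybucket/*" },
        { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::mybucket" }
      ]
    }
    EOF
    aws iam put-user-policy --user-name mybackupuser --policy-name s3backup \
      --policy-document file://s3backup-policy.json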
Horn Posted May 2, 2016

Thanks, didn't have any spaces, but the update to the web UI seems to have fixed it.
joeschmoe Posted May 29, 2016

Help needed: I finally got the backup working after getting a key without any +'s in it. The backup ran once, and now a week later I am getting an SSL error:

ERROR: SSL certificate verification failure: ('The read operation timed out',)

Any thoughts? Thank you! Joe
joch Posted May 30, 2016

Weird, I use it on multiple servers without any issues. Is the clock set correctly on your system?
joeschmoe Posted June 12, 2016

Yes, the clock is set correctly. I just had some time to check my logs, and now I am getting:

/tmp/s3cmd.lock detected, exiting! Already running?

Any possibility of pointing me in the right direction? :-) Thank you!
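A guess rather than a confirmed fix: if no sync is actually running, that lock file is probably left over from a sync that was interrupted (for example, by the SSL timeout above). Removing it inside the container should let the next scheduled run start; the container name below matches the earlier docker run example:

    # Remove the stale lock file inside the running S3Backup container
    docker exec S3Backup rm -f /tmp/s3cmd.lock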
koyaanisqatsi Posted July 8, 2016

I love this image. I run the same image on my unRAID server, my Windows workstation, and my wife's Mac laptop.

Suggest using the following instead of --reduced-redundancy, which looks like it may be a deprecated option (see the snippet after this post):

--storage-class REDUCED_REDUNDANCY

Can also use:

--storage-class STANDARD_IA
or
--storage-class STANDARD (the default)

http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

Cost vs features:
https://aws.amazon.com/s3/reduced-redundancy/
http://aws.amazon.com/s3/pricing/

EDIT: Note that if you set the schedule to something like every hour, it will obviously spin up the disks every time it runs. In my case, I'd set it to every hour and had a disk spin-down timeout of two hours, so the disks just stayed spinning.
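Assuming the container passes S3CMDPARAMS straight through to the sync command, the change is just the value of that variable:

    # deprecated form:
    S3CMDPARAMS="--reduced-redundancy"
    # current form (STANDARD_IA or STANDARD also valid):
    S3CMDPARAMS="--storage-class REDUCED_REDUNDANCY"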
joeschmoe Posted July 11, 2016

I changed --reduced-redundancy to --storage-class REDUCED_REDUNDANCY and then sync started working again.
joch Posted July 12, 2016

Great that you figured out the issue!
ritalin Posted September 26, 2016

Hello joch, any chance this could be altered to work with Amazon's new Cloud Storage? It is, after all, just a consumer-facing (and consumer-priced) S3 option. Thanks, Bill
koyaanisqatsi Posted September 26, 2016

Amazon Drive does have an API, but it's not an AWS service, so it's not supported (yet?) by awscli or s3cmd, which are the tools used in the various s3backup Docker images. If you're thinking of something other than Amazon Drive, I couldn't find it; all the searches I did for "Amazon Cloud Storage" brought me back to S3. But Amazon Drive is the consumer-facing service that is similar to Dropbox or OneDrive.