
joch

Community Developer

Posts posted by joch

  1. On 4/19/2021 at 12:05 PM, TangoEchoAlpha said:

    Hi Joch -

     

    Sorry, I meant to add to this thread earlier this morning. I added the --verbose flag to the command parameters and now I am successfully uploading to S3 straight into the Glacier storage class. Maybe the lock file got cleaned up in the interim, maybe it's a coincidence, but it's working!

     

    Am I right in thinking that this will not support either client-side or server-side encryption, due to the need to do the MD5 hash as part of the file comparison?

     

    Thanks 😀

    Hi! Great to hear.

     

    You can enable transparent server-side encryption on the S3 bucket itself.
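    As a rough sketch (the bucket name is just a placeholder), default server-side encryption can be switched on from any machine with the AWS CLI configured:

    aws s3api put-bucket-encryption --bucket mybucket \
      --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

    The encryption is applied on Amazon's side after upload, so the sync command in the container doesn't need any changes for this.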

  2. 13 hours ago, TangoEchoAlpha said:

    I'm trying out this container, hoping to automate my AWS backups with a lightweight solution! At the moment I use a Windows app called FastGlacier to manually back up files to AWS, but obviously an automated solution would be better.

     

    I have installed the container and set my configuration as per the following:

     

    [screenshot: container configuration]

     

    As I want to use Glacier for backup and the lower cost, my storage-class command parameter is set to GLACIER which according to https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html is a supported option.

     

    After starting the container and checking the logs, I saw this error saying that the bucket didn't exist:

     

    [screenshot: container log showing the "bucket does not exist" error]

     

    I don't know if there's a way to create the bucket automatically if it doesn't exist, but in the meantime I consoled into the container and used s3cmd to make a new bucket. I then checked the bucket exists and also verified that I could manually use s3cmd to upload a file from my container's 'data' path:

     

    [screenshot: creating the bucket and uploading a test file with s3cmd]

     

    But I am now seeing a similar issue to that which joeschmoe saw, the job fails due to an existing S3 lockfile:

     

    [screenshot: job failing due to the existing s3cmd lock file]

     

    Based upon joeschmoe's findings, I am assuming this is because s3cmd doesn't like my storage-class parameter being set to GLACIER, yet the same option worked fine when I ran s3cmd sync manually from the command line. I can also see in the AWS management console that the file uploaded successfully to the bucket.

     

    I would really like to get this working so would be grateful for any pointers! :) I did try restarting the container several times in case that would help clean out /tmp, but it didn't change the behaviour.

     

    Hi! Try removing the lock file from the Docker container, like this:

    docker exec -ti S3Backup rm -f /tmp/s3cmd.lock
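    If the problem comes back, you can check whether the lock is gone and trigger a sync by hand from inside the container (a rough sketch, using the same sync command shown further down in this thread):

    docker exec -ti S3Backup ls -l /tmp/s3cmd.lock
    docker exec -ti S3Backup bash -c '/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH'

    The first command should report that the file is missing once the lock has been removed.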

     

  3. 18 hours ago, cptechnology said:

    I may be stupid, but I don't understand this part: "Just mount the directories you want backed up under the /data volume". I can't seem to find any /data volume on my unRAID server?

    There is nothing wrong with asking questions! What I meant is that you need to mount the folders you want backed up under /data in the Docker container, e.g. the host path /mnt/user/Documents needs to map into the container as /data/Documents in order for it to be backed up. You can add as many of those mappings as you like for everything you want backed up.
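    As a sketch of what that looks like when running the container by hand (the image name, the second folder and the bucket path are just example placeholders; use the values from the unRAID template):

    docker run -d --name S3Backup \
      -v /mnt/user/Documents:/data/Documents \
      -v /mnt/user/Pictures:/data/Pictures \
      -e S3PATH="s3://mybucket/" \
      -e CRON_SCHEDULE="0 3 * * 0" \
      joch/s3backup

    plus whatever credential variables the template defines.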

  4. On 7/31/2020 at 3:16 PM, Michael Hacker said:

    New unraid and s3backup user here.  First, this container is great, and thanks for providing it!

     

    One question though. I'm not sure if I'm doing something wrong, but every time I start the container, I get another cron entry. Now /etc/cron.d/s3backup has 8 entries (the first one and 7 duplicates).

     

    I can't figure out how to edit the cron file or how to make it stop duplicating. I know this file has to be stored somewhere, but there aren't any disk mappings beyond the share that I'm backing up. Any help is appreciated.

    Hi! That sounds weird. Did you manage to solve it, or are you still having an issue with this? How are you running the container?
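    If it's still happening, it would help to see the contents of the cron file inside the container, e.g.:

    docker exec -ti S3Backup cat /etc/cron.d/s3backup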

  5. On 4/10/2020 at 4:10 PM, troyan said:

    It doesn't work:
    https://photos.app.goo.gl/4ypj73YkTf9SUvqn8
    When I connect to the Docker container and edit /root/.s3config, should delete_removed = False?


     

    Editing the config file shouldn't be necessary.

     

    If you're in the container, does "echo $S3CMDPARAMS" show "--delete-removed"?

     

    If it does, then try running the command (in the container) manually to see if it works: "/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH"

     

  6. On 9/16/2017 at 4:29 PM, UntouchedWagons said:

    Okay, I added that parameter but there's still nothing in the log or in my bucket.

    Strange, let's do some debugging. First, enter the container:

    docker exec -ti S3Backup bash

    Then run the command manually, adding the verbose flag:

    /usr/local/bin/s3cmd sync $S3CMDPARAMS -v /data/ $S3PATH

    If that doesn't show anything, run the command with the debug flag:

    /usr/local/bin/s3cmd sync $S3CMDPARAMS -d /data/ $S3PATH

     

    To exit the container, just type "exit".

     

    Did any of that help you find out the reason why it doesn't work?

  7. 13 hours ago, UntouchedWagons said:

    Hi there. How do I find out why s3cmd did not upload anything to my bucket? I configured your container to run every day at 3AM and I supplied brand new access and security keys but when I checked the bucket this afternoon nothing was uploaded. I checked the container's log and there were no messages of any sort.

    Hm, that's strange. Maybe try adding "--verbose" to S3CMDPARAMS so s3cmd tells you more about what it's doing.
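    Following the same pattern as the other parameters in this thread, that would just be:

    S3CMDPARAMS=--verbose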

  8. What exactly does this do? Does it strictly copy files from unRAID to S3, or does it perform a sync between unRAID and S3? So if you were to delete a file off unRAID, will the object be removed from your S3 bucket?

    It's basically up to you. The default behaviour is to sync, but *not* delete files on the remote which have been deleted locally. You can however enable this by adding the "--delete-removed" flag to S3CMDPARAMS if you want that behaviour.
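    So to get a true mirror, where local deletions are propagated to the bucket, the parameter would look like this:

    S3CMDPARAMS=--delete-removed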

  9. Help needed,

     

    I finally got the backup working after getting a key without any +'s in it. The backup ran once, and now, a week later, I am getting an SSL error:

    "ERROR: SSL certificate verification failure: ('The read operation timed out',)"

     

    Any thoughts?

     

    Thank you!

    Joe

    Weird, I use it on multiple servers without any issues. Is the clock set correctly on your system?
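    A quick way to compare is to check the time both on the host and inside the container:

    date
    docker exec -ti S3Backup date

    If the container's clock is far off, SSL certificate validation can fail.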

  10. Can someone post a working Amazon IAM Policy? I keep getting Access Denied errors. thanks!

     

    Sorry for the late reply, but you can use something similar to the following:

     

    {
      "Version": "2012-10-17",
      "Id": "Policy1459542796200",
      "Statement": [
        {
          "Sid": "Stmt1459542791970",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::123456789:user/mybackupuser"
          },
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
          "Sid": "Stmt1459542791971",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::123456789:user/mybackupuser"
          },
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::mybucket"
        }
      ]
    }
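    That one is written as a bucket policy (it names the user as a Principal). If you'd rather attach a policy to the IAM user directly, a roughly equivalent identity-based policy drops the Principal element; a sketch with the same placeholder names:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::mybucket"
        }
      ]
    }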

     

     

    This looks like a great and cheap alternative to CrashPlan.

    Do you recommend it for backing up 2 TB of media (photos, movies, music)?

     

    And if someone has experience, what would the initial upload cost?

    I use it in conjunction with CrashPlan, primarily for my photos as a secondary cloud backup, so it's kind of a last-resort backup. The Glacier pricing structure is a bit complicated, so be sure to try the AWS pricing calculator before using it for large amounts of data; retrieval can cost a lot if you exceed a certain threshold. Even sync requests cost money with Glacier, but since my photos are arranged in yearly folders, I only need to sync the current year, which reduces the cost significantly.

     

    I currently pay around $1 per month to keep about 200 GB of data in Glacier and upload a couple of gigabytes per week.

     

    I use these parameters to sync once a week (I keep the photos on SD cards anyway for at least a week) and to use the reduced-redundancy S3 storage class before the files are moved to Glacier:

     

    CRON_SCHEDULE=0 3 * * 0

    S3CMDPARAMS=--reduced-redundancy

     

    To answer your question, I think CrashPlan is great for general purpose backups, but Glacier is good for last-resort backups of really important things.

  12. S3Backup

    A simple way to back up important files to Amazon S3. Just mount the directories you want backed up under the /data volume and add the required environment variables for the S3 bucket and credentials, and the files will be backed up automatically. You can optionally set a lifecycle rule on the S3 bucket to automatically move the uploaded files to the cheaper Glacier or "infrequent access" storage classes.
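    As a rough sketch (the bucket name and the 30-day delay are just examples), such a lifecycle rule can be created with the AWS CLI like this:

    aws s3api put-bucket-lifecycle-configuration --bucket mybucket \
      --lifecycle-configuration '{"Rules": [{"ID": "archive", "Status": "Enabled", "Filter": {"Prefix": ""}, "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]}]}'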

     

     

    Part of joch's docker templates: http://lime-technology.com/forum/index.php?topic=43480

     

    Changelog

     

    2015-10-22

    • Fix issue when using multiple parameters for S3CMDPARAMS

     

    2015-10-21

    • Expose env S3CMDPARAMS for setting custom s3cmd parameters

     

    2015-10-18

    • Initial release
