joch

Community Developer


  1. Hi! I don't use Unraid anymore, so I'm not much help, but I hope someone on the forum can assist. If something needs updating in the s3backup Docker image, I can of course make the necessary changes.
  2. Hi! Great to hear. You can enable transparent server-side encryption on the S3 bucket itself.
  3. Hi! Try removing the lock file from the Docker container, like this: docker exec -ti S3Backup rm -f /tmp/s3cmd.lock
  4. There is nothing wrong with asking questions! What I meant is that you need to mount the folders you want backed up under /data in the Docker container; e.g. the host path /mnt/user/Documents needs to map to /data/Documents inside the container in order for it to be backed up. You can add as many of those mappings as you like for everything you want backed up (see the volume-mapping sketch after this list).
  5. Hi! That sounds weird. Did you manage to solve it, or are you still having an issue with this? How are you running the container?
  6. Editing the config file shouldn't be necessary. If you're in the container, does "echo $S3CMDPARAMS" show "--delete-removed"? If it does, try running the command manually (inside the container) to see if it works: "/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH"
  7. Hello! Set the environment variable S3CMDPARAMS to include --delete-removed (see the sketch after this list).
  8. Strange, let's do some debugging. First, enter the container:
     docker exec -ti S3Backup bash
     Then run the command manually, adding the verbose flag:
     /usr/local/bin/s3cmd sync $S3CMDPARAMS -v /data/ $S3PATH
     If that doesn't show anything, run it again with the debug flag:
     /usr/local/bin/s3cmd sync $S3CMDPARAMS -d /data/ $S3PATH
     To exit the container, just type "exit". Did any of that help you find out why it doesn't work?
  9. Hm, that's strange. Maybe try adding "--verbose" to S3CMDPARAMS to make the output more verbose; it may tell you something.
  10. It's basically up to you. The default behaviour is to sync but *not* delete files on the remote that have been deleted locally. You can, however, enable that behaviour by adding the "--delete-removed" flag to S3CMDPARAMS.
  11. Great that you figured out the issue!
  12. Weird, I use it on multiple servers without any issues. Is the clock set correctly on your system?
  13. Sorry for the late reply, but you can use something similar to the following (note that each statement needs a unique Sid; a sketch of how to apply the policy follows after this list):
      {
        "Version": "2012-10-17",
        "Id": "Policy1459542796200",
        "Statement": [
          {
            "Sid": "Stmt1459542791970",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789:user/mybackupuser" },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket/*"
          },
          {
            "Sid": "Stmt1459542791971",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789:user/mybackupuser" },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket"
          }
        ]
      }
  14. I use it in conjunction with CrashPlan, primarily for my photos as a secondary cloud backup, so it's kind of a last-resort backup. The Glacier pricing structure is a bit complicated, so be sure to try the AWS pricing calculator before using it for large amounts of data; retrieving the data can cost a lot if you exceed a certain threshold. Even sync requests cost money with Glacier, but since my photos are arranged in yearly folders, I only need to sync the current year, which reduces the cost significantly. I currently pay around $1 per month to keep about 200 GB of data in Glacier while uploading a couple of gigabytes per week. I use these parameters to sync once a week (I keep the photos on SD cards for at least a week anyway) and to use the reduced-redundancy S3 storage class before the files are moved to Glacier (see the configuration sketch after this list):
      CRON_SCHEDULE=0 3 * * 0
      S3CMDPARAMS=--reduced-redundancy
      To answer your question, I think CrashPlan is great for general-purpose backups, but Glacier is good for last-resort backups of really important things.
  15. Hey! Thanks for letting me know about this issue, and sorry for missing the replies! I will look into URL-encoding the key automatically before it's used within the container.
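
For the /data volume mapping described in reply 4, here is a minimal sketch of an equivalent docker run invocation. The image tag joch/s3backup and the Photos path are assumptions for illustration; substitute whatever your template actually uses:

    # Back up multiple folders by mapping each host path under /data.
    # Image name "joch/s3backup" and the Photos path are illustrative assumptions.
    docker run -d --name S3Backup \
      -v /mnt/user/Documents:/data/Documents \
      -v /mnt/user/Photos:/data/Photos \
      joch/s3backup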
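
For replies 6 and 7, a sketch of passing --delete-removed through the S3CMDPARAMS environment variable at container start; the image name and the S3PATH value are illustrative assumptions:

    # Pass --delete-removed via S3CMDPARAMS so files deleted locally are
    # also removed from the bucket on the next sync.
    docker run -d --name S3Backup \
      -e S3CMDPARAMS="--delete-removed" \
      -e S3PATH="s3://mybucket/backup/" \
      -v /mnt/user/Documents:/data/Documents \
      joch/s3backup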
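
The bucket policy from reply 13 can be attached with the AWS CLI; a sketch, assuming the policy is saved as policy.json and the CLI is configured with sufficient permissions:

    # Attach the policy to the bucket named in the example.
    aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json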
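
And for the weekly Glacier setup in reply 14, a sketch of how those two parameters could be set on the container. Only the CRON_SCHEDULE and S3CMDPARAMS values come from the reply; the image name, bucket path, and yearly-folder mapping are illustrative assumptions:

    # Sync every Sunday at 03:00 using the reduced-redundancy storage class;
    # quoting keeps the cron expression intact. Paths and image are assumptions.
    docker run -d --name S3Backup \
      -e CRON_SCHEDULE="0 3 * * 0" \
      -e S3CMDPARAMS="--reduced-redundancy" \
      -e S3PATH="s3://mybucket/photos/2016/" \
      -v /mnt/user/Photos/2016:/data/2016 \
      joch/s3backup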