Syncing a folder with Amazon S3


The other day I set out to finally start syncing one of my backup folders on one of our Ubuntu servers with Amazon S3. The reasons are obvious: Amazon S3 is cheap and reliable, and it is a good fit for keeping “smaller” chunks of files as backup storage. The script that runs through crontab was easy to write, but I spent far more time getting the S3 script running. So, here are the steps to sync any folder with Amazon S3.
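For context, a crontab entry that runs such a backup script nightly could look like the line below. The script path, log path, and schedule are placeholders, not taken from my actual setup:

```shell
# Hypothetical crontab line: run the backup script every night at 2:00
0 2 * * * /usr/local/bin/s3-backup.sh >> /var/log/s3-backup.log 2>&1
```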

First off, I created an additional bucket on S3. Let’s call this bucket “mybackup”.

Since I did not want to install anything additional to run the S3 scripts (there is a very popular S3Rsync for Ruby out there), I wanted to get it running with the “bash” shell itself. Why install another language if you’ve got everything already, right?

So, I went ahead and downloaded the S3-bash scripts. As the name implies, these are scripts that run in the bash shell, and nothing else is needed. Unfortunately, the documentation is severely lacking, so I had to scour the net to find some. Since what’s out there is very sparse, here are the steps to get it running.

After you have unpacked the scripts you get three scripts called:

  • s3-get – for getting files from S3
  • s3-put – for putting files on S3
  • s3-delete – for deleting files on S3

Since we want to put files on S3, we are going to focus on the “s3-put” script. Here is an example of what an s3-put command looks like:

s3-put -k {yourkey} -s awssecretfile -T /backup/myfile.zip /s3-backup/myfile.zip

The parameters are as follows:

  • -k
    The “-k” parameter is your Amazon S3 key
  • -s
    The “-s” parameter is a path to a file (which you have to write) which contains your Amazon S3 secret key. Please read further down, in order to avoid any errors with this.
  • -T
    The “-T” parameter is the absolute path to the local file you want to put on Amazon S3
  • bucket and path on S3
    Last but not least, you pass the S3 bucket and the target file name at the end of the command.

Now, that wasn’t so hard, right? Well, there is one small thing that drove me crazy during my initial setup: the file with my Amazon S3 secret key kept throwing errors. Somehow the length was not matching, among other errors. After some digging around, I figured out what was going on: the secret key is 40 characters long, but a trailing newline in the file makes it 41 bytes, which is what triggers the “41 bytes” error message. You have to rewrite the file without the newline. To do that, issue the following command:

cat awssecretfile | tr -d '\n' > awssecretfile-new
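Alternatively, you can sidestep the problem by writing the secret file with printf, which (unlike echo) adds no trailing newline, and then checking the size. The key below is a 40-character placeholder, not a real secret:

```shell
# Write a placeholder 40-character secret key without a trailing newline.
# Replace the placeholder with your real Amazon S3 secret key.
printf '%s' 'EXAMPLESECRETKEYEXAMPLESECRETKEY12345678' > awssecretfile
chmod 600 awssecretfile   # the secret should not be world-readable
wc -c < awssecretfile     # prints 40 when no newline snuck in
```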

Right, so I’m hoping this helps someone out there. Have fun.
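To tie it all together, the per-file upload can be wrapped in a small function that walks a folder and hands each file to s3-put. This is only a sketch, assuming the s3-bash scripts are on the PATH; the bucket, key, and paths in the example call are placeholders:

```shell
#!/bin/bash
# Sketch: upload every regular file in a folder to an S3 bucket via s3-put.
sync_dir_to_s3() {
  local src="$1" bucket="$2" key="$3" secretfile="$4" f
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    s3-put -k "$key" -s "$secretfile" -T "$f" "/$bucket/$(basename "$f")"
  done
}

# Example call with placeholder values:
# sync_dir_to_s3 /backup mybackup AKIAEXAMPLEKEY /root/.awssecret
```

Note that this re-uploads everything on each run, since s3-put simply puts one file; that is fine for a small backup folder but worth keeping in mind as it grows.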
