How To Copy Files From Local Directory To AWS S3 Bucket Using AWS CLI

June 19, 2018

Hogwarts freshman no more

In my previous blog, I mentioned how I upload my website content to my AWS S3 Bucket using simple drag-and-drop via the AWS Web Console. That has become very inconvenient: every time the source code changes, I need to open up a browser, go to the AWS website, navigate to the bucket, click Upload, select all my files from my local directory, and then drag..and-then..drop. Whew! And sometimes, you get the mouse-behaving-badly moments. I did mention that drag-and-drop is not a mighty charm...it's the basic Wingardium Leviosa! I definitely need a better spell.

My better method is to run an AWS CLI command instead. Coming from a Linux background, I find the command pretty simple and straightforward. This makes it easier for me to update my website content: I just run a one-liner command and everything is all set. When I open up a terminal, I just do a reverse-search (CTRL+R) and, just like that, magic. The installation and configuration of the AWS CLI are beyond the scope of this article, but you may find them in the links below:


TL;DR

Here is how I run my command to copy files from my local directory into my AWS S3 Bucket (www.jamgutz.com):

aws s3 cp ./ s3://www.jamgutz.com/ --recursive

Since I have configuration files, like .gitignore, that I don't want to upload to my AWS S3 Bucket, I just add the --exclude option.

Whenever I commit my changes to my remote repository on GitLab, I just run the following command from my source code local directory as part of my deployment process.

aws s3 cp ./ s3://www.jamgutz.com/ --exclude ".*" --recursive
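Since this runs as part of a deployment, it can help to preview the exact command before letting it loose on the bucket. Here's one possible sketch of scripting that step; note that the DRY_RUN toggle and the build_deploy_cmd helper are just illustrative names I made up, not anything the AWS CLI provides:

```shell
#!/usr/bin/env bash
# deploy.sh -- a sketch of the upload step, with a preview mode.
set -u

BUCKET="s3://www.jamgutz.com/"

# Build the cp command as a string so it can be shown before it runs.
build_deploy_cmd() {
    printf 'aws s3 cp ./ %s --exclude ".*" --recursive' "$1"
}

# DRY_RUN defaults to 1 (preview only); set DRY_RUN=0 to actually upload.
DRY_RUN="${DRY_RUN:-1}"
if [ "$DRY_RUN" = "1" ]; then
    echo "Would run: $(build_deploy_cmd "$BUCKET")"
else
    aws s3 cp ./ "$BUCKET" --exclude ".*" --recursive
fi
```

The AWS CLI also has its own built-in --dryrun option on the s3 subcommands, which prints the operations it would perform without actually transferring anything.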

Here's how I run the command if I want to cherry-pick a particular file (e.g. index.html) to upload:

aws s3 cp index.html s3://www.jamgutz.com/