Comprehensive Guide: Deploy Static Websites and SPAs (React/Vue/Angular) to AWS S3 Buckets

Ridwan Yusuf · Published in AWS Tip · Apr 12, 2023

This guide provides a step-by-step approach on how to deploy a static web application or Single Page Application (SPA) built with React, Angular, Vue, etc., to an AWS S3 bucket.

But why would we want to use an S3 bucket instead of other hosting platforms?

Some benefits of using an S3 bucket for static site hosting:

  1. Cost-effective: S3 offers a pay-as-you-go pricing model, allowing you to only pay for the storage and data transfer you actually use, which can be cost-effective for hosting static web content.
  2. Scalability: S3 can handle large amounts of data and high levels of traffic, making it suitable for hosting web applications with varying levels of traffic.
  3. Performance benefits: S3 allows you to store your content in multiple geographic locations, potentially improving the performance and load times of your web application by serving content from a location that is closer to your users.

Services and Tools Covered in the Tutorial:
1. AWS S3 bucket — to store files and folders
2. CloudFront — AWS CDN for fast content delivery
3. Route 53 — for managing CNAME/A records, etc.
4. IAM User — to create a user with S3 and CloudFront access
5. Certificate Manager — for securing connection with TLS (i.e., https)
6. GitHub Actions — for automated deployment

What You Need to Get Started:
1. Custom domain (you can obtain a cheap one to follow along)
2. AWS account (at least a free tier account)
3. GitHub account

By the end of the guide, we will have covered how to set up a parent domain (e.g., example.com) and subdomains (e.g., dashboard.example.com) using the services and tools mentioned earlier.

Without delay, let’s begin.

# S3 setup

The first step is to go to the AWS console, search for Amazon S3, and create an S3 bucket to host the contents of our website.


Click on the ‘Create bucket’ button. In the ‘Bucket name’ field, enter the exact domain name you purchased from your registrar. For example, if you purchased example.com, the bucket name must be exactly example.com, not www.example.com.

Scroll down to ‘Block Public Access settings’ and deselect ‘Block all public access’ as shown in the screenshot. Leave all other settings as default and proceed to create the bucket.

Once the bucket is created, it will be listed as part of the available buckets. Click on the bucket name. Click on ‘Upload’, then select ‘Add files’ to upload files to the bucket.

bucket lists
Upload index.html in the created bucket

Select a sample ‘index.html’ file from your computer and click the ‘Upload’ button to upload the file.

Once index.html is uploaded, navigate to the ‘Permissions’ tab for the bucket and click ‘Edit’ under ‘Bucket policy’.

Update the content of the bucket policy as shown below. This allows public read access to the bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ridwanray.com/*"
    }
  ]
}

Remember to replace the domain name (i.e., ridwanray.com) with your own domain (i.e., the name of the bucket you created), then save the changes.
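If you prefer the command line, the same policy can be generated and applied with the AWS CLI. This is a sketch: example.com is a placeholder for your bucket name, and the final put-bucket-policy call is commented out because it requires configured AWS credentials.

```shell
# Placeholder bucket name; replace with the bucket you created.
BUCKET=example.com

# Generate the public-read policy for this bucket.
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Sanity-check the JSON before applying it.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid"

# Apply it (requires AWS credentials; uncomment to run):
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json
```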

On the ‘Properties’ tab, scroll to the ‘Static website hosting’ section and click on ‘Edit’. Enable ‘Static website hosting’.

Enable static web hosting

Also, update both the Index document and Error document to be ‘index.html’. If you wish to use another page for the Error document, you can choose another file of your choice. Remember that the letter case matters, so ensure that the name is exactly ‘index.html’, which is the name of the document you uploaded earlier.

After making all the changes, click on ‘Save changes’ to save the updated settings.

Once static website hosting is enabled for an S3 bucket, a unique endpoint URL is generated that can be used to access the website publicly.
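Equivalently, static website hosting can be enabled from the AWS CLI. A minimal sketch, assuming a placeholder bucket named example.com in us-east-1; the aws call is commented out because it needs credentials, and note that a few older regions use a dot (s3-website.region) instead of a dash in the endpoint.

```shell
# Placeholder values; substitute your own bucket and region.
BUCKET=example.com
REGION=us-east-1

# Enable static hosting with index.html as both the index and error document
# (requires credentials; uncomment to run):
# aws s3 website "s3://${BUCKET}/" --index-document index.html --error-document index.html

# The website endpoint generated by S3 follows this pattern in most regions:
echo "http://${BUCKET}.s3-website-${REGION}.amazonaws.com"
```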

# Route 53 and name server setup

Route 53 is a scalable Domain Name System (DNS) web service offered by AWS. It provides domain registration, DNS routing, etc. It allows users to manage and configure their domain names and route traffic to various AWS resources, such as EC2 instances, S3 buckets, and Load Balancers.

The first thing we need to do is verify that we own the purchased domain. In Route 53, we need to create a hosted zone for the domain.

In the ‘Domain name’ field, enter the domain name you purchased and create the hosted zone.

Once the hosted zone is created, two additional records (NS and SOA) are generated, as shown in the image. We will need the Name Server (NS) information to verify that we own the domain. Copy all the NS records, go to the domain registration platform where you purchased the domain, and fill in the NS records accordingly. Different registrars have their own dashboards or settings for configuring name servers for a domain that you own.

On Namecheap, it looks like this:

Sometimes it may take several hours for the changes to propagate after updating the Name Servers for a domain.

# Certificate Manager

In order for the website to support secure HTTPS (TLS), we need to provide a certificate.

Go to AWS Certificate Manager (ACM) and request a public certificate. Note that a certificate used with CloudFront must be requested in the us-east-1 (N. Virginia) region.

For domain names, I have used ridwanray.com and *.ridwanray.com. The second is a wildcard domain, meaning that it will match domains like www.ridwanray.com, blog.ridwanray.com, etc.

If you only want to support www.example.com in addition to example.com, you may not need a wildcard; the second domain would simply be www.example.com.

Choose DNS validation, leave the other settings as default, and request the certificate.

Once the certificate is requested, click on it, then click the button that says ‘Create records in Route 53’.

To confirm that the records are added to Route 53, you can go to the hosted zone, click on the domain, and check if CNAME records are added.

After a few minutes, check back on the requested certificate; the status should have changed to ‘Issued’.

# CloudFront distribution setup

CloudFront acts as a Content Delivery Network (CDN) to serve content from the S3 bucket efficiently. This is achieved through caching. When a user first requests the website, CloudFront fetches the files from the S3 bucket, serves them, and caches them at its edge locations. Subsequent requests are served from the cache, resulting in faster load times. CloudFront goes back to the S3 bucket only when cached objects expire or are explicitly invalidated, ensuring efficient content delivery.

The next step is to set up a CloudFront distribution to serve the S3 website.

Head over to CloudFront in the AWS console and click on the ‘Create Distribution’ button to create a new CloudFront distribution.

Fill in the ‘Origin domain’ field with the website endpoint provided by the S3 bucket. You can find it in the ‘Properties’ tab of the S3 bucket, under ‘Static website hosting’. It is advisable to copy and paste this endpoint rather than selecting the bucket from the dropdown list, so that CloudFront uses the website endpoint rather than the REST endpoint.
An example of the endpoint format is:
ridwanray.com.s3-website-us-east-1.amazonaws.com

Scroll down to the ‘Viewer’ section and check ‘Redirect HTTP to HTTPS’.

Scroll to the ‘Web Application Firewall (WAF)’ section and choose ‘Do not enable security protections’.

Under the ‘Settings’ section, add the alternate domain names (CNAMEs) yourdomain.com and www.yourdomain.com. (Note: if you created the S3 bucket as blog.yourdomain.com, then the alternate domain here needs to be blog.yourdomain.com.)

For ‘Custom SSL certificate’, choose the certificate you created earlier.
Finally, create the distribution, leaving the other settings as default.

By now, we have the newly created distribution as part of the list.

The ID of the created distribution will be used later.

Once the ‘Last Modified’ field in the CloudFront distribution changes from ‘Deploying’ to a specific date and time, it indicates that the distribution is ready. At this point, you can access your website by using the domain provided by CloudFront (e.g. d12345abcdefg.cloudfront.net). This will allow you to verify that your website is correctly served through CloudFront.

Cloudfront domain

Navigate to the ‘Error pages’ tab and click ‘Create custom error response’.

For ‘HTTP error code’, choose 403. Click ‘Yes’ for ‘Customize error response’, specify ‘/index.html’ as the response page path, and set the HTTP response code to 200. This redirects 403 (permission denied) responses to the index page, which is what lets a single page application handle its own routes.
Finally, create the custom error response.
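For reference, the same custom error response appears as follows in the distribution's configuration (as returned by aws cloudfront get-distribution-config); the caching TTL value here is an arbitrary illustration:

```json
{
  "ErrorCode": 403,
  "ResponsePagePath": "/index.html",
  "ResponseCode": "200",
  "ErrorCachingMinTTL": 10
}
```

Returning 200 with /index.html means a request for any deep link (e.g., /about) still loads the app shell, and the client-side router renders the correct page.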

# Set up an IAM user with access to the S3 bucket and CloudFront

The truth is that we would not want to log in to the AWS console every time we want to update the website. Instead, we can create a user with access to S3 and CloudFront, and use that user's credentials on GitHub to push changes to the website. This streamlines the process and avoids manually uploading files for every change we make.

We would need to create a group with S3 and CloudFront permissions. Once the group is created, we can add as many users as needed into that group. This way, we can manage access to S3 and CloudFront efficiently by assigning permissions to a group instead of individual users, making it easier to manage multiple users with similar permissions.

Go to ‘Identity and Access Management (IAM)’ and click on ‘User groups’ in the sidebar. This will take you to the IAM User Groups management where you can create a group.

Give the group a suitable name, such as “s3-cloudfront-group”, and attach the “AmazonS3FullAccess” and “CloudFrontFullAccess” permissions by searching for them and checking the boxes, as shown in the image.

Finally, create the group by clicking on the “Create group” button.

Confirm the group is created with the right permissions:

Under ‘Access management’, click ‘Users’, then ‘Add users’.

Enter a username for the user you want to create, add the user to the group created earlier (i.e., s3-cloudfront-group), and finally click ‘Create user’.

Once this is done, click on the created user, go to the ‘Permissions’ tab, and check that the user has the permissions defined in the group.

Under the ‘Security credentials’ tab, scroll to ‘Access keys’ and click ‘Create access key’.

Choose ‘Command Line Interface (CLI)’ as the use case and create the access key.

Download the keys as a CSV file and save it in a secure location for future use.

The Access Key and Secret Access Key obtained from IAM will be used for authentication from GitHub to access AWS services such as S3 and CloudFront.
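The FullAccess managed policies used above are the quickest route, but for a CI user you may prefer a tighter inline policy scoped to the deployment itself. A sketch, assuming the bucket example.com; the CloudFront resource is left as a wildcard for simplicity:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::example.com"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example.com/*"
    },
    {
      "Effect": "Allow",
      "Action": ["cloudfront:CreateInvalidation"],
      "Resource": "*"
    }
  ]
}
```

These are the only actions the deployment workflow later in this guide needs: syncing files (put/get/delete plus listing the bucket) and invalidating the CloudFront cache.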

# Route requests for the domains to the CloudFront distribution

For any request coming into the application through www.example.com, example.com, or blog.example.com, we need Route 53 to route the request to the CloudFront distribution we created earlier. This is done by creating A records inside the hosted zone for our domain.

Go to the hosted zone for the domain

Here, we need to create A records that direct requests for www.yourdomain.com and yourdomain.com to the CloudFront distribution.
Click ‘Create record’:
1. For yourdomain.com, leave ‘Record name’ blank.
2. For www.yourdomain.com, type ‘www’ as the ‘Record name’.

Record type: select A and make sure to enable the ‘Alias’ toggle.
Route traffic to: select ‘Alias to CloudFront distribution’, then select the CloudFront distribution you created.


Click ‘Add another record’ to handle www.

Supply the same details as above, except for ‘Record name’, which needs to be ‘www’.

ridwanray.com
www.ridwanray.com

Once this is completed, we can see the two created records listed.
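The same two alias records can also be created with aws route53 change-resource-record-sets. A sketch of the change batch, assuming the domain example.com and the placeholder CloudFront domain from earlier; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d12345abcdefg.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d12345abcdefg.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

You would pass this file to the CLI with --change-batch file://records.json along with your hosted zone ID.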

If www.example.com and example.com are not loading yet, do not worry; the DNS changes may not have propagated, which can take hours. Just make sure that the URLs generated by CloudFront and the S3 bucket load properly.

Cloudfront distribution url
S3 bucket url

# Set up a GitHub repo and workflow

At the root level of the project, create a folder named ‘.github’, then create another folder named ‘workflows’ inside the ‘.github’ folder. Finally, create a file called ‘deploy_dev.yml’.
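From a terminal at the project root, the same layout can be created with:

```shell
# Create the GitHub Actions workflow directory and an empty workflow file.
mkdir -p .github/workflows
touch .github/workflows/deploy_dev.yml
```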

Below is the content of ‘deploy_dev.yml’:


name: Build and Deploy App to S3 Bucket
on:
  push:
    branches: [ main ]
jobs:
  build-and-deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    env:
      BUCKET: ${{ secrets.BUCKET_NAME }}
      DIR: .
      REGION: us-west-2
      DIST_ID: ${{ secrets.DIST_ID }}

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}

      - name: Copy files to S3 bucket
        run: |
          aws s3 sync --delete ${{ env.DIR }} s3://${{ env.BUCKET }}

      - name: Invalidate cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ env.DIST_ID }} \
            --paths "/*"

BUCKET_NAME, DIST_ID, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY are all kept as secrets on the GitHub repo.

aws s3 sync --delete ${{ env.DIR }} s3://${{ env.BUCKET }}

This command updates the content of the S3 bucket (env.BUCKET) with what we have at (env.DIR), which is the root directory (.) in our case.

--delete: this tells the command to delete files in the S3 bucket that do not exist locally, ensuring the local directory and the bucket contents are exactly the same.

Cloudfront invalidation ensures that the updated files in the S3 bucket are propagated to the CloudFront edge locations and served to end users with the latest changes.

Note: remember to set the appropriate region (i.e., REGION: us-west-2); in this case, it is the region displayed beside the S3 bucket in the dashboard.

Note: if you are building a Single Page Application (SPA) with React, Angular, Vue, Svelte, or any other frontend framework, the directory should be changed to the framework's build output directory, e.g., ‘build’ for Create React App; tools such as Vite and Vue CLI emit to ‘dist’ instead.

To generate a build for a Single Page Application (SPA) built in React.js, you would need to run a build step. Below is a sample YAML file for reference


name: Build and Deploy App to S3 Bucket
on:
  push:
    branches: [ main ]
jobs:
  build-and-deploy:
    name: Build and Deploy
    runs-on: ubuntu-latest
    env:
      BUCKET: ${{ secrets.BUCKET_NAME }}
      DIST: build
      REGION: us-west-2
      DIST_ID: ${{ secrets.DIST_ID }}

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Install dependencies
        run: npm ci # Change to yarn install if using yarn

      - name: Build app
        run: npm run build

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}

      - name: Copy files to S3 bucket
        run: |
          aws s3 sync --delete ${{ env.DIST }} s3://${{ env.BUCKET }}

      - name: Invalidate cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ env.DIST_ID }} \
            --paths "/*"

Once you have confirmed that everything is in order, go to your GitHub repository and add the following repository secrets: BUCKET_NAME, DIST_ID, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.

Update AWS_ACCESS_KEY_ID to the access key ID value in the downloaded CSV file.
Update AWS_SECRET_ACCESS_KEY to the secret access key value from the CSV.
BUCKET_NAME should be the name of the AWS S3 bucket, e.g., example.com.
DIST_ID should be the CloudFront distribution ID that was created earlier.

Distribution ID

Once the deployment is successful, you should be able to see the updated site.

Link to github repo: here

Thanks for reading and feel free to explore my video collection on YouTube for more educational content. Don’t forget to subscribe to the channel to stay updated with future releases.

Resources

https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-cloudfront-walkthrough.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/website-hosting-custom-domain-walkthrough.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/IndexDocumentSupport.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/CustomErrorDocSupport.html
