Introduction
If you have ever considered deploying your portfolio or a static website, you may have thought about hosting it on the cloud to make it accessible to anyone on the internet. Fortunately, there is a solution that eliminates the need to manually configure and manage AWS services through the management console.
With the appropriate tools and frameworks, it is possible to deploy your website to the AWS cloud without relying on the management console. This approach can save time and effort while ensuring a reliable and scalable infrastructure for your website. By leveraging infrastructure as code and other deployment automation techniques, you can streamline the deployment process and focus on creating great content for your website.
Stay with me and don't panic. We will be using Terraform to provision the different resources in the AWS cloud.
Now, what on earth is Terraform, and how does it do that?
Well, Terraform is essentially one of the best Infrastructure as Code (IaC) tools for building, changing, and versioning infrastructure safely and efficiently. Terraform allows you to define the basic building blocks of your infrastructure in .tf configuration files. When you run Terraform commands, it reads your configurations and automatically provisions and manages those resources.
Did you get the idea? No? Then let's jump into the practical implementation, and by the end you will be able to provision with Terraform on your own.
Prerequisite
An AWS account with the AWS CLI configured. (The configured CLI user must have IAM policies for the services we are using.)
Terraform installed on your machine, along with basic knowledge of it. (My machine runs Ubuntu.)
Code editor installed on your machine. (I am using VSCode)
Knowledge of the related AWS services.
STEP 1: Configure Keys
To connect to AWS and use its services, Terraform needs an IAM access key and secret access key. For this tutorial, you can create a new IAM user and generate the keys (if you already have the AWS CLI configured, you don't have to generate them again; just check that user's policies). Download the keys.
After the keys have been downloaded, verify the profile for them. In most cases, they are saved in the following file under the [default] profile:
cat ~/.aws/credentials
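The file typically looks like this (the key values below are placeholders, not real credentials):

```ini
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```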
STEP 2: Folder Structure
Maintaining a clean configuration and scaling infrastructure can become challenging when using Terraform for larger projects. However, Terraform provides a solution in the form of modules. With modules, you can simplify the management of your infrastructure and expand your project with ease.
For example, you can create a separate module for the EC2 instance, RDS database, and S3 bucket. Each module can have its own Terraform configuration file, which can be version-controlled and shared across projects.
With this modular approach, you can simplify the management of your infrastructure, make it more organized, and reuse resources across projects. You can also update and test each module independently, without affecting the rest of the infrastructure.
Create two directories (objects and modules) and the files main.tf, terraform.tfvars, and variables.tf as described below.
Ignore other files such as the lock file and the tfstate file.
Objects: This directory contains the HTML and CSS files to be uploaded to the bucket as objects.
Modules: This directory contains the separate configuration files for each module: S3, Route53, CloudFront, and ACM.
main.tf: This file contains the main Terraform configuration code that defines and provisions the infrastructure.
variables.tf: This file declares the variables that will be used in the Terraform configuration.
terraform.tfvars: This file initializes the variables declared in variables.tf. It maps the variable names to their values, which are specified as key-value pairs. The file is optional, but it is useful for initializing variables without having to pass them in via the command line or environment variables.
It is better to create the folder structure at the beginning. Since we know which AWS services we will be using, create the subdirectories as follows, with the three files main.tf, variables.tf, and outputs.tf in each subdirectory.
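As a sketch, the whole skeleton can be created from the project root with a few shell commands (the directory and file names below simply mirror the ones used throughout this guide):

```shell
# create the objects directory and one directory per module
mkdir -p objects modules/s3 modules/cloudfront modules/acm modules/route53

# root-level configuration files
touch main.tf variables.tf terraform.tfvars

# each module gets its own main.tf, variables.tf, and outputs.tf
for m in s3 cloudfront acm route53; do
  touch "modules/$m/main.tf" "modules/$m/variables.tf" "modules/$m/outputs.tf"
done
```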
Now, let's begin configuring.
STEP 3: S3 Bucket and Objects
Amazon S3 is a cloud-based storage service that lets you store and retrieve files, called objects, at any time and from anywhere. An S3 bucket is like a folder where you can store your objects. An object is a file that you upload to an S3 bucket. Objects can be of any type, such as text files, images, videos, or application data.
One of the best features of Amazon S3 is hosting a static website.
Inside the modules/s3
main.tf :
# create the s3 bucket
resource "aws_s3_bucket" "site_bucket" {
  bucket = var.bucket_name

  tags = {
    Name        = "mahesh-website-bucket"
    Environment = "Prod"
  }
}
This Terraform code creates an AWS S3 bucket using the aws_s3_bucket resource. Here's a breakdown of the code:
resource "aws_s3_bucket" "site_bucket" { ... }: This creates an AWS S3 bucket using the aws_s3_bucket resource in Terraform.
bucket = var.bucket_name: This specifies the name of the bucket to be created, which is passed in as the variable var.bucket_name.
tags = { ... }: This specifies a set of tags to be applied to the S3 bucket. In this case, two tags are specified: Name, which has the value "mahesh-website-bucket", and Environment, which has the value "Prod".
# configure object ownership
resource "aws_s3_bucket_ownership_controls" "site_bucket_ownership" {
  bucket = aws_s3_bucket.site_bucket.bucket

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# block public access
resource "aws_s3_bucket_public_access_block" "site_bucket_block" {
  bucket = aws_s3_bucket.site_bucket.bucket

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# set the bucket ACL to private
resource "aws_s3_bucket_acl" "site_bucket_acl" {
  depends_on = [
    aws_s3_bucket_ownership_controls.site_bucket_ownership,
    aws_s3_bucket_public_access_block.site_bucket_block,
  ]

  bucket = aws_s3_bucket.site_bucket.bucket
  acl    = "private"
}
This Terraform code secures the S3 bucket by blocking public access and setting its access control to "private". Here's a breakdown of the code:
resource "aws_s3_bucket_ownership_controls" "site_bucket_ownership" { ... }: This creates an S3 bucket ownership controls resource, which allows you to specify the object ownership for objects in the bucket. In this case, it sets object_ownership to "BucketOwnerPreferred".
resource "aws_s3_bucket_public_access_block" "site_bucket_block" { ... }: This creates an S3 bucket public access block resource, which blocks public access to the bucket. It sets the block_public_acls, block_public_policy, ignore_public_acls, and restrict_public_buckets parameters to true.
resource "aws_s3_bucket_acl" "site_bucket_acl" { ... }: This creates an S3 bucket ACL resource, which sets the bucket access control to "private". It depends on the ownership controls and public access block resources to ensure the bucket is properly configured before the ACL is applied.
Public access is blocked because we will be using CloudFront to cache the content and the objects will be made available through CloudFront only.
# Although this is a bucket policy rather than an IAM policy,
# the aws_iam_policy_document data source may be used, as long as it specifies a principal.
data "aws_iam_policy_document" "allow_access" {
  version = "2012-10-17"

  statement {
    sid    = "Allow bucket access from cloudfront to static website"
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    actions = [
      "s3:GetObject",
    ]

    resources = [
      "${aws_s3_bucket.site_bucket.arn}/*",
    ]

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [var.cloudfront_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "allow_access" {
  bucket = aws_s3_bucket.site_bucket.bucket
  policy = data.aws_iam_policy_document.allow_access.json
}
This is a bucket policy, since we define the type and identifiers in the principals block.
data "aws_iam_policy_document" "allow_access": This declares a Terraform data source that represents an IAM policy document. A data source is a way to fetch and use external data in your Terraform configuration.
version = "2012-10-17": This sets the version of the IAM policy document to "2012-10-17", one of the supported versions of the IAM policy language.
statement { ... }: This defines a statement within the policy document. A statement specifies a permission or restriction in the policy.
sid = "Allow bucket access from cloudfront to static website": This sets a unique identifier for the statement. The identifier is optional, but it can be useful for auditing and debugging.
principals { ... }: This specifies the identity or group to which the permission applies. Here the identity is a service principal, which represents the AWS service requesting access: type is set to "Service" and identifiers is set to ["cloudfront.amazonaws.com"], indicating that CloudFront is the service requesting access.
effect = "Allow": This sets the effect of the statement to "Allow", indicating that the permission is granted.
actions = ["s3:GetObject"]: This specifies the action that is allowed. Here, s3:GetObject allows the CloudFront distribution to retrieve objects from the S3 bucket.
resources = ["${aws_s3_bucket.site_bucket.arn}/*"]: This specifies the resources to which the permission applies. Here, it applies to all objects in the S3 bucket identified by the ARN of the aws_s3_bucket.site_bucket resource.
condition { ... }: This specifies a condition that must be met for the permission to be granted. Here, it checks that AWS:SourceArn matches the cloudfront_arn variable. If the condition is met, the permission is granted.
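For reference, the data source above renders to a JSON bucket policy roughly like the following (the bucket name and the distribution ARN shown here are placeholder values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow bucket access from cloudfront to static website",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mahesh-bucket-website/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```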
# encrypt the bucket using SSE-S3
resource "aws_s3_bucket_server_side_encryption_configuration" "encrypt" {
  bucket = aws_s3_bucket.site_bucket.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
This Terraform code sets up server-side encryption for the S3 bucket using SSE-S3, which ensures that all objects stored in the bucket are encrypted at rest. Here's a breakdown of the code:
resource "aws_s3_bucket_server_side_encryption_configuration" "encrypt": This declares a resource of type aws_s3_bucket_server_side_encryption_configuration, which represents the server-side encryption configuration for an S3 bucket.
bucket = aws_s3_bucket.site_bucket.bucket: This specifies the bucket to which the encryption configuration applies, referencing the existing aws_s3_bucket resource.
rule { ... }: This defines a rule for the server-side encryption configuration. In this case, there is only one rule.
apply_server_side_encryption_by_default { ... }: This specifies that server-side encryption should be applied by default to all objects in the bucket.
sse_algorithm = "AES256": This sets the encryption algorithm for SSE-S3 to AES-256.
You can also configure other features of the S3 bucket, such as versioning.
# enable static website hosting
resource "aws_s3_bucket_website_configuration" "site_hosting" {
  bucket = aws_s3_bucket.site_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

# the hashicorp/dir/template module is an easy way to read all the files inside a directory
module "template_files" {
  source   = "hashicorp/dir/template"
  base_dir = "${path.module}/../../objects"
}

resource "aws_s3_object" "hosting_bucket_files" {
  bucket = aws_s3_bucket.site_bucket.bucket

  for_each     = module.template_files.files
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content

  # Unless the bucket has encryption enabled, the ETag of each object is an
  # MD5 hash of that object.
  etag = each.value.digests.md5
}
This Terraform code sets up the S3 bucket for static website hosting and uploads files from a local directory to the bucket. Here's a breakdown of the code:
module "template_files" { ... }: This declares a Terraform module that reads all the files inside a directory and formats them for use in other resources.
source = "hashicorp/dir/template": This sets the source of the module to the HashiCorp "dir/template" module, which provides a convenient way to read files from a directory and create a map of their contents.
base_dir = "${path.module}/../../objects": This specifies the base directory where the files are located. path.module is the path to the current module, and ../../objects is the relative path to the directory containing the files.
resource "aws_s3_object" "hosting_bucket_files" { ... }: This defines a resource of type aws_s3_object that creates an S3 object for each file.
bucket = aws_s3_bucket.site_bucket.bucket: This specifies the bucket where the objects should be created, referencing the existing aws_s3_bucket resource.
for_each = module.template_files.files: This creates the resource once for each file read by the module. The files output is a map of all the files in the directory, with the key being the file name and the value being an object containing information about the file.
key = each.key: This sets the S3 object key to the file name.
content_type = each.value.content_type: This sets the content type of the S3 object to the content type of the file.
source = each.value.source_path: This sets the source of the S3 object to the file path.
content = each.value.content: This sets the content of the S3 object to the contents of the file.
etag = each.value.digests.md5: This sets the ETag of the S3 object to the MD5 hash of the file contents.
variables.tf
variable "region" {
  type = string
}

variable "bucket_name" {
  type = string
}

variable "cloudfront_arn" {
  type = string
}
By defining these variables in your variables.tf, you can easily customize the deployment of AWS resources without having to hardcode values in your configuration files.
outputs.tf
output "url" {
  value = aws_s3_bucket_website_configuration.site_hosting.website_endpoint
}

output "dns_domain_name" {
  value = aws_s3_bucket.site_bucket.bucket_regional_domain_name
}

output "origin_id" {
  value = aws_s3_bucket.site_bucket.id
}

output "path" {
  value = path.module
}
By defining output variables in outputs.tf, you can easily retrieve important information about the resources that were created during the deployment. Output variables can also be used to pass information between Terraform modules or to other applications that need to interact with the deployed infrastructure.
Inside root directory
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.1.0"
    }
  }
}

provider "aws" {
  region  = var.region
  profile = "default"
}

module "s3" {
  source = "./modules/s3"

  region         = var.region
  bucket_name    = var.bucket_name
  cloudfront_arn = module.cloudfront.cloudfront_arn
}
This Terraform code declares a module named "s3" that sources its configuration from the "./modules/s3" directory and passes the values for several input variables.
region = var.region: This passes the region as an input variable to the "s3" module. The value is obtained from var.region, which is defined in terraform.tfvars.
bucket_name = var.bucket_name: This passes the bucket name as an input variable to the "s3" module. The value is obtained from var.bucket_name, which is defined in terraform.tfvars.
cloudfront_arn = module.cloudfront.cloudfront_arn: This passes the ARN of the CloudFront distribution as an input variable to the "s3" module. The value is obtained from the cloudfront_arn output of the "cloudfront" module, which is declared elsewhere in the configuration.
variables.tf
variable "region" {
  type = string
}

variable "bucket_name" {
  type = string
}
terraform.tfvars
region = "us-east-1"
bucket_name = "mahesh-bucket-website"
STEP 4: CloudFront
Amazon CloudFront is a content delivery network (CDN) service provided by AWS. CloudFront can be used to distribute content, such as videos, images, and web pages, to geographically dispersed users around the world with low latency and high data transfer speeds.
CloudFront works by caching content in edge locations, which are distributed across the globe. When a user requests content, CloudFront automatically routes the request to the nearest edge location, which serves the content from the cache. This helps to reduce latency and improve the performance of web applications and other content-heavy services.
Inside modules/cloudfront
main.tf :
resource "aws_cloudfront_origin_access_control" "for_s3" {
  name                              = "cloudfront-origin-access-control"
  description                       = "Origin access control policy"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name              = var.dns_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.for_s3.id
    origin_id                = var.origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Used for s3 bucket"
  default_root_object = "index.html"
  aliases             = [var.domain_name]

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = var.origin_id
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    acm_certificate_arn = var.acm_certificate_arn
    ssl_support_method  = "sni-only"
  }
}
This Terraform code creates an AWS CloudFront distribution for an S3 bucket and sets up origin access control to restrict access to the bucket. Here's a breakdown of the code:
resource "aws_cloudfront_origin_access_control" "for_s3" { ... }: This creates a resource of type aws_cloudfront_origin_access_control that sets up origin access control for the S3 bucket, declaring its name, description, origin type, signing behaviour, and signing protocol.
name = "cloudfront-origin-access-control": This specifies the name of the origin access control resource.
description = "Origin access control policy": This sets the description of the origin access control.
origin_access_control_origin_type = "s3": This sets the type of origin for which access control is being set up. In this case, the type is "s3".
signing_behavior = "always": This specifies the signing behaviour for CloudFront. In this case, requests are always signed.
signing_protocol = "sigv4": This specifies the signing protocol for CloudFront, Signature Version 4.
resource "aws_cloudfront_distribution" "s3_distribution" { ... }: This creates a resource of type aws_cloudfront_distribution that sets up a CloudFront distribution for the S3 bucket.
origin { ... }: This specifies the origin of the distribution. domain_name is set to the DNS domain name of the S3 bucket, origin_access_control_id to the ID of the origin access control, and origin_id to the ID of the origin.
enabled = true: This enables the CloudFront distribution.
is_ipv6_enabled = true: This enables IPv6 for the distribution.
comment = "Used for s3 bucket": This sets a comment on the distribution.
default_root_object = "index.html": This sets the default root object for the distribution.
aliases = [var.domain_name]: This specifies the list of domain names associated with the distribution.
default_cache_behavior { ... }: This specifies the default cache behaviour. It allows the GET and HEAD methods, sets the target origin ID to the ID of the origin, and sets the viewer protocol policy to "redirect-to-https". It also sets the minimum, default, and maximum time-to-live values and specifies that cookies should not be forwarded.
restrictions { ... }: This specifies the restrictions for the distribution. In this case, the geo-restriction type is "none".
tags = { Environment = "production" }: This specifies a set of tags for the distribution.
viewer_certificate { ... }: This specifies the viewer certificate. The ACM certificate ARN is set to the ARN of the certificate, and the SSL support method is "sni-only".
variables.tf
variable "dns_domain_name" {
  type = string
}

variable "origin_id" {
  type = string
}

variable "acm_certificate_arn" {
  type = string
}

variable "domain_name" {
  type = string
}
outputs.tf
output "cloudfront_arn" {
  value = aws_cloudfront_distribution.s3_distribution.arn
}

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

output "cloudfront_hosted_zone_id" {
  value = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
}
Inside root
main.tf
module "cloudfront" {
  source = "./modules/cloudfront"

  dns_domain_name     = module.s3.dns_domain_name
  domain_name         = var.domain_name
  origin_id           = module.s3.origin_id
  acm_certificate_arn = module.acm.acm_certificate_arn
}
This Terraform code declares a module named "cloudfront" that sources its configuration from the "./modules/cloudfront" directory and passes the values for several input variables.
dns_domain_name = module.s3.dns_domain_name: This passes the DNS domain name of the S3 bucket as an input variable to the "cloudfront" module. The value is obtained from the dns_domain_name output of the "s3" module, which is declared elsewhere in the configuration.
domain_name = var.domain_name: This passes the domain name for the CloudFront distribution as an input variable to the "cloudfront" module. The value is obtained from var.domain_name, which is defined in terraform.tfvars.
variables.tf
variable "domain_name" {
  type = string
}
terraform.tfvars
domain_name = "mupreti.com.np"
STEP 5: AWS Certificate Manager (ACM)
AWS Certificate Manager (ACM) is a service provided by AWS that allows you to provision, manage, and deploy SSL/TLS certificates for use with AWS services and your own applications.
ACM provides a simple and automated way to obtain SSL/TLS certificates for your domain names, without having to go through a manual process of generating and managing certificates.
Inside modules/acm/
main.tf
# request a public certificate from AWS Certificate Manager
resource "aws_acm_certificate" "acm_certificate" {
  domain_name               = var.domain_name
  subject_alternative_names = [var.alternative_name]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# get details about the Route 53 hosted zone
data "aws_route53_zone" "route53_zone" {
  name         = var.domain_name
  private_zone = false
}

# create record sets in Route 53 for domain validation
resource "aws_route53_record" "route53_record" {
  for_each = {
    for dvo in aws_acm_certificate.acm_certificate.domain_validation_options : dvo.domain_name => {
      name    = dvo.resource_record_name
      record  = dvo.resource_record_value
      type    = dvo.resource_record_type
      zone_id = data.aws_route53_zone.route53_zone.zone_id
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = each.value.zone_id
}

# validate the ACM certificate
resource "aws_acm_certificate_validation" "acm_certificate_validation" {
  certificate_arn         = aws_acm_certificate.acm_certificate.arn
  validation_record_fqdns = [for record in aws_route53_record.route53_record : record.fqdn]
}
This Terraform code provisions a public SSL/TLS certificate using AWS Certificate Manager (ACM) and validates it using DNS validation. Here is an explanation of the resources used in this code:
aws_acm_certificate: This resource requests a public SSL/TLS certificate from ACM. The domain_name parameter specifies the primary domain name for which the certificate is being requested, while subject_alternative_names specifies any additional domain names to include in the certificate. The validation_method parameter is set to "DNS" to use DNS validation. The lifecycle block ensures that the new certificate is created before the old one is destroyed.
data.aws_route53_zone: This data source retrieves details about the Route 53 hosted zone that matches the domain name specified in the domain_name variable.
aws_route53_record: This resource creates a DNS record set in Route 53 for each domain validation option associated with the ACM certificate. The for_each block iterates over the certificate's domain_validation_options attribute to create a record set per validation option. The allow_overwrite parameter is set to true to allow the resource to update an existing record set if one already exists. The name, records, ttl, type, and zone_id parameters are derived from the validation options.
aws_acm_certificate_validation: This resource validates the ACM certificate using the DNS record sets created in the previous step. The certificate_arn parameter specifies the ARN of the certificate to validate, while validation_record_fqdns is the list of fully qualified domain names (FQDNs) of the Route 53 record sets.
variables.tf
variable "domain_name" {
  type = string
}

variable "alternative_name" {
  type = string
}
outputs.tf
output "domain_name" {
  value = var.domain_name
}

output "acm_certificate_arn" {
  value = aws_acm_certificate.acm_certificate.arn
}

output "aws_acm_certificate_validation_acm_certificate_validation_arn" {
  value = aws_acm_certificate_validation.acm_certificate_validation.certificate_arn
}
Inside root
main.tf
module "acm" {
  source = "./modules/acm"

  domain_name      = var.domain_name
  alternative_name = var.alternative_name
}
variables.tf
variable "alternative_name" {
  type = string
}
terraform.tfvars
alternative_name = "*.mupreti.com.np"
STEP 6: Route53
Amazon Route 53 is a highly scalable and reliable Domain Name System (DNS) service provided by AWS. Route 53 allows you to register domain names, route traffic to AWS resources, such as EC2 instances and S3 buckets, and perform health checks on your infrastructure.
Inside modules/route53
main.tf
data "aws_route53_zone" "zone" {
  name         = "mupreti.com.np"
  private_zone = false
}

resource "aws_route53_record" "example" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "mupreti.com.np"
  type    = "A"

  alias {
    name                   = var.cloudfront_domain_name
    zone_id                = var.cloudfront_hosted_zone_id
    evaluate_target_health = false
  }
}
This Terraform code creates an A record in an Amazon Route 53 public hosted zone for the domain mupreti.com.np and associates it with a CloudFront distribution. Here is an explanation of the resources used in this code:
data.aws_route53_zone: This data source retrieves information about the Route 53 public hosted zone that matches the domain name specified in the name parameter. The private_zone parameter is set to false to indicate that the hosted zone is public.
aws_route53_record: This resource creates an A record in the hosted zone. The zone_id parameter is set to the zone_id attribute of the data.aws_route53_zone data source, name is set to the domain name for which the record is being created, and type is set to "A".
alias: This nested block associates the record with a CloudFront distribution. The name parameter is set to the domain name of the distribution, zone_id to the distribution's hosted zone ID, and evaluate_target_health to false so that Route 53 does not perform health checks on the distribution.
variables.tf
variable "cloudfront_domain_name" {
  type = string
}

variable "cloudfront_hosted_zone_id" {
  type = string
}
No outputs need to be generated here, so you can delete the outputs.tf file.
Inside root
main.tf
module "route53" {
  source = "./modules/route53"

  cloudfront_domain_name    = module.cloudfront.cloudfront_domain_name
  cloudfront_hosted_zone_id = module.cloudfront.cloudfront_hosted_zone_id
}
Since no variables are passed in directly, there is nothing to add to the root variables.tf or terraform.tfvars.
Conclusion
After all the configuration is complete, run the following commands:
$ terraform fmt
$ terraform validate
The two commands above only format the configuration files and validate their syntax.
$ terraform init
$ terraform plan
$ terraform apply
terraform init: This initializes Terraform in the directory, downloading any required providers. Run it once when setting up a new Terraform configuration.
terraform plan: This generates an execution plan showing what Terraform will do when you run apply. You can use this to preview the changes Terraform will make.
terraform apply: This executes the actions proposed in the plan and actually provisions or changes your infrastructure.
Verify in the AWS console that the resources were created. Then open your domain in a web browser; you will see the page as defined in your index.html file.
At last, don't forget to run terraform destroy to destroy all the created resources. Otherwise, you could end up with a hefty bill, and I don't want that :( You might have to delete some resources manually too.
Damn! It was one hell of a task but congratulations, you made it.
Now you can create a repository and push the code to your GitHub account. Also, don't forget to ignore the files that shouldn't be pushed to GitHub.
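As a starting point, a typical .gitignore for a Terraform project looks something like this (adjust it to your needs; *.tfvars is listed because variable files often contain account-specific values):

```gitignore
# local provider and module cache
.terraform/
.terraform.lock.hcl

# state files can contain sensitive data
*.tfstate
*.tfstate.*

# variable files may hold secrets or account-specific values
*.tfvars

# crash logs
crash.log
```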
"Thank you for bearing with me until the end. See you in the next one."