Note: I’m posting this as a blog post as there has been interest in learning how to deploy this style of architecture. If you find yourself not understanding some parts of it, I recommend taking the full course on this site.
It’s time to learn how to build the Three Tier Architecture. This architecture makes heavy use of Terraform modules. You will learn how to create your own modules, and use modules on the Terraform Registry.
This is a complex project that will cover a lot of ground. You should allocate two hours to complete this lesson.
Reference Architecture
Below is the architecture that you will learn how to build. Take a moment to study it. Note the VPC and subnet address design, as well as the security group architecture.
Our Build Approach
There are a couple of different ways you could approach this architecture. You could create each of the resources individually in one big file. No doubt you’d achieve the end goal and have a working architecture. However, it would be more complex than it needs to be.
In this lab you will learn how to build this architecture using a module design. You will work through the project in the following sequence:
- Create the core project files
- Create the networking module
- Create the database module
- Create the application module
- Create the web module
Each of these five segments is explained in detail in the sections below.
Create the Core Project Files
Let’s start off by creating the core project files. The directions are provided using a Linux/macOS terminal. If you’re on Windows, you can create the project files directly in File Explorer.
First, create a working directory.
# create a working directory and change into it
$ mkdir terraform3tier && cd terraform3tier
# create the remaining files
$ touch variables.tf outputs.tf providers.tf versions.tf main.tf
Update the Variables File
There are several variables that we will use throughout the modules we create. Open the /terraform3tier/variables.tf file and add these values:
variable "project" {
description = "The project to use for unique resource naming"
default = "smx-course"
type = string
}
variable "ssh_keypair" {
description = "SSH keypair to use for EC2 instance"
default = null
type = string
}
variable "region" {
description = "AWS region"
default = "us-west-2"
type = string
}
Update the Providers File
You may have previously only put the provider config in a main.tf file. While that’s perfectly acceptable, here we are separating it out into its own file. Open the /terraform3tier/providers.tf file and add the following code:
provider "aws" {
region = var.region
profile = "skillmix-lab"
}
Code Review
- The region value uses a value from the variables.tf file
- The profile value is the name of the AWS CLI credential set that we want to use
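As a side note, if you prefer not to hard-code the profile in the provider block, the AWS provider also honors the AWS_PROFILE environment variable:
# alternative: select the CLI profile via the environment
# (omit the profile attribute from the provider block if you do this)
$ export AWS_PROFILE=skillmix-lab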
Update the Versions File
We are also going to separate the terraform
block from the main.tf
file. Here, we will specify the required_providers
and versions
. Open the /terraform3tier/versions.tf
file and add the following code:
terraform {
  required_version = ">= 0.15"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.46"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.1"
    }
  }
}
Initialize the Project
Now that we’ve set up the core files, let’s initialize the project. Run this command in the root directory of /terraform3tier to initialize the project and download the providers:
$ terraform init
# ... output
Prepare the Lab Environment
Follow these steps to prepare the lab environment.
Start the Lab
Click on the Start Lab button on the right of this page. Wait for all of the credentials to load.
Configure the AWS CLI Profile
Once the lab credentials are ready, you can configure the AWS CLI profile. Here is how you can do it on various operating systems.
On Linux/Mac, open the file ~/.aws/credentials
On Windows, open the file %USERPROFILE%\.aws\credentials
Once you have the file open, add this profile to it. Use the keys from the lab session you started.
# ...other profiles
[skillmix-lab]
aws_access_key_id=<lab access key>
aws_secret_access_key=<lab secret key>
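Before running Terraform, you can confirm the profile authenticates correctly with a simple identity call:
# verify the skillmix-lab profile can reach AWS
$ aws sts get-caller-identity --profile skillmix-lab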
Create a Key Pair
The web and app servers created by Terraform need to have an EC2 key pair associated with them. While it is possible to create key pairs via Terraform, this is not recommended; create them outside of Terraform so that the private key does not end up in source control or get leaked some other way.
Open the us-west-2 EC2 Key Pair Console and create a key pair named smx-lab-keypair.
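Alternatively, you can create the key pair with the AWS CLI; a minimal sketch (the local file name is your choice):
# create the key pair and save the private key locally
$ aws ec2 create-key-pair \
    --key-name smx-lab-keypair \
    --query 'KeyMaterial' \
    --output text \
    --region us-west-2 \
    --profile skillmix-lab > smx-lab-keypair.pem
# lock down permissions on the private key
$ chmod 400 smx-lab-keypair.pem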
Create the Networking Module
Now it’s time to create the networking module. We’re building this now because the network is the base of the architecture. Several other resources that we’ll deploy have dependencies on it. For example, the EC2 & RDS instances must be launched into specific subnets. Therefore, we need to reference those IDs from the networking module in the app and web modules.
Create the Module Directory
To build the networking module, start by creating a ./modules/networking directory in our root module. Create all of the files in this step in this directory.
Create the Variables File
In the variables.tf file we will define the variables that will be used in this module. The values defined here can be passed down from the root module. In this case, we want to pass down the project variable. We will use this variable in the main.tf file to help with naming resources. You will see how this is used a little later in this section.
Create the ./modules/networking/variables.tf and add the following code:
variable "project" {
type = string
}
Code Review
- Variables are defined with the variable block
- Here, we define the project variable, with a type of string and no default value. This value will be populated from the root module
Create the Outputs File
The outputs file is used to define what values we will “bubble up” to the root project. The values in this file can then be used in other modules. For this project, we need to pass up the VPC and security group values. These values will be used in the web, app, and database modules.
Create the ./modules/networking/outputs.tf and add the following code:
output "vpc" {
value = module.vpc
}
output "sg" {
value = {
web_alb_sg = module.web_alb_sg.security_group_id
web_server_sg = module.web_server_sg.security_group_id
app_alb_sg = module.app_alb_sg.security_group_id
app_server_sg = module.app_server_sg.security_group_id
db_sg = module.db_sg.security_group_id
}
}
Code Review
- Outputs are defined using the output configuration block
- The vpc output passes through the VPC module values from the main.tf file. We can use this in other modules to get things like the VPC and subnet IDs that were deployed to the cloud provider
- The sg output block creates an object with several security group values that will be used throughout the other modules in the project
Create the Networking Module Main File
Now that we have the outputs and variable files created, let’s start working on the main file. This file will define the networking resources that will be used in our project. Specifically, we will create a VPC, subnets, a NAT gateway, and several security groups.
When creating resources, we have a couple of options. First, we can use Terraform HCL to define resources. In earlier lessons you used this method. The other way to do this is to use other modules. Using other modules can often make it easier and faster to build projects. The modules often abstract away some of the complexity in coding resources by hand.
In this module we will source public modules from the Terraform Registry. The registry has a large number of modules for us to use. You can look through the registry to learn more about the available modules.
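To see what a module saves us, here is a rough, hand-written sketch of just the VPC and a single private subnet using plain resources (illustrative only; the vpc module below also creates route tables, an internet gateway, and the remaining subnets from a few attributes):
# a hand-written slice of what the vpc module manages for us
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "smx-course-vpc"
  }
}

resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]
}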
Create the ./modules/networking/main.tf and add the following code:
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.64.0"
name = "${var.project}-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.10.0/24", "10.0.11.0/24", "10.0.12.0/24"]
database_subnets = ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"]
create_database_subnet_group = true
enable_nat_gateway = false
single_nat_gateway = false
}
Code Review
- The data block is used to query our AWS account for the availability zones available in our region (us-west-2). These values will be used in the vpc module
- The module "vpc" block is used to define the VPC module. A module configuration block requires the source value. This value tells Terraform what module to use. Here, we are using terraform-aws-modules/vpc/aws. Other attributes for the VPC module are defined thereafter:
- The version attribute locks the module version that will be used
- The name attribute uses the variable from the variables.tf file we created in the module
- The cidr attribute sets the VPC IPv4 CIDR block value. This is a required VPC value
- The azs attribute uses the data.aws_availability_zones.available.names value from the defined data block. This is a good demonstration of how to use data block values
- The private_subnets, public_subnets, and database_subnets attributes set the subnet address ranges that the VPC design will use. Our EC2 and RDS resources will be deployed into their respective subnets
- The create_database_subnet_group attribute creates an RDS subnet group
- And lastly, we create a single NAT gateway with enable_nat_gateway and single_nat_gateway, so that instances in the private subnets can reach the internet
Add the Security Groups to the Main File
Next, we will create security groups for our load balancers, web and app servers, and the database instance. These security groups will be configured to only open ports to the resources that absolutely require access. If you refer back to the network architecture above, you’ll see the security group architecture.
Append the following code to ./modules/networking/main.tf:
# ...previous code

module "web_alb_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "web-alb-sg"
  description = "Security group for web tier app load balancer in VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_with_cidr_blocks = [
    {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      description = "HTTP"
      cidr_blocks = "0.0.0.0/0"
    }
  ]
}

module "web_server_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "web-server-sg"
  description = "Security group for web servers in VPC"
  vpc_id      = module.vpc.vpc_id

  computed_ingress_with_source_security_group_id = [
    {
      rule                     = "http-80-tcp"
      source_security_group_id = module.web_alb_sg.security_group_id
    }
  ]
  number_of_computed_ingress_with_source_security_group_id = 1
}

module "app_alb_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "app-alb-sg"
  description = "Security group for app tier alb in VPC"
  vpc_id      = module.vpc.vpc_id

  computed_ingress_with_source_security_group_id = [
    {
      rule                     = "http-80-tcp"
      source_security_group_id = module.web_server_sg.security_group_id
    }
  ]
  number_of_computed_ingress_with_source_security_group_id = 1
}

module "app_server_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "app-server-sg"
  description = "Security group for app servers in VPC"
  vpc_id      = module.vpc.vpc_id

  computed_ingress_with_source_security_group_id = [
    {
      rule                     = "http-80-tcp"
      source_security_group_id = module.app_alb_sg.security_group_id
    }
  ]
  number_of_computed_ingress_with_source_security_group_id = 1
}

module "db_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "db-sg"
  description = "Security group for db servers in VPC"
  vpc_id      = module.vpc.vpc_id

  computed_ingress_with_source_security_group_id = [
    {
      rule                     = "mysql-tcp"
      source_security_group_id = module.app_server_sg.security_group_id
    }
  ]
  number_of_computed_ingress_with_source_security_group_id = 1
}
Code Review
- The above code creates five security groups, each using the terraform-aws-modules/security-group/aws module. You can read the module documentation (recommended). All security groups are created in the VPC using the vpc_id = module.vpc.vpc_id attribute; note that we get the VPC ID by referencing the VPC module created earlier in this main.tf file
- The module "web_alb_sg" security group is for the public-facing load balancer. This module creates an ingress rule that allows HTTP traffic to the load balancer
- The module "web_server_sg" security group is for the web servers. This security group opens up incoming HTTP traffic from the load balancer security group only
- The remaining security groups follow a similar pattern. Each security group only allows incoming traffic per the security group design
Update the Root Project Main File
Now we’ll do something we haven’t done yet. We are going to update the /terraform3tier/main.tf file. This step will connect the networking module to the root project, and supply the module with the values that it needs.
Open the /terraform3tier/main.tf and add this code:
module "networking" {
source = "./modules/networking"
project = var.project
}
Code Review
- All module configs require the source attribute to be set. Here we are giving the path to our networking module
- The project attribute is passing the project variable to the module
Execute the Terraform Workflow
Run the following commands to apply the changes you’ve made so far:
$ terraform plan
# ... output
$ terraform apply
# when prompted, answer Yes
Create the Database Module
Next we will create the database module. Please refer to the system architecture at the beginning of this lesson to see its relation to the system.
This is one of the simpler modules as it only has a couple of configuration blocks. Most notably, the database module will create an RDS instance that uses the MySQL database engine. It will launch this instance in the VPC that was created in the networking module.
Create the Variables File
The database module needs several values from the root and networking modules. From the root module, it needs the project value. From the networking module, it needs the vpc and sg values. You will see how these values are used in the database module main.tf file.
When we run terraform apply, the VPC resources will be created first, and then their output values will be made available to the database module.
Create the ./modules/database directory, and then create ./modules/database/variables.tf and add the following code:
variable "project" {
type = string
}
variable "vpc" {
type = any
}
variable "sg" {
type = any
}
Create the Outputs File
There are several values, like the password and username, that should be saved after the RDS instance is created. To save this information, we will include it in the output block.
Create the ./modules/database/outputs.tf and add the following code:
output "db_config" {
value = {
user = aws_db_instance.database.username
password = aws_db_instance.database.password
database = aws_db_instance.database.name
hostname = aws_db_instance.database.address
port = aws_db_instance.database.port
}
}
Code Review
- The output block bubbles the database connection information up to the root project so that other modules can use it. Because the password is a sensitive value, the output must be marked sensitive = true (Terraform 0.15+ requires this when an output references a sensitive value). We can still retrieve this information later with terraform output
Create the Database Module Main File
We can now create and write the main.tf file. Unlike the networking module, this module definition contains resource blocks provided by Terraform. A resource block specifies a specific infrastructure object with the required settings.
There are many resource definitions available in Terraform. You can refer to the AWS Terraform docs to learn more about them. In this module we will define two resources: random_password and aws_db_instance.
The database instance requires a password value. There are a couple of different ways we could set this value. Here, we have decided to use the Terraform random_password resource.
Create the ./modules/database/main.tf and add the following code:
resource "random_password" "password" {
length = 16
special = true
override_special = "_%*"
}
resource "aws_db_instance" "database" {
allocated_storage = 10
engine = "mysql"
engine_version = "8.0"
instance_class = "db.t2.micro"
identifier = "${var.project}-db-instance"
name = "db"
username = "admin"
password = random_password.password.result
db_subnet_group_name = var.vpc.database_subnet_group
vpc_security_group_ids = [var.sg.db_sg]
skip_final_snapshot = true
}
Code Review
- random_password is a resource that creates a random password with user-defined attributes. As reflected in the code above, the password length is set to 16. It uses a cryptographic random number generator to create a password of this length. override_special is an attribute the user can set to control which special characters may be included in the password. This means that the password will be 16 characters long and may include the _%* characters
- The resource "aws_db_instance" "database" block creates the database instance. Most of the attributes in this resource are self-explanatory if you have used RDS before. Take note of how the password attribute is set; it references the resource "random_password" "password" block. The db_subnet_group_name and vpc_security_group_ids attributes are populated from the networking module outputs that have been passed down to this module
Update the Root Project Main File
It’s time to connect the database module with the root project. Open the /terraform3tier/main.tf and append this code:
# ... previous code

module "database" {
  source  = "./modules/database"
  project = var.project
  vpc     = module.networking.vpc
  sg      = module.networking.sg
}
Code Review
- Take note of the vpc and sg values being passed to the database module. These values bubbled up from the networking module via its outputs file. Now, we can make them available in the database module
Execute the Terraform Workflow
Run the following commands to apply the changes you’ve made so far:
# initialize the new module
$ terraform init
$ terraform plan
# ... output
$ terraform apply
# when prompted, answer Yes
Create the Application Module
Our system architecture includes an application tier. This is where an application would be installed, such as Ruby on Rails or Django. This tier includes a load balancer and an auto-scaled server group. We will create these resources in the app module. This is a fun module to build because it uses a nice range of Terraform features.
Create the Variables File
The app module needs several values from different parts of the project. First, it needs the vpc and sg values so that it can configure the load balancer and auto scaling group in the right VPC, subnets, and security groups.
The app module also needs the database config values. These values are needed by the app server so it can properly connect to the database. They will be passed to the app servers using cloud-init and userdata.
The app servers also need a key pair. It is a best practice to create key pairs outside of Terraform, and then pass the name of the key pair into the Terraform configuration. This code assumes that you have created a key pair already.
First, create the ./modules/app directory. Then, create the ./modules/app/variables.tf and add the following code:
variable "project" {
type = string
}
variable "ssh_keypair" {
type = string
}
variable "vpc" {
type = any
}
variable "sg" {
type = any
}
variable "db_config" {
type = object(
{
user = string
password = string
database = string
hostname = string
port = string
}
)
}
Code Review
- The ssh_keypair and db_config are new variables that we haven’t worked with yet
- The ssh_keypair variable should be the name of an existing key pair
- The db_config value was generated from the database module, and is being passed into the app module for use on the app servers
Create the Outputs File
The app module only needs to output the load balancer DNS name. We will get this value from the load balancer module that will be created in the main.tf file.
Create the ./modules/app/outputs.tf and add the following code:
output "alb_dns_name" {
value = module.app_alb.this_lb_dns_name
}
Create the Cloud Config File
When app servers start we want them to self-configure. This self-configuration involves creating the /etc/server.conf file, running commands, and downloading packages.
To do this, we will use Ubuntu’s cloud-init utility. This is an automation utility that can perform a set of tasks on instance boot. To use this utility, we have to pass it a configuration file. We will pass the configuration file using EC2’s userdata feature.
The first step, however, is to create the configuration file itself in our project.
Create the ./modules/app/app_config.yaml file and add the following code:
#cloud-config
write_files:
  - path: /etc/server.conf
    owner: root:root
    permissions: "0644"
    content: |
      {
        "user": "${user}",
        "password": "${password}",
        "database": "${database}",
        "netloc": "${hostname}:${port}"
      }
runcmd:
  - curl -sL https://api.github.com/repos/scottwinkler/vanilla-webserver-src/releases/latest | jq -r ".assets[].browser_download_url" | wget -qi -
  - unzip deployment.zip
  - ./deployment/server
packages:
  - jq
  - wget
  - unzip
Code Review
- There are three main sections to this code
- First, the write_files section is used to create the /etc/server.conf file and set its owner, permissions, and content
- Second, the runcmd section downloads a web server from GitHub, unzips the file, and starts the server
- Lastly, the packages section tells cloud-init which Ubuntu packages to install
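The ${user}-style placeholders are filled in by Terraform’s templatefile function, which we will use in the next step. Once rendered on a running app server, /etc/server.conf would look something like this (illustrative values only):
{
  "user": "admin",
  "password": "<generated 16-character password>",
  "database": "db",
  "netloc": "<rds-address>:3306"
}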
Create the App Module Main File
It’s time to create the app module main.tf file now. This is a complex file, so we’ll build it in parts.
First, we will create a data source for the cloud-init configuration. This data source will create the userdata value that we can pass to the EC2 instance.
Then, we will use the data "aws_ami" "ubuntu" block to get the latest Ubuntu AMI ID. This configuration can query and filter Ubuntu’s AMIs to get the latest matching AMI ID. We will use the AMI ID in the auto scaling configuration.
Create the ./modules/app/main.tf file and add the following code:
data "cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = templatefile("${path.module}/app_config.yaml", var.db_config)
}
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = [
"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" ]
}
filter {
name = "virtualization-type"
values = [
"hvm" ]
}
owners = [
"099720109477" ]
}
Code Review
- The data "cloudinit_config" "config" block creates the content for the EC2 userdata. Note that this block uses Terraform’s templatefile function. This function accepts two arguments: the template file, and a map of the values used in the template file. Here we pass in var.db_config
- The data "aws_ami" "ubuntu" block uses a Terraform data source to query Canonical’s AWS account (the publisher of Ubuntu, owner ID 099720109477) for the latest AMI ID
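If you’d like to sanity-check what this data source will return, you can run an equivalent query with the AWS CLI (assuming the skillmix-lab profile from earlier):
# query for the newest matching Ubuntu 20.04 AMI
$ aws ec2 describe-images \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" \
              "Name=virtualization-type,Values=hvm" \
    --query 'sort_by(Images, &CreationDate)[-1].{Id:ImageId,Name:Name}' \
    --region us-west-2 \
    --profile skillmix-lab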
Add the Auto Scaling Config
Next, we need to set up auto scaling. Auto scaling will be responsible for managing the EC2 instances in this module. We will configure it to have a minimum of one instance. This requires two resources in our file.
First, we will configure the launch template. The launch template defines things such as the AMI ID, instance type, userdata, and VPC security groups.
Then, we create the auto scaling group resource. This resource configures things such as group size, VPC subnets, load balancer target groups, and the launch template.
Append the following code to ./modules/app/main.tf:
# ...previous code

resource "aws_launch_template" "appserver" {
  name_prefix            = var.project
  image_id               = data.aws_ami.ubuntu.id
  instance_type          = "t2.micro"
  user_data              = data.cloudinit_config.config.rendered
  key_name               = var.ssh_keypair
  vpc_security_group_ids = [var.sg.app_server_sg]
}

resource "aws_autoscaling_group" "appserver" {
  name                = "${var.project}-app-asg"
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = var.vpc.private_subnets
  target_group_arns   = module.app_alb.target_group_arns

  launch_template {
    id      = aws_launch_template.appserver.id
    version = aws_launch_template.appserver.latest_version
  }
}
Code Review
- For resource "aws_launch_template" "appserver", observe the image_id and user_data values. These values are set using the data sources previously defined
- For resource "aws_autoscaling_group" "appserver", observe how the various attributes have their values set. The VPC zone identifier is set with a variable, while the target group and launch template values reference other parts of this file
Add the Load Balancer
The last piece of the app module is the load balancer. Per our design, the load balancer accepts incoming requests from the web tier and distributes them to the app servers.
Here, we will use the terraform-aws-modules ALB module from the Terraform Registry to build the load balancer. In the code below you can see that we’ve done so using the source attribute.
Append the following code to ./modules/app/main.tf:
# ...previous code

module "app_alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 5.0"

  name               = "${var.project}-app-alb"
  load_balancer_type = "application"
  vpc_id             = var.vpc.vpc_id
  subnets            = var.vpc.private_subnets
  security_groups    = [var.sg.app_alb_sg]

  http_tcp_listeners = [
    {
      port               = 80,
      protocol           = "HTTP"
      target_group_index = 0
    }
  ]

  target_groups = [
    {
      name_prefix      = "appsvr",
      backend_protocol = "HTTP",
      backend_port     = 80
      target_type      = "instance"
    }
  ]
}
Code Review
- The load balancer is built from a public registry module
- This creates an application load balancer in our project’s VPC, in the private subnets, and in the correct security group
- An HTTP listener is created, along with one target group
Update the Root Project Main File
It’s time to connect the app module with the root project. Open the /terraform3tier/main.tf and append this code:
# ... previous code

module "app" {
  source      = "./modules/app"
  project     = var.project
  ssh_keypair = var.ssh_keypair
  vpc         = module.networking.vpc
  sg          = module.networking.sg
  db_config   = module.database.db_config
}
Code Review
- You should start to see a pattern now. We are passing values from the networking and database modules into this one
Execute the Terraform Workflow
Run the following commands to apply the changes you’ve made so far:
# initialize the new module
$ terraform init
$ terraform plan
# ... output
$ terraform apply
# when prompted, answer Yes
Create the Web Module
It's time to build the web module, the last module of the project. Structurally, it is very similar to the app module. The web module will deploy a load balancer and auto scaling configuration. It also defines a cloud-init file, and queries for the Ubuntu AMI ID.
Create the Variables File
The web module variables file is nearly the same as the app module's. However, it does not need the db_config data.
Create the ./modules/web/variables.tf and add the following code:
variable "project" {
type = string
}
variable "ssh_keypair" {
type = string
}
variable "vpc" {
type = any
}
variable "sg" {
type = any
}
Create the Outputs File
Like the app module, we will output the web load balancer address.
Create the ./modules/web/outputs.tf and add the following code:
output "alb_dns_name" {
value = module.web_alb.this_lb_dns_name
}
Create the Cloud Config File
The web cloud config file is simpler than the app module config.
Create the ./modules/web/web_config.yaml and add the following code:
#cloud-config
write_files:
  - path: /etc/server.conf
    owner: root:root
    permissions: "0644"
    content: |
      {
      }
runcmd:
  - curl -sL https://api.github.com/repos/scottwinkler/vanilla-webserver-src/releases/latest | jq -r ".assets[].browser_download_url" | wget -qi -
  - unzip deployment.zip
  - ./deployment/server
packages:
  - jq
  - wget
  - unzip
Create the Web Module Main File
The last piece is creating the web main.tf file. As mentioned, this file is mostly the same as the app module's. We will not explain it all again. However, do take note that many of the resource names and attribute values have changed.
Create the ./modules/web/main.tf and add the following code:
data "cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = templatefile("${path.module}/web_config.yaml", {})
}
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = [
"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" ]
}
filter {
name = "virtualization-type"
values = [
"hvm" ]
}
owners = [
"099720109477" ]
}
resource "aws_launch_template" "webserver" {
name_prefix = var.project
image_id = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
user_data = data.cloudinit_config.config.rendered
key_name = var.ssh_keypair
vpc_security_group_ids = [var.sg.web_server_sg]
}
resource "aws_autoscaling_group" "webserver" {
name = "${var.project}-web-asg"
min_size = 1
max_size = 3
vpc_zone_identifier = var.vpc.public_subnets
target_group_arns = module.web_alb.target_group_arns
launch_template {
id = aws_launch_template.webserver.id
version = aws_launch_template.webserver.latest_version
}
}
module "web_alb" {
source = "terraform-aws-modules/alb/aws"
version = "~> 5.0"
name = "${var.project}--web-alb"
load_balancer_type = "application"
vpc_id = var.vpc.vpc_id
subnets = var.vpc.public_subnets
security_groups = [var.sg.web_alb_sg]
http_tcp_listeners = [
{
port = 80,
protocol = "HTTP"
target_group_index = 0
}
]
target_groups = [
{
name_prefix = "websvr",
backend_protocol = "HTTP",
backend_port = 80
target_type = "instance"
}
]
}
Update the Root Project Main File
It’s time to connect the web module with the root project. Open the /terraform3tier/main.tf and append this code:
# ... previous code

module "web" {
  source      = "./modules/web"
  project     = var.project
  ssh_keypair = var.ssh_keypair
  vpc         = module.networking.vpc
  sg          = module.networking.sg
}
Update the Root Project Output File
Now that we have all of the modules built out, we can update the root project outputs.tf file. This file will take outputs from the modules and print them to the CLI.
Open the /terraform3tier/outputs.tf file and add this code.
output "db_password" {
value = module.database.db_config.password
sensitive = true
}
output "web_alb_dns_name" {
value = module.web.alb_dns_name
}
output "app_alb_dns_name" {
value = module.app.alb_dns_name
}
Code Review
- Note the syntax used in the values above: module.&lt;module_name&gt;.&lt;output_name&gt;. This syntax matches the module structure
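After the final apply, you can read individual outputs from the CLI. Because db_password is marked sensitive, use the -raw flag to see its actual value:
# print a single output value
$ terraform output web_alb_dns_name
# print the sensitive database password in plain text
$ terraform output -raw db_password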
Execute the Terraform Workflow
Run the following commands to apply the changes you’ve made so far:
# initialize the new module
$ terraform init
$ terraform plan
# ... output
$ terraform apply
# when prompted, answer Yes
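Once the apply completes, you can verify the whole stack by requesting the web load balancer's DNS name (give the instances a minute or two to pass their health checks):
# request the web tier through its public load balancer
$ curl http://$(terraform output -raw web_alb_dns_name)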
Review & Destroy
Congratulations! You just completed a fairly complex project. You can compare your code with the repo here: https://github.com/tabdon/terraform-3-tier-arch-lab
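When you're finished reviewing, remember to tear down the lab resources so they don't keep running:
# destroy all resources created by this project
$ terraform destroy
# when prompted, answer Yes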