This is a lesson and lab about how to set up Jenkins pipelines with Terraform.
Introduction to CI/CD
CI/CD practices are the most widely adopted way to shorten software development and delivery cycle times. CI/CD is a method that almost every DevOps/SRE team uses to build and deploy their products.
While CI ensures the software can be made deployable at any time, CD aims to make it releasable to a production-like environment at all times.
Continuous Integration
CI is a development approach that allows developers to continuously integrate code into a single shared repository.
In CI practice, developers build, run, and test code continuously and automatically. Continuous Integration is best achieved through integration with version control. At first, developers build, run, and test code in their local environment. If everything works as expected, they commit and push the changes to a common repository. The most common scenario is that developers commit their daily code changes to a git repository, and the CI system continuously monitors the repository for new changes and integrates them.
The most common CI step is to build the project with the new code changes. If the build completes successfully, developers can start their testing.
Continuous Delivery/Deployment
Continuous delivery/deployment is a method of deploying artifacts that have passed the CI phase on a regular basis to ensure the continuous distribution of software or updates to users.
After CI is completed, CD takes over: in this phase it handles building, deploying, and delivering the product to the end user.
What is the Difference Between Continuous Delivery and Continuous Deployment?
Continuous Delivery
This method ensures that the code is automatically built and tested after Continuous Integration, but a human must manually trigger the deployment of the changes.
Continuous Deployment
This approach ensures that the code is checked and delivered automatically after Continuous Integration. The deployment of the application does not involve any human intervention; it is deployed automatically if it passes the quality controls.
What is CI/CD Pipeline?
A continuous integration/continuous delivery (CI/CD) pipeline is a method for delivering a unit of change from development to delivery. Typically, this entails the following separate steps:
- Commit: When developers complete a change, they commit the change to the repository.
- Build: Source code from the repository is integrated into a build.
- Testing: Automated tests are run against the build. Test automation is an essential element of any CI/CD pipeline.
- Deploy: The built version is delivered to production.
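To make these steps concrete, here is a minimal, purely illustrative Jenkins declarative pipeline that maps one stage to each step after the commit (the commit itself is what triggers the pipeline, and the echo steps are placeholders, not the pipeline built later in this lab):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'Integrate the committed code into a build' }
        }
        stage('Test') {
            steps { echo 'Run the automated test suite against the build' }
        }
        stage('Deploy') {
            steps { echo 'Deliver the built version to production' }
        }
    }
}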
Why CI/CD Matters?
- Reduce costs: Using automation in the CI/CD pipeline helps reduce the number of errors that can take place in the many repetitive steps of CI and CD.
- Smaller code changes: CI/CD encourages integrating small pieces of code at a time, which helps developers recognize a problem before too much further work is built on top of it.
- Faster release rate: Failures are detected faster and can be repaired faster, leading to increased release rates.
- Fault isolation: Designing the system with CI/CD makes faults faster to detect and easier to isolate and fix.
- More test reliability: Using CI/CD, test reliability improves due to small and specific changes introduced to the system, allowing for more accurate positive and negative tests to be conducted.
A Brief Introduction to Jenkins
Jenkins is a powerful application that facilitates continuous integration and continuous delivery of projects, regardless of the platform on which it operates. It is an open-source tool that can handle any form of continuous integration or continuous deployment and can be integrated with many testing and deployment technologies. Jenkins is installed on a server where the central builds take place.
Jenkins' benefits include:
- It is an open-source application, with great community support.
- It is straightforward to install.
- It has 1000+ plugins that make jobs easier.
- It is cost-free.
- It is Java-built and is therefore portable to all major platforms.
How to install Jenkins in Ubuntu 20.04
Jenkins can be installed in many different ways. For example, if you are a Docker user, you can install Jenkins by pulling the official Jenkins image from Docker Hub. Jenkins can be installed inside a Kubernetes cluster as well. For more information, visit https://www.jenkins.io/doc/book/installing/
This tutorial explains how to install Jenkins on Ubuntu 20.04 using a bash script. The script looks like this; copy and paste it into a jenkins.sh file.
#!/bin/bash
sudo apt-get update
sudo apt-get install openjdk-8-jdk -y
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins -y
Code Review
- Update the system
- Install the Open Java Development Kit (OpenJDK)
- Add the repository key to the system
- Append the Debian package repository address to the server’s sources.list
- Update the system again
- Install Jenkins and its dependencies
Make the script executable using chmod 700 jenkins.sh.
To execute the script, run ./jenkins.sh.
After the installation is complete, you can visit your Jenkins server at http://localhost:8080, or use the server's domain name or IP address (Jenkins listens on port 8080 by default): http://server_ip_or_domain_name:8080
You need to retrieve the initial password to unlock Jenkins. Run: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Follow along in the installation wizard to complete the Jenkins installation. After the completion, you will receive a page like this.
Click on "Start using Jenkins" to start using Jenkins. The Jenkins dashboard looks like this.
Congratulations! Now you are a proud owner of a Jenkins server. You can play around with different items in Jenkins. You can learn how to build a simple job, a pipeline, how to integrate GitHub and Jenkins, how to install plugins, etc.
In the real world, there are many environments that developers work in such as dev, staging, and production environments. When it comes to Terraform deployments, these can be done in separate workspaces and integrated with a CI/CD pipeline mechanism.
In this lab, you will learn how to build this architecture using a modular approach that we learned in previous weeks. You will work through the project in the following sequence:
- Create workspaces for Dev / UAT / Prod
- Create separate remote backends for each workspace
- Create core project files
- Create the application module
- Create the CI/CD pipeline
- Deploy the built Terraform code into production using the CI/CD pipeline
These six segments are explained comprehensively in the below sections.
Reference Architecture
Below is the architecture diagram of what we are trying to build in this lab. Take a moment to study it. Note the VPC and subnet address design, security group architecture, and the names of AWS resources.
Prerequisites
To implement and use the Terraform S3 backend and CI/CD pipeline in this lab, we need to be familiar with the concepts below.
- How to work with terraform workspaces
- Understanding of Terraform modules and Terraform state
- General knowledge of AWS S3
- General knowledge of Jenkins, with Jenkins installed on your machine
- Ability to develop Terraform scripts locally
Lab Goal
After completing this lab, you will be able to build a CI/CD pipeline that can deploy Terraform code in test and staging environments before releasing it to production.
Create Core Project Files
Let’s start by creating the core project files. The directions are provided using a Linux/macOS terminal. If you’re on Windows you can create the project files directly in Explorer.
First, create a working directory.
# create a working directory and change into it
$ mkdir terraformcicd && cd terraformcicd
# create the remaining files
$ touch variables.tf outputs.tf providers.tf versions.tf main.tf backend.tf
Update the Variables File
The variables.tf file is where we declare the variables that will be used in the root project. Since we are working with workspaces in this lab, we are going to use tfvars files to pass input variables into the modules. Although we use tfvars files, we still need to declare the variable definitions. Open the /terraformcicd/variables.tf
file and add these values:
variable "region" {
default = "us-west-2"
type = string
}
variable "project" {
default = "smx-course"
type = string
}
variable "public_key" {
default = "ssh-rsa AAAAB3NzaC1yc2EAAAA.."
type = string
}
variable "cidr" {
}
variable "private_subnet" {
}
variable "public_subnet" {
}
These are the variables that we will use in all the workspaces that we create. As you can see, some of the variables have no values assigned to them. That is because those values change from one environment to another. All workspaces will refer to this same variable definition file, but the actual values of the variables will differ from one workspace to another depending on the respective tfvars file.
The region, project, and public_key variables are common to all the workspaces. Therefore, values for those variables can be assigned in the variables.tf file without having to repeat them in every tfvars file.
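For example, once the dev.tfvars file is created later in this lab, planning the dev workspace combines the defaults from variables.tf with the workspace-specific values:
# region, project, and public_key come from the defaults in variables.tf;
# cidr, private_subnet, and public_subnet come from dev.tfvars
$ terraform plan -var-file="dev.tfvars"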
Update the Providers File
To add provider configuration, open the /terraformcicd/providers.tf
file and add the following code:
provider "aws" {
region = var.region
profile = "skillmix-lab"
}
Code Review
- The region value uses a value from the variables.tf file
- The profile value is the name of the AWS CLI credential set that we want to use
Update the Versions File
Here, we will specify the required_providers and versions. Open the /terraformcicd/versions.tf file and add the following code:
terraform {
required_version = ">= 0.15"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.46"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
cloudinit = {
source = "hashicorp/cloudinit"
version = "~> 2.1"
}
}
}
Initialize the Project
Now that we’ve set up the core files, let’s initialize the project. Run this command in the root directory, /terraformcicd, to initialize the project and download the providers:
$ terraform init
# ... output
Start the Lab
Click on the Start Lab button on the lesson page. Wait for all of the credentials to load.
Configure the AWS CLI Profile
Once the lab credentials are ready, you can configure the AWS CLI profile. Here is how you can do it on various operating systems.
- On Linux/Mac, open the file * ~/.aws/credentials*
- On Windows, open the file %USERPROFILE%.aws credentials
Once you have the file open, add this profile to it. Use the keys from the lab session you started.
# ...other profiles
[skillmix-lab]
aws_access_key_id=<lab access key>
aws_secret_access_key=<lab secret key>
Create the application module
Now it’s time to create the application module. This module contains the configuration for creating a VPC, public and private subnets, an RDS database instance, an EC2 instance, and a key pair to connect to it with. The EC2 instance configures itself at boot using a cloud-config file. This is the only module we will create for this project.
Create the Module Directory
To build the application module, start by creating a ./modules/application
directory in our root module. Create all of the files in this step in this directory.
Create the Variables file
In the module’s variables.tf file we define the variables that will pass down from the root directory. Create the ./modules/application/variables.tf
and add the following code:
variable "project" {
type = string
}
variable "cidr_block" {
type = string
}
variable "private_subnet" {
type = list(any)
}
variable "public_subnet" {
type = string
}
variable "pub_key" {
type = string
}
Code Review
- In this file we define the variables but don’t assign them any value. Rather, the values will be passed down from the project root
Create the Outputs File
We are creating a lot of resources inside this module. Therefore, we need to expose any important value associated with the created resources as output variables. Create the ./modules/application/outputs.tf
and add the following code:
output "vpc_id" {
description = "The ID of the VPC"
value = module.vpc.vpc_id
}
output "private_subnets" {
description = "List of IDs of private subnets"
value = module.vpc.private_subnets
}
output "public_subnets" {
description = "List of IDs of public subnets"
value = module.vpc.public_subnets
}
output "sg_ids" {
description = "A map containing IDs of security groups"
value = {
app_server_sg = module.app_server_sg.security_group_id
db_sg = module.db_sg.security_group_id
}
}
output "db_config" {
description = "A map containing details about the DB configuration"
value = {
user = aws_db_instance.database.username
password = aws_db_instance.database.password
database = aws_db_instance.database.name
hostname = aws_db_instance.database.address
port = aws_db_instance.database.port
}
sensitive = true
}
output "instance_id" {
description = "The ID of the instance"
value = aws_instance.app.id
}
output "public_ip" {
description = "The public IP address of the instance"
value = aws_instance.app.public_ip
}
Code Review
- The vpc_id output is going to output the id of the created VPC
- The private_subnets will expose the list of private subnet IDs in the VPC
- The public_subnets will expose the list of public subnet IDs in the VPC
- The sg_ids output block is creating an object with the security group ID values
- The db_config output block will print our database information to the screen after the db instance is created
- The instance_id will expose the ID of the created EC2 instance.
- The public_ip output is going to output the public IP address of the created EC2 instance
Create the Cloud Config File
When app servers start we want them to self-configure. This self-configuration involves creating the /etc/server.conf file, running commands, and downloading packages. This is done using the cloud-init utility just as we did in the Week 3 application module.
To create the configuration file itself in our project, create the ./modules/application/app_config.yaml
file and add the following code:
#cloud-config
write_files:
  - path: /etc/server.conf
    owner: root:root
    permissions: "0644"
    content: |
      {
        "user": "${user}",
        "password": "${password}",
        "database": "${database}",
        "netloc": "${hostname}:${port}"
      }
runcmd:
  - curl -sL https://api.github.com/repos/scottwinkler/vanilla-webserver-src/releases/latest | jq -r ".assets[].browser_download_url" | wget -qi -
  - unzip deployment.zip
  - ./deployment/server
packages:
  - jq
  - wget
  - unzip
Code Review
- There are three main sections to this code
- First, the write_files section is used to create the /etc/server.conf file and set its owner, permissions, and content
- Second, the runcmd section downloads a web server from GitHub, unzips the file, and starts the server
- Lastly, the packages section tells cloud-init what Ubuntu packages to install
Create the Locals.tf File
Since we are creating a lot of resources, the main.tf file will get complex. Because of that, we will create a separate file to store the local values which we will use inside this module.
A local value gives an expression a name, allowing you to use it numerous times within a module without having to repeat it. Local values are declared in a locals block. Once a local value is declared, you can reference it in expressions as local.<NAME>.
Create the ./modules/application/locals.tf
file and add the following code:
locals {
db_config = {
user = aws_db_instance.database.username
password = aws_db_instance.database.password
database = aws_db_instance.database.name
hostname = aws_db_instance.database.address
port = aws_db_instance.database.port
}
}
Code Review
- The db_config local value is a map.
- It collects the database connection values into a single map object.
- This db_config local value will later be used to pass the database information when rendering the cloud-config file.
Create the Data.tf File
Just like the locals.tf file, we will create a separate file called data.tf to configure data sources.
This data.tf file contains the data source for the cloud-init configuration. This data source will create the userdata value that we can pass to the EC2 instance.
It also contains the data block that is used to query our AWS account for the available availability zones in our region (us-west-2). This will be used in the creation of the VPC.
Lastly, it has the data "aws_ami" "ubuntu" block to get the latest Ubuntu AMI ID. This configuration can query and filter Ubuntu’s AMIs to get the latest matching AMI ID. This is used in the main.tf file when creating the EC2 instance.
Create the ./modules/application/data.tf
file and add the following code:
data "aws_availability_zones" "available" {}
data "cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = templatefile("${path.module}/app_config.yaml", local.db_config)
}
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = [
"ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = [
"hvm"]
}
owners = [
"099720109477"]
}
Code Review
- The "aws_availability_zones" "available" data block queries the available availability zones in our region (us-west-2)
- The data "cloudinit_config" "config" creates the content for the EC2 userdata. It uses Terraform’s templatefile function. This function accepts two values; the template file, and the values that are used in the template file. Here we pass in local.db_config which we configured in the locals.tf file earlier
- The data "aws_ami" "ubuntu" block uses Terraform’s external data sources feature to query Ubuntu’s AWS account for the latest AMI ID
Create the App Module Main File
Now that we have created the variables, outputs, locals, and data sources, we will now create the main file of the module. This is a complex file, so we’ll build it in parts.
First, we will create the VPC for the project. Create the ./modules/application/main.tf
and add the following code:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.64.0"
name = "${var.project}-${terraform.workspace}-vpc"
cidr = var.cidr_block
azs = data.aws_availability_zones.available.names
database_subnets = var.private_subnet
public_subnets = [var.public_subnet]
create_database_subnet_group = true
enable_nat_gateway = false
single_nat_gateway = false
}
Code Review
- We are using the terraform-aws-modules/vpc/aws module
- The version attribute locks the module version that will be used
- The name attribute uses the project variable from the variables.tf file we created in the module, and we get the current Terraform workspace name via the ${terraform.workspace} reference.
- For example, if we provide project = "smx-course" and we are working in the "dev" workspace, the VPC name will be smx-course-dev-vpc
- The cidr attribute sets the VPC IPv4 CIDR block value. This is a VPC required value and this is passed as a variable from the root module
- The public_subnets, database_subnets attributes set the subnet address ranges that the VPC design will use. Our EC2 and RDS resources will be deployed into their respective subnets
- The create_database_subnet_group attribute creates an RDS Subnet Group
- Lastly, enable_nat_gateway and single_nat_gateway are both set to false, so no NAT gateway is created
Next, we will create security groups for our app servers and the database instance. If you refer back to the architecture diagram above, you’ll see the security group architecture.
Append the following code to ./modules/application/main.tf
:
module "app_server_sg" {
source = "terraform-aws-modules/security-group/aws"
name = "${terraform.workspace}-app-server-sg"
description = "Security group for instances in VPC"
vpc_id = module.vpc.vpc_id
ingress_with_cidr_blocks = [
{
from_port = 80
to_port = 80
protocol = "tcp"
description = "HTTP"
cidr_blocks = "0.0.0.0/0"
},
{
from_port = 22
to_port = 22
protocol = "tcp"
description = "SSH"
cidr_blocks = "0.0.0.0/0"
}
]
}
module "db_sg" {
source = "terraform-aws-modules/security-group/aws"
name = "${terraform.workspace}-db-server-sg"
description = "Security group for db servers in VPC"
vpc_id = module.vpc.vpc_id
computed_ingress_with_source_security_group_id = [
{
rule = "mysql-tcp"
source_security_group_id = module.app_server_sg.security_group_id
}
]
number_of_computed_ingress_with_source_security_group_id = 1
}
Code Review
- We are using the terraform-aws-modules/security-group/aws module to create the security groups
- All security groups are created in the VPC using the vpc_id = module.vpc.vpc_id
- As you can see, we have used the terraform.workspace name reference again to name the security groups.
- The app_server_sg security group is designed to be assigned to the application servers. Port 80 is configured to allow HTTP traffic to the application server and port 22 to SSH into the servers
- The db_sg security group is designed to be assigned with the DB instance
Now that we have created the networking resources, we will move on to create the computing and database resources. First, we’ll create the RDS database instance.
We use resource blocks to create the DB instance just like we created in the Week 3 database module. In this module, we will define two resources: random_password and aws_db_instance.
Append the following code to ./modules/application/main.tf
:
resource "random_password" "password" {
length = 16
special = true
override_special = "_%*"
}
resource "aws_db_instance" "database" {
allocated_storage = 10
engine = "mysql"
engine_version = "8.0"
instance_class = "db.t2.micro"
identifier = "${var.project}-${terraform.workspace}-db-instance"
name = "db"
username = "admin"
password = random_password.password.result
db_subnet_group_name = module.vpc.database_subnet_group
vpc_security_group_ids = [module.db_sg.security_group_id]
skip_final_snapshot = true
}
Code Review
- random_password is a resource that creates a random password with user-defined attributes. With the configuration above, it will generate a 16-character password that may include the special characters _, %, and *.
- The resource "aws_db_instance" "database" block is used to create the database instance. Take note of how the password attribute is set; it references the resource "random_password" "password" block. The db_subnet_group_name and vpc_security_group_ids attributes are populated from the networking module outputs that have been passed down to this module
Next, we will create computing resources. We are going to create an EC2 instance and a keypair to connect to the instance using SSH. Append the following code to ./modules/application/main.tf
:
resource "aws_key_pair" "key_pair" {
key_name = "${terraform.workspace}-key"
public_key = var.pub_key
}
resource "aws_instance" "app" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [module.app_server_sg.security_group_id]
key_name = aws_key_pair.key_pair.id
user_data = data.cloudinit_config.config.rendered
tags = {
"Name" = "${var.project}-${terraform.workspace}-instance"
"Project" = "${var.project}"
"Technology" = "Terraform"
"Environment" = "${terraform.workspace}"
}
}
Code Review
- The resource "aws_key_pair" "key_pair" block is used to create a keypair. Note that we have used the workspace name reference as part of the key name. A variable containing the public key material (var.pub_key) is passed down to the public_key attribute. This is a required attribute in this resource block.
- aws_instance provides an EC2 instance resource with the user-configured attributes.
- As you can see, this instance is being created inside the public subnet of the VPC we created earlier and it is associated with the app_server_sg security group.
- Observe the ami and user_data values. These values are set using the data sources defined previously in the data.tf file.
- The key_name of the key pair to use for the instance is passed using the aws_key_pair.key_pair.id reference.
- Take a look at how the instance tags are defined. They are set using variable references and the workspace name reference.
This is the end of the module. We created a VPC, public and private subnets, security groups, a random password, a database instance, a key pair, and an EC2 instance. It is time to update the root project main file with the module configuration.
Update the Root Project Main File
It’s time to connect the application module with the root project. Open the /terraformcicd/main.tf
and append this code:
module "app" {
source = "./modules/application"
project = var.project
cidr_block = var.cidr
private_subnet = var.private_subnet
public_subnet = var.public_subnet
pub_key = var.public_key
}
Code Review
- All module configs require the source attribute to be set. Here we are giving the path to our application module
- The project attribute is passing the project variable to the module
- The cidr_block attribute is passing the cidr variable to the module which contains the cidr block for the VPC
- The private_subnet attribute is passing the private_subnet variable to the module
- The public_subnet attribute is passing the public_subnet variable to the module
- The pub_key attribute is passing the public_key variable to the module, which is an SSH RSA public key
Update the Root Project Output File
Now that we have the module built out, we can update the root project outputs.tf file. Open the /terraformcicd/outputs.tf
file and append this code:
output "vpc_id" {
description = "The ID of the VPC"
value = module.app.vpc_id
}
output "private_subnets" {
description = "List of IDs of private subnets"
value = module.app.private_subnets
}
output "public_subnets" {
description = "List of IDs of public subnets"
value = module.app.public_subnets
}
output "sg_ids" {
description = "A map containing IDs of security groups"
value = module.app.sg_ids
}
output "db_hostname" {
description = "Database hostname"
value = module.app.db_config.hostname
sensitive = true
}
output "db_password" {
value = module.app.db_config.password
sensitive = true
}
output "instance_id" {
description = "Instance ID"
value = module.app.instance_id
}
output "instance_public_ip" {
description = "Public IP address of the instance"
value = module.app.public_ip
}
Code Review
- Note the syntax used in the values above: module.app.<output_name>. This syntax matches the module structure.
- All output values that we defined inside the module's outputs.tf file are defined here as well.
- db_hostname and db_password are read from the db_config map object
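Once a workspace has been applied, these values can be read back with terraform output; a couple of illustrative commands, run from the project root with the desired workspace selected:
# print a single output value from the currently selected workspace
$ terraform output instance_public_ip
# sensitive outputs are hidden in the full listing but can be read by name
$ terraform output db_password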
Create Remote Backend
Terraform workspaces allow you to save your Terraform state in multiple, distinct, named workspaces. We are going to use an AWS S3 bucket as our remote backend. When you are working with multiple workspaces, Terraform creates a folder called “env:” inside the S3 bucket. The states of the different workspaces are saved inside this env: folder; each workspace's state is stored under the “key” you specified in your backend configuration.
To configure the remote backend for this project, open the /terraformcicd/backend.tf
file and add this code:
terraform {
backend "s3" {
bucket = "<bucket-name>"
key = "workspace/terraform.tfstate"
region = "<bucket-region>"
profile = "skillmix-lab"
}
}
Code Review
- Backend configurations are defined inside of a terraform block.
- You need to have an S3 bucket beforehand to use as the backend of this project. You can follow last week’s “Building the Collaboration Backend” lab to create this. Replace the bucket name and the region of the bucket.
- The key is defined as workspace/terraform.tfstate.
- Workspace states are saved in a folder called “env:”
- For example, when we create the dev workspace, its state will be saved as env:/dev/workspace/terraform.tfstate, as sketched below.
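Assuming the key above and the three workspaces created later in this lab, the bucket layout will end up looking roughly like this:
<bucket-name>/
└── env:/
    ├── dev/workspace/terraform.tfstate
    ├── stg/workspace/terraform.tfstate
    └── prod/workspace/terraform.tfstate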
Create Workspaces For Dev / UAT / Prod
Right now, this deployment's state is saved in the default workspace. You may double-check this by using the terraform workspace show command, which will show you the workspace you're in right now:
# To view the current workspace
$ terraform workspace show
Since we are learning how to work in multiple environments, we are going to create three workspaces named dev, stg, and prod. Now that you have learned how workspaces work, you know that they manage multiple non-overlapping groups of resources with the same configuration. We can create exact copies of the resources that we defined in the module, with values that differ from environment to environment. This is done by using tfvars files.
Introducing tfvars Files
tfvars files are used because it's easier to specify the values of several variables in a single variable definitions file. These variable definition files end in either .tfvars or .tfvars.json. Then these files can be specified on the command line using -var-file like so:
$ terraform apply -var-file="example.tfvars"
tfvars files consist only of variable name assignments. For example:
image_id = "ami-abc123"
availability_zone_names = [
"us-east-1a",
"us-west-1c",
]
To learn more about tfvars files, see the Input Variables page in the Terraform documentation.
Create dev.tfvars
Now, let us create the first tfvars file which will be used in the dev environment. Create the file /terraformcicd/dev.tfvars
and add this code:
cidr = "192.168.0.0/16"
private_subnet = ["192.168.1.0/24", "192.168.2.0/24", "192.168.3.0/24"]
public_subnet = "192.168.10.0/24"
Remember the empty variables we declared in the root module's variables.tf file? This is where we assign values to them. The assigned values are described below.
| Variable Name | Value | Description |
|----|----|----|
| cidr | 192.168.0.0/16 | CIDR block for the VPC |
| private_subnet | ["192.168.1.0/24", "192.168.2.0/24", "192.168.3.0/24"] | These subnets will be configured as the database subnets in the VPC. The RDS DB instance will be placed in one of these subnets |
| public_subnet | 192.168.10.0/24 | We only create one public subnet. The application server EC2 instance will be placed in this subnet |
Create stg.tfvars and prod.tfvars
These files are just the same as the dev.tfvars file; only the values differ from one environment to another.
If you are unsure about assigning all the CIDR and subnet addresses, take a look at the architecture diagram at the beginning of this lab. The VPC and subnet configurations are clearly shown in the diagram.
Create the file /terraformcicd/stg.tfvars
and add the following code:
cidr = "172.16.0.0/16"
private_subnet = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
public_subnet = "172.16.10.0/24"
Create the file /terraformcicd/prod.tfvars
and add the following code:
cidr = "10.0.0.0/16"
private_subnet = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet = "10.0.10.0/24"
Create Workspaces
Now that you have an idea about how to use tfvars files, let’s create the new workspace called “dev” using the terraform workspace new command:
# create a workspace for dev
$ terraform workspace new dev
You're now in a new, empty workspace. Because workspaces isolate their state, when you run terraform plan, Terraform will not see any existing state for this configuration, and it will save the new state in the specified S3 backend.
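To double-check, terraform workspace list shows every workspace and marks the selected one with an asterisk:
# list all workspaces; the current one is marked with *
$ terraform workspace list
  default
* dev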
Execute the Terraform workflow
Run the following commands to apply the changes you’ve made so far:
# run init again
$ terraform init
$ terraform plan -var-file="dev.tfvars"
# ... output
$ terraform apply -var-file="dev.tfvars"
# when prompted, answer Yes
As discussed earlier, we need to pass the variable files when running Terraform commands in workspaces. This lab teaches you how to work with Terraform and CI/CD, so we need to make sure that our configuration is correct before we integrate it with the CI/CD pipeline. That is why we run terraform plan and terraform apply here. You can run terraform destroy later to remove the resources.
If you followed the lab and created the files correctly, you will now have the dev environment architecture. If you navigate to the AWS console, you will see that all the resources were created just like in the architecture diagram.
Now that we have tested the dev deployment, we can move on to the staging deployment. Create a new workspace for the staging environment using the command below.
# create workspace for stg
$ terraform workspace new stg
We have now switched to the stg workspace. According to what we have learned so far, a new workspace means a new empty state. We can check this by running a terraform plan. You’ll be able to see that Terraform plans all resources again in this new workspace.
Execute the Terraform workflow
Run the following commands to apply the changes into the staging environment:
$ terraform plan -var-file="stg.tfvars"
# ... output
$ terraform apply -var-file="stg.tfvars"
# when prompted, answer Yes
Make sure to pass the respective variable definition file when planning and applying in workspaces. If you run the above commands, you will have the staging environment created.
You have now created two workspaces for dev and staging and successfully deployed the Terraform code. Finally, we can move on to the production environment. The execution flow is just the same as for staging. Create the new prod workspace by executing the command below:
# create a workspace for prod
$ terraform workspace new prod
Execute the Terraform workflow
Run the following commands to apply the changes into the production environment:
$ terraform plan -var-file="prod.tfvars"
# ... output
$ terraform apply -var-file="prod.tfvars"
# when prompted, answer Yes
You have successfully created resources in the production environment. With the correct configuration scripts, workspaces, and the respective variable definition files, deploying resources to multiple environments with Terraform is a piece of cake!
Review and Destroy
You have applied the Terraform configurations in all environments. Now it is time to destroy them. We are going to integrate this Terraform workflow with a CI/CD pipeline in the next step, so we need to destroy these resources at this stage.
Execute the following commands to destroy all resources in every workspace.
$ terraform workspace select dev
$ terraform destroy -var-file="dev.tfvars"
$ terraform workspace select stg
$ terraform destroy -var-file="stg.tfvars"
$ terraform workspace select prod
$ terraform destroy -var-file="prod.tfvars"
Create the CI/CD pipeline
In this section, we will learn how to create a Jenkins CI/CD pipeline and integrate it with Terraform to work with multiple environments.
Prerequisites
- Jenkins installed locally or in a server
- Access to the Jenkins server
- A GitHub/GitLab account with the code we created earlier
- An understanding of Jenkins pipeline scripting
First, we need to install the Terraform plugin in Jenkins. Log into your Jenkins server and go to Manage Jenkins → Manage Plugins
Search for "Terraform" in the Available section and install it. Choose "Install without restart", and while the installation is in progress, check the "Restart Jenkins when installation is complete" box to restart the Jenkins server.
You can check whether the installation succeeded by searching for Terraform in the Manage Jenkins → Global Tool Configuration section: http://<your_jenkins_ip>/configureTools/
Click on "Add Terraform" and fill in the name and version fields. This will install Terraform 1.0.5. Click Apply and then Save the configuration.
Note
Sometimes, adding the Terraform global tool from the Jenkins dashboard does not work properly. In that case, run the following script on your Jenkins server to install Terraform directly. This script works on Debian-based Linux distributions; check the official Terraform downloads page for other operating systems. It will install Terraform version 0.15.0 on your Jenkins server.
#!/bin/bash
sudo apt-get update
wget https://releases.hashicorp.com/terraform/0.15.0/terraform_0.15.0_linux_amd64.zip
sudo apt-get install zip unzip -y
unzip terraform*.zip
sudo mv terraform /usr/local/bin
terraform version
Now that you have everything in order, let's move on to the Jenkins pipeline. It would be beneficial to have an understanding of Jenkins pipeline scripting for this section. The pipeline file is a bit lengthy, so we will go through it step by step.
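The stage snippets below use scripted pipeline syntax, so they are assumed to sit inside a node block in the pipeline script, roughly like this (each stage body is filled in by the snippets explained in the following sections):
node {
    stage('Checkout') { /* ... */ }
    stage('Initialize Project') { /* ... */ }
    stage('Create Workspaces') { /* ... */ }
    stage('Development Plan') { /* ... */ }
    stage('Dev Deployment') { /* ... */ }
    stage('Staging Plan') { /* ... */ }
    stage('Staging Deployment') { /* ... */ }
    stage('Production Plan') { /* ... */ }
    stage('Production Deployment') { /* ... */ }
}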
The pipeline is divided into stages. The first step is called "Checkout".
stage('Checkout') {
checkout([$class: 'GitSCM',
branches: [[name: '*/main']],
extensions: [
[$class: 'RelativeTargetDirectory', relativeTargetDir: "code"]
],
userRemoteConfigs: [[credentialsId: 'skillmix-git', url: 'git@github.com:<git-uname>/<git-repo-name>.git']]])
}
Code Review
- This stage is used to checkout from the git repository.
- You need to provide the branch name which contains our previously built Terraform code. In this case, it is called "main".
- The extensions option is used to rename the target directory to something easier to work with, so you don't have to use the repository's full path. In this example, it is configured as "code". So, if you have any folders in the target directory, you can refer to them as code/ without having to provide an absolute path every time.
- Jenkins needs credentials to access the git repository and needs to know which repository to use. You need to configure SSH authentication between Jenkins and GitHub/GitLab beforehand using the "Manage credentials" option in Jenkins: add the public key of your Jenkins server to the git repository's "Deploy Keys" and add the server's private key in the Jenkins "Credentials" section (a typical key-generation command is shown after this list). Make sure to give your credentials a meaningful name.
- In the userRemoteConfigs option, provide the credential ID and the git repository path appropriately.
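One common way to set up the key pair for this is to generate it on the Jenkins server, assuming OpenSSH is available (the key file name here is just an example):
# generate a dedicated key pair for Jenkins
$ ssh-keygen -t rsa -b 4096 -C "jenkins" -f ~/.ssh/jenkins_git
# add ~/.ssh/jenkins_git.pub as a Deploy Key in the repository, and add the
# private key as an "SSH Username with private key" credential in Jenkins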
The second step is the initialization of the project. We will call this stage "Initialize Project".
stage('Initialize Project') {
dir('code/') {
echo "Running terraform init"
sh ''' terraform init '''
echo "Terraform successfully initialized"
}
}
Code Review
- Since this is a Terraform project, we need to initialize it first.
- If you recall the extension explanation in the first stage, this stage is using that dir('code/') to navigate to the directory.
- The echo commands are used to provide meaningful messages when the pipeline is running.
- A terraform init command is used to initialize the project.
Now that we have initialized the project, we can create the workspaces needed. As the lab indicates, the pipeline is used to deploy Terraform code in different TF workspaces. The third stage of the pipeline is called "Create Workspaces".
stage('Create Workspaces') {
dir('code/') {
script {
def wsApproval = input id: 'Workspace', message: 'Is this your first time?', submitter: '<your-jenkins-username>', parameters: [choice(choices: ['Yes', 'No'], description: 'Only click yes if you have not created workspaces yet!', name: 'Approval')]
if (wsApproval.toString() == 'Yes') {
echo "Creating workspaces for dev, stg and prod"
sh ''' terraform workspace new dev '''
sh ''' terraform workspace new stg '''
sh ''' terraform workspace new prod '''
echo "Finished creating workspaces"
}
}
}
}
Code Review
- As this is an interactive pipeline, there is user input involved: the pipeline asks whether this is the first time you are running it.
- The reason is that if we run the pipeline multiple times, it will try to create the workspaces again and error out. Now that you are familiar with workspaces, you know that we can't have two Terraform workspaces with the same name. Therefore, only answer "Yes" the first time you run the pipeline.
- Based upon the user input, it will create the 3 workspaces if the input is equal to 'Yes'.
- Three workspaces are created, named dev, stg, and prod.
- Do not forget to add your Jenkins username in the submitter: section.
Now to the fun part! We have initialized the project and created the workspaces. All we need to do now is view a plan and apply the changes to the different environments. The first environment is dev. A Terraform plan of the dev environment configuration is added as the fourth stage of the pipeline.
stage('Development Plan') {
dir('code/') {
echo "Development plan in Dev"
sh ''' terraform workspace select dev '''
sh ''' terraform plan -var-file=dev.tfvars '''
}
}
Code Review
- This stage is called the "Development Plan".
- We need to switch to the dev workspace before planning or applying. Switching to the workspace is done with "terraform workspace select dev".
- Then a Terraform plan is executed, providing the respective tfvars file. Since we added the var files to the root of the repository, we do not need to worry about specifying a path. But if you keep the tfvars files in a separate folder, make sure to add the correct path using the code/ prefix
The next stage applies the configuration. If you are satisfied with the plan, you can proceed with the flow.
stage('Dev Deployment') {
dir('code/') {
script {
def deploymentApproval = input id: 'Deploy', message: 'Deploy to Dev?', submitter: '<your-jenkins-username>', parameters: [choice(choices: ['Yes', 'No'], description: 'Approve the deployment?', name: 'Approval')]
if (deploymentApproval.toString() == 'Yes') {
echo "Deployment started in Dev"
sh ''' terraform workspace select dev '''
sh ''' terraform apply -var-file=dev.tfvars -auto-approve '''
echo "Finished deployment in Dev"
}
}
}
}
Code Review
- This stage is called "Dev Deployment".
- There is another user approval asking whether you want to deploy this to the dev environment. This was added so that, if you are not satisfied with the Terraform plan, you can abort the pipeline, make the necessary changes, and run it again.
- Based on the user approval, it will apply the changes to the dev environment using "terraform apply -var-file=dev.tfvars -auto-approve" if the input is equal to "Yes".
There are four more stages in the pipeline. They are the same as the dev plan and deployment; the only difference is that they deploy to the staging and production environments.
stage('Staging Plan') {
dir('code/') {
echo "Development plan in Staging"
sh ''' terraform workspace select stg '''
sh ''' terraform plan -var-file=stg.tfvars '''
}
}
stage('Staging Deployment') {
dir('code/') {
script {
def deploymentApproval = input id: 'Deploy', message: 'Deploy to Staging?', submitter: '<your-jenkins-username>', parameters: [choice(choices: ['Yes', 'No'], description: 'Approve the deployment?', name: 'Approval')]
if (deploymentApproval.toString() == 'Yes') {
echo "Deployment started in Staging"
sh ''' terraform workspace select stg '''
sh ''' terraform apply -var-file=stg.tfvars -auto-approve '''
echo "Finished deployment in Staging"
}
}
}
}
Code Review
- These are the steps responsible for staging deployment.
- First, it will output a TF plan by switching into the stg workspace and executing a terraform plan command.
- If you are satisfied with the planned configuration, you can provide "Yes" as the input to deploy the changes to the staging environment.
These are the last steps of the pipeline.
stage('Production Plan') {
dir('code/') {
echo "Development plan in Production"
sh ''' terraform workspace select prod '''
sh ''' terraform plan -var-file=prod.tfvars '''
}
}
stage('Production Deployment') {
dir('code/') {
script {
def deploymentApproval = input id: 'Deploy', message: 'Deploy to Production?', submitter: '<your-jenkins-username>', parameters: [choice(choices: ['Yes', 'No'], description: 'Approve the deployment?', name: 'Approval')]
if (deploymentApproval.toString() == 'Yes') {
echo "Deployment started in Production"
sh ''' terraform workspace select prod '''
sh ''' terraform apply -var-file=prod.tfvars -auto-approve '''
echo "Finished deployment in Production"
}
}
}
}
Code Review
- These are the steps responsible for the production deployment.
- First, it will output a TF plan by switching into the prod workspace and executing a terraform plan command.
- If you are satisfied with the planned configuration, you can provide "Yes" as the input to deploy the changes to the production environment.
- If you do not need to apply the changes to the prod environment, you can either provide "No" as the input or simply abort the pipeline.
That's it! The Jenkins interactive pipeline is complete. Now let's move on and see it in action.
Deploy the built Terraform code into production using the CI/CD pipeline
It is time to run the pipeline. Log into your Jenkins server and click on "New Item" and select "Pipeline". Provide a meaningful name for the pipeline and click "OK".
Add the pipeline script in the configuration and save.
Now, we can run the pipeline. Click on "Build Now" to execute the pipeline.
This is the view of the pipeline. It goes along with the stages we created in the pipeline script and completes them. As you can see, it was able to complete the first 2 stages. If you recall, we inserted a user input in the workspaces stage. The pipeline is paused until it gets user input. The user input prompt looks like this.
Click on "Yes" if this is the first tie that you are running the pipeline. If not, Choose "No" from the dropdown and click on "Proceed". Then it will move along to the next stage which is the development plan. You must be thinking, how can I view the plan if the pipeline looks like this in a UI? You can view the console output of your pipeline. Click on the little number above the build number and it will redirect to a detailed page. It contains information about the current build. Click on the "Console Output" to check the console logs.
The console output looks like this. It has all the console logs of the execution.
We added a terraform plan stage to all our environments. You can view the plan from the console output before proceeding to apply any changes. Here's how the plan looks in the console output section.
If you are satisfied with the configuration, you can proceed to the deployment. Select "Yes" for Approval and click on "Proceed" to deploy the code into the dev environment.
If you check the console output now, you will see the changes being applied, just like when applying in a terminal.
Once the apply completes, it will print the output values we defined earlier to the console, as shown below.
Now that the development deployment is done, you can move on to the staging environment. If you do not want to move on to other environments, you can abort the pipeline now. Just like in the dev environment, this stage will also output a Terraform plan and ask for user input before actually applying the changes.
If you chose "Yes", you will now be able to see the configurations are getting deployed in the staging environment.
Now that both dev and staging deployments are completed, we can move on to the production deployment. View the Terraform plan from the console and if satisfied click on "Proceed" to apply the changes into the prod environment.
If you followed all the steps correctly, you will now see that the prod deployment is completed and the pipeline ends stating that it is a success.
If everything went smoothly, the pipeline view will look like the image below. It also shows the time that each stage took to complete.
Now, we can check in AWS whether these resources were actually created. Head over to your AWS console and check for the created resources.
VPCs -
EC2 Instances -
RDS Instances -
As you can see, everything has been deployed exactly as intended. A CI/CD pipeline makes it much easier to work with multiple environments. Now that the deployment is complete, we can check how the Terraform state was saved in the remote backend. Navigate to the S3 bucket.
As mentioned earlier, when working with multiple workspaces, each state is saved in a folder named after the workspace.
Congratulations! You now have a good understanding of workspaces, how to use them across multiple environments, and how to drive them from a CI/CD pipeline.
Destroying resources
If you need to destroy the resources you created in each environment, it can be done in two ways.
- You can create another pipeline to destroy the resources in each workspace (a sketch is shown below).
- Find the directory where the build files are stored on the Jenkins server and run the terraform destroy command from there.
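As a sketch of the first option, a destroy pipeline can reuse the same checkout and initialization stages as above and then run one approval-gated destroy stage per workspace. The stage below is hypothetical and only illustrates the pattern for the dev workspace; repeat it for stg and prod with their respective tfvars files.
stage('Destroy Dev') {
    dir('code/') {
        script {
            // same approval pattern as the deployment stages above
            def destroyApproval = input id: 'Destroy', message: 'Destroy Dev?', submitter: '<your-jenkins-username>', parameters: [choice(choices: ['Yes', 'No'], description: 'Approve the destroy?', name: 'Approval')]
            if (destroyApproval.toString() == 'Yes') {
                sh ''' terraform workspace select dev '''
                sh ''' terraform destroy -var-file=dev.tfvars -auto-approve '''
            }
        }
    }
}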