Terraform Backends

In this lesson, you will learn about the Terraform backend and how to configure it.

What Are Terraform Backends?

Terraform backends store the state files of Terraform projects. Every Terraform configuration can specify a backend where the state files are stored.

What is a State File?

In a state file, Terraform stores information about the state of the infrastructure and configurations it provisions and manages. The primary function of Terraform state files is to maintain the connection between the infrastructures in your remote cloud environment and the resources declared in your local configuration file. That is, anytime you run terraform apply, terraform plan, or terraform destroy, the state file syncs with your remote environment to determine what changes it needs to make.

Although the state file is stored in plain JSON, editing it directly is discouraged; hand edits can break the mapping between your configuration and your real infrastructure.
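For reference, here is a heavily trimmed sketch of what a state file looks like. The values are illustrative placeholders, and real state files contain many more fields; format version 4 is what recent Terraform releases write.

```json
{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 3,
  "resources": [
    {
      "type": "aws_instance",
      "name": "web_server",
      "instances": [
        { "attributes": { "id": "i-0abc123example" } }
      ]
    }
  ]
}
```

The serial field increments on every state change, which is how Terraform detects stale copies of the state.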

Types of Terraform Backends

There are two types of Terraform backends:

  • Local: When you initialize a Terraform project, the default backend is the local backend, unless you explicitly declare another backend. The local backend saves the resource state in a file called terraform.tfstate on the local file system. If only one person is working on the Terraform project, this type of backend is acceptable.

  • Remote: This type of Terraform backend stores terraform state files in a remote storage system, such as an S3 bucket.

You will learn more about remote backends later in this lesson.

How is a Terraform Backend Configured?

To use the local backend, you do not need to write any configuration. However, you can change the path of the local state file, as in the example below.

terraform {
  backend "local" {
    path = "relative/path/to/my/terraform.tfstate"
  }
}

A remote Terraform backend, on the other hand, needs to be configured using the backend block within the top-level terraform block, as shown in the code below:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key = "path/to/my/terraform.tfstate"
    region = "us-west-2"
  }
}

The above code defines:

  • A backend block with the backend type "s3"

  • The bucket argument, which specifies the name of the remote S3 bucket, i.e., "mybucket"

  • The key argument, which specifies the path within the bucket where the state file will be stored

  • The region argument, which specifies the AWS region of the S3 bucket.

In the lab section of this lesson, you will build a remote backend.

You can see a list of all supported backends in the Terraform documentation.

Remote Backends

As we already know, remote backends keep state files in remote locations such as S3. If multiple people are working on a Terraform project together, this is the preferred way to store the state file. When a project uses a remote backend, Terraform retrieves the current state from the remote storage before each operation, so commands like terraform plan and terraform apply always work against the latest state of the environment.

Remote backends ensure that everyone working on the project operates on a consistent, shared state file of the resources.

Before adding a backend block to your main Terraform file, you must first provision an S3 bucket to house the state file.

In the backend block, you will include the following details:

  • The name of the S3 bucket you created earlier

  • The path as the value of the key argument

  • The appropriate region of the S3 bucket.

State Locking

What would happen if two people run terraform apply on a project at the same time? The changes to the infrastructure and state file could be in conflict. Chaos would ensue.

With a remote backend, two collaborators can attempt to change the state at the same time. To prevent this, you can implement state locking. When you enable state locking for your remote backend, Terraform locks the state for any operation that could write to it, so other users must wait until the lock is released.

There are different solutions to state locking. If you’re using a remote S3 backend, a DynamoDB table is often used. The DynamoDB table will hold the information of the lock and prevent other users from altering it.

You can achieve this by:

  • Creating a DynamoDB table.

  • Adding the dynamodb_table argument to the backend block.

Below is an example:

terraform {
  backend "s3" {

    # ...other remote backend arguments

    # DynamoDB table for state locking
    dynamodb_table = "table_name"
  }
}
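The lock table itself can also be provisioned with Terraform (in a separate project, for the same reason the backend bucket should live outside the project whose state it stores). Below is a minimal sketch; the table name terraform-locks is an assumption and must match the dynamodb_table argument in your backend block. The one hard requirement is a string partition key named LockID, which the S3 backend uses to store lock entries.

```terraform
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"  # assumed name; must match dynamodb_table in the backend block
  billing_mode = "PAY_PER_REQUEST"  # on-demand billing; a lock table needs no capacity planning
  hash_key     = "LockID"           # required: the S3 backend writes lock records under this key

  attribute {
    name = "LockID"
    type = "S"
  }
}
```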

Lab Time!

It's time to get some hands-on Terraform backend experience. In this lab, you will launch an EC2 instance with a local backend, then reconfigure the Terraform project to use a remote backend.

Create a Working Directory

You can use the same directory and main.tf file for all the challenges in this lab. Create the directory now. Then, create the main.tf file in the folder and include the following code:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  profile = "smx-lab"
  region = "us-west-2"
}

Get Your AWS Credentials

If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.

Wait for the credentials to load. Then run the following command in the terminal. Be sure to enter your own access key and secret key, and name the profile smx-lab.

$ aws configure --profile smx-lab
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: us-west-2
Default output format [None]: 

Note: If you're using your own AWS account you'll need to ensure that you've created and configured a named AWS CLI profile named smx-lab.
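After running aws configure, the AWS CLI writes the credentials to its shared credentials file. For reference, the resulting entry in ~/.aws/credentials looks like the sketch below (the values shown are placeholders for your own keys):

```ini
[smx-lab]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

The profile argument in the provider block tells Terraform to use this named profile instead of the default credentials.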

Create VPC Subnet and Security Group

You will create an EC2 instance resource in this lab, so you need a subnet and security group IDs as a prerequisite.

VPC Config (docs)

CIDR Block: 10.0.0.0/16

Subnet Config (docs)

VPC ID: Refer to the VPC ID that you created

CIDR Block: 10.0.1.0/24

Security Group Config (docs)

Refer to the configuration docs to learn how to fill in the missing arguments in the configuration file below.

#Create VPC
resource "aws_vpc" "main" {
  #add the relevant arguments
}

# Create subnet
resource "aws_subnet" "private" {
  #add the relevant arguments
}

#Create the Security Group
resource "aws_security_group" "My_VPC_Security_Group" {
  vpc_id = aws_vpc.main.id
  name = "My VPC Security Group"
  description = "My VPC Security Group"

  # allow ingress of port 22
  ingress {
    description = "SSH"
    from_port = 22
    to_port = 22
    protocol = "tcp"
    #add the missing arguments
  }

  # allow egress of all ports

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "My VPC Security Group"
  }
}

Run the terraform commands to provision the resources:

#Initialize the Terraform project
$ terraform init

#View the resources to be modified/created
$ terraform plan

#Apply the modifications
$ terraform apply

This is necessary because you will use the VPC Security group and subnet IDs in the next step.

Create The EC2 Instance

Log in to the AWS console using the IAM username, password, and account ID you get from the Lab Environment. Navigate to the VPC dashboard and copy your VPC security group and subnet IDs.

Once you have your VPC security group and subnet IDs, it’s time to use them to create an EC2 instance.

resource "aws_instance" "web_server" {
  ami = "ami-0cf6f5c8a62fa5da6"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["<your-security-group-id>"]
  subnet_id = "<your-subnet-id>"

  tags = {
    Name = "skillmix-lab-instance"
  }
}

Initialize Terraform, then plan and apply the configuration:

$ terraform init

$ terraform plan

$ terraform apply

Inspect to See the Local Backend File

When you check your project directory, you will find a new file named terraform.tfstate. This is the state file that stores, on your local file system, the details of the resources you provisioned with Terraform.

You can open the file to see that it is a JSON format file.

Note: Do not edit anything in the file.

Manually Create an S3 Bucket for The Remote Backend

You can create the S3 bucket using Terraform, but you must not create it in the same project whose backend it will store. To avoid confusion and keep things simple, you will create the S3 bucket manually.

Navigate to the S3 dashboard from the AWS console and create a new S3 bucket in your preferred region.

Note the bucket name you use in this step because you will use it in the subsequent step.

Update Configuration to Use S3 Remote Backend

After creating the S3 bucket, update your Terraform configuration file to use a remote backend. Open your main.tf and add the backend block.

terraform {
  backend "s3" {
    bucket = "mybucket"
    key = "path/to/my/terraform.tfstate"
    region = "us-west-2"
  }
}

Replace "mybucket" with the name of the S3 bucket you created earlier.

You can use any key name you desire, but we recommend that you keep it descriptive as above.

Save the main.tf file.

Migrate Backend from Local to Remote

Next, use the Terraform CLI commands to migrate the local state file to the remote S3 repository.

Run terraform init. You should get a message and prompt as shown below:

Initializing the backend...
Acquiring state lock. This may take a few moments...

Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value:

Terraform found existing local state and is asking whether you want to migrate it to the new remote backend. Enter yes and you should get a success message.

Run terraform plan, then terraform apply, to confirm that Terraform now reads and writes its state in the remote S3 bucket.

Inspect the Remote Backend

Head over to your AWS console to inspect the S3 bucket. You should find a new object in your S3 bucket with the folders we defined in the key argument above. When you navigate through the folders, you'll find the terraform.tfstate file.

You can also inspect your local file system. Open the local terraform.tfstate file; you'll find that it is now empty. This is because the state has been migrated to the remote backend. When you run Terraform commands, the local environment syncs with S3 to retrieve the state of your resources.

Delete Resources

After you’ve completed this lab, run terraform destroy to remove all the resources you created.


$ terraform destroy