In this project, you'll learn to configure an Application Load Balancer (ALB) that directs traffic to an AWS Lambda function. This setup pairs the ALB's efficient traffic distribution with Lambda's serverless computing model, giving you a highly scalable solution. Through hands-on experience, you'll link the ALB to Lambda and create targets to manage incoming requests effectively. The steps below walk you through building the solution on your own.
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run the following command in the terminal, entering your own access key and secret key when prompted, and naming the profile 'smx-lab'.
$ aws configure --profile smx-lab
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]:
Note: If you're using your own AWS account you'll need to ensure that you've created and configured a named AWS CLI profile named smx-lab.
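As an alternative to running aws configure, you can create the profile by editing the AWS CLI's files directly. The entries below show the standard file format; the key values are placeholders, and the paths assume the default ~/.aws location:

```ini
# ~/.aws/credentials
[smx-lab]
aws_access_key_id     = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

# ~/.aws/config
[profile smx-lab]
region = us-west-2
output = json
```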
Next, we'll create the required_providers config. This config specifies the providers our Terraform project requires. In this case, we are specifying the aws provider from HashiCorp with a version constraint of ~> 4.52.0.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.52.0"
    }
  }
  required_version = "~> 1.0"
}
Next, we'll create the aws config which is used to configure the AWS provider in Terraform. This config specifies the AWS profile to use, which is 'smx-lab', and the region to operate in, which is 'us-west-2'.
provider "aws" {
  profile = "smx-lab"
  region  = "us-west-2"
}
Next, we'll create the region config. This config is used to specify the AWS Region where we will be deploying our resources.
variable "region" {
  type        = string
  description = "AWS Region where resources are deployed"
  default     = "us-west-2"
}
Next, we'll create the vpc_cidr config. This config defines the CIDR block for the VPC; the default value is '10.0.0.0/16'.
variable "vpc_cidr" {
  type        = string
  description = "CIDR block for the VPC"
  default     = "10.0.0.0/16"
}
Next, we'll create the aws_vpc config. This config creates an Amazon Virtual Private Cloud (VPC) with the CIDR block specified above.
resource "aws_vpc" "vpc" {
  cidr_block = var.vpc_cidr
}
Next, we'll create the first aws_subnet config. This config defines a public subnet in the first Availability Zone.
resource "aws_subnet" "public_subnet1" {
  cidr_block        = "10.0.1.0/24"
  vpc_id            = aws_vpc.vpc.id
  availability_zone = "${var.region}a"
  tags = {
    Name = "Subnet for ${var.region}a"
  }
}
Next, we'll create a second aws_subnet config. An ALB requires subnets in at least two Availability Zones, so this config defines another public subnet in the second Availability Zone.
resource "aws_subnet" "public_subnet2" {
  cidr_block        = "10.0.2.0/24"
  vpc_id            = aws_vpc.vpc.id
  availability_zone = "${var.region}b"
  tags = {
    Name = "Subnet for ${var.region}b"
  }
}
Next, we'll create the aws_route_table config. This config creates a public route table that routes all outbound traffic (0.0.0.0/0) through the internet gateway.
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }
  tags = {
    Name = "public_rt"
  }
}
Next, we'll create the aws_route_table_association config. This config associates the first public subnet with the route table.
resource "aws_route_table_association" "public_rt_table_a" {
  subnet_id      = aws_subnet.public_subnet1.id
  route_table_id = aws_route_table.public_rt.id
}
Next, we'll create another aws_route_table_association config. This config associates the second public subnet with the same route table.
resource "aws_route_table_association" "public_rt_table_b" {
  subnet_id      = aws_subnet.public_subnet2.id
  route_table_id = aws_route_table.public_rt.id
}
Next, we'll create the aws_internet_gateway config. This config creates an internet gateway and attaches it to our VPC.
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.vpc.id
}
Next, we'll create the aws_iam_role config. This config is used to define an AWS IAM role called "Lambda_Function_Role" that allows the Lambda service to assume this role. The role is used to grant permissions to the Lambda function to access other AWS resources.
resource "aws_iam_role" "lambda_role" {
  name               = "Lambda_Function_Role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
Next, we'll create the aws_iam_policy config. This config defines an IAM policy that lets the Lambda function write to CloudWatch Logs.
resource "aws_iam_policy" "iam_policy_for_lambda" {
  name        = "aws_iam_policy_for_terraform_aws_lambda_role"
  path        = "/"
  description = "AWS IAM Policy for managing aws lambda role"
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}
EOF
}
This block creates an IAM policy named aws_iam_policy_for_terraform_aws_lambda_role. The policy allows the logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents actions on all CloudWatch Logs resources (arn:aws:logs:*:*:*), so the Lambda function can write its logs.
Next, we'll create the aws_iam_role_policy_attachment config. This config attaches the IAM policy to the IAM role.
resource "aws_iam_role_policy_attachment" "attach_iam_policy_to_iam_role" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.iam_policy_for_lambda.arn
}
Next, we'll create the aws_security_group config called 'load_balancer_sg'. This config is used to create a security group for a load balancer.
resource "aws_security_group" "load_balancer_sg" {
  name   = "myLoadBalancerSG"
  vpc_id = aws_vpc.vpc.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "myLoadBalancerSG"
  }
}
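One thing to be aware of: when Terraform manages a security group, it removes AWS's default allow-all egress rule. That's fine in this project, because the ALB invokes Lambda through the AWS API rather than over the VPC network, but if you later point this ALB at instance or IP targets, you'd need to add an egress block to the security group. A sketch of such a block:

```hcl
  # Hypothetical addition: allow all outbound traffic from the ALB.
  # Only needed if you add VPC-based (instance/IP) targets later.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
```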
Next, we'll create the aws_lb config. This config defines an AWS Application Load Balancer.
resource "aws_lb" "load_balancer" {
  name               = "myLoadBalancer"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.load_balancer_sg.id]
  subnets            = [aws_subnet.public_subnet1.id, aws_subnet.public_subnet2.id]
  tags = {
    Name = "myLoadBalancer"
  }
}
Next, we'll create the aws_lb_listener config. This config defines an HTTP listener on port 80 that forwards all requests to the target group.
resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.load_balancer.arn
  port              = 80
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.target_group.arn
  }
}
Next, we'll create the aws_lb_target_group config. This config defines a target group named myLoadBalancerTargets with a target type of lambda, associated with our VPC.
resource "aws_lb_target_group" "target_group" {
  name        = "myLoadBalancerTargets"
  target_type = "lambda"
  vpc_id      = aws_vpc.vpc.id
}
Next, we'll create the aws_lb_target_group_attachment config. This config registers the Lambda function as a target in the target group.
resource "aws_lb_target_group_attachment" "target_group_attachment" {
  target_group_arn = aws_lb_target_group.target_group.arn
  target_id        = aws_lambda_function.lambda_function.arn
  # ELB needs invoke permission before it can register a Lambda target.
  depends_on       = [aws_lambda_permission.with_lb]
}
Next, we'll create the aws_lambda_function config. This config is used to define an AWS Lambda function.
resource "aws_lambda_function" "lambda_function" {
  function_name = "lambdaFunction"
  runtime       = "nodejs14.x"
  handler       = "index.handler"
  filename      = "lambda.zip"
  role          = aws_iam_role.lambda_role.arn
  depends_on    = [aws_iam_role_policy_attachment.attach_iam_policy_to_iam_role]
  tags = {
    Name = "lambdaFunction"
  }
}
Next, let's create the Lambda function file. In this configuration, we're creating a JavaScript file (matching the Node.js runtime specified above). To create this file, first create a file named index.js in the root directory. Add this content to the file:
exports.handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello World!'),
    headers: {
      'Content-Type': 'application/json'
    }
  };
  return response;
};
This is just a simple handler that returns HTTP status code 200 with a short message as the body.
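Before deploying, you can sanity-check the handler logic locally with Node. This snippet inlines the same handler so it runs standalone, without AWS:

```javascript
// Same handler as in index.js, inlined so this snippet is self-contained.
const handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello World!'),
    headers: {
      'Content-Type': 'application/json'
    }
  };
  return response;
};

// Invoke it with an empty event object and print the result.
handler({}).then((res) => {
  console.log(res.statusCode, res.body); // 200 "Hello World!"
});
```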
Next, create a .zip archive containing the index.js file. Name the archive lambda.zip so it matches the filename argument in the aws_lambda_function config above.
Next, we'll create the aws_lambda_permission config. This config grants the Elastic Load Balancing service permission to invoke the Lambda function.
resource "aws_lambda_permission" "with_lb" {
  statement_id  = "AllowExecutionFromlb"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_function.arn
  principal     = "elasticloadbalancing.amazonaws.com"
  source_arn    = aws_lb_target_group.target_group.arn
}
Next, we'll create the alb_url config. This config outputs the URL of the Application Load Balancer (ALB) created in the Terraform configuration.
output "alb_url" {
  value = "http://${aws_lb.load_balancer.dns_name}"
}
Deploy the Solution
Let's deploy this thing! If you haven't done so, start the Skillmix lab session and get the account credentials. Configure your Terraform environment to use those credentials.
Then, open a terminal or command prompt, navigate to the folder with your Terraform file, and execute these commands:
# initialize the project
$ terraform init
# show the plan
$ terraform plan
# apply the changes
$ terraform apply
Wait for the changes to be applied before proceeding.
Test the Solution
Testing this solution is pretty simple. All you have to do is get the ALB URL from the alb_url output and open it in a browser. If you see the "Hello World!" message, you know it worked. That message is coming from the Lambda function, and that function is sitting behind the ALB. Congrats!