Here is an overview of the architecture. This solution demonstrates how to deploy an API Gateway HTTP API that connects to a private Application Load Balancer (ALB) and Elastic Container Service (ECS) containers through a VPC link. API Gateway uses the VPC link to reach the private integration, which adds a strong security layer for applications running behind a load balancer: nothing behind the ALB is exposed directly to the internet. We will build the entire solution with Terraform and AWS services.
Here are the steps you can follow to build this solution on your own. We'll be doing all of our work in a single Terraform file. Create a new directory on your computer, and then create a file named main.tf in it.
Next, we will create the Terraform configuration block and the AWS provider block. The configuration block pins the Terraform and AWS provider versions so the correct versions are used, and the provider block sets the AWS profile and region to use.
Append this code to the main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0.0"
    }
  }

  required_version = ">= 1.0.11"
}

provider "aws" {
  profile = "smx-lab"
  region  = "us-west-2"
}
Next, we will create the data source blocks. The first looks up the Skillmix Lab VPC so we can reference it in later resources; the second looks up the lab subnets within that VPC. We also read the current caller identity and region, which are used by the optional permissions boundary shown later.
Append this code to the main.tf file:
# config for running this on the Skillmix lab account
# if you're running outside of Skillmix, change this to your VPC tags
data "aws_vpc" "lab_vpc" {
  filter {
    name   = "tag:Name"
    values = ["Skillmix Lab"]
  }
}

# config for running this on the Skillmix lab account
# if you're running outside of Skillmix, change these to your subnet tags
data "aws_subnet_ids" "lab_subnets" {
  vpc_id = data.aws_vpc.lab_vpc.id

  filter {
    name = "tag:Name"
    values = [
      "Skillmix Lab Public Subnet (AZ1)",
      "Skillmix Lab Public Subnet (AZ2)",
    ]
  }
}

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
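A note on provider versions: the aws_subnet_ids data source used here is deprecated in AWS provider 4.x and was removed in 5.0. If you run this lab with a newer provider, the equivalent lookup uses the aws_subnets data source instead; here is a sketch, assuming the same tag names:

data "aws_subnets" "lab_subnets" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.lab_vpc.id]
  }

  filter {
    name = "tag:Name"
    values = [
      "Skillmix Lab Public Subnet (AZ1)",
      "Skillmix Lab Public Subnet (AZ2)",
    ]
  }
}

With that variant, the subnet IDs are referenced as data.aws_subnets.lab_subnets.ids everywhere the rest of this walkthrough uses the subnet list.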
Next, we will create an Application Load Balancer (ALB) using the aws_lb resource. This ALB is internal, meaning it is not accessible from the internet, and is associated with the lab subnets and the load balancer security group defined later in the file. The ALB routes traffic to the services running in our ECS cluster.
Append this code to the main.tf file:
resource "aws_lb" "ecs_alb" {
  load_balancer_type = "application"
  internal           = true
  subnets            = data.aws_subnet_ids.lab_subnets.ids
  security_groups    = [aws_security_group.lb_security_group.id]
}
Next, we will create a load balancer target group using the aws_lb_target_group resource. This block defines the port (80), the protocol (HTTP), the target type (ip, since Fargate tasks register by IP address), and the ID of the lab VPC.
Append this code to the main.tf file:
resource "aws_lb_target_group" "alb_ecs_tg" {
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = data.aws_vpc.lab_vpc.id
}
Next, we will create an AWS load balancer listener using the resource "aws_lb_listener" block. This listener will be associated with the load balancer we created earlier, and will listen on port 80 for HTTP requests. When a request is received, the listener will forward it to the target group we created, which will then route the request to the appropriate ECS service.
Append this code to the main.tf file:
resource "aws_lb_listener" "ecs_alb_listener" {
  load_balancer_arn = aws_lb.ecs_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.alb_ecs_tg.arn
  }
}
Next, we will create an Amazon Elastic Container Service (ECS) cluster and service. The cluster is named "demo-ecs-cluster" and the service "demo-ecs-svc". The service runs two copies of the task defined in the ecs_taskdef resource, registers them with the load balancer target group, uses the ecs_security_group security group and the lab subnets, and allows a health check grace period of 60 seconds.
Append this code to the main.tf file:
resource "aws_ecs_cluster" "ecs_cluster" {
  name = "demo-ecs-cluster"
}

resource "aws_ecs_service" "demo-ecs-service" {
  name            = "demo-ecs-svc"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.ecs_taskdef.arn
  desired_count   = 2

  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 50
  enable_ecs_managed_tags            = false
  health_check_grace_period_seconds  = 60
  launch_type                        = "FARGATE"

  depends_on = [aws_lb_target_group.alb_ecs_tg, aws_lb_listener.ecs_alb_listener]

  load_balancer {
    target_group_arn = aws_lb_target_group.alb_ecs_tg.arn
    container_name   = "web"
    container_port   = 80
  }

  network_configuration {
    security_groups = [aws_security_group.ecs_security_group.id]
    subnets         = data.aws_subnet_ids.lab_subnets.ids
    # assign public IPs so the tasks can pull the nginx image from Docker Hub
    # (assumes the public lab subnets have no NAT gateway)
    assign_public_ip = true
  }
}
Next, we will create the ECS task definition. This resource defines the task's parameters, such as the container image, CPU and memory requirements, and the network mode. The container definition specifies the container's name, image, whether it is essential, and its port mappings. The execution and task roles reference the IAM roles we create next, and requires_compatibilities and network_mode are set to "FARGATE" and "awsvpc" respectively.
Append this code to the main.tf file:
resource "aws_ecs_task_definition" "ecs_taskdef" {
  family = "service"

  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "nginx"
      essential = true
      portMappings = [
        {
          containerPort = 80
          protocol      = "tcp"
        }
      ]
    }
  ])

  cpu                      = 512
  memory                   = 1024
  execution_role_arn       = aws_iam_role.ecs_task_exec_role.arn
  task_role_arn            = aws_iam_role.ecs_task_role.arn
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
}
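The task definition above does not configure logging, so the container's output is discarded. If you want the nginx logs in CloudWatch, one option (a sketch, not part of this lab; the log group name and stream prefix are illustrative) is to create a log group, attach the AWS-managed AmazonECSTaskExecutionRolePolicy to the execution role, and add a logConfiguration entry to the container definition:

resource "aws_cloudwatch_log_group" "web_logs" {
  name              = "/ecs/demo-ecs-svc"
  retention_in_days = 7
}

resource "aws_iam_role_policy_attachment" "ecs_task_exec_policy" {
  role       = aws_iam_role.ecs_task_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# then, inside the container object passed to jsonencode([...]) above:
#   logConfiguration = {
#     logDriver = "awslogs"
#     options = {
#       awslogs-group         = aws_cloudwatch_log_group.web_logs.name
#       awslogs-region        = data.aws_region.current.name
#       awslogs-stream-prefix = "web"
#     }
#   }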
Next, we will create an AWS IAM role called "ecs_task_exec_role" that allows Amazon ECS tasks to assume the role and perform actions on your behalf. This is done by setting the assume_role_policy argument to a JSON-encoded policy that grants the ecs-tasks.amazonaws.com service permission to assume the role.
Append this code to the main.tf file:
resource "aws_iam_role" "ecs_task_exec_role" {
  # uncomment the 'permissions_boundary' argument if running this lab on skillmix.io
  # permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
  name = "ecs_task_exec_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}
Next, we will create an IAM role for our ECS task using the aws_iam_role resource. This code will create a role with the name "ecs_task_role" and an assume role policy that allows the ECS service to assume the role. This will allow our ECS task to access other AWS services.
Append this code to the main.tf file:
resource "aws_iam_role" "ecs_task_role" {
  # uncomment the 'permissions_boundary' argument if running this lab on skillmix.io
  # permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
  name = "ecs_task_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}
Next, we will create a VPC Link between the API Gateway and the Application Load Balancer. This VPC Link will allow the API Gateway to communicate with the Application Load Balancer. The code below will create a VPC Link with the name "vpclink_apigw_to_alb" and will use the subnet IDs from the data source "lab_subnets" to connect the two services.
Append this code to the main.tf file:
resource "aws_apigatewayv2_vpc_link" "vpclink_apigw_to_alb" {
  name               = "vpclink_apigw_to_alb"
  security_group_ids = []
  subnet_ids         = data.aws_subnet_ids.lab_subnets.ids
}
Next, we will create the API Gateway v2 API resource. This API is named "serverlessland-pvt-endpoint" and uses the HTTP protocol type. It provides the public endpoint that clients will use to reach our application.
Append this code to the main.tf file:
resource "aws_apigatewayv2_api" "apigw_http_endpoint" {
  name          = "serverlessland-pvt-endpoint"
  protocol_type = "HTTP"
}
Next, we will create an AWS API Gateway v2 integration resource to connect our API Gateway to an Application Load Balancer. This resource will define the integration type, integration URI, integration method, connection type, connection ID, and payload format version. It will also depend on the API Gateway v2 VPC link, API Gateway v2 API, and Application Load Balancer listener resources.
Append this code to the main.tf file:
resource "aws_apigatewayv2_integration" "apigw_integration" {
  api_id                 = aws_apigatewayv2_api.apigw_http_endpoint.id
  integration_type       = "HTTP_PROXY"
  integration_uri        = aws_lb_listener.ecs_alb_listener.arn
  integration_method     = "ANY"
  connection_type        = "VPC_LINK"
  connection_id          = aws_apigatewayv2_vpc_link.vpclink_apigw_to_alb.id
  payload_format_version = "1.0"

  depends_on = [
    aws_apigatewayv2_vpc_link.vpclink_apigw_to_alb,
    aws_apigatewayv2_api.apigw_http_endpoint,
    aws_lb_listener.ecs_alb_listener,
  ]
}
Next, we will create an API Gateway v2 route that proxies requests to the integration, using the aws_apigatewayv2_route resource. The route key "ANY /{proxy+}" matches any HTTP method and path, the target points at the integration, and the depends_on ensures the route is created after the integration.
Append this code to the main.tf file:
resource "aws_apigatewayv2_route" "apigw_route" {
  api_id     = aws_apigatewayv2_api.apigw_http_endpoint.id
  route_key  = "ANY /{proxy+}"
  target     = "integrations/${aws_apigatewayv2_integration.apigw_integration.id}"
  depends_on = [aws_apigatewayv2_integration.apigw_integration]
}
Next, we will create the API Gateway v2 stage, associated with the API we created previously. The stage is named "$default" and is set to auto-deploy, so changes to the API are published automatically.
Append this code to the main.tf file:
resource "aws_apigatewayv2_stage" "apigw_stage" {
  api_id      = aws_apigatewayv2_api.apigw_http_endpoint.id
  name        = "$default"
  auto_deploy = true
  depends_on  = [aws_apigatewayv2_api.apigw_http_endpoint]
}
Next, we will create a security group for our load balancer, along with two security group rules: one allowing ingress from anywhere on port 80, and one allowing egress from the load balancer to the ECS cluster on port 80.
Append this code to the main.tf file:
resource "aws_security_group" "lb_security_group" {
  description = "LoadBalancer Security Group"
  vpc_id      = data.aws_vpc.lab_vpc.id
}

# the rules are standalone aws_security_group_rule resources; mixing inline
# rules with standalone rule resources on the same group causes conflicts
resource "aws_security_group_rule" "sg_ingress_rule_all_to_lb" {
  type              = "ingress"
  description       = "Allow from anyone on port 80"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
  security_group_id = aws_security_group.lb_security_group.id
}

resource "aws_security_group_rule" "sg_egress_rule_lb_to_ecs_cluster" {
  type                     = "egress"
  description              = "Target group egress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.lb_security_group.id
  source_security_group_id = aws_security_group.ecs_security_group.id
}
Next, we will create the security group for the ECS tasks, along with two rules: one allowing all outbound traffic (so the tasks can pull their container image), and one allowing inbound traffic from the load balancer on port 80. The security group is associated with the lab VPC and controls access to the ECS tasks.
Append this code to the main.tf file:
resource "aws_security_group" "ecs_security_group" {
  description = "ECS Security Group"
  vpc_id      = data.aws_vpc.lab_vpc.id
}

# standalone rule rather than an inline egress block; mixing inline rules
# with standalone rule resources on the same group causes conflicts
resource "aws_security_group_rule" "sg_egress_rule_ecs_all" {
  type              = "egress"
  description       = "Allow all outbound traffic by default"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.ecs_security_group.id
}

# ECS cluster security group ingress from the load balancer.
resource "aws_security_group_rule" "sg_ingress_rule_ecs_cluster_from_lb" {
  type                     = "ingress"
  description              = "Ingress from Load Balancer"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_security_group.id
  source_security_group_id = aws_security_group.lb_security_group.id
}
Finally, we create an output. Here we will output the gateway URL endpoint that we will use for testing.
Append this code to the main.tf file:
output "apigw_endpoint" {
  value       = aws_apigatewayv2_api.apigw_http_endpoint.api_endpoint
  description = "API Gateway Endpoint"
}
Now that we have all of our code written, we can deploy the project. Open a terminal, navigate to the project, and run these commands.
# initialize the project
$ terraform init
# plan the project
$ terraform plan
# apply the project
$ terraform apply
If you open a web browser and enter the generated API endpoint, you should see the Nginx welcome page. Alternatively, run curl against the endpoint (you can retrieve it at any time with terraform output -raw apigw_endpoint) and you should receive a 200 response code:
$ curl -s -o /dev/null -w "%{http_code}" <API endpoint> ; echo
When you are done, run the destroy command to tear everything down:
$ terraform destroy