Here is some information about this architecture.
This architecture pattern shows how you can set up AWS API Gateway (the HTTP API version) to accept incoming requests from the internet and route them to a Lambda function.
In this configuration, the API Gateway accepts unauthenticated requests from the internet and forwards them to the Lambda function. The Lambda function simply logs some information about the request to CloudWatch, though it could do a lot more!
Here are the steps you can follow to build this solution on your own.
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run this in the terminal:
$ aws configure --profile smx-lab
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-west-2
Default output format [None]: json
Be sure to name your credentials profile 'smx-lab'.
Note: If you're using your own AWS account, you'll need to create and configure an AWS CLI profile named smx-lab.
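You can confirm the profile works by asking AWS who you are (this assumes the AWS CLI is installed):

```shell
$ aws sts get-caller-identity --profile smx-lab
```

If the credentials are valid, this prints the account ID, user ID, and caller ARN for the profile.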
The first step is to create the Terraform and provider blocks. Create a main.tf file in a directory and add the following code.
The required_providers block specifies the providers needed for the configuration:
AWS provider, version 4.56.x, from "hashicorp/aws".
Random provider, version 3.1.x, from "hashicorp/random".
Archive provider, version 2.2.x, from "hashicorp/archive".
The required_version setting specifies the Terraform version required for the configuration: any 1.x release.
The provider block configures the AWS provider, specifying the profile and region to use. The region is set from the value of the variable "aws_region".
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.56.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }
  required_version = "~> 1.0"
}

provider "aws" {
  profile = "smx-lab"
  region  = var.aws_region
}
Next, let's create the variables that the project will use. Append the following code to the main.tf file. You'll see each of these variables used in the remaining code.
variable "aws_region" {
  description = "AWS region for all resources."
  type        = string
  default     = "us-west-2"
}

variable "s3_bucket_prefix" {
  description = "S3 bucket prefix for lambda code"
  type        = string
  default     = "apigw-http-api-lambda"
}

variable "lambda_name" {
  description = "name of lambda function"
  type        = string
  default     = "test_apigw_integration"
}

variable "lambda_log_retention" {
  description = "lambda log retention in days"
  type        = number
  default     = 7
}

variable "apigw_log_retention" {
  description = "api gwy log retention in days"
  type        = number
  default     = 7
}
Next, we will create two data sources that give us access to information about the current AWS account and region. The first, data "aws_caller_identity" "current" {}, provides information about the current AWS account, such as the account ID and caller ARN. The second, data "aws_region" "current" {}, provides information about the current AWS region, such as its name.
Append this code to the main.tf file:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
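As a sketch of how these data sources can be referenced (this hypothetical local value is not needed for the rest of the lab), you could build an ARN from their attributes:

```hcl
# Hypothetical example: compose a log-group ARN for the current
# account and region using the data sources above.
locals {
  example_log_group_arn = "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/example"
}
```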
Next, let's create an S3 bucket and ACL resource. This bucket will be used to store the Lambda function zip file.
This code creates an Amazon S3 bucket (referenced in Terraform as "lambda_bucket") and sets its access control list (ACL) to private. The s3_bucket_prefix variable sets the prefix of the generated bucket name. force_destroy is set to true, which means the bucket will be deleted on destroy even if it contains objects.
Append this code to the main.tf file:
resource "aws_s3_bucket" "lambda_bucket" {
  bucket_prefix = var.s3_bucket_prefix
  force_destroy = true
}

resource "aws_s3_bucket_acl" "private_bucket" {
  bucket = aws_s3_bucket.lambda_bucket.id
  acl    = "private"
}
Next, we'll create the Lambda function resource. This resource defines the following:
A friendly name and description
The s3_bucket and s3_key of the deployment package
The version of Python to use
The path to the function handler, in the form file_name.function_name
The source code hash, used to detect code changes
The IAM role to use (we will add this later)
A dependency on the CloudWatch log group
Append this code to the main.tf file:
resource "aws_lambda_function" "app" {
  function_name = var.lambda_name
  description   = "apigwy-http-api serverlessland pattern"

  s3_bucket = aws_s3_bucket.lambda_bucket.id
  s3_key    = aws_s3_object.lambda_app.key

  runtime = "python3.8"
  handler = "app.lambda_handler"

  source_code_hash = data.archive_file.lambda_zip.output_base64sha256

  role = aws_iam_role.lambda_exec.arn

  depends_on = [aws_cloudwatch_log_group.lambda_log]
}
Now that we've created the Terraform Lambda resource, we need to create the actual Python file and function that Lambda will run. For test purposes, this Python function will simply log some information to CloudWatch.
Create a directory in your folder called src. In that directory, create a file named app.py.
Add the following code to the app.py file:
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import os
import json
import logging
import base64
from datetime import datetime

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    logging.info(json.dumps(event, indent=2))
    logging.info(f"Lambda function ARN: {context.invoked_function_arn}")
    logging.info(f"CloudWatch log stream name: {context.log_stream_name}")
    logging.info(f"CloudWatch log group name: {context.log_group_name}")
    logging.info(f"Lambda Request ID: {context.aws_request_id}")
    logging.info(f"Lambda function memory limits in MB: {context.memory_limit_in_mb}")

    eventObject = {
        "functionName": context.function_name,
        "xForwardedFor": event["headers"]["X-Forwarded-For"],
        "method": event["requestContext"]["httpMethod"],
        "rawPath": event["requestContext"]["path"],
        "queryString": event["queryStringParameters"],
        "timestamp": event["requestContext"]["requestTime"]
    }

    # POST requests also echo back the request body
    if event["requestContext"]["httpMethod"] == "POST":
        eventObject["body"] = event["body"]

    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "message ": eventObject
        })
    }
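To see what the handler pulls out of a request, here is a minimal sketch that mirrors the eventObject construction against a trimmed-down payload format 1.0 event. The event below is hypothetical and much smaller than a real API Gateway event; the values are made up for illustration:

```python
import json

# Hypothetical, trimmed payload format 1.0 event; real events carry
# many more fields, and these values are made up.
sample_event = {
    "headers": {"X-Forwarded-For": "203.0.113.10"},
    "queryStringParameters": {"foo": "bar"},
    "requestContext": {
        "httpMethod": "GET",
        "path": "/",
        "requestTime": "04/Apr/2022:22:50:34 +0000",
    },
}


def extract(event):
    # Mirrors the eventObject built in app.py, minus the context fields
    return {
        "xForwardedFor": event["headers"]["X-Forwarded-For"],
        "method": event["requestContext"]["httpMethod"],
        "rawPath": event["requestContext"]["path"],
        "queryString": event["queryStringParameters"],
        "timestamp": event["requestContext"]["requestTime"],
    }


print(json.dumps(extract(sample_event), indent=2))
```

The printed object matches the "message " payload you'll see in the test section at the end of this lab.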
So far we have an S3 bucket, a Lambda function file, and a Lambda resource block. Next, we will create the data source and resource that zip the Lambda function file and upload it to S3 when we run terraform apply.
Append this code to the main.tf file:
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/src.zip"
}

resource "aws_s3_object" "lambda_app" {
  bucket = aws_s3_bucket.lambda_bucket.id
  key    = "source.zip"
  source = data.archive_file.lambda_zip.output_path
  etag   = filemd5(data.archive_file.lambda_zip.output_path)
}
We need to create an IAM role for the Lambda function to run as. Its trust policy gives the AWS Lambda service permission to assume the role, as shown below.
This is done in two steps. First, we create the role resource with the trust policy. Then, we create a policy attachment that links the AWS-managed basic execution policy to the role so the function can write logs.
Append this code to the main.tf file:
resource "aws_iam_role" "lambda_exec" {
  # uncomment the 'permissions_boundary' argument if running this lab on skillmix.io
  # permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
  name = "serverless_lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
We have completed our work on the Lambda-specific resources.
Now we will create the API Gateway resources. The first thing we'll do is create the API Gateway resource itself. This resource sets a friendly name and description, specifies that it should be an HTTP API, and then configures CORS.
The CORS configuration specifies which methods are allowed, how credentials are handled, and which origins are allowed.
Append this code to the main.tf file:
resource "aws_apigatewayv2_api" "lambda" {
  name          = "apigw-http-lambda"
  protocol_type = "HTTP"
  description   = "Serverlessland API Gwy HTTP API and AWS Lambda function"

  cors_configuration {
    allow_credentials = false
    allow_headers     = []
    allow_methods = [
      "GET",
      "HEAD",
      "OPTIONS",
      "POST",
    ]
    allow_origins = [
      "*",
    ]
    expose_headers = []
    max_age        = 0
  }
}
API Gateways require a stage configuration. Stages are lifecycle states. For example, you could have stages for dev, test, and production.
Here we will create a default stage. The stage defines which gateway it is connected to, automatically deploys changes when they are made to the gateway, and specifies how to format the access logs.
Below the default stage is a separate Terraform resource called aws_apigatewayv2_integration that ties the API Gateway to the Lambda function.
Append this code to the main.tf file:
resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.lambda.id
  name        = "$default"
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api_gw.arn

    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      resourcePath            = "$context.resourcePath"
      routeKey                = "$context.routeKey"
      status                  = "$context.status"
      responseLength          = "$context.responseLength"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }

  depends_on = [aws_cloudwatch_log_group.api_gw]
}

resource "aws_apigatewayv2_integration" "app" {
  api_id           = aws_apigatewayv2_api.lambda.id
  integration_uri  = aws_lambda_function.app.invoke_arn
  integration_type = "AWS_PROXY"
}
API Gateways can have multiple routes. A route is a path on a URL, such as /page, /post, or /user.
In our configuration we will create a default route that will respond on any path.
Append this code to the main.tf file:
resource "aws_apigatewayv2_route" "any" {
  api_id    = aws_apigatewayv2_api.lambda.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.app.id}"
}
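If you later want to narrow the API to specific paths, HTTP API routes use a "METHOD /path" route key. For example, this hypothetical route (not part of this lab) would only match GET requests to /pets:

```hcl
# Hypothetical: only match GET requests to /pets
resource "aws_apigatewayv2_route" "get_pets" {
  api_id    = aws_apigatewayv2_api.lambda.id
  route_key = "GET /pets"
  target    = "integrations/${aws_apigatewayv2_integration.app.id}"
}
```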
In our architecture, the API Gateway will invoke Lambda functions. We don't let anything just invoke our functions! No, instead we give resources permissions to do so.
That's what the following resource does. It grants the API Gateway permission to invoke the Lambda function.
Append this code to the main.tf file:
resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.app.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}
Our project requires two log groups. One for the Lambda function, one for the API Gateway.
Append this code to the main.tf file to create the two log groups:
resource "aws_cloudwatch_log_group" "lambda_log" {
  name              = "/aws/lambda/${var.lambda_name}"
  retention_in_days = var.lambda_log_retention
}

resource "aws_cloudwatch_log_group" "api_gw" {
  name              = "/aws/api_gw/${aws_apigatewayv2_api.lambda.name}"
  retention_in_days = var.apigw_log_retention
}
To run our tests properly we will need the URL to our API Gateway. Append this code to the main.tf file to create the output:
output "apigwy_url" {
  description = "URL for API Gateway stage"
  value       = aws_apigatewayv2_api.lambda.api_endpoint
}
Deploy the Solution
Let's deploy this thing! If you haven't done so, start the Skillmix lab session and get the account credentials. Configure your Terraform environment to use those credentials.
Then, open a terminal or command prompt, navigate to the folder with your Terraform file, and execute these commands:
# initialize the project
$ terraform init
# show the plan
$ terraform plan
# apply the changes
$ terraform apply
Wait for the changes to be applied before proceeding.
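Once the apply completes, you can print the API endpoint again at any time with:

```shell
$ terraform output apigwy_url
```

Copy this URL; it is the endpoint you'll use in the test commands below.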
Test the Solution
Now we get to test the solution. In your terminal or command prompt, execute the following commands.
curl '<your http api endpoint>'

# sample output
{
  "message ": {
    "functionName": "test_apigw_integration",
    "xForwardedFor": "{YourIpAddress}",
    "method": "GET",
    "rawPath": "/",
    "queryString": null,
    "timestamp": "04/Apr/2022:22:50:34 +0000"
  }
}

curl '<your http api endpoint>/pets/dog/1?foo=bar' -X POST \
  --header 'Content-Type: application/json' \
  -d '{"key1":"hello", "key2":"World!"}'

# sample output
{
  "message ": {
    "functionName": "test_apigw_integration",
    "xForwardedFor": "{YourIpAddress}",
    "method": "POST",
    "rawPath": "/pets/dog/1",
    "queryString": {
      "foo": "bar"
    },
    "timestamp": "04/Apr/2022:22:49:14 +0000",
    "body": "{\"key1\":\"hello\", \"key2\":\"World!\"}"
  }
}
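Notice that in the POST sample output the body field is itself a JSON-encoded string, so a client has to decode it twice to recover the original payload. A quick Python sketch, using a hand-built string in the shape of the sample response above:

```python
import json

# A string shaped like the POST sample response: the original request
# body is wrapped as a JSON string inside the "message " object, so it
# takes two json.loads calls to recover the payload.
response_text = (
    '{"message ": {"body": "{\\"key1\\":\\"hello\\", \\"key2\\":\\"World!\\"}"}}'
)

outer = json.loads(response_text)          # decode the response envelope
inner = json.loads(outer["message "]["body"])  # decode the echoed body

print(inner)  # {'key1': 'hello', 'key2': 'World!'}
```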
Source
This project was sourced from the AWS Repo: https://github.com/aws-samples/serverless-patterns