Here is some information about this architecture.
Some use cases require running a computing task at a regular interval: checking the availability of a service, performing a calculation, or running some sort of cleanup.
Whatever your use case, you can use this handy serverless solution to accomplish it. The solution demonstrates how to use EventBridge to initiate Lambda function calls. As written, it invokes a Lambda function every minute, but you can configure it for a different interval.
This solution is built with Terraform.
Here are the steps you can follow to build this solution on your own.
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run this in the terminal:
$ aws configure --profile smx-lab
AWS Access Key ID [None]: AKIA3E3W34P42CSHXDH5
AWS Secret Access Key [None]: vTmqpOqefgJfse8i6QwzgpjgswPjHZ6h/oiQq4zf
Default region name [None]: us-west-2
Default output format [None]: json
Be sure to name your credentials profile 'smx-lab'.
Note: If you're using your own AWS account, you'll need to create and configure an AWS CLI profile named smx-lab.
We'll be doing all of our work in one Terraform file. Create a new directory on your computer, and then create a file named main.tf in it.
Next, we will create a Terraform configuration block that sets up the AWS provider. It pins the AWS provider version and the minimum Terraform version, and configures the provider with the AWS profile and region we want to use.
Append this code to the main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.22"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "smx-lab"
  region  = "us-west-2"
}
Next, we will create two data sources that give us access to information about the current AWS account and region. The first, "aws_caller_identity", provides information about the current AWS account, such as the account ID and user ARN. The second, "aws_region", provides information about the current AWS region, such as the region name. The resources below don't reference these data sources directly, but they are useful to have on hand when extending the solution.
Append this code to the main.tf file:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
Next, we will declare a variable called "event_bus_name" that holds a string value, with a default of "default" (the name of the account's default event bus). The resources below don't reference this variable directly, since the event rule lands on the default bus implicitly, but it is here if you want to parameterize the bus name later.
Append this code to the main.tf
file:
variable "event_bus_name" {
  type    = string
  default = "default"
}
Next, we will create an AWS Lambda function using the Terraform code below. This code will create a Lambda function called "CloudWatchScheduledEventFunction" that is based on a zip file, has a handler of "app.lambda_handler", uses an IAM role for permissions, and is written in Python 3.8.
Append this code to the main.tf file:
resource "aws_lambda_function" "lambda_function" {
  function_name    = "CloudWatchScheduledEventFunction"
  filename         = data.archive_file.lambda_zip_file.output_path
  source_code_hash = data.archive_file.lambda_zip_file.output_base64sha256
  handler          = "app.lambda_handler"
  role             = aws_iam_role.lambda_iam_role.arn
  runtime          = "python3.8"
}
Next, we will create an "archive_file" data source that packages our application code into a zip file. It takes the source file (our handler code) and writes a zip archive to the output path specified below. Terraform uses this archive to deploy our application code to the cloud.
Append this code to the main.tf file:
data "archive_file" "lambda_zip_file" {
  type        = "zip"
  source_file = "${path.module}/src/app.py"
  output_path = "${path.module}/lambda.zip"
}
We need to create a Python file in our project. This file will become our Lambda function. In the project directory, create a folder named src. In that folder, create a file named app.py, and add this code to it:
def lambda_handler(event, context):
    print("Hello World")
    return True
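Before deploying, you can sanity-check the handler locally. The snippet below is a standalone sketch: it repeats the same handler and calls it directly, with an empty dict and None standing in for the event and context objects Lambda would normally pass.

```python
# Standalone sketch: the same handler as src/app.py, invoked directly.
def lambda_handler(event, context):
    print("Hello World")
    return True

# An empty event and a None context stand in for what Lambda would pass.
result = lambda_handler({}, None)
print(result)  # prints: True
```

If this prints "Hello World" followed by True, the handler signature and return value are what the Lambda runtime expects.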
Next, we will create a data source, "lambda_basic_execution_role_policy", that looks up the AWS-managed IAM policy named "AWSLambdaBasicExecutionRole". We'll attach this policy to the role for our Lambda function; it grants the basic permissions a function needs to write its logs to CloudWatch.
Append this code to the main.tf file:
data "aws_iam_policy" "lambda_basic_execution_role_policy" {
  name = "AWSLambdaBasicExecutionRole"
}
Next, we will create an IAM role for our Lambda function using the Terraform code below. The role's name will start with the prefix "EventBridgeScheduledLambdaRole-", and the role will be assigned the basic execution policy from the data source above. The assume role policy allows the Lambda service to assume the role.
Append this code to the main.tf
file:
resource "aws_iam_role" "lambda_iam_role" {
  name_prefix         = "EventBridgeScheduledLambdaRole-"
  managed_policy_arns = [data.aws_iam_policy.lambda_basic_execution_role_policy.arn]

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
Next, we will create an AWS CloudWatch Event Rule (aka EventBridge rule) that fires every minute. This is done with the "aws_cloudwatch_event_rule" resource by setting schedule_expression to "rate(1 minute)", so the rule triggers once per minute.
Append this code to the main.tf file:
resource "aws_cloudwatch_event_rule" "trigger_every_minute" {
  schedule_expression = "rate(1 minute)"
}
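If a fixed rate doesn't fit your use case, EventBridge schedule expressions also support cron syntax (evaluated in UTC). As a sketch, a rule firing at 08:00 UTC on weekdays could look like this; the resource name "trigger_weekday_morning" is hypothetical and not used elsewhere in this solution:

```terraform
# Hypothetical alternative: a cron schedule instead of a rate.
# EventBridge cron fields: minutes hours day-of-month month day-of-week year.
resource "aws_cloudwatch_event_rule" "trigger_weekday_morning" {
  schedule_expression = "cron(0 8 ? * MON-FRI *)"
}
```

If you swap this in, remember to update the event target and Lambda permission to reference the new rule.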
Next, we will create an AWS CloudWatch Event Target that connects the rule to the Lambda function. The "aws_cloudwatch_event_target" resource takes two arguments: the name of the CloudWatch Event Rule that will trigger the Lambda function, and the ARN of the Lambda function itself. Here we use the "trigger_every_minute" rule and the ARN of the Lambda function we created earlier.
Append this code to the main.tf file:
resource "aws_cloudwatch_event_target" "target_lambda_function" {
  rule = aws_cloudwatch_event_rule.trigger_every_minute.name
  arn  = aws_lambda_function.lambda_function.arn
}
Next, we will create an AWS Lambda permission that allows CloudWatch to invoke our Lambda function. This permission will be created using the "aws_lambda_permission" resource. The action will be set to "lambda:InvokeFunction", the function_name will be set to the name of our Lambda function, the principal will be set to "events.amazonaws.com", and the source_arn will be set to the ARN of our CloudWatch event rule.
Append this code to the main.tf file:
resource "aws_lambda_permission" "allow_cloudwatch" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_function.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.trigger_every_minute.arn
}
Add this code to output the Lambda function ARN.
Append this code to the main.tf file:
output "CloudWatchScheduledEventFunction" {
  value       = aws_lambda_function.lambda_function.arn
  description = "CloudWatchScheduledEventFunction function ARN"
}
Now that we have all of our code written, we can deploy the project. Open a terminal, navigate to the project, and run these commands.
# initialize the project
$ terraform init
# plan the project
$ terraform plan
# apply the project
$ terraform apply
Open the AWS Lambda Console and find the function that was created as part of this project. From the function's page, open the CloudWatch Logs link. You should see a new log event every minute (wait several minutes to verify).
When you're finished, run the destroy command to tear everything down:
$ terraform destroy