Here is some information about this architecture.
In this architecture, EventBridge acts as the event processing service. Events arrive at EventBridge, which is configured to look for specific event patterns. When an event matches a pattern, EventBridge sends it to a Step Functions state machine we have created. The state machine then runs, examining the event contents to decide how to process it.
This system is a great way to analyze incoming events and kick off workflows whenever there's a pattern match.
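To make this concrete, here's a hypothetical event as EventBridge would deliver it (the envelope fields like id, account, and time are filled in by EventBridge; the source value and the Path field inside detail are the ones our rule and state machine will match on later):
{
  "version": "0",
  "id": "00000000-0000-0000-0000-000000000000",
  "detail-type": "message",
  "source": "demo.sfn",
  "account": "123456789012",
  "time": "2023-01-01T00:00:00Z",
  "region": "us-west-2",
  "resources": [],
  "detail": {
    "Path": "A"
  }
}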
Here are the steps you can follow to build this solution on your own.
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run this in the terminal:
$ aws configure --profile smx-lab
AWS Access Key ID [None]: <access key from the lab panel>
AWS Secret Access Key [None]: <secret key from the lab panel>
Default region name [None]: us-west-2
Default output format [None]: json
Be sure to name your credentials profile 'smx-lab'.
Note: If you're using your own AWS account, you'll need to ensure that you've created and configured an AWS CLI profile named smx-lab.
We'll be doing all of our work in one Terraform file. Create a new directory somewhere on your computer, then create a file named main.tf in it.
Next, we will create the Terraform configuration for the AWS provider. It pins the AWS provider version and the minimum Terraform version, and it points the provider at the AWS profile and region we want to use.
Append this code to the main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "smx-lab"
  region  = "us-west-2"
}
Next, we will create a data source that gives us access to information about the current AWS caller identity, such as the account ID, user ID, and ARN. We'll use the account ID later to build the permissions boundary ARN and a unique state machine name.
Append this code to the main.tf file:
data "aws_caller_identity" "current" {}
Next, we will create an IAM role for EventBridge. Its assume role (trust) policy allows the events.amazonaws.com service to assume the role; we'll attach the permissions in the next two steps.
Append this code to the main.tf file:
resource "aws_iam_role" "EventBridgeRole" {
# uncomment the 'permissions_boundary' argument if running this lab on skillmix.io
# permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
assume_role_policy = <<POLICY1
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Principal" : {
"Service" : "events.amazonaws.com"
},
"Action" : "sts:AssumeRole"
}
]
}
POLICY1
}
Next, we will create an AWS IAM policy that allows the states:StartExecution action on our state machine, which is what EventBridge needs to kick off an execution. The policy document is JSON embedded in the Terraform code, and its Resource field references the state machine we define further down; Terraform works out the dependency ordering for us.
Append this code to the main.tf file:
resource "aws_iam_policy" "EventBridgePolicy" {
policy = <<POLICY3
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"states:StartExecution"
],
"Resource" : "${aws_sfn_state_machine.sfn_state_machine.arn}"
}
]
}
POLICY3
}
Now we can attach the EventBridge policy to its respective role.
Append this code to the main.tf file:
resource "aws_iam_role_policy_attachment" "EventBridgePolicyAttachment" {
role = aws_iam_role.EventBridgeRole.name
policy_arn = aws_iam_policy.EventBridgePolicy.arn
}
Next, we will create an IAM role for our state machine. Its assume role policy allows the states.amazonaws.com service to assume the role, so the state machine can use whatever permissions we attach to it.
Append this code to the main.tf file:
resource "aws_iam_role" "StateMachineRole" {
# uncomment the 'permissions_boundary' argument if running this lab on skillmix.io
# permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
assume_role_policy = <<POLICY2
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Principal" : {
"Service" : "states.amazonaws.com"
},
"Action" : "sts:AssumeRole"
}
]
}
POLICY2
}
Next, we will create an IAM policy that lets the state machine deliver its logs to CloudWatch Logs. It grants the log delivery actions (create, get, update, delete, list) along with the resource policy and log group actions that log delivery requires; these permissions have to be granted on all resources, hence the "*" in the Resource field.
Append this code to the main.tf file:
resource "aws_iam_policy" "StateMachineLogDeliveryPolicy" {
policy = <<POLICY4
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"logs:CreateLogDelivery",
"logs:GetLogDelivery",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:ListLogDeliveries",
"logs:PutResourcePolicy",
"logs:DescribeResourcePolicies",
"logs:DescribeLogGroups"
],
"Resource" : "*"
}
]
}
POLICY4
}
Now we can attach the state machine's log delivery policy to its respective role.
Append this code to the main.tf file:
resource "aws_iam_role_policy_attachment" "StateMachinePolicyAttachment" {
role = aws_iam_role.StateMachineRole.name
policy_arn = aws_iam_policy.StateMachineLogDeliveryPolicy.arn
}
We need a log group to store our Step Functions logs. Note the /aws/vendedlogs/ prefix: AWS recommends it for log groups that receive vended logs like these, since it helps avoid CloudWatch Logs resource policy size limits.
Append this code to the main.tf file:
resource "aws_cloudwatch_log_group" "MyLogGroup" {
name_prefix = "/aws/vendedlogs/states/StateMachine-terraform-"
}
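Optionally, if you'd rather not keep lab logs forever, the aws_cloudwatch_log_group resource also accepts a retention_in_days argument. A variant of the block above (use it instead of, not in addition to, the one you just added):
resource "aws_cloudwatch_log_group" "MyLogGroup" {
  name_prefix       = "/aws/vendedlogs/states/StateMachine-terraform-"
  retention_in_days = 7 # expire lab logs after a week
}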
Next, we will create the AWS Step Functions state machine. Its definition starts with a Choice state that routes the execution based on the Path value in the event detail, followed by two Pass states, one for each path, and a Fail state as the default. The code also configures logging for the state machine, sending all log data to the CloudWatch log group.
Append this code to the main.tf file:
resource "aws_sfn_state_machine" "sfn_state_machine" {
name = "eventbridge-state-machine-demo-${data.aws_caller_identity.current.account_id}"
role_arn = aws_iam_role.StateMachineRole.arn
definition = <<SFN
{
"Comment": "Simple State Machine with Choice",
"StartAt": "WhichPath?",
"States": {
"WhichPath?": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.detail.Path",
"StringEquals": "A",
"Next": "PathA"
},
{
"Variable": "$.detail.Path",
"StringEquals": "B",
"Next": "PathB"
}
],
"Default": "Fail"
},
"PathA": {
"Type": "Pass",
"End": true
},
"PathB": {
"Type": "Pass",
"End": true
},
"Fail": {
"Type": "Fail"
}
}
}
SFN
logging_configuration {
log_destination = "${aws_cloudwatch_log_group.MyLogGroup.arn}:*"
include_execution_data = true
level = "ALL"
}
}
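Note that the Choice state compares $.detail.Path, not $.Path: EventBridge hands the whole event envelope to the state machine, so our Detail payload lands under the detail key. Once everything is deployed, you could also exercise the state machine directly, bypassing EventBridge, by mimicking that shape. A sketch (substitute your real state machine ARN):
# start an execution directly with an envelope-shaped input
$ aws stepfunctions start-execution \
    --state-machine-arn <your-state-machine-arn> \
    --input '{"detail":{"Path":"A"}}' \
    --profile smx-lab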
Next, we will create the EventBridge rule (the Terraform resource is still named aws_cloudwatch_event_rule). The event pattern matches events whose source is "demo.sfn" and whose account is the one associated with the current caller identity.
Append this code to the main.tf file:
resource "aws_cloudwatch_event_rule" "MyEventRule" {
event_pattern = <<PATTERN
{
"account": ["${data.aws_caller_identity.current.account_id}"],
"source": ["demo.sfn"]
}
PATTERN
}
Next, we will create a target for the rule. The target points the rule at the state machine's ARN and uses the EventBridge IAM role we created earlier, which is what authorizes the rule to start executions.
Append this code to the main.tf file:
resource "aws_cloudwatch_event_target" "SFNTarget" {
rule = aws_cloudwatch_event_rule.MyEventRule.name
arn = aws_sfn_state_machine.sfn_state_machine.arn
role_arn = aws_iam_role.EventBridgeRole.arn
}
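By default the target passes the entire event envelope to the state machine, which is exactly what our Choice state expects. If you ever wanted to forward only the detail payload, the target resource also accepts an input_path argument. A hypothetical variant (the state machine would then have to match on $.Path instead):
resource "aws_cloudwatch_event_target" "SFNTarget" {
  rule       = aws_cloudwatch_event_rule.MyEventRule.name
  arn        = aws_sfn_state_machine.sfn_state_machine.arn
  role_arn   = aws_iam_role.EventBridgeRole.arn
  input_path = "$.detail" # forward only the detail object
}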
Let's output the names of the log group and the state machine. Append this code to the main.tf file:
output "CW-Logs-Stream-Name" {
value = aws_cloudwatch_log_group.MyLogGroup.id
description = "The CloudWatch Log Group Name"
}
output "StepFunction-Name" {
value = aws_sfn_state_machine.sfn_state_machine.name
description = "The Step Function Name"
}
Now that we have all of our code written, we can deploy the project. Open a terminal, navigate to the project, and run these commands.
# initialize the project
$ terraform init
# plan the project
$ terraform plan
# apply the project
$ terraform apply
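After the apply finishes, Terraform prints the two outputs we defined. You can reprint them at any time:
# show the log group and state machine names again
$ terraform output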
Create the Test Files
We will use three different JSON files in our test use cases. Create the following three files in your project directory (the same place as main.tf). In each one, Source matches the rule's source pattern, and Detail is a JSON-encoded string carrying the Path value the state machine branches on.
./event-A.json
[
  {
    "DetailType": "message",
    "Source": "demo.sfn",
    "Detail": "{\"Path\":\"A\"}"
  }
]
./event-B.json
[
  {
    "DetailType": "message",
    "Source": "demo.sfn",
    "Detail": "{\"Path\":\"B\"}"
  }
]
./event-Fail.json
[
  {
    "DetailType": "message",
    "Source": "demo.sfn",
    "Detail": "{\"Path\":\"C\"}"
  }
]
We will test this solution using the AWS CLI. If you don't have that installed, go do that now and come back.
To test the solution, run these commands. Note that --profile should be your named AWS CLI profile.
# run the first test
$ aws events put-events --entries file://event-A.json --profile smx-lab
# run the second test
$ aws events put-events --entries file://event-B.json --profile smx-lab
# run the third test
$ aws events put-events --entries file://event-Fail.json --profile smx-lab
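Each put-events call should return an Entries list with an EventId and a FailedEntryCount of 0. To see what the state machine did with each event, list its executions; you should see SUCCEEDED runs for paths A and B and a FAILED run for path C, which falls through to the Fail state. A sketch (substitute the ARN reported by the first command):
# find the state machine ARN, then list its executions
$ aws stepfunctions list-state-machines --profile smx-lab
$ aws stepfunctions list-executions \
    --state-machine-arn <your-state-machine-arn> \
    --profile smx-lab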
When you're all done, run the destroy command:
$ terraform destroy
Source
This project was sourced from the AWS serverless-patterns repo: https://github.com/aws-samples/serverless-patterns