Lambda to Step Function

About this Architecture

The Terraform template sets up the necessary infrastructure for running an application in AWS. It deploys:

  1. A Lambda function, which is responsible for asynchronously triggering a Step Functions workflow by using the AWS SDK and passing the event body.

  2. A Step Functions workflow, which expresses the desired application logic.

  3. A Log group in Amazon CloudWatch Logs, where the workflow's execution logs are stored.

  4. The required IAM (Identity and Access Management) resources, which are necessary to securely access AWS services.

Once the Lambda function triggers the Step Functions workflow, it returns the start_execution response, which includes the execution ARN and start date.

How to Build This Solution

Follow these steps to build the solution on your own.

Get Your AWS Credentials

If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.

Wait for the credentials to load. Then run this in the terminal:

$ aws configure --profile smx-lab
AWS Access Key ID [None]: AKIA3E3W34P42CSHXDH5
AWS Secret Access Key [None]: vTmqpOqefgJfse8i6QwzgpjgswPjHZ6h/oiQq4zf
Default region name [None]: us-west-2
Default output format [None]: json

Be sure to name your credentials profile 'smx-lab'.

Note: If you're using your own AWS account, you'll need to create and configure an AWS CLI profile named smx-lab.

Create the Terraform File

We'll be doing all of our work in one Terraform file. Create a new directory on your computer, and then create a file named main.tf in it.

Create the Terraform & Provider Block

Next, we will create the Terraform and provider blocks. The terraform block pins the AWS provider version and the minimum Terraform version, and the provider block sets the AWS profile and region to use. Together they ensure that compatible versions are used and that the AWS provider is configured correctly.

Append this code to the main.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "smx-lab"
  region  = "us-west-2"
}

Create the Data Resources

Next, we will create two data sources that expose information about the current AWS identity and region. data "aws_caller_identity" "current" {} provides details about the caller, such as the account ID and user ID, and data "aws_region" "current" {} provides details about the configured region, such as its name and endpoint. We'll use these later to build ARNs and unique resource names.

Append this code to the main.tf file:

data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

Create the Lambda Function File

Next, we need to create the Python file that will be used in our Lambda function. In your working directory, create a folder named src. In that folder, create a file named LambdaFunction.py.

The code in this function will receive an event, and invoke the configured step function.

In the LambdaFunction.py file, add this code:

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import json
import boto3
import os
from aws_lambda_powertools import Logger

logger = Logger()
client = boto3.client('stepfunctions')
sfnArn = os.environ['SFN_ARN']

def lambda_handler(event, context):
    logger.info(f"Received Choice: {event['Choice']}")
    response = client.start_execution(
        stateMachineArn=sfnArn,
        input=json.dumps(event)
    )
    
    logger.info(f"Received Response: {response}")
    
    return {
        'statusCode': 200,
        'body': json.dumps(response,default=str)
    }
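Why default=str? The start_execution response includes startDate as a datetime, which json.dumps cannot serialize on its own. Here is a minimal sketch using a stubbed response dict (no real Step Functions call; the ARN and date are illustrative):

```python
import json
from datetime import datetime, timezone

# Stubbed response shaped like boto3's start_execution return value,
# which documents executionArn and startDate fields.
response = {
    "executionArn": "arn:aws:states:us-west-2:123456789012:execution:demo:abc123",
    "startDate": datetime(2023, 1, 1, tzinfo=timezone.utc),
}

# json.dumps(response) alone would raise TypeError on the datetime;
# default=str falls back to str() for unsupported types.
body = json.dumps(response, default=str)
print(body)
```

Without default=str, the handler would fail when building its return value.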

Create the Lambda File Zip Resource

We need to zip our file before sending it to the Lambda service. The code below points to the Python file we just created, and will output a zip file version of it in the root of our project.

Append this code to the main.tf file:

data "archive_file" "LambdaZipFile" {
  type        = "zip"
  source_file = "${path.module}/src/LambdaFunction.py"
  output_path = "${path.module}/LambdaFunction.zip"
}
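For reference, Terraform's filebase64sha256() is the base64-encoded SHA-256 of the file's raw contents; we feed it to source_code_hash below so that a changed zip triggers a redeploy. A rough Python equivalent, for illustration only:

```python
import base64
import hashlib

def filebase64sha256(path: str) -> str:
    """Base64-encoded SHA-256 digest of a file's raw bytes,
    mirroring what Terraform's filebase64sha256() computes."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")
```

When this value differs from the last applied state, Terraform knows the deployment package changed and updates the function.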

Create the Lambda Function Resource

Next, we will create the AWS Lambda function. This code creates a function from the zip file produced by data.archive_file.LambdaZipFile, with the following settings:

  • The IAM role specified in the aws_iam_role.LambdaRole.arn

  • Handler of "LambdaFunction.lambda_handler"

  • Runtime of "python3.9"

  • The AWSLambdaPowertoolsPython Lambda Layer

  • Environment variable of SFN_ARN set to the ARN of the aws_sfn_state_machine.sfn_state_machine

Append this code to the main.tf file:

resource "aws_lambda_function" "MyLambdaFunction" {
  function_name    = "lambda-sfn-terraform-demo-${data.aws_caller_identity.current.account_id}"
  filename         = data.archive_file.LambdaZipFile.output_path
  source_code_hash = filebase64sha256(data.archive_file.LambdaZipFile.output_path)
  role             = aws_iam_role.LambdaRole.arn
  handler          = "LambdaFunction.lambda_handler"
  runtime          = "python3.9"
  layers           = [
    "arn:aws:lambda:${data.aws_region.current.name}:017000801446:layer:AWSLambdaPowertoolsPython:15"
  ]

  environment {
    variables = {
      SFN_ARN = aws_sfn_state_machine.sfn_state_machine.arn
    }
  }
}

Create the Lambda IAM Role Resource

Next, we will create an IAM role for Lambda. This code will create an IAM role with an assume role policy that allows the Lambda service to assume the role. This will allow Lambda to access other AWS services on your behalf.

Append this code to the main.tf file:

resource "aws_iam_role" "LambdaRole" {
  # uncomment the 'permissions_boundary' argument if running this lab on skillmix.io 
  # permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
  assume_role_policy = <<POLICY1
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "Service" : "lambda.amazonaws.com"
      },
      "Action" : "sts:AssumeRole"
    }
  ]
}
POLICY1
}

Create the Lambda IAM Role Policy Resource

Next, we will create an IAM policy for the Lambda function. The policy allows the function to write to its CloudWatch log group (create log streams and put log events) and to start executions of the state machine.

Append this code to the main.tf file:

resource "aws_iam_policy" "LambdaPolicy" {
  policy = <<POLICY2
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource" : "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/${aws_lambda_function.MyLambdaFunction.function_name}:*:*"
    },
    {
        "Effect": "Allow",
        "Action": [
                "states:StartExecution"
            ],
        "Resource" : "${aws_sfn_state_machine.sfn_state_machine.arn}"
    }
  ]
}
POLICY2
}

Attach the Lambda Policy to the Role Resource

Next, we will create an IAM role policy attachment that will allow our Lambda role to access the policy we created.

Append this code to the main.tf file:

resource "aws_iam_role_policy_attachment" "LambdaPolicyAttachment" {
  role       = aws_iam_role.LambdaRole.name
  policy_arn = aws_iam_policy.LambdaPolicy.arn
}

Create the Step Function IAM Role Resource

Next, we will create an IAM role for our state machine. This code will create an IAM role with an assume role policy that allows the service "states.amazonaws.com" to assume the role. This will allow the state machine to access the necessary resources.

Append this code to the main.tf file:

resource "aws_iam_role" "StateMachineRole" {
  # uncomment the 'permissions_boundary' argument if running this lab on skillmix.io 
  # permissions_boundary = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/LabUserNewResourceBoundaryPolicy"
  assume_role_policy = <<POLICY3
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "Service" : "states.amazonaws.com"
      },
      "Action" : "sts:AssumeRole"
    }
  ]
}
POLICY3
}

Create the Step Function IAM Policy Resource

Next, we will create an IAM policy for the state machine. This policy allows it to create, get, update, delete, and list log deliveries, to put and describe resource policies, and to describe log groups. Step Functions needs these permissions to deliver execution logs to CloudWatch Logs.

Append this code to the main.tf file:

resource "aws_iam_policy" "StateMachineLogDeliveryPolicy" {
  policy = <<POLICY4
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : [
        "logs:CreateLogDelivery",
        "logs:GetLogDelivery",
        "logs:UpdateLogDelivery",
        "logs:DeleteLogDelivery",
        "logs:ListLogDeliveries",
        "logs:PutResourcePolicy",
        "logs:DescribeResourcePolicies",
        "logs:DescribeLogGroups"
      ],
      "Resource" : "*"
    }
  ]
}
POLICY4
}

Attach the Step Function Policy to the Role Resource

Next, we will create an IAM role policy attachment that will attach the policy we just created to the IAM role we created earlier.

Append this code to the main.tf file:

resource "aws_iam_role_policy_attachment" "StateMachinePolicyAttachment" {
  role       = aws_iam_role.StateMachineRole.name
  policy_arn = aws_iam_policy.StateMachineLogDeliveryPolicy.arn
}

Create the CloudWatch Log Group for Lambda

Next, create a CloudWatch log group for our Lambda resources.

Append this code to the main.tf file:

resource "aws_cloudwatch_log_group" "MyLambdaLogGroup" {
  name              = "/aws/lambda/${aws_lambda_function.MyLambdaFunction.function_name}"
  retention_in_days = 60
}

Create the CloudWatch Log Group for Step Function

Next, create a CloudWatch log group for our Step Function resources.

Append this code to the main.tf file:

resource "aws_cloudwatch_log_group" "MySFNLogGroup" {
  name_prefix       = "/aws/vendedlogs/states/StateMachine-terraform-"
  retention_in_days = 60
}

Create the Step Function State Machine Resource

Next, we will create the AWS Step Functions state machine. The definition exercises several state types: a Pass state, a Wait state, a Choice state, a Parallel state, a Succeed state, a Pass state used for error handling, and a Fail state. The code also configures logging for the state machine: the log destination, whether execution data is included, and an ALL log level.

Take note of the different states. When we test the solution, we will trigger these different states.

Append this code to the main.tf file:

resource "aws_sfn_state_machine" "sfn_state_machine" {
  name     = "lambda-sfn-demo-${data.aws_caller_identity.current.account_id}"
  role_arn = aws_iam_role.StateMachineRole.arn

  definition = <<SFN
{
  "Comment": "State Machine example with various state types",
  "StartAt": "Pass State",
  "States": {
    "Pass State": {
      "Comment": "A Pass state passes its input to its output, without performing work. Pass states are useful when constructing and debugging state machines.",
      "Type": "Pass",
      "Next": "Wait State"
    },
    "Wait State": {
      "Comment": "A Wait state delays the state machine from continuing for a specified time. You can choose either a relative time, specified in seconds from when the state begins, or an absolute end time, specified as a timestamp.",
      "Type": "Wait",
      "Seconds": 3,
      "Next": "Choice State"
    },
    "Choice State": {
      "Comment": "A Choice state adds branching logic to a state machine.",
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Choice",
          "StringEquals": "A",
          "Next": "Succeed State"
        },
        {
          "Variable": "$.Choice",
          "StringEquals": "B",
          "Next": "Parallel State"
        }
      ],
      "Default": "Error Handling State"
    },
    "Parallel State": {
      "Comment": "A Parallel state can be used to create parallel branches of execution in your state machine.",
      "Type": "Parallel",
      "Next": "Succeed State",
      "Branches": [
        {
          "StartAt": "Branch 1",
          "States": {
            "Branch 1": {
              "Type": "Pass",
              "Parameters": {
                "comment.$": "States.Format('Branch 1 Processing of Choice {}', $.Choice)"
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Branch 2",
          "States": {
            "Branch 2": {
              "Type": "Pass",
              "Parameters": {
                "comment.$": "States.Format('Branch 2 Processing of Choice {}', $.Choice)"
              },
              "End": true
            }
          }
        }
      ]
    },
    "Succeed State": {
      "Type": "Succeed",
      "Comment": "A Succeed state stops an execution successfully. The Succeed state is a useful target for Choice state branches that don't do anything but stop the execution."
    },
    "Error Handling State": {
      "Type": "Pass",
      "Parameters": {
        "error.$": "States.Format('{} is an invalid Choice.',$.Choice)"
      },
      "Next": "Fail State"
    },
    "Fail State": {
      "Type": "Fail"
    }
  }
}
SFN

  logging_configuration {
    log_destination        = "${aws_cloudwatch_log_group.MySFNLogGroup.arn}:*"
    include_execution_data = true
    level                  = "ALL"
  }
}
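The Choice state's branching above can be sketched as a plain function (a hypothetical helper for illustration only, not part of the deployment):

```python
def route_choice(choice: str) -> str:
    """Mirror the Choice state's rules: 'A' goes straight to Succeed,
    'B' runs the parallel branches, and anything else falls through
    to the error-handling path."""
    if choice == "A":
        return "Succeed State"
    if choice == "B":
        return "Parallel State"
    return "Error Handling State"
```

We'll exercise all three paths when we test the solution below.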

Create the Terraform Outputs

Lastly, we will configure Terraform to output some settings. Append this code to the main.tf file:

output "LambdaFunctionName" {
  value       = aws_lambda_function.MyLambdaFunction.function_name
  description = "The Lambda Function name"
}

output "CloudWatchLogName" {
  value       = "/aws/lambda/${aws_lambda_function.MyLambdaFunction.function_name}"
  description = "The Lambda Function Log Group"
}

output "StepFunction-Name" {
  value       = aws_sfn_state_machine.sfn_state_machine.name
  description = "The Step Function Name"
}

Deploying the Project

Now that we have all of our code written, we can deploy the project. Open a terminal, navigate to the project, and run these commands.

# initialize the project 
$ terraform init 

# plan the project 
$ terraform plan 

# apply the project 
$ terraform apply

Run the Project Tests

We will test this solution using the AWS CLI. If you don't have that installed, go do that now and come back.

To test the solution, run the commands below. Replace {LambdaProxyArn} with the LambdaFunctionName value from the Terraform outputs, and use your own named AWS CLI profile with --profile.

aws lambda invoke --function-name {LambdaProxyArn} --payload '{ "Choice": "A" }' responseA.json --profile smx-lab
aws lambda invoke --function-name {LambdaProxyArn} --payload '{ "Choice": "B" }' responseB.json --profile smx-lab
aws lambda invoke --function-name {LambdaProxyArn} --payload '{ "Choice": "C" }' responseC.json --profile smx-lab

Note: with AWS CLI v2, you may also need to add --cli-binary-format raw-in-base64-out so the JSON payload is sent as-is.

Now go to the Step Functions console in the appropriate region to view the executions and their results.
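Each invoke writes the Lambda's return value to the named output file; its body field is itself a JSON-encoded string containing the start_execution response. A hypothetical sketch of unpacking that structure, using illustrative sample data rather than a real response file:

```python
import json

# Build an illustrative payload the way the Lambda does: the outer dict is
# the function's return value, and "body" is a JSON-encoded string.
inner = {
    "executionArn": "arn:aws:states:us-west-2:123456789012:execution:demo:abc",
    "startDate": "2023-01-01 00:00:00+00:00",
}
sample = json.dumps({"statusCode": 200, "body": json.dumps(inner)})

# Reading responseA.json works the same way: parse the file's JSON,
# then parse the nested "body" string.
payload = json.loads(sample)
body = json.loads(payload["body"])
print(payload["statusCode"], body["executionArn"])
```

The same two-step parse applies to responseB.json and responseC.json.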

Destroy the Project

When you're all done, run the destroy command:

$ terraform destroy

Source

This project was sourced from the AWS Repo: https://github.com/aws-samples/serverless-patterns