Here is an overview of the architecture you’ll build.
In this project you will learn how to create a Python chat function that uses OpenAI GPT models. You’ll learn how to write the function and then deploy it to AWS Lambda. Then, we’ll show you how to configure the Lambda Public URL for external access.
Here are the steps you can follow to build this solution on your own.
Before we get started, let’s level set on what OpenAI’s GPT tech is all about. OpenAI GPT models generate textual responses based on the information they receive, often termed “prompts”. Crafting a prompt is akin to “instructing” the GPT model: you lay out clear directions, or showcase examples, that guide the model to accomplish a particular task.
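To make that concrete, here is a minimal sketch of what a chat prompt looks like in code. The system message carries the instructions and the user message carries the request; the wording below is just an illustration, not part of the lab code.

# a chat prompt is just a list of role/content messages
messages = [
    {"role": "system", "content": "You are a helpful travel guide."},   # the "instructions"
    {"role": "user", "content": "Suggest three sights to see in Rome."} # the actual request
]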
There are a lot of use cases for this. Per OpenAI’s site, you can use them for the following:
Draft documents
Write computer code
Answer questions about a knowledge base
Analyze texts
Create conversational agents
Give software a natural language interface
Tutor in a range of subjects
And much more!
There are different GPT models that are either a progression of a certain type, or for different use cases. For example, GPT-4 is the latest large multimodal model. DALL-E is the model used for image generation from text prompts.
In this lab you’ll be working with GPT models.
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run this in the terminal:
$ aws configure --profile smx-lab
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Be sure to name your credentials profile 'smx-lab'.
Note: If you're using your own AWS account you'll need to ensure that you've created and configured a named AWS CLI profile named smx-lab.
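To confirm the profile works before moving on, you can make a quick identity check from Python. This is an optional sketch, assuming boto3 is available in your environment; it simply prints the account ID that the smx-lab profile authenticates as. (If you want the CLI commands later in this lab to pick up the profile automatically, you can also run export AWS_PROFILE=smx-lab in the terminal.)

# optional: verify the smx-lab profile can authenticate (assumes boto3 is installed)
import boto3

session = boto3.Session(profile_name="smx-lab")
identity = session.client("sts").get_caller_identity()
print(identity["Account"])  # prints your AWS account ID if the credentials work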
You will need your own OpenAI key for this project. Head over to openai.com and follow these steps.
Create an account.
Create an API Key by going to this page: https://platform.openai.com/account/api-keys.
Remember: save your key somewhere safe and don’t share it with anyone. You can also delete it after this lab as an extra precaution.
Once the lab has started, it’s time to create the Python file we’ll be working with. We’ll create the file in the lab environment (on the remote development server). Follow these steps to create it:
In the Files pane, click on the + document icon.
In the modal, name the file main.py.
Click the Create button.
Click on the main.py file to open it for editing.
It’s now time to write the Python function. This is a relatively simple function. The first thing to note is that we will create it according to the AWS Lambda specification. Mainly, the function needs to accept the event and context objects, and return a status code and body.
In the main.py file, enter this Python code, click the Save button in the editor, and we’ll review the code afterwards.
import json
import openai

# API Key for OpenAI (replace with your own key)
openai.api_key = "your_api_key"

def lambda_handler(event, context):
    print(event)

    # the Function URL delivers the POST body as a JSON string
    body = json.loads(event['body'])
    prompt = body['prompt']

    messages = [
        {
            "role": "system",
            "content": "You are a legendary fiction writer. Act like you normally do."
        },
        {
            "role": "user",
            "content": f"{prompt}."
        }
    ]

    response_text = ""

    # call the Chat Completions API (pre-1.0 openai SDK interface)
    ai_response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        temperature=0,
    )

    response_text += ai_response.choices[0]['message']['content'].strip()

    return {
        'statusCode': 200,
        'body': json.dumps(response_text)
    }
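Optionally, before deploying, you can sanity-check the handler on your own machine. This is a minimal sketch, not part of the lab proper: it assumes you’ve installed the pre-1.0 openai package locally and put a real key in main.py, and it fakes the event shape that a Function URL POST delivers.

# local_test.py — optional local sanity check for the handler
import json
from main import lambda_handler

# fake the event shape a Lambda Function URL POST produces
event = {"body": json.dumps({"prompt": "Describe the sea in one sentence"})}

result = lambda_handler(event, None)  # our handler never touches context
print(result['statusCode'])
print(json.loads(result['body']))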
Code Review
import json: Imports the JSON library, which is used for parsing and generating JSON.
import openai: Imports the OpenAI library to facilitate interactions with the OpenAI API.
The openai.api_key is set to a hardcoded string. This provides the API key for the OpenAI service, allowing the function to authenticate with the OpenAI API. Note that in production we would want to use an environment variable or a secrets manager instead; see the sketch after this review.
The function lambda_handler is defined with two parameters: event and context. These are standard parameters for AWS Lambda functions.
event: Contains data about the incoming request, e.g., headers, query string parameters, and body.
context: Provides information about the runtime and configuration settings.
The body of the incoming event is extracted and parsed from JSON format into a Python dictionary.
The parsed body is expected to contain a key 'prompt', the value of which is taken as the prompt to send to the GPT model.
A predefined message list is set up with two messages. The first message is a system instruction that tells the GPT model it is a "legendary fiction writer". The second message is the user prompt extracted from the incoming request; its role is to instruct or query the model based on the provided prompt.
The openai.ChatCompletion.create() method is called with:
A specific model ID: "gpt-4".
The previously created messages.
A temperature value of 0, which makes the model's outputs nearly deterministic by favoring the most likely tokens.
The response from the model is extracted from the choices attribute and appended to response_text.
The function returns a response with a statusCode of 200, indicating success. The body of the response contains the AI model's generated content as a JSON string.
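As noted above, a hardcoded key is acceptable for a short-lived lab but not for production. A minimal alternative, assuming you set an OPENAI_API_KEY environment variable on the function (for example via aws lambda update-function-configuration), would be:

# read the key from the environment instead of hardcoding it
# (assumes an OPENAI_API_KEY environment variable is set on the Lambda function)
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

For stronger protection you could keep the key in AWS Secrets Manager and fetch it with boto3 at cold start, but an environment variable keeps this lab simple.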
Next, let’s deploy this code to AWS Lambda. We’ll use the AWS CLI in the Skillmix Editor to complete this step.
Note: this requires that you previously ran aws configure as specified above.
Deployment involves a few tasks: packaging the dependencies and code into a .zip, creating an IAM role, and creating the function itself. Go to the terminal now and enter these commands.
# install dependencies
$ apt-get update
$ apt-get -y install dialog
$ apt-get -y install zip
$ apt-get -y install less

# create the openai package .zip (needed dependency)
# pin to the pre-1.0 SDK, which matches the openai.ChatCompletion interface our code uses
$ pip install "openai<1.0" -t ./package
$ cd package
$ zip -r ../myDeploymentPackage.zip .

# create the .zip that includes the package and python function
$ cd ..
$ zip -g myDeploymentPackage.zip main.py
# create an IAM trust policy document for our role
$ cat <<EOL > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOL
# create the IAM Role for our lambda function
$ aws iam create-role \
    --role-name LambdaExecutionRole \
    --assume-role-policy-document file://policy.json

# attach the policy to our role
$ aws iam attach-role-policy \
    --role-name LambdaExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
# create the lambda function
# be sure to replace YOUR_ACCOUNT_ID with your account ID
# (you can find it with: aws sts get-caller-identity)
$ aws lambda create-function --function-name MyOpenAIFunction \
    --zip-file fileb://myDeploymentPackage.zip \
    --handler main.lambda_handler \
    --runtime python3.10 \
    --timeout 180 \
    --role arn:aws:iam::YOUR_ACCOUNT_ID:role/LambdaExecutionRole

NOTE: The CLI may show the JSON response in a pager; press q to exit. If the command fails because the new role can’t be assumed yet, wait a few seconds and retry; IAM changes can take a moment to propagate.
# create the function URL config
$ aws lambda create-function-url-config \
    --function-name MyOpenAIFunction \
    --auth-type NONE \
    --cors AllowOrigins="*"

# add permissions to open the Lambda to public access
$ aws lambda add-permission \
    --statement-id public-access \
    --function-name MyOpenAIFunction \
    --action lambda:InvokeFunctionUrl \
    --principal "*" \
    --function-url-auth-type NONE

# get the function URL
$ aws lambda get-function-url-config \
    --function-name MyOpenAIFunction
With that done, we should be able to make a POST call to the function. Replace <your_function_url> with the function URL returned by the last command.
$ curl -X POST <your_function_url> \
    -H "Content-Type: application/json" \
    -d '{"prompt":"what movies are great to watch with family?"}'
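If you’d rather test from Python, here is an equivalent of that curl call. It’s a small sketch assuming the requests package is installed; <your_function_url> is again a placeholder for your real URL.

# call the Function URL from Python (assumes: pip install requests)
import requests

url = "<your_function_url>"  # replace with your real function URL
resp = requests.post(url, json={"prompt": "what movies are great to watch with family?"})
print(resp.status_code)
print(resp.json())  # the handler returns the model's text as a JSON string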
If you’re having any issues, use your credentials to log in to the AWS Console and go to the CloudWatch Logs dashboard. You’ll find a log group for this Lambda function; its logs show exactly what the handler is doing, including the event we print at the top of the function.