Here is some information about this architecture.
Here are the steps you can follow to build this solution on your own.
When making an API call to OpenAI's GPT model, you can describe one or more of your functions. Each description includes the function name, an explanation of what the function does, and the parameters the function accepts. Then, if the GPT model decides it's appropriate to use a function, it responds with a payload that 1) tells your application which function to call, and 2) supplies the arguments for that call in JSON format.
This is a very powerful feature. It effectively lets GPT decide whether an external function should be called, and with what data. You can build complex workflows on top of it.
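To make this concrete, here's a sketch of the payload shape the model returns when it decides to call a function. The field layout follows the Chat Completions function-calling format; the message values here are illustrative, not real API output:

```python
import json

# illustrative shape of the assistant message GPT returns when it
# decides a function should be called (values are made up)
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_product_price",
        # note: arguments arrive as a JSON-encoded string, not a dict
        "arguments": '{"product_name": "laptop"}'
    }
}

# your application detects the function call and decodes the arguments
if "function_call" in message:
    args = json.loads(message["function_call"]["arguments"])
    print(args["product_name"])  # laptop
```

The key detail is that `arguments` is a JSON string, so your application must run it through `json.loads` before using it.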
If you're using the Skillmix Labs feature, open the lab settings (the beaker icon) on the right side of the code editor. Then, click the Start Lab button to start the lab environment.
Wait for the credentials to load. Then run this in the terminal:
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Be sure to name your credentials profile 'smx-lab'.
Note: If you're using your own AWS account, you'll need to ensure that you've created and configured an AWS CLI profile named smx-lab.
You will need your own OpenAI key for this project. Head over to openai.com and follow these steps.
Create an account.
Create an API Key by going to this page: https://platform.openai.com/account/api-keys.
Remember: save your key somewhere safe. Don't share it with anyone. You can also delete it after this lab as an extra precaution.
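One way to keep the key out of your source code entirely is to read it from an environment variable at startup. This is a sketch of that pattern; the variable name OPENAI_API_KEY is a common convention, assumed here:

```python
import os

# read the key from the environment instead of hard-coding it;
# in main.py you would then assign it:  openai.api_key = api_key
api_key = os.environ.get("OPENAI_API_KEY", "")

print("key loaded" if api_key else "key not set")
```

For a deployed Lambda function, the same value could be supplied through the function's environment variable configuration rather than pasted into the file.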
Once the lab has started, it’s time to create the Python file we’ll be working with. We’ll create the file in the lab environment (on the remote development server). Follow these steps to create it:
In the Files pane, click on the + document icon.
In the modal, name the file main.py.
Click the Create button.
Click on the main.py file to open it for editing.
Next, let's write the code. This file will contain two functions. First, it will contain a custom function that we'll inform GPT about.
Second, it will contain our Lambda handler function. The handler function will accept incoming requests, send them to GPT, and handle the GPT response.
With the main.py file open for editing, first, add this code:
import json
import openai

# our products "database"
products = {
    'laptop': 1000,
    'smartphone': 500,
    'tablet': 250,
    'headphones': 75,
    'keyboard': 25
}

def get_product_price(product_name):
    """Look up the product by its name and return
    its price, or indicate if it's not found.
    """
    price = products.get(product_name.lower())
    if price is not None:
        return f"{product_name}: ${price}"
    else:
        return "No product found"
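You can sanity-check this function locally before wiring it up to GPT. A quick usage example (the dictionary and function are repeated here so the snippet runs on its own):

```python
# our products "database" from main.py
products = {
    'laptop': 1000,
    'smartphone': 500,
    'tablet': 250,
    'headphones': 75,
    'keyboard': 25
}

def get_product_price(product_name):
    price = products.get(product_name.lower())
    if price is not None:
        return f"{product_name}: ${price}"
    else:
        return "No product found"

# the lookup is case-insensitive because of .lower()
print(get_product_price("Laptop"))   # Laptop: $1000
print(get_product_price("monitor"))  # No product found
```

Note that the returned string echoes the caller's capitalization while the lookup itself is lowercased.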
Code Review
import json: Imports the JSON library, which is used for parsing and generating JSON.
import openai: Imports the OpenAI library to facilitate interactions with the OpenAI API.
products: A dictionary that serves as our simple product "database", mapping product names to prices.
get_product_price(): This is the function we'll describe to GPT. It looks up a product by name (case-insensitively) and returns a string with its price, or "No product found" if the product isn't in the dictionary.
Still working with the main.py file, add the following new lines of code:
import json
import openai

# our products "database"
products = {
    'laptop': 1000,
    'smartphone': 500,
    'tablet': 250,
    'headphones': 75,
    'keyboard': 25
}

def get_product_price(product_name):
    """Look up the product by its name and return
    its price, or indicate if it's not found.
    """
    price = products.get(product_name.lower())
    if price is not None:
        return f"{product_name}: ${price}"
    else:
        return "No product found"

# API key for OpenAI (paste your key here)
openai.api_key = ""

def lambda_handler(event, context):
    print(event)
    body = json.loads(event['body'])
    messages = body['messages']

    functions = [
        {
            "name": "get_product_price",
            "description": "Gets the price of a product in our system.",
            "parameters": {
                "type": "object",
                "properties": {
                    "product_name": {
                        "type": "string",
                        "description": "A product name",
                    }
                },
                "required": ["product_name"],
            },
        }
    ]

    function_response = "No content"

    ai_response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        functions=functions,
        temperature=0
    )

    # check to see if GPT wants to call a function
    if "function_call" in ai_response["choices"][0]["message"]:
        available_functions = {
            "get_product_price": get_product_price
        }
        function_name = ai_response["choices"][0]["message"]["function_call"]["name"]
        function_args_str = ai_response["choices"][0]["message"]["function_call"]["arguments"]
        function_args = json.loads(function_args_str)
        function_to_call = available_functions[function_name]
        function_response = function_to_call(product_name=function_args.get("product_name"))

    return {
        'statusCode': 200,
        'body': json.dumps(function_response)
    }
Code Review
Body Extraction: Loads the JSON from event['body'] into the body variable.
Prompt Extraction: Extracts the 'messages' list from the body.
Functions List: Defines a list named functions with descriptions of available functions. It currently contains a single function description, get_product_price:
Description: Gets the price of a product in our system.
Parameters: Expects an object with a product_name property, which is a required string.
Initial Function Response: Sets a default value of "No content" for function_response.
AI Interaction: Calls the OpenAI API with the ChatCompletion.create method using the "gpt-4" model, passing the messages and the functions list, and setting the temperature to 0.
Function Call Check: Checks whether there's a "function_call" in the AI response message.
Available Functions Mapping: A dictionary mapping function names to actual function references. Currently it maps only get_product_price.
If the AI wants to call a function, the handler extracts the function name and arguments from the response, calls the matching function with those arguments, and stores the result in function_response.
Return: Returns a dictionary with a status code of 200 and a body containing function_response in JSON format.
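Before deploying, it helps to see the event shape the handler expects. A Lambda function URL delivers the HTTP POST body as a JSON string under event['body']; this sketch builds such an event locally and mirrors the handler's first two lines (no OpenAI call is made here):

```python
import json

# the chat messages a client would POST to the function URL
messages = [
    {"role": "system", "content": "You are a helpful store clerk."},
    {"role": "user", "content": "Can you do a price check on a Laptop?"}
]

# a Lambda function URL wraps the raw POST body as a JSON string
event = {"body": json.dumps({"messages": messages})}

# this mirrors the start of lambda_handler
body = json.loads(event['body'])
print(body['messages'][1]['content'])  # Can you do a price check on a Laptop?
```

This is why the handler calls json.loads on event['body'] rather than reading the payload directly: the body arrives as a string, not a parsed object.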
Next, let’s deploy this code to AWS Lambda. We’ll use the AWS CLI in the Skillmix Editor to complete this step.
Note: this requires that you previously ran aws configure
as specified above.
We need to perform a few tasks to complete the deployment. Go to the terminal now and enter these commands.
# install dependencies
$ apt-get update
$ apt-get -y install dialog
$ apt-get install zip
$ apt-get install less
# create the openai package .zip (needed dependency)
$ pip install openai -t ./package
$ cd package
$ zip -r ../myDeploymentPackage.zip .
# create the .zip that includes the package and python function
$ cd ..
$ zip -g myDeploymentPackage.zip main.py
# create an IAM trust policy document for our role
$ cat <<EOL > policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOL
# create the IAM Role for our lambda function
aws iam create-role \
--role-name LambdaExecutionRole \
--assume-role-policy-document file://policy.json
# attach the policy to our role
aws iam attach-role-policy \
--role-name LambdaExecutionRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
# create the lambda function
# be sure to replace YOUR_ACCOUNT_ID with your account ID
aws lambda create-function --function-name MyOpenAIFunction \
--zip-file fileb://myDeploymentPackage.zip \
--handler main.lambda_handler \
--runtime python3.10 \
--timeout 180 \
--role arn:aws:iam::YOUR_ACCOUNT_ID:role/LambdaExecutionRole
NOTE: The AWS CLI may display the response in a pager; press q to return to the prompt.
# create the function URL config
aws lambda create-function-url-config \
--function-name MyOpenAIFunction \
--auth-type NONE \
--cors "AllowOrigins"="*"
# add permissions to open the Lambda to public access
aws lambda add-permission \
--statement-id public-access \
--function-name MyOpenAIFunction \
--action lambda:InvokeFunctionUrl \
--principal "*" \
--function-url-auth-type NONE
# get the function URL
aws lambda get-function-url-config \
--function-name MyOpenAIFunction
With that done, we should be able to make a POST call to the function. Replace <your_function_url> with the function URL returned by the previous command.
curl -X POST <your_function_url> \
-H "Content-Type: application/json" \
-d '{"messages":[
{"role": "system", "content": "You are a helpful store clerk."},
{"role": "user", "content": "Can you do a price check on a Laptop?"}
]}'
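If you prefer Python to curl, the same request can be built with the standard library. This sketch constructs the request; the URL is a placeholder you'd replace with your own function URL before uncommenting the final lines:

```python
import json
import urllib.request

# placeholder: replace with the URL from get-function-url-config
function_url = "https://<your_function_url>"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful store clerk."},
        {"role": "user", "content": "Can you do a price check on a Laptop?"}
    ]
}

# build the same POST request the curl command sends
req = urllib.request.Request(
    function_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# uncomment once you've filled in your function URL
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The response body will be the JSON-encoded string returned by the Lambda handler.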
If you’re having any issues, use your credentials to log in to the AWS Console and go to the CloudWatch Logs dashboard. You’ll see the logs for this Lambda function. You can look at those logs to see what’s happening.