A few months ago, OpenAI introduced a new capability to its API, enhancing its most recent models to accept additional parameters for function calling. These models are now fine-tuned to determine when it's relevant to call one of these functions. In this article, we'll explore how to use this feature effectively, along with tips and tricks for optimal results.
We'll use the OpenAI SDK to demonstrate this new capability, imagining a function we find valuable to provide to the language model. We'll delve into what makes a function "interesting" and discuss various use cases for this new parameter.
Table of Contents:
Setting Up Working Environment
Defining a Function
Passing Function Definitions to the Language Model
Processing Responses from the Language Model
Handling Non-Function-Related Messages
Forcing Function Calls with Parameters
Integrating Function Results into Language Model Responses
Token Usage and Limitations
Conclusion and Next Steps
1. Setting Up Working Environment
We will use the OpenAI Python library to access the OpenAI API. You can install this Python library using pip:
pip install openai
Next, we will import openai and set the OpenAI API key, which is a secret key you can get from the OpenAI website. It is better to set it as an environment variable to keep it safe if you share your code. We will use OpenAI's GPT-3.5 Turbo model and the chat completions endpoint.
import os
import openai
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
2. Defining a Function
Let's start by defining the get_current_weather function. This is the example OpenAI themselves used when they first released this functionality.
It's a good example because getting the current weather is something the language model can't do by itself, and connecting language models to external functions like this is a common need.
import json

# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)
In this example, we hard-code the returned information, but in production, this could be hitting a weather API or some external source of knowledge.
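For illustration, here is a minimal sketch of what a production version might look like. The endpoint URL and query parameter names below are made up for this example, not a real weather service:

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- not a real weather service
WEATHER_API_URL = "https://api.example-weather.com/v1/current"

def build_weather_request(location, unit="fahrenheit"):
    """Build the URL we would send to the (hypothetical) weather API."""
    query = urlencode({"location": location, "unit": unit})
    return f"{WEATHER_API_URL}?{query}"

# In production you would fetch this URL (e.g. with requests.get)
# and return the API's JSON response instead of hard-coded data.
print(build_weather_request("Boston, MA"))
```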
OpenAI has exposed a new parameter called functions, through which you can pass a list of function definitions. The full function definition for get_current_weather looks like this:
# define a function
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
As you can see, it's a list containing a single entry, because we're only passing one function. Each entry is a JSON object with a few fields: a name, a description, and a parameters object.
The parameters object follows JSON Schema: it has a properties object, and each property has a type and a description. The unit property is an enum, because we want it to be either Celsius or Fahrenheit, and we can express that constraint here. We can also specify that location is the only required parameter.
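Because the parameters block is standard JSON Schema, you can also reuse it to validate the arguments the model eventually returns. Here is a minimal hand-rolled check covering only the required, string, and enum cases used above (in practice you'd reach for a full validator such as the jsonschema library):

```python
import json

def validate_arguments(arguments_json, parameters):
    """Minimal check of model-returned arguments against a `parameters` schema.

    Covers required keys, string types, and enums only -- enough for the
    get_current_weather schema above, but not a full JSON Schema validator.
    """
    args = json.loads(arguments_json)
    properties = parameters["properties"]
    # Every required key must be present
    for key in parameters.get("required", []):
        if key not in args:
            return False
    # Every supplied key must match its declared type / enum
    for key, value in args.items():
        spec = properties.get(key)
        if spec is None:
            return False
        if spec.get("type") == "string" and not isinstance(value, str):
            return False
        if "enum" in spec and value not in spec["enum"]:
            return False
    return True
```

This lets you reject malformed arguments before they ever reach your real function.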
3. Passing Function Definitions to the Language Model
We will pass these functions directly to the language model, including the description parameter. The language model will use these descriptions. Any information you want the language model to have to determine whether to call a function or how to call a function should be in the description.
messages = [
    {
        "role": "user",
        "content": "What's the weather like in Boston?"
    }
]
Next, let's call the chat completions endpoint. First, we specify the model, making sure to pick one of the more recent models that supports function calling. Then we pass in the messages defined above, along with our functions list. Let's run this and see what we get.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
)
print(response)
{
  "id": "chatcmpl-9TQ933VAIQwAXGDoK5QROsKq3dsVt",
  "object": "chat.completion",
  "created": 1716799041,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "get_current_weather",
          "arguments": "{\n  \"location\": \"Boston, MA\"\n}"
        }
      },
      "logprobs": null,
      "finish_reason": "function_call"
    }
  ],
  "usage": {
    "prompt_tokens": 82,
    "completion_tokens": 18,
    "total_tokens": 100
  },
  "system_fingerprint": null
}
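Notice that finish_reason is "function_call" and content is null: the model isn't answering directly, it's asking us to call get_current_weather. One detail worth highlighting is that arguments comes back as a JSON string, not a dict, so it needs parsing. A quick sketch, using a hand-written dict shaped like the response above:

```python
import json

# A dict shaped like the API response above, trimmed to the relevant fields
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": None,
                "function_call": {
                    "name": "get_current_weather",
                    "arguments": '{\n  "location": "Boston, MA"\n}',
                },
            },
            "finish_reason": "function_call",
        }
    ]
}

message = response["choices"][0]["message"]
if message.get("function_call"):
    name = message["function_call"]["name"]
    # arguments is a JSON *string*, so parse it before use
    args = json.loads(message["function_call"]["arguments"])
    print(name, args)  # get_current_weather {'location': 'Boston, MA'}
```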