Prompt engineering plays a pivotal role in crafting queries that help large language models (LLMs) understand not just the language of a query but also the nuance and intent behind it, which in turn lets us build complex applications with ease.
In this article, we will put into practice what we covered in previous articles and build an end-to-end customer service assistant. The pipeline starts by checking whether the input is flagged by the Moderation API, then extracts the list of products mentioned, searches for the products the user asked about, answers the user's question with the model, and finally checks the output with the Moderation API.
Finally, we will put all of these steps together into a conversational chatbot that takes the user's input, passes it through each step, and returns the response to the user.
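To make the flow concrete, below is a minimal sketch of the chain we will assemble, written against the same pre-1.0 openai package used throughout this article. The helpers extract_products and look_up_products are hypothetical placeholders for the steps built in the next section, and get_completion_from_messages is defined in the setup below.

def process_user_message(user_input, product_catalog):
    # Step 1: check the input with the Moderation API
    moderation = openai.Moderation.create(input=user_input)
    if moderation["results"][0]["flagged"]:
        return "Sorry, we cannot process this request."
    # Step 2: extract the products mentioned in the query (hypothetical helper)
    products = extract_products(user_input, product_catalog)
    # Step 3: look up detailed information for those products (hypothetical helper)
    product_info = look_up_products(products, product_catalog)
    # Step 4: answer the question, grounding the model in the product info
    messages = [
        {"role": "system", "content": f"Answer the customer using this product information: {product_info}"},
        {"role": "user", "content": user_input},
    ]
    answer = get_completion_from_messages(messages)
    # Step 5: check the model's answer with the Moderation API before returning it
    if openai.Moderation.create(input=answer)["results"][0]["flagged"]:
        return "Sorry, we cannot provide this response."
    return answer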
Table of Contents:
Setting Up Working Environment
Chain of Prompts For Processing the User Query
Building Conversational Chatbot
1. Setting Up Working Environment
As usual, we will start by setting up the working environment and importing the packages and libraries we will use in this article. In addition to the usual packages such as os and openai, we will import panel, a Python package we'll use to build the chatbot UI. We will also import the utils file, which contains helper functions for defining the products and more.
import os
import openai
import sys
sys.path.append('../..')
import utils
import panel as pn # GUI
pn.extension()
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
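For the load_dotenv call above to work, the project directory should contain a local .env file that defines the API key, for example (the value below is just a placeholder):

OPENAI_API_KEY=your-api-key-here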
Next, we will define the get_completion_from_messages function, which takes a list of messages with different roles and returns the LLM response.
def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0, max_tokens=500):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response.choices[0].message["content"]
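As a quick sanity check, we can call the helper with a short system and user message; the exact wording here is just an illustration:

messages = [
    {"role": "system", "content": "You are a helpful customer service assistant."},
    {"role": "user", "content": "Do you sell laptops?"},
]
print(get_completion_from_messages(messages))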