Prompt Engineering Best Practices: LLM Output Validation & Evaluation
Validating Output from Instruction-Tuned LLMs
Checking outputs before showing them to users is important for ensuring the quality, relevance, and safety of responses, whether they are displayed directly to users or consumed by downstream automation flows.
In this article, we will learn how to use OpenAI's Moderation API to ensure that model output is safe and free of harassment or other harmful content. We will also learn how to use additional prompts that ask the model to evaluate its own output before it is displayed to the user, checking that the generated output follows the given instructions and is free of hallucinations.
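As a quick preview of the first check, the snippet below sketches what a basic moderation pass over a model response might look like. It is a minimal sketch, assuming the openai Python package (v1 client interface) and an OPENAI_API_KEY environment variable; the sample text is a placeholder standing in for a real model response.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Placeholder text standing in for a real model response.
model_output = "Here is the product summary you asked for ..."

# Send the output to the Moderation API before showing it to the user.
moderation = client.moderations.create(input=model_output)
result = moderation.results[0]

if result.flagged:
    # At least one moderation category (e.g. harassment) was triggered.
    print("Output flagged, do not display:", result.categories)
else:
    print("Output passed the moderation check.")
```

The sections below walk through this moderation check and the instruction-following check in more detail.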
This article is the eighth part of the ongoing series Prompt Engineering Best Practices:
Prompt Engineering Best Practices for Instruction-Tuned LLM [Part 1]
Prompt Engineering Best Practices for Instruction-Tuned LLM [Part 2]
Prompt Engineering for Instruction-Tuned LLM: Iterative Prompt Development
Prompt Engineering for Instruction-Tuned LLM: Text Summarization
Prompt Engineering for Instruction-Tuned LLM: Textual Inference & Sentiment Analysis
Prompt Engineering for Instruction-Tuned LLM: Text Transforming & Translation
Prompt Engineering Best Practices: Chain of Thought Reasoning
Prompt Engineering Best Practices: LLM Output Validation [You are here!]
Table of Contents:
Setting Up Working Environment & Getting Started
Checking Harmful Output
Checking Instruction Following
My E-book: Data Science Portfolio for Success Is Out!
I recently published my first e-book, Data Science Portfolio for Success, a practical guide on how to build your data science portfolio. The book covers the following topics:
The Importance of Having a Portfolio as a Data Scientist
How to Build a Data Science Portfolio That Will Land You a Job?