Prompt Engineering Best Practices: LLM Output Validation & Evaluation

Validating Output from Instruction-Tuned LLMs

Youssef Hosni · Published in Towards AI · May 6, 2024


Checking a model's outputs before they reach users is important for ensuring the quality, relevance, and safety of its responses, whether those responses are shown directly to users or fed into automation flows.

In this article, we will learn how to use OpenAI's Moderation API to ensure that outputs are safe and free of harassment. We will also learn how to use additional prompts that ask the model to evaluate the quality of its own output before displaying a response to the user.
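Below is a minimal sketch of both checks, assuming the openai Python SDK (v1+ client interface) and an OPENAI_API_KEY set in the environment. The model name, prompt wording, and helper names (moderate, evaluate_output) are illustrative assumptions, not the article's exact code.

```python
# Sketch: gate a generated answer behind a moderation check and a
# model-based quality check before showing it to the user.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderate(text: str) -> bool:
    """Return True if the Moderation API flags the text as unsafe."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged


def evaluate_output(user_question: str, assistant_answer: str) -> str:
    """Ask the model whether the answer adequately addresses the question.

    Returns "Y" or "N", as instructed in the evaluation prompt.
    """
    system_message = (
        "You are an evaluator. Decide whether the assistant's answer "
        "sufficiently and safely addresses the user's question. "
        "Respond with a single character: Y or N."
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever model you prefer
        messages=[
            {"role": "system", "content": system_message},
            {
                "role": "user",
                "content": f"Question: {user_question}\nAnswer: {assistant_answer}",
            },
        ],
        max_tokens=1,
        temperature=0,
    )
    return completion.choices[0].message.content.strip()


# Example gate before displaying a response
answer = "Our basic plan costs $10/month and includes 5 GB of storage."
if moderate(answer):
    print("Response withheld: flagged by the Moderation API.")
elif evaluate_output("How much does the basic plan cost?", answer) == "Y":
    print(answer)
else:
    print("Sorry, I couldn't generate a satisfactory answer.")
```

Running the moderation check first keeps clearly unsafe content from ever reaching the quality check, and constraining the evaluator to a single Y/N token keeps the extra call cheap and easy to parse.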
