Learn how to integrate Galileo Evaluate into your Python applications, featuring step-by-step guidance and code samples for streamlined integration.
Galileo supports logging chains built with `langchain`. To log these chains, use the callback from our Python client, `promptquality`.
Before creating a run, you’ll want to make sure you have an evaluation set (a set of questions / sample inputs you want to run through your prototype for evaluation). Your evaluation set should be consistent across runs.
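A minimal evaluation set can simply be a fixed list of sample inputs. The questions below are illustrative placeholders, not from Galileo's documentation:

```python
# A minimal evaluation set: a fixed list of sample inputs reused across runs.
# Keeping the set identical across runs makes run-to-run comparisons meaningful.
eval_set = [
    "What is Galileo Evaluate?",
    "How do I log a Langchain chain to Galileo?",
    "Which scorers does Galileo support out of the box?",
]
```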
First, we are going to construct a simple RAG chain, with Galileo's documentation stored in a vector DB, using Langchain:
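A sketch of such a chain is below. It assumes the docs were already embedded into a persisted Chroma collection named `galileo_docs_db`, and uses OpenAI models; the package names (`langchain-chroma`, `langchain-openai`), the collection path, and the model name are assumptions you should adjust to your setup.

```python
# Sketch of a simple RAG chain over pre-embedded docs (assumed setup:
# langchain-core, langchain-chroma, langchain-openai installed, and an
# existing Chroma collection persisted at "galileo_docs_db").
from langchain_chroma import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Load the existing vector store and expose it as a retriever.
vectordb = Chroma(
    persist_directory="galileo_docs_db",
    embedding_function=OpenAIEmbeddings(),
)
retriever = vectordb.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Pipe retrieved context and the raw question into the prompt, then the LLM.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)
```

The exact retriever and model are interchangeable; what matters for logging is that the whole pipeline is a single Langchain runnable, so one callback can observe every step.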
Next, we'll create the `GalileoPromptCallback`:
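A hedged sketch of creating the callback and running the evaluation set through the chain is below. It assumes a `chain` runnable and an `eval_set` list like those above, plus your own Galileo console URL; the project name and scorer choices are placeholders, and scorer names may differ by client version.

```python
# Sketch of logging chain runs to Galileo via promptquality.
# Assumes `chain` and `eval_set` are defined as in the earlier snippets;
# the console URL, project name, and scorers below are placeholders.
import promptquality as pq

pq.login("https://console.your-galileo-deployment.com")

galileo_handler = pq.GalileoPromptCallback(
    project_name="rag-prototype",
    scorers=[pq.Scorers.context_adherence, pq.Scorers.correctness],
)

# Run every evaluation question through the chain, logging each execution.
chain.batch(eval_set, config=dict(callbacks=[galileo_handler]))

# Upload the collected rows and finalize the run in the Galileo console.
galileo_handler.finish()
```

Passing the handler through `config=dict(callbacks=[...])` lets it capture every intermediate step of the chain (retrieval, prompt, LLM call) rather than just the final output.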