Integrate Evaluate Into My Existing Application With Python
If you already have a prototype or an application that you want to run experiments and evaluations over, Galileo Evaluate lets you hook into it and log the inputs, outputs, and any intermediate steps to Galileo for further analysis.
In this QuickStart, we’ll show you how to:
- Integrate with your workflows
- Integrate with your Langchain apps
Let’s dive in!
Logging Workflows
If you’re looking to log your workflows, we provide an interface for uploading your executions.
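As a minimal sketch of that interface, assuming promptquality's `EvaluateRun` workflow logger (the console URL, project and run names, scorer selection, and step values below are all placeholders, and method names are assumptions drawn from typical usage):

```python
import promptquality as pq

# Authenticate against your Galileo console (URL is an example).
pq.login("console.demo.rungalileo.io")

# Create an Evaluate run; project/run names and scorers are examples.
evaluate_run = pq.EvaluateRun(
    run_name="my-workflow-run",
    project_name="my-project",
    scorers=[pq.Scorers.context_adherence, pq.Scorers.correctness],
)

# Record one workflow execution: the user input, the final output,
# and any intermediate steps (here, a retriever step and an LLM step).
workflow = evaluate_run.add_workflow(
    input="What is Galileo Evaluate?",
    output="Galileo Evaluate lets you run experiments over your LLM app.",
    duration_ns=1_250_000_000,
)
workflow.add_retriever(
    input="What is Galileo Evaluate?",
    documents=["Galileo Evaluate is an experimentation module..."],
)
workflow.add_llm(
    input="Answer using the retrieved context: ...",
    output="Galileo Evaluate lets you run experiments over your LLM app.",
    model="gpt-4o",
)
```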
Finally, log your Evaluate run to Galileo:
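Assuming the `evaluate_run` object from the sketch above, a single call uploads the logged workflows:

```python
# Uploads the logged workflows to Galileo and kicks off scorer computation.
evaluate_run.finish()
```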
Please check out this page for more information on logging experiments with our Python logger.
Langchain
Galileo supports the logging of chains from langchain. To log these chains, we require using the callback from our Python client, promptquality.
Before creating a run, you’ll want to make sure you have an evaluation set: a set of questions or sample inputs you want to run through your prototype for evaluation. Your evaluation set should be consistent across runs.
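For example, an evaluation set can be as simple as a fixed list of questions (the questions below are placeholders):

```python
# A small, fixed evaluation set reused across runs so results are comparable.
eval_set = [
    "What is Galileo Evaluate?",
    "How do I log a workflow with promptquality?",
    "Which scorers are available out of the box?",
    "Can I compare results across runs?",
]
```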
First, we are going to construct a simple RAG chain over Galileo’s documentation stored in a vector DB using Langchain:
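Here is one way such a chain could look: a sketch assuming LangChain’s LCEL interface, OpenAI models, and a FAISS vector store. The documentation URL, model names, and chunking parameters are illustrative placeholders.

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load a Galileo documentation page (URL is illustrative).
docs = WebBaseLoader(["https://docs.rungalileo.io/galileo"]).load()

# Split the pages into chunks and index them in an in-memory vector store.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def format_docs(docs):
    # Join retrieved chunks into a single context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

# Compose retrieval + prompt + LLM into a single runnable chain.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```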
Next, you can log in with Galileo:
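For example (the console URL below is a placeholder for your own Galileo console):

```python
import promptquality as pq

# Authenticate against your Galileo console.
pq.login("console.demo.rungalileo.io")
```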
After that, you can set up the GalileoPromptCallback:
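A sketch of the callback setup; the project name and scorer selection are examples:

```python
# The callback captures every chain execution it is attached to.
galileo_handler = pq.GalileoPromptCallback(
    project_name="rag-evaluation",
    scorers=[pq.Scorers.context_adherence, pq.Scorers.correctness],
)
```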
Finally, you can run the chain across multiple inputs from your evaluation set with the Galileo callback attached:
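For example, assuming the `chain`, `eval_set`, and `galileo_handler` defined above:

```python
# Run the chain over the evaluation set; the callback logs every input,
# output, and intermediate step to Galileo.
chain.batch(eval_set, config=dict(callbacks=[galileo_handler]))

# Mark the run as complete; this uploads the rows and starts scoring.
galileo_handler.finish()
```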