Logging Expected Output
There are a few ways to create runs, and each has a slightly different way of logging your Expected Output:
PQ.run() or Playground UI
If you’re using pq.run() or creating runs through the Playground UI, simply include your expected answers in a column called output in your evaluation set.
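A minimal sketch, assuming a CSV evaluation set and promptquality's pq.run; the console URL, template, and file name are illustrative:

```python
import promptquality as pq

# eval_set.csv (illustrative) -- the "output" column holds
# the expected answers:
#
#   input,output
#   "What is the capital of France?","Paris"
#   "What is the capital of Japan?","Tokyo"

pq.login("https://console.galileo.ai")  # console URL is illustrative

# pq.run evaluates the template over the dataset; the "output"
# column is picked up as Expected Output.
pq.run(
    template="Answer concisely: {input}",
    dataset="eval_set.csv",
)
```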
Python Logger
If you’re logging your runs via EvaluateRun, you can set the expected output using the ground_truth parameter in the workflow creation methods.
To log your runs, you’d start with the typical flow of logging into Galileo:
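A minimal sketch, assuming promptquality's EvaluateRun with an add_workflow creation method that accepts ground_truth; the console URL, project and run names, and example values are illustrative:

```python
import promptquality as pq
from promptquality import EvaluateRun

# Log into Galileo first (console URL is illustrative).
pq.login("https://console.galileo.ai")

# Create an evaluation run; names are placeholders.
evaluate_run = EvaluateRun(run_name="my_run", project_name="my_project")

# ground_truth sets the Expected Output for this workflow.
evaluate_run.add_workflow(
    input="What is the capital of France?",
    output="Paris is the capital of France.",
    ground_truth="Paris",
)

# Upload the run to Galileo.
evaluate_run.finish()
```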
Langchain Callback
If you’re using a Langchain Callback, add your expected output by calling add_expected_outputs on your callback handler.
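A minimal sketch, assuming promptquality's GalileoPromptCallback and langchain_openai's ChatOpenAI; the project name, inputs, and expected answers are illustrative:

```python
from langchain_openai import ChatOpenAI
from promptquality import GalileoPromptCallback

# Callback handler that logs the chain's traces to Galileo
# (project name is a placeholder).
galileo_handler = GalileoPromptCallback(project_name="my_project")

llm = ChatOpenAI()
inputs = ["What is the capital of France?", "What is the capital of Japan?"]

# Run the chain with the Galileo callback attached.
llm.batch(inputs, config=dict(callbacks=[galileo_handler]))

# Attach one expected answer per input, in the same order,
# before finishing the run.
galileo_handler.add_expected_outputs(["Paris", "Tokyo"])

# Upload the run to Galileo.
galileo_handler.finish()
```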
REST Endpoint
If you’re logging Evaluation runs via the REST endpoint, set the target field in the root node of each workflow.
Important note: set the expected output on the root node of your workflow. Typically this will be the sole LLM node in your workflow, or a “chain” node with child nodes underneath.
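A minimal sketch of such a request, assuming Python's requests library; the endpoint URL, auth header, and exact payload schema are illustrative assumptions, so consult the API reference for the real shape:

```python
import requests

# Payload shape is an illustrative assumption; the key point is
# that the target (Expected Output) sits on the root node.
payload = {
    "workflows": [
        {
            "type": "llm",
            "input": "What is the capital of France?",
            "output": "Paris is the capital of France.",
            # Expected Output goes on the root node of the workflow.
            "target": "Paris",
        }
    ]
}

response = requests.post(
    "https://console.galileo.ai/api/workflows",  # hypothetical URL
    json=payload,
    headers={"Authorization": "Bearer <API_TOKEN>"},
)
response.raise_for_status()
```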
Comparing Output and Expected Output
Once Expected Output is logged, it appears alongside your Output wherever the output is shown.