Galileo GenAI Studio supports Custom Metrics (programmatic or GPT-based) for all your Evaluate and Observe projects. Depending on where, when, and how you want these metrics to be executed, you have the option to choose between Custom Scorers and Registered Scorers.
`scorer_fn`: The scorer function is provided the row-wise inputs and is expected to generate an output for each response. `scorer_fn` must accept `**kwargs` as its last parameter so that your registered scorer is forward-compatible.
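A minimal sketch of the expected shape (the exact type annotations are an assumption, not taken from the source):

```python
from typing import Any, Union

def scorer_fn(*, node_input: str, node_output: str, **kwargs: Any) -> Union[float, int, bool, str, None]:
    # Score a single row; this toy scorer just returns the response length.
    # **kwargs keeps the scorer forward-compatible with new parameters.
    return len(node_output)
```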
Here is an example with the full list of parameters supported currently. This example checks the output vs the ground truth and returns the absolute difference in length:
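A sketch of such a scorer (the ground-truth key `target` in `dataset_variables` is an assumption; use whatever column name your dataset defines):

```python
from typing import Any, Dict, List, Optional, Union
from uuid import UUID

def scorer_fn(
    *,
    index: Union[int, str],
    node_input: str,
    node_output: str,
    node_name: Optional[str] = None,
    node_type: Optional[str] = None,
    node_id: Optional[UUID] = None,
    tools: Optional[List[Dict[str, Any]]] = None,
    dataset_variables: Optional[Dict[str, str]] = None,
    **kwargs: Any,
) -> int:
    # "target" is an assumed dataset variable holding the ground truth.
    target = (dataset_variables or {}).get("target", "")
    # Absolute difference in length between the output and the ground truth.
    return abs(len(node_output) - len(target))
```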
`node_name`, `node_type`, `node_id`, and `tools` are all specific to workflows/multi-step chains. `dataset_variables` contains key-value pairs of variables that are passed in from the dataset in prompt evaluation runs, and can also be used to get the target/ground truth in multi-step runs. Dataset variables are not available for Evaluate workflows / Observe.

The `index` parameter is the index of the row in the dataset, `node_input` is the input to the node, and `node_output` is the output from the node.
`aggregator_fn`: The aggregator function is only used in Evaluate, not Observe. It takes in an array of the row-wise outputs from your scorer and allows you to generate aggregates from those.
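A sketch of the expected shape, assuming scores arrive as a list and aggregates are returned as a name-to-value mapping (the aggregate names below are illustrative, not required):

```python
from typing import Dict, List, Optional, Union

def aggregator_fn(*, scores: List[Union[float, int, bool, str, None]]) -> Dict[str, Optional[float]]:
    # Reduce the row-wise scores to named aggregates.
    numeric = [s for s in scores if isinstance(s, (int, float)) and not isinstance(s, bool)]
    if not numeric:
        return {"Total": None, "Average": None}
    return {"Total": float(sum(numeric)), "Average": sum(numeric) / len(numeric)}
```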
`score_type`: The `score_type` function defines the `Type` of the score that your scorer generates. It must return a `Type` object like `float`, not an instance of the type. Defining this function is necessary for sorting and filtering by scores to work correctly. If you don’t define this function, the scorer is assumed to generate `float` scores by default.
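A minimal sketch for a scorer that produces string (categorical) scores; note the function returns the type object itself:

```python
from typing import Type

def score_type() -> Type:
    # Return the type object (here, str), not an instance of it.
    return str
```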
`scoreable_node_types_fn`: If you want to restrict your scorer to only run on specific node types, you can define this function, which returns a list of node types that your scorer should run on. If you don’t define it, your scorer runs on `llm` and `chat` nodes by default.
Here’s an example of a `scoreable_node_types_fn` that restricts the scorer to only run on `retriever` nodes:
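A minimal sketch of that restriction (the node-type string is assumed to match Galileo's node naming):

```python
from typing import List

def scoreable_node_types_fn() -> List[str]:
    # Only run this scorer on retriever nodes.
    return ["retriever"]
```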
`include_llm_credentials`: Set this property if you want access to the LLM credentials of the user who created the Observe project / Evaluate run during execution of the registered scorer. It is expected to be set as a boolean value and defaults to `False`. OpenAI credentials are the only ones currently supported.
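As a sketch, this is a module-level boolean flag in your scorer file rather than a function:

```python
# Opt in to receiving the run creator's LLM credentials (currently OpenAI only).
include_llm_credentials = True
```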
If enabled, the credentials are passed into `scorer_fn` as the keyword argument `credentials`. The credentials will be a dictionary with the keys as the name of the integration, if available, and values as the credentials. For example, if the user has an OpenAI integration, the credentials will look like this:
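A sketch of the shape (the exact field names, such as `api_key`, are assumptions about the OpenAI integration payload):

```python
# Illustrative shape of the `credentials` keyword argument:
credentials = {
    "openai": {
        "api_key": "sk-...",  # placeholder value
    }
}
```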
A registered scorer is defined in a standalone Python file that contains a `scorer_fn` function and an `aggregator_fn` function. Here is an example `scorer.py` file; third-party packages such as the `openai` library can also be used inside a registered scorer.
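A sketch of a complete `scorer.py` (kept self-contained with no LLM calls; an `openai`-based scorer would follow the same structure, with the model call inside `scorer_fn`):

```python
# scorer.py -- a self-contained registered scorer example.
from typing import Any, Dict, List, Optional, Union

def scorer_fn(*, node_input: str, node_output: str, **kwargs: Any) -> int:
    # Row-level score: length of the model's response.
    return len(node_output)

def aggregator_fn(*, scores: List[Union[float, int, bool, str, None]]) -> Dict[str, Optional[float]]:
    # Run-level aggregate over the row scores.
    numeric = [s for s in scores if isinstance(s, (int, float)) and not isinstance(s, bool)]
    return {"Average Length": sum(numeric) / len(numeric) if numeric else None}
```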
| | Registered Scorers | Custom Scorers |
| --- | --- | --- |
| Creating the custom metric | Created from the Python client; can be activated through the UI | Created via the Python client |
| Sharing across the organization | Accessible within the Galileo console across different projects and modules | Outside Galileo; accessible only to the current project |
| Accessible modules | Evaluate and Observe | Evaluate |
| Scorer Definition | As an independent Python file | Within the notebook |
| Execution Environment | Server-side | Within your Python environment |
| Python Libraries available | Limited to a Galileo-provided execution environment | Any library within your virtual environment |
| Execution Resources | Restricted by Galileo | Any resources available to your local instance |
Custom Scorers are defined with an `executor` and an `aggregator` function (as defined below); note that these are named **executor** and **aggregator** instead of `scorer_fn` and `aggregator_fn`. Common score types include `float`, `int`, `bool`, and `str`.
To use your scorer in a run, pass it to the `scorers` parameter inside `pq.run` or `pq.run_sweep`, `pq.EvaluateRun`, or `pq.GalileoPromptCallback`.