Overview of Galileo Guardrail Metrics
Use Galileo’s Guardrail Metrics to monitor generative AI models and ensure quality, correctness, and alignment with project goals.
Understand Galileo’s Guardrail Metrics in LLM Studio
Galileo has built a menu of Guardrail Metrics to help you evaluate, observe, and protect your generative AI applications. These metrics are tailored to your use case and are designed to help you ensure the quality and intended behavior of your application. The Scorer definition for each metric is listed immediately below.
Galileo’s Guardrail Metrics combine industry-standard metrics with metrics developed by Galileo’s in-house ML Research Team.
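Each metric is enabled by passing its Scorer definition to an evaluation run. The snippet below is a minimal sketch that assumes Galileo’s promptquality Python client; the console URL, template, and dataset are placeholders, and the exact entry points (pq.login, pq.run) may vary by SDK version.

```python
import promptquality as pq

# Authenticate against your Galileo console (URL is a placeholder).
pq.login("https://console.galileo.ai")

# Pick Guardrail Metrics from the menu below by their Scorer definitions.
metrics = [
    pq.Scorers.context_adherence_plus,
    pq.Scorers.completeness_plus,
    pq.Scorers.instruction_adherence_plus,
]

# Run an evaluation with the chosen scorers (template/dataset are placeholders).
pq.run(
    project_name="my-project",
    template="Answer using only the context:\n{context}\n\nQuestion: {question}",
    dataset={"context": ["..."], "question": ["..."]},
    scorers=metrics,
)
```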
Output Quality Metrics
- Correctness (Open Domain Hallucinations)
- Instruction Adherence: Scorers.instruction_adherence_plus
- Ground Truth Adherence: Scorers.ground_truth_adherence_plus
- Completeness
  - Completeness Luna: Scorers.completeness_luna
  - Completeness Plus: Scorers.completeness_plus
Agent Quality Metrics
- Tool Selection Quality: Scorers.tool_selection_quality_plus
- Tool Error: Scorers.tool_errors_plus
RAG Quality Metrics
- Context Adherence (Closed Domain Hallucinations)
  - Context Adherence Luna: Scorers.context_adherence_luna
  - Context Adherence Plus: Scorers.context_adherence_plus
- Chunk Attribution
  - Chunk Attribution Luna: Scorers.chunk_attribution_utilization_luna
  - Chunk Attribution Plus: Scorers.chunk_attribution_utilization_plus
- Chunk Utilization (shares its scorer with Chunk Attribution; see the note below)
  - Chunk Utilization Luna: Scorers.chunk_attribution_utilization_luna
  - Chunk Utilization Plus: Scorers.chunk_attribution_utilization_plus
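As the duplicated scorer names above suggest, Chunk Attribution and Chunk Utilization are computed by a single combined scorer, so requesting it once surfaces both metrics. A minimal sketch, reusing the assumed promptquality client from the earlier example:

```python
import promptquality as pq

# One combined scorer is expected to surface both Chunk Attribution and
# Chunk Utilization in the run results, so it is listed only once.
rag_scorers = [
    pq.Scorers.context_adherence_luna,
    pq.Scorers.chunk_attribution_utilization_luna,
]
```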
Input Quality Metrics
Safety Metrics
Looking for something more specific? You can always add your own custom metric.
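A custom metric is registered as a scorer and passed to a run alongside the built-in ones. The rough sketch below assumes promptquality’s CustomScorer interface, in which an executor is called once per response row and an optional aggregator summarizes the run; the names and signatures here are illustrative and may differ across SDK versions.

```python
from typing import Dict, List

import promptquality as pq
from promptquality import CustomScorer, PromptRow

# Executor: scores a single response row (toy example: response length).
def response_length(row: PromptRow) -> int:
    return len(row.response)

# Aggregator: reduces the per-row scores to run-level summary values.
def mean_length(scores: List[int], indices: List[int]) -> Dict[str, float]:
    return {"Average Response Length": sum(scores) / len(scores)}

length_scorer = CustomScorer(
    name="Response Length",
    executor=response_length,
    aggregator=mean_length,
)

# Custom scorers can be mixed with built-in Guardrail Metrics in one run
# (template and dataset below are placeholders).
pq.run(
    template="Summarize: {text}",
    dataset={"text": ["..."]},
    scorers=[pq.Scorers.completeness_plus, length_scorer],
)
```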