# Understand your metric's values
An important step towards debugging and evaluating your LLM applications is understanding your metric values and what led to them.
Our metrics have explainability built in, helping you understand which parts of the input or output led to a given outcome. We offer two types of explainability: Highlighting and generated Explanations.
## Explainability via Token Highlighting
When looking at a workflow in the expanded view, some metric values will have an icon next to them. Clicking it turns on token-level highlighting in the input / output section of the node.

The following metrics have token-level highlighting:
| Metric | Where to see it |
|---|---|
| PII | Input or Output into LLM or Chat Nodes |
| Prompt Perplexity | Input into LLM or Chat Node |
| Uncertainty | Output of LLM or Chat Node |
| Context Adherence (Luna) | Output of LLM or Chat Node |
| Chunk Relevance (Luna) | Output of Retriever Node |
| Chunk Utilization (Luna) | Output of Retriever Node |
## Explainability via Explanations
For metrics powered by Chainpoll, we provide an explanation or rationale generated by LLMs. A 🪄 icon next to a metric value indicates that an explanation is available. The explanation includes the reasoning the model followed to reach its conclusion. To view it, simply hover over the metric value.
The following metrics have generated explanations: