# Galileo

## Docs

- [Get Token](https://docs.galileo.ai/api-reference/auth/get-token.md)
- [Login Api Key](https://docs.galileo.ai/api-reference/auth/login-api-key.md)
- [Login Email](https://docs.galileo.ai/api-reference/auth/login-email.md)
- [Login Social](https://docs.galileo.ai/api-reference/auth/login-social.md)
- [Refresh Token](https://docs.galileo.ai/api-reference/auth/refresh-token.md)
- [Verify Email](https://docs.galileo.ai/api-reference/auth/verify-email.md)
- [List Evaluate Alerts](https://docs.galileo.ai/api-reference/evaluate-alerts/list-evaluate-alerts.md)
- [Cancel Jobs For Project Run](https://docs.galileo.ai/api-reference/evaluate/cancel-jobs-for-project-run.md): Get all jobs for a project and run.
- [Create a new Evaluate Run](https://docs.galileo.ai/api-reference/evaluate/create-workflows-run.md): Create a new Evaluate run with workflows.
- [Get Evaluate Run Results](https://docs.galileo.ai/api-reference/evaluate/get-evaluate-run-results.md): Fetch evaluation results for a specific run, including rows and aggregate information.
- [API Reference | Getting Started with Galileo](https://docs.galileo.ai/api-reference/getting-started.md): Get started with Galileo's REST API: learn about base URLs, authentication methods, and how to verify your API setup for seamless integration.
- [Healthcheck](https://docs.galileo.ai/api-reference/health/healthcheck.md)
- [Get Workflows](https://docs.galileo.ai/api-reference/observe/get-workflows.md): Get workflows for a specific run in an Observe project.
- [Log Workflows to an Observe Project](https://docs.galileo.ai/api-reference/observe/log-workflows.md): Log workflows to an Observe project.
- [Protect notification](https://docs.galileo.ai/api-reference/protect-notification.md): When a Protect execution completes with the status specified in the configuration, the specified webhook is triggered with this payload.
- [Invoke Protect](https://docs.galileo.ai/api-reference/protect/invoke.md): Learn how to use the 'Invoke Protect' API endpoint in Galileo's Protect module to process payloads with specified rulesets.
- [Workflowstep](https://docs.galileo.ai/api-reference/schemas/workflowstep.md)
- [Python Client Reference | Galileo Evaluate](https://docs.galileo.ai/client-reference/evaluate/python.md): Integrate Galileo's Evaluate module into your Python applications with this guide, featuring installation steps and examples for prompt quality assessment.
- [TypeScript Client Reference | Galileo Evaluate](https://docs.galileo.ai/client-reference/evaluate/typescript.md): Incorporate Galileo's Evaluate module into your TypeScript projects with this guide, providing setup instructions and workflow logging examples.
- [Data Quality | Fine-Tune NLP Studio Client Reference](https://docs.galileo.ai/client-reference/finetune-nlp-studio/data-quality.md): Enhance your data quality in Galileo's NLP and CV Studio using the 'dataquality' Python package; find installation and usage details here.
- [Python Client Reference | Galileo Observe](https://docs.galileo.ai/client-reference/observe/python.md): Integrate Galileo's Observe module into your Python applications; access installation instructions and comprehensive documentation for workflow monitoring.
- [TypeScript Client Reference | Galileo Observe](https://docs.galileo.ai/client-reference/observe/typescript.md): Integrate Galileo's Observe module into TypeScript applications with setup guides, sample code, and monitoring instructions for seamless workflow tracking.
- [Client References](https://docs.galileo.ai/client-reference/overview.md): Explore Galileo's client references, including Python and TypeScript integrations, to streamline Evaluate, Observe, and Protect module implementations.
- [Python Client Reference | Galileo Protect](https://docs.galileo.ai/client-reference/protect/python.md): Integrate Galileo's Protect module into Python workflows with this guide, including code examples, setup instructions, and ruleset invocation details.
- [Data Privacy And Compliance](https://docs.galileo.ai/deployments/data-privacy-and-compliance.md): This page covers data residency concerns and the compliance standards Galileo provides.
- [Dependencies](https://docs.galileo.ai/deployments/dependencies.md): Understand Galileo deployment prerequisites and dependencies to ensure a smooth installation and integration across supported platforms.
- [Azure AKS](https://docs.galileo.ai/deployments/deploying-galileo-aks.md): This page details the steps to deploy a Galileo Kubernetes cluster in Microsoft Azure's AKS environment.
- [Deploying Galileo on Amazon EKS](https://docs.galileo.ai/deployments/deploying-galileo-eks.md): Deploy Galileo on Amazon EKS with a step-by-step guide for configuring, managing, and scaling Galileo's infrastructure using Kubernetes clusters.
- [Zero Access Deployment | Galileo on EKS](https://docs.galileo.ai/deployments/deploying-galileo-eks-zero-access.md): Create a private Kubernetes cluster with EKS in your AWS account, upload containers to your container registry, and deploy Galileo.
- [EKS Cluster Config Example | Zero Access Deployment](https://docs.galileo.ai/deployments/deploying-galileo-eks-zero-access/eks-cluster-config-example-zero-access.md): Access a zero-access EKS cluster configuration example for secure Galileo deployments on Amazon EKS, following best practices for Kubernetes security.
- [EKS Cluster Config Example | Galileo Deployment](https://docs.galileo.ai/deployments/deploying-galileo-eks/eks-cluster-config-example.md): Review a detailed EKS cluster configuration example for deploying Galileo on Amazon EKS, ensuring efficient Kubernetes setup and management.
- [Updating Cluster](https://docs.galileo.ai/deployments/deploying-galileo-eks/updating-galileo-eks-cluster.md): Update a Galileo EKS cluster from Kubernetes 1.21 to 1.23.
- [Exoscale](https://docs.galileo.ai/deployments/deploying-galileo-exoscale.md): The Galileo applications run on managed Kubernetes environments; this document specifically covers the configuration and deployment of an Exoscale Cloud SKS environment.
- [Deploying Galileo on Google GKE](https://docs.galileo.ai/deployments/deploying-galileo-gke.md): Deploy Galileo on Google Kubernetes Engine (GKE) with this guide, covering configuration steps, cluster setup, and infrastructure scaling strategies.
- [Cluster Setup Script](https://docs.galileo.ai/deployments/deploying-galileo-gke/galileo-gcp-setup-script.md): Use the Galileo GCP setup script to automate Google Cloud Platform (GCP) configuration and deploy Galileo seamlessly on GKE clusters.
- [Enterprise Deployment](https://docs.galileo.ai/deployments/overview.md): Gain an overview of Galileo deployment options, covering supported platforms like Amazon EKS and Google GKE, setup requirements, and best practices.
- [Post Deployment Checklist](https://docs.galileo.ai/deployments/post-deployment-checklist.md): This guide walks you through steps to verify that your Galileo cluster is properly deployed and running well.
- [Pre Requisites](https://docs.galileo.ai/deployments/pre-requisites.md): Before deploying Galileo, ensure the following prerequisites are met.
- [Scheduling Automatic Backups For Your Cluster](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster.md): Schedule automatic backups for Galileo clusters with this guide, ensuring data security, disaster recovery, and operational resilience for deployments.
- [Aws Velero Account Setup Script](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster/aws-velero-account-setup-script.md): Automate AWS Velero setup for Galileo cluster backups with this script, ensuring seamless backup scheduling and data resilience for AWS deployments.
- [Gcp Velero Account Setup Script](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster/gcp-velero-account-setup-script.md): Set up Velero for Google Cloud backups with this GCP account script, enabling automated backup scheduling and robust data protection for Galileo clusters.
- [Security & Access Control](https://docs.galileo.ai/deployments/security-and-access-control.md): This page covers the networking, security, and access control provisions that Galileo deployments enable.
- [Setting Up New Users](https://docs.galileo.ai/deployments/setting-up-new-users.md): Learn how to onboard new users in Galileo deployments with detailed instructions on user roles, access control, and permissions management.
- [SSO Integration](https://docs.galileo.ai/deployments/sso-integration.md): This page covers our SSO integration support and the information we need to set up SSO for your Galileo cluster.
- [Examples](https://docs.galileo.ai/examples/overview.md): Explore Galileo's practical examples covering real-world use cases and workflows for Evaluate, Observe, and Protect modules across AI projects.
- [What is Galileo?](https://docs.galileo.ai/galileo.md): Evaluate, Observe, and Protect your GenAI applications.
- [Chainpoll](https://docs.galileo.ai/galileo-ai-research/chainpoll.md): ChainPoll is a powerful, flexible technique for LLM-based evaluation that is unique to Galileo. It is used to power multiple metrics across the Galileo platform.
- [Class Boundary Detection](https://docs.galileo.ai/galileo-ai-research/class-boundary-detection.md): Detecting samples on the decision boundary.
- [Data Drift Detection](https://docs.galileo.ai/galileo-ai-research/data-drift-detection.md): Discover Galileo's data drift detection methods to monitor AI model performance, identify data changes, and maintain model reliability in production.
- [Errors In Object Detection](https://docs.galileo.ai/galileo-ai-research/errors-in-object-detection.md): This page describes the rich error types offered by Galileo for Object Detection.
- [Galileo Data Error Potential (DEP)](https://docs.galileo.ai/galileo-ai-research/galileo-data-error-potential-dep.md): Learn about Galileo's Data Error Potential (DEP) score, a metric to identify and categorize machine learning data errors, enhancing data quality and model performance.
- [Likely Mislabeled](https://docs.galileo.ai/galileo-ai-research/likely-mislabeled.md): Garbage in, garbage out.
- [Galileo AI Research](https://docs.galileo.ai/galileo-ai-research/overview.md): Research produced by Galileo AI Labs.
- [Rag Quality Metrics Using Chainpoll](https://docs.galileo.ai/galileo-ai-research/rag-quality-metrics-using-chainpoll.md): Learn how ChainPoll metrics assess retrieval-augmented generation (RAG) system quality, improving accuracy and performance of generative AI models.
- [Rag Quality Metrics Using Luna](https://docs.galileo.ai/galileo-ai-research/rag-quality-metrics-using-luna.md): This page provides a brief overview of the research behind Galileo's RAG Quality Metrics.
- [FAQs](https://docs.galileo.ai/galileo/galileo-nlp-studio/faqs.md): You have questions, we have (some) answers!
- [Third Party 3p Integrations](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/3p-integrations.md): Galileo integrates seamlessly with your tools.
- [Access Control Features | Galileo NLP Studio](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/access-control.md): Discover Galileo NLP Studio's access control features, including user roles and group management, to securely share and manage projects within your organization.
- [Actions](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/actions.md): Actions help close the loop on inspection and error discovery. We support a number of actions.
- [Clustering](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/clusters.md): To help you make sense of your data and your embeddings view, Galileo provides out-of-the-box Clustering and Explainability.
- [Compare Across Runs](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/compare-across-runs.md): Track your experiments, data, and models in one place.
- [Dataset Slices](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/dataset-slices.md): Slices is a powerful Galileo feature that allows you to monitor, across training runs, a sub-population of the dataset based on metadata filters.
- [Dataset View](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/dataset-view.md): The Dataset View provides an interactive data table for inspecting your datasets.
- [Embeddings View](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/embeddings-view.md): The Embeddings View provides a visual playground for you to interact with your datasets.
- [Error Types Breakdown](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/error-types-breakdown.md): For use cases with complex data and error types (e.g. Named Entity Recognition, Object Detection, or Semantic Segmentation), the **Error Types Chart** gives you insight into exactly how the Ground Truth differed from your model's predictions.
- [Galileo + Delta Lake Databricks](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/galileo-+-delta-lake-databricks.md): Integrate Galileo with Delta Lake on Databricks to manage large-scale data, ensuring seamless collaboration and enhanced NLP workflows.
- [Insights Panel](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/insights-panel.md): Utilize Galileo's Insights Panel to analyze data trends, detect issues, and gain actionable insights for improving NLP model performance.
- [Product Features](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/overview.md): Explore Galileo NLP Studio's features, including data insights, error detection, and monitoring tools for improving NLP workflows and AI quality.
- [Similarity Search](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/similarity-search.md): Similarity search provides an out-of-the-box ability to discover **similar samples** within your datasets.
- [Alerts](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/xray-insights.md): Explore Galileo NLP Studio's Alerts feature, designed to detect and summarize dataset issues like mislabeling and class imbalance, enhancing data inspection.
- [Multi Label Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/multi-label-text-classification.md): Implement multi-label text classification in Galileo NLP Studio to accurately label datasets, streamline workflows, and enhance model training.
- [Multi-Label Text Classification | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/multi-label-text-classification/getting-started.md): Get started with multi-label text classification in Galileo NLP Studio, featuring setup instructions, workflow integration, and data preparation tips.
- [Named Entity Recognition](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition.md): NER is a sequence tagging problem: given an input document, the task is to correctly identify the span boundaries of various entities and classify each span into the correct entity type.
- [Named Entity Recognition | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition/getting-started.md): Start building named entity recognition (NER) models in Galileo NLP Studio with this guide on setup, labeling, and model training workflows.
- [Model Monitoring & Data Drift | Named Entity Recognition](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition/model-monitoring-and-data-drift.md): Learn how to monitor Named Entity Recognition models in production with Galileo NLP Studio, detecting data drift and maintaining model health effectively.
- [Natural Language Inference](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference.md): Leverage Galileo NLP Studio for natural language inference (NLI), enabling accurate predictions and model performance monitoring.
- [Natural Language Inference | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/getting-started.md): Begin implementing natural language inference (NLI) workflows in Galileo NLP Studio with clear instructions for setup and model evaluation.
- [Logging Data | Natural Language Inference in Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/logging-data-to-galileo.md): The fastest way to find data errors in Galileo.
- [Model Monitoring & Data Drift | Natural Language Inference](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/model-monitoring-and-data-drift.md): Ensure optimal performance of Natural Language Inference models in production by monitoring data drift and model health with Galileo NLP Studio.
- [Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification.md): With Galileo for Text Classification, you can improve your classification models by improving the quality of your training data.
- [Automated Production Monitoring](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/automated-production-monitoring.md): Monitor text classification models in production with automated tools from Galileo NLP Studio to detect data drift and maintain performance.
- [Build your own conditions](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/build-your-own-conditions.md): A class to build custom conditions for DataFrame assertions and alerting.
- [Text Classification | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/getting-started.md): Start training and deploying text classification models in Galileo NLP Studio with this guide on setup, data preparation, and workflow integration.
- [Logging Data | Text Classification in Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/logging-data-to-galileo.md): The fastest way to find data errors in Galileo.
- [Model Monitoring & Data Drift | Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/model-monitoring-and-data-drift.md): Monitor text classification models in production with Galileo NLP Studio, detecting data drift and ensuring consistent model performance over time.
- [Training High-Quality Supervised NLP Models | Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/train-high-quality-supervised-nlp-models.md): Galileo NLP Studio supports natural language processing tasks across the lifecycle of your model development.
- [Overview of Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate.md): Stop experimenting in spreadsheets and notebooks. Use Evaluate’s powerful insights to build GenAI systems that just work.
- [Human Ratings](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/human-ratings.md): Learn how human ratings in Galileo Evaluate enable accurate model evaluations and improve performance through qualitative feedback.
- [Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/metrics.md): Metrics are quantitative or qualitative ways to express insights about the [run](/galileo/gen-ai-studio-products/galileo-evaluate/concepts/run).
- [Project Concepts | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/project.md): Understand project concepts in Galileo Evaluate, including organization of datasets, metrics, and workflows for AI evaluation.
- [Run](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/run.md): Runs in Galileo are experiments or iterations done within a [project](/galileo/gen-ai-studio-products/galileo-evaluate/concepts/project).
- [Template](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/template.md): Leverage templates in Galileo Evaluate to standardize metrics, model assessments, and workflows for efficient generative AI evaluation.
- [Context vs. Instruction Adherence | Galileo Evaluate FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/faq/context-adherence-vs-instruction-adherence.md): Understand the distinctions between Context Adherence and Instruction Adherence metrics in Galileo Evaluate to assess generative AI outputs accurately.
- [Error Computing Metrics | Galileo Evaluate FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/faq/errors-computing-metrics.md): Find solutions to common errors in computing metrics within Galileo Evaluate, including missing integrations and rate limit issues, to streamline your AI evaluations.
- [How-To Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to.md): Follow step-by-step instructions in Galileo Evaluate to assess generative AI models, configure metrics, and analyze performance effectively.
- [A/B Compare Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/a-b-compare-prompts.md): Easily compare multiple LLM runs on a single screen for better decision-making.
- [Access Control Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/access-control.md): Manage user permissions and securely share projects in Galileo Evaluate using detailed access control features, including system roles and group management.
- [Add Tags and Metadata to Prompt Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/add-tags-and-metadata-to-prompt-runs.md): While you are experimenting with your prompts, you will probably be tuning many parameters.
- [Auto-generating an LLM-as-a-judge](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/autogen-metrics.md): Learn how to use Galileo's Autogen feature to generate LLM-as-a-judge metrics.
- [Choose your Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/choose-your-guardrail-metrics.md): Select and understand guardrail metrics in Galileo Evaluate to effectively assess your prompts and models, utilizing both industry-standard and proprietary metrics.
- [Collaborate with other personas](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/collaborate-with-other-personas.md): Galileo Evaluate is geared toward cross-functional collaboration. Most teams using Galileo consist of a mix of the following personas.
- [Customizing your LLM-powered metrics via CLHF](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/continuous-learning-via-human-feedback.md): Learn how to customize your LLM-powered metrics with Continuous Learning via Human Feedback.
- [Create an Evaluation Set](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/create-an-evaluation-set.md): Before starting your experiments, we recommend creating an evaluation set.
- [Customize Chainpoll-powered Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/customize-chainpoll-powered-metrics.md): Improve metric accuracy by customizing your Chainpoll-powered metrics.
- [Enabling Scorers in Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/enabling-scorers-in-runs.md): Learn how to turn on metrics when creating runs in your Python environment.
- [Evaluate and Optimize Agents](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-agents--chains-or-multi-step-workflows.md): How to use Galileo Evaluate with Agents.
- [Evaluate and Optimize Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-prompts.md): How to use Galileo Evaluate for prompt engineering.
- [Evaluate and Optimize RAG Applications](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-rag-applications.md): How to use Galileo Evaluate with RAG applications.
- [Evaluate with Human Feedback](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-with-human-feedback.md): Galileo allows you to run qualitative human evaluations of your prompts and responses.
- [Experiment with Multiple Workflows](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/experiment-with-multiple-chain-workflows.md): If you're building a multi-step workflow (e.g. a RAG system, an agent, or a chain) and want to experiment with multiple combinations of parameters or versions at once, Chain Sweeps are your friend.
- [Experiment with Multiple Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/experiment-with-multiple-prompts.md): Experiment with multiple prompts in Galileo Evaluate to optimize generative AI performance using iterative testing and comprehensive analysis tools.
- [Export your Evaluation Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/export-your-evaluation-runs.md): To download the results of your evaluation, use the Export function. To export your runs, simply click on _Export Prompt Data_.
- [Identify Hallucinations](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/identify-hallucinations.md): How to use Galileo Evaluate to find hallucinations.
- [Log Pre-generated Responses in Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/log-pre-generated-responses-in-python.md): If you already have a dataset of requests and application responses, and you want to log and evaluate these on Galileo without re-generating the responses, you can do so via our workflows.
- [Logging and Comparing against your Expected Answers](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/logging-and-comparing-against-your-expected-answers.md): Expected outputs are a key element for evaluating LLM applications. They provide benchmarks to measure model accuracy, identify errors, and ensure consistent assessments.
- [Programmatically fetch logged data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/programmatically-fetch-logged-data.md): If you want to fetch your logged data and metrics programmatically, you can do so via our Python clients.
- [Prompt Management-Storage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/prompt-management-storage.md): Manage and store your AI prompts efficiently in Galileo Evaluate, with tools for organizing, versioning, and analyzing prompt performance at scale.
- [Finding the best run](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/rank-your-runs.md): Learn how to use Automatic Run Ranking to find the best run.
- [Register Custom Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/register-custom-metrics.md): Galileo GenAI Studio supports Custom Metrics (programmatic or GPT-based) for all your Evaluate and Observe projects. Depending on where, when, and how you want these metrics to be executed, you can choose between **Custom Scorers** and **Registered Scorers**.
- [Share a project](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/share-a-project.md): All projects on Galileo can be shared with others to enable collaboration.
- [Understanding Metric Values | Galileo Evaluate How-To](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/understand-your-metrics-values.md): Gain insights into your metric values in Galileo Evaluate with explainability features, including token-level highlighting and generated explanations for better analysis.
- [Using Datasets](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/using-datasets.md): How to use datasets in Galileo.
- [Integrations | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations.md): Discover Galileo Evaluate's integrations with AI tools and platforms, enabling seamless connectivity and enhanced generative AI evaluation workflows.
- [Logging Workflows](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/custom-chain.md): No matter how you're orchestrating your workflows, we have an interface to help you upload them to Galileo.
- [Databricks](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/data-storage/databricks.md): Integrate with Databricks to seamlessly export your data to Delta Lake.
- [LangChain Integration | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/langchain.md): Galileo integrates natively with your LangChain application through callbacks.
- [LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms.md): Integrate large language models (LLMs) into Galileo Evaluate to assess performance, refine outputs, and enhance generative AI model capabilities.
- [Adding Custom LLM APIs / Fine Tuned LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms/adding-custom-llms.md): Showcases how to use Galileo with any LLM API or custom fine-tuned LLM not supported out of the box by Galileo.
- [Supported LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms/supported-llms.md): Galileo comes with support for the following LLMs out of the box. In the Playground, you will see models for which you've added an integration.
- [Quickstart Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart.md): Start using Galileo Evaluate with this quickstart guide, covering prompt engineering, AI evaluation, and integrating tools into existing workflows.
- [Integrate Evaluate Into My Existing Application With Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart/integrate-evaluate-into-my-existing-application-with-python.md): Learn how to integrate Galileo Evaluate into your Python applications, featuring step-by-step guidance and code samples for streamlined integration.
- [Prompt Engineering From A UI](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart/prompt-engineering-from-a-ui.md): Explore UI-driven prompt engineering in Galileo Evaluate to create, test, and refine prompts with intuitive interfaces and robust evaluation tools.
- [Overview of Galileo Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics.md): Utilize Galileo's Guardrail Metrics to monitor generative AI models, ensuring adherence to quality, correctness, and alignment with project goals.
- [Action Advancement](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/action-advancement.md): Understand Galileo's Action Advancement Metric.
- [Action Completion](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/action-completion.md): Understand Galileo's Action Completion Metric.
- [BLEU and ROUGE](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/bleu-and-rouge-1.md): Understand BLEU & ROUGE-1 scores.
- [Chunk Attribution](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution.md): Understand Galileo's Chunk Attribution Metric.
- [Chunk Attribution Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution/chunk-attribution-luna.md): Understand Galileo's Chunk Attribution Luna Metric.
- [Chunk Attribution Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution/chunk-attribution-plus.md): Understand Galileo's Chunk Attribution Plus Metric.
- [Chunk Relevance](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-relevance.md): Understand Galileo's Chunk Relevance Luna Metric.
- [Chunk Utilization](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization.md): Understand Galileo's Chunk Utilization Metric.
- [Chunk Utilization Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization/chunk-utilization-luna.md): Understand Galileo's Chunk Utilization Luna Metric
- [Chunk Utilization Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization/chunk-utilization-plus.md): Leverage Chunk Utilization+ in Galileo Guardrail Metrics to optimize generative AI output segmentation and maximize model efficiency.
- [Completeness](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness.md): Understand Galileo's Completeness Metric
- [Completeness Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness/completeness-luna.md): Understand Galileo's Completeness Luna Metric
- [Completeness Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness/completeness-plus.md): Understand Galileo's Completeness Plus Metric
- [Context Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence.md): Understand Galileo's Context Adherence Metric
- [Context Adherence Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence/context-adherence-luna.md): Understand Galileo's Context Adherence Luna Metric
- [Context Adherence Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence/context-adherence-plus.md): Understand Galileo's Context Adherence Plus Metric
- [Correctness](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/correctness.md): Understand Galileo's Correctness Metric
- [Context vs. Instruction Adherence | Guardrail Metrics FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/faq/context-adherence-vs-instruction-adherence.md): Understand the differences between Context Adherence and Instruction Adherence metrics in Galileo's Guardrail Metrics to accurately evaluate model outputs.
- [Error Computing Metrics | Guardrail Metrics FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/faq/errors-computing-metrics.md): Find solutions to common errors in computing metrics within Galileo's Guardrail Metrics, including missing integrations and rate limit issues, to streamline your evaluations.
- [Ground Truth Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/ground-truth-adherence.md): Measure ground truth adherence in generative AI models with Galileo's Guardrail Metrics, ensuring accurate and aligned outputs with dataset benchmarks.
- [Instruction Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/instruction-adherence.md): Assess instruction adherence in AI outputs using Galileo Guardrail Metrics to ensure prompt-driven models generate precise and actionable results.
- [Private Identifiable Information](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/private-identifiable-information.md): Understand Galileo's PII Metric
- [Prompt Injection](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/prompt-injection.md): Understand Galileo's Prompt Injection Metric
- [Prompt Perplexity](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/prompt-perplexity.md): Understand Galileo's Prompt Perplexity Metric
- [Sexism](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/sexism.md): Understand Galileo's Sexism Metric
- [Tone](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tone.md): Understand Galileo's Tone Metric
- [Tool Error](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tool-error.md): Understand Galileo's Tool Error Metric
- [Tool Selection Quality](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tool-selection-quality.md): Understand Galileo's Tool Selection Quality Metric
- [Toxicity](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/toxicity.md): Understand Galileo's Toxicity Metric
- [Uncertainty](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/uncertainty.md): Understand Galileo's Uncertainty Metric
- [Overview of Galileo LLM Fine-Tune](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune.md): Fine-tune large language models with Galileo's LLM Fine-Tune tools, enabling precise adjustments for optimized AI performance and output quality.
- [Console Walkthrough](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/console-walkthrough.md): Upon completing a run, you'll be taken to the Galileo Console.
- [Finding Similar Samples](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/finding-similar-samples.md): Similarity search allows you to discover **similar samples** within your datasets.
- [Quickstart Guide | Galileo LLM Fine-Tune](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/quickstart.md): Get started with Galileo's LLM Fine-Tune in this quickstart guide, featuring step-by-step instructions for tuning AI models effectively.
- [Configuring Dq Auto](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/quickstart/dq.auto.md): Automatic Data Insights on your Seq2Seq dataset
- [Taking Action](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/taking-action.md): Take actionable steps in Galileo LLM Fine-Tune to address model performance issues, refine outputs, and achieve targeted AI improvements.
- [Using Alerts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-alerts.md): Utilize Galileo LLM Fine-Tune's Alerts feature to detect and address dataset issues like high Data Error Potential scores and uncertainty outputs, enhancing data quality.
- [Using Data Error Potential](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-data-error-potential.md): Learn about Galileo LLM Fine-Tune's Data Error Potential (DEP) score to identify and address errors in your training data, improving overall data quality.
- [Using Uncertainty](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-uncertainty.md): On dataset splits where generations are enabled (e.g. the _Test split_), you'll see Uncertainty Scores and token-level Uncertainty highlighting.
- [Visualizing And Understanding Your Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/visualizing-and-understanding-your-data.md): Fine-tuning an LLM often requires large datasets.
- [Overview of Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe.md): Monitor and analyze generative AI models with Galileo Observe, using real-time data insights to maintain performance and ensure quality outputs.
- [Context vs. Instruction Adherence | Galileo Observe FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/faq/context-adherence-vs-instruction-adherence.md): Differentiate between Context Adherence and Instruction Adherence metrics in Galileo Observe to effectively evaluate and enhance your model's responses.
- [Error Computing Metrics | Galileo Observe FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/faq/errors-computing-metrics.md): Troubleshoot common errors in Galileo Observe's metric computations, including integration issues, rate limits, JSON parsing errors, and missing embeddings, to ensure accurate evaluations.
- [Getting Started | Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/getting-started.md): How to monitor your apps with Galileo Observe
- [How-To Guide | Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to.md): Learn how to use Galileo Observe to monitor and analyze generative AI models, including setup instructions, data logging, and workflow integrations.
- [How to Set Up Access Control](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/access-control.md): Manage user permissions and securely share projects in Galileo Observe using detailed access control features, including system roles and group management.
- [Choosing Your Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/choosing-your-guardrail-metrics.md): Select and understand guardrail metrics in Galileo Observe to effectively evaluate your LLM applications, utilizing both industry-standard and proprietary metrics.
- [Exporting Your Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/exporting-your-data.md): Use the Export function to download your Observe data.
- [Identifying And Debugging Issues](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/identifying-and-debugging-issues.md): Once your monitored LLM app is up and running and you've selected your Guardrail Metrics, you can start monitoring your LLM app using Galileo.
- [Logging Data Via Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/logging-data-via-python.md): Learn how to manually log your data via our Python Logger
- [Monitoring Your Rag Application](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/monitoring-your-rag-application.md): Galileo Observe allows you to monitor your Retrieval-Augmented Generation (RAG) application with out-of-the-box Tracing and Analytics.
- [Programmatically Fetching Logged Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/programmatically-fetching-logged-data.md): Fetch logged data programmatically in Galileo Observe with step-by-step instructions for seamless integration into automated workflows and analysis tools.
- [Registering And Using Custom Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/registering-and-using-custom-metrics.md): Registered Metrics let your team define custom metrics (programmatic or GPT-based) for your Observe projects.
- [Setting Up Alerts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/setting-up-alerts.md): How to set up Alerts and automatically be alerted when things go wrong
- [Understanding Metric Values | Galileo Observe How-To](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/understand-your-metric-s-values.md): Gain insights into your metric values in Galileo Observe with explainability features, including token-level highlighting and generated explanations for better analysis.
- [Logging Data Via Langchain Callback](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/integrations/langchain.md): Learn how to manually log your data from your LangChain chains
- [Overview of Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect.md): Explore Galileo Protect to safeguard AI applications with customizable rulesets, error detection, and robust metrics for enhanced AI governance.
- [Action](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/action.md): Galileo provides a set of action types (override, passthrough) that the user can use, along with a configuration for each action type.
- [Project Concepts | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/project.md): Understand project management in Galileo Protect, focusing on ruleset organization, AI model protection, and error monitoring within structured workflows.
- [Rule](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/rule.md): A condition you never want your application to break, composed of three ingredients.
- [Ruleset](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/ruleset.md): All of the Rules within a Ruleset are executed in parallel, and the final resolution depends on all of the rules being completed.
- [Stage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/stage.md): A set of rulesets that are applied during _one_ invocation.
- [How-To Guide | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to.md): Follow detailed instructions on using Galileo Protect, including setting up rulesets, monitoring workflows, and ensuring secure AI application operations.
- [Creating And Using Stages](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/creating-and-using-stages.md): Learn to create and manage stages in Galileo Protect, enabling structured AI monitoring and progressive error resolution throughout the deployment lifecycle.
- [Editing Centralized Stages](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/editing-centralized-stages.md): Edit centralized stages in Galileo Protect with this guide, ensuring accurate ruleset updates and maintaining effective AI monitoring across applications.
- [Invoking Rulesets](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/invoking-rulesets.md): Invoke rulesets in Galileo Protect to apply AI safeguards effectively, with comprehensive guidance on ruleset usage, configuration, and execution.
- [Pausing Or Resuming A Stage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/pausing-or-resuming-a-stage.md): Once you've created a project and a stage in Galileo Protect, you can pause and resume the stage.
- [Setting A Timeout On Your Protect Requests](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/setting-a-timeout-on-your-protect-requests.md): Your Protect Rules rely on [Guardrail Metrics](/galileo/gen-ai-studio-products/galileo-protect/how-to/supported-metrics-and-operators). Metrics are calculated using ML models, which can have varying latencies.
- [Defining Rules](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/supported-metrics-and-operators.md): Explore supported metrics and operators in Galileo Protect to configure precise rulesets and enhance AI application monitoring and decision-making.
- [LangChain Integration | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/integrations/langchain.md): Galileo Protect can also be used within your LangChain workflows. You can use Protect to validate inputs and outputs at different stages of your workflow. We provide a `tool` that allows you to easily integrate Protect into your LangChain workflows.
- [Quickstart Guide | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/quickstart.md): Get started with Galileo Protect using this quickstart guide, covering setup, ruleset creation, and integration into AI workflows for secure operations.

## OpenAPI Specs

- [openapi](https://api.staging.galileo.ai/public/v1/openapi.json)
- [.prettierrc](https://docs.galileo.ai/.prettierrc.json)
- [.pre-commit-config](https://docs.galileo.ai/.pre-commit-config.yaml)

Built with [Mintlify](https://mintlify.com).