# Galileo

## Docs

- [Get Token](https://docs.galileo.ai/api-reference/auth/get-token)
- [Create Workflows Run](https://docs.galileo.ai/api-reference/evaluate/create-workflows-run): Create a new Evaluate run with workflows. The request body should contain the `workflows` to be ingested and evaluated, plus the `project_id` or `project_name` to which the workflows should be ingested. If the project does not exist, it will be created; if it exists, the workflows will be logged to it. If both `project_id` and `project_name` are provided, `project_id` takes precedence. The `run_name` is optional and will be auto-generated (timestamp-based) if not provided. The body is also expected to include the configuration for the scorers to be used in the evaluation; this configuration is used to evaluate the workflows and generate the results. (A hedged request sketch appears at the end of this page.)
- [API Reference | Getting Started with Galileo](https://docs.galileo.ai/api-reference/getting-started): Get started with Galileo's REST API: learn about base URLs, authentication methods, and how to verify your API setup for seamless integration.
- [Healthcheck](https://docs.galileo.ai/api-reference/health/healthcheck)
- [Get Observe Workflows](https://docs.galileo.ai/api-reference/observe/get-observe-workflows): Get workflows for a specific run in an Observe project.
- [Log Workflows](https://docs.galileo.ai/api-reference/observe/log-workflows): Log workflows to an Observe project. The request body should contain the `workflows` to be ingested, plus the `project_id` or `project_name` to which the workflows should be ingested. If the project does not exist, it will be created; if it exists, the workflows will be logged to it. If both `project_id` and `project_name` are provided, `project_id` takes precedence.
- [Invoke](https://docs.galileo.ai/api-reference/protect/invoke)
- [WorkflowStep](https://docs.galileo.ai/api-reference/schemas/workflowstep)
- [Python Client Reference | Galileo Evaluate](https://docs.galileo.ai/client-reference/evaluate/python): Integrate Galileo's Evaluate module into your Python applications with this guide, featuring installation steps and examples for prompt quality assessment.
- [TypeScript Client Reference | Galileo Evaluate](https://docs.galileo.ai/client-reference/evaluate/typescript): Incorporate Galileo's Evaluate module into your TypeScript projects with this guide, providing setup instructions and workflow logging examples.
- [Data Quality | Fine-Tune NLP Studio Client Reference](https://docs.galileo.ai/client-reference/finetune-nlp-studio/data-quality): Enhance your data quality in Galileo's NLP and CV Studio using the 'dataquality' Python package; find installation and usage details here.
- [Python Client Reference | Galileo Observe](https://docs.galileo.ai/client-reference/observe/python): Integrate Galileo's Observe module into your Python applications; access installation instructions and comprehensive documentation for workflow monitoring.
- [TypeScript Client Reference | Galileo Observe](https://docs.galileo.ai/client-reference/observe/typescript): Integrate Galileo's Observe module into TypeScript applications with setup guides, sample code, and monitoring instructions for seamless workflow tracking.
- [Client References](https://docs.galileo.ai/client-reference/overview): Explore Galileo's client references, including Python and TypeScript integrations, to streamline Evaluate, Observe, and Protect module implementations.
- [Python Client Reference | Galileo Protect](https://docs.galileo.ai/client-reference/protect/python): Integrate Galileo's Protect module into Python workflows with this guide, including code examples, setup instructions, and ruleset invocation details.
- [Data Privacy And Compliance](https://docs.galileo.ai/deployments/data-privacy-and-compliance): This page covers data residency and the compliance standards Galileo provides.
- [Dependencies](https://docs.galileo.ai/deployments/dependencies): Understand Galileo deployment prerequisites and dependencies to ensure a smooth installation and integration across supported platforms.
- [Azure AKS](https://docs.galileo.ai/deployments/deploying-galileo-aks): This page details the steps to deploy a Galileo Kubernetes cluster in Microsoft Azure's AKS service environment.
- [Deploying Galileo on Amazon EKS](https://docs.galileo.ai/deployments/deploying-galileo-eks): Deploy Galileo on Amazon EKS with a step-by-step guide for configuring, managing, and scaling Galileo's infrastructure using Kubernetes clusters.
- [Zero Access Deployment | Galileo on EKS](https://docs.galileo.ai/deployments/deploying-galileo-eks-zero-access): Create a private Kubernetes cluster with EKS in your AWS account, upload containers to your container registry, and deploy Galileo.
- [EKS Cluster Config Example | Zero Access Deployment](https://docs.galileo.ai/deployments/deploying-galileo-eks-zero-access/eks-cluster-config-example-zero-access): Access a zero-access EKS cluster configuration example for secure Galileo deployments on Amazon EKS, following best practices for Kubernetes security.
- [EKS Cluster Config Example | Galileo Deployment](https://docs.galileo.ai/deployments/deploying-galileo-eks/eks-cluster-config-example): Review a detailed EKS cluster configuration example for deploying Galileo on Amazon EKS, ensuring efficient Kubernetes setup and management.
- [Updating Cluster](https://docs.galileo.ai/deployments/deploying-galileo-eks/updating-galileo-eks-cluster): Update your Galileo EKS cluster from Kubernetes 1.21 to 1.23.
- [Exoscale](https://docs.galileo.ai/deployments/deploying-galileo-exoscale): The Galileo applications run on managed Kubernetes-like environments; this document specifically covers the configuration and deployment of an Exoscale Cloud SKS environment.
- [Deploying Galileo on Google GKE](https://docs.galileo.ai/deployments/deploying-galileo-gke): Deploy Galileo on Google Kubernetes Engine (GKE) with this guide, covering configuration steps, cluster setup, and infrastructure scaling strategies.
- [Cluster Setup Script](https://docs.galileo.ai/deployments/deploying-galileo-gke/galileo-gcp-setup-script): Utilize the Galileo GCP setup script for automating Google Cloud Platform (GCP) configuration to deploy Galileo seamlessly on GKE clusters.
- [Enterprise Deployment](https://docs.galileo.ai/deployments/overview): Gain an overview of Galileo deployment options, covering supported platforms like Amazon EKS and Google GKE, setup requirements, and best practices.
- [Post Deployment Checklist](https://docs.galileo.ai/deployments/post-deployment-checklist): This guide walks you through steps you can take to make sure your Galileo cluster is properly deployed and running well.
- [Prerequisites](https://docs.galileo.ai/deployments/pre-requisites): Before deploying Galileo, ensure the following prerequisites are met.
- [Scheduling Automatic Backups For Your Cluster](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster): Schedule automatic backups for Galileo clusters with this guide, ensuring data security, disaster recovery, and operational resilience for deployments.
- [AWS Velero Account Setup Script](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster/aws-velero-account-setup-script): Automate AWS Velero setup for Galileo cluster backups with this script, ensuring seamless backup scheduling and data resilience for AWS deployments.
- [GCP Velero Account Setup Script](https://docs.galileo.ai/deployments/scheduling-automatic-backups-for-your-cluster/gcp-velero-account-setup-script): Set up Velero for Google Cloud backups with this GCP account script, enabling automated backup scheduling and robust data protection for Galileo clusters.
- [Security & Access Control](https://docs.galileo.ai/deployments/security-and-access-control): This page covers the networking, security, and access control provisions that Galileo deployments enable.
- [Setting Up New Users](https://docs.galileo.ai/deployments/setting-up-new-users): Learn how to onboard new users in Galileo deployments with detailed instructions on user roles, access control, and permissions management.
- [SSO Integration](https://docs.galileo.ai/deployments/sso-integration): This page covers our SSO integration support and the information we need to set up SSO for your Galileo cluster.
- [Examples](https://docs.galileo.ai/examples/overview): Explore Galileo's practical examples covering real-world use cases and workflows for Evaluate, Observe, and Protect modules across AI projects.
- [What is Galileo?](https://docs.galileo.ai/galileo): Evaluate, Observe, and Protect your GenAI applications.
- [ChainPoll](https://docs.galileo.ai/galileo-ai-research/chainpoll): ChainPoll is a powerful, flexible technique for LLM-based evaluation that is unique to Galileo. It is used to power multiple metrics across the Galileo platform.
- [Class Boundary Detection](https://docs.galileo.ai/galileo-ai-research/class-boundary-detection): Detecting samples on the decision boundary.
- [Data Drift Detection](https://docs.galileo.ai/galileo-ai-research/data-drift-detection): Discover Galileo's data drift detection methods to monitor AI model performance, identify data changes, and maintain model reliability in production.
- [Errors In Object Detection](https://docs.galileo.ai/galileo-ai-research/errors-in-object-detection): This page describes the rich error types offered by Galileo for Object Detection.
- [Galileo Data Error Potential (DEP)](https://docs.galileo.ai/galileo-ai-research/galileo-data-error-potential-dep): Learn about Galileo's Data Error Potential (DEP) score, a metric to identify and categorize machine learning data errors, enhancing data quality and model performance.
- [Likely Mislabeled](https://docs.galileo.ai/galileo-ai-research/likely-mislabeled): Garbage in, garbage out.
- [Galileo AI Research](https://docs.galileo.ai/galileo-ai-research/overview): Research produced by Galileo AI Labs.
- [RAG Quality Metrics Using ChainPoll](https://docs.galileo.ai/galileo-ai-research/rag-quality-metrics-using-chainpoll): Learn how ChainPoll metrics assess retrieval-augmented generation (RAG) system quality, improving accuracy and performance of generative AI models.
- [RAG Quality Metrics Using Luna](https://docs.galileo.ai/galileo-ai-research/rag-quality-metrics-using-luna): This page provides a brief overview of the research behind Galileo's RAG Quality Metrics.
- [FAQs](https://docs.galileo.ai/galileo/galileo-nlp-studio/faqs): You have questions, we have (some) answers!
- [Third-Party (3P) Integrations](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/3p-integrations): Galileo integrates seamlessly with your tools.
- [Access Control Features | Galileo NLP Studio](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/access-control): Discover Galileo NLP Studio's access control features, including user roles and group management, to securely share and manage projects within your organization.
- [Actions](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/actions): Actions help close the inspection loop and error discovery process. We support a number of actions.
- [Clustering](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/clusters): To help you make sense of your data and your embeddings view, Galileo provides out-of-the-box Clustering and Explainability.
- [Compare Across Runs](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/compare-across-runs): Track your experiments, data, and models in one place.
- [Dataset Slices](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/dataset-slices): Slices is a powerful Galileo feature that allows you to monitor, across training runs, a sub-population of the dataset based on metadata filters.
- [Dataset View](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/dataset-view): The Dataset View provides an interactive data table for inspecting your datasets.
- [Embeddings View](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/embeddings-view): The Embeddings View provides a visual playground for you to interact with your datasets.
- [Error Types Breakdown](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/error-types-breakdown): For use cases with complex data and error types (e.g. Named Entity Recognition, Object Detection, or Semantic Segmentation), the **Error Types Chart** gives you an insight into exactly how the Ground Truth differed from your model's predictions.
- [Galileo + Delta Lake Databricks](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/galileo-+-delta-lake-databricks): Integrate Galileo with Delta Lake on Databricks to manage large-scale data, ensuring seamless collaboration and enhanced NLP workflows.
- [Insights Panel](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/insights-panel): Utilize Galileo's Insights Panel to analyze data trends, detect issues, and gain actionable insights for improving NLP model performance.
- [Product Features](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/overview): Explore Galileo NLP Studio's features, including data insights, error detection, and monitoring tools for improving NLP workflows and AI quality.
- [Similarity Search](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/similarity-search): Similarity search provides an out-of-the-box ability to discover **similar samples** within your datasets.
- [Alerts](https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/xray-insights): Explore Galileo NLP Studio's Alerts feature, designed to detect and summarize dataset issues like mislabeling and class imbalance, enhancing data inspection.
- [Multi Label Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/multi-label-text-classification): Implement multi-label text classification in Galileo NLP Studio to accurately label datasets, streamline workflows, and enhance model training.
- [Multi-Label Text Classification | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/multi-label-text-classification/getting-started): Get started with multi-label text classification in Galileo NLP Studio, featuring setup instructions, workflow integration, and data preparation tips.
- [Named Entity Recognition](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition): NER is a sequence tagging problem: given an input document, the task is to correctly identify the span boundaries for various entities and also classify the spans into correct entity types.
- [Named Entity Recognition | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition/getting-started): Start building named entity recognition (NER) models in Galileo NLP Studio with this guide on setup, labeling, and model training workflows.
- [Model Monitoring & Data Drift | Named Entity Recognition](https://docs.galileo.ai/galileo/galileo-nlp-studio/named-entity-recognition/model-monitoring-and-data-drift): Learn how to monitor Named Entity Recognition models in production with Galileo NLP Studio, detecting data drift and maintaining model health effectively.
- [Natural Language Inference](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference): Leverage Galileo NLP Studio for natural language inference (NLI), enabling accurate predictions and model performance monitoring.
- [Natural Language Inference | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/getting-started): Begin implementing natural language inference (NLI) workflows in Galileo NLP Studio with clear instructions for setup and model evaluation.
- [Logging Data | Natural Language Inference in Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/logging-data-to-galileo): The fastest way to find data errors in Galileo.
- [Model Monitoring & Data Drift | Natural Language Inference](https://docs.galileo.ai/galileo/galileo-nlp-studio/natural-language-inference/model-monitoring-and-data-drift): Ensure optimal performance of Natural Language Inference models in production by monitoring data drift and model health with Galileo NLP Studio.
- [Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification): Using Galileo for Text Classification, you can improve your classification models by improving the quality of your training data.
- [Automated Production Monitoring](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/automated-production-monitoring): Monitor text classification models in production with automated tools from Galileo NLP Studio to detect data drift and maintain performance.
- [Build Your Own Conditions](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/build-your-own-conditions): A class to build custom conditions for DataFrame assertions and alerting.
- [Text Classification | Galileo NLP Studio Guide](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/getting-started): Start training and deploying text classification models in Galileo NLP Studio with this guide on setup, data preparation, and workflow integration.
- [Logging Data | Text Classification in Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/logging-data-to-galileo): The fastest way to find data errors in Galileo.
- [Model Monitoring & Data Drift | Text Classification](https://docs.galileo.ai/galileo/galileo-nlp-studio/text-classification/model-monitoring-and-data-drift): Monitor text classification models in production with Galileo NLP Studio, detecting data drift and ensuring consistent model performance over time.
- [Training High-Quality Supervised NLP Models | Galileo](https://docs.galileo.ai/galileo/galileo-nlp-studio/train-high-quality-supervised-nlp-models): Galileo NLP Studio supports Natural Language Processing tasks across the life-cycle of your model development.
- [Overview of Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate): Stop experimenting in spreadsheets and notebooks. Use Evaluate's powerful insights to build GenAI systems that just work.
- [Human Ratings](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/human-ratings): Learn how human ratings in Galileo Evaluate enable accurate model evaluations and improve performance through qualitative feedback.
- [Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/metrics): Metrics are quantitative or qualitative ways to express insights about the [run](/galileo/gen-ai-studio-products/galileo-evaluate/concepts/run).
- [Project Concepts | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/project): Understand project concepts in Galileo Evaluate, including organization of datasets, metrics, and workflows for AI evaluation.
- [Run](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/run): Runs in Galileo are experiments or iterations done within a [project](/galileo/gen-ai-studio-products/galileo-evaluate/concepts/project).
- [Template](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/concepts/template): Leverage templates in Galileo Evaluate to standardize metrics, model assessments, and workflows for efficient generative AI evaluation.
- [Context vs. Instruction Adherence | Galileo Evaluate FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/faq/context-adherence-vs-instruction-adherence): Understand the distinctions between Context Adherence and Instruction Adherence metrics in Galileo Evaluate to assess generative AI outputs accurately.
- [Error Computing Metrics | Galileo Evaluate FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/faq/errors-computing-metrics): Find solutions to common errors in computing metrics within Galileo Evaluate, including missing integrations and rate limit issues, to streamline your AI evaluations.
- [How-To Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to): Follow step-by-step instructions in Galileo Evaluate to assess generative AI models, configure metrics, and analyze performance effectively.
- [A/B Compare Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/a-b-compare-prompts): Easily compare multiple LLM runs in a single screen for better decision making.
- [Access Control Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/access-control): Manage user permissions and securely share projects in Galileo Evaluate using detailed access control features, including system roles and group management.
- [Add Tags and Metadata to Prompt Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/add-tags-and-metadata-to-prompt-runs): While you are experimenting with your prompts, you will probably be tuning many parameters.
- [Choose your Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/choose-your-guardrail-metrics): Select and understand guardrail metrics in Galileo Evaluate to effectively assess your prompts and models, utilizing both industry-standard and proprietary metrics.
- [Collaborate with other personas](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/collaborate-with-other-personas): Galileo Evaluate is geared for cross-functional collaboration. Most of the teams using Galileo consist of a mix of the following personas.
- [Create an Evaluation Set](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/create-an-evaluation-set): Before starting your experiments, we recommend creating an evaluation set.
- [Customize Chainpoll-powered Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/customize-chainpoll-powered-metrics): Improve metric accuracy by customizing your ChainPoll-powered metrics.
- [Enabling Scorers in Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/enabling-scorers-in-runs): Learn how to turn on metrics when creating runs in your Python environment.
- [Evaluate and Optimize Agents](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-agents--chains-or-multi-step-workflows): How to use Galileo Evaluate with Agents.
- [Evaluate and Optimize Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-prompts): How to use Galileo Evaluate for prompt engineering.
- [Evaluate and Optimize RAG Applications](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-and-optimize-rag-applications): How to use Galileo Evaluate with RAG applications.
- [Evaluate with Human Feedback](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/evaluate-with-human-feedback): Galileo allows you to do qualitative human evaluations of your prompts and responses.
- [Experiment with Multiple Workflows](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/experiment-with-multiple-chain-workflows): If you're building a multi-step workflow or chain (e.g. a RAG system or an Agent) and want to experiment with multiple combinations of parameters or versions at once, Chain Sweeps are your friend.
- [Experiment with Multiple Prompts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/experiment-with-multiple-prompts): Experiment with multiple prompts in Galileo Evaluate to optimize generative AI performance using iterative testing and comprehensive analysis tools.
- [Export your Evaluation Runs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/export-your-evaluation-runs): To download the results of your evaluation, you can use the Export function. To export your runs, simply click on _Export Prompt Data_.
- [Identify Hallucinations](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/identify-hallucinations): How to use Galileo Evaluate to find Hallucinations.
- [Log Pre-generated Responses in Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/log-pre-generated-responses-in-python): If you already have a dataset of requests and application responses, and you want to log and evaluate these on Galileo without re-generating the responses, you can do so via our workflows.
- [Logging and Comparing against your Expected Answers](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/logging-and-comparing-against-your-expected-answers): Expected outputs are a key element for evaluating LLM applications. They provide benchmarks to measure model accuracy, identify errors, and ensure consistent assessments.
- [Programmatically fetch logged data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/programmatically-fetch-logged-data): If you want to fetch your logged data and metrics programmatically, you can do so via our Python clients.
- [Prompt Management-Storage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/prompt-management-storage): Manage and store your AI prompts efficiently in Galileo Evaluate, with tools for organizing, versioning, and analyzing prompt performance at scale.
- [Finding the best run](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/rank-your-runs): Learn how to use Automatic Run Ranking to find the best run.
- [Register Custom Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/register-custom-metrics): Galileo GenAI Studio supports Custom Metrics (programmatic or GPT-based) for all your Evaluate and Observe projects. Depending on where, when, and how you want these metrics to be executed, you can choose between **Custom Scorers** and **Registered Scorers**. (A scorer-file sketch appears at the end of this page.)
- [Share a project](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/share-a-project): All projects on Galileo can be shared with others to enable collaboration.
- [Understanding Metric Values | Galileo Evaluate How-To](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/understand-your-metrics-values): Gain insights into your metric values in Galileo Evaluate with explainability features, including token-level highlighting and generated explanations for better analysis.
- [Integrations | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations): Discover Galileo Evaluate's integrations with AI tools and platforms, enabling seamless connectivity and enhanced generative AI evaluation workflows.
- [Logging Workflows](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/custom-chain): No matter how you're orchestrating your workflows, we have an interface to help you upload them to Galileo.
- [Databricks](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/data-storage/databricks): Integrate with Databricks to seamlessly export your data to Delta Lake.
- [LangChain Integration | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/langchain): Galileo allows you to integrate with your LangChain application natively through callbacks.
- [LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms): Integrate large language models (LLMs) into Galileo Evaluate to assess performance, refine outputs, and enhance generative AI model capabilities.
- [Adding Custom LLM APIs / Fine-Tuned LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms/adding-custom-llms): Showcases how to use Galileo with any LLM API or custom fine-tuned LLMs not supported out-of-the-box by Galileo.
- [Supported LLMs](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/integrations/llms/supported-llms): Galileo comes with support for the following LLMs out of the box. In the Playground, you will see models for which you've added an integration.
- [Quickstart Guide | Galileo Evaluate](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart): Start using Galileo Evaluate with this quickstart guide, covering prompt engineering, AI evaluation, and integrating tools into existing workflows.
- [Integrate Evaluate Into My Existing Application With Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart/integrate-evaluate-into-my-existing-application-with-python): Learn how to integrate Galileo Evaluate into your Python applications, featuring step-by-step guidance and code samples for streamlined integration.
- [Prompt Engineering From A UI](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/quickstart/prompt-engineering-from-a-ui): Explore UI-driven prompt engineering in Galileo Evaluate to create, test, and refine prompts with intuitive interfaces and robust evaluation tools.
- [Overview of Galileo Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics): Utilize Galileo's Guardrail Metrics to monitor generative AI models, ensuring adherence to quality, correctness, and alignment with project goals.
- [BLEU and ROUGE](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/bleu-and-rouge-1): Understand BLEU & ROUGE-1 scores.
- [Chunk Attribution](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution): Understand Galileo's Chunk Attribution Metric.
- [Chunk Attribution Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution/chunk-attribution-luna): Understand Galileo's Chunk Attribution Luna Metric.
- [Chunk Attribution Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-attribution/chunk-attribution-plus): Understand Galileo's Chunk Attribution Plus Metric.
- [Chunk Relevance](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-relevance): Understand Galileo's Chunk Relevance Luna Metric.
- [Chunk Utilization](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization): Understand Galileo's Chunk Utilization Metric.
- [Chunk Utilization Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization/chunk-utilization-luna): Understand Galileo's Chunk Utilization Luna Metric.
- [Chunk Utilization Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/chunk-utilization/chunk-utilization-plus): Leverage Chunk Utilization+ in Galileo Guardrail Metrics to optimize generative AI output segmentation and maximize model efficiency.
- [Completeness](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness): Understand Galileo's Completeness Metric.
- [Completeness Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness/completeness-luna): Understand Galileo's Completeness Luna Metric.
- [Completeness Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/completeness/completeness-plus): Understand Galileo's Completeness Plus Metric.
- [Context Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence): Understand Galileo's Context Adherence Metric.
- [Context Adherence Luna](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence/context-adherence-luna): Understand Galileo's Context Adherence Luna Metric.
- [Context Adherence Plus](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/context-adherence/context-adherence-plus): Understand Galileo's Context Adherence Plus Metric.
- [Correctness](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/correctness): Understand Galileo's Correctness Metric.
- [Context vs. Instruction Adherence | Guardrail Metrics FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/faq/context-adherence-vs-instruction-adherence): Understand the differences between Context Adherence and Instruction Adherence metrics in Galileo's Guardrail Metrics to accurately evaluate model outputs.
- [Error Computing Metrics | Guardrail Metrics FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/faq/errors-computing-metrics): Find solutions to common errors in computing metrics within Galileo's Guardrail Metrics, including missing integrations and rate limit issues, to streamline your evaluations.
- [Ground Truth Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/ground-truth-adherence): Measure ground truth adherence in generative AI models with Galileo's Guardrail Metrics, ensuring outputs are accurate and aligned with dataset benchmarks.
- [Instruction Adherence](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/instruction-adherence): Assess instruction adherence in AI outputs using Galileo Guardrail Metrics to ensure prompt-driven models generate precise and actionable results.
- [Private Identifiable Information](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/private-identifiable-information): Understand Galileo's PII Metric.
- [Prompt Injection](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/prompt-injection): Understand Galileo's Prompt Injection Metric.
- [Prompt Perplexity](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/prompt-perplexity): Understand Galileo's Prompt Perplexity Metric.
- [Sexism](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/sexism): Understand Galileo's Sexism Metric.
- [Tone](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tone): Understand Galileo's Tone Metric.
- [Tool Error](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tool-error): Understand Galileo's Tool Error Metric.
- [Tool Selection Quality](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/tool-selection-quality): Understand Galileo's Tool Selection Quality Metric.
- [Toxicity](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/toxicity): Understand Galileo's Toxicity Metric.
- [Uncertainty](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-guardrail-metrics/uncertainty): Understand Galileo's Uncertainty Metric.
- [Overview of Galileo LLM Fine-Tune](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune): Fine-tune large language models with Galileo's LLM Fine-Tune tools, enabling precise adjustments for optimized AI performance and output quality.
- [Console Walkthrough](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/console-walkthrough): Upon completing a run, you'll be taken to the Galileo Console.
- [Finding Similar Samples](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/finding-similar-samples): Similarity search allows you to discover **similar samples** within your datasets.
- [Quickstart Guide | Galileo LLM Fine-Tune](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/quickstart): Get started with Galileo's LLM Fine-Tune in this quickstart guide, featuring step-by-step instructions for tuning AI models effectively.
- [Configuring dq.auto](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/quickstart/dq.auto): Automatic data insights on your Seq2Seq dataset.
- [Taking Action](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/taking-action): Take actionable steps in Galileo LLM Fine-Tune to address model performance issues, refine outputs, and achieve targeted AI improvements.
- [Using Alerts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-alerts): Utilize Galileo LLM Fine-Tune's Alerts feature to detect and address dataset issues like high Data Error Potential scores and uncertainty outputs, enhancing data quality.
- [Using Data Error Potential](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-data-error-potential): Learn about Galileo LLM Fine-Tune's Data Error Potential (DEP) score to identify and address errors in your training data, improving overall data quality.
- [Using Uncertainty](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/using-uncertainty): On dataset splits where generations are enabled (e.g. the _Test split_), you'll see Uncertainty Scores and token-level Uncertainty highlighting.
- [Visualizing And Understanding Your Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-llm-fine-tune/visualizing-and-understanding-your-data): Fine-tuning an LLM often requires large datasets.
- [Overview of Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe): Monitor and analyze generative AI models with Galileo Observe, using real-time data insights to maintain performance and ensure quality outputs.
- [Context vs. Instruction Adherence | Galileo Observe FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/faq/context-adherence-vs-instruction-adherence): Differentiate between Context Adherence and Instruction Adherence metrics in Galileo Observe to effectively evaluate and enhance your model's responses.
- [Error Computing Metrics | Galileo Observe FAQ](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/faq/errors-computing-metrics): Troubleshoot common errors in Galileo Observe's metric computations, including integration issues, rate limits, JSON parsing errors, and missing embeddings, to ensure accurate evaluations.
- [Getting Started | Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/getting-started): How to monitor your apps with Galileo Observe.
- [How-To Guide | Galileo Observe](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to): Learn how to use Galileo Observe to monitor and analyze generative AI models, including setup instructions, data logging, and workflow integrations.
- [How to Set Up Access Control](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/access-control): Manage user permissions and securely share projects in Galileo Observe using detailed access control features, including system roles and group management.
- [Choosing Your Guardrail Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/choosing-your-guardrail-metrics): Select and understand guardrail metrics in Galileo Observe to effectively evaluate your LLM applications, utilizing both industry-standard and proprietary metrics.
- [Exporting Your Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/exporting-your-data): To download your Observe data, you can use the Export function.
- [Identifying And Debugging Issues](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/identifying-and-debugging-issues): Once your monitored LLM app is up and running and you've selected your Guardrail Metrics, you can start monitoring your LLM app using Galileo.
- [Logging Data Via Python](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/logging-data-via-python): Learn how to manually log your data via our Python Logger.
- [Monitoring Your RAG Application](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/monitoring-your-rag-application): Galileo Observe allows you to monitor your Retrieval-Augmented Generation (RAG) application with out-of-the-box Tracing and Analytics.
- [Programmatically Fetching Logged Data](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/programmatically-fetching-logged-data): Fetch logged data programmatically in Galileo Observe with step-by-step instructions for seamless integration into automated workflows and analysis tools.
- [Registering And Using Custom Metrics](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/registering-and-using-custom-metrics): Registered Metrics let your team define custom metrics (programmatic or GPT-based) for your Observe projects.
- [Setting Up Alerts](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/setting-up-alerts): How to set up Alerts and automatically be alerted when things go wrong.
- [Understanding Metric Values | Galileo Observe How-To](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/how-to/understand-your-metric-s-values): Gain insights into your metric values in Galileo Observe with explainability features, including token-level highlighting and generated explanations for better analysis.
- [Logging Data Via LangChain Callback](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-observe/integrations/langchain): Learn how to manually log your data from your LangChain chains.
- [Overview of Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect): Explore Galileo Protect to safeguard AI applications with customizable rulesets, error detection, and robust metrics for enhanced AI governance.
- [Action](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/action): Galileo provides a set of action types (override, passthrough) that you can use, along with a configuration for each action type.
- [Project Concepts | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/project): Understand project management in Galileo Protect, focusing on ruleset organization, AI model protection, and error monitoring within structured workflows.
- [Rule](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/rule): A condition you never want your application to break. It's composed of three ingredients.
- [Ruleset](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/ruleset): All of the Rules within a Ruleset are executed in parallel, and the final resolution depends on all of the rules being completed.
- [Stage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/concepts/stage): A set of rulesets that are applied during _one_ invocation.
- [How-To Guide | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to): Follow detailed instructions on using Galileo Protect, including setting up rulesets, monitoring workflows, and ensuring secure AI application operations.
- [Creating And Using Stages](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/creating-and-using-stages): Learn to create and manage stages in Galileo Protect, enabling structured AI monitoring and progressive error resolution throughout the deployment lifecycle.
- [Editing Centralized Stages](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/editing-centralized-stages): Edit centralized stages in Galileo Protect with this guide, ensuring accurate ruleset updates and maintaining effective AI monitoring across applications.
- [Invoking Rulesets](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/invoking-rulesets): Invoke rulesets in Galileo Protect to apply AI safeguards effectively, with comprehensive guidance on ruleset usage, configuration, and execution. (A hedged invocation sketch appears at the end of this page.)
- [Pausing Or Resuming A Stage](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/pausing-or-resuming-a-stage): Once you've created a project and a stage in Galileo Protect, you can pause and resume the stage.
- [Setting A Timeout On Your Protect Requests](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/setting-a-timeout-on-your-protect-requests): Your Protect Rules rely on [Guardrail Metrics](/galileo/gen-ai-studio-products/galileo-protect/how-to/supported-metrics-and-operators). Metrics are calculated using ML models, which can have varying latencies.
- [Defining Rules](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/how-to/supported-metrics-and-operators): Explore supported metrics and operators in Galileo Protect to configure precise rulesets and enhance AI application monitoring and decision-making.
- [LangChain Integration | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/integrations/langchain): Galileo Protect can also be used within your LangChain workflows. You can use Protect to validate inputs and outputs at different stages of your workflow. We provide a `tool` that allows you to easily integrate Protect into your LangChain workflows.
- [Quickstart Guide | Galileo Protect](https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-protect/quickstart): Get started with Galileo Protect using this quickstart guide, covering setup, ruleset creation, and integration into AI workflows for secure operations.
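
For the workflow-ingestion endpoints listed above (Create Workflows Run, Log Workflows), here is a minimal request sketch in Python. It is hedged: only the fields named in the descriptions (`workflows`, `project_id`/`project_name`, `run_name`, and a scorer configuration) come from this index; the endpoint path, the API-key header name, the workflow item shape, and the scorer-config shape are illustrative assumptions — see the API reference pages for the authoritative contract.

```python
import os
import requests

# Hedged sketch: endpoint path, header name, and payload details beyond the
# documented fields (workflows, project_id/project_name, run_name, scorer
# configuration) are assumptions for illustration only.
BASE_URL = os.environ["GALILEO_API_URL"]  # your cluster's API base URL
HEADERS = {"Galileo-API-Key": os.environ["GALILEO_API_KEY"]}  # hypothetical header name

payload = {
    "project_name": "my-evaluate-project",  # created if it doesn't exist
    "run_name": "baseline-run",             # optional; auto-generated (timestamp-based) if omitted
    "workflows": [                          # the workflows to ingest and evaluate
        {"input": "What is RAG?", "output": "Retrieval-augmented generation is ..."}
    ],
    "scorers": {"context_adherence": True},  # hypothetical scorer-config shape
}

# Create an Evaluate run with workflows (path is an assumption; see the
# Create Workflows Run reference above for the real one).
resp = requests.post(f"{BASE_URL}/evaluate/workflows", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```

The Log Workflows endpoint for Observe follows the same pattern, minus the run and scorer configuration.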
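
The Register Custom Metrics and Registering And Using Custom Metrics guides distinguish Custom Scorers from Registered Scorers. As a rough illustration of the programmatic flavor, the sketch below shows the general shape of a scorer file; the entry-point names (`scorer_fn`, `aggregator_fn`) and their signatures are assumptions for illustration only — consult those guides for the exact contract Galileo expects.

```python
# Hypothetical scorer file for a programmatic (registered) metric.
# Function names and signatures are assumptions, not the documented contract.

def scorer_fn(node_input: str, node_output: str, **kwargs) -> float:
    """Score one logged row: here, a crude response-length score in [0, 1]."""
    return min(len(node_output.split()) / 50.0, 1.0)

def aggregator_fn(scores: list[float], **kwargs) -> dict[str, float]:
    """Aggregate row scores into run-level values: here, the mean."""
    return {"mean": sum(scores) / len(scores) if scores else 0.0}
```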
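
Finally, the Protect pages above describe Rules grouped into Rulesets, action types such as override and passthrough, and a timeout on invocation. Below is a hedged sketch of calling the Invoke endpoint with one ruleset; the path, header name, and payload field names mirror those Rule/Ruleset/Action concepts but are assumptions, not the confirmed wire format.

```python
import os
import requests

BASE_URL = os.environ["GALILEO_API_URL"]
HEADERS = {"Galileo-API-Key": os.environ["GALILEO_API_KEY"]}  # hypothetical header name

# One ruleset with one rule; field names echo the Rule (metric, operator,
# target value), Ruleset, and Action concepts above but are illustrative.
payload = {
    "payload": {"input": "user question", "output": "candidate LLM response"},
    "prioritized_rulesets": [
        {
            "rules": [
                {"metric": "pii", "operator": "contains", "target_value": "ssn"}
            ],
            "action": {  # override the response if the rule fires
                "type": "OVERRIDE",
                "choices": ["Sorry, I can't share personal information."],
            },
        }
    ],
    "timeout": 10,  # seconds; see "Setting A Timeout On Your Protect Requests"
}

resp = requests.post(f"{BASE_URL}/protect/invoke", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```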