Fine-tune large language models with Galileo’s LLM Fine-Tune tools, enabling precise adjustments for optimized AI performance and output quality.
Galileo hooks into your model training through the dataquality Python library. During training, Galileo sees your samples and your model's outputs to find errors in your data. Galileo uses Guardrail Metrics as well as its Data Error Potential (DEP) score to help you find your most problematic samples.
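For illustration, here is a minimal sketch of the training-hook path using the dataquality library. The core calls (dq.init, dq.log_dataset, dq.finish) are real, but the task_type string, column names, and project/run names below are assumptions; see the Quickstart for the exact setup, including the integration that captures model outputs during training.

```python
import dataquality as dq
import pandas as pd

# Start a Galileo run. The task_type value for LLM fine-tuning is an
# assumption here; check the Quickstart for the exact name.
dq.init(
    task_type="seq2seq",
    project_name="my_finetune_project",   # hypothetical project name
    run_name="finetune_run_1",            # hypothetical run name
)

# A tiny, made-up fine-tuning dataset: model input plus target output.
train_df = pd.DataFrame(
    {
        "id": [0, 1],
        "text": ["Summarize: The quick brown fox ...", "Summarize: Rain is expected ..."],
        "label": ["A fox jumps over a dog.", "Rain is forecast for tomorrow."],
    }
)

# Log the samples Galileo should analyze. The column-to-field mapping is an assumption.
dq.log_dataset(train_df, text="text", label="label", id="id", split="training")

# ... run your fine-tuning loop here; Galileo's framework integration
# captures model outputs during training (see the Quickstart).

dq.finish()  # upload and process the run
```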
The Galileo Data Error Potential (DEP) score provides a per-sample, holistic data-quality score that identifies the samples in your dataset contributing to high or low model performance, i.e. 'pulling' the model up or down, respectively. In other words, the DEP score measures the potential 'misfit' of an observation to the given model.
Galileo also surfaces token-level DEP scores so you can see which parts of your Target Output or Ground Truth your model is struggling with.
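For illustration, a hypothetical way to rank samples by DEP once a run has been processed, assuming you have exported per-sample metrics to a CSV; the file name and column names are made up for this sketch.

```python
import pandas as pd

# Hypothetical export of per-sample Galileo metrics; column names are assumptions.
df = pd.read_csv("galileo_export.csv")

# The highest-DEP samples are the ones most likely "pulling" the model down,
# so they are the first candidates to inspect, relabel, or drop.
worst = df.sort_values("data_error_potential", ascending=False).head(20)
print(worst[["text", "target", "data_error_potential"]])
```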
Getting Started
There are a few ways to get started with Galileo Fine-Tune: you can hook into your model training, or upload your data via Galileo Auto. See the Quickstart section for step-by-step instructions.
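For illustration, a rough sketch of the Galileo Auto path, where you hand Galileo your data and it trains and analyzes a model for you. Whether dq.auto supports the fine-tune task, and the column names and parameters it expects, are assumptions here; follow the Quickstart for the supported call.

```python
import dataquality as dq
import pandas as pd

# A tiny, made-up dataset of inputs and target outputs.
train_df = pd.DataFrame(
    {
        "text": ["Translate to French: Hello, world.", "Translate to French: Good morning."],
        "label": ["Bonjour, le monde.", "Bonjour."],
    }
)

# Hand the data to Galileo Auto. Fine-tune support and expected columns
# in dq.auto are assumptions in this sketch.
dq.auto(
    train_data=train_df,
    project_name="my_finetune_project",  # hypothetical project name
    run_name="auto_run_1",               # hypothetical run name
)
```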