How To: Experiment with Multiple Prompts
Experiment with multiple prompts in Galileo Evaluate to optimize generative AI performance using iterative testing and comprehensive analysis tools.
In Galileo, you can execute multiple prompt runs using what we call “Prompt Sweeps”.
A sweep lets you execute, in bulk, multiple LLM runs with different combinations of prompt templates, models, data, and hyperparameters such as temperature. Prompt Sweeps let you battle-test an LLM completion step in your workflow.
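Conceptually, a sweep enumerates the cartesian product of every setting you vary. The sketch below builds that grid locally to show what a sweep covers; the template strings, model aliases, and the commented-out `pq.run_sweep` call are illustrative assumptions, so check the PromptQuality docs for the actual API and parameter names:

```python
from itertools import product

# Example settings to vary across the sweep (hypothetical values).
templates = [
    "Summarize: {text}",
    "Summarize in one sentence: {text}",
]
models = ["gpt-4o", "gpt-4o-mini"]  # hypothetical model aliases
temperatures = [0.0, 0.7]

# A sweep runs every combination of the varied settings.
combinations = list(product(templates, models, temperatures))
print(f"{len(combinations)} runs in this sweep")  # 2 x 2 x 2 = 8

# Hedged sketch of submitting the sweep via the PromptQuality library;
# consult its docs for the real function signature before running.
# import promptquality as pq
# pq.login("https://console.your-galileo-instance.com")  # hypothetical URL
# pq.run_sweep(
#     templates=templates,
#     model_aliases=models,
#     temperatures=temperatures,
#     dataset="eval_dataset.csv",       # hypothetical dataset file
#     project_name="prompt-sweep-demo", # hypothetical project name
# )
```

Each combination becomes one run in Galileo Evaluate, so you can compare templates, models, and temperatures side by side.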
Looking to run “sweeps” on more complex systems, such as Chains, RAG, or Agents? Check out Chain Sweeps.
See the PromptQuality Python Library Docs for more information.