Evaluate and Optimize Prompts
How to use Galileo Evaluate for prompt engineering
Galileo Evaluate enables you to evaluate and optimize your prompts with out-of-the-box Guardrail metrics.
- Pip install `promptquality` and create runs in your Python notebook.
- Next, execute `promptquality.run()` as shown below.
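The steps above can be sketched as follows. This is a minimal, hedged example: the project name, prompt template, dataset rows, and `model_alias` value are placeholders, and the exact `promptquality` parameters may differ across library versions, so check the current API reference before running it.

```python
# Sketch of a prompt-evaluation run with Galileo's promptquality client.
# Assumes `pip install promptquality` and valid Galileo/OpenAI credentials.

# A prompt template with a {topic} slot, evaluated over a small dataset.
PROMPT_TEMPLATE = "Explain the following topic in two sentences: {topic}"
DATASET = [{"topic": "vector databases"}, {"topic": "prompt injection"}]


def evaluate_prompts():
    # Imported inside the function so the sketch can be read/tested
    # without the package or credentials installed.
    import promptquality as pq

    # Authenticate against your Galileo console (placeholder URL).
    pq.login("https://console.galileo.ai")

    # Run the template over the dataset; Guardrail metrics are computed
    # by Galileo Evaluate on the resulting generations.
    return pq.run(
        template=PROMPT_TEMPLATE,
        dataset=DATASET,
        project_name="my-eval-project",      # placeholder project name
        settings=pq.Settings(model_alias="gpt-4o"),  # placeholder model
    )
```

Calling `evaluate_prompts()` requires a Galileo account and model credentials; the function body is the part that mirrors the `promptquality.run()` step described above.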
The code snippet above uses the ChatGPT API endpoint from OpenAI. Want to use other models (Azure OpenAI, Cohere, Anthropic, Mistral, etc.)? Check out the integrations page.