How to use Galileo Evaluate for prompt engineering
Galileo Evaluate enables you to evaluate and optimize your prompts with out-of-the-box Guardrail metrics.
First, pip install promptquality so you can create runs from your Python notebook:
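```bash
pip install promptquality
```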
Next, execute pq.run() as shown below.
```python
import promptquality as pq

pq.login({YOUR_GALILEO_URL})

template = "Explain {{topic}} to me like I'm a 5 year old"
data = {"topic": ["Quantum Physics", "Politics", "Large Language Models"]}

pq.run(project_name='my_first_project',
       template=template,
       dataset=data,
       settings=pq.Settings(model_alias='ChatGPT (16K context)',
                            temperature=0.8,
                            max_tokens=400))
```
The code snippet above uses the ChatGPT API endpoint from OpenAI. Want to use other models (Azure OpenAI, Cohere, Anthropic, Mistral, etc.)? Check out the integrations page here.
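Prompt engineering is usually iterative: you run the same template under different settings and let the Guardrail metrics tell you which variant performs best. Below is a minimal sketch of such a sweep, reusing only the calls from the snippet above; the assumption that each pq.run() call produces a separate run you can compare side by side in the Galileo console is ours, not stated on this page.

```python
import promptquality as pq

# Replace the placeholder with your Galileo console URL, as above.
pq.login({YOUR_GALILEO_URL})

template = "Explain {{topic}} to me like I'm a 5 year old"
data = {"topic": ["Quantum Physics", "Politics", "Large Language Models"]}

# Sweep over sampling temperatures: lower values tend to give more
# deterministic output, higher values more varied phrasing. Each call
# evaluates the full dataset at one temperature.
for temperature in (0.2, 0.5, 0.8):
    pq.run(project_name='my_first_project',
           template=template,
           dataset=data,
           settings=pq.Settings(model_alias='ChatGPT (16K context)',
                                temperature=temperature,
                                max_tokens=400))
```

You can then open the project in the Galileo console and compare the Guardrail metric scores across the three runs to pick the setting that works best for your prompt.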