Getting started with Galileo Observe is straightforward and involves three steps:
Create a project
Go to your Galileo Console, click the + icon in the top-left corner, and follow the steps to create your Observe project.
Integrate Galileo in your code
Choose your Guardrail metrics
Install the Galileo Client
Install the Python client via pip: pip install galileo-observe
- Open the TypeScript project where you want to install Galileo.
- Install the client via npm: npm install @rungalileo/galileo. If you are not using the Observe Callback features, you can use the --no-optional flag to avoid extraneous dependencies.
- Add your console URL (GALILEO_CONSOLE_URL) and API key (GALILEO_API_KEY) to the environment variables in your .env file.
GALILEO_CONSOLE_URL="https://console.galileo.yourcompany.com"
GALILEO_API_KEY="Your API Key"
# Alternatively, you can use username/password.
GALILEO_USERNAME="Your Username"
GALILEO_PASSWORD="Your Password"
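The clients pick these values up from your process environment. If you keep them in a .env file, a minimal sketch like the one below loads them before any client is created; it assumes the python-dotenv package, which is not part of the Galileo client and is used here only for illustration.
import os

from dotenv import load_dotenv  # assumption: python-dotenv installed separately

# Load GALILEO_CONSOLE_URL and GALILEO_API_KEY (or username/password) from .env
# into the process environment before any Galileo client is constructed.
load_dotenv()

missing = [v for v in ("GALILEO_CONSOLE_URL", "GALILEO_API_KEY") if not os.getenv(v)]
if missing:
    raise RuntimeError(f"Missing Galileo environment variables: {missing}")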
Getting an API Key
To create an API key:
Go to your Galileo Console settings and select API Keys
Give your key a distinct name and hit Create
Logging via Client
If you’re not using LangChain, you can use our Python or TypeScript Logger to log your data to Galileo.
First you can create your ObserveWorkflows object with your existing project.
from galileo_observe import ObserveWorkflows
observe_logger = ObserveWorkflows(project_name="my_first_project")
Next you can log your workflow.
from openai import OpenAI
client = OpenAI()
prompt = "Tell me a joke about Large Language Models"
model = "gpt-4o-mini"
temperature = 0.3
# Create your workflow to log to Galileo.
wf = observe_logger.add_workflow(input={"input": prompt}, name="CustomWorkflow")
# Initiate the chat call
chat_completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)
output_message = chat_completion.choices[0].message
# Log your LLM call step to Galileo.
wf.add_llm(
    input=[{"role": "user", "content": prompt}],
    output=output_message.model_dump(mode="json"),
    model=model,
    input_tokens=chat_completion.usage.prompt_tokens,
    output_tokens=chat_completion.usage.completion_tokens,
    total_tokens=chat_completion.usage.total_tokens,
    metadata={"env": "production"},
    name="ChatOpenAI",
)
# Conclude the workflow.
wf.conclude(output={"output": output_message.content})
# Log the workflow to Galileo.
observe_logger.upload_workflows()
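In a real application you will usually log several requests before uploading. The sketch below reuses the client, model, temperature, and observe_logger defined above and relies only on the calls already shown; call_llm is a hypothetical helper standing in for your own completion code.
def call_llm(user_prompt: str):
    # Hypothetical helper: replace with your own chat-completion call.
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_prompt}],
        temperature=temperature,
    )

prompts = [
    "Tell me a joke about Large Language Models",
    "Explain hallucinations in one sentence",
]
for user_prompt in prompts:
    wf = observe_logger.add_workflow(input={"input": user_prompt}, name="CustomWorkflow")
    completion = call_llm(user_prompt)
    message = completion.choices[0].message
    wf.add_llm(
        input=[{"role": "user", "content": user_prompt}],
        output=message.model_dump(mode="json"),
        model=model,
        input_tokens=completion.usage.prompt_tokens,
        output_tokens=completion.usage.completion_tokens,
        total_tokens=completion.usage.total_tokens,
        name="ChatOpenAI",
    )
    wf.conclude(output={"output": message.content})

# A single upload sends every workflow collected so far.
observe_logger.upload_workflows()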
- Initialize client and create or select your project
import { GalileoObserveWorkflow } from "@rungalileo/galileo";
// Initialize and create project
const observeWorkflow = new GalileoObserveWorkflow("Observe Project"); // Project Name
await observeWorkflow.init();
- Log your workflows
// Observe dataset
const observeSet = [
  "What are hallucinations?",
  "What are intrinsic hallucinations?",
  "What are extrinsic hallucinations?"
];

// Add workflows
const myLlmApp = (input: string) => {
  const template = "Given the following context answer the question. \n Context: {context} \n Question: {question}";

  // Add workflow
  observeWorkflow.addWorkflow({ input });

  // Get context from Retriever
  // Pseudo-code, replace with your Retriever call
  const retrieverCall = () => "You're an AI assistant helping a user with hallucinations.";
  const context = retrieverCall();

  // Log Retriever Step
  observeWorkflow.addRetrieverStep({
    input: template,
    output: context
  });

  // Get response from your LLM
  // Pseudo-code, replace with your LLM call
  const prompt = template.replace('{context}', context).replace('{question}', input);
  const llmCall = (_prompt: string) => 'An LLM response…';
  const llmResponse = llmCall(prompt);

  // Log LLM step
  observeWorkflow.addLlmStep({
    durationNs: Math.floor(Math.random() * 3 * 1000000000),
    input: prompt,
    output: llmResponse,
  });

  // Conclude workflow
  observeWorkflow.concludeWorkflow(llmResponse);
};

observeSet.forEach((input) => myLlmApp(input));
- Upload your workflows to Galileo
// Upload workflows to Galileo
await observeWorkflow.uploadWorkflows();
Integrating with LangChain
We support integration with both Python-based and TypeScript-based LangChain applications:
Integrating into your Python-based LangChain application is the easiest and recommended route: simply add GalileoObserveCallback(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation.
from galileo_observe import GalileoObserveCallback
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

monitor_handler = GalileoObserveCallback(project_name="YOUR_PROJECT_NAME")

chain.invoke({"foo": "bears"},
             config=dict(callbacks=[monitor_handler]))
The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
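The same handler can be reused across calls. As a sketch (assuming the chain and monitor_handler defined above), LangChain's batch call accepts the identical config, so every invocation in the batch is logged to the same project:
# Each input in the batch is logged through the same callback handler.
results = chain.batch(
    [{"foo": "bears"}, {"foo": "cats"}],
    config=dict(callbacks=[monitor_handler]),
)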
Integrating into your TypeScript-based LangChain application is also very simple: just add a GalileoObserveCallback object to the callbacks of your chain invocation.
import { GalileoObserveCallback } from "@rungalileo/galileo";
const observe_callback = new GalileoObserveCallback("observe_example", "app_v1")
await observe_callback.init();
Add the callback {callbacks: [observe_callback]} in the invoke step of your application:
const result = await chain.invoke(
  { question: "What is the powerhouse of the cell?" },
  { callbacks: [observe_callback] }
);
The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
Logging through our REST APIs
If you want to log directly, you can do so through our public REST APIs. More instructions on using them can be found here.
What’s next
Once you’ve integrated Galileo into your production app code, you can choose your Guardrail metrics.