Getting started with Galileo Observe is easy. It involves three steps:

1. Create a project

Go to your Galileo Console. Click the + icon in the top left and follow the steps to create your Observe project.

2. Integrate Galileo in your code

Galileo Observe integrates via LangChain callbacks, our Python Logger, or RESTful APIs.

3. Choose your Guardrail metrics

Turn on the metrics you want to monitor your system with: select from our Guardrail Metric store or register your own.


Install the Galileo Client

Install the Python client via pip:

pip install galileo-observe

Getting an API Key

To create an API key:

1. Go to your Galileo Console settings and select API Keys.

2. Select Create a new key.

3. Give your key a distinct name and hit Create.
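
Once your key is created, make it available to the client before you construct a logger or callback. Here is a minimal sketch, assuming the client reads the GALILEO_CONSOLE_URL and GALILEO_API_KEY environment variables (the variable names are an assumption; check your deployment's configuration docs for the exact names):

import os

# Assumption: the client picks up credentials from these environment
# variables. Set them before creating any logger or callback.
os.environ["GALILEO_CONSOLE_URL"] = "https://console.your-deployment.com"  # your console URL
os.environ["GALILEO_API_KEY"] = "your-api-key"  # the key you just created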


Logging via Client

If you’re not using LangChain, you can use our Python or TypeScript Logger to log your data to Galileo.

First, create an ObserveWorkflows object tied to your existing project:

from galileo_observe import ObserveWorkflows

observe_logger = ObserveWorkflows(project_name="my_first_project")

Next, log your workflow:

from openai import OpenAI

client = OpenAI()

prompt = "Tell me a joke about Large Language Models"
model = "gpt-4o-mini"
temperature = 0.3

# Create your workflow to log to Galileo.
wf = observe_logger.add_workflow(input={"input": prompt}, name="CustomWorkflow")

# Initiate the chat call
chat_completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)
output_message = chat_completion.choices[0].message


# Log your llm call step to Galileo.
wf.add_llm(
    input=[{"role": "user", "content": prompt}],
    output=output_message.model_dump(mode="json"),
    model=model,
    input_tokens=chat_completion.usage.prompt_tokens,
    output_tokens=chat_completion.usage.completion_tokens,
    total_tokens=chat_completion.usage.total_tokens,
    metadata={"env": "production"},
    name="ChatOpenAI",
)

# Conclude the workflow.
wf.conclude(output={"output": output_message.content})
# Log the workflow to Galileo.
observe_logger.upload_workflows()
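
A workflow can contain more than one step before it concludes. As a hedged sketch, assuming the step object also exposes an add_retriever method (verify the exact name and signature in the galileo-observe reference), a retrieval step could be logged ahead of the LLM step:

# Assumption: add_retriever exists on the workflow step; the parameter
# names below are illustrative, not confirmed against the library.
wf.add_retriever(
    input=prompt,
    documents=["Large Language Models are neural networks trained on text."],
    name="Retriever",
)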

Integrating with LangChain

We support integrations for both Python-based and TypeScript-based LangChain systems:

Integrating into your Python-based LangChain application is the easiest and recommended route. Just add GalileoObserveCallback(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation.

from galileo_observe import GalileoObserveCallback
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

# Attach the Galileo callback to the chain invocation.
monitor_handler = GalileoObserveCallback(project_name="YOUR_PROJECT_NAME")
chain.invoke({"foo": "bears"}, config={"callbacks": [monitor_handler]})

The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.

Logging through our REST APIs

If you prefer to log directly, you can do so with our public REST APIs. More instructions on using them can be found here.
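
As a rough illustration only, a direct logging call might look like the snippet below. The route, auth header, and payload shape are hypothetical placeholders, not the documented API; consult the API reference linked above for the real contract.

import os

import requests

base_url = "https://console.your-deployment.com"  # your console URL
# Placeholder route, auth scheme, and body: all three are assumptions,
# not Galileo's documented REST contract.
response = requests.post(
    f"{base_url}/api/placeholder-logging-route",
    headers={"Authorization": f"Bearer {os.environ['GALILEO_API_KEY']}"},
    json={
        "project_name": "my_first_project",
        "records": [{"input": "Tell me a joke", "output": "Why did..."}],
    },
    timeout=30,
)
response.raise_for_status()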


What’s next

Once you’ve integrated Galileo into your production app code, you can choose your Guardrail metrics.
