There are two main components to adding runtime protection:

- ProtectTool - a LangChain tool that is configured with a stage and, optionally, prioritized rulesets for local stages
- ProtectParser - a parser that checks the results of the ProtectTool and runs the next step in the chain if no rulesets are triggered

You can then chain these together to create a runnable protected chain.
```python
from galileo.handlers.langchain.tool import ProtectTool, ProtectParser
from langchain_openai import ChatOpenAI

# Create a ProtectTool instance
protect_tool = ProtectTool(
    stage_name="My stage"
)

# Create a LangChain LLM instance
llm = ChatOpenAI(model="gpt-4o")

# Create a ProtectParser instance, passing the LLM as the chain to be invoked
protect_parser = ProtectParser(chain=llm, echo_output=True)

# Define the chain with Protect
protected_chain = protect_tool | protect_parser.parser

# Invoke the protected chain
response = protected_chain.invoke({"input": query})
```
If a ruleset is triggered, the response depends on the action assigned to that ruleset. A Passthrough action returns the original input to be processed by the calling application; an Override action returns a randomly selected choice from its configured responses. If no rulesets are triggered, the chain is run and LangChain returns the relevant message type.
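The action behavior described above can be sketched in plain Python. Note that the class and field names here (`PassthroughAction`, `OverrideAction`, `choices`) are illustrative stand-ins, not the Galileo SDK's actual types:

```python
import random
from dataclasses import dataclass, field

# Hypothetical stand-ins for ruleset actions; these names are
# illustrative and do not match the Galileo SDK exactly.
@dataclass
class PassthroughAction:
    pass

@dataclass
class OverrideAction:
    choices: list = field(default_factory=list)

def apply_action(action, original_input):
    """Mimic the behavior described above when a ruleset is triggered."""
    if isinstance(action, PassthroughAction):
        # Passthrough: the original input is returned unchanged
        return original_input
    if isinstance(action, OverrideAction):
        # Override: a randomly selected choice is returned
        return random.choice(action.choices)
    raise ValueError(f"Unknown action type: {type(action).__name__}")

print(apply_action(PassthroughAction(), "hello"))  # hello
print(apply_action(OverrideAction(["Sorry, I can't help with that."]), "hello"))
```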
```python
# Invoke the chain
response = protected_chain.invoke({"input": query})

# Check the response type
if isinstance(response, str):
    # This indicates the ProtectTool intervened and returned a string directly,
    # such as the random choice from an Override action
    print(f"🛡️ Intercepted/Modified - Protect Response: {response}")
else:
    # This means the LLM part of the chain was executed
    print(f"✅ Allowed - LLM Response: {response.content}")
```
You can pass a RunnableConfig when invoking the protected chain, which lets you attach a GalileoCallback handler to log the run.
```python
from galileo.handlers.langchain import GalileoCallback
from langchain_core.runnables.config import RunnableConfig

# Create a callback handler
galileo_callback = GalileoCallback()

# Create the config with the callback
config = RunnableConfig(
    callbacks=[galileo_callback]
)

# Invoke the chain with the callback
response = protected_chain.invoke({"input": query}, config=config)
```