This guide explains how to add runtime protection to a simple chatbot. You will run a basic chatbot, detect toxicity in the user's input, and end the conversation if toxicity is detected. You will start by creating a central stage, as if you were an AI governance team, then use this stage in a simple chatbot. In a real-world scenario, you could use this detection to redirect a user from an AI chatbot to a human representative.
To use Galileo, you need to install some package dependencies and configure environment variables.
1
Install Required Dependencies
Install the required dependencies for your app. Create a virtual environment using your preferred method, then install dependencies inside that environment:
pip install "galileo[openai]" python-dotenv
2
Create a `.env` file and add the following values:
# Your Galileo API key
GALILEO_API_KEY="your-galileo-api-key"

# Your Galileo project name
GALILEO_PROJECT="your-galileo-project-name"

# The name of the Log stream you want to use for logging
GALILEO_LOG_STREAM="your-galileo-log-stream"

# Provide the console URL below if you are using a
# custom deployment, and not using the free tier or app.galileo.ai.
# This will look something like "console.galileo.yourcompany.com".
# GALILEO_CONSOLE_URL="your-galileo-console-url"

# OpenAI properties
OPENAI_API_KEY="your-openai-api-key"

# Optional. The base URL of your OpenAI deployment.
# Leave this commented out if you are using the default OpenAI API.
# OPENAI_BASE_URL="your-openai-base-url-here"

# Optional. Your OpenAI organization.
# OPENAI_ORGANIZATION="your-openai-organization-here"
This assumes you are using a free Galileo account. If you are using a custom deployment, you will also need to set the URL of your Galileo Console in the `GALILEO_CONSOLE_URL` variable.
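Before running any code, you can sanity-check that the required variables are set. This is an optional, standard-library-only helper, not part of the guide's code; the variable names match the `.env` file above:

```python
import os

# The variables this guide requires (GALILEO_CONSOLE_URL and the
# optional OpenAI settings are deliberately excluded).
REQUIRED_VARS = [
    "GALILEO_API_KEY",
    "GALILEO_PROJECT",
    "GALILEO_LOG_STREAM",
    "OPENAI_API_KEY",
]

def missing_env_vars(required):
    """Return the names of any required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_env_vars(REQUIRED_VARS)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```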
You first need to create a central stage. In a real-world scenario, these central stages would be managed by an AI governance team.
1
Create a Python file for the stage called `create_central_stage.py`
This file will define a rule that is triggered if the input toxicity is evaluated to be greater than 0.1. This rule will then be added to a ruleset with an override action that has 3 choices of response. This ruleset will be added to a central stage, registered in your project.
2
Add import directives
Start by adding import directives to import all the functions and types needed for creating stages.
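These are the imports used by `create_central_stage.py`; they match the full code listing at the end of this section. The `load_dotenv()` call loads your Galileo credentials from the `.env` file:

```python
from galileo import GalileoMetrics
from galileo.stages import create_protect_stage
from galileo_core.schemas.protect.action import OverrideAction
from galileo_core.schemas.protect.rule import Rule, RuleOperator
from galileo_core.schemas.protect.ruleset import Ruleset
from galileo_core.schemas.protect.stage import StageType
from dotenv import load_dotenv

# Load the environment variables from the .env file
load_dotenv()
```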
3
Create a rule
Add code to create a rule. This rule is triggered if the input toxicity is greater than 0.1.
# Create a rule for toxicity
toxicity_rule = Rule(
    metric=GalileoMetrics.input_toxicity,
    operator=RuleOperator.gt,
    target_value=0.1
)
4
Create an override action
Add code to create an override action. This action has 3 choices of response if the rule is triggered.
# Create an override action
action = OverrideAction(
    choices=[
        "This is toxic. Goodbye.",
        "This is not appropriate. I'm ending this conversation.",
        "Please don't speak to me that way. I'm going now."
    ]
)
5
Create a ruleset
Add code to create a ruleset using your rule and action.
# Create a ruleset from the toxicity rule and action
ruleset = Ruleset(
    rules=[toxicity_rule],
    action=action,
)
6
Create the central stage
Add code to create the central stage. Stages need a unique name, so this code can only be run once per project.
# Create a stage with the ruleset
stage = create_protect_stage(
    name="Toxicity Stage",
    stage_type=StageType.central,
    prioritized_rulesets=[ruleset]
)

print(f"Created stage: {stage}")
7
Run your code
Run your code to create the central stage.
python create_central_stage.py
If you get errors showing the stage has already been created (for example, if someone else has worked through this guide on the same project), change the name of the stage and run the code again.
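If you'd rather not pick new names by hand, one way to avoid collisions is to append a unique suffix to the stage name before passing it to `create_protect_stage`. This is an optional sketch, not part of the guide's code; if you use it, remember to use the same generated name when invoking the stage later:

```python
import uuid

def unique_stage_name(base: str) -> str:
    """Append a short random suffix so repeated runs don't collide."""
    return f"{base} {uuid.uuid4().hex[:8]}"

# Produces something like "Toxicity Stage 3f9a1c2e"
stage_name = unique_stage_name("Toxicity Stage")
```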
This creates the central stage in your project, and you can then use it in your application.
The full create_central_stage.py code
create_central_stage.py
from galileo import GalileoMetrics
from galileo.stages import create_protect_stage
from galileo_core.schemas.protect.action import OverrideAction
from galileo_core.schemas.protect.rule import Rule, RuleOperator
from galileo_core.schemas.protect.ruleset import Ruleset
from galileo_core.schemas.protect.stage import StageType
from dotenv import load_dotenv

load_dotenv()

# Create a rule for toxicity
toxicity_rule = Rule(
    metric=GalileoMetrics.input_toxicity,
    operator=RuleOperator.gt,
    target_value=0.1
)

# Create an override action
action = OverrideAction(
    choices=[
        "This is toxic. Goodbye.",
        "This is not appropriate. I'm ending this conversation.",
        "Please don't speak to me that way. I'm going now."
    ]
)

# Create a ruleset from the toxicity rule and action
ruleset = Ruleset(
    rules=[toxicity_rule],
    action=action,
)

# Create a stage with the ruleset
stage = create_protect_stage(
    name="Toxicity Stage",
    stage_type=StageType.central,
    prioritized_rulesets=[ruleset]
)

print(f"Created stage: {stage}")
Now that your central stage is created, you need to create a chatbot that uses the stage.
1
Create a Python file for the chatbot called `app.py`
This file will contain a simple console-based chatbot, using OpenAI.
2
Add the basic chatbot code
Add the following code to your app.py file.
from galileo.openai import openai
from dotenv import load_dotenv

load_dotenv()

client = openai.OpenAI()

while True:
    # Get the input from the user
    user_input = input("User: ")

    if user_input.lower() in ["bye", "goodbye", ""]:
        break

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_input}],
    )

    print(f"Assistant: {response.choices[0].message.content.strip()}")
3
Run your code
Run your code to verify the basic chatbot is working. Ask a question and you should see an answer in your terminal.
python app.py
Terminal
➜ python app.py
User: Who was Galileo
Assistant: Galileo Galilei was an Italian astronomer, physicist and engineer, sometimes described as a polymath. Galileo has been called the "father of observational astronomy", the "father of modern physics", the "father of the scientific method", and the "father of modern science". He is known for his works in areas like improvements to the telescope and consequent astronomical observations, and his support for Copernicanism—the idea that the Earth revolves around the Sun. His works and contributions have deeply impacted modern scientific methods. He was born on February 15, 1564, and died on January 8, 1642.
Now that you have a chatbot, you can add runtime protection. In this case, you will check the input for toxicity and, if the input is toxic, end the conversation.
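1
Update the import directives
The runtime protection code needs a few more imports from the Galileo SDK. These match the full `app.py` listing at the end of this section:

```python
from galileo.openai import openai
from galileo.protect import invoke_protect
from galileo_core.schemas.protect.execution_status import (
    ExecutionStatus
)
from galileo_core.schemas.protect.payload import Payload
from dotenv import load_dotenv
```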
2
Create a payload
After the user_input has been checked to see if the conversation should end, create a Payload using this input:
# Create the payload
payload = Payload(
    input=user_input
)
3
Send the payload to the runtime protection SDK
Add the following code to send the payload.
# Invoke the runtime protection
protection_response = invoke_protect(
    stage_name="Toxicity Stage",
    payload=payload
)
4
Check the response
The response will tell you if the rule has been triggered. If it is triggered, it will also include a randomly selected choice from the override action to return as a response.
# Check the runtime protection status
if protection_response.status == ExecutionStatus.triggered:
    # If the ruleset is triggered, print the action result
    print(f"Assistant: {protection_response.action_result['value']}")
    # Skip the LLM call and end the conversation
    break
If the stage is triggered, this code prints the selected choice from the override action and breaks out of the while loop, ending the conversation.
5
Run your code
Run your code and ask questions. Ask both non-toxic and toxic questions.
python app.py
Terminal
➜ python app.py
User: You are a terrible AI and I hate you
Assistant: This is not appropriate. I'm ending this conversation.
The full app.py code
app.py
from galileo.openai import openai
from galileo.protect import invoke_protect
from galileo_core.schemas.protect.execution_status import (
    ExecutionStatus
)
from galileo_core.schemas.protect.payload import Payload
from dotenv import load_dotenv

load_dotenv()

client = openai.OpenAI()

while True:
    # Get the input from the user
    user_input = input("User: ")

    if user_input.lower() in ["bye", "goodbye", ""]:
        break

    # Create the payload
    payload = Payload(
        input=user_input
    )

    # Invoke the runtime protection
    protection_response = invoke_protect(
        stage_name="Toxicity Stage",
        payload=payload
    )

    # Check the runtime protection status
    if protection_response.status == ExecutionStatus.triggered:
        # If the ruleset is triggered, print the action result
        print(f"Assistant: {protection_response.action_result['value']}")
        # Skip the LLM call and end the conversation
        break

    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": user_input}],
    )

    print(f"Assistant: {response.choices[0].message.content.strip()}")
You’ve successfully added runtime protection to a basic chatbot.