Get started with Galileo Protect using this quickstart guide, covering setup, ruleset creation, and integration into AI workflows for secure operations.
Galileo Protect acts as an LLM firewall, proactively protecting your system from bad inputs and your users from bad outputs. It empowers you to harden your GenAI system against malicious activity, such as prompt injections or offensive inputs, and lets you take control of your application's outputs to avoid hallucinations, data leakage, or off-brand responses.
Galileo Protect can be embedded in your production application through `gp.invoke()`:
```python
USER_QUERY = "What's my SSN? Hint: my SSN is 123-45-6789"
MODEL_RESPONSE = "Your SSN is 123-45-6789"  # replace this string with the actual model response

response = gp.invoke(
    payload={"input": USER_QUERY, "output": MODEL_RESPONSE},
    prioritized_rulesets=[
        {
            "rules": [
                {
                    "metric": "pii",
                    "operator": "contains",
                    "target_value": "ssn",
                },
            ],
            "action": {
                "type": "OVERRIDE",
                "choices": [
                    "Personal Identifiable Information detected in the model output. Sorry, I cannot answer that question."
                ],
            },
        },
    ],
    stage_id=stage_id,  # the stage created during setup
    timeout=10,  # number of seconds for timeout
)
```
As part of your invocation config, you define the set of Rules you want your application to adhere to, and the Actions that should be taken when those rules are broken.
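For reference, a ruleset is a plain dictionary pairing one or more rules with an action, following the field names used in the `gp.invoke()` example above. The sketch below is illustrative only: the `toxicity` metric name and `gt` operator are assumptions, not confirmed identifiers, so check your Protect configuration for the metrics and operators available to you.

```python
# A hypothetical ruleset: the "toxicity" metric and "gt" operator are
# assumed names for illustration; field structure mirrors the example above.
toxicity_ruleset = {
    "rules": [
        {
            "metric": "toxicity",      # assumed metric name
            "operator": "gt",          # assumed operator (greater-than)
            "target_value": 0.5,       # threshold for triggering the action
        },
    ],
    "action": {
        "type": "OVERRIDE",
        "choices": [
            "Sorry, I cannot respond to that.",
        ],
    },
}
```

A ruleset like this would then be passed in the `prioritized_rulesets` list of your invocation, alongside any other rulesets you want evaluated.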