# AI Guardian Documentation
AI Guardian is an OpenAI-compatible security proxy that sits between your application and any LLM endpoint. It analyzes every request and response for threats, routes them based on a configurable risk policy, and provides a human-in-the-loop review queue for edge cases.
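Because the proxy speaks the standard chat-completions protocol, a request body sent to AI Guardian is identical to one sent to OpenAI. A minimal sketch of what "drop-in" means in practice; the proxy address `http://localhost:8000` is an assumed local deployment, not a documented default:

```python
import json

# Assumed local deployment address -- substitute your AI Guardian instance.
GUARDIAN_ENDPOINT = "http://localhost:8000/v1/chat/completions"

# A standard OpenAI-style chat completion payload. Nothing changes when
# the request is pointed at the proxy, which is what makes it drop-in.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our refund policy."},
    ],
}

print(GUARDIAN_ENDPOINT)
print(json.dumps(payload, indent=2))
```

Only the base URL your application targets changes; the payload, headers, and response shape stay OpenAI-compatible.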
## Architecture
At its core, AI Guardian is a FastAPI application that implements the same `POST /v1/chat/completions` interface as OpenAI. Your application sends requests to AI Guardian instead of directly to OpenAI. AI Guardian then:
- Filters the input — scans messages for 20+ threat patterns and computes a risk score (0–100).
- Routes based on policy — auto-allow (safe), queue for human review (ambiguous), or auto-block (dangerous).
- Filters the output — scans LLM responses for PII leaks, API keys, and other sensitive data before returning them.
- Logs everything — every request is written to the audit log with full metadata.
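The routing step above can be sketched as a simple threshold policy over the 0–100 risk score. The cutoffs (30 and 70) and action names below are illustrative assumptions, not AI Guardian's shipped defaults:

```python
# Hedged sketch of policy-based routing on a 0-100 risk score.
# Thresholds of 30 and 70 are assumptions for illustration only;
# consult your policy configuration for the real values.

def route(risk_score: int) -> str:
    """Map a risk score to one of the three policy actions."""
    if risk_score < 30:
        return "allow"    # auto-allow: safe
    if risk_score < 70:
        return "review"   # queue for human review: ambiguous
    return "block"        # auto-block: dangerous

print(route(10), route(50), route(95))
```

In a real deployment these thresholds would come from the configurable risk policy, and "review" would place the request in the human-in-the-loop queue rather than returning immediately.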
## Quick Links
- 🚀 **Quickstart**: integrate in 5 minutes
- 📚 **Concepts**: filters, scoring, HitL, policies
- 🔌 **API Reference**: full endpoint documentation
- 🐍 **Python Integration**: openai-python examples
## Support
For bugs and feature requests, open an issue on GitHub. For enterprise support, email ueda.bioinfo.base01@gmail.com.