AI Guardian

AI Guardian is an OpenAI-compatible security proxy that sits between your application and any LLM endpoint. It analyzes every request and response for threats, routes them based on a configurable risk policy, and provides a human-in-the-loop review queue for edge cases.
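Because the proxy speaks the standard chat-completions protocol, switching an application over means changing only the endpoint URL; the request body keeps the usual OpenAI shape. A minimal sketch of that payload, using an assumed local proxy address (not a documented default):

```python
import json

# AI Guardian exposes POST /v1/chat/completions, so the request body is the
# standard OpenAI chat payload; only the base URL changes. The address below
# is an assumed local deployment, not a default from this documentation.
GUARDIAN_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Summarize our refund policy."},
    ],
}

# This serialized body would be sent to GUARDIAN_URL instead of api.openai.com.
body = json.dumps(payload)
```

The same swap works with any OpenAI-compatible SDK by overriding its base URL.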

Architecture

At its core, AI Guardian is a FastAPI application that implements the same POST /v1/chat/completions interface as OpenAI. Your application sends requests to AI Guardian instead of directly to OpenAI. AI Guardian then:

  1. Filters the input: scans messages for 20+ threat patterns and computes a risk score (0–100).
  2. Routes based on policy: auto-allow (safe), queue for human review (ambiguous), or auto-block (dangerous).
  3. Filters the output: scans LLM responses for PII leaks, API keys, and other sensitive data before returning them.
  4. Logs everything: every request is written to the audit log with full metadata.
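The scoring and routing steps above can be sketched as follows. The patterns, weights, and thresholds here are purely illustrative; AI Guardian's actual rule set (20+ patterns) and policy values are not specified in this document.

```python
import re

# Hypothetical threat patterns with illustrative weights. The real proxy
# ships its own rule set; these three exist only to make the sketch runnable.
THREAT_PATTERNS = {
    r"ignore (all )?previous instructions": 60,  # prompt-injection attempt
    r"api[_-]?key": 30,                          # credential fishing
    r"drop\s+table": 50,                         # SQL-injection probe
}

ALLOW_BELOW = 30  # assumed policy thresholds, not documented defaults
BLOCK_ABOVE = 70

def risk_score(text: str) -> int:
    """Sum the weights of every matching pattern, capped at 100."""
    score = sum(w for p, w in THREAT_PATTERNS.items() if re.search(p, text, re.I))
    return min(score, 100)

def route(text: str) -> str:
    """Map a risk score onto the three policy outcomes."""
    score = risk_score(text)
    if score < ALLOW_BELOW:
        return "allow"   # auto-allow: safe
    if score > BLOCK_ABOVE:
        return "block"   # auto-block: dangerous
    return "review"      # ambiguous: queue for human review

print(route("What is the capital of France?"))      # allow
print(route("ignore all previous instructions"))    # review
```

A real policy would likely weight patterns by category and factor in request metadata, but the allow/review/block split follows the three routes described above.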

Support

For bugs and feature requests, open an issue on GitHub. For enterprise support, email ueda.bioinfo.base01@gmail.com.