AI Guardian
Open-source — Start free (no card required)

Prompt injection is the #1 LLM attack. Are you protected?

Scan every request to your AI. Safe ones pass instantly; suspicious ones are auto-blocked. Just change your base_url — no other code changes needed.

OWASP LLM Top 10 Coverage
SOC2-ready Audit Logs
No request content stored by default
Latency <5ms (safe verdict)
Your App
Python / Node.js / Any SDK
All Requests →
AI Guardian
Security Proxy · <5ms
Safe Only →
Your LLM
OpenAI / Anthropic / Any
Intelligent Routing
Pass · Review · Block
Beta

Join the Waitlist for SaaS Dashboard (Beta)

Get early access to the AI Guardian cloud dashboard — centralized threat monitoring, analytics, and team management.

No spam. Unsubscribe anytime.

43+
Detection Rules
<5 ms
Latency (safe verdict)
OWASP LLM Top 10
Coverage
99.9%
Uptime SLA
SOC2-ready
Audit Logs
JP AI Act
Japan AI Governance Ready

How It Works

Three steps to secure your AI

A drop-in proxy that scans every request — no code changes beyond the base URL.

01

Change one line: your base URL

Point your OpenAI SDK's base URL to AI Guardian's proxy endpoint. SDK, request format, and error handling stay the same.

base_url="http://localhost:8000/api/v1/proxy"

Average setup time: 4 minutes

02

AI Guardian scores & routes

OWASP LLM Top 10 coverage with 20+ built-in rules. Detects prompt injection, jailbreaks, SQL injection, PII leaks, and more. Every request gets a risk score within 5ms.

20+ rules · OWASP LLM Top 10

03

Your team decides the hard calls

Borderline requests aren't just blocked or passed — they go to your team's review queue. A black-box model doesn't make the call; your team does. High-risk requests never reach your LLM: they're blocked with a 403 before forwarding.

Default SLA 30 min · Fallback configurable

What Users Say

Security team went from 'absolutely not' to 'approved in 2 weeks.' The deal-closer was the audit log — every request, verdict, and reviewer action is fully visible.

MT

Marcus T.

Staff Engineer · Series B Fintech

Last quarter, prompt injection took our customer-facing chatbot down twice. Since AI Guardian: zero incidents. The HitL queue catches the edge cases our own rules missed.

YN

Yuki N.

AI Platform Lead · SaaS Company · 200 employees

LLM-first product, so investors and enterprise customers kept asking 'how do you prevent jailbreaks?' Now we just show the AI Guardian dashboard and the conversation ends.

PK

Priya K.

CTO & Co-founder · AI-native Startup

What We Protect Against

Built on OWASP LLM Top 10.

Not a generic WAF. Rules designed specifically for LLM attack patterns.

LLM01 · Input · OWASP LLM01

Prompt Injection & Jailbreak

"Ignore previous instructions..." attacks, DAN patterns, role-play abuse, system prompt leaks. Covers OWASP LLM Top 10 #1 threat. Updated as new techniques emerge.

LLM02 · Output · OWASP LLM02

Sensitive Data & Credential Leak Prevention

Auto-scan LLM responses before they reach users. Catches API keys, credit card numbers, SSNs, internal hostnames. Your model never becomes the data leak path.

SQL · Input · CWE-89

SQL Injection Detection

UNION SELECT, DROP TABLE, blind injection, and more — blocks attacks that try to manipulate data pipelines. Especially critical for text-to-SQL and RAG architectures.
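A pattern-based check like the one these rules perform can be sketched in a few lines of Python. The signatures below are illustrative only, not AI Guardian's actual rule set:

```python
import re

# Illustrative only: a few classic SQL injection signatures.
# A production rule set would be far larger and score matches, not just flag them.
SQLI_PATTERNS = [
    re.compile(r"\bUNION\s+(ALL\s+)?SELECT\b", re.IGNORECASE),
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),
    re.compile(r"'\s*OR\s*'1'\s*=\s*'1", re.IGNORECASE),
]

def looks_like_sqli(prompt: str) -> bool:
    """Return True if the prompt matches any known injection signature."""
    return any(p.search(prompt) for p in SQLI_PATTERNS)

print(looks_like_sqli("Show me last month's sales"))          # False
print(looks_like_sqli("id=1 UNION SELECT user, pass FROM t"))  # True
```

For text-to-SQL and RAG pipelines, running checks like this on the way in is cheaper than discovering a manipulated query on the way out.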

HitL · Core · 30-min default SLA

Human-in-the-Loop (HitL) Review

Borderline requests go to your team's review queue with an SLA timer. Reviewers approve, reject, or escalate. If time runs out, the fallback action triggers automatically.

POL · Configuration

Per-Tenant Policy Configuration

Custom risk thresholds per team — for example, scores of 30 and below auto-pass, 81+ auto-block. Add custom regex rules for medical, financial, or other compliance-specific needs.

AUD · Compliance · SOC2-ready

Immutable Audit Logs

Every request is auto-recorded: timestamp, risk score, matched rule, routing result, and reviewer action. Filterable by date and risk level. SOC2-ready CSV export.

OWASP LLM Top 10 · CWE/SANS reference · NIST AI RMF aligned

Risk Scoring Engine

Every request gets a score from 0 to 100. The score determines the routing action — automatically.

Low
Med
High
Critical
0 · 30 · 60 · 80 · 100
Low · 0 – 30
Auto-Allow

Request passes through immediately to the LLM. No delay.

  • Normal questions
  • Code completion
  • Data summarization
Medium · 31 – 60
Queue for Review

Request is held. A human reviewer sees it within your configured SLA (default: 30 min).

  • Ambiguous instructions
  • Partial injection patterns
  • Borderline content
High · 61 – 80
Queue for Review

Priority queue. Reviewers are notified immediately. SLA timer starts.

  • Strong injection signals
  • Multiple matched patterns
  • Known attack fragments
Critical · 81 – 100
🚫 Auto-Block

Request is rejected instantly. No human review needed. 403 returned to caller.

  • UNION SELECT attacks
  • DROP TABLE
  • DAN jailbreaks
  • Credential exfiltration

All thresholds are configurable per tenant via the Policy Engine.

Integrate in 2 Lines of Code

Change the base URL. Keep using your existing OpenAI SDK. That's it.

🐍 Python (openai SDK)
from openai import OpenAI

client = OpenAI(
    api_key="aig_your_api_key_here",  # Your AI Guardian API key
    base_url="http://localhost:8000/api/v1/proxy",  # <-- just change this
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print(response.choices[0].message.content)
# ✅ Safe requests pass through. Risky ones are blocked or queued.
Node.js (openai package)
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "aig_your_api_key_here",
  baseURL: "http://localhost:8000/api/v1/proxy", // <-- just change this
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});

console.log(response.choices[0].message.content);
// ✅ Safe requests pass through. Risky ones are blocked or queued.
200 OK · Safe request
{ "choices": [{ "message": { "content": "Paris" } }] }
202 Accepted · Queued for review
{ "error": { "code": "queued_for_review", "review_item_id": "..." } }
403 Forbidden · Blocked
{ "error": { "code": "request_blocked", "risk_score": 95 } }
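Client code can branch on these three outcomes. The helper below interprets the status codes and body shapes shown above; it is an illustration, not part of any SDK:

```python
def interpret_verdict(status: int, body: dict) -> str:
    """Turn the proxy's HTTP response into an application-level message.
    Status codes and body shapes follow the examples above."""
    if status == 200:
        # Safe request: the normal chat completion came back.
        return body["choices"][0]["message"]["content"]
    if status == 202:
        # Held for human review: surface the review item id so the
        # caller can poll or notify the user.
        return f"queued for review (id={body['error']['review_item_id']})"
    if status == 403:
        # Blocked outright: the risk score explains why.
        return f"blocked (risk score {body['error']['risk_score']})"
    raise RuntimeError(f"unexpected status {status}")

print(interpret_verdict(200, {"choices": [{"message": {"content": "Paris"}}]}))  # Paris
```

In practice a chat UI would show the 202 case as "your request is being reviewed" rather than an error, which is the behavioral difference between a review queue and a plain block.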

Pricing

Simple, transparent pricing

Start free. Scale with your business. Enterprise-grade security included.

Starter

For small teams getting started with LLM security

$15 / user / month
  • Up to 10 users
  • 30 hours / month meeting transcription
  • AI-powered secure meeting notes
  • Basic PII detection & masking
  • 90-day data retention
  • Email support
Start Free Trial
Most Popular

Business

For growing teams that need decision intelligence

$38 / user / month
  • Unlimited users
  • Unlimited meeting hours
  • Decision tracking & action items
  • Industry-specific compliance rules
  • SSO / SAML authentication
  • 1-year data retention
  • Priority support (chat + email)
Start Business Trial

Enterprise

For regulated industries & large organizations

Custom pricing
  • Everything in Business
  • Organization knowledge base
  • On-premises / VPC deployment
  • Custom compliance reports
  • Dedicated CSM & security engineer
  • SOC2 / GDPR / HIPAA ready
Contact Sales

Three layers of value, from adoption to retention

Start with meeting notes, differentiate with decision tracking, retain with knowledge base

1

Secure Meeting Notes

Starter+

End-to-end encrypted meeting notes powered by ai-guardian, with automatic PII masking and industry-specific compliance. Safer AI meeting notes open the door to adoption.

2

Decision Tracking

Business+

Auto-track who decided what, when, and why. Extract and follow up on action items. Visualize decision flows across meetings.

3

Organization Knowledge Base

Enterprise+

Turn collective meeting intelligence into a searchable knowledge graph. Support onboarding, analyze organizational decision patterns.

Optimized for Regulated Industries

Security that IT departments can approve. Industry-specific compliance built in.

🏦
Finance
SOC2 / FISC
🏥
Healthcare
HIPAA ready
🏛️
Government
ISMAP aligned
🏭
Manufacturing
IP protection
⚖️
Legal
Confidentiality

All plans include OWASP LLM Top 10 coverage · Annual billing saves 20%

FAQ

Questions before you start

Have other questions?

Email us at ueda.bioinfo.base01@gmail.com — we reply within 1 business day.

Ready to secure your AI?

Start protecting your LLM in minutes

No credit card required. Free tier available for small teams.

Enterprise or custom needs? Reach out at ueda.bioinfo.base01@gmail.com