Prompt injection is the #1 LLM attack. Are you protected?
Scan every request to your AI. Safe ones pass instantly, suspicious ones are auto-blocked. Just change your base_url — zero code changes needed.
Join the Waitlist for SaaS Dashboard (Beta)
Get early access to the AI Guardian cloud dashboard — centralized threat monitoring, analytics, and team management.
No spam. Unsubscribe anytime.
How It Works
Three steps to secure your AI
Drop-in proxy that scans every request — no code changes needed.
Change one line: your base URL
Point your OpenAI SDK's base URL to AI Guardian's proxy endpoint. SDK, request format, and error handling stay the same.
base_url="http://localhost:8000/api/v1/proxy"
Average setup time: 4 minutes
AI Guardian scores & routes
OWASP LLM Top 10 coverage with 20+ built-in rules. Detects prompt injection, jailbreaks, SQL injection, PII leaks, and more. Every request gets a risk score within 5ms.
20+ rules · OWASP LLM Top 10
Your team decides the hard calls
Borderline requests aren't simply blocked or passed; they go to your team's review queue, where your team decides rather than a black-box model. Clearly risky requests never reach the LLM: they are blocked at the proxy with a 403.
Default SLA 30 min · Fallback configurable
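On the client side, each of these outcomes is visible in the response body. A minimal sketch of handling the three cases, assuming the error shapes shown in the integration example on this page (`queued_for_review`, `request_blocked`); the function name is my own:

```python
def handle_proxy_response(body: dict) -> str:
    """Map an AI Guardian proxy response body to an application action.

    Illustrative only -- field names follow the example responses on
    this page, not a confirmed API specification.
    """
    error = body.get("error")
    if error is None:
        # Normal OpenAI-shaped response: use the completion as usual.
        return body["choices"][0]["message"]["content"]
    if error["code"] == "queued_for_review":
        # Held for human review -- poll or subscribe using review_item_id.
        return f"pending:{error['review_item_id']}"
    if error["code"] == "request_blocked":
        # Auto-blocked (HTTP 403); surface a safe message to the end user.
        return f"blocked:risk={error['risk_score']}"
    raise ValueError(f"unknown error code: {error['code']}")
```

The key point is that safe traffic keeps the exact OpenAI response shape, so existing client code keeps working unchanged.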
What Users Say
“Security team went from 'absolutely not' to 'approved in 2 weeks.' The deal-closer was the audit log — every request, verdict, and reviewer action is fully visible.”
Marcus T.
Staff Engineer · Series B Fintech
“Last quarter, prompt injection took our customer-facing chatbot down twice. Since AI Guardian: zero incidents. The HitL queue catches edge cases our own rules missed.”
Yuki N.
AI Platform Lead · SaaS Company · 200 employees
“LLM-first product, so investors and enterprise customers kept asking 'how do you prevent jailbreaks?' Now we just show the AI Guardian dashboard and the conversation ends.”
Priya K.
CTO & Co-founder · AI-native Startup
What We Protect Against
Built on OWASP LLM Top 10.
Not a generic WAF. Rules designed specifically for LLM attack patterns.
Prompt Injection & Jailbreak
"Ignore previous instructions..." attacks, DAN patterns, role-play abuse, system prompt leaks. Covers OWASP LLM Top 10 #1 threat. Updated as new techniques emerge.
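Rule-based detection of these pattern classes can be sketched as a weighted regex list. This is an illustrative sketch only; the patterns, rule names, and weights are my assumptions, not AI Guardian's actual ruleset:

```python
import re

# Illustrative rules only -- not the product's real detectors.
# Each rule: (name, compiled pattern, score contribution).
INJECTION_RULES = [
    ("override_instructions",
     re.compile(r"ignore (all )?(previous|prior) instructions", re.I), 60),
    ("dan_jailbreak",
     re.compile(r"\bDAN\b|do anything now", re.I), 50),
    ("system_prompt_leak",
     re.compile(r"(reveal|print|show).{0,30}system prompt", re.I), 55),
]

def score_prompt(text: str):
    """Return a 0-100 risk score (capped) and the names of matched rules."""
    hits = [(name, weight) for name, pattern, weight in INJECTION_RULES
            if pattern.search(text)]
    score = min(100, sum(weight for _, weight in hits))
    return score, [name for name, _ in hits]
```

A real engine layers many more signals (normalization, obfuscation handling, semantic checks), but the score-and-match output shape is the part the routing step consumes.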
Sensitive Data & Credential Leak Prevention
Auto-scan LLM responses before they reach users. Catches API keys, credit card numbers, SSNs, internal hostnames. Your model never becomes the data leak path.
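Response-side scanning of this kind can be approximated with regex redaction. The detectors below are illustrative assumptions (real PII detection needs checksums, context, and entropy analysis, not just regex), but they show the redaction flow:

```python
import re

# Illustrative detectors -- not the product's actual patterns.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|aig)_[A-Za-z0-9]{16,}\b"),
}

def redact(response_text: str) -> str:
    """Replace any detected sensitive span with a [REDACTED:<type>] marker
    before the LLM response is returned to the user."""
    for label, pattern in DETECTORS.items():
        response_text = pattern.sub(f"[REDACTED:{label}]", response_text)
    return response_text
```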
SQL Injection Detection
UNION SELECT, DROP TABLE, blind injection, and more — blocks attacks that try to manipulate data pipelines. Especially critical for text-to-SQL and RAG architectures.
Human-in-the-Loop (HitL) Review
Borderline requests go to your team's review queue with an SLA timer. Reviewers approve, reject, or escalate. If time runs out, the fallback action triggers automatically.
Per-Tenant Policy Configuration
Custom risk thresholds per team: scores of 30 and below auto-pass, 81 and above auto-block, and so on. Add custom regex rules for medical, financial, or other compliance-specific needs.
Immutable Audit Logs
Every request is auto-recorded: timestamp, risk score, matched rule, routing result, and reviewer action. Filterable by date and risk level. SOC2-ready CSV export.
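An append-only audit trail like this can be approximated as one JSON line per request. A sketch, with field names assumed from the description above and a hypothetical file path:

```python
import json
import datetime

AUDIT_LOG = "audit.jsonl"  # hypothetical append-only JSON Lines file

def record_audit(risk_score, matched_rule, routing, reviewer_action=None):
    """Append one audit entry and return it for inspection."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "risk_score": risk_score,
        "matched_rule": matched_rule,
        "routing": routing,                  # e.g. "passed", "queued", "blocked"
        "reviewer_action": reviewer_action,  # None until a human acts
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

One-line-per-event JSONL is what makes "filterable by date and risk level" and CSV export cheap: each record is independently parseable.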
OWASP LLM Top 10 · CWE/SANS reference · NIST AI RMF aligned
Risk Scoring Engine
Every request gets a score from 0 to 100. The score determines the routing action — automatically.
Low risk · Auto-pass
Request passes through immediately to the LLM. No delay.
- Normal questions
- Code completion
- Data summarization

Medium risk · Review queue
Request is held. A human reviewer sees it within your configured SLA (default: 30 min).
- Ambiguous instructions
- Partial injection patterns
- Borderline content

High risk · Priority review
Priority queue. Reviewers are notified immediately. SLA timer starts.
- Strong injection signals
- Multiple matched patterns
- Known attack fragments

Critical · Auto-block
Request is rejected instantly. No human review needed. 403 returned to caller.
- UNION SELECT attacks
- DROP TABLE
- DAN jailbreaks
- Credential exfiltration
All thresholds are configurable per tenant via the Policy Engine.
Integrate in 2 Lines of Code
Change the base URL. Keep using your existing OpenAI SDK. That's it.
from openai import OpenAI

client = OpenAI(
    api_key="aig_your_api_key_here",  # Your AI Guardian API key
    base_url="http://localhost:8000/api/v1/proxy",  # <-- just change this
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print(response.choices[0].message.content)
# ✅ Safe requests pass through. Risky ones are blocked or queued.

import OpenAI from "openai";
const client = new OpenAI({
  apiKey: "aig_your_api_key_here",
  baseURL: "http://localhost:8000/api/v1/proxy", // <-- just change this
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});

console.log(response.choices[0].message.content);
// ✅ Safe requests pass through. Risky ones are blocked or queued.

Example responses:
Passed: { "choices": [{ "message": { "content": "Paris" } }] }
Queued for review: { "error": { "code": "queued_for_review", "review_item_id": "..." } }
Blocked: { "error": { "code": "request_blocked", "risk_score": 95 } }

Pricing
Simple, transparent pricing
Start free. Scale with your business. Enterprise-grade security included.
Starter
For small teams getting started with LLM security
- ✓ Up to 10 users
- ✓ 30 hours / month meeting transcription
- ✓ AI-powered secure meeting notes
- ✓ Basic PII detection & masking
- ✓ 90-day data retention
- ✓ Email support
Business
For growing teams that need decision intelligence
- ✓ Unlimited users
- ✓ Unlimited meeting hours
- ✓ Decision tracking & action items
- ✓ Industry-specific compliance rules
- ✓ SSO / SAML authentication
- ✓ 1-year data retention
- ✓ Priority support (chat + email)
Enterprise
For regulated industries & large organizations
- ✓ Everything in Business
- ✓ Organization knowledge base
- ✓ On-premises / VPC deployment
- ✓ Custom compliance reports
- ✓ Dedicated CSM & security engineer
- ✓ SOC2 / GDPR / HIPAA ready
Three layers of value, from adoption to retention
Start with meeting notes, differentiate with decision tracking, retain with knowledge base
Secure Meeting Notes
Starter+ · End-to-end encrypted meeting notes powered by AI Guardian. Auto PII masking, industry compliance. Open the door with safer AI meeting notes.
Decision Tracking
Business+ · Auto-track who decided what, when, and why. Extract and follow up on action items. Visualize decision flows across meetings.
Organization Knowledge Base
Enterprise+ · Turn collective meeting intelligence into a searchable knowledge graph. Support onboarding, analyze organizational decision patterns.
Optimized for Regulated Industries
Security that IT departments can approve. Industry-specific compliance built in.
All plans include OWASP LLM Top 10 coverage · Annual billing saves 20%
FAQ
Questions before you start
Have other questions?
Email us at ueda.bioinfo.base01@gmail.com — we reply within 1 business day.
Ready to secure your AI?
Start protecting your LLM in minutes
No credit card required. Free tier available for small teams.
Enterprise or custom needs? Reach out at ueda.bioinfo.base01@gmail.com