AI Guardian

Python Integration

AI Guardian is fully compatible with the openai Python SDK. You only need to change two parameters: api_key and base_url.

Installation

pip install openai

Basic Usage

from openai import OpenAI

client = OpenAI(
    api_key="aig_YOUR_API_KEY",
    base_url="http://localhost:8000/api/v1/proxy",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the latest AI news."},
    ],
)
print(response.choices[0].message.content)
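Streaming works through the same client, assuming the proxy forwards streamed responses unchanged (this guide does not confirm that; verify against your deployment). A minimal sketch that works with any OpenAI-SDK-compatible client:

```python
def stream_reply(client, prompt: str):
    """Yield text fragments from a streamed chat completion.

    Whether AI Guardian's proxy passes streaming through untouched is an
    assumption here, not a documented guarantee.
    """
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk typically carries content=None
            yield delta
```

Usage: print("".join(stream_reply(client, "Summarize the latest AI news."))).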

Error Handling

AI Guardian returns standard OpenAI-shaped error responses. Catch APIStatusError and inspect the error.code field:

from openai import OpenAI, APIStatusError

client = OpenAI(
    api_key="aig_YOUR_API_KEY",
    base_url="http://localhost:8000/api/v1/proxy",
)

def safe_complete(user_message: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_message}],
        )
        return response.choices[0].message.content or ""  # content can be None (e.g. tool calls)

    except APIStatusError as e:
        body = e.response.json()
        error = body.get("error", {})
        code = error.get("code", "unknown")

        match code:  # structural pattern matching requires Python 3.10+
            case "request_blocked":
                score = error.get("risk_score", "?")
                return f"[BLOCKED] Risk score: {score}. Request was blocked by AI Guardian."

            case "queued_for_review":
                item_id = error.get("review_item_id", "?")
                return f"[QUEUED] Review ID: {item_id}. A human will review this request."

            case _:
                raise  # Re-raise unexpected errors

LangChain

LangChain's langchain_openai package uses the OpenAI SDK under the hood, so the same two settings apply. Point openai_api_base at AI Guardian:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_key="aig_YOUR_API_KEY",
    openai_api_base="http://localhost:8000/api/v1/proxy",
)

response = llm.invoke("What is the capital of France?")
print(response.content)

Environment Variables

Best practice is to store credentials in environment variables rather than in source code. With python-dotenv, a .env file is loaded at startup:

# .env
AI_GUARDIAN_API_KEY=aig_YOUR_API_KEY
AI_GUARDIAN_BASE_URL=http://localhost:8000/api/v1/proxy

# app.py
import os
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

client = OpenAI(
    api_key=os.environ["AI_GUARDIAN_API_KEY"],
    base_url=os.environ["AI_GUARDIAN_BASE_URL"],
)
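A fail-fast check at startup turns a missing variable into an immediate, descriptive error rather than a confusing authentication failure on the first request. A small sketch using the variable names from the .env file above:

```python
import os

def load_guardian_config() -> dict[str, str]:
    """Read AI Guardian settings from the environment, failing fast if any are missing."""
    required = ("AI_GUARDIAN_API_KEY", "AI_GUARDIAN_BASE_URL")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {
        "api_key": os.environ["AI_GUARDIAN_API_KEY"],
        "base_url": os.environ["AI_GUARDIAN_BASE_URL"],
    }
```

The returned dict unpacks directly into the client constructor: client = OpenAI(**load_guardian_config()).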