Your AI now audits SQL Injection with parameterized query rules.

Install:

$ npx @auditor/mcp install --client claude

Your AI ships code. Auditor ships secure code.

Auditor adds security context to AI-generated code using rules tailored to your stack, detecting risks and enforcing production-safe patterns at every audit.

Unreviewed AI code
// AI-generated code
const user = await db.query(
  `SELECT * FROM users
   WHERE id = ${userId}`
)

Critical risk: User-controlled SQL execution

  • Potential database exposure
  • Unauthorized data access risk
  • AI-generated code without security context
Audited & production-ready

CRITICAL · SQL Injection

Use parameterized queries. Never interpolate user input into SQL strings.

const user = await db.query(
  'SELECT * FROM users WHERE id = $1',
  [userId]
)

Secure pattern enforced automatically

  • Parameterized query applied
  • Input safely handled
  • Production-safe database access

Exploitation risk benchmarking

Impact measured across thousands of AI-generated code reviews.

Risk of exploitation in AI-generated code, before and after Auditor


Meet the founders

Giuseppe
CEO

AI specialist & Head of R&D at a fintech company

Building the security layer between LLMs and production code. AI research & fintech background.

Contin
CTO

Cybersecurity expert · ex-Siemens & Guidewire

Designs Auditor's core security engine. Enterprise cybersecurity background.

Works with anything you use
VSCode
Claude
Cursor
OpenAI
N8N
Windsurf
Common questions

What is Auditor and how does it work?

Auditor is a CLI + MCP server that detects your project's stack and delivers context-aware security rules directly to your AI coding assistant. Run `auditor init` in your project, install the MCP server in your AI client, and from that point your AI will receive the right security rules every time it touches your code.

Why does AI-generated code need security rules?

AI models are trained on vast amounts of code — including insecure patterns. They have no awareness of your specific stack, ORM, or framework quirks. Without context, they can generate SQL injection vulnerabilities, expose secrets, misconfigure CORS, or skip rate limiting. Auditor closes that gap.
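To make the gap concrete, here is a minimal sketch (plain Node.js, no database required) of the SQL injection pattern from the demo above. The point is that interpolated input becomes part of the query's structure, not just its data:

```javascript
// Illustrative sketch only: how interpolated input rewrites a query.
const userId = "1 OR 1=1"; // attacker-controlled value
const query = `SELECT * FROM users WHERE id = ${userId}`;

// The WHERE clause now matches every row instead of a single id.
console.log(query); // SELECT * FROM users WHERE id = 1 OR 1=1

// With a parameterized query ('... WHERE id = $1', [userId]), the driver
// would send "1 OR 1=1" as one literal value and the lookup would fail
// safely instead of dumping the table.
```

This is exactly the class of pattern a model can reproduce from its training data when it has no security context for your stack.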

Which AI clients are supported?

Auditor works with any AI client that supports the Model Context Protocol (MCP), including Claude Code, Cursor, and Windsurf. If your client supports MCP, Auditor works with it.

Which stacks and languages does Auditor detect?

Currently Auditor detects Node.js projects (Express, Next.js, NestJS, Fastify, React, Prisma, Sequelize, TypeORM, Mongoose) and Python projects (FastAPI, Django, Flask, SQLAlchemy, Pydantic). It also detects containerized environments via Dockerfile and docker-compose.

What security rules does Auditor cover?

Auditor ships with 17 rules across three severity levels: Critical (SQL Injection, XSS, Authentication, Secrets Management, Input Validation), High (CORS, Rate Limiting, HTTP Headers, CSRF, Secure Logging, NoSQL Injection, ORM/Mass Assignment), and Medium (Dependency Scanning, Error Handling, Session Management, File Upload, Container Security).

Does Auditor send my code anywhere?

No. Auditor runs entirely locally. The CLI reads your manifest files (package.json, pyproject.toml, etc.) only to detect your stack and generate a fingerprint hash. No code is uploaded to any server.

How do I install Auditor?

Run `auditor init` inside your project to generate the auditor.json fingerprint file. Then install the MCP server in your AI client using the install command for your platform. That's it — your AI will start receiving security context automatically.
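Assuming a Claude client and the `npx` command shown at the top of this page, the two steps look like this (the `--client` value for other platforms is an assumption; check your client's docs):

```shell
# Step 1: generate the auditor.json fingerprint in your project root
auditor init

# Step 2: register the MCP server with your AI client
npx @auditor/mcp install --client claude
```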

What happens when I update my dependencies?

Run `auditor update` to re-detect your stack. Auditor uses SHA-256 fingerprinting on your manifest files, so it only updates when something actually changed. Your AI will automatically receive the updated rules on the next audit.