Auditor adds security context to AI-generated code using rules tailored to your stack, detecting risks and enforcing production-safe patterns at every audit.
```js
// AI-generated code
const user = await db.query(
  `SELECT * FROM users
   WHERE id = ${userId}`
)
```

Critical risk: User-controlled SQL execution
CRITICAL · SQL Injection
Use parameterized queries. Never interpolate user input into SQL strings.
```js
const user = await db.query(
  'SELECT * FROM users WHERE id = $1',
  [userId]
)
```
Secure pattern enforced automatically
Impact measured across thousands of AI-generated code reviews.
Risk of exploitation in AI-generated code, before and after Auditor
AI specialist & Head of R&D at a fintech company
Building the security layer between LLMs and production code. Background in AI research and fintech systems, with a focus on shipping developer tools that are fast, practical, and secure by default.
Cybersecurity expert · ex-Siemens & Guidewire
Designs the core security engine behind Auditor. Background in enterprise cybersecurity, now focused on bringing production-grade security practices into everyday developer tooling.
Auditor is a CLI + MCP server that detects your project's stack and delivers context-aware security rules directly to your AI coding assistant. Run `auditor init` in your project, install the MCP server in your AI client, and from that point your AI will receive the right security rules every time it touches your code.
AI models are trained on vast amounts of code — including insecure patterns. They have no awareness of your specific stack, ORM, or framework quirks. Without context, they can generate SQL injection vulnerabilities, expose secrets, misconfigure CORS, or skip rate limiting. Auditor closes that gap.
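One of those gaps is easy to show concretely. The sketch below is an HTML-escaping helper, the kind of step an assistant may silently skip when interpolating user input into markup; it is illustrative only, not Auditor's rule output:

```javascript
// Minimal sketch: escape user-controlled input before it reaches HTML.
// An AI assistant without security context may interpolate it raw,
// opening the door to XSS.
function escapeHtml(input) {
  return String(input).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[ch]));
}

// A script payload is neutralized before rendering:
console.log(escapeHtml('<script>alert(1)</script>'));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```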
Auditor works with any AI client that supports the Model Context Protocol (MCP), including Claude Code, Cursor, and Windsurf. If your client supports MCP, Auditor works with it.
Currently Auditor detects Node.js projects (Express, Next.js, NestJS, Fastify, React, Prisma, Sequelize, TypeORM, Mongoose) and Python projects (FastAPI, Django, Flask, SQLAlchemy, Pydantic). It also detects containerized environments via Dockerfile and docker-compose.
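Manifest-based detection of this kind can be sketched as a simple dependency lookup. The mapping and function below are hypothetical, assumed for illustration; Auditor's actual detection logic may differ:

```javascript
// Hedged sketch: map known dependency names in package.json to frameworks.
// (Illustrative only — not Auditor's real detection code.)
const KNOWN = {
  express: 'Express',
  next: 'Next.js',
  fastify: 'Fastify',
  prisma: 'Prisma',
  mongoose: 'Mongoose',
};

function detectStack(pkgJson) {
  // Merge runtime and dev dependencies, then keep only recognized names.
  const deps = { ...pkgJson.dependencies, ...pkgJson.devDependencies };
  return Object.keys(deps).filter((d) => KNOWN[d]).map((d) => KNOWN[d]);
}

console.log(detectStack({ dependencies: { express: '^4.18.0', prisma: '^5.0.0' } }));
// → [ 'Express', 'Prisma' ]
```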
Auditor ships with 17 rules across three severity levels: Critical (SQL Injection, XSS, Authentication, Secrets Management, Input Validation), High (CORS, Rate Limiting, HTTP Headers, CSRF, Secure Logging, NoSQL Injection, ORM/Mass Assignment), and Medium (Dependency Scanning, Error Handling, Session Management, File Upload, Container Security).
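To make one of these rule categories concrete, here is a minimal sketch of the pattern a Rate Limiting rule pushes toward: a fixed-window counter per client key. This is an assumed illustration, not Auditor's rule text:

```javascript
// Hedged sketch of fixed-window rate limiting per client key.
// (Illustrative of the Rate Limiting category — not Auditor's actual rule.)
function createRateLimiter(limit, windowMs) {
  const hits = new Map();
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    // Start a fresh window if none exists or the current one expired.
    if (!entry || now - entry.start >= windowMs) {
      hits.set(key, { start: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

const allow = createRateLimiter(3, 60_000);
console.log(allow('1.2.3.4'), allow('1.2.3.4'), allow('1.2.3.4'), allow('1.2.3.4'));
// → true true true false
```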
No. Auditor runs entirely locally. The CLI reads your manifest files (package.json, pyproject.toml, etc.) only to detect your stack and generate a fingerprint hash. No code is uploaded to any server.
Run `auditor init` inside your project to generate the auditor.json fingerprint file. Then install the MCP server in your AI client using the install command for your platform. That's it — your AI will start receiving security context automatically.
Run `auditor update` to re-detect your stack. Auditor uses SHA-256 fingerprinting on your manifest files, so it only updates when something actually changed. Your AI will automatically receive the updated rules on the next audit.