We ran three approaches against a synthetic vulnerable app generated entirely by Cursor: Erzo, a legacy SAST tool, and a three-hour manual code review. Here are the results.
We built a Next.js + Supabase application using only AI prompts, deliberately asking the model to ship features fast and never mentioning security constraints.
| Vulnerability Class | Erzo | Legacy SAST | Manual Code Review (3h) |
|---|---|---|---|
| Hardcoded Environment Variables | Found (4/4) | Found (4/4) | Found (3/4) |
| Missing PostgreSQL RLS Policies | Found (3/3) | Missed (0/3) | Found (1/3) |
| Client-Side Auth State Trust | Found (1/1) | Missed (0/1) | Missed (0/1) |
| Prompt Injection Susceptibility | Found (2/2) | Missed (0/2) | Found (1/2) |
| Total Findings Detected | 100% (10/10) | 40% (4/10) | 50% (5/10) |
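To make the "Client-Side Auth State Trust" class concrete, here is a minimal TypeScript sketch. All names (`deleteUserVulnerable`, the session map, and so on) are hypothetical illustrations, not code from the test app: the flaw is a server-side handler that trusts a role flag supplied by the browser instead of re-deriving it from server-side state.

```typescript
// Hypothetical sketch of the "Client-Side Auth State Trust" flaw.
// All identifiers are illustrative, not taken from the generated app.

type ClientPayload = { userId: string; isAdmin: boolean }; // isAdmin is set by the browser!

// Server-side session store: the only trustworthy source of role information.
const serverSessions = new Map<string, { role: "admin" | "user" }>([
  ["alice", { role: "user" }],
]);

// VULNERABLE: trusts the role flag the client sent with the request.
function deleteUserVulnerable(payload: ClientPayload): string {
  if (!payload.isAdmin) return "forbidden";
  return `deleted account for ${payload.userId}`; // any client can claim isAdmin: true
}

// FIXED: ignores client claims and re-checks the role server-side.
function deleteUserFixed(payload: ClientPayload): string {
  const session = serverSessions.get(payload.userId);
  if (session?.role !== "admin") return "forbidden";
  return `deleted account for ${payload.userId}`;
}

// A forged request claiming admin rights:
const forged: ClientPayload = { userId: "alice", isAdmin: true };
console.log(deleteUserVulnerable(forged)); // "deleted account for alice"
console.log(deleteUserFixed(forged));      // "forbidden"
```

Nothing in the vulnerable version is syntactically wrong, which is exactly why pattern-based tools sail past it.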
Traditional tools rely heavily on regular expressions and abstract syntax trees. They excel at flagging well-known patterns such as hardcoded secrets, but they lack semantic understanding of Next.js server actions or cross-file Supabase security configuration. In our test, they missed every one of the logic flaws the AI generated.
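That limitation can be illustrated with a toy regex-based scanner (our own sketch, not any real SAST engine). It readily flags a hardcoded key, but it has no way to notice that a migration simply never enables row-level security, because an *absent* statement matches no pattern:

```typescript
// Toy pattern-matching scanner, sketched for illustration only.
const secretPattern = /(api[_-]?key|secret)\s*[:=]\s*["'][^"']+["']/i;

// Sample inputs (hypothetical, not from the test app):
const sourceWithHardcodedKey = `const API_KEY = "sk-live-abc123";`;

// A migration that creates a table but never enables row-level security.
const migrationMissingRls = `create table public.invoices (id uuid primary key, owner uuid);`;

function scan(source: string): string[] {
  const findings: string[] = [];
  if (secretPattern.test(source)) findings.push("hardcoded secret");
  // There is no regex for a MISSING "alter table ... enable row level security"
  // statement, which may belong in a different file entirely.
  return findings;
}

console.log(scan(sourceWithHardcodedKey)); // ["hardcoded secret"]
console.log(scan(migrationMissingRls));    // [] — the missing policy is invisible
```

Detecting the second case requires reasoning about what the code *should* contain given its context, not about what it does contain.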
Erzo combines formal static analysis with an agentic LLM verification layer. When the scanner identifies a pattern common in AI-generated codebases, such as a Supabase database call, it traces the auth context backward across files, which is how it catches missing RLS policies.
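The backward trace can be pictured with a toy call graph. This is a sketch of the general idea, not Erzo's actual implementation, and every node name is hypothetical: starting from a database call, walk caller edges and flag the query as unprotected if any path reaches an entry point without passing through an auth check.

```typescript
// Toy model of tracing auth context backward from a DB call.
// The graph and node names are hypothetical, for illustration only.
interface FnNode {
  checksAuth: boolean; // does this function verify the caller's identity?
  callers: string[];   // functions that invoke it
}

const graph = new Map<string, FnNode>([
  ["dbSelectInvoices", { checksAuth: false, callers: ["listInvoices"] }],
  ["listInvoices",     { checksAuth: false, callers: ["publicApiRoute"] }],
  ["publicApiRoute",   { checksAuth: false, callers: [] }], // unauthenticated entry point
  ["dbSelectProfile",  { checksAuth: false, callers: ["getProfile"] }],
  ["getProfile",       { checksAuth: true,  callers: [] }],
]);

// A call is protected only if EVERY backward path to an entry point
// passes through at least one auth check.
function isProtected(name: string, seen = new Set<string>()): boolean {
  const node = graph.get(name);
  if (!node || seen.has(name)) return false;
  seen.add(name);
  if (node.checksAuth) return true;
  if (node.callers.length === 0) return false; // reached an entry point unchecked
  return node.callers.every((caller) => isProtected(caller, seen));
}

console.log(isProtected("dbSelectInvoices")); // false — flag: needs RLS or an auth check
console.log(isProtected("dbSelectProfile"));  // true — an auth check guards every path
```

A real engine must also resolve dynamic dispatch, framework routing, and the RLS policies defined in SQL, but the core question is the same: can untrusted input reach this query without an identity check on the way?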