Pillar 3: security and hosting for AI builders (OWASP, pen test, scaling)

AI doesn't think like an attacker. OWASP Top 10, pen testing, hosting, load testing, scaling: the security checklist for builders coding with AI.

security OWASP hosting devops best practices vibe coding scaling

AI doesn’t think like an attacker

Last week, I asked Claude to do a security audit on an API I had generated with it. It found a SQL injection in a search parameter. Good catch.

What it didn’t find: a combination of three minor flaws that, chained together, gave full admin access.

An endpoint that returned the user ID in the response body. Another that accepted that ID without authorization checks. And a third that exposed active sessions because debug mode was still on. Three “harmless” flaws individually. Complete admin access when combined.

This is the fundamental difference between AI and an attacker.

AI does pattern matching. It looks for known signatures: eval(user_input), concatenated SQL queries, hardcoded tokens. It reviews code line by line. File by file.

An attacker thinks in chains. They ask: “If I can get this, does that give me access to that?” They map the entire system, look for contact points between components, test hypotheses.

AI is an excellent first filter. But it doesn’t replace adversarial thinking. It doesn’t replace the 30 minutes where you sit in front of your app and ask yourself: “How would I break this?”

What Clean Coder says about professionalism

“I didn’t write it, AI did.”

This sentence doesn’t exist for your users.

In The Clean Coder, Robert C. Martin defines professionalism with one word: responsibility. A professional is responsible for what they ship. Period.

Not for what they wrote. For what they shipped.

If your application leaks user data, nobody checks whether Claude or you wrote the authentication handler. Your users don’t care. GDPR doesn’t care. Your reputation doesn’t care.

Martin insists: if you don’t understand the code, you don’t ship it. This rule existed long before AI. But it became critical with vibe coding, because the temptation to ship code you didn’t fully understand is now permanent.

AI gives you the velocity of a senior developer. Security is your responsibility as a professional.

OWASP Top 10: the minimum

OWASP (Open Web Application Security Project) has maintained the list of the most common web application vulnerabilities for 20 years. If you don’t know this list, now is the time.

The 5 flaws AI generates most

1. Injection (SQL, NoSQL, commands)

AI loves concatenating strings into queries. It’s the most “natural” way to turn your prompt into working code. It’s also the most exploitable.

// What AI often generates
const user = await db.query(
  `SELECT * FROM users WHERE email = '${req.body.email}'`
)

// What you should ship
const user = await db.query(
  'SELECT * FROM users WHERE email = $1',
  [req.body.email]
)

Same pattern with MongoDB:

// Vulnerable
const user = await User.find({ email: req.body.email })
// If req.body.email = { "$ne": "" } → returns all users
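The fix is to reject anything that isn’t a plain string before it reaches the query. A minimal sketch (the helper name is illustrative, not a prescribed API):

```javascript
// Minimal guard against NoSQL operator injection: only accept plain strings.
// isSafeEmail is a hypothetical helper name for illustration.
function isSafeEmail(value) {
  // Rejects objects like { "$ne": "" } before they reach the query
  return typeof value === 'string' && value.length > 0
}

console.log(isSafeEmail('alice@example.com')) // true
console.log(isSafeEmail({ $ne: '' }))         // false
```

Schema validators like Zod do this check (and more) for you; the point is that it must happen somewhere before the database call.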

2. Broken Authentication

AI generates auth systems that work for the happy path. Sessions that never expire. JWT tokens without signature verification. Zero rate limiting on login endpoints.

// AI almost always forgets this
app.post('/login',
  rateLimit({ windowMs: 15 * 60 * 1000, max: 5 }),
  async (req, res) => { /* ... */ }
)

3. Security Misconfiguration

Debug mode in production. Default credentials. The .env committed to the repo. CORS headers set to *. AI generates code for development. It doesn’t think about production.

// AI generates this to "make it work"
app.use(cors({ origin: '*' }))

// What you should ship
app.use(cors({
  origin: ['https://your-domain.com'],
  credentials: true
}))

4. SSRF (Server-Side Request Forgery)

AI loves making HTTP calls. fetch, axios, got… it generates requests to user-supplied URLs without any validation.

// AI happily generates this
const response = await fetch(req.body.webhookUrl)

// An attacker sends: http://169.254.169.254/latest/meta-data/
// → access to your server's AWS credentials

5. Insecure Deserialization

AI uses JSON.parse() without validation, accepts serialized objects without verifying their structure, trusts the payload.

// Vulnerable
const config = JSON.parse(req.body.config)
db.query(config.query) // Attacker controls the query

// Secure: validate with Zod or Joi
const configSchema = z.object({
  theme: z.enum(['light', 'dark']),
  lang: z.enum(['fr', 'en'])
})
const config = configSchema.parse(JSON.parse(req.body.config))

How to scan

Tools don’t replace thinking. But they catch the obvious mistakes.

Semgrep (free, open source). Pattern-based. You run it on your codebase, it finds SQL injections, eval() calls, hardcoded secrets. It’s the bare minimum.

semgrep --config=p/owasp-top-ten ./src

Snyk scans your dependencies. That npm package AI added to your package.json? It might have a known CVE. Snyk knows.

npx snyk test

Claude Code in review mode. Ask it to look for security flaws. It’ll find the known patterns. Useful as a first pass.

claude "Do a security audit of src/api/ focusing on injections, authentication, and input validation"

The limitation: these tools find patterns, not logic flaws. Semgrep catches eval(user_input). It doesn’t catch that your /api/users/:id endpoint doesn’t verify the connected user has permission to view the requested profile. That’s business logic. And that requires a human brain.
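That missing check is a one-liner once a human spots it. A minimal sketch, assuming a currentUser object with id and role fields (the helper name is hypothetical; adapt it to your auth model):

```javascript
// The authorization check scanners can't know is missing:
// "authenticated" is not the same as "authorized".
// canViewProfile is a hypothetical helper for illustration.
function canViewProfile(currentUser, requestedId) {
  if (!currentUser) return false
  // Only the owner of the profile, or an admin, may view it
  return currentUser.id === requestedId || currentUser.role === 'admin'
}
```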

Pen testing

Pen testing is the art of breaking your own application before someone else does. You don’t need to be a security expert. You need 30 minutes and the right tools.

The solo minimum

OWASP ZAP (Zed Attack Proxy). Free, open source, maintained by OWASP. Point ZAP at your application and it runs automated scans: XSS, injections, weak configurations.

Burp Community Edition. The industry-standard tool in its free version. More manual, but more powerful for understanding what’s happening between your frontend and backend.

The manual checklist (30 minutes):

  • Auth bypass: try accessing protected routes without a token. Modify a JWT token (change the user ID). Test with an expired token.
  • IDOR (Insecure Direct Object Reference): change IDs in URLs. /api/orders/123 to /api/orders/124. Can you see another user’s orders?
  • XSS: inject <script>alert(1)</script> in every form field. Test stored XSS (persists in database) and reflected XSS (returns in the response).
  • CSRF: do your forms have anti-CSRF tokens? Can you submit a form from another site?

You don’t need to find zero-days. You need to find the doors you forgot to lock.

When to pay for a real pen test

If your application handles payments, health data, or sensitive personal data, DIY pen testing isn’t enough.

Realistic budget: $2,000 to $5,000 for a complete web audit.

Platforms:

  • YesWeHack (European, GDPR-compliant)
  • HackerOne (global, bug bounty programs)
  • A freelance pentester with verifiable references

This isn’t a luxury. If your app handles money or personal data, it’s a professional obligation. A data breach costs infinitely more than a pen test.

Hosting

You’ve tested your code, scanned for flaws, thought like an attacker. Now you need to put it in production somewhere.

The fundamentals

| Type | Cost | Control | Complexity | For whom? |
| --- | --- | --- | --- | --- |
| Shared hosting | $5-15/mo | Low | Low | Brochure sites, no API |
| VPS (Hetzner, DigitalOcean) | $5-30/mo | Full | Medium | Builders who want full control |
| Managed (Vercel, Railway) | $0-20/mo | Medium | Low | Most AI builders |
| Serverless (AWS Lambda, Cloudflare Workers) | Pay-per-use | Low | Medium | APIs with variable traffic |

For most solo builders, Vercel or Railway is the sweet spot. Deploy with a git push. Automatic SSL. Managed scaling.

Three non-negotiables:

  • SSL/HTTPS: Let’s Encrypt is free. There is no excuse for serving HTTP in 2026. None.
  • DNS: understand your configuration. An A record, a CNAME, an NS record. It takes 15 minutes to learn.
  • CDN: Cloudflare free tier. Your static assets served from the edge. Changes performance and adds a layer of DDoS protection.

Load testing

The question nobody asks before the first crash: how many concurrent users can your app handle before it falls over?

k6 is the tool. Free, scriptable, clear results.

// load-test.js
import http from 'k6/http'
import { check } from 'k6'

export const options = {
  stages: [
    { duration: '30s', target: 50 },   // ramp up to 50 users
    { duration: '1m', target: 50 },     // plateau
    { duration: '10s', target: 0 },     // ramp down
  ],
}

export default function () {
  const res = http.get('https://your-app.com/api/health')
  check(res, {
    'status 200': (r) => r.status === 200,
    'response < 500ms': (r) => r.timings.duration < 500,
  })
}

k6 run load-test.js

The #1 bug k6 will reveal in AI-generated code: N+1 queries. AI systematically generates the naive approach.

// What AI generates: N+1 queries
const orders = await db.query('SELECT * FROM orders')
for (const order of orders) {
  order.items = await db.query(
    'SELECT * FROM items WHERE order_id = $1', [order.id]
  )
}

// What you should ship: JOIN
const orders = await db.query(`
  SELECT o.*, json_agg(i.*) as items
  FROM orders o
  LEFT JOIN items i ON i.order_id = o.id
  GROUP BY o.id
`)

With 10 orders, it’s fine. With 10,000, your database collapses.

Scaling basics

You probably don’t need to scale right now. But you need to know your limits and what to do when you hit them.

Vertical scaling: a bigger server. Simple, expensive, limited. Going from 1 GB RAM to 4 GB. Works up to a point.

Horizontal scaling: more servers. Complex, economical at scale. Requires your application to be stateless (no in-memory sessions, no local temp files).

Connection pooling: your database has a limited number of simultaneous connections. PgBouncer for PostgreSQL. Without it, every request opens a new connection, and at 100 concurrent requests, PostgreSQL starts refusing new connections.
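A minimal pgbouncer.ini sketch of what that looks like (database name and pool sizes are illustrative; tune them to your workload):

```ini
; Minimal PgBouncer config sketch: values are illustrative
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432          ; your app connects here instead of 5432
pool_mode = transaction     ; reuse connections between transactions
max_client_conn = 1000      ; clients PgBouncer will accept
default_pool_size = 20      ; actual connections held to PostgreSQL
```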

Caching: Redis for frequently read data that rarely changes. CDN for static assets and rendered pages. A well-placed cache can cut your server load by a factor of 10.
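The cache-aside pattern behind that claim, sketched with an in-memory Map standing in for Redis so the example stays self-contained (loadFromDb is a hypothetical loader; swap in your real query or Redis client):

```javascript
// Cache-aside sketch: a Map stands in for Redis to keep this runnable.
const cache = new Map()

// loadFromDb is a hypothetical async loader supplied by the caller
async function getCached(key, loadFromDb, ttlMs = 60_000) {
  const hit = cache.get(key)
  if (hit && hit.expires > Date.now()) return hit.value // cache hit: skip the DB
  const value = await loadFromDb(key)                   // cache miss: hit the DB once
  cache.set(key, { value, expires: Date.now() + ttlMs })
  return value
}
```

Every hit inside the TTL window is a database query you didn’t run. That’s the entire trick.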

The trap: don’t scale prematurely. Understand why it’s slow first. 90% of the time, it’s an N+1 query or a missing index, not an infrastructure problem.

The security checklist before shipping

Screenshot this. Print it. Check it before every deployment.

  • HTTPS everywhere (no exceptions)
  • Authentication tested (sessions, tokens, expiry)
  • Input validation on ALL user inputs
  • Rate limiting on auth endpoints
  • Secrets in env vars, never in code
  • Dependencies scanned (npm audit / Snyk)
  • CORS configured properly (no * in production)
  • Database backups automated
  • Error messages don’t leak stack traces
  • Admin routes protected

Every unchecked line is an open door. AI didn’t close it for you. Because you didn’t ask. And because it’s not its job.

Summary

AI gives you speed. These 3 pillars give you confidence.

| Pillar | Goal | Article |
| --- | --- | --- |
| Testing | Verify the code works | Pillar 1: testing AI code |
| CI/CD & monitoring | Automate and observe | Pillar 2: CI/CD & monitoring |
| Security & hosting | Protect and deploy | This article |

Vibe coding without these pillars is building on sand. With them, it’s concrete.

You have senior-level velocity thanks to AI. You have the safety net of tests thanks to Pillar 1. You have automation thanks to Pillar 2. And now you have the security rigor of Pillar 3.

This all fits into a bigger picture. If you want the complete checklist for shipping AI code with confidence, read the manifesto. And if you still think “the right prompt is enough,” go read the anti-myths of prompt engineering.

Ship. But ship like a pro.

Pierre Rondeau

Developer and indie builder. I build products and automations with AI. Creator of Claude Hub.

LinkedIn