Pillar 2: CI/CD and monitoring for solo builders (GitHub Actions, Sentry, uptime)

Manual deployment is the best way to ship broken code without knowing. CI/CD, Sentry, uptime monitoring: the minimum viable pipeline for solo builders.


“It works on my machine”: the epitaph of 1,000 projects

Friday evening, 10 PM. You just fixed a bug that 3 users reported. Simple fix: a missing condition in a handler. You commit, push to prod manually, close the laptop.

Saturday morning, 14 messages. Your homepage is throwing a 500 error. Your fix broke something else. You didn’t know because nobody was watching.

I know this scenario. I’ve lived it. Multiple times.

When you’re solo, there’s no colleague to do a code review. No QA to test before production. No DevOps to configure alerts. It’s you. All the time. And your brain at 10 PM on a Friday is not reliable.

The natural reflex of a solo builder: “I’m alone, I don’t need CI/CD, that’s for teams.” It’s exactly the opposite. When you’re solo, you need more automation. Because there’s nobody to catch your mistakes.

Manual deployment is an anti-pattern. Not because it’s slow, but because it’s fragile. Every manual step is an opportunity for error. And every error in production is an error your users pay for.

What Clean Architecture says about boundaries

Robert C. Martin has a concept that applies perfectly here: architectural boundaries.

In Clean Architecture, he explains that the boundary between your code and the outside world deserves as much attention as the boundary between your internal modules. Deployment is a boundary. It’s the moment your code leaves your machine and enters the real world.

Most solo builders treat deployment as an afterthought. A git push followed by SSH into the server, an npm run build, a pm2 restart. Or worse: an FTP copy-paste.

The problem isn’t the technique. It’s the absence of a contract. When you deploy manually, there’s no guarantee that what reaches production is tested, linted, functional. You depend on your memory to execute every step in the right order.

Clean Architecture tells us: formalize your boundaries. Make them explicit. Automate them.

That’s exactly what a CI/CD pipeline does. It transforms an implicit boundary (“I think I did it right”) into an explicit boundary (“the pipeline validated every step before deploying”).

The minimum viable pipeline

I’m not going to sell you a Kubernetes setup with Terraform and ArgoCD. You’re solo. You need the minimum that protects you. Here are the 4 steps, in order.

Step 1: Auto lint + format

Setup time: 15 minutes. Cost: $0.

First safety net. Before even talking about tests, make sure your code is consistent.

For JavaScript/TypeScript: ESLint + Prettier. For Python: Ruff (which replaces flake8, isort, and black in a single tool). For Go: gofmt is already built in.

Why this is critical when you use AI to code: AI generates code with inconsistent formatting. One file uses single quotes, the next uses double quotes. Imports are in a random order. Indentation changes between files.

Auto lint + format fixes that in one command. You configure once, and every file follows the same rules. No more noise in diffs. No more mental debates about “should I put a semicolon here.”

Add a pre-commit hook with Husky (JS) or pre-commit (Python). Every commit is automatically formatted. You never think about it again.
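For a JS project, a minimal setup might look like this. Husky and lint-staged are the real packages; the exact lint-staged commands are a sketch you adapt to your own stack:

```shell
# Sketch: wiring a pre-commit hook with Husky + lint-staged (Husky v9)
npm install --save-dev husky lint-staged
npx husky init                             # creates .husky/pre-commit
echo "npx lint-staged" > .husky/pre-commit # run lint-staged on every commit

# Then, in package.json, tell lint-staged what to run on staged files:
#   "lint-staged": {
#     "*.{js,ts}": ["eslint --fix", "prettier --write"]
#   }
```

From then on, every commit is linted and formatted before it lands, whether you remember or not.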

Step 2: Auto tests on every push

Setup time: 30 minutes. Cost: $0 (GitHub Actions is free for public repos; private repos get 2,000 minutes/month on the free plan).

Create a .github/workflows/ci.yml file. The content is simple:

name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test

That’s it. On every push, GitHub runs your tests. If a test fails, you know immediately. Not the next day when a user sends you a message.
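One optional refinement once this works: actions/setup-node can cache your npm downloads between runs (cache is a real input of that action), which shaves time off every CI run. Only the setup-node step changes:

```yaml
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm   # caches ~/.npm between runs, keyed on your lockfile
```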

The previous article on testing and TDD explains in detail which tests to write and how. If you don’t have tests yet, start there. The CI pipeline only has value if you have tests to run.

The crucial point: no exceptions. Every push triggers the pipeline. No “it’s just a quick fix, no need to test.” That quick fix is exactly what breaks everything.

Step 3: Auto deploy on merge to main

Setup time: 20 minutes. Cost: $0 (free tiers of Vercel, Netlify, and Railway are more than enough to start).

The principle is simple: merge to main = deploy to production. Automatically. No manual intervention.

With Vercel or Netlify, it’s native. You connect your GitHub repo, and every push to main triggers a build and deployment. Zero additional YAML configuration.

For a backend app, Railway or Render offer the same thing. Connect the repo, define the build and start commands, done.
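If your host has no native GitHub integration, you can still get the same behavior with a small deploy workflow. This is a sketch: DEPLOY_TOKEN and the npm run deploy script are placeholders for whatever your platform's CLI actually expects.

```yaml
# Sketch: .github/workflows/deploy.yml — auto deploy on merge to main
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - run: npm run deploy   # placeholder: your platform's deploy command
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```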

The workflow becomes:

  1. You develop on a branch
  2. You open a PR (even if you’re solo, it forces a pause for reflection)
  3. CI passes (lint + tests)
  4. You merge
  5. Deployment happens automatically

You never touch production directly. You never ssh into a server to git pull. Deployment is a reproducible process, not an act of faith.

Step 4: One-click rollback

Setup time: 5 minutes. Cost: $0.

When things break (and they will), you need to be back up in seconds. Not hours.

Vercel and Netlify offer instant rollback. Every deployment is an immutable version. One click in the dashboard, and you’re back to the previous deployment. Downtime: less than 30 seconds.

For an app on Railway or Render, rollback is also available in the dashboard. As a last resort, a git revert on your problematic commit + push to main triggers an automatic redeployment.

The key: test your rollback before you need it. Do it once, under normal conditions, so you know exactly which buttons to click when it’s panic time.

Monitoring production

Deploying automatically is good. Knowing what happens after is better. Without monitoring, you’re blind. Your app can crash silently for hours.

Sentry

Setup: 10 lines of code. Cost: $0 (free tier offers 5,000 events/month).

Sentry captures every error in production, in real time. Full stack trace, user context, error frequency. You know exactly what broke, where, and for how many users.

Installation is trivial. For a Next.js project:

npx @sentry/wizard@latest -i nextjs

For a Node.js backend, it’s a few lines:

import * as Sentry from "@sentry/node";
Sentry.init({ dsn: "your-sentry-dsn" });

What changes when you have Sentry: you discover bugs before your users report them. You see that an error appeared 47 times in 2 hours, for 12 different users. You fix it before anyone has time to write an email.

Without Sentry, those 12 users leave silently. You’ll never know why.

Uptime monitoring

Setup: 5 minutes. Cost: $0 (BetterStack and UptimeRobot offer generous free tiers).

Uptime monitoring does one simple thing: it visits your site every X minutes and checks that it responds. If your site is down, you get an immediate alert: email, SMS, Slack, Discord.

Why it’s indispensable: your app can be down and you don’t know it. Your hosting provider has an incident. Your SSL certificate expired. Your database is full. Without monitoring, you find out when a user tweets that your site doesn’t work.

BetterStack (formerly Better Uptime) is my pick. The interface is clean, alerts are reliable, and the free tier covers a solo project easily. UptimeRobot is a solid and proven alternative.

Set up a check every 2 minutes on your main URL. Add a public status page if you have users who depend on your service. It takes 5 minutes and saves you hours of stress.
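If your backend has no obvious page to ping, a dedicated health endpoint is worth the extra five lines. A minimal sketch in plain Node — the /health path and the port are conventions of this example, not requirements of any monitoring tool:

```javascript
// Sketch: a minimal health endpoint for an uptime monitor to ping.
// BetterStack or UptimeRobot just need a 200 response to mark you "up".
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok", uptime: process.uptime() }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(process.env.PORT ?? 3000);
```

Point your monitor at /health and you also catch cases where the process is alive but the app behind it is not.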

Analytics

Setup: 15 minutes. Cost: $0 (PostHog offers 1M events/month free; Plausible is paid, but cheap, and free if you self-host the community edition).

Analytics are not vanity metrics. They tell you what your users actually do.

PostHog is my tool of choice. It’s far more than a privacy-friendly Google Analytics. It’s a product analytics tool: funnels, retention, session replay, feature flags. Everything you need to understand how people use your product.

Plausible is an alternative if you just want simple, lightweight, privacy-respecting web traffic data.

What matters is measuring the actions that make sense for your business:

  • How many users complete your onboarding?
  • What percentage uses the feature you spent 3 weeks building?
  • Where do people drop off in your funnel?

If nobody uses a feature, stop building on it. Analytics give you that clarity. Without them, you’re building blind.

How AI helps you set all this up

The irony is delicious: you use AI to build the guardrails for AI-generated code.

But that’s exactly the right use. Claude Code can generate your GitHub Actions file in 30 seconds. It knows the YAML syntax, available actions, best practices. You describe your stack, it generates the workflow.

Same for the Sentry config. “Add Sentry to my Next.js project with source maps upload”, and it’s done. Properly.

BetterStack monitoring? Claude can write you a script that configures your checks via the API.

The difference between a builder who uses AI intelligently and one who does vibe coding is exactly this. One uses AI to build the infrastructure that protects them. The other uses AI to pile up code without a safety net.

I detailed this approach in the article about how I built an AI copilot. The principle is the same: AI is a tool, not a replacement. And the best tools are the ones that prevent you from making mistakes.

Summary

| Step | Tool | Setup time | Cost |
| --- | --- | --- | --- |
| Lint + format | ESLint + Prettier / Ruff | 15 min | Free |
| Auto tests | GitHub Actions | 30 min | Free |
| Auto deploy | Vercel / Netlify / Railway | 20 min | Free |
| Rollback | Vercel / git revert | 5 min | Free |
| Error tracking | Sentry | 10 min | Free (5k events/mo) |
| Uptime | BetterStack / UptimeRobot | 5 min | Free |
| Analytics | PostHog / Plausible | 15 min | Free |

Total: ~2 hours of setup. $0. And you sleep soundly.

In 2 hours, you have a pipeline that lints, tests, deploys, monitors, and alerts you. That’s the minimum. But it’s a minimum that changes everything.

This pipeline is Pillar 2 of the manifesto for shipping AI code with confidence. Pillar 1 covered testing and TDD. Next up, Pillar 3 will cover security and hosting, because an automated pipeline is useless if your server is wide open.

Automate. Monitor. Sleep.

Pierre Rondeau

Developer and indie builder. I build products and automations with AI. Creator of Claude Hub.
