For the last 3 months, I've let AI write 70% of my code. Cursor for the editor. GitHub Copilot for autocomplete. Claude for architecture questions. The productivity gains are undeniable. But there are hidden costs nobody talks about. Here's the reality check.
The Productivity Gains Are Real
I'm shipping features 40% faster than I was 6 months ago. Boilerplate code that used to take 30 minutes now takes 5. API routes, form validation, database schemas: AI handles all of it with minimal edits. The time savings compound.
What AI Actually Writes Well
1. CRUD Operations and Boilerplate
AI is perfect for repetitive patterns. "Create a Next.js API route that accepts a POST request, validates the input with Zod, inserts into Supabase, and returns the result." It writes this correctly 95% of the time.
// Prompt: "Create a Next.js API route for creating a task"
// AI Output (with minimal edits):
import { z } from "zod";

// Zod schema for the request body
const taskSchema = z.object({ title: z.string().min(1), userId: z.string() });

export async function POST(req: Request) {
  const body = await req.json();
  const { title, userId } = taskSchema.parse(body);
  // db and tasks come from your Drizzle/Supabase setup
  const task = await db.insert(tasks).values({ title, userId }).returning();
  return Response.json(task);
}

2. UI Components
Give AI a description of a component and it produces clean JSX. "A card component with a title, description, and action button. Use Tailwind and shadcn/ui patterns." The output is production-ready 80% of the time.
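As a concrete sketch of what that prompt boils down to, here is the card reduced to a framework-free function that builds the markup string. In the real project this would be JSX with the shadcn/ui Button component; the prop names and Tailwind classes below are illustrative, not actual AI output.

```typescript
// Framework-free sketch of the card structure; real output is JSX with shadcn/ui.
type CardProps = { title: string; description: string; actionLabel: string };

function cardMarkup({ title, description, actionLabel }: CardProps): string {
  return [
    `<div class="rounded-lg border bg-card p-6 shadow-sm">`,
    `  <h3 class="text-lg font-semibold">${title}</h3>`,
    `  <p class="text-sm text-muted-foreground">${description}</p>`,
    `  <button class="mt-4 inline-flex rounded-md px-4 py-2">${actionLabel}</button>`,
    `</div>`,
  ].join("\n");
}

console.log(cardMarkup({ title: "Tasks", description: "Your open items", actionLabel: "View all" }));
```

The remaining 20% is usually spacing, responsive breakpoints, and matching your existing design tokens.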
3. Test Cases
AI writes test cases faster than I do. Give it a function and it generates edge cases I wouldn't have thought of. The tests aren't perfect, but they're a solid starting point.
Where AI Still Breaks
1. Complex Business Logic
AI struggles with domain-specific logic that requires understanding your product. Subscription proration logic, multi-tenant data isolation, custom pricing tiers: AI gets these wrong consistently. You need to write this yourself or heavily edit the AI output.
2. Performance Optimization
AI writes code that works, not code that's fast. It doesn't know your database indexes, your query patterns, or your scale requirements. I've seen AI generate N+1 queries, missing indexes, and inefficient algorithms that would break at scale.
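The N+1 pattern is worth seeing concretely. The in-memory "database" below is a stand-in (real code would hit Postgres or Supabase), but the shape of the bug is the same: a per-row query inside a loop versus one batched query.

```typescript
// Stand-in for a real database; queryCount models round trips to it.
type Task = { id: number; userId: number; title: string };
const tasks: Task[] = [
  { id: 1, userId: 1, title: "a" },
  { id: 2, userId: 1, title: "b" },
  { id: 3, userId: 2, title: "c" },
];
let queryCount = 0;

// One round trip per call — fine once, disastrous inside a loop.
function tasksForUser(userId: number): Task[] {
  queryCount++;
  return tasks.filter((t) => t.userId === userId);
}

// One round trip total, e.g. WHERE user_id = ANY($1) in SQL.
function tasksForUsers(userIds: number[]): Task[] {
  queryCount++;
  return tasks.filter((t) => userIds.includes(t.userId));
}

const userIds = [1, 2];
// N+1: AI often generates the per-user query in a loop…
userIds.forEach((id) => tasksForUser(id)); // 2 queries
// …when one batched query does the same work:
tasksForUsers(userIds); // 1 query
```

At 2 users the difference is invisible; at 10,000 users the loop version is 10,000 round trips.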
3. Security
AI doesn't think about security by default. It'll write SQL queries without parameterization, API routes without rate limiting, and authentication flows without CSRF protection. You must review every line for security issues.
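A minimal illustration of the first issue, using node-postgres-style `$1` placeholders (the query shapes here are hypothetical): with string interpolation, user input becomes part of the SQL text; with parameterization, the driver sends values separately and they stay data.

```typescript
// The pattern AI sometimes emits: input interpolated into the SQL text.
function unsafeQuery(title: string): string {
  return `SELECT * FROM tasks WHERE title = '${title}'`;
}

// Parameterized form: placeholders in the text, values passed alongside.
function safeQuery(title: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM tasks WHERE title = $1", values: [title] };
}

const malicious = "x' OR '1'='1";
console.log(unsafeQuery(malicious)); // the input has rewritten the WHERE clause
console.log(safeQuery(malicious));   // SQL text unchanged; input stays a value
```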
The Hidden Costs
1. Context Switching
Using AI effectively requires constant context switching. Write a prompt. Review the output. Edit it. Test it. Repeat. This is mentally taxing in a way that writing code from scratch isn't. By hour 6, I'm more tired than I used to be.
2. Over-Reliance
I've caught myself accepting AI suggestions without fully understanding them. This is dangerous. If you don't understand the code, you can't debug it when it breaks in production. I now force myself to read every line AI writes.
3. The Prompt Tax
Writing good prompts is a skill. A vague prompt produces vague code. A detailed prompt takes time to write. Sometimes it's faster to just write the code. The break-even point is around 20 lines; below that, I write it myself.
The Tools I Actually Use
Cursor (Primary Editor)
Cursor is VS Code with AI built in. The killer feature: Cmd+K to edit code inline with natural language. "Add error handling to this function." "Refactor this to use async/await." It works 70% of the time without edits.
GitHub Copilot (Autocomplete)
Copilot is best for autocomplete, not generation. It predicts the next line based on context. For repetitive code (like mapping over arrays or writing similar functions), it's perfect. For novel logic, it's hit or miss.
Claude (Architecture Questions)
I use Claude for high-level questions. "What's the best way to structure a multi-tenant SaaS database?" "How should I handle webhook retries?" It's like having a senior developer on call. The answers aren't always right, but they're always a good starting point.
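On webhook retries, for example, the answer you typically get steered toward is exponential backoff with a cap, plus jitter and an idempotency key in production. A sketch of the schedule (function name and defaults are illustrative):

```typescript
// Exponential backoff schedule for webhook redelivery: 1s, 2s, 4s, … capped.
// Production code would add random jitter and an idempotency key so the
// receiver can deduplicate retried deliveries.
function retryDelayMs(attempt: number, baseMs = 1_000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

const schedule = Array.from({ length: 8 }, (_, i) => retryDelayMs(i));
console.log(schedule); // [1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000]
```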
The Workflow That Works
- Plan the feature manually. Write the user flow, the database schema, the API contracts. Don't let AI do this; it doesn't understand your product.
- Use AI for implementation. Generate the boilerplate, the CRUD operations, the UI components. This is where AI shines.
- Review every line. Treat AI output like a pull request from a junior developer. Check for security issues, performance problems, and edge cases.
- Test manually. AI-generated code works in the happy path. You need to test the edge cases yourself.
- Refactor. AI code is rarely optimal. Refactor for readability, performance, and maintainability.
The Verdict
AI coding tools are not hype. The productivity gains are real. I'm shipping 40% faster, and the gap is widening as the tools improve. But AI is not a replacement for developer judgment. It's a tool that makes good developers faster, not a tool that makes non-developers into developers.
The developers who win in 2026 are the ones who use AI for the boring parts and focus their energy on the parts that matter: architecture, business logic, and user experience. AI handles the how. You still need to figure out the what and the why.
Want a SaaS built by a developer who knows when to use AI and when not to? Book a call.