
Kanbi Board: How I Built an AI Kanban App in 18 Days (Full Case Study)

February 19, 2026 · 11 min read
#casestudy #ai #nextjs #supabase #kanban

Muhammad Tanveer Abbas

Solo SaaS Builder · 6 Products Shipped · The MVP Guy

I build production SaaS MVPs in 14 days for non-technical founders. Writing about what actually works – no fluff.

Kanbi Board: 18-Day Build Breakdown

Kanbi Board is an AI-powered Kanban app I built in 18 days. This is the full case study: architecture decisions, what almost broke in production, and the 3 mistakes I caught before shipping.

The Problem Worth Solving

Task management gets messy when ideas are scattered across notes, chats, and meetings. The insight: if AI could parse unstructured text into structured tasks, you'd eliminate the friction of manual entry entirely.

The best SaaS ideas remove a step that users hate doing. For Kanbi Board, that step was manually creating and categorizing tasks from meeting notes.

Architecture Decisions

Next.js 15 App Router for the full stack. Supabase for the database, with RLS policies on every table. OpenAI as the primary AI provider, with Groq as fallback. Stripe for subscription tiers. All AI processing happens server-side, never client-side, to protect API keys.

Always process AI requests server-side. Exposing your OpenAI key client-side is a $10,000 mistake waiting to happen.
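One cheap way to enforce this is a guard that only ever resolves the key inside server code. A minimal sketch (the helper name is mine, not from the Kanbi Board codebase):

```typescript
// Sketch: resolve the OpenAI key only on the server.
// The key lives in process.env inside a server-only module, so it is
// never bundled into client JavaScript; the window check catches
// accidental imports from client components at runtime.
function getServerApiKey(): string {
  if (typeof window !== "undefined") {
    throw new Error("AI calls must run on the server, never in the browser");
  }
  const key = process.env.OPENAI_API_KEY;
  if (!key) throw new Error("OPENAI_API_KEY is not set");
  return key;
}
```

In a Next.js App Router project you'd call this from a route handler or server action, so the client only ever sees your endpoint, not the provider key.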

The 3 Mistakes I Almost Shipped

First: I almost shipped without idempotency keys on Stripe webhooks. A network blip would have created duplicate subscriptions. Second: my RLS policies allowed users to read other users' boards through a shared workspace query. Third: I had no rate limiting on the AI endpoint; one user could have burned through the entire monthly API budget.
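The first fix is small: Stripe retries webhook deliveries, so the same event id can arrive more than once, and you have to record processed ids before running side effects. A minimal in-memory sketch (a real app would persist the ids in the database; the helper name is hypothetical):

```typescript
// Dedupe Stripe webhook deliveries by event id.
// Stripe may deliver the same event twice (retries, network blips);
// skip any event id we've already handled before touching subscriptions.
const processedEvents = new Set<string>();

function handleWebhookOnce(eventId: string, handler: () => void): boolean {
  if (processedEvents.has(eventId)) return false; // duplicate delivery, skip
  processedEvents.add(eventId);
  handler(); // side effects (create subscription, etc.) run exactly once
  return true;
}
```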

RLS policies are not optional. Test them explicitly by logging in as different users and trying to access each other's data. Don't assume they work.
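The third mistake, missing rate limiting on the AI endpoint, can be patched with a fixed-window counter per user. A minimal in-memory sketch (limit and window values are illustrative, not Kanbi Board's actual config; production would use Redis or similar so limits survive restarts):

```typescript
// Fixed-window rate limiter for the AI endpoint (in-memory sketch).
const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const MAX_REQUESTS = 10;  // per user per window (illustrative)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string, now: number = Date.now()): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    // First request, or the previous window expired: start a fresh window
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_REQUESTS) return false; // over budget: respond with 429
  w.count++;
  return true;
}
```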

The AI Integration

The dual-provider setup (OpenAI + Groq) was the right call. When OpenAI had a 2-hour outage during beta, Groq handled all requests with under 500ms latency. Users never noticed.

```typescript
// Dual provider with fallback: try OpenAI first, retry on Groq if it fails
async function parseTasksWithAI(text: string) {
  const messages = [
    { role: "user" as const, content: `Extract tasks from:\n${text}` },
  ];
  try {
    return await openai.chat.completions.create({ model: "gpt-4o-mini", messages });
  } catch {
    // OpenAI errored (outage, rate limit, timeout): same request goes to Groq
    return await groq.chat.completions.create({ model: "llama3-8b-8192", messages });
  }
}
```
Build AI failover from day one. Single-provider AI apps are one outage away from being completely broken.

Want an AI-powered SaaS built for you? Book a call.

Building something similar?

I ship production MVPs in 14 days: auth, payments, and everything in between.
