How we ship MVPs in 2–8 weeks
Our playbook for rapid product development with AI-accelerated workflows.
The AI-native development workflow
Most teams are still building software the old way: large teams, long cycles, heavy planning up front.
We've rebuilt our entire development process around AI as a force multiplier. Here's how we consistently ship production-ready MVPs in 2–8 weeks with a team of 2–3.
1. Start with the problem, not the solution
Week 0: Discovery (3–5 days)
Before writing any code, we:
- Interview 5–10 potential users
- Map their current workflow (tools, pain points, workarounds)
- Identify the 1–2 things that, if solved, would create immediate value
Output: A one-page spec with:
- Problem statement
- Target user
- Core value prop (one sentence)
- 3–5 must-have features for v1
AI assist: We use Claude to synthesize interview notes and identify patterns across conversations.
2. Design in hours, not weeks
Week 1: Design (2–3 days)
We design directly in Figma with our shared component library:
- Mobile screens first (most usage happens on phones)
- Max 5–7 unique screens for MVP
- Use existing patterns from our design system
No custom illustrations, no branding exercises, no pixel-pushing. Just clean, functional UI.
AI assist:
- Generate screen content and microcopy with GPT-4
- Use Midjourney for placeholder imagery if needed
- Let AI suggest component layouts based on our design tokens
3. Build in public, test with real users
Weeks 2–6: Build + iterate
We build in 1-week sprints:
- Monday: Plan the week's features
- Tuesday–Thursday: Build + ship to staging
- Friday: User testing session with pilot partners
Every Friday, we put the product in front of real users. No prototypes—actual working software. We watch them use it, take notes, and adjust priorities for the next sprint.
AI assist:
- Cursor + Claude for code generation and debugging
- Automated test generation with GitHub Copilot
- AI code review for security and performance issues
Our stack:
- Frontend: Next.js + React Native (shared component library)
- Backend: NestJS + PostgreSQL
- Infra: Vercel + Railway + AWS S3
We use Turborepo to share code between web and mobile. One component library, two platforms.
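The monorepo setup above can be sketched as a Turborepo task config. This is a minimal, illustrative example, not our exact config; task names and output globs are assumptions, and Turborepo 1.x uses the key `pipeline` where 2.x uses `tasks`:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "lint": {},
    "dev": { "cache": false, "persistent": true }
  }
}
```

`"dependsOn": ["^build"]` is what makes the shared component library work: each app's build waits for the packages it imports to build first.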
4. Instrument everything from day one
Telemetry > opinions
We ship with analytics, error tracking, and feature flags built in:
- PostHog for product analytics and feature flags
- Sentry for error monitoring
- Stripe for payments (even if pricing isn't finalized)
Why? Because we want data, not guesses.
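Getting "data, not guesses" depends on consistent event names across web and mobile, so we put a typed wrapper around the analytics SDK. A minimal TypeScript sketch (the event names and local queue are illustrative; in production `track` would forward to `posthog.capture`):

```typescript
// Illustrative event union — a single source of truth for event names,
// so the compiler rejects typos like "checkout_startd".
type AnalyticsEvent =
  | { name: "checkout_started"; cartTotal: number }
  | { name: "checkout_completed"; cartTotal: number }
  | { name: "sync_failed"; source: "mindbody" | "shopify" };

const queue: AnalyticsEvent[] = [];

// In production this would call posthog.capture(event.name, event);
// queuing locally keeps the sketch self-contained.
function track(event: AnalyticsEvent): void {
  queue.push(event);
}

track({ name: "checkout_started", cartTotal: 42 });
track({ name: "sync_failed", source: "mindbody" });
```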
Week 3 decisions are informed by Week 2 usage. We know:
- Which features get used (and which get ignored)
- Where users drop off
- What errors are blocking adoption
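Once events are instrumented, "where users drop off" is simple arithmetic. A sketch of the calculation, with hypothetical step names and counts:

```typescript
// Hypothetical funnel counts pulled from product analytics.
const funnel: [step: string, users: number][] = [
  ["signed_up", 200],
  ["connected_pos", 120],
  ["first_checkout", 90],
];

// Drop-off between consecutive steps: 1 - (users at step / users at previous step).
function dropoffRates(
  steps: [string, number][]
): { from: string; to: string; rate: number }[] {
  const out: { from: string; to: string; rate: number }[] = [];
  for (let i = 1; i < steps.length; i++) {
    out.push({
      from: steps[i - 1][0],
      to: steps[i][0],
      rate: 1 - steps[i][1] / steps[i - 1][1],
    });
  }
  return out;
}

console.log(dropoffRates(funnel));
```

With these made-up numbers, the 40% drop between sign-up and POS connection is what would set the next sprint's priority.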
AI assist: We use AI to analyze user behavior patterns and suggest feature priorities based on engagement data.
5. Pilot-first, scale second
Weeks 7–8: Pilot onboarding
We don't launch publicly. We onboard 1–3 pilot customers who:
- Represent the target market
- Are willing to give weekly feedback
- Will pay for the product (even if discounted)
Pilots get:
- Hands-on onboarding (Zoom + Loom)
- Direct Slack channel to our team
- Co-design input on roadmap
Why pilots matter:
- They validate (or invalidate) the value prop
- They surface edge cases we didn't anticipate
- They become design partners, not just users
Exit criteria: When 2+ pilots say "I'd pay full price for this," we're ready to scale.
The AI multiplier effect
Here's what AI handles for us:
Code generation (40% time savings)
- Boilerplate CRUD operations
- API route scaffolding
- Database migrations
- Test generation
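To give a flavor of the boilerplate AI handles well, here is the shape of a generated CRUD layer. This in-memory repository is a hypothetical stand-in for a real NestJS service backed by PostgreSQL; the `Product` entity is illustrative:

```typescript
interface Product {
  id: number;
  name: string;
  price: number;
}

// In-memory stand-in for a database-backed repository.
class ProductRepository {
  private rows = new Map<number, Product>();
  private nextId = 1;

  create(data: Omit<Product, "id">): Product {
    const row: Product = { ...data, id: this.nextId++ };
    this.rows.set(row.id, row);
    return row;
  }

  findAll(): Product[] {
    return Array.from(this.rows.values());
  }

  update(id: number, patch: Partial<Omit<Product, "id">>): Product | undefined {
    const row = this.rows.get(id);
    if (!row) return undefined;
    const next = { ...row, ...patch };
    this.rows.set(id, next);
    return next;
  }

  remove(id: number): boolean {
    return this.rows.delete(id);
  }
}

const products = new ProductRepository();
const bar = products.create({ name: "Protein bar", price: 3 });
products.update(bar.id, { price: 4 });
```

Code like this is tedious to write and easy for AI to generate correctly, which is exactly why it's the first thing we delegate.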
Documentation (60% time savings)
- API docs auto-generated from TypeScript types
- User-facing help articles drafted from Loom recordings
- Technical specs written from Figma + Notion notes
Code review (30% time savings)
- Security vulnerability scanning
- Performance optimization suggestions
- Accessibility compliance checks
What AI can't do:
- Understand user pain
- Make product decisions
- Design delightful UX
That's where we focus our time.
Our 8-week breakdown (example: WellnessOS)
| Week | Focus | Output |
|------|-------|--------|
| 0 | Discovery | One-page spec + 5 user interviews |
| 1 | Design | Figma screens (7 screens, mobile-first) |
| 2 | Infra + auth | User accounts, Stripe setup, basic dashboard |
| 3 | Core feature 1 | Mindbody sync + POS UI |
| 4 | Core feature 2 | Shopify integration + inventory sync |
| 5 | Payments + checkout | Stripe Terminal SDK, checkout flow |
| 6 | Polish + testing | Bug fixes, mobile responsiveness, pilot prep |
| 7 | Pilot onboarding | Onboard 3 studios, train staff |
| 8 | Iterate + validate | Weekly check-ins, fix blockers, add quick wins |
Result: Production-ready product with real users and revenue.
What we've learned
1. Small teams move faster
Big teams create coordination overhead. 2–3 people can out-ship a team of 10 if they're using AI effectively.
2. Ship to real users early
Don't wait for polish. Get feedback when it still hurts to hear it (Week 3, not Month 6).
3. AI is a tool, not a replacement
AI makes you faster, but it doesn't make decisions. You still need taste, intuition, and user empathy.
4. Pilots > beta lists
A pilot who pays $50/month is worth 100 people on a waitlist.
Try this yourself
This week:
- Pick one feature you've been planning
- Timebox it to 5 days
- Use AI to handle the boilerplate
- Ship to one real user by Friday
You'll learn more in one week than in a month of planning.
Want to pilot a product with us? Apply here.