A complete playbook for building apps with AI — structured, efficient, and designed to keep you in the driver's seat without burning through your token budget.
This guide reflects a personal workflow developed through hands-on experience building apps with AI tools. What's described here works for me — your mileage may vary depending on your stack, team, and use case. The workflow is also intentionally evolving: as AI models improve, new tools emerge, and best practices shift, expect it to change. Treat it as a living reference, not a rulebook.
AI-assisted coding is a development approach where you describe what you want in natural language and let an LLM generate the code. Instead of writing line by line, you steer: you review, refine, and guide — the AI executes.
The key insight: treat your AI like a brilliant-but-overconfident junior developer. It will write code with complete conviction — including bugs — and won't always tell you when something is wrong. Your job is to stay in the driver's seat.
"You bring the vision, architecture, and judgment. The AI brings speed."
Input: App description + feature wishlist
Output: Product Requirements Document (PRD)
Write a clear app description. One paragraph is enough. Focus on what it does and who it's for — not the tech.
"A fitness app where users can log their daily calorie intake, browse workout routines by muscle group, and track progress over time."
Dump every feature idea without filtering. The more specific, the better — vague features produce vague code.
| ❌ TOO GENERIC | ✅ SPECIFIC |
|---|---|
| "Login screen" | "Email/password + Google OAuth + 'Forgot Password' flow" |
| "Show workouts" | "Filterable by muscle group, difficulty, equipment. Card shows duration + calories badge." |
Don't just dump features and ask for a PRD. Use a structured prompt to let the AI ask clarifying questions, organize features logically, identify your MVP, and phase your development.
I want to build [your app description here]. Here are all the features I want: [paste your full feature list] Tech stack: - Frontend: Next.js 15 (App Router) - Backend/Auth: Supabase - Styling: Tailwind CSS - Package Manager: pnpm Before generating any PRD, ask me clarifying questions to fill in any gaps. Then arrange the features in logical order, identify the MVP, and present all development phases in a numbered table. Once I confirm a phase number, generate the full PRD for that phase only.
After the AI presents the phased table, review it before generating any document. Push back on anything that feels wrong.
The phases look good but move the notification system from Phase 1 to Phase 2 — it's not MVP. Also, the user profile feature is missing. Add it to Phase 1 and regenerate the table.
Once you're happy with the phase table, generate PRDs one at a time:
Generate the full PRD for Phase 1 only. Include: overview, user stories, functional requirements, non-functional requirements, and a testing checklist with acceptance criteria for each feature.
Input: PRDs
Output: Epics, tasks, and a master plan
Prompt your coding agent to read all PRDs and break them into atomic, actionable tasks.
Read all PRDs in /AI/docs/PRDs/. Create task files per phase and generate master_plan.md with all tasks listed as [ ] not started. Format: - [ ] Task 1.1 — Setup Supabase - [ ] Task 1.2 — Create users table
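The resulting master_plan.md can stay very simple. A minimal sketch (task names are illustrative, drawn from the fitness-app example — yours will come from your own PRDs):

```markdown
# Master Plan

## Phase 1 — MVP
- [ ] Task 1.1 — Setup Supabase project and environment variables
- [ ] Task 1.2 — Create users table
- [ ] Task 1.3 — Implement email/password authentication
- [ ] Task 1.4 — User profile page

## Phase 2
- [ ] Task 2.1 — Notification system
```

Checked boxes (`- [x]`) mark completed tasks, so any new session can see project state at a glance.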
Generate a single-page mockup of all UI components. Every future session references this — saving tokens on design decisions.
Create kitchen-sink.html with every UI component: nav, auth forms, cards, modals, inputs. Use Tailwind CDN. Design ref only — no interactivity needed.
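The output can be a single static file pulling Tailwind from the CDN. A minimal sketch of what the AI might produce (component names and classes are illustrative):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- Tailwind Play CDN: fine for a design reference, not for production -->
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body class="bg-gray-50 p-8 space-y-8">
  <!-- Nav -->
  <nav class="flex items-center justify-between rounded-lg bg-white p-4 shadow">
    <span class="font-bold">FitApp</span>
    <button class="rounded bg-blue-600 px-4 py-2 text-white">Log in</button>
  </nav>
  <!-- Workout card with duration + calories badge -->
  <div class="max-w-sm rounded-lg bg-white p-4 shadow">
    <h2 class="font-semibold">Push Day</h2>
    <span class="rounded-full bg-green-100 px-2 text-sm text-green-800">45 min · 320 kcal</span>
  </div>
  <!-- Auth forms, modals, and inputs follow in the same flat layout -->
</body>
</html>
```

Because every component lives on one page, later sessions can be pointed at this file instead of re-deciding spacing, colors, and typography each time.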
AGENTS.md is your persistent instruction file — the rulebook every new AI session inherits. Place it in your project root.
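A starter AGENTS.md might look like this — the rules below are examples assembled from this guide's workflow, so adapt them to your stack:

```markdown
# AGENTS.md

## Project
Fitness tracking app — Next.js 15 (App Router), Supabase, Tailwind CSS, pnpm.

## Rules
- Read master_plan.md before starting any task. Work on ONE task per session.
- Only edit files explicitly listed in the task prompt. Never refactor uninvited.
- Use pnpm — never npm or yarn. Do not install new packages without asking.
- Match the patterns in kitchen-sink.html for all UI work.
- After completing a task, update master_plan.md and tell me exactly what to test.
```

Keep it short: every session pays the token cost of reading this file.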
Before we start, read AGENTS.md and master_plan.md. Confirm you understand the rules and tell me which task is next on the plan.
Use this to start every session.
Task by task. Session by session.
Read AGENTS.md and master_plan.md. We are working on Phase 1. Next task is Task 1.3: Implement email/password authentication using Supabase Auth. Relevant files: - /AI/docs/PRDs/phase-1-prd.md - /components/AuthForm.tsx - /lib/supabase.ts Complete Task 1.3 ONLY. Do not touch other files. When done, tell me exactly what to test.
Bloated context causes the AI to lose instructions, repeat itself, and compound errors. One task = one session.
Only include files relevant to the current task. Tell the AI what to ignore.
Your master_plan.md is the AI's only memory between sessions. Keep it updated — it replaces re-explaining everything.
Debugging is the #1 place people burn tokens and lose momentum.
The key is giving the AI the right context fast. Always ask the AI to diagnose before it fixes — this catches wrong assumptions before they touch your code.
We have a bug.
Expected: After login, user redirects to /dashboard
Actual: Redirects to /dashboard then immediately back to /login
Relevant files:
- /middleware.ts
- /lib/auth.ts
- /app/(auth)/login/page.tsx
Console error: [paste error here]
Do NOT change any files yet. First explain what you think is causing this. Once I confirm, write the fix.
If the session starts repeating itself, losing your instructions, or compounding errors — close it and start fresh:
Let's reset. Ignore this session. App: [one-line description] Working: [what works] Broken: [exact bug] Tried: [list of failed fixes] Files: [only the relevant ones] Start fresh. Do not repeat anything we already tried.
Prompting is a skill. These techniques will immediately improve the quality of your AI output and save you tokens. Learn these before you start building.
Instead of telling the AI what to do directly, tell it how to think about the problem first. You ask the AI to generate its own instructions or clarify its approach before executing.
Before writing any code, explain your plan step by step. Identify edge cases you'll need to handle. Once I approve your plan, then write the code.
Best for: complex features, architecture decisions, anything you'd regret getting wrong.
Present the AI with a numbered menu of options or phases. The AI gives you a table or list to choose from, and only executes when you provide a selection. Keeps context clean and output focused.
Read my feature list. Organize them into logical development phases and present as a numbered table. Include estimated complexity per phase. Wait for my selection before generating anything.
Best for: PRD generation, planning large feature sets, breaking down epics.
Assign the AI a specific expert role before asking your question. This primes its responses with that domain's vocabulary, standards, and mindset — dramatically improving output quality.
You are a senior full-stack engineer specializing in Next.js and Supabase with 10 years of experience. Review this auth implementation and identify any security vulnerabilities.
Best for: code review, architecture critique, security audits, writing PRDs from a PM perspective.
Ask the AI to reason step by step before giving a final answer. Forces it to show its work — making it easier to catch errors in logic before they become errors in code.
Think through this step by step: 1. What does the current auth flow do? 2. Where could the redirect loop occur? 3. What's the most likely root cause? 4. What's the minimal fix? Reason through each step before suggesting any code.
Best for: debugging, algorithm design, understanding tradeoffs.
Explicitly define what the AI cannot do — not just what it should do. Negative constraints prevent the AI from going off-scope, refactoring uninvited, or touching files you didn't ask it to.
Add the logout button to the nav. Constraints: - Only edit /components/Navbar.tsx - Do NOT refactor other components - Do NOT install new packages - Do NOT change existing styling - Keep changes under 20 lines
Best for: surgical edits, keeping scope tight, preventing runaway refactors.
Show the AI 2–3 examples of the pattern you want before asking it to produce more. The AI learns your conventions — naming patterns, code style, file structure — without lengthy explanation.
Here are two existing API routes that follow our conventions: [paste /api/workouts/route.ts] [paste /api/meals/route.ts] Now create /api/progress/route.ts following the exact same pattern.
Best for: maintaining code consistency, replicating patterns, generating repetitive components.
The most powerful prompts combine multiple techniques. A single prompt can use role prompting (act as a senior engineer), chain of thought (reason step by step), and constraint prompting (only edit this file) simultaneously for dramatically better results.
Adjust them as you build. Discovering mid-build that a feature needs rethinking is normal — update the PRD first, then update the tasks.
Quality of output = quality of context. Never ask the AI to operate on partial information. New sessions are amnesiac — your docs are their only memory.
Your git log and master_plan.md are your two sources of truth. A commit message like `[1.3] Add Google OAuth — passes login flow` beats `fix auth` every time.
Always ask the AI to explain the bug before writing any fix. Catches wrong assumptions before they touch your code. One extra step saves many frustrated ones.
If the AI refactors something you didn't ask it to — revert it immediately. Unasked refactoring is one of the most common sources of hidden bugs.
Each model has different blind spots. Pasting the same prompt into a different model is a completely valid rescue strategy — not a last resort.
| TASK | RECOMMENDED MODEL |
|---|---|
| New projects, complex logic | Claude Sonnet / Opus |
| UI design, mockups | Gemini / Stitch |
| Debugging, file management | Gemini Flash / Claude Haiku |
| Quick questions, lookups | Cheapest fast model |
| Architecture audit, code review | Claude Opus |
Run this periodically to verify what's been built matches what was planned.
Read through the entire codebase. Generate two files: 1. architecture.md — high-level structure: how the app is organized, data flow, what each major module does, how auth/database/API interact. 2. engineering.md — technical details: patterns used, key decisions, dependencies and why chosen, anything a new developer needs to understand. Be factual — describe what IS, not what was planned.
This guide is a starting point. The AI coding ecosystem is evolving fast — here are the tools and concepts worth exploring as you get comfortable with the core workflow.
Anthropic's official CLI coding agent. Deeper project integration, direct file system access, and a tighter loop between prompting and execution than most IDE extensions. Worth exploring once you've mastered the core workflow.
A convention for giving AI agents reusable, domain-specific skill instructions — think of it as AGENTS.md but for individual capabilities. Particularly useful when building apps with repeated patterns like auth flows or API routes.
Model Context Protocol servers extend what your AI agent can do — connecting it to databases, APIs, file systems, and external services. The bridge between AI and real-world integrations like Supabase, GitHub, or Notion.
AI-native code editors embed the full workflow — chat, generation, and diffs — directly in your IDE. They support AGENTS.md conventions and work well with the phased task approach described in this guide.
The next frontier beyond single-task prompting — AI agents that autonomously plan, execute, and iterate across multi-step tasks. Understanding how to design tasks that work inside agentic loops will be a core skill as these tools mature.
Browser-based AI coding environments for rapid UI prototyping. Ideal for quickly generating your kitchen sink mockup or spinning up a frontend scaffold before importing into your main coding agent workflow.
This list will grow. The best way to stay current is to follow what practitioners — not vendors — are actually using. The tools that survive aren't always the most hyped; they're the ones that genuinely reduce friction in the build loop.
Paul is currently the IT Manager at MMDC and a Masters of Innovation and Business (MIB) candidate at AIM. Before that, he served as Head of Tech Operations at Rappler, one of the Philippines' leading digital news organizations, where he led the technical infrastructure behind award-winning journalism.
His focus sits at the intersection of technology and organizational change — specifically, making AI work for business in ways that are practical, ethical, and genuinely productive. This workflow guide is a direct output of that work: not theoretical, but tested in real projects, refined through real failures, and shared in the belief that AI tools should lower the barrier to building — not raise it.
Contact me at jfernandez.mib2026b@aim.edu