Day 1 Build Log — "Wait, This Actually Works?"
Planning was done. Now it had to actually get built.
Honestly, I was a little nervous. The documents were thoroughly prepared — but when I sat down to actually build it, part of me kept wondering: "Will this actually run?" It was the same feeling as starting kimeunsoo.xyz, not knowing what I was doing.
context.md: ready. schema.md: ready. roadmap.md: ready. Starting at Step 0.1.
Step 0.1 — Monorepo Skeleton
I handed context.md to Claude Code and gave the first instruction: build a Turborepo-based monorepo. apps/mobile, apps/api, apps/core, packages/shared, packages/db, packages/llm — exactly the folder structure written in the roadmap.
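For reference, this is roughly the skeleton I expected to see on GitHub afterward. The six workspace folders come straight from the roadmap; the root-level config files and the one-line descriptions are my guesses at a typical Turborepo layout, not a listing of the actual repo.

```
pepper/
├── apps/
│   ├── mobile/      # the family-facing app
│   ├── api/         # API server (deployed to Vercel in Step 0.6)
│   └── core/        # Pepper Core server
├── packages/
│   ├── shared/      # shared types and utilities (my reading of the roadmap)
│   ├── db/          # database client and queries
│   └── llm/         # LLM router (Step 0.5)
├── context.md       # planning docs live in the root
├── roadmap.md
├── turbo.json       # assumed Turborepo config
└── package.json
```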
A few minutes later, the folders were on GitHub.
Until a few months ago, I didn't know what GitHub was. I learned it building kimeunsoo.xyz — what a commit is, what a push is. Now it feels natural. That still surprises me.
While this was going on, I also asked the AI what a monorepo even is.
Success condition check: 6 folders visible at github.com/danny/pepper. context.md and roadmap.md in the root. ✅
Step 0.2 — Connecting Supabase
I created a Supabase project and got three credentials: the database URL, a public (anon) key, and a secret server-side (service role) key.
Claude Code stored these in a file called .env.local and made sure it would never get pushed to GitHub. If the keys were exposed, anyone could access the database. Security handling like this gets set up automatically — but actually going through it yourself makes you feel why it matters, not just understand it abstractly.
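To make it concrete, this is roughly how those keys end up being used on the server side, assuming the standard @supabase/supabase-js client. The file path and environment variable names are my guesses; only the fact that the values live in .env.local (and stay out of git) comes from the actual setup.

```typescript
// packages/db/src/client.ts (illustrative path, not the real file)
import { createClient } from "@supabase/supabase-js";

// Variable names are assumptions; the values come from .env.local,
// which is kept out of git so the keys never reach GitHub.
const url = process.env.SUPABASE_URL!;
const serviceRoleKey = process.env.SUPABASE_SERVICE_ROLE_KEY!; // server-side only

export const supabase = createClient(url, serviceRoleKey);
```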
Success condition check: .env.local not in git. ✅
Step 0.3–0.4 — Database Setup
This is where all the work put into schema.md in Ep 2 paid off. Claude Code read schema.md and generated 15 SQL files. Family info, chat rooms, Vault items, AI cost logs — every table Pepper would need.
My job was to paste each SQL into Supabase and click Run.
The first run hit an error. pgvector (the vector search extension) needed to be enabled before certain tables could be created — the order was wrong. I copied the error log and dropped it into Claude Code. A corrected SQL file came back in under two minutes. Ran it again. Passed.
When I opened the Supabase Table Editor and saw the table list fill the screen, one thought landed: "The structure I spent half a day designing actually got built into a real database." The first moment an abstract document became something real.
Then I seeded the actual data — four family members (Danny, Soyeon, Eunsoo, Eunje) and five chat rooms (one family room + four individual rooms). Seeing the names appear on screen made it feel like an actual system with our family's data in it.
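I won't reproduce the exact seeding step here (it may well have been pasted SQL), but as a sketch, an equivalent seed script with the Supabase client could look like this. Every column name and role value below is invented for illustration; only the table names (users, chat_rooms) and the counts (4 and 5) match the real data.

```typescript
// scripts/seed.ts - a sketch only; column names and values are assumptions
import { supabase } from "../packages/db/src/client";

async function seed() {
  // Four family members
  const { error: userError } = await supabase.from("users").insert([
    { name: "Danny", role: "parent" },
    { name: "Soyeon", role: "parent" },
    { name: "Eunsoo", role: "child" },
    { name: "Eunje", role: "child" },
  ]);
  if (userError) throw userError;

  // One family room plus four individual rooms
  const { error: roomError } = await supabase.from("chat_rooms").insert([
    { name: "Family", type: "group" },
    { name: "Danny & Pepper", type: "direct" },
    { name: "Soyeon & Pepper", type: "direct" },
    { name: "Eunsoo & Pepper", type: "direct" },
    { name: "Eunje & Pepper", type: "direct" },
  ]);
  if (roomError) throw roomError;
}

seed().then(() => console.log("seeded 4 users and 5 chat rooms"));
```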
Success condition check: users table shows 4 entries. chat_rooms shows 5. ✅
Step 0.5 — LLM Router — This Was the Moment
This was the most impactful moment of Day 1.
I built a router that sends every AI call through a single entry point. Light tasks like intent classification go to Gemini Flash-Lite (the cheapest model). Heavy tasks like code generation go to Claude Sonnet (more expensive, but the performance is there). And every AI call is automatically logged — including its cost.
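The real router presumably lives in packages/llm and I'm not reproducing it here, but the core idea fits in a short sketch. The tier names, model strings, and the provider call below are placeholders I made up; the only parts taken from the real project are the single entry point and the write to cost_logs with model and cost_usd.

```typescript
// packages/llm/src/router.ts - a sketch of the idea, not the actual code
import { supabase } from "../../db/src/client";

type Tier = "light" | "heavy";

const MODELS: Record<Tier, string> = {
  light: "gemini-2.5-flash-lite", // cheap: intent classification, short replies
  heavy: "claude-sonnet",         // pricier: code generation, heavy reasoning
};

// Placeholder for the real provider SDKs (Gemini / Anthropic clients).
async function callProvider(
  model: string,
  prompt: string
): Promise<{ text: string; costUsd: number }> {
  throw new Error("wire the actual provider SDK in here");
}

export async function callLLM(tier: Tier, prompt: string): Promise<string> {
  const model = MODELS[tier];
  const { text, costUsd } = await callProvider(model, prompt);

  // Every call gets logged with its model and cost, so spend stays visible.
  await supabase.from("cost_logs").insert({ model, cost_usd: costUsd });

  return text;
}
```

A test call would then be something like `await callLLM("light", "Hello!")` — roughly the shape of the test that follows.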
I ran a test.
"Hello!" appeared in the terminal. Gemini responded.
Then I opened the Supabase cost_logs table. One row.
model: gemini-2.5-flash-lite
cost_usd: 0.0000012
Three things hit at once.
"Wait — the AI actually answered my question."
"That's in the logs. With the price."
"If the price is logged, I can control it."
Seeing the cost recorded wasn't just a cool feature — it was the first time it clicked for real. If I can see exactly which model I used, when, and how much it cost, I can catch runaway spend before it spirals. I still remembered hitting the session limit back in Ep 2 — so the value of this was immediately visceral, not abstract.
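Because everything lands in one table, keeping an eye on spend becomes a single query. Here's a rough sketch of a daily check, assuming cost_logs has a created_at timestamp column (which I haven't verified):

```typescript
// A sketch of a daily spend check against cost_logs (created_at is assumed).
import { supabase } from "../packages/db/src/client";

export async function spendToday(): Promise<number> {
  const since = new Date();
  since.setHours(0, 0, 0, 0); // start of today, local time

  const { data, error } = await supabase
    .from("cost_logs")
    .select("cost_usd")
    .gte("created_at", since.toISOString());
  if (error) throw error;

  // Sum every logged call's cost for the day.
  return (data ?? []).reduce((sum, row) => sum + Number(row.cost_usd), 0);
}
```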
And then the most important thought: "I can actually build a real chatbot with this."
What had been a vague "maybe this is possible" became "yes, this is doable." One number did that.
Success condition check: Gemini response received. Row created in cost_logs with cost_usd recorded. Switching the tier calls Claude Sonnet and logs a different model name. ✅
Step 0.6 — Deploying to Vercel
I built an API server in apps/api and deployed it to Vercel. Vercel is a service that automatically deploys your code to the internet whenever you push to GitHub — I first learned about it building kimeunsoo.xyz. It's second nature now.
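The health endpoint itself is tiny. Here's a sketch of what a Vercel serverless handler for it might look like; the actual file layout inside apps/api may differ.

```typescript
// apps/api/api/health.ts - illustrative path; returns the { ok, time } shape shown below
import type { VercelRequest, VercelResponse } from "@vercel/node";

export default function handler(_req: VercelRequest, res: VercelResponse) {
  res.status(200).json({ ok: true, time: new Date().toISOString() });
}
```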
Deployment finished. I typed the URL into the browser.
{"ok": true, "time": "2026-05-17T..."}
Pepper's API server is live on the internet. A system I built is responding at a real URL.
Success condition check: /api/health returns ok: true. Auto-deploy on GitHub push confirmed. ✅
The Result — Half a Day of Planning, Two Hours of Building, 51% of the Session Used
Steps 0.1 through 0.6. Actual implementation time: two hours.
Session usage: 51%. Building kimeunsoo.xyz with a similar amount of session budget had produced far less. The difference comes down to one thing: whether time went into planning first.
There were a few errors along the way — pgvector ordering, missing environment variables, type errors. All solved the same way. Copy the error log, drop it into Claude Code. Fixed in minutes. I didn't need to know what to fix.
Clear success conditions make progress visible and fast. Being able to verify with my own eyes whether something worked means I can move to the next step without hesitation. For a non-developer, this matters more than you'd think — you can only tell the AI what to do next once you can judge for yourself whether the last thing worked.
We're a fair bit in. Next up: bringing up the Pepper Core server, scaffolding the mobile app, and wiring up the chat screen. Sometime next week, the moment will come — I'll type "@Pepper hi," and Pepper will respond for the first time.