46 Questions Before Writing Code

I needed to add multi-tenancy to a Laravel app. Database-per-tenant, shared application, dynamic connection switching. The kind of feature where there are dozens of valid approaches and picking the wrong one means rewriting everything in three months.

Normally I’d sketch something out, start coding, hit a decision I hadn’t thought through, stop, rethink, refactor, repeat. For a solo developer without a team to bounce ideas off, this cycle eats weeks.

This time I tried something different. I came across the “grill me” skill from Matt Pocock and adapted it for my workflow. Before writing a single line of code, I asked my AI assistant to grill me.

The approach

I told the assistant: “Ask me questions about what I’m building, one at a time. Don’t suggest solutions. Just ask questions until you have enough context to write a PRD.”

What followed was a session of about 46 questions that progressively narrowed from broad architectural choices to specific implementation details.

It started wide:

  • Is this a new app or an existing one?
  • Shared application or separate deployments per tenant?
  • What’s the data isolation requirement?

Then got more specific:

  • How do you resolve which tenant a request belongs to?
  • What happens when someone hits a domain that doesn’t exist?
  • Should contractors see work orders across tenants, or only within one?
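Questions like "how do you resolve which tenant a request belongs to?" have a fairly standard shape of answer in a database-per-tenant setup: map the request's hostname to a tenant record, and treat unknown hosts explicitly. Here is a minimal framework-agnostic sketch in Python; the tenant names and database identifiers are illustrative, not from the project.

```python
# Hypothetical host-to-tenant resolution: look up the request's Host
# header in a tenant registry, and fail loudly for unknown domains.

TENANTS = {
    "acme.example.com": {"name": "Acme", "db": "tenant_acme"},
    "globex.example.com": {"name": "Globex", "db": "tenant_globex"},
}

def resolve_tenant(host):
    """Return the tenant config for a host, or None for unknown domains."""
    return TENANTS.get(host.lower())

def connection_for(host):
    """Pick the tenant database for this request, or raise for unknown hosts."""
    tenant = resolve_tenant(host)
    if tenant is None:
        # This is exactly the "domain that doesn't exist" case above:
        # decide deliberately what happens, rather than by accident.
        raise LookupError(f"no tenant registered for {host!r}")
    return tenant["db"]
```

In a real Laravel app this lookup would live in middleware that switches the database connection before the request reaches a controller; the sketch only shows the resolution step.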

And eventually drilled into edge cases I hadn’t considered:

  • What happens if a migration fails on tenant 4 of 10?
  • When a new manager is invited, do they get a password or a magic link?
  • How do you prevent one tenant from starving others on a shared server?

Why this works

Three things made this approach better than my usual process.

Edge cases surfaced before implementation. The question about migration failures is a good example. I hadn’t thought about what happens when you’re running migrations across ten tenant databases and one fails halfway. The answer was a dry-run-first approach: run `php artisan migrate --pretend` against every tenant database first, and only apply the migrations for real if all of the dry runs pass. That’s a simple solution, but I would have discovered the problem mid-implementation, probably after a partial migration had already broken something.
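The dry-run-first pattern itself is framework-independent. A sketch in Python, assuming you can hand it a dry-run callable and an apply callable per tenant (both hypothetical stand-ins for the real artisan invocations):

```python
# Dry-run-first migrations: dry-run on every tenant database first,
# and only apply for real if every dry run succeeds. No tenant is
# touched for real while any tenant would fail.

def migrate_all(tenants, dry_run, apply):
    """dry_run(tenant) returns True on success; apply(tenant) runs the migration."""
    failures = [t for t in tenants if not dry_run(t)]
    if failures:
        # Abort before modifying any database, so there is no
        # "migrated on tenants 1-3, broken on tenant 4" state.
        return {"applied": [], "failed_dry_run": failures}
    for t in tenants:
        apply(t)
    return {"applied": list(tenants), "failed_dry_run": []}
```

The key property is that a failure on tenant 4 of 10 is discovered before tenant 1 has been migrated, not after.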

Scope got clearer. Multi-tenancy is one of those features that can grow forever. Contractor portals, self-serve signup, automated custom domains, cross-tenant analytics. By the time the Q&A was done, I had a clear three-phase plan. Phase 1 was foundation: tenant resolution, connection switching, super admin panel, provisioning flow. Everything else was explicitly deferred. Without the structured questioning, I would have tried to build everything at once.

Decisions were better because I had to say them out loud. When you’re coding solo, decisions happen silently. You pick an approach and move on. But when someone asks you “why database-per-tenant instead of schema-per-tenant?” you have to articulate the reasoning. That articulation catches bad assumptions. I changed my mind on at least three decisions during the session because hearing my own reasoning exposed the gaps.

The output

The session produced a complete PRD: architecture decisions, database schemas, tenant lifecycle flows, migration strategy, testing approach, and a phased implementation roadmap. Thirteen key decisions, all documented with rationale.

Here’s a sample of what came out:

| # | Decision | Rationale |
|---|----------|-----------|
| 1 | Shared app, DB-per-tenant | Avoids N-site deployment pain |
| 2 | DIY multi-tenancy (~200-300 lines) | Full control, fits existing architecture, no package lock-in |
| 6 | Magic links for manager onboarding | No password sharing, time-limited, better UX |
| 7 | Dry-run-first tenant migrations | Prevents partial migration failures across tenants |
| 13 | Per-tenant rate limiting from day one | Prevents noisy-neighbor issues on shared infrastructure |
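Decision 13, per-tenant rate limiting, is worth a sketch because the mechanism is small. A minimal fixed-window limiter keyed by tenant (the class name, limit, and window here are illustrative assumptions, not the project's implementation; Laravel's own rate limiter would do this job in production):

```python
from collections import defaultdict
import time

class TenantRateLimiter:
    """Fixed-window rate limiter keyed by tenant, so one noisy tenant
    cannot exhaust shared capacity for the others."""

    def __init__(self, limit, window=60.0):
        self.limit = limit          # max requests per tenant per window
        self.window = window        # window length in seconds
        self._counts = defaultdict(int)  # (tenant, window index) -> hits

    def allow(self, tenant, now=None):
        """Record a request and return True if this tenant is under its limit."""
        now = time.time() if now is None else now
        key = (tenant, int(now // self.window))
        self._counts[key] += 1
        return self._counts[key] <= self.limit
```

Because the counter is keyed by tenant, tenant B is unaffected when tenant A hits its limit, which is the noisy-neighbor property the decision is about.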

What happened next

With every decision already made and documented, the implementation was mostly smooth. A few things changed as I hit real code, but the PRD held up. The assistant had the full context from the Q&A session, so when I started coding, we had a shared understanding of what we were building and why.

Phase 1 landed with 276 passing tests. The upfront questioning didn’t just save time on architecture, it also made the test cases obvious. When you’ve already asked “what happens when a suspended tenant makes a request?” during the Q&A, writing the test for it is trivial.

Using this again

This is now my default approach for anything non-trivial. I’ve used it for other features since. The pattern is simple: before you start building, have your AI assistant grill you on what you’re building. Don’t let it suggest solutions. Make it ask questions until it has enough context to write the spec for you.

The key insight is that the Q&A session isn’t just about extracting information from your head. It’s about building a shared context between you and the assistant that carries through the entire implementation. Every decision is documented, every edge case is accounted for, and when you start writing code, both you and the assistant are working from the same understanding.

For a solo developer, this is the closest thing to having a solutions architect on the team.

I also wrote about the guardrails I use when giving AI assistants this level of access, and how I use structured notes to carry context between sessions.