Structured Notes for AI-Assisted Development

I went through all the note-taking tools. Org-mode, Notion, Logseq. Built systems, rebuilt them, abandoned them. The pattern was always the same: spend more time organizing than actually capturing anything useful.

For personal notes, I still use Obsidian. Daily journals, life stuff, things I write myself. But I noticed something interesting. When I’d ask my AI assistant a question like “what did I do last April for Easter,” it would skim through my daily notes and find the answer. The notes weren’t written for the assistant, but it could read them just fine because they had a consistent structure.

That got me thinking. If the assistant is already reading my notes, what if I designed notes specifically for it to read? That’s where this system came from.

The problem

I use AI coding assistants daily. Claude Code, OpenCode, whatever fits the task. Throughout the day I’m investigating bugs, researching APIs, reviewing PRs, making architectural decisions. Each conversation builds up a shared understanding between me and the AI. The problem is, that understanding disappears when the session ends.

A week later I’m back in the same area of the codebase, working on a related ticket. I remember we figured something out, but the details are gone. I can’t point the assistant at our previous conversation and say “remember when we looked into this.” So we start from scratch.

The other problem is scattered context. Some things I learn end up in Slack threads. Some in PR comments. Some just stay in my head. When I need to revisit a decision or bring someone up to speed, there’s no single place to look.

The system

Everything lives in ~/Workspace/notes/, grouped by date.

~/Workspace/notes/
  2026-03-16/
    pr-916-review.md
  2026-03-22/
    filament-ui-ux.md
    gumtape-sqlite-turso.md
  2026-04-08/
    expense-soft-delete.md

Flat markdown files in date folders. No database, no sync service, no app. Just files on disk, version controlled, in the same terminal where I’m already working.

Each note has YAML frontmatter with a title, type, date, and tags. Every note starts with a ## TL;DR section: 2-5 bullets that summarize the key points.

There are three types:

  • Research - Technical investigations, API docs, debugging findings, architecture analysis
  • Session - Work logs, tasks completed, decisions made, blockers hit
  • PR Review - Code review findings and recommendations
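For illustration, here is how a note with that frontmatter could be created from the shell. This is a hand-rolled sketch, not the skill itself; the topic, slug, and tag values are made up.

```shell
#!/bin/sh
# Sketch: create today's date folder and write a research note with the
# frontmatter fields described above. Topic, tags, and bullets are
# hypothetical examples.
today=$(date +%F)
dir="$HOME/Workspace/notes/$today"
mkdir -p "$dir"
cat > "$dir/example-topic.md" <<EOF
---
title: Example topic
type: research
date: $today
tags: [example]
---

## TL;DR
- First key finding
- Second key finding
EOF
```

The unquoted heredoc lets the date variable expand, so the frontmatter date always matches the folder the note lives in.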

Why TL;DR matters

The TL;DR section is the part that makes the whole thing work. When an AI assistant scans through my notes, it can read the TL;DR and immediately decide if this note is relevant to the current task. No need to parse through a full document. No random grepping for text that might or might not match.

If I’m working on a database migration and there’s a research note from two weeks ago titled “gumtape-sqlite-turso.md” with a TL;DR that says “Turso has a 10MB row size limit, batch inserts cap at 1000 rows,” the assistant picks up exactly where we left off.

It also helps me. When I’m scanning a week of notes, I can read each TL;DR and skip anything irrelevant in seconds.
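That skim can even be scripted. A minimal sketch, assuming the layout and ## TL;DR heading described above (it writes one sample note first so the loop has something to show):

```shell
#!/bin/sh
# Sketch: print each note's path followed by only its TL;DR bullets,
# so skimming a week of notes takes seconds. A sample note is created
# first purely for demonstration.
notes="$HOME/Workspace/notes"
mkdir -p "$notes/2026-04-08"
printf -- '---\ntitle: Sample\ntype: research\n---\n\n## TL;DR\n- Turso rows cap at 10MB\n\n## Findings\ndetails here\n' \
  > "$notes/2026-04-08/sample.md"

for f in "$notes"/*/*.md; do
  echo "== $f"
  # Print bullet lines between "## TL;DR" and the next "## " heading.
  awk '/^## TL;DR/ {on=1; next} /^## / {on=0} on && /^- /' "$f"
done
```

An AI assistant can run the same kind of scan before deciding which notes to read in full.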

Built for Claude, useful for me

These notes are structured so that an AI assistant can read them. That’s the primary design goal. But the side effect is that they’re also useful for me.

The frontmatter makes notes searchable by type and tags. The date-based folder structure means I can find anything by when it happened. The TL;DR means I never have to re-read a full note to remember what it was about.
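Because the frontmatter is plain text, searching by type or tag is just grep. A sketch, assuming frontmatter lines like type: research and tags: [sqlite, turso]; the tag and filename are illustrative, and a fixture note is written first so the search has a hit:

```shell
#!/bin/sh
# Sketch: filter notes by frontmatter fields with plain grep.
# The sample note and the "turso" tag are hypothetical.
notes="$HOME/Workspace/notes"
mkdir -p "$notes/2026-03-22"
printf -- '---\ntitle: Turso analysis\ntype: research\ntags: [sqlite, turso]\n---\n' \
  > "$notes/2026-03-22/turso-demo.md"

# All research notes mentioning the tag "turso":
grep -rl '^type: research' "$notes" | xargs grep -l 'tags: .*turso'
```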

When I come back to a task and want to resume from the shared understanding we built up in a previous session, I point the assistant at the relevant notes. Instead of re-explaining context, re-investigating the same APIs, or re-discovering the same gotchas, we pick up where we left off.

This is different from every other note system I’ve tried. Notion was for building wikis I never read. Logseq was for connecting ideas I never connected. This system works because the notes have a consumer that actually reads them every time.

How notes get created

I have a custom skill that handles note creation. I tell the assistant what I learned or what we investigated, and it generates a note following the template. Research notes get sections for Context, Findings, Gotchas, and References. Session notes get Work Done, Decisions, Blockers, and Next Steps.

The skill definition looks like this:

---
name: notes
description: Create structured notes.
allowed-tools: Bash(mkdir:*), Bash(date:*), Read, Write, Edit, Glob
---

# Notes Skill

## Workflow

1. Determine note type (research or session)
2. Generate a kebab-case slug from the topic
3. Create file at ~/Workspace/notes/YYYY-MM-DD/<slug>.md
4. Populate using the appropriate template
5. Confirm the file path and a one-line summary

## Templates

### Research Note
- TL;DR (2-5 bullets)
- Context (why this research was needed)
- Findings (numbered sections with detail)
- Gotchas (surprising or undocumented behavior)
- References (links, file paths, docs)

### Session Note
- TL;DR (what was accomplished)
- Work Done (what and why)
- Decisions (with rationale)
- Blockers / Open Items
- Next Steps

I review the output, adjust anything that’s off, and it gets saved. The whole thing takes under a minute because I’m not writing from scratch. The assistant was part of the conversation where this knowledge was generated, so it already has the context.
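Step 2 of the workflow above, generating a kebab-case slug, is simple enough to sketch in shell (the helper name slugify is mine, not part of the skill):

```shell
#!/bin/sh
# Sketch of workflow step 2: turn a topic string into a kebab-case slug.
# Lowercase it, squeeze runs of non-alphanumerics into "-", trim the edges.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-*//; s/-*$//'
}

slugify "Gumtape: SQLite -> Turso"   # gumtape-sqlite-turso
```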

One note per topic per day. If I come back to the same topic later in the day, it appends to the existing note instead of creating a new one.
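The append-or-create rule can be sketched as follows; the topic slug reuses the expense-soft-delete example from the tree above, and the section heading for appended updates is my own convention:

```shell
#!/bin/sh
# Sketch: one note per topic per day. Append a timestamped update when
# today's note for the topic already exists, otherwise create it fresh.
note="$HOME/Workspace/notes/$(date +%F)/expense-soft-delete.md"
mkdir -p "$(dirname "$note")"
if [ -f "$note" ]; then
  printf '\n## Update %s\n\n- later finding\n' "$(date +%H:%M)" >> "$note"
else
  printf -- '---\ntitle: Expense soft delete\ntype: session\ndate: %s\n---\n\n## TL;DR\n- initial finding\n' \
    "$(date +%F)" > "$note"
fi
```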

What makes this stick

Every other note system I’ve tried failed because it required me to switch contexts. Open a different app, navigate to the right workspace, create a page, format it. By the time I’d done all that, I’d lost the thought.

This system lives in the terminal. The notes are plain markdown files. There’s no app to switch to, no login, no sync to wait for. Claude generates the structure, I review it, and it’s done. The friction is close to zero.

The notes are also immediately useful. Not “useful someday when I build my knowledge graph.” Useful tomorrow, when I open a new session and the assistant reads them to understand where we left off.

If you’re using any AI coding assistant, think about what happens to the context you build up during a session. If it just disappears, you’re doing the same work twice. A simple notes folder with a consistent structure fixes that.