Notes
Short reflections and observations from working with AI inside real systems. These are neither tutorials nor announcements. They are things I noticed, reconsidered, or understood differently after building something.
Each note is self-contained. Read them in any order.
The prompt is not the program
A prompt can produce working code, but that does not make it a program. A program has structure, boundaries, failure modes, and a reason to exist beyond the conversation that generated it. The moment I started treating prompts as inputs to a system — not as the system itself — the quality of everything changed.
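A minimal sketch of what that shift can look like, assuming a hypothetical `client.complete` call: the prompt becomes one versioned input crossing a boundary, with a typed result and an explicit failure mode, instead of being the whole artifact.

```python
from dataclasses import dataclass

# A versioned prompt template: an input to the system, like a config value.
PROMPT_VERSION = "summarize-v2"
PROMPT_TEMPLATE = "Summarize the following ticket in one sentence:\n{ticket}"

@dataclass
class Summary:
    text: str
    prompt_version: str

class SummaryError(Exception):
    """Raised when model output fails validation at the boundary."""

def summarize(ticket: str, client) -> Summary:
    # `client.complete` is an assumed API, standing in for any LLM client.
    raw = client.complete(PROMPT_TEMPLATE.format(ticket=ticket))
    if not raw or len(raw) > 300:
        # The failure mode is explicit, not a silent bad answer downstream.
        raise SummaryError(f"invalid output for prompt {PROMPT_VERSION}")
    return Summary(text=raw.strip(), prompt_version=PROMPT_VERSION)
```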
Context windows are an architectural constraint
When working with AI agents, the context window is not just a technical limitation. It shapes what the agent can reason about, how you decompose problems, and where you place boundaries between components. Designing around it is no different from designing around memory limits or network latency. Ignore it, and the system degrades silently.
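One way to make the constraint explicit, sketched here with an assumed `count_tokens` callback (for example, a tokenizer's encode-and-count): treat the window as a budget that each component spends against, the same way you would budget memory.

```python
def fit_to_budget(chunks: list[str], budget: int, count_tokens) -> list[str]:
    """Greedily select chunks until the token budget is spent.

    `count_tokens` is an assumed callback, and `budget` is whatever
    slice of the context window this component owns.
    """
    selected, used = [], 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost > budget:
            break  # an explicit boundary, not a silent truncation
        selected.append(chunk)
        used += cost
    return selected

# Illustrative usage with a rough characters-per-token heuristic:
context = fit_to_budget(documents, budget=4000,
                        count_tokens=lambda s: len(s) // 4)
```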
Tests written by AI need human intent
AI can generate tests quickly, but tests without clear intent are noise. The most useful pattern I found: I describe what the test must verify and why, then let the AI write the implementation. The human decides what matters. The machine handles the syntax.
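A hypothetical example of that split: the intent lives in the test name and docstring, written by the human; the body is the part the AI fills in. `make_session` and `authenticate` are stand-ins for a real system under test, not a real API.

```python
def test_expired_sessions_are_rejected():
    """Intent (human): a session past its TTL must never authenticate,
    even if its token signature is still valid. This guards a failure
    that matters, not a line-coverage number."""
    # Implementation (AI-written, human-reviewed). `make_session` and
    # `authenticate` are hypothetical stand-ins for the system under test.
    session = make_session(ttl_seconds=0)
    result = authenticate(session.token)
    assert result.ok is False
    assert result.reason == "expired"
```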
The danger of fluent output
AI produces text that reads well. This is dangerous when the content is wrong. I learned to distrust fluency and look for structure instead. If the output cannot be broken into verifiable claims, it is not trustworthy — no matter how well it reads.
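One sketch of what "structure instead of fluency" can mean in practice: ask for discrete, falsifiable claims with a place to check each one, rather than a well-written paragraph. The `Claim` shape here is an illustration, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    statement: str   # one falsifiable assertion, not a paragraph
    evidence: str    # where a reviewer can check it
    verified: bool = False

def unverified(claims: list[Claim]) -> list[Claim]:
    # Fluent prose hides errors; a list of claims exposes them one by one.
    return [c for c in claims if not c.verified]

remaining = unverified([
    Claim("The cache is invalidated on write", "cache.py:87"),
    Claim("Retries are bounded at 3", "client.py:42", verified=True),
])
```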
Small tools teach more than large frameworks
Building a habit tracker in one evening taught me more about AI-assisted development than reading documentation about agent frameworks. Constraints force decisions. Decisions produce knowledge. Large frameworks defer decisions, and with them, learning.
Naming things is still the hardest part
AI can generate code, refactor modules, and write documentation. But choosing the right name for a concept — one that will remain accurate as the system evolves — still requires a human who understands the domain and the direction. This has not changed.