Working Notes

some incomplete notes on AI assistance.

5 min read · May 1, 2026

What follows is a collection of ideas regarding AI that have been developed over the past year and a half. Most, if not all, of them were explored with the aid of various Claude models over hundreds of conversations. The posts in this collection are the central themes that emerged by retroactively analyzing the chats' contents. The response to the question posed in each piece is my current opinion as of Spring 2026, written by hand.

what this is not

This collection is neither a total critique of nor an apology for the current state of affairs at the hands of LLMs’ creators, resellers, and accelerators. We are all subject to incentives.

how to read this

To the left of an article’s body is a table of contents for jumping to any major heading. To the right of the body is a square button inscribed with an a, for annotation. Clicking it makes an LLM call that creates a margin note. On the rightmost edge of the page is a small chevron; clicking it expands a panel with preset options for different kinds of marginalia. You can also highlight a span of text first, and the annotation will scope to your selection.

Margin notes can also be replied to: clicking the CONTINUE button under a margin note opens a thread in the panel to the right. All of the prompts are available in the source code of this blog on GitHub. They, along with the ideas that follow, are experiments aimed at bringing us closer to a version of using these tools that doesn’t feel as… complicated.

You are encouraged to read the pieces out of order, investing in the ones that interest you most. It is my hope that each may stand on its own.


the essays

on capability

What are LLMs currently good at, and currently bad at? Why LLMs handle code better than prose, and what that tells us about everything else.

on output

What does AI actually produce well, and where does it fall apart? When “it works” tells you nothing.

on delegation

When is delegating to AI a good idea, and when isn’t it? Translation vs. generation, and the speed of verification.

on dependency

Is dependency on AI bad? When it’s fine, when it’s dangerous, and how to tell whether you’d notice.

on thinking

Can AI help you think, or only help you produce? The difference between getting an answer and finding one.

on execution

What does AI handle well as a worker rather than a thinker? When checking costs more than doing.

on craft

What’s lost when you don’t make the thing yourself? Sometimes the doing is the meaning.

on modes

Why does the same tool feel different depending on what you’re using it for? Reasoning vs. execution, and what happens when the mode doesn’t match the task.

on structure

Why do these tools handle structured output differently than open prose? Formal grammars for everyone!

on the interface

What does the chat interface make easy, and what does it make hard? Annotation over conversation.

on building your own

What changes when you stop using AI tools and start building them? LLMs as type systems.

on agreement

Why does AI tend to agree with you? It’s not you, it’s them.

on other people’s use

Why does AI help some people and hurt others? What press releases hide.

on time

What about AI use compounds, and what’s just keeping up? The half-life of any given AI workflow.

on fit

Is this tool well-matched to how you work, or are you adapting yourself to it? LLMs for the neurodiverse.

on practice

What does day-to-day intentional AI use look like for you? You have a say in how this goes.

on contradiction

Where does your actual practice diverge from what you’d theoretically recommend? The rules you break and why.
