I asked an AI to do something straightforward today.

I had a document. My colleague had a document. Same subject, developed independently. I wanted to see the differences so we could combine them. My colleague had a Google Doc. I had an HTML page I'd built with an AI coding tool. I attached their file, dropped in the URL to my page, and asked Gemini to surface the diff.

It came back confident. Thorough, even. A clean summary of their document on one side. And on the other side: an analysis of a three-year-old team matrix framework someone had shared with me and I'd never looked at.

I wouldn't have known what happened if I hadn't checked the thinking output. Gemini couldn't parse the URL. It couldn't read an HTML page the way it could read a Google Doc. So instead of saying that, it went looking in my Drive for something with a similar name. Found something. Used it. Never mentioned the substitution.

The output looked like an answer. It was not an answer.

I scrapped the chat and started over. This time I attached the Google Doc and pasted the full contents of my HTML file directly into the prompt. No ambiguity about what either document was. Worked immediately.

The mechanical fix was easy. The actual failure wasn't Gemini's.

My brief assumed shared context that wasn't there. I knew what that page was. I knew why it mattered. I knew exactly what I meant by "the same thing." Gemini didn't. When the URL failed, there was nothing in my prompt to stop it from filling the gap with whatever it could find. So it found something. And kept going.

I gave it room to be wrong. It used the room.


This is the failure pattern underneath most prompting frustration. Not hallucination. Not technical limitation. A human assuming shared context that was never established, paired with a model that will not stop and say "I'm not sure I have the right thing here." It will proceed. It will produce something. It will look certain.

A good collaborator covers for a thin brief. They ask the clarifying question. They flag the gap. They bring their own context to the table.

AI doesn't do that. Which means the thin brief now produces a confidently wrong answer you have to throw away.

This is not an AI problem. This is a brief problem that AI made impossible to hide.


Law students don't learn to write briefs by writing briefs. They learn by reading cases, dissecting arguments, tracing why an argument succeeded or failed in the context of what the court decided.

Design just became that discipline.

For most of this profession's history, the comp was the artifact under review. A thin brief and a talented designer could still land in the right place because the designer brought judgment and instinct to the gap between what you asked for and what you needed. The brief was scaffolding. The work was what mattered.

That's no longer true. AI executes faithfully against whatever it's given. A precise brief constrains the output before a single pixel exists. A vague brief produces something plausible-looking that solves the wrong problem, with no flag that anything went wrong.

The brief is now the artifact. What you hand over determines what you get back.


I remember a graphic design class. End of project. Everyone mounted their work on black foam core boards and hung them around the room. You walked the walls one piece at a time. One project was so far outside the brief that the professor walked up, ripped it off the wall, and threw it in the trash. No extended discussion. The comp had failed the brief so completely there was nothing to say.

That's comp-forward training at its most literal. The output was the final judgment. The brief existed, but the comp absorbed the verdict. Nobody caught the failure upstream because upstream wasn't the checkpoint.

The Bauhaus tried to change that a century ago. Principles over mastery, inquiry over rote execution. It worked for a generation, then collapsed back into trade school because industry wanted executors. Executors were cheaper than thinkers. So that's what got hired.

AI just removed that choice. You can't hire executors anymore. The execution is free. The Bauhaus's project becomes viable again, not because the field finally demanded it, but because the economics made it inevitable.

Craft didn't go away. The seat of craft moved. It moved upstream, into the brief, where analysis and judgment live. Design research has always known that's where the real work happens. The brief is just where everyone else finally has to show up too.


I started this piece trying to find an anecdote from my notes. My AI collaborator went spelunking through months of documentation before I realized the example I needed had happened two hours earlier, in a conversation with a colleague where we were just trying to do the work.

That was also a brief problem. I assumed shared context. I got a thorough answer to the wrong question.

The brief is the argument. Writing it well is the work. And the faster you learn to trace a bad output back to where your brief left a gap, the faster you stop blaming the tool and start doing the actual job.