Why My AI Prompts Kept Failing (And What I Did About It)

I thought the problem was the AI. Too dumb. Not enough context. Wrong tool. Then I started paying attention to what I was actually typing.

For a long time, I thought I was just bad at this.

I would open Claude or Cursor, type what I wanted, and get something back that was almost right. Close enough to be disappointing. Far enough from right that I could not use it.

I kept thinking the problem was the AI. Too dumb. Not enough context. Wrong tool.

Then I started paying attention to what I was actually typing.

"Build me a dashboard." That was my request. That is it.

I would not hire a contractor and tell them "build me a dashboard." I would give them a spec. I would tell them what data goes on it, what the layout looks like, where the data comes from, what success looks like.

I was applying human social norms — the "you know what I mean" shortcut — to a machine that literally does not know what I mean.

Once I figured this out, I started writing prompts differently. Not longer — more specific. There is a difference.

The things I started including that I had been leaving out:

The actual output. Not "build me a form" but "build me an email capture form with one field, a submit button, and an inline success message. When submitted, POST to /api/waitlist and store the email in a Supabase table called waitlist_emails." If I could not describe the output specifically, I did not actually know what I wanted yet. That was useful to figure out before I typed anything.

What I already had. Cursor and Claude have no idea what I built last week. They do not know my stack. They do not know my naming conventions. If I do not tell them "I am using Next.js 14, Tailwind, and Supabase — the client is at lib/supabaseClient.ts" they will make assumptions. The assumptions are usually wrong.

What not to do. This one felt weird at first — why would I tell the AI what not to do? Because AI is helpful and will add things I did not ask for. "Do not install any new packages without asking" and "do not change any files outside of what I mentioned" became my two most-used instructions.

What success looks like. "I can enter an email, click submit, see the confirmation message, and find the row in Supabase." When I defined success before I asked, the AI had something to aim at. When I did not define it, the AI declared victory whenever the code compiled.
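To make the four ingredients concrete, here is one way to sketch them as a little template. This is purely illustrative — the interface, field names, and function are mine, not from any real tool — but it shows how the pieces fit together into one prompt:

```typescript
// Illustrative sketch: the four prompt ingredients as a typed template.
// All names here are made up for this example.
interface PromptSpec {
  output: string;        // the actual output, described concretely
  context: string;       // what you already have: stack, paths, conventions
  constraints: string[]; // what NOT to do
  success: string;       // what "done" looks like
}

function buildPrompt(spec: PromptSpec): string {
  return [
    `Task: ${spec.output}`,
    `Context: ${spec.context}`,
    `Constraints:\n${spec.constraints.map((c) => `- ${c}`).join("\n")}`,
    `Success criteria: ${spec.success}`,
  ].join("\n\n");
}

// The email-capture example from above, assembled:
const prompt = buildPrompt({
  output:
    "Build an email capture form with one field, a submit button, and an " +
    "inline success message. On submit, POST to /api/waitlist and store " +
    "the email in a Supabase table called waitlist_emails.",
  context:
    "Next.js 14, Tailwind, and Supabase; the client is at lib/supabaseClient.ts.",
  constraints: [
    "Do not install any new packages without asking.",
    "Do not change any files outside of what I mentioned.",
  ],
  success:
    "I can enter an email, click submit, see the confirmation message, " +
    "and find the row in Supabase.",
});
console.log(prompt);
```

The point is not the code — it is that every field has to be filled in before the prompt exists. If one of them is hard to write, that is the part I did not actually understand yet.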

My first-attempt hit rate went from maybe four out of ten to about eight or nine out of ten.

I built Briefli because I wanted a shortcut to this. You describe what you want to build, it asks you these questions, it produces the prompt. The four or five minutes of up-front thinking, automated.

Try it free at briefli.io → If it saves you an afternoon of correction loops, it was worth two minutes.
