Write Insight Newsletter · 11 min read

Prompt Engineering Is Your Most Expensive Habit

Stop re-explaining your expertise to a machine that should already know

What skills are you building with AI these days?

When I became a professor, nobody handed me a manual for delegation.

I had no staff. No research coordinator. No admin support. Just me and a handful of grad students trying to run a lab on duct tape and good intentions. I did everything myself, from ethics board submissions to formatting reference lists to scheduling participant sessions.

Then the grants came in, and suddenly I could hire people. Great. One problem: I had to explain how I actually did things. Not the big-picture strategy. The tedious, repeatable, 21-step administrative workflows I'd been running on autopilot for years.

I spent weeks writing SOPs (standard operating procedures). Sitting down. Documenting every click, every decision tree, every edge case. It was one of the most cognitively demanding things I've done as a professor, and I say that as someone who's written 150+ peer-reviewed papers. Externalizing your own process forces you to confront how much of your expertise lives as muscle memory you've never articulated.

The reward came when my staff actually followed the SOPs. Consistently. Without me hovering. That feeling of watching someone execute your process correctly, without a single clarifying email, is worth every hour of documentation.

Now here's the thing. AI follows SOPs better than any hire you'll ever make. No off days. No misread instructions. No wrong formats. And with the recent addition of skills in Claude (and in Antigravity and Codex), I can describe specific parts of my workflow (administrative, editorial, analytical) directly to the AI. It runs them correctly every single time. I iterate on the instructions, and they get better with each use.

That's what this issue is about. The 15-minute version of what took me weeks to learn about delegation.

The skill-based system that turns 15 minutes of setup into permanent AI memory

You've spent 15 years building expertise that fits inside your skull and nowhere else. You can diagnose a client's real problem in 20 minutes. You can read a financial model and spot the 3 assumptions that will blow up by Q3. You can walk into a boardroom, hear 45 minutes of cross-talk, and distil it into the 2 decisions that actually matter.

Then you open Claude. Or ChatGPT. Or whatever AI tool earned a spot on your dock this month.

And you start from zero.

You re-explain your consulting framework. You re-paste your report structure. You re-describe the tone your clients expect, the severity ratings you use, the way you frame recommendations so a C-suite exec actually reads past the first paragraph. Fifteen minutes of context-loading before you get a single useful word out.

You do this every day. Sometimes twice.

Task switching has a measurable cognitive cost. Psychologists David Meyer, Joshua Rubinstein, and Jeffrey Evans ran a series of experiments showing that people lose significant time when shifting between tasks, and the losses compound with complexity. Meyer concluded that even brief mental blocks created by shifting between tasks can cost as much as 40% of someone's productive time. The APA's summary of this research is blunt: "the mind and brain were not designed for heavy-duty multitasking."

Now layer generative AI on top of that. A 2024 study published in Business Horizons found that effective use of GenAI depends on iterative prompt engineering, a back-and-forth refinement process where the human shapes the AI's output through successive rounds of feedback. The researchers frame this as human-AI knowledge co-construction, which means you're not typing a query and getting an answer. You're teaching the machine your standards, your context, your judgment, one correction at a time. That iterative loop is itself a form of hidden knowledge work. And every time you close the chat and open a new one, the co-constructed knowledge vanishes. You start the teaching cycle from scratch.

The productivity data on AI-assisted work makes this even clearer. Brynjolfsson, Li, and Raymond studied 5,172 customer support agents and found that access to an AI assistant increased productivity by 15% on average. The largest gains went to less experienced workers. The most experienced workers, the ones with the deepest expertise, saw the smallest improvements. One plausible explanation is that experts already have strong mental models, and the friction of re-contextualizing AI eats into the gains that less experienced workers get for free.

The industry has noticed. Anthropic built skills into Claude, and every major AI provider now lets you add them easily. Every AI product is racing to solve the cold-start problem, and that problem hits hardest exactly when you have the most expertise to re-explain.

Skills are one of the best versions of that fix. They externalize your repeated context into permanent, reusable files. They reduce re-entry work to zero. And they turn AI interactions from a fresh conversation every time into a reusable toolchain that remembers your methodology, your standards, and your corrections.

I tracked my own numbers. One hour per week re-contextualizing AI tools. 52 hours per year. That's more than a full work week spent telling a machine things it should already know.

You assumed the people who get better AI output are better at prompting. They aren't. They sidestepped prompting altogether. They built skills.

1. Understand what a skill actually is (and why it's the SOP your AI never had)

Every time you got great output from an AI chat, you built a skill. Then you closed the tab and threw it away.

A skill is a folder with 1 or more instruction files. No API. No code. No engineering degree. That's the whole thing.

Think about the best hire you ever made. You didn't re-explain your entire methodology every morning. You trained them once. Handed them an SOP. Gave them reference materials. They executed. When they made a mistake, you corrected them once. They never made it again.

Skills are the SOP for your AI, except your AI never has an off day, never forgets the correction, and can run the skill at 5 AM on a Sunday while you're asleep.

The concept works across Claude, Antigravity, Codex, Cursor, VS Code, and other AI editors adopting the same standard. Build 1 skill. Use it everywhere.

Three components. That's all.

  1. The skill.md file (the brain). Your step-by-step process instruction. A recipe card. Clean, focused, no clutter.
  2. Reference files (the context). Supporting documents that give the AI the knowledge it needs. Your report template, your severity rubric, your brand guidelines, example deliverables. These live alongside the skill.md in the same folder.
  3. Metadata (the label). A name and short description at the top of your skill.md. This is the only part the AI reads when deciding whether to activate the skill. The label on the folder spine.

One folder. One process file. A few reference docs. Done.
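To make the anatomy concrete, here's a sketch of what that folder might look like for a hypothetical executive-briefing skill. The file names are illustrative, not a required convention:

```
executive-briefing/            ← the skill folder
├── skill.md                   ← the brain: metadata + process steps
├── output-template.md         ← reference: your briefing structure
├── severity-rubric.md         ← reference: how you rate findings
└── example-briefing.md        ← reference: a past deliverable
```

Everything the AI needs to run the process lives in one place, and the skill.md points to its neighbours by name.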

2. Build your first skill in 15 minutes using work you've already done

Two methods. Starting with the easier one.

Method 1: Do it once, then codify

Do the task manually with the AI. Iterate until the output matches your standard. Then tell it to save the process as a reusable skill.

Say you're a consultant who writes executive briefings after every client engagement. You've been pasting your briefing format into Claude for months.

Open Claude. Do the briefing from scratch one more time. Be specific. Push back when the output isn't right. "The executive summary needs to lead with the business impact number, not the methodology." "Strip the jargon. My client's CEO has 90 seconds for this." "Add a risk section with probability ratings." Keep going until you'd sign off on it for a paying client.

Then say:

Turn this entire process into a reusable skill. Create a skill.md file
with the step-by-step instructions, save any reference files we used,
and register the skill in my catalogue.

Claude creates the folder, writes the process file, organises your reference materials, and registers the skill. Start a fresh chat. Trigger the skill by describing what you need. If it works, you're done. If not, refine the skill.md.

Method 2: Build from scratch

If you already know the exact workflow, skip the manual step and build the skill file directly. Open your skill.md and define 4 things.

First, a trigger: a plain-language description of when the skill should activate, specific enough that the AI knows exactly which requests match (e.g., "executive briefing, client summary, engagement recap").

Second, a goal: one sentence describing the end deliverable and its quality bar (e.g., "Generate a branded executive briefing with business impact analysis and strategic recommendations").

Third, a process: the numbered steps the AI should follow, in order, from reading reference files through drafting to final output. Be explicit about where to pause for your input and where to proceed autonomously.

Fourth, rules: the non-negotiable constraints that prevent the AI from cutting corners or drifting from your standard (e.g., "Every finding must include a business impact estimate," "Never skip reading the reference files").

Tell Claude to build the skill.md from those 4 components, create placeholder templates for any reference files you mentioned in the process steps, and register the skill in your catalogue. That's a functional starting point in under five minutes. You'll smooth it out over the next few uses.
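Assembled from those 4 components, a minimal skill.md might look like the sketch below. The name, triggers, steps, and rules are placeholders, and the metadata header follows the YAML-frontmatter convention Claude's skills use; check your tool's documentation for its exact format:

```markdown
---
name: executive-briefing
description: Use when asked for an executive briefing, client
  summary, or engagement recap.
---

## Goal
Generate a branded executive briefing with business impact
analysis and strategic recommendations.

## Process
1. Read output-template.md and the example files in this folder.
2. Draft the briefing. Pause and ask me to confirm the impact
   numbers before continuing.
3. Apply the template formatting and produce the final document.

## Rules
- Every finding must include a business impact estimate.
- Never skip reading the reference files.
```

The description doubles as the trigger, which is why it lists the exact phrases a matching request would contain.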

3. Build the 4 reference files that make every skill 10 times sharper

Before you go skill-crazy, build these four files first. You'll reuse them across everything.

1. Professional context (professional-context.md)

Your practice, your methodology, your principles. How you define quality work. What your clients or stakeholders value in a deliverable.

Start with who you are and what you do: your domain, your experience level, the type of clients or stakeholders you serve. Then describe how you define quality work, the specific standards a deliverable has to meet before you'd put your name on it. Finish with 2 to 3 operating principles that guide your professional judgment, the kind of rules you'd give a senior hire on their first day so they understand how you think. Keep the whole thing to 2 or 3 paragraphs. Specific enough that the AI can make judgment calls on your behalf. General enough that it applies across multiple skills.
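A sketch of what that file could contain, with every detail invented for illustration:

```markdown
# Professional Context

I'm an independent strategy consultant with 15 years advising
mid-market CEOs on operational turnarounds. My clients value
brevity and numbers over narrative.

Quality bar: every deliverable leads with the business impact,
cites the data behind each claim, and fits on two pages.

Operating principles:
- Recommend at most 3 actions per deliverable.
- Flag uncertainty explicitly instead of hedging the whole document.
```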

2. Audience map (audience-map.md)

Who reads your work, who acts on it, and what they need from it.

List every person or group that regularly receives your output. For each one, write down their role, what they care about most, what decisions they make based on your work, and the format they prefer. A CEO scanning for a go/no-go decision needs a different document than a technical lead prioritising next quarter's roadmap. Rank them by how often they see your work: primary audience first, then secondary, then anyone else who occasionally reviews your output. The AI uses this map to adjust tone, depth, and structure depending on who the deliverable is for.
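An invented example of the shape such a map might take:

```markdown
# Audience Map

## Primary: Client CEO
- Cares about: go/no-go decisions, cost and revenue impact
- Decides: whether to fund the recommendation
- Prefers: one-page summary, numbers up front

## Secondary: Operations lead
- Cares about: implementation detail, sequencing, risk
- Decides: next quarter's roadmap priorities
- Prefers: full report with appendices
```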

3. Output template (output-template.md)

The structure your finished deliverable should follow every time.

Describe the sections your output needs, in order, and what belongs in each one. Include any formatting rules you care about: heading style, how numbers are displayed, file naming conventions, required sections versus optional ones. The goal is a reusable skeleton that the AI fills in with the right content every time. If you already have a PDF, Word document, or other file that shows the exact structure you want, drop it into the skill folder as a reference file and point the skill.md to read it. The AI will reverse-engineer the layout, section order, and formatting conventions from that document. Sometimes an existing output is the fastest template you can give it.
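For illustration, a template file for the executive-briefing example might read like this (section names, limits, and naming convention all hypothetical):

```markdown
# Output Template: Executive Briefing

1. Executive summary   (max 150 words, leads with the impact number)
2. Key findings        (max 3, each with a severity rating)
3. Recommendations     (max 3, each with an owner and timeline)
4. Risks               (probability rating: low / medium / high)

Formatting rules:
- Headings in sentence case; numbers as digits, not words.
- Required sections: 1-3. Optional: 4, if risks were discussed.
```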

4. Good output examples

Past deliverables that represent your standard. The AI reverse-engineers patterns from examples better than from descriptions. Full stop.

Pull two to three of your strongest past deliverables and save each one as a reference file in your skill folder. After 3 examples, the AI starts matching your standards and style.

4. Use progressive disclosure so your AI loads only what it needs

Here's the mechanism that makes skills work at scale without drowning the AI in context.

Jakob Nielsen at Nielsen Norman Group formalized progressive disclosure as a core interaction design principle: show only the label until someone needs the full manual. Don't load what you don't need yet.

When you start a new chat, the AI doesn't read every skill file you've ever built. It reads only the metadata: the name and 1-line description at the top of each skill.md.

That's it. 50 skills. A few hundred words in memory.

The full instruction file loads only when you say something that matches the trigger description. Say "generate an executive briefing" and the briefing skill activates. Say "draft a thought leadership article" and the article skill fires. The AI pulls the relevant skill.md into active context, reads the reference files, and executes.

Anthropic published a detailed engineering guide on this exact principle. They call it context engineering: the practice of curating everything the AI sees before it responds, rather than cramming instructions into a single prompt. The shift from prompt engineering to context engineering is the difference between giving someone a 40-page manual every morning versus giving them a filing cabinet and letting them pull the right folder.

This is why the trigger description matters. A vague description means the AI doesn't know when to activate. A specific description fires reliably.

The last step is registration. Every skill needs an entry in your master claude.md file (the root-level file Claude reads at the start of every session). Each entry is just a name, a 1-line description, and the trigger phrases. When you finish building a skill, prompt Claude to add it:

Add this skill to my claude.md catalogue with its name, description, and trigger phrases.

Claude appends the entry. Every new skill, 1 new line in the catalogue. The AI reads this file at the start of every session and knows which skills are available without loading any of the full instruction files.
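For illustration, a catalogue in claude.md can be as small as this (skill names and trigger phrases are hypothetical):

```markdown
## Skills catalogue

- executive-briefing: generates client-ready executive briefings.
  Triggers: "executive briefing", "client summary", "engagement recap".
- thought-leadership: drafts long-form articles from rough notes.
  Triggers: "thought leadership", "article draft".
```

One line of metadata per skill is all that sits in memory until a trigger phrase fires; the full skill.md loads only then.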

5. Iterate until the skill works better than your best junior hire

Your first version won't be perfect. By design. Skills are living documents.

The AI skips steps or does them out of order. Make the process flow clearer. Add explicit "do not proceed until this step is complete" language.

The output feels generic. Don't bolt more text onto the skill.md. Create a new reference file with the missing context and point the skill to read it at the relevant step.

The AI keeps making the same stylistic mistake. Add a specific rule. "Never list more than 3 strategic recommendations per briefing because executives stop reading after 3." Precise rules get followed. Vague rules get ignored.

You want the skill to learn automatically. Add 2 self-improvement rules:

  1. "If I correct a behaviour, update the rules section with the correction."
  2. "If I approve a final output, save it as a reference example file."

Over time, the skill accumulates corrections and good examples. 10 uses in, it produces better output than you could get from a fresh chat in 30 minutes of prompting. The compound effect is real.

Here's a prompt you can use right now to audit an existing skill:

Review my [skill-name] skill. Check for:
1. Any steps that could be misinterpreted or skipped.
2. Rules that are too vague to enforce consistently.
3. Missing reference files that would improve output quality.
4. Trigger descriptions that might conflict with other skills.
Suggest specific improvements for each issue found.

MIT Sloan's Miro Kazakoff studies why experts struggle to communicate their own knowledge. He calls it the curse of knowledge. The deeper your knowledge, the harder it becomes to articulate what you know, because you've deleted the memory of what it felt like not to know it. I spent weeks writing SOPs for human staff before I understood this. The hard part was never the execution. It was the externalisation, forcing yourself to articulate what you do, step by step, decision by decision, edge case by edge case. Skills distill that weeks-long process into 15 minutes.

One group opens a new chat every morning, re-pastes their framework, re-explains their methodology, and calls it using AI. Fifty-two hours a year of re-explaining yourself to a machine. The other group spends 15 minutes building a skill and never re-explains anything again. Their AI remembers their methodology, their standards, their corrections. Permanently. For a setup that takes less time than making coffee.

Build the skill. Let the AI remember for you.

Bonus

The Write Insight subscribers with an AI Research Stack premium account this week also get 2 print-ready PDF worksheets (a one-page Skill Priority Triage Card and a multi-page Complete Skill Building Worksheet), 3 AI prompts (generate branded executive briefings from client engagement data, draft long-form thought leadership pieces from rough notes or talking points, and build client-ready proposals from discovery call notes), 5 curated resources on agentic AI workflows and skill-based memory systems, and a full 5-phase Build Your First AI Skill checklist.
