TL;DR
- Expert knowledge extraction helps experts turn tacit knowledge into clear writing before they draft.
- The problem is usually extraction, not discipline, motivation, or another note-taking app.
- AI works best as an interviewer, observer, transcript analyst, and case sorter.
- The strongest method combines familiar task observation, structured interviews, critical incidents, and reusable knowledge cards.
- Experts should extract first, then edit. Drafting too early buries the judgment that makes the idea worth reading.
What is expert knowledge extraction?
Expert knowledge extraction is the process of pulling hidden judgment, examples, rules of thumb, and decision cues out of an expert’s work before turning them into writing. AI can help when it interviews the expert, analyzes real work, structures cases, and converts raw notes into reusable knowledge cards.
Every week, I sit down to write my LinkedIn posts with 20 years of research behind me, hundreds of customer conversations in my head, and a stack of ideas worth sharing.
Then the dark screen wins.
(I like dark mode, ok?)
Not once in a while. Every week. This is how it always starts.
I will not call it writer’s block.
That phrase makes it sound romantic, like I am sitting in a French café, nibbling on my little baguette, lost in thought like a tortured soul, waiting for my muse to show up in a pretty scarf. My problem is less poetic. After 10 or 20 years of getting good at something, your ideas stop arriving as fleshed-out sentences.
The problem is that I know too much.
Sounds like a humble brag. But it’s not. It is the root of most of my writing problems, and it gets worse the better you get. Malcolm Gladwell would agree.
Why is expertise a writing trap?
Expertise becomes a writing trap because the better you get, the less visible your own thinking becomes. You stop noticing the assumptions, shortcuts, and pattern recognition that make your judgment useful, so the reader gets conclusions without the path that made them credible.
When you set out (like the little Indiana Jones that you are) to become exceptional at something, no one warns you about the cost of being hypervaluable as an expert. Depth makes you useful. It can also make you miserable at explaining what you know.
You’re not stuck because you’re stupid. Quite the opposite. The deeper you dig your vampire fangs into the neck of your beautiful domain, Edward, the more your assumptions fade into black. You stop seeing the system—the paradigm that you’ve slobbered up like lukewarm milk from science mama’s teats—because you have become part of the system yourself. You’re not Alex Hormozi speaking in quotable tweets at podcast interviews. You don’t have clean little sentences waiting in the idea pantry. Your knowledge lives in your timing, pattern recognition, and gut-level judgment.
That is why you cannot just write it down. Much of what you know stopped being verbal years ago.
Knowledge management scholar Ikujiro Nonaka gave this phenomenon a name: tacit knowledge. Tacit knowledge is experience soaked in judgment. It is the surgeon who spots trouble before the monitors confirm it. The consultant who reads a room in 30 seconds. The expert who sees the flaw in a methodology while a junior researcher keeps rereading the paragraph.
You cannot squash tacit knowledge into a LinkedIn post. It doesn’t work like that.
Nonaka’s SECI model describes how knowledge moves from tacit to explicit through externalization, meaning metaphors, analogies, and dialogue pull embodied judgment into words. Trying harder at a blank page will not do that. You need a way to drag the knowledge out before you can shape it.
Most experts solve the wrong problem. They switch note-taking apps. Buy more AI tokens. Try a new morning routine. Blame discipline. The actual problem is extraction.
The apps are innocent. You are the bottleneck. I mean that with affection. Mostly.
Why do your words lag behind as your skill grows?
Your words lag behind your skill because expertise flattens knowledge into instinct. Early in your career, you explain every step because you are still learning it. Later, the steps disappear into timing, pattern recognition, and judgment that feel obvious only to you.
Early in a career, your knowledge still has grip. You learned it last week. You tested it yesterday. You explained it to anyone who would sit still long enough. Moving from novice to expert forces articulation because people keep asking you to justify your stuff.
Then you become senior.
People stop asking you to explain yourself and start asking you for advice. Simple decisions arrive like Amazon same-day delivery used to. The judgment feels instant. Your knowledge sinks into your body, and the trench between what you know and what you can write grows faster than your expertise.
This means the smarter you get, the worse the writing problem becomes. The more you deserve an audience, the harder that audience is to reach.
Expertise hides its own guts. The more you have, the less visible it becomes to you and to the people you want to reach.
Then someone asks you to write an article. Or build a course. Or share your insights on LinkedIn.
You turn into Jon Snow. Noble. Confused. Somehow still in charge of these shenanigans.
Camerer, Loewenstein, and Weber coined the term the curse of knowledge in a 1989 paper on asymmetric information. Better-informed people could not ignore what they knew, even when it cost them. Fischhoff mapped the same concept in 1975 through hindsight bias. Once people knew an outcome, they overestimated what they would have predicted before they knew it. What they knew changed what they thought could have happened.
Nickerson reviewed evidence in 1999 that people use their own knowledge as the starting point for estimating what other people know, then correct too little. Hinds gave the expert version some bite that same year in Journal of Experimental Psychology: Applied. Experts guessed how long beginners would take. They did worse than people with some experience did. Debiasing attempts did not save them.
Elizabeth Newton’s 1990 Stanford dissertation made the problem concrete. Participants tapped rhythms of well-known songs and predicted listeners would identify them 50% of the time. The actual rate was 2.5%. The tappers could hear the song in their heads. The listeners heard random fingers doing a tap dance.
You are that tapper. Sorry.
Your audience hears random knocks until you translate your knowledge for them. But this is not a skill you can just summon like a Patronus Charm. You have to construct it slowly, Hermione.
The blank page gets blamed too easily here. Really, this is a knockout match between tacit knowledge and the language you use. Joining that fight proves your knowledge has depth. You want to win it the way Conor McGregor beat Jose Aldo: a 13-second knockout.
Why won’t writing more fix this?
Writing more will not fix this when the real bottleneck is how you extract your knowledge. A habit can produce more words, but it cannot automatically recover the hidden examples, decision cues, failed cases, and assumptions that make expert writing useful.
Productivity advice loves repetition. Write every day. Build the habit. Push through resistance.
It’s true, but it only works when consistency is what actually constrains you. Tacit knowledge is a different beast of a problem. Telling an expert to write more is like telling Terry Fox to sprint faster with one leg. Effort is not your missing ingredient. Knowing how to extract your knowledge is.
If you’ve ever forced content onto a page without a system, you’ve likely produced one of these results:
- Abstract writing that is technically correct and unreadable. (AI won’t help much.)
- Beginner-level content that lacks some bite.
- Nothing.
Think of it as Eminem’s 8 Mile freeze, except you are alone at your laptop at midnight. Mom didn’t make spaghetti and the hostile crowd hasn’t even arrived yet. You still cannot start.
Change how you get things out of your brain and onto the page before you judge what’s even there.
Here are eight simple methods to help you do that. You don’t need all of them. Just pick two or three that match how your ideas are stuck.
How do you extract first and edit later?
Extracting first means capturing the raw judgment before trying to shape the prose. Use AI to interview you, observe your work, analyze transcripts, compare cases, and turn messy expertise into reusable material. Editing comes after the thinking has been pulled into view.
Expert knowledge does not come out through one magic prompt. Researchers usually extract it by combining 3 families of methods. They watch the familiar task. They interview the expert. They use special tasks that make hidden judgment visible.
That sounds dry. It is also exactly what AI makes easier now.
1. Brain dump and document capture
This is the baseline because every other method needs raw material to work with.
Set a 10-minute timer and write everything down. Worries. Half-formed ideas. Client patterns. Research findings. Concepts you keep circling. Unedited. Unstructured. Messy. Capture first. Organize later. Alternatively, voice-record or use speech-to-text for this part, but I highly recommend going through the agony of writing it down by hand.
Then add the material experts usually forget to include because it feels too obvious. Slides. Emails. Meeting notes. Old proposals. Client audits. Screenshots. Loom transcripts. The documents around your work often contain the tricks your brain doesn’t even notice anymore.
Save the pile to a local folder and prompt via either Codex, Claude Co-work/Code, or Antigravity:
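The exact wording matters less than the job you give the tool. An illustrative version of such a prompt (adapt it to your folder and stack) looks like this:

```
You have access to a folder of my work documents: slides, emails,
meeting notes, old proposals, audits, and transcripts. Read them and:
1. List the recurring judgments, rules of thumb, and decision cues.
2. Flag anything I treat as obvious that a newcomer would not know.
3. Produce a first-pass knowledge base: one card per insight, with
   the source document noted.
```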
This borrows from Hoffman’s point about documentation analysis. Manuals, notes, and old artifacts can produce a first-pass knowledge base before any interview begins.
The failure pattern is capture without review. Ideas pile up and rot. A 15 to 30 minute weekly review turns the dump into an asset. You scan, promote, and discard. Without that review, your capture system becomes an archaeological dig. You will find notes from 2019 and have no memory of writing them.
The Raiders of the Lost Ark warehouse, basically. Full of important things. None of them findable when you need them.
Use this as your daily baseline. It is your foundation.
2. Think-aloud capture while doing the work
Voice capture works best when you use it during the actual task, not 3 days later when your brain has already cleaned up the crime scene.
Open a voice recorder (or even better an AI speech-to-text app like superwhisper, WisprFlow, or Monologue) while you review a draft, work through a client problem, plan a workshop, annotate a paper, or redesign an offer. Say what you notice. Say what you reject. Say which detail made you suspicious. Say the sentence you almost wrote and why you cut it.
Then upload the transcript to AI and prompt:
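Something in this spirit works; the phrasing below is illustrative, not a fixed recipe:

```
Here is a transcript of me thinking aloud while doing real work.
Extract every judgment call I made: what I noticed, what I rejected,
what made me suspicious, and what I almost did but didn't.
For each one, write out the implicit rule I seem to be following.
```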
This adapts protocol analysis for normal people with jobs. The point is not to sound polished, but to catch the judgment while you have momentum.
Use this when the knowledge lives inside a task you perform on autopilot.
3. Observe your familiar task with AI
Hoffman separates familiar task analysis from interviews for a reason. Experts often explain the official version of their work. Watching the work exposes the real sequence.
Record yourself doing a small real task. Review a client page. Build an outline. Fix a broken paragraph. Make a go or no-go call on an idea. Use screen recording if the work is visual. Use audio if the work is mostly thinking.
Then ask AI:
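An illustrative prompt for this step (adjust it to whatever you recorded):

```
Here is a recording of me doing a small real task.
Reconstruct the actual sequence of steps I took, not the official
version. Where did I skip ahead, backtrack, or decide by feel?
List every point where my action implies a rule I never stated.
```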
This is real task analysis. No performance required.
Use this when you know the answer, but you can't explain how you figured it out.
4. Build a bank of test cases and tough cases
Hoffman points out that experts reveal more when they work through test cases, especially difficult or unusual ones. Easy cases produce slogans. Tough cases produce thinking.
Create a small bank of examples:
- 3 routine cases
- 3 messy cases
- 3 failed cases
- 3 cases where your standard advice did not work
Then ask AI to interview you through them one at a time:
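One illustrative way to set up that interview:

```
Here is one case from my bank. Interview me about it, one question
at a time. Ask what I saw first, what I expected, where my standard
advice broke down, and what I would watch for next time.
Do not summarize until I say the case is done.
```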
Failed cases aren't there to shame you. They're where your argument gets tested.
Use this when your advice sounds too clean because you have removed the pain that produced it.
5. Use a structured AI interview
Most people use AI to generate content. Use it to interview you.
Unstructured interviews ramble. They can build rapport and warm up the domain, but they are weak at extracting knowledge. Structure makes the interview do work.
Prompt AI like this:
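An illustrative structure for that prompt (fill in your own topic):

```
Act as a structured interviewer. Topic: [your topic].
Ask me one question at a time. Start broad, then drill into
specifics: examples, edge cases, failures, and decision cues.
Challenge vague answers. After 10 questions, list the concrete,
usable claims I made and the gaps that still need evidence.
```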
Add Feynman mode at the end:
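For example, with wording along these lines:

```
Now explain my answers back to me as if I were a smart 12-year-old.
Flag every place where you had to guess, simplify, or fill a gap.
Those are the spots I still need to extract.
```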
Use this when the expertise is deep, specific, and hard to pull out alone. A 45-minute AI interview can surface more usable material than 10 hours of staring at a blank document.
6. Reconstruct critical incidents
A critical incident is a real event where something changed. For example, the client finally understood, the case broke down, the system failed, the advice worked under pressure, the audience reacted in a way you did not predict.
This works because experts remember judgment through cases. Your best reasoning often shows up in one specific incident where something went wrong, and that moment revealed the pattern you rely on.
Here's an AI prompt for it:
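An illustrative version might read:

```
Walk me through a critical incident, step by step. Ask me:
What was the situation? What did I notice first? What did I decide,
and what almost made me decide differently? What changed afterward?
Then state the lesson the incident actually demonstrates.
```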
This adapts event recall and the Critical Decision Method for writing. The abstract topic gives you the TED Talk version. The incident gives you the usable material.
Use this when you need a story, a lesson, or a better argument.
7. Sort, rate, and build the decision tree
Some knowledge only appears when you force comparison.
Take 20 to 40 raw notes, examples, claims, or stories. Ask AI to turn them into cards. Then sort them into groups. Rate each card from 1 to 5 for audience pain, originality, evidence strength, and ease of explanation.
Then ask AI:
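For example, a prompt in this spirit:

```
Here are my rated and sorted cards. Build a decision tree from them:
What question do I ask first? Which answers branch where?
Where do my ratings disagree with my groupings? Show the logic as
an if/then outline I can turn into an article structure.
```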
This adapts judgment tasks, scaling tasks, and decision analysis. Sorting exposes categories. Rating exposes priority. Decision trees expose the logic your draft keeps hiding from you.
Use this when the idea feels tangled or your outline keeps turning into soup.
8. Progressive summarization into a usable knowledge base
Tiago Forte’s progressive summarization solves the second-order problem. Capturing ideas is easy. Returning 6 months later and finding the usable signal is where the bodies are buried.
Use 4 layers.
- Layer 0 = the raw capture.
- Layer 1 = the passages that matter on first review.
- Layer 2 = the strongest material from those passages.
- Layer 3 = a short executive summary in your own words at the top of the note.
Now add an AI layer to it.
Ask:
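One illustrative way to phrase it:

```
Here are my Layer 0 notes. Propose the Layer 1 highlights, then the
Layer 2 strongest passages, then a 3-sentence Layer 3 summary in my
voice. Mark anything you promoted that I should double-check.
```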
Most experts have piles of notes they never use because sorting them takes too much time. Progressive summarization cuts that effort before your future self is tired, rushed, and trying to convince themselves that 400 notes is a system.
Use this when you have too many notes and no fast way to turn them into writing.
How do you build an expert extraction system?
Build an expert extraction system so every idea has somewhere to go. Capture fast. Review weekly. Run a simple extraction session on a set schedule. The system has one job: keep your expert judgment from fading before you can use it in your writing.
Ideas do not arrive as a tidy backlog. They keep coming. Research findings. Client insights. Shower thoughts. Conversations that wreck your tidy little theory before you buy lunch for the team.
You need 4 pieces in place:
- One inbox. All captures go into one place. Voice memos, text notes, whiteboard photos, scraps from meetings. More inboxes create search overhead.
- Fast capture. The tool must be faster than the idea can fade. Voice memo. Notes app. Index card. Format matters less than speed. Anything slower than 10 seconds invites judgment, and judgment kills capture.
- Weekly review. Put 15 to 30 minutes on the calendar. Scan the inbox. Distill the useful bits. Delete or archive the rest. Turn captured scraps into assets.
- Extraction rituals. Schedule 30 to 60 minutes weekly or every 2 weeks for the Feynman technique, AI interviews, or Socratic questioning. Turn raw material into artifacts you can use.
You know this feeling too well. Two 30-minute extraction sessions per week beat 10 hours of procrastination and suffering because they attack your bottleneck. Your problem is often just a missing review cycle.
AI has made conversion faster. Raw capture can become a draft in minutes. The remaining friction is getting the idea out of your head before you forget it.
What is this really about?
This is the gap between what experts know and what they can explain on demand. The blank page looks like the enemy. Deep knowledge often loses its words before the expert tries to write.
I wrote this because I needed it myself.
I still sit down every week with ideas in my head and nothing on the page. I struggle. And I have many systems. The shower thoughts still wash off before I reach my laptop. The blank screen still has its vampire fangs deep in my neck.
My mental model changed when I named the process. Years of practice shape expertise until the words fall off. Laziness is not the culprit here. Imposter syndrome is not either, even though it loves claiming credit for the damage. This is just how expertise works.
These methods interrupt the squeeze. They force tacit knowledge back toward language because experts need different tools than beginners.
Your difficulty putting ideas on paper does not prove you have nothing to say. It proves you have been using the wrong tool. A surgeon does not use a screwdriver. An expert does not use a blank document and willpower.
You are using the wrong tool.
Experts often assume visible creators know more, think with more clarity, or have some gift for effortless output. In my experience, they have a system. Usually a boring one. The advantage is their setup.
You have 20 years of insight locked inside you. The world has not heard it because no one gave you the right extraction tools.
That ends now.
Build one inbox. Schedule one weekly review. Run one AI interview on one topic you keep avoiding.
You know more than you can say.
Build the machine that gets it out.
That is your edge.
Author Bio
Lennart Nacke, PhD, is Research Chair at the University of Waterloo and a full professor in human-computer interaction (HCI), gamification, and AI strategy for academics and founders. He has supervised 20 PhD students, sat on dozens of hiring panels, and published over 300 peer-reviewed papers with more than 45,000 citations. He helps serious experts build research-grade writing systems that make them known, trusted, and chosen, without the content hamster wheel, hype, or hustle.