Write Insight Newsletter · 16 min read

Your Proposal Didn't Fail. Your Writing Did.

30 Minutes. That's All Your Reviewer Will Give You.

Science always finds a way…

You spent 59 days on your research proposal. The reviewer spent 30 minutes. That’s half a lunch break.

The rejection email arrives in four sentences.

No detailed feedback. No explanation of what went wrong. Just: “We regret to inform you that your proposal was not selected for funding at this time.”

And the worst part isn’t the rejection. It’s that you don’t know WHY.

My early proposals were thorough. They were also forgettable.

Here’s what nobody tells you about research proposals. The problem is almost never your research idea. A 2018 study by Elizabeth Pier and colleagues at the University of Wisconsin-Madison had 43 reviewers independently score the same 25 NIH grant applications. The agreement between reviewers was essentially zero. The outcome depended more on which reviewer happened to read your proposal than on the research you proposed.

The ideas didn’t fail. Never. The proposals did.

And most proposal-writing advice makes this worse, not better.

It teaches you seven sections to fill out like a form. Title, background, literature review, methods, timeline, resources, bibliography. Check, check, check.

But a winning proposal is not a completed form. It’s an argument. Proposals that read like completed forms get treated like completed forms. Filed away.

Every section exists to answer one question the reviewer is silently asking themselves. If you don’t know what that question is, you’re building IKEA furniture in the dark. Scary AF.

This is the mistake that’s everywhere. It’s the mistake I made for years. My early proposals read like dry textbook chapters. Thorough, properly cited, and utterly forgettable. They had no juice. It wasn’t until I started treating proposals as persuasion documents that anything changed for me.

And there is a perfect story that shows exactly why this matters.

In 1964, a chemist named Stephanie Kwolek was working at DuPont’s research lab in Wilmington, Delaware. Her job was to find a lightweight polymer strong enough to reinforce car tires.

By 1965, she had created something unusual. A polymer solution that broke every rule. Standard polymer solutions were thick as molasses and clear. Hers was thin as water, cloudy like buttermilk, and shimmered when she stirred it.

Any other researcher would have poured it down the drain.

Kwolek didn’t.

She brought it to Charles Smullen, the technician who operated the filament spinner. Smullen took one look and refused to run it. The solution would clog the holes. It would destroy his equipment. This was not what a polymer solution was supposed to look like.

Kwolek came back the next day. And the day after that. She filtered the solution to prove it contained no particles. She argued. She insisted. Days passed. She had grit.

Finally, in her own words: “I wore him down.”

They spun it. It spun beautifully.

The fiber was five times stronger than steel by weight. The results were so extraordinary Kwolek didn’t believe them herself. She asked colleagues to re-run the tests. The numbers held.

She had invented Kevlar.

She had the literal cure for bullets in a beaker. And she almost couldn’t get anyone to test it. Not because the science was bad. Because the presentation didn’t match what the gatekeeper expected.

That’s the same game you’re playing every time you submit a proposal.

Your reviewer has 30 to 50 other proposals to read. They’re tired, overcommitted, and 63 percent of their own research time is already consumed by administrative tasks. They will give your proposal maybe 30 minutes of genuine attention. That’s it.

In those 30 minutes, everything you’ve worked on for months gets reduced to a single two-part question:

Does this person understand what they’re trying to do, and do I believe they can pull it off?

What follows are the seven moves that changed everything for me. Not seven form sections to fill out. Seven strategic decisions that turn a pile of good ideas into a proposal that makes reviewers want to throw money at you (and what recently won me half a million dollars in grant money).

Get real about the problem you’re solving

Your proposal lives or dies in the first page.

Not the methodology. Not the timeline. The problem statement.

Here’s why most problem statements fail. They describe a topic instead of a tension. You want friction. You want traction. You want what Steven Pressfield calls resistance (not just in yourself, but in the world).

“This research will investigate the relationship between X and Y” is a topic. It tells the reviewer what general area you’ll be working in. It does not tell them why they should care, why it matters right now, or what will break real soon if nobody steps up and does this work.

A problem statement needs to create discomfort. The reviewer should finish it and think: “Yeah, that IS a problem. Someone should fix that.”

Imagine you’re at a dinner party explaining your research to a smart friend who works in a completely different field. Not your committee. Not your supervisor. Someone who will ask “So what?” and actually mean it.

If you can make that skeptical friend lean forward, you’ve got a true problem statement.

Stewart Butterfield, the co-founder of Slack, wrote an internal memo before the product launched titled “We Don’t Sell Saddles Here.” His argument was that if you were selling saddles in a world where nobody had discovered horseback riding, you wouldn’t talk about leather quality and stitching. You’d sell the dream of speed and freedom and range. The saddle is the artifact. The transformation is the product. If you’re in sales, I’m speaking your lingo here, but if you’re a deep subject expert, this concept is likely totally alien to you.

Your problem statement works the same way.

You’re not selling your method. You’re selling the transformation that becomes possible once this problem is solved. How are you making the world a better place?

Gandalf didn’t recruit the Fellowship by describing the metallurgical properties of the One Ring, although I’m sure Tolkien would have loved to write a whole prequel just about that. He told them what happens to Middle-earth if nobody walks into Mordor. That’s how you get a pair of shy Hobbits to climb Mount Doom. That’s the Kool-Aid you need to be selling.

The formula is: [SPECIFIC GAP] prevents [SPECIFIC GROUP] from [SPECIFIC OUTCOME], which matters because [SPECIFIC CONSEQUENCE].

For example:

  • Neuroscience: “Current brain-computer interfaces decode motor intention with 73 percent accuracy, which means patients with locked-in syndrome still cannot reliably communicate basic needs to caregivers.”
  • Education: “Teachers in low-income districts spend an average of 11 hours per week on administrative compliance reporting, displacing the equivalent of 55 instructional days per school year.”
  • Climate: “Existing carbon capture methods cost $400–600 per ton, roughly 4x what’s needed for commercial viability, which means the technology that could offset 15 percent of global emissions remains economically impossible.”

Notice the numbers. Every single one.

A claim without a number is an opinion. A claim with a number is cold hard evidence.

Reviewers do trust evidence.

Strong problem statements don’t describe the world. They diagnose what’s broken in it. They show everyone what’s hurting them.

Every week I send one system to premium subscribers: a framework, a prompt workflow, a writing structure. Something you can use before Friday to make your expertise visible. 13,000+ experts are in. → Join them.

Context always makes them care

On January 9, 2007, Steve Jobs walked onto a stage in San Francisco and announced three revolutionary products. A widescreen iPod with touch controls. A revolutionary mobile phone. A breakthrough Internet communicator.

Ever the showman, he cycled through them. “An iPod, a phone, an internet communicator… an iPod, a phone… are you getting it?” He kept accelerating.

Then he revealed: “These are not three separate devices. This is one device.”

Boom. Same product. Completely different emotional impact.

He could have walked out and said “We made a phone with a touchscreen that plays music and browses the web.” Cool features. But by framing it as three separate breakthroughs first, he made the audience evaluate each function against its best competitor before revealing one device that dominated all three.

The product didn’t change. The frame did.

This is exactly what your background section must do.

This is also why the Star Wars prequels disappointed fans when they opened with a trade dispute about taxation. The heck? Same galaxy. Same Force. Completely different emotional response from us. George Lucas proved that even a universe with cool lightsabers can bore the heck out of you if the frame is wrong. (Let’s not talk about Rian Johnson here.)

Most researchers write background sections like Wikipedia entries. Here’s what we know. Here’s how we know it. Here’s where we are.

That’s context without framing. It’s boring. Sorry.

Your background section needs to walk the reviewer through the story of this problem. How it emerged, why it persists, what happens if nobody solves it. Give them something to gnaw on.

Tversky and Kahneman proved this matters with hard numbers. In their famous 1981 framing experiment, they described an identical medical outcome two ways: “200 people will be saved” versus “400 people will die.” You get it. It’s the same result. Seventy-two percent chose the certain option under the first framing. Only 22 percent chose it under the second.

Same facts. Different framing. Opposite decisions.

A few moves that work here:

  • Start with the practical tough implication, not the theoretical foundation. “Antibiotic resistance kills 1.27 million people per year” hits harder than “antimicrobial resistance is an emerging area of concern in infectious disease research.”
  • Name your assumptions. Stating them clearly shows honesty. It also saves the reviewer from guessing if you see your own biases.
  • Define your terms. Grant reviewers are not always specialists in your exact subfield. If they have to Google something in your second paragraph, you’ve already lost momentum. (Pick up the reader at a bus stop they recognize before taking them on a field trip somewhere new.)

The background is not where you prove you’ve read everything. It’s where you prove you understand why this research matters to someone who doesn’t study what you study. That’s your goal.

That hole in the world only you can fill

Your literature review is not a book report.

I made this mistake for years. I treated it like a comprehensive survey of everything ever written on my topic, organized chronologically, with polite nods to everyone who came before me. I was so afraid to forget someone.

Thorough. Well-cited. Completely useless for convincing anyone to fund my work.

Here’s what a literature review actually needs to do. It must build a logical case that there is a specific hole in existing knowledge, and that your project is the precise shape to plug it.

Think of it like an escape room. You walk the reviewer through each clue until the one missing step becomes obvious. Then you hold up your project and say: this is the missing piece.

Show what’s known. But spend more time on what’s NOT known. Where did previous work stop? What questions remain open? What assumptions remain untested?

Don’t just nod along with previous work. Challenge methods. Question conclusions. Point out when findings are based on outdated data or narrow samples. The reviewer wants to see your critical thinking.

If you’re introducing a new framework, explain exactly what it improves. “This framework accounts for X, which the standard model ignores” is infinitely more persuasive than “this framework offers a new perspective.”

And organize thematically (not chronologically). A chronological lit review reads like a timeline. A thematic one reads like a good argument.

Arguments win funding. Timelines don’t.

You need a method that survives contact

On September 23, 1999, NASA lost a $327 million spacecraft.

The Mars Climate Orbiter had traveled nine months to reach Mars. As it approached, ground control realized something was wrong. The spacecraft came in at 57 kilometers altitude instead of the planned 226. It burned up in the atmosphere. Talk about burning cash, man.

The cause was Lockheed Martin’s software, which produced thrust data in pound-force seconds. NASA’s navigation software expected newton-seconds. One team used imperial. The other used metric. The spec document called for metric. Nobody caught the discrepancy during nine months of flight. Hey, if you ever needed an argument for why the world should switch to the metric system, this is mine.

The spacecraft was destroyed not by a bad idea but by a failure of specification.

Star Trek totally understood this issue. Every episode where the Enterprise nearly explodes, someone miscalibrated a sensor array or forgot to account for a tachyon variance. Starfleet’s entire dramatic tension runs on specification failures. (At least back in the day when it was good.)

Your methodology section is that sensor array. It sits between your research plan and the reviewer’s confidence in your ability to execute.

Most methodology sections fail for the same reason the Orbiter failed. They look plausible at a glance. The numbers seem reasonable. But under scrutiny, the specifics don’t hold together. Don’t make that easy mistake.

Here’s what needs to be flawless:

  • Justify your approach. Qualitative, quantitative, or mixed? Don’t just state it. Explain why it’s the right choice for THIS exact question. Defend it like you’re explaining your pizza topping choices to someone who believes pineapple is a crime against food. (For the record: pineapple is still valid, friends.)
  • Detail your methods with precision. Surveys? What questions, what scale, what sample size, what distribution method? Interviews? How many, how long, what structure, what coding approach? Lab work? What equipment, what protocols, what controls?
  • Name your tools. Statistical software, analysis platforms, measurement instruments. Specificity signals competence.
  • Address ethics. Informed consent, confidentiality, data storage, IRB approval. This isn’t just a lame checkbox. It’s where reviewers look for red flags. Give them nothing to worry about.
  • Anticipate what could go wrong. Low response rates. Equipment failures. Participant attrition. A contingency plan doesn’t make you look uncertain. It makes you look experienced.

Experienced researchers know Murphy’s Law is not a joke. It’s a project management framework for chaotic science realities.

Your timeline isn’t a wishlist

Here’s a number that should change how you think about timelines.

A study published in BMJ Open (Herbert et al., 2013) found that preparing a single new research proposal takes an average of 38 working days. In one round of Australian medical research funding alone, researchers collectively spent 550 working years preparing 3,727 proposals. Cost: AU$66 million in salary. That’s a lot of dough, Frodo.

The most striking finding was that time spent writing was not correlated with whether the grant was funded.

More time does not equal more success. Write that on a sticky note and stick it to your monitor.

But a realistic timeline signals something reviewers care about deeply: that you’ve actually thought about how this project unfolds in real life.

Break the project into distinct phases. Preparation, data collection, analysis, writing, revision. Set milestones for each. Not aspirational milestones. Real ones. (If you’ve ever gone through EU Horizon proposal work package hell, you feel me.)

The kind where you’ve accounted for the fact that ethics board approvals take three months or more. The kind where you know participant recruitment always runs slower than planned.

Pad your schedule. Not because you’re lazy. Because you’re honest. Equipment breaks. Co-investigators get pulled onto other projects. Holidays exist. People take time off. You know the drill.

(You’d be amazed how many proposals schedule critical data collection over December. Nobody is collecting data over December. I will die on this hill. Scheduling fieldwork over Christmas is the research equivalent of the Stark family ignoring every warning about winter. We all know what’s coming. It’s winter, Jon Snow. Plan accordingly.)

A tight, realistic timeline tells the reviewer one thing. This person has done this before. They know what a research project actually looks like.

The reviewer is not your enemy

In 1993, FBI star negotiator Chris Voss was on the phone with an armed bank robber at a Chase Manhattan branch in Brooklyn. Two masked men had cracked a security guard across the skull with a .357 revolver and were holding three hostages.

Voss didn’t argue. Didn’t moralize. Didn’t try to convince them they were wrong. He kept his cool.

He used what he later called tactical empathy. He mirrored their words back to them. Labeled their emotions. Spoke in a calm, downward-inflecting tone. He demonstrated that he understood their situation, their fears, and their need for a way out. This is so powerful.

He never once told them what to do.

All hostages survived.

Here’s the connection in case you’ve been wondering. Your grant reviewer is, in a weird way, a hostage, too.

Trapped in a room with a stack of demanding proposals. Under time pressure. Often reading outside their narrow specialty. They don’t want to reject you. Gosh, many of us don’t even get paid for this. They want to find proposals worth funding so they can feel good about their time.

Your reviewer is basically Tyrion Lannister at his own trial. Exhausted, underpaid for the job, surrounded by people who don’t fully understand what’s in front of them, and just trying to get through the day without making a terrible decision. Write your proposal so Tyrion would fund it.

You may be thinking, “But reviewers ARE the gatekeepers. They have the power.”

And sure. Fair enough. But power and attention are different things.

A reviewer with power and 30 minutes to spend on your proposal is someone you need to serve well.

Grant writing expert Robert Porter, whose proposals have won more than $8 million in funding over 30 years, put it bluntly: “To succeed at grant writing, most researchers need to learn a new set of writing skills.”

Academic papers inform. Proposals persuade. The muscle is different.

Here’s how to serve the reviewer:

  • Use the language of the funding call. If the call says “innovation,” your proposal says “innovation.” If it says “community impact,” your proposal says “community impact.” Don’t make the reviewer translate terms.
  • Put the most important information first in every section. Reviewers skim. The first sentence of each paragraph carries ten times the weight of the last one.
  • Format for exhausted eyes. Shorter paragraphs than a paper. Clear headers. White space. Bullet points for complex information. If your proposal looks like a wall of text, the reviewer’s brain checks out before their eyes do.

Anna Clemens, a grant writing specialist, pinpoints the most common mistake: researchers overestimate how much reviewers already know. Different synonyms for the same concept. Subfield shorthand never defined. Key points buried in dense paragraphs. Jargon is a killer.

Write for the smartest person who knows nothing about your specific topic.

That’s your reviewer.

Your bibliography is your reputation

This section seems like a formality. It’s not.

Your bibliography tells the reviewer three things in under 60 seconds. How deeply you know the field, how current your knowledge is, and whether you’ve done the intellectual work of engaging with the right conversations.

Cite works that genuinely inform your research. Not everything you’ve ever read or deemed vaguely interesting. This is not the place.

Check formatting against the required style guide. APA, Chicago, Harvard, ACM. Get it right. Sloppy citations signal sloppy thinking, even when they don’t. It’s not fair, but it’s the reality. Especially when AI can fix this in seconds.

Make sure your references are current. If your most recent citation is from 2019, the reviewer will wonder if you’ve been living under a rock.

Reference your own previous work if it’s relevant. This isn’t vanity. It’s evidence that you have a track record. As John Holmes used to say, if it fits, put it in.

Your bibliography is the last thing a reviewer sees. Make it the reason they trust you.

The one comma that cost $5 million

In February 2018, a Maine dairy company called Oakhurst settled a lawsuit for $5 million.

Not about their product. Not about their business practices.

About a missing comma of all things.

A Maine overtime law listed exempt activities: “The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of” perishable foods. No comma after “shipment.” The court couldn’t determine whether “distribution” was a separate activity or part of “packing for.”

Judge David Barron opened his 29-page ruling with: “For want of a comma, we have this case.”

Five million dollars. One comma. Take that, Oxford comma haters.

If a missing comma can cost $5 million in a legal proceeding where teams of lawyers scrutinize every word, imagine what unclear writing does in a grant review where a panelist gives you 30 minutes (sure, sometimes more, but the decision is usually made quickly).

Precision is not just a stylistic preference. It’s the mechanism by which your ideas reach another human brain the way you intended.

Here’s your checklist. Run it before you submit your proposal:

  • Does your problem statement create genuine tension?
  • Does your background section frame the research as urgent?
  • Does your literature review build toward an inevitable gap?
  • Does your methodology leave zero “but how?” questions?
  • Is your timeline realistic?
  • Have you written for the reviewer’s experience?
  • Does every claim include a specific number, date, or named example?

The hard split is already happening

Ninety-two percent of researchers believe they spend too much time preparing proposals.

Only 10 percent believe the current funding system positively affects research quality. And the system isn’t changing. But a divide is forming among the people who operate within it.

On one side, we’ve got researchers who treat proposals as paperwork. They fill out the sections, cite the literature, describe the methods. The proposals are complete. They are also interchangeable with thousands of others. For lack of a better word, it’s slop.

On the other side, we’ve got researchers who treat proposals as persuasion. They diagnose a problem. They frame the context. They build an argument that makes the reviewer want to fund them before the methodology section even begins.

The first group competes on credentials. The second group competes on clarity.

Clarity wins. Not every time though. If the Pier study proved anything, it’s that randomness is baked into our system, too. But across ten proposals, across a career, the researcher who writes to persuade will outperform the researcher who writes to comply.

Even more so if they use AI to get their angle right.

The NSF funds 27% of proposals. The ERC funds 12%. NIH early-stage investigator success rates dropped from 29.8% to 18.5% in just two years.

In Canada, the reality is just as unforgiving. The CIHR Project Grant success rate sits at a bleak 15.3%. The NSERC Discovery Grant success rate dropped from 67% down to 58%. SSHRC Insight Grants hover around a 40% average, but if you want the larger Stream B funding, your odds drop to 38%.

Those numbers are not getting better anytime soon.

Which means the margin between funded and unfunded is getting thinner. And in that thin margin, the quality of your argument is the only variable you fully control.

Your ideas might be as good as Stephanie Kwolek’s. Her breakthrough looked like a failure to the gatekeeper standing between her and the test. She had to spend days persuading him just to run it.

You have 30 minutes and 15ish pages to do the same thing.

Clear problem. Compelling context. Inevitable gap. Bulletproof method. Honest timeline. Reader-first writing. Credible evidence base.

Seven easy moves. Start writing today.

Bonus

The Write Insight subscribers with an AI Research Stack premium account this week also get 2 print-ready PDF worksheets (a one-page Proposal Checklist and a 7-Move Proposal Builder), 3 AI prompts (audit your problem statement against the tension formula, generate a thematic literature review structure, and extract a reviewer-proof methodology specification), 5 curated resources on grant proposal writing, and a full 7-move proposal protocol checklist. → Join them.
