Write Insight Newsletter · 10 min read

How AI is changing what it means to be a researcher

Academia's greatest flex is dead (and AI killed it)

A researcher looking at an AI hologram with all the knowledge in the world.
Is this the future we are looking at?

Last Wednesday, as I slowly sipped what was probably my third latte of the day (because f%$k you, liquid calories), Google casually shattered my professional identity by releasing its co-scientist multi-agent AI system. I've demonstrated before how a well-prompted AI system can generate a comprehensive literature review in my field in about 20 minutes. But with the release of Google's Co-Scientist and other multi-agent AI research tools, my very job is facing intense pressure to evolve, or mutate significantly, in the near future. David Cronenberg would have loved to write the script for this reality I'm facing. With the battle scars of a reduced sabbatical, a delayed tenure review, some research papers that nearly broke me, and enough rejection letters to wallpaper my entire office, I sat there wondering if I'd become the academic equivalent of a horse-drawn carriage in the age of Teslas. Real museum stuff.

This wasn't just some technological anxiety; this was an existential crisis packaged in a sleek user interface. And, honestly, I don't think I'm the only one facing it. Caffeine clearly isn't the answer to this one, because about a day later another notification popped up about yet another AI breakthrough. As if to underscore this technological anxiety, Elon's xAI is pushing us further into uncharted territory with Grok 3, the first public venture into generation 3 AI. Its strategy of "bigger is better," backed by what the company claims is the world's largest compute cluster, has yielded the highest benchmark scores we've seen from any base model. Not to be outdone, Claude 3.7 Sonnet's release a couple of days ago shows even more remarkable improvements (and Claude Code), matching Grok 3's capabilities while offering different strengths. Meanwhile, OpenAI's unreleased o3 lurks on the horizon, promising to be another third-generation powerhouse. I'm honestly feeling a little exhausted by this week's news. Academia will experience a tectonic shift as more companies launch models at this unprecedented scale (and multi-agent scientist systems will likely do the bulk of academic writing less than a year from now).

An academic flex that no longer impresses anyone

Remember when knowing obscure citations was our academic superpower? When students would look at us in awe as we casually referenced that hard-to-find but crucial paper from 1976? Yeah, those days are disappearing faster than free wine and cheese at conference receptions.

For decades, our worth as academics was measured by our ability to find rare sources, memorize key passages, and weave disparate ideas together through careful analysis. The knowledge we cultivated took years of dedication and countless hours in libraries that smelled like dust and academic desperation.

Today, the threshold for producing credible academic content has dropped so low that my neighbor’s teenager could use AI to write a surprisingly decent analysis of power dynamics in medieval French literature. What is even happening?

The quality of AI-generated academic content is improving at a pace that's frankly a little terrifying. Most papers and reviews I read these days are at least 80% AI-supported (it's a gut feeling, but one I trust myself to judge well enough). A well-designed prompt paired with semantic search systems can produce a literature review that would earn a solid B+ in most graduate seminars. And that B+ is rapidly trending toward an A-.

Intellectual opposition is my favourite way to use AI though

Here's the surprising turn in this academic nightmare, though: while many AI companies market their tools as things that make us better, faster, and more efficient academics, that whole efficiency-and-automation angle is quickly becoming outdated. I think AI's most valuable function might not be speeding up or replacing our writing (although that's a given these days), but challenging our own thinking.

I discovered this accidentally when, in a fit of petty revenge against the machines, I asked ChatGPT's o1 to critique my latest research idea. I was secretly hoping for lame-ass criticism I could smugly dismiss. But what do you know, I got intellectual confrontation that no colleague would dare to offer me. And I liked it. It really is like summoning a fearless colleague who doesn't worry about research politics or hurting my feelings, one who pushes me into the discomfort zone where I do my best thinking. Kudos to you, ChattieG.

And that's when I realized: the real power of AI isn't replacing academics; it's disagreeing with us in ways that make our work stronger. It's both a servant and a challenger that never gets tired of our nonsense. Kind of cool to have that readily available.

Three ways to use intellectual confrontation with AI

Here are three simple strategies (and AI prompts) for harnessing AI's power to challenge and strengthen our academic thinking. These methods have emerged from countless hours of intellectual sparring with various AI models, each offering a unique perspective on research problems. They take AI from a mere writing assistant to a valuable intellectual opponent.
