Before we get into this week's issue, I'm excited to bring you some awesome best-deal-of-the-year Black Friday offers. Check out these amazing deals (and please use my affiliate links in this newsletter if you decide to purchase). All codes are only valid Nov. 26 through Dec. 2.
Today’s newsletter is kindly sponsored by Paperpal. Paperpal provides secure AI writing support, in-depth language correction, and a suite of pre-submission checks. Do your research, write, cite, edit and submit—all in one place. Use the code LENNART20 to get a 20% discount all year. Or if you buy a license this week, the code SAVEMAX will be auto-applied to save 40% on your purchase.
40% off 1-year premium newsletter access
Get all the bonus materials on my website. The premium newsletter subscription includes: the subscriber-only post archive with tactical downloadable PDFs ($200 value), commenting on website posts, all-year discounts on my courses, and access to the legacy course vault (coming in January 2025). It also includes full access to my premium Substack, which I'm opening early just for subscribers. If you are a Substack premium subscriber and want access to the website as well, send me a quick email and I'll get you set up. More deals below ↓
AI is smart. But sometimes it lies. And if you're using it for academic writing, those lies can sneak into your papers, leaving you with a polished, confident, but totally false argument. Kind of like those TikTok life hacks that just set your kitchen on fire. In this post, I'll explore the phenomenon of generative AI hallucinations and how to combat them in academic work (note that I use generative AI and AI interchangeably here for reading ease). Some AI is as trustworthy as a phishing email saying you've won the lottery without using your name. AI can dress misinformation up as plausible-sounding content. So, let's get to the root of the problem and learn how to keep your academic work clean, accurate, and trustworthy.
The AI hallucination problem
Every researcher who has experimented with generative AI tools has run into this: the AI confidently presenting false information as fact. Basically, Fox News dressed up as a Wikipedia article. It sounds good and reads smoothly, but it's just plain wrong. Why does this happen? Large language models like GPT-4 work by predicting what makes sense statistically—not necessarily what's true. This leads to issues when they sound confident but are, in reality, just making sh*t up.

The AI hallucination problem is especially tricky in academic writing. Academic writing requires accuracy, precision, and reliability. But AI hallucinations can introduce inaccuracies, imprecision, and unreliability into your work. You want to back away from that like Gen Z from a Facebook invite. In academic writing, you can't afford to have your AI tool make stuff up. It's no small thing when machines forget to be truthful. You might as well wear a Mentos jumpsuit in a Diet Coke swimming pool.

So, how can you fight AI hallucinations in your academic work? Let's get down to business and look at the major challenges we're facing with hallucinations:
- Generative AI will sometimes invent references to studies, journals, or papers that do not exist. It fills in the blanks when it doesn't have an answer, producing completely fabricated but credible-sounding citations. Generative AI also combines legitimate information with false content: a reference might start correctly but then dive into made-up details. It's like putting salt in your coffee—a mix of good and bad ruins the whole thing. So, don't ask Terrence Howard to do elementary algebra for you.
- Generative AI can produce logical-sounding false statements that don’t stand up to scrutiny. Think of these as academic urban myths—they make sense on the surface but lack the grounding of real evidence. They can be hard to spot, especially if you’re not an expert in the field.
- Generative AI systems may draw faulty conclusions from the training data, leading to inaccurate summaries of complex research. If you’re not careful, these incorrect interpretations could end up in your work. Everyone loves a good research summary, but you do have to check it like the milk that just expired yesterday.
Always treat AI outputs as drafts. It is essential to verify the accuracy of the information provided by AI tools, especially in academic writing. You need to be the fact-checker, and you should never treat the AI output as the final product.
Citation integrity
One of the biggest challenges when using AI for academic purposes is managing citation integrity. Here’s how AI can trick you:
Imagine you're working on a paper, and the AI spits out a fantastic-looking reference. Except it doesn't exist. Sometimes it's even a name you recognize or the title of a paper you recall, but all jumbled together. Be careful here: these citations do look real—they have a journal name, authors, even page numbers. But they're just a figment of the AI's imagination. And if you include them in your paper, you're in trouble. You're basically saying, "I'm a serious academic, but my sources are from Narnia." So, how can you ensure citation integrity in the age of AI?
- You need to be vigilant. Always verify the references provided by generative AI tools. If the AI cites a study, check whether it exists. If it doesn't, find a reliable source to back up your argument. The best way to do that quickly is Consensus. Consensus lets you explore a research question (or, even better, a simple yes/no question related to your argument) and get a sense of the current state of the research on it. It gives you study snapshots, the full and correct citation, and a quick summary of each paper. Other tools that offer similar AI literature-review functionality include SciSpace, Scite AI, and Paperpal.
- You need to cite consistently. You can use reference management tools like Zotero to keep track of your sources. Citation management tools let you organize your references, generate citations, and ensure that your bibliography is accurate and up-to-date. You need to understand the citation style you are using, like the laundry rules for your partner’s special sweater. Different citation styles have different rules for formatting references. Consensus outputs the most common citation styles like APA, MLA, Chicago, Harvard, and BibTeX. The latter can be quickly imported anywhere (including in your Zotero library).
- You need to understand the context. This is especially important when you're using AI to find references. The AI's references should be relevant to your argument. You can't just throw in a study because you like the title; you need to confirm that it supports your argument and that it comes from a reliable source. If you don't understand the context, you might end up citing a study that contradicts your argument. In other words, you need to be a responsible academic (and count red flags as if you're on a date with a crypto bro).
Never assume a citation is accurate until you have found it yourself in the library or an academic database. Examples of such databases: Google Scholar, Scopus, Web of Science, JSTOR, Semantic Scholar, ACM Digital Library, APA PsycNet, IEEE Xplore, and various field-specific options.
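If you want to automate part of that first pass, the public Crossref REST API lets you check whether a citation string matches any real published work. Here's a minimal Python sketch; the helper name and the example query are my own illustration, and a zero-match result means "verify by hand", not "definitely fake":

```python
import requests  # pip install requests

def crossref_lookup(citation: str, rows: int = 5) -> list[dict]:
    """Search Crossref for works matching a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "title": (item.get("title") or ["<no title>"])[0],
            "doi": item.get("DOI"),
            "year": item.get("issued", {}).get("date-parts", [[None]])[0][0],
        }
        for item in resp.json()["message"]["items"]
    ]

# Paste in the citation exactly as the AI produced it (hypothetical example):
for match in crossref_lookup("Smith 2021 hallucination in large language models"):
    print(match["year"], match["doi"], "-", match["title"])
```

If none of the top matches resembles the AI's reference, treat it as fabricated until you can locate it in one of the databases above.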
Maintaining content accuracy
Content accuracy is vital in academic research. You can’t afford to have your AI tool misinterpret data or misrepresent academic findings. Here’s why some generative AI tools get it wrong:
Generative AI often outputs information that sounds right but lacks factual support. It's not that the AI is lying maliciously; it's merely following patterns without understanding real-world correctness. The AI can generate answers that seem logical because they align with the statistical relationships it has learned, but without any grounding in verified facts, these outputs can be misleading. This is where AI often falls flat on its face in academic writing, which depends on your being able to trust your sources. It's like relying on a charming stranger who knows how to use the right words but ultimately doesn't have any genuine expertise on the topic. So, kind of like following a "thought leader," I guess.
AI can produce coherent, persuasive arguments that are, in fact, nonsense. This happens because it assembles pieces that statistically seem to fit without genuinely understanding the concepts. The AI mimics the flow of academic reasoning but lacks the true comprehension needed to form valid conclusions. The result is content that appears credible but crumbles under deeper scrutiny, as honest as a LinkedIn post about “hustle culture” written from a beach in Bali.
After generating text with AI, perform your own critical review—check facts, logic, and the reliability of any claims made. Give yourself a reality check, just like when you look at your screen time report.
Prevention strategies
Input control
The best prevention is precise input. AI performs better with better instructions. Be specific: if you ask a vague question, you get a vague (and likely incorrect) answer. Garbage in. Garbage out. Asking the AI specific questions with detailed instructions helps limit room for hallucinations. Including verified data also makes a difference—if you have legitimate data, provide it to the AI to reduce the chance that it tries to fill in the blanks inaccurately. Additionally, using specialized AI tools is key; general-purpose AI might know a little about a lot of things, but for academic work, you’re better off with AI tools designed specifically for research purposes. Kind of like when you’re using Consensus and an LLM like Claude, ChatGPT, or Gemini together. You could get the correct sources and summaries from Consensus and then tie their content together with elaborate prompting in an LLM. This way, you’re less likely to get a nonsensical answer.
The more you provide context and clear directions, the more accurate the AI will be. At least it will feel more dependable than getting fashion advice from a colourblind penguin.
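To make "garbage in, garbage out" concrete, here's a hedged sketch of the vague-versus-specific difference as plain prompt templates in Python. The source entries are placeholders; swap in material you have already verified yourself:

```python
# A vague prompt invites the model to fill gaps with plausible fiction.
vague_prompt = "Summarize the research on sleep and memory."

# A specific prompt pins the model to material you have already verified.
# The entries below are placeholders -- swap in your own checked sources.
verified_sources = """\
Source 1 (verified): Author A et al. (2020), 'Title of paper', Journal X.
Key finding: ...
Source 2 (verified): Author B (2018), 'Title of paper', Journal Y.
Key finding: ...
"""

specific_prompt = f"""\
You are helping me draft a literature-review paragraph.
Use ONLY the sources listed below. Do not add citations of your own.
If the sources do not support a claim, say so instead of guessing.

Sources:
{verified_sources}
Task: in about 120 words, compare what these sources say about
memory consolidation during sleep.
"""

print(specific_prompt)
```

The explicit "say so instead of guessing" instruction doesn't eliminate hallucinations, but it gives the model a sanctioned way out besides inventing something.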
Verification methods
Verifying AI content is where human intervention really shines. Sometimes tech just can’t get its act together without you. You need to cross-check any output by systematically fact-checking each claim using reliable sources such as academic databases, fact-checking websites, or consulting experts. Cross-referencing is essential here—always use multiple sources to verify the validity of information. If an AI tool presents something as true, ensure at least two additional sources corroborate it. Additionally, consider using Retrieval-Augmented Generation (RAG) systems, which improve accuracy by linking AI outputs to specific, retrievable information. These reduce the chances of hallucinations.
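To show the RAG idea in miniature, here's a toy Python sketch, not a production setup: real systems use vector search over an indexed corpus, and the keyword-overlap retrieval and two-passage corpus below are simplifying assumptions to keep the example self-contained.

```python
# Toy illustration of Retrieval-Augmented Generation: answer from retrieved,
# verified passages instead of the model's memory. Real RAG systems use
# vector search over an indexed corpus; the keyword overlap below is a
# stand-in that keeps this sketch self-contained.

VERIFIED_CORPUS = [
    "Verified passage about topic A: ...",
    "Verified passage about topic B: ...",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        corpus,
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = "\n\n".join(retrieve(question, VERIFIED_CORPUS))
    return (
        "Answer using ONLY the passages below. If they do not contain the "
        "answer, reply 'not found in sources'.\n\n"
        f"Passages:\n{passages}\n\nQuestion: {question}"
    )

print(grounded_prompt("What does the research say about topic A?"))
```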
Treat AI like a first draft assistant. Like your new gym buddy, who wants to beef up but skips leg day. It’s useful, but you must apply the human touch to refine, verify, and validate it.
Best practices for technical academic use
To effectively use AI in academic work, adjust model parameters, cross-verify outputs, and establish a content validation workflow.
- You can adjust AI settings to make the model less confident in unknown areas. When the AI is set to a lower "temperature," it becomes less likely to make speculative leaps (see the minimal example after this list).
- Use different tools to check the output of one AI. If you have access to multiple AI systems, run the same prompt through each. Different answers show weak spots.
- Establish clear workflows for content validation. For instance, always have one round dedicated solely to cross-checking sources. Use a tool like Consensus to quickly help you get to real sources.
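For the temperature point above, here's what a lower setting typically looks like in code. This sketch assumes the OpenAI Python SDK purely as an example; most LLM APIs expose a similar parameter, and the model name and prompt are illustrative:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o",   # example model name
    temperature=0.2,  # low temperature: stick to likely tokens, less speculation
    messages=[
        {"role": "system",
         "content": "If you are not certain, say you don't know. Do not invent citations."},
        {"role": "user",
         "content": "Summarize the main finding of <your verified source text>."},
    ],
)
print(response.choices[0].message.content)
```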
Lean on different tools and settings to find consistency. If two tools agree, you can usually trust the information a bit more. If they disagree, check the sources. This is a practical way to keep yourself out of the AI hallucination trap.
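As a sketch of that cross-checking habit: run the same prompt through two models and flag disagreement for manual review. The `ask_model_a`/`ask_model_b` functions below are stand-ins for whatever APIs you actually use, and the overlap threshold is an arbitrary assumption:

```python
def ask_model_a(prompt: str) -> str:
    # Stand-in: replace with a real API call (ChatGPT, Claude, Gemini, ...).
    return "The 2020 study reports a significant effect of X on Y (n=120)."

def ask_model_b(prompt: str) -> str:
    # Stand-in: replace with a second, independent model's API call.
    return "Evidence is mixed; several replications of that finding have failed."

def cross_check(prompt: str) -> dict:
    """Run one prompt through two models; flag disagreement for human review."""
    a, b = ask_model_a(prompt), ask_model_b(prompt)
    # Crude agreement signal: shared vocabulary. A real review should compare
    # the actual claims and citations by hand -- this only ranks what to check.
    overlap = len(set(a.lower().split()) & set(b.lower().split()))
    shortest = min(len(a.split()), len(b.split())) or 1
    return {"answer_a": a, "answer_b": b, "needs_review": overlap / shortest < 0.5}

print(cross_check("Does X affect Y according to the literature?"))
```

Low word overlap only means "a human should look at this"; agreement between two models is reassuring but still not proof.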
What can you do next?
- Run an academic query and then fact-check it. Identify any inaccuracies. Familiarize yourself with the kinds of errors your AI might make.
- Create a step-by-step process for verifying AI-generated content—include steps like cross-referencing and using specialized databases.
- If available, try using a Retrieval-Augmented Generation system to improve the quality of AI outputs. It will help make hallucinations far less likely.
Generative AI is an incredibly powerful tool for research, but ultimately it’s only as effective as the vigilance you pour into it, so if you’re half-assing it, don’t expect miracles. Double-check facts and trust your expertise to prevent AI hallucinations from compromising your research.
Stay sharp, fact-check thoroughly, and let AI be a helpful assistant. You have enough rogue co-authors already.
Black Friday Deals for Subscribers
Here are some Black Friday deals I have secured for this week. All codes are only valid Nov. 26 through Dec. 2:
40% off the How to Use AI Research Tools Webinar
The webinar recording includes: a 3-hour webinar video recording with subtitles in 19 different languages, 46 in-depth webinar slides (Google Slides and PDF), 1-hour ChatGPT Bonus Tutorial Video (+ 39 PDF slides with prompts), 3 Bonus App Tutorial Videos (Yomu AI, SciSpace, Sourcely), 184-page Mastery Guide for 34 different AI Research & Writing Apps (PDF). Use code BFF2024 at checkout.
40% off the Thesis Fast Track Webinar
This webinar recording includes: a 3-hour video recording of the entire webinar workshop, 64 instructional slides (Google Slides and PDF), list of recommended productivity and AI software, viewing access to the workshop online whiteboard (Miro), 7-page workbook with checklists and prompts to accompany the workshop, 10-page Viva questions guide, simplified checklist for PhD examination. Use code BFF2024 at checkout. More deals below ↓