Are you tired of watching colleagues either blindly embrace AI tools or completely dismiss them — while you know there’s a smarter middle path?
The truth is, most academics are approaching AI literacy all wrong. They’re either using AI without any critical evaluation (leading to embarrassing mistakes and ethical violations) or they’re avoiding it entirely (missing massive opportunities to enhance their research and teaching). Meanwhile, administrators are scrambling to create AI policies without understanding the technology, students are using AI in ways that undermine learning, and departments are making expensive tool purchases based on marketing hype rather than actual value.
Today, I’m going to show you exactly how to position yourself as the trusted AI expert on your campus — the person everyone turns to for guidance, evaluation, and strategic thinking about AI in academia.
Let’s walk through each strategy.
Way 1: Master the fundamentals of prompt engineering beyond basic commands
Most academics think prompt engineering means asking ChatGPT to “write me a literature review” — but that’s amateur hour.
Real prompt engineering for academics involves understanding structured frameworks that consistently produce reliable, high-quality outputs. We discussed many of them in our AI for UX Designers Masterclass yesterday.
While there are dozens of prompt frameworks available, four stand out as particularly powerful for academic work.
Start by learning the CLEAR framework: Context (provide background), Length (specify output length), Examples (show desired format), Audience (define who this is for), and Role (tell the AI what perspective to take). Instead of “Write me an introduction to my research paper about social media,” here’s an example using the CLEAR framework for structuring an academic introduction: “You are an experienced professor of academic writing. Write a 600-word introduction for a research paper on the impact of social media on academic discourse. Use the CARS (Create A Research Space) model with clear moves showing the research territory, niche, and occupation. Base the structure and tone on this example introduction: [insert example introduction]. Focus on establishing the research gap or problem regarding how platforms like Twitter have changed scholarly communication patterns. Write for an audience of peer reviewers at an HCI journal.”
The RACE framework (Role, Action, Context, Expectation) excels when you need the AI to assume specific academic expertise. For example: “Role: You are a senior literature review specialist in cognitive psychology. Action: Analyze these 15 research papers for methodological gaps. Context: I’m preparing a grant proposal on working memory interventions for ADHD students, and I need to identify unexplored research directions. Expectation: Provide a 400-word analysis highlighting 3 specific methodological gaps with suggested research questions for each.”
The APE framework (Action, Purpose, Expectation) works brilliantly for straightforward academic tasks where clarity is paramount. Try this: “Action: Summarize the key findings from this 50-page report. Purpose: To brief department colleagues who need to understand the implications for our curriculum review process. Expectation: Create a 2-page executive summary with bullet-pointed action items and a timeline for implementation.”
For complex, multi-step academic processes, the RISE framework (Role, Input, Steps, Expectation) delivers exceptional results. Here’s an example: “Role: You are an experienced journal editor in environmental science. Input: The attached draft manuscript (PDF) on microplastic contamination in freshwater systems. Steps: First, evaluate the literature review for completeness. Second, assess the methodology for rigour. Third, analyze the results presentation for clarity. Fourth, review the discussion for logical flow. Expectation: Provide specific feedback for each step with concrete suggestions for improvement.”
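If you find yourself reusing these frameworks often, it can help to keep them as fill-in-the-blank templates rather than retyping them each time. Here is a minimal Python sketch of that idea for the RACE framework; it is purely illustrative, the class and field names are my own invention, and the example values simply echo the grant-proposal prompt above. Paste the rendered text into whichever AI tool you use.

```python
# Minimal sketch: assemble a RACE-style prompt (Role, Action, Context, Expectation)
# from reusable parts. Purely illustrative; adapt the fields to any of the frameworks above.
from dataclasses import dataclass


@dataclass
class RacePrompt:
    role: str
    action: str
    context: str
    expectation: str

    def render(self) -> str:
        # Join the four labelled components into a single prompt string.
        return (
            f"Role: {self.role}\n"
            f"Action: {self.action}\n"
            f"Context: {self.context}\n"
            f"Expectation: {self.expectation}"
        )


# Hypothetical example values, echoing the grant-proposal prompt above.
prompt = RacePrompt(
    role="You are a senior literature review specialist in cognitive psychology.",
    action="Analyze these 15 research papers for methodological gaps.",
    context="I'm preparing a grant proposal on working memory interventions for ADHD students.",
    expectation="Provide a 400-word analysis highlighting 3 methodological gaps, each with a suggested research question.",
)
print(prompt.render())
```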
Practice these frameworks on different academic tasks until you can consistently get near-publication-quality outputs that need your expertise only to refine and validate, not to rewrite from scratch.
Way 2: Develop a systematic framework for evaluating AI tool accuracy and bias
The fastest way to lose credibility is recommending an AI tool that produces biased or inaccurate results — yet most academics have no systematic way to test these tools.
Create a standardized evaluation protocol that you can apply to any AI tool. Test the tool with questions where you already know the correct answers, probe for biases by asking about controversial topics in your field, and document systematic weaknesses you discover. For research tools, verify citations and fact-check claims. For writing tools, test for consistent voice and argument structure.
Keep a running document of your evaluations, noting which tools excel at specific tasks and which consistently fail. This becomes your secret weapon when colleagues ask for AI recommendations — you’ll have data-driven answers instead of hunches.
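If you want that running document to stay comparable across tools, a simple structured log works well. Below is a minimal Python sketch of one way to record evaluations in a shared CSV; every field name, score, and tool name is a placeholder I made up, so adapt them to your own protocol.

```python
# Minimal sketch of an AI tool evaluation log. All field names, scores, and tool
# names below are placeholders; adapt them to your own evaluation protocol.
import csv
import os
from datetime import date

FIELDS = ["date", "tool", "task", "known_answer_accuracy",
          "citation_check", "bias_notes", "recommendation"]

evaluations = [
    {
        "date": date.today().isoformat(),
        "tool": "Hypothetical summarizer X",            # placeholder name
        "task": "Summarize a 50-page report",
        "known_answer_accuracy": "4/5 known facts correct",
        "citation_check": "2 of 10 citations could not be verified",
        "bias_notes": "Overstates consensus on a contested topic in my field",
        "recommendation": "Drafting only; verify every citation",
    },
]

# Append results to a CSV you can share when colleagues ask for recommendations.
path = "ai_tool_evaluations.csv"
write_header = not os.path.exists(path)
with open(path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:
        writer.writeheader()
    writer.writerows(evaluations)
```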
Way 3: Establish yourself as the department’s AI ethics and policy consultant
Every academic department needs AI guidelines, but most faculty have no idea how to create them — this is your opportunity to lead.
Draft practical AI policies that address academic integrity, data privacy, intellectual property, and pedagogical best practices. Here are the core values I think any such policy should include:
- Academic Integrity: AI use must maintain honesty, trust, and fairness in all academic endeavours while preserving the fundamental value of original human scholarship.
- Transparency: All AI assistance must be appropriately disclosed and documented to maintain scholarly transparency and reproducibility.
- Educational Value: AI tools should improve, not replace, critical thinking, creativity, and core disciplinary skills.
- Privacy and Security: Student and institutional data must be protected according to applicable privacy regulations (FERPA, GDPR).
- Equity and Accessibility: AI implementation should promote, not hinder, equal access to educational opportunities.
Focus on creating policies that are specific enough to be useful but flexible enough to evolve with the technology. Address questions like: When can students use AI for assignments? How should faculty disclose AI assistance in research? What data privacy considerations apply to institutional AI tool adoption?
Present these drafts at faculty meetings and volunteer to chair the AI policy committee. Position yourself as someone who understands both the technical capabilities and the academic implications.
Way 4: Build and maintain a curated repository of tested AI tools and use cases
Your colleagues don’t have time to test dozens of AI tools — but you can become the person who has already done that work. I’m sending paid subscribers a Notion link to my list today.
Create a living document that categorizes AI tools by academic function: research assistance, writing support, data analysis, course design, grading efficiency, and student engagement. For each tool, include specific use cases, pricing information, learning curve assessment, and integration capabilities with existing academic workflows.
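To keep that document easy to maintain, give every tool the same fields. Here is a sketch of what a single entry might look like, written as a Python dictionary only so the structure is explicit; the tool name and all values are placeholders, not recommendations.

```python
# Illustrative sketch of one repository entry. The tool name and every value are
# placeholders; the fields mirror the categories described above.
repository_entry = {
    "tool": "Hypothetical citation assistant",    # placeholder, not a real product
    "category": "research assistance",            # or: writing support, data analysis, course design, ...
    "use_cases": ["screening abstracts", "formatting reference lists"],
    "pricing": "free tier; paid institutional licence",
    "learning_curve": "low: usable after about 30 minutes",
    "integrations": ["reference manager export", "word processor add-in"],
    "last_reviewed": "2025-01",                   # update monthly, as suggested above
}
```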
Update this repository monthly and share highlights in department newsletters or faculty development sessions. When someone needs an AI solution for a specific problem, you’ll be the go-to expert with tested recommendations.
Way 5: Design and deliver AI literacy workshops tailored to academic contexts
Generic AI training doesn’t address the specific needs of academics — but workshops designed by academics for academics will be in high demand. Something like our AI Research Tools webinar is a great way to start.
Develop modular workshops that address different academic constituencies: “AI for Research Efficiency,” “Ethical AI Use in the Classroom,” “AI Tools for Administrative Tasks,” and “Critical AI Evaluation for Faculty.” Make these workshops hands-on, with participants working through real academic scenarios using the tools and frameworks you’ve mastered.
Start by offering these workshops to your own department, then expand to other departments, and eventually position yourself to lead campus-wide AI literacy initiatives.
Way 6: Establish office hours specifically for AI consultation and troubleshooting
Position yourself as the person colleagues can turn to when they’re stuck with AI tools or need guidance on specific applications.
Dedicate one hour per week to “AI office hours” where colleagues can bring specific challenges: a research project that needs AI assistance, a student assignment policy that needs AI considerations, or a departmental workflow that could benefit from automation. Document common questions and successful solutions — this becomes the basis for future workshops and policy recommendations.
This regular availability establishes you as approachable and knowledgeable. It builds the trust necessary to become the campus AI authority.
Way 7: Stay strategically ahead of AI developments through targeted learning and networking
AI changes monthly, and maintaining your expert status requires strategic learning that goes beyond surface-level news consumption.
Follow key academic AI researchers, subscribe to technical newsletters that focus on research applications, and join professional communities where academics discuss AI implementation. Set aside time each week to test new tools and features, but focus on developments that have clear academic applications rather than chasing every new AI trend or every new tool that a former grad student is trying to get funded.
Create a system for sharing your discoveries — whether through social media, department emails, newsletters, or informal conversations — so colleagues begin to see you as their source for relevant AI updates.
You don’t want to become a technical AI developer; you want to be the trusted interpreter who helps your academic community navigate AI developments thoughtfully and strategically.
What are you waiting for, my friend?
P.S.: Curious to explore how we can tackle your research struggles together? I’ve got three suggestions that could be a great fit: a seven-day email course that teaches you the basics of research methods, or the recordings of our AI Research Tools webinar and PhD Student Fast Track webinar.
AI Tool Evaluation Worksheet
Below is a PDF worksheet that provides a structured approach to AI tool evaluation, along with my comprehensive AI Research Tools Database in Notion.