
The integration of Generative AI into academic life appears to be a significant moment for university libraries. As trusted guides in the information ecosystem, librarians are positioned to help researchers explore this new terrain, but this transition requires developing a fresh set of skills.
Training your library team on AI-powered research tools could move beyond technical instruction to focus on critical thinking, ethical understanding, and human judgment.
Here is a proposed framework for a training program, organised by the new competencies your team might need to explore.
Foundational: Understanding Access and Use
This initial module establishes a baseline understanding of the technology itself.
- Accessing the Platform: Teach the technical steps for using the institution’s approved AI tools, including authentication, subscription models, and any specific interfaces (e.g., vendor-integrated AI features in academic databases, institutional LLMs, etc.).
- Core Mechanics: Explain what a Generative AI platform (like a Large Language Model) is and, crucially, what it is not. Cover foundational concepts like:
- Training Data: Explain that these models learn statistical patterns from large corpora of text, that their knowledge is bounded by a training cutoff date, and that outputs reflect whatever gaps and biases exist in that data.
- Prompting Basics: Introduce basic prompt engineering, the art of crafting effective, clear queries to get useful outputs.
- Hallucinations: Directly address the concept of “hallucinations,” or factually incorrect/fabricated outputs and citations, and emphasise the need for human verification.
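The prompting basics above can be illustrated with a small sketch. The four-part structure shown here (role, task, constraints, output format) is one common convention for writing clear, unambiguous queries, not a fixed standard, and the example consultation is hypothetical.

```python
def build_prompt(role, task, constraints, output_format):
    """Assemble a clear, structured prompt from its parts.

    The four-part structure is one common prompting convention;
    the section labels are illustrative, not a fixed standard.
    """
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ])

# Example: a query a librarian might draft during a consultation.
prompt = build_prompt(
    role="You are a research librarian assisting a history student.",
    task="Suggest five search keywords on the economic causes of the 1930s Dust Bowl.",
    constraints="Use terminology likely to appear in academic database subject headings.",
    output_format="A numbered list, one keyword phrase per line.",
)
print(prompt)
```

Laying the parts out explicitly makes it easier for trainees to see which element of a weak prompt to revise when an output misses the mark.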
Conceptual: Critical Evaluation and Information Management
This module focuses on the librarian’s core competency: evaluating information in a new context.
- Locating and Organising: Train staff on how to use AI tools for practical, time-saving tasks, such as:
- Generating keywords for better traditional database searches.
- Summarising long articles to quickly grasp the core argument.
- Identifying common themes across a set of resources.
- Evaluating Information: This is perhaps the most critical skill. Teach a new layer of critical information literacy:
- Source Verification: Always cross-check AI-generated citations, summaries, and facts against reliable, academic sources (library databases, peer-reviewed journals).
- Bias Identification: Examine AI outputs for subtle biases, especially those related to algorithmic bias in the training data, and discuss how to mitigate this when consulting with researchers.
- Using and Repurposing: Demonstrate how AI-generated material should be treated—as a raw output that must be heavily edited, critiqued, and cited, not as a final product.
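As a first automated red-flag pass during source verification, staff can check whether a DOI in an AI-generated citation is even syntactically plausible before doing a manual database lookup. The sketch below uses the standard "10.&lt;registrant&gt;/&lt;suffix&gt;" DOI pattern; it is a screening aid only, since a well-formed DOI can still be fabricated.

```python
import re

# DOIs follow the pattern "10.<registrant>/<suffix>". A syntax check
# only flags obviously malformed identifiers; a well-formed DOI can
# still be fabricated, so every citation needs a manual lookup.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi):
    """Return True if the string is at least syntactically a DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))

examples = [
    "10.1000/xyz123",      # plausible syntax
    "doi:10.1000/xyz123",  # prefix must be stripped before checking
    "not-a-doi",           # obviously malformed
]
for doi in examples:
    print(doi, "->", looks_like_doi(doi))
```

Anything that passes the syntax check still goes to a resolver or library database; anything that fails can be flagged immediately as a likely hallucinated citation.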
Social: Communicating with AI as an Interlocutor
The quality of AI output is often dependent on the user’s conversational ability. This module suggests treating the AI platform as a possible partner in a dialogue.
- Advanced Prompt Engineering: Move beyond basic queries to teach techniques for generating nuanced, high-quality results:
- Assigning the AI a role (such as a ‘sceptical editor’ or ‘historical analyst’) to potentially shape a more nuanced response.
- Practising iterative conversation, where librarians refine an output by providing feedback and further instructions, treating the interaction as an ongoing intellectual exchange.
- Shared Understanding: Practise using the platform to help users frame their research questions more effectively. Librarians can guide researchers in using the AI to clarify a vague topic or map out a conceptual framework, turning the tool into a catalyst for deeper thought rather than a final answer generator.
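The role assignment and iterative refinement described above map naturally onto the chat format most LLM platforms use: a running list of messages. This sketch models that exchange with plain data structures; the `"role"`/`"content"` shape follows the common chat convention, no real API is called, and the example replies are invented for illustration.

```python
# A conversation is a running list of messages; each turn adds context
# that shapes the next response. No real API is called here.
conversation = [
    # Assigning the AI a persona up front to shape its responses.
    {"role": "system",
     "content": "You are a sceptical editor reviewing literature summaries."},
    {"role": "user",
     "content": "Summarise the main argument of the attached article."},
]

def add_turn(history, assistant_reply, follow_up):
    """Record the model's reply, then refine it with further instructions."""
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": follow_up})
    return history

# Iterative refinement: critique the output and ask for a revision
# rather than accepting the first answer.
add_turn(conversation,
         "The article argues that open access accelerates citation rates.",
         "Which claims in that summary would need verification against the "
         "original text? Flag anything you are uncertain about.")
```

Framing the interaction as an accumulating history helps trainees see why each follow-up instruction improves (or degrades) the next output: the model responds to the whole exchange, not just the latest message.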
Socio-Emotional Awareness: Recognising Impact and Building Confidence
This module addresses the human factor, building resilience and confidence.
- Recognising the Impact of Emotions: Acknowledge the possibility of emotional responses, such as uncertainty about shifting professional roles or discomfort with rapid technological change, and facilitate a safe space for dialogue.
- Knowing Strengths and Weaknesses: Reinforce the unique, human-centric value librarians bring: critical thinking, contextualising information, ethical judgment, and deep disciplinary knowledge. These are skills AI cannot replicate. The AI could be seen as a means to automate lower-level tasks, allowing librarians to focus on high-value consultation.
- Developing Confidence: Implement hands-on, low-stakes practice sessions using real-world research scenarios. Confidence grows from successful interaction, not just theoretical knowledge. Encourage experimentation and a “fail-forward” mentality.
Ethical: Acting Ethically as a Digital Citizen
Ethical use is the cornerstone of responsible AI adoption in academia. Librarians must be the primary educators on responsible usage.
- Transparency and Disclosure: Discuss the importance of transparency when using AI. Review institutional and journal guidelines that may require students and faculty to disclose how and when AI was used in their work, and offer guidance on how to properly cite these tools.
- Data Privacy and Security: Review the potential risks associated with uploading unpublished, proprietary, or personally identifiable information (PII) to public AI services. Establish and enforce clear library policies on what data should never be shared with external tools.
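A library could back such a policy with a lightweight pre-submission screen that flags obvious red flags before text is pasted into an external tool. The patterns below (email addresses, phone-like numbers) are illustrative examples only; no pattern list catches all sensitive data, so the screen supplements rather than replaces policy and judgment.

```python
import re

# Illustrative red-flag patterns only; no regex list catches all PII.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone-like number": re.compile(r"\b\d[\d\s().-]{7,}\d\b"),
}

def pii_red_flags(text):
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

draft = "Contact the author at jane.doe@example.edu before uploading."
flags = pii_red_flags(draft)
print(flags)
```

A non-empty result means the draft should be redacted or kept off external services entirely; an empty result means only that nothing obvious was found.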
- Copyright and Intellectual Property (IP): Discuss the murky legal landscape of AI-generated content and IP. Emphasise that AI models are often trained on copyrighted material and that users are responsible for ensuring their outputs do not infringe on existing copyrights. Advocate for using library-licensed, trusted-source AI tools whenever possible.
- Combating Misinformation: Position the librarian as the essential arbiter against the spread of AI-generated misinformation. Training should include spotting common AI red flags, teaching users how to think sceptically, and promoting the library’s curated, authoritative resources as the gold standard.