
Research Integrity, Partnership, and Societal Impact

Research integrity extends beyond publication to include how scholarship is discovered, accessed, and used, and its societal impact depends on more than editorial practice alone. In practice, integrity and impact are shaped by a web of platforms and partnerships that determine how research actually travels beyond the press.

University press scholarship is generally produced with a clear public purpose, speaking to issues such as education, public health, social policy, culture, and environmental change, and often with the explicit aim of informing practice, policy, and public debate. 

Whether that aim is realised increasingly depends on what happens to research once it leaves the publishing workflow. Discovery platforms, aggregators, library consortia, and technology providers all influence this journey. Choices about metadata, licensing terms, ranking criteria, or the use of AI-driven summarisation affect which research is surfaced, how it is presented, and who encounters it in the first place. 

These choices can look technical or commercial on the surface, but they have real intellectual and social consequences. They shape how scholarship is understood and whether it can be trusted beyond core academic audiences. For university presses, this changes where responsibility sits. Editorial quality remains critical, but it is no longer the only consideration. Presses also have a stake in how their content is discovered, contextualised, and applied in wider knowledge ecosystems. Long-form and specialist research is particularly exposed here. When material is compressed or broken apart for speed and scale, nuance can easily be lost, even when the intentions behind the system are positive.

This is where partnerships start to matter in a very practical way. The conditions under which presses work with discovery services directly affect whether their scholarship remains identifiable, properly attributed, and anchored in its original context. For readers using research in teaching, healthcare, policy, or development settings, these signals are not decorative. They are essential to responsible use.

Zendy offers one example of how these partnerships can function differently. As a discovery and access platform serving researchers, clinicians, and policymakers in emerging and underserved markets, Zendy is built around extending reach without undermining trust. University press content is surfaced with clear attribution, structured metadata, and rights-respecting access models that preserve the integrity of the scholarly record.

Zendy works directly with publishers to agree how content is indexed, discovered, and, where appropriate, summarised. This gives presses visibility into and control over how their work appears in AI-supported discovery environments, while helping readers approach research with a clearer sense of scope, limitations, and authority.

From a societal impact perspective, this matters. Zendy’s strongest usage is concentrated in regions where access to trusted scholarship has long been uneven, including parts of Africa, the Middle East, and Asia. In these contexts, university press research is not being read simply for academic interest. It is used in classrooms, clinical settings, policy development, and capacity-building efforts, areas closely connected to the Sustainable Development Goals.

Governance really sits at the heart of this kind of model. Clear and shared expectations around metadata quality, content provenance, licensing boundaries, and the use of AI are what make the difference between systems that encourage genuine engagement and those that simply amplify visibility without depth. Metadata is not just a technical layer: it gives readers the cues they need to understand what they are reading, where it comes from, and how it should be interpreted.

AI-driven discovery and new access models create real opportunities to broaden the reach of university press publishing and to connect trusted scholarship with communities that would otherwise struggle to access it. But reach on its own does not equate to impact. When context and attribution are lost, the value of the research is diminished. Societal impact depends on whether work is understood and used with care, not simply on how widely it circulates.

For presses with a public-interest mission, active participation in partnerships like these is a way to carry their values into a more complex and fast-moving environment. As scholarship is increasingly routed through global, AI-powered discovery systems, questions of integrity, access, and societal relevance converge. Making progress on shared global challenges requires collaboration, shared responsibility, and deliberate choices about the infrastructures that connect research to the wider world. For university presses, this is not a departure from their mission, but a continuation of it, with partnerships playing an essential role.

FAQ

How do platforms and partnerships affect research integrity?
Discovery platforms, aggregators, and technology partners influence which research is surfaced, how it’s presented, and who can access it. Choices around metadata, licensing, and AI summarization directly impact understanding and trust.

Why are university press partnerships important?
Partnerships allow presses to maintain attribution, context, and control over their content in discovery systems, ensuring that research remains trustworthy and properly interpreted.

How does Zendy support presses and researchers?
Zendy works with publishers to surface research with clear attribution, structured metadata, and rights-respecting access, preserving integrity while extending reach to underserved regions.

For partnership inquiries, please contact:
Sara Crowley Vigneau
Partnership Relations Manager
Email: s.crowleyvigneau@zendy.io


Beyond Publication. Access as a Research Integrity Issue

If research integrity now extends beyond publication to include how scholarship is discovered and used, then access is not a secondary concern. It is foundational.

In practice, this broader understanding of integrity quickly runs into a hard constraint: access. A significant percentage of academic publishing is still behind paywalls, and traditional library sales models fail to serve institutions with limited budgets or uneven digital infrastructure. Even where university libraries exist, access is often delayed or restricted to narrow segments of the scholarly record. The consequences are structural rather than incidental. When researchers and practitioners cannot access the peer-reviewed scholarship they need, it drops out of local research agendas, teaching materials, and policy conversations. Decisions are then shaped by whatever information is most easily available, not necessarily by what is most rigorous or relevant. Over time, this weakens citation pathways, limits regional participation in scholarly debate, and reinforces global inequity in how knowledge is visible, trusted, and amplified.

The ongoing success of shadow libraries highlights this misalignment: Sci-Hub reportedly served over 14 million monthly users in 2025, indicating sustained and widespread demand for academic research that existing access models continue to leave unmet. This is less about individual behaviour than about a system that consistently fails to deliver essential knowledge where it is needed most.

The picture looks different when access barriers are reduced: usage data from open and reduced-barrier initiatives consistently show strong engagement across Asia and Africa, particularly in fields linked to health, education, social policy, and development. These patterns highlight how emerging economies rely on high-quality publishing in contexts where it directly impacts professional practice and public decision-making.

From a research integrity perspective, this is important. When authoritative sources are inaccessible, alternative materials step in to fill the gap. The risk is not only exclusion, but distortion. Inconsistent, outdated, or unverified sources become more influential precisely because they are easier to obtain. Misinformation takes hold most easily where trusted knowledge is hardest to reach.

Addressing access is about more than widening readership or improving visibility; it is about ensuring that high-quality scholarship can continue to shape understanding and decisions in the contexts it seeks to serve. For university presses committed to the public good, this challenge sits across discovery systems, licensing structures, technology platforms, and the partnerships that increasingly determine how research is distributed, interpreted, and reused. If research integrity now extends across the full lifecycle of scholarship, then sustaining it requires collective responsibility and shared frameworks. How presses engage with partners, infrastructures, and governance mechanisms becomes central to protecting both trust and impact.

FAQ

What challenges exist in current access models?
Many academic works remain behind paywalls, libraries face budget and infrastructure constraints, and access delays or restrictions can prevent researchers from using peer-reviewed scholarship effectively.

What happens when research is inaccessible?
When trusted sources are hard to reach, alternative, inconsistent, or outdated materials often fill the gap, increasing the risk of misinformation and weakening citation pathways.

How does Zendy help address access challenges?
Zendy provides affordable and streamlined access to high-quality research, helping scholars, practitioners, and institutions discover and use knowledge without traditional barriers.

For partnership inquiries, please contact:
Sara Crowley Vigneau
Partnership Relations Manager
Email: s.crowleyvigneau@zendy.io


Beyond Peer Review. Research Integrity in University Press Publishing

University presses play a distinctive role in advancing research integrity and societal impact. Their publishing programmes are closely aligned with public-interest research in the humanities, social sciences, global health, education, and environmental studies, disciplines that directly inform policy and progress toward the UN Sustainable Development Goals. This work typically prioritises depth, context, and long-term understanding, often drawing on regional expertise and interdisciplinary approaches rather than metrics-driven outputs.

Research integrity is traditionally discussed in terms of editorial rigour, peer review, and ethical standards in the production of scholarship. These remain essential. But in an era shaped by digital platforms and AI-led discovery, they are no longer sufficient on their own. Integrity now also depends on what happens after publication: how research is surfaced, interpreted, reduced, and reused.

For university presses, this shift is particularly significant. Long-form scholarship, a core strength of press programmes, is increasingly encountered through abstracts, summaries, extracts, and automated recommendations rather than sustained reading. As AI tools mediate more first encounters with research, meaning can be subtly altered through selection, compression, or loss of context. These processes are rarely neutral. They encode assumptions about relevance, authority, and value.

This raises new integrity questions. Who decides which parts of a work are highlighted or omitted? How are disciplinary nuance and authorial intent preserved when scholarship is summarised? What signals remain to help readers understand scope, limitations, or evidentiary weight?

This isn’t to say that AI-driven discovery is inherently harmful, but it does require careful oversight. If university press scholarship is to continue informing research, policy, and public debate in meaningful ways, it needs to remain identifiable, properly attributed, and grounded in its original framing as it moves through increasingly automated discovery systems.

In this context, research integrity extends beyond how scholarship is produced to include how it is processed, surfaced, and understood. For presses with a public-interest mission, it now spans the full journey of a work, from how it is published to how it is discovered, interpreted, and used.

FAQ

Can Zendy help with AI-mediated research discovery?
Yes. Zendy’s tools help surface, summarise, and interpret research accurately, preserving context and authorial intent even when AI recommendations are used.

Does AI discovery harm research, or can it be beneficial?
AI discovery isn’t inherently harmful—it can increase visibility and accessibility. However, responsible use is essential to prevent misinterpretation or loss of nuance, ensuring research continues to inform policy and public debate accurately.

How does Zendy make research more accessible?
Researchers can explore work from multiple disciplines, including humanities, social sciences, global health, and environmental studies, all in one platform with easy search and AI-powered insights.

For partnership inquiries, please contact:
Sara Crowley Vigneau
Partnership Relations Manager
Email: s.crowleyvigneau@zendy.io


From Curator to Digital Navigator: Evolving Roles for Modern Librarians

With the growing integration of digital technologies in academia, librarians are becoming facilitators of discovery. They play a vital role in helping students and researchers find credible information, use digital tools effectively, and develop essential research skills. At Zendy, we believe this shift represents a new chapter for librarians, one where they act as mentors, digital strategists, and AI collaborators.

Zendy’s AI-powered research assistant, ZAIA, is one example of how librarians can enhance their work using technology. Librarians can utilise ZAIA to assist users in clarifying research questions, discovering relevant papers more efficiently, and understanding complex academic concepts in simpler terms. This partnership between human expertise and AI efficiency allows librarians to focus more on supporting critical thinking, rather than manual searching.

According to our latest survey, AI in Education for Students and Researchers: 2025 Trends and Statistics, over 70% of students now rely on AI for research. Librarians are adapting to this shift by integrating these technologies into their services, offering guidance on ethical AI use, research accuracy, and digital literacy.

However, this evolution also comes with challenges. Librarians must ensure users understand how to evaluate AI-generated content, check for biases, and verify sources. The focus is moving beyond access to information; it's now about ensuring that information is used responsibly and critically.

To support this changing role, here are some tools and practices modern librarians can integrate into their workflows:

  1. AI-Enhanced Discovery
    Using tools like ZAIA to help researchers refine queries and find relevant studies faster.
  2. Research Data Management
    Organising, preserving, and curating datasets for long-term academic use.
  3. Ethical AI and Digital Literacy Training
    Teaching researchers how to verify AI outputs, evaluate bias, and maintain academic integrity.
  4. Collaborative Digital Spaces
    Facilitating research communication through online repositories and discussion platforms.

In conclusion, librarians today are more than curators; they are digital navigators shaping how knowledge is accessed, evaluated, and shared. As technology continues to evolve, so will their role in guiding researchers and students through the expanding world of digital information.


Strategic AI Skills Every Librarian Must Develop


In 2026, librarians who understand how AI works will be better equipped to support students and researchers, organise collections, and help patrons find reliable information faster. Developing a few key AI skills can make everyday tasks easier and open up new ways to serve your community.

Why AI Skills Matter for Librarians

AI tools that recommend books, manage citations, or answer basic questions are becoming more common.

Learning how these tools work helps librarians:

  • Offer smarter, faster search results.
  • Improve cataloguing accuracy.
  • Provide better guidance to researchers and students.

Remember, AI isn’t replacing professional judgment; it’s supporting it.

Core AI Literacy Foundations

Before diving into specific tools, it helps to understand some basic ideas behind AI.

Machine Learning Basics:
Machine learning means teaching a computer to recognise patterns in data. In a library setting, this could mean analysing borrowing habits to suggest new titles or resources.
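
To make that idea concrete, here is a minimal sketch, using invented loan records, of how co-borrowing patterns could drive simple title suggestions. Real recommendation systems use far richer models; this only illustrates the pattern-recognition idea.

```python
# Toy example: titles often borrowed together are suggested as related.
# The loan records below are invented purely for illustration.
from collections import Counter
from itertools import combinations

loans_by_patron = {
    "p1": ["Clean Data", "Intro to Statistics", "Python Basics"],
    "p2": ["Intro to Statistics", "Python Basics"],
    "p3": ["Clean Data", "Python Basics"],
}

# Count how often each pair of titles appears in the same patron's history.
pair_counts = Counter()
for titles in loans_by_patron.values():
    for a, b in combinations(sorted(set(titles)), 2):
        pair_counts[(a, b)] += 1

def suggest(title, top_n=3):
    """Suggest titles most frequently co-borrowed with `title`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == title:
            scores[b] += n
        elif b == title:
            scores[a] += n
    return [t for t, _ in scores.most_common(top_n)]

print(suggest("Python Basics"))  # e.g. ['Clean Data', 'Intro to Statistics']
```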

Natural Language Processing (NLP):
NLP is what allows a chatbot or search tool to understand and respond to human language. It’s how virtual assistants can answer questions like “What are some journals about public health policy?”

Quick Terms to Know:

  • Algorithm: A set of steps an AI follows to make a decision.
  • Training Data: The information used to “teach” an AI system.
  • Neural Network: A type of computer model inspired by how the brain processes information.
  • Bias: When data or systems produce unfair or unbalanced results.

Metadata Enrichment With AI

Cataloguing is one of the areas where AI makes a noticeable difference.

  • Automated Tagging: AI tools can read through titles and abstracts to suggest keywords or subject headings (a toy sketch of the idea follows this list).
  • Knowledge Graphs: These connect related materials, for example, linking a book on climate change with recent journal articles on the same topic.
  • Bias Checking: Some systems can flag outdated or biased terminology in subject classifications.
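
As a toy illustration of automated tagging (not how any particular product works), the sketch below suggests keywords by counting informative words in a title and abstract. The stopword list and example text are invented; real cataloguing tools use trained models and controlled vocabularies.

```python
# Suggest subject keywords by counting informative words in title + abstract.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "on", "to", "for",
             "with", "this", "that", "is", "are", "we", "our", "how", "from"}

def suggest_tags(title, abstract, top_n=5):
    text = f"{title} {abstract}".lower()
    words = re.findall(r"[a-z]{3,}", text)  # keep words of 3+ letters
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(suggest_tags(
    "Climate Change and Coastal Migration",
    "We examine how climate change drives migration from coastal regions.",
))
# e.g. ['climate', 'change', 'coastal', 'migration', 'examine']
```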

Generative Prompt Skills

Knowing how to “talk” to AI tools is a skill in itself. The clearer your request, the better the result. Try experimenting with prompts like these:

  • Research Prompt: “List three recent studies on community reading programs and summarise their findings.”
  • Teaching Prompt: “Write a short activity plan for a workshop on evaluating online information sources.”
  • Summary Prompt: “Give me a brief overview of this article’s key arguments and methods.”

Adjusting tone or adding detail can change the outcome. It’s about learning how to guide the tool rather than letting it guess.
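
If you want to make that structure repeatable, a small template helper can encode the advice above. This is a hedged sketch: `ask_assistant` is a hypothetical stand-in for whatever interface your institution's AI tool actually provides.

```python
# A reusable prompt template: a structured request instead of a vague one-liner.
def build_prompt(task, topic, audience="university students", tone="plain"):
    """Compose a clear, structured prompt from named parts."""
    return (
        f"Task: {task}\n"
        f"Topic: {topic}\n"
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        "Constraints: cite sources where possible; say so if you are unsure."
    )

prompt = build_prompt(
    task="List three recent studies and summarise their findings",
    topic="community reading programs",
)
print(prompt)
# The string would then go to your AI tool, e.g.:
# answer = ask_assistant(prompt)  # hypothetical function
```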

Ethical Data Practices

AI tools can be useful, but they also raise questions about privacy and fairness. Librarians have always cared deeply about protecting patron information, and that remains true with AI.

  • Keep personal data anonymous wherever possible (one simple approach is sketched after this list).
  • Review AI outputs for signs of bias or misinformation.
  • Encourage clear policies around how data is stored and used.
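
As one example of keeping personal data anonymous, the sketch below pseudonymises patron IDs with a salted hash before any analysis. The salt handling and records are illustrative assumptions; real policy should be agreed with your institution.

```python
# Replace patron IDs with salted hashes so usage patterns can still be
# studied without storing who borrowed what.
import hashlib

SALT = "change-me-and-keep-secret"  # hypothetical; store securely, never in code

def pseudonymise(patron_id):
    """Return a short, opaque identifier derived from the patron ID."""
    return hashlib.sha256((SALT + patron_id).encode()).hexdigest()[:12]

loan_records = [("patron-0042", "Intro to Statistics")]  # invented example
safe_records = [(pseudonymise(pid), title) for pid, title in loan_records]
print(safe_records)  # [('<opaque id>', 'Intro to Statistics')]
```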

Ethical AI is part of a librarian’s duty to maintain trust and fairness.

Automating Everyday Tasks

AI can take over some of the small, routine jobs that fill up a librarian’s day.

  • Circulation: Systems can send overdue reminders automatically or manage renewals.
  • Chatbots: Basic questions like “What are the library hours?” can be handled instantly.
  • Collection Management: AI can spot patterns in borrowing data to suggest which books to keep, reorder, or retire.

Building Your Learning Path

Getting comfortable with AI doesn’t have to mean earning a new degree. Start small:

  • Take short online courses or micro-certifications in AI literacy.
  • Join librarian groups or online forums where people share practical tips.
  • Block out one hour a week to try out a new tool or attend a webinar.

A little consistent learning goes a long way.

Making AI Affordable

Many smaller libraries worry about cost, but not every tool is expensive.

  • Free and Low-Cost Tools: Some open-access AI platforms, like Zendy, offer free or affordable access to research databases and AI-powered features.
  • Shared Purchases: Partnering with other libraries to share licenses can cut costs.
  • Cloud Services: Pay-as-you-go plans mean you only pay for what you actually use.

There’s usually a way to experiment with AI without stretching the budget.

Showing Impact

Once AI tools are in use, it’s important to show their value. Track things like:

  • Time saved on cataloguing or circulation tasks.
  • Patron feedback on new services.
  • How often AI tools are used compared to manual systems.

Numbers matter, but so do stories. Sharing examples, like a student who found research faster thanks to a new search feature, can make your case even stronger.

And remember, the future of librarianship is about using AI tools in libraries thoughtfully to keep libraries relevant, reliable, and welcoming spaces for everyone.


Key Considerations for Training Library Teams on New Research Technologies

The integration of Generative AI into academic life appears to be a significant moment for university libraries. As trusted guides in the information ecosystem, librarians are positioned to help researchers explore this new terrain, but this transition requires developing a fresh set of skills.

Training your library team on AI-powered research tools could move beyond technical instruction to focus on critical thinking, ethical understanding, and human judgment.

Here is a proposed framework for a training program, organised by the new competencies your team might need to explore.

Foundational: Understanding Access and Use

This initial module establishes a baseline understanding of the technology itself.

  • Accessing the Platform: Teach the technical steps for using the institution’s approved AI tools, including authentication, subscription models, and any specific interfaces (e.g., vendor-integrated AI features in academic databases, institutional LLMs, etc.).
  • Core Mechanics: Explain what a Generative AI platform (like a Large Language Model) is and, crucially, what it is not. Cover foundational concepts like:
    • Training Data: Explain that models learn from large, fixed collections of text gathered at a point in time, so their outputs reflect the scope, biases, and age of that data.
    • Prompting Basics: Introduce basic prompt engineering, the art of crafting effective, clear queries to get useful outputs.
    • Hallucinations: Directly address the concept of “hallucinations,” or factually incorrect/fabricated outputs and citations, and emphasise the need for human verification.

Conceptual: Critical Evaluation and Information Management

This module focuses on the librarian’s core competency: evaluating information in a new context.

  • Locating and Organising: Train staff on how to use AI tools for practical, time-saving tasks, such as:
    • Generating keywords for better traditional database searches.
    • Summarising long articles to quickly grasp the core argument.
    • Identifying common themes across a set of resources.
  • Evaluating Information: This is perhaps the most critical skill. Teach a new layer of critical information literacy:
    • Source Verification: Always cross-check AI-generated citations, summaries, and facts against reliable, academic sources (library databases, peer-reviewed journals).
    • Bias Identification: Examine AI outputs for subtle biases, especially those related to algorithmic bias in the training data, and discuss how to mitigate this when consulting with researchers.
  • Using and Repurposing: Demonstrate how AI-generated material should be treated—as a raw output that must be heavily edited, critiqued, and cited, not as a final product.

Social: Communicating with AI as an Interlocutor

The quality of AI output is often dependent on the user’s conversational ability. This module suggests treating the AI platform as a possible partner in a dialogue.

  • Advanced Prompt Engineering: Move beyond basic queries to teach techniques for generating nuanced, high-quality results:
    • Assigning the AI a role (such as a ‘sceptical editor’ or ‘historical analyst’) to potentially shape a more nuanced response.
    • Practising iterative conversation, where librarians refine an output by providing feedback and further instructions, treating the interaction as an ongoing intellectual exchange.
  • Shared Understanding: Practise using the platform to help users frame their research questions more effectively. Librarians can guide researchers in using the AI to clarify a vague topic or map out a conceptual framework, turning the tool into a catalyst for deeper thought rather than a final answer generator.

Socio-Emotional Awareness: Recognising Impact and Building Confidence

This module addresses the human factor, building resilience and confidence.

  • Recognising the Impact of Emotions: Acknowledge the possibility of emotional responses, such as uncertainty about shifting professional roles or discomfort with rapid technological change, and facilitate a safe space for dialogue.
  • Knowing Strengths and Weaknesses: Reinforce the unique, human-centric value of the librarian: critical thinking, contextualising information, ethical judgment, and deep disciplinary knowledge, skills that AI cannot replicate. The AI could be seen as a means to automate lower-level tasks, allowing librarians to focus on high-value consultation.
  • Developing Confidence: Implement hands-on, low-stakes practice sessions using real-world research scenarios. Confidence grows from successful interaction, not just theoretical knowledge. Encourage experimentation and a “fail-forward” mentality.

Ethical: Acting Ethically as a Digital Citizen

Ethical use is the cornerstone of responsible AI adoption in academia. Librarians must be the primary educators on responsible usage.

  • Transparency and Disclosure: Discuss the importance of transparency when utilising AI. Review institutional and journal guidelines that may require students and faculty to disclose how and when AI was used in their work, and offer guidance on how to properly cite these tools.
  • Data Privacy and Security: Review the potential risks associated with uploading unpublished, proprietary, or personally identifiable information (PII) to public AI services. Establish and enforce clear library policies on what data should never be shared with external tools.
  • Copyright and Intellectual Property (IP): Discuss the murky legal landscape of AI-generated content and IP. Emphasise that AI models are often trained on copyrighted material and that users are responsible for ensuring their outputs do not infringe on existing copyrights. Advocate for using library-licensed, trusted-source AI tools whenever possible.
  • Combating Misinformation: Position the librarian as the essential arbiter against the spread of AI-generated misinformation. Training should include spotting common AI red flags, teaching users how to think sceptically, and promoting the library’s curated, authoritative resources as the gold standard.

Digital Information Literacy Guidelines for Academic Libraries

Information literacy is the skill of finding, evaluating, and using information effectively. Data literacy is the skill of understanding numbers and datasets, reading charts, checking how data was collected, and spotting mistakes. Critical thinking is the skill of analysing information, questioning assumptions, and making sound judgments. With so many digital tools today, students and researchers need all three skills, not just to find information, but also to make sense of it and communicate it clearly.

Why Academic Libraries Should Offer Literacy Programs

Let’s face it: research can be overwhelming. Over 5 million research papers are published every year. This information overload means researchers spend 25–30% of their time finding and reviewing academic literature, according to the International Study: Perceptions and Behavior of Researchers. Predatory journals, low-quality datasets, and confusing search results can make learning stressful. Libraries are more than book storage; they’re a place to build practical skills. Programs that teach information and data literacy help students think critically, save time, and feel more confident with research.

Key Skills Students, Researchers, and Librarians Need

Finding and Using Scholarly Content

Knowing how to search a database efficiently is a big deal. Students should learn how to use filters, Boolean logic (for example, ("climate change" AND migration) NOT simulation), subject headings and, of course, intelligent search. They should also know the difference between journal articles, conference papers, and open-access resources.

Evaluating Sources and Data

Not all information is equal. Programs should teach students how to check if sources are reliable, understand peer review, and spot bias in datasets. A few practical techniques, like cross-checking sources or looking for data provenance, can make research much stronger.

Managing Information Ethically

Citing sources properly, avoiding plagiarism, and respecting copyright are essentials. Tools like Zotero or Mendeley help keep references organised, so students spend less time managing files and more time on research.

Sharing Findings Clearly

Communicating is sharing, and sharing is caring. It’s one thing to collect information; it’s another to communicate it. Programs should encourage students to use infographics, slides, or storytelling techniques to make research more memorable. Ultimately, clear communication ensures that the work they’ve done can be understood, used, and appreciated by others.

Frameworks That Guide Literacy Programs

  • ACRL Framework: Provides six key concepts for teaching information literacy.
  • EU DigComp / DigCompEdu: Covers digital skills for students and educators.
  • Data Literacy Project: Helps students understand how to work with datasets, complementing traditional research skills.

These frameworks help librarians structure programs so students get consistent, practical guidance.

Steps to Build a Digital Literacy Program

  1. Audit Campus Needs: Talk to students and faculty, see what resources exist, and find gaps.
  2. Set Learning Goals: Decide what students should be able to do at the end, and make goals measurable.
  3. Select Content and Tools: Choose databases, software, and datasets that fit the library’s budget and tech setup.
  4. Create Short, Modular Lessons: Break skills into manageable pieces that build on each other.
  5. Launch and Improve: Introduce the program, gather feedback, and adjust lessons based on what works and what doesn’t.

Teaching Strategies and Online Tools

Flipped and Embedded Instruction

  • Students watch a short video about search techniques at home, then practice in class.
  • A librarian might join a research methods class, helping students build search strings live.
  • Pre-class quizzes on topics like peer review versus predatory journals prepare students for hands-on exercises.

Short Videos and Tutorials

Quick videos (2–5 minutes) can teach one skill at a time, like citation management, evaluating sources, or basic data visualisation. Include captions, transcripts, and small practice exercises to reinforce learning.

AI Summaries and Chatbots

AI tools can summarise articles, suggest keywords, highlight main points, and even draft bibliographies. But they aren’t perfect: they can make mistakes, miss nuances, or misread complex tables. Human oversight is still important.

Free Resources and Open Datasets

Students can practice with free databases and datasets like DOAJ, arXiv, Kaggle, or Zenodo. Using these open-access resources keeps programs affordable while providing real-world examples.

Checking if Students Are Learning

  • Before and After Assessments: Simple quizzes or tasks to see how skills improve.
  • Performance Rubrics: Compare beginner, developing, and advanced levels in searching, evaluating, and presenting data.
  • Analytics: Track which videos or tools students use most to improve future lessons.

Working With Faculty

  • Embedded Workshops: Librarians teach skills directly tied to assignments.
  • Joint Assignments: Faculty design research projects that naturally teach literacy skills.
  • Faculty Training: Show instructors how to integrate digital literacy into their courses.

Tackling Challenges

  • Staff Training: Librarians may need extra help with data tools. Peer mentoring and workshops work well.
  • Limited Budgets: Open access tools, collaborative licensing, and free platforms help make programs feasible.
  • Distance Learners: Make videos and tutorials accessible anytime, and account for different time zones and internet access.

Looking Ahead

AI, open science, and global collaboration are changing research. AI can personalise learning, but it still needs oversight. Open science and FAIR data principles (set of guidelines for making research data Findable, Accessible, Interoperable, and Reusable to both humans and machines) encourage transparency and reproducibility. Libraries can also connect with international partners to share resources and best practices.

FAQs

How long does a program take to launch?
Basic services can start in six months; full programs usually take 1–2 years.

Do humanities students need data skills?
Yes, though the focus is more on qualitative analysis and digital humanities tools.

Where can libraries find free datasets?
Government repositories, Kaggle, Zenodo, and university archives.

Can small libraries succeed without data specialists?
Yes, faculty collaboration and online resources can cover most needs.


From Boolean to Intelligent Search: A Librarian’s Guide to Smarter Information Retrieval

As a librarian, you’ve always been the person people turn to when they need help finding answers. But the way we search for information is changing fast. Databases are growing, new tools keep appearing, and students expect instant results. This is where AI for libraries shows its real value: helping you make sense of it all.

From Boolean to Intelligent Search

Traditional search is still part of everyday library work. It depends on logic and structure: keywords, operators, and carefully built queries. But AI adds something new. It doesn’t just look for words; it tries to understand what someone means.

If a researcher searches for “climate change effects on migration,” an AI-powered tool doesn’t just pull results with those exact words. It also looks for studies about environmental displacement, regional challenges, and social impacts.

This means you can spend less time teaching people how to “speak database” and more time helping them understand the research they find.

The Evolution of Library Search

Traditional search engines focus on matching keywords, which often leads to long lists of results. With AI, search tools can now read queries in natural language, just the way people ask questions, and still find accurate, relevant material.

Natural language processing (NLP) and machine learning (ML) make it possible for search systems to connect related ideas, even when the exact words aren’t used. Features like semantic search and vector databases help AI recognise patterns and suggest other useful directions for exploration.
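
A minimal sketch of the semantic-search idea: documents are ranked by cosine similarity between embedding vectors rather than by shared keywords. The `embed` function and its tiny hand-made vectors are stand-ins for a real embedding model.

```python
# Rank documents by cosine similarity of embeddings to the query embedding.
import numpy as np

def embed(text):
    # Hypothetical: a real system would call an embedding model here.
    fake_vectors = {
        "climate change effects on migration": np.array([0.9, 0.8, 0.1]),
        "environmental displacement in coastal regions": np.array([0.85, 0.75, 0.2]),
        "tax policy and small businesses": np.array([0.1, 0.05, 0.9]),
    }
    return fake_vectors[text]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "climate change effects on migration"
docs = [
    "environmental displacement in coastal regions",
    "tax policy and small businesses",
]
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # the displacement paper ranks first despite sharing no keywords
```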

Examples of AI Tools Librarians Can Use

Tool / Platform | What It Does | Why It Helps Librarians
Zendy | A platform that combines literature discovery, AI summaries, keyphrase highlighting, and PDF analysis | Helps librarians and researchers access, read, and understand academic papers more easily
Consensus | An AI-powered academic search engine that summarises findings from peer-reviewed studies | Helps with literature reviews and citation management
Ex Libris Primo | Uses AI to support discovery and manage metadata | Improves record accuracy and helps users find what they need faster
Meilisearch | A fast, flexible search engine that uses NLP | Makes it easier to search large databases efficiently

The Ethics of Intelligent Search

Algorithms influence what users see and what they might miss. That’s why your role is so important. You can help users question why certain results appear on top, encourage critical thinking, and remind them that algorithms are not neutral.

Digital literacy today isn’t just about knowing how to search; it’s about understanding how search itself works.

In Conclusion

AI tools for librarians are becoming easier to use and more helpful every day. Some platforms now include features like summarisation, citation analysis, and even plans to highlight retracted papers, something Zendy is working toward.

Trying out these tools can make your work smoother: faster reference responses, smarter cataloguing, and better guidance for researchers who often feel lost in the flood of information.

AI isn’t replacing your expertise; it’s helping you use it in new ways. And that’s what makes this moment exciting for librarians everywhere.


Why AI Like ChatGPT Still Quotes Retracted Papers


AI models like ChatGPT are trained on massive datasets collected at specific moments in time, which means they lack awareness of papers retracted after their training cutoff. When a scientific paper gets retracted, whether due to errors, fraud, or ethical violations, most AI systems continue referencing it as if nothing happened. This creates a troubling scenario where researchers using AI assistants might unknowingly build their work on discredited foundations.

In other words: retracted papers are the academic world’s way of saying “we got this wrong, please disregard.” Yet the AI tools designed to help us navigate research faster often can’t tell the difference between solid science and work that’s been officially debunked.

ChatGPT and other assistants tested

Recent studies examined how popular AI research tools handle retracted papers, and the results were concerning. Researchers tested ChatGPT, Google’s Gemini, and similar language models by asking them about known retracted papers. In many cases, the models not only failed to flag the retractions but actively praised the withdrawn studies.

One investigation found that ChatGPT referenced retracted cancer imaging research without any warning to users, presenting the flawed findings as credible. The problem extends beyond chatbots to AI-powered literature review tools that researchers increasingly rely on for efficiency.

Common failure scenarios

The risks show up across different domains, each with its own consequences:

  • Medical guidance: Healthcare professionals consulting AI for clinical information might receive recommendations based on studies withdrawn for data fabrication or patient safety concerns
  • Literature reviews: Academic researchers face citation issues when AI assistants suggest retracted papers, damaging credibility and delaying peer review
  • Policy decisions: Institutional leaders making evidence-based choices might rely on AI-summarised research without realising the underlying studies have been retracted

A doctor asking about treatment protocols could unknowingly follow advice rooted in discredited research. Meanwhile, detecting retracted citations manually across hundreds of references proves nearly impossible for most researchers.

How Often Retractions Slip Into AI Training Data

The scale of retracted papers entering AI systems is larger than most people realise. Crossref, the scholarly metadata registry that tracks digital object identifiers (DOIs) for academic publications, reports thousands of retraction notices annually. Yet many AI models were trained on datasets harvested years ago, capturing papers before retraction notices appeared.

Here’s where timing becomes critical. A paper published in 2020 and included in an AI training dataset that same year might get retracted in 2023. If the model hasn’t been retrained with updated data, it remains oblivious to the retraction. Some popular language models go years between major training updates, meaning their knowledge of the research landscape grows increasingly outdated.

Lag between retraction and model update

Training Large Language Models requires enormous computational resources and time, which explains why most AI companies don’t continuously update their systems. Even when retraining occurs, the process of identifying and removing retracted papers from massive datasets presents technical challenges that many organisations haven’t prioritised solving.

The result is a growing gap between the current state of scientific knowledge and what AI assistants “know.” You might think AI systems could simply check retraction databases in real-time before responding, but most don’t. Instead, they generate responses based solely on their static training data, unaware that some information has been invalidated.

Risks of Citing Retracted Papers in Practice

The consequences of AI-recommended retracted papers extend beyond embarrassment. When flawed research influences decisions, the ripple effects can be substantial and long-lasting.

Clinical decision errors

Healthcare providers increasingly turn to AI tools for quick access to medical literature, especially when facing unfamiliar conditions or emerging treatments. If an AI assistant recommends a retracted study on drug efficacy or surgical techniques, clinicians might implement approaches that have been proven harmful or ineffective. The 2020 hydroxychloroquine controversy illustrated how quickly questionable research spreads. Imagine that dynamic accelerated by AI systems that can’t distinguish between valid and retracted papers.

Policy and funding implications

Government agencies and research institutions often use AI tools to synthesise large bodies of literature when making funding decisions or setting research priorities. Basing these high-stakes choices on retracted work wastes resources and potentially misdirects entire fields of inquiry. A withdrawn climate study or economic analysis could influence policy for years before anyone discovers the AI-assisted review included discredited research.

Academic reputation damage

For individual researchers, citing retracted papers carries professional consequences. Journals may reject manuscripts, tenure committees question research rigour, and collaborators lose confidence. While honest mistakes happen, the frequency of such errors increases when researchers rely on AI tools that lack retraction awareness, and the responsibility still falls on the researcher, not the AI.

Why Language Models Miss Retraction Signals

The technical architecture of most AI research assistants makes them inherently vulnerable to the retraction problem. Understanding why helps explain what solutions might actually work.

Corpus quality controls lacking

AI models learn from their training corpus, the massive collection of text they analyse during development. Most organisations building these models prioritise breadth over curation, scraping academic databases, preprint servers, and publisher websites without rigorous quality checks.

The assumption is that more data produces better models, but this approach treats all papers equally regardless of retraction status. Even when training data includes retraction notices, the AI might not recognise them as signals to discount the paper’s content. A retraction notice is just another piece of text unless the model has been specifically trained to understand its significance.

Sparse or inconsistent metadata

Publishers handle retractions differently, creating inconsistencies that confuse automated systems:

  • Some journals add “RETRACTED” to article titles
  • Others publish separate retraction notices
  • A few quietly remove papers entirely

This lack of standardisation means AI systems trained to recognise one retraction format might miss others completely. Metadata, the structured information describing each paper, often fails to consistently flag retraction status across databases. A paper retracted in PubMed might still appear without warning in other indexes that AI training pipelines access.

Hallucination and overconfidence

AI hallucination occurs when models generate plausible-sounding but false information, and it exacerbates the retraction problem. Even if a model has no information about a topic, it might confidently fabricate citations or misremember details from its training data. This overconfidence means AI assistants rarely express uncertainty about the papers they recommend, leaving users with no indication that additional verification is needed.

Real-Time Retraction Data Sources Researchers Should Trust

While AI tools struggle with retractions, several authoritative databases exist for manual verification. Researchers concerned about citation integrity can cross-reference their sources against these resources.

Retraction Watch Database

Retraction Watch operates as an independent watchdog, tracking retractions across all academic disciplines and publishers. Their freely accessible database includes detailed explanations of why papers were withdrawn, from honest error to fraud. The organisation’s blog also provides context about patterns in retractions and systemic issues in scholarly publishing.

Crossref metadata service

Crossref maintains the infrastructure that assigns DOIs to scholarly works, and publishers report retractions through this system. While coverage depends on publishers properly flagging retractions, Crossref offers a comprehensive view across multiple disciplines and publication types. Their API allows developers to build tools that automatically check retraction status, a capability that forward-thinking platforms are beginning to implement.
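
As a sketch of what such a tool might look like, the snippet below asks Crossref’s public REST API for update notices that target a given DOI and flags any typed as retractions. The `updates:{doi}` filter and `update-to` field reflect Crossref’s documented metadata as best I understand it; check the current API docs before relying on this.

```python
# Minimal retraction check against Crossref's public REST API.
import requests

def crossref_retraction_notices(doi):
    """Return update notices (retractions, corrections, ...) that target `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        update
        for item in items
        for update in item.get("update-to", [])
        if update.get("DOI", "").lower() == doi.lower()
    ]

notices = crossref_retraction_notices("10.1234/example-doi")  # placeholder DOI
if any(n.get("type") == "retraction" for n in notices):
    print("Flag: this DOI has a retraction notice.")
```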

PubMed retracted publication tag

For medical and life sciences research, PubMed provides reliable retraction flagging with daily updates. The National Library of Medicine maintains this database with rigorous quality control, ensuring retracted papers receive prominent warning labels. However, this coverage is limited to biomedical literature, leaving researchers in other fields without equivalent resources.

Database | Coverage | Update Speed | Access
Retraction Watch | All disciplines | Real-time | Free
Crossref | Publisher-reported | Variable | Free API
PubMed | Medical/life sciences | Daily | Free

Responsible AI Starts with Licensing

When AI systems access research papers, articles, or datasets, authors and publishers have legal and ethical rights that need protection. Ignoring these rights can undermine the sustainability of the research ecosystem and diminish trust between researchers and technology providers.

One of the biggest reasons AI tools get it wrong is that they often cite retracted papers as if they’re still valid. When an article is retracted, for example because the peer review process was not conducted properly or the work failed to meet established standards, most AI systems don’t know; the paper simply remains part of their training data. This is where licensing plays a crucial role. Licensed data ensures that AI systems are connected to the right sources and continuously updated with accurate, publisher-verified information. It’s the foundation for what platforms like Zendy aim to achieve: making sure the content is clean and trustworthy.

Licensing ensures that content is used responsibly. Proper agreements between AI companies and copyright holders allow AI systems to access material legally while providing attribution and, when appropriate, compensation. This is especially important when AI tools generate insights or summaries that are distributed at scale, potentially creating value for commercial platforms without benefiting the sources of the content.

In conclusion, consent-driven licensing helps build trust. Publishers and authors can choose whether and how their work is incorporated into AI systems, ensuring that content is included only when rights are respected. Advanced AI platforms, such as Zendy, can even track which licensed sources contributed to a particular output, providing accountability and a foundation for equitable revenue sharing.


5 Tools Every Librarian Should Know in 2025


The role of librarians has always been about connecting people with knowledge. But in 2025, with so much information floating around online, the challenge isn’t access; it’s sorting through the noise and finding what really matters. This is where AI for libraries is starting to make a difference. Here are five tools worth keeping in your back pocket this year.

1. Zendy

Zendy is a one-stop AI-powered research library that blends open access with subscription-based resources. Instead of juggling multiple platforms, librarians can point students and researchers to one place where they’ll find academic articles, reports, and AI tools to help with research discovery and literature review. With its growing use of AI for libraries, Zendy makes it easier to summarise research, highlight key ideas, and support literature reviews without adding to the librarian’s workload.

2. LibGuides

Still one of the most practical tools for librarians, LibGuides makes it easy to create tailored resource guides for courses, programs, or specific assignments. Whether you’re curating resources for first-year students or putting together a subject guide for advanced research, it helps librarians stay organised while keeping information accessible to learners.

3. OpenRefine

Cleaning up messy data is nobody’s favourite job, but it’s a reality when working with bibliographic records or digital archives. OpenRefine is like a spreadsheet with superpowers: it can quickly detect duplicates, fix formatting issues, and make large datasets more manageable. For librarians working in cataloguing or digital collections, it saves hours of tedious work.

4. PressReader

Library patrons aren’t just looking for academic content; they often want newspapers, magazines, and general reading material too. PressReader gives libraries a simple way to provide access to thousands of publications from around the world. It’s especially valuable in public libraries or institutions with international communities.

5. OCLC WorldShare

Managing collections and sharing resources across institutions is a constant task. OCLC WorldShare helps libraries handle cataloguing, interlibrary loans, and metadata management. It’s not flashy, but it makes collaboration between libraries smoother and ensures that resources don’t sit unused when another community could benefit from them.

Final thought

The tools above aren’t just about technology; they’re about making everyday library work more practical. Whether it’s curating resources with Zendy, cleaning data with OpenRefine, or sharing collections through WorldShare, these platforms help librarians do what they do best: guide people toward knowledge that matters.