AI and Libraries: Why Librarians May Become Arbiters of Reality

At a library in Ciudad de México, the floor is so highly polished that the book stacks and windows are reflected, creating a somewhat surreal and confusing scene. A single visitor is seated at a table in the center. (Photo by Julio Lopez)

I’ve written a lot for my newsletter The Bottom Line about AI and publishing—the copyright lawsuits, the controversy over AI detection and disclosure, the new AI-based editorial services—but a recent Book Industry Study Group webinar offered specific insights I hadn’t heard before: librarians as the publishing industry’s early warning system. Librarians are not working in the realm of “what if” when it comes to AI; they’re managing the real-world effects right now.

BISG executive director Brian O’Leary presented survey data about libraries and AI, followed by a conversation with R. David Lankes, a library scholar who has conducted research in the field.

BISG’s survey on AI and what it says about librarians

BISG conducted a survey on AI use across the publishing supply chain, drawing responses from publishers, librarians, and industry partners. The full dataset was presented in an earlier webinar; this session focused specifically on the library segment.

The library respondents skewed toward experienced professionals at larger institutions (100+ employees). More than half reported 11 or more years in the field, with the largest single group falling in the 15-to-25-year range. The headline finding: librarians have a notably higher rate of resistance to AI. About a third reported not using AI and having no plans to do so, compared with 20% across the full dataset.

Fewer than half of library respondents said their institution was actively using AI. Thirty-one percent said they were unsure, a figure O’Leary said may reflect the reality of working in larger institutions, where not everything happening organizationally is visible to individual staff. The most common response to a question about institutional AI policy wasn’t encouragement or discouragement but no policy at all.

Librarian objections to AI included catalogs increasingly filled with low-quality AI-generated work, staff time consumed by identifying and removing “slop,” and the burden of countering false or misleading information—that is, dealing with work that runs directly counter to librarians’ core mission.

About 34% of librarians described themselves as ethically opposed to AI use, a striking figure when you consider that the respondents are information professionals.

Librarians resist AI not out of ignorance but out of skepticism

The BISG session highlighted distinct conversations about AI playing out simultaneously across publishing and libraries alike, conversations in which people are often talking past one another.

One conversation relates to AI literacy. Some argue that AI is a tool, that information professionals should understand tools, and that librarians should therefore learn to use AI and help patrons navigate it. Versions of this argument accompany every major information technology shift, and it carries an implicit assumption: that resistance stems from unfamiliarity, and that education will resolve it. It’s how the field responded to the internet, to mobile, to social media.

But Lankes argued that, so far, this model doesn’t fit what’s happening in the library community. The resistance isn’t coming from people who haven’t engaged with the technology; it’s coming disproportionately from people who have. When he conducted focus groups with librarians in the “never AI” camp, he found people who could explain large language models, discuss retrieval-augmented generation, and articulate in technical terms why they considered the tools unreliable. They’ve concluded that a library’s adoption of AI would send the wrong signal about what a library fundamentally is.

Nonetheless, AI can clearly aid libraries’ mission

Lankes offered an example showing that, despite the skepticism, there is a clear role for AI in supporting libraries’ work. A collection of music materials at the University of Texas—thousands of albums and recordings—was effectively invisible to users because it had never been fully cataloged, and the resources to do so through traditional means simply weren’t there. So a team developed a solution: use AI to create stub records, each tagged with a confidence factor indicating how certain the AI was about its own identification. The least-certain records could be flagged for future human review, and high-use materials could be prioritized for fully human-generated cataloging.
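To make that workflow concrete, here is a minimal Python sketch of confidence-based triage in the spirit of what Lankes described. The field names, threshold values, and queue labels are my own illustrative assumptions, not details of the Texas project.

```python
from dataclasses import dataclass

# Illustrative sketch only: record fields, thresholds, and queue names
# are hypothetical, not taken from the University of Texas project.

@dataclass
class StubRecord:
    item_id: str
    title: str          # AI-extracted title guess
    confidence: float   # model's self-reported certainty, 0.0 to 1.0
    annual_uses: int = 0

REVIEW_THRESHOLD = 0.6  # below this, flag the record for human review
HIGH_USE_CUTOFF = 25    # above this, prioritize full human cataloging

def triage(records: list[StubRecord]) -> dict[str, list[StubRecord]]:
    """Route AI-generated stub records into queues by confidence and use."""
    queues: dict[str, list[StubRecord]] = {
        "full_catalog": [],   # heavy use: send to a human cataloger
        "needs_review": [],   # low certainty: flag for future review
        "publish_stub": [],   # good enough to make the item findable
    }
    for record in records:
        if record.annual_uses >= HIGH_USE_CUTOFF:
            queues["full_catalog"].append(record)
        elif record.confidence < REVIEW_THRESHOLD:
            queues["needs_review"].append(record)
        else:
            queues["publish_stub"].append(record)
    return queues

if __name__ == "__main__":
    sample = [
        StubRecord("a1", "Symphony No. 5 (1963 pressing)", 0.92, annual_uses=3),
        StubRecord("a2", "Unlabeled field recording", 0.41),
        StubRecord("a3", "Complete Bach cantatas", 0.88, annual_uses=40),
    ]
    for queue, items in triage(sample).items():
        print(queue, [r.item_id for r in items])
```

The design point is that the AI’s uncertainty becomes routing metadata: nothing is silently trusted, and scarce human attention goes to the records that need it most.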

This is not an AI literacy question but a mission question: can AI help librarians do what they exist to do? There are, of course, parallels to publishing. In conversations I’ve reported on around accessibility, the same logic applies: without AI tools, certain backlist titles might never receive the alt text, metadata, and formatting updates needed to serve readers with disabilities. The choice isn’t necessarily between AI and a better alternative, but between AI and nothing.

AI and the scalability problem

One of the more fascinating parts of the conversation came near the end, in relation to peer review. Peer review is foundational to academic publishing and, more broadly, to the mechanisms by which society establishes what is known and credible. Until now, that system has functioned because the rate of human knowledge production and the rate of human capacity to review it have been roughly matched. AI is breaking that equilibrium.

As Lankes put it, peer review is simply not scalable given the volume AI can produce. We can’t use the same AI tools that are accelerating content creation to evaluate that content’s credibility, because the trust isn’t there. The result is a genuine threat not just to publishing workflows but to the broader infrastructure by which society determines what is true.

I find this problem often overlooked in discussions about AI: the question isn’t only what AI does to writing and publishing, but what it does to the information environment our industry depends on. You can see it clearly right now in how much authors worry that declining page reads in Kindle Unlimited may be tied to a growing glut of titles—a result of AI-assisted and AI-generated works entering the Amazon marketplace. If readers have less confidence that unfamiliar books will meet their needs or be trustworthy, they’re more likely to stick with what they know rather than take a chance on something new.

Libraries as arbiters of reality

Lankes suggested that libraries may be moving toward a new function that I would describe as “arbiters of reality.” He described librarians as “trusted humans” in the loop: people who can vouch for a source because they have it in their collection, know where it came from, and can go look at it. The trust that librarians have maintained over decades, even as institutional trust has declined across society, gives them something AI systems don’t have: credibility with the people they serve. When a student gets an answer from AI, Lankes noted, they’ve been trained to be skeptical. When it comes from the library, they tend to believe it.

I imagine that’s why 34% of library respondents are ethically opposed to AI. They’re trying to protect something real. Lankes suggested that AI literacy—teaching people to be appropriately skeptical of AI-generated content—may be hitting a psychological ceiling. If people are required to question everything they encounter, the cognitive burden becomes unsustainable. What’s needed, he suggested, isn’t more instruction but something closer to a wellness intervention: relief from constant epistemic vigilance.
