Q&A with Duke’s Librarian for Artificial Intelligence Learning

Artificial intelligence is reshaping how we all learn, create, and consume information. At Duke, that transformation is being met not with fear or uncritical excitement, but with curiosity, critical thinking, and conversation—led in part by Duke librarians.
Hannah Rozear, Librarian for Biological Sciences, Global Health, and Artificial Intelligence Learning, works with Duke students and faculty on using AI technologies in responsible, creative, and impactful ways. Earlier this year, she collaborated with a small team of Duke undergraduates and researchers to develop the AI Ethics Learning Toolkit, a new resource that invites students and faculty to ask big questions about AI’s impact on knowledge, creativity, and society. The toolkit is built around discussion prompts and classroom activities that make space for thoughtful, values-driven engagement with generative AI.
We recently caught up with Rozear to ask how the toolkit came to be, what it’s teaching Duke students about the promise and pitfalls of AI, and why libraries are uniquely positioned to help communities approach these new technologies with reflection and a healthy dose of perspective.
Conversations about artificial intelligence are unfolding everywhere—in the news, throughout higher ed, and across Duke’s campus. What motivated you and your collaborators to develop the AI Ethics Learning Toolkit? How did you envision it helping the Duke community to engage thoughtfully with generative AI?
There’s no shortage of buzz about the good, the bad, and the ugly of AI’s impacts. Much of the conversation swings between extremes—either AI-hype or doomsday predictions about world-ending robots—without as much attention to the subtler human and social dimensions of these technologies. At a liberal arts institution like Duke, our team saw an opportunity to explore AI from a more humanistic perspective.
Our design team—two undergraduates, an education scholar, and a librarian—decided that a toolkit with adaptable classroom activities and discussion prompts would have the greatest potential impact. With students as collaborators, we built a resource that centers student perspectives and experiences. Because there’s no “one-size-fits-all” approach to conversations about AI, the toolkit aims to offer flexible starting points, letting faculty choose topics that align with their courses and adapt activities to fit their teaching goals and available time.
What challenges do you see Duke faculty and students currently facing as they try to navigate the use of generative AI tools responsibly? How does this toolkit help to address those challenges?
Generative AI can still feel like a taboo topic, and both faculty and students struggle to determine where the line is between acceptable and inappropriate use. The toolkit doesn’t address that question head-on. Instead, it offers nuanced and creative ways to spark meaningful conversations about AI use in the classroom. The term “AI literacy” can be unhelpfully broad, but it refers to understanding how these technologies work “under the hood” and recognizing their broader societal impacts. By helping faculty and students engage critically and thoughtfully with AI, the toolkit encourages curiosity, reflection, and ethical awareness—approaches that will help students make informed, responsible decisions about when and how to use AI tools. The toolkit also helps normalize conversations about AI, supporting Duke’s broader commitment to fostering ethical engagement with these technologies.
How do you see the role of the library—and librarians in particular—evolving as AI becomes a more integral part of research, teaching, and everyday information use?
My first experience with ChatGPT’s limitations came in 2023, when I encountered one of its now-famous citation “hallucinations.” In that moment, I realized two things: 1) everyone would be clamoring to use this tool for research and information-seeking, and 2) libraries would have a critical role in helping users understand the limitations of this seemingly magical technology.
Libraries have often been at the forefront of major technological shifts (remember the internet?), and the rise of generative AI is no exception. Library staff bring an incredible range of expertise—spanning coding, cataloging, archives, data, and research—and we’ve long been advocates for privacy, ethical technology use, and open access to knowledge.
As AI becomes increasingly integrated in everyday information practices, librarians play a key role in helping students and faculty evaluate AI outputs, recognize potential biases, and develop critical AI literacy skills. Our broad, systems-level view of information and technology uniquely positions us to guide our community in using these tools thoughtfully and responsibly.
Can you share any examples of how Duke students or faculty have been using the toolkit to think more deeply—or teach more effectively—about AI and its social implications?
We shared the toolkit with Thompson Writing Program faculty at the beginning of the fall 2025 semester, and many appreciated having simple, ready-made entry points for talking about AI with their students. Sometimes getting over the initial hump—simply naming AI in the classroom—opens the door to important conversations with students. In collaboration with one writing instructor, we tried out a toolkit activity in which students used ChatGPT to generate a bibliography for their research topics. One group discovered that every citation had been fabricated, which brought home the lesson about the need to fact-check AI-generated information carefully. Another faculty member shared that she and her students had created a collaborative AI policy together after I visited their class.
The toolkit has also gained attention beyond Duke. We’ve heard from librarians, educators, and academics across North Carolina (and as far away as Australia!) who are interested in adapting the materials for their own institutions. Their feedback has been encouraging, and we hope to incorporate these new perspectives as we continue to refine and expand the toolkit.
The topics and conversation starters in the toolkit seem intentionally simple and direct. For example: Is AI theft? Is AI a spy? Who benefits from AI? But there’s a lot to unpack in each of those questions. Which prompts do you think have particularly resonated and started good conversations with Duke students?
The students on our design team felt that concise, provocative questions offered more engaging hooks for their peers. A prompt like “Is AI theft?” sparks curiosity more quickly than a more formal question like “What are the impacts of AI on the copyright landscape?”—even though both essentially ask the same thing.
When I introduced the toolkit to students in a Focus AI program (a cohort of first-year students), the question “Can I trust AI?” really resonated. Many of them have seen firsthand how AI can make things up, but they’ve also experienced how valuable these tools can be when trying to grasp a complex topic. That tension between usefulness and reliability sparked thoughtful discussion about what “trust” in AI might look like.
In an Environmental Science course I worked with, students gravitated toward the question, “Is AI sustainable?” Many were concerned about the environmental costs of AI technologies, and some expressed skepticism about whether the benefits outweighed the harms. That conversation opened space to think critically about the broader systems, infrastructure, and energy use that underpin AI’s rapid expansion.
These questions are intentionally provocative. They may even come across as overly critical. But combined with supporting resources and guided classroom discussion, they invite students to explore multiple perspectives.
What tips do you have for people to help them identify if AI-generated information is true and reliable?
Be skeptical and fact-check any suspicious information. If it seems too good or too easy to be true, it probably is! Fact-checking can be as simple as opening a new browser tab and looking into the author, organization, or claim that you find suspicious. I think this process of fact-checking will become one of the most important twenty-first-century research skills. To help with this, I created the SNIFF test, a framework for evaluating AI-generated information:
Source check
Navigate to a new tab
Investigate citations
Fact-check everything
Fight the urge to copy + paste!
Don’t get discouraged if certain sources or concepts conflict with one another or require a deeper dive. Librarians are well positioned to teach and model these strategies, helping students become more discerning fact-checkers and users of AI-generated information. Professors are also experts in their fields and can help students navigate to the best sources in their discipline.
How does the Duke Libraries’ involvement in projects like this reflect our broader mission to help students and faculty think critically about the information they encounter?
Information requires context. In an era where much of what we consume comes through social media feeds or AI-generated summaries, we’re experiencing what you could call a “context collapse.” We see snippets of information, detached from their sources, and the constant deluge of content can make it harder to pause, verify, and critically evaluate what we’re seeing. From misinformation and disinformation to AI hallucinations and deepfakes, students and faculty increasingly look to the library for help navigating this complex information landscape. That’s part of our mission. Unlike AI chatbots trained on terabytes of largely uncurated internet data, the library’s carefully curated collections—and the expertise of the people who steward them—offer depth, reliability, and perspective that AI alone can’t replace.
