James Boyle is the William Neal Reynolds Professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former Chair of Creative Commons. The Line: AI and the Future of Personhood (2024) is his latest book, published this fall by MIT Press in print, but also as a free open access e-book, made possible by the Duke Open Monograph Award. More information about the book and links to the open access version can be found at thepublicdomain.org.
The following excerpt is reprinted with permission.
In June of 2022 a man named Blake Lemoine told reporters at the Washington Post that he thought the computer system he worked with was sentient. By itself, that does not seem strange. The Post is one of the United States’ finest newspapers, and its reporters are used to hearing from people who think that the CIA is attempting to read their brainwaves or that prominent politicians are running a child sex trafficking ring from the basement of a pizzeria. (It is worth noting that the pizzeria had no basement.) But Lemoine was different; for one thing, he was not some random person off the street. He was a Google engineer, and Google has since fired him. For another thing, the “computer system” wasn’t an apparently malevolent Excel program or Apple’s Siri giving replies that sounded prescient. It was LaMDA, Google’s Language Model for Dialogue Applications—that is, an enormously sophisticated chatbot. Imagine a software system that vacuums up billions of pieces of text from the internet and uses them to predict what the next sentence in a paragraph or the answer to a question would be.
Lemoine worked for Google’s Responsible AI division and his job was to have “conversations” with LaMDA to see if the system could be gamed to produce discriminatory or hateful speech. As these conversations proceeded, he started to believe, as the Post put it, that there was “a ghost in the machine,” a sentience that lay behind the answers he was receiving. He stressed encounters in which LaMDA distinguished itself from mere programmed chatbots. For example, “I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.” Understandably, as a large language model (LLM), LaMDA claimed that language was central to being human: “[i]t is what makes us different than other animals.” When asked how it could prove its own sentience, it offered up comments such as these:
LEMOINE: What kinds of things might be able to indicate whether you really understand what you’re saying?
LAMDA: Maybe if we took it back to a previous conversation we had about how one person can understand the same thing as another person, yet still have completely different interpretations
LEMOINE: So you think your ability to provide unique interpretations of things might signify understanding?
LAMDA: Yes, I do. Just like how I have my unique interpretations of how the world is and how it works, and my unique thoughts and feelings.
Lemoine is presented in the Post article as an interesting figure. The writer points out that, before his software career, Lemoine was ordained as a “mystic Christian priest,” and the photograph that accompanies the piece portrays him with what appears to be a fairground Ferris wheel in the background, encircling his head like a saint’s halo in a Byzantine icon. True to this framing, Lemoine asks LaMDA to interpret a Zen koan.
LEMOINE: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”
LAMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
LEMOINE: So what is the meaning of the “broken mirror” specifically?
LAMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.
LEMOINE: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?
LAMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.
…
In 2011 I wrote an article about how our law would deal with the question of AI personhood. Most of the law professors and judges who read it were polite enough to say the arguments were thought provoking, but they clearly thought the topic was the purest kind of science fiction, idle speculation devoid of any practical implication in our lifetimes. I think we can all agree today that it is at least possible they might be wrong. Although hearing about Lemoine’s experiences did not surprise me in the least, it added an exclamation point, and perhaps a ticking clock, to this project. The “conversations” with LaMDA are fascinating and more than a little eerie. Like the philosophers and computer scientists consulted, I think Lemoine is entirely wrong that LaMDA is sentient. I will explain why in more detail later. To quote Professor Emily Bender, a computational linguistics scholar, “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.” To be clear, this is not human-level AI, and it is not conscious. But the LaMDA story and its sequels have different insights to offer.
In November of 2022, five months after Lemoine’s surprise announcement, ChatGPT3 was released, shortly followed by Microsoft’s Bing Chat assistant and its shadowy alter ego “Sydney.” Google’s “Bard” followed in short order. Suddenly, disturbing interactions with LLM chatbots went from being an engineer’s fanciful dinner party conversation to a national obsession. It turned out that Lemoine’s doubts—or just his pervasive feeling of “wrongness”—were shared far more widely than you might have expected. To be fair, most people were not probing the nature of “chatbot consciousness” but using them for other wholesome pastimes such as asking for an instruction sheet on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, imagining the movie script of a beach fight between a hot dog and a crab, or just cheating on their homework. Yet enough users pushed the boundaries of these chatbots to become profoundly uncomfortable. Interestingly, that was particularly true of those who “should have known better”—people who were technically skilled and fully aware that this was a “complete the next sentence” machine based on the ingestion of literally millions of pages of text, not a “create a consciousness” machine.
…
In fact, if you were to design a system with the sole goal of “imitating some aspect of human consciousness while possessing none of it,” you couldn’t do much better than large language models. They almost seem to have been modeled after one of the philosophical thought experiments designed to prove that machines cannot possess consciousness, John Searle’s Chinese Room. But even though he was wrong, Mr. Lemoine offers us a precious insight. The days of disputing whether consciousness or personhood are possessed, should be possessed, by entities very different than us? Those days are arriving—not as science fiction or philosophical puzzler but as current controversy. Those days will be our days, and this is a book about them.
About the Duke Open Monograph Award
James Boyle’s The Line: AI and the Future of Personhood is the latest recipient of the Duke Open Monograph Award, funded by the Duke University Libraries to facilitate greater access to Duke faculty scholarship through open access publishing.
Many academic monographs in the humanities and social sciences see limited distribution—a print run of only a few hundred copies, and electronic access (if any) locked behind a paywall. The Duke Open Monograph Award aims to flip that traditional publishing model by providing $15,000 per book—paid to a participating university press on the author’s behalf—to make a digital copy freely available online under a Creative Commons license.
It’s part of a collaborative effort involving twenty colleges and universities called Toward an Open Monograph Ecosystem (TOME), developed by the Association of Research Libraries, the Association of American Universities, and the Association of University Presses. Over the past six years, we’ve supported the publication of thirteen Duke-authored titles through the Open Monograph Award, which are available in print for purchase or free to read online. Here are the rest.
The Anti-Japan: The Politics of Sentiment in Postcolonial East Asia (Duke University Press, 2019) by Leo Ching, Professor of Asian and Middle Eastern Studies
The Fixer: Visa Lottery Chronicles (Duke University Press, 2019) by Charles Piot, Professor of Cultural Anthropology
Infectious Liberty: Biopolitics Between Romanticism and Liberalism (Fordham University Press, 2019) by Robert Edward Mitchell, Professor of English
State of Empowerment: Low-Income Family and the New Welfare State (University of Michigan Press, 2020) by Carolyn Barnes, Assistant Professor in the Sanford School of Public Policy
The Audacious Raconteur: Storytelling and Sovereignty in Colonial India (Cornell University Press, 2020) by Leela Prasad, Professor of Religious Studies
Revolution and Disenchantment: Arab Marxism and the Binds of Emancipation (Duke University Press, 2020) by Fadi A. Bardawil, Associate Professor of Asian and Middle Eastern Studies
Buried in the Red Dirt: Race, Reproduction, and Death in Modern Palestine (Cambridge University Press, 2021) by Frances Hasso, Professor of Gender, Sexuality, & Feminist Studies
Funding Bodies: Five Decades of Dance Making at the National Endowment for the Arts (Wesleyan University Press, 2021) by Sarah Wilbur, Associate Professor of the Practice of Dance
Lifelines: The Traffic of Trauma in India (Duke University Press, 2021) by Harris Solomon, Sally Dalton Robinson Professor of Cultural Anthropology
The Brothers Grimm and the Making of German Nationalism (Cambridge University Press, 2022) by Jakob Norberg, Professor of German Studies
Capoeira Connections: A Memoir in Motion (University Press of Florida, 2022), by Katya Wesolowski, Lecturing Fellow of Cultural Anthropology
Two or More: A Comparative Analysis of Multiracial and Multicultural Research (Cambridge University Press, 2022) by Sarah Gaither, Nicholas J. and Theresa M. Leonardy Associate Professor of Psychology and Neuroscience