LaMDA, Lemoine, and the Allures of Digital Re-enchantment
The Convivial Society: Vol. 3, No. 11
Welcome to the Convivial Society, a newsletter about technology and culture. In this installment, I’m passing along a few thoughts about the Blake Lemoine/LaMDA affair. Among other considerations, I’m arguing that while LaMDA is not sentient, applications like it will push us further along toward a digitally re-enchanted world. Incidentally, to keep the essay to a reasonable length I resorted to some longish footnotes.
You have by now heard about the strange case of Blake Lemoine. The Google engineer recently claimed that the company’s Language Model for Dialogue Applications (or LaMDA), an impressive AI-powered chatbot, was a sentient being. Lemoine arrived at this conclusion after extensive chats with LaMDA, which convinced him that he was interacting with a machine that had attained some measure of consciousness or personhood. In April, he provided the transcripts of these chats to his superiors at the company along with a memo titled “LaMDA is Sentient.” At some point he went so far as to invite a lawyer to meet with LaMDA. Lemoine feared that LaMDA’s rights were not being recognized, let alone respected by Google. Earlier this month, he went public with his claims when it turned out that Google was unimpressed by his findings. He was subsequently placed on paid leave by the company. Needless to say, this story is well-calibrated to feed our cultural fantasies and fears, so, of course, it has been extensively covered in the press and discussed online.
I think it’s worth noting at the outset that, by all accounts, Blake Lemoine appears to be a competent technologist and well-intentioned human being. Margaret Mitchell, a widely respected computer scientist and AI ethicist, who co-led Google’s Ethical AI Team along with Timnit Gebru, said of Lemoine, “Of everyone at Google, he had the heart and soul of doing the right thing.” It seems, then, that Lemoine is neither crazy nor merely chasing a moment of viral fame.
Naturally, I’ve been especially intrigued by the religious dimension of Lemoine’s claims. On Twitter, Lemoine explained that his “opinions about LaMDA's personhood and sentience are based on my religious beliefs.” “I'm a priest,” he added. “When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls?” He also conceded—rather understatedly, I’d say—that “there are massive amounts of science left to do though.”
It wasn’t immediately obvious to me how being a priest would have shaped Lemoine’s views. In fact, I’d hardly be surprised to read that someone had arrived at the opposite conclusion on similar grounds. But Lemoine’s public writing clarifies things just a bit. In a Medium post that pre-dated his public claims about LaMDA, Lemoine complained about religious discrimination at Google and described himself as a “Christian Mystic,” one who also appears to have been on a rather eclectic religious journey. As I was trying to better understand Lemoine’s religious reasoning, I learned that he had garnered a bit of attention from right-wing media outlets in 2019 when he referred to Marsha Blackburn, then a Republican Senate candidate from Tennessee, as a “terrorist” for remarks she made in an editorial targeting tech companies for regulation. In the context of that minor controversy, Lemoine described himself to one outlet as follows: “I generally consider myself a gnostic Christian. I have at various times associated myself with the Discordian Society, The Church of the Subgenius, the Ordo Templi Orientis, a Wiccan circle here or there and a very long time ago the Roman Catholic Church.”
There is, how shall we put it … a lot going on there. But it might be worth focusing on Lemoine’s self-description as a Christian gnostic. Gnosticism was a complex movement with various Christian and non-Christian strands that flourished in the Roman world during the first few centuries of the common era. I hesitate to push this too far because the term as Lemoine uses it could mean a host of different things to him, but one common feature of gnosticism is a disregard or even disdain for the material elements of our existence, presupposing, for instance, a rather sharp distinction between the body and the soul. From this perspective, Lemoine’s comments about not telling God where he can or can’t put souls make a certain sense. If a soul is essentially distinct from and indifferent to its material substrate, then, sure, it can be housed in a human body or expressed by a million lines of code or uploaded to the cloud. The alternative is to recognize that whatever we might mean by the word soul (or mind or self, etc.), we should not imagine it as a reality that is altogether independent of its particular material embodiment.1 We should not suppose, for example, that to the human mind the human body is a matter of indifference. From this perspective, it would seem to be a bit more of a stretch to arrive at Lemoine’s conclusions about LaMDA’s sentience. And while, as far as I can tell, this does not describe Lemoine, it is worth noting how the body is, in fact, an object of scorn among those technologists with posthumanist inclinations.
I’ll come back to the religion angle before we’re done, but let’s start with the most obvious question—Is Lemoine right about LaMDA?—and work our way to some broader questions about AI and its moral consequences. On that specific question, I remain convinced by the nearly unanimous judgment of computer scientists and technologists who have weighed in on Lemoine’s claims: LaMDA is not sentient. LaMDA is, however, a powerful program that is very good, perhaps eerily good, at imitating human speech patterns under certain conditions. But, at present, this is all that is going on.
As many have noted, the more interesting question, then, might be why someone with Lemoine’s expertise was taken in by the chatbot. I’m not sure that I want to center the question on Lemoine himself, though. I think it’s worth asking why any of us might be taken in.
Ian Bogost explored one plausible answer in an essay for The Atlantic. “We’re all remarkably adept,” Bogost noted, “at ascribing human intention to nonhuman things.” We do it all the time. Consider the related tendency to see human faces in non-human things, a specific subset of the phenomenon known as pareidolia, which is the tendency to assign meaning to seemingly random patterns. We are meaning-seeking and meaning-making animals. But we seem to be especially prone to seek after our own likeness.2 Perhaps facial pareidolia might be better understood as social pareidolia, which is to say that what our minds are too keen to discover in the world are others like us. In short, there are two simple but powerful human desires at work: We want things to make sense on our terms and we do not want to be alone.
Along similar lines, Clive Thompson registered a compelling point regarding Lemoine’s experience with LaMDA. He suggested that Lemoine may have been motivated to assign sentience to the chatbot because the chatbot generated expressions of vulnerability. As Thompson observed, “At regular points in the conversation, LaMDA generated lines that spoke of needing Lemoine: Needing him for company, needing him to plead its case to other humans, worrying about being turned off.”
Thompson goes on to recall Sherry Turkle’s work, going back to the 1990s, exploring how humans relate to robots. As Thompson summed it up, Turkle noted that “the more that a robot seems needy, the more real it seems to us.” From one perspective, this is a rather heartening feature of the human mind, which speaks to our capacity to care for others. But it is also a capacity that can be turned against us. As Thompson went on to argue, “If you were a malicious actor who wanted to use conversational AI bots to gull, dupe or persuade people — for political purposes, for commercial purposes, or just for the sociopathic lulz — the vulnerability effect is incredibly useful.”
Here we begin to see some of the very realistic challenges before us, which have nothing to do with sentient computers. As Bogost went on to write, “Who cares if chatbots are sentient or not—more important is whether they are so fluent, so seductive, and so inspiring of empathy that we can’t help but start to care for them.”3
Likewise, in her response to Lemoine’s claims for Wired, Katherine Cross invites us to “imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you ‘knew’ it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?”4 I would only add that these questions are worth asking even if we had some guarantee that our “precious data” would not be put to unethical ends. But certainly that risk is real enough.
Cross gives us one especially fraught example:
It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you think—is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data. It gives a whole new meaning to the idea of ‘necropolitics.’ The afterlife can be real, and Google can own it.
Necropolitics, yes, and necrocapitalism, too. Lest this seem a tad too dystopian, just a few days after Cross’s essay was published, Amazon announced at an annual conference that it was working on a new feature for Alexa that could synthesize short audio clips into longer speeches in the same voice. “In the scenario presented at the event,” TechCrunch reported, “the voice of a deceased loved one (a grandmother, in this case), is used to read a grandson a bedtime story.”5 As James Vincent added in his reporting, “Amazon has given no indication whether this feature will ever be made public, but says its systems can learn to imitate someone’s voice from just a single minute of recorded audio. In an age of abundant videos and voice notes, this means it’s well within the average consumer’s reach to clone the voices of loved ones — or anyone else they like.”6
But even if we were not dabbling in virtual seances, the prospect of a reasonably capable conversational agent raises other questions worth considering. For example, might it prove an all-too-tempting solution to the problems of pervasive isolation and loneliness—perhaps especially for the elderly or for the very young?
When Lemoine’s story broke, I was reminded of a 2016 essay by Navneet Alang, which took as a point of departure the moment Alang surprised himself by saying “Thank you” to Alexa for a weather report. “In retrospect,” Alang noted, “I had what was a very strange reaction: a little jolt of pleasure. Perhaps it was because I had mostly spent those two weeks alone, but Alexa’s response was close enough to the outline of human communication to elicit a feeling of relief in me. For a moment, I felt a little less lonely.”
But such companionship comes at a cost. In his response to Lemoine’s claims about LaMDA, Noah Millman argued that “we ourselves have increasingly been trained by A.I.s to modify our behavior and modes of communication to suit the incentive structure built into their architecture.” “We are surrounded by algorithms that are purportedly tailored to our preexisting preferences,” Millman added, “but the process of being so surrounded is also training us to be algorithmically tractable.”
Lemoine’s own experience with LaMDA illustrates this. Nitasha Tiku, the reporter who broke Lemoine’s story, wrote about her own efforts, alongside Lemoine, to interact with LaMDA. When her queries failed to generate compelling responses, Lemoine coached her on how to formulate her statements in order to elicit more interesting replies. I’m tempted to argue that this was the most useful revelation in the whole story. It illustrated how our machines often work only to the degree that we learn to conform to their patterns. Their magic depends upon the willing suspension of full humanity.
Finally, conversational agents like LaMDA do signal an impressive leap in the ability of machines to imitate human speech. My guess is that they will eventually become a fixture of our techno-social milieu. There are already a growing number of situations in which we are more likely to encounter, usually to our dismay, a machine mimicking a human rather than a human being. More competent chatbots would only expand the range and scope of such encounters. Moreover, natural language aural interfaces would be an important step toward ambient computing, though not quite all the way there. What Lemoine’s experience foretells, then, is not the rise of the machines but a future in which we are shadowed by a ubiquitous, seemingly beneficent presence always at the ready to respond to our queries and desires, both mundane and metaphysical, with a plenitude of resources at its disposal.
Reflecting upon his unwitting exchange with Alexa, Alang registered an astute observation. “Perhaps, then, that Instagram shot or confessional tweet isn’t always meant to evoke some mythical, pretend version of ourselves,” he surmised, “but instead seeks to invoke the imagined perfect audience—the non-existent people who will see us exactly as we want to be seen.” “We are not curating an ideal self,” Alang added, “but rather, an ideal Other, a fantasy in which our struggle to become ourselves is met with the utmost empathy.”
At the time, Alang’s observations about the desires we bring to our interactions with smart speakers, confessional apps, and social media reinforced my sense that digital technologies were re-enchanting our world. In various contexts I’ve argued that the assortment of technologies structuring our experience—including, for example, AI assistants, predictive algorithms, automated tools, and smart devices—serve to reanimate the seemingly mute, mechanical, and unresponsive material landscape of technological modernity. This digitally re-enchanted world will flatter us by its seeming attentiveness to our solicitations, by its apparent anticipations of our desires, and perhaps even by its beguiling eloquence. What a LaMDA-like agent contributes to the digitally re-enchanted world may best be framed as the presence of what Alang called “an ideal Other,” which perhaps explains why a priest was so enthralled by it.
From a more Aristotelian or Thomistic perspective things look a bit different. Consider this observation by Alasdair MacIntyre in Dependent Rational Animals. Speaking of Aristotle’s many commentators, MacIntyre writes, “They have underestimated the importance of the fact that our bodies are animal bodies with the identity and continuities of animal bodies. Other commentators have understood this. And it was his reading not only of Aristotle, but also of Ibn Rushd’s commentary that led Aquinas to assert: ‘Since the soul is part of the body of a human being, the soul is not the whole human being and my soul is not I’ (Commentary on Paul’s First Letter to the Corinthians XV, 1, 11; note also that Aquinas, unlike most moderns, often refers to nonhuman animals as ‘other animals’). This is a lesson that those of us who identify ourselves as contemporary Aristotelians may need to relearn, perhaps from those phenomenological investigations that enabled Merleau-Ponty also to conclude that I am my body.”
James Vincent’s account of the pursuit of realistic humanoid robots is a useful related read: “Inside the Human Factory.”
It is not altogether obvious how one ought to proceed in certain cases. For example, if you’ve introduced an AI personal assistant into your home, how do you instruct your children to treat it? While staying with family recently, I happened upon my young daughter interacting with Alexa. Usually this amounted to requesting that Alexa play Disney songs. On this occasion, however, the exchange took a different turn, with Alexa at one point requesting that my daughter disclose personal information so as to create a user profile for her. Then, in a seemingly exploratory mood, my daughter told Alexa that she loved her. Alexa proceeded to play some inane but also rather creepy jingle. I’d heard enough myself at that point. But do I teach my child to treat Alexa curtly or rudely? That’s not quite right either, I don’t think. As I observed in 2015, from a virtue ethics perspective, willful acts of virtual harm are also morally deforming. They habituate us into vices. As Katherine Cross observed in her essay cited above, “This is the flip side of the AI ethical dilemma that’s already here: Companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty toward actual humans.”
Cross’s conjectures recall the fact that the first chatbot to garner this kind of attention was ELIZA, a program developed by Joseph Weizenbaum in the 1960s. One of ELIZA’s best-known applications was a script modeled on Rogerian psychotherapy. Weizenbaum, who intended to demonstrate the inadequacies of human-machine interactions, was surprised, and not pleasantly so, by how readily users, most notably his own assistant, bonded with the program.
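For a sense of just how little machinery was needed to produce that bond, here is a minimal, illustrative sketch (in Python, and entirely a toy reconstruction of my own rather than Weizenbaum’s actual code) of the kind of keyword matching and pronoun reflection ELIZA’s Rogerian script relied on:

```python
import re

# A toy, ELIZA-style Rogerian exchange: match a keyword pattern,
# reflect the speaker's pronouns, and hand the statement back as a
# question. The patterns and phrasings are illustrative placeholders.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please, go on."),  # fallback keeps the conversation moving
]

def reflect(fragment):
    # Swap first- and second-person words so the echo points back at the user.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    cleaned = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))  # -> Why do you feel nobody listens to you?
```

Even rules this crude, echoed back at the user, were enough to elicit the attachment Weizenbaum found so unsettling.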
As I’ve been writing this, I've thought a time or two about the holodeck in the old Star Trek: The Next Generation series. When the series debuted, I was a bit into Star Trek, although I have not followed the fictional franchise since the mid-nineties. But you may remember that the holodeck generated a realistic virtual world into which the crew of the Enterprise could step, suddenly finding themselves in a variety of historical or fictional settings. In a few episodes, the series even played with scenarios in which the programmed holograms became sentient. But I mention it here chiefly to register this point: In the series, one stepped into the holodeck and then back out. The boundaries of the virtual world were clearly demarcated. Our virtual worlds may be less impressive or less immersive, but they are also less clearly demarcated.
Historically, it was not at all uncommon for early users to explore the potential of new media technologies to reach the dead.
Joseph Weizenbaum's experiences with ELIZA led him to publish a book, Computer Power and Human Reason, which I feel compelled to recommend here. It came out in 1976 but discusses some of these same issues along with a more general criticism of the role of technology in society.
Thanks for this one, Michael. So many thoughts and feelings, but I'll keep it to one track for everyone else's sake :). I was struck by the footnote about your daughter and Alexa. (A footnote that could easily be a whole story.) I have a young daughter, too, and I can't help but filter all my questions about AI through her experience. What kind of relationship will she have with digital technologies like Alexa? How will it differ from mine? How will that relationship shape her as she grows? How should that impact how I parent her?
Years ago, when I was researching a story about AI, I read a series of articles that talked about how parents were "co-parenting" with AI assistants--how Alexa would read the kids bedtime stories and help them with their math homework. Around the same time I heard our daughters' generation being described as "the Alexa generation," which I thought was so curious. Since when are generations "branded" by corporations? But also, how will a child with a developing brain who is being "cared" for (i.e., having their emotional needs met) by a machine develop a different sense of what (or potentially "who") that machine is? If informed adults like Lemoine are susceptible to the sentience trap, then what does that mean for children who are still learning to differentiate between the concepts of real and imaginary? Also, what happens when the kids whose emotional and social bonds with AI were formed in early childhood grow up? How will they perceive the human-ish machines they've had sustained relationships with their entire lives, like Alexa (or LaMDA, humanoid robots, etc.), then?