The Convivial Society: Vol. 4, No. 2
I've been following your work for a while and really appreciate this series on AI-mediated communication. I thought I would chime in with some social scientific findings that fit nicely with your argument. Research in the Computers-as-Social-Actors (CASA) paradigm found that people can anthropomorphize all kinds of machines and ascribe social tendencies to them. The effect can be accentuated by the design choices of the technologies (e.g., AI chatbots with human names encourage anthropomorphism) AND when people engage mindlessly. The mindless aspect reminds me of your writing on cultivating attention. So perhaps part of why interactions with AI are so creepy is that people engage without their full attention.

Relatedly, my dissertation work was on AI chatbots, and so many of the uncanny effects you describe happened. Users found their co-workers hitting on AI chatbots thinking they were real people; people found themselves having to apologize for what their AI chatbots did; people brought the AI assistant lunch only to find out "Liz" was a chatbot. So these chatbots, when deployed by individuals, can actually reflect back on the person using them and inhibit hospitality with their human collaborators. This representational dynamic (trusting an AI agent to communicate with others on your behalf) made me think about how AI tools can inhibit hospitality by intervening in people's human relationships, not just by communicating directly with a user.
Thanks so much for your work, I look forward to reading more!
I recently wrote about Eliza as well, although what occurred to me was that, at the time, Eliza was exclusive: you had to have access to a lab at an elite university to "talk" to Eliza. Right now there is also a certain amount of exclusivity -- Bing is open only to a certain number of users, ChatGPT might tell you to come back later if it's a peak usage time and you're not a special user, and so on.
In Reclaiming Conversation Sherry Turkle wrote about lonely elderly people in nursing homes being given robot baby seals to cuddle and talk to -- she remarks that she found this horrifying while others around her thought it was wonderful. It occurs to me that the difference is whether you see the elderly lady cuddling the robot as "having access" to an "exclusive new" technology, or see her as being given a "cheap replacement" for human contact.
I have more thoughts on this, spelled out here, https://amyletter.substack.com/p/waiting-for-artisanal-ai , but at bottom I'm saying that what is cheap, common, and infinitely replaceable will probably not "fool" humans; we are more likely to be fooled when we feel we are talking to something new and *exclusive* and perhaps transitory -- something we would take the time to video ourselves interacting with! -- because that's the attitude we generally reserve for humans special to us -- or who have power over us.
Ultimately what we want is connection and community with our fellow human beings. Right now the novelty makes "Sydney" almost qualify. But if it persists and operates at scale, Sydney will just be another Alexa or Siri in our regard, even if it has greater capabilities. We will call it "dumb robot" and laugh at its stupid mistakes.
I think a more dangerous outcome is if these newer AIs remain somewhat exclusive. That gives them social status and "person-ifies" them.
And it's clear that if this sentence-spouting autocomplete on steroids were "a person," it would be a dangerous and deranged person. (Which is not to anthropomorphize, but merely to say: it has no sense of self because it has no self; by human standards it is "unstable.")
The very useful link to sophistry helps me understand the passion behind resistance to AI chat in schools and universities. Plato’s Apology turns the accusation back on the accusers, charging them with their own behavior. It’s a story that ends tragically, because the accusers are not interested in ideals. They are interested in keeping the system bubbling along with minimal disruption. And Socrates was stirring the pot.
It makes me reflect that the students who generate essays they don’t mean have been practicing (modeling?) sophistic chatbot behavior. And those who graduate and wonder what that was all for, now that they’re out in the world, have become themselves an ELIZA, robotically telling the paying client what he seems to value.
So today’s chatbots are specular, mimetic. They hold the mirror up to nature with social data embedded in language data.
How much more welcome is hospitality?… If you sit around a table, offered to you openly by a person with a face, and you eat the food and enjoy it, and look at the person who made it for you, the hospitality makes sophistry feel like an offense against humanity.
So maybe we need to live in smaller communities? Places where the smell of someone’s baked bread and the familiarity of their threshold, and the warmth of the food you brought yourself and set down on the shared table, lay bare the offense of apathy that sophistry requires?
Thank you for this. After experiencing the (very real) emotion of feeling gaslit by ChatGPT in a brief first session I tried out of curiosity, I found that your essay truly resonates.
“In the beginning was the Word...” In a “post-truth” world which dismisses the implications of the underlying nature of reality that these momentous words invite us to ponder, I fully agree that we have yet to grapple with the true nature and power of language, let alone as it’s casually being deployed via AI chatbots. I share your fears, but also your hopes in conviviality.
Plato’s one-sided take on the Sophists is a straw man argument repeated by naive realists ever since to condemn anyone (and now any thing) that shows how complex and relative reality is (especially a socially mediated reality) -- and how limited is the “truth” of any model of it. That’s not a fair or real argument; it doesn’t engage what the other side actually thinks and says. (Which is what good sophistry would do.)
Look at the actual history of rhetorical education from Stephen Toulmin (who related “sophistry” to what actually happens at the convivial table) through Ramism (not just Ong’s take) and early modern (Jesuit, Aristotelian) casuistry. If chatbots are upsetting for being able to do what lawyers always have (and, if we are honest, what we all do), your problem is with literacy, not AI.
It has probably been done before. Toynbee recorded many ways that a civilisation can implode. Perhaps though we are seeing another dangerous symptom rather than the core impasse / contradiction?
Arthur C. Clarke long ago guessed that pornography might be weaponised. I suppose that has already been done, though I guess for profit rather than geopolitics. I understand pornography was the first means of leveraging the internet into a paying concern? There are inevitably ruthless outfits that will pile in with ways to profit from any new means of exploiting vulnerability. (Exploit is a word you have used, rightly I think.)
Regarding the way 'unnatural' loneliness gets built in, I picked up yesterday on a tweet by London-based John Burn-Murdoch, one of the brightest young journalists at the Financial Times. Not sure Twitter is easy for all to access, but the thread seemed indicative of your thesis and possibly relevant to defensive structures. I hope my copying below might be useful. We have seen a succession of numerous communities in the UK simply wiped out by 'Progress'. (Ivan Illich is always useful. He had seen it happen in so many places.)
Absolutely fantastic thread. Today’s high-rise architecture (and indeed lots of modern culture) puts form over function, and we lose out on so much as a result.
I don’t agree. Cos these estates in London are real communities… neighbours seeing neighbours kids grow up, and being there for every milestone in your life, everyone knowing everyone.
The development I live in? I see my friends all over London more than I see my neighbours 😂 twitter.com/culture_crit/s…
I agree with all your urgent worries here - thank you for writing and helping us all think about this so lucidly.