Welcome to the Convivial Society, a newsletter about technology and culture considered broadly. This installment comes in three movements and focuses on the idea/fear/hope that machines can and will replace humans.
The newsletter is public, but sustained by readers who value the writing and have the means to support it.
There’s a particular type of AI-related story that I keep encountering. The kind of story I have in mind has nothing to do with the cataclysmic fears and anxieties that seem to be getting the most press. Rather it involves the far more mundane, actual uses to which generative AI models are being put by perfectly ordinary people and institutions.
These ordinary uses that I have in mind tend to fit a pattern. It is discovered that some task lends itself to automation because it was already formulaic, mechanistic, and predictable: thoughtless writing, box-checking busy work, bureaucratic hoop-jumping, the generation of meaningless content, etc.
In short, it is discovered that AI is especially adept at displacing (or, from the enthusiast’s perspective, liberating) human labor in situations wherein humans had already conformed, willfully or otherwise, to the pattern of a machine. Build a techno-social system which demands that humans act like machines and, lo and behold, it turns out that machines can eventually be made to displace humans with relative ease.
Ethan Mollick recently wrote about aspects of this pattern in a discussion of AI as it has been introduced into existing products such as Microsoft’s and Google’s office software:
Work that was boring to do, but meaningful when completed by humans (like performance reviews) becomes easy to outsource - and the apparent quality actually increases. We start to create documents mostly with AI that get sent to AI-powered inboxes where the recipients respond mostly with AI. Even worse, we still create the reports by hand, but realize that no human is actually reading them. This kind of meaningless task, what organizational theorists have called mere ceremony, has always been with us. But AI will make a lot of previously useful tasks meaningless. It will also remove the facade that previously disguised meaningless tasks. We may not have always known if our work mattered in the bigger picture, but in most organizations, the people in your part of the organizational structure felt that it did. With AI-generated work sent to other AIs to assess, that sense of meaning disappears.
With these patterns in mind, we should consider whether it is helpful to think of AI as unfolding along a longstanding techno-cultural trajectory rather than as a drastic break with what has come before.1 In other words, how have existing assumptions and practices prepared the ground for the predicament we now find ourselves in? More specifically, how did it become possible to imagine that human beings could be displaced by AI?
One answer to this question, from an economic and political perspective, is simply that at certain tasks and under certain conditions, this is exactly what some have used machines to do in the interest of efficiency, productivity, and profit. There’s no surprise there. But I’m also interested in a slightly different approach to the question. I’m curious about why we might accept the premise that human beings can be displaced by machines and the degree to which we might even welcome such a displacement or otherwise take it for granted.
My present thesis is something like this: The claim or fear that AI will displace human beings becomes plausible to the degree that we have already been complicit in a deep deskilling that has unfolded over the last few generations. Or, to put it another way, it is easier to imagine that we are replaceable when we have already outsourced many of our core human competencies.
Put somewhat differently, the message of the medium we are presently calling AI is the realization that modern institutions and technologies have been schooling people toward their own future obsolescence.
Indeed, we might go further and say that the triumph of modern institutions is that they have schooled us even to desire our own obsolescence. If a job, a task, a role, or an activity becomes so thoroughly mechanical or bureaucratic, for the sake of efficiency and scale, say, that it is stripped of all human initiative, thought, judgment, and, consequently, responsibility, then of course, of course we will welcome and celebrate its automation. If we have been schooled to think that we lack basic levels of latent competence and capability, or that the cultivation of such competencies and capabilities entails too much inconvenience or risk or uncertainty, then of course, of course we will welcome and celebrate the displacement of our labor, involvement, and care.
In the 1960s and 70s, the social critic Ivan Illich2 offered a sustained and blistering critique of industrial age institutions—including schooling, medicine, and transportation, you know, the ones we tend to think of as “good”—precisely along these lines. What these institutions have chiefly taught us, Illich argued, is that we are, in ourselves, inadequate to the task of living together as human beings in the world. That we cannot get on without the products and services that they alone can supply. Such institutions are not interested in equipping or empowering us, only in confirming us in an indefinite state of dependence in a consumerist mode. The professions associated with such institutions Illich called “disabling professions.”
“People need new tools to work with rather than tools that ‘work’ for them,” Illich argued. But he also concluded that “the institutions of industrial society do just the opposite. As the power of machines increases, the role of persons more and more decreases to that of mere consumers.” In this regard, the present development of AI3 is of a piece with previous patterns of technological and institutional growth. The trend line is consistent, only now a new class and range of roles and activities are being outsourced for the sake of a system that was already “unaligned” with human values, as it is now fashionable to say, because it demanded the conformity of human beings to inhuman standards of scale, speed, efficiency, and profitability.
This line of institutional and technological development will proceed apace unless we arrive at some account, provisional and contested as it may necessarily be, of what it is good for people to do regardless of whether a machine can do it better according to certain parameters (faster, more cheaply, etc.). And if our institutions—be they political, cultural, or corporate—will not entertain such a conversation, we should at least have it for ourselves and within whatever communities we are fortunate enough to belong to.
I recently spent a couple of hours navigating the labyrinth of a certain credit card’s customer service machinery. This is a banal and familiar experience. I won’t trouble you with the details. As I was passed from an automated service to one department and then another and back again, ad nauseam, I realized that there were three kinds of agents I encountered. Of course, there was the obvious distinction between the automated service and persons, but it occurred to me that there was one further distinction to be made.
When I finally encountered a person who assumed a measure of care and responsibility for my situation, the whole quality of my experience changed. The tenor of the interaction was wholly different and the sense of being trapped in an endless loop of futility faded. Here, finally, was a person in the fullest sense dealing with me, in turn, as a person and not merely a problem framed by an elaborate bureaucratic playbook.
What I mean by this, and I want to be careful how I put this, is not that the handful of people I had spoken to beforehand, regardless of how they dealt with me, were not persons with full moral standing. I only mean that their speech and actions were machine-like in quality. So then, it seemed to me that there were not two but three possibilities in navigating this system: encountering a machine, encountering a person whose actions conform to the machine, and, finally, encountering a person who somehow managed to resist such conformity.4
The truth, of course, is that the principles of efficiency and speed and optimization and profitability, recurring themes here of late, increasingly dictate how we act and interact in many if not most of the social spaces we inhabit. Thoughtless automaticity, which demoralizes all parties, becomes our default mode, whether it takes the form of resigned indifference or utterly predictable outrage and indignation.
So much so that it can be startling, if also invigorating and life-giving, to encounter someone who will break the script and deal with you as a person in the fullest sense—by taking the time to regard you with kindness and respect, by offering a simple gesture of help or courtesy born out of deliberate attentiveness, by conveying care through the words they speak and how they are delivered. In other words, simply to be acknowledged as a person by a person can be a revitalizing gift, and most of us are, to some degree, in a position to grant it gratuitously. And we should. It may be the most vital practice, or perhaps better, discipline that we can cultivate. The effect is at times not unlike the moment in our stories when someone on the brink of death is given a magic, life-saving potion that instantly revives them … but for the soul. And in this case, the healing property enlivens both the recipient and the one who administers the elixir.
Enthusiasts claim that AI will liberate us, even if it is never quite clear what we might be liberated from, whether we desire to be liberated at all, or, more importantly, to what end. But the machine will liberate us only to the degree that it swallows itself, that it swallows up all of what the machine, in its longstanding technological, economic, and institutional forms, had required and demanded of us.
In the gospels, there is a brief but memorable scene best known for its political ramifications. The story begins with religious leaders seeking to entrap Jesus with a question that would force him either to implicitly deny that he was the expected Messiah or to open himself up to the charge of treason against the empire: “Is it lawful to pay taxes to Caesar, or not?”
Jesus, conscious of their motives, asks for a coin. When they bring him the coin, Jesus asks, “Whose likeness and inscription is this?” They said, “Caesar’s.” Then he said to them, “Render therefore to Caesar the things that are Caesar’s, and to God the things that are God’s.”
In this way, the snare is avoided and the demands of Caesar are utterly subverted. What is Caesar’s? A piece of metal with his image. Give it to him. I imagine Jesus flicking the coin back at them. But what is God’s? Everything. Everything that matters. Perhaps more specifically, the life of the whole person. Just as the coin bore the image of Caesar, so in the Jewish tradition the human being bears the image and likeness of God.
Some will take the religious significance to heart, others will not. Do as you will, of course. But I find myself thinking, or better, sensing, feeling, intuiting that something of this spirit might guide us well in the present moment as a very different totalizing force demands our resources, our attention, and our unwavering loyalty.
What would it mean to render to the machine what is the machine’s in precisely this spirit? To regain a sense of what it is to be a person, coupled with a subversive practice of the same, within a techno-economic system whose default settings incline us to forget this vital fact about ourselves and our neighbors? To reclaim a confidence in what we might be able to do ourselves and for one another in the face of an array of technologies, services, and institutions that market themselves under the implicit sign of our ostensible helplessness and the banner of a debilitating liberation? Let the machine have everything that is stamped with its spirit. Let us keep everything else.
I realize these are mere provocations. But I offer them in the hope that they provoke us in the best sense. I leave it with you to decide.
Obviously, AI does have a longstanding technical history going back to the mid-20th century. But what I have in mind here is not the development of the technical elements of what we call AI, but rather the assumptions and values that govern how it is developed, deployed, and adopted.
Many of you know Ivan Illich is one of the main influences on my thinking. He was many things—radical social critic, historian, priest, polyglot. This newsletter owes its name, in part, to the title of his 1973 book Tools for Conviviality. He is chiefly remembered for his critique of industrial technologies and institutions, which he referred to collectively as tools. You can read a couple of older installments (here and here) for more about him.
“AI” is a label that now gathers a very wide assortment of technologies, techniques, and systems. Not everything that is called AI fits what I’m describing here.
I would hope that this goes without saying, but, just to be clear, none of this is intended as an indictment of the individuals to whom I spoke. They are not to blame for the conditions under which they labor, the standards to which they must conform, or the metrics by which they are evaluated.
I tried to make clear in the essay, with reference to the three classes of agents we encounter, that the blame, such as it is, lay chiefly with the system that demanded conformity rather than with the person who labored within it. But there is a further point, which I failed to make.
The most important imperative is the one I give myself. I must actively resist conformity to depersonalizing patterns in these situations for the sake of the other even as I may desire that they do so for my sake. And perhaps it is even the case that the greater responsibility lies with me, less entangled as I was in the particular context I described.