Welcome to the Convivial Society, a newsletter about technology, culture, and the moral life. The newsletter takes its name from the work of the late 20th-century social critic, Ivan Illich. He features prominently in my writing, and in this essay I’m revisiting a talk he gave in the late 1960s and reapplying it to the current drive to deploy AI for good. I trust the provocation will be useful, especially to those among you who might professionally identify with this imperative. In truth, I think there’s something in this for all of us, regardless of whether we work in tech or not. May it find its audience.
As it always has, this newsletter operates on a patronage model. The writing is public and supported by those who value it and have the means to become paying subscribers.
On April 20th, 1968, at a small Catholic seminary just outside of Chicago, students gathered for a meeting of the Conference on InterAmerican Student Projects (CIASP). These students were there in preparation to spend their summer as volunteers on service projects in Mexico.
A few weeks earlier in March, a letter had gone out to the participants exclaiming, “Welcome aboard! You’re in for an exciting and profitable trip!” They were assured that the speakers for the gathering would be “top notch,” including a professor from Notre Dame and a representative of the National Council of Churches. But the letter also noted that there would be a “controversial” speaker, “Monsignor Ivan Illich of the Center of Intercultural Documentation [CIDOC] in Mexico.”
If you’ve been reading this newsletter for any amount of time, you probably know that Ivan Illich has influenced my own thinking and writing. He is best known for a series of books published during the 1970s, which offered radical critiques of industrial-age technologies and institutions: Deschooling Society, Tools for Conviviality, Energy and Equity, and Limits to Medicine.
For the purposes of what follows, all you need to know is that Illich was already known for his trenchant criticism of western-led development projects in Latin America. The UN had declared the 1960s the first Development Decade. It was also the decade the Peace Corps was launched. And, not to be left behind, the Roman Catholic Church had also embarked on a series of similarly intentioned projects in Latin America. This was the broader context for the gathering to which Illich, the “controversial” speaker, had been invited.1
The text of Illich’s talk, including comments he felt compelled to add as a preface after he spoke with some of the participants beforehand, is usually given the title “To Hell With Good Intentions.” This gives you a sense of what Illich had to say to these very well-intentioned students and the CIASP leadership. Two paragraphs in and he’s telling them he was “equally impressed by the hypocrisy of most of you, by the hypocrisy of the atmosphere prevailing here.”
After wryly offering three guesses as to why someone with his views might be invited to address such a gathering, Illich states bluntly, “I did not come here to argue. I am here to tell you, if possible to convince you, and hopefully, to stop you, from pretentiously imposing yourselves on Mexicans.”
I’m tempted to give you a blow-by-blow account of the whole thing, but I suspect you’re already wondering what, if anything, this obscure talk by a “controversial” priest has to do with technology.
I will not keep you in suspense. I am taking Illich’s blistering speech and, in earnest good will, offering it, by analogy and in spirit, to all of those who would today seek to do good in the world through the development of AI technologies.2
That there are those in the tech industry who seek no good but their own profit, no one would deny. That there are those in the industry who seek to do good in the world, some might deny due to a cynicism that was perhaps not wholly misplaced. But I do not deny it. I know it to be the case. And it is precisely to those who seek to do good that I offer this Illichian provocation.
But I am not Illich and thus not in the habit of speaking quite so stridently, and I have no interest in affecting the style for rhetorical effect. Nonetheless, I will venture this assertion in his spirit to those seeking to do good for the world through the development of technology:
I am not here to argue. I am here to tell you, if possible to convince you, and hopefully, to stop you, from pretentiously imposing yourselves on the rest of humanity.
There are two specific themes in Illich’s talk that I will offer you, but much of it amounts to calling those who would do good to/by others to a critical self-awareness. And, honestly, most of us would probably do well to do a little self-searching in this spirit regardless of whether we work in tech or not. As Illich puts it toward the end of this talk, “it is profoundly damaging to yourselves when you define something that you want to do as ‘good,’ a ‘sacrifice’ and ‘help.’” The implication here is that we are quite adept at deciding what we want to do, and only then calling it “good” so that we might feel better about imposing ourselves on others.
First, though, let’s crystallize this disposition to do good with a recent example. I was motivated to write this piece because I had recently been reading Illich again, but also in response to a blog post OpenAI CEO Sam Altman wrote a few weeks back. Titled “The Intelligence Age,” Altman argued (or, better, asserted) that AI, so long as we don’t lose faith, will soon usher in an era of unprecedented global prosperity.
“I believe the future is going to be so bright,” Altman enthuses, “that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.” He adds that “although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.”
The discovery of all of physics! But there’s more.
AI will not only fix the climate, it will also help you with your scheduling; it’s hard to tell which presents the greater challenge.3 “AI models will soon serve as autonomous personal assistants,” Altman writes, “who carry out specific tasks on our behalf like coordinating medical care on your behalf.”
“Eventually we can each have a personal AI team,” Altman continues, “full of virtual experts in different areas, working together to create almost anything we can imagine.”
Look, I know what many of you are thinking. Altman, by disposition and professional self-interest, necessarily traffics in hype. He is not a man to be taken at his word. So, yes, I’m pretty much in agreement with Dave Karpf, who rightly cautioned us against doing so.4 But, let’s just pretend for a moment that Altman is sincere in his belief that AI will usher in an age of ineffable abundance. Okay, fine, maybe that’s too much of a stretch. Let’s just suppose that if not Altman, there are others working in tech—engineers, programmers, and developers, managers and executives, marketers and VCs—who do earnestly believe something like the vision Altman lays out. They are probably more thoughtful than Altman. They have thought more deeply about the possible harms of AI, and are thus more circumspect. But they are, nonetheless, determined to see AI used for good, for the betterment of society, for the general improvement of the world.
Maybe this is you. I hesitate to use the direct address. It strikes me as presumptuous and hectoring. But maybe, maybe it is you.5 You are pursuing AI for good, you want ethical AI, you believe AI can help the disadvantaged, you think AI can improve outcomes for the marginalized. Maybe. Maybe. But I urge you to consider Illich’s challenge as I will briefly translate it for our moment.
Much of what Illich had to say to those bright-eyed students preparing to spend their summer volunteering in Mexico is summed up in these early lines:
“I do have deep faith in the enormous good will of the U.S. volunteer. However, his good faith can usually be explained only by an abysmal lack of intuitive delicacy. By definition, you cannot help being ultimately vacationing salesmen for the middle-class ‘American Way of Life,’ since that is really the only life you know.”
Illich recognized that “development” work, as it was happening in the 1960s, was, in fact, a vehicle by which a whole complex nexus of values and systems was being exported to and imposed upon the “under-developed” world, and ultimately in such a way that the recipients of this aid would be subjected to new forms of poverty and dependence—“modernized poverty,” as Illich called it elsewhere.
In Deschooling Society, for example, Illich observed that “once basic needs have been translated by a society into demands for scientifically produced commodities, poverty is defined by standards which the technocrats can change at will.” “Poverty,” he adds, “then refers to those who have fallen behind an advertised ideal of consumption in some important respect.”
Moreover, Illich warned, “the increasing reliance on institutional care adds a new dimension to their helplessness: psychological impotence, the inability to fend for themselves.” “Modernized poverty,” in his view, “combines the lack of power over circumstances with a loss of personal potency.”
Illich’s critique, if we direct it toward the present spirit of Silicon Valley’s evangelists of efficiency and abundance, raises several pointed questions. Those who are developing new technologies and those in a position to decide whether they ought to be adopted in specific contexts might consider asking some version of the following:
— Is this a technology that actually empowers users with agency to accomplish the work they choose for themselves?
— Or, is this a technology that will entrap users in systems which create new forms of dependency and diminish self-directed agency?
— Will this technology generate an experience of real-world competency, or will it undermine the possibility of such an experience by promising to automate essential and meaningful labor?
— What implicit values will this technology bring into an existing social ecosystem? How will it erode the existing values that animate the institution or group it seeks to serve?
— In designing/adopting this technology, are we merely evangelists for a soulless gospel of optimization and efficiency?
— Because computerized systems excel at generating data of varying degrees of quality and usefulness, will this technology introduce measures and metrics into spheres of life where they do little good and mostly induce unnecessary anxiety and competitiveness?
— What versions of “modernized poverty” will this technology introduce into communities and sectors of society which are already under-resourced and inadequately supported?
— Will this technology introduce new social divisions and promote disabling hierarchies in the social ecosystem in which it is deployed?
— If the technology fails or if it is discontinued, will it leave its users worse off than they would have been had the technology never been introduced in the first place?
The line of argument implicit in these questions reaches its climax just after Illich tells his audience that “next to money and guns, the third largest North American export is the U.S. idealist, who turns up in every theater of the world: the teacher, the volunteer, the missionary, the community organizer, the economic developer, and the vacationing do-gooders”—to which list, of course, we can add the tech evangelist. It is then that he drops this devastating line:
“Perhaps this is the moment to instead bring home to the people of the U.S. the knowledge that the way of life they have chosen simply is not alive enough to be shared.”
I think this is it. There is a vision of the good life, a vision of what it means to be human implicated in all of our tools, devices, apps, programs, systems, etc. There is a way of being in the world that they encourage. There is a perspective on the world that they subtly encourage their users to adopt. There is a form of life that they are designed to empower and support.
Is this way of life alive enough to be shared?
If I were to become the ideal user of the technology you would have me adopt, would I be more fully human as a result? Would my agency and skill be further developed? Would my experience of community and friendship be enriched? Would my capacity to care for others be enhanced? Would my delight in the world be deepened? Would you be inviting me into a way of life that was, well, alive?
Illich wrapped up his talk with these closing lines:
“I am here to suggest that you voluntarily renounce exercising the power which being an American gives you. I am here to entreat you to freely, consciously and humbly give up the legal right you have to impose your benevolence on Mexico. I am here to challenge you to recognize your inability, your powerlessness and your incapacity to do the ‘good’ which you intended to do.”
At this point, I trust you can make the translation and re-application of these lines for yourself.
But I don’t want to give these lines the final word. There is one other theme woven more subtly in Illich’s talk with which I’ll close. It’s not quite stated positively, but it can be inferred.
For instance, it is there when Illich says to his audience, “you cannot even meet the majority which you pretend to serve in Latin America—even if you could speak their language, which most of you cannot.” Or when he says more forcefully, “If you insist on working with the poor, if this is your vocation, then at least work among the poor who can tell you to go to hell.”
The explicit problem in these lines is the incapacity to hear what those you seek to serve would tell you if you had ears to hear. What is perhaps implicit is that if you could hear, you might then be able to do the good. Perhaps not the “good” you intended, but the good that was needed.
Part of the problem in the case of these Americans in Mexico is that they could not understand the language. But that is only part of the problem. The more significant issue is a deeper incapacity to listen to others not because you do not speak the language but because you have already decided that you know what is best for them. Then, convinced of your wisdom and goodness, you are prepared to impose your will on the other.
Around the same time as Illich delivered this talk, he was also doing language training and reflecting, perhaps more deeply than most, on what it might mean to learn a language. In a reflection titled “The Eloquence of Silence,” Illich argued that “it takes more time and effort and delicacy to learn the silence of a people than to learn its sounds.” In this same reflection, Illich spoke of three kinds of silences. The first among these Illich described as “the silence of the pure listener … the silence through which the message of the other becomes ‘he in us,’ the silence of deep interest.”
This silence, a silence woven in humility and renunciation of power, is the precondition of any meaningful service to others. But it is this silence, born of the desire to listen and to understand, that seems utterly absent from so much of the innovation that emerges from the tech sector today. This does not mean that it is impossible to produce technology that actually does good in the world. It is only to explain, in part perhaps, why so much of it fails to do so.
The Convivial Society is made possible by readers who value the work and have the means to support it. If you value this kind of writing and desire to see it in the world, please consider becoming a paid subscriber.
1. Hilarity ensued … depending on your sense of humor, I suppose. I can’t help but find the whole thing rather amusing. Deadly serious, but also amusing. I mean, what were they thinking?

2. Of course, all of the salient points apply just as well to other technologies.

3. This is only partially tongue in cheek, as anyone who has attempted to navigate the torturously byzantine American health care/insurance system will tell you.

4. From Karpf’s essay: “At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future so we don’t get too caught up in the underwhelming details of the present. Why focus on how AI is being used to harass and exploit children when you can imagine the ways it will make your life easier? It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.”

5. To some degree and in some way or in certain circumstances and with certain people, it is all of us.
I've often wondered how it was to be in that room, the day Illich delivered his speech. A friend once expressed that she found herself physically shaking, just reading the text of it, from the force of his words. (I did hear a rumour, not long ago, that a recording from the original event may be in existence.)
I found myself wondering now, as I read this, how one might interrupt the certainties of your imagined audience sufficiently for there to be a chance of them hearing what you (and Illich) are saying? To the extent that the history of Silicon Valley still figures in its collective imagination, that might offer an entry point, thinking of Illich's influence on some of the Homebrew Computer Club folks, not to mention his connection with Stewart Brand. If I found myself addressing such an audience, I'd be tempted to start with that history – and maybe with the time Carl Mitcham told me about Illich announcing with horror (and, no doubt, humour), "People are saying I invented this internet!"
Among your questions, it was this one that got me thinking about those certainties in need of interruption:
"Will this technology generate an experience of real-world competency, or will it undermine the possibility of such an experience by promising to automate essential and meaningful labor?"
Perhaps because I just finished Wendell Berry's The Need to Be Whole, I'm wondering how to awaken the sense that there might be such a thing as "essential and meaningful labor", that labour itself is not inherently a thing from which we need to be "saved". Part of what makes this hard to imagine is that the whole history of industrial society is woven through with the stripping of meaning from labour in pursuit of efficiency, productivity, etc. Once labour has been remade to this logic, who wouldn't want to be saved from it? The idea that the problem might not lie in labour itself, but in the inexorable economic logic of efficiency, is not immediately intuitive, unless there's a direct experience through which we can grasp it.
It's good to be reminded, too, of the language of aliveness at the heart of this speech. I realise that this is one of the sources from which we drew the phrase "the work of regrowing a living culture" when framing what we're up to at our little school. The allegation implicit in the phrase is the one that Illich makes explicitly: this culture, this way of being in the world, is not alive enough to be worth "sustaining". And so, as Neto Leão says, "To hell with sustainability!"
(And it's interesting how many different ways Illich dances with the language of "life", over time, from this speech, to "conviviality", to the anathema on life-as-idol in the speech that he gave at Will Campbell's invitation in Ohio.)
Great. We'll all have personal digital assistants to perfect our very lives. And the data centers to run all this? There are already coal-fired power plants still on line, plants that needed to be retired years ago because of service life issues, kept running just to meet the new demand for 'data centers', much less the load for AI machines. For my part, everyone who's selling this beautiful future would have to take the graveyard shift shoveling coal into those boilers, day after day. But, who am I kidding. I know who's going to be covered in coal dust with a Seymour #11 scoop shovel cramped into their hands, and it won't be Sam Altman. And, speaking of Schadenfreude, the big transformers that get electricity from the generating station to your personal digital assistant? Lead time on your utility provider getting one to boost power to your privileged neighborhood: 120 weeks. Altman's autonomous personal assistant is going to want your neighbor's air conditioning energy. Guess whose rates are going to have to cover the difference?
Thanks for the Illich here, Mr. Sacasas.