"The operating code of industrial tools encroaches on everyday language and reduces the poetic self-affirmation of men to a barely tolerated and marginal protest. The consequent industrialization of man can be inverted only if the convivial function of language is recuperated, but with a new level of consciousness. Language which is used by a people jointly claiming and asserting each person’s right to share in the shaping of the community becomes, so to speak, a second-order tool to clarify the relationships of a people to engineered instrumentalities."
— Ivan Illich, Tools for Conviviality (1973)
On the rare occasions when I'm asked about my field of study or my research interests, I tend to say something like "philosophy and ethics of technology" or simply "ethics of technology." It's not merely a matter of intellectual curiosity, either. I believe it is urgent work. (I know, everyone thinks that their own area of interest is of utmost importance. Guilty, but at least I'm aware of the unseemly self-regard.) It is urgent work that must nonetheless be undertaken carefully and determinedly. I also believe it is vexing work, complicated by how the problem we seek to understand already conditions our efforts. Our thinking, our imagination, our moral reasoning, our character—each is already marked by the phenomenon we are trying to comprehend.
Heidegger famously observed that the essence of technology is nothing technological. Likewise, the most important aspect of any ethics of technology is not ultimately the technology in question. The more deeply we reflect on the ethical quandaries posed by technology, the more we realize that this or that technology is not the problem. The problem is the morally fractured and impoverished field from which these technologies emerge and into which they proceed. I do not mean by this merely that there exist bad actors ready to misuse technology; I mean that we lack a moral firewall with which to oppose the encroachments of technique and its apparatus.
Consider, for example, an article (linked below) that argues, correctly it seems to me, that we are having a difficult time reckoning with big technology firms because the language we deploy to understand the threat they pose is inadequate. "If you can’t name and describe an injustice," the author argues, "then you will have an extremely difficult time fighting it." True enough. But what if we not only lack the language to name the injustice but also the moral framework within which to judge the full range of injustice? Moreover, what if the concept of injustice, critical as it is, does not itself capture the full range of moral realities to which we should be attentive? The depth of the problem is suggested by the trivial and ineffectual character of the example with which he concludes, an example of how we might begin to address the failure of language.
Take, as a further example, the idea of privacy so frequently discussed in relation to technology over the last several years. Allow me to suggest that nobody cares about privacy. I say that chiefly because while many people will profess to be vaguely concerned about privacy, very few, I suspect, could supply a coherent definition of the good they call privacy. It is an empty signifier. More precisely, the word now floats about untethered from any social reality within which it might make compelling moral sense to the average person.
By the average person, I'm generally thinking of those who are not students of the history or philosophy of technology or otherwise connected to the world of tech-critical discourse among academics, journalists, etc. You know, the vast majority of our fellow citizens. I'm increasingly convinced that there is a great chasm fixed between these two worlds. In my admittedly limited circles, remarkably few people are bothered by, for example, the data collection practices of the big tech companies. And, frankly, I'm not sure they are entirely to blame. I'm not sure that a compelling moral case has been made against these practices. More worrisome still, I believe we lack the public conceptual infrastructure that might sustain such a case. Privacy talk operates with a vestigial moral category and as such can only generate, at best, a nondescript and passing experience of moral turbulence. A few bumps, momentarily worrisome, on what is experienced as an otherwise smooth flight into our emerging techno-social reality.
Again, my view is not that people simply need to pay more attention to critics, academics, and journalists and all will be well. My point is that even the most thoughtful of these critics, academics, and journalists are, at best, directing us to symptoms while never approaching the underlying disorder. (I don't, to be clear, exempt myself from this judgment.) For this reason, I grow increasingly skeptical about the recent growth of interest in and attention paid to the ethics of technology. It's not a skepticism born of cynicism or the belief that those involved are necessarily acting in bad faith, or of what I've elsewhere called critical hipsterism. Rather, it's born of a growing realization that the roots of our situation run far deeper, and we seem unable to perceive them, much less get at them in any meaningful way.
But what is there to do other than to keep digging?
News and Resources
At the Guardian you can read an excerpt from Marc DaCosta's Logic essay on the inadequacy of the language we use to grapple with our present techno-social realities: "... we won’t fix it with better public policy alone. We also need better language. We need new metaphors, new discourse, a new set of symbols to illustrate how these companies are rewiring our world, and how we as a democracy can respond." I think DaCosta is basically right about his key claim. His effort at the end to point us forward, as I suggested above, falls flat. But I think he recognizes as much.
Peter Asaro, Kelly Gates, Woodrow Hartzog, Lilly Irani, Evan Selinger and Lucy Suchman co-author a call for Amazon to "stop building the facial recognition infrastructure for law enforcement agencies and the government and be committed to never return to the business in the future."
Harvard/MIT scholar Judith Donath believes facial recognition technology will be ubiquitous within ten years. She is interviewed by Jon Christian: "I think people will become quickly reliant on it. I’m guessing within 10 years, we will be in a situation where if we had our facial recognition abilities taken away from us, it would be creepy to be out in public with everyone as a total stranger whom you know nothing about." Security, she proposes, will be the fulcrum: "It will make people feel safer."
"'Some parents still think it’s kind of 1984,' said Weil, whose 21-month-old granddaughter is among the scanned. 'A lot of people are afraid we’re getting too much information. . . . But the biggest thing for us is that we protect our kids.'" Washington Post reports on facial recognition companies targeting schools.
John Danaher, a scholar whose work is consistently clear and helpful, proposes an initial framework for the ethics of AI assistants.
"It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans." How tech firms quietly use humans to do bots' work.
"Google is billing Duplex as a way to promote Time Well Spent™ and lessen language barrier issues so that we can all be free to engage with the world outside our digital screens ... it’s more likely that this technology, like others before it, will just encourage us to further distance and focus only on ourselves in the world within a bubble." Natt Garun on the selfishness of Google Duplex.
Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras: "In Zhengzhou, police were happy to explain how just the thought of the facial recognition glasses could get criminals to confess."
Frank Pasquale: "Finally, we must also acknowledge that, sometimes, it may be impossible to 'enlist technology in the service of values at all.' A continuous child-face-scanning system in schools, no matter how humanely administered, is oppressive. Nor are efforts to recognize the quintessential facial structure of criminals a project that can be humanized with proper legal values and human rights. Sometimes the best move in a game is not to play."
Zachary Loeb reviews Surveillance Valley: The Secret Military History of the Internet.
It appears that CRISPR is not quite ready for primetime.
The Internet is Drowning: "Within 15 years, thousands of miles of fiber optic cable—and hundreds of pieces of other key infrastructure—are likely to be swamped by the encroaching ocean. And while some of that infrastructure may be water resistant, little of it was designed to live fully underwater."
Blistering and deeply informed review of Steven Pinker's Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. The first paragraph is a winner.
On the fascinating effort to read the charred papyrus scrolls discovered in Herculaneum, the Italian town destroyed by the eruption of Mount Vesuvius: "The facility, called Diamond Light Source, is one of the most powerful and sophisticated X-ray facilities in the world, used to probe everything from viruses to jet engines. On this summer afternoon, though, its epic beam will focus on a tiny crumb of papyrus that has already survived one of the most destructive forces on the planet—and 2,000 years of history."
Re-framings
From Leo Marx’s "Technology: The Emergence of a Hazardous Concept." This is a valuable article, and I'm sympathetic to Marx's argument; however, I offered a somewhat critical reading via Langdon Winner and Jacques Ellul here:
"As for the hazardous character of the concept of technology, here I need only say that I am not thinking about weaponry or the physical damage wrought by the use of any particular technologies. The hazards I have in mind are conceptual, not physical. They stem from the meanings conveyed by the concept technology itself, and from the peculiar role it enables us to confer on the mechanic arts as an ostensibly discrete entity—one capable of becoming a virtually autonomous, all-encompassing agent of change."
__________________
Jacques Ellul, The Technological Society:
"Abstract techniques and their relation to morals underwent the same evolution. Earlier, economic or political inquiries were inextricably bound with ethical inquiry, and men attempted to maintain this union artificially even after they had recognized the independence of economic technique. Modern society is, in fact, conducted on the basis of purely technical considerations. But when men found themselves going counter to the human factor, they reintroduced—and in an absurd way—all manner of moral theories related to the rights of man, the League of Nations, liberty, justice. None of that has any more importance than the ruffled sunshade of McCormick's first reaper. When these moral flourishes overly encumber technical progress, they are discarded—more or less speedily, with more or less ceremony, but with determination nonetheless. This is the state we are in today."
__________________
Marshall McLuhan, 1966 interview:
"Fulford: What kind of a world would you rather live in? Is there a period in the past or a possible period in the future you’d rather be in?
McLuhan: No, I’d rather be in any period at all as long as people are going to leave it alone for a while.
Fulford: But they’re not going to, are they?
McLuhan: No, and so the only alternative is to understand everything that is going on, and then neutralize it as much as possible, turn off as many buttons as you can, and frustrate them as much as you can. I am resolutely opposed to all innovation, all change, but I am determined to understand what’s happening because I don’t choose just to sit and let the juggernaut roll over me. Many people seem to think that if you talk about something recent, you’re in favor of it. The exact opposite is true in my case. Anything I talk about is almost certainly to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button."
Recently Published
My piece in the Summer issue of The New Atlantis, "The Tech Backlash We Really Need," is no longer paywalled.
Of note on the blog:
Eight Theses Regarding the Society of the Disciplinary Spectacle
Political Economy or Ethics of Technology
Two posts riffing on Zygmunt Bauman: Swarms and Networks / Roots and Anchors
Things are changing. They're always changing, but more so of late for me. I'm in the midst of a rather significant reorganization of life and work. As is often the case with such moments, I'm feeling both a refreshing sense of renewed possibilities and a bit of trepidation as I move forward into essentially uncharted and unpredictable territory. Without going into great detail—I remain an incurably private person even as I write to this small set of readers, for whom I am grateful—here's the gist of it: I'm setting aside the fairly predictable work at which I've labored for nearly a decade in order to pursue what has, till now, been little more than an avocation I've indulged, probably too much so. The avocation I will now pursue as a vocation is my work on technology and society, my best shorthand for what is in fact an unwieldy set of interrelated concerns. Chiefly, this will entail my work on CSET, about which I'll keep you posted. But it also involves the blog, this newsletter, occasional speaking, and more writing for places like The New Atlantis. Stay tuned for more.
If you reply to this email, I'll get it in my Inbox. If you know anyone who might enjoy reading, pass along a link. If you're inclined to support the writer's efforts, you may do so here or here.
Cheers,
Michael