The Convivial Society, No 23
“Those who read the press of their group and listen to the radio of their group are constantly reinforced in their allegiance. They learn more and more that their group is right, that its actions are justified; thus their beliefs are strengthened. At the same time, such propaganda contains elements of criticism and refutation of other groups, which will never be read or heard by a member of another group...Thus we see before our eyes how a world of closed minds establishes itself, a world in which everybody talks to himself, everybody constantly views his own certainty about himself and the wrongs done him by the Others - a world in which nobody listens to anybody else.” — Jacques Ellul, Propaganda (1973)
[Ordinary practice here is for there to be an extended reflection below and two sets of resources/links with varying degrees of context and commentary below that. As it has turned out in this installment, you'll find what amounts to a mini-essay within each of those two sections below.]
A week or two ago you may have come across ImageNet Roulette, an application developed by Kate Crawford and Trevor Paglen, which used facial recognition software trained on the "person" category in the ImageNet database. You upload a picture and then, after a few seconds, your picture appears with your face blocked off in a green rectangle that also includes the tag with which the model has associated you. I took a bit too long to get this newsletter published, and now ImageNet Roulette is no longer available. My guess, though, is that many of you reading this will already have played around with it. If you have, you may also already know that it was possible to draw a rather offensive label.
"As we go further into the depths of ImageNet’s Person categories, the classifications of humans within it take a sharp and dark turn. There are categories for Bad Person, Call Girl, Drug Addict, Closet Queen, Convict, Crazy, Failure, Flop, Fucker, Hypocrite, Jezebel, Kleptomaniac, Loser, Melancholic, Nonperson, Pervert, Prima Donna, Schizophrenic, Second-Rater, Spinster, Streetwalker, Stud, Tosser, Unskilled Person, Wanton, Waverer, and Wimp. There are many racist slurs and misogynistic terms."
I drew at least one of these myself. For Crawford and Paglen, this is part of the point. ImageNet Roulette was created, in part, to convey a point about the limitations of facial recognition software, especially when used to make predictive claims about individuals based on their appearance. As many have noted, facial recognition in some circles amounts to data-driven phrenology. I'm sometimes inclined to think that we are doomed to repeat the worst errors of the past but in a digitally augmented fashion. Often this is connected with a characteristically modern desire or urge to achieve a God's-eye-view of things without, you know, God. Much of what we might now think of as traditional postmodernism—thirty years ago I suspect no one would have imagined speaking of traditional postmodernism, but there it is and I think it works—was basically an acknowledgement that the modern quest for certain, objective, universal Truth had exhausted itself. Digital technology has re-animated the corpse, which is why we now see zombie versions of phrenology, eugenics, and the like floating around. But I digress ...
While you can no longer use ImageNet Roulette, you can read a long report from Crawford and Paglen, which supplies the substantive argument of which the application was but a kind of interactive object lesson. The report is titled "Excavating AI: The Politics of Images in Machine Learning Training Sets."
The report is worth your time. Here are a couple of highlights that get at the most salient aspects of the whole:
"Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around 'bias' in AI systems misses the mark: there is no 'neutral,' 'natural,' or 'apolitical' vantage point that training data can be built upon. There is no easy technical 'fix' by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform."
"What are the assumptions undergirding visual AI systems? First, the underlying theoretical paradigm of the training sets assumes that concepts—whether 'corn', 'gender,' 'emotions,' or 'losers'—exist in the first place, and that those concepts are fixed, universal, and have some sort of transcendental grounding and internal consistency. Second, it assumes fixed and universal correspondences between images and concepts, appearances and essences. What’s more, it assumes uncomplicated, self-evident, and measurable ties between images, referents, and labels. In other words, it assumes that different concepts—whether 'corn' or 'kleptomaniacs'—have some kind of essence that unites each instance of them, and that that underlying essence expresses itself visually. Moreover, the theory goes, that visual essence is discernible by using statistical methods to look for formal patterns across a collection of labeled images."
Once again, I commend the whole report to you.
I will make just one passing observation (a series of questions really), and, regrettably, it has lost some of its power now that the tool is no longer available. If you did get a chance to play with it, recall your experience. If you did not, try to imagine what might have motivated you to give something like ImageNet Roulette a try. Perhaps you approached it in the same spirit that you might have, in another age, agreed to visit the fortune teller at a carnival. Now, what exactly is it that you expect? Or what do you tacitly hope for?
Do you hope that the tool validates your sense of identity? That it, in other words, confirms to you that you are just who you think you are? If so, what does this reveal about how we experience our own self? Why would we crave independent verification of the reality about which we alone would seem to be in a position to know best?
Do you hope that your result is absurdly off the mark? Would this be a relief, dispelling the fear that we might be so readily legible to a machine, that who we are is so uncomplicated a reality that a computer program could easily discern its essence?
Or do you hope that it reveals something about you to yourself, something that has escaped your notice, something that will solve for you the mystery of just who exactly you are? If so, why is that? Are we bored with ourselves? Are we sure that something of consequence is missing, yet altogether baffled by the thought of finding it?
Maybe it is something else altogether. I don't know. I do know this line of thought was brought to you by the spirit of Walker Percy, who many years ago wrote, "What does a man do when he finds himself living after an age has ended and he can no longer understand himself because the theories of man of the former age no longer work and the theories of the new age are not yet known, for not even the name of the new age is known, and so everything is upside down, people feeling bad when they should feel good, good when they should feel bad? …. What is he then? He has not the faintest idea. Entered as he is into a new age, he is like a child who sees everything in his new world, names everything, knows everything except himself.”
News and Resources
Nick Barrowman on why data is never raw, in The New Atlantis.
Podcast interview with Samantha Hill of the Hannah Arendt Center at Bard College on the enduring relevance of Arendt's work.
"When we watch TV, our TVs watch us back and track our habits ..." A Twitter thread from Arvind Narayanan, which includes links to three academic papers recently released on the topic.
From Roy Baumeister's 1987 article, "How the Self Became a Problem: A Psychological Review of Historical Research":
On the critical thinking fallacy: "In other words, intellectual skills and knowledge are not two distinct things. They must work together to produce critical thinkers. Put more baldly, despite all the rhetoric, there is no such thing as critical thinking in general."
"But what happens when you fuse startup culture, artificial intelligence, and fearful neighbors? Call it the rise of networked vigilante surveillance." On Flock Safety (these names ...), a start-up creating automatic license plate readers for sale to individuals and communities. I'm coming to the conclusion that, outside of certain small circles, devices such as this and Amazon's Ring doorbell are assumed to be wholly benign technologies, and their promise of heightened security is taken at more or less face value. I think this is a mistake, needless to say, for a variety of reasons. I'll mention only one, slightly more abstract consideration here: They presume our competence to judge, a presumption which is not warranted. These tools make us aware of phenomena that would ordinarily escape our notice, because, by and large, these phenomena are unremarkable. By extending our capacity to notice, they are thereby calling upon us to make judgments. Is this activity normal? Is it suspicious? Should I act? Should I alert the authorities? Etc. Are we competent to make such judgments? Will the need to make such judgments not, by its very nature, throw us back on some of our worst instincts in the absence of the kind of wisdom and skill needed in such cases? It seems like a case of desiring knowledge we are incapable of handling wisely or justly.
I'm reminded of Auden questioning our acceptance of "the notion that the right to know is absolute and unlimited." "We are quite prepared," Auden continued, "to admit that, while food and sex are good in themselves, an uncontrolled pursuit of either is not, but it is difficult for us to believe that intellectual curiosity is a desire like any other, and to recognize that correct knowledge and truth are not identical. To apply a categorical imperative to knowing, so that, instead of asking, 'What can I know?' we ask, 'What, at this moment, am I meant to know?' — to entertain the possibility that the only knowledge which can be true for us is the knowledge that we can live up to — that seems to all of us crazy and almost immoral."
It seems to me that the concepts of "knowledge I am meant to know" and "knowledge I can live up to" both imply a measure of moral responsibility and a certain degree of moral capacity. Regarding the net of surveillance we are knitting all about us, I'm not sure that we are up to the task. And this is to say nothing of the powerful interests who will also deploy this technology to their own ends with little regard for justice or the well-being of our communities.
"The history of television’s place in domestic interiors fits into a much larger story about the look of technology in the home. Are pieces of consumer technology machines, furniture, or something else?" More here.
On the dwindling number of US Forest Service fire lookouts: "Humans throughout our culture are going to be replaced by various forms of automation going forward. It's happening now and it's going to accelerate," he says. "But I think we do ourselves a disservice if we don't at least stop and ask for half a second: What will we lose in that move towards automation?"
Wendell Berry writes about the agrarian standard:
"If we believed that the existence of the world is rooted in mystery and in sanctity, then we would have a different economy. It would still be an economy of use, necessarily, but it would be an economy also of return. The economy would have to accommodate the need to be worthy of the gifts we receive and use, and this would involve a return of propitiation, praise, gratitude, responsibility, good use, good care, and a proper regard for the unborn. What is most conspicuously absent from the industrial economy and industrial culture is this idea of return. Industrial humans relate themselves to the world and its creatures by fairly direct acts of violence. Mostly we take without asking, use without respect or gratitude, and give nothing in return.
In any consideration of agrarianism, this issue of limitation is critical. Agrarian farmers see, accept, and live within their limits. They understand and agree to the proposition that there is “this much and no more.” Everything that happens on an agrarian farm is determined or conditioned by the understanding that there is only so much land, so much water in the cistern, so much hay in the barn, so much corn in the crib, so much firewood in the shed, so much food in the cellar or freezer, so much strength in the back and arms — and no more. This is the understanding that induces thrift, family coherence, neighborliness, local economies. Within accepted limits, these become necessities. The agrarian sense of abundance comes from the experienced possibility of frugality and renewal within limits."
A few years back, in a post that is no longer available as far as I can tell, Ross Andersen addressed the question of the pace of technological change. "I used to think this rapid pace of change was uncontroversially a good thing," Andersen wrote, "but a few years ago, I read a long New Yorker piece about Paleolithic cave art that made me think twice." He went on to explain how paleoanthropologists had concluded that cave paintings discovered in the mid-1990s, which closely resembled the most famous cave paintings found at Lascaux, in fact dated from nearly 15,000 years earlier. (Replica below.) This prompted Andersen to observe, "People had been working within the same artistic tradition, with few changes, for ages." Andersen then cites the following from that piece in the New Yorker:
"What emerged with that revelation was an image of Paleolithic artists transmitting their techniques from generation to generation for twenty-five millennia with almost no innovation or revolt. A profound conservatism in art, Curtis notes, is one of the hallmarks of a 'classical civilization.' For the conventions of cave painting to have endured four times as long as recorded history, the culture it served, he concludes, must have been 'deeply satisfying'—and stable to a degree it is hard for modern humans to imagine."
Andersen noted that it was that last line that gave him pause. "That last line has always stayed with me," he wrote. And it has, consequently, stayed with me. I first began drafting a blog post about it four years ago or so, but it was one of those drafts that never quite materialized. I won't attempt to develop this much further beyond pairing the line with one other line that has stayed with me, a definition of technology offered by the Spanish philosopher Jose Ortega y Gasset: “Technology is the production of superfluities—today as in the Paleolithic age. That is why animals are atechnical; they are content with the simple act of living.”
Leaving aside the matter of whether animals are atechnical, a quibble that doesn't get at the heart of the claim, there seems to be something to the notion that our technical virtuosity springs, at some fundamental level, from dissatisfaction or discontentment. Dissatisfied with the way things are, we seek out a tool or technique that will address the source of our dissatisfaction. Or, to put it the other way around, if we were completely satisfied, we would not seek different or better ways of doing things.
I'm not sure this holds up in all or maybe even most cases. The sources of technical change are many and mixed. Moreover, it is not obvious to me that if you invent a tool to help you feed your family, for example, that you are necessarily dealing in superfluities. Further, we must take stock of what we might think of as manufactured discontent, which is to say the business of marketing. It seems that Ortega's definition is closer to the mark in today's affluent societies than at any other point in human history.
Nonetheless, it is useful to consider how often it is the case that we turn to technology out of a sense of dissatisfaction, whether as mere users of technology or as creators as well. With what exactly are we dissatisfied? What precisely is the character of the discontent that we seek to overcome? Exploring these questions would elucidate the nature of the technology under consideration and its role in our lives. Is it, for example, injustice with which we are impatient or, rather, some aspect of our embodied human condition? Even if our dissatisfaction is justified, are we justified in seeking to alleviate it by technical means?
Thinking along these lines called to mind Norbert Wiener's admonition in God and Golem, Inc., cited in a fine profile of Wiener by Doug Hill:
"Of the devoted priests of power, there are many who regard with impatience the limitations of mankind, and in particular the limitations consisting in man’s undependability and unpredictability …. To this sort of sorcerer, not only the doctrines of the Church give a warning but the accumulated common sense of humanity, as accumulated in legends, in myths, and in the writings of the conscious literary man. All of these insist that not only is sorcery a sin leading to Hell but it is a personal peril in this life. It is a two-edged sword, and sooner or later it will cut you deep."
Not a thing, but yes, things in the works.
A handful of you have reached out via email since the last installment and I have failed to respond. Accept my apologies and know that I will be working on those replies in the next few days. If you're ever inclined to drop me a note, don't let that admission discourage you from doing so.
And please consider yourself encouraged to pass along a link to the newsletter to anyone you think might be interested.