The Convivial Society, No. 21
"Contemporary man ... attempts to create the world in his image, to build a totally man-made environment, and then discovers that he can do so only on the condition of constantly remaking himself to fit it." — Ivan Illich, Tools for Conviviality
On his podcast, Ezra Klein sparred with Nir Eyal about whether "big tech is addictive" and other related matters. You may recognize Eyal as the author of Hooked, a book about generating habit-forming user experiences, and, more recently, Indistractable, a book about how to reclaim your attention from the distractedness fostered by digital devices. Eyal is altogether unfazed by the apparent tension, about which Klein patiently inquires early in the interview. Eyal is also unperturbed by the possibility that his earlier book, quite popular in Silicon Valley circles, might have contributed to patterns whose consequences we might generously call unfortunate. That was not at all his intent, he protests. The phrases "the cost of progress" and "moral panic" were also invoked.
Much of the argument in the first part of the exchange focused on the particular kind of agency that we assign to digital tools (apps, devices, etc.) and the ability of individuals to take control of their capacity to pay attention. Naturally, it is possible to err in two directions. We might err in believing that we yield agency to devices so that we make ourselves out to be helpless automatons programmed by our use of this or that technology. Alternatively, we might err in assuming that our will-power is always up to the task of rational self-determination.
It seems that very talented and well-resourced individuals tend to err in the second direction, overestimating the power of individuals to improve their situation simply by making better choices. On the most generous interpretation, we might say that this view takes a high view of human beings and their capacities. But it also fails to account for the fact that not everyone occupies the same playing field with regard to the choices available to them. It's a social variation of what Pascal identified as the error of Stoicism (coincidentally, an ascendant moral philosophy in Silicon Valley): believing that what can be done once can be done always. I'd put it this way: believing that what you are able to do under your own circumstances, everyone else can do under theirs. To be fair, Eyal gestures toward this more complicated reality that might undermine our self-determination, but the interview moves on without much elaboration. Perhaps it is a matter of emphasis. Eyal emphasizes our capacity to determine our situation while granting that certain circumstances might make this difficult. Klein finds this emphasis unhelpful and misleading, and I tend to agree with Klein.
Chiefly, Eyal wants us to stop blaming the technology itself. There are other, deeper issues involved that we ignore at our peril when we focus on this or that device or app. This is a fair point and I'll get back to it in a moment, but first it may be helpful to reframe the discussion. We are sliding back and forth between a focus on the technological artifact "out there" and the will "in here"—the external tool and the internal will. Following the philosopher Peter-Paul Verbeek, it is useful to relocate our attention to the interaction of these two realities rather than to one or the other. By taking a phenomenological approach to the ethics of technology, Verbeek emphasizes how technology mediates our perception and our action. The point of moral significance is not simply the tool or the will; it lies, rather, at the intersection of the two, where we find the will being shaped by the mediating power of the technology in question. The technology is not all that matters, but you can't simply discount its formative powers. As Verbeek explains, the human and the technological are intertwined:
"The two simply cannot be separated. Humans are technological beings, just as technologies are social entities. Technologies, after all, play a constitutive role in our daily lives. They help to shape our actions and experiences, they inform our moral decisions, and they affect the quality of our lives. When technologies are used, they inevitably help to shape the context in which they function. They help specific relations between human beings and reality to come about and coshape new practices and ways of living.”
I don't know that Eyal would necessarily disagree with this in principle, but his commitment to seeing the technology itself as merely a surface level issue suggests to me that he's underestimating the significance of technology's role in mediating perception and action.
That said, there are deeper issues involved. I found myself agreeing with Eyal (and Klein) when they discussed how our use of technology is often driven by what Eyal called "needs displacement." So, for example, if kids are over-using (Eyal's preferred term) smartphones or video games or television or pinball machines, it is not the tool that is to blame but rather a social environment that is failing to satisfy some fundamental need for purpose or companionship or direction or whatever. This can often be the case, and I've argued as much on a handful of occasions. But this lets technology off the hook too easily. It does not account for how a device's design for compulsive engagement also interferes with our capacity to find purpose or companionship within our given circumstances: our habit-forming use of the technology shapes us into the sort of people who are less likely to avail ourselves of deeper, more substantive means of satisfaction. Nor does it account for the ubiquity of digital devices as opposed to the fixed character of a television or even a desktop computer. A smartphone, after all, is not merely a different sort of pinball machine. And, I would add, neither does it account for how technology not only answers desires but also generates them.
Just two more observations in passing. First, the more specific discussion of attention and distraction assumed a particular view of what attention is and what it is for. More specifically, Eyal explicitly foreclosed the question of what attention is for, treating it chiefly as a morally neutral capacity that he was going to teach you to use well. What you used it for was none of his business. I understand why he would take this position, but I think it restricts the usefulness of his analysis. I'm not sure the question of purpose and the question of function can be so easily separated. And this is because I don't think attention is simply a capacity to direct one's intentionality this way or that. Attention is constituted in part by the object upon which it fastens, and attention can also be understood as a way of being in the world that takes the world's givenness seriously. In this latter sense, it is a way of letting go of our drive toward the self-determination of our mental activity.
Secondly, and very briefly, Eyal mentions in passing a connection between some sort of innate dissatisfaction that characterizes the human condition and the role of technology in our lives. This is worthy of further reflection, but I'll just mention that it recalled this definition of technology from the Spanish philosopher José Ortega y Gasset: "Technology is the production of superfluities—today as in the Paleolithic age. That is why animals are atechnical; they are content with the simple act of living."
________________
More on Verbeek's philosophy of technological mediation.
More on the varieties of attention.
Of related interest, on social media and loneliness.
News and Resources
Harvard is planning the first real-world geoengineering experiments, questions of oversight loom large: "Critics fear such a step will lend scientific legitimacy to the idea that we could turn the dial on Earth’s climate. And they fret that even doing experiments is starting down a slippery slope toward creating a tool of incredible power." To be clear, the experiments themselves do not amount to geoengineering. There is a vicious cycle here, of course: technological interventions require further technological interventions, hubris demands further hubris. I don't say that glibly. The problems to which geoengineering purports to be a solution are serious, complex, and urgent. I'm not optimistic, however, that we will find our way if we are guided by the same spirit and informed by the same imagination.
In the last installment, I suggested, without too much elaboration, that there is a longstanding link between what we tend to gloss as surveillance society and the genre of reality TV. As it turns out, Martha Bayles recently argued along similar lines in more sophisticated and better researched fashion.
Second keynote from a conference at the University of Kent's Centre for Critical Thought on Gilbert Simondon's work: Politics, Culture and Technics in Simondon.
In the event that you might have access to this journal: "Algorithm Overdependence: How the Use of Algorithmic Recommendation Systems Can Increase Risks to Consumer Well-Being." "Counter to prior findings, this research indicates that consumers frequently depend too much on algorithm-generated recommendations, posing potential harms to their own well-being and leading them to play a role in propagating systemic biases that can influence other users."
In a recent newsletter, Alan Jacobs included this little bit of miscellany: "Orthosomnia: A condition in which concern about the quality of your sleep interferes with the quality of your sleep."
Upon reading this, I immediately wondered what role sleep tracking apps might play in generating orthosomnia. Lo and behold, the article to which Jacobs linked directly addressed the question: "They seemed to have symptoms related to concerns about what their sleep-tracker devices were telling them." The term was, in fact, recently coined to describe precisely this phenomenon. My second thought was this: Orthosomnia is a sign pointing to the nature of our situation. It is a particularly clear case of a pattern that recurs throughout our experience. We've created a series of self-defeating reflexive loops that have generated a crippling awareness of self and society. I've never articulated this in a satisfactory way, but I'm sometimes tempted to argue that this disordered self-consciousness is one of the most serious problems we now face because it structures our perception, experience, and action.

Earlier this year, you may have heard that Mazda, citing safety concerns, was doing away with touch screen controls on its vehicles. Now the US Navy intends to do the same. "'We got away from the physical throttles, and that was probably the number one feedback from the fleet - they said, just give us the throttles that we can use,' said Rear Adm Galinis." I mention this here simply to highlight the consequences of adopting first and asking questions later.
Jathan Sadowski has a forthcoming article from which he shared the following screen shot on Twitter. It's a handy paragraph whose rhythms tempt me to think of it as something like a catechism for the digital age.
Jathan shared this paragraph as a comment on Amazon's announcement that its facial recognition software, named Rekognition as if it were out of a Bond parody film, had been updated and improved: "With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’." In the case of fear, it's worth noting how, in certain situations where the technology is likely to be used as an instrument of law enforcement, the tool may in fact occasion the emotion it is purportedly detecting.
Apropos of nothing I ordinarily write about, here's a fascinating 2012 essay on Byzantium that defies simple description. Link via Robert Cottrell's The Browser.
"From 20,000 miles up, our home planet is a hypnotic swirl of the familiar and the sublime": Video. Link via Patrick Tanguay.
Re-framings
More recently, Alan Jacobs included a link in his newsletter to a post discussing an image designed in 1926 by Dr. Fritz Kahn and known, in English, by the title "Man as Industrial Palace."
The post explains, "The analogy of man to machine was a widespread cultural trend in the 1920s and 30s. The explosion of industry and consumer technology in the early 20th century inspired new approaches to art and literature, integrating the human experience into the dynamic speed, power and sharp edges of the machine age."
That is all true, of course. But it is useful to remember how far back the analogy of man to machine goes. Man a Machine, published in 1747 by Julien Offray de La Mettrie, comes immediately to mind. Interestingly, both Kahn and La Mettrie were physicians.
Writing in 1829, Thomas Carlyle famously declared his age to be an "Age of Machinery," "... the age which, with its whole undivided might, forwards, teaches and practices the great art of adapting means to ends." He went on to add:
"Men are grown mechanical in head and heart, as well as in hand. They have lost faith in individual endeavors, and in natural force, of any kind. Not for internal perfection but for external combinations and arrangements for institutions, constitutions, for Mechanism of one sort or another, do they hope and struggle.”
However, Lewis Mumford, in Technics and Civilization (1934), would have us look further into the past for the origins of the analogy:
"While people often call our period the 'Machine Age,' very few have any perspective on modern technics or any clear notion as to its origins. Popular historians usually date the great transformation in modern industry from Watt’s supposed invention of the steam engine; and in the conventional economics textbook the application of automatic machinery to spinning and weaving is often treated as an equally critical turning point. But the fact is that in Western Europe the machine had been developing steadily for at least seven centuries before the dramatic changes that accompanied the 'industrial revolution' took place. Men had become mechanical before they perfected complicated machines to express their new bent and interest; and the will-to-order had appeared once more in the monastery and the army and the counting-house before it finally manifested itself in the factory. Behind all the great material inventions of the last century and a half was not merely a long internal development of technics: there was also a change of mind. Before the industrial processes could take hold on a great scale, a reorientation of wishes, habits, ideas, goals was necessary."
The takeaway here is that we did not begin thinking of ourselves by analogy to machines only once complex industrial machines appeared on the scene. Mumford, with good reason, suggests that we began thinking of ourselves by analogy to machines long before the dawn of the industrial age. I'm tempted to suggest that this "change of mind," as Mumford calls it, this yielding of the imagination to the image of the machine, is somehow at the root of all that we think of as modern, particularly with regard to the machine-like, putatively self-regulating character of the modern economy and the machine-like objectivity of our legal and political apparatus.
See also The Restless Clock: A History of the Centuries-Long Argument Over What Makes Living Things Tick by Jessica Riskin.
Recently Published
Nothing of note here except a recent blog post on language and digital media.
Quick programming note about The Convivial Society. I'm hoping to add a new feature to the newsletter: an occasional excerpt from recent or forthcoming books exploring the intersection of technology and society. The excerpts will go out as a new stand-alone installment of the newsletter, maybe once monthly or so. I've got a couple lined up already and I'm looking forward to sharing those soon.
Cheers,
Michael