The Uncanny Gaze of the Machine

The Convivial Society: Vol. 3, No. 4

Welcome to the Convivial Society, a newsletter about technology and culture. The pace of the newsletter has been slow of late, which I regret, but I trust it will pick up just a touch in the coming weeks (also please forgive me if you’ve been in touch over the past month or so and haven’t heard back). For starters, I’ll follow up this installment shortly with another that will include some links and resources. In this installment, I’m thinking about attention again, but from a slightly different perspective—how do we become the objects of attention for others? If you’re a recent subscriber, I’ll note that attention is a recurring theme in my writing, although it may be a while before I revisit it again (but don’t hold me to that). As per usual, this is an exercise in thinking out loud, which seeks to clarify some aspect of our experience with technology and explore its meaning. I hope you find it useful. Finally, I’m playing with formatting again, driven chiefly by the fact that this is a hybrid text meant to be read and/or listened to in the audio version. So you’ll note my use of bracketed in-text excursuses in this installment. If it degrades your reading or listening, feel free to let me know.


Objects of Attention

A recent email exchange with Dr. Andreas Mayert got me thinking about attention from yet another angle. Ordinarily, I think about attention as something I have, or, as I suggested in a recent installment, something I do. I give my attention to things out there in the world, or, alternatively, I attend to the world out there. Regardless of how we formulate it, what I am imagining in these cases is how attention flows outward from me, the subject, to some object in the world. And there’s much to consider from that perspective: how we direct our attention, for example, or how objects in the world beckon and reward our attention. But, as Dr. Mayert suggested to me, it’s also worth considering how attention flows in the opposite direction. That is to say, considering not the attention I give, but the attention that bears down on me.

[First excursus: The case of attending to myself is an interesting one given this way of framing attention as both incoming and outgoing. If I attend to my own body—by minding my breathing, for example—I’d say that my attention still feels to me as if it is going outward before then focusing inward. It’s the mind’s gaze upon the body. But it’s a bit different if I’m trying to attend to my own thoughts. In this case I find it difficult to assign directionality to my attention. Moreover, it seems to me that the particular sense I am using to attend to the world matters in this regard, too. For example, closing my eyes seems to change the sense that my attention is flowing out from my body. As I listen while my eyes are shut, I have the sense that sounds are spatially located, to my left rather than to my right, but also that the sound is coming to me. I’m reminded, too, of the ancient understanding of vision, which conceived of sight as a ray emanating from the eye to make contact with the world. The significance of these subtle shifts in how we perceive the world and how media relate to perception should not be underestimated.]

There are several ways of thinking about where this attention that fixes on us as its object might originate. We can consider, for example, how we become an object of attention for large, impersonal entities like the state or a corporation. Or we can contemplate how we become the object of attention for another person—legibility in the former case and the gaze in the latter. There are any number of other possibilities and variations within them, but given my exchange with Mayert I found myself considering what happens when a machine pays attention to us. By “machine” in this case, I mean any of the various assemblages of devices, sensors, and programs through which data is gathered about us and interpretations are extrapolated from that data, interpretations which purport to reveal something about us that we ourselves may not otherwise recognize or wish to disclose.

I am, to be honest, hesitant to say that the machine (or program or app, etc.) pays attention to us or, much less, attends to us.1 I suppose it is better to say that the machine mediates the attention of others. But there is something about the nature of that mediation that transforms the experience of being the object of another’s attention to such a degree that it may be inadequate to speak merely of the attention of another. By comparison, if I discover that someone is using a pair of binoculars to watch me at a distance, I would still say, with some unease to be sure, that it is the person and not the binoculars that is attending to me, although of course their gaze is mediated by the binoculars. If I’m being watched on a recording of CCTV footage, even though someone is attending to me asynchronously through the mediation of the camera, I’d still say that it is the person who is paying attention to me, although I might hesitate to say that it is me they are paying attention to.2

However, I’m less confident of putting it quite that way when, say, data about me is being captured, interpreted, and filtered to another who attends to me through that data and its interpretation. It does seem as if the primary work of attention, so to speak, is done not by the person but the machine, and this qualitatively changes the experience of being noted and attended to. Perhaps one way to say this is that when we are attended to by (or through) a machine we too readily become merely an object of analysis stripped of depth and agency, whereas when we are attended to more directly, although not necessarily in unmediated fashion, it may be harder—but not impossible, of course—to be similarly objectified.

I am reminded, for example, of the unnamed protagonist of Graham Greene’s The Power and the Glory, a priest known better for his insobriety than his piety, who, while being jailed alongside one of his tormentors, thinks to himself, “When you visualized a man or woman carefully, you could always begin to feel pity … that was a quality God’s image carried with it … when you saw the lines at the corners of the eyes, the shape of the mouth, how the hair grew, it was impossible to hate.” There’s much that may discourage us from attending to another in this way, but the mediation of the machine seems to remove the possibility altogether.

I am reminded of Clive Thompson’s intriguing analysis of captcha images, that grid of images that sometimes appears when you are logging in to a site and from which you are to select squares that contain things like buses or traffic lights. Thompson set out to understand why he found captcha images “overwhelmingly depressing.” After considering several factors, here’s what he concluded:

“They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”

The uncanny and possibly depressing character of the captcha images is, in Thompson’s compelling argument, a function of being forced to see the world from a non-human perspective. I’d suggest that some analogous unease emerges when we know ourselves to be perceived or attended to by a non-human agent, something that now happens routinely. In one way or another we are the objects of attention for traffic light cameras, smart speakers, sentiment analysis tools, biometric sensors, doorbell cameras, proctoring software, on-the-job motion detectors, and algorithms used ostensibly to discern our creditworthiness, suitability for a job, or proclivity to commit a crime. The list could go on and on. We navigate a field in which we are just as likely to be scanned, analyzed, and interpreted by a machine as we are to enjoy the undisturbed attention of another human being.

Digital Impression Management

To explore these matters a bit more concretely, I’ll finally come to the subject of my exchange with Dr. Mayert, which was a study he conducted examining how some people experience the attention of a machine bearing down on them.

Mayert’s research examined how employees reasoned about systems, increasingly used in the hiring process, which promise to “create complex personality profiles from superficially innocuous individual social media profiles.” You’ll find an interview with Dr. Mayert and a link to the study, both in German, here, and you can use your online translation tool of choice if, like me, you’re not up on your German. With permission, I’ll share portions of what Mayert discussed in our email exchange.

The findings were interesting. On the one hand, Mayert found that “employees have no problem at all with companies taking a superficial look at their social media profiles to observe what is in any case only a mask in Goffman's sense.”

Erving Goffman, you may recall, was a mid-twentieth century sociologist who, in The Presentation of Self in Everyday Life, developed a dramaturgical model of human identity and social interactions. The basic idea is that we can understand social interactions by analogy to stage performance. When we’re “on stage,” we’re involved in the work of “impression management.” Which is to say that we carefully manage how we are perceived by controlling the impressions we’re giving off. (Incidentally, media theorist Joshua Meyrowitz usefully put Goffman’s work in conversation with McLuhan’s in No Sense of Place: The Impact of Electronic Media on Social Behavior, an underrated work of media theory published in 1985.)3

So the idea here is that social media platforms are Goffmanesque stages, and, after we came to terms with context collapse, we figured out how to manage the impressions given off by our profiles. Indeed, from this perspective we might say that social media just made explicit (and quantifiable) dimensions of human behavior which, hitherto, had been mostly implicit. You’d be forgiven for thinking that this picture is just a bit too tidy. In practice, impressions, like most human dynamics, cannot be perfectly managed. We always “give off” more than we imagine, for example, and others may read our performances more astutely than we suppose.

But this was not the whole story. Mayert reported that employees had a much stronger negative reaction when the systems claimed to “infer undisclosed personal information” from their carefully curated feeds. It is one thing, from their perspective, to have data used anonymously for the purpose of placing ads, for example—that is when people are “ultimately anonymous objects of the data economy”—and quite another when the systems promise to disclose something about them as a particular person, something they did not intend to reveal. Whether the systems can deliver on this promise to know us better than we would want to be known is another question, and I think we should remain skeptical of such claims. But the claim that they could do just that elicited a higher degree of discomfort among participants in the study.

The underlying logic of these attitudes uncovered by Mayert’s research is also of interest. The short version, as I understand it, goes something like this. Prospective employees have come to terms with the idea that employers will scan their profiles as part of the hiring process, so they have conducted themselves accordingly. But they are discomfited by the possibility that their digital “impression management” can be seen through to some truer level of the self. As Mayert put it, “respondents believed that they could nearly perfectly control how they were perceived by others through the design of their profiles, and this was of great importance to them.”

[Second excursus: I’m curious about whether this faith in digital impression management is a feature of the transition from televisual culture to digital culture. Impression management seems tightly correlated with the age of the image, specifically the televisual image. My theory is that social media initially absorbed those older impulses to manage the image (the literal image and our “self image”). We bring the assumptions and practices of the older media regime with us to new media, and this includes assumptions about the self and its relations. So those of us who grew up without social media brought our non-digital practices and assumptions to the use of social media. But practices and assumptions native to the new medium will eventually win out, and I think we’ve finally been getting a glimpse of this over the last few years. One of these assumptions is that the digital self is less susceptible to management; another may be that we now manage not the image but the algorithm, which mediates our audience’s experience of our performance. Or to put it another way, that our impression management is in the service of both the audience and the algorithm.]

Mayert explained, however, that there was yet another intriguing dimension to his findings:

“when they were asked about how they form their own image of others through information that can be found about them on the Net, it emerged that they superficially link together unexciting information that can be found about other persons and they themselves do roughly what is also attempted in applicant assessment through data analysis: they construct personality profiles from this information that, in terms of content, were strongly influenced by the attitudes, preferences or prejudices of the respondents.”

So, these participants seemed to think they could, to some degree, see through or beyond the careful “impression management” of others on social media, but it did not occur to them that others might do likewise with their own presentations of the self.

Mayert again: “Intended external representation and external representation perceived by others were equivalent for the respondents as long as it was about themselves.”

“This result,” he adds, “explains their aversion to more in-depth [analysis] of their profiles in social media. From the point of view of the respondents, this is tantamount to a loss of control over their external perception, which endangers exactly what is particularly important to them.”

The note of control and agency seems especially prominent and raises the question, “Who has the right to attend to us in this way?”

I think we can approach this question by noting that our techno-social milieu is increasingly optimized for surveillance, which is to say for placing each of us under the persistent gaze of machines, people, or both. Evan Selinger, among others, has long been warning us about surveillance creep, and it certainly seems to be the case that we can now be surveilled in countless ways by state actors, corporations, and fellow citizens. And, in large measure, ordinary people have been complicit in adopting and deploying seemingly innocuous nodes in the ever-expanding network of surveillance technologies. Often, these technologies promise to enhance our own ability to pay attention, but almost every technology that extends our senses and enhances our capacity to attend to the world is also an instrument through which the attention of others can flow back toward us, bidden or unbidden.

Data-driven Platonism

Hiring algorithms are but one example of a larger set of technologies which promise to disclose some deeper truth about the self or the world that would otherwise go unnoticed. Similar tools are deployed in the realms of finance, criminal justice, and health care, among others. The underlying assumption, occasionally warranted, is that analyzing copious amounts of data can disclose significant patterns or correlations which would have been missed without these tools. As I noted a few years back, we can think about this assumption by analogy to Plato’s allegory of the cave. We are, in this case, led out of the cave by data analysis, which reveals truths that are inaccessible not only to the senses but even to unaided human reason. I remain fascinated by the idea that we’ve created tools designed to seek out realities that exist only as putative objects of quantification and prediction. They exist, that is, only in the sense that someone designed a technology to discover them, and the search amounts to a pursuit of immanentized Platonic forms.

With regard to the self, I wonder whether the participants in Mayert’s study had any clear notion of what might be discovered about them. In other words, in their online impression management, were they consciously suppressing or obscuring particular aspects of their personality or activities, which they now feared the machine would disclose, or was their unease itself a product of the purported capacities of the technology? Were they uneasy because they came to suspect that the machine would disclose something about them which they themselves did not know? Or, alternatively, was their unease grounded in the reasonable assumption that they would have no recourse should the technology disqualify them based on opaque automated judgments?

I was reminded of ImageNet Roulette, created by Kate Crawford and Trevor Paglen in 2019. The app was trained on the ImageNet database’s labels for classifying persons and was intended to demonstrate the limits of facial recognition software. ImageNet Roulette invited you to submit a picture to see how you would be classified by the app. Many users found that they were classified with an array of mistaken and even offensive labels. As Crawford noted in a subsequent report,

“Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around 'bias' in AI systems misses the mark: there is no 'neutral,' 'natural,' or 'apolitical' vantage point that training data can be built upon. There is no easy technical 'fix' by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform.”

At the time, I was intrigued by another line of thought. I wondered what those who were playing with the app and posting their results might have been feeling about the prospects of getting labeled by the machine. My reflections, which I wrote about briefly in the newsletter, were influenced by the 20th century diagnostician of the self, Walker Percy. Basically, I wondered if users harbored any implicit hopes or fears in getting labeled by the machine. It is, of course, possible and perhaps likely that users brought no such expectations to the experience, but maybe some found themselves unexpectedly curious about how they would be categorized. Would we hope that the tool would validate our sense of identity, suggesting that we craved some validation of our own self-appraisals? Would we hope that the result would be obviously mistaken, suggesting that the self was not so simple that a machine could discern its essence? Or would we hope that it revealed something about us that had escaped our notice, suggesting that we’ve remained, as Augustine once put it, a question to ourselves?

Efforts to size up the human through the gaze of the machine trade on the currency of a vital truth: we look for ourselves in the gaze of the other. When someone gives us their attention, or, better, when someone attends to us, they bestow upon us a gift. As Simone Weil put it, “Attention is the rarest and purest form of generosity.”

When we consider how attention flows out from us, we are considering, among other things, what constitutes our bond to the world. When we consider how attention bears down on us, we are considering, among other things, what constitutes the self.

One of the assumptions I bring to my writing about attention is that we desire it and we’re right to do so. To receive no one’s attention would be a kind of death. There are, of course, disordered ways of seeking attention, but we need the attention of the other even if only to know who we are. This is why I recently wrote that “the problem of distraction can just as well be framed as a problem of loneliness.” Digital media environments hijack our desire to be known in order to fuel the attention economy. And it’s in this light that I think it may be helpful to reconsider much of what we’ve recently glossed as surveillance capitalism through the frame of attention, but not just the attention we give but that which we receive.

From this perspective, one striking feature of our techno-social milieu is that it has become increasingly difficult both to receive the attention of our fellow human beings and to refuse the attention of the machines. The exchange of one for the other is, in certain cases, especially disheartening, as, for example, when surveillance becomes, in Alan Jacobs’s memorable phrase, the normative form of care. And, as I suggested earlier, the attention frame also has the advantage of capturing the uncanny dimensions of being subject to the nonhuman gaze and rendered a quantifiable object of analysis, not so much seen as seen through, appraised without being known.

In a rather well-known poem from 1967, Richard Brautigan wrote hopefully of a cybernetic paradise in which we, and non-human animals, would be “watched over by machines of loving grace.” He got the watching over part right, but there are no machines of loving grace. To be fair, it is also a quality too few of us tend to exhibit in our attention to others.


1. I’m generally uncomfortable with anthropomorphizing what machines do because the direction of the metaphor can often change so that rather than understanding what machines do by analogy to human capacities, we begin to understand the human by analogy to the technological.

2. I’m tempted to say that when someone is looking at a picture of me or a video clip, they are not really paying attention to me. They are paying attention to something I have done that is captured by a visual representation of me, but they are not then attending to me. It’s the difference between looking at a picture of a friend and saying not “this is my friend” but “this is a picture of my friend.” The documentary trace is and is not me.

3. In case you’re interested, I used Goffman’s work in my own effort to understand our online experience for a piece in Real Life Magazine in 2019.
