A few days ago, a handful of similar stories or anecdotes about technology came to my attention. While they came from different sectors and were of varying degrees of seriousness, they shared a common characteristic. In each case, there was either an expressed bewilderment or admission of obliviousness about the possibility that a given technology would be put to destructive or nefarious purposes. Naturally, I tweeted about it … like one does.
I subsequently clarified that I was not subtweeting anyone in particular, just everything in general. Of course, naiveté, hubris, and recklessness don’t quite cover all the possibilities—nor are they mutually exclusive.
In response, someone noted that “people find it hard to ‘think like an *-hole’, in
@mathbabedotorg's phrase, because most aren’t.” That handle belongs to Cathy O’Neil, best known for her 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
There’s something to this, of course, and, as I mentioned in my reply, I truly do appreciate the generosity of this sentiment. I suggested that the witness of history is helpful on this score, correcting and informing our own limited perspectives. But I was also reminded of a set of questions that I had put together back in 2016 in a moment of similar frustration.
The occasion then was the following observation from Om Malik:
“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”
Malik went on to write that “it is time to add an emotional and moral dimension to products,” by which he seems to have meant that tech companies should use data responsibly and make their terms of service more transparent. In my response at the time, I took the opportunity to suggest that we needn’t add an emotional and moral dimension to tech; it was already there. The only question concerned its nature. As Langdon Winner had famously inquired, “Do artifacts have politics?” and answered in the affirmative, I likewise argued that artifacts have ethics. I then went on to produce a set of 41 questions that I drafted with a view to helping us draw out the moral or ethical implications of our tools. The post proved popular at the time, and I received a few notes from developers and programmers who had found the questions useful enough to print out and post in their workspaces.
This was all before the subsequent boom in “tech ethics,” and, frankly, while my concerns obviously overlap to some degree with the most vocal and popular representatives of that movement, I’ve generally come at the matter from a different place and have expressed my own reservations with the shape more recent tech ethics advocacy has taken. Nonetheless, I have defended the need to think about the moral dimensions of technology against the notion that all that matters are the underlying dynamics of political economy (e.g., here and here).
I won’t cover that ground again, but I did think it might be worthwhile to repost the questions I drafted then. It’s been more than six years since I first posted them, and, while some of you reading this have been following along since then, most of you picked up on my work in just the last couple of years. And, recalling where we began, trying to think like a malevolent actor might yield some useful insights, but I’d say that we probably need a better way to prompt our thinking about technology’s moral dimensions. Besides, worst-case malevolent uses are not the only morally significant aspects of our technology worth our consideration, as I hope some of these questions will make clear.
This is not, of course, an exhaustive set of questions, nor do I claim any unique profundity for them. I do hope, however, that they are useful, wherever we happen to find ourselves in relation to technological artifacts and systems. At one point, I had considered doing something a bit more with these, possibly expanding on each briefly to explain the underlying logic and providing some concrete illustrative examples or cases. Who knows, maybe that would be a good occasional series for the newsletter. Feel free to let me know what you think about that.
Anyway, without further ado, here they are:
What sort of person will the use of this technology make of me?
What habits will the use of this technology instill?
How will the use of this technology affect my experience of time?
How will the use of this technology affect my experience of place?
How will the use of this technology affect how I relate to other people?
How will the use of this technology affect how I relate to the world around me?
What practices will the use of this technology cultivate?
What practices will the use of this technology displace?
What will the use of this technology encourage me to notice?
What will the use of this technology encourage me to ignore?
What was required of other human beings so that I might be able to use this technology?
What was required of other creatures so that I might be able to use this technology?
What was required of the earth so that I might be able to use this technology?
Does the use of this technology bring me joy? [N.B. This was years before I even heard of Marie Kondo!]
Does the use of this technology arouse anxiety?
How does this technology empower me? At whose expense?
What feelings does the use of this technology generate in me toward others?
Can I imagine living without this technology? Why, or why not?
How does this technology encourage me to allocate my time?
Could the resources used to acquire and use this technology be better deployed?
Does this technology automate or outsource labor or responsibilities that are morally essential?
What desires does the use of this technology generate?
What desires does the use of this technology dissipate?
What possibilities for action does this technology present? Is it good that these actions are now possible?
What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
How does the use of this technology shape my vision of a good life?
What limits does the use of this technology impose upon me?
What limits does my use of this technology impose upon others?
What does my use of this technology require of others who would (or must) interact with me?
What assumptions about the world does the use of this technology tacitly encourage?
What knowledge has the use of this technology disclosed to me about myself?
What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
What are the potential harms to myself, others, or the world that might result from my use of this technology?
Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
Does my use of this technology encourage me to view others as a means to an end?
Does using this technology require me to think more or less?
What would the world be like if everyone used this technology exactly as I use it?
What risks will my use of this technology entail for others? Have they consented?
Can the consequences of my use of this technology be undone? Can I live with those consequences?
Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?