Author Archives: Guest Blogger

Beyond the Reveal: A Metaphor’s Effect


In my last post, I described how the black box emerges historically with the extrication of (at least some) laborers from the machines of industrial labor. The cost of this move is that the laborer, now outside the machine as an operator, must herself operate as a black box. The interface between the laborer and machine becomes central to this new relationship, especially as managers and technologists focus on constantly reconfiguring the interactions between and among human-machine pairs.

In recounting this history of a metaphor, I aim toward a critique of how black box metaphors are used today to describe opaque technological processes. And I don’t mean to suggest that any use of a black box metaphor inadvertently invokes a whole history of labor and interface. But I do think we can surmise from this history a dominant narrative that draws heavily from the black box metaphor:

  1. As an “infrastructural inversion,” the black box metaphor creates the possibility, for some, of imagining themselves as outside a system that formerly may not have been visible at all.
  2. Where and when this happens, interfaces emerge and gain prominence as a point of mediation with the formerly invisible system.
  3. Design for interaction between the user and the “black boxed” process tends to imagine the human mind as another form of black box, emphasizing cognitive over manual processes.
  4. The new system composed of this user and her machine then starts the process anew—the user/worker has been incorporated into a new system that she may not actually see unless she names a new “black box.”
  5. This narrative will also depend on the exclusion of some who need to “stay behind” and keep the system going within the “old” forms of labor.

To describe a process as a black box thus potentially sets in motion a whole series of implications for sensation, knowledge, labor, and social organization.

Let’s look at this, for example, in light of new attention brought to the role of algorithms in Facebook use (an effort in which I have been involved as a scholar). How does describing the Facebook algorithm as a black box set us on a particular narrative of analysis and research?

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newfound potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

But maybe this grasp toward agency is also the beginning of a new system. The black box metaphor suggests that such providers will also need to design for the user who tweaks. (It may even be that designing for the tweaker is more profitable than designing a “perfect feed.”) As in previous ergonomic problems, this process will begin to imagine and construct a particular kind of mind, a particular kind of body, a particular kind of user. Tweaking to account for black-boxed algorithmic processes could become a new form of labor, one that some might then inevitably describe as its own black box, and one to escape.

Maybe, by structuring our engagement with the experience of Facebook’s opaque processes through the black box metaphor, we’ve set ourselves up to construct a new black box, and ignored the ways in which our relations to others, within and without the present system, have been changed by our newfound awareness.

I’m struck here, for example, by how well the narrative of the black box I’ve described fits a number of stories we’ve lived and heard regarding privacy and networked media. Whether it’s the Snowden revelations or Facebook’s unauthorized emotion study, the story often plays out the same way for many of us. We realize or remember anew just how much work we’re providing some entity within a current system, and then proceed either to alter our use patterns or to abstain altogether from that system in order to remain outside that work. Debates ensue over who is complicit and who is not, and with the exception of those working in a more organized fashion to pursue prosecution or new laws, most of us are stuck in an “opt-in or opt-out” scenario that never goes anywhere.

It’s likely only a matter of time before the market for more subtle responses than “opt-in or opt-out” is met with a new set of black box systems. One can imagine, for example, a range of services: free email if you submit to full surveillance and data-trolling, modestly-priced email if you submit your data for use via an anonymizer, or premium email at high costs that removes you from all data-harvesting.

Perhaps, even as we remain justifiably critical of the unseen and unknown software processes that govern and regulate a growing number of shared spaces and subjectivities, we might search for another way to live with these processes than hitting the escape button and entering a higher-level routine. More on that in my next posts.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.

. . . . . . . .



Beyond the Reveal: Living with Black Boxes


Part One: Histories

Amidst growing attention and calls to action on the role of algorithms in our everyday lives, one idea recurs: “opening the black box.” In such analyses, the “black box” describes a process that happens in secret, for which we only know the inputs and outputs but not the steps that take place between them. How might this metaphor be structuring our approach to thinking about algorithms and their place in our lives, long before we get to the work of accounting for the social and political work of algorithmic systems?
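For readers who think in code, the relation can be made concrete with a minimal sketch in Python. Everything here is hypothetical and meant only to illustrate the metaphor, not any actual system:

    # A "black box" as encountered from the outside: the inputs and outputs are
    # observable, but the steps in between are hidden from the person using it.

    def black_box(reading: float) -> str:
        """Hypothetical opaque process: callers see what goes in and what
        comes out, never the rules applied in between."""
        # ... the hidden steps; the threshold below is invented for illustration ...
        if reading > 42.0:
            return "flagged"
        return "passed"

    # From the outside, only this exchange is visible:
    for value in [3.0, 57.5]:
        print(value, "->", black_box(value))

To call something a black box is to adopt exactly this vantage point: the exchange at the boundary is all we claim to see.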

In this first of four posts, I’ll begin an answer to this question by looking at the history of the “black box” as a way of modeling cognitive or computational processes. In the second post, I’ll offer some cautionary words about reliance on this metaphor in the important work of ensuring just systems. Finally, in the last two posts I’ll look to some alternatives to black-box-opening in our relationships to opaque technological processes.

The black box metaphor began to acquire its shape during changes in labor that took place after World War II. Whereas managers before the war had largely treated work as a series of learned behaviors, the designers of work and work environments after the war began to think less about suiting the laborer to the work, and more about suiting the work to the laborer.

More than a mere Taylorist repeater of actions, the new ideal worker of post-war Human Factors research not only acts but perceives, acting according to learned evaluative routines that correlate sensation to action. The ideal post-war laborer is not a person of a particular physical build, conditioned to perform particular motions, but rather a universalized collection of possible movements, curated and selected according to mathematical principles. Human Factors research turned the human laborer into a control for a system, a proper medium for the transfer and transformation of input.

Key to this new approach was the influence of information theory on approaches to both computing and psychology. In computing, the understanding of signals as information paved the way for a mathematics of binary code, in which the course of electrons through physical gates and switches could translate into algorithms and mathematical functions. In psychology, those who had grown weary of behaviorism’s stimulus-response approaches to explaining and modifying human action saw in Claude Shannon’s approach echoes of the structure of the human brain. These early cognitive scientists saw in thought a kind of algorithm performing consistent functions on ever-changing sense data, zipping through the brain’s neural pathways the way electrons travel through the copper of a computer’s circuits.

And so a new understanding of the operator’s actions emerged alongside a new understanding of a computer’s routines. The first software emerged at the same time that psychologists began to analyze human thought and memory as a collection of mathematical functions performed on sense data. In other words, the black box as we know it emerged as a pair of metaphors: one to describe the computational machine, and one to describe the human mind.

Before these developments, systems of manufacture and control were designed to include the human body as a “control” in the operational sense. The control in any function is a limiter, providing brackets to the acceptable inputs and possible outputs. If a laborer slows down his or her work, the entire process slows. In the new post-Taylorist work flow, in contrast, the control is performed by a computational process rather than a human, embodied one. The new computers allowed for the programming of internal black boxes within the machine itself. Information from multiple sensors, as it coursed through these machines, would be analyzed and checked for deviation. The results of such analyses would set certain mechanical processes in motion in order to produce a desired end.

Although the worker has been replaced by an algorithm as the system control, she or he is not missing from the scene entirely. Rather, the human operator now performs the function of a control for the control. The machine provides the human operator with indications of the software-based controller’s proper functioning. Deviations from designated functions trigger new action from the human operator, according to more advanced algorithms than those required of previous industrial operators. This new human operator must synthesize multiple forms of data—visual, aural, even symbolic data—and then decide on a proper course of action, of input to the machine, according to a trained set of decision-making criteria and standards.
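The division of labor described here can be put in schematic code. The following Python sketch is mine, with invented setpoints, thresholds, and readings rather than any historical system:

    # A sketch of the post-war arrangement: the software controller keeps the
    # process within bounds, while the human operator serves as a control for
    # the control, watching for deviations the machine flags.

    SETPOINT = 100.0        # desired process value (hypothetical units)
    TOLERANCE = 5.0         # deviation the software controller corrects on its own
    ALARM_THRESHOLD = 15.0  # deviation that is escalated to the human operator

    def machine_controller(reading):
        """The computational control: corrects small deviations automatically."""
        deviation = reading - SETPOINT
        if abs(deviation) <= TOLERANCE:
            return "steady", 0.0
        if abs(deviation) <= ALARM_THRESHOLD:
            return "correcting", -deviation  # a simple proportional correction
        return "alarm", 0.0                  # beyond its brief: escalate

    def human_operator(status):
        """The control for the control: reads the machine's indications and
        decides on a course of action according to trained criteria."""
        return "operator intervenes" if status == "alarm" else "operator monitors"

    for reading in [102.0, 93.0, 120.0]:
        status, correction = machine_controller(reading)
        print(reading, status, correction, human_operator(status))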

Though operating at more of a distance from the phenomena of mechanical system function, this new, error-detecting human operator plays no less critical a role. His or her mental routines must be just as carefully scripted and trained as the Taylorist laborer’s physical actions, and often via an emerging understanding of the brain as a computer.

The new operator is thus less a part of the system even as he or she is made more in the image of that system. Formerly one organ within a mechanical body, the operator is now modeled as a discrete body, tethered to another, mechanical body and modeled after that body, for the purposes of safe and consistent system flow. The machine and the operator mirror one another, with the interface as their crucial site of division, the glass of reflection and action.

These changes also effect sociality through the creation of a new entity that includes all agents. This new entity—the organization—invites design at a complex level that accounts for multiple machinic and human actors. Where each machine used to come with an operator as controller, the two treated as a single entity, the post-war machine comes with an operator as agent, who is necessary to the proper functioning of the machine. But the human operator is separate from the machine. For large-scale projects, this doubling results in increased complexity, which the organization approaches as yet another information-processing problem.

The organization, this plurality of entities, is coincident with the emergence of the interface. Machines and operators without true interfaces—as in Taylorist scenarios—are not collective in that they are not social. They are merely aggregate. Thus some of the biggest moves in computing research toward the latter half of the twentieth century were those that simultaneously addressed the interface between one operator and her machine, and the structure of all machine-human pairs, organized together into one system—one black box process.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.

. . . . . . . .


79 Theses on Technology:
Our Detachment From Technology


When I read Alan Jacobs’s 79 theses, three jumped out at me:

55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”

56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok

These theses suggest a single issue: We have become increasingly detached from our software, both in how it works and how it is built.

The algorithms involved in much of our software are each designed to do something. When an algorithm was a single snippet of code or a tiny computer program, it could be read, understood, debugged, and even improved. Similarly, computing once involved regular interactions at the level of the command line. There was little distance between the code and the user.
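As a point of reference (the example is mine, not one the essay gives), this is the scale of algorithm the paragraph above has in mind: small enough to be read, understood, debugged, and improved in one sitting.

    def binary_search(items, target):
        """Classic binary search over a sorted list: every step is open to inspection."""
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1  # target not present

    print(binary_search([2, 5, 8, 13, 21], 13))  # prints 3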

Since the early era of command lines and prompts, software has become increasingly complex. It has also become increasingly shielded from the user. These are not necessarily bad changes. More sophisticated technology is more powerful and has greater functionality; giving it a simpler face prevents it from being overwhelming to use. We don’t need to enter huge numbers of commands or parameters to get something to work. We can just swipe our fingers and our intentions are intuited.

Thanks to these changes, however, each of us has become more distant from the inner workings of our machines. I’ve written elsewhere about how we must strive to become closer to our machines and bridge the gap between expert and user. This is difficult in our era of iPads and graphical interfaces, and often it doesn’t even seem that important. However, since these technologies affect so many parts of our lives, I think we need the possibility of closeness: We need gateways to understanding our machines better. In the absence of this proactive decision, our responses to our machines will tend to be driven by fear, veneration, and disdain.

Our detachment from how algorithms and software operate has caused a gross misunderstanding of how technology works. We find it to be far more inscrutable than it really is, forgetting that all technology was designed by fallible people. We respond to this inscrutable power by imputing a beauty and sophistication that is not there. (For more on this, see Ian Bogost and his observation that many people use the word “algorithm” in an almost religious manner.)

Veneration of the algorithm as something inordinately impressive is detrimental to our ability to engage with technology. Software is often incredibly kludgy and chaotic, far from worthy of worship. This response is not so far from fearing technology just because we can’t understand it. Fear and veneration are closely related, as both make algorithms out to be more than they are. (This is the subject of Jacobs’s Theses 55 and 56, though stated in somewhat more extreme forms than I would put them.)

But what about disdain? How does this work? When a device suggests the wrong word or phrase in a text or sends delivery trucks on seemingly counterintuitive routes, we disdain the device and its algorithms. Their outputs seem so self-evidently wrong that we are often filled with a sense of superiority, mocking these algorithms’ shortcomings or deeming them superfluous.

Sometimes, our expertise does fall short and complex logic can seem like stupidity. But David Auerbach, writing in Nautilus, offered this wonderful story that shows that something else might be going on:

Deep Blue programmer Feng-Hsiung Hsu writes in his book Behind Deep Blue that during the match, outside analysts were divided over a mysterious move made by the program, thinking it either weak or obliquely strategic. Eventually, the programmers discovered that the move was simply the result of a bug that had caused the computer not to choose what it had actually calculated to be the best move—something that could have appeared as random play.

In this case, ignorance prevented observers from understanding what was going on.

Is complex logic indistinguishable from stupidity? I don’t think so. Our response to a process we don’t understand may be closer to the nervous laughter of ignorance than a feeling of superiority. We call these algorithms stupid not because we recognize some authentic algorithmic inadequacy in them. We call them stupid because to admit a certain humility in the face of their increasing complexity would be a display of weakness.

When I took an artificial intelligence course in college and learned the algorithms for programs that play board games or construct plans, I didn’t feel superior—I felt a kind of sadness. I had seen behind the screen and found these processes sophisticated, but fairly mundane. Most complex technology is this way. But when each of us encounters a surprising and apparently stupid output, if we don’t understand its origins, it is a lot easier to mock the system than to feel humbled, or even disappointed, at discovering its true structure.

These responses to technology are not the everyday user’s fault. Many of the creators of these technologies want the user to attribute a certain power to these algorithms and so have protected them behind layers of complexity. Ultimately, I think the most appropriate response is intellectual humility in the face of technology from which we have become increasingly detached. Only then can we engage with algorithms and try to see, even if only for a moment, what they are actually doing.

Samuel Arbesman is a Senior Adjunct Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado and a Visiting Scholar in Philosophy at the University of Kansas. Follow him on Twitter at @arbesman.

. . . . . . . .


79 Theses on Technology:
The Spectrum of Attention

“We should evaluate our investments of attention,” Jacobs urges in Thesis #7, “at least as carefully and critically as our investments of money.” But we will be in a better position to undertake such an evaluation when we understand exactly what we are talking about when we talk about attention, which is a word that—despite its importance—is never defined by Jacobs in the 79 Theses.

It’s easy to assume that “attention” is experienced in the same way by everyone. But as Matthew Crawford’s recent work has argued, attention has been imagined, and thus experienced, differently over time. Attention names various states or activities that we might do well to distinguish.

We can define attention first as “intently focusing on one object or task.” Reading a long, demanding text is one example of this kind of attention. This sort of attention is the subject of Nicholas Carr’s Atlantic article, “Is Google Making Us Stupid?”: “Immersing myself in a book or a lengthy article used to be easy,” Carr notes, but now “my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text.”

I suspect many of us share Carr’s experience. Not unlike the Apostle Paul, we lament, “What I want to pay attention to, I cannot. What I do not want to pay attention to, to that I do.” This failure to direct our attention presents itself as a failure of the will, and it assumes at some level that I am, as an autonomous subject, responsible for this failure (for more on this point, I suggest Chad Wellmon’s exchange with Jacobs).

But sometimes we talk about attention in a slightly different way; we speak of it as openness to the world, without any particular focal point. Sometimes the language of presence is used to articulate this kind of attention: Are we living in the moment? It is also the sort of attention that is advocated by proponents of “mindfulness,” to which Jacobs devoted two theses:

11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.

13. The only mindfulness worth cultivating will be teleological through and through.

On the surface, the two ways of talking about attention that I’ve outlined contradict each other. Directed attention is inconceivable without an object (mental or material) to sustain it, but no object would appear apart from an already existing form of attention.

Much depends on what exactly is meant by “mindfulness,” but I think we might be able to preserve a valuable distinction while still heeding Jacobs’s critique. If “mindfulness” functions, for instance, as a clearing of mental space in order to make directed attention possible, then the telos of mindfulness would be directed attention itself.

Attention as Dance

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that can be solicitous of its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Maurice Merleau-Ponty can help us here. In Phenomenology of Perception, Merleau-Ponty discusses the shortcomings of both empiricist and intellectualist (rationalist) approaches to attention and makes the following observation: “Empiricism does not see that we need to know what we are looking for, otherwise we would not go looking for it; intellectualism does not see that we need to be ignorant of what we are looking for, or, again, we would not go looking for it.”

This simultaneous knowing and not-knowing seems to me another way of talking about attention as both openness to the world and as a directed work of the mind. It is a work of both receptivity, of perceiving the world as a gift, and care, of willfully and lovingly attending to particular aspects of the world. And, as Merleau-Ponty goes on to argue, attention is also a form of embodied perception that construes the world as much as it registers it. In this sense, our attention is never merely picking out items in the world (see Crawford on this idea); rather, attention is always interpreting the world in keeping with the desires and demands of an embodied being at a particular moment.

To a hiker on a long walk, for example, a stone is a thing to step around and is registered as such without conscious mental effort. It is attended to by the body in motion more than by the cogitating mind. To a geologist on a walk, on the other hand, a stone may become an object of urgent intellectual inquiry.

Both of these instances of perceiving-as result from subjective prior experience. The expert hiker moves along at a steady pace making countless adjustments and course corrections as a matter of bodily habit. The geologist, likewise, has trained his perception through hours of intellectual labor. In either situation, a novice might fail to hike as adroitly or notice the geologically interesting stone. Merleau-Ponty calls this repertoire of possible perceptions the “intentional arc,” which subtends “the life of consciousness—cognitive life, the life of desire or perceptual life.”

This example suggests two poles of attention, bodily and mental. But these are not mutually exclusive binaries. Rather, they constitute a spectrum of possibilities from the dominance of conscious mental activity on one end to the other end where non-conscious bodily activity is paramount. Consider the person lost deep in thought or a daydream. This person is deeply attentive, but not to his surroundings or to sensory information. Such a person would have to be called back to an awareness of their body and their surroundings.

By contrast, we may imagine the athlete, musician, or dancer who is, to borrow Mihály Csíkszentmihályi’s formulation, “in the flow.” Like the thinker or daydreamer, they, too, are in a state of deep attention, but in a different mode. Conscious thought would, in fact, disrupt their state of attention. We may complicate this picture even further by observing how the hiker “in the flow” might be lost in thought and remain an expert navigator of the terrain.

Attention Mediated Through Technology

But where does technology fit into our model? That is, after all, where we began and where Jacobs directs our attention. Perhaps there’s another spectrum intersecting with the one running from the bodily to the mental: one that runs from mediated to unmediated forms of attention.

Consider our hiker one more time. Imagine that she is now equipped with a walking stick. Aspects of her attending to the world through which she makes her way are now mediated by the walking stick. Of course, the walking stick is an apt tool for this particular context and extends the hiker’s perceptions in useful ways. (It would be very different, for instance, if the hiker were walking about with a garden hose.)

Imagine, however, giving the hiker a different tool: a smartphone. The smartphone mediates perception as well. In the act of taking a picture, for example, the landscape is seen through the lens. But a subtler act of mediation is at work even when the smartphone’s camera is not in use. Smartphone in hand, the hiker might now perceive the world as a field of possible images. This may, for example, direct attention up from the path toward the horizon, causing even our experienced hiker to stumble.

We may be tempted to say that the hiker is no longer paying attention, that the device has distracted her. But this is, at best, only partly true. The hiker is still paying attention. But her attention is of a very different sort than the “in the flow” attention of a hiker on the move. Without the smartphone in hand, the hiker might not stumble—but she might not notice a particularly striking vista either.

So along one axis, we range from bodily to mental forms of attention. Along the other, we range from mediated to unmediated forms of attention. (Granted that our attention is never, strictly speaking, absolutely unmediated.) This yields a range of possibilities among the following categories: “bodily mediated,” “bodily unmediated,” “mental mediated,” and “mental unmediated.” (Consider the following as ideal types in each case: the musician, the dancer, the scientist, and the philosopher.)

[Figure: a two-axis diagram of attention, running from bodily to mental along one axis and from mediated to unmediated along the other]

How does conceiving of attention in this way help us?

This schema yields a series of questions we may ask as we seek to evaluate our investments of attention. What kind of attention is required in this context? To what aspects of the world does a device invite me to pay attention? Does a device or tool encourage mental forms of attention when the context is better suited to bodily forms of attention? Is a device or tool encouraging me to direct my attention, when attentive openness would be more useful? What device or tool would best help me deploy the kind of attention required by the task before me?

The result of this exploration has been to break up the opposition of device to attention, an opposition that, I should say, I don’t think Jacobs himself advocates. Instead, my hope is to expand our conceptual tool kit so that we might make better judgments regarding our devices and our attention to the world.

L.M. Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing.

Photo: Heinrich Vogeler, Sehnsucht (Träumerei), c.1900, via Wikimedia Commons, public domain

. . . . . . . .


79 Theses on Technology:
The Hand That Holds the Smartphone


Alan Jacobs poses a few questions to his readers: “What must I pay attention to?” “What may I pay attention to?” and “What must I refuse attention to?” These questions direct readers to understand their own positions in the world in terms of attention. They encourage reflection. Instead of directing the reader’s focus outward to ponder general, more abstract relations between “technology” and “society,” they return us to our own bodies and suggest that the hand that swipes the iPhone, your hand, deserves attention.

Jacobs formulates only two other theses as questions (#9, #60), and both are posed from a seemingly universal standpoint without a social location or even an implied interlocutor. However, some of Jacobs’s concerns about the current unhappy union with our attention-demanding devices seem to emerge from a specific social location. While these concerns may ring true for a large segment of higher-income, well-educated adults, who do in fact own smartphones in greater numbers than the rest of the US population, they may fall short of describing the experiences of many other users.

For example, #70, “The always-connected forget the pleasures of disconnection, then become impervious to them.” Who are the “always-connected”? The McDonald’s worker whose algorithmically determined shifts are apt to change with less than a half-day’s notice? Or one of the 10% of Americans who rely on their smartphones to access the Internet to do their banking, look for a job, and let their child do homework?

People who rely on their smartphones for Internet access are more likely to be young, low-income, and non-white, the same population with some of the highest levels of unemployment. With the migration of most job-seeking to online databases and applications, all members of the “always-connected” might not experience the “pleasures of disconnection” in the same way as the middle class knowledge worker with high-speed Internet access at home and at work. In reality, the “always-connected” is a large and diverse group, and is quickly becoming even larger and even more diverse.

Your hand isn’t the only hand that comes in contact with your phone, of course; it is only the last in a long chain of designers, manufacturing workers, and marketing gurus. Jacobs points this out in the case of algorithms (Thesis #54, “The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.”), but it bears extending this line of thinking to other theses about the ideologies that run through contemporary discourse on technology.

Consider Thesis #41, “The agency that in the 1970s philosophers and theorists ascribed to language is now being ascribed to technology” and #44, “We try to give power to our idols so as to be absolved of the responsibilities of human agency”—who are the agents in these theses? Who is doing the ascribing? Who seeks absolution?

Kevin Kelly, the author Jacobs points to as a prime example of techno-enthusiasm, was a founding editor of Wired and has spent a lot of time talking to technology executives over the past several decades. Kelly’s ideas have often been translated into marketing strategies that soon enter into the public consciousness—like the sumptuously edited commercial for the Apple Watch in which the watch operates entirely of its own accord, no human required!—where they shape our desires and understandings of our relationships with our devices.

It’s through the image of a series of hands grasping, texting, and swiping away that my attention is drawn to the people at the other end of the technologies that shape our lives. As Jacobs points out, technology doesn’t want anything, “we want, with technology as our instrument,” but the question of who we are isn’t just idle sociological speculation. It’s vital to imagining alternative arrangements of both people and technology, as well as more humane practices that may benefit us all.

Julia Ticona is a doctoral candidate in the sociology department at the University of Virginia and a dissertation fellow at the Institute for Advanced Studies in Culture. Her work focuses on the cultures of technology and everyday life.

Photo: Anatomical study of hands, public domain.

 

. . . . . . . .


79 Theses on Technology:
Jacobs Responds to O’Gorman


 

Ned O’Gorman, in his response to my 79 theses, writes:

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.

We’re in interesting and difficult territory here, because what O’Gorman thinks obviously true I think obviously false. In fact, it seems impossible to me that O’Gorman believes what he writes here.

Take for instance the case of the button that “wants to be pushed.” Clearly, O’Gorman does not believe that the button sits there anxiously, as a finger hovers over it, thinking “oh please push me please please please.” Clearly, he knows that the button is merely a piece of plastic that when depressed activates an electrical current that passes through wires on its way to detonating a weapon. Clearly, he knows that an identical button—buttons are, after all, to adopt a phrase from the poet Les Murray, the kind of thing that comes in kinds—might be used to start a toy car. So, what can he mean when he says that the button “wants”?

I am open to correction, but I think he must mean something like this: “That button is designed in such a way—via its physical conformation and its emplacement in contexts of use—that it seems to be asking or demanding to be used in a very specific way.” If that’s what he means, then I fully agree. But to call that “wanting” does gross violence to the term, and obscures the fact that other human beings designed and built that button and placed it in that particular context. It is the desires, the wants, of those “will-bearing” human beings, that have made the button so eminently pushable.

(I will probably want to say something later about the peculiar ontological status of books and texts, but for now just this: Even if I were to say that texts don’t want, I wouldn’t thereby be “divesting” them of “meaningfulness,” as O’Gorman claims. That’s a colossal non sequitur.)

I believe I understand why O’Gorman wants to make this argument: The phrases “philosophical voluntarism” and “technological instrumentalism” are the key ones. I assume that by invoking these phrases O’Gorman means to reject the idea that human beings stand in a position of absolute freedom, simply choosing whatever “instruments” seem useful to them for their given project. He wants to avoid the disasters we land ourselves in when we say that Facebook, or the internal combustion engine, or the personal computer, or nuclear power, is “just a tool” and that “what matters is how you use it.” And O’Gorman is right to want to critique this position as both naïve and destructive.

But he is wrong if he thinks that this position is entailed in any way by my theses; and even more wrong to think that this position can be effectively combated by saying that technologies “want.” Once you start to think of technologies as having desires of their own you are well on the way to the Borg Complex: We all instinctively understand that it is precisely because tools don’t want anything that they cannot be reasoned with or argued with. And we can become easily intimidated by the sheer scale of technological production in our era. Eventually, we can end up talking even about what algorithms do as though algorithms aren’t written by humans.

I trust O’Gorman would agree with me that neither pure voluntarism nor purely deterministic defeatism is an adequate response to the challenges posed by our current technocratic regime—or the opportunities offered by human creativity, the creativity that makes technology intrinsic to human personhood. It seems that he thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.

. . . . . . . .


79 Theses on Technology: Jacobs Responds to Wellmon

Image: La transverbération de Sainte Thérèse, 1672, by Josefa de Óbidos, Eglise (Igreja Matriz) de Cascais, via Wikimedia Commons, public domain

Let me zero in on what I think is the key paragraph in my friend Chad Wellmon’s response to some of my theses:

But this image of a sovereign self governing an internal economy of attention is a poor description of other experiences of the world and ourselves. In addition, it levies an impossible burden of self mastery. A distributive model of attention cuts us off, as Matt Crawford puts it, from the world “beyond [our] head.” It suggests that anything other than my own mind that lays claim to my attention impinges upon my own powers to willfully distribute that attention. My son’s repeated questions about the Turing test are a distraction, but it might also be an unexpected opportunity to engage the world beyond my own head.

I want to begin by responding to that last sentence by saying: Yes, and it is an opportunity you can take only by ceding the sovereignty of self, by choosing (“willfully”) to allow someone else to occupy your attention, rather than insisting on setting your own course. This is something most of us find it hard to do, which is why Simone Weil says “Attention is the rarest and purest form of generosity.” And yet it is our choice whether or not to practice that generosity.

I would further argue that, in most cases, we manage to cede the “right” to our attention to others—when we manage to do that—only because we have disciplined and habituated ourselves to such generosity. Chad’s example of St. Teresa is instructive in this regard, because by her own account her ecstatic union with God followed upon her long practice of rigorous spiritual exercises, especially those prescribed by Francisco de Osuna in his Tercer abecedario espiritual (Third Spiritual Alphabet) and by Saint Peter of Alcantara in his Tractatus de oratione et meditatione (Treatise on Prayer and Meditation). Those ecstatic experiences were a free gift of God, Teresa thought, but through an extended discipline of paying attention to God she had laid the groundwork for receptivity to them.

(I’m also reminded here of the little experiment the violinist Joshua Bell tried in 2007, when he pretended to be a busker playing in a D.C. Metro station. Hardly anyone noticed, but those who did were able to do so because of long experience in listening to challenging music played beautifully.)

In my theses I am somewhat insistent on employing economic metaphors to describe the challenges and rewards of attentiveness, and in so doing I always had in mind the root of that word, oikonomos (οἰκονόμος), meaning the steward of a household. The steward does not own his household, any more than we own our lifeworld, but rather is accountable to it and answerable for the decisions he makes within it. The resources of the household are indeed limited, and the steward does indeed have to make decisions about how to distribute them, but such matters do not mark him as a “sovereign self” but rather the opposite: a person embedded in a social and familial context within which he has serious responsibilities. But he has to decide how and when (and whether) to meet those responsibilities. So, too, the person embedded in an “attention economy.”

In this light I want to question Weil’s notion of attention as a form of generosity. It can be that, of course. In their recent biography Becoming Steve Jobs, Brent Schlender and Rick Tetzeli tell a lovely story about a memorial service for Jobs during which Bill Gates ignored the high-powered crowd and spent the entire time in a corner talking with Jobs’s daughter about horses. That, surely, is attention as generosity. But in other circumstances attention may not be a free gift but a just rendering—as can happen when my son wants my attention while I am reading or watching sports on TV. This is often a theme in the religious life, as when the Psalmist says “Ascribe to the Lord the glory due his name,” or in a liturgical exchange: “Let us give thanks to the Lord our God.” “It is meet and right so to do.”

There is, then, such a thing as the attention that is proper and adequate to its object. Such attention can only be paid if attention is withheld from other potential objects of our notice or contemplation: The economy of our attentional lifeworld is a strict one. But I would not agree with Chad that this model “levies an impossible burden of self mastery”; rather, it imposes the difficult burden of wisely and discerningly distributing my attention in ways that are appropriate not to myself qua self but to the “household” in which I am embedded and to which I am responsible.

. . . . . . . .


Algorithms Who Art in Apps, Hallowed Be Thy Code

 

If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at Georgia Institute of Technology, proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, Lanier’s use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other–as something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, setting it apart from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated that the lines between religion and technology, the latter sometimes reckoned the most coldly rational of human projects, are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” Crevier notes this on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ … a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 

. . . . . . . .


The Chokehold That Is Prosecutorial Discretion

A protest in Grand Central Station in New York after the grand jury’s verdict in the Eric Garner case.

The news yesterday that a second grand jury in two weeks had failed to indict a white police officer for causing the death of an unarmed black man has caused an understandable uproar.  Where should we focus the outrage in a justice system that is failing minorities? Let me suggest one candidate: prosecutorial discretion.

As an assistant federal public defender, I am used to experiencing the fruits of a system designed around the whims of the prosecution. First, what is prosecutorial discretion? Plenty of literature gives content to this phrase, but it boils down to the idea that a prosecutor can do whatever a prosecutor wants to do, as evidenced by the fact that 97% of all cases in federal court result in guilty pleas prior to, and instead of, a jury trial. The majority of these settlements are the result of plea agreements in which the prosecutor, not a judge, decides what charges a defendant will be convicted of, whether or not to require a plea to factors that trigger statutory mandatory minimum sentences, and, often, the sentencing range, all with no input from the judicial branch, let alone a jury of the defendant’s peers. The arcane cruelty of federal criminal laws, particularly in the area of drugs and guns, means that penalties are so severe that most defendants have little choice but to plead guilty. Severe sentencing and over-criminalization combine to make a dangerous cocktail in which almost everything is a crime, or multiple crimes.

So what does this have to do with Ferguson? And now the Staten Island chokehold case? Without question, these two grand juries declined to indict because the prosecutors, in their discretion, did not want them to. As has been detailed elsewhere, when a prosecutor wants an indictment, a prosecutor usually gets an indictment. The fact that both of these grand jury presentments took place over months (instead of hours, as is the norm) and involved testimony on behalf of the officers (each police officer testified on his own behalf, which is, again, not the norm) shows just how unusual these cases were. And just how little the prosecutors must have wanted to indict.

And why might these prosecutors not have wanted these grand juries to return indictments? The reason may be legitimate doubt about the officers’ culpability under the law of excessive force as written. Or it may be that prosecutors have to rely on law enforcement officers every day to investigate cases and make out those cases in court, and were thus loath to alienate their usual allies. But we may never know, and we have no right to know under the law.

Prosecutors make decisions not to indict all the time, and the only person who might even know about it is the victim, or alleged victim, of some criminal action. No one has a right to prosecution. The recent conversations regarding unprosecuted collegiate sexual assault, sparked in part by the controversial Rolling Stone article about my alma mater, the University of Virginia, implicate this fundamental fact. Every victim depends on the whim of a prosecutor to determine whether or not a case even starts down the path of seeking justice in a criminal court.

Bill Stuntz, the late Harvard Law professor, described our current system and its disproportionate effects on black Americans well in The Collapse of American Criminal Justice (Harvard, 2011):

As unenforced speed limits delegate power to state troopers patrolling the highways, so too American criminal law delegates power to the prosecutors who enforce it. That discretionary power is exercised differently in poor city neighborhoods than in wealthier urban and suburban communities. Far from hindering such discrimination, current law makes discriminating easy. That sad conclusion has its roots in a sad portion of America’s legal history. When the Fourteenth Amendment’s guarantee of the “equal protection of the laws” was enacted, one of its chief goals was to ensure that criminal law meant one law alike for blacks and whites—that both ex-slaves and ex-slaveowners would be held to the same legal standards, and that crime victims among both groups received roughly the same measure of legal protection. That understanding of equal protection did not survive Reconstruction’s collapse. Today, the equal protection guarantee is all but meaningless when applied to criminal law enforcement, one reason why both drug enforcement and enforcement of laws banning violent felonies are so different in black communities than in white ones.

The Ferguson and Staten Island cases may be targets of our outrage for many reasons, but one should certainly be that prosecutorial discretion has been exposed for what it is—entirely out of our hands.

Lisa Lorish is an assistant federal public defender in the Western District of Virginia and a graduate of the University of Virginia School of Law.

. . . . . . . .
