Author Archives: Ned O'Gorman

About Ned O'Gorman

Ned O'Gorman is Associate Professor of Communication and Conrad Humanities Scholar at the University of Illinois, Urbana-Champaign. He blogs about political technologies on The Infernal Machine.

Empire’s Regrets

The Pentagon (2008). Via Wikimedia Commons.

There was a time, not that long ago, when America’s “business” sensibilities were seen as both the economic and ethical boon of American empire. George F. Kennan, one of the chief architects of the cold war American empire, saw in “the reputation of Americans for businesslike efficiency, sincerity and straightforwardness” a singular advantage in America’s effort to establish and maintain its global power. (I am quoting from Kennan’s notes for his Memoirs, archived at Princeton.) Indeed, for nearly all of the cold war architects of American empire, the “business” personality meant reliability, responsibility, power, and stability.

This personality is also the kind needed to build an empire. Empires want stability. Power is not enough. The Pax Romana of the ancient world was not an accident of the centralization of power in the emperor. It was its purpose and its justification. By the time of Octavian’s ascent to imperial rule as Augustus in 27 BCE, the Roman Republic, though esteemed then and now for its constitution, had been in upheaval for the better part of a century, fraught with plots, assassinations, power plays, coups, and civil war. The emperor meant the empire could stabilize.

The American empire of the postwar and cold war periods was frequently characterized as a reluctant one. This was part of its “businesslike” ethic. Certainly, America’s ascent to world power after World War II was not intended to be a replication of the British colonial empire. It was to be more subtle, and, if possible, more invisible in its workings. It was not to be “colonial” in the way of nineteenth-century empires or America’s own past approach to its indigenous peoples. Rather, it was to work through a kind of triumvirate of distributed American military power, America-led financial institutions, and strategic alliances. This is, and was, American empire. And like all empires, it wants, on the whole, stability.

Within the empire of postwar and cold war America, technology was to be a means of order, or ordering. During the 1940s, 50s, and 60s, technology and technological innovation were inseparable from the empire: Big science, big industry, and a very big military-industrial complex drove technological innovation. There is no other way to make sense of the remarkable technological developments of the period—computers, the internet, satellites, missiles, and thermonuclear warheads—than in terms of the overwhelming imperative of the empire to enforce order onto the world, just as there was no other way to account for the empire’s penchant to perceive threats to order everywhere, from Laos to Guatemala to the Arctic.

But this “businesslike” empire was also an empire of capital, and of capitalism, both ideologically (as America confronted communism) and structurally (as private capital and public funding worked together to uphold empire). And capitalism is disruptive. As Americans learned in the 1930s, it was prone to destruction and reconstruction, ups and downs, booms and busts. If empire wants stability, capitalism favors instability.

From the mid-1940s until the early 1970s, American domestic and foreign policy was aimed at making both empire and capitalism work by having them work together. If Keynesianism was the logic, a “businesslike” approach to technological innovation was the linchpin. A primary way the American empire harnessed capitalism was by harnessing science, technology, and industry—the sources of “innovation.” Bell Labs, IBM, Westinghouse, General Motors: Big Industry meant not only working-class jobs but also cooperation between capital and empire. This cooperation was crucial to empire’s power, for it meant capitalism’s disruptive logics could be tempered by empire’s need for order.

But as things turned out, capitalists began to undermine the cooperative logic of the empire. In the age of Reagan, a new kind of capitalism and a new kind of capitalist emerged under the auspices of innovation and deregulation. Entrepreneurial capitalism began to exploit the stable networks of capital, communications, and human movement the empire offered. If neoliberalism was the new logic, technology was the motor, including new techniques and technologies of finance capital. Finance, computers, the internet, automation, and a new Silicon Valley ethic of creative, disruptive innovation emerged as insurgents within the empire. And “business” took on a new, distinctly disruptive look, too.

The entrepreneurial insurgents of the 1980s and 90s created new markets, even as they destroyed old ones, especially labor markets. Tech and finance industries took new risks, risks freed of empire’s insistence on stability. These risks were money-motivated, but they were also social, ambitiously aimed at reshaping the way humans live their lives (for the tech industry the “human” is always the subject, and for the finance industry humans are always objects).

And on the backs of these insurgents rode yet another kind of capitalist, the postmodern capitalist convinced that brand is value, image is economy, and money but a manipulable bit. Retail, development, entertainment, and service industries made brand identity a franchise industry, all the while using fraud, bankruptcy, lobbying, and the exploitation of legal and tax loopholes to create value, or perceived value.

Remarkably, given empire’s need for stability, these entrepreneurial and postmodern forms of capitalism became not only an economic ethic but a political one, as if the solution to every problem were to shake things up. We saw this, above all, in the penchant for deregulation in the 80s and 90s. But we also saw it in the mythologies that developed around Silicon Valley, innovation, and technology, and around what Donald Trump would brand “the art of the deal.” Still, from Reagan to the present, every presidential administration has tried to have it both ways, making room for capitalism’s disruptions while maintaining hold of a relatively stable American empire.

Now, the balance has shifted: The postmodern anarcho-capitalist, seen in the likes of Donald Trump, Steve Bannon, and Peter Thiel, is vying for the reins of the empire. This personality seeks to reorganize geopolitical power around the most elusive of categories—spirit, culture, and identity—while trying to create maximum space for the disruptions of capitalistic innovation. “Strength” and “weakness,” understood in quasi-romantic terms of spirit and culture, are supposed to organize the values of this would-be world power (which, because it eschews stability, would not be an empire), and state violence is to be used as a technique of purification (thus the ubiquity of “war” in the rhetoric of these anarcho-capitalists, a striking point of commonality with their surprise allies, conservative culture warriors). On the other hand, the old empire is striking back in the personalities of the new secretary of defense, James Mattis, and the new secretary of state, Rex Tillerson, both of whom seem to represent a vision of empire in which capital cooperates in exchange for relative world stability and in which “strength” is measured less in cultural and spiritual terms and more in terms of diplomatic alliances, military might, and economic hegemony.

Which vision will prevail is still unclear, but the current condition of uncertainty might partly explain the box-office success of Split, a horror film about a man suffering from multiple personality disorder. One might describe it as a parable for an empire in crisis, in which we viewers are the kidnapped hostages.

. . . . . . . .



Apple’s Fight with the FBI: A Follow Up

Cracked iPhone. Camron Flanders via Flickr.

In the end, the Apple-FBI dispute was resolved when the FBI cracked Apple’s security—without assistance. This is great for the FBI, but terrible for Apple, which now has, as the New York Times reports, an image problem. “Apple is a business, and it has to earn the trust of its customers,” says one security company executive in the Times. “It needs to be perceived as having something that can fix this vulnerability as soon as possible.”

In taking on the FBI in the San Bernardino case, Apple, it seems, had hoped to create the perception of an absolute commitment to security. Creating an iPhone that not even the state could crack was important to Apple’s image in a post-Snowden era. No doubt Apple has marketing data that suggests as much.

But now, everybody knows Apple’s “security” can be breached, with or without the help of Apple’s engineers. If the FBI had deliberately picked a public fight with Apple (which nothing suggests it did), it could hardly have orchestrated a better response to Apple’s refusal to cooperate with the San Bernardino investigation: The FBI got what it wanted while undermining the very claim on which Apple staked its case in the court of public opinion, leaving Apple frantically trying to figure out how the bureau did it.

Of course, as the security executive says, Apple is a business. Still, in an age of complaints about corporate profits taking precedence over the needs of civic life, I continue to be mystified by Apple’s stance, which—whatever the company’s claims—makes sense only as a strategy to maintain or further maximize its profits. In this case, Apple has shown little regard for that on which the relative security of a society actually depends: legitimate forensic work, due process, and the state’s (yes, the state’s, which, unlike corporations or private security firms, is publicly accountable) capacity to gauge future threats and reasonably intervene within the confines of the law. Yet “security” is to Apple a marketing problem, not a civic problem.

As I stated in my earlier, longer, and admittedly more thoughtful post about this matter, I think that Apple could have cooperated in this particular case, as they had done in past cases, with relatively little harm to the company’s reputation and with real forensic good being done. Of course, cooperation would have meant that the only wall between your iPhone and the FBI would have been the law itself, but isn’t that the whole point of liberal societies? Lex Rex—law over all, including the FBI, and including Apple’s image.

. . . . . . . .


The Public, the Private, and Apple’s Fight with the FBI

Apple CEO Tim Cook (2012). Mike Deerkoski via Flickr.

Apple is resisting the FBI’s request that the company write software to help unlock the iPhone of Syed Rizwan Farook, the perpetrator, with Tashfeen Malik, of the massacre in San Bernardino, California, on December 2, 2015. Apple is said to worry that if it lets the FBI into Farook’s phone, it will open a global can of worms and set a precedent for doing the same thing for less “friendly” governments. And a “back door” to individual phone data will compromise overall security, leaving phones vulnerable, in Tim Cook’s words, to “hackers and criminals who want to access it, steal it, and use it without our knowledge or permission.”

Since the appearance of the Snowden documents, it’s hard for many of us, at least on the level of sentiment, to root for the US government wanting access to phone data. Though the case is complex (and Apple has unlocked phones for the FBI before), the surveillance state is a remarkably frightening prospect, and even the very targeted, essentially forensic, aims of the FBI in the San Bernardino case understandably evoke worries.

But Apple’s battle with the FBI brings to mind Bob Dylan’s quip that “you’re gonna have to serve somebody.” We face something like the classic high-school English class choice between Orwell’s “Big Brother” and Huxley’s “Brave New World.” If the FBI concerns us, Apple should, perhaps, concern us even more.

As Hannah Arendt makes clear in The Human Condition, privacy never stands alone: It always has its co-dependents—especially, the public, the political, and the social. Changes in the meaning of “privacy” mean changes in the meaning of the “public,” and the other way around. The private and the public are interlocking political concerns.

In other words, whenever you are faced with a debate about privacy, also ask what the implications of the debate’s potential outcomes are for public life.

. . . . . . . .


Media Are Elemental: Gerunding

Brush drawing of German philosopher Martin Heidegger, made by Herbert Wetterauer, after a photo by Fritz Eschen. Via Wikimedia Commons.

I would prefer the gerund searching to the naked verb [search], but the battle appears to be lost.
—John Durham Peters, The Marvelous Clouds: Toward a Philosophy of Elemental Media, p. 325

Elemental media would seem to have something to do with the elements—whether we conceive of them as “earth, sea, sky, and fire,” “stone, salt, and sludge,” or “carbon, copper, radon, and bohrium.” Directing media studies back to the elements is an explicit aim of John Peters’s The Marvelous Clouds. The elements, he reminds us, lay at the heart of (not-so-)old notions of media: “Medium has always meant an element, environment, or vehicle in the middle of things.” Sea, fire, and sky, he argues, “are media for certain species in certain ways with certain techniques.” Media are not necessarily “natural,” but they are “ensembles of nature and culture, physis and technê,” such that ignoring nature altogether in discussions of media would be a gross neglect of the embeddedness of media within a world of elements.

The question of the relationship of physis (“nature”) to technê (“art,” “techniques,” “technology”) is a basic one in ancient Greek philosophy. It is also central to the works of one of Peters’s primary philosophical influences, Martin Heidegger.

Heidegger grants a lofty, if ambiguous, place to technê: “Technê belongs to bringing-forth, to poiêsis; it is something poietic,” he writes in his essay, “The Question Concerning Technology.” Ancient technê, in Heidegger’s estimation, entails “bringing” and “revealing,” and modern manifestations of technê expand to include the “[u]nlocking, transforming, storing, distributing, and switching” of nature.

What to make of all these Heideggerian infinitives and participle forms used as gerunds? And what do they have to do with elemental media, with nature and culture, and with Peters’s stated preference for the gerund over the “naked verb” when discussing techniques like “searching”? Lots could be said here. Clearly, Heidegger is interested both in thinking the general and particular together and in giving time a critical position in his philosophy. “-Ings” offer a means by which to accomplish both.

But even apart from any explicit concern with Heidegger, I have been thinking about gerunds with respect to media studies for a while. I have been researching the work of Harold Edgerton, the MIT engineer who became famous for stroboscopic photography and who, as Kevin Hamilton and I have documented, transformed his stroboscopic techniques into timing and firing mechanisms for atomic bombs. Edgerton was an engineer not of “fire” but of “firing.” His interest was not so much in “time” but in “timing.” If we think more broadly about the work of engineering (itself a gerund) in our world, we discover that engineering turns on processes more than essences, activities more than things. It concerns itself not just with timing and firing, but with Heidegger’s unlocking, transforming, storing, distributing, and switching—as well as securing, channeling, ordering, circulating, and a host of other verb-things.

Media studies has recently been preoccupied with “materiality.” Things, artifacts, infrastructures, and objects have helped to organize a wide range of rich inquiry. However, I wonder if the object-oriented ontology (I use the phrase a bit tongue-in-cheek) of media studies has kept from view the gerunding of media, even elemental media. To put it more provocatively: “Firing” always comes before the “fire,” “timing” before “time,” “storing” before “storage,” “switching” before the “switch,” and “searching” before the “search.” For how could there be fire without firing, time without timing, and so on?

What would it mean to give priority to kinds of energeia—“being-at-work,” in Joe Sachs’s translation of the Aristotelian term—over the artifactual works themselves?

I have taken (with Wellmon and Hamilton) to calling these gerunds, or rather the processes and activities which they represent, “deep media” (which has nothing directly to do with immersive media). I am not sure it’s the best term, but it gets to the way in which verb-things or processes underlie, metaphorically speaking, media things, artifacts, infrastructures, and objects.

One benefit of turning attention to “deep media” is that it explicates, more clearly than most media studies approaches, the way in which “engineering” engages the world. And engineering very much orders our world.

A second benefit of turning our attention to “deep media” is that it allows media students, scholars, and thinkers to probe the “ensembles of nature and culture” of which Peters writes. The sun is always firing. So are, for the time being, innumerable power plants across the globe. Firing is both a human practice and a natural phenomenon which share basically the same form—something that can get lost if one attends only to “fire” as a thing.

And a third benefit of attention to “deep media” is that it fits what media studies, by definition, concerns: what goes on in “the middle of things” more than the things themselves. The more verby vocabulary of deep media might better attune us to these goings-on.

Regardless, a philosophy of elemental media will need to take up the cause of the gerund—even if for the time being, as Peters laments, the cause of the gerund is “lost.”

. . . . . . . .


79 Theses on Technology:
Things That Want—A Second Reply to Alan Jacobs

I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.

But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” Such is for Jacobs the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.

I think my differences with Jacobs take three directions: one rhetorical, another ontological, and a third ethical. I will discuss each briefly here.

To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account of affordances, James Gibson described them as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.

But intentionality as I am calling it here goes beyond the artifacts themselves, to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those broader practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced, only becoming explicit when they are taken away.

So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.

Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustrations you feel when the plumbing goes awry. This frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.

Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence, the issue of voluntarism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be a product of our will? What do we do with animals, especially pets? What do we do with the colloquial expression, “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.

Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.

I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.

It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our life—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, some intending human.

To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).

So far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—it is only by coming to grips with the profound and active power of things that we best recognize that resistance to technology is, as Jacobs correctly argues, a cultural project, not a merely personal one, let alone primarily a definitional one.

So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and ask what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again, of course technologies want!

. . . . . . . .


79 Theses on Technology: On Things

“The Sausage” (of Operation Ivy), 1952.

One of the more refreshing aspects of Alan Jacobs’s wonderful exercise, “79 Theses on Technology. For Disputation,” is its medieval cast. Disputations, as Chad Wellmon writes, were medieval “public performances that trained university students in how to seek and argue for the truth.” Theses were textual tidbits that mediated things (res) by means of words (verba). Theses spurred the search for truth as they pointed readers or hearers to a world of things (res), rather than, as we currently assume, codifying and hardening “claims.” “Commentary,” as Jacobs suggests, was one important medieval means of trying to get to the things behind or beyond words (Theses 26-36).

I find it perplexing, then, that Jacobs is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

Here is some of my own commentary on Jacobs’s theses.

There’s a documentary film from the 1950s called Operation Ivy. Made by the US Air Force, it concerns the first-ever detonation of a thermonuclear device, a historic (and horrible) technological achievement. One of the pivotal points of the film’s narrative comes just before the hydrogen device is detonated. The narrator asks the chief engineer in charge of the test, “But what happens if you have to stop the firing mechanism, or can you stop it?” The engineer responds, “We can stop it all right if we have to. We have a radio link direct to the firing panel in the shot cab. If we have to stop the shot we simply push this button.”

“Just a simple flip of the wrist, huh?” the narrator says.

“That’s right,” says the engineer, “but a lot of work goes down the drain. You understand we don’t want to stop this thing unless it is absolutely essential.”

Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.

“Human agency,” then, is not a solution to the moral and political problems of technology; it is the condition of their possibility, and too often a means of their rationalization. We don’t need to reclaim “human agency”; we need to reclaim the meaningfulness and power of things (res)—the complex ways in which human decisions and choices become embodied, even sedimented in things.

It is odd to read a literary critic, one with some medieval sensibilities no less, expressing concern about ascribing “agency” to technology, calling such ascriptions “evasions of the human.” Texts are technologies, and technologies are things. In The Book of Memory, a book that every media theorist should read, Mary Carruthers writes of the medieval text:

[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….

Things, in this instance manuscripts, are indeed meaningful and powerful. Why would we want to divest things of their poetic quality, their meaningfulness, and indeed their power? Kevin Kelly may be off in his aims or misguided in his understanding, but he’s right to recognize in things, even and especially in technologies, sources of meaning and meaningfulness.

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these wants as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarism that undergirds technological instrumentalism.

The cyborg dream may or may not be the extension of some idolatry, but even there the remedy is not a firm boundary between “our selves and our tools.” “Then he said to me, ‘Son of man, eat this scroll I am giving you and fill your stomach with it.’ So I ate it, and it tasted as sweet as honey in my mouth” (Ezekiel 3:3). Our tools are part of us, central to our subsistence and lives. They need to be digested, ruminated, regurgitated, and, yes, sometimes violently spit out.

. . . . . . . .


The Public and Private, Once Again

Three surveillance cameras. Via Wikimedia Commons.

In the wake of the Charlie Hebdo attacks, a political fire that has been burning for a long time is turning into a firestorm. Recently, British Prime Minister David Cameron called for tech companies to provide government security services with encryption keys to ensure that government authorities may legally access an individual’s data when warranted. The concern, now publicly shared by President Obama, is that terrorists are using the new encryption technologies being developed by companies like Apple, Google, WhatsApp, and Snapchat, especially “end-to-end” encryption, which “makes it nearly impossible for anyone to read users’ messages—even the company itself.”
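
The technical claim here is worth making concrete. In an end-to-end scheme, the encryption keys live only on the users’ devices, so the company in the middle relays only ciphertext it cannot read. What follows is a minimal sketch of that idea in Python using the PyNaCl library, not a description of WhatsApp’s or Apple’s actual protocols (which add key verification, forward secrecy, and much else); the names alice_sk and bob_sk are invented for illustration:

    from nacl.public import PrivateKey, Box

    # Each party generates a keypair on their own device;
    # the private keys never leave those devices.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

    # The provider stores and forwards only this ciphertext. Holding no
    # private key, it cannot recover the message -- the "even the company
    # itself" point quoted above.

    # Bob decrypts with his private key and Alice's public key.
    assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"meet at noon"

The policy fight is precisely over whether such schemes should be weakened with a “back door,” that is, an extra key held by the company or the state.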

And so, as The Economist has recently stated in an editorial about the matter, we are confronted with the age-old dilemma “liberty vs. security, once again,” or more precisely “privacy vs. security.”

There are a host of legal, technological, political, and, perhaps above all, economic issues at play here. I do not claim to know precisely how one balances liberty with security, let alone how one balances liberty with the tech companies’ push for profit maximization or with governments’ desire to save face in the wake of acts of terror. But I do think that the scales are already tipped—that is, that these debates are taking place against a background of assumptions about privacy that are themselves problematic.

In calling privacy a right, we tend to do more than assert the necessity for its legal protection. We tend to carry with our idea of the right to privacy the metaphor of private space, even private property. Privacy as that which is bounded, set off from that which is public. Hence we have our private life and our public life, our private opinion and our public statements, our private information and our public profile, etc. In this very common way of thinking about things, the private and the public are two distinct realms, and the right to privacy is the guarantee of a wall around our private realm.

The privacy vs. security dilemma is embedded in this way of thinking: It has to do with when it is legitimately permissible for the government to break down the wall of privacy for the sake of security. It is a version of the broader dilemma of liberty within the quasi-utilitarian liberalism that underlies our assumptions about privacy. We are to be free, so long as we do not interfere with the freedom of others; but when we do so interfere, the state has the right to encroach on our freedom, indeed even on our privacy, in the name of preserving maximum freedom for the greatest number.

Indeed, in recent rebuttals by libertarians, some liberals, and the tech industry to the call by Cameron and Obama for preserving a “back door” option by which to access user data, we see the greatest-good-for-the-greatest-number argument used on behalf of super-encryption: Back doors, Cameron’s critics argue, can and will be used by the bad guys (criminals, hackers, the Russians, the Chinese) as well as the good guys, and the damage done by the bad guys could well be catastrophic. As Harvard’s Margo Seltzer recently said in The Financial Times,

If bad guys who are breaking laws cannot use encryption, they will find another way. It is an arms race and if governments say you cannot do this, that means the good guys can’t and the bad guys can. End-to-end encryption is the way to go.

Protecting privacy as an inviolable right, more sophisticated arguments go, is not only consistent with liberal societies but also the most effective means of security—even if it means terrorists can communicate with little fear of being detected. It’s often assumed here that an absolute right to privacy will neatly reconcile itself with, even produce, the greatest good for the greatest number (though the privacy of one’s data from the tech companies themselves remains rather more penetrable).

I think the super-encryption efforts of tech companies are socially and politically problematic. I think they are the wrong solution addressing the wrong problem. But in arguing so I am not interested in hypothetical calculations of the greatest good for the greatest number. Rather, I simply want to start with the manifest relationship of the private to the public. How do things work with respect to the private and the public?

Rather than starting with the regnant bugaboo, terrorism, let’s think about political corruption. Do politicians have an absolute right to the privacy of their deliberations and communications about public matters? Does the Speaker of the House, or the President, have an absolute right to the full and permanent protection of behind-the-scenes communications about matters of public consequence? If Legislator A and Donor K used WhatsApp to work out a deal for campaign donations in exchange for sponsoring legislation in the House of Representatives, would we, as citizens, accept the records of those conversations as being forever and irredeemably private, such that we simply could not ever access them?

I suspect that most of us, once we stop to think about it, would not be too comfortable with this already real-life scenario. What if the messages concerned bribes, threats, or other forms of back room dealings? What if the President told the Speaker things that the latter was not authorized to know? What if the CEO of Company X was privy to the messages, too? Or what if the Speaker sent the President the CEO’s messages without the CEO’s knowledge? This is the stuff of scandal and corruption, and these are each instances where communications, though “private,” indeed have public importance. The public would have a right to know about them.

This is not because we are willing to “sacrifice” privacy for the integrity of our political system; it is not a version of “liberty vs. security, once again.” Rather this is because, even with the high premium we put on the right to privacy, we understand that the private stands in a flexible, dialectical, and dependent relationship with the public: When private acts have direct public consequences, they are not strictly private—they can be called to public account.

This is the case whether we are talking about political corruption or communication among persons who would commit acts of terror. More important, in calling private acts to public account, we are not breaking down the wall of privacy; rather, we are simply walking through the door from the private to the public the reverse way, so to speak. An exchange between the private and the public has already taken place. We are but re-tracing it.

What I find particularly troubling about the unbreachable encryption efforts of Apple, Google, and others is that they technologically (or, more properly, mathematically) prevent this kind of reverse traffic in the name of the public good. Rather, in the name of “privacy”—and, let’s be honest, in the name of corporate profits—tech companies are creating, in effect, not so much inviolable walls around privacy but something more like trap doors from the private to the public that can be gone through only one way. In such a scenario, it is only the public that will suffer.

The genuine political worry animating super-encryption is the worry about Big Brother. As Wired writes of WhatsApp founder Jan Koum,

Growing up in Soviet Ukraine in the 1980s, WhatsApp founder Jan Koum learned to distrust the government and detest its surveillance. After he emigrated to the U.S. and created his ultra-popular messaging system decades later, he vowed that WhatsApp would never make eavesdropping easy for anyone. Now, WhatsApp is following through on that anti-snooping promise at an unprecedented scale.

But the United States and the United Kingdom are not the Soviet Union, and while both governments have participated aggressively in very troubling illegal, large-scale dragnet-like surveillance in the last decade, we have not seen a corresponding development of a police state working in tandem with the data collection agencies. To the contrary, the greatest problem faced by American and British citizens is that of government secrecy, which has provided cover for illegal and otherwise questionable state surveillance programs, together with the cultural problem seen in repeated demands from politicians that intelligence agencies unfailingly connect the dots prior to a terrorist attack, or be held culpable when they do not. This cultivates a culture of self-preservation in intelligence communities, encourages them to lean always to more aggressive actions rather than less aggressive ones, and opens the door to all sorts of government contractors promising infallible technological fixes for what are, in the end, inherently political and social crises.

Encryption processes that simply block government surveillance outright are, in keeping with Silicon Valley’s longstanding delusion, merely a supposed technological fix for what are political and cultural problems—be the problem the NSA or al-Qaeda and its affiliates. End-to-end encryption and its equivalents in no way address the real problems we face from a civil liberties perspective—government secrecy and the unrealistic expectations placed on counterterrorism agencies. Worse, encryption offers a false substitute for real solutions—something that is the moral equivalent of vigilante force when what we need is better government and law.

Ned O’Gorman is associate professor of communication and Conrad Humanities Professorial Scholar at the University of Illinois, Urbana-Champaign. He is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

Editor’s Note: Ned O’Gorman is also a contributor to The Hedgehog Review’s Spring 2015 issue. Reserve your copy today.

. . . . . . . .


The Arts and Humanities Save Lives!

There was a time, long ago, when it was taken for granted that what we now call the “humanities” was integral not just to human enjoyment but to human survival.

It was self-evident that what the Greeks called logos—language and language-based reasoning—not only made the human animal different from all the other animal species of Earth but was essential to the human capacity to survive, and thrive, in a too-often hostile and harsh environment. Rabbits have speed and agility; bears have teeth and claws; eagles have wings; humans have language. As one ancient Athenian put it:

For in the other powers which we possess we are in no respect superior to other living creatures; nay, we are inferior to many in swiftness and in strength and in other resources; but, because there has been implanted in us the power to persuade each other and to make clear to each other whatever we desire, not only have we escaped the life of wild beasts, but we have come together and founded cities and made laws and invented arts; and, generally speaking, there is no institution devised by man which the power of speech has not helped us to establish. For this it is which has laid down laws concerning things just and unjust, and things base and honorable; and if it were not for these ordinances we should not be able to live with one another. It is by this also that we confute the bad and extol the good. (Isocrates, Nicocles 5–7; cf. Antidosis)

Above all, logos was for the Greeks the source of wisdom, which was assumed to be essential to human survival. Similar ideas can also be found in ancient Hebraic, Confucian, Arab, and other cultures.

We have come a long way from this “naive” view. Ours, as the cliché goes, is a technological age, and science and engineering are now assumed to be the keys to human survival.

Recently, my own university’s chancellor, Phyllis Wise, reminded us of technology’s vast sway. In a blog post featured on LinkedIn, and now republished on the University of Illinois’s website, she argued that the United States needs to fix what she and her fellow scientists and engineers call the “innovation deficit”—the gap between current levels of federal funding of science and engineering and what the country needs “to remain the world’s innovation leader.”

Once we fix this “innovation deficit,” she said, “we will be well on our way to solving some of the world’s biggest problems.” Scientists and engineers, she explained in a series of examples, would be able to heal diseases, clean up the environment, and “ensure our national security” by providing new technologies to the Department of Defense. By designating “health and education,” “environment,” and “national security” as the three main areas to which science and engineering can substantially contribute, Chancellor Wise appealed to human life as the main beneficiary of science and technology research.

Of course, not all technological innovations are lifesaving. As Chancellor Wise concluded, some things are about “quality of life” rather than survival:

Finally, there’s this: everything from plasma TVs to the first internet browser to YouTube to whipped cream in a can—these innovations can all be credited to a university (ours, in the case of these examples). And while these innovations may not be lifesaving, they certainly contribute to our quality of life. Add to these examples the vibrant scholarly work of our historians, artists, musicians, journalists, and others who work to enrich our understanding of the world and of other people in a powerful way, and it’s clear that federal funding for research is a critical part of modern life.

This seems all self-evident enough, but is it?

Professional hurt feelings aside, it is incorrect to categorize the arts and humanities as concerned with “quality of life” rather than—as Wise’s post clearly implies—the sustenance of human life. The value of “historians, artists, musicians, journalists, and others” is not equivalent to the invention of canned whipped cream!

Today the world’s biggest problems have indeed grown big enough to concern the very survival of the human species: environmental catastrophe, genocidal weapons, and fragile technological and economic systems each put the species—not just individuals—at risk. But the solutions to these problems, inasmuch as they can be achieved, will be essentially, and not merely accidentally, social and political in nature.

Consider a national problem before us now that very much concerns human life. Two weeks ago, after the grand jury in Ferguson decided not to indict Officer Darren Wilson, President Obama and many others came out arguing for a technological fix: cop cameras. With this technology, they argued, we could erase ambiguities and probabilities from the arbitration of police conduct and be able to know with more certainty the sequence of events in incidents like the slaying of Michael Brown. Meanwhile, police officers themselves would certainly feel the presence of these cameras and be compelled to exercise more restraint when interacting with the public.

If this were so, Chancellor Wise might be right: Science and technology could fix some of the world’s biggest problems.

But as we learned last week in another grand jury decision concerning the police killing of another black man—this time, Eric Garner of Staten Island—technology means very little in a culture of police impunity before people of color. Garner’s killing took place in the full view of a camera. The footage is widely available online and shows an unarmed man who posed no imminent threat to police or bystanders being put in a chokehold by a police officer and slowly suffocated. As Sean Davis at the Federalist writes, “The grand jury’s decision not to bring any charges against the officer who killed Garner is inexplicable. It defies reason. It makes no sense…. All we have to do is watch the video and believe our own eyes.”

Cameras, microchip sensors, drones: No matter what the gadget, there is no technological fix for racism, or for more subtle problems such as prosecutorial discretion. There is no science that can save us from the historically embedded habits and the wider structures that cause us, seemingly instinctively, to value the lives of some more than others based on skin color, gender, or any number of other social markers of the Other. And the only solutions for structural problems within the law are both better law and better practice of the law.

These problems require citizens capable of reflecting on matters like discrimination and the law, and leaders who understand that the world’s problems can’t be fixed simply through technology. The world’s largest problems are not equivalent to the problem of gravity. If they were, perhaps science and technology could solve them. We’d just need more well-funded Newtons and Einsteins. Rather, we have problems that are inherently political and/or social in nature and that require political and/or social solutions. Moreover, it should be obvious by now that scientific and technological “fixes” often create new problems of their own (e.g., industrialism’s creation of global warming, genocidal killing machines, and antibiotics).

So while it seems silly to say it, it needs to be said, in light of the legitimate value political and academic leaders are putting on life: The arts and humanities save lives!

If for the Greeks, logos was the means by which humans established cities, today it is the language, visual, plastic, and other arts that offer the sorts of practices and epistemologies capable of better reckoning with the world’s problems, whether they be the plight of racism (which keeps technologies from being put to just use), the situation of climate change (which requires above all political solutions for any technological fixes to be implemented), or the existence of economic inequality and poverty (which simply cannot be addressed adequately by any metric apart from good policy).

I am not claiming that the arts and humanities are salvific. There is no need for hyperbole here, no need for triumphalism. Rather, we need only to reckon with the world’s problems as they are to see that the arts and humanities, inasmuch as they equip us to do political and social work well, can save far more lives than any gizmo. If saving lives is our metric, the biggest deficit we face is not an “innovation deficit” but a “wisdom deficit.”

Nor is the issue here the “humanities” versus the “sciences,” but rather the obvious point that science and technology are good only insofar as they are used well, and that their good use is a matter of principle and prudence, not causality and efficiency.

Our greatest problems are social and political problems. They call for social and political solutions. The arts and humanities train us in the sort of skills and sagacity integral to social and political solutions. The arts and humanities save lives!

Chancellor Wise and other leaders of the academy-in-crisis: in addition to the “innovation deficit,” would you consider solutions to the “wisdom deficit” we now face?

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

. . . . . . . .


John Searle and the Threat of Artificial Intelligence

 

John Searle wants to reassure us. The University of California, Berkeley, philosopher, a specialist on mind, “intentionality,” consciousness, and other concepts integral to the distinct sort of being that we refer to as human being, wants us to know that we need not fear “super intelligent computers intentionally setting out on their own to destroy us.” Nor, it turns out, should we grow giddy about the imminent passage of human being into the “infosphere,” where “everything is information and . . . computers are much better at it.”

In this month’s issue of The New York Review of Books (October 9, 2014), Searle offers an extensive rebuttal of the theses offered in two recent books: Nick Bostrom’s Superintelligence, which warns of the impending rise of “machine brains” superior to human ones, and Luciano Floridi’s The 4th Revolution, which announces the metaphysical emergence of the “infosphere.” Searle persuasively argues that both authors fail to account adequately for the vital role of consciousness in human being, and therefore miss in a fundamental sense what is really entailed in the creation of “artificial intelligence” and indeed “information.”

Searle offers several worthwhile distinctions, which he’s explored elsewhere, especially distinctions between “observer independent” and “observer relative” features of reality. The former exist “regardless of what we think”—mountains and molecules, for example. The latter “depen[d] on our attitudes” for their real existence—money and marriage, for example. Searle suggests that “information” falls in the latter category, as does “computation” when it is the product of machines. For only conscious agents, he argues, can have, create, or otherwise interact with “information” and “computation.” There is, then, no such thing as purely artificial information or intelligence, for there is no such thing as artificial consciousness. Conscious beings—something machines are not—must cooperate with the artificial mechanisms of information and computation in order for them to function in any way remotely as “intelligent” machines.

Or so it will be for the foreseeable future. It is possible, he surmises, that perhaps someday we will be able to make machines that “duplicate” the human brain, including consciousness. And it is at the point of this speculative possibility that Searle’s argument becomes both more interesting and more problematic, because it probes—somewhat indirectly, but powerfully nonetheless—the significance of the “artificial,” a category in which we can put “art,” “artifice,” and certainly “technology.”

A bit of background on the artificial might be helpful here. In ancient Greece, a story circulated about the creation of human beings by the gods that began uncomfortably with humans being left “naked and shoeless” and thus in grave danger before the elements. And so it was until Prometheus gave humans fire and the mechanical arts by which to sustain and preserve their lives. The “artificial,” we might say, saved human life.

But the Greeks were as capable of worrying about the artificial as they were about celebrating it: Most famously, Plato worried about simulacra, those copies of copies that did nothing but deceive and mislead humans in their quest for order and justice.

The Edenic account in the Hebrew scriptures is different from the Greek one in that it presumes the goodness of being naked and shoeless—until the great Fall, at which point artificial coverings were made to cover human nakedness in a gesture of divine mercy and judgment.

I could offer other examples of various ideas and arguments about the status and significance of the artificial in human life. Questions about the relationship between art and nature, or the artificial versus the real, are longstanding and taken up in many cultural traditions.

But what interests me here is Searle’s account, which is fascinatingly emblematic of our own age. Whereas these older accounts were concerned with the relationship between nature and art, Searle is concerned most crucially with what it takes to make something really or truly an artificial version of some non-artificial entity. What does it mean, Searle asks, to really “artificialize” (my own word, if such a word exists) something? “Artificial intelligence” as we now know it, argues Searle, may be artificial, but it is not really intelligence. So what would a truly artificial intelligence look like?

An artificial heart, by contrast, seems to be for Searle really an artificial heart. Why? Searle bases this distinction on that between “a simulation or model” and “duplication.” He writes:

Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both real and artificial hearts are physical pumps, unlike the computer model or simulation.

There is a strange literalism at work in Searle’s approach—or, better, an artificial essentialism. Causal processes are for Searle the essence of “reality”; the heart’s essence, it seems, is found for him in a basic causal function, pumping. In order to create a real artificial heart, that basic causal function needs to be literally, materially duplicated, or re-produced. Then we have, in a paradoxical formula, a real artificial heart.

But, I must ask, can that artificial heart skip a beat in a moment of terror or wonder? Such heart-stopping moments, too, can be understood in cause-and-effect terms. Can an artificial heart grow stronger with exercise, or weaker with poor nutrition, also causal phenomena? Can an artificial heart, to be a bit hyperbolic, be eaten and subject to the causal processes of digestion? If not, then clearly the artificial heart is not a “real artificial heart” in every respect, but only in one respect, albeit a very important one.

My point is that “duplication” is a poor measure of the “really artificial,” for it is in the very nature and substance of the “artificial” itself to have a relative and partial relationship to that which it is imitating, copying, or “duplicating.” The artificial heart duplicates some aspects of the natural heart, but not all aspects. And the same thing can be said about computerized artificial intelligence: Of course it is true that, as Searle writes, computers lack altogether a “psychological reality,” and are nothing but well-designed, highly functional circuit systems. Nevertheless, in certain circumstances they behave outwardly in a manner that we recognize as “intelligent.” This resemblance may be far less intense than the duplication of the “causal mechanism” of the human heart, but it is a resemblance nevertheless.

If the philosopher’s quest is to find the line at which point “artificial X” crosses a line to become a truly artificial X, I am afraid it may be a frustrating task, for sharp conceptual distinctions are not going to do it. Rather, we are better off thinking in terms of a continuum, on which perhaps “concepts” themselves might sit at one end, “models” somewhere in the middle, and “duplications” at the other end.

Searle, as I said, wants to reassure us: Computers are not going to take over the world, and we have not entered a new Aquarian age of the “infosphere.”

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

I don’t find this very reassuring, however. A greater danger than a future filled with “really artificial intelligence” is already squarely with us: We often behave as though computers believe, desire, and move. We ascribe human agency to them. And in this present reality, not an apocalyptic future, what is “really artificial” matters little. Rather, what we need is better reflection on the meaning of the artificial in our lives together.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.


Living With Ferguson

In April 1964, Malcolm X gave what would be one of the best-known speeches of the civil rights era, “The Ballot or the Bullet.” Speaking of the plight of blacks in America, Malcolm X asked an audience at Cory Methodist Church in Cleveland, Ohio, “Where do we go from here? In my little humble way of understanding it, it points either to the ballot or the bullet.”

I have thought a lot about Malcolm X’s proposition since Michael Brown was shot dead in Ferguson, Missouri, not far from where I grew up.

Malcolm X in 1964 by Herman Hiller (photo from Wikimedia Commons)

To this day, Malcolm X’s speech is all too often described as a call to violence and little more. But it was actually one of the more coolly pragmatic and tactical arguments of the era. It was as much a call to “the ballot” as it was to “the bullet.” Indeed, based on a simple word count, Malcolm X seemed more interested in the power of the ballot in 1964 than he was in the power of the bullet. Unlike the bullet, the ballot offered a straightforward electoral strategy:

These 22 million victims are waking up. Their eyes are coming open. They’re beginning to see what they used to only look at. They’re becoming politically mature. They are realizing that there are new political trends from coast to coast. As they see these new political trends, it’s possible for them to see that every time there’s an election the races are so close that they have to have a recount. They had to recount in Massachusetts to see who was going to be governor, it was so close. It was the same way in Rhode Island, in Minnesota, and in many other parts of the country. And the same with Kennedy and Nixon when they ran for president. It was so close they had to count all over again. Well, what does this mean? It means that when white people are evenly divided, and black people have a bloc of votes of their own, it is left up to them to determine who’s going to sit in the White House and who’s going to be in the dog house.

And so “The Ballot or the Bullet” advocated hitting, straight on, the racist “American system” (Malcolm X’s just phrase) through traditional political means.

To be sure, if the ballot failed, the bullet, he suggested, would be necessary—but as the political equivalent of the ballot, rather than its extreme alternative. In Malcolm X’s speech, both the ballot and the bullet are blunt political technologies capable of changing the “American system.” “The ballot or the bullet. If you’re afraid to use an expression like that, you should get out of the country.”

Of course, there is irony here, but Malcolm X was on to something. Part of American optimism is this belief that the “system” can be changed with the right “inputs,” be they votes or violence. In the recent Ferguson protests, we saw this play out. A story on National Public Radio about white residents of Ferguson featured a man complaining, “There are so many people here that don’t vote, but they think they have the rights to everything in the world. If you don’t vote, you aren’t given all the gifts of our society.” There were also more than a few protestors in Ferguson ready to raise hell with guns and fire and so take on the American system of white supremacy.

But here’s the problem with the ballot and the bullet as the only two options in a struggle for justice: What we call “racism,” what we call “oppression,” what we call “segregation,” what we call “white supremacy”—these are all systemic and historic injustices, the large-scale realities of the “American system.” As we are seeing once again in Ferguson, the use of force and the right to vote are part of an American system that has, to date, regularly made people of color second-class citizens, if citizens at all.

Despite an American optimism that encourages us to believe otherwise, neither the ballot nor the bullet is capable of targeting systemic and historic injustices. For both ballots and bullets are themselves components of the system, and—more broadly—of a systemic approach to political problem-solving that operates only in terms of inputs and outputs (where ballots and bullets are inputs supposedly capable of creating new outputs). In fact, in another irony, some white Americans would today agree with Malcolm X, tactically speaking. For some white Americans, “gun rights” have joined “voting rights” to represent the be-all and end-all of American democracy. If you don’t like it, vote (making sure you show your ID). If your vote is ineffective, go buy a gun.

The problem, however, is not just the ways electoral politics and force are implicated in systemic injustices. It is also one of scale and scope: In approaching systemic and historic injustice, we approach the wrongs of what St. Augustine long ago, in a helpful distinction, referred to as the ills of the “world.” By the “world,” Augustine meant not what is “inevitable” or “given,” but rather the accumulation over time and space of injustices—what we would today call systemic and historic injustices. The “civitas,” or polity, Augustine argued, lacks any direct political means by which to address injustices of such scope and scale. As a St. Louis friend recently lamented by email, “It is so disheartening to live in St. Louis, to be immersed in 24–7 coverage of the unrest and outrage in Ferguson, and yet to know that the structural and systemic problems that create the context for incidents like this seem so intractable.”

Malcolm X, in “The Ballot or the Bullet” at least, does not seem all that far from Augustine’s position. Although he calls for the ballot or the bullet, he does so with pessimism and irony. The sort of equality that seems most within reach is that old “American” one of life for life, tooth for tooth, eye for eye: “It’ll be Molotov cocktails this month, hand grenades next month, and something else next month. It’ll be ballots, or it’ll be bullets. It’ll be liberty, or it’ll be death. The only difference about this kind of death — it’ll be reciprocal.”

The gap between local political solutions and the systemic injustice of history is one that American optimism tends to ignore (to its peril). I have no solutions for bridging the gap, at least not directly. One of the most common solutions in the imaginary of power-hawks and revolutionaries alike has been catastrophe, by which—supposedly—history and system can be overturned and we can begin anew. But that’s a fool’s errand. Catastrophe is not a recipe for a more just order, but instead for the reordering of injustices.

A protestor in Ferguson, August 15, 2014 (photo from Wikimedia Commons)

But here’s a hopeful paradox to fend off the pessimism at least a little: The ballot and the bullet are component parts of systems of injustice, and yet the ballot and the bullet have become, in certain moments, representative acts of public judgment upon the very same systems of injustice in which they participate (and quite often without any intention on the part of the “voter” or “shooter”).

Ferguson is a case in point. What Officer Darren Wilson did to Michael Brown itself became a representative act of public judgment against blacks on behalf of a white policing system. The arrival of protestors on the streets, who named and decried white supremacy and police violence, also became a representative act of public judgment. In turn, the arrival of police in armored vehicles with combat weapons, representing the policing system not only as predominantly white but also as deeply militarized, became both an act of public judgment on behalf of the system and itself subject to other acts of public judgment . . . not least in Captain Ron Johnson’s walking with the protestors, in marked contrast to the tactics of the local police. Far from a mere police “calming tactic,” Johnson’s walk was above all an act of public judgment upon a predominantly white, militarized policing system.

Each of these events was or became a public act, and each an act of judgment. What made Ferguson so remarkable—and what makes it remarkable still—is the succession of such public acts of judgment, seemingly running out of control. This is also what made Ferguson so frightening to so many. But they were, I think, less “out of control” than they were regulated by a system of publicity (a word we need to recover from the publicists) operating independently of (though certainly not entirely apart from) the “inputs” of America’s racially tilted system. That is, these representative acts of public judgment were meaningful in ways that transcend the unjust system. The world, as the protestor in the photograph above declared, was watching, and that was significant.

Although I am loath to get into the well-worn business of contrasting Malcolm X with Martin Luther King, Jr., I do think that the particular political virtue of the nonviolent tactics of MLK and others was found, and still is found, in their creation of situations conducive to these representative acts of public judgment (which need not always be nonviolent). It is possible, within the sphere of publicity, to offer acts of judgment that “the system” cannot contain or control.

I do not know if the representative acts of public judgment that Ferguson gave us will have systemic effects for the better. That is, I don’t know if they will lead to structural and systemic changes that will mitigate injustices. There is good reason to remain pessimistic. Ferguson did, however, give us a rare public window into the historic racist injustices of the “American system,” and it offered its judgments, judgments with which we will now have to live. And living with such judgments, I think, can only be for the better.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.
