Category Archives: Commentary

It’s Not the “Deep State.” It’s the State.

The truth is out there? X-Files cosplay image via Wikimedia Commons.

Earlier this month, Rep. Mike Kelly, a Republican from Pennsylvania, got in on the latest pro-Trump talking point, telling a gathering of Republicans at a Lincoln Day dinner:

President Obama himself said he was going to stay in Washington until his daughter graduated. I think we ought to pitch in to let him go someplace else, because he is only there for one purpose and one purpose only, and that is to run a shadow government that is going to totally upset the new agenda.

The idea that a “shadow government” or “deep state” has been actively resisting Trump since the president’s inauguration has been widely circulated on right and alt-right media channels. Last week, Rush Limbaugh published an article indiscreetly titled, “Barack Obama and His Deep State Operatives are Attempting to Sabotage the Duly Elected President of the United States.” Meanwhile, Sean Hannity took to the airwaves to argue that the Russian hacking of the DNC was actually the work of American intelligence agencies seeking to undermine Trump. And I’d best not mention Breitbart News on the matter.

Thankfully, the notion that the “deep state” is responsible for the Trump administration’s bumbling, stumbling first couple of months in office has been panned by pundits on the left and the right. The New Yorker’s David Remnick wrote last week, “The problem in Washington is not a Deep State; the problem is a shallow man—an untruthful, vain, vindictive, alarmingly erratic President.” Similarly, Kevin Williamson writes in the National Review that “it isn’t the ‘Deep State’ that is making President Donald Trump look like an amateur. It is amateurism.”

But if reports are true that Trump, Bannon, and other members of the White House inner ring are feeling frustrated and blaming it on the “deep state,” maybe we should ask why. Clearly, they feel like they are bumping up against something big. Just because they may be mistaking it for the “deep state” (let alone an Obama-run deep state) does not mean that they are not in fact facing some very real and very big resistance: the state itself, that vast network of bureaucracies, rules, regulations, institutions, and cultures that comprise the United States government.

. . . . . . . .


Politics is Downstream from Culture, Part 1: Right Turn to Narrative

Our lives—indeed, our very species—has storytelling wound into our DNA. From the earliest cave drawings, man has expressed himself in terms of story. Ancient civilizations understood that stories are vital to understanding our place in the world, so much so that they codified storytelling and found base rules that form it. Oral histories are a part of every culture across the globe.

I’ll give you three guesses as to the author of this statement. In fact, I’ll give you thirty. It’s not Bill Moyers, and it’s not James Cameron, and it’s not some literature professor. It’s from Breitbart News. If you’re a member of the professional (or non-professional) humanities, that should get you to more than guessing.

The quote, by Lawrence Meyers, appeared in a 2011 article headlined “Politics is Really Downstream from Culture.” It was an elaboration of Andrew Breitbart’s mantra, “politics is downstream from culture.” The slogan—a nice inverse of James Carville’s “It’s the economy, stupid!”—means what it says: Change the culture, change the government.

Now, six years later, national politics, we might say, is culture, and maybe even only culture. Steve Bannon, Breitbart’s successor, is not only in the White House, but, for the time being at least, enjoys a front-row seat on the National Security Council. John McCain, concerned about the elevation of a civilian political strategist to chief advisor on foreign affairs, has called Bannon’s NSC role a “radical departure from any National Security Council in history.” But the concern should run deeper than the possibility of war becoming but another mode of dirty politics. It should include Bannon making international relations into little more than a good story. This sense of story, as something that captures the attention, immerses the reader or viewer, and manufactures a desired political attitude, is Bannon’s stock-in-trade. He’s explicit about his sources for his narrative techniques: “the Left,” conceived on a spectrum from Hollywood filmmakers to Lenin (whom Bannon has said he idolizes, with tongue pretty clearly in cheek).

Since he left Goldman Sachs in 1990, Bannon has been first and foremost a worker in the culture industry, a producer of stories. After helping negotiate the sale of Castle Rock Entertainment to Ted Turner, Bannon gained a stake in television shows like Seinfeld. He then got into his own brand of filmmaking, producing, among other works, a hagiography of Ronald Reagan, a celebration of Sarah Palin, an encomium to Duck Dynasty star Phil Robertson, and a self-explanatory exposé, “Occupy Unmasked.” After Andrew Breitbart died suddenly in 2012, Bannon took over Breitbart News and single-handedly retrofitted the fringiest part of the “Right Wing Conspiracy” into a slick, savvy, and at least partly fact-based operation. (At the same time, Bannon helped found the investigative research organization that produced Clinton Cash, the book that undermined the Democratic nominee long before anyone from Vermont got involved.)

In addition to left-leaning pop culture sources, Bannon has also borrowed techniques from the academic left, specifically from the humanities. That’s why it’s now possible to find quotes like the one I led off with above, where it’s hard to tell whether we’re reading literary theory or an article on Breitbart.

. . . . . . . .


Empire’s Regrets

The Pentagon (2008). Via Wikimedia Commons.

There was a time, not that long ago, when America’s “business” sensibilities were seen as both the economic and ethical boon of American empire. George F. Kennan, one of the chief architects of the cold war American empire, saw in “the reputation of Americans for businesslike efficiency, sincerity and straightforwardness” a singular advantage in America’s effort to establish and maintain its global power. (I am quoting from Kennan’s notes for his Memoirs, archived at Princeton.) Indeed, for nearly all of the cold war architects of American empire, the “business” personality meant reliability, responsibility, power, and stability.

This personality is also the kind needed to build an empire. Empires want stability. Power is not enough. The Pax Romana of the ancient world was not an accident of the centralization of power in the emperor. It was its purpose and its justification. By the time of Octavian’s ascent to imperial rule as Augustus in 27 BCE, the Roman Republic, though esteemed then and now for its renowned constitution, had been in upheaval for nearly a century, fraught with plots, assassinations, power plays, coups, and civil war. The emperor meant the empire could stabilize.

The American empire of the postwar and cold war periods was frequently characterized as a reluctant one. This was part of its “businesslike” ethic. Certainly, America’s ascent to world power after World War II was not intended to be a replication of the British colonial empire. It was to be more subtle, and, if possible, more invisible in its workings. It was not to be “colonial” in the way of nineteenth-century empires or America’s own past approach to its indigenous peoples. Rather, it was to work through a kind of triumvirate of distributed American military power, America-led financial institutions, and strategic alliances. This is, and was, American empire. And like all empires, it wants, on the whole, stability.

Within the empire of postwar and cold war America, technology was to be a means of order, or ordering. During the 1940s, 50s, and 60s, technology and technological innovation were inseparable from the empire: Big science, big industry, and a very big military-industrial complex drove technological innovation. There is no other way to make sense of the remarkable technological developments of the period—computers, the internet, satellites, missiles, and thermonuclear warheads—than in terms of the overwhelming imperative of the empire to enforce order onto the world, just as there was no other way to account for the empire’s penchant to perceive threats to order everywhere, from Laos to Guatemala to the Arctic.

But this “businesslike” empire was also an empire of capital, and of capitalism, both ideologically (as America confronted communism) and structurally (as private capital and public funding worked together to uphold empire). And capitalism is disruptive. As Americans learned in the 1930s, it was prone to destruction and reconstruction, ups and downs, booms and busts. If empire wants stability, capitalism favors instability.

From the mid-1940s until the early 1970s, American domestic and foreign policy was aimed at making both empire and capitalism work by having them work together. If Keynesianism was the logic, a “businesslike” approach to technological innovation was the lynchpin. A primary way the American empire harnessed capitalism was by harnessing science, technology, and industry—the sources of “innovation.” Bell Labs, IBM, Westinghouse, General Motors: Big Industry meant not only working-class jobs but the cooperation between capital and empire. This cooperation was crucial to empire’s power, for it meant capitalism’s disruptive logics could be tempered by empire’s need for order.

But as things turned out, capitalists began to undermine the cooperative logic of the empire. In the age of Reagan, a new kind of capitalism and a new kind of capitalist emerged under the auspices of innovation and deregulation. Entrepreneurial capitalism began to exploit the stable networks of capital, communications, and human movement the empire offered. If neoliberalism was the new logic, technology was the motor, including new techniques and technologies of finance capital. Finance, computers, the internet, automation, and a new Silicon Valley ethic of creative, disruptive innovation emerged as insurgents within the empire. And “business” took on a new, distinctly disruptive look, too.

The entrepreneurial insurgents of the 1980s and 90s created new markets, even as they destroyed old ones, especially labor markets. Tech and finance industries took new risks, risks freed of empire’s insistence on stability. These risks were money motivated, but they were also social, ambitiously aimed at reshaping the way humans live their lives (for the tech industry the “human” is always the subject, and for the finance industry humans are always objects).

And on the backs of these insurgents rode yet another kind of capitalist, the postmodern capitalist convinced that brand is value, image is economy, and money but a manipulable bit. Retail, development, entertainment, and service industries made brand identity a franchise industry, all the while using fraud, bankruptcy, lobbying, and the exploitation of legal and tax loopholes to create value, or perceived value.

Remarkably, given empire’s need for stability, these entrepreneurial and postmodern forms of capitalism became not only an economic ethic but a political one, as if the solution to every problem were to shake things up. We saw this, above all, in the penchant for deregulation in the 80s and 90s. But we also saw it in the mythologies that developed around Silicon Valley, innovation, and technology, and around what Donald Trump would brand “the art of the deal.” Still, from Reagan to the present, every presidential administration has tried to have it both ways, making room for capitalism’s disruptions while maintaining hold of a relatively stable American empire.

Now, the balance has shifted: The postmodern anarcho-capitalist, seen in the likes of Donald Trump, Steve Bannon, and Peter Thiel, is now vying for the reins of the empire. This personality seeks to reorganize geopolitical power around the most elusive of categories—spirit, culture, and identity—while trying to create maximum space for the disruptions of capitalistic innovation. “Strength” and “weakness,” understood in quasi-romantic terms of spirit and culture, are supposed to organize the values of this would-be world power (which, because it eschews stability, would not be an empire), and state violence is to be used as a technique of purification (thus the ubiquity of “war” in the rhetoric of these anarcho-capitalists, a striking point of commonality with their surprise allies, conservative culture warriors). On the other hand, the old empire is striking back in the personalities of the new secretary of defense, James Mattis, and the new secretary of state, Rex Tillerson, both of whom seem to represent a vision of empire in which capital cooperates in exchange for relative world stability and in which “strength” is measured less in cultural and spiritual terms and more in terms of diplomatic alliances, military might, and economic hegemony.

Which vision will prevail is still unclear, but the current condition of uncertainty might partly explain the box-office success of Split, a horror film about a man suffering from multiple personality disorder. One might describe it as a parable for an empire in crisis, in which we viewers are the kidnapped hostages.

. . . . . . . .


Silicon Valley’s Survivalists

Bunker 318, Assabet River National Wildlife Refuge, Maynard, Massachusetts. Via Wikimedia Commons.

Seventeen years ago, just outside of Birmingham, Alabama, my wife’s grandfather built floor-to-ceiling shelves in his basement and filled them with toilet paper, tuna, Twinkies, and batteries. He was prepping for Y2K, the Millennium bug. Boom Boom, my wife’s normally calm and reasonable grandfather, was convinced that computer programmers had set civilization up for collapse by representing the four-digit year with only the final two digits. Once the digital clocks and computers tried to register the year 2000, electrical grids, and with them all things electronic, would crash. Civilization wouldn’t be too far behind. My father, in the foothills of western North Carolina, didn’t stock his shelves. But he did load his shotgun.
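
For readers who never saw the bug up close, here is a minimal, hypothetical sketch of the arithmetic that worried Y2K preppers (the function names are mine, invented for illustration, not drawn from any actual legacy system): when a year is stored as its last two digits, the rollover from 1999 to 2000 makes simple date calculations go silently wrong.

```python
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Naive difference between two-digit years, as many legacy systems computed it."""
    return end_yy - start_yy

def years_elapsed_four_digit(start: int, end: int) -> int:
    """The remediation: store and compare full four-digit years."""
    return end - start

# A record dated 1999, checked at the start of 2000:
print(years_elapsed_two_digit(99, 0))        # -99, not 1: interest, schedules, and expirations misfire
print(years_elapsed_four_digit(1999, 2000))  # 1
```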

Today, prepping isn’t just for old southern white guys. The tech titans of Silicon Valley, as Evan Osnos recently wrote in the New Yorker, are buying bunkers and waiting for the breakdown of society as well. But Silicon Valley’s survivalists are different from Boom Boom and my dad. They are preparing for a civilizational collapse they otherwise celebrate as disruption and innovation.

. . . . . . . .


Irony Goes to Washington

Woodcut showing Cicero writing his letters. Wikimedia Commons.

A curious thing happened on the way to the Trump presidency. Cicero—the ancient Roman statesman and teacher of rhetoric—started appearing in the media. Slate, CNN, and the Washington Post suggest that Trump’s sometimes incoherent speech is actually drawing on hallowed techniques of political oratory. Ancient rhetoricians didn’t just analyze speech; they taught ambitious young men how to use it to gain power with verbal tricks, such as saying you won’t say something as a way of intimating it. (Remember the first debate?) That’s called “praeteritio.” But hyperbole is also a technique. And so is intentionally contradicting yourself, which is called irony.

These articles about Trump’s Ciceronian speech are part of a debate about how intentional his speech actually is. Is he a master of rhetoric—especially on his preferred medium, Twitter—or simply lacking an attention span, firing off tweets on conflicting whims?

. . . . . . . .


Infernal Machine Collective Manifesto: On the Occasion of the Inauguration

An empty podium at the U.S. Embassy in London. U.S. Embassy London via Flickr.

We—let us reclaim the We, the declaratory We, the contentious We, the collective We. On this inaugural day, as the seas rise, the drones fly, the tweets storm, and a reality TV star ascends to the heights of world power, the Infernal Machine returns.  Just as all that is solid is melting into air and all that is sacred is profaned (yet again but differently), we want to face the real conditions of our life together, again.

And these conditions are new: Our politics, our institutions, our reality have been eroded by a techno-enabled cynicism and a vociferous optimism peddled from Silicon Valley to Washington, DC. Our media channels, often filled with noisy disinformation, have come close to overwhelming all hope for truth and a common good. Technology has become both a demon and a god, oppressor and savior, post-human and super-human.

Less than a year ago, many of us were decrying technocracy and neoliberal automation—the routinization of decisions by distant experts. And now? We have been compelled, by the Twitter-assisted success of the newly inaugurated president, to defend expertise, even as we know that it is not enough. But a few moons ago, we were skeptics when it came to data, statistics, and polls. Now social science, not to mention science, is under attack by those who believe that “to tweet it is to prove it.” A year ago we thought that the great showdown of the decade would be between the state and Silicon Valley. Now we see their collusion, willed or not.

So we are at our own inauguration point, our own auspicious beginning, full of omens.

During the age of high technology—think mid-twentieth-century broadcast media and rust-belt production lines—entropy, the erosion of order, represented the great problem of the age. “Information = entropy” became the rallying cry of a new coterie of scientists, engineers, and poets, the basis for a celebration of new media channels for a potentially limitless proliferation of communication.

And then the science fiction of that earlier pretense turned cyberpunk; the Internet promised freedom and gave us something more complex. And with the production of unprecedented quantities of digital data, the virtual world got Big. The prospect of our own technological age is now tipping points and system failures that threaten sudden catastrophe. Experts, yes, but what happens when a politics far outside the boundaries of the Old Media and the Old Discourse—including what threatens to become Old Democracy—emerges?

During the age of high technology, media could be left to their own devices. Signals, senders, receivers, and noise constituted an engineering schema that could be bought, regulated, and directed by the relatively few for advertising to, informing, and entertaining the many. The media was both an institution and a fantasy of power: a towering professional enclave that sent signals to the receivers of a polis of citizens, couch potatoes, airport-bound travelers, and runners on treadmills.

But today “the media” has no towering centers: Media are on our wrists, in our hearts, on our streets, and in space. The center of media production is no longer New York, Hollywood, or the editorial room at the local newspaper, but Facebook and would-be Facebooks (Twitter, Snapchat, Reddit, and so on). But these new media types don’t edit. Their only norm is unregulated use for the sake of unlimited profit. And so a form of parallel meme-processing is the underbelly of what once was decried as “merely the news” by thinkers from Nietzsche to Neil Postman.

During the age of high technology the academic study of media developed its own high towers and professional enclaves: communications; radio, film, and television; cinema.  It also included courses from journalism, speech communication, economics, business, and literature. Each operated on its own frequency. Technology studies, meanwhile, built an edifice (rather plain and drab at first, until a Gothic renovation by a Frenchman, Bruno Latour, with a penchant for networks, actants, and jokes). If the age of high technology yielded a change in the categories, such that agency was distributed and binaries upended (a “general cyborg condition,” as Donna Haraway put it), then what does the fast-advancing Digital Era call for? What philosophy will grasp this history?

A chorus on the Left decries the “fading of fact,” as though we had not attached media and rhetoric to the disappearance of fact for half a century—or since Plato. How can our self-proclaimed sophisticates have failed to see this continent of intellectual energy emerging outside their media, yet on the platforms those media share? How can those trained to think of Enlightenment as having the darkest of sides, a necessary backlash in its very heart, be so naively surprised by this predictable development?

And so, on this inauguration day, we dedicate this platform to finding those positions, to developing the techniques, and to finding the pressure points in our media and rhetoric that will let us make sense of our new conditions, technological and political, and articulate commonalities and goals.

It’s therefore time we collect and meet in a common, contested, conflicted, complex field that we variously call research, criticism, scholarship, philosophy, or science. Let us inaugurate a collective, a collective that might form a community, but that cannot and should not be an academic “discipline,” inasmuch as it is academic but undisciplined thinking that we need.

It’s time to collect, and to be collected. Let us be philosophical, whimsical, constructive, critical, and confused. But let us collect, and be collected. Let us write proverbs, poetry, commentary, essays, explorations, and maybe even code. But let us collect, and be collected.  Let us listen, learn, make notes, draw connections, and consider diagrams. But let us collect, and be collected, with ecumenical means in search of effective voice.

We—this “We,” too, is a complex field—invite a world of scholars, computer scientists, thinkers, programmers, poets, and priests to join us. This platform is a position, but a position that will change through collection and collation. We will make a database and channel of what is to be done. And that must remain an open question, our question, for as long as we have energy and affordance to answer to it.

. . . . . . . .


Apple’s Fight with the FBI: A Follow Up

Cracked iPhone. Camron Flanders via Flickr.

In the end, the Apple-FBI dispute was solved when the FBI cracked Apple’s security—without assistance. This is great for the FBI, but terrible for Apple, which now has, as the New York Times reports, an image problem. “Apple is a business, and it has to earn the trust of its customers,” says one security company executive in the Times. “It needs to be perceived as having something that can fix this vulnerability as soon as possible.”

In taking on the FBI in the San Bernardino case, Apple, it seems, had hoped to create the perception of an absolute commitment to security. Creating an iPhone that not even the state could crack was important to Apple’s image in a post-Snowden era. No doubt Apple must have marketing data that suggests as much.

But now, everybody knows Apple’s “security” can be breached, with or without the help of Apple’s engineers. If the FBI had deliberately picked a public fight with Apple (which nothing suggests it did), it could hardly have orchestrated a better response to Apple’s refusal to cooperate with the San Bernardino investigation: The FBI got what it wanted while undermining the very claim on which Apple staked its case in the court of public opinion, leaving Apple frantically trying to figure out how the bureau did it.

Of course, as the security executive says, Apple is a business. Still, in an age of complaints about corporate profits taking precedence over the needs of civic life, I continue to be mystified by Apple’s stance, which—whatever the company’s claims—makes sense only as a strategy to maintain or further maximize its profits. In this case, Apple has shown little regard for that on which the relative security of a society actually depends: legitimate forensic work, due process, and the state’s (yes, the state’s, which, unlike corporations or private security firms, is publicly accountable) capacity to gauge future threats and reasonably intervene within the confines of the law. Yet “security” is to Apple a marketing problem, not a civic problem.

As I stated in my earlier, longer, and admittedly more thoughtful post about this matter, I think that Apple could have cooperated in this particular case, as it had done in past cases, with relatively little harm to the company’s reputation and with real forensic good being done. Of course, cooperation would have meant that the only wall between your iPhone and the FBI would have been the law itself, but isn’t that the whole point of liberal societies? Lex Rex—law over all, including the FBI, and including Apple’s image.

. . . . . . . .


The Public, the Private, and Apple’s Fight with the FBI

Apple CEO Tim Cook (2012). Mike Deerkoski via Flickr.

Apple is resisting the FBI’s request that the company write software to help unlock the iPhone of Syed Rizwan Farook, the perpetrator, with Tashfeen Malik, of the massacre in San Bernardino, California, on December 2, 2015. Apple is said to worry that if it lets the FBI into Farook’s phone, it will open a global can of worms and set a precedent for doing the same thing for less “friendly” governments. And a “back door” to individual phone data will compromise overall security, leaving phones vulnerable, in Tim Cook’s words, to “hackers and criminals who want to access it, steal it, and use it without our knowledge or permission.”

Since the appearance of the Snowden documents, it’s hard for many of us, at least on the level of sentiment, to root for the US government wanting access to phone data. Though the case is complex (and Apple has unlocked phones for the FBI before), the surveillance state is a remarkably frightening prospect, and even the very targeted, essentially forensic, aims of the FBI in the San Bernardino case understandably evoke worries.

But Apple’s battle with the FBI brings to mind Bob Dylan’s quip that “you’re gonna have to serve somebody.” We face something like the classic high-school English class choice between Orwell’s “Big Brother” and Huxley’s “Brave New World.” If the FBI concerns us, Apple should, perhaps, concern us even more.

As Hannah Arendt makes clear in The Human Condition, privacy never stands alone: It always has its co-dependents—especially, the public, the political, and the social. Changes in the meaning of “privacy” mean changes in the meaning of the “public,” and the other way around. The private and the public are interlocking political concerns.

In other words, whenever you are faced with a debate about privacy, also ask what the implications of the debate’s potential outcomes are for public life.

. . . . . . . .


Beyond the Reveal: Toward Other Hermeneutics

Part III: Toward Other Hermeneutics

I want to make clear here that I believe we need to keep pushing for new research—new policies and practices that help ensure just algorithmic processes at work inside our infrastructures. (See posts one and two of “Beyond the Reveal.”) If our search engines, pricing structures, law enforcement or trade practices depend on or enact unlawful, unethical, or unjust algorithmic processes, we need to have ways of stopping them. We need accountability for these processes, and in some cases that will also mean we need transparency.

But, as urban studies scholar Dietmar Offenhuber points out in Accountability Technologies, accountability isn’t inextricably linked to transparency. In fact, some forms of revelation about opaque processes may do more harm than good to the public. If we make information access a priority over “answerability and enforcement” when it comes to just algorithmic infrastructures, Offenhuber warns, we may not achieve our goals.

So there may be times when “opening the box” might not be the best path to dealing with the possibility of unjust systems. And it is almost certainly the case that our black box metaphors aren’t helping us much in research or advocacy when it comes to charting alternatives.

In my own collaborative work on a Facebook user study, my co-authors and I focused primarily on a question directed to users: “Did you know there’s a black box here, and what do you think it’s doing?” The results of this study have set us on a path to at least learning more about how people make sense of these experiences. But in some ways, our work stands to get stuck on the “reveal,” the first encounter with the existence of a black box. Such reveals are appealing for scholars, artists, and activists—we sometimes like nothing better than to pull back a curtain. But because of our collective habit of establishing new systems to extricate ourselves from old ones, that reveal can set us on a path away from deliberative and deliberate shared social spaces that support our fullest goals for human flourishing.

I confess that at this point, I bring more cautions about black box hermeneutics than I bring alternatives. I’ll conclude this post by at least pointing to a path forward and demonstrating one possible angle of approach.

My critique of black box metaphors so far leads me to the following questions about our work with technologies:

  1. How else might we deal with the unknown, the obscured or opaque besides “revealing” it?
  2. Do we have to think of ourselves as outside a system in order to find agency in relation to that system?
  3. Can interface serve to facilitate an experience that is more than cognitive, and a consciousness not ordered by the computational?

As Beth Nowviskie pointed out in a response to this post in lecture form, we already have at least one rich set of practices for addressing these questions: that of interpretive archival research. Are not the processes by which a corpus of documents comes to exist in an archive as opaque as any internet search ranking algorithm? Isn’t part of the scholar’s job to account for that process as she interprets the texts, establishing the meaning of such texts in light of their corporeal life? And aren’t multiple sensoria at work in such a process, only some of which are anticipated by the systems of storage and retrieval at hand? Understood as “paper machines” and technologies in their own right, epistles, chapbooks, encyclopedias, and libraries—and the histories of how scholars and readers built their lives around them—certainly have much to offer our struggles to live with unknown algorithms.

We might also, however, look to the realms of art, design, and play for some productive alternatives. Take, for example, the latest black box to take techno-consumption by storm—the Apple Watch. This object’s use is almost certainly headed in the direction of integration into users’ lives as a facilitator of new daily routines and systems, especially by the quantified-self set. Other writers on this blog have already helpfully set the new box in the context of its precedent in meditative practices or contemporary tech labor. But as we work to understand how the new systems involve us in new, opaque processes, a glance at some more intentionally opaque neighbors might be of help. In my next post, I’ll set a few recent objects and experiences next to the Apple Watch to compare how they invite distinct incorporation into the rhythms of daily attention, thought, and action.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.

. . . . . . . .


Beyond the Reveal: A Metaphor’s Effect

In my last post, I described how the black box emerges historically with the extrication of (at least some) laborers from the machines of industrial labor. The cost of this move is that the laborer, now outside the machine as an operator, must herself operate as a black box. The interface between the laborer and machine becomes central to this new relationship, especially as managers and technologists focus on how constantly to reconfigure the interactions between and among human-machine pairs.

In recounting this history of a metaphor, I aim toward a critique of how black box metaphors are used today to describe opaque technological processes. And I don’t mean to suggest that any use of a black box metaphor inadvertently invokes a whole history of labor and interface. But I do think we can surmise from this history a dominant narrative that draws heavily from the black box metaphor:

  1. As an “infrastructural inversion,” the black box metaphor creates the possibility, for some, of imagining themselves as outside a system that formerly may not have been visible at all.
  2. Where and when this happens, interfaces emerge and gain prominence as a point of mediation with the formerly invisible system.
  3. Design for interaction between the user and the “black boxed” process tends to imagine the human mind as another form of black box, emphasizing cognitive over manual processes.
  4. The new system composed of this user and her machine then starts the process anew—the user/worker has been incorporated into a new system that she may not actually see unless she names a new “black box.”
  5. This narrative will also depend on the exclusion of some who need to “stay behind” and keep the system going within the “old” forms of labor.

To describe a process as a black box thus potentially sets in motion a whole series of implications for sensation, knowledge, labor, and social organization.

Let’s look at this, for example, in light of new attention brought to the role of algorithms in Facebook use (an effort in which I have been involved as a scholar). How does describing the Facebook algorithm as a black box set us on a particular narrative of analysis and research?

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newfound potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

But maybe this grasp toward agency is also the beginning of a new system. The black box metaphor suggests that providers will also need to design for the user who tweaks. (It may even be that designing for the tweaker is more profitable than designing a “perfect feed.”) As in previous ergonomic problems, this process will begin to imagine and construct a particular kind of mind, a particular kind of body, a particular kind of user. Tweaking to account for black-boxed algorithmic processes could become a new form of labor, one that some might then inevitably describe as its own black box, and one to escape.

Maybe, by structuring our engagement with the experience of Facebook’s opaque processes through the black box metaphor, we’ve set ourselves up to construct a new black box, and ignored the ways in which our relations to others, within and without the present system, have been changed by our newfound awareness.

I’m struck here, for example, by how well the narrative of the black box I’ve described here fits a number of stories we’ve lived and heard regarding privacy and networked media. Whether it’s the Snowden revelations or Facebook’s unauthorized emotion study, the story often plays out the same way for many of us. We realize or remember anew just how much work we’re providing some entity within a current system, and then proceed to either alter our use patterns or abstain altogether from that system in order to remain outside that work. Debates ensue over who is complicit and who is not, and with the exception of those working in a more organized fashion to enact prosecution or new laws, most of us are stuck in an “opt-in or opt-out” scenario that never goes anywhere.

It’s likely only a matter of time before the market for more subtle responses than “opt-in or opt-out” is met with a new set of black box systems. One can imagine, for example, a range of services: free email if you submit to full surveillance and data-trolling, modestly-priced email if you submit your data for use via an anonymizer, or premium email at high costs that removes you from all data-harvesting.

Perhaps, even as we remain justifiably critical of the unseen and unknown software processes that govern and regulate a growing number of shared spaces and subjectivities, we might search for another way to live with these processes than hitting the escape button and entering a higher-level routine. More on that in my next posts.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.

. . . . . . . .
