Tag Archives: media

Politics is Downstream from Culture, Part 1: Right Turn to Narrative


Our lives—indeed, our very species—has storytelling wound into our DNA. From the earliest cave drawings, man has expressed himself in terms of story. Ancient civilizations understood that stories are vital to understanding our place in the world, so much so that they codified storytelling and found base rules that form it. Oral histories are a part of every culture across the globe.

I’ll give you three guesses as to the author of this statement. In fact, I’ll give you thirty. It’s not Bill Moyers, and it’s not James Cameron, and it’s not some literature professor. It’s from Breitbart News. If you’re a member of the professional (or non-professional) humanities, that attribution should give you pause.

The quote, by Lawrence Meyers, appeared in a 2011 article headlined “Politics is Really Downstream from Culture.” It was an elaboration of Andrew Breitbart’s mantra, “politics is downstream from culture.” The slogan—a nice inverse of James Carville’s “It’s the economy, stupid!”—means what it says: Change the culture, change the government.

Now, six years later, national politics, we might say, is culture, and maybe even only culture. Steve Bannon, Breitbart’s successor, is not only in the White House but, for the time being at least, enjoys a front-row seat on the National Security Council. John McCain, concerned about the elevation of a civilian political strategist to chief advisor on foreign affairs, has called Bannon’s NSC role a “radical departure from any National Security Council in history.” But the concern should run deeper than the possibility of war becoming but another mode of dirty politics. It should include Bannon making international relations into little more than a good story. This sense of story, as something that captures the attention, immerses the reader or viewer, and manufactures a desired political attitude, is Bannon’s stock-in-trade. He’s explicit about his sources for his narrative techniques: “the Left,” conceived on a spectrum from Hollywood filmmakers to Lenin (whom Bannon has said he idolizes, with tongue pretty clearly in cheek).

Since he left Goldman Sachs in 1990, Bannon has been first and foremost a worker in the culture industry, a producer of stories. After helping negotiate the sale of Castle Rock Entertainment to Ted Turner, Bannon gained a stake in television shows like Seinfeld. He then got into his own brand of filmmaking, producing, among other works, a hagiography of Ronald Reagan, a celebration of Sarah Palin, an encomium to Duck Dynasty star Phil Robertson, and a self-explanatory exposé, “Occupy Unmasked.” After Andrew Breitbart died suddenly in 2012, Bannon took over Breitbart News and single-handedly retrofitted the fringiest part of the “Right Wing Conspiracy” into a slick, savvy, and at least partly fact-based operation. (At the same time, Bannon helped found the investigative research organization that produced Clinton Cash, the book that undermined the Democratic nominee long before anyone from Vermont got involved.)

In addition to left-leaning pop culture sources, Bannon has also borrowed techniques from the academic left, specifically from the humanities. That’s why it’s now possible to find quotes like the one I led off with above, where it’s hard to tell whether we’re reading literary theory or an article on Breitbart News.


Algorithms Who Art in Apps, Hallowed Be Thy Code


If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at Georgia Institute of Technology, proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, his use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other: something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how the lines between technology, sometimes reckoned the most coldly rational of human projects, and religion are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ … a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 


The Wall Must Stand: Innovation at the New York Times

“In the future,” writes digital scholar and Twitter wit Ian Bogost, “all news will be about, rather than in, the New York Times.”

That future seemed to arrive last week, and not only with the controversy unleashed by the abrupt firing of executive editor Jill Abramson. Possibly as part of the storm, someone in the newsroom leaked a 96-page document with detailed proposals for bringing the Times more fully into the digital age and—even more important—making the Grey Lady more “Reader Experience”-friendly.


Nieman Journalism Lab’s Joshua Benton calls the report “one of the most remarkable documents I’ve seen in my years running the Lab.” He even tasked three of his staffers with excerpting highlights from it. But the whole thing merits reading by anyone interested in the possible (or inevitable?) future of the news.

Not that there is anything truly new or surprising on any page of “Innovation,” as the study is so grandiloquently titled. Put together by a six-member team led by Arthur Gregg Sulzberger, the publisher’s son and heir apparent, the report is a compendium of ideas, strategies, arguments, and veiled threats familiar to anyone who has worked in or around newsrooms during the last decade or so. From the title on, it buzzes with the kind of jargony nostrums that fuel TED Talks and South by Southwest conferences, from impact toolboxes and repackaging old content in new formats to making journalists their own content promoters and integrating the Reader Experience team with the newsroom to, well, anything else that helps counter the disruptive incursions of new media upstarts like Buzzfeed and Huffington Post.

And why not counter those disrupters? As the report frequently notes, the NYT produces consistently superior content but does an almost-as-consistently inferior job of getting that content out to its readers. In some cases, competitors are even more successful at distributing Times content than the Times itself. It makes little sense to remain satisfied with such a status quo.

But reading the report invites suspicion on at least two counts. The first is quite immediate: How is it possible that these objectives haven’t already been accomplished? (And, in fact, is it possible that many of them have, as some NYT insiders say, proving once again that many strategic studies merely confirm the direction in which the institution is already heading?) My incredulity arises from the facts presented in the report itself, namely the astonishing number of Times employees already dedicated to Reader Experience activities (which, as the report notes, “includes large segments of Design, Technology, Consumer Insight Group, R&D, and Product”).

The problem, explicitly remarked upon at several points in the report, appears to be turf wars. Some of this is simply silly, such as the exclusion of Reader Experience people from key editorial meetings or other instances of uncollegial shunning. But I suspect the problem also stems from something far less silly, indeed, from the most fundamental of political-institutional questions: Who, at the end of the day, will be in command of the combined newsroom and Reader Experience personnel? Will it be the editorial leadership or the business leadership?

That question can’t be finessed, fudged, blurred, or deferred. It must be answered forthrightly, because if it isn’t, the very purpose of a serious news organization becomes unclear.

And that leads to my second big concern. What is the real goal of “innovation” at the New York Times? Is it intended primarily to enable the editorial leaders to use and inculcate the best practices of distribution, with additional staff possessing advanced skills in those practices, in order to support and advance strong journalism? Or is it intended primarily to increase the number of Reader Experiences as measured through analytics and other metrics, at the expense, in the long or short run, of the highest quality of journalism? If the former, I am on board—who wouldn’t be?

Which is why the political question must be answered first. If not, and if the new and enhanced newsroom ends up being run by the business side, then decisions will be made that will slowly erode the quality of the journalistic content. If content packagers and social media community managers answer ultimately to publishing executives and not to editors, then they will be able to demand the kind of content—whimsical features, for example, rather than hard reporting—that tends to trend most strongly. The sad fact is that cute cat stories always sell better than revelations about city hall. The number of hits, likes, or visits will gradually but inevitably determine the editorial agenda.

Overstated? Simplistic? I don’t think so. When the ultimate purpose of a news organization is something that can be evaluated almost exclusively by metrics, then you can be sure you are no longer talking about a news organization. A media company, perhaps, but not a journalistic one.

The report calls for some breaching of the editorial-publishing (or church-state) firewall. That sounds suspect to me. What it should call for is the migration of some of those Reader Experience departments and personnel to the editorial side of the firewall. The wall itself must stand. Or the journalism will fall.

Jay Tolson is the Executive Editor of The Hedgehog Review.
