
Beyond the Reveal: Toward Other Hermeneutics


Part III: Toward Other Hermeneutics

I want to make clear here that I believe we need to keep pushing for new research—new policies and practices that help ensure just algorithmic processes at work inside our infrastructures. (See posts one and two of “Beyond the Reveal.”) If our search engines, pricing structures, law enforcement or trade practices depend on or enact unlawful, unethical, or unjust algorithmic processes, we need to have ways of stopping them. We need accountability for these processes, and in some cases that will also mean we need transparency.

But, as urban studies scholar Dietmar Offenhuber points out in Accountability Technologies, accountability isn’t inextricably linked to transparency. In fact, some forms of revelation about opaque processes may do more harm than good to the public. If we make information access a priority over “answerability and enforcement” when it comes to just algorithmic infrastructures, Offenhuber warns, we may not achieve our goals.

So there may be times when “opening the box” might not be the best path to dealing with the possibility of unjust systems. And it is almost certainly the case that our black box metaphors aren’t helping us much in research or advocacy when it comes to charting alternatives.

In my own collaborative work on a Facebook user study, my co-authors and I focused primarily on a question directed to users: “Did you know there’s a black box here, and what do you think it’s doing?” The results of this study have set us on a path to at least learning more about how people make sense of these experiences. But in some ways, our work stands to get stuck on the “reveal,” the first encounter with the existence of a black box. Such reveals are appealing to scholars, artists, and activists—we sometimes like nothing better than to pull back a curtain. But because of our collective habit of establishing new systems to extricate ourselves from old ones, that reveal can set us on a path away from deliberative and deliberate shared social spaces that support our fullest goals for human flourishing.

I confess that at this point, I bring more cautions about black box hermeneutics than I bring alternatives. I’ll conclude this post by at least pointing to a path forward and demonstrating one possible angle of approach.

My critique of black box metaphors so far leads me to the following questions about our work with technologies:

  1. How else might we deal with the unknown, the obscured or opaque besides “revealing” it?
  2. Do we have to think of ourselves as outside a system in order to find agency in relation to that system?
  3. Can an interface serve to facilitate an experience that is more than cognitive, and a consciousness not ordered by the computational?

As Bethany Nowviskie pointed out in a response to this post in lecture form, we already have at least one rich set of practices for addressing these questions: that of interpretive archival research. Are not the processes by which a corpus of documents comes to exist in an archive as opaque as any internet search ranking algorithm? Isn’t part of the scholar’s job to account for that process as she interprets the texts, establishing the meaning of such texts in light of their corporeal life? And aren’t multiple sensoria at work in such a process, only some of which are anticipated by the systems of storage and retrieval at hand? Understood as “paper machines” and technologies in their own right, the histories of how scholars and readers built their lives around epistles, chapbooks, encyclopedias, and libraries certainly have much to offer our struggles to live with unknown algorithms.

We might also, however, look to the realms of art, design, and play for some productive alternatives. Take, for example, the latest black box to take techno-consumption by storm—the Apple Watch. This object is almost certainly headed toward integration into users’ lives as a facilitator of new daily routines and systems, especially among the quantified-self set. Other writers on this blog have already helpfully set the new box in the context of its precedents in meditative practices or contemporary tech labor. But as we work to understand how the new systems involve us in new, opaque processes, a glance at some more intentionally opaque neighbors might be of help. In my next post, I’ll set a few recent objects and experiences next to the Apple Watch to compare how they invite distinct incorporation into the rhythms of daily attention, thought, and action.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


Beyond the Reveal: Living with Black Boxes


Part One: Histories

Amidst growing attention and calls to action on the role of algorithms in our everyday lives, one idea recurs: “opening the black box.” In such analyses, the “black box” describes a process that happens in secret, for which we know only the inputs and outputs but not the steps that take place in between. How might this metaphor be structuring our approach to thinking about algorithms and their place in our lives, long before we get to the task of accounting for the social and political work of algorithmic systems?
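
To make the metaphor concrete, here is a minimal sketch (entirely hypothetical, not any real platform's code) of what a black box looks like from the outside: we can observe pairs of inputs and outputs, but the scoring steps in the middle stay hidden from us.

```python
# A toy "black box": callers see only what goes in and what comes out.
# The ranking logic inside is hidden (and here deliberately arbitrary).

def _hidden_score(post: dict) -> float:
    # Internal steps a user never sees: weights, thresholds, special cases.
    return 2.0 * post["likes"] + 0.5 * post["comments"] - 3.0 * post["age_hours"]

def rank_feed(posts: list) -> list:
    """The public surface: a list of posts goes in, an ordered list comes out."""
    return sorted(posts, key=_hidden_score, reverse=True)

feed = [
    {"id": "a", "likes": 10, "comments": 2, "age_hours": 1.0},
    {"id": "b", "likes": 50, "comments": 0, "age_hours": 8.0},
]
print([p["id"] for p in rank_feed(feed)])  # observable output; the "why" stays opaque
```

Knowing the inputs and the ordering that comes back tells us little about the weights inside, which is exactly the condition the metaphor names.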

In this first of four posts, I’ll begin an answer to this question by looking at the history of the “black box” as a way of modeling cognitive or computational processes. In the second post, I’ll offer some cautionary words about reliance on this metaphor in the important work of ensuring just systems. Finally, in the last two posts I’ll look to some alternatives to black-box-opening in our relationships to opaque technological processes.

The black box metaphor began to acquire its shape during changes in labor that took place after World War II. Whereas managers before the war had largely treated work as a series of learned behaviors, the designers of work and work environments after the war began to think less about suiting the laborer to the work, and more about suiting the work to the laborer.

More than a mere Taylorist repeater of actions, the new ideal worker of post-war Human Factors research not only acts but perceives, acting according to learned evaluative routines that correlate sensation to action. The ideal post-war laborer is not a person of a particular physical build, conditioned to perform particular motions, but rather a universalized collection of possible movements, curated and selected according to mathematical principles. Human Factors research turned the human laborer into a control for a system, a proper medium for the transfer and transformation of input.

Key to this new approach was the influence of information theory on approaches to both computing and psychology. In computing, the understanding of signals as information paved the way for a mathematics of binary code, in which the course of electrons through physical gates and switches could translate into algorithms and mathematical functions. In psychology, those who had grown weary of behaviorism’s stimulus-response approaches to explaining and modifying human action saw in Claude Shannon’s approach echoes of the structure of the human brain. These early cognitive scientists saw in thought a kind of algorithm performing consistent functions on ever-changing sense data, zipping through the brain’s neural pathways the way electrons travel through the copper of a computer’s circuits.

And so a new understanding of the operator’s actions emerged alongside a new understanding of a computer’s routines. The first software emerged at the same time that psychologists began to analyze human thought and memory as a collection of mathematical functions performed on sense data. In other words, the black box as we know it emerged as a pair of metaphors: one to describe the computational machine, and one to describe the human mind.

Before these developments, systems of manufacture and control were designed to include the human body as a “control” in the operational sense. The control in any function is a limiter, providing brackets for the acceptable inputs and possible outputs. If a laborer slows down his or her work, the entire process slows. In the new post-Taylorist workflow, in contrast, the control is performed by a computational process rather than an embodied human one. The new computers allowed for the programming of internal black boxes within the machine itself. Information from multiple sensors, as it coursed through these machines, would be analyzed and checked for deviation. The result of such analyses would set certain mechanical processes in motion in order to produce a desired end.
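
A minimal sketch may help fix the idea of a programmed control. The setpoint, tolerance, and corrective actions below are invented for illustration, not drawn from any historical system: sensor readings are checked against an acceptable range, and deviations trigger corrective action without a human in the loop.

```python
# Toy sketch of a programmed "control": sensor readings are checked for
# deviation from a setpoint, and a corrective action is triggered automatically.

SETPOINT = 100.0   # desired process value (e.g., a temperature)
TOLERANCE = 5.0    # acceptable deviation before the machine reacts

def check_and_correct(readings: list) -> list:
    actions = []
    for value in readings:
        deviation = value - SETPOINT
        if abs(deviation) <= TOLERANCE:
            actions.append("hold")                 # within brackets: do nothing
        elif deviation > 0:
            actions.append("open cooling valve")   # too high: correct downward
        else:
            actions.append("increase heater")      # too low: correct upward
    return actions

print(check_and_correct([98.0, 112.5, 91.0]))
```

The human operator described in the next paragraph watches over a loop like this one rather than performing it.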

Although the worker has been replaced by an algorithm as the system control, she or he is not missing from the scene entirely. Rather, the human operator now performs the function of a control for the control. The machine provides the human operator with indications that the software-based controller is functioning properly. Deviations from designated functions trigger new action from the human operator, according to more advanced algorithms than those required of previous industrial operators. This new human operator must synthesize multiple forms of data—visual, aural, even symbolic data—and then decide on a proper course of action, of input to the machine, according to a trained set of decision-making criteria and standards.

Though operating at a greater distance from the workings of the mechanical system, this new, error-detecting human operator plays no less critical a role. His or her mental routines must be just as carefully scripted and trained as the Taylorist laborer’s physical actions, often via an emerging understanding of the brain as a computer.

The new operator is thus less a part of the system even as he or she is made more in the image of that system. Formerly one organ within a mechanical body, the operator is now modeled as a discrete body, tethered to another, mechanical body and modeled after it, for the purposes of safe and consistent system flow. The machine and the operator mirror one another, with the interface as their crucial site of division, the glass of reflection and action.

These changes also effect sociality through the creation of a new entity that includes all agents. This new entity—the organization—invites design at a complex level that accounts for multiple machinic and human actors. Where each machine used to come with an operator as controller, the two treated as a single entity, the post-war machine comes with an operator as agent, who is necessary to the proper functioning of the machine. But the human operator is separate from the machine. For large-scale projects, this doubling results in increased complexity, which the organization approaches as yet another information-processing problem.

The organization, this plurality of entities, is coincident with the emergence of the interface. Machines and operators without true interfaces—as in Taylorist scenarios—are not collective in that they are not social. They are merely aggregate. Thus some of the biggest moves in computing research toward the latter half of the twentieth century were those that simultaneously addressed the interface between one operator and her machine, and the structure of all machine-human pairs, organized together into one system—one black box process.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


79 Theses on Technology:
Our Detachment From Technology


When reading Alan Jacobs’s 79 theses, three jumped out at me:

55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”

56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok

These theses suggest a single issue: We have become increasingly detached from our software, both in how it works and how it is built.

The algorithms involved in much of our software are each designed to accomplish some specific task. When an algorithm was a single snippet of code or a tiny computer program, it could be read, understood, debugged, and even improved. Similarly, computing once involved regular interactions at the level of the command line. There was little distance between the code and the user.
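
The post doesn't name an example, but something as small as Euclid's greatest-common-divisor routine conveys the scale in question: a handful of lines that can be read in full, traced by hand, and improved directly.

```python
# Euclid's algorithm for the greatest common divisor: small enough to read
# whole, trace on paper, and debug line by line.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until b reaches zero
    return a

print(gcd(48, 18))  # 6
```

Contrast that legibility with the layered, shielded software described next.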

Since the early era of command lines and prompts, software has become increasingly complex. It has also become increasingly shielded from the user. These are not necessarily bad changes. More sophisticated technology is more powerful and has greater functionality; giving it a simpler face prevents it from being overwhelming to use. We don’t need to enter huge numbers of commands or parameters to get something to work. We can just swipe our fingers and our intentions are intuited.

Thanks to these changes, however, each of us has become more distant from the inner workings of our machines. I’ve written elsewhere about how we must strive to become closer to our machines and bridge the gap between expert and user. This is difficult in our era of iPads and graphical interfaces, and often it doesn’t even seem that important. However, since these technologies affect so many parts of our lives, I think we need the possibility of closeness: We need gateways to understanding our machines better. In the absence of this proactive decision, our responses to our machines will tend to be driven by fear, veneration, and disdain.

This detachment from how algorithms and software operate has caused a gross misunderstanding of how technology works. We find technology far more inscrutable than it really is, forgetting that all of it was designed by fallible people. We respond to this inscrutable power by imputing to it a beauty and sophistication that is not there. (For more on this, see Ian Bogost and his observation that many people use the word “algorithm” in an almost religious manner.)

Veneration of the algorithm as something inordinately impressive is detrimental to our ability to engage with technology. Software is often incredibly kludgy and chaotic, far from worthy of worship. This response is not so far from fearing technology just because we can’t understand it. Fear and veneration are closely related, as both make algorithms out to be more than they are. (This is the subject of Jacobs’s Theses 55 and 56, though stated in rather more extreme form than I might be willing to put it.)

But what about disdain? How does this work? When a device suggests the wrong word or phrase in a text or sends delivery trucks on seemingly counterintuitive routes, we disdain the device and its algorithms. Their outputs seem so self-evidently wrong that we are often filled with a sense of superiority, mocking the algorithms’ shortcomings or dismissing them as superfluous.

Sometimes, our expertise does fall short and complex logic can seem like stupidity. But David Auerbach, writing in Nautilus, offered this wonderful story that shows that something else might be going on:

Deep Blue programmer Feng-Hsiung Hsu writes in his book Behind Deep Blue that during the match, outside analysts were divided over a mysterious move made by the program, thinking it either weak or obliquely strategic. Eventually, the programmers discovered that the move was simply the result of a bug that had caused the computer not to choose what it had actually calculated to be the best move—something that could have appeared as random play.

In this case, ignorance prevented observers from understanding what was going on.

Is complex logic indistinguishable from stupidity? I don’t think so. Our response to a process we don’t understand may be closer to the nervous laughter of ignorance than a feeling of superiority. We call these algorithms stupid not because we recognize some authentic algorithmic inadequacy in them. We call them stupid because to admit a certain humility in the face of their increasing complexity would be a display of weakness.

When I took an artificial intelligence course in college and learned the algorithms for programs such as playing board games or constructing plans, I didn’t feel superior—I felt a kind of sadness. I had seen behind the screen and found these processes sophisticated, but fairly mundane. Most complex technology is this way. But when each of us encounters a surprising and apparently stupid output, if we don’t understand its origins, it is a lot easier to mock the system than to feel humbled, or even disappointed, at discovering its true structure.

These responses to technology are not the everyday user’s fault. Many of the creators of these technologies want the user to attribute a certain power to these algorithms and so have protected them behind layers of complexity. Ultimately, I think the most appropriate response is intellectual humility in the face of technology from which we have become increasingly detached. Only then can we engage with algorithms and try to see, even if only a moment, what they are actually doing.

Samuel Arbesman is a Senior Adjunct Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado and a Visiting Scholar in Philosophy at the University of Kansas. Follow him on Twitter at @arbesman.


Algorithms Who Art in Apps, Hallowed Be Thy Code

 

If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at Georgia Institute of Technology, proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any.”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, Lanier’s use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other—as something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how the lines between technology, sometimes reckoned the most coldly rational of human projects, and religion are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post–Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ … a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post–)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 


Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”
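
The study's actual pipeline isn't reproduced here, but the shape of the analysis is easy to sketch with invented data and a crude word-count stand-in for sentiment (the paper itself relied on word-category counts over millions of posts): compare the positivity of what users posted after their feeds were skewed one way or the other.

```python
# Toy sketch of the contagion comparison: how positive are the posts written by
# users whose feeds were skewed positive vs. skewed negative?
# Hypothetical data and a crude "sentiment" score, for illustration only.

POSITIVE = {"great", "happy", "love"}
NEGATIVE = {"sad", "awful", "hate"}

def positivity(post: str) -> float:
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

def group_mean(posts: list) -> float:
    return sum(positivity(p) for p in posts) / len(posts)

saw_more_positive = ["love this great day", "happy news from home"]
saw_more_negative = ["awful commute again", "feeling sad and tired"]

print(group_mean(saw_more_positive) - group_mean(saw_more_negative))
# A positive difference is the pattern the authors report as "emotional contagion."
```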

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital businesses with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order better to influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Be they the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways and to ends of which we are completely unaware and which we have increasingly less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception: here and here.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight, they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.
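
A toy sketch may make that toggling concrete (the graph and names below are invented for illustration): macroanalysis summarizes the whole network in an aggregate figure, while close reading drops back down to a single node and its particular ties.

```python
# Toggling between "distant" and "close" views of the same small network:
# an aggregate statistic first, then a zoom-in on one individual node.
# The graph is invented for the example.

network = {
    "ana":  ["ben", "chen", "dev"],
    "ben":  ["ana", "chen"],
    "chen": ["ana", "ben", "dev", "eli"],
    "dev":  ["ana", "chen"],
    "eli":  ["chen"],
}

# Distant reading / macroanalysis: one number summarizing the whole structure.
average_degree = sum(len(ties) for ties in network.values()) / len(network)
print(f"average degree: {average_degree:.2f}")

# Close reading: return to a single node and its particular connections.
print("chen's connections:", network["chen"])
```

The point of the sketch is only the movement between the two views, which is the habit Weingart identifies as binding humanities scholars together.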

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable, natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.
