
79 Theses on Technology:
Of Techniques and “Technology”


Anatomy of a Blogger, after Diderot’s Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers by Mike Licht via flickr

Editor’s Note: Earlier in the spring, Alan Jacobs drew up his 79 Theses on Technology, a provocative document that has attracted much commentary from our readers. John Durham Peters joins the fray here, commenting on Theses 64 through 70.

64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.

65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.

66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.

67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.

68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.

69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.

70. The always-connected forget the pleasures of disconnection, then become impervious to them.

No doubt, writing is an intensely physical bio-mechanical activity. The back hurts, the neck cranes, the eyes sting, the head aches, the view out the window is consulted for the thousandth time. The inscription of words exacts a tax of muscular and nervous exertion. And no doubt, the most minute choices in writing technique make worlds of difference. Nietzsche thought writing while seated a sin against the Holy Ghost: only in strolling did words have for him truth.

But let us not confuse technology and technique. Technology once meant the study of the productive arts and sciences (as in the Massachusetts Institute of Technology); now, the term has been inflated not only into material devices of all kinds but also into a gas-bag for intellectuals to punch. Techniques are humble acts we do with hands, voices, eyes, feet, spine, and other embodied parts that bring forth mind into the world. We humans never do anything without technique, so we shouldn’t pretend there is any ontological difference between writing by hand, keyboarding, and speaking, or that one of them is more original or pure than the others. We are technical all the way down in body and mind.

The age of ubiquitous computing has yielded, among other things, a florid genre of opt-out narratives, and I hope I do not espy in these theses another such tendency. Only by the orchestration of technologies can you catch a glimpse of a technology-free world. The more intensely made our environment is, the more actively its designers supply us with shock absorbers. The default images for the background of my desktop computer are all resolutely pastoral—not a sign of infrastructure, globalization, coltan, carbon, or human labor among them. I find tulips, a rising moon, cloudscapes, seascapes, and windblown desert sands, but no data, email, calendars, or bills, and certainly no human presence. Just how did this blue flower happen to sprout amid all the silicon? With heartfelt pleas that I “just have to watch,” my students send me YouTube videos that explain why we need to unplug, go outside, and seek real human contact. If you listen to the machine telling you how to get out of it, you only get sucked into it more, like a con artist who lulls you into a sense of trust by telling you that he is conning you. The promised liberation from technology is usually just another technology that you don’t recognize as such. This is one reason why a fuller appreciation of our diverse techniques is so vital.

Tools are all we have, but each one sets us in a very different horizon. Technology only risks being an idol because we don’t appreciate our techniques well enough. Writing with two hands on a keyboard, dictating to a person or a machine, writing with chalk, quill, pencil, or pen—each embodies mind in a different way. Blessed be the back pain, as it reminds us that we are not immaterial beings flying through cyberspace.

I don’t understand the term “simplest” applied to a tool. Tools interact with mind and body. Compass and square could build Gothic cathedrals. Piano and notepaper could yield symphonies. The more basic the tool, the harder it is to master. Who among us has yet learned how to speak, or walk, or think? The real challenges lie in the most basic acts. Some day, I’d like to write a really good sentence. Some day, I’d like to play a beautiful scale in C major. Some day, I’d like to say the right word to another person. The more basic the task, the more fundamental the challenge and the more difficult the tool.

John Durham Peters is the A. Craig Baird Professor of Communication Studies at the University of Iowa. His most recent book, The Marvelous Clouds: Toward a Philosophy of Elemental Media, has just been released by the University of Chicago Press.


79 Theses on Technology:
Things That Want—A Second Reply to Alan Jacobs

I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.

But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” Such is for Jacobs the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.

I think my differences with Jacobs take three directions: one rhetorical, another ontological, and a third ethical. I will discuss each briefly here.

To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account of affordances, James Gibson described them as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.

But intentionality as I am calling it here goes beyond the artifacts themselves, to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those broader practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced, only becoming explicit when they are taken away.

So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.

Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustrations you feel when the plumbing goes awry. This frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.

Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence, the issue of voluntarism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be a product of our will? What do we do with animals, especially pets? What do we do with the colloquial expression, “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.

Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.

I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.

It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our life—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, some intending human.

To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).

So far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—it is only by coming to grips with the profound and active power of things that we best recognize that resistance to technology is, as Jacobs correctly argues, a cultural project, not a merely personal one, let alone primarily a definitional one.

So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and ask what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again, of course technologies want!


79 Theses on Technology:
The Hand That Holds the Smartphone


Alan Jacobs poses a few questions to his readers: “What must I pay attention to?” “What may I pay attention to?” and “What must I refuse attention to?” These questions direct readers to understand their own positions in the world in terms of attention. They encourage reflection. Instead of directing the reader’s focus outward to ponder general, more abstract relations between “technology” and “society,” they even return us to our own bodies and suggest that the hand that swipes the iPhone, your hand, deserves attention.

Jacobs formulates only two other theses as questions (#9, #60), and both are posed from a seemingly universal standpoint without a social location or even an implied interlocutor. However, some of Jacobs’s concerns about the current unhappy union with our attention-demanding devices seem to emerge from a specific social location. While these concerns may ring true for a large segment of higher-income, well-educated adults, who do in fact own smartphones in greater numbers than the rest of the US population, they may fall short of describing the experiences of many other users.

For example, #70: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Who are the “always-connected”? The McDonald’s worker whose algorithmically determined shifts are apt to change with less than half a day’s notice? Or one of the 10% of Americans who rely on their smartphones to access the Internet to do their banking, look for a job, and let their child do homework?

People who rely on their smartphones for Internet access are more likely to be young, low-income, and non-white, the same population with some of the highest levels of unemployment. With the migration of most job-seeking to online databases and applications, not all members of the “always-connected” will experience the “pleasures of disconnection” in the same way as the middle-class knowledge worker with high-speed Internet access at home and at work. In reality, the “always-connected” is a large and diverse group, and is quickly becoming even larger and even more diverse.

Your hand isn’t the only hand that comes into contact with your phone, of course; it is only the last in a long chain of designers, manufacturing workers, and marketing gurus. Jacobs points this out in the case of algorithms (Thesis #54, “The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.”), but it bears extending this line of thinking to other theses about the ideologies that run through contemporary discourse on technology.

Consider Thesis #41, “The agency that in the 1970s philosophers and theorists ascribed to language is now being ascribed to technology” and #44, “We try to give power to our idols so as to be absolved of the responsibilities of human agency”—who are the agents in these theses? Who is doing the ascribing? Who seeks absolution?

Kevin Kelly, the author Jacobs points to as a prime example of techno-enthusiasm, was a founding editor of Wired and has spent a lot of time talking to technology executives over the past several decades. Kelly’s ideas have often been translated into marketing strategies that soon enter into the public consciousness—like the sumptuously edited commercial for the Apple Watch in which the watch operates entirely of its own accord, no human required!—where they shape our desires and understandings of our relationships with our devices.

It’s through the image of a series of hands grasping, texting, and swiping away that my attention is drawn to the people at the other end of the technologies that shape our lives. As Jacobs points out, technology doesn’t want anything, “we want, with technology as our instrument,” but the question of who we are isn’t just idle sociological speculation. It’s vital to imagining alternative arrangements of both people and technology, as well as more humane practices that may benefit us all.

Julia Ticona is a doctoral candidate in the sociology department at the University of Virginia and a dissertation fellow at the Institute for Advanced Studies in Culture. Her work focuses on the cultures of technology and everyday life.

Photo: Anatomical study of hands, public domain.

 


Algorithms Who Art in Apps, Hallowed Be Thy Code

 

If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at the Georgia Institute of Technology, proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any.”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” permitting the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, his use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological, hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other–as something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. Heaven, which had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how the lines between technology, sometimes reckoned the most coldly rational of human projects, and religion are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post-Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 


The Public and Private, Once Again

Three surveillance cameras, Wikimedia Commons

In the wake of the Charlie Hebdo attacks, a political fire that has been burning for a long time is turning into a firestorm. Recently, British Prime Minister David Cameron called for tech companies to provide government security services with encryption keys to ensure that government authorities may legally access an individual’s data when warranted. The concern, now publicly shared by President Obama, is that terrorists are using the new encryption technologies being developed by companies like Apple, Google, WhatsApp, and Snapchat, especially “end-to-end” encryption, which “makes it nearly impossible for anyone to read users’ messages—even the company itself.”
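That phrase, “end-to-end,” is doing real work here, and a minimal sketch makes it concrete. The example below uses Python with the PyNaCl library; the names, keys, and message are invented for illustration, and this is a toy of the general scheme rather than the actual protocol of WhatsApp, Apple, or anyone else. The point is only that the private keys live on the users’ devices, so the service relaying the message handles nothing but ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The relaying service stores and forwards only `ciphertext`. Without one of the
# private keys, it cannot recover the plaintext, and neither can anyone who
# compels or compromises the service.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

A “back door” of the kind Cameron and Obama have floated would mean giving some third party a way around this arrangement, for instance by escrowing keys or weakening the scheme.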

And so, as The Economist has recently stated in an editorial about the matter, we are confronted with the age-old dilemma of “liberty vs. security, once again,” or, more precisely, “privacy vs. security.”

There are a host of legal, technological, political, and, perhaps above all, economic issues at play here. I do not claim to know precisely how one balances liberty with security, let alone balancing liberty with the tech companies’ push for profit maximization or governments’ desire to save face in the wake of acts of terror. But I do think that the scales are already set to fall off—that is, that these debates are taking place against a background of assumptions about privacy that are themselves problematic.

In calling privacy a right, we tend to do more than assert the necessity for its legal protection. We tend to carry with our idea of the right to privacy the metaphor of private space, even private property. Privacy as that which is bounded, set off from that which is public. Hence we have our private life and our public life, our private opinion and our public statements, our private information and our public profile, etc. In this very common way of thinking about things, the private and the public are two distinct realms, and the right to privacy is the guarantee of a wall around our private realm.

The privacy vs. security dilemma is embedded in this way of thinking: It has to do with when it is legitimately permissible for the government to break down the wall of privacy for the sake of security. It is a version of the broader dilemma of liberty within the quasi-utilitarian liberalism that underlies our assumptions about privacy. We are to be free, so long as we do not interfere with the freedom of others; but when we do so interfere, the state has the right to encroach on our freedom, indeed even on our privacy, in the name of preserving maximum freedom for the greatest number.

Indeed, in recent rebuttals by libertarians, some liberals, and the tech industry to the call by Cameron and Obama for preserving a “back door” option by which to access user data, we see the greatest good for the greatest number argument used on behalf of super-encryption: Back doors, Cameron’s critics argue, can and will be used by the bad guys (criminals, hackers, the Russians, the Chinese) as well as the good guys, and the damage done by the bad guys could well be catastrophic. As Harvard’s Margo Seltzer recently said in The Financial Times,

If bad guys who are breaking laws cannot use encryption, they will find another way. It is an arms race and if governments say you cannot do this, that means the good guys can’t and the bad guys can. End-to-end encryption is the way to go.

Protecting privacy as an inviolable right, more sophisticated arguments go, is not only consistent with liberal societies, but also the most effective means of security—even if it means terrorists can communicate with little fear of being detected. It’s often assumed here that an absolute right to privacy will neatly reconcile itself with, even produce, the greatest good for the greatest number (although the privacy of one’s data from the tech companies themselves is more penetrable).

I think the super-encryption efforts of tech companies are socially and politically problematic. I think they are the wrong solution addressing the wrong problem. But in arguing so I am not interested in hypothetical calculations of the greatest good for the greatest number. Rather, I simply want to start with the manifest relationship of the private to the public. How do things work with respect to the private and the public?

Rather than starting with the regnant bugaboo, terrorism, let’s think about political corruption. Do politicians have an absolute right to the privacy of their deliberations and communications about public matters? Does the Speaker of the House, or the President, have an absolute right to the full and permanent protection of behind-the-scenes communications about matters of public consequence? If Legislator A and Donor K used WhatsApp to work out a deal for campaign donations in exchange for sponsoring legislation in the House of Representatives, would we, as citizens, accept the records of those conversations as being forever and irredeemably private, such that we simply could not ever access them?

I suspect that most of us, once we stop to think about it, would not be too comfortable with this already real-life scenario. What if the messages concerned bribes, threats, or other forms of back room dealings? What if the President told the Speaker things that the latter was not authorized to know? What if the CEO of Company X was privy to the messages, too? Or what if the Speaker sent the President the CEO’s messages without the CEO’s knowledge? This is the stuff of scandal and corruption, and these are each instances where communications, though “private,” indeed have public importance. The public would have a right to know about them.

This is not because we are willing to “sacrifice” privacy for the integrity of our political system; it is not a version of “liberty vs. security, once again.” Rather this is because, even with the high premium we put on the right to privacy, we understand that the private stands in a flexible, dialectical, and dependent relationship with the public: When private acts have direct public consequences, they are not strictly private—they can be called to public account.

This is the case whether we are talking about political corruption or communication among persons who would commit acts of terror. More important, in calling private acts to public account, we are not breaking down the wall of privacy; rather, we are simply walking through the door from the private to the public the reverse way, so to speak. An exchange between the private and the public has already taken place. We are but re-tracing it.

What I find particularly troubling about the unbreachable encryption efforts of Apple, Google, and others is that they technologically (or, more properly, mathematically) prevent this kind of reverse traffic in the name of the public good. Rather, in the name of “privacy”—and, let’s be honest, in the name of corporate profits—tech companies are creating, in effect, not so much inviolable walls around privacy but something more like trap doors from the private to the public that can be gone through only one way. In such a scenario, it is only the public that will suffer.

The genuine political worry articulated by super-encryption is the fear of Big Brother. As Wired writes of WhatsApp founder Jan Koum,

Growing up in Soviet Ukraine in the 1980s, WhatsApp founder Jan Koum learned to distrust the government and detest its surveillance. After he emigrated to the U.S. and created his ultra-popular messaging system decades later, he vowed that WhatsApp would never make eavesdropping easy for anyone. Now, WhatsApp is following through on that anti-snooping promise at an unprecedented scale.

But the United States and the United Kingdom are not the Soviet Union, and while both governments have participated aggressively in very troubling illegal, large-scale dragnet-like surveillance in the last decade, we have not seen a corresponding development of a police state working in tandem with the data collection agencies. To the contrary, the greatest problem faced by American and British citizens is that of government secrecy, which has provided cover for illegal and otherwise questionable state surveillance programs, together with the cultural problem seen in repeated demands from politicians that intelligence agencies unfailingly connect the dots prior to a terrorist attack, or be held culpable when they do not. This cultivates a culture of self-preservation in intelligence communities, encourages them always to lean toward more aggressive actions rather than less aggressive ones, and opens the door to all sorts of government contractors promising infallible technological fixes for what are, in the end, inherently political and social crises.

Encryption processes that simply block government surveillance outright are, in keeping with Silicon Valley’s longstanding delusion, just another supposed technological fix for what are political and cultural problems—whether the problem is the NSA or al-Qaeda and its affiliates. End-to-end encryption and its equivalents in no way address the real problems we face from a civil liberties perspective—government secrecy and the unrealistic expectations placed on counterterrorism agencies. Worse, encryption offers a false substitute for real solutions—something that is the moral equivalent of vigilante force when what we need is better government and law.

Ned O’Gorman is an associate professor of communication and Conrad Humanities Professorial Scholar at the University of Illinois at Urbana-Champaign. He is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

Editor’s Note: Ned O’Gorman is also a contributor to The Hedgehog Review’s Spring 2015 issue.


Algorithms Rule


The Internet killed expertise. Or so claims public-policy specialist Tom Nichols in a recent essay that laments the demise of a cultural fixture.

I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers—in other words, between those of any achievement in an area and those with none at all. By this, I do not mean the death of actual expertise, the knowledge of specific things that sets some people apart from others in various areas. There will always be doctors, lawyers, engineers, and other specialists in various fields. Rather, what I fear has died is any acknowledgment of expertise as anything that should alter our thoughts or change the way we live.

For Nichols, technologies such as Google and Wikipedia betoken a loss of epistemic authority, that is, what counts as authoritative knowledge in our digital age. What legitimates one form of knowledge over another? Which sources of knowledge are to be trusted? Which not? What practices and scholarly habits, techniques, and institutions render knowledge authoritative or worthy? In our digital age, these questions seem almost quaint, throwbacks to a less free and democratic age.

Nichols isn’t alone in divining the collapse of expertise and authority in our digital age. But other, more sanguine observers celebrate it and the liberating promises of digital technologies. “Neither the Internet nor the WWW,” writes Cathy Davidson, a professor at the City University of New York Graduate Center, “has a center, an authority, a hierarchy, or even much of a filter on the largest structural level.” With the advent of digitally supported learning, “conventional modes of authority break down.” Digital technologies will liberate us from the constraints of traditional forms of epistemic authority. There will be no filters in the digital future to come.

Davidson’s messianic hopes as well as Nichols’s cultural despair mistakenly suppose that there can somehow be a vacuum of epistemic authority. But, in truth, forms and functions of epistemic authority, be they the disciplinary order of the research university or Wikipedia’s fundamental principles or “Five Pillars,” are themselves filtering technologies, helping us to orient ourselves amid a surfeit of information. They help us discern and attend to what is worthwhile. Google searches point us in the direction of some resources and not others. Technologies are normative, evaluative structures to make information accessible, manageable, and, ultimately, meaningful. It is not a question, then, of the presence or absence of epistemic authority; it is about better or worse forms of epistemic authority. Expertise and cultural authority are still with us. But now it might be more spectral, embodied not in the university don but in the black-boxed algorithm.

If the Internet and the World Wide Web lack, as Davidson puts it, a “centralized authority” and a “filter,” they do so only on the most abstract level. Our daily interactions with the Web are made possible by a host of technological constraints and filters. People access and engage information through technologies that allow them to select, filter, and delimit. Web browsers, hyperlinks, blogs, online newspapers, and the computational algorithms of Facebook, Google, and financial institutions help us turn terabytes of data into something more scalable, that is, something that can be made useful to an embodied person. These now-ubiquitous technologies help us to sort, to Google a needle in the haystack—and in so doing, they have become central mediums for the way we experience the world.
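To see what it means to call these filters normative, consider a toy ranking function. This is a hypothetical Python sketch; the item fields and weights are invented rather than drawn from any real search or news-feed formula. Its only point is that what gets surfaced depends on judgments someone wrote into the code.

```python
# A toy content filter: the weights are editorial judgments baked into code.
# The fields ("popularity", "age_days") and the weights are invented for illustration.

def score(item, terms):
    relevance = sum(item["text"].lower().count(t) for t in terms)
    return 2.0 * relevance + 0.5 * item["popularity"] - item["age_days"] / 365.0

def rank(items, query):
    terms = query.lower().split()
    return sorted(items, key=lambda it: score(it, terms), reverse=True)

items = [
    {"text": "Expertise and epistemic authority online", "popularity": 120, "age_days": 800},
    {"text": "Authority, authority, authority!", "popularity": 5, "age_days": 30},
]

# Which result comes first depends entirely on the chosen weights: here the
# popular older piece outranks the more "relevant" new one.
print([it["text"] for it in rank(items, "authority")])
```

Double the weight on popularity, or penalize age more steeply, and a different world rises to the top; the filter’s quiet exercise of authority lies in exactly those choices.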

We are living in an age of algorithmic authority. Algorithms filter our music choices, track our purchasing decisions, find our airline tickets, and help us withdraw money from an ATM. They are ubiquitous. They are forming who we are and who we want to become. But we are only beginning to ask about our algorithmic selves. How can we learn, from the outside, about these algorithms and how they increasingly organize our very selves?

Authority hasn’t vanished. It has just assumed different, more latent forms. As Evgeny Morozov puts it,

The reason to fear Facebook and its ilk is not that they violate our privacy. It is that they define the parameters of the grey and mostly invisible technological infrastructure that shapes our identity.

We can’t free ourselves from our technologies; digital detoxes are palliative exercises. But we can try to get to know our new algorithmic selves.

Credit: Photo montage with algorithm and Caspar David Friedrich’s Wanderer Above the Sea of Fog (1818)
