79 Theses on Technology:
The Spectrum of Attention

“We should evaluate our investments of attention,” Jacobs urges in Thesis #7, “at least as carefully and critically as our investments of money.” But we will be in a better position to undertake such an evaluation when we understand exactly what we are talking about when we talk about attention, which is a word that—despite its importance—is never defined by Jacobs in the 79 Theses.

It’s easy to assume that “attention” is experienced in the same way by everyone. But as Matthew Crawford’s recent work has argued, attention has been imagined, and thus experienced, differently over time. Attention names various states or activities that we might do well to distinguish.

We can define attention first as “intently focusing on one object or task.” Reading a long, demanding text is one example of this kind of attention. This sort of attention is the subject of Nicholas Carr’s Atlantic article, “Is Google Making Us Stupid?”: “Immersing myself in a book or a lengthy article used to be easy,” Carr notes, but now “my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text.”

I suspect many of us share Carr’s experience. Not unlike the Apostle Paul, we lament, “What I want to pay attention to, I cannot. What I do not want to pay attention to, to that I do.” This failure to direct our attention presents itself as a failure of the will, and it assumes at some level that I am, as an autonomous subject, responsible for this failure (for more on this point, I suggest Chad Wellmon’s exchange with Jacobs).

But sometimes we talk about attention in a slightly different way; we speak of it as openness to the world, without any particular focal point. Sometimes the language of presence is used to articulate this kind of attention: Are we living in the moment? It is also the sort of attention that is advocated by proponents of “mindfulness,” to which Jacobs devoted two theses:

11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.

13. The only mindfulness worth cultivating will be teleological through and through.

On the surface, the two ways of talking about attention that I’ve outlined contradict each other. Directed attention is inconceivable without an object (mental or material) to sustain it, but no object would appear apart from an already existing form of attention.

Much depends on what exactly is meant by “mindfulness,” but I think we might be able to preserve a valuable distinction while still heeding Jacobs’s critique. If “mindfulness” functions, for instance, as a clearing of mental space in order to make directed attention possible, then the telos of mindfulness would be directed attention itself.

Attention as Dance

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that solicits its attention. The analogy also raises a critical question: How ought we conceive of attention given that we are embodied creatures?

Maurice Merleau-Ponty can help us here. In Phenomenology of Perception, Merleau-Ponty discusses the shortcomings of both empiricist and intellectualist (rationalist) approaches to attention and makes the following observation: “Empiricism does not see that we need to know what we are looking for, otherwise we would not go looking for it; intellectualism does not see that we need to be ignorant of what we are looking for, or, again, we would not go looking for it.”

This simultaneous knowing and not-knowing seems to me another way of talking about attention as both openness to the world and as a directed work of the mind. It is a work of both receptivity, of perceiving the world as a gift, and care, of willfully and lovingly attending to particular aspects of the world. And, as Merleau-Ponty goes on to argue, attention is also a form of embodied perception that construes the world as much as it registers it. In this sense, our attention is never merely picking out items in the world (see Crawford on this idea); rather, attention is always interpreting the world in keeping with the desires and demands of an embodied being at a particular moment.

To a hiker on a long walk, for example, a stone is a thing to step around and is registered as such without conscious mental effort. It is attended to by the body in motion more than by the cogitating mind. To a geologist on a walk, on the other hand, a stone may become an object of urgent intellectual inquiry.

Both of these instances of perceiving-as result from subjective prior experience. The expert hiker moves along at a steady pace making countless adjustments and course corrections as a matter of bodily habit. The geologist, likewise, has trained his perception through hours of intellectual labor. In either situation, a novice might fail to hike as adroitly or notice the geologically interesting stone. Merleau-Ponty calls this repertoire of possible perceptions the “intentional arc,” which subtends “the life of consciousness—cognitive life, the life of desire or perceptual life.”

This example suggests two poles of attention, bodily and mental. But these are not mutually exclusive binaries. Rather, they constitute a spectrum of possibilities, running from the dominance of conscious mental activity at one end to the dominance of non-conscious bodily activity at the other. Consider the person lost deep in thought or a daydream. This person is deeply attentive, but not to his surroundings or to sensory information. Such a person would have to be called back to an awareness of his body and his surroundings.

By contrast, we may imagine the athlete, musician, or dancer who is, to borrow Mihály Csíkszentmihályi’s formulation, “in the flow.” Like the thinker or daydreamer, they, too, are in a state of deep attention, but in a different mode. Conscious thought would, in fact, disrupt their state of attention. We may complicate this picture even further by observing how the hiker “in the flow” might be lost in thought and remain an expert navigator of the terrain.

Attention Mediated Through Technology

But where does technology fit into our model? That is, after all, where we began and where Jacobs directs our attention. Perhaps there’s another spectrum intersecting with the one running from the bodily to the mental: one that runs from mediated to unmediated forms of attention.

Consider our hiker one more time. Imagine that she is now equipped with a walking stick. Aspects of her attending to the world through which she makes her way are now mediated by the walking stick. Of course, the walking stick is an apt tool for this particular context and extends the hiker’s perceptions in useful ways. (It would be very different, for instance, if the hiker were walking about with a garden hose.)

Imagine, however, giving the hiker a different tool: a smartphone. The smartphone mediates perception as well. In the act of taking a picture, for example, the landscape is seen through the lens. But a subtler act of mediation is at work even when the smartphone’s camera is not in use. Smartphone in hand, the hiker might now perceive the world as a field of possible images. This may, for example, direct attention up from the path toward the horizon, causing even our experienced hiker to stumble.

We may be tempted to say that the hiker is no longer paying attention, that the device has distracted her. But this is, at best, only partly true. The hiker is still paying attention. But her attention is of a very different sort than the “in the flow” attention of a hiker on the move. Without the smartphone in hand, the hiker might not stumble—but she might not notice a particularly striking vista either.

So along one axis, we range from bodily to mental forms of attention. Along the other, we range from mediated to unmediated forms of attention. (Granted that our attention is never, strictly speaking, absolutely unmediated.) This yields a range of possibilities among the following categories: “bodily mediated,” “bodily unmediated,” “mental mediated,” and “mental unmediated.” (Consider the following as ideal types in each case: the musician, the dancer, the scientist, and the philosopher.)

Figure: the spectrum of attention, ranging from bodily to mental and from mediated to unmediated forms.

How does conceiving of attention in this way help us?

This schema yields a series of questions we may ask as we seek to evaluate our investments of attention. What kind of attention is required in this context? To what aspects of the world does a device invite me to pay attention? Does a device or tool encourage mental forms of attention when the context is better suited to bodily forms of attention? Is a device or tool encouraging me to direct my attention, when attentive openness would be more useful? What device or tool would best help me deploy the kind of attention required by the task before me?

The result of this exploration has been to break up the opposition of device to attention, an opposition that, I should say, I don’t think Jacobs himself advocates. Instead, my hope is to expand our conceptual tool kit so that we might make better judgments regarding our devices and our attention to the world.

L.M. Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing.

Photo: Heinrich Vogeler, Sehnsucht (Träumerei), c.1900, via Wikimedia Commons, public domain

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


79 Theses on Technology:
Things That Want—A Second Reply to Alan Jacobs

I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.

But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” Such is for Jacobs the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.

I think my differences with Jacobs take three directions: one rhetorical, another ontological, and a third ethical. I will discuss them each a bit here.

To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account of affordances, James Gibson described them as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.

But intentionality as I am calling it here goes beyond the artifacts themselves, to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those broader practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced, only becoming explicit when they are taken away.

So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.

Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustrations you feel when the plumbing goes awry. This frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.

Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence, the issue of voluntarianism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be a product of our will? What do we do with animals, especially pets? What do we do with the colloquial expression, “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.

Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.

I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.

It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our life—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, some intending human.

To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).

So far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—it is only by coming to grips with the profound and active power of things that we best recognize that resistance to technology is, as Jacobs correctly argues, a cultural project, not a merely personal one, let alone primarily a definitional one.

So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and ask what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again, of course technologies want!


79 Theses on Technology:
The Hand That Holds the Smartphone


Alan Jacobs poses a few questions to his readers: “What must I pay attention to?” “What may I pay attention to?” and “What must I refuse attention to?” These questions direct readers to understand their own positions in the world in terms of attention. They encourage reflection. Instead of directing the reader’s focus outward to ponder general, more abstract relations between “technology” and “society,” they return us to our own bodies and suggest that the hand that swipes the iPhone, your hand, deserves attention.

Jacobs formulates only two other theses as questions (#9, #60), and both are posed from a seemingly universal standpoint without a social location or even an implied interlocutor. However, some of Jacobs’s concerns about the current unhappy union with our attention-demanding devices seem to emerge from a specific social location. While these concerns may ring true for a large segment of higher-income, well-educated adults, who do in fact own smartphones in greater numbers than the rest of the US population, they may fall short of describing the experiences of many other users.

Consider, for example, #70: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Who are the “always-connected”? The McDonald’s worker whose algorithmically determined shifts are apt to change with less than a half-day’s notice? Or one of the 10% of Americans who rely on their smartphones to access the Internet to do their banking, look for a job, and let their child do homework?

People who rely on their smartphones for Internet access are more likely to be young, low-income, and non-white, the same population with some of the highest levels of unemployment. With the migration of most job-seeking to online databases and applications, all members of the “always-connected” might not experience the “pleasures of disconnection” in the same way as the middle class knowledge worker with high-speed Internet access at home and at work. In reality, the “always-connected” is a large and diverse group, and is quickly becoming even larger and even more diverse.

Your hand isn’t the only hand that comes in contact with your phone, of course; it is merely the last set of hands in a long chain of designers, manufacturing workers, and marketing gurus. Jacobs points this out in the case of algorithms (Thesis #54, “The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.”), but it bears extending this line of thinking to other theses about the ideologies that run through contemporary discourse on technology.

Consider Thesis #41, “The agency that in the 1970s philosophers and theorists ascribed to language is now being ascribed to technology” and #44, “We try to give power to our idols so as to be absolved of the responsibilities of human agency”—who are the agents in these theses? Who is doing the ascribing? Who seeks absolution?

Kevin Kelly, the author Jacobs points to as a prime example of techno-enthusiasm, was a founding editor of Wired and has spent a lot of time talking to technology executives over the past several decades. Kelly’s ideas have often been translated into marketing strategies that soon enter into the public consciousness—like the sumptuously edited commercial for the Apple Watch in which the watch operates entirely of its own accord, no human required!—where they shape our desires and understandings of our relationships with our devices.

It’s through the image of a series of hands grasping, texting, and swiping away that my attention is drawn to the people at the other end of the technologies that shape our lives. As Jacobs points out, technology doesn’t want anything, “we want, with technology as our instrument,” but the question of who we are isn’t just idle sociological speculation. It’s vital to imagining alternative arrangements of both people and technology, as well as more humane practices that may benefit us all.

Julia Ticona is a doctoral candidate in the sociology department at the University of Virginia and a dissertation fellow at the Institute for Advanced Studies in Culture. Her work focuses on the cultures of technology and everyday life.

Photo: Anatomical study of hands, public domain.

 


79 Theses on Technology:
Piper to Jacobs—No Comment

In his 79 Theses, Alan Jacobs hits upon one of the most important transformations affecting the technology of writing today. “Digital textuality,” writes Jacobs in Thesis 26, “offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.” One could remove “scholarly” from this sentence and still capture the essential point: In the interconnected, intergalactic Internet, everything is commentary.

For Jacobs, commentary is about responsiveness and the way we encode ethics into our collective electronic outpourings. Nothing could feel further from the actual comments one encounters online today. As Jacobs points out, “Comment threads seethe with resentment,” not only at what has been written, but at their own secondary status; comments are reduced to emotions, or rather one emotion. In a world where we imagine writing to be about originality, the comment can only ever be angry. In response, we either turn comments off (as is the case with this blog) or we say “No comment.” Withholding commentary is a sign of resistance or power.

Of course, this was not always the case. Commentary was once imagined to be the highest form of writing, a way of communing with something greater than oneself. It was not something to be withheld or spewed, but involved a complex process of interpretation and expression. It took a great deal of learning.

Hunayn ibn Ishaq al-'Ibadi, 809?-873 (known as Joannitius). Isagoge Johannitii in Tegni Galeni.

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed, commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary—a privileging of someone else’s point of view over our own—was a product of corporeal labor. Our bodies shaped our minds’ eye.

Not all is lost today. While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia. Whether you like it or not, that blog or corporate presence you worked so hard to create can be layered with the world’s thoughts. Instead of writing up here and commenting down there, it reverses the hierarchy and places annotating on top. Needless to say, it has a lot of people worried.

I personally prefer the vision of “annotation” to commentary. Commentary feels very emulative to me—it tries to double as writing in a secondary space. Annotation by contrast feels more architectural and versatile. It builds, but also branches. It is never finished, nor does it aim to be so. It intermingles with the original text more subtly than the here/there structure of commentary. But whether you call it annotation or commentary, the point is the same—to take seriously the writer’s responsiveness to another person.

Missing from these models is pedagogy. The annotated web gives us one example of how to remake the technology of writing to better accommodate responsiveness. It’s a profound first step, one that will by no means be universally embraced (which should give us some idea of how significant it is).

But we do not yet have a way of teaching this to new (or old) writers. Follow the curricular pathways from the lockered hallways of elementary school to the bleak cubicles of higher education and you will still see the blank piece of paper or its electronic double as the primary writing surface. The self-containment of expression is everywhere. It is no wonder that these writers fail to comment well.

It’s all well and good to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD? What if we trained people to build and create in the annotated web instead of on pristine planes of remediated paper? Now that would be different.

Andrew Piper is Associate Professor and William Dawson Scholar in the Department of Languages, Literatures, and Cultures at McGill University.


79 Theses on Technology:
Jacobs Responds to O’Gorman


Ned O’Gorman, in his response to my 79 theses, writes:

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.

We’re in interesting and difficult territory here, because what O’Gorman thinks obviously true I think obviously false. In fact, it seems impossible to me that O’Gorman believes what he writes here.

Take for instance the case of the button that “wants to be pushed.” Clearly, O’Gorman does not believe that the button sits there anxiously, as a finger hovers over it, thinking “oh please push me please please please.” Clearly, he knows that the button is merely a piece of plastic that when depressed activates an electrical current that passes through wires on its way to detonating a weapon. Clearly, he knows that an identical button—buttons are, after all, to adopt a phrase from the poet Les Murray, the kind of thing that comes in kinds—might be used to start a toy car. So, what can he mean when he says that the button “wants”?

I am open to correction, but I think he must mean something like this: “That button is designed in such a way—via its physical conformation and its emplacement in contexts of use—that it seems to be asking or demanding to be used in a very specific way.” If that’s what he means, then I fully agree. But to call that “wanting” does gross violence to the term, and obscures the fact that other human beings designed and built that button and placed it in that particular context. It is the desires, the wants, of those “will-bearing” human beings, that have made the button so eminently pushable.

(I will probably want to say something later about the peculiar ontological status of books and texts, but for now just this: Even if I were to say that texts don’t want, I wouldn’t thereby be “divesting” them of “meaningfulness,” as O’Gorman claims. That’s a colossal non sequitur.)

I believe I understand why O’Gorman wants to make this argument: The phrases “philosophical voluntarianism” and “technological instrumentalism” are the key ones. I assume that by invoking these phrases O’Gorman means to reject the idea that human beings stand in a position of absolute freedom, simply choosing whatever “instruments” seem useful to them for their given project. He wants to avoid the disasters we land ourselves in when we say that Facebook, or the internal combustion engine, or the personal computer, or nuclear power, is “just a tool” and that “what matters is how you use it.” And O’Gorman is right to want to critique this position as both naïve and destructive.

But he is wrong if he thinks that this position is entailed in any way by my theses; and even more wrong to think that this position can be effectively combated by saying that technologies “want.” Once you start to think of technologies as having desires of their own you are well on the way to the Borg Complex: We all instinctively understand that it is precisely because tools don’t want anything that they cannot be reasoned with or argued with. And we can become easily intimidated by the sheer scale of technological production in our era. Eventually, we can end up talking even about what algorithms do as though algorithms aren’t written by humans.

I trust O’Gorman would agree with me that neither pure voluntarism nor purely deterministic defeatism is an adequate response to the challenges posed by our current technocratic regime—or the opportunities offered by human creativity, the creativity that makes technology intrinsic to human personhood. It seems that he thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates leads to a loss of a necessary focus on human responsibility, and an inability to confront the political dimensions of technological modernity.


79 Theses on Technology: On Things

“The Sausage” (of Operation Ivy), 1952.

One of the more refreshing aspects of Alan Jacobs’s wonderful exercise, “79 Theses on Technology. For Disputation,” is its medieval cast. Disputations, as Chad Wellmon writes, were medieval “public performances that trained university students in how to seek and argue for the truth.” Theses were textual tidbits that mediated things (res) by means of words (verba). Theses spurred the search for truth as they pointed readers or hearers to a world of things (res), rather than, as we currently assume, codifying and hardening “claims.” “Commentary,” as Jacobs suggests, was one important medieval means of trying to get to the things behind or beyond words (Theses 26-36).

I find it perplexing, then, that Jacobs is so seemingly unsympathetic to the meaningfulness of things, the class to which technologies belong:

40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.

46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.

Here is some of my own commentary on Jacobs’s theses.

There’s a documentary film from the 1950s called Operation Ivy. Made by the US Air Force, it concerns the first-ever detonation of a thermonuclear device, a historic (and horrible) technological achievement. One of the pivotal points of the film’s narrative comes just before the hydrogen device is detonated. The narrator asks the chief engineer in charge of the test, “But what happens if you have to stop the firing mechanism, or can you stop it?” The engineer responds, “We can stop it all right if we have to. We have a radio link direct to the firing panel in the shot cab. If we have to stop the shot we simply push this button.”

“Just a simple flip of the wrist, huh?” the narrator says.

“That’s right,” says the engineer, “but a lot of work goes down the drain. You understand we don’t want to stop this thing unless it is absolutely essential.”

Our technological artifacts aren’t wholly distinct from human agency; they are bound up with it.

“Human agency,” then, is not a solution to the moral and political problems of technology; it is the condition of their possibility, and too often a means of their rationalization. We don’t need to reclaim “human agency”; we need to reclaim the meaningfulness and power of things (res)—the complex ways in which human decisions and choices become embodied, even sedimented in things.

It is odd to read a literary critic, one with some medieval sensibilities no less, expressing concern about ascribing “agency” to technology, calling it “evasions of the human.” Texts are technologies, technologies are things. In The Book of Memory, a book that every media theorist should read, Mary Carruthers writes of the medieval text:

[In the middle ages] interpretation is not attributed to any intention of the man [the author]…but rather to something understood to reside in the text itself.… [T]he important “intention” is within the work itself, as its res, a cluster of meanings which are only partially revealed in its original statement…. What keeps such a view of interpretation from being mere readerly solipsism is precisely the notion of res—the text has a sense within it which is independent of the reader, and which must be amplified, dilated, and broken-out from its words….

Things, in this instance manuscripts, are indeed meaningful and powerful. Why would we want to divest things of their poetic quality, their meaningfulness, and indeed their power? Kevin Kelly may be off in his aims or misguided in his understanding, but he’s right to recognize in things, even and especially in technologies, sources of meaning and meaningfulness.

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read—each of these wants as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarism that undergirds technological instrumentalism.

The cyborg dream may or may not be the extension of some idolatry, but the remedy is not a firm boundary between “our selves and our tools.” “Then he said to me, ‘Son of man, eat this scroll I am giving you and fill your stomach with it.’ So I ate it, and it tasted as sweet as honey in my mouth” (Ezekiel 3:3). Our tools are part of us, central to our subsistence and lives. They need to be digested, ruminated, regurgitated, and, yes, sometimes violently spit out.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

79 Theses on Technology: Jacobs Responds to Wellmon

Image: La transverbération de Sainte Thérèse (1672), by Josefa de Óbidos, Igreja Matriz de Cascais, via Wikimedia Commons (public domain).

Let me zero in on what I think is the key paragraph in my friend Chad Wellmon’s response to some of my theses:

But this image of a sovereign self governing an internal economy of attention is a poor description of other experiences of the world and ourselves. In addition, it levies an impossible burden of self-mastery. A distributive model of attention cuts us off, as Matt Crawford puts it, from the world “beyond [our] head.” It suggests that anything other than my own mind that lays claim to my attention impinges upon my own powers to willfully distribute that attention. My son’s repeated questions about the Turing test are a distraction, but they might also be an unexpected opportunity to engage the world beyond my own head.

I want to begin by responding to that last sentence by saying: Yes, and it is an opportunity you can take only by ceding the sovereignty of self, by choosing (“willfully”) to allow someone else to occupy your attention, rather than insisting on setting your own course. This is something most of us find hard to do, which is why Simone Weil says “Attention is the rarest and purest form of generosity.” And yet it is our choice whether or not to practice that generosity.

I would further argue that, in most cases, we manage to cede the “right” to our attention to others—when we manage to do that—only because we have disciplined and habituated ourselves to such generosity. Chad’s example of St. Teresa is instructive in this regard, because by her own account her ecstatic union with God followed upon her long practice of rigorous spiritual exercises, especially those prescribed by Francisco de Osuna in his Tercer abecedario espiritual (Third Spiritual Alphabet) and by Saint Peter of Alcantara in his Tractatus de oratione et meditatione (Treatise on Prayer and Meditation). Those ecstatic experiences were a free gift of God, Teresa thought, but through an extended discipline of paying attention to God she had laid the groundwork for receptivity to them.

(I’m also reminded here of the little experiment the violinist Joshua Bell tried in 2007, when he pretended to be a busker playing in a D.C. Metro station. Hardly anyone noticed, but those who did were able to do so because of long experience in listening to challenging music played beautifully.)

In my theses I am somewhat insistent on employing economic metaphors to describe the challenges and rewards of attentiveness, and in so doing I always had in mind the root of that word, oikonomos (οἰκονόμος), meaning the steward of a household. The steward does not own his household, any more than we own our lifeworld, but rather is accountable to it and answerable for the decisions he makes within it. The resources of the household are indeed limited, and the steward does indeed have to make decisions about how to distribute them, but such matters do not mark him as a “sovereign self” but rather the opposite: a person embedded in a social and familial context within which he has serious responsibilities. But he has to decide how and when (and whether) to meet those responsibilities. So, too, the person embedded in an “attention economy.”

In this light I want to question Weil’s notion of attention as a form of generosity. It can be that, of course. In their recent biography Becoming Steve Jobs, Brent Schlender and Rick Tetzeli tell a lovely story about a memorial service for Jobs during which Bill Gates ignored the high-powered crowd and spent the entire time in a corner talking with Jobs’s daughter about horses. That, surely, is attention as generosity. But in other circumstances attention may not be a free gift but a just rendering—as can happen when my son wants my attention while I am reading or watching sports on TV. This is often a theme in the religious life, as when the Psalmist says “Ascribe to the Lord the glory due his name,” or in a liturgical exchange: “Let us give thanks to the Lord our God.” “It is meet and right so to do.”

There is, then, such a thing as the attention that is proper and adequate to its object. Such attention can only be paid if attention is withheld from other potential objects of our notice or contemplation: The economy of our attentional lifeworld is a strict one. But I would not agree with Chad that this model “levies an impossible burden of self-mastery”; rather, it imposes the difficult burden of wisely and discerningly distributing my attention in ways that are appropriate not to myself qua self but to the “household” in which I am embedded and to which I am responsible.


79 Theses on Technology: On Attention

“Everything,” claims Alan Jacobs in 79 Theses on Technology, “begins with attention.” Throughout his theses, Jacobs describes attention as a resource to be managed. We “pay,” “refuse,” and “invest” attention. Behind these distributive acts is a purposeful, willful agent. I can choose whether to “give” attention to writing this post or “withhold” it from my eleven-year-old son (who wants me to explain what the Turing test is). The idea that I can allocate attention as I could any other resource or good suggests that attention is fungible. I’ve got a limited store of attention, and I have to decide when and where to expend it. It’s as though I wake up every day with 100 units of attention. And it’s up to me to manage them well.

If attention is a good that I can spend well or badly, then it is under my control. I could, on such an account, also lose my attention, in the same way that I lose my keys. And this distributive notion of attention seems to underpin many of our contemporary anxieties about our current moment of digital distraction. The constant notifications from Twitter, Facebook, and my iPhone are all greedy consumers of my attention. If I were just focused, I could assert my powers of distribution and maintain control of my limited units of attention. I would be able to decide exactly which among the myriad objects clamoring for my attention deserves it.

Underpinning Jacobs’ distributive model of attention is an assumption that some general mental faculty, a particular power of the mind, exists that can manage this precious resource. The power of attention—just like other traditional faculties such as reason, memory, imagination, or will—is a latent capacity that needs to be disciplined in order to become fully actual and susceptible to manipulation. It’s like a muscle that needs to be exercised. And if I engage in the right kinds of exercises—maybe if I read really long novels in one sitting or dismantle the WiFi on my laptop—then I can become the master of my own mind. I can be free and in control of who and what can enjoy the benefits of my limited attention. For Jacobs, then, attention is attention, regardless of the object I’m “investing” it in. And my task is to cultivate better habits of managing and controlling my attention.

Jacobs’ suggestion that attention is a mental power that we distribute here or there or anywhere makes sense in certain circumstances. When I engage in discrete tasks, I can think of attention as a limited good that requires tight control and manipulation. If I try to follow my Twitter feed, read a book, and write an article, then I won’t do any of those things well. If I refuse attention to Twitter and the book, however, I may well be able to finish a paragraph.

But this image of a sovereign self governing an internal economy of attention is a poor description of other experiences of the world and ourselves. In addition, it levies an impossible burden of self-mastery. A distributive model of attention cuts us off, as Matt Crawford puts it, from the world “beyond [our] head.” It suggests that anything other than my own mind that lays claim to my attention impinges upon my own powers to willfully distribute that attention. My son’s repeated questions about the Turing test are a distraction, but they might also be an unexpected opportunity to engage the world beyond my own head.

If we conceive of attention as simply the activity of a willful agent managing her units of attention, we foreclose the possibility of being arrested or brought to attention by something fully outside ourselves. We foreclose, for example, the possibility of an ecstatic attention and the possibility that we can be brought to attention by a particular thing beyond our will, a source beyond our own purposeful, willful action.

Consider, for example, Bernini’s sculptural ensemble in the Cornaro Chapel, Santa Maria della Vittoria, Rome, “The Ecstasy of Saint Teresa.” Bernini has given us an image of complete attention and devotion, but one in which the agency of the will has been relinquished. Or consider the more mundane example of the first bud on a dogwood, wholly unexpected after a cold, icy winter. It surprises me by alerting me to a world beyond my own well-managed economy of attention. And, perhaps more perversely, what about all those shiny red notifications on my iPhone that take hold of me? If I imagine myself as master of my digital domain, I’m going to hate myself.

I know Jacobs is acutely aware of the limitations of such a distributive model of attention. He asks, for example, in Thesis 9, whether different phenomena require different forms of attention. There are, he suggests, different ways to attend to particular objects at particular moments—without “giving” or “paying” attention. And it’s these other forms, in which an agent doesn’t simply manage her attention, that seem just as crucial to making sense of how we inhabit our world.


79 Theses on Technology. For Disputation.

Alan Jacobs has written seventy-nine theses on technology for disputation. A disputation is an old technology, a formal technique of debate and argument that took shape in medieval universities in Paris, Bologna, and Oxford in the twelfth and thirteenth centuries. In its most general form, a disputation consisted of a thesis, a counter-thesis, and a string of arguments, usually buttressed by citations of Aristotle, Augustine, or the Bible.

But disputations were not just formal arguments. They were public performances that trained university students in how to seek and argue for the truth. They made demands on students and masters alike. Truth was hard won; it was to be found in multiple, sometimes conflicting traditions; it required one to give and recognize arguments; and, perhaps above all, it demanded an epistemic humility, an acknowledgment that truth was something sought, not something produced.

It is, then, in this spirit that Jacobs offers, tongue firmly in cheek, his seventy-nine theses on technology and what it means to inhabit a world formed by it. They are pithy, witty, ponderous, and full of life. And over the following weeks, we at the Infernal Machine will take Jacobs at his provocative best and dispute his theses. We’ll take three or four at a time and offer our own counter-theses in a spirit of generosity.

So here they are:

    1. Everything begins with attention.
    2. It is vital to ask, “What must I pay attention to?”
    3. It is vital to ask, “What may I pay attention to?”
    4. It is vital to ask, “What must I refuse attention to?”
    5. To “pay” attention is not a metaphor: Attending to something is an economic exercise, an exchange with uncertain returns.
    6. Attention is not an infinitely renewable resource; but it is partially renewable, if well-invested and properly cared for.
    7. We should evaluate our investments of attention at least as carefully and critically as our investments of money.
    8. Sir Francis Bacon provides a narrow and stringent model for what counts as attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.”
    9. An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? Or silence?”
    10. Attentiveness must never be confused with the desire to mark or announce attentiveness. (“Can I learn to suffer/Without saying something ironic or funny/On suffering?”—Prospero, in Auden’s The Sea and the Mirror)
    11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.
    12. That is, mindfulness reduces mental health to a single, simple technique that delivers its user from the obligation to ask any awkward questions about what his or her mind is and is not attending to.
    13. The only mindfulness worth cultivating will be teleological through and through.
    14. Such mindfulness, and all other healthy forms of attention—healthy for oneself and for others—can only happen with the creation of and care for an attentional commons.
    15. This will not be easy to do in a culture for which surveillance has become the normative form of care.
    16. Simone Weil wrote that “Attention is the rarest and purest form of generosity”; if so, then surveillance is the opposite of attention.
    17. The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.
    18. The intensity of those battles is increased by a failure by any of the parties to consider the importance of intimacy gradients.
    19. “And weeping arises from sorrow, but sorrow also arises from weeping.”—Bertolt Brecht, writing about Twitter
    20. We cannot understand the internet without perceiving its true status: The Internet is a failed state.
    21. We cannot respond properly to that failed-state condition without realizing and avoiding the perils of seeing like a state.
    22. If instead of thinking of the internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.
    23. The internet groans in travail as it awaits its José María Arizmendiarrieta.

    24. Useful strategies of resistance require knowledge of technology’s origin stories.
    25. Building an alternative digital commons requires reimagining, which requires renarrating the past (and not just the digital past).
    26. Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.
    27. Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative.
    28. Comment threads too often seethe with resentment at the status of comment itself. “I should be the initiator, not the responder!”
    29. Only a Bakhtinian understanding of the primacy of response in communication could genuinely renew online discourse.
    30. Nevertheless certain texts will generate communities of comment around them, communities populated by the humbly intelligent.
    31. Blessed are they who strive to practice commentary as a legitimate, serious genre of responsiveness to others’ thoughts.
    32. And blessed also are those who discover how to write so as to elicit genuine commentary.
    33. Genuine commentary is elicited by the scriptural but also by the humble—but never by the (insistently) canonical.
    34. “Since we have no experience of a venerable text that ensures its own perpetuity, we may reasonably say that the medium in which it survives is commentary.”—Frank Kermode
    35. We should seek technologies that support the maximally beautiful readerly sequence of submission, recovery, comment.
    36. If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

    37. “Western literature may have more or less begun, in Aeschylus’s Oresteia, with a lengthy account of a signal crossing space, and of the beacon network through whose nodes the signal’s message (that of Troy’s downfall) is relayed—but now, two and a half millennia later, that network, that regime of signals, is so omnipresent and insistent, so undeniably inserted or installed at every stratum of existence, that the notion that we might need some person, some skilled craftsman, to compose any messages, let alone incisive or ‘epiphanic’ ones, seems hopelessly quaint.”—Tom McCarthy
    38. To work against the grain of a technology is painful to us and perhaps destructive to the technology, but occasionally necessary to our humanity.
    39. “Technology wants to be loved,” says Kevin Kelly, wrongly: But we want to invest our technologies with human traits to justify our love for them.
    40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
    41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
    42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
    43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
    44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.
    45. “In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end.”—George Bernard Shaw in 1907, or Kevin Kelly last week
    46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.
    47. Cyborgs lack humor, because the fusion of person and tool disables self-irony. The requisite distance from environment is missing.
    48. To project our desires onto our technologies is to court permanent psychic infancy.
    49. Though this does not seem to be widely recognized, the “what technology wants” model is fundamentally at odds with the “hacker” model.
    50. The “hacker” model is better: Given imagination and determination, we can bend technologies to our will.
    51. Thus we should stop thinking about “what technology wants” and start thinking about how to cultivate imagination and determination.
    52. Speaking of “what technology wants” is an unerring symptom of akrasia.
    53. The physical world is not infinitely redescribable, but if you had to you could use a screwdriver to clean your ears.
    54. The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.
    55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”
    56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.
    57. Perhaps Loki was right—and C. S. Lewis too: “I was not born to be free—I was born to adore and obey.”

    58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok
    59. Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.”
    60. What does it say about our understanding of human intelligence that we think it is something that can be assessed by a one-off “test”—and one that is no test at all, but an impression of the moment?
    61. To attribute intelligence to something is to disclaim responsibility for its use.
    62. The chief purpose of technology under capitalism is to make commonplace actions one had long done painlessly seem intolerable.
    63. Embrace the now intolerable.
    64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.
    65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.
    66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.
    67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.
    68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.
    69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.
    70. The always-connected forget the pleasures of disconnection, then become impervious to them.
    71. The Dunning-Kruger effect grows more pronounced when online and offline life are functionally unrelated.
    72. A more useful term than “Dunning-Kruger effect” is “digitally-amplified anosognosia.”
    73. More striking even than the anger of online commentary is its humorlessness. Too many people have offloaded their senses of humor to YouTube clips.
    74. A healthy comment thread is a (more often than not) funny comment thread.
    75. The protection of anonymity is one reason why people write more extreme comments online than they would speak in person—but not the only one.
    76. The digital environment disembodies language in this sense: It prevents me from discerning the incongruity between my anger and my person.
    77. Consistent pseudonymity creates one degree of disembodiment; varying pseudonymity and anonymity create infinite disembodiment.
    78. On the internet nothing disappears; on the internet anything can disappear.
    79. “To apply a categorical imperative to knowing, so that, instead of asking, ‘What can I know?’ we ask, ‘What, at this moment, am I meant to know?’—to entertain the possibility that the only knowledge which can be true for us is the knowledge we can live up to—that seems to all of us crazy and almost immoral.”—Auden


Algorithms Who Art in Apps, Hallowed Be Thy Code

 

If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at the Georgia Institute of Technology, proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed up by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Second, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, Lanier’s use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust, we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other: something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how the lines between technology, sometimes reckoned the most coldly rational of human projects, and religion are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post–Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ … a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post–)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 
