79 Theses on Technology:
Of Techniques and “Technology”


Anatomy of a Blogger, after Diderot’s Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers by Mike Licht via flickr

Editor’s Note: Earlier in the spring, Alan Jacobs drew up his 79 Theses on Technology, a provocative document that has attracted much commentary from our readers. John Durham Peters joins the fray here, commenting on Theses 64 through 70.

64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.

65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.

66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.

67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.

68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.

69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.

70. The always-connected forget the pleasures of disconnection, then become impervious to them.

No doubt, writing is an intensely physical, bio-mechanical activity. The back hurts, the neck cranes, the eyes sting, the head aches, the view out the window is consulted for the thousandth time. The inscription of words exacts a tax of muscular and nervous exertion. And no doubt, the most minute choices in writing technique make worlds of difference. Nietzsche thought writing while seated a sin against the Holy Ghost: only words that came while strolling had truth for him.

But let us not confuse technology and technique. Technology once meant the study of the productive arts and sciences (as in the Massachusetts Institute of Technology); now, the term has been inflated to cover material devices of all kinds and has swollen into a gas-bag for intellectuals to punch. Techniques are humble acts we do with hands, voices, eyes, feet, spine, and other embodied parts that bring forth mind into the world. We humans never do anything without technique, so we shouldn’t pretend there is any ontological difference between writing by hand, keyboarding, and speaking, or that one of them is more original or pure than the others. We are technical all the way down in body and mind.

The age of ubiquitous computing has yielded, among other things, a florid genre of opt-out narratives, and I hope I do not espy in these theses another such tendency. Only by the orchestration of technologies can you catch a glimpse of a technology-free world. The more intensely made our environment is, the more actively its designers supply us with shock absorbers. The default images for the background of my desktop computer are all resolutely pastoral—not a sign of infrastructure, globalization, coltan, carbon, or human labor among them. I find tulips, a rising moon, cloudscapes, seascapes, and windblown desert sands, but no data, email, calendars, and bills, and certainly no human presence. Just how did this blue flower happen to sprout amid all the silicon? With heartfelt pleas that I “just have to watch,” my students send me YouTube videos that explain why we need to unplug, go outside, and seek real human contact. If you listen to the machine telling you how to get out of it, you only get sucked into it more, like a con artist who lulls you into a sense of trust by telling you that he is conning you. The promised liberation from technology is usually just another technology that you don’t recognize as such. This is one reason why a fuller appreciation of our diverse techniques is so vital.

Tools are all we have, but each one sets us in a very different horizon. Technology only risks being an idol because we don’t appreciate our techniques well enough. Writing with two hands on a keyboard, dictating to a person or a machine, writing with chalk, quill, pencil, or pen—each embodies mind in different ways. Blessed be the back pain, as it reminds us that we are not immaterial beings flying through cyberspace.

I don’t understand the term “simplest” applied to a tool. Tools interact with mind and body. Compass and square could build gothic cathedrals. Piano and notepaper could yield symphonies. The more basic the tool, the harder it is to master. Who among us has yet learned how to speak, or walk, or think? The real challenges lie in the most basic acts. Some day, I’d like to write a really good sentence. Some day, I’d like to play a beautiful scale in C major. Some day, I’d like to say the right word to another person. The more basic the task, the more fundamental the challenge and the more difficult the tool.

John Durham Peters is the A. Craig Baird Professor of Communication Studies at the University of Iowa. His most recent book, The Marvelous Clouds: Toward a Philosophy of Elemental Media, has just been released by the University of Chicago Press.



Beyond the Reveal: Opacity in Personal Chrono-tech


From Apple II watch instructions on Instructables.com URL: http://www.instructables.com/id/Apple-II-Watch/

Part Four: Opacity in Personal Chrono-tech

As a conclusion to this series on the limits of black box metaphors in critiques of obscured technological systems, I want to offer a brief example of an alternative approach. Earlier this year, I presented this material as a lecture. Since then, a new black box has entered the marketplace—Apple’s Watch. I have not yet interacted with Apple’s “most personal device,” but I expect (largely merited) critiques about how the Watch embeds Apple’s system ever deeper in the daily routines of users. With both fewer buttons and less screen real estate with which to interact, the inputs and outputs for this system will probably be more passive and less obtrusive, even as the background software and hardware processes grow more complex. What new routines and rhythms of attention will the Watch afford, and on what algorithmic processes of surveillance, marketing, or communication will this attention depend?

We will need new audits. We will need to know, as with the iPhone, what information this new device is storing and sharing, and with whom. The Watch’s role in collecting medical data should give us particular pause in this regard. But when considering constraints on agency and freedom, we shouldn’t limit our analysis to revealing the processes at work “inside” this device. The processes by which we live with such devices deserve as much attention as the routines at work in the operating system. And we can learn a great deal about this device’s role in our lives without ever peering inside the system.

As a prompt in this direction, I’ll offer a brief tour of objects that, like the Watch, “want” to be a part of our everyday rhythms of attention, yet make “seamful” rather than seamless opacity a foregrounded aspect of our interaction with them.


Vague Clock by Sejoon Kim URL: http://sejoonkim.com/design/vagueclock.html

Take, for example, Sejoon Kim’s Vague Clock. In contrast to Apple’s Watch, it offers the time not “on demand” (with the raise of an arm), but “on exploration” (with the caress of a hand). The clock’s nearly opaque fabric makes reading the time at a glance all but impossible. Instead, the laborer at her desk must get up and not only tap the clock face, but explore it, changing a two-dimensional plane into a three-dimensional form.


Risk Watch by Anthony Dunne and Fiona Raby URL: http://www.dunneandraby.co.uk/content/projects/75/0

The speculative designs of Anthony Dunne and Fiona Raby are also instructive here. Their 2007/08 series of objects entitled DO YOU WANT TO REPLACE THE EXISTING NORMAL? includes The Risk Watch, a watch whose opaque face carries a small nipple in place of any visible marks of temporal passage. When placed to an ear, the nipple activates a small device that speaks a number that “corresponds to the political stability of the country you are in at that time.” Dunne and Raby say of this body of work that “if our desires remain unimaginative and practical, then that is what design will be.” The Risk Watch gives us what we want—a sort of single-app Apple Watch—in a way that invites us to examine both the desires we bring to personal tech and the processes we trust to grant them.

The NoPhone URL: http://nophone.myshopify.com/

Dunne and Raby’s approach to opacity might also call to mind the NoPhone, a project launched last year via Kickstarter that reached some unexpected, if modest, financial success. The NoPhone, billed as “a technology-free alternative to constant hand-to-phone contact,” is simply a brick of black plastic molded in the size and shape of an iPhone. In use as a replacement for one’s phone, the device aspires to deliver a different sort of “reveal,” catching the user in the act of relentless phone-checking. Like Ben Grosser’s Facebook Demetricator, the NoPhone calls to mind counter-addiction regimes, but does so with some humor, and a desire to cast human habits into the spotlight.

The Durr watch, by Skrekkøgle URL: http://skreksto.re/products/durr

Another provocative neighbor to Apple’s Watch is the Durr, a product of the Norwegian studio Skrekkøgle. As with the NoPhone, the Durr’s designers create personal technologies that use opacity to reveal something about the user’s daily activities. In this case, however, the object also introduces a modest new machinic process into the picture. Like the NoPhone or the Vague Clock, the Durr presents a wholly opaque face where a screen or dial might normally reside. Inside the object, however, sits a small vibrating motor that buzzes at five-minute intervals.

For a few months now, I’ve been replacing my usual watch with a Durr for a day or two each week, with enlivening effects. The Durr reveals not only my habits of watch-checking, but the relative speed at which time passes in relation to the intensity and direction of my attention. Checking email, I can’t believe how fast the Durr is going. Traveling across town on foot, the durations seem broad and wide. Five minutes is just long enough to forget the thing in many cases, just too long to be counted by the human attention clock. Its opacity depends as much on me as on the device itself. As such, wearing the Durr casts my other machinic attention regimes into new light and invites me to reorient my body accordingly.

I could go on to mention a dozen different life-management and attention-management tools, simple things like www.donothingfor2minutes.com, or “productivity” apps such as Freedom, which disables a device’s internet access for set periods of time. Where such efforts serve behavior-modification regimes, they should surely be set in the historical context of disciplinary, labor, or even religious practice.

Set next to the growing number of algorithm-auditing efforts, however, such attention-modification works serve a different function. They show how, in the quest to understand the influence of machinic processes on human agency, there is much to be learned without ever “unboxing” the technologies at hand. As we move forward with the vital work of monitoring and interpreting the multitude of new processes at work behind our technologies of attention, we should take great care not to stop our efforts at the algorithmic reveal. We should insist on the co-presence of at least two other bodies of work in the growing intellectual spaces devoted to the critique of algorithms: that of critical race, gender, and labor studies, which reveals the differently structured life on which the new algorithms depend, and that of design, art, and play, which casts human action and desire toward interface in new light.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


Beyond the Reveal: Toward Other Hermeneutics


Part Three: Toward Other Hermeneutics

I want to make clear here that I believe we need to keep pushing for new research—new policies and practices that help ensure just algorithmic processes at work inside our infrastructures. (See posts one and two of “Beyond the Reveal.”) If our search engines, pricing structures, law enforcement or trade practices depend on or enact unlawful, unethical, or unjust algorithmic processes, we need to have ways of stopping them. We need accountability for these processes, and in some cases that will also mean we need transparency.

But, as urban studies scholar Dietmar Offenhuber points out in Accountability Technologies, accountability isn’t inextricably linked to transparency. In fact, some forms of revelation about opaque processes may do more harm than good to the public. If we make information access a priority over “answerability and enforcement” when it comes to just algorithmic infrastructures, Offenhuber warns, we may not achieve our goals.

So there may be times when “opening the box” might not be the best path to dealing with the possibility of unjust systems. And it is almost certainly the case that our black box metaphors aren’t helping us much in research or advocacy when it comes to charting alternatives.

In my own collaborative work on a Facebook user study, my co-authors and I focused primarily on a question directed to users: “Did you know there’s a black box here, and what do you think it’s doing?” The results of this study have set us on a path to at least learning more about how people make sense of these experiences. But in some ways, our work stands to get stuck on the “reveal,” the first encounter with the existence of a black box. Such reveals are appealing for scholars, artists, and activists—we sometimes like nothing better than to pull back a curtain. But because of our collective habit of establishing new systems to extricate ourselves from old ones, that reveal can set us on a path away from deliberative and deliberate shared social spaces that support our fullest goals for human flourishing.

I confess that at this point, I bring more cautions about black box hermeneutics than I bring alternatives. I’ll conclude this post by at least pointing to a path forward and demonstrating one possible angle of approach.

My critique of black box metaphors so far leads me to the following questions about our work with technologies:

  1. How else might we deal with the unknown, the obscured, or the opaque besides “revealing” it?
  2. Do we have to think of ourselves as outside a system in order to find agency in relation to that system?
  3. Can interface serve to facilitate an experience that is more than cognitive, and a consciousness not ordered by the computational?

As Beth Nowviskie pointed out in a response to the lecture version of this post, we already have at least one rich set of practices for addressing these questions: that of interpretive archival research. Are not the processes by which a corpus of documents comes to exist in an archive as opaque as any internet search ranking algorithm? Isn’t part of the scholar’s job to account for that process as she interprets the texts, establishing the meaning of such texts in light of their corporeal life? And aren’t multiple sensoria at work in such a process, only some of which are anticipated by the systems of storage and retrieval at hand? Understood as “paper machines” and technologies in their own right, the histories of how scholars and readers built their lives around epistles, chapbooks, encyclopedias, and libraries certainly have much to offer our struggles to live with unknown algorithms.

We might also, however, look to the realms of art, design, and play for some productive alternatives. Take, for example, the latest black box to take techno-consumption by storm—Apple’s Watch. This object is almost certainly headed toward integration into users’ lives as a facilitator of new daily routines and systems, especially among the quantified-self set. Other writers on this blog have already helpfully set the new box in the context of its precedents in meditative practices or contemporary tech labor. But as we work to understand how the new systems involve us in new, opaque processes, a glance at some more intentionally opaque neighbors might be of help. In my next post, I’ll set a few recent objects and experiences next to the Watch to compare how they invite distinct incorporation into the rhythms of daily attention, thought, and action.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


Beyond the Reveal: A Metaphor’s Effect


In my last post, I described how the black box emerges historically with the extrication of (at least some) laborers from the machines of industrial labor. The cost of this move is that the laborer, now outside the machine as an operator, must herself operate as a black box. The interface between the laborer and machine becomes central to this new relationship, especially as managers and technologists focus on constantly reconfiguring the interactions between and among human-machine pairs.

In recounting this history of a metaphor, I aim toward a critique of how black box metaphors are used today to describe opaque technological processes. And I don’t mean to suggest that any use of a black box metaphor inadvertently invokes a whole history of labor and interface. But I do think we can surmise from this history a dominant narrative that draws heavily from the black box metaphor:

  1. As an “infrastructural inversion,” the black box metaphor creates the possibility, for some, of imagining themselves as outside a system that formerly may not have been visible at all.
  2. Where and when this happens, interfaces emerge and gain prominence as a point of mediation with the formerly invisible system.
  3. Design for interaction between the user and the “black boxed” process tends to imagine the human mind as another form of black box, emphasizing cognitive over manual processes.
  4. The new system composed of this user and her machine then starts the process anew—the user/worker has been incorporated into a new system that she may not actually see unless she names a new “black box.”
  5. This narrative will also depend on the exclusion of some who need to “stay behind” and keep the system going within the “old” forms of labor.

To describe a process as a black box thus potentially sets in motion a whole series of implications for sensation, knowledge, labor, and social organization.

Let’s look at this, for example, in light of new attention brought to the role of algorithms in Facebook use (an effort in which I have been involved as a scholar). How does describing the Facebook algorithm as a black box set us on a particular narrative of analysis and research?

Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.

But maybe this grasp toward agency is also the beginning of a new system. The black box metaphor suggests that platform providers will also need to design for the user who tweaks. (It may even be that designing for the tweaker is more profitable than designing a “perfect feed.”) As in previous ergonomic problems, this process will begin to imagine and construct a particular kind of mind, a particular kind of body, a particular kind of user. Tweaking to account for black-boxed algorithmic processes could become a new form of labor, one that some will inevitably describe as its own black box, and one to escape.

Maybe, by structuring our engagement with the experience of Facebook’s opaque processes through the black box metaphor, we’ve set ourselves up to construct a new black box, and ignored the ways in which our relations to others, within and without the present system, have been changed by our newfound awareness.

I’m struck here, for example, by how well the narrative of the black box I’ve described here fits a number of stories we’ve lived and heard regarding privacy and networked media. Whether it’s the Snowden revelations or Facebook’s unauthorized emotion study, the story often plays out the same way for many of us. We realize or remember anew just how much work we’re providing some entity within a current system, and then proceed to either alter our use patterns or abstain altogether from that system in order to remain outside that work. Debates ensue over who is complicit and who is not, and with the exception of those working in a more organized fashion to enact prosecution or new laws, most of us are stuck in an “opt-in or opt-out” scenario that never goes anywhere.

It’s likely only a matter of time before the market for more subtle responses than “opt-in or opt-out” is met with a new set of black box systems. One can imagine, for example, a range of services: free email if you submit to full surveillance and data-trolling, modestly priced email if you submit your data for use via an anonymizer, or premium email at a high cost that removes you from all data-harvesting.

Perhaps, even as we remain justifiably critical of the unseen and unknown software processes that govern and regulate a growing number of shared spaces and subjectivities, we might search for another way to live with these processes than hitting the escape button and entering a higher-level routine. More on that in my next posts.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


Beyond the Reveal: Living with Black Boxes


Part One: Histories

Amidst growing attention and calls to action on the role of algorithms in our everyday lives, one idea recurs: “opening the black box.” In such analyses, the “black box” describes a process that happens in secret, one for which we know only the inputs and outputs, but not the steps that take place in between. How might this metaphor be structuring our approach to thinking about algorithms and their place in our lives, long before we get to the work of accounting for the social and political work of algorithmic systems?

In this first of four posts, I’ll begin an answer to this question by looking at the history of the “black box” as a way of modeling cognitive or computational processes. In the second post, I’ll offer some cautionary words about reliance on this metaphor in the important work of ensuring just systems. Finally, in the last two posts I’ll look to some alternatives to black-box-opening in our relationships to opaque technological processes.

The black box metaphor began to acquire its shape during changes in labor that took place after World War II. Whereas managers before the war had largely treated work as a series of learned behaviors, the designers of work and work environments after the war began to think less about suiting the laborer to the work, and more about suiting the work to the laborer.

More than a mere Taylorist repeater of actions, the new ideal worker of post-war Human Factors research not only acts but perceives, acting according to learned evaluative routines that correlate sensation to action. The ideal post-war laborer is not a person of a particular physical build, conditioned to perform particular motions, but rather a universalized collection of possible movements, curated and selected according to mathematical principles. Human Factors research turned the human laborer into a control for a system, a proper medium for the transfer and transformation of input.

Key to this new approach was the influence of information theory on approaches to both computing and psychology. In computing, the understanding of signals as information paved the way for a mathematics of binary code, in which the course of electrons through physical gates and switches could translate into algorithms and mathematical functions. In psychology, those who had grown weary of behaviorism’s stimulus-response approaches to explaining and modifying human action saw in Claude Shannon’s approach echoes of the structure of the human brain. These early cognitive scientists saw in thought a kind of algorithm performing consistent functions on ever-changing sense data, zipping through the brain’s neural pathways the way electrons travel through the copper of a computer’s circuits.

And so a new understanding of the operator’s actions emerged alongside a new understanding of a computer’s routines. The first software emerged at the same time that psychologists began to analyze human thought and memory as a collection of mathematical functions performed on sense data. In other words, the black box as we know it emerged as a pair of metaphors: one to describe the computational machine, and one to describe the human mind.

Before these developments, systems of manufacture and control were designed to include the human body as a “control” in the operational sense. The control in any function is a limiter, providing brackets for the acceptable inputs and possible outputs. If a laborer slows down his or her work, the entire process slows. In the new post-Taylorist workflow, in contrast, the control is performed by a computational process rather than a human, embodied one. The new computers allowed for the programming of internal black boxes within the machine itself. Information from multiple sensors, as it coursed through these machines, would be analyzed and checked for deviation. The results of such analyses would set certain mechanical processes in motion to produce a desired end.
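To make the idea concrete, here is a minimal sketch of the kind of software control described above: sensor readings are pooled, checked for deviation from a setpoint, and a corrective action is triggered only when the deviation exceeds its brackets. The sensor values, setpoint, and function names are illustrative assumptions of mine, not details drawn from the historical systems discussed here.

    # Hypothetical sketch of a "control within the machine": readings from
    # multiple sensors are averaged, checked against a setpoint, and a
    # mechanical correction is triggered only when the drift is too large.

    SETPOINT = 100.0   # desired process value (illustrative units)
    TOLERANCE = 2.5    # deviation allowed before the controller acts

    def read_sensors():
        # Stand-in for polling several physical sensors.
        return [103.1, 104.8, 103.2]

    def actuate(correction):
        # Stand-in for setting a mechanical process in motion.
        print("adjusting process by {:+.2f}".format(correction))

    def control_step():
        readings = read_sensors()
        average = sum(readings) / len(readings)
        deviation = average - SETPOINT
        if abs(deviation) > TOLERANCE:   # act only outside the acceptable brackets
            actuate(-deviation)

    control_step()   # with these example readings, prints an adjustment of about -3.70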

Although the worker has been replaced by an algorithm as the system control, she or he is not missing from the scene entirely. Rather, the human operator now performs the function of a control for the control. The machine affords indications to the human operator of the proper functioning of the software-based controller. Deviations from designated functions trigger new action from the human operator, according to more advanced algorithms than required of previous industrial operators. This new human operator must synthesize multiple forms of data—visual, aural, even symbolic data—and then decide on a proper course of action, of input to the machine, according to a trained set of decision-making criteria and standards.

Though operating at a greater distance from the phenomena of mechanical system function, this new, error-detecting human operator plays no less critical a role. His or her mental routines must be just as carefully scripted and trained as the Taylorist laborer’s physical actions, and often via an emerging understanding of the brain as a computer.

The new operator is thus less a part of the system even as he or she is made more in the image of that system. Formerly one organ within a mechanical body, the operator is now modeled as a discrete body, tethered to another, mechanical body and made after its image, for the purposes of safe and consistent system flow. The machine and the operator mirror one another, with the interface as their crucial site of division, the glass of reflection and action.

These changes also affect sociality through the creation of a new entity that includes all agents. This new entity—the organization—invites design at a complex level that accounts for multiple machinic and human actors. Where each machine used to come with an operator as controller, the two treated as a single entity, the post-war machine comes with an operator as agent, who is necessary to the proper functioning of the machine but separate from it. For large-scale projects, this doubling results in increased complexity, which the organization approaches as yet another information-processing problem.

The organization, this plurality of entities, is coincident with the emergence of the interface. Machines and operators without true interfaces—as in Taylorist scenarios—are not collective in that they are not social. They are merely aggregate. Thus some of the biggest moves in computing research toward the latter half of the twentieth century were those that simultaneously addressed the interface between one operator and her machine, and the structure of all machine-human pairs, organized together into one system—one black box process.

Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.


79 Theses on Technology:
Our Detachment From Technology


When reading Alan Jacobs’s 79 theses, three jumped out at me:

55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”

56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok

These theses suggest a single issue: We have become increasingly detached from our software, both in how it works and how it is built.

The algorithms involved in much of our software are each designed to do something. When an algorithm was a single snippet of code or a tiny computer program, it could be read, understood, debugged, and even improved. Similarly, computing once involved regular interactions at the level of the command line. There was little distance between the code and the user.
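
As an illustration of the sort of small, legible algorithm this paragraph has in mind, here is a minimal, generic sketch (a binary search); it is my own example, not code drawn from any of the systems discussed here.

    def binary_search(items, target):
        # Return the index of target in a sorted list, or -1 if it is absent.
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1

    print(binary_search([1, 3, 5, 7, 9], 7))   # prints 3

A routine at this scale can be read, stepped through, and debugged in its entirety; the layered, shielded software described below cannot.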

Since the early era of command lines and prompts, software has become increasingly complex. It has also become increasingly shielded from the user. These are not necessarily bad changes. More sophisticated technology is more powerful and has greater functionality; giving it a simpler face prevents it from being overwhelming to use. We don’t need to enter huge numbers of commands or parameters to get something to work. We can just swipe our fingers and our intentions are intuited.

Thanks to these changes, however, each of us has become more distant from the inner workings of our machines. I’ve written elsewhere about how we must strive to become closer to our machines and bridge the gap between expert and user. This is difficult in our era of iPads and graphical interfaces, and often it doesn’t even seem that important. However, since these technologies affect so many parts of our lives, I think we need the possibility of closeness: We need gateways to understanding our machines better. In the absence of this proactive decision, our responses to our machines will tend to be driven by fear, veneration, and disdain.

This detachment from how algorithms and software operate has produced a gross misunderstanding of how technology works. We find technology far more inscrutable than it really is, forgetting that all of it was designed by fallible people. We respond to this inscrutable power by imputing to it a beauty and sophistication that are not there. (For more on this, see Ian Bogost and his observation that many people use the word “algorithm” in an almost religious manner.)

Veneration of the algorithm as something inordinately impressive is detrimental to our ability to engage with technology. Software is often incredibly kludgy and chaotic, far from worthy of worship. This response is not so far from fearing technology just because we can’t understand it. Fear and veneration are closely related: both make algorithms out to be more than they are. (This is the subject of Jacobs’s Theses 55 and 56, though stated in somewhat more extreme form than I would put it.)

But what about disdain? How does this work? When a device suggests the wrong word or phrase in a text, or sends delivery trucks on seemingly counterintuitive routes, we disdain the device and its algorithms. Their outputs seem so self-evidently wrong that we are often filled with a sense of superiority, mocking these algorithms’ shortcomings or deeming them superfluous.

Sometimes, our expertise does fall short and complex logic can seem like stupidity. But David Auerbach, writing in Nautilus, offered this wonderful story that shows that something else might be going on:

Deep Blue programmer Feng-Hsiung Hsu writes in his book Behind Deep Blue that during the match, outside analysts were divided over a mysterious move made by the program, thinking it either weak or obliquely strategic. Eventually, the programmers discovered that the move was simply the result of a bug that had caused the computer not to choose what it had actually calculated to be the best move—something that could have appeared as random play.

In this case, ignorance prevented observers from understanding what was going on.

Is complex logic indistinguishable from stupidity? I don’t think so. Our response to a process we don’t understand may be closer to the nervous laughter of ignorance than a feeling of superiority. We call these algorithms stupid not because we recognize some authentic algorithmic inadequacy in them. We call them stupid because to admit a certain humility in the face of their increasing complexity would be a display of weakness.

When I took an artificial intelligence course in college and learned the algorithms for programs such as playing board games or constructing plans, I didn’t feel superior—I felt a kind of sadness. I had seen behind the screen and found these processes sophisticated, but fairly mundane. Most complex technology is this way. But when each of us encounters a surprising and apparently stupid output, if we don’t understand its origins, it is a lot easier to mock the system than to feel humbled, or even disappointed, at discovering its true structure.

These responses to technology are not the everyday user’s fault. Many of the creators of these technologies want the user to attribute a certain power to these algorithms and so have protected them behind layers of complexity. Ultimately, I think the most appropriate response is intellectual humility in the face of technology from which we have become increasingly detached. Only then can we engage with algorithms and try to see, even if only for a moment, what they are actually doing.

Samuel Arbesman is a Senior Adjunct Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado and a Visiting Scholar in Philosophy at the University of Kansas. Follow him on Twitter at @arbesman.


79 Theses on Technology:
The Spectrum of Attention

“We should evaluate our investments of attention,” Jacobs urges in Thesis #7, “at least as carefully and critically as our investments of money.” But we will be in a better position to undertake such an evaluation when we understand exactly what we are talking about when we talk about attention, which is a word that—despite its importance—is never defined by Jacobs in the 79 Theses.

It’s easy to assume that “attention” is experienced in the same way by everyone. But as Matthew Crawford’s recent work has argued, attention has been imagined, and thus experienced, differently over time. Attention names various states or activities that we might do well to distinguish.

We can define attention first as “intently focusing on one object or task.” Reading a long, demanding text is one example of this kind of attention. This sort of attention is the subject of Nicholas Carr’s Atlantic article, “Is Google Making Us Stupid?”: “Immersing myself in a book or a lengthy article used to be easy,” Carr notes, but now “my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text.”

I suspect many of us share Carr’s experience. Not unlike the Apostle Paul, we lament, “What I want to pay attention to, I cannot. What I do not want to pay attention to, to that I do.” This failure to direct our attention presents itself as a failure of the will, and it assumes at some level that I am, as an autonomous subject, responsible for this failure (for more on this point, I suggest Chad Wellmon’s exchange with Jacobs).

But sometimes we talk about attention in a slightly different way; we speak of it as openness to the world, without any particular focal point. Sometimes the language of presence is used to articulate this kind of attention: Are we living in the moment? It is also the sort of attention that is advocated by proponents of “mindfulness,” to which Jacobs devoted two theses:

11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.

13. The only mindfulness worth cultivating will be teleological through and through.

On the surface, the two ways of talking about attention that I’ve outlined contradict each other. Directed attention is inconceivable without an object (mental or material) to sustain it, but no object would appear apart from an already existing form of attention.

Much depends on what exactly is meant by “mindfulness,” but I think we might be able to preserve a valuable distinction while still heeding Jacobs’s critique. If “mindfulness” functions, for instance, as a clearing of mental space in order to make directed attention possible, then the telos of mindfulness would be directed attention itself.

Attention as Dance

We can think of attention as a dance whereby we both lead and are led. This image suggests that receptivity and directedness do indeed work together. The proficient dancer knows when to lead and when to be led, and she also knows that such knowledge emerges out of the dance itself. This analogy reminds us, as well, that attention is the unity of body and mind making its way in a world that solicits its attention. The analogy also raises a critical question: How ought we to conceive of attention given that we are embodied creatures?

Maurice Merleau-Ponty can help us here. In Phenomenology of Perception, Merleau-Ponty discusses the shortcomings of both empiricist and intellectualist (rationalist) approaches to attention and makes the following observation: “Empiricism does not see that we need to know what we are looking for, otherwise we would not go looking for it; intellectualism does not see that we need to be ignorant of what we are looking for, or, again, we would not go looking for it.”

This simultaneous knowing and not-knowing seems to me another way of talking about attention as both openness to the world and as a directed work of the mind. It is a work of both receptivity, of perceiving the world as a gift, and care, of willfully and lovingly attending to particular aspects of the world. And, as Merleau-Ponty goes on to argue, attention is also a form of embodied perception that construes the world as much as it registers it. In this sense, our attention is never merely picking out items in the world (see Crawford on this idea); rather, attention is always interpreting the world in keeping with the desires and demands of an embodied being at a particular moment.

To a hiker on a long walk, for example, a stone is a thing to step around and is registered as such without conscious mental effort. It is attended to by the body in motion more than by the cogitating mind. To a geologist on a walk, on the other hand, a stone may become an object of urgent intellectual inquiry.

Both of these instances of perceiving-as result from subjective prior experience. The expert hiker moves along at a steady pace making countless adjustments and course corrections as a matter of bodily habit. The geologist, likewise, has trained his perception through hours of intellectual labor. In either situation, a novice might fail to hike as adroitly or notice the geologically interesting stone. Merleau-Ponty calls this repertoire of possible perceptions the “intentional arc,” which subtends “the life of consciousness—cognitive life, the life of desire or perceptual life.”

This example suggests two poles of attention, bodily and mental. But these are not mutually exclusive binaries. Rather, they constitute a spectrum of possibilities, from the dominance of conscious mental activity at one end to the other end, where non-conscious bodily activity is paramount. Consider the person lost deep in thought or a daydream. This person is deeply attentive, but not to his surroundings or to sensory information. Such a person would have to be called back to an awareness of his body and his surroundings.

By contrast, we may imagine the athlete, musician, or dancer who is, to borrow Mihály Csíkszentmihályi’s formulation, “in the flow.” Like the thinker or daydreamer, they, too, are in a state of deep attention, but in a different mode. Conscious thought would, in fact, disrupt their state of attention. We may complicate this picture even further by observing how the hiker “in the flow” might be lost in thought and remain an expert navigator of the terrain.

Attention Mediated Through Technology

But where does technology fit into our model? That is, after all, where we began and where Jacobs directs our attention. Perhaps there’s another spectrum intersecting with the one running from the bodily to the mental: one that runs from mediated to unmediated forms of attention.

Consider our hiker one more time. Imagine that she is now equipped with a walking stick. Aspects of her attending to the world through which she makes her way are now mediated by the walking stick. Of course, the walking stick is an apt tool for this particular context and extends the hiker’s perceptions in useful ways. (It would be very different, for instance, if the hiker were walking about with a garden hose.)

Imagine, however, giving the hiker a different tool: a smartphone. The smartphone mediates perception as well. In the act of taking a picture, for example, the landscape is seen through the lens. But a subtler act of mediation is at work even when the smartphone’s camera is not in use. Smartphone in hand, the hiker might now perceive the world as a field of possible images. This may, for example, direct attention up from the path toward the horizon, causing even our experienced hiker to stumble.

We may be tempted to say that the hiker is no longer paying attention, that the device has distracted her. But this is, at best, only partly true. The hiker is still paying attention. But her attention is of a very different sort than the “in the flow” attention of a hiker on the move. Without the smartphone in hand, the hiker might not stumble—but she might not notice a particularly striking vista either.

So along one axis, we range from bodily to mental forms of attention. Along the other, we range from mediated to unmediated forms of attention. (Granted that our attention is never, strictly speaking, absolutely unmediated.) This yields a range of possibilities among the following categories: “bodily mediated,” “bodily unmediated,” “mental mediated,” and “mental unmediated.” (Consider the following as ideal types in each case: the musician, the dancer, the scientist, and the philosopher.)

Graph: the spectrum of attention, with bodily and mental forms along one axis and mediated and unmediated forms along the other.

How does conceiving of attention in this way help us?

This schema yields a series of questions we may ask as we seek to evaluate our investments of attention. What kind of attention is required in this context? To what aspects of the world does a device invite me to pay attention? Does a device or tool encourage mental forms of attention when the context is better suited to bodily forms of attention? Is a device or tool encouraging me to direct my attention, when attentive openness would be more useful? What device or tool would best help me deploy the kind of attention required by the task before me?

The result of this exploration has been to break up the opposition of device to attention, an opposition that, I should say, I don’t think Jacobs himself advocates. Instead, my hope is to expand our conceptual tool kit so that we might make better judgments regarding our devices and our attention to the world.

L.M. Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing.

Photo: Heinrich Vogeler, Sehnsucht (Träumerei), c.1900, via Wikimedia Commons, public domain


79 Theses on Technology:
Things That Want—A Second Reply to Alan Jacobs

I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.

But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” Such is for Jacobs the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.

I think my differences with Jacobs take three directions: one rhetorical, another ontological, and a third ethical. I will discuss them each a bit here.

To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account of affordances, James Gibson described them as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.

But intentionality as I am calling it here goes beyond the artifacts themselves, to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those broader practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced, only becoming explicit when they are taken away.

So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.

Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustrations you feel when the plumbing goes awry. This frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.

Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence, the issue of voluntarism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be a product of our will? What do we do with animals, especially pets? What do we do with the colloquial expression, “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.

Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.

I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.

It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our life—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, some intending human.

To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).

So far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—it is only by coming to grips with the profound and active power of things that we best recognize that resistance to technology is, as Jacobs correctly argues, a cultural project, not a merely personal one, let alone primarily a definitional one.

So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and ask what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again, of course technologies want!


79 Theses on Technology:
The Hand That Holds the Smartphone


Alan Jacobs poses a few questions to his readers: “What must I pay attention to?” “What may I pay attention to?” and “What must I refuse attention to?” These questions direct readers to understand their own positions in the world in terms of attention. They encourage reflection. Instead of directing the reader’s focus outward to ponder general, more abstract relations between “technology” and “society,” they return us to our own bodies and suggest that the hand that swipes the iPhone, your hand, deserves attention.

Jacobs formulates only two other theses as questions (#9, #60), and both are posed from a seemingly universal standpoint, without a social location or even an implied interlocutor. However, some of Jacobs’s concerns about our current unhappy union with attention-demanding devices seem to emerge from a specific social location. While these concerns may ring true for a large segment of higher-income, well-educated adults, who do in fact own smartphones in greater numbers than the rest of the US population, they may fall short of describing the experiences of many other users.

For example, #70: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Who are the “always-connected”? The McDonald’s worker whose algorithmically determined shifts are apt to change with less than half a day’s notice? Or one of the 10% of Americans who rely on their smartphones to access the Internet to do their banking, look for a job, and let their child do homework?

People who rely on their smartphones for Internet access are more likely to be young, low-income, and non-white, the same population with some of the highest levels of unemployment. With the migration of most job-seeking to online databases and applications, all members of the “always-connected” might not experience the “pleasures of disconnection” in the same way as the middle class knowledge worker with high-speed Internet access at home and at work. In reality, the “always-connected” is a large and diverse group, and is quickly becoming even larger and even more diverse.

Your hand isn’t the only hand that comes in contact with your phone, of course; it is only the last in a long chain of designers, manufacturing workers, and marketing gurus. Jacobs points this out in the case of algorithms (Thesis #54: “The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.”), but it bears extending this line of thinking to other theses about the ideologies that run through contemporary discourse on technology.

Consider Thesis #41, “The agency that in the 1970s philosophers and theorists ascribed to language is now being ascribed to technology” and #44, “We try to give power to our idols so as to be absolved of the responsibilities of human agency”—who are the agents in these theses? Who is doing the ascribing? Who seeks absolution?

Kevin Kelly, the author Jacobs points to as a prime example of techno-enthusiasm, was a founding editor of Wired and has spent a lot of time talking to technology executives over the past several decades. Kelly’s ideas have often been translated into marketing strategies that soon enter into the public consciousness—like the sumptuously edited commercial for the Apple Watch in which the watch operates entirely of its own accord, no human required!—where they shape our desires and understandings of our relationships with our devices.

It’s through the image of a series of hands grasping, texting, and swiping away that my attention is drawn to the people at the other end of the technologies that shape our lives. As Jacobs points out, technology doesn’t want anything: “we want, with technology as our instrument.” But the question of who we are isn’t just idle sociological speculation. It’s vital to imagining alternative arrangements of both people and technology, as well as more humane practices that may benefit us all.

Julia Ticona is a doctoral candidate in the sociology department at the University of Virginia and a dissertation fellow at the Institute for Advanced Studies in Culture. Her work focuses on the cultures of technology and everyday life.

Photo: Anatomical study of hands, public domain.



79 Theses on Technology:
Piper to Jacobs—No Comment

In his 79 Theses, Alan Jacobs hits upon one of the most important transformations affecting the technology of writing today. “Digital textuality,” writes Jacobs in Thesis 26, “offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.” One could remove “scholarly” from this sentence and still capture the essential point: In the interconnected, intergalactic Internet, everything is commentary.

For Jacobs, commentary is about responsiveness and the way we encode ethics into our collective electronic outpourings. Nothing could feel further from the actual comments one encounters online today. As Jacobs points out, “Comment threads seethe with resentment,” not only at what has been written, but at their own secondary status as emotions, or rather one emotion. In a world where we imagine writing to be about originality, the comment can only ever be angry. In response, we either turn them off (as is the case with this blog) or we say “No comment.” Withholding commentary is a sign of resistance or power.

Of course, this was not always the case. Commentary was once imagined to be the highest form of writing, a way of communing with something greater than oneself. It was not something to be withheld or spewed, but involved a complex process of interpretation and expression. It took a great deal of learning.

Hunayn ibn Ishaq al-'Ibadi, 809?-873 (known as Joannitius). Isagoge Johannitii in Tegni Galeni.

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed, commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary—a privileging of someone else’s point of view over our own—was a product of corporeal labor. Our bodies shaped our mind’s eye.

Not all is lost today. While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia. Whether you like it or not, that blog or corporate presence you worked so hard to create can be layered with the world’s thoughts. Instead of writing up here and commenting down there, it reverses the hierarchy and places annotating on top. Needless to say, it has a lot of people worried.

I personally prefer the vision of “annotation” to commentary. Commentary feels very emulative to me—it tries to double as writing in a secondary space. Annotation by contrast feels more architectural and versatile. It builds, but also branches. It is never finished, nor does it aim to be so. It intermingles with the original text more subtly than the here/there structure of commentary. But whether you call it annotation or commentary, the point is the same—to take seriously the writer’s responsiveness to another person.

Missing from these models is pedagogy. The annotated web gives us one example of how to remake the technology of writing to better accommodate responsiveness. It’s a profound first step, one that will by no means be universally embraced (which should give us some idea of how significant it is).

But we do not yet have a way of teaching this to new (or old) writers. Follow the curricular pathways from the lockered hallways of elementary school to the bleak cubicles of higher education and you will still see the blank piece of paper or its electronic double as the primary writing surface. The self-containment of expression is everywhere. It is no wonder that these writers fail to comment well.

It’s one thing to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD? What if we trained people to build and create in the annotated web instead of on pristine planes of remediated paper? Now that would be different.

Andrew Piper is Associate Professor and William Dawson Scholar in the Department of Languages, Literatures, and Cultures at McGill University.
