
79 Theses on Technology:
Of Techniques and “Technology”


Anatomy of a Blogger, after Diderot’s Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers by Mike Licht via flickr

Editor’s Note: Earlier in the spring, Alan Jacobs drew up his 79 Theses on Technology, a provocative document that has elicited much commentary from our readers. John Durham Peters joins the fray here, commenting on Theses 64 through 70.

64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.

65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.

66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.

67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.

68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.

69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.

70. The always-connected forget the pleasures of disconnection, then become impervious to them.

No doubt, writing is an intensely physical bio-mechanical activity. The back hurts, the neck cranes, the eyes sting, the head aches, the view out the window is consulted for the thousandth time. The inscription of words exacts a tax of muscular and nervous exertion. And no doubt, the most minute choices in writing technique make worlds of difference. Nietzsche thought writing while seated a sin against the Holy Ghost: only in strolling did words have for him truth.

But let us not confuse technology and technique. Technology once meant the study of the productive arts and sciences (as in the Massachusetts Institute of Technology); now the term has been inflated to cover material devices of all kinds and to serve as a gas-bag for intellectuals to punch. Techniques are humble acts we do with hands, voices, eyes, feet, spine, and other embodied parts that bring forth mind into the world. We humans never do anything without technique, so we shouldn’t pretend there is any ontological difference between writing by hand, keyboarding, and speaking, or that any one of them is more original or pure than the others. We are technical all the way down in body and mind.

The age of ubiquitous computing has yielded, among other things, a florid genre of opt-out narratives, and I hope I do not espy in these theses another such tendency. Only by the orchestration of technologies can you catch a glimpse of a technology-free world. The more intensely made our environment is, the more actively its designers supply us with shock absorbers. The default images for the background of my desktop computer are all resolutely pastoral—not a sign of infrastructure, globalization, coltan, carbon, or human labor among them. I find tulips, a rising moon, cloudscapes, seascapes, and windblown desert sands, but no data, email, calendars, or bills, and certainly no human presence. Just how did this blue flower happen to sprout amid all the silicon? With heartfelt pleas that I “just have to watch,” my students send me YouTube videos that explain why we need to unplug, go outside, and seek real human contact. If you listen to the machine telling you how to get out of it, you only get sucked into it more, like a con artist who lulls you into a sense of trust by telling you that he is conning you. The promised liberation from technology is usually just another technology that you don’t recognize as such. This is one reason why a fuller appreciation of our diverse techniques is so vital.

Tools are all we have, but each one sets us in a very different horizon. Technology only risks being an idol because we don’t appreciate our techniques well enough. Writing with two hands on a keyboard, dictating to a person or a machine, writing with chalk, quill, pencil, or pen—each embodies mind in a different way. Blessed be the back pain, as it reminds us that we are not immaterial beings flying through cyberspace.

I don’t understand the term “simplest” applied to a tool. Tools interact with mind and body. Compass and square could build gothic cathedrals. Piano and notepaper could yield symphonies. The more basic the tool, the harder it is to master. Who among us has yet learned how to speak, or walk, or think? The real challenges lie in the most basic acts. Some day, I’d like to write a really good sentence. Some day, I’d like to play a beautiful scale in C major. Some day, I’d like to say the right word to another person. The more basic the task, the more fundamental the challenge and the more difficult the tool.

John Durham Peters is the A. Craig Baird Professor of Communication Studies at the University of Iowa. His most recent book, The Marvelous Clouds: Toward a Philosophy of Elemental Media, has just been released by the University of Chicago Press.

. . . . . . . .



79 Theses on Technology:
Our Detachment From Technology


When reading Alan Jacobs’s 79 theses, three jumped out at me:

55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”

56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.

58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok

These theses suggest a single issue: We have become increasingly detached from our software, both in how it works and how it is built.

The algorithms involved in much of our software are each designed to accomplish a specific task. When an algorithm was a single snippet of code or a tiny computer program, it could be read, understood, debugged, and even improved. Similarly, computing once involved regular interactions at the level of the command line. There was little distance between the code and the user.
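To make that closeness concrete, here is the kind of self-contained snippet an earlier era of computing put directly in front of its users. The example is mine, not Arbesman’s, and is only an illustration:

    # A complete, inspectable algorithm: every step of its behavior is
    # visible on the page, so it can be read, tested, and improved by hand.
    def moving_average(values, window=3):
        if window <= 0:
            raise ValueError("window must be positive")
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]

    print(moving_average([10, 12, 11, 15, 14]))  # [11.0, 12.666..., 13.333...]

Nothing here is hidden; the whole “algorithm” fits within a single field of view, which is precisely the closeness the following paragraphs describe us losing.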

Since the early era of command lines and prompts, software has become increasingly complex. It has also become increasingly shielded from the user. These are not necessarily bad changes. More sophisticated technology is more powerful and has greater functionality; giving it a simpler face prevents it from being overwhelming to use. We don’t need to enter huge numbers of commands or parameters to get something to work. We can just swipe our fingers and our intentions are intuited.

Thanks to these changes, however, each of us has become more distant from the inner workings of our machines. I’ve written elsewhere about how we must strive to become closer to our machines and bridge the gap between expert and user. This is difficult in our era of iPads and graphical interfaces, and often it doesn’t even seem that important. However, since these technologies affect so many parts of our lives, I think we need the possibility of closeness: We need gateways to understanding our machines better. Without such gateways, our responses to our machines will tend to be driven by fear, veneration, and disdain.

Our detachment from how algorithms and software operate has caused a gross misunderstanding of how technology works. We find it to be far more inscrutable than it really is, forgetting that all technology was designed by fallible people. We respond to this seemingly inscrutable power by imputing to it a beauty and sophistication that is not there. (For more on this, see Ian Bogost’s observation that many people use the word “algorithm” in an almost religious manner.)

Veneration of the algorithm as something inordinately impressive is detrimental to our ability to engage with technology. Software is often incredibly kludgy and chaotic, far from worthy of worship. Such veneration is not so far from fearing technology simply because we can’t understand it: fear and veneration are closely related, and both make algorithms out to be more than they are. (This is the subject of Jacobs’s Theses 55 and 56, though stated in rather more extreme form than I would put it.)

But what about disdain? How does this work? When a device suggests the wrong word or phrase in a text or sends delivery trucks on seemingly counterintuitive routes, we disdain the device and its algorithms. Their outputs seem so self-evidently wrong that we are often filled with a sense of superiority, mocking these algorithms’ shortcomings or declaring them superfluous.

Sometimes, our expertise does fall short and complex logic can seem like stupidity. But David Auerbach, writing in Nautilus, offered this wonderful story that shows that something else might be going on:

Deep Blue programmer Feng-Hsiung Hsu writes in his book Behind Deep Blue that during the match, outside analysts were divided over a mysterious move made by the program, thinking it either weak or obliquely strategic. Eventually, the programmers discovered that the move was simply the result of a bug that had caused the computer not to choose what it had actually calculated to be the best move—something that could have appeared as random play.

In this case, ignorance prevented observers from understanding what was going on.
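It is worth seeing just how small such a bug can be. The following toy sketch is hypothetical (the function, the moves, and the scores are invented; this is not Hsu’s code): a move selector whose single wrong index produces play that looks either weak or obliquely strategic.

    # Hypothetical sketch of a Deep Blue-style selection bug.
    def choose_move(moves, evaluate):
        # Rank moves from best to worst by their evaluation score.
        scored = sorted(moves, key=evaluate, reverse=True)
        return scored[1]  # BUG: off-by-one; the intended line is scored[0]

    # A toy evaluation table: higher scores are better for the engine.
    scores = {"capture queen": 9.0, "develop knight": 1.2, "quiet pawn push": 0.5}
    print(choose_move(list(scores), scores.get))  # "develop knight", not the best move

To an observer who cannot read the source, the runner-up move seems mysterious, perhaps deep; to anyone who can, it is a single wrong index.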

Is complex logic indistinguishable from stupidity? I don’t think so. Our response to a process we don’t understand may be closer to the nervous laughter of ignorance than a feeling of superiority. We call these algorithms stupid not because we recognize some authentic algorithmic inadequacy in them. We call them stupid because to admit a certain humility in the face of their increasing complexity would be a display of weakness.

When I took an artificial intelligence course in college and learned the algorithms behind programs for playing board games or constructing plans, I didn’t feel superior—I felt a kind of sadness. I had seen behind the screen and found these processes sophisticated, but fairly mundane. Most complex technology is this way. But when each of us encounters a surprising and apparently stupid output, if we don’t understand its origins, it is a lot easier to mock the system than to feel humbled, or even disappointed, at discovering its true structure.

These responses to technology are not the everyday user’s fault. Many of the creators of these technologies want the user to attribute a certain power to these algorithms and so have protected them behind layers of complexity. Ultimately, I think the most appropriate response is intellectual humility in the face of technology from which we have become increasingly detached. Only then can we engage with algorithms and try to see, even if only for a moment, what they are actually doing.

Samuel Arbesman is a Senior Adjunct Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado and a Visiting Scholar in Philosophy at the University of Kansas. Follow him on Twitter at @arbesman.

. . . . . . . .


79 Theses on Technology:
Things That Want—A Second Reply to Alan Jacobs

I don’t know exactly what Alan Jacobs wants. But I know what my keyboard wants. That difference—a difference in my knowledge of the intentionality of things—is reason for me to conclude that Alan Jacobs and my keyboard are two different kinds of things. There is, we’d say, an ontological difference between Alan Jacobs and my keyboard. There is a functional difference as well. And so many more differences. I acknowledge this. The world is not flat.

But Jacobs differentiates himself from my keyboard based on “wanting” itself. Alan Jacobs wants. Keyboards—mine or others—don’t “want.” Such is for Jacobs the line between Alan Jacobs and keyboards. If we can regulate our language about things, he suggests, we can regulate things. I would rather just learn from our language, and from things, and go from there.

I think my differences with Jacobs take three directions: one rhetorical, another ontological, and a third ethical. I will discuss each of them briefly here.

To start, I think that machines and other technologies are full of meaning and significance, and that they do in fact give meaning to our lives. Part of their meaningfulness is found in what I might call their “structure of intention,” or “intentionality.” This includes what design theorists call “affordances.” In the classic account of affordances, James Gibson described them as the latent “action possibilities” of things in relation to their environment. Design theorists tend to take a more straightforward approach: plates on doors afford pushing; C-shaped bars affixed to doors afford pulling; and knobs afford either action. Likewise, buttons on car dashboards afford pushing, whereas dials afford turning.

But intentionality as I am calling it here goes beyond the artifacts themselves, to include the broader practices and discourses in which they are embedded. Indeed, the “intentionality” of a thing is likely to be stronger where those broader practices and discourses operate at the level of assumption rather than explicit indoctrination. So much of the meaningfulness of things is tacitly known and experienced, only becoming explicit when they are taken away.

So there are things, their affordances, and the practices and discourses in which they are embedded. And here I think it is rhetorically legitimate, ontologically plausible, and ethically justified to say that technologies can want.

Rhetorically, every culture animates its things through language. I do not think this is mere embellishment. It entails a recognition that non-human things are profoundly meaningful to us, and that they can be independent actors as they are “activated” or “deactivated” in our lives. (Think of the frustrations you feel when the plumbing goes awry. This frustration is about “meaning” in our lives as much as it is about using the bathroom.) To say technologies “want,” as Kevin Kelly does, is to acknowledge rhetorically how meaningful non-human things are to us; it is not to make a category mistake.

Ontologically, the issue hinges in part on whether we tie “wanting” to will, especially to the will of a single, intending human agent (hence the issue of voluntarism). If we tether wanting to will in a strong sense, we end up in messy philosophical terrain. What do we do with instinct, bodily desires, sensations, affections, and the numerous other forms of “wanting” that do not seem to be a product of our will? What do we do with animals, especially pets? What do we do with the colloquial expression, “The plant wants water”? Such questions are well beyond the scope of this response. I will just say that I am skeptical of attempts to tie wanting to will because willfulness is only one kind of wanting.

Jacobs and I agree, I think, that the most pressing issue in saying technologies want is ethical. Jacobs thinks that in speaking of technologies as having agency, I am essentially surrendering agency to technical things. I disagree.

I think it is perfectly legitimate and indeed ethically good and right to speak of technologies as “wanting.” “To want” is not simply to exercise a will but rather more broadly to embody a structure of intention within a given context or set of contexts. Will-bearing and non-will-bearing things, animate and inanimate things, can embody such a structure of intention.

It is good and right to call this “wanting” because “wanting” suggests that things, even machine things, have an active presence in our life—they are intentional. They cannot be reduced to mere tools or instruments, let alone “a piece of plastic that when depressed activates an electrical current.” Moreover, this active presence cannot be neatly traced back to their design and, ultimately, some intending human.

To say the trigger wants to be pulled is not to say only that the trigger “was made for” pulling. It is not even to say that the trigger “affords” pulling. It is to say that the trigger may be so culturally meaningful as to act upon us in powerful ways (as indeed we see with guns).

So far from leading, as Jacobs claims, to the “Borg Complex”—the belief that resistance to technology is futile—it is only by coming to grips with the profound and active power of things that we best recognize that resistance to technology is, as Jacobs correctly argues, a cultural project, not a merely personal one, let alone primarily a definitional one.

So rather than trying to clean up or correct our language with respect to things (technologies don’t want!), I think we ought to begin by paying closer attention to our language about things and ask what we may learn from it. Yes, we will learn of our idolatries, ideologies, idiocies, and lies. But we may also learn some uncomfortable truths. So I will say it again, of course technologies want!

. . . . . . . .


79 Theses on Technology:
The Hand That Holds the Smartphone


Alan Jacobs poses a few questions to his readers: “What must I pay attention to?” “What may I pay attention to?” and “What must I refuse attention to?” These questions direct readers to understand their own positions in the world in terms of attention. They encourage reflection. Instead of directing the reader’s focus outward to ponder general, more abstract relations between “technology” and “society,” they return us to our own bodies and suggest that the hand that swipes the iPhone, your hand, deserves attention.

Jacobs formulates only two other theses as questions (#9, #60), and both are posed from a seemingly universal standpoint without a social location or even an implied interlocutor. However, some of Jacobs’s concerns about the current unhappy union with our attention-demanding devices seem to emerge from a specific social location. While these concerns may ring true for a large segment of higher-income, well-educated adults, who do in fact own smartphones in greater numbers than the rest of the US population, they may fall short of describing the experiences of many other users.

For example, #70: “The always-connected forget the pleasures of disconnection, then become impervious to them.” Who are the “always-connected”? The McDonald’s worker whose algorithmically determined shifts are apt to change with less than a half-day’s notice? Or one of the 10% of Americans who rely on their smartphones to access the Internet to do their banking, look for a job, and let their child do homework?

People who rely on their smartphones for Internet access are more likely to be young, low-income, and non-white, the same population with some of the highest levels of unemployment. With the migration of most job-seeking to online databases and applications, all members of the “always-connected” might not experience the “pleasures of disconnection” in the same way as the middle-class knowledge worker with high-speed Internet access at home and at work. In reality, the “always-connected” is a large and diverse group, and is quickly becoming even larger and even more diverse.

Your hands aren’t the only hands that come in contact with your phone, of course; they are just the last in a long chain of designers, manufacturing workers, and marketing gurus. Jacobs points this out in the case of algorithms (Thesis #54: “The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.”), but it bears extending this line of thinking to other theses about the ideologies that run through contemporary discourse on technology.

Consider Thesis #41, “The agency that in the 1970s philosophers and theorists ascribed to language is now being ascribed to technology” and #44, “We try to give power to our idols so as to be absolved of the responsibilities of human agency”—who are the agents in these theses? Who is doing the ascribing? Who seeks absolution?

Kevin Kelly, the author Jacobs points to as a prime example of techno-enthusiasm, was a founding editor of Wired and has spent a lot of time talking to technology executives over the past several decades. Kelly’s ideas have often been translated into marketing strategies that soon enter into the public consciousness—like the sumptuously edited commercial for the Apple Watch in which the watch operates entirely of its own accord, no human required!—where they shape our desires and understandings of our relationships with our devices.

It’s through the image of a series of hands grasping, texting, and swiping away that my attention is drawn to the people at the other end of the technologies that shape our lives. As Jacobs points out, technology doesn’t want anything, “we want, with technology as our instrument,” but the question of who “we” are isn’t just idle sociological speculation. It’s vital to imagining alternative arrangements of both people and technology, as well as more humane practices that may benefit us all.

Julia Ticona is a doctoral candidate in the sociology department at the University of Virginia and a dissertation fellow at the Institute for Advanced Studies in Culture. Her work focuses on the cultures of technology and everyday life.

Photo: Anatomical study of hands, public domain.


. . . . . . . .


79 Theses on Technology:
Piper to Jacobs—No Comment

In his 79 Theses, Alan Jacobs hits upon one of the most important transformations affecting the technology of writing today. “Digital textuality,” writes Jacobs in Thesis 26, “offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.” One could remove “scholarly” from this sentence and still capture the essential point: In the interconnected, intergalactic Internet, everything is commentary.

For Jacobs, commentary is about responsiveness and the way we encode ethics into our collective electronic outpourings. Nothing could feel further from the actual comments one encounters online today. As Jacobs points out, “Comment threads seethe with resentment,” not only at what has been written, but at their own secondary status, at being reduced to emotions, or rather to one emotion. In a world where we imagine writing to be about originality, the comment can only ever be angry. In response, we either turn comments off (as is the case with this blog), or we say “No comment.” Withholding commentary is a sign of resistance or power.

Of course, this was not always the case. Commentary was once imagined to be the highest form of writing, a way of communing with something greater than oneself. It was not something to be withheld or spewed, but involved a complex process of interpretation and expression. It took a great deal of learning.

Hunayn ibn Ishaq al-'Ibadi, 809?-873 (known as Joannitius). Isagoge Johannitii in Tegni Galeni.

The main difference between our moment and the lost world of pre-modern commentary that Jacobs invokes is of course a material one. In a context of hand-written documents, transcription was the primary activity that consumed most individuals’ time. Transcription preceded, but also informed, commentary (as practiced by the medieval Arab translator Joannitius). Who would be flippant when it had just taken weeks to copy something out? The submission that Jacobs highlights as a prerequisite of good commentary—a privileging of someone else’s point of view over our own—was a product of corporeal labor. Our bodies shaped our minds’ eye.

Not all is lost today. While comment threads seethe, there is also a vibrant movement afoot to remake the web as a massive space of commentary. The annotated web, as it’s called, has the aim of transforming our writing spaces from linked planes to layered marginalia. Whether you like it or not, that blog or corporate presence you worked so hard to create can be layered with the world’s thoughts. Instead of writing up here and commenting down there, it reverses the hierarchy and places annotating on top. Needless to say, it has a lot of people worried.
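The annotated web is easier to picture with a concrete object in hand. In the data model behind much of this movement (the W3C Web Annotation model, for instance; Piper himself names no particular standard), an annotation is not stored inside the page it discusses but lives apart from it and points into it. A minimal sketch in Python, with an invented URL and quotation:

    # A sketch of one layer of marginalia. Field names loosely follow
    # the W3C Web Annotation data model; the source URL and the quoted
    # text are invented for illustration.
    annotation = {
        "type": "Annotation",
        "body": {"type": "TextualBody", "value": "Compare Jacobs, Thesis 26."},
        "target": {
            "source": "https://example.com/essay",  # the page being annotated
            "selector": {
                "type": "TextQuoteSelector",  # anchored by quotation, not position
                "exact": "everything is commentary",
            },
        },
    }

Because the annotation carries its own anchor, any number of such layers can accumulate over the same page, whether or not its author invited them, which is exactly what worries the people Piper mentions.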

I personally prefer the vision of “annotation” to commentary. Commentary feels very emulative to me—it tries to double as writing in a secondary space. Annotation by contrast feels more architectural and versatile. It builds, but also branches. It is never finished, nor does it aim to be so. It intermingles with the original text more subtly than the here/there structure of commentary. But whether you call it annotation or commentary, the point is the same—to take seriously the writer’s responsiveness to another person.

Missing from these models is pedagogy. The annotated web gives us one example of how to remake the technology of writing to better accommodate responsiveness. It’s a profound first step, one that will by no means be universally embraced (which should give us some idea of how significant it is).

But we do not yet have a way of teaching this to new (or old) writers. Follow the curricular pathways from the lockered hallways of elementary school to the bleak cubicles of higher education and you will still see the blank piece of paper or its electronic double as the primary writing surface. The self-containment of expression is everywhere. It is no wonder that these writers fail to comment well.

It’s one thing to say commentary is back. It’s another to truly re-imagine how a second grader or college student learns to write. What if we taught commentary instead of expression, not just for beginning writers, but right on through university and the PhD? What if we trained people to build and create in the annotated web instead of on pristine planes of remediated paper? Now that would be different.

Andrew Piper is Associate Professor and William Dawson Scholar in the Department of Languages, Literatures, and Cultures at McGill University.

. . . . . . . .


79 Theses on Technology. For Disputation.

Alan Jacobs has written seventy-nine theses on technology for disputation. A disputation is an old technology, a formal technique of debate and argument that took shape in medieval universities in Paris, Bologna, and Oxford in the twelfth and thirteenth centuries. In its most general form, a disputation consisted of a thesis, a counter-thesis, and a string of arguments, usually buttressed by citations of Aristotle, Augustine, or the Bible.

But disputations were not just formal arguments. They were public performances that trained university students in how to seek and argue for the truth. They made demands on students and masters alike. Truth was hard won; it was to be found in multiple, sometimes conflicting traditions; it required one to give and recognize arguments; and, perhaps above all, it demanded an epistemic humility, an acknowledgment that truth was something sought, not something produced.

It is, then, in this spirit that Jacobs offers, tongue firmly in cheek, his seventy-nine theses on technology and what it means to inhabit a world formed by it. They are pithy, witty, ponderous, and full of life. And over the following weeks, we at the Infernal Machine will take Jacobs’s theses at their provocative best and dispute them. We’ll take three or four at a time and offer our own counter-theses in a spirit of generosity.

So here they are:

    1. Everything begins with attention.
    2. It is vital to ask, “What must I pay attention to?”
    3. It is vital to ask, “What may I pay attention to?”
    4. It is vital to ask, “What must I refuse attention to?”
    5. To “pay” attention is not a metaphor: Attending to something is an economic exercise, an exchange with uncertain returns.
    6. Attention is not an infinitely renewable resource; but it is partially renewable, if well-invested and properly cared for.
    7. We should evaluate our investments of attention at least as carefully and critically as our investments of money.
    8. Sir Francis Bacon provides a narrow and stringent model for what counts as attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.”
    9. An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? Or silence?”
    10. Attentiveness must never be confused with the desire to mark or announce attentiveness. (“Can I learn to suffer/Without saying something ironic or funny/On suffering?”—Prospero, in Auden’s The Sea and the Mirror)
    11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.
    12. That is, mindfulness reduces mental health to a single, simple technique that delivers its user from the obligation to ask any awkward questions about what his or her mind is and is not attending to.
    13. The only mindfulness worth cultivating will be teleological through and through.
    14. Such mindfulness, and all other healthy forms of attention—healthy for oneself and for others—can only happen with the creation of and care for an attentional commons.
    15. This will not be easy to do in a culture for which surveillance has become the normative form of care.
    16. Simone Weil wrote that “Attention is the rarest and purest form of generosity”; if so, then surveillance is the opposite of attention.
    17. The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.
    18. The intensity of those battles is increased by a failure by any of the parties to consider the importance of intimacy gradients.
    19. “And weeping arises from sorrow, but sorrow also arises from weeping.”—Bertolt Brecht, writing about Twitter
    20. We cannot understand the internet without perceiving its true status: The Internet is a failed state.
    21. We cannot respond properly to that failed-state condition without realizing and avoiding the perils of seeing like a state.
    22. If instead of thinking of the internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.
    23. The internet groans in travail as it awaits its José María Arizmendiarrieta.

    24. Useful strategies of resistance require knowledge of technology’s origin stories.
    25. Building an alternative digital commons requires reimagining, which requires renarrating the past (and not just the digital past).
    26. Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.
    27. Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative.
    28. Comment threads too often seethe with resentment at the status of comment itself. “I should be the initiator, not the responder!”
    29. Only a Bakhtinian understanding of the primacy of response in communication could genuinely renew online discourse.
    30. Nevertheless certain texts will generate communities of comment around them, communities populated by the humbly intelligent.
    31. Blessed are they who strive to practice commentary as a legitimate, serious genre of responsiveness to others’ thoughts.
    32. And blessed also are those who discover how to write so as to elicit genuine commentary.
    33. Genuine commentary is elicited by the scriptural but also by the humble—but never by the (insistently) canonical.
    34. “Since we have no experience of a venerable text that ensures its own perpetuity, we may reasonably say that the medium in which it survives is commentary.”—Frank Kermode
    35. We should seek technologies that support the maximally beautiful readerly sequence of submission, recovery, comment.
    36. If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

    37. “Western literature may have more or less begun, in Aeschylus’s Oresteia, with a lengthy account of a signal crossing space, and of the beacon network through whose nodes the signal’s message (that of Troy’s downfall) is relayed—but now, two and a half millennia later, that network, that regime of signals, is so omnipresent and insistent, so undeniably inserted or installed at every stratum of existence, that the notion that we might need some person, some skilled craftsman, to compose any messages, let alone incisive or ‘epiphanic’ ones, seems hopelessly quaint.”—Tom McCarthy
    38. To work against the grain of a technology is painful to us and perhaps destructive to the technology, but occasionally necessary to our humanity.
    39. “Technology wants to be loved,” says Kevin Kelly, wrongly: But we want to invest our technologies with human traits to justify our love for them.
    40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
    41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
    42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
    43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
    44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.
    45. “In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end.”—George Bernard Shaw in 1907, or Kevin Kelly last week
    46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.
    47. Cyborgs lack humor, because the fusion of person and tool disables self-irony. The requisite distance from environment is missing.
    48. To project our desires onto our technologies is to court permanent psychic infancy.
    49. Though this does not seem to be widely recognized, the “what technology wants” model is fundamentally at odds with the “hacker” model.
    50. The “hacker” model is better: Given imagination and determination, we can bend technologies to our will.
    51. Thus we should stop thinking about “what technology wants” and start thinking about how to cultivate imagination and determination.
    52. Speaking of “what technology wants” is an unerring symptom of akrasia.
    53. The physical world is not infinitely redescribable, but if you had to you could use a screwdriver to clean your ears.
    54. The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.
    55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”
    56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.
    57. Perhaps Loki was right—and C. S. Lewis too: “I was not born to be free—I was born to adore and obey.”

    58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok
    59. Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.”
    60. What does it say about our understanding of human intelligence that we think it is something that can be assessed by a one-off “test”—and one that is no test at all, but an impression of the moment?
    61. To attribute intelligence to something is to disclaim responsibility for its use.
    62. The chief purpose of technology under capitalism is to make commonplace actions one had long done painlessly seem intolerable.
    63. Embrace the now intolerable.
    64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.
    65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.
    66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.
    67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.
    68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.
    69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.
    70. The always-connected forget the pleasures of disconnection, then become impervious to them.
    71. The Dunning-Kruger effect grows more pronounced when online and offline life are functionally unrelated.
    72. A more useful term than “Dunning-Kruger effect” is “digitally-amplified anosognosia.”
    73. More striking even than the anger of online commentary is its humorlessness. Too many people have offloaded their senses of humor to YouTube clips.
    74. A healthy comment thread is a (more often than not) funny comment thread.
    75. The protection of anonymity is one reason why people write more extreme comments online than they would speak in person—but not the only one.
    76. The digital environment disembodies language in this sense: It prevents me from discerning the incongruity between my anger and my person.
    77. Consistent pseudonymity creates one degree of disembodiment; varying pseudonymity and anonymity create infinite disembodiment.
    78. On the internet nothing disappears; on the internet anything can disappear.
    79. “To apply a categorical imperative to knowing, so that, instead of asking, ‘What can I know?’ we ask, ‘What, at this moment, am I meant to know?’—to entertain the possibility that the only knowledge which can be true for us is the knowledge we can live up to—that seems to all of us crazy and almost immoral.”—Auden

. . . . . . . .
