79 Theses on Technology. For Disputation.

Alan Jacobs has written seventy-nine theses on technology for disputation. A disputation is an old technology, a formal technique of debate and argument that took shape in medieval universities in Paris, Bologna, and Oxford in the twelfth and thirteenth centuries. In its most general form, a disputation consisted of a thesis, a counter-thesis, and a string of arguments, usually buttressed by citations of Aristotle, Augustine, or the Bible.

But disputations were not just formal arguments. They were public performances that trained university students in how to seek and argue for the truth. They made demands on students and masters alike. Truth was hard won; it was to be found in multiple, sometimes conflicting traditions; it required one to give and recognize arguments; and, perhaps above all, it demanded an epistemic humility, an acknowledgment that truth was something sought, not something produced.

It is, then, in this spirit that Jacobs offers, tongue firmly in cheek, his seventy-nine theses on technology and what it means to inhabit a world formed by it. They are pithy, witty, ponderous, and full of life. And over the following weeks, we at the Infernal Machine will take Jacobs’ theses at his provocative best and dispute them. We’ll take three or four at a time and offer our own counter-theses in a spirit of generosity.

So here they are:

    1. Everything begins with attention.
    2. It is vital to ask, “What must I pay attention to?”
    3. It is vital to ask, “What may I pay attention to?”
    4. It is vital to ask, “What must I refuse attention to?”
    5. To “pay” attention is not a metaphor: Attending to something is an economic exercise, an exchange with uncertain returns.
    6. Attention is not an infinitely renewable resource; but it is partially renewable, if well-invested and properly cared for.
    7. We should evaluate our investments of attention at least as carefully and critically as our investments of money.
    8. Sir Francis Bacon provides a narrow and stringent model for what counts as attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.”
    9. An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? Or silence?”
    10. Attentiveness must never be confused with the desire to mark or announce attentiveness. (“Can I learn to suffer/Without saying something ironic or funny/On suffering?”—Prospero, in Auden’s The Sea and the Mirror)
    11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.
    12. That is, mindfulness reduces mental health to a single, simple technique that delivers its user from the obligation to ask any awkward questions about what his or her mind is and is not attending to.
    13. The only mindfulness worth cultivating will be teleological through and through.
    14. Such mindfulness, and all other healthy forms of attention—healthy for oneself and for others—can only happen with the creation of and care for an attentional commons.
    15. This will not be easy to do in a culture for which surveillance has become the normative form of care.
    16. Simone Weil wrote that ‘Attention is the rarest and purest form of generosity’; if so, then surveillance is the opposite of attention.
    17. The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.
    18. The intensity of those battles is increased by a failure by any of the parties to consider the importance of intimacy gradients.
    19. “And weeping arises from sorrow, but sorrow also arises from weeping.”—Bertolt Brecht, writing about Twitter
    20. We cannot understand the internet without perceiving its true status: The Internet is a failed state.
    21. We cannot respond properly to that failed-state condition without realizing and avoiding the perils of seeing like a state.
    22. If instead of thinking of the internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.
    23. The internet groans in travail as it awaits its José María Arizmendiarrieta.

    24. Useful strategies of resistance require knowledge of technology’s origin stories.
    25. Building an alternative digital commons requires reimagining, which requires renarrating the past (and not just the digital past).
    26. Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.
    27. Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative.
    28. Comment threads too often seethe with resentment at the status of comment itself. “I should be the initiator, not the responder!”
    29. Only a Bakhtinian understanding of the primacy of response in communication could genuinely renew online discourse.
    30. Nevertheless certain texts will generate communities of comment around them, communities populated by the humbly intelligent.
    31. Blessed are they who strive to practice commentary as a legitimate, serious genre of responsiveness to others’ thoughts.
    32. And blessed also are those who discover how to write so as to elicit genuine commentary.
    33. Genuine commentary is elicited by the scriptural but also by the humble—but never by the (insistently) canonical.
    34. “Since we have no experience of a venerable text that ensures its own perpetuity, we may reasonably say that the medium in which it survives is commentary.”—Frank Kermode
    35. We should seek technologies that support the maximally beautiful readerly sequence of submission, recovery, comment.
    36. If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

    37. “Western literature may have more or less begun, in Aeschylus’s Oresteia, with a lengthy account of a signal crossing space, and of the beacon network through whose nodes the signal’s message (that of Troy’s downfall) is relayed—but now, two and a half millennia later, that network, that regime of signals, is so omnipresent and insistent, so undeniably inserted or installed at every stratum of existence, that the notion that we might need some person, some skilled craftsman, to compose any messages, let alone incisive or ‘epiphanic’ ones, seems hopelessly quaint.”—Tom McCarthy
    38. To work against the grain of a technology is painful to us and perhaps destructive to the technology, but occasionally necessary to our humanity.
    39. “Technology wants to be loved,” says Kevin Kelly, wrongly: But we want to invest our technologies with human traits to justify our love for them.
    40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
    41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
    42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
    43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
    44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.
    45. “In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end.”—George Bernard Shaw in 1907, or Kevin Kelly last week
    46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.
    47. Cyborgs lack humor, because the fusion of person and tool disables self-irony. The requisite distance from environment is missing.
    48. To project our desires onto our technologies is to court permanent psychic infancy.
    49. Though this does not seem to be widely recognized, the “what technology wants” model is fundamentally at odds with the “hacker” model.
    50. The “hacker” model is better: Given imagination and determination, we can bend technologies to our will.
    51. Thus we should stop thinking about “what technology wants” and start thinking about how to cultivate imagination and determination.
    52. Speaking of “what technology wants” is an unerring symptom of akrasia.
    53. The physical world is not infinitely redescribable, but if you had to you could use a screwdriver to clean your ears.
    54. The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.
    55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”
    56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.
    57. Perhaps Loki was right—and C. S. Lewis too: “I was not born to be free—I was born to adore and obey.”

    58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok
    59. Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.”
    60. What does it say about our understanding of human intelligence that we think it is something that can be assessed by a one-off “test”—and one that is no test at all, but an impression of the moment?
    61. To attribute intelligence to something is to disclaim responsibility for its use.
    62. The chief purpose of technology under capitalism is to make commonplace actions one had long done painlessly seem intolerable.
    63. Embrace the now intolerable.
    64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.
    65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.
    66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.
    67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.
    68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.
    69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.
    70. The always-connected forget the pleasures of disconnection, then become impervious to them.
    71. The Dunning-Kruger effect grows more pronounced when online and offline life are functionally unrelated.
    72. A more useful term than “Dunning-Kruger effect” is “digitally-amplified anosognosia.”
    73. More striking even than the anger of online commentary is its humorlessness. Too many people have offloaded their senses of humor to YouTube clips.
    74. A healthy comment thread is a (more often than not) funny comment thread.
    75. The protection of anonymity is one reason why people write more extreme comments online than they would speak in person—but not the only one.
    76. The digital environment disembodies language in this sense: It prevents me from discerning the incongruity between my anger and my person.
    77. Consistent pseudonymity creates one degree of disembodiment; varying pseudonymity and anonymity create infinite disembodiment.
    78. On the internet nothing disappears; on the internet anything can disappear.
    79. “To apply a categorical imperative to knowing, so that, instead of asking, ‘What can I know?’ we ask, ‘What, at this moment, am I meant to know?’—to entertain the possibility that the only knowledge which can be true for us is the knowledge we can live up to—that seems to all of us crazy and almost immoral.”—Auden

. . . . . . . .



Algorithms Who Art in Apps, Hallowed Be Thy Code

 

If you want to understand the status of algorithms in our collective imagination, Ian Bogost, author, game designer, and professor of media studies and interactive computing at Georgia Institute of Technology,  proposes the following exercise in his recent essay in the Atlantic: “The next time you see someone talking about algorithms, replace the term with ‘God’ and ask yourself if the sense changes any?”

If Bogost is right, then more often than not you will find the sense of the statement entirely unchanged. This is because, in his view, “Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers we have allowed to replace gods in our minds, even as we simultaneously claim that science has made us impervious to religion.” Bogost goes on to say that this development is part of a “larger trend” whereby “Enlightenment ideas like reason and science are beginning to flip into their opposites.” Science and technology, he fears, “have turned into a new type of theology.”

It’s not the algorithms themselves that Bogost is targeting; it is how we think and talk about them that worries him. In fact, Bogost’s chief concern is that how we talk about algorithms is impeding our ability to think clearly about them and their place in society. This is where the god-talk comes in. Bogost deploys a variety of religious categories to characterize the present fascination with algorithms.

Bogost believes “algorithms hold a special station in the new technological temple because computers have become our favorite idols.” Later on he writes, “the algorithmic metaphor gives us a distorted, theological view of computational action.” Additionally, “Data has become just as theologized as algorithms, especially ‘big data,’ whose name is meant to elevate information to the level of celestial infinity.” “We don’t want an algorithmic culture,” he concludes, “especially if that phrase just euphemizes a corporate theocracy.” The analogy to religious belief is a compelling rhetorical move. It vividly illuminates Bogost’s key claim: the idea of an “algorithm” now functions as a metaphor that conceals more than it reveals.

He prepares the ground for this claim by reminding us of earlier technological metaphors that ultimately obscured important realities. The metaphor of the mind as computer, for example, “reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.” Similarly, the metaphor of the machine, which is really to say the abstract idea of a machine, yields a profound misunderstanding of mechanical automation in the realm of manufacturing. Bogost reminds us that bringing consumer goods to market still “requires intricate, repetitive human effort.” Manufacturing, as it turns out, “isn’t as machinic nor as automated as we think it is.”

Likewise, the idea of an algorithm, as it is bandied about in public discourse, is a metaphorical abstraction that obscures how various digital and analog components, including human action, come together to produce the effects we carelessly attribute to algorithms. Near the end of the essay, Bogost sums it up this way:

The algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like ‘algorithm’ have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

But why does any of this matter? It matters, Bogost insists, because this way of thinking blinds us in two important ways. First, our sloppy shorthand “allows us to chalk up any kind of computational social change as pre-determined and inevitable,” allowing the perpetual deflection of responsibility for the consequences of technological change. The apotheosis of the algorithm encourages what I’ve elsewhere labeled a Borg Complex, an attitude toward technological change aptly summed by the phrase, “Resistance is futile.” It’s a way of thinking about technology that forecloses the possibility of thinking about and taking responsibility for our choices regarding the development, adoption, and implementation of new technologies. Secondly, Bogost rightly fears that this “theological” way of thinking about algorithms may cause us to forget that computational systems can offer only one, necessarily limited perspective on the world. “The first error,” Bogost writes, “turns computers into gods, the second treats their outputs as scripture.”
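
Bogost’s claim that “algorithm” is sloppy shorthand for multipart systems can be made concrete with a small, purely hypothetical sketch; every name and number below is invented for illustration and drawn from no real system. What gets reported in the singular as “the algorithm deciding” is, here as in practice, a pipeline of human judgments: weights someone tuned, a blocklist someone curates, a business rule someone negotiated.

```python
# A hypothetical "recommendation algorithm" (every name and number here is
# invented for illustration). What gets described as an autonomous algorithm
# is a pipeline of human decisions: weights someone tuned, a blocklist someone
# curates, a business rule someone negotiated.

# Weights chosen by an engineer after an A/B test reviewed by a product team.
WEIGHTS = {"clicks": 0.6, "shares": 0.3, "recency": 0.1}

# Exclusions maintained by human moderators.
EDITORIAL_BLOCKLIST = {"item-417", "item-902"}

# A boost agreed to in a partnership contract, not derived from any data.
PROMOTED_PARTNERS = {"partner-co"}


def score(item: dict) -> float:
    """Combine signals using the hand-tuned weights."""
    base = sum(WEIGHTS[k] * item.get(k, 0.0) for k in WEIGHTS)
    if item["source"] in PROMOTED_PARTNERS:
        base *= 1.25  # multiplier negotiated by the sales department
    return base


def recommend(items: list[dict], limit: int = 3) -> list[str]:
    """Return the ids 'the algorithm' recommends, after human-made exclusions."""
    eligible = [i for i in items if i["id"] not in EDITORIAL_BLOCKLIST]
    ranked = sorted(eligible, key=score, reverse=True)
    return [i["id"] for i in ranked[:limit]]


if __name__ == "__main__":
    catalog = [
        {"id": "item-101", "source": "newsroom", "clicks": 0.9, "shares": 0.2, "recency": 0.5},
        {"id": "item-417", "source": "newsroom", "clicks": 0.99, "shares": 0.9, "recency": 0.9},
        {"id": "item-230", "source": "partner-co", "clicks": 0.4, "shares": 0.1, "recency": 0.8},
    ]
    # item-417 never appears, however well it "scores": a human decision, not math.
    print(recommend(catalog))
```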

______________________

Bogost is right to challenge the quasi-religious reverence for technology. It is, as he fears, an impediment to clear thinking. And he is not the only one calling for the secularization of our technological endeavors. Computer scientist and virtual-reality pioneer Jaron Lanier has spoken at length about the introduction of religious thinking into the field of AI. In a recent interview, he expressed his concerns this way:

There is a social and psychological phenomenon that has been going on for some decades now:  A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

While Lanier’s concerns are similar to Bogost’s, Lanier’s use of religious categories is more concrete. Bogost deploys a religious frame as a rhetorical device, while Lanier uses it more directly to critique the religiously inflected expressions of a desire for transcendence among denizens of the tech world themselves.

But such expressions are hardly new. Nor are they limited to the realm of AI. In The Religion of Technology: The Divinity of Man and the Spirit of Invention, the distinguished historian of technology David Noble made the argument that “modern technology and modern faith are neither complements nor opposites, nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

Noble elaborates:

This is not meant in a merely metaphorical sense, to suggest that technology is similar to religion in that it evokes religious emotions of omnipotence, devotion, and awe, or that it has become a new (secular) religion in and of itself, with its own clerical caste, arcane rituals, and articles of faith. Rather it is meant literally and historically, to indicate that modern technology and religion have evolved together and that, as a result, the technological enterprise has been and remains suffused with religious belief.

Looking also at the space program, atomic weapons, and biotechnology, Noble devoted a chapter of his book to the history of artificial intelligence, arguing that AI research had often been inspired by a curious fixation on the achievement of god-like, disembodied intelligence as a step toward personal immortality. Many of the sentiments and aspirations that Noble identifies in figures as diverse as George Boole, Claude Shannon, Alan Turing, Edward Fredkin, Marvin Minsky, Daniel Crevier, Danny Hillis, and Hans Moravec—all of them influential theorists and practitioners in the development of AI—find their consummation in the Singularity movement. The movement envisions a time—2045 is frequently suggested—when the distinction between machines and humans will blur and humanity as we know it will be eclipsed. Before Ray Kurzweil, the chief prophet of the Singularity, wrote about “spiritual machines,” Noble had astutely anticipated how the trajectories of AI, Internet, Virtual Reality, and Artificial Life research were all converging in the age-old quest for immortality. Noble, who died quite suddenly in 2010, must have read the work of Kurzweil and company as a remarkable validation of his thesis in The Religion of Technology.

Interestingly, the sentiments that Noble documents alternate between the heady thrill of creating non-human Minds and non-human Life, on the one hand, and, on the other, the equally heady thrill of pursuing the possibility of radical life-extension and even immortality. Frankenstein meets Faust we might say. Humanity plays god in order to bestow god’s gifts on itself.

Noble cites one Artificial Life researcher who explains, “I feel like God; in fact, I am God to the universes I create,” and another who declares, “Technology will soon enable human beings to change into something else altogether [and thereby] escape the human condition.” Ultimately, these two aspirations come together into a grand techno-eschatological vision, expressed here by robotics specialist Hans Moravec:

Our speculation ends in a supercivilization, the synthesis of all solar system life, constantly improving and extending itself, spreading outward from the sun, converting non-life into mind …. This process might convert the entire universe into an extended thinking entity … the thinking universe … an eternity of pure cerebration.

Little wonder that Pamela McCorduck, who has been chronicling the progress of AI since the early 1980s, can say, “The enterprise is a god-like one. The invention—the finding within—of gods represents our reach for the transcendent.” And, lest we forget where we began, a more earth-bound, but no less eschatological hope was expressed by Edward Fredkin in his MIT and Stanford courses on “saving the world.” He hoped for a “global algorithm” that “would lead to peace and harmony.”

I would suggest that similar aspirations are expressed by those who believe that Big Data will yield a God’s-eye view of human society, providing wisdom and guidance that is otherwise inaccessible to ordinary human forms of knowing and thinking.

Perhaps this should not be altogether surprising. As the old saying has it, the Grand Canyon wasn’t formed by someone dragging a stick. This is just a way of saying that causes must be commensurate with the effects they produce. Grand technological projects such as space flight, the harnessing of atomic energy, and the pursuit of artificial intelligence are massive undertakings requiring stupendous investments of time, labor, and resources. What motives are sufficient to generate those sorts of expenditures? You’ll need something more than whim, to put it mildly. You may need something akin to religious devotion. Would we have attempted to put a man on the moon without the ideological spur of the Cold War, which cast space exploration as a field of civilizational battle for survival? Consider, as a more recent example, what drives Elon Musk’s pursuit of interplanetary space travel.

______________________

Without diminishing the criticisms offered by either Bogost or Lanier, Noble’s historical investigation into the roots of divinized or theologized technology reminds us that the roots of the disorder run much deeper than we might initially imagine. Noble’s own genealogy traces the origin of the religion of technology to the turn of the first millennium. It emerges out of a volatile mix of millenarian dreams, apocalyptic fervor, mechanical innovation, and monastic piety. Its evolution proceeds apace through the Renaissance, finding one of its most ardent prophets in the Elizabethan statesman and thinker Francis Bacon. Even through the Enlightenment, the religion of technology flourished. In fact, the Enlightenment may have been a decisive moment in the history of the religion of technology.

In his Atlantic essay, Bogost frames the emergence of techno-religious thinking as a departure from the ideals of reason and science associated with the Enlightenment. This is not altogether incidental to Bogost’s argument. When he talks about the “theological” thinking that suffuses our understanding of algorithms, Bogost is not working with a neutral, value-free, all-purpose definition of what constitutes the religious or the theological; there’s almost certainly no such definition available. Rather, he works (like Lanier and many others) with an Enlightenment understanding of Religion that characterizes it as Reason’s Other–as something a-rational if not altogether irrational, superstitious, authoritarian, and pernicious.

Noble’s work complicates this picture. The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put it another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace, but instead renamed them as, respectively, Progress, Utopia, and Technology. To borrow a phrase, the Enlightenment immanentized the eschaton. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a utopian vision, a heaven on earth, achieved by the ministrations of science and technology within the context of progress, an inexorable force driving history toward its utopian consummation.

As historian Leo Marx has put it, the West’s “dominant belief system turned on the idea of technical innovation as a primary agent of progress.” Indeed, the further Western culture proceeded down the path of secularization as it is traditionally understood, the more emphasis was placed on technology as the principal agent of change. Marx observed that by the late nineteenth century, “the simple republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of—as all but constituting—the progress of society.”

When the prophets of the Singularity preach the gospel of transhumanism, they are not abandoning the Enlightenment heritage; they are simply embracing its fullest expression. As Bruno Latour has argued, modernity has never perfectly sustained the purity of the distinctions that were the self-declared hallmarks of its own superiority. Modernity characterized itself as a movement of secularization and differentiation, what Latour, with not a little irony, labels processes of purification. Science, politics, law, religion, ethics—these are all sharply distinguished and segregated from one another in the modern world, distinguishing it from the primitive pre-modern world. But it turns out that these spheres of human experience stubbornly resist the neat distinctions modernity sought to impose. Hybridization unfolds alongside purification, and Noble’s work has demonstrated how the lines between technology, sometimes reckoned the most coldly rational of human projects, and religion are anything but clear.

But not just any religion. Earlier I suggested that when Bogost characterizes our thinking about algorithms as “theological,” he is almost certainly assuming a particular kind of theology. This is why it is important to classify the religion of technology more precisely as a Christian heresy. It is in Western Christianity that Noble found the roots of the religion of technology, and it is in the context of a post–Christian world that it currently flourishes.

It is Christian insofar as its aspirations are like those nurtured by the Christian faith, such as the conscious persistence of a soul after the death of the body. Noble cites Daniel Crevier, who, referring to the “Judeo-Christian tradition,” suggests that “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” This is noted on the way to explaining that a machine-based material support could be found for the mind, which leads Noble to quip, “Christ was resurrected in a new body; why not a machine?” Reporting on his study of the famed Santa Fe Institute in New Mexico, anthropologist Stefan Helmreich writes, “Judeo-Christian stories of the creation and maintenance of the world haunted my informants’ discussions of why computers might be ‘worlds’ or ‘universes,’ …. a tradition that includes stories from the Old and New Testaments (stories of creation and salvation).”

However heretically it departs from traditional Christian teaching regarding the givenness of human nature, the moral dimensions of humanity’s brokenness, and the gracious agency of God in the salvation of humanity, the religion of technology can be conceived as an imaginative account of how God might fulfill purposes that were initially revealed in incidental, pre-scientific garb. In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post–)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Near the end of The Religion of Technology, David Noble warns of the dangers posed by a blind faith in technology. “Lost in their essentially religious reveries,” he writes, “the technologists themselves have been blind to, or at least have displayed blithe disregard for, the harmful ends toward which their work has been directed.” Citing another historian of technology, Noble adds, “The religion of technology, in the end, ‘rests on extravagant hopes which are only meaningful in the context of transcendent belief in a religious God, hopes for a total salvation which technology cannot fulfill …. By striving for the impossible, [we] run the risk of destroying the good life that is possible.’ Put simply, the technological pursuit of salvation has become a threat to our survival.” I suspect that neither Bogost nor Lanier would disagree with Noble on this score.

This post originally appeared at The Frailest Thing.

Michael Sacasas is a doctoral candidate in the Texts and Technology program at the University of Central Florida. Follow him on Twitter @frailestthing. 

. . . . . . . .


The Public and Private, Once Again

Three surveillance cameras, Wikimedia Commons

In the wake of the Charlie Hebdo attacks, a political fire that has been burning for a long time is turning into a firestorm. Recently, the British Prime Minister David Cameron has called for tech companies to provide government security services with encryption keys to ensure that government authorities may legally access an individual’s data when warranted. The concern, now publicly shared by President Obama, is that terrorists are using the new encryption technologies being developed by companies like Apple, Google, WhatsApp, and Snapchat, especially “end-to-end” encryption, which “makes it nearly impossible for anyone to read users’ messages—even the company itself.”
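
The technical claim quoted above, that end-to-end encryption leaves even the relaying company unable to read messages, comes down to where the keys live. Real systems such as WhatsApp’s use far more elaborate protocols, but the minimal sketch below (using the PyNaCl library; the message and variable names are invented for illustration) shows the basic shape: each endpoint generates its own key pair, private keys never leave the device, and the service in the middle handles only ciphertext it cannot decrypt.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# The private keys exist only on the two endpoints, so a service that merely
# relays the ciphertext has nothing readable to hand over, back door or not.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The relaying service sees only this opaque blob.
relayed = bytes(ciphertext)

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(relayed)
assert plaintext == b"meet at noon"
```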

And so, as The Economist has recently stated in an editorial about the matter, we are confronted again with the age-old dilemma “liberty vs. security, once again,” or more precisely “privacy vs. security.”

There are a host of legal, technological, political, and, perhaps above all, economic issues at play here. I do not claim to know precisely how one balances liberty with security, let alone balancing liberty with the tech companies’ push for profit maximization or governments’ desire to save face in the wake of acts of terror. But I do think that the scales are already set to fall off—that is, that these debates are taking place against a background of assumptions about privacy that are themselves problematic.

In calling privacy a right, we tend to do more than assert the necessity for its legal protection. We tend to carry with our idea of the right to privacy the metaphor of private space, even private property. Privacy as that which is bounded, set off from that which is public. Hence we have our private life and our public life, our private opinion and our public statements, our private information and our public profile, etc. In this very common way of thinking about things, the private and the public are two distinct realms, and the right to privacy is the guarantee of a wall around our private realm.

The privacy vs. security dilemma is embedded in this way of thinking: It has to do with when it is legitimately permissible for the government to break down the wall of privacy for the sake of security. It is a version of the broader dilemma of liberty within the quasi-utilitarian liberalism that underlies our assumptions about privacy. We are to be free, so long as we do not interfere with the freedom of others; but when we do so interfere, the state has the right to encroach on our freedom, indeed even on our privacy, in the name of preserving maximum freedom for the greatest number.

Indeed, in recent rebuttals by libertarians, some liberals, and the tech industry to the call by Cameron and Obama for preserving a “back door” option by which to access user data, we see the greatest good for the greatest number argument used on behalf of super-encryption: Back doors, Cameron’s critics argue, can and will be used by the bad guys (criminals, hackers, the Russians, the Chinese) as well as the good guys, and the damage done by the bad guys could well be catastrophic. As Harvard’s Margo Seltzer recently said in The Financial Times,

If bad guys who are breaking laws cannot use encryption, they will find another way. It is an arms race and if governments say you cannot do this, that means the good guys can’t and the bad guys can. End-to-end encryption is the way to go.

Protecting privacy as an inviolable right, more sophisticated arguments go, is not only consistent with liberal societies, but also the most effective means of security—even if it means terrorists can communicate with little fear of being detected. It’s often assumed here that an absolute right to privacy will neatly reconcile itself with, even produce, the greatest good for the greatest number (though the privacy of one’s data from the tech companies themselves is more penetrable).

I think the super-encryption efforts of tech companies are socially and politically problematic. I think they are the wrong solution addressing the wrong problem. But in arguing so I am not interested in hypothetical calculations of the greatest good for the greatest number. Rather, I simply want to start with the manifest relationship of the private to the public. How do things work with respect to the private and the public?

Rather than starting with the regnant bugaboo, terrorism, let’s think about political corruption. Do politicians have an absolute right to the privacy of their deliberations and communications about public matters? Does the Speaker of the House, or the President, have an absolute right to the full and permanent protection of behind-the-scenes communications about matters of public consequence? If Legislator A and Donor K used WhatsApp to work out a deal for campaign donations in exchange for sponsoring legislation in the House of Representatives, would we, as citizens, accept the records of those conversations as being forever and irredeemably private, such that we simply could not ever access them?

I suspect that most of us, once we stop to think about it, would not be too comfortable with this already real-life scenario. What if the messages concerned bribes, threats, or other forms of back room dealings? What if the President told the Speaker things that the latter was not authorized to know? What if the CEO of Company X was privy to the messages, too? Or what if the Speaker sent the President the CEO’s messages without the CEO’s knowledge? This is the stuff of scandal and corruption, and these are each instances where communications, though “private,” indeed have public importance. The public would have a right to know about them.

This is not because we are willing to “sacrifice” privacy for the integrity of our political system; it is not a version of “liberty vs. security, once again.” Rather this is because, even with the high premium we put on the right to privacy, we understand that the private stands in a flexible, dialectical, and dependent relationship with the public: When private acts have direct public consequences, they are not strictly private—they can be called to public account.

This is the case whether we are talking about political corruption or communication among persons who would commit acts of terror. More important, in calling private acts to public account, we are not breaking down the wall of privacy; rather, we are simply walking through the door from the private to the public the reverse way, so to speak. An exchange between the private and the public has already taken place. We are but re-tracing it.

What I find particularly troubling about the unbreachable encryption efforts of Apple, Google, and others is that they technologically (or, more properly, mathematically) prevent this kind of reverse traffic in the name of the public good. Rather, in the name of “privacy”—and, let’s be honest, in the name of corporate profits—tech companies are creating, in effect, not so much inviolable walls around privacy as something more like trap doors from the private to the public that can be gone through only one way. In such a scenario, it is only the public that will suffer.

The genuine political worry articulated by advocates of super-encryption is the worry about Big Brother. As Wired writes of WhatsApp founder Jan Koum,

Growing up in Soviet Ukraine in the 1980s, WhatsApp founder Jan Koum learned to distrust the government and detest its surveillance. After he emigrated to the U.S. and created his ultra-popular messaging system decades later, he vowed that WhatsApp would never make eavesdropping easy for anyone. Now, WhatsApp is following through on that anti-snooping promise at an unprecedented scale.

But the United States and the United Kingdom are not the Soviet Union, and while both governments have participated aggressively in very troubling illegal, large-scale dragnet-like surveillance in the last decade, we have not seen a corresponding development of a police state working in tandem with the data collection agencies. To the contrary, the greatest problem faced by American and British citizens is that of government secrecy, which has provided cover for illegal and otherwise questionable state surveillance programs, together with the cultural problem seen in repeated demands from politicians that intelligence agencies unfailingly connect the dots prior to a terrorist attack, or be held culpable when they do not. This cultivates a culture of self-preservation in intelligence communities, encourages them to lean always to more aggressive actions rather than less aggressive ones, and opens the door to all sorts of government contractors promising infallible technological fixes for what are, in the end, inherently political and social crises.

Encryption processes that simply block government surveillance outright are, in keeping with Silicon Valley’s longstanding delusion, but another supposed technological fix for what are political and cultural problems—whether the problem is the NSA or al-Qaeda and its affiliates. End-to-end encryption and its equivalents in no way address the real problems we face from a civil liberties perspective—government secrecy and the unrealistic expectations placed on counter-terrorism agencies. Worse, encryption offers a false substitute for real solutions—something that is the moral equivalent of vigilante force when what we need is better government and law.

Ned O’Gorman is associate professor of communication and Conrad Humanities Professorial Scholar at the University of Illinois, Urbana-Champaign. He is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

Editor’s Note: Ned O’Gorman is also a contributor to The Hedgehog Review‘s Spring 2015 issue.  Reserve your copy today here.

. . . . . . . .


The Thin Reed of Humanism

Albrecht Dürer, Melencolia I (1514), Städel Museum

Leon Wieseltier is at his cantankerous best in his latest essay “Among the Disrupted.” After two opening paragraphs that are difficult to read as anything but a commentary on the recent demise of the New Republic, Wieseltier returns to his own well-trod turf:

[A]s technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.

Wieseltier is reprising many of the themes of his public feud with Steven Pinker in the pages of the New Republic. More than anything else, this earlier spat and Wieseltier’s latest essay are barometers of our impoverished cultural imagination concerning the relationship of science, the humanities, and technology.

When Wieseltier invokes “scientism,” he’s gesturing toward real concerns about the reductive materialism or naturalism that tends to underlie the work of popular polemicists like Dawkins, Dennett, and Pinker. He is not denying that our world and our selves can, in part, be explained through material mechanisms. I assume he enjoys the benefits of modern medicine like the rest of us.

But terms like “scientism” and “technologism,” however well-intentioned, can obscure more than they clarify. Those who bandy them about presume, as the historian James Schmidt lays out, a number of things. First, they presume that there are different ways of knowing the world. There are limits to a uniquely scientific knowledge. There are some things that cannot be fully explained by modern science. Second, they presume that they can discern what those boundaries are. And, finally, they presume that they can diagnose the deleterious consequences of these illicit boundary crossings.

I’m sympathetic to all three of these premises. But I’m much less confident in our ability to identify where science begins and ends than those who so diligently guard the borders of knowledge exclaiming “scientism!” when they suspect interlopers. Those who invoke “scientism”—and there is a long tradition of its use and occasional abuse as Schmidt has wonderfully documented—put themselves in the position not only of policing the borders of knowledge but also of distinguishing real science, a science that knows its place, from a false science, a science that engages in constant and illicit “border crossing.”

My point is that these ominous-sounding terms are all too often used as polemical cudgels. They refer to an illicit encroachment of one sort of knowledge into what is perceived as the proper sphere of another. Thus, “scientism” ultimately refers to the use of uniquely scientific knowledge within a distinctly non-scientific domain. Any appeal to the biological basis of love would be “scientism.” And very bad. These big, ugly words are, in short, the searchlights of our epistemic border guards.

But if “technologism” and “scientism” refer to types of knowledge that don’t know their place, then what type of knowledge or disposition adjudicates where these boundaries begin and end? For Wieseltier and many others today, it’s humanism. Humanism is the positive correlate of all those other “–isms,” those forms of knowledge that blithely stray beyond their boundaries.

But what is humanism? “For a start,” writes Wieseltier, “humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating.” The most common understanding of humanism, Wieseltier continues, is that it denotes a pedagogy and a worldview:

The pedagogy consists in the traditional Western curriculum of literary and philosophical classics, beginning in Greek and Roman antiquity and — after an unfortunate banishment of medieval culture from any pertinence to our own — erupting in the rediscovery of that antiquity in Europe in the early modern centuries, and in the ideals of personal cultivation by means of textual study and aesthetic experience that it bequeathed, or that were developed under its inspiration, in the “enlightened” 18th and 19th centuries, and eventually culminated in programs of education in the humanities in modern universities. The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality; a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion. It is all a little inchoate — human, humane, humanities, humanism, humanitarianism; but there is nothing shameful or demeaning about any of it.

Yes, it is all rather “inchoate.” And therein lies the problem.

Wieseltier is correct about the long and admirable lineage of a humanist classical pedagogy, less so about the worldview claim. “Humanism” as a human-centered worldview is a neologism invented not in the Renaissance but in the early nineteenth century as another polemical cudgel, one used to fight the same types of cultural battles that Pinker and Wieseltier have long been waging.

As far as I can tell, humanism, or rather its German cognate Humanismus, was first used in 1808 by the German pedagogue and philosopher F.I. Niethammer (1766–1848). In The Conflict of Philanthropinism and Humanism in Contemporary Theories of Education and Pedagogy, he juxtaposed humanism with philanthropinism, an Enlightenment-era educational theory that regarded the human as a natural being who needed to develop his or her natural capacities. What distinguished “humanism” from more modern forms of education was an underlying concern for, as Niethammer put it, “the humanity [over] the animality” of the human. As a worldview, humanism subordinated the body to reason and defended the autonomy of human nature from the material world. As first used by Niethammer, it was a boundary term; it marked what Niethammer thought was the clear line separating the mental from the material, the human from the animal.

When critics invoke “humanism” against “scientism” or “technologism,” they presume to know the proper boundaries of science and technology; they presume that they can readily and forcefully articulate where scientific knowledge ends and humanistic knowledge begins. They assume the role of guardians of our intellectual and ethical world. That’s a heavy burden.

But it’s also a presumption that ignores how much of our knowledge comes from these border crossings. It’s at the margins of our established ways of engaging our world and ourselves that new ways of seeing and imagining what it is to be human so often emerge. We may well need knowledge police and concepts like “scientism” and “humanism” to warn us of charlatans and interlopers, but we should hope that they do so with a little less alacrity and a bit more humility.

. . . . . . . .


The Arts and Humanities Save Lives!

There was a time, long ago, when it was taken for granted that what we now call the “humanities” was integral not just to human enjoyment but to human survival.

It was self-evident that what the Greeks called logos—language and language-based reasoning—is not only what made the human animal different from all the other animal species of Earth, but that language was essential to the human capacity to survive, and thrive, in a too-often hostile and harsh environment. Rabbits have speed and agility; bears have teeth and claws; eagles have wings; humans have language. As one ancient Athenian put it:

For in the other powers which we possess we are in no respect superior to other living creatures; nay, we are inferior to many in swiftness and in strength and in other resources; but, because there has been implanted in us the power to persuade each other and to make clear to each other whatever we desire, not only have we escaped the life of wild beasts, but we have come together and founded cities and made laws and invented arts; and, generally speaking, there is no institution devised by man which the power of speech has not helped us to establish. For this it is which has laid down laws concerning things just and unjust, and things base and honorable; and if it were not for these ordinances we should not be able to live with one another. It is by this also that we confute the bad and extol the good. (Isocrates, Nicocles, Antidosis 3.5–7)

Above all, logos was for the Greeks the source of wisdom, which was assumed to be essential to human survival. Similar ideas can also be found in ancient Hebraic, Confucian, Arab, and other cultures.

We have come a long way from this “naive” view. Ours, as the cliché goes, is a technological age, and science and engineering are now assumed to be the keys to human survival.

Recently, my own university’s chancellor, Phyllis Wise, reminded us of technology’s vast sway. In a blog post featured on LinkedIn, and now republished on the University of Illinois’s website, she argued that the United States needs to fix what she and her fellow scientists and engineers call the “innovation deficit”—the gap between current levels of federal funding of science and engineering and what the country needs “to remain the world’s innovation leader.”

Once we fix this “innovation deficit,” she said, “we will be well on our way to solving some of the world’s biggest problems.” Scientists and engineers, she explained in a series of examples, would be able to heal diseases, clean up the environment, and “ensure our national security” by providing new technologies to the Department of Defense. By designating “health and education,” “environment,” and “national security” as the three main areas to which science and engineering can substantially contribute, Chancellor Wise appealed to human life as the main beneficiary of science and technology research.

Of course, not all technological innovations are lifesaving. As Chancellor Wise concluded, some things are about “quality of life” rather than survival:

Finally, there’s this: everything from plasma TVs to the first internet browser to YouTube to whipped cream in a can—these innovations can all be credited to a university (ours, in the case of these examples). And while these innovations may not be lifesaving, they certainly contribute to our quality of life. Add to these examples the vibrant scholarly work of our historians, artists, musicians, journalists, and others who work to enrich our understanding of the world and of other people in a powerful way, and it’s clear that federal funding for research is a critical part of modern life.

This seems all self-evident enough, but is it?

Professional hurt feelings aside, it is incorrect to categorize the arts and humanities as concerned with “quality of life” rather than—as Wise’s post clearly implies—the sustenance of human life. The value of “historians, artists, musicians, journalists, and others” is not equivalent to the invention of canned whipped cream!

Today the world’s biggest problems have indeed grown big enough to concern the very survival of the human species: environmental catastrophe, genocidal weapons, and fragile technological and economic systems each put the species—not just individuals—at risk. But the solutions to these problems, in as much as they can be achieved, will be essentially, and not merely accidentally, social and political in nature.

Consider a national problem before us now that very much concerns human life. Two weeks ago, after the grand jury in Ferguson decided not to indict Officer Darren Wilson, President Obama and many others came out arguing for a technological fix: cop cameras. With this technology, they argued, we could erase ambiguities and probabilities from the arbitration of police conduct and be able to know with more certainty the sequence of events in incidents like the slaying of Michael Brown. Meanwhile, police officers themselves would certainly feel the presence of these cameras and be compelled to exercise more restraint when interacting with the public.

If this were so, Chancellor Wise might be right: Science and technology could fix some of the world’s biggest problems.

But as we learned last week in another grand jury decision concerning the police killing of another black man—this time, Eric Garner of Staten Island—technology means very little in a culture of police impunity before people of color. Garner’s killing took place in the full view of a camera. The footage is widely available online and shows an unarmed man who posed no imminent threat to police or bystanders being put in a chokehold by a police officer and slowly suffocated. As Sean Davis at the Federalist writes, “The grand jury’s decision not to bring any charges against the officer who killed Garner is inexplicable. It defies reason. It makes no sense…. All we have to do is watch the video and believe our own eyes.”

Cameras, microchip sensors, drones: No matter what the gadget, there is no technological fix for racism, or for more subtle problems such as prosecutorial discretion. There is no science that can save us from the historically embedded habits and the wider structures that cause us, seemingly instinctively, to value the lives of some more than others based on skin color, gender, or any number of other social markers of the Other. And the only solutions for structural problems within the law are both better law and better practice of the law.

These problems require citizens capable of reflecting on matters like discrimination and the law, and leaders who understand that the world’s problems can’t be fixed simply through technology. The world’s largest problems are not equivalent to the problem of gravity. If they were, perhaps science and technology could solve them. We’d just need more well-funded Newtons and Einsteins. Rather, we have problems that are inherently political and/or social in nature and that require political and/or social solutions. Moreover, it should be obvious by now that scientific and technological “fixes” often create new problems (e.g., industrialism’s creation of global warming, genocidal killing machines, and antibiotics).

So while it seems silly to say it, it needs to be said, in light of the legitimate value political and academic leaders are putting on life: The arts and humanities save lives!

If for the Greeks, logos was the means by which humans established cities, today it is the language, visual, plastic, and other arts that offer the sorts of practices and epistemologies capable of better reckoning with the world’s problems, whether they be the plight of racism (which keeps technologies from being put to just use), the situation of climate change (which requires above all political solutions for any technological fixes to be implemented), or the existence of economic inequality and poverty (which simply cannot be addressed adequately by any metric apart from good policy).

I am not claiming that the arts and humanities are salvific. There is no need for hyperbole here, no need for triumphalism. Rather, we need only reckon with the world’s problems as they are to see that the arts and humanities, inasmuch as they equip us to do political and social work well, can save far more lives than any gizmo. If saving lives is our metric, the biggest deficit we face is not an “innovation deficit” but a “wisdom deficit.”

Nor is the issue here the “humanities” versus the “sciences,” but rather the obvious point that science and technology are good only in so far as they are used well, and that their good use is a matter of principle and prudence, not causality and efficiency.

Our greatest problems are social and political problems. They call for social and political solutions. The arts and humanities train us in the sort of skills and sagacity integral to social and political solutions. The arts and humanities save lives!

Chancellor Wise and other leaders of the academy-in-crisis, in addition to the “innovation deficit,” would you consider solutions to the “wisdom deficit” we now face?

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

. . . . . . . .


The Chokehold That Is Prosecutorial Discretion

A protest in Grand Central Station in New York after the grand jury’s decision in the Eric Garner case.

The news yesterday that a second grand jury in two weeks had failed to indict a white police officer for causing the death of an unarmed black man has caused an understandable uproar.  Where should we focus the outrage in a justice system that is failing minorities? Let me suggest one candidate: prosecutorial discretion.

As an assistant federal public defender, I am used to experiencing the fruits of a system designed around the whims of the prosecution. First, what is prosecutorial discretion? Plenty of literature gives content to this phrase, but it boils down to the idea that a prosecutor can do whatever a prosecutor wants to do, as evidenced by the fact that 97% of all cases in federal court result in guilty pleas prior to, and instead of, a jury trial. The majority of these settlements are the result of plea agreements in which the prosecutor, not a judge, decides what charges a defendant will be convicted of, whether to require a plea to factors that trigger statutory mandatory minimum sentences, and often the sentencing range, all with no input from the judicial branch, let alone a jury of the defendant’s peers. The arcane cruelty of federal criminal laws, particularly in the area of drugs and guns, means that penalties are so severe that most defendants have little choice but to plead guilty. Severe sentencing and over-criminalization combine to make a dangerous cocktail in which almost everything is a crime, or multiple crimes.

So what does this have to do with Ferguson? And now the Staten Island chokehold case? Without question, these two grand juries declined to indict because the prosecutors, in their discretion, did not want to indict. As has been detailed elsewhere, when a prosecutor wants an indictment, a prosecutor usually gets an indictment. The fact that both of these grand jury presentments took place over months (instead of hours, as is the norm) and involved testimony on behalf of the officers (each police officer testified on his own behalf, which, again, is not the norm) shows just how unusual these cases were. And just how little the prosecutors must have wanted to indict.

And why might these prosecutors not have wanted these grand juries to return indictments? The reason may be legitimate doubt about the culpability of the officers under the laws of excessive force as written. Or it may be that prosecutors, who rely on law enforcement officers every day to investigate cases and make out those cases in court, were loath to alienate their usual allies. But we may never know, and we have no right to know under the law.

Prosecutors make decisions not to indict all the time and the only person who might even know about it is the victim, or alleged victim, of some criminal action. No one has a right to prosecution. The recent conversations regarding unprosecuted collegiate sexual assault sparked in part by the controversial Rolling Stone article about my alma mater, the University of Virginia, implicate this fundamental fact. Every victim stands at the whim of the prosecutor to determine whether or not a case even starts down the path of seeking justice from a criminal court.

Bill Stuntz, former Harvard Law professor, described our current system and its disproportionate effects on black Americans well in The Collapse of American Criminal Justice (Harvard, 2011):

As unenforced speed limits delegate power to state troopers patrolling the highways, so too American criminal law delegates power to the prosecutors who enforce it. That discretionary power is exercised differently in poor city neighborhoods than in wealthier urban and suburban communities. Far from hindering such discrimination, current law makes discriminating easy. That sad conclusion has its roots in a sad portion of America’s legal history. When the Fourteenth Amendment’s guarantee of the “equal protection of the laws” was enacted, one of its chief goals was to ensure that criminal law meant one law alike for blacks and whites—that both ex-slaves and ex-slaveowners would be held to the same legal standards, and that crime victims among both groups received roughly the same measure of legal protection. That understanding of equal protection did not survive Reconstruction’s collapse. Today, the equal protection guarantee is all but meaningless when applied to criminal law enforcement, one reason why both drug enforcement and enforcement of laws banning violent felonies are so different in black communities than in white ones.

The Ferguson and Staten Island cases may be targets of our outrage for many reasons, but one should certainly be that prosecutorial discretion has been exposed for what it is—entirely out of our hands.

Lisa Lorish is an assistant federal public defender in the Western District of Virginia and a graduate of the University of Virginia School of Law.

. . . . . . . .


Universitybot Responds: Gang Rape as “Sexual Misconduct”

University of Virginia, 11/20/14

Above and below: Fresh graffiti at the Phi Kappa Psi fraternity house, University of Virginia, November 20, 2014. Photos: Gregory Thompson

Last week I wrote a post titled “Who Needs Captains of Erudition?” Long before “corporatization” had become synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist, dismissed American universities as little more than “competitive businesses.” These enterprises were run by university presidents, who had become managers of systems, chiefs of concerns, “captains of erudition.”

When I read President Sullivan’s response to a Rolling Stone article that recounted a University of Virginia woman’s personal story of gang rape and the trauma that followed, all I could hear was the defensive, disengaged chatter of one of Veblen’s captains:

“I am writing in response to a Rolling Stone magazine article that negatively depicts the University of Virginia and its handling of sexual misconduct cases.”

“Negatively depicts”? If one phrase embodies the corporatization of the university, that might well be it. The contemporary university’s assessment metrics, use of adjunct labor, obsession with economic efficiency, and capitulation to the sovereignty of the student as consumer are just consequences of a deeper failure of moral imagination. The primary concern is with public perceptions. Never mind that a young woman felt that her only option was to talk to a Rolling Stone reporter. This is the language of an institution committed to nothing but its own mechanisms. There is no evidence of the virtues to which we here at the University of Virginia lay claim—empathy, civic concern, leadership, and curiosity.

University of Virginia, 11/20/14

Sullivan’s statement was a missive from the bureaucratic bowels of an accounting machine. It was surely manufactured by public relations specialists and lawyers whose interests are simply fiduciary, concerned only with legal liability and fundraising. There are no people, just “interests”; no judgments, just “initiatives”; no moral failures, just “issues.” There were, as one of my colleagues put it, no rapes, no victims, no women, no perpetrators—just “issues related to sexual misconduct.” And the only response is more policies, more initiatives, more accounting.

The captains of erudition are firmly at the helm at the modern American university. With their phalanx of managers, they are guiding us into seas of indistinction, into a future where the university is just another modern bureaucracy without ends, without purpose. And the faculty is asleep on the deck.

. . . . . . . .


Who Needs Captains of Erudition?

Long before “corporatization” became synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist who coined the term “conspicuous consumption,” dismissed American universities as little more than “competitive businesses.” In The Higher Learning in America (1918), published a little more than forty years after Johns Hopkins was founded as America’s first research university, he described the contemporary university as a “business house dealing in merchantable knowledge, placed under the governing hand of a captain of erudition, whose office it is to turn the means in hand to account in the largest feasible output.” The modern American university president wasn’t a scholar, an intellectual, a scientist, or even much of a leader. He was the manager of systems, the chief of a concern, the captain of erudition.

Thorstein Veblen, by Edwin B. Child, 1934. Courtesy of Yale University Art Gallery, Gift of Associates of the Sitter. A protege of J. Laurence Laughlin, the first head of political economy, Veblen began his uneasy passage through the University in 1892.


Botstein and Bard

Leon Botstein, the charismatic conductor of the American Symphony Orchestra and president of Bard College, is no captain of erudition. “Botstein’s voice,” writes Alice Gregory in the New Yorker,

telegraphs a wizardly moral authority. Everyone responds to it, but parents, primed to be proud of their children, are especially susceptible. ‘We live in a time where people don’t really believe in education. That doubt is something we struggle with,’ he said. ‘Your enthusiasm, your determination, your idealism about education gives back to us a reminder of why we should fight for what we do.’

For Botstein, the “quantification of American higher education,” introduced by university administrators who just want to keep their jobs and facilitated by spineless faculty who have given up on the liberal arts, is a moral affront.

Botstein’s earnest and tireless defense of an ideal, however, might just doom this small liberal arts college 90 minutes north of New York City. Bard, where all those black-clad kids who read Sartre in high school wound up, is the singular creation of Botstein’s will and personality. But in December 2013, Moody’s Investors Service lowered its credit outlook to “negative.” And now some of its trustees are worried. Susan Weber, a trustee and donor, said:

Everyone says, ‘Oh, he’s the most amazing fund-raiser,’ Well, I wish that were so, because we wouldn’t be so underfunded if he were that amazing. I think he’s good at it—he works hard at it—but his real strength is building an institution.

“But”?  If one word can be said to embody the confusion over the purposes of higher education, that but might be it.

Botstein built an institution with a vision, but only a captain of erudition can, it seems, sustain it.

Weber’s resigned admission of what Bard needs after Botstein has become the assumption of many university boards. University presidents shouldn’t lead national debates or make moral claims; they should alleviate political pressures and mollify idiosyncratic donors. Ours is the age of the competent commander-in-chief—we need accountants, not idealists.

Veblen’s Prescience—in Our Own Backyard

On June 10, 2012, my colleagues and I at the University of Virginia (UVa) learned that Veblen had been all too prescient. Helen Dragas, Rector of UVa’s Board of Visitors, briefly and matter-of-factly informed us that our president had been fired:

On behalf of the Board of Visitors, we are writing to tell you that the Board and President Teresa Sullivan today mutually agreed that she will step down as president of the University of Virginia effective August 15, 2012. For the past year the Board has had ongoing discussions about the importance of developing, articulating and acting on a clear and concrete strategic vision. The Board believes that in the rapidly changing and highly pressurized external environment in both health care and in academia, the University needs to remain at the forefront of change.

Over the following weeks, my colleagues and I, joined by an international audience, speculated about these unspecified “philosophical differences” between President Sullivan and the Board of Visitors; we wondered about the “clear and concrete strategic vision” for which the Rector called. Hadn’t we already been subjected to years of strategic planning?

After ten days of increasing frustration and concern from faculty, students, and alumni, Dragas sent a second email. This one listed a number of “challenges” facing UVa with which, Dragas implied, Sullivan had no plan to deal: the long-term decline in state funding for public universities, the disruptive effects of new technologies, rising tuition costs, increasing enrollments and an aging faculty (with no money to replace it), and increasing demands for faculty and curricular assessment—not to mention the ever-expanding roles that the contemporary university now plays as health-care provider, entertainment center, sports venture, industrial and government research center, and, by the way, educator. In short, the university faced a whole host of challenges, none of which were unique to UVa.


UVa President Teresa Sullivan speaks on the steps of the Rotunda after addressing a closed session of the Board of Visitors, June 2012; photo © Norm Shafer

But between June 10 and Sullivan’s ultimate reinstatement on June 26, something else happened on Grounds, something that most stories and accounts of the summer’s events missed in their efforts to chronicle the process. It not only surprised me at the time; I still struggle to make sense of it. (Talbot Brewer also tried to make sense of this series of events in the summer issue of The Hedgehog Review.)

For about two weeks, UVa faculty members paid scant attention to the myriad problems that the Rector identified; they didn’t demand political intervention; they didn’t split up into conservative and liberal corners and revive culture-war arguments (the liberal faculty against the conservative administration). For two weeks, my colleagues condemned the Board of Visitors’ actions by making explicitly ethical arguments, arguments grounded in claims about the moral purposes of the university: What the university was and ought to be. Some colleagues defended and invoked an honor code with which we usually engage, if at all, only ironically. Others celebrated founder Thomas Jefferson’s commitment to higher education as a public and democratic good, but without the ironic winks that usually accompany such discussions. There was even an impassioned defense of peer review as an ethical practice. Whatever their particular content, the arguments led to a broad consensus: This wasn’t right, this wasn’t how a university ought to be run.

With our backs to the wall and overcome by the sense that our university was imperiled, we faculty members made arguments that were not, in the first instance, financial, technological, or political. We made normative claims about what a university ought to be. That is, the arguments that my colleagues mustered focused on the moral character and purposes of the university. Faculty were engaged and motivated by a general and rather vague sense that the moral authority of the university had been threatened.

Can We Afford Our Future?

My colleague Siva Vaidhyanathan has continued to make these arguments. Recently, while writing of another attempt to oust a public university president, this time at the University of Texas, Vaidhyanathan defended the increasingly beleaguered notion of the university as a public good:

The tuition increases and the realization that the payoffs from universities are deferred and unquantifiable pushed legislators and “reformers” to demand accountability and radical administrative transformations. This has only served to make it harder for faculty to teach and conduct research. It has made the richest nation in the history of the world act like it can’t afford to believe in its own future, respect its own culture, or foster the experimentation and knowledge that might serve the entire planet.

The university is more than an “inefficient and outdated information delivery system.” It is a public good because it advances, conserves, refines, and shares knowledge for the world. And it does so most basically by forming people who believe that knowledge is a public good.

Leon Botstein may at times be bombastic. And he is always, without question, idealistic. At a moment when the very purposes and values of universities are being reshaped in the name of efficiency and disruption, we don’t need captains of erudition. We need leaders who embody the true ethos of our institutions.

. . . . . . . .


Here Comes the Potentiated Self

In a series of pithy posts,  Joshua Glenn, the brand analyst with a penchant for Roland Barthes, has been cataloguing cultural codes. Each code, embodied in advertisements or pop-cultural imagery, is a single, meaning-laden node in the complex, often imperceptible matrix “structuring our perception of the everyday world.” Glenn’s codes range from the child-adult, “a holy fool who speaks truth to power,” to the cool engineer, a visionary designer who sees “into things more acutely and penetratingly than ordinary mortals.”

An example of wired self-potentiation from Joshua Glenn’s Code-X (2), http://hilobrow.com/2014/04/30/code-x-2/

But one code seems particularly of our moment: wired self-potentiation. This code, central to the advertising campaigns of technology companies, celebrates a new, digitally enabled self. For the networked person of today, extended beyond time and space with their smartphones and gadgets, “multitasking [is] re-imagined as existential branching-out. Breaking the mold. Demonstrating vitality, multiplicity, and proactive refusal to conform to stereotyped expectations. All thanks to networked technology.” This is the potentiated self, the self raised to a higher power.

The idea of technologically enabled self-improvement is widespread. James Surowiecki recently described a “performance revolution” in sports, in which athletes aren’t just practicing harder but smarter, “using science and technology to enhance the way they train and perform.” Long hours in the gym or on the pitch won’t cut it anymore. Today’s elite athletes are monitored, analyzed, and reshaped by a matrix of biometric sensors and training regimes, all supervised by a phalanx of sports experts and coaches. Training methods for contemporary MLB, NFL, and NBA players are data-driven, networked systems designed to perfect not just athletes but the exercises and disciplines that make them better.

But if the improved, technologically enhanced training regimes of professional sports focus on improving people, the Internet of Things (IoT), another vision of the connected, networked age, seems altogether different. The Internet of Things, as one of its biggest proponents, Jeremy Rifkin, claims,

will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.

The Internet of Things is all about connectivity. In this digital network everything, whether you or your thermostat, is a node and, thus, just another source of data. The Internet of Things, as Sue Halpern writes in the New York Review of Books, is

about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.” Lots and lots of it. “The more you tell the world about yourself, the more the world can give you what you want,” says Sam Lessin, the head of Facebook’s Identity Product Group.

In Rifkin’s vision of the Internet of Things, humans are just elements of a network organized around the endless circulation of information. In such a system, the networked self is little more than the self as networked, as a node in a complex system. This is one vision of the networked, potentiated self, a notion that Glenn takes from the German poet and philosopher, Friedrich von Hardenberg, better known as Novalis. But Novalis, despite his image as the frail, romantic poet who died of a broken heart, would have had more sympathy for Surowiecki’s jocks than Rifkin’s automated systems.

In 1798, Novalis wrote a short dialogue in which two figures, named simply A and B, debated the effects of the proliferation of print. Interlocutor A lamented the modern “book plague,” especially those marks of black that fill their pages: “What burden are these letters!” The modern human being, he complained, is characterized by his “fatal habituation to the printed nature.” There was so much print that modern readers had begun to mistake it for nature itself.

Interlocutor B wasn’t nearly as worried about book plagues and floods of ink, however. The key to dealing with media surplus lay not in acquiring better tools but in becoming a better reader. Like any art, “reading” required “practice” and, as Novalis put it, “practice makes perfect, even in the reading of books.” Technologies like print were good only insofar as they were engaged as elements of a human discipline, repetitive exercises tied to a person acting according to settled purposes.

For Novalis, a potentiated self would result not from technologies unmoored from human purposes but from the perfection of excellent habits. “The world must be romanticised,” he wrote. “Romanticising is nothing but a qualitative intensification. Through this operation the lower self becomes identified with a better self. Just as we ourselves are a sequence of such qualitative powers. [. . .] By giving the everyday a higher meaning, the habitual a mysterious appearance, the known the dignity of the unknown, the finite a semblance of the infinite, I romanticise it.”

Raising the self to a higher power is grounded not in the liberating promises of technology, then, but in the more mundane pursuit of excellent habits. It’s about, as Surowiecki puts it, “getting better at getting better.” Sometimes data can help. But only with coaches and teachers skilled enough to help us make sense of it—and to help us learn how to practice.

. . . . . . . .


John Searle and the Threat of Artificial Intelligence

 

John Searle wants to reassure us. The University of California, Berkeley, philosopher, a specialist on mind, “intentionality,” consciousness, and other concepts integral to the distinct sort of being that we refer to as human being, wants us to know that we need not fear “super intelligent computers intentionally setting out on their own to destroy us.” Nor, it turns out, should we grow giddy about the imminent passage of human being into the “infosphere,” where “everything is information and . . . computers are much better at it.”

In this month’s issue of The New York Review of Books (October 9, 2014), Searle offers an extensive rebuttal of the theses offered in two recent books: Nick Bostrom’s Superintelligence, which warns of the impending rise of “machine brains” superior to human ones, and Luciano Floridi’s The 4th Revolution, which announces the metaphysical emergence of the “infosphere.” Searle persuasively argues that both authors fail to account adequately for the vital role of consciousness in human being, and therefore miss in a fundamental sense what is really entailed in the creation of “artificial intelligence” and indeed “information.”

Searle offers several worthwhile distinctions, which he’s explored elsewhere, especially distinctions between “observer independent” and “observer relative” features of reality. The former exist “regardless of what we think”—mountains and molecules, for example. The latter “depen[d] on our attitudes” for their real existence—money and marriage, for example. Searle suggests that “information” falls in the latter category, as does “computation” when it is the product of machines. For only conscious agents, he argues, can have, create, or otherwise interact with “information” and “computation.” There is, then, no such thing as purely artificial information or intelligence, for there is no such thing as artificial consciousness. Conscious beings—something machines are not—must cooperate with the artificial mechanisms of information and computation in order for them to function in any way remotely as “intelligent” machines.

Or so it will be for the foreseeable future. It is possible, he surmises, that someday we will be able to make machines that “duplicate” the human brain, including consciousness. And it is at the point of this speculative possibility that Searle’s argument becomes both more interesting and more problematic, because it probes—somewhat indirectly, but powerfully nonetheless—the significance of the “artificial,” a category in which we can put “art,” “artifice,” and certainly “technology.”

A bit of background on the artificial might be helpful here. In ancient Greece, a story circulated about the creation of human beings by the gods that began uncomfortably with humans being left “naked and shoeless” and thus in grave danger before the elements. And so it was until Prometheus gave humans fire and the mechanical arts by which to sustain and preserve their lives. The “artificial,” we might say, saved human life.

But the Greeks were as capable of worrying about the artificial as they were of celebrating it: Most famously, Plato worried about simulacra, those copies of copies that did nothing but deceive and mislead humans in their quest for order and justice.

The Edenic account in the Hebrew scriptures is different from the Greek one in that it presumes the goodness of being naked and shoeless—until the great Fall, at which point artificial coverings were made to cover human nakedness in a gesture of divine mercy and judgment.

I could offer other examples of various ideas and arguments about the status and significance of the artificial in human life. Questions about the relationship between art and nature, or the artificial versus the real, are longstanding and taken up in many cultural traditions.

But what interests me here is Searle’s account, which is fascinatingly emblematic of our own age. Whereas these older accounts were concerned with the relationship between nature and art, Searle is concerned most crucially with what it takes to make something really or truly an artificial version of some non-artificial entity. What does it mean, Searle asks, to really “artificialize” (my own word, if such a word exists) something? “Artificial intelligence” as we now know it, argues Searle, may be artificial, but it is not really intelligence. So what would a truly artificial intelligence look like?

An artificial heart, by contrast, seems for Searle to be really an artificial heart. Why? Searle bases this distinction on that between “a simulation or model” and “duplication.” He writes:

Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both real and artificial hearts are physical pumps, unlike the computer model or simulation.

There is a strange literalism at work in Searle’s approach—or, better, an artificial essentialism. Causal processes are for Searle the essence of “reality”; the heart’s essence, it seems, is found for him in a basic causal function, pumping. In order to create a real artificial heart, that basic causal function needs to be literally, materially duplicated, or re-produced. Then we have, in a paradoxical formula, a real artificial heart.

But, I must ask, can that artificial heart skip a beat in a moment of terror or wonder? Such heart-stopping moments, too, can be understood in cause-and-effect terms. Can an artificial heart grow stronger with exercise, or weaker with poor nutrition, also causal phenomena? Can an artificial heart, to be a bit hyperbolic, be eaten and subject to the causal processes of digestion? If not, then clearly the artificial heart is not a “real artificial heart” in every respect, but only in one respect, albeit a very important one.

My point is that “duplication” is a poor measure of the “really artificial,” for it is in the very nature and substance of the “artificial” itself to have a relative and partial relationship to that which it is imitating, copying, or “duplicating.” The artificial heart duplicates some aspects of the natural heart, but not all aspects. And the same thing can be said about computerized artificial intelligence: Of course it is true that, as Searle writes, computers lack altogether a “psychological reality,” and are nothing but well-designed, highly functional circuit systems. Nevertheless, in certain circumstances they behave outwardly in a manner that we recognize as “intelligent.” This resemblance may be far less intense than the duplication of the “causal mechanism” of the human heart, but it is a resemblance nevertheless.

If the philosopher’s quest is to find the point at which “artificial X” crosses a line to become a truly artificial X, I am afraid it may be a frustrating task, for sharp conceptual distinctions are not going to do it. Rather, we are better off thinking in terms of a continuum, on which perhaps “concepts” themselves might sit at one end, “models” somewhere in the middle, and “duplications” at the other end.

Searle, as I said, wants to reassure us: Computers are not going to take over the world, and we have not entered a new Aquarian age of the “infosphere.”

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

I don’t find this very reassuring, however. A greater danger than a future filled with “really artificial intelligence” is already squarely with us: We often behave as though computers believe, desire, and move. We ascribe human agency to them. And in this present reality, not an apocalyptic future, what is “really artificial” matters little. Rather, what we need is better reflection on the meaning of the artificial in our lives together.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America Since the Kennedy Assassination.

. . . . . . . .
