Author Archives: Chad Wellmon

Silicon Valley’s Survivalists

Bunker 318, Assabet River National Wildlife Refuge, Maynard, Massachusetts. Via Wikimedia Commons.


Seventeen years ago, just outside Birmingham, Alabama, my wife’s grandfather built floor-to-ceiling shelves in his basement and filled them with toilet paper, tuna, Twinkies, and batteries. He was prepping for Y2K, the Millennium bug. Boom Boom, my wife’s normally calm and reasonable grandfather, was convinced that computer programmers had set civilization up for collapse by representing the four-digit year with only the final two digits. Once digital clocks and computers tried to register the year 2000, electric grids, and with them all things electronic, would crash. Civilization wouldn’t be too far behind. My father, in the foothills of western North Carolina, didn’t stock his shelves. But he did load his shotgun.
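(For readers who never lived through the scare: the feared failure came down to a few lines of date arithmetic. Below is a minimal sketch in Python, a toy illustration of the two-digit-year problem, not code from any actual legacy system.)

```python
# A toy illustration of the Y2K bug: many legacy systems stored years
# as two digits to save memory, then did plain subtraction on them.
# This is a hypothetical sketch, not any real system's code.

def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Elapsed years computed the pre-Y2K way, from two-digit years."""
    return end_yy - start_yy

# In 1999, an account opened in 1970 ("70") looks 29 years old. Correct.
print(years_elapsed(70, 99))  # 29

# On January 1, 2000, the clock reads "00", and the same arithmetic
# makes the account -70 years old: interest calculations, expiry checks,
# and schedulers built on this assumption all misfire at once.
print(years_elapsed(70, 0))   # -70
```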

Today, prepping isn’t just for old southern white guys. The tech titans of Silicon Valley, as Evan Osnos recently wrote in the New Yorker, are buying bunkers and waiting for the breakdown of society as well. But Silicon Valley’s survivalists are different from Boom Boom and my dad. They are preparing for a civilizational collapse they otherwise celebrate as disruption and innovation.

. . . . . . . .


Infernal Machine Collective Manifesto: On the Occasion of the Inauguration


An empty podium at the U.S. Embassy in London. U.S. Embassy London via Flickr.

We—let us reclaim the We, the declaratory We, the contentious We, the collective We. On this inaugural day, as the seas rise, the drones fly, the tweets storm, and a reality TV star ascends to the heights of world power, the Infernal Machine returns.  Just as all that is solid is melting into air and all that is sacred is profaned (yet again but differently), we want to face the real conditions of our life together, again.

And these conditions are new: Our politics, our institutions, our reality have been eroded by a techno-enabled cynicism and a vociferous optimism peddled from Silicon Valley to Washington, DC. Our media channels, often filled with noisy disinformation, have come close to overwhelming all hope for truth and a common good. Technology has become both a demon and a god, oppressor and savior, post-human and super-human.

Less than a year ago, many of us were decrying technocracy and neoliberal automation—the routinization of decisions by distant experts. And now? We have been compelled, by the Twitter-assisted success of the newly inaugurated president, to defend expertise, even as we know that it is not enough. But a few moons ago, we were skeptics when it came to data, statistics, and polls. Now social science, not to mention science, is under attack by those who believe that “to tweet it is to prove it.” A year ago we thought that the great showdown of the decade would be between the state and Silicon Valley. Now we see their collusion, willed or not.

So we are at our own inauguration point, our own auspicious beginning, full of omens.

During the age of high technology—think mid-twentieth-century broadcast media and rust-belt production lines—entropy, the erosion of order, represented the great problem of the age. “Information = entropy” became the rallying cry of a new coterie of scientists, engineers, and poets, the basis for a celebration of new media channels for a potentially limitless proliferation of communication.

And then the science fiction of that earlier pretense turned cyberpunk; the Internet promised freedom and gave us something more complex. And with the production of unprecedented quantities of digital data, the virtual world got Big. The prospect of our own technological age is now one of tipping points and system failures that threaten sudden catastrophe. Experts, yes, but what happens when a politics far outside the boundaries of the Old Media and the Old Discourse—including what threatens to become Old Democracy—emerges?

During the age of high technology, media could be left to their own devices. Signals, senders, receivers, and noise constituted an engineering schema that could be bought, regulated, and directed by the relatively few for advertising to, informing, and entertaining the many. The media was both an institution and a fantasy of power: a towering professional enclave that sent signals to the receivers of a polis of citizens, couch potatoes, airport-bound travelers, and runners on treadmills.

But today “the media” has no towering centers: Media are on our wrists, in our hearts, on our streets, and in space. The center of media production is no longer New York, Hollywood, or the editorial room at the local newspaper, but Facebook and would-be Facebooks (Twitter, Snapchat, Reddit, and so on). But these new media types don’t edit. Their only norm is unregulated use for the sake of unlimited profit. And so a form of parallel meme-processing is the underbelly of what once was decried as “merely the news” by thinkers from Nietzsche to Neil Postman.

During the age of high technology the academic study of media developed its own high towers and professional enclaves: communications; radio, film, and television; cinema.  It also included courses from journalism, speech communication, economics, business, and literature. Each operated on its own frequency. Technology studies, meanwhile, built an edifice (rather plain and drab at first, until a Gothic renovation by a Frenchman, Bruno Latour, with a penchant for networks, actants, and jokes). If the age of high technology yielded a change in the categories, such that agency was distributed and binaries upended (a “general cyborg condition,” as Donna Haraway put it), then what does the fast-advancing Digital Era call for? What philosophy will grasp this history?

A chorus on the Left decries the “fading of fact,” as though we had not attached media and rhetoric to the disappearance of fact for half a century—or since Plato. How can our self-proclaimed sophisticates have failed to see this continent of intellectual energy emerging outside their media, yet on the platforms those media share? How can those trained to think of Enlightenment as having the darkest of sides, a necessary backlash in its very heart, be so naively surprised by this predictable development?

And so, on this inauguration day, we dedicate this platform to finding those positions, to developing the techniques, to finding the pressure-points in our media and rhetoric, to making sense of our new conditions, technological and political, and to articulating commonalities and goals.

It’s therefore time we collect and meet in a common, contested, conflicted, complex field that we variously call research, criticism,  scholarship, philosophy, or science. Let us inaugurate a collective, a collective that might form a community, but that cannot and should not be an academic “discipline,” inasmuch as it is academic but undisciplined thinking that we need.

It’s time to collect, and to be collected. Let us be philosophical, whimsical, constructive, critical, and confused. But let us collect, and be collected. Let us write proverbs, poetry, commentary, essays, explorations, and maybe even code. But let us collect, and be collected.  Let us listen, learn, make notes, draw connections, and consider diagrams. But let us collect, and be collected, with ecumenical means in search of effective voice.

We—this “We,” too, is a complex field—invite a world of scholars, computer scientists, thinkers, programmers, poets, and priests to join us. This platform is a position, but a position that will change through collection and collation. We will make a database and channel of what is to be done. And that must remain an open question, our question, for as long as we have energy and affordance to answer it.

. . . . . . . .


Media Are Elemental: Marvelous Clouds

“The time is ripe for a philosophy of media. And a philosophy of media needs a philosophy of nature.” So begins John Durham Peters in his new book, The Marvelous Clouds, subtitled “Toward a Philosophy of Elemental Media.” His larger claim is that media are elemental.

This fall—after a summer’s repose that lingered, we confess, too long into autumn—the Infernal Machine will be considering Peters’ larger, more ambitious project. For if media are elemental, if the philosophy of media needs a philosophy of nature (or even, as Peters claims, a philosophical anthropology), then media inquiry concerns not just the latest social media product or the history of print. Media inquiry concerns the very relationship between humans and nature—the ways that humans, in their frailty and finitude, struggle with all available techniques and technologies to make their way in the world. For Peters, media inquiry is an ethical project.

But first, let’s discuss the claim “media are elemental.”

This fall, one of us spent a day touring three of the Smithsonian museums in Washington, DC: the National Museum of Natural History, the National Museum of American History, and the Air and Space Museum. Only the last seemed to make “sense.” That is, only the Air and Space Museum offered a relatively coherent narrative. Moving from room to room, one followed a fairly straightforward story. From early-modern seafaring, to the Wright brothers, to World War II aerial combat, to nuclear deterrence, to the age of unmanned aerial vehicles, the world has been caught up in an age of ineffable aeronautical adventures. And the United States is the late-modern vanguard. Emblazoned on the tails of fighter jets and the bellies of missiles was the national story of technological flight.

Walking through the National Museum of American History, on the other hand, made no such sense. There was no coherent overall narrative. It was strictly an episodic experience, like watching the History Channel for a day. (No surprise: The History Channel is a prominent museum sponsor.) The National Museum of Natural History—dedicated to the cultural keeping of “nature”—was even more fragmented. Offering no history, no narrative, it simply assembled a pastiche of stuffed mammals, winged butterflies, arctic photographs, and tropical fish around an acquisitive centerpiece, the Hope Diamond.

This tourist left the Mall and its museums with a clear message: Technological innovation is the only shared story that makes sense anymore. Neither the “imagined community” of the nation-state nor the Earth, which for aeons has grounded humans narratively and otherwise, has the symbolic power to make history cohere, at least in the United States. Even natural scientists, as the Museum of Natural History made clear, are engineers taking flights into the statistical improbabilities of human evolution and considerably warmer futures. “History” is technological innovation, a story told best through the marvels hanging from the ceilings of the Air and Space Museum.

To claim, as Peters does (though, in fact, he never says it quite this way), “media are elemental” is undoubtedly to take up the cause of landing media inquiry (of which “technology” is a crucial sub-concept) back on Earth—to make a return flight, so to speak, to the mundane, even if by way of the marvelous. If technology is less a means of flight than grounding, what does this mean for our shared stories, identities, quests, and concerns? If technology is the means by which humans struggle to modify themselves and their environment to make their world inhabitable, then what does this mean for our theories of technology and media?

So, as we kick back into gear after a too-long summer hiatus, among other things on the Infernal Machine we’ll be inviting a variety of colleagues from a variety of disciplines to consider Peters’ Marvelous Clouds and, moreover, to explore the claim, and the case, that media are elemental. Stay tuned! Posts will start rolling later this week.

. . . . . . . .


79 Theses on Technology: On Attention

“Everything,” claims Alan Jacobs in 79 Theses on Technology, “begins with attention.” Throughout his theses, Jacobs describes attention as a resource to be managed. We “pay,” “refuse,” and “invest” attention. Behind these distributive acts is a purposeful, willful agent. I can choose whether to “give” attention to writing this post or “withhold” it from my eleven-year-old son (who wants me to explain what the Turing test is). The idea that I can allocate attention as I could any other resource or good suggests that attention is fungible. I’ve got a limited store of attention, and I have to decide when and where to expend it. It’s as though I wake up every day with 100 units of attention. And it’s up to me to manage them well.

If attention is a good that I can spend well or badly, then it is under my control. I could, on such an account, also lose my attention, in the same way that I lose my keys. And this distributive notion of attention seems to underpin many of our contemporary anxieties about our current moment of digital distraction. The constant notifications from Twitter, Facebook, and my iPhone are all greedy consumers of my attention. If I were just focused, I could assert my powers of distribution and maintain control of my limited units of attention. I would be able to decide exactly which among the myriad objects clamoring for my attention deserves it.

Underpinning Jacobs’ distributive model of attention is an assumption that some general mental faculty, a particular power of the mind, exists that can manage this precious resource. The power of attention—just like other traditional faculties such as reason, memory, imagination, or will—is a latent capacity that needs to be disciplined in order to become fully actual and susceptible to manipulation. It’s like a muscle that needs to be exercised. And if I engage in the right kinds of exercises—maybe if I read really long novels in one sitting or dismantle the WiFi on my laptop—then I can become the master of my own mind. I can be free and in control of who and what can enjoy the benefits of my limited attention. For Jacobs, then, attention is attention, regardless of the object I’m “investing” it in. And my task is to cultivate better habits of managing and controlling my attention.

Jacobs’ suggestion that attention is a mental power that we distribute here or there or anywhere makes sense in certain circumstances. When I engage in discrete tasks, I can think of attention as a limited good that requires tight control and manipulation. If I try to follow my Twitter feed, read a book, and write an article, then I won’t do any of those things well. If I refuse attention to Twitter and the book, however, I may well be able to finish a paragraph.

But this image of a sovereign self governing an internal economy of attention is a poor description of other experiences of the world and ourselves. In addition, it levies an impossible burden of self-mastery. A distributive model of attention cuts us off, as Matt Crawford puts it, from the world “beyond [our] head.” It suggests that anything other than my own mind that lays claim to my attention impinges upon my own powers to willfully distribute that attention. My son’s repeated questions about the Turing test are a distraction, but they might also be an unexpected opportunity to engage the world beyond my own head.

If we conceive of attention as simply the activity of a willful agent managing her units of attention, we foreclose the possibility of being arrested or brought to attention by something fully outside ourselves. We foreclose, for example, the possibility of an ecstatic attention and the possibility that we can be brought to attention by a particular thing beyond our will, a source beyond our own purposeful, willful action.

Consider, for example, Bernini’s sculptural ensemble in the Cornaro Chapel, Santa Maria della Vittoria, Rome, “The Ecstasy of Saint Teresa.” Bernini has given us an image of complete attention and devotion, but one in which the agency of the will has been relinquished. Or consider the more mundane example of the first bud on a dogwood, wholly unexpected after a cold, icy winter. It surprises me by alerting me to a world beyond my own well-managed economy of attention. And, perhaps more perversely, what about all those shiny red notifications on my iPhone that take hold of me? If I imagine myself as master of my digital domain, I’m going to hate myself.

I know Jacobs is acutely aware of the limitations of such a distributive model of attention. He asks, for example, in Thesis 9, whether different phenomena require different forms of attention. There are, he suggests, different ways to attend to particular objects at particular moments—without “giving” or “paying” attention. And it’s these other forms, in which an agent doesn’t simply manage her attention, that seem just as crucial to making sense of how we inhabit our world.

. . . . . . . .


79 Theses on Technology. For Disputation.

Alan Jacobs has written seventy-nine theses on technology for disputation. A disputation is an old technology, a formal technique of debate and argument that took shape in medieval universities in Paris, Bologna, and Oxford in the twelfth and thirteenth centuries. In its most general form, a disputation consisted of a thesis, a counter-thesis, and a string of arguments, usually buttressed by citations of Aristotle, Augustine, or the Bible.

But disputations were not just formal arguments. They were public performances that trained university students in how to seek and argue for the truth. They made demands on students and masters alike. Truth was hard won; it was to be found in multiple, sometimes conflicting traditions; it required one to give and recognize arguments; and, perhaps above all, it demanded an epistemic humility, an acknowledgment that truth was something sought, not something produced.

It is, then, in this spirit that Jacobs offers, tongue firmly in cheek, his seventy-nine theses on technology and what it means to inhabit a world formed by it. They are pithy, witty, ponderous, and full of life. And over the following weeks, we at the Infernal Machine will take Jacobs’ theses at their provocative best and dispute them. We’ll take three or four at a time and offer our own counter-theses in a spirit of generosity.

So here they are:

    1. Everything begins with attention.
    2. It is vital to ask, “What must I pay attention to?”
    3. It is vital to ask, “What may I pay attention to?”
    4. It is vital to ask, “What must I refuse attention to?”
    5. To “pay” attention is not a metaphor: Attending to something is an economic exercise, an exchange with uncertain returns.
    6. Attention is not an infinitely renewable resource; but it is partially renewable, if well-invested and properly cared for.
    7. We should evaluate our investments of attention at least as carefully and critically as our investments of money.
    8. Sir Francis Bacon provides a narrow and stringent model for what counts as attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.”
    9. An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? Or silence?”
    10. Attentiveness must never be confused with the desire to mark or announce attentiveness. (“Can I learn to suffer/Without saying something ironic or funny/On suffering?”—Prospero, in Auden’s The Sea and the Mirror)
    11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.
    12. That is, mindfulness reduces mental health to a single, simple technique that delivers its user from the obligation to ask any awkward questions about what his or her mind is and is not attending to.
    13. The only mindfulness worth cultivating will be teleological through and through.
    14. Such mindfulness, and all other healthy forms of attention—healthy for oneself and for others—can only happen with the creation of and care for an attentional commons.
    15. This will not be easy to do in a culture for which surveillance has become the normative form of care.
    16. Simone Weil wrote that “Attention is the rarest and purest form of generosity”; if so, then surveillance is the opposite of attention.
    17. The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.
    18. The intensity of those battles is increased by a failure by any of the parties to consider the importance of intimacy gradients.
    19. “And weeping arises from sorrow, but sorrow also arises from weeping.”—Bertolt Brecht, writing about Twitter
    20. We cannot understand the internet without perceiving its true status: The internet is a failed state.
    21. We cannot respond properly to that failed-state condition without realizing and avoiding the perils of seeing like a state.
    22. If instead of thinking of the internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.
    23. The internet groans in travail as it awaits its José María Arizmendiarrieta.

    24. Useful strategies of resistance require knowledge of technology’s origin stories.
    25. Building an alternative digital commons requires reimagining, which requires renarrating the past (and not just the digital past).
    26. Digital textuality offers us the chance to restore commentary to its pre-modern place as the central scholarly genre.
    27. Recent technologies enable a renewal of commentary, but struggle to overcome a post-Romantic belief that commentary is belated, derivative.
    28. Comment threads too often seethe with resentment at the status of comment itself. “I should be the initiator, not the responder!”
    29. Only a Bakhtinian understanding of the primacy of response in communication could genuinely renew online discourse.
    30. Nevertheless certain texts will generate communities of comment around them, communities populated by the humbly intelligent.
    31. Blessed are they who strive to practice commentary as a legitimate, serious genre of responsiveness to others’ thoughts.
    32. And blessed also are those who discover how to write so as to elicit genuine commentary.
    33. Genuine commentary is elicited by the scriptural but also by the humble—but never by the (insistently) canonical.
    34. “Since we have no experience of a venerable text that ensures its own perpetuity, we may reasonably say that the medium in which it survives is commentary.”—Frank Kermode
    35. We should seek technologies that support the maximally beautiful readerly sequence of submission, recovery, comment.
    36. If our textual technologies promote commentary but we resist it, we will achieve a Pyrrhic victory over our technologies.

    37. “Western literature may have more or less begun, in Aeschylus’s Oresteia, with a lengthy account of a signal crossing space, and of the beacon network through whose nodes the signal’s message (that of Troy’s downfall) is relayed—but now, two and a half millennia later, that network, that regime of signals, is so omnipresent and insistent, so undeniably inserted or installed at every stratum of existence, that the notion that we might need some person, some skilled craftsman, to compose any messages, let alone incisive or ‘epiphanic’ ones, seems hopelessly quaint.”—Tom McCarthy
    38. To work against the grain of a technology is painful to us and perhaps destructive to the technology, but occasionally necessary to our humanity.
    39. “Technology wants to be loved,” says Kevin Kelly, wrongly: But we want to invest our technologies with human traits to justify our love for them.
    40. Kelly tells us “What Technology Wants,” but it doesn’t: We want, with technology as our instrument.
    41. The agency that in the 1970s philosophers & theorists ascribed to language is now being ascribed to technology. These are evasions of the human.
    42. Our current electronic technologies make competent servants, annoyingly capricious masters, and tragically incompetent gods.
    43. Therefore when Kelly says, “I think technology is something that can give meaning to our lives,” he seeks to promote what technology does worst.
    44. We try to give power to our idols so as to be absolved of the responsibilities of human agency. The more they have, the less we have.
    45. “In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end.”—George Bernard Shaw in 1907, or Kevin Kelly last week
    46. The cyborg dream is the ultimate extension of this idolatry: to erase the boundaries between our selves and our tools.
    47. Cyborgs lack humor, because the fusion of person and tool disables self-irony. The requisite distance from environment is missing.
    48. To project our desires onto our technologies is to court permanent psychic infancy.
    49. Though this does not seem to be widely recognized, the “what technology wants” model is fundamentally at odds with the “hacker” model.
    50. The “hacker” model is better: Given imagination and determination, we can bend technologies to our will.
    51. Thus we should stop thinking about “what technology wants” and start thinking about how to cultivate imagination and determination.
    52. Speaking of “what technology wants” is an unerring symptom of akrasia.
    53. The physical world is not infinitely redescribable, but if you had to you could use a screwdriver to clean your ears.
    54. The contemporary version of the pathetic fallacy is to attribute agency not to nature but to algorithms—as though humans don’t write algorithms. But they do.
    55. This epidemic of forgetting where algorithms come from is the newest version of “I for one welcome our new insect overlords.”
    56. It seems not enough for some people to attribute consciousness to algorithms; they must also grant them dominion.
    57. Perhaps Loki was right—and C. S. Lewis too: “I was not born to be free—I was born to adore and obey.”

    58. Any sufficiently advanced logic is indistinguishable from stupidity.—Alex Tabarrok
    59. Jaron Lanier: “The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart.”
    60. What does it say about our understanding of human intelligence that we think it is something that can be assessed by a one-off “test”—and one that is no test at all, but an impression of the moment?
    61. To attribute intelligence to something is to disclaim responsibility for its use.
    62. The chief purpose of technology under capitalism is to make commonplace actions one had long done painlessly seem intolerable.
    63. Embrace the now intolerable.
    64. Everyone should sometimes write by hand, to recall what it’s like to have second thoughts before the first ones are completely recorded.
    65. Everyone should sometimes write by hand, to revisit and refresh certain synaptic connections between mind and body.
    66. To shift from typing to (hand)writing to speaking is to be instructed in the relations among minds, bodies, and technologies.
    67. It’s fine to say “use the simplest technology that will do the job,” but in fact you’ll use the one you most enjoy using.
    68. A modern school of psychoanalysis should be created that focuses on interpreting personality on the basis of the tools that one finds enjoyable to use.
    69. Thinking of a technology as a means of pleasure may be ethically limited, but it’s much healthier than turning it into an idol.
    70. The always-connected forget the pleasures of disconnection, then become impervious to them.
    71. The Dunning-Kruger effect grows more pronounced when online and offline life are functionally unrelated.
    72. A more useful term than “Dunning-Kruger effect” is “digitally-amplified anosognosia.”
    73. More striking even than the anger of online commentary is its humorlessness. Too many people have offloaded their senses of humor to YouTube clips.
    74. A healthy comment thread is a (more often than not) funny comment thread.
    75. The protection of anonymity is one reason why people write more extreme comments online than they would speak in person—but not the only one.
    76. The digital environment disembodies language in this sense: It prevents me from discerning the incongruity between my anger and my person.
    77. Consistent pseudonymity creates one degree of disembodiment; varying pseudonymity and anonymity create infinite disembodiment.
    78. On the internet nothing disappears; on the internet anything can disappear.
    79. “To apply a categorical imperative to knowing, so that, instead of asking, ‘What can I know?’ we ask, ‘What, at this moment, am I meant to know?’—to entertain the possibility that the only knowledge which can be true for us is the knowledge we can live up to—that seems to all of us crazy and almost immoral.”—Auden

. . . . . . . .


The Thin Reed of Humanism

Dürer, "Melancolia I" Städel Museum

Albrecht Dürer, Melencolia I (1514), Städel Museum

Leon Wieseltier is at his cantankerous best in his latest essay “Among the Disrupted.” After two opening paragraphs that are difficult to read as anything but a commentary on the recent demise of the New Republic, Wieseltier returns to his own well-trod turf:

[A]s technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.

Wieseltier is reprising many of the themes of his public feud with Steven Pinker in the pages of the New Republic (here, here, and here). More than anything else, this earlier spat and Wieseltier’s latest essay are barometers of our impoverished cultural imagination concerning the relationship of science, the humanities, and technology.

When Wieseltier invokes “scientism,” he’s gesturing toward real concerns about the reductive materialism or naturalism that tends to underlie the work of popular polemicists like Dawkins, Dennett, and Pinker. He is not denying that our world and our selves can, in part, be explained through material mechanisms. I assume he enjoys the benefits of modern medicine like the rest of us.

But terms like “scientism” and “technologism,” however well-intentioned, can obscure more than they clarify. Those who bandy them about presume, as the historian James Schmidt lays out, a number of things. First, they presume that there are different ways of knowing the world. There are limits to a uniquely scientific knowledge. There are some things that cannot be fully explained by modern science. Second, they presume that they can discern what those boundaries are. And, finally, they presume that they can diagnose the deleterious consequences of these illicit boundary crossings.

I’m sympathetic to all three of these premises. But I’m much less confident in our ability to identify where science begins and ends than those who so diligently guard the borders of knowledge exclaiming “scientism!” when they suspect interlopers. Those who invoke “scientism”—and there is a long tradition of its use and occasional abuse as Schmidt has wonderfully documented—put themselves in the position not only of policing the borders of knowledge but also of distinguishing real science, a science that knows its place, from a false science, a science that engages in constant and illicit “border crossing.”

My point is that these ominous-sounding terms are all too often used as polemical cudgels. They refer to an illicit encroachment of one sort of knowledge into what is perceived as the proper sphere of another. Thus, “scientism” ultimately refers to the use of uniquely scientific knowledge within a distinctly non-scientific domain. Any appeal to the biological basis of love would be “scientism.” And very bad. These big, ugly words are, in short, the searchlights of our epistemic border guards.

But if “technologism” and “scientism” refer to types of knowledge that don’t know their place, then what type of knowledge or disposition adjudicates where these boundaries begin and end? For Wieseltier and many others today, it’s humanism. Humanism is the positive correlate of all those other “–isms,” those forms of knowledge that blithely stray beyond their boundaries.

But what is humanism? “For a start,” writes Wieseltier, “humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating.” The most common understanding of humanism is that it denotes a pedagogy and a worldview:

The pedagogy consists in the traditional Western curriculum of literary and philosophical classics, beginning in Greek and Roman antiquity and — after an unfortunate banishment of medieval culture from any pertinence to our own — erupting in the rediscovery of that antiquity in Europe in the early modern centuries, and in the ideals of personal cultivation by means of textual study and aesthetic experience that it bequeathed, or that were developed under its inspiration, in the “enlightened” 18th and 19th centuries, and eventually culminated in programs of education in the humanities in modern universities. The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality; a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion. It is all a little inchoate — human, humane, humanities, humanism, humanitarianism; but there is nothing shameful or demeaning about any of it.

Yes, it is all rather “inchoate.” And therein lies the problem.

Wieseltier is correct about the long and admirable lineage of a humanist classical pedagogy, less so about the worldview claim. “Humanism” as a human-centered worldview is a neologism invented not in the Renaissance but in the early nineteenth century as another polemical cudgel, one used to fight the same types of cultural battles that Pinker and Wieseltier have long been waging.

As far as I can tell, humanism, or rather its German cognate Humanismus, was first used in 1808 by the German pedagogue and philosopher F.I. Niethammer (1766–1848). In The Conflict of Philanthropinism and Humanism in Contemporary Theories of Education and Pedagogy, he juxtaposed humanism with philanthropinism, an Enlightenment-era educational theory that regarded the human as a natural being who needed to develop his or her natural capacities. What distinguished “humanism” from more modern forms of education was an underlying concern for, as Niethammer put it, “the humanity [over] the animality” of the human. As a worldview, humanism subordinated the body to reason and defended the autonomy of human nature from the material world. As first used by Niethammer, it was a boundary term; it marked what Niethammer thought was the clear line separating the mental from the material, the human from the animal.

When critics invoke “humanism” against “scientism” or “technologism,” they presume to know the proper boundaries of science and technology; they presume that they can readily and forcefully articulate where scientific knowledge ends and humanistic knowledge begins. They assume the role of guardians of our intellectual and ethical world. That’s a heavy burden.

But it’s also a presumption that ignores how much of our knowledge comes from these border crossings. It’s at the margins of our established ways of engaging our world and ourselves that new ways of seeing and imagining what it is to be human so often emerge. We may well need knowledge police and concepts like “scientism” and “humanism” to warn us of charlatans and interlopers, but we should hope that they do so with a little less alacrity and a bit more humility.

. . . . . . . .
