Humanities in the Face of Catastrophe

Earlier this month, my colleague Bethany Nowviskie asked a group of digital humanities scholars gathered in Lausanne, Switzerland, to consider their work—its shape, its fate, its ends—within the context of climate change and the possibility of the extinction of our own species. (How’s that for a dinner-time keynote? Bourbon, please.)

The premise of Nowviskie’s talk, “Digital Humanities in the Anthropocene,” was the fact of climate change and the irrevocable consequences that it could have for life on our planet. What would it mean, she asked, to be a humanities scholar in the anthropocene, a geological epoch defined by human impact on the natural world? What would it mean to practice the humanities within a geological scale of time?

Whatever the fate of the “anthropocene” as a term (its existence and even inception are still being debated among geologists), the scientists, activists, and scholars who invoke it consider human activity and practices as inseparable from nature. Whether they intend to or not, they thereby challenge basic ideas about the human, culture, and agency that have sustained the humanities for centuries.

The very notion of the anthropocene and the suggestion that humans could cause geological-level change undermines, as Dipesh Chakrabarty puts it, the “age-old humanist distinction between natural history and human history.” From Vico’s famous claim that humans could know only what they have created, while nature remains God’s inscrutable work, to Kant’s understanding of cosmopolitan history as purposive action, Western thinkers have long distinguished between human and natural history. But granting humans a geological agency, however aggregative and delayed, undermines this distinction. What becomes of history and culture when it can no longer be thought of as simply human?

For one thing, we can better conceive of our cultural and historical labor as bound to the contingency, finitude, and flux of nature, and, thus, cultivate a more acute awareness of how extinguishable our labor can be. Here’s Nowviskie speaking to her digital humanities colleagues:

Tonight, I’ll ask you to take to heart the notion that, alongside the myriad joyful, playful scholarly, and intellectual concerns that motivate us in the digital humanities—or, rather, resting beneath them all, as a kind of substrate—there lies the seriousness of one core problem. The problem is that of extinction—of multiple extinctions; heart-breaking extinctions; boring, quotidian, barely-noticed extinctions—both the absences that echo through centuries, and the disposable erosions of our lossy everyday. We edit to guess at a poet’s papers, long since burned in the hearth. We scrape through stratigraphic layers of earth to uncover ways of life forgotten, and piece together potsherds to make our theories about them hold water. Some of us model how languages change over time, and train ourselves to read the hands that won’t be written anymore. Others promulgate standards to ward against isolation and loss. With great labor and attention, we migrate complex systems forward. We redesign our websites and our tools—or abandon them, or (more rarely) we consciously archive and shut them down. DHers [digital humanities scholars] peer with microscopes and macroscopes, looking into things we cannot see. And even while we delight in building the shiny and the new—and come to meetings like this to celebrate and share and advance that work—we know that someone, sooner or later, curates bits against our ruins.

Humanities labor represents, continues Nowviskie, a deeply human striving to communicate “across millennia” and “a hope against hope that we will leave material traces,” in stone, manuscript, print, or digital form.

This is where the strivings of humanities scholars and geologists of the anthropocene intersect. They both rummage about nature for traces of the human. And while for some, such ceaseless searching is driven by an indomitable humanism, a confidence in a human spirit that will always live on, others are haunted by the possibility of our extinction and the end, as Nowviskie puts it, “of so much worldly striving.” But those of us who are in search of a fragmented and sundered human are also motivated by this prospect of extinction.

When Nowviskie exhorts us to “dwell with extinction,” she echoes a humanistic disposition that has long been defined by the specter of loss. Since at least the studia humanitatis of the early modern period, scholars have practiced the humanities in anticipation of catastrophe. Theirs, however, was not a crisis of the guild, anxieties about budget cuts and declining humanities enrollments, but a fear of the radical and irreparable loss of the human record.

Many of the great encyclopedic works of the early modern European scholars were defined, as Ann Blair describes them, by the “stock-piling” of textual information intended to create a “treasury of material.” The early modern masterpieces of erudition—such as Conrad Gesner’s Bibliotheca universalis (1545) or Johann H. Alsted’s Encyclopaedia septem tomis distincta (1630)—were motivated not simply by information-lust but by a deeply cultural conception of the ends of scholarship, a desire to protect ancient learning and what humanists considered to be its integrity and authority. These early humanists, writes Blair, “hoped to safeguard the material they collected against a repetition of the traumatic loss of ancient learning of which they were keenly aware.” This loss had rendered Greek and Roman antiquity inaccessible until its gradual recovery in the fifteenth and sixteenth centuries. Humanist scholars saw encyclopedias and related reference works that collected ancient learning as guarantees that knowledge could be quickly reassembled should all books be lost again. They were busy collecting and curating because they feared another catastrophe.

Similarly, Denis Diderot described the eighteenth-century Encyclopédie as insurance against disaster:

The most glorious moment for a work of this sort would be that which might come immediately in the wake of some catastrophe so great as to suspend the progress of science, interrupt the labors of craftsmen, and plunge a portion of our hemisphere into darkness once again. What gratitude would not be lavished by the generation that came after this time of troubles upon those men who had discerned the approach of disaster from afar, who had taken measures to ward off its worst ravages by collecting in a safe place the knowledge of all past ages!

But even as Diderot promised his contributors future glory, he also acknowledged how fleeting the whole endeavor could be. The Encyclopédie, he lamented, would be irrelevant as soon as it was printed. The traces of the human that the “society of gentlemen” had so diligently collected in print would be out of date upon its first printing. Even print could not stop time.

In the nineteenth century, the famous German classicist August Böckh claimed that all this striving to collect and protect the material traces of the human was exemplified in the science of philology, which he defined as the Erkenntnis des Erkannten, or knowledge of what is and has been known. Our current knowledge is only as good as our past knowledge. And working with the fragmented documents of that past gave philologists an acute sense of how fragile our knowledge and history of ourselves was. The philologist’s attention to and care for the material nature of human culture—its embodiment in documents and texts of all sorts and qualities—was cultivated by a consciousness of how fragile it all was. The only thing that bound the species together was a documentary record always under threat.

Today that documentary record is undergoing a transition, unprecedented in its speed and extent, from printed to digital forms. Some scholars, archivists, and librarians warn that this move could prove catastrophic. Things of our past may be lost and forgotten. But if the long history of the humanities teaches us anything, it is that humanistic work has always been practiced in the face of catastrophe.

In our digital age, the vocation and the disposition of the humanities remains intact. “The work of the humanist scholar,” writes Jerome McGann, “is still to preserve, to monitor, to investigate, and to augment our cultural life and inheritance.” And it’s this disposition that thinking about the humanities in the anthropocene may help us recover.


Photograph: The dove and the raven released by Noah, with drowning people and animals in the water beneath the ark. From the Holkham Bible Picture Book, Add MS 47682, England: second quarter of the 14th century, Parchment codex, The British Library, London.


The Ethics of Squirming

Most readers of the Infernal Machine (though you, like us, may have been taking a break from blogging) have probably read something about the controversial Facebook “emotional contagion” study. It seems that a couple of years ago a researcher at Facebook decided to see what would happen if he tweaked the words in Facebook’s “News Feed” of close to 700,000 users so as to manipulate the “emotional content” of the News Feed. Would users respond on their Facebook pages in step with the manipulations? That is, could Facebook make people feel better if it tweaked its secret algorithms to prioritize “positive” words in the News Feed? (People might spend more time on Facebook if they could!)

The researcher, Adam Kramer, then brought in some university researchers to look at the massive data set. They did some “big data” statistical analyses (of a relatively straightforward type) and added some psychological theory and found this:

When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

The paper was peer reviewed and published in the prestigious Proceedings of the National Academy of Sciences (PNAS).

The merits of the study itself are highly questionable. My social science colleagues tell me that with such a massive sample size, you are almost always going to arrive at “statistically significant” findings, whether you are measuring “emotional contagion” or compulsive belching. The fact that the statistically significant “effect” was minimal throws more doubt onto the validity of the study. Furthermore, my colleagues tell me it is doubtful that the study is measuring “emotional contagion” at all; theories other than emotional contagion would explain why, when Joe posts a “negative” statement, Jane is reluctant to follow with a “positive” one.
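The sample-size point can be made concrete with a back-of-the-envelope calculation. In a standard two-sample test, the expected z-statistic grows with the square root of the sample size, so at Facebook's scale even a negligible effect clears the conventional significance threshold. Here is a minimal sketch; the numbers are illustrative, not the study's actual figures:

```python
import math

def expected_z(cohens_d: float, n_per_group: int) -> float:
    """Expected two-sample z-statistic for a standardized effect size
    (Cohen's d) at a given per-group sample size. Because z scales with
    the square root of n, huge samples render tiny effects
    'statistically significant.'"""
    return cohens_d * math.sqrt(n_per_group / 2)

# Illustrative figures: a negligible effect (group means differing by
# 2% of a standard deviation) at roughly the study's scale
# (~700,000 users total, split into two groups).
tiny_effect = 0.02
n_per_group = 350_000

z = expected_z(tiny_effect, n_per_group)
print(f"Cohen's d = {tiny_effect}, expected z = {z:.2f}")
# The resulting z-statistic lands far beyond the usual 1.96 cutoff,
# even though the underlying effect is trivially small.
```

The same calculation at an ordinary lab-study scale (say, 50 subjects per group) yields a z well under 1, which is why "significance" at n = 700,000 tells us little about whether the effect matters.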

But the main controversy surrounds the ethics of the study: Participants were never made aware that they were part of a massive “experimental” study, and Facebook seems to have fudged on the timing of the “data use policy,” inserting the current bit about “data analysis” and “research” after the data was collected. This is out of keeping with common practice in the social sciences, to say the least.

Reviewing responses to the study within the academic community, I’ve noticed quite a lot of squirming. The fact is that university researchers were a major part of this study; Cornell University’s Institutional Review Board (IRB) approved the study; and a prestigious academic journal has published it.

Now everyone, including Facebook’s researcher Adam Kramer, is squirming. Cornell University issued a bland statement clearly intended to wiggle its way out of the issue. PNAS has issued an “editorial expression of concern.” And the new media researchers who work in the same circles as the university-based Facebook researchers have turned to such clichés as don’t “throw the baby out with the bath water” (don’t over-react to university researchers using corporate-owned data on how companies attempt to manipulate their users!). They say people should stop “pointing the finger or pontificating” and instead “sit down together and talk.” “We need dialogue,” writes Mary Gray of Indiana University and Microsoft Research, “a thoughtful, compassionate conversation.”

Okay. But perhaps before we have that compassionate sit-down about the ethics of big data social media research, we might take a moment to think about the ethics of squirming. For, in this case, there has been way too much of it. I, for one, think TIME’s television critic James Poniewozik has been clear enough about the study: “Facebook can put whatever it wants in the fine print. That shouldn’t keep us from saying that this kind of grossness is wrong, in bold letters.” I don’t think much more needs to be said about the ethics of this particular study.

But something more does need to be said about the broader ethics of research, which sometimes puts us in uncertain ethical situations. There is something about the will to know, and more about the professionalization of knowledge production, that leaves us more frequently than we would like in tricky ethical territory. Rather than relying on an IRB “stamp of approval,” university researchers might simply stop squirming, take responsibility for their work, and even say they regret it.

Here’s what an ethics of squirming might look like:

(a) Start with full disclosure. Here are the conflicts I am dealing with (or have dealt with); these are the messy issues; here is why I really want to squirm, but I won’t.
(b) Here’s where I (or we) may have been, or indeed were, wrong. Here are our moral regrets. (By the way, Mr. Kramer, making people “uncomfortable” is not a moral regret.)
(c) Here’s why I would or would not do it again.

All in all, the ethics of squirming entails less squirming and more speaking directly to the issues at hand. It means taking responsibility, either by repenting of wrongs and righting them if possible, or justifying one’s actions in public.

All together now:  Mea culpa.

“Open” is not Public

One of the English words most in need of rehabilitation these days is the word “public.” I have to confess that though I hear the word all the time, and think about it regularly, I don’t know how it needs to be rehabilitated. But I am certain it does.

Historically, the most obvious thing one could say about the word “public” was that it is not “private.” That distinction, however, hardly holds anymore, as the “private”—whether in the form of private lives, private capital, or private information—now fills our political and social lives, such that the distinction between the private and the public makes increasingly less sense. It is not just that the most “public” controversies today—for example, the Donald Sterling debacle—tend to involve the “public” exposure of “private” lives, it is that many of our most pressing political and social problems today are inextricably tied to private interests, private choices, and privacy.

But the problem with “public” is not just that the ancient dialectic on which it rests—public versus private—is outmoded. There is also a new meaning of “public” now in wide circulation, operating (as almost all language does) at the level of unexamined common sense. “Public” is now used as a synonym for “open.”  

Let me offer an example. A few weeks back I was invited, with several other faculty, to a meeting with a program officer from a major foundation to discuss ways to make humanistic scholarship more public. What struck me about our conversation, which was quite lively, was that most people in the room seemed to assume that the main barrier to a public life for humanistic scholarship was access or openness. The thinking went like this: University presses, which publish most long-form humanistic scholarship, put up physical and financial barriers to a “public” life for scholarship. That is, print books have limited runs and cost money, sometimes quite a bit of money. Therefore these books sit on library shelves for only a few specialists to read. The solution, many in the room seemed to assume, was to convince university presses to “go digital,” with the foundation using its money to offset the financial loss presses would incur by doing so.

The problem of a public life for humanistic scholarship was one that the program officer presented. The foundation wanted to figure out how to invest their money in such a way as to help the humanities “go public.” But for virtually everybody in the room this meant “going digital,” making humanities work openly accessible. Openness—or open access—was the assumed key to a public life for humanistic scholarship.

But making something openly accessible does not make it public. To make something accessible or “open” in the way we talk about it today does not assume, on the level of norms, making it legible, debatable, let alone useful to non-specialists. There are millions of studies, papers, and data sets that are openly accessible but that nevertheless do not have a public life. The U.S. government, no less, has invested in various “openness” initiatives over the past two decades. These projects are presented as democratic gestures to the “public,” but they do little more than allow governing agencies to display their democratic credentials and grant a few specialists access to documents and data. To make something open or accessible is not to make it public.

What would it mean to make humanistic scholarship or government data or, for that matter, computing code, truly public? One clue does come to us from antiquity: The Latin word publicus meant “of the people.” Publicus was an attribute, a quality or feature of a person, thing, action, or situation, rather than the thing itself. It did not, mind you, mean that the person, thing, action, or situation was authorized by, or somehow the consequence of, the majority. That is, publicus was not a synonym for “democratic.” Rather, it meant that the person, thing, action, or situation was plausibly representative “of the people,” such that the people could engage it in a useful and productive political manner.

The question of how to make something public concerns how to endow it with a quality that could be attributed to “the people.” Clearly, this would mean taking seriously matters of “design” or what Cicero called “style,” such that the “public thing” (in Latin, the res publica) is first of all legible (able to be “read” if not fully understood), second of all in some sense subject to discourse and debate (and thus subject to political deliberation), and third of all socially useful. This means thinking long and hard about “the people” themselves: their habits, their tastes, their interests, their schedules, their aptitudes, and so on. While “openness” may in certain circumstances be part of that which builds up to a “public” quality, I would venture to say that openness or access is not even necessary. Cicero, for one, saw “public” life as one that was styled in such a way as to be of the people but not necessarily exposed or “open” to the people. Public speeches, public events, and public figures could—like some public art today—be a bit opaque (legible enough, but not necessarily fully understandable) but still be “of the people.” Moreover, certain information and deliberation may need to be kept secret for good reasons—for example, having to do with the fair administration of justice (think of a jury deliberating behind closed doors)—but that information and deliberation can still be public according to the criteria above.

Indeed, openness is relatively easy; publicity is hard. Making something open is but an act; making something public is an art.

Our digital moment is a triumphal one for “openness.” But “open” is not public. If we are to really push the digital into the public, we need go well beyond questions of access, openness, and transparency and start asking how it is that the digital might take on a quality that is “of the people.”

You must change your graduate program!

Of all the hyperbole surrounding the fate of the humanities today, the problems facing graduate studies seem the least exaggerated. The number of PhDs has far outpaced the number of available full-time jobs. Financial support is both inadequate and unevenly distributed, requiring students to defer earnings over long periods of time, assume unneeded debt, and compete against differently funded peers.

Overspecialization has made knowledge transfer between the disciplines, not to mention between the academy and the rest of the workforce, increasingly difficult. The model of mentorship that largely guides student progress, now centuries old, seems increasingly out of touch with the culture of non-academic work, so that students are ill-prepared to leave the academic track. Time-to-degree has not sped up; worse, it increasingly correlates with lower success rates—the longer students stay in the PhD track, the lower their chances of full-time employment.

Philosophy Seminar Table at University of Chicago

The hegemony of the seminar as the only model of learning is also at odds with much recent thinking about learning and intellectual development. Undoubtedly, numerous teachers and students alike would describe at least some, if not many, of their seminar experiences as profoundly uninspiring. Add to that the way we largely operate within a developmental model premised on major phase-changes that dot otherwise stable, and largely flat, plateaus. The long time between exams, and their sink-or-swim nature, does little to promote long-term incremental development of students as thinkers, writers, or teachers. We think more in terms of a nineteenth-century-inspired model of botanical metamorphosis, with its inscrutable internal transformations, than in terms of incremental, cumulative performances.

There are also bio-political aspects to the crisis that have recently been raised: the most intense periods of work and the most intense periods of insecurity overlap precisely with the normal timeframe of human fertility. This PhD model seems downright Saturnian, consuming its own offspring.

What is there to like about this scenario?

Luckily, the MLA has issued a report about graduate education. “We are faced with an unsustainable reality,” the report states. Indeed. And then come the all-too-familiar platitudes. Maintain excellence. More teaching. Shorter time-frames. Innovate. Better connection with non-academic jobs. Advocate for more tenure-track positions.

As the clichés pile up, so do the contradictions. Get out faster, but spend more time teaching. Keep up those rigorous standards of specialization, but do it with haste (and be interdisciplinary about it). No teaching positions available? Look for another kind of job! Persuade the university to hire more tenure track positions—whom do I call for that one?

Will the MLA report lead to changes? It’s doubtful. Sure, we’ll crack down on time to degree without really changing requirements. We’ll spice up course offerings and maybe throw in an independent project or two (digital portfolios!). We’ll scratch our heads and say the PhD would be a great fit for a job in consulting, or in a museum, maybe even Google—and then do absolutely nothing. Five or ten years from now, we’ll talk about a crisis in the humanities, the shrinking of the field, and notice that, once again, there seem to be fewer of us hanging around the faculty club.

Nothing will change because we don’t have to. As long as there are too many graduate students, there is no problem for faculty. And no matter what we say, what we do is always the same thing. Try approaching your department and saying you need to change the scope, scale, content, and medium of the dissertation. You’re in for a fun conversation.

Try implementing a mandatory time-limit with yearly progress reports and consequences for failure and you’ll be barraged with so many exceptions your Agamben will start to hurt. What do employers want from PhDs—good writing skills, general knowledge, analytical capability, facility with numbers, strong work habits, works well with others? Sorry, can’t help you. None of our students has those skills since our program doesn’t emphasize them (but we have a writing center!).

Nothing will change because we don’t have to. We’re so conservative at heart we’d rather die out with our beliefs intact than do anything that might actually better serve the student population. We’ll continue to point to the exceptions without realizing how much they still look like something we didn’t want to change.

The PhD did something once, or rather it did one thing and it did it reasonably well. It still does that one thing, which is now, quantitatively speaking, vastly unnecessary. Some have suggested scaling up to meet the sciences on their own ground (and here at The Infernal Machine, too). I would suggest that we scale down to meet the needs of the world at large. More modular, more flexible, more creative, more varied, more timely, more general, more collaborative, and more relevant. Until we have any proof that our programs are feeders for jobs outside the academy, we’re just failing by another name.

We can either change in substantive ways or pretend to do something else while actually continuing to do the same things we’ve always done. The MLA report looks a lot like the latter and no doubt so will most of the responses to it.

I’m looking forward to next year’s report. How many ways can you play the same tune?



Big Humanities

[Editor's Note: This is the second installment in The Humanities in Full series.]

Before there was big science or big data, there was big humanities. Until the last third of the nineteenth century, the natural and physical sciences imitated many of the methods and practices of the humanities, especially disciplines like philology, which pioneered techniques in data mining, the coordination of observers, and the collection and sorting of information—what Lorraine Daston terms practices of “collective empiricism.”

One of the most successful and long-lasting projects was led by the Berlin philologist August Böckh. In a proposal to the Prussian Academy of Sciences in Berlin in 1815, Böckh and his colleagues requested funding for a long-term project to collect as completely as possible all Greek inscriptions, printed, inscribed, and holograph. Eventually published in four installments as Corpus Inscriptionum Graecarum between 1828 and 1859, with an index in 1877, Böckh’s project was an organizational feat that relied on the work of hundreds of philologists over decades, and it quickly became a model for German scholarship in all fields. “The primary purpose of a Royal Academy of the Sciences,” he wrote, should be to support the type of work that “no individual can accomplish.” The project collected, stored, preserved, and evaluated data. And in Böckh’s case, the data were Greek inscriptions scattered across the Mediterranean.

Böckh’s Corpus Inscriptionum Graecarum was just a prelude. In his inaugural lecture to the Prussian Academy of Sciences in 1858, Theodor Mommsen, one of Germany’s foremost classical scholars, declared that the purpose of disciplines like philology and history was to organize the “archive of the past.” What Mommsen had in mind, as would become evident in the kinds of projects he supported, was not some abstract archive of immaterial ideas. He wanted scholars to collect data and shape it into meticulously organized and edited printed volumes in which the “archive” would take tangible form. Work on the scales that Mommsen imagined would require international teams of scholars and the “liberation” of scholars from what he dismissed as the “arbitrary and senseless” divisions among the disciplines.

As secretary of the Prussian Academy of Sciences, Mommsen set out to institutionalize his vision of big philology, or what he termed the “large scale production of the sciences” [Grossbetrieb der Wissenschaften]. After securing a three-fold increase in the Academy’s budget, he supported a series of monumental projects. He oversaw the internationalization and expansion of the Corpus Inscriptionum Latinarum, the Latinate counterpart to Böckh’s project that sought to collect all inscriptions from across the entire Roman Empire. It eventually collected more than 180,000 inscriptions and grew to 17 volumes plus 13 supplementary volumes. Mommsen also helped church historian Adolf Harnack secure 75,000 Marks and a 15-year timeline for a project on Greek-Christian Authors of the First Three Centuries, the modest goal of which was to collect all of the hand-written manuscripts of early Christianity. Other projects included a prosopography of ancient Rome funded for a period of ten years.

Looking back on what Mommsen had accomplished for modern scholarship, the German philologist Ulrich von Wilamowitz-Moellendorf wrote:

The large scale production of science cannot replace the initiative of the individual; no one knew that better than Mommsen. But in many cases the individual will only be able to carry out his ideas through large scale production.

Theodor Mommsen, Wikipedia Commons

Figures such as Böckh and Mommsen introduced different scales to knowledge creation and different skill sets to humanistic scholarship. They developed, coordinated, and managed teams of people in order to organize huge sets of texts and data.

But to what end? What was the purpose of all this collecting, organizing, and managing? This was the question that transformed Germany’s most self-loathing philologist into a philosopher. A wunderkind trained in Leipzig, Friedrich Nietzsche was appointed professor of classical philology at the University of Basel at the age of 24, before he had even finished his doctorate. Just as Mommsen was busy assembling the “archive of the past,” Nietzsche began to diagnose modern culture, not to mention himself, as suffering from a bad case of “academic knowledge,” or Wissenschaft.

In We Philologists, Nietzsche excoriated his fellow scholars for abdicating philology’s real task. Ultimately, he argued, philology was not about advancing knowledge or building an “archive of the past.” It was about forming stronger, healthier human beings on the model, or at least the idealized model, of the ancient and classical Greeks. The real philologist was a lover of antiquity, someone who sought to transform himself through an encounter with a superior culture. Every good and worthwhile science, he wrote, should be kept in check by a “hygienics of life”—practices by which whatever one learned could be integrated into how one lived.

Despite his stylized iconoclasm, Nietzsche was a traditional German Grecophile for whom antiquity of the Greek sort was a moral utopia. But he was also a modern scholar struggling to come to terms with the ascendant research university and what we recognize today as its basic elements: the division of intellectual labor, academic specialization, and the constant struggle to integrate new technologies and practices for sifting through and making sense of the past.

Nietzsche’s polemics against big philology were precursors to contemporary anxieties about what might become of the humanities in the digital age. Data—be it 180,000 inscriptions or hundreds of digitized novels—cannot speak for itself, but it is never incoherent. It’s always collected, organized, edited, framed, and given meaning, whether in nineteenth-century printed volumes or twenty-first century graphs. With his bombast and passions, Nietzsche made the case for values and interpretation at a moment when textual empiricism was ascendant and positivism loomed. “Yes, but how are we to live!” was his constant refrain.

The catch, of course, is that most of us aren’t Nietzsche, though arguably too many contemporary scholars of a strongly critical-theoretical bent aspire to be. Scholarship and knowledge might be better served if many such would-be master interpreters settled for the humble but necessary drudgery of collecting, annotating, and commenting on the “archive of the past,” maintaining cultural inheritances and providing invaluable grist for the equally important job of hermeneutics. We shouldn’t forget that Nietzsche the philosopher, the moral psychologist who diagnosed the ethical ills of modernity, grew out of Nietzsche the philologist, the erudite scholar who reverentially tended ancient traditions and texts.

Nineteenth-century practices of collecting and evaluating data don’t exhaust the work of the humanities, but they highlight a broader history of the humanities in which collecting and evaluating data has been a central and even noble pursuit. Thinking of the humanities and the sciences in terms of what humanists and scientists actually do might help us develop a longer history of the humanities and see continuities that simple polemics only conceal. Nietzsche and his fellow nineteenth-century philologists struggled to reconcile more interpretive methods with historical approaches, to blend pleasure and delight with critical distance, and to temper particularity with timeless value. But Nietzsche represents only one side of the debate. While his critiques of the utopian impulses of big philology were necessary correctives, he ultimately left the university and withdrew to a life in extremis, writing at the edge of lucidity and under the shadow of genius.

The Wall Must Stand: Innovation at the New York Times

“In the future,” writes digital scholar and Twitter wit Ian Bogost, “all news will be about, rather than in, the New York Times.”

That future seemed to arrive last week, and not only with the controversy unleashed by the abrupt firing of executive editor Jill Abramson. Possibly as part of the storm, someone in the newsroom leaked a 96-page document with detailed proposals for bringing the Times more fully into the digital age and—even more important—making the Grey Lady more “Reader Experience”-friendly.


Nieman Journalism Lab’s Joshua Benton calls the report “one of the most remarkable documents I’ve seen in my years running the Lab.” He even tasked three of his staffers with excerpting highlights from it. But the whole thing merits reading by anyone interested in the possible (or inevitable?) future of the news.

Not that there is anything truly new or surprising on any page of “Innovation,” as the study is so grandiloquently titled. Put together by a six-member team led by Arthur Gregg Sulzberger, the publisher’s son and heir apparent, the report is a compendium of ideas, strategies, arguments, and veiled threats familiar to anyone who has worked in or around newsrooms during the last decade or so. From the title on, it buzzes with the kind of jargony nostrums that fuel TED Talks and South by Southwest conferences, from impact toolboxes and repackaging old content in new formats to making journalists their own content promoters and integrating the Reader Experience team with the newsroom to, well, anything else that helps counter the disruptive incursions of new media upstarts like Buzzfeed and Huffington Post.

And why not counter those disrupters? As the report frequently notes, the NYT produces consistently superior content but does an almost-as-consistently inferior job of getting its content out to its readers. In some cases, competitors are even more successful at distributing Times content than the Times itself. It makes little sense to remain satisfied with such a status quo.

But reading the report invites suspicion on at least two counts. The first is quite immediate: How is it possible that these objectives haven’t already been accomplished? (And, in fact, is it possible that many of them have, as some NYT insiders say—proving once again that many strategic studies merely confirm the direction in which the institution is already heading?) My incredulity arises from the facts presented in the report itself, namely the astonishing number of Times employees already dedicated to Reader Experience activities (which, as the report notes, “includes large segments of Design, Technology, Consumer Insight Group, R&D, and Product”).

The problem, explicitly remarked upon at several points in the report, appears to be turf wars. Some of this is simply silly, such as the exclusion of Reader Experience people from key editorial meetings or other instances of uncollegial shunning. But I suspect the problem also stems from something far less silly, indeed, from the most fundamental of political-institutional questions: Who, at the end of the day, will be in command of the combined newsroom and Reader Experience personnel? Will it be the editorial leadership or the business leadership?

That question can’t be finessed, fudged, blurred, or deferred. It must be answered forthrightly, because if it isn’t, the very purpose of a serious news organization becomes unclear.

And that leads to my second big concern. What is the real goal of “innovation” at the New York Times? Is it intended primarily to enable the editorial leaders to use and inculcate the best practices of distribution, with additional staff possessing advanced skills in those practices, in order to support and advance strong journalism? Or is it intended primarily to increase the number of Reader Experiences as measured through analytics and other metrics at the expense, in the long or short run, of the highest quality of journalism? If the former, I am on board—who wouldn’t be?

Which is why the political question must be answered first. If it isn’t, and if the new and enhanced newsroom ends up being run by the business side, then decisions will be made that will slowly erode the quality of the journalistic content. If content packagers and social media community managers answer ultimately to publishing executives and not to editors, then they will be able to demand the kind of content—whimsical features, for example, rather than hard reporting—that tends to trend most strongly. The sad fact is that cute cat stories always sell better than revelations about city hall. The number of hits, likes, or visits will gradually but inevitably determine the editorial agenda.

Overstated? Simplistic? I don’t think so. When the ultimate purpose of a news organization is something that can be evaluated almost exclusively by metrics, then you can be sure you are no longer talking about a news organization. A media company, perhaps, but not a journalistic one.

The report calls for some breaching of the editorial-publishing (or church-state) firewall. That sounds suspect to me. What it should call for is the migration of some of those Reader Experience departments and personnel to the editorial side of the firewall. The wall itself must stand. Or the journalism will fall.

Jay Tolson is the Executive Editor of The Hedgehog Review.



The Humanities in Full: Polemics Against the Two-Culture Fallacy

The New Republic does not like the digital humanities. Following Leon Wieseltier’s earlier diatribes, Adam Kirsch recently warned that the digital humanities and their “technology” were taking over English departments. Kirsch posed some reasonable questions: Are the digital humanities a form of technological solutionism? No, notwithstanding the occasionally utopian strand. Are the digital humanities “post-verbal”? With all their graphs, charts, and network visualizations, do they aspire to a discourse of mere pictures and objects? No and no. With all their generously funded projects, are they embracing the “market language of productivity to create yet another menacing metric for the humanities?” A good question that deserves thoughtful responses (here and here).

But Kirsch’s essay isn’t really about the digital humanities. It’s about the humanities more broadly and Kirsch’s truncated and ahistorical vision of what they ought to be. The problem with the digital humanities, he writes, is that they go against the “nature of humanistic work.” And their errant ways

derive from a false analogy between the humanities and the sciences. Humanistic thinking does not proceed by experiments that yield results; it is a matter of mental experiences, provoked by works of art and history, that expand the range of one’s understanding and sympathy. It makes no sense to accelerate the work of thinking by delegating it to a computer when it is precisely the experience of thought that constitutes the substance of a humanistic education. The humanities cannot take place in seconds. This is why the best humanistic scholarship is creative, more akin to poetry and fiction than to chemistry or physics: it draws not just on a body of knowledge, though knowledge is indispensable, but on a scholar’s imagination and sense of reality. Of course this work cannot be done in isolation, any more than a poem can be written in a private language. But just as writing a poem with a computer is no easier than writing one with a pen, so no computer can take on the human part of humanistic work, which is to feel and to think one’s way into different times, places, and minds.

Kirsch pits the technologically unadorned humanities that produce subjective experiences against the technology-dependent sciences that produce mere facts. This simple and false dichotomy manages to slight both at once, and to obscure more than it clarifies.

In fact, this humanities-sciences dichotomy is relatively recent. As it turns out, the humanities itself is a relatively recent term, seldom used before the nineteenth century. The OED lists its first use in 1855, in a reference to music as one of “the humanities.” Google’s Ngram keyword search shows a marked increase in the prevalence of the term around 1840, just as the natural and physical sciences were becoming ascendant in universities.

Today’s distinctions between the digital humanities and the humanities proper have their longer history in these nineteenth-century divisions. For well over a century now, one of the dominant notions of the humanities, in the academy at least, has been one that casts them squarely against the natural sciences. And this conception of the humanities, which has since gained wider influence in the culture at large, was first articulated by the late nineteenth-century German scholar Wilhelm Dilthey. Dilthey distinguished the human sciences [Geisteswissenschaften] from the natural sciences [Naturwissenschaften].

The Geisteswissenschaften, claimed Dilthey, studied the inner workings of mental facts – that is, the internal processes of human experience (Kirsch’s beloved mental experiences). For this internal realm, the freedom and autonomy of the subject were central and, thus, the primary objects of inquiry.

The natural sciences, by contrast, studied material processes governed by natural laws and the mechanisms of cause and effect. For Dilthey, humanities scholars don’t count, measure, or seek patterns; they seek to understand what motivates canonical historical figures who produce works of art and other artifacts of culture (Kirsch’s struggle to understand not just explain Auerbach’s Mimesis, for example). The human sciences explain phenomena from within, the natural sciences from without.

Dilthey’s efforts to distinguish sharply between the two forms of inquiry were in large part meant to resist the rising influence of the natural sciences in nineteenth-century German universities and, above all, the influence of positivism: the notion that we can have knowledge only of phenomena (that the only possible knowledge is of an endless series of facts). Kirsch’s embrace of a very particular and limited notion of the humanities is, like Dilthey’s, reactionary. But whereas Dilthey feared the pervasive and corrosive effects of positivism, Kirsch fears the utopian delusions of technological solutionism.

These simple oppositions—the humanities versus the sciences—confuse more than they enlighten and, in a timeless irony, produce deeply anti-humanistic polemics. They also ignore the historical fact that the humanities and humanistic inquiry more broadly have been concerned not only with particular human artifacts (one painting, one poem, one piece of music) but also, as Dutch scholar Rens Bod recently put it, with the patterns and principles that help us make sense of and enjoy these artifacts.

The kinds of things that humanists actually do when they engage products of human creativity—their practices—have always been bound up with efforts to make connections and generalize. From Lorenzo Valla’s careful and methodical debunking of the Donatio Constantini (On the Donation of Constantine) in 1440 to Erich Auerbach’s Mimesis in 1946, humanists of all kinds have relied on particular notions of method, evidence, verification, and argument, just as the natural and physical sciences have relied on intuition and creativity.

We need a history and vision of the humanities capacious enough to see them not as a particular method or set of disciplines but as a disposition, a way of engaging the world. What follows in subsequent posts are short, polemical (in the best sense, I hope) steps toward such a history.


Semi-Pro—On the New Work-Study Conundrum

With the recent NCAA ruling in the Northwestern football case—in which college players were deemed eligible to unionize—the question of work on campus has reared its ugly head once again. However much the ruling was more narrowly about the behemoth that is NCAA sports, it was yet another sign of the increasing professionalization of all things academic. Even those students who were thought to be playing games are working. Whether it’s graduate students or student-athletes, there is a lot more work going on on campus than people had previously thought. In the good old days, work was something you did on the side while you studied. Now, it might be the only thing you do.


In response to the decision, voices from inside the university were familiarly hypocritical. There’s just too much money at stake not to preserve the myth of the amateur or the apprentice. (In fact, it’s appalling how much money is in play in a system that still largely uses medieval guild models to understand itself.) Voices in the press, on the other hand, were typically nostalgic. Wouldn’t it be nice if this weren’t the case? Isn’t there some way of restoring the work-study balance so we can imagine real student-athletes and not financially indebted apprentices? Remember Greece?

Rather than banish the idea of work like some unclean vermin out of a Kafka story, we should be taking the opportunity to look it more squarely in the face, and not just in the world of college sports. Work is everywhere on campus today. It’s time we accepted this fact and started rethinking higher education accordingly — starting with that most hallowed distinction between working and studying that informs almost everything else we do.

Conflating work and study might seem to many to bring together categories that are traditionally seen as opposites. That’s why we have two words for them after all. But they’re also supposed to be sequentially related. You don’t just do one instead of the other. You are supposed to move from the one to the other.

Imagining work as part of study rather than as its opposite or its aftermath challenges this sequence, especially assumptions about how well it actually works. We largely operate with what we could call a mimetic theory of learning, in which students imitate, usually poorly, “real” work, after which at some point they presumably stop simulating and start producing. No one knows when this is supposed to happen or how. The failure rates in graduate school, and the difficulties most undergraduates face in moving from school to jobs, testify to how little this process results in putting people to work. And yet we insist on keeping “work” at bay.

My point is that the recent influx of work’s presence in the academy might in fact be good for learning. With all the anxiety about business models entering into the university, whether it’s students who mostly just play sports, graduate students who spend a lot of time teaching, or faculty who are subject to the market demand of student-consumers, the thing we haven’t considered is the act of learning and the vast majority of students who engage in it. What happens to them when learning looks a lot more like work?

In the humanities, to take the example I’m most familiar with, when students write papers for a class, they are writing papers that resemble good papers but usually are not. A theory of learning-by-simulation governs the practice. (I’ll ignore exams, since we all know those are just compromises with the devil of overpopulation.) Students’ papers, and in most cases the grown-up ones they are supposed to emulate, are socially worthless. They don’t produce anything. They are poor copies, in Plato’s sense. Students may or may not drop by to pick them up at the end of the semester and, if they don’t, we throw them in the trash. Yes, that’s true, but telling.

But what about the growth! say the pundits. How else are students supposed to reach their full, expressive intellectual capacities without these exercises? Yes, it is like exercise. You practice to simulate the real game. It’s not like those kids playing basketball in sixth grade aren’t imitating their grown-up peers—that’s what learning is.

But here’s the difference. What the NCAA ruling shows us is that by the time of higher education, these students are no longer imitating. It is work in its own right and is being compensated accordingly. (Well, at least the coaches and administrators are being compensated.) Why not have the same expectations for our students? And certainly for graduate students, who are way past the sell-by date of learning by imitating? Why keep alive the pretense of simulation when in practice there is all sorts of work going on and potential for even more of it?

Because we’ve been doing everything on a simulator deck for so long, it’s hard to imagine what this might look like. But try to picture more assignments and activities that aim to solve real-world problems, even at the simplest of levels. Wikipedia pages are an obvious first step. Why not teach composition using this public space? Why not make new knowledge in the process of honing your writing skills and creating a suite of pages around an overlooked cultural feature? Along the way, students would also learn what it means to edit, a skill most of them very much need.

It would also teach them how to relate to other people’s writing. As we move more and more from a book-based world to a digital infrastructure, why not teach students how to contribute to that process? I don’t mean have them transcribe OCR output instead of talking in class (too Matrix-y?). But certainly encoding texts and tagging them with more information that can be used by others are activities that require a good deal of thought and critical judgment. At the same time, this information is useful to others. Think of the boon to historians of having more information available in digital archives. Perhaps students might apply their newfound knowledge about the novel (Jane Austen, no doubt) by writing an algorithm that looks for something textually interesting in the myriad pockets and corners of the Internet. There is so much to analyze out there—why not do it rather than pretend to? I’m imagining something like the Maker Movement, though less artisanal in spirit and more analytical and applied.

The detractors will, of course, have seizures about all of this—they will say it overlooks the disinterested nature of learning and undoubtedly marks the end of the life of the mind. My colleagues in the humanities might argue that it smacks of a science system that prioritizes laboratory doing over seminar simulation. (The sciences have been doing this for a long time.) Prepping for cubicle life is no way to think about Aristotle, they would insist; whither the critical thinkers? Once those ivory towers are opened to the world of work, there will be no end to it.

This is where I think the either/ors have it wrong. First, I’m not a big fan of pretending. We can pretend that work hasn’t infiltrated campus, but that doesn’t make it so. For those on the outside looking in, like the judicial system, we are in a serious state of denial. Ostrich-politics, as they say in Germany, isn’t a very effective or attractive technique (not least because of one’s embarrassing posture and the rear-end exposure).

Second, and I think more important, we can do this in ways that are intellectually valuable. We need to get over our fear of the simple—simple things, when brought together over time, add up to much more. Witness Wikipedia. You can imagine any number of smaller-scale forms of student-curated information. In fact, this might be a very useful way to frame a new kind of work-study stance. Instead of apprentices simulating their superiors, students are curators making and sharing knowledge simultaneously at different levels of scale.

I think the most compelling reason is also the least appetizing: in the end we must. Higher education has simply become too expensive to justify itself as a pasture for the world’s elite. Or to put it another way, that’s all it is unless we do something differently. It’s time to get to work.

Frank Gehry and the Enigma of American Monumentalism

Pity Frank Gehry? The great architect of the anti-monumental, so wily and free and unpredictable in form, with so little obvious care for function, got outflanked by the Eisenhower Memorial complex. Pity Frank Gehry? Heroic in his anti-heroic innovations, replete with rejections of the old-world style, he could not overcome the dignified decorum of D.C.’s National Mall. Though he, as much as anybody, seems to understand the modernist enigma that was President Eisenhower, he could not solve the enigma of the American national memory.

In case you hadn’t heard (and it is quite likely you hadn’t), Gehry’s design of the Eisenhower Memorial for the National Mall—unanimously selected by the Eisenhower Memorial Commission in 2010—was rejected in April by the National Capital Planning Commission, and this after Congress refused to fund the project in January.

Since the concept was first presented, Gehry’s memorial design has been the target of fiery criticism by an assortment of people, from columnists to politicians to architects to historians to members of the Eisenhower family. The objections have been manifold, but typically have revolved around three issues: the design overplays Eisenhower’s childhood roots in Kansas relative to his wartime heroism; its scale and size are too grandiose; and its style is “modernist.”


Gehry’s design for the Eisenhower Memorial was selected by the Eisenhower Memorial Commission in 2010. (Credit: Eisenhower Memorial Commission)

Susan Eisenhower, the former president’s granddaughter, led the charge against the focus on Kansas from the early days of the design’s public release. “Memorials in Washington speak for the nation as a whole,” she was quoted as saying. “The nation is not grateful because he grew up in rural America. He defeated Nazi Germany. He led us through tumultuous times in war and peace.” Of course, Kansans would differ!

Then there’s the size issue. The New Yorker’s Jeffrey Frank wrote:

From the start, something has been a little off with this undertaking, beginning with its preposterous scale. Dwight D. Eisenhower was certainly not a modest man; he had his share of vanities. Even late in life, he was very much the incarnation of a five-star general. But he was not someone who showed off. He didn’t swan around with his medals when he was a soldier, and, as President, he was willing to let others take the credit (as well as the blame) for Administration actions.

That Gehry’s design would require a bloated budget only magnified the complaint. As Frank pointed out, again in The New Yorker, “The revised design has more man than boy, but the memorial, which so far has cost more than sixty-two million dollars—a sum that would have appalled the fiscally austere Eisenhower—has still produced little more than acrimony.”

With regard to its “modernist” style, Roger Lewis, professor emeritus of architecture at the University of Maryland, wrote in The Washington Post:

Among the most strident complainers have been those condemning the aesthetic style of the memorial. Generally disliking architectural modernism, they strongly prefer a classically inspired memorial design. These critics argue that civic structures employing a modernist or avant-garde vocabulary just don’t belong in the nation’s capital, where so many classically derivative government edifices, museums and monuments have been built. To them, selecting Gehry was a colossal error.

The protests against Gehry’s style grew so strident as to produce a full-fledged “Toward a New Eisenhower Memorial” campaign under the auspices of the National Civic Art Society, a group advocating “a traditional artistic counterculture” to oppose “a postmodern, elitist culture that has reduced its works of ‘art’ to a dependence on rarified discourse incomprehensible to ordinary people.” The group has, among other things, come near to conspiratorial sensationalism, suggesting the memorial commission fixed the design competition, and offering “rarely seen” and “shocking” photos of the “tapestry” Gehry would include in his design. Gehry, it is clear from the group’s website, is a threat to the nation: on it, they pit “Frank Gehry in His Own Words” (e.g. “I’m confused as to what’s ugly and what’s pretty”) against “Eisenhower in His Own Words” (e.g. “What has happened to our concept of beauty and decency and morality?”).

The “Toward a New Eisenhower Memorial” campaign, whose motto is “We’d like what Ike would’ve liked,” has held a competition for an alternative design, offering neoclassical columns, arches, and even an obelisk to properly memorialize the late president. Unfortunately, they would not grant me permission to show one of the countermemorial designs in this post. Justin Shubow, the president of the organization, told me that the National Civic Art Society owns exclusive rights to the image I wanted to post (a neoclassical arch with “Eisenhower” inscribed across the top), and that they would not permit me to display it on The Infernal Machine. He further explained that the competition was run on a small budget and that many of the entries were by students, seemingly to explain the less-than-stellar quality of the designs. To see some of the countermemorial designs, including the one I wanted to show, you can go here, here, and here.

Oh boy, Gehry (and more than one of his employees) must be thinking. And rightly so. For regardless of his taste, Eisenhower was without doubt the first “modernist” U.S. president, the architecton of missiles, missile silos, massive interstate systems, efficient underground backyard nuclear bomb shelters, consumer capitalism, space monkeys, and so on. There is, even today, a hollowed-out mountain out West, built in the 1950s with great effort and at great expense, in which sits, at the end of a labyrinth of tunnels, the “Eisenhower bedroom.” It’s one of many places the president could sleep in relative peace in case of a nuclear war. Now that’s what I would call preposterous scale! That, we might even say, is modernism!

At the same time, in the spirit of modernist enigma, Eisenhower more than any other president in modern history preached the incapacity of matter and form to adequately symbolize ideals. “No monument of stone,” General Eisenhower declared after World War II regarding the soldiers who had died, “no memorial of whatever magnitude could so well express our respect and veneration for their sacrifice as would perpetuation of the spirit of comradeship in which they died.” As president, Eisenhower was guided by the idea that America and the rest of the world needed to get beyond old European ways, which had produced disaster after disaster, and enter into a new age of spirit.

This spirit, moreover, was the spirit of the future, not the past. As Eisenhower declared of the United States at the Metropolitan Museum of Art in 1946, “Now we enter on an era of widened opportunity for physical and spiritual development, united in a determination to establish and maintain a peace in which the creative and expressive instincts of our people may flourish.” Two years later, in 1948, the abstract painter Barnett Newman sounded the same theme in “The Sublime is Now”: “We are reasserting man’s natural desire for the exalted, for a concern with our relationship to the absolute emotions. We do not need the obsolete props of an outmoded and antiquated legend. . . We are freeing ourselves of the impediments of memory, association, nostalgia, legend, myth, or what have you, that have been the devices of Western European painting.”

To be sure, Eisenhower more than once expressed his distaste for, and his inability to understand, modernist art. But it was Eisenhower’s C.I.A. that propagated modernist art across the globe as a distinctly liberal and American art form, and the president’s basic ideas about America were a kind of optimistic version of avant-garde ideas: productivity, spirit, opportunity, freedom, faith . . . these were the watchwords of the Eisenhower presidency. “Tradition” was not.

But what of General Eisenhower, the old-style hero of D-Day? When Eisenhower decided to run for president, he put his general’s uniform in the closet and assumed, instead, the mantle of the “ordinary” American, the man from Abilene, the main subject of Gehry’s original design. Hyper-aware of the disastrous militarism of Europe, President Eisenhower constantly tried to deflect attention from his own heroic persona (by purposefully fumbling through press conferences and making regular references to golf, among other things). Instead, in an ongoing polemic against FDR and other “iconic” leaders (read, Hitler!), he tried to direct national attention away from the monumental achievements of great men like himself and toward the greatness of American ideals—ideals that, like the spirit of the sacrifice of American soldiers in World War II, could not be adequately represented by monumental leaders or colossal obelisks. That missiles remain one of the great colossal leftovers of the Eisenhower era, and that Eisenhower remained, even after putting his general’s uniform in the closet, the most military-minded of all twentieth-century American presidents (helping hold the world hostage to a nuclear arms race, fighting covert wars through the C.I.A., and building the military-industrial complex that he would warn against in his famous Farewell Address)—well, this is the modernist enigma that was Eisenhower.

It’s hard to memorialize an enigma. That’s what Frank Gehry has been trying to do. It’s the sort of challenge artists like Gehry love to tackle; it’s a task well suited to a (post)modernist. But Gehry has run up against an even more powerful enigma, that of American national monumentalism, which has repeatedly returned to “traditional” neoclassical forms of old-world Europe to give expression to the sublime spirit of the new world. So go ahead and pity Frank Gehry.

#failedacademic: the New Public Intellectual?

Anne Helen Petersen recently left Whitman College, where she taught film and media studies, for Buzzfeed. One of the positive, if unintended, consequences of the dismal academic job market, she explains, is the emergence of a new generation of public intellectuals:

“The collapse of the PhD market, combined with the rise of digital publishing, has ironically yielded an exquisite, flourishing community of public intellectuals—people who write for places like The New Yorker and The Atlantic, sure, but also those who write for places like Los Angeles Review of Books, The New Inquiry, n+1, Avidly, and, of course, The Awl and The Hairpin. As more and more people with PhD behind their names find themselves in situations similar to mine, we’ve been forced to radically reconsider what we thought “teaching” and “dialogue” looks like.”

Petersen’s move to BuzzFeed comes just as @Neinquarterly, also known as Eric Jarosinski, prepares to leave his tenure-track position at the University of Pennsylvania to tweet full time, and @pankisseskafka, Rebecca Schuman, settles into her writing gig at Slate after telling academia to kiss off. To judge by their Twitter followers, over 60,000 and 4,000 respectively, Jarosinski and Schuman seem to have found more readers than their academic prose ever would have reached. And they both write about culture, the academy, and all things intellectual. So, is Petersen right? Has the confluence of a horrible academic job market for humanities PhDs and the proliferation of new media outlets helped create a new class of public intellectuals?

A number of folks who don’t have to tweet for a living sure hope so. As The Infernal Machine noted a few weeks ago, Nicholas Kristof of The New York Times lambasted scholars for what he saw as their failure to engage the broader public. Where, he wondered, had all the public intellectuals gone? But where were they to begin with?

The Oxford English Dictionary‘s earliest example of “public intellectual” is from a 1967 New York Times article. A quick look at the Google Ngram Viewer shows that the term didn’t take off until the 1960s, and that its sharpest increase came in the 1990s. This is all back-of-the-envelope thinking, but it seems safe to say that “public intellectual” is a rather recent concept. Public intellectuals are celebrity thinkers, people paid to opine out loud and in public. Whether as an Adorno avatar or a truth-telling former academic, they craft a public persona that will make them visible.

But why would anyone listen to a public intellectual? As writers, academics, and intellectuals of all sorts clamor for visibility and attention, how is this new class of public intellectuals to be heard above the roar of a digital deluge of tweets, blogs, and status updates? The answer, in part, is authority. Schuman’s giddy revelations of the academy’s hypocrisy and ineptitude and Jarosinski’s sardonic denunciations of university life have weight because of the three letters behind their names. However unmoored from the university they currently are, Schuman and Jarosinski rely on their past professorial lives not just for content but for legitimacy. They might excitedly predict the collapse of the university, but they depend on its shadow of authority to make a living. They deal in the vestiges of academic authority and the glow of its prestige. They don’t write as #failedwriters but as #failedacademics. Their celebrity wobbles atop the uncertain future of the university.

We are living through an upheaval in epistemic authority, a moment of uncertainty and change concerning the technologies and institutions that have traditionally generated, transmitted, and evaluated knowledge. What legitimates one form of knowledge over another? Which sources of knowledge are to be trusted? Which not? What practices, habits, techniques, technologies, and institutions render knowledge authoritative or worthy?

For the past 150 years, the modern research university has been the seat of epistemic authority, the embodiment of scientific knowledge and the culture of science. Since its inception in Germany in the early nineteenth century, and its reinvention in America later that same century, the research university has been the central institution of knowledge in the West. Today the university finds itself confronted by the challenge of technological change. The saturation of everyday life with digital technologies, from Wikipedia to Google PageRank, is changing the ways in which humans create, store, distribute, and value knowledge in the twenty-first century. What constitutes authoritative or legitimate knowledge today?

The university has survived and sustained its practices, virtues, and values because it has been a community embedded in institutional structures. And this is precisely what the new class of public intellectuals that Petersen anticipates seems to lack thus far. It may be, as Corey Robin puts it, that the economics and new technologies that make blogs, niche magazines, and Twitter celebrities possible “also make them unsustainable.”

Many of these outlets rely on the volunteer or nearly free labor of writers, grad students, and middle-aged professors like me. The former live cheaply and pay their rent with a precarious passel of odd jobs, fellowships, and university teaching; the latter have tenure.

The university may well be antiquated and, in some ways, hypocritical, but at its best it is a bulwark against the pressures, market and otherwise, that celebrity tweeters, #failedintellectuals, and smart writers will certainly face.