Tag Archives: humanities

Quit Lit: Do the Humanities Need the University?


There’s a new genre taking shape on blogs, Twitter, and even in the pages of The London Review of Books: Quit Lit. Just last week, Marina Warner, a creative writing professor and member of the Man Booker Prize committee, explained her decision to resign her position at the University of Essex. In “Why I Quit,” she describes the bureaucratic disciplines of England’s new Research Assessment Exercises, which tabulate and calculate academic labor with the efficiency and mindlessness usually reserved for an assembly plant (and a low-tech one at that).

In a scene she must have embellished by channeling Kafka U., Warner recounts a meeting with her new dean:

A Tariff of Expectations would be imposed across the university, with 17 targets to be met, and success in doing so assessed twice a year. I received mine from the executive dean for humanities. (I met her only once. She was appointed last year, a young lawyer specialising in housing. When I tried to talk to her about the history of the university, its hopes, its “radical innovation,” she didn’t want to know. I told her why I admired the place, why I felt in tune with Essex and its founding ideas. “That is all changing now,” she said quickly. “That is over.”) My “workload allocation,” which she would “instruct” my head of department to implement, was impossible to reconcile with the commitments which I had been encouraged—urged—to accept.

Confused but, more deeply, defeated by this new regime, Warner resigned. But she continued her work for the Man Booker Prize committee, which, as it turns out, has proven rather clarifying.

Among the scores of novels I am reading for the Man Booker International are many Chinese novels, and the world of Chinese communist corporatism, as ferociously depicted by their authors, keeps reminding me of higher education here, where enforcers rush to carry out the latest orders from their chiefs in an ecstasy of obedience to ideological principles which they do not seem to have examined, let alone discussed with the people they order to follow them, whom they cashier when they won’t knuckle under.

As a genre, Quit Lit has a few organizing features. Its form tends to be personal and aggrieved. The university, like those vague but all-powerful institutions in Kafka’s texts, has been overtaken by an alien, usually bureaucratic-statist-inhumane power. And its content tends to be not just about the decline of the university but also about the impending demise of the humanities. By turning universities into vocational schools, we are robbing our children of humanistic forms of thought and the good that ensues. (If scientists wrote prose like humanists, maybe they would be writing about the end of the university and the collapse of science. NPR had a go at Quit Lit this past week in their series on the dramatic cuts in basic science funding and the effects they are having on future generations of scientists.)

As with all literary genres, Quit Lit has its predecessors. Before there were Rebecca Schuman and NeinQuarterly’s Eric Jarosinski, there was another German scholar experimenting in the genre, Friedrich Nietzsche. In 1872, just three years after he landed his first, and only, professorship at the University of Basel without even having finished his dissertation, Nietzsche delivered a series of lectures, On the Future of Our Educational Institutions, in the city museum. Before crowds of more than 300 people, Nietzsche staged a dialogue on the future of German universities and culture between two young students and a cantankerous old philosopher and his slow-witted but earnest assistant.

The grousing philosopher lamented the decline of universities into state-sponsored factories that produced pliant citizens and mindless, “castrated” scholars who cared not a bit for life. By the end of the lectures, it’s difficult to say whether Nietzsche thought there was a future at all for German universities. Nietzsche lasted a few more years in his position, resigning only when ill health forced him to. But he left an oeuvre that looked to the university and saw little but ruin.

As Nietzsche was writing, parts of the German university might not have been in decay, but they were in decline, the humanities in particular. Between 1841 and 1881, enrollment in philosophy, philology, and history within “philosophy faculties,” which comprised the core liberal arts fields, declined from 86.4 percent to 62.9 percent, whereas in mathematics and the natural sciences enrollments increased from 13.6 to 37.1 percent of all students matriculating at German universities. The mood among humanists was often such that they sounded quite a bit like the embattled literature professors of today. In academia, crisis is generally a matter of perception, and even in what now seems like a “golden age” for humanists, there was, in fact, a seismic shift for the humanities.

More recent forms of Quit Lit tend to lack a key feature of Nietzsche’s model, however. Nietzsche never conflated the humanities or humanistic inquiry with the university. For him, humanistic inquiry—and Nietzsche was deeply humanistic as his lifelong commitment to philology attests—transcended the institutional and historically particular shape of universities, which he saw as little more than extensions of a Prussian bureaucratic machine.

In what increasingly seems like a related genre, contemporary academics and intellectuals of all sorts have ostensibly been defending the humanities. But more often than not they actually defend certain forms of scholarship as they have come to be institutionalized in largely twentieth-century American research universities. Geoffrey Galt Harpham recently produced the most egregious but well-argued example of this tendency with The Humanities and the Dream of America. His basic thesis is that the humanities as they are now practiced were an invention of post–World War II American research universities. Similarly, Peter Brooks’s edited collection The Humanities and Public Life, with its focus on disciplines, scholarship, and the imperatives of the university, inadvertently echoes the same tendency. Both books conflate the humanities with their departmental and institutional shapes in universities.

In the measured “yes but” prose of academic speak, Patrícia Vieira gives this spirit of conflation ethical shape in a review entitled “What are the Humanities For?”:

Debates about the “future of the humanities” frequently revolve around the suspicion that the humanities might not have one. Yet despite the direness of this anxiety—an anxiety especially personal for every academic worried about professional choices or mortgage payments—conversations on the topic are often dull, long-faced affairs. Every professor has sat through one or another of these depressing discussions. The conversation proceeds according to a familiar set of pieces: there are passionate apologias of work in philosophy, literature, history, and the arts; veiled criticism of the anti-intellectualism of higher education administrators and society at large; and vague pledges to do more interdisciplinary research and extend a fraternal hand to the social and natural sciences, who remain largely unperturbed by this plight. The whole thing wraps up with the reassuring conviction that, if the humanities go down, they will do so in style (we study the arts, after all), and that truth is on our side, all folded in a fair dosage of indulgent self-pity.

Vieira can’t imagine the future of the humanities beyond the anxieties of professors and the failures of university administrators. All she can muster is a few gentle and inveterately academic admonitions for her authors:

Brooks’s and [Doris] Sommer’s [The Work of Art in the World: Civic Agency and Public Humanities] books coincide in their desire to persuade those skeptical about the importance of the arts and the humanities of their inherent worth. The volumes set out to prove that these disciplines play a crucial role in public life and that they are vital to contemporary culture. Brooks’s collection often falls short of this goal by sliding into fatalistic rhetoric about the doomed future of humanistic scholarship—the very discourse the book attempts to combat—all while ignoring some of the vibrant new research in the field. In contrast, Sommer is overconfident in the power of the arts to tackle thorny socioeconomic and political problems. Both the despondent and celebratory approaches are symptomatic of the beleaguered state of the field, forced to justify its existence based upon technocratic principles that demand immediate results and fast returns. The humanities are constantly compelled to demonstrate practical results or hopelessly admit to lacking a concrete and immediate function, straitjacketed into foreign modes of valuation lifted from the empirical sciences. Neither a dying set of disciplines nor a panacea for social ills, the humanities remain a central form of human enquiry, in that they shed light on and question the tacit assumptions upon which our societies are based, outline the history of these values, and identify alternatives to the status quo.

Despite her attempts to cast the humanities as a form of “human” inquiry, Vieira is writing about a beleaguered and exhausted profession. There are only professors and their disciplines here. And they both are trapped, as Nietzsche would say, in a “castrated” passive voice: “The humanities are compelled ….” There are no agents in this drama, just put-upon, passive professors.

I am not suggesting that we should give up on universities. Universities, especially modern research universities, have long helped sustain and cultivate the practices and virtues central to the humanities. But just as German universities were becoming international paradigms, emulated from Baltimore to Beijing, Nietzsche made a fateful diagnosis. Those practices and virtues could ossify and wither in the arcane and self-justifying bowels of the modern, bureaucratic university. “Human inquiry,” in contrast, would live on.

We may well benefit from an exercise in imagination. Could the humanities survive the collapse of the university? I think so.

. . . . . . . .



Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”
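To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the kind of intervention the study describes. This is not Facebook’s code, data, or actual ranking logic; it simply shows how posts might be crudely scored for emotional valence and how a filter could withhold a fraction of posts of one valence for users assigned to a given experimental condition.

```python
# Toy illustration of sentiment-based feed filtering (not Facebook's system).
import random

POSITIVE = {"love", "great", "happy", "wonderful"}
NEGATIVE = {"sad", "awful", "angry", "terrible"}

def sentiment(post: str) -> int:
    """Crude lexicon score: +1 positive, -1 negative, 0 neutral."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

def filtered_feed(posts, condition, omit_prob=0.5, seed=42):
    """Return the feed a user in `condition` would see.

    condition is "reduce_positive" or "reduce_negative"; a fraction
    (omit_prob) of posts with the targeted valence is silently withheld.
    """
    rng = random.Random(seed)
    shown = []
    for post in posts:
        score = sentiment(post)
        if condition == "reduce_positive" and score > 0 and rng.random() < omit_prob:
            continue  # withhold some positive posts
        if condition == "reduce_negative" and score < 0 and rng.random() < omit_prob:
            continue  # withhold some negative posts
        shown.append(post)
    return shown

feed = ["What a wonderful day", "Feeling awful about the news", "Lunch was fine"]
print(filtered_feed(feed, "reduce_positive"))
```

The point, as the study itself concedes, is that nothing in the user’s experience signals that such a rule is operating at all.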

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital businesses with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order better to influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Be they the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways, and to ends, of which we are completely unaware and which we have ever less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception: here and here.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight; they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.
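A small sketch may make that toggling concrete. The example below assumes Python’s networkx library and an invented toy network of names drawn from this essay; it first reads the graph “distantly” through aggregate statistics, then zooms back in on a single node and its immediate neighbors.

```python
# Macro and micro views of the same toy network (illustrative only).
import networkx as nx

edges = [
    ("Nietzsche", "Basel"), ("Nietzsche", "philology"),
    ("Dilthey", "Geisteswissenschaften"), ("Dilthey", "positivism"),
    ("philology", "Boeckh"), ("Boeckh", "Geisteswissenschaften"),
]
G = nx.Graph(edges)

# Macro ("distant") view: aggregates in which no individual is visible.
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", round(nx.density(G), 3))
centrality = nx.degree_centrality(G)
print("most central:", max(centrality, key=centrality.get))

# Micro ("close") view: return to an individual node and its context.
node = "philology"
print(node, "connects to:", sorted(G.neighbors(node)))
```

The aggregate numbers and the single node are both true descriptions of the same object; the interpretive work lies in moving between them, which is exactly the movement Weingart describes.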

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable, natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.

. . . . . . . .


Humanities in the Face of Catastrophe

Earlier this month, my colleague Bethany Nowviskie asked a group of digital humanities scholars gathered in Lausanne, Switzerland, to consider their work—its shape, its fate, its ends—within the context of climate change and the possibility of our own extinction as a species. (How’s that for a dinner-time keynote? Bourbon, please.)

The premise of Nowviskie’s talk, “Digital Humanities in the Anthropocene,” was the fact of climate change and the irrevocable consequences that it could have for life on our planet. What would it mean, she asked, to be a humanities scholar in the anthropocene, a geological epoch defined by human impact on the natural world? What would it mean to practice the humanities within a geological scale of time?

Whatever the fate of the “anthropocene” as a term (its existence and even its inception are still being debated among geologists), the scientists, activists, and scholars who invoke it consider human activity and practices as inseparable from nature. Whether they intend to or not, they thereby challenge basic ideas about the human, culture, and agency that have sustained the humanities for centuries.

The very notion of the anthropocene and the suggestion that humans could cause geological-level change undermines, as Dipesh Chakrabarty puts it, the “age-old humanist distinction between natural history and human history.” From Vico’s famous claim that humans could know only what they have created, while nature remains God’s inscrutable work, to Kant’s understanding of cosmopolitan history as purposive action, Western thinkers have long distinguished between human and natural history. But granting humans a geological agency, however aggregative and delayed, undermines this distinction. What becomes of history and culture when it can no longer be thought of as simply human?

For one thing, we can better conceive of our cultural and historical labor as bound to the contingency, finitude, and flux of nature, and, thus, cultivate a more acute awareness of how extinguishable our labor can be. Here’s Nowviskie speaking to her digital humanities colleagues:

Tonight, I’ll ask you to take to heart the notion that, alongside the myriad joyful, playful scholarly, and intellectual concerns that motivate us in the digital humanities—or, rather, resting beneath them all, as a kind of substrate—there lies the seriousness of one core problem. The problem is that of extinction—of multiple extinctions; heart-breaking extinctions; boring, quotidian, barely-noticed extinctions—both the absences that echo through centuries, and the disposable erosions of our lossy everyday. We edit to guess at a poet’s papers, long since burned in the hearth. We scrape through stratigraphic layers of earth to uncover ways of life forgotten, and piece together potsherds to make our theories about them hold water. Some of us model how languages change over time, and train ourselves to read the hands that won’t be written anymore. Others promulgate standards to ward against isolation and loss. With great labor and attention, we migrate complex systems forward. We redesign our websites and our tools—or abandon them, or (more rarely) we consciously archive and shut them down. DHers [digital humanities scholars] peer with microscopes and macroscopes, looking into things we cannot see. And even while we delight in building the shiny and the new—and come to meetings like this to celebrate and share and advance that work—we know that someone, sooner or later, curates bits against our ruins.

Humanities labor represents, continues Nowviskie, a deeply human striving to communicate “across millennia” and “a hope against hope that we will leave material traces,” in stone, manuscript, print, or digital form.

This is where the strivings of humanities scholars and geologists of the anthropocene intersect. They both rummage about nature for traces of the human. And while for some, such ceaseless searching is driven by an indomitable humanism, a confidence in a human spirit that will always live on, others are haunted by the possibility of our extinction and the end, as Nowviskie puts it, “of so much worldly striving.” But those of us who are in search of a fragmented and sundered human are also motivated by this prospect of extinction.

When Nowviskie exhorts us to “dwell with extinction,” she echoes a humanistic disposition that has long been defined by the specter of loss. Since at least the studia humanitatis of the early modern period, scholars have practiced the humanities in anticipation of catastrophe. Theirs, however, was not a crisis of the guild, anxieties about budget cuts and declining humanities enrollments, but a fear of the radical and irreparable loss of the human record.

Many of the great encyclopedic works of the early modern European scholars were defined, as Ann Blair describes them, by the “stock-piling” of textual information intended to create a “treasury of material.” The early modern masterpieces of erudition—such as Conrad Gesner’s Bibliotheca universalis (1545) or Johann H. Alsted’s Encyclopaedia septem tomis distincta (1630)—were motivated not simply by information-lust but by a deeply cultural conception of the ends of scholarship, a desire to protect ancient learning and what humanists considered to be its integrity and authority. These early humanists, writes Blair, “hoped to safeguard the material they collected against a repetition of the traumatic loss of ancient learning of which they were keenly aware.” This loss had rendered Greek and Roman antiquity inaccessible until its gradual recovery in the fifteenth and sixteenth centuries. Humanist scholars saw encyclopedias and related reference works that collected ancient learning as guarantees that knowledge could be quickly reassembled should all books be lost again. They were busy collecting and curating because they feared another catastrophe.

Similarly, Denis Diderot described the eighteenth-century Encyclopédie as insurance against disaster:

The most glorious moment for a work of this sort would be that which might come immediately in the wake of some catastrophe so great as to suspend the progress of science, interrupt the labors of craftsmen, and plunge a portion of our hemisphere into darkness once again. What gratitude would not be lavished by the generation that came after this time of troubles upon those men who had discerned the approach of disaster from afar, who had taken measures to ward off its worst ravages by collecting in a safe place the knowledge of all past ages!

But even as Diderot promised his contributors future glory, he also acknowledged how fleeting the whole endeavor could be. The Encyclopédie, he lamented, would be irrelevant as soon as it was printed. The traces of the human that the “society of gentlemen” had so diligently collected in print would be out of date upon its first printing. Even print could not stop time.

In the nineteenth century, the famous German classicist August Böckh claimed that all this striving to collect and protect the material traces of the human was exemplified in the science of philology, which he defined as the Erkenntnis des Erkannten, or knowledge of what is and has been known. Our current knowledge is only as good as our past knowledge. And working with the fragmented documents of that past gave philologists an acute sense of how fragile our knowledge and history of ourselves was. The philologist’s attention to and care for the material nature of human culture—its embodiment in documents and texts of all sorts and qualities—was cultivated by a consciousness of how fragile it all was. The only thing that bound the species together was a documentary record always under threat.

Today that documentary record is undergoing a transition, unprecedented in its speed and extent, from printed to digital forms. Some scholars, archivists, and librarians warn that this move could prove catastrophic. Things of our past may be lost and forgotten. But if the long history of the humanities teaches us anything, it is that humanistic work has always been practiced in the face of catastrophe.

In our digital age, the vocation and the disposition of the humanities remains intact. “The work of the humanist scholar,” writes Jerome McGann, “is still to preserve, to monitor, to investigate, and to augment our cultural life and inheritance.” And it’s this disposition that thinking about the humanities in the anthropocene may help us recover.

 

Photograph: The dove and the raven released by Noah, with drowning people and animals in the water beneath the ark. From the Holkham Bible Picture Book, Add MS 47682, England: second quarter of the 14th century, Parchment codex, The British Library, London.

. . . . . . . .


You must change your graduate program!

Of all the hyperbole surrounding the fate of the humanities today, the problems facing graduate studies seem the least exaggerated. The number of PhDs has far outpaced the number of available full-time jobs. Financial support is both inadequate and unevenly distributed, requiring students to defer earnings over long periods of time, assume unneeded debt, and compete against differently funded peers.

Overspecialization has made knowledge transfer between the disciplines, not to mention between the academy and the rest of the workforce, increasingly difficult. The model of mentorship that largely guides student progress, now centuries old, seems increasingly out of touch with a culture of non-academic work, so that students are ill-prepared to leave the academic track. Time-to-degree has not only not sped up, but increasingly it also correlates with lower success rates—the longer students stay in the PhD track, the lower their chances of full-time employment.

Photograph: Philosophy seminar table at the University of Chicago.

The hegemony of the seminar as the only model of learning is also at odds with much recent thinking about learning and intellectual development. Undoubtedly, numerous teachers and students alike would describe at least some, if not many, of their seminar experiences as profoundly uninspiring. Add to that the way we largely operate within a developmental model premised on major phase-changes that dot otherwise stable, and largely flat, plateaus. The long gaps between exams and their sink-or-swim nature do little to promote the long-term, incremental development of students as thinkers, writers, or teachers. We think more in terms of a nineteenth-century-inspired model of botanical metamorphosis, with its inscrutable internal transformations, than in terms of incremental, cumulative performances.

There are also biopolitical aspects to the crisis that have recently been raised: the most intense periods of work and the most intense periods of insecurity overlap precisely with the normal timeframe of human fertility. The PhD model, like Saturn, seems bent on consuming its own offspring.

What is there to like about this scenario?

Luckily, the MLA has issued a report about graduate education. “We are faced with an unsustainable reality,” the report states. Indeed. And then come the all-too-familiar platitudes. Maintain excellence. More teaching. Shorter time-frames. Innovate. Better connection with non-academic jobs. Advocate for more tenure-track positions.

As the clichés pile up, so do the contradictions. Get out faster, but spend more time teaching. Keep up those rigorous standards of specialization, but do it with haste (and be interdisciplinary about it). No teaching positions available? Look for another kind of job! Persuade the university to create more tenure-track positions—whom do I call for that one?

Will the MLA report lead to changes? It’s doubtful. Sure, we’ll crack down on time to degree without really changing requirements. We’ll spice up course offerings and maybe throw in an independent project or two (digital portfolios!). We’ll scratch our heads and say the PhD would be a great fit for a job in consulting, or in a museum, maybe even Google—and then do absolutely nothing. Five or ten years from now, we’ll talk about a crisis in the humanities, the shrinking of the field, and notice that, once again, there seem to be fewer of us hanging around the faculty club.

Nothing will change because we don’t have to. As long as there are too many graduate students, there is no problem for faculty. And no matter what we say, what we do is always the same thing. Try approaching your department and saying you need to change the scope, scale, content, and medium of the dissertation. You’re in for a fun conversation.

Try implementing a mandatory time-limit with yearly progress reports and consequences for failure and you’ll be barraged with so many exceptions your Agamben will start to hurt. What do employers want from PhDs—good writing skills, general knowledge, analytical capability, facility with numbers, strong work habits, works well with others? Sorry, can’t help you. None of our students has those skills since our program doesn’t emphasize them (but we have a writing center!).

Nothing will change because we don’t have to. We’re so conservative at heart we’d rather die out with our beliefs intact than do anything that might actually better serve the student population. We’ll continue to point to the exceptions without realizing how much they still look like something we didn’t want to change.

The PhD did something once, or rather it did one thing and it did it reasonably well. It still does that one thing, which is now, quantitatively speaking, vastly unnecessary. Some have suggested scaling up to meet the sciences on their own ground (and here at The Infernal Machine, too). I would suggest that we scale down to meet the needs of the world at large. More modular, more flexible, more creative, more varied, more timely, more general, more collaborative, and more relevant. Until we have any proof that our programs are feeders for jobs outside the academy, we’re just failing by another name.

We can either change in substantive ways or pretend to do something else while actually continuing to do the same things we’ve always done. The MLA report looks a lot like the latter and no doubt so will most of the responses to it.

I’m looking forward to next year’s report. How many ways can you play the same tune?

 

 

. . . . . . . .


The Humanities in Full: Polemics Against the Two-Culture Fallacy

The New Republic does not like the digital humanities. Following Leon Wieseltier’s earlier diatribes, Adam Kirsch recently warned that the digital humanities and their “technology” were taking over English departments. Kirsch posed some reasonable questions: Are the digital humanities a form of technological solutionism? No, notwithstanding the occasionally utopian strand. Are the digital humanities “post-verbal”? With all their graphs, charts, and network visualizations, do they aspire to a discourse of mere pictures and objects? No and no. With all their generously funded projects, are they embracing the “market language of productivity to create yet another menacing metric for the humanities?” A good question that deserves thoughtful responses (here and here).

But Kirsch’s essay isn’t really about the digital humanities. It’s about the humanities more broadly and Kirsch’s truncated and ahistorical vision of what they ought to be. The problem with the digital humanities, he writes, is that they go against the “nature of humanistic work.” And their errant ways

derive from a false analogy between the humanities and the sciences. Humanistic thinking does not proceed by experiments that yield results; it is a matter of mental experiences, provoked by works of art and history, that expand the range of one’s understanding and sympathy. It makes no sense to accelerate the work of thinking by delegating it to a computer when it is precisely the experience of thought that constitutes the substance of a humanistic education. The humanities cannot take place in seconds. This is why the best humanistic scholarship is creative, more akin to poetry and fiction than to chemistry or physics: it draws not just on a body of knowledge, though knowledge is indispensable, but on a scholar’s imagination and sense of reality. Of course this work cannot be done in isolation, any more than a poem can be written in a private language. But just as writing a poem with a computer is no easier than writing one with a pen, so no computer can take on the human part of humanistic work, which is to feel and to think one’s way into different times, places, and minds.

Kirsch pits the technologically unadorned humanities that produce subjective experiences against the technology-dependent sciences that produce mere facts. This simple, and false, dichotomy manages to slight both at once, and to obscure more than it clarifies.

In fact, this humanities-sciences dichotomy is relatively recent. And as it turns out, the term humanities is itself relatively recent, seldom used before the nineteenth century. The OED lists its first use as 1855, in a reference to music as one of “the humanities.” Google’s Ngram keyword search shows a marked increase in the prevalence of the term around 1840, just as the natural and physical sciences were becoming ascendant in universities.
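For readers who want to run that kind of keyword check themselves, here is a rough sketch that assumes the pandas and matplotlib libraries and a locally downloaded slice of the public Google Books Ngram dataset (tab-separated columns: ngram, year, match_count, volume_count). The file name, date range, and plotting choices are illustrative assumptions, not a fixed API.

```python
# Plot raw yearly occurrences of "humanities" from a downloaded Ngram slice.
import pandas as pd
import matplotlib.pyplot as plt

cols = ["ngram", "year", "match_count", "volume_count"]
df = pd.read_csv("googlebooks-eng-1gram-humanities.tsv", sep="\t", names=cols)

counts = (
    df[df["ngram"] == "humanities"]
    .groupby("year")["match_count"]
    .sum()
    .loc[1800:1900]  # the century in question
)

counts.plot(title="Occurrences of 'humanities' in Google Books, 1800-1900")
plt.xlabel("year")
plt.ylabel("raw match count")
plt.show()
```

A more careful version would normalize each year’s count against the total words published that year, which is what the Ngram viewer itself does, before drawing conclusions about the mid-nineteenth-century rise.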

Today’s distinctions between the digital humanities and the humanities proper have their longer history in these nineteenth-century divisions. For well over a century now, one of the dominant notions of the humanities, in the academy at least, has been one that casts them squarely against the natural sciences. And this conception of the humanities, which has since gained wider influence in the culture at large, was first articulated by the late nineteenth-century German scholar Wilhelm Dilthey. Dilthey distinguished the human sciences [Geisteswissenschaften] from the “natural” sciences [Naturwissenschaften].

The Geisteswissenschaften, claimed Dilthey, studied the inner workings of mental facts – that is, the internal processes of human experience (Kirsch’s beloved mental experiences). For this internal realm, the freedom and autonomy of the subject were central and, thus, the primary objects of inquiry.

The natural sciences, by contrast, studied material processes governed by natural laws and the mechanisms of cause and effect. For Dilthey, humanities scholars don’t count, measure, or seek patterns; they seek to understand what motivates canonical historical figures who produce works of art and other artifacts of culture (Kirsch’s struggle to understand, not just explain, Auerbach’s Mimesis, for example). The human sciences understand phenomena from within, the natural sciences explain them from without.

Dilthey’s efforts to distinguish sharply between the two forms of inquiry were in large part meant to resist the rising influence of the natural sciences in nineteenth-century German universities and, above all, the influence of positivism: the notion that we can have knowledge only of phenomena (the only possible knowledge is of an endless series of facts). Like Dilthey’s, Kirsch’s embrace of a very particular and limited notion of the humanities is reactionary. But whereas Dilthey feared the pervasive and corrosive effects of positivism, Kirsch fears the utopian delusions of technological solutionism.

These simple oppositions—the humanities versus the sciences—confuse more than they enlighten and, in a timeless irony, produce deeply anti-humanistic polemics. They also ignore the historical fact that the humanities, and humanistic inquiry more broadly, have been concerned not only with particular human artifacts (one painting, one poem, one piece of music) but also, as the Dutch scholar Rens Bod recently put it, with the patterns and principles that help us make sense of and enjoy those artifacts.

The kinds of things that humanists actually do when they engage products of human creativity, their practices, have always been bound up with efforts to make connections and generalize. From Lorenzo Valla’s careful and methodical debunking of the Donatio Constantini (On the Donation of Constantine) in 1440 to Erich Auerbach’s Mimesis in 1946, humanists of all kinds have relied on particular notions of method, evidence, verification, and argument, just as the natural and physical sciences have relied on intuition and creativity.

We need a history and vision of the humanities capacious enough to see the humanities not as a particular method or set of disciplines but as a disposition, a way of engaging the world. What follows in subsequent posts are short, polemical (in the best sense, I hope) steps toward such a history.

 

. . . . . . . .
