Monthly Archives: July 2014

Deans Care About Books

“Doesn’t Matt care about publishing books anymore?” That’s what an editor of a well-established humanities journal recently asked one of my press colleagues. The editor had just returned from a meeting with me, where she had expressed interest in publishing “curated” collections of articles from back issues of the journal. It struck me as a wonderful idea.

“Why make these print books?” I asked. “What do you mean?” she replied. I explained that the articles already existed in digital form in Project MUSE and could easily be collected there on the same page. Moreover, anyone working for a university press knows that most people read journal articles online, not in print. (Many institutions, in fact, only subscribe to the digital editions of scholarly journals.) Why not simply bundle the digital versions of the articles and publish them together online?

My explanation didn’t relieve her puzzlement. She explained that the editor of the collections and the authors of the articles wouldn’t get promotion and tenure credit if the collections were published only online: “Deans care about books.”

This exchange reveals a troubling and unsustainable view, shared by scholars and deans, of the function of university presses in the world of scholarship. It has two elements. First, university presses have a responsibility to credentialize scholars. Second, presses discharge this responsibility by publishing scholarship in the form of print books.

For the sake of argument, I will leave the first assumption unquestioned. (For a compelling argument against the practice of deans relying on university presses to evaluate scholars, see Lindsay Waters’s Enemies of Promise: Publishing, Perishing, and the Eclipse of Scholarship.) It’s the second that concerns me here. University presses largely accept their credentialing function in the scholarly world. The economic realities of publishing scholarship, however, lead most of them to reject the idea that print is the proper, much less the only, format for all scholarship that makes it through the review process. By clinging to this second idea—the idea that humanities and humanistic social science scholarship must take the form of a print book—scholars and deans threaten the future of university presses and erode their ability to evaluate and distribute high-quality scholarship.

The only sure ticket to tenure and promotion is having your scholarship published by a university press. The reason is that having a university press book on one’s CV serves a signaling function. It tells deans and fellow scholars that your work withstood the rigors of peer review and the evaluation by press editors and faculty boards. This, in turn, signals that your work is good, that, as a scholar, you do your job well. There’s an alignment here between the interests of university presses and the interests of university deans. The presses want to publish high-quality scholarship, and the deans want to employ and promote scholars who are good at their jobs. The process required to produce the first provides evidence for the second.

These interests align, however, only up to a point. The decisions that track the scholarly quality of a project—those involved in discharging the credentialing responsibility—are not the only decisions a press makes when it comes to publishing a project. The work of determining the quality of scholarship and signaling this quality is done when a press issues a contract for a project and stands behind the decision by publishing it. In between those two moments, a press must make many production decisions—about the size of a print run, whether a book should be sold print-on-demand, whether a book should be cloth or paperback, whether images should be in color—that have nothing to do with the project’s quality. These decisions are responsive to practical matters such as how much it will cost to produce the book, how many copies a press thinks it can sell, who the press thinks the audience is, or even (perhaps especially) how well the press gets along with Jeff Bezos. They’re about selling books, not evaluating them. (For a good illustration of the gap between sales and quality, consider: the press that published Nietzsche for Dummies probably sold a lot of books.)

Chief among production decisions is the one about publication format: whether to publish a project in print (cloth or paperback) or digitally (or, more commonly, both). To see the significance of this decision, consider what would have happened had the editor followed the route I suggested and published her collections online. It would involve asking the digital publisher to put all of the (already copy edited, designed, typeset) selected articles on the same webpage with a new introduction by the editor. (Obviously it would be a bit more complicated than that, but you get my point.)

Compare this to what would happen if they were published in print. The articles would have to be downloaded and printed out. Then, production staff would scan the original articles to turn them into a manuscript. Copy editors, designers, printers, distributors, and all of the other people involved in producing these collections would then do their parts. All of this to make the content available in a format nobody seems to want so that somebody can put another line on her CV that will be credited by her dean. (Not all decisions about how to publish a work are like this one, of course. This case is unique in that the material already existed digitally. But the point is that, even in such a case, the push for print remains.)

In case it needs spelling out, the enormous difference between these two paths to publication corresponds to a very different price tag for the press. This is why, as I said at the outset, by clinging to the outdated notion that scholarship must be published in print, deans and scholars hurt university presses. They tie the legitimate responsibility of determining and distributing quality scholarship to a costly, inefficient, inflexible, and unsustainable publishing model. By insisting that print is a necessary condition for scholarly quality, deans and scholars make it more difficult for university presses to stay in business, thereby making it more difficult for them to publish print books! At the same time, scholars insist on having their own work published in print while they increasingly engage the work of others online. And deans demand that scholars publish print books while not giving their libraries enough funds to buy them. So they insist on print and undermine the demand for it.

So, to answer the editor’s question: I do care about publishing books. Like all scholars and publishers, I love print books and deeply appreciate them as objects (as a quick glance at my office would confirm). I am not advocating for the end of print. But I also care about the future of scholarly publishing. In order to help secure a healthy future for both university presses and print books, scholars and deans must embrace the digital as a legitimate and credit-worthy format for quality scholarship.

Matthew McAdam is Humanities Editor at Johns Hopkins University Press.


Twilight of an Idol: College, Purpose, and Zombies

In “Don’t Send Your Kid to the Ivy League,” William Deresiewicz lambasts a pitiful American elite education system that “manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity and a stunted sense of purpose.” The entire system of elite education, he argues, reproduces an American upper-middle class and its distorted values, aspirations, and entitlement. Stanford and Swarthmore “are turning our kids into zombies.”

With his recent article in the New Republic and his forthcoming book (Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life, Free Press), Deresiewicz is one of a growing number of pundits lamenting the loss of an American institution: college. “Is the only purpose of an education,” sneers Deresiewicz, “to enable you to get a job? What, in short, is college for?”

Andrew Delbanco recently asked the same question in College: What It Was, Is, and Should Be. So, too, did Mark Edmundson in Why Teach? As journalists, business professors, and university trustees chant “disruption,” college professors and their public-intellectual kin seem intent on defending their institutions and vocations with appeals to a collegiate ideal. In response to declining state support for higher education and increasing public skepticism about its economic value, college is making a return. But what are Deresiewicz, Delbanco, and Edmundson, not to mention countless faculty committees who are busy reimagining undergraduate education, talking about when they conjure up the “college experience”?

Princeton University’s Firestone Library and statue of John Witherspoon, sixth president and signer of the Declaration of Independence; Emile Wamsteker/Bloomberg via Getty Images

They are, I think, mostly talking about self-transformation. College may teach you how to think and even give you some skills, but ultimately, as Deresiewicz puts it, college helps you build a self. College is a four-year respite, before an impending life of professionalism, for self-discovery. “Students are pressured and programmed, trained to live from task to task, relentlessly rehearsed and tested until winners are culled from the rest,” writes Delbanco, and so they scarcely have time to practice the art of being in college, the art of “loafing.” Citing Walt Whitman, Delbanco describes college as a time when “I lean and loaf at my ease observing a spear of summer grass.” Similarly, Mark Edmundson writes of college as a “mythic force,” a “rite of passage.” In Why Teach?, Edmundson sees college, the English major, and teaching as opportunities for self-transformation and “soul making.” And this is an experience that Edmundson, Delbanco, and Deresiewicz want to democratize and make available to as many students as possible.

But University of Chicago undergraduate and Slate intern Osita Nwanevu isn’t buying it. In a response to Deresiewicz’s article, Nwanevu dismisses the entire notion that college is a singular opportunity for self-discovery.

Every ambitious student who believes that college is their opportunity to shape themselves will do whatever it takes to get into the very best, most exclusive school they can. When their experiences underwhelm, as many necessarily will, they will indeed leave college ‘anxious, timid, and lost,’ believing that they’ve missed out on a chance at intellectual development. Deresiewicz has simply traded careerism for another exalted goal, with similar results. [. . .] To believe that a college—Ivy or otherwise—can confer intellectual benefits in four years that you won’t be able to attain at some point over the course of the next 60 is to believe in magic.

What’s so special about four years of college? How did college come to be the defining experience of the American upper middle class? How did Harvard and Amherst, not to mention the liberal arts degree in English, come by their monopoly on an authentic self? Did Walt Whitman even go to college?

After the recent spate of books, articles, and faculty reports extolling and idealizing the transformative potential of a college experience, Nwanevu’s incredulity is refreshing. College has come to bear an impossible burden, both individually and socially. Its most confident advocates treat it like a stand-alone ethical resource, capable of funding and guiding the self-transformations of America’s elite. Deresiewicz laments contemporary college students’ lack of desire to do good or find the “higher meaning” of college buzzwords like leadership or service. And faculty, he claims, don’t have time for such meaningful pursuits; they’ve got research to do.

Deresiewicz is ultimately concerned about the ethical failures of American colleges. But he never mentions the particular ethical resources or traditions that make such self-transformation possible. And he never considers whether a transformation of the self is sufficient. Can such a collegiate celebration of the self resist the fragmenting and stultifying effects of the upper-middle-class American culture he decries—its consumerism, its anti-democratic tendencies, its solipsism? For Deresiewicz, college is less an institution devoted to a common end than it is a self-help retreat, replete with poetry classes and career services.

This is a common problem for the recent defenders of college. They invoke a collegiate ideal without considering the normative and ethical resources to which it used to be tied or the larger social ends that such an education was intended to serve. Perhaps inadvertently, Deresiewicz acknowledges this in a line of candor: “Religious colleges—even obscure, regional schools that no one has ever heard of on the coasts—often do a much better job” transforming selves. Until the end of the nineteenth century, American colleges such as Princeton, Yale, and Harvard continued to train Protestant clergy. They were explicitly religious institutions organized around particular ethical traditions. As those and many other former colleges became universities at the end of the nineteenth century, however, these once explicitly Christian institutions became generally nonsectarian Christian institutions devoted to broad, often vague public goods such as freedom, democracy, and economic and technological progress. The university, as University of Chicago president William Rainey Harper put it in 1899, was the “prophet” and “priest” of democracy, the keeper “of holy mysteries, of sacred and significant traditions.”

In the Harvard Report of 1945, General Education in a Free Society, some of the most respected scholars in the country acknowledged that American education was in “supreme need . . . of a unifying purpose and idea.” But religion wasn’t a possibility. “Given the American scene with its varieties of faith and even of unfaith,” Harvard faculty considered an explicitly religious basis for the undergraduate curriculum impossible.

Not much has changed since 1945. There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.

At best, a college education today might alert students to what is missing, not only in their own colleges but in themselves and the larger society as well.


Humanities in the Face of Catastrophe

Earlier this month, my colleague Bethany Nowviskie asked a group of digital humanities scholars gathered in Lausanne, Switzerland, to consider their work—its shape, its fate, its ends—within the context of climate change and the possibility of our own extinction, the extinction of the human species. (How’s that for a dinner-time keynote? Bourbon, please.)

The premise of Nowviskie’s talk, “Digital Humanities in the Anthropocene,” was the fact of climate change and the irrevocable consequences that it could have for life on our planet. What would it mean, she asked, to be a humanities scholar in the anthropocene, a geological epoch defined by human impact on the natural world? What would it mean to practice the humanities within a geological scale of time?

Whatever the fate of the “anthropocene” as a term (its existence and even its inception are still being debated among geologists), the scientists, activists, and scholars who invoke it consider human activity and practices inseparable from nature. Whether they intend to or not, they thereby challenge basic ideas about the human, culture, and agency that have sustained the humanities for centuries.

The very notion of the anthropocene and the suggestion that humans could cause geological-level change undermine, as Dipesh Chakrabarty puts it, the “age-old humanist distinction between natural history and human history.” From Vico’s famous claim that humans could know only what they have created, while nature remains God’s inscrutable work, to Kant’s understanding of cosmopolitan history as purposive action, Western thinkers have long distinguished between human and natural history. But granting humans a geological agency, however aggregative and delayed, undermines this distinction. What becomes of history and culture when they can no longer be thought of as simply human?

For one thing, we can better conceive of our cultural and historical labor as bound to the contingency, finitude, and flux of nature, and, thus, cultivate a more acute awareness of how extinguishable our labor can be. Here’s Nowviskie speaking to her digital humanities colleagues:

Tonight, I’ll ask you to take to heart the notion that, alongside the myriad joyful, playful scholarly, and intellectual concerns that motivate us in the digital humanities—or, rather, resting beneath them all, as a kind of substrate—there lies the seriousness of one core problem. The problem is that of extinction—of multiple extinctions; heart-breaking extinctions; boring, quotidian, barely-noticed extinctions—both the absences that echo through centuries, and the disposable erosions of our lossy everyday. We edit to guess at a poet’s papers, long since burned in the hearth. We scrape through stratigraphic layers of earth to uncover ways of life forgotten, and piece together potsherds to make our theories about them hold water. Some of us model how languages change over time, and train ourselves to read the hands that won’t be written anymore. Others promulgate standards to ward against isolation and loss. With great labor and attention, we migrate complex systems forward. We redesign our websites and our tools—or abandon them, or (more rarely) we consciously archive and shut them down. DHers [digital humanities scholars] peer with microscopes and macroscopes, looking into things we cannot see. And even while we delight in building the shiny and the new—and come to meetings like this to celebrate and share and advance that work—we know that someone, sooner or later, curates bits against our ruins.

Humanities labor represents, continues Nowviskie, a deeply human striving to communicate “across millennia” and “a hope against hope that we will leave material traces,” in stone, manuscript, print, or digital form.

This is where the strivings of humanities scholars and geologists of the anthropocene intersect. They both rummage about nature for traces of the human. And while for some, such ceaseless searching is driven by an indomitable humanism, a confidence in a human spirit that will always live on, others are haunted by the possibility of our extinction and the end, as Nowviskie puts it, “of so much worldly striving.” But those of us who are in search of a fragmented and sundered human are also motivated by this prospect of extinction.

When Nowviskie exhorts us to “dwell with extinction,” she echoes a humanistic disposition that has long been defined by the specter of loss. Since at least the studia humanitatis of the early modern period, scholars have practiced the humanities in anticipation of catastrophe. Theirs, however, was not a crisis of the guild, anxieties about budget cuts and declining humanities enrollments, but a fear of the radical and irreparable loss of the human record.

Many of the great encyclopedic works of the early modern European scholars were defined, as Ann Blair describes them, by the “stock-piling” of textual information intended to create a “treasury of material.” The early modern masterpieces of erudition—such as Conrad Gesner’s Bibliotheca universalis (1545) or Johann H. Alsted’s Encyclopaedia septem tomis distincta (1630)—were motivated not simply by information-lust but by a deeply cultural conception of the ends of scholarship, a desire to protect ancient learning and what humanists considered to be its integrity and authority. These early humanists, writes Blair, “hoped to safeguard the material they collected against a repetition of the traumatic loss of ancient learning of which they were keenly aware.” This loss had rendered Greek and Roman antiquity inaccessible until its gradual recovery in the fifteenth and sixteenth centuries. Humanist scholars saw encyclopedias and related reference works that collected ancient learning as guarantees that knowledge could be quickly reassembled should all books be lost again. They were busy collecting and curating because they feared another catastrophe.

Similarly, Denis Diderot described the eighteenth-century Encyclopédie as insurance against disaster:

The most glorious moment for a work of this sort would be that which might come immediately in the wake of some catastrophe so great as to suspend the progress of science, interrupt the labors of craftsmen, and plunge a portion of our hemisphere into darkness once again. What gratitude would not be lavished by the generation that came after this time of troubles upon those men who had discerned the approach of disaster from afar, who had taken measures to ward off its worst ravages by collecting in a safe place the knowledge of all past ages!

But even as Diderot promised his contributors future glory, he also acknowledged how fleeting the whole endeavor could be. The Encyclopédie, he lamented, would be irrelevant as soon as it was printed. The traces of the human that the “society of gentlemen” had so diligently collected in print would be out of date upon its first printing. Even print could not stop time.

In the nineteenth century, the famous German classicist August Böckh claimed that all this striving to collect and protect the material traces of the human was exemplified in the science of philology, which he defined as the Erkenntnis des Erkannten, or knowledge of what is and has been known. Our current knowledge is only as good as our past knowledge. And working with the fragmented documents of that past gave philologists an acute sense of how fragile our knowledge and history of ourselves were. The philologist’s attention to and care for the material nature of human culture—its embodiment in documents and texts of all sorts and qualities—was cultivated by a consciousness of how fragile it all was. The only thing that bound the species together was a documentary record always under threat.

Today that documentary record is undergoing a transition, unprecedented in its speed and extent, from printed to digital forms. Some scholars, archivists, and librarians warn that this move could prove catastrophic. Things of our past may be lost and forgotten. But if the long history of the humanities teaches us anything, it is that humanistic work has always been practiced in the face of catastrophe.

In our digital age, the vocation and the disposition of the humanities remains intact. “The work of the humanist scholar,” writes Jerome McGann, “is still to preserve, to monitor, to investigate, and to augment our cultural life and inheritance.” And it’s this disposition that thinking about the humanities in the anthropocene may help us recover.

Photograph: The dove and the raven released by Noah, with drowning people and animals in the water beneath the ark. From the Holkham Bible Picture Book, Add MS 47682, England: second quarter of the 14th century, Parchment codex, The British Library, London.


The Ethics of Squirming

Most readers of the Infernal Machine (though you, like us, may have been taking a break from blogging) have probably read something about the controversial Facebook “emotional contagion” study. It seems that a couple of years ago a researcher at Facebook decided to see what would happen if he tweaked the words in Facebook’s “News Feed” of close to 700,000 users so as to manipulate the “emotional content” of the News Feed. Would users respond on their Facebook pages in step with the manipulations? That is, could Facebook make people feel better if they tweaked their secret algorithms to prioritize “positive” words in the News Feed? (People might spend more time on Facebook if they could!)

The researcher, Adam Kramer, then brought in some university researchers to look at the massive data set. They did some “big data” statistical analyses (of a relatively straightforward type) and added some psychological theory and found this:

When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

The paper was peer reviewed and published in the prestigious Proceedings of the National Academy of Sciences (PNAS).

The merits of the study itself are highly questionable. My social science colleagues tell me that with such a massive sample size, you are almost always going to arrive at “statistically significant” findings, whether you are measuring “emotional contagion” or compulsive belching. The fact that the statistically significant “effect” was minimal throws more doubt on the validity of the study. Furthermore, my colleagues tell me that it is doubtful that the study is really measuring “emotional contagion” at all — there are other theories (other than emotional contagion) available that would explain why, when Joe posts a “negative” statement, Jane is reluctant to follow with a “positive” one.
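To make the sample-size point concrete, here is a minimal simulation sketch of my own (not the study’s code or data; the 0.02-standard-deviation shift and the group sizes below are assumptions chosen purely for illustration). With hundreds of thousands of users per condition, even an effect that small clears the conventional significance bar easily:

```python
# Illustrative sketch: a huge sample makes a trivial effect "statistically significant."
# The effect size (0.02 SD) and group sizes are assumed, not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 345_000                          # roughly half of ~690,000 users per condition
control = rng.normal(0.0, 1.0, n)    # baseline group
treated = rng.normal(0.02, 1.0, n)   # group shifted by 0.02 standard deviations

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt((treated.var() + control.var()) / 2)

print(f"t = {t_stat:.1f}, p = {p_value:.2e}, Cohen's d = {cohens_d:.3f}")
# Typical output: p is vanishingly small while Cohen's d stays around 0.02,
# i.e., "statistically significant" yet practically negligible.
```

In other words, at this scale the p-value tells you almost nothing about whether the effect matters; only the effect size does, and here it was tiny.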

But the main controversy surrounds the ethics of the study: Participants were never made aware that they were part of a massive “experimental” study, and Facebook seems to have fudged on the timing of the “data use policy,” inserting the current bit about “data analysis” and “research” after the data was collected. This is out of keeping with common practice in the social sciences, to say the least.

Reviewing responses to the study within the academic community, I’ve noticed quite a lot of squirming. The fact is that university researchers were a major part of this study; Cornell University’s Institutional Review Board (IRB) approved the study; and a prestigious academic journal has published it.

Now everyone, including Facebook’s researcher Adam Kramer, is squirming. Cornell University issued a bland statement clearly intended to wiggle its way out of the issue. PNAS has issued an “editorial expression of concern.” And the new media researchers who work in the same circles as the university-based Facebook researchers have turned to such clichés as don’t “throw the baby out with the bath water” (that is, don’t overreact to university researchers using corporate-owned data on how a corporation attempts to manipulate its users!). They say people should stop “pointing the finger or pontificating” and instead “sit down together and talk.” “We need dialogue,” writes Mary Gray of Indiana University and Microsoft Research, “a thoughtful, compassionate conversation.”

Okay. But perhaps before we have that compassionate sit-down about the ethics of big data social media research, we might take a moment to think about the ethics of squirming. For, in this case, there has been way too much of it. I, for one, think TIME’s television critic James Poniewozik has been clear enough about the study: “Facebook can put whatever it wants in the fine print. That shouldn’t keep us from saying that this kind of grossness is wrong, in bold letters.” I don’t think much more needs to be said about the ethics of this particular study.

But something more does need to be said about the broader ethics of research, which sometimes puts us in uncertain ethical situations. There is something about the will to know, and more about the professionalization of knowledge production, that leaves us more frequently than we would like in tricky ethical territory. Rather than relying on an IRB “stamp of approval,” university researchers might instead stop squirming, take responsibility for their work, and even say they regret it.

Here’s what an ethics of squirming might look like:

(a) Start with full disclosure. Here are the conflicts I am dealing with (or have dealt with); these are the messy issues; here is why I really want to squirm, but I won’t.
(b) Here’s where I (or we) may have been, or indeed were, wrong. Here are our moral regrets. (By the way, Mr. Kramer, making people “uncomfortable” is not a moral regret.)
(c) Here’s why I would or would not do it again.

All in all, the ethics of squirming entails less squirming and more speaking directly to the issues at hand. It means taking responsibility, either by repenting of wrongs and righting them if possible, or justifying one’s actions in public.

All together now: Mea culpa.
