Algorithms Rule

The Internet killed expertise. Or so claims public-policy specialist Tom Nichols in a recent essay that laments the demise of a cultural fixture.

I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers—in other words, between those of any achievement in an area and those with none at all. By this, I do not mean the death of actual expertise, the knowledge of specific things that sets some people apart from others in various areas. There will always be doctors, lawyers, engineers, and other specialists in various fields. Rather, what I fear has died is any acknowledgment of expertise as anything that should alter our thoughts or change the way we live.

For Nichols, technologies such as Google and Wikipedia betoken a loss of epistemic authority, that is, what counts as authoritative knowledge in our digital age. What legitimates one form of knowledge over another? Which sources of knowledge are to be trusted? Which not? What practices and scholarly habits, techniques, and institutions render knowledge authoritative or worthy? In our digital age, these questions seem almost quaint, throwbacks to a less free and democratic age.

Nichols isn’t alone in divining the collapse of expertise and authority in our digital age. But other, more sanguine observers celebrate it and the liberating promises of digital technologies. “Neither the Internet nor the WWW,” writes Cathy Davidson, a professor at the City University of New York Graduate Center, “has a center, an authority, a hierarchy, or even much of a filter on the largest structural level.” With the advent of digitally-supported learning, “conventional modes of authority break down.” Digital technologies will liberate us from the constraints of traditional forms of epistemic authority. There will be no filters in the digital future to come.

Davidson’s messianic hopes as well as Nichols’s cultural despair mistakenly suppose that there can somehow be a vacuum of epistemic authority. But, in truth, forms and functions of epistemic authority, be they the disciplinary order of the research university or Wikipedia’s fundamental principles or “Five Pillars,” are themselves filtering technologies, helping us to orient ourselves amid a surfeit of information. They help us discern and attend to what is worthwhile. Google searches point us in the direction of some resources and not others. Technologies are normative, evaluative structures that make information accessible, manageable, and, ultimately, meaningful. It is not a question, then, of the presence or absence of epistemic authority; it is about better or worse forms of epistemic authority. Expertise and cultural authority are still with us. But now they might be more spectral, embodied not in the university don but in the black-boxed algorithm.

If the Internet and the World Wide Web lack, as Davidson puts it, a “centralized authority” and a “filter,” they do so only on the most abstract level. Our daily interactions with the Web are made possible by a host of technological constraints and filters. People access and engage information through technologies that allow them to select, filter, and delimit. Web browsers, hyperlinks, blogs, online newspapers, and the computational algorithms of Facebook, Google, and financial institutions help us turn terabytes of data into something more scalable, that is, something that can be made useful to an embodied person. These now-ubiquitous technologies help us to sort, to Google a needle in the haystack—and in so doing, they have become central mediums for the way we experience the world.

We are living in an age of algorithmic authority. Algorithms filter our music choices, track our purchasing decisions, find our airline tickets, and help us withdraw money from an ATM. They are ubiquitous. They are shaping who we are and who we want to become. But we are only beginning to ask about our algorithmic selves. How can we, from the outside, learn about these algorithms and the ways they increasingly organize our very selves?
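To make the point concrete, here is a deliberately toy sketch of a feed-ranking filter. The fields, the word list, and the weights are all invented for illustration; no actual platform’s ranking is being described. Even in a few lines, every numerical choice is a normative judgment about what deserves attention:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    is_promoted: bool

# Invented, illustrative values: each one encodes a judgment about worth.
POSITIVE_WORDS = {"great", "love", "win", "happy"}

def score(post: Post) -> float:
    """Score a post: popularity, 'positivity,' and paid promotion all count."""
    positivity = sum(word in POSITIVE_WORDS for word in post.text.lower().split())
    return 1.0 * post.likes + 2.0 * positivity + (5.0 if post.is_promoted else 0.0)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by score: a filter deciding what a user sees first."""
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("City hall budget hearing tonight", likes=3, is_promoted=False),
    Post("We love this great new gadget", likes=2, is_promoted=True),
])
print([p.text for p in feed])  # the promoted, "positive" post comes first
```

Nothing here is mysterious, but a user only ever sees the output, never the weights—which is what it means for authority to become spectral and black-boxed.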

Authority hasn’t vanished. It has just assumed different, more latent forms. As Evgeny Morozov puts it,

The reason to fear Facebook and its ilk is not that they violate our privacy. It is that they define the parameters of the grey and mostly invisible technological infrastructure that shapes our identity.

We can’t free ourselves from our technologies; digital detoxes are palliative exercises. But we can try to get to know our new algorithmic selves.

Credit: Photo montage with algorithm and Caspar David Friedrich’s Wanderer Above the Sea of Fog (1818)

Deans Care About Books

“Doesn’t Matt care about publishing books anymore?” That’s what an editor of a well-established humanities journal recently asked one of my press colleagues. The editor had just returned from a meeting with me, where she had expressed interest in publishing “curated” collections of articles from back issues of the journal. It struck me as a wonderful idea.

“Why make these print books?” I asked. “What do you mean?” she replied. I explained that the articles already existed in digital form in Project MUSE and could easily be collected there on the same page. Moreover, anyone working for a university press knows that most people read journal articles online, not in print. (Many institutions, in fact, only subscribe to the digital editions of scholarly journals.) Why not simply bundle the digital versions of the articles and publish them together online?

My explanation didn’t relieve her puzzlement. She explained that the editor of the collections and the authors of the articles wouldn’t get promotion and tenure credit if the collections were published only online: “Deans care about books.”

This exchange reveals a troubling and unsustainable view, shared by scholars and deans, of the function of university presses in the world of scholarship. It has two elements. First, university presses have a responsibility to credentialize scholars. Second, presses discharge this responsibility by publishing scholarship in the form of print books.

For the sake of argument, I will leave the first assumption unquestioned. (For a compelling argument against the practice of deans relying on university presses to evaluate scholars, see Lindsay Waters’s Enemies of Promise: Publishing, Perishing, and the Eclipse of Scholarship.) It’s the second that concerns me here. University presses largely accept their credentialing function in the scholarly world. The economic realities of publishing scholarship, however, lead most of them to reject the idea that print is the proper, much less the only, format for all scholarship that makes it through the review process. By clinging to this second idea—the idea that humanities and humanistic social science scholarship must take the form of a print book—scholars and deans threaten the future of university presses and erode their ability to evaluate and distribute high-quality scholarship.

The only sure ticket to tenure and promotion is having your scholarship published by a university press. The reason is that having a university press book on one’s CV serves a signaling function. It tells deans and fellow scholars that your work withstood the rigors of peer review and the evaluation by press editors and faculty boards. This, in turn, signals that your work is good, that, as a scholar, you do your job well. There’s an alignment here between the interests of university presses and the interests of university deans. The presses want to publish high-quality scholarship, and the deans want to employ and promote scholars who are good at their jobs. The process required to produce the first provides evidence for the second.

These interests align, however, only up to a point. The decisions that track the scholarly quality of a project—those involved in discharging the credentialing responsibility—are not the only decisions a press makes when it comes to publishing a project. The work of determining the quality of scholarship and signaling this quality is done when a press issues a contract for a project and stands behind the decision by publishing it. In between those two moments, a press must make many production decisions—about the size of a print run, whether a book should be sold print-on-demand, whether a book should be cloth or paperback, whether images should be in color—that have nothing to do with the project’s quality. These decisions are responsive to practical matters such as how much it will cost to produce the book, how many copies a press thinks it can sell, who the press thinks the audience is, or even (perhaps especially), how well the press gets along with Jeff Bezos. They’re about selling books, not evaluating them. (For a good illustration of the gap between sales and quality consider: the press that published Nietzsche for Dummies probably sold a lot of books.)

Chief among production decisions is the one about publication format, whether to publish a project in print (cloth or paperback) or digitally (or, more commonly, both). To see the significance of this decision, consider what would have happened had the editor followed the route I suggested and published her collections online. It would involve asking the digital publisher to put all of the (already copy edited, designed, typeset) selected articles on the same webpage with a new introduction by the editor. (Obviously it would be more complicated than that, but you get my point.)

Compare this to what would happen if they were published in print. The articles would have to be downloaded and printed out. Then, production staff would scan the original articles to turn them into a manuscript. Copy editors, designers, printers, distributors, and all of the other people involved in producing these collections would then do their parts. All of this to make the content available in a format nobody seems to want so that somebody can put another line on her CV that will be credited by her dean. (Not all decisions about how to publish a work are like this one, of course. This case is unique in that the material already existed digitally. But the point is that, even in such a case, the push for print remains.)

In case it needs spelling out, the enormous difference between these two paths to publication corresponds to a much different price tag for the press. This is why, as I said at the outset, by clinging to the outdated notion that scholarship must be published in print deans and scholars hurt university presses. They tie the legitimate responsibility of determining and distributing quality scholarship to a costly, inefficient, inflexible, and unsustainable publishing model. By insisting that print is a necessary condition for scholarly quality, deans and scholars make it more difficult for university presses to stay in business, thereby making it more difficult for them to publish print books! At the same time, scholars insist on having their own work published in print while they increasingly engage the work of others online. And deans demand that scholars publish print books while not giving their libraries enough funds to buy them. So they insist on print and undermine the demand for it.

So, to answer the editor’s question: I do care about publishing books. Like all scholars and publishers, I love print books and deeply appreciate them as objects (as a quick glance at my office would confirm). I am not advocating for the end of print. But I also care about the future of scholarly publishing. In order to help secure a healthy future for both university presses and print books, scholars and deans must embrace the digital as a legitimate and credit-worthy format for quality scholarship.

Matthew McAdam is Humanities Editor at Johns Hopkins University Press.


Twilight of an Idol: College, Purpose, and Zombies

In “Don’t Send Your Kid to the Ivy League,” William Deresiewicz lambasts a pitiful American elite education system that “manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity and a stunted sense of purpose.” The entire system of elite education, he argues, reproduces an American upper-middle class and its distorted values, aspirations, and entitlement. Stanford and Swarthmore “are turning our kids into zombies.”

With his recent article in the New Republic and his forthcoming book (Excellent Sheep: The Miseducation of the American Elite and The Way to a Meaningful Life, Free Press), Deresiewicz is one of a growing number of pundits lamenting the loss of an American institution: college. “Is the only purpose of an education,” sneers Deresiewicz, “to enable you to get a job? What, in short, is college for?”

Andrew Delbanco recently asked the same question in College: What It Was, Is, and Should Be. So, too, did Mark Edmundson in Why Teach? As journalists, business professors, and university trustees chant “disruption,” college professors and their public-intellectual kin seem intent on defending their institutions and vocations with appeals to a collegiate ideal. In response to declining state support for higher education and increasing skepticism about the economic value of higher education among sections of the public, college is making a return. But what are Deresiewicz, Delbanco, Edmundson, not to mention countless faculty committees who are busy reimagining undergraduate education, talking about when they conjure up the “college experience”?

Princeton University’s Firestone Library and statue of John Witherspoon, sixth president and signer of the Declaration of Independence; Emile Wamsteker/Bloomberg via Getty Images

They are, I think, mostly talking about self-transformation. College may teach you how to think and even give you some skills, but ultimately, as Deresiewicz puts it, college helps you build a self. College is a four-year respite, before an impending life of professionalism, for self-discovery. “Students are pressured and programmed, trained to live from task to task, relentlessly rehearsed and tested until winners are culled from the rest,” writes Delbanco, and so they scarcely have time to practice the art of being in college, the art of “loafing.” Citing Walt Whitman, Delbanco describes college as a time when “I lean and loaf at my ease observing a spear of summer grass.” Similarly, Mark Edmundson writes of college as a “mythic force,” a “rite of passage.” In Why Teach?, Edmundson sees college, the English major, and teaching as opportunities for self-transformation and “soul making.” And this is an experience that Edmundson, Delbanco, and Deresiewicz want to democratize and make available to as many students as possible.

But University of Chicago undergraduate and Slate intern Osita Nwanevu isn’t buying it. In a response to Deresiewicz’s article, Nwanevu dismisses the entire notion that college is a singular opportunity for self-discovery.

Every ambitious student who believes that college is their opportunity to shape themselves will do whatever it takes to get into the very best, most exclusive school they can. When their experiences underwhelm, as many necessarily will, they will indeed leave college ‘anxious, timid, and lost,’ believing that they’ve missed out on a chance at intellectual development. Deresiewicz has simply traded careerism for another exalted goal, with similar results. [. . .] To believe that a college—Ivy or otherwise—can confer intellectual benefits in four years that you won’t be able to attain at some point over the course of the next 60 is to believe in magic.

What’s so special about four years of college? How did college come to be the defining experience of the American upper middle class? How did Harvard and Amherst, not to mention the liberal arts degree in English, come by their monopoly on an authentic self? Did Walt Whitman even go to college?

After the recent spate of books, articles, and faculty reports extolling and idealizing the transformative potential of a college experience, Nwanevu’s incredulity is refreshing. College has come to bear an impossible burden, both individually and socially. Its most confident advocates treat it like a stand-alone ethical resource, capable of funding and guiding the self-transformations of America’s elite. Deresiewicz laments contemporary college students’ lack of desire to do good or find the “higher meaning” of college buzzwords like leadership or service. And faculty, he claims, don’t have time for such meaningful pursuits; they’ve got research to do.

Deresiewicz is ultimately concerned about the ethical failures of American colleges. But he never mentions the particular ethical resources or traditions that make such self-transformation possible. And he never considers whether a transformation of the self is sufficient. Can such a collegiate celebration of the self resist the fragmenting and stultifying effects of the upper-middle-class American culture he decries—its consumerism, its anti-democratic tendencies, its solipsism? For Deresiewicz, college is less an institution devoted to a common end than it is a self-help retreat, replete with poetry classes and career services.

This is a common problem for the recent defenders of college. They invoke a collegiate ideal without considering the normative and ethical resources to which it used to be tied or the larger social ends that such an education was intended to serve. Perhaps inadvertently, Deresiewicz acknowledges this in a line of candor: “Religious colleges—even obscure, regional schools that no one has ever heard of on the coasts—often do a much better job” transforming selves. Until the end of the nineteenth century, American colleges such as Princeton, Yale, and Harvard continued to train Protestant clergy. They were explicitly religious institutions organized around particular ethical traditions. As those and many other former colleges became universities at the end of the nineteenth century, however, these once-explicitly Christian institutions became generally nonsectarian Christian institutions devoted to broad, often vague public goods such as freedom, democracy, and economic and technological progress. The university, as University of Chicago president William Rainey Harper put it in 1899, was the “prophet” and “priest” of democracy, the keeper “of holy mysteries, of sacred and significant traditions.”

In the Harvard Report of 1945, General Education in a Free Society, some of the most respected scholars in the country acknowledged that American education was in “supreme need . . . of a unifying purpose and idea.” But religion wasn’t a possibility. “Given the American scene with its varieties of faith and even of unfaith,” Harvard faculty considered an explicitly religious basis for the undergraduate curriculum impossible.

Not much has changed since 1945. There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.

At best, a college education today might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well.

Humanities in the Face of Catastrophe

Earlier this month, my colleague Bethany Nowviskie asked a group of digital humanities scholars gathered in Lausanne, Switzerland, to consider their work—its shape, its fate, its ends—within the context of climate change and the possibility of our, as in the human species, own extinction. (How’s that for a dinner-time keynote? Bourbon, please.)

The premise of Nowviskie’s talk, “Digital Humanities in the Anthropocene,” was the fact of climate change and the irrevocable consequences that it could have for life on our planet. What would it mean, she asked, to be a humanities scholar in the anthropocene, a geological epoch defined by human impact on the natural world? What would it mean to practice the humanities within a geological scale of time?

Whatever the fate of the “anthropocene” as a term (its existence and even its inception are still being debated among geologists), the scientists, activists, and scholars who invoke it consider human activity and practices as inseparable from nature. Whether they intend to or not, they thereby challenge basic ideas about the human, culture, and agency that have sustained the humanities for centuries.

The very notion of the anthropocene and the suggestion that humans could cause geological-level change undermines, as Dipesh Chakrabarty puts it, the “age-old humanist distinction between natural history and human history.” From Vico’s famous claim that humans could know only what they have created, while nature remains God’s inscrutable work, to Kant’s understanding of cosmopolitan history as purposive action, Western thinkers have long distinguished between human and natural history. But granting humans a geological agency, however aggregative and delayed, undermines this distinction. What becomes of history and culture when it can no longer be thought of as simply human?

For one thing, we can better conceive of our cultural and historical labor as bound to the contingency, finitude, and flux of nature, and, thus, cultivate a more acute awareness of how extinguishable our labor can be. Here’s Nowviskie speaking to her digital humanities colleagues:

Tonight, I’ll ask you to take to heart the notion that, alongside the myriad joyful, playful scholarly, and intellectual concerns that motivate us in the digital humanities—or, rather, resting beneath them all, as a kind of substrate—there lies the seriousness of one core problem. The problem is that of extinction—of multiple extinctions; heart-breaking extinctions; boring, quotidian, barely-noticed extinctions—both the absences that echo through centuries, and the disposable erosions of our lossy everyday. We edit to guess at a poet’s papers, long since burned in the hearth. We scrape through stratigraphic layers of earth to uncover ways of life forgotten, and piece together potsherds to make our theories about them hold water. Some of us model how languages change over time, and train ourselves to read the hands that won’t be written anymore. Others promulgate standards to ward against isolation and loss. With great labor and attention, we migrate complex systems forward. We redesign our websites and our tools—or abandon them, or (more rarely) we consciously archive and shut them down. DHers [digital humanities scholars] peer with microscopes and macroscopes, looking into things we cannot see. And even while we delight in building the shiny and the new—and come to meetings like this to celebrate and share and advance that work—we know that someone, sooner or later, curates bits against our ruins.

Humanities labor represents, continues Nowviskie, a deeply human striving to communicate “across millennia” and “a hope against hope that we will leave material traces,” in stone, manuscript, print, or digital form.

This is where the strivings of humanities scholars and geologists of the anthropocene intersect. They both rummage about nature for traces of the human. And while for some, such ceaseless searching is driven by an indomitable humanism, a confidence in a human spirit that will always live on, others are haunted by the possibility of our extinction and the end, as Nowviskie puts it, “of so much worldly striving.” But those of us who are in search of a fragmented and sundered human are also motivated by this prospect of extinction.

When Nowviskie exhorts us to “dwell with extinction,” she echoes a humanistic disposition that has long been defined by the specter of loss. Since at least the studia humanitatis of the early modern period, scholars have practiced the humanities in anticipation of catastrophe. Theirs, however, was not a crisis of the guild, anxieties about budget cuts and declining humanities enrollments, but a fear of the radical and irreparable loss of the human record.

Many of the great encyclopedic works of the early modern European scholars were defined, as Ann Blair describes them, by the “stock-piling” of textual information intended to create a “treasury of material.” The early modern masterpieces of erudition—such as Conrad Gesner’s Bibliotheca universalis (1545) or Johann H. Alsted’s Encyclopaedia septem tomis distincta (1630)—were motivated not simply by information-lust but by a deeply cultural conception of the ends of scholarship, a desire to protect ancient learning and what humanists considered to be its integrity and authority. These early humanists, writes Blair, “hoped to safeguard the material they collected against a repetition of the traumatic loss of ancient learning of which they were keenly aware.” This loss had rendered Greek and Roman antiquity inaccessible until its gradual recovery in the fifteenth and sixteenth centuries. Humanist scholars saw encyclopedias and related reference works that collected ancient learning as guarantees that knowledge could be quickly reassembled should all books be lost again. They were busy collecting and curating because they feared another catastrophe.

Similarly, Denis Diderot described the eighteenth-century Encyclopédie as insurance against disaster:

The most glorious moment for a work of this sort would be that which might come immediately in the wake of some catastrophe so great as to suspend the progress of science, interrupt the labors of craftsmen, and plunge a portion of our hemisphere into darkness once again. What gratitude would not be lavished by the generation that came after this time of troubles upon those men who had discerned the approach of disaster from afar, who had taken measures to ward off its worst ravages by collecting in a safe place the knowledge of all past ages!

But even as Diderot promised his contributors future glory, he also acknowledged how fleeting the whole endeavor could be. The Encyclopédie, he lamented, would be irrelevant as soon as it was printed. The traces of the human that the “society of gentlemen” had so diligently collected in print would be out of date upon its first printing. Even print could not stop time.

In the nineteenth century, the famous German classicist August Böckh claimed that all this striving to collect and protect the material traces of the human was exemplified in the science of philology, which he defined as the Erkenntnis des Erkannten, or knowledge of what is and has been known. Our current knowledge is only as good as our past knowledge. And working with the fragmented documents of that past gave philologists an acute sense of how fragile our knowledge and history of ourselves was. The philologist’s attention to and care for the material nature of human culture—its embodiment in documents and texts of all sorts and qualities—was cultivated by a consciousness of how fragile it all was. The only thing that bound the species together was a documentary record always under threat.

Today that documentary record is undergoing a transition, unprecedented in its speed and extent, from printed to digital forms. Some scholars, archivists, and librarians warn that this move could prove catastrophic. Things of our past may be lost and forgotten. But if the long history of the humanities teaches us anything, it is that humanistic work has always been practiced in the face of catastrophe.

In our digital age, the vocation and the disposition of the humanities remains intact. “The work of the humanist scholar,” writes Jerome McGann, “is still to preserve, to monitor, to investigate, and to augment our cultural life and inheritance.” And it’s this disposition that thinking about the humanities in the anthropocene may help us recover.

Photograph: The dove and the raven released by Noah, with drowning people and animals in the water beneath the ark. From the Holkham Bible Picture Book, Add MS 47682, England: second quarter of the 14th century, Parchment codex, The British Library, London.

The Ethics of Squirming

Most readers of the Infernal Machine (though you, like us, may have been taking a break from blogging) have probably read something about the controversial Facebook “emotional contagion” study. It seems that a couple of years ago a researcher at Facebook decided to see what would happen if he tweaked the words in Facebook’s “News Feed” of close to 700,000 users so as to manipulate the “emotional content” of the News Feed. Would users respond on their Facebook pages in step with the manipulations? That is, could Facebook make people feel better if they tweaked their secret algorithms to prioritize “positive” words in the News Feed? (People might spend more time on Facebook if they could!)

The researcher, Adam Kramer, then brought in some university researchers to look at the massive data set. They did some “big data” statistical analyses (of a relatively straightforward type) and added some psychological theory and found this:

When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

The paper was peer reviewed and published in the prestigious Proceedings of the National Academy of Sciences (PNAS).

The merits of the study itself are highly questionable. My social science colleagues tell me that with such a massive sample size, you are almost always going to arrive at “statistically significant” findings, whether you are measuring “emotional contagion” or compulsive belching. The fact that the statistically significant “effect” was minimal throws more doubt onto the validity of the study. Furthermore, my social science colleagues tell me that it is doubtful that the study is really measuring “emotional contagion” at all—there are other theories (other than emotional contagion) available that would explain why, when Joe posts a “negative” statement, Jane is reluctant to follow with a “positive” one.
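The sample-size point is worth making concrete. The following is a minimal sketch with purely hypothetical numbers (not drawn from the Facebook data): a difference of one percent of a standard deviation—trivial by any practical measure—sails past the conventional p < 0.05 threshold once each group contains a few hundred thousand users.

```python
import math

def two_sample_z(mean_diff, sd, n_per_group):
    """Two-sided p-value for a difference in means between two equal-sized
    groups, using the normal approximation."""
    se = sd * math.sqrt(2.0 / n_per_group)   # standard error of the difference
    z = mean_diff / se
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return z, p

# Hypothetical effect: the groups differ by 1% of a standard deviation.
for n in (200, 2_000, 350_000):
    z, p = two_sample_z(mean_diff=0.01, sd=1.0, n_per_group=n)
    print(f"n per group = {n:>7,}: z = {z:5.2f}, p = {p:.2g}")
```

The “effect” never gets any bigger; only the standard error shrinks as n grows. Statistical significance, in other words, is not the same thing as practical significance.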

But the main controversy surrounds the ethics of the study: Participants were never made aware that they were part of a massive “experimental” study, and Facebook seems to have fudged on the timing of the “data use policy,” inserting the current bit about “data analysis” and “research” after the data was collected. This is out of keeping with common practice in the social sciences, to say the least.

Reviewing responses to the study within the academic community, I’ve noticed quite a lot of squirming. The fact is that university researchers were a major part of this study; Cornell University’s Institutional Review Board (IRB) approved the study; and a prestigious academic journal has published it.

Now everyone, including Facebook’s researcher Adam Kramer, is squirming. Cornell University issued a bland statement clearly intended to wiggle its way out of the issue. PNAS has issued an “editorial expression of concern.” And the new media researchers who work in the same circles as the university-based Facebook researchers have turned to such clichés as don’t “throw the baby out with the bath water” (don’t over-react to university researchers using corporate-owned data on how they attempt to manipulate their users!). They say people should stop “pointing the finger or pontificating” and instead “sit down together and talk.” “We need dialogue,” writes Mary Gray of Indiana University and Microsoft Research, “a thoughtful, compassionate conversation.”

Okay. But perhaps before we have that compassionate sit-down about the ethics of big data social media research, we might take a moment to think about the ethics of squirming. For, in this case, there has been way too much of it. I, for one, think TIME’s television critic James Poniewozik has been clear enough about the study: “Facebook can put whatever it wants in the fine print. That shouldn’t keep us from saying that this kind of grossness is wrong, in bold letters.” I don’t think much more needs to be said about the ethics of this particular study.

But something more does need to be said about the broader ethics of research, which sometimes puts us in uncertain ethical situations. There is something about the will to know, and more about the professionalization of knowledge production, that leaves us more frequently than we would like in tricky ethical territory. Rather than simply relying on an IRB “stamp of approval,” university researchers might instead stop squirming and take responsibility for their work and even say they regret it.

Here’s what an ethics of squirming might look like:

(a) Start with full disclosure. Here are the conflicts I am dealing with (or have dealt with); these are the messy issues; here is why I really want to squirm, but I won’t.
(b) Here’s where I (or we) may have been, or indeed were, wrong. Here are our moral regrets. (By the way, Mr. Kramer, making people “uncomfortable” is not a moral regret.)
(c) Here’s why I would or would not do it again.

All in all, the ethics of squirming entails less squirming and more speaking directly to the issues at hand. It means taking responsibility, either by repenting of wrongs and righting them if possible, or justifying one’s actions in public.

All together now: Mea culpa.

“Open” is not Public

One of the English words most in need of rehabilitation these days is the word “public.” I have to confess that though I hear the word all the time, and think about it regularly, I don’t know how it needs to be rehabilitated. But I am certain it does.

Historically, the most obvious thing one could say about the word “public” was that it is not “private.” That distinction, however, hardly holds anymore, as the “private”—whether in the form of private lives, private capital, or private information—now fills our political and social lives, such that the distinction between the private and the public makes increasingly less sense. It is not just that the most “public” controversies today—for example, the Donald Sterling debacle—tend to involve the “public” exposure of “private” lives, it is that many of our most pressing political and social problems today are inextricably tied to private interests, private choices, and privacy.

But the problem with “public” is not just that the ancient dialectic on which it rests—public versus private—is outmoded. There is also a new meaning of “public” now in wide circulation, operating (as almost all language does) at the level of unexamined common sense. “Public” is now used as a synonym for “open.”  

Let me offer an example. A few weeks back I was invited, with several other faculty, to a meeting with a program officer from a major foundation to discuss ways to make humanistic scholarship more public. What struck me about our conversation, which was quite lively, was that most people in the room seemed to assume that the main barrier to a public life for humanistic scholarship was access or openness. The thinking went like this: University presses, which publish most long-form humanistic scholarship, put up physical and financial barriers to a “public” life for scholarship. That is, print books have limited runs and cost money, sometimes quite a bit of money. Therefore these books sit on library shelves for only a few specialists to read. The solution, many in the room seemed to assume, was to convince university presses to “go digital,” with the funding agency with which we were meeting using its money to offset the financial loss presses would incur by going digital.

The problem of a public life for humanistic scholarship was one that the program officer presented. The foundation wanted to figure out how to invest their money in such a way as to help the humanities “go public.” But for virtually everybody in the room this meant “going digital,” making humanities work openly accessible. Openness—or open access—was the assumed key to a public life for humanistic scholarship.

But making something openly accessible does not make it public. To make something accessible or “open” in the way we talk about it today does not entail, on the level of norms, making it legible or debatable, let alone useful to non-specialists. There are millions of studies, papers, and data sets that are openly accessible but that nevertheless do not have a public life. The U.S. government, no less, has invested in various “openness” initiatives over the past two decades. These projects are presented as democratic gestures to the “public,” but they do little more than allow governing agencies to display their democratic credentials and grant a few specialists access to documents and data. To make something open or accessible is not to make it public.

What would it mean to make humanistic scholarship or government data or, for that matter, computing code, truly public? One clue does come to us from antiquity: The Latin word publicus meant “of the people.” Publicus was an attribute, a quality or feature of a person, thing, action, or situation, rather than the thing itself. It did not, mind you, mean that the person, thing, action, or situation was authorized by, or somehow the consequence of, the majority. That is, publicus was not a synonym for “democratic.” Rather, it meant that the person, thing, action, or situation was plausibly representative “of the people,” such that the people could engage it in a useful and productive political manner.

The question of how to make something public concerns how to endow it with a quality that can be attributed to “the people.” Clearly, this would mean taking seriously matters of “design” or what Cicero called “style,” such that the “public thing” (in Latin, the res publica) is first of all legible (able to be “read” if not fully understood), second of all in some sense subject to discourse and debate (and thus subject to political deliberation), and third of all socially useful. This means thinking long and hard about “the people” themselves: their habits, their tastes, their interests, their schedules, their aptitudes, and so on. While “openness” may in certain circumstances be part of that which builds up to a “public” quality, I would venture to say that openness or access is not even necessary. Cicero, for one, saw “public” life as one that was styled in such a way as to be of the people but not necessarily exposed or “open” to the people. Public speeches, public events, and public figures could—like some public art today—be a bit opaque (legible enough, but not necessarily fully understandable) but still be “of the people.” Moreover, certain information and deliberation may need to be kept secret for good reasons—for example, having to do with the fair administration of justice (think of a jury deliberating behind closed doors)—but that information and deliberation can still be public according to the criteria above.

Indeed, openness is relatively easy; publicity is hard. Making something open is but an act; making something public is an art.

Our digital moment is a triumphal one for “openness.” But “open” is not public. If we are to really push the digital into the public, we need to go well beyond questions of access, openness, and transparency and start asking how it is that the digital might take on a quality that is “of the people.”

You must change your graduate program!

Of all the hyperbole surrounding the fate of the humanities today, the problems facing graduate studies seem the least exaggerated. The number of PhDs has far outpaced the number of available full-time jobs. Financial support is both inadequate and unevenly distributed, requiring students to defer earnings over long periods of time, assume unneeded debt, and compete against differently funded peers.

Overspecialization has made knowledge transfer between the disciplines, not to mention between the academy and the rest of the workforce, increasingly difficult. The model of mentorship that largely guides student progress, now centuries old, seems increasingly out of touch with a culture of non-academic work, so that students are ill-prepared to leave the academic track. Time-to-degree has not only not sped up, but increasingly it also correlates with lower success rates—the longer students stay in the PhD track, the lower their chances of full-time employment.

Philosophy Seminar Table at University of Chicago

The hegemony of the seminar as the only model of learning is also at odds with much recent thinking about learning and intellectual development. Undoubtedly, numerous teachers and students alike would describe at least some, if not many, of their seminar experiences as profoundly uninspiring. Add to that the way we largely operate within a developmental model premised on major phase-changes that dot otherwise stable, and largely flat, plateaus. The long time between exams or their sink-or-swim nature does little to promote long-term incremental development of students as thinkers, writers, or teachers. We think more in terms of a nineteenth-century-inspired model of botanical metamorphosis, with its inscrutable internal transformations, than we do incremental, cumulative performances.

There are also bio-political aspects to the crisis that have recently been raised, where the most intense periods of work and the most intense periods of insecurity overlap precisely within the normal timeframe of human fertility. This PhD model seems downright Saturnalian, consuming its own offspring.

What is there to like about this scenario?

Luckily, the MLA has issued a report about graduate education. “We are faced with an unsustainable reality,” the report states. Indeed. And then come the all-too-familiar platitudes. Maintain excellence. More teaching. Shorter time-frames. Innovate. Better connection with non-academic jobs. Advocate for more tenure-track positions.

As the clichés pile up, so do the contradictions. Get out faster, but spend more time teaching. Keep up those rigorous standards of specialization, but do it with haste (and be interdisciplinary about it). No teaching positions available? Look for another kind of job! Persuade the university to hire more tenure track positions—whom do I call for that one?

Will the MLA report lead to changes? It’s doubtful. Sure, we’ll crack down on time to degree without really changing requirements. We’ll spice up course offerings and maybe throw in an independent project or two (digital portfolios!). We’ll scratch our heads and say the PhD would be a great fit for a job in consulting, or in a museum, maybe even Google—and then do absolutely nothing. Five or ten years from now, we’ll talk about a crisis in the humanities, the shrinking of the field, and notice that, once again, there seem to be fewer of us hanging around the faculty club.

Nothing will change because we don’t have to. As long as there are too many graduate students, there is no problem for faculty. And no matter what we say, what we do is always the same thing. Try approaching your department and saying you need to change the scope, scale, content, and medium of the dissertation. You’re in for a fun conversation.

Try implementing a mandatory time-limit with yearly progress reports and consequences for failure and you’ll be barraged with so many exceptions your Agamben will start to hurt. What do employers want from PhDs—good writing skills, general knowledge, analytical capability, facility with numbers, strong work habits, works well with others? Sorry, can’t help you. None of our students has those skills since our program doesn’t emphasize them (but we have a writing center!).

Nothing will change because we don’t have to. We’re so conservative at heart we’d rather die out with our beliefs intact than do anything that might actually better serve the student population. We’ll continue to point to the exceptions without realizing how much they still look like something we didn’t want to change.

The PhD did something once, or rather it did one thing and it did it reasonably well. It still does that one thing, which is now, quantitatively speaking, vastly unnecessary. Some have suggested scaling up to meet the sciences on their own ground (and here at The Infernal Machine, too). I would suggest that we scale down to meet the needs of the world at large. More modular, more flexible, more creative, more varied, more timely, more general, more collaborative, and more relevant. Until we have any proof that our programs are feeders for jobs outside the academy, we’re just failing by another name.

We can either change in substantive ways or pretend to do something else while actually continuing to do the same things we’ve always done. The MLA report looks a lot like the latter and no doubt so will most of the responses to it.

I’m looking forward to next year’s report. How many ways can you play the same tune?


Big Humanities

[Editor's Note: This is the second installment in The Humanities in Full series.]

Before there was big science or big data, there was big humanities. Until the last third of the nineteenth century, the natural and physical sciences imitated many of the methods and practices of the humanities, especially disciplines like philology, which pioneered techniques in data mining, the coordination of observers, and the collection and sorting of information—what Lorraine Daston terms practices of “collective empiricism.”

One of the most successful and long-lasting projects was led by the Berlin philologist August Böckh. In a proposal to the Prussian Academy of Sciences in Berlin in 1815, Böckh and his colleagues requested funding for a long-term project to collect as completely as possible all Greek inscriptions, printed, inscribed, and holograph. Eventually published in four installments as Corpus Inscriptionum Graecarum between 1828 and 1859, with an index in 1877, Böckh’s project was an organizational feat that relied on the work of hundreds of philologists over decades, and it quickly became a model for German scholarship in all fields. “The primary purpose of a Royal Academy of the Sciences,” he wrote, should be to support the type of work that “no individual can accomplish.” The project collected, stored, preserved, and evaluated data. And in Böckh’s case, the data were Greek inscriptions scattered across the Mediterranean.

Böckh’s Corpus Inscriptionum Graecarum was just a prelude. In his inaugural lecture to the Prussian Academy of Sciences in 1858, Theodor Mommsen, one of Germany’s foremost classical scholars, declared that the purpose of disciplines like philology and history was to organize the “archive of the past.” What Mommsen had in mind, as would become evident in the kinds of projects he supported, was not some abstract archive of immaterial ideas. He wanted scholars to collect data and shape it into meticulously organized and edited printed volumes in which the “archive” would take tangible form. Work on the scales that Mommsen imagined would require international teams of scholars and the “liberation” of scholars from what he dismissed as the “arbitrary and senseless” divisions among the disciplines.

As secretary of the Prussian Academy of Sciences, Mommsen set out to institutionalize his vision of big philology, or what he termed the “large scale production of the sciences” [Grossbetrieb der Wissenschaften]. After securing a three-fold increase in the Academy’s budget, he supported a series of monumental projects. He oversaw the internationalization and expansion of the Corpus Inscriptionum Latinarum, the Latinate counterpart to Böckh’s project that sought to collect all inscriptions from across the entire Roman Empire. It eventually collected more than 180,000 inscriptions and grew to 17 volumes plus 13 supplementary volumes. Mommsen also helped church historian Adolf Harnack secure 75,000 Marks and a 15-year timeline for a project on Greek-Christian Authors of the First Three Centuries, the modest goal of which was to collect all of the hand-written manuscripts of early Christianity. Other projects included a prosopography of ancient Rome funded for a period of ten years.

Looking back on what Mommsen had accomplished for modern scholarship, the German philologist Ulrich von Wilamowitz-Moellendorf wrote:

The large scale production of science cannot replace the initiative of the individual; no one knew that better than Mommsen. But in many cases the individual will only be able to carry out his ideas through large scale production.

Theodor Mommsen, Wikipedia Commons

Figures such as Böckh and Mommsen introduced different scales to knowledge creation and different skill sets to humanistic scholarship. They developed, coordinated, and managed teams of people in order to organize huge sets of texts and data.

But to what end? What was the purpose of all this collecting, organizing, and managing? This was the question that transformed Germany’s most self-loathing philologist into a philosopher. A wunderkind trained in Leipzig, Friedrich Nietzsche was appointed professor of classical philology at the University of Basel at the age of 24, before he had even finished his doctorate. Just as Mommsen was busy assembling the “archive of the past,” Nietzsche began to diagnose modern culture, not to mention himself, as suffering from a bad case of “academic knowledge,” or Wissenschaft.

In We Philologists, Nietzsche excoriated his fellow scholars for abdicating philology’s real task. Ultimately, he argued, philology was not about advancing knowledge or building an “archive of the past.” It was about forming stronger, healthier human beings on the model, or at least the idealized model, of the ancient and classical Greeks. The real philologist was a lover of antiquity, someone who sought to transform himself through an encounter with a superior culture. Every good and worthwhile science, he wrote, should be kept in check by a “hygienics of life”—practices by which whatever one learned could be integrated into how one lived.

Despite his stylized iconoclasm, Nietzsche was a traditional German Grecophile for whom antiquity of the Greek sort was a moral utopia. But he was also a modern scholar struggling to come to terms with the ascendant research university and what we recognize today as its basic elements: the division of intellectual labor, academic specialization, and the constant struggle to integrate new technologies and practices for sifting through and making sense of the past.

Nietzsche’s polemics against big philology were precursors to contemporary anxieties about what might become of the humanities in the digital age. Data—be it 180,000 inscriptions or hundreds of digitized novels—cannot speak for itself, but it is never incoherent. It’s always collected, organized, edited, framed, and given meaning, whether in nineteenth-century printed volumes or twenty-first century graphs. With his bombast and passions, Nietzsche made the case for values and interpretation at a moment when textual empiricism was ascendant and positivism loomed. “Yes, but how are we to live!” was his constant refrain.

The catch, of course, is that most of us aren’t Nietzsche, though arguably too many contemporary scholars of the strongly critical-theoretical bent aspire to be. Scholarship and knowledge might be better served if many such would-be master interpreters settled for the humble but necessary drudgery of collecting, annotating, and commenting on the “archive of the past,” maintaining cultural inheritances and providing invaluable grist for the equally important job of hermeneutics. We shouldn’t forget that Nietzsche the philosopher, the moral psychologist who diagnosed the ethical ills of modernity, grew out of Nietzsche the philologist, the erudite scholar who reverentially tended ancient traditions and texts.

Nineteenth-century practices of collecting and evaluating data don’t exhaust the work of the humanities, but they highlight a broader history of the humanities in which collecting and evaluating data has been a central and even noble pursuit. Thinking of the humanities and the sciences in terms of what humanists and scientists actually do might help us develop a longer history of the humanities and see continuities that simple polemics only conceal. Nietzsche and his fellow nineteenth-century philologists struggled to reconcile more interpretive methods with historical approaches, to blend pleasure and delight with critical distance, and to temper particularity with timeless value. But Nietzsche represents only one side of the debate. While his critiques of the utopian impulses of big philology were necessary correctives, he ultimately left the university and withdrew to a life in extremis, writing at the edge of lucidity and under the shadow of genius.

The Wall Must Stand: Innovation At the New York Times

“In the future,” writes digital scholar and Twitter wit Ian Bogost, “all news will be about, rather than in, the New York Times.”

That future seemed to arrive last week, and not only with the controversy unleashed by the abrupt firing of executive editor Jill Abramson. Possibly as part of the storm, someone in the newsroom leaked a 96-page document with detailed proposals for bringing the Times more fully into the digital age and—even more important—making the Grey Lady more “Reader Experience”-friendly.

Nieman Journalism Lab’s Joshua Benton calls the report “one of the most remarkable documents I’ve seen in my years running the Lab.” He even tasked three of his staffers with excerpting highlights from it. But the whole thing merits reading by anyone interested in the possible (or inevitable?) future of the news.

Not that there is anything truly new or surprising on any page of “Innovation,” as the study is so grandiloquently titled. Put together by a six-member team led by Arthur Gregg Sulzberger, the publisher’s son and heir apparent, the report is a compendium of ideas, strategies, arguments, and veiled threats familiar to anyone who has worked in or around newsrooms during the last decade or so. From the title on, it buzzes with the kind of jargony nostrums that fuel TED Talks and South by Southwest conferences, from impact toolboxes and repackaging old content in new formats to making journalists their own content promoters and integrating the Reader Experience team with the newsroom to, well, anything else that helps counter the disruptive incursions of new media upstarts like Buzzfeed and Huffington Post.

And why not counter those disrupters? As the report frequently notes, the NYT produces consistently superior content but does an almost-as-consistently inferior job of getting its content out to its readers. In some cases, competitors are even more successful at distributing Times content than the Times itself. It makes little sense to remain satisfied with such a status quo.

But reading the report invites suspicion on at least two counts. The first is quite immediate: How is it possible that these objectives haven’t already been accomplished? (And, in fact, is it possible that many of them have, as some NYT insiders say, proving once again that many strategic studies merely confirm the direction in which the institution is already heading?) My incredulity arises from the facts presented in the report itself, namely the astonishing number of Times employees already dedicated to Reader Experience activities (which, as the report notes, “includes large segments of Design, Technology, Consumer Insight Group, R&D, and Product”).

The problem, explicitly remarked upon at several points in the report, appears to be turf wars. Some of this is simply silly, such as the exclusion of Reader Experience people from key editorial meetings or other instances of uncollegial shunning. But I suspect the problem also stems from something far less silly, indeed, from the most fundamental of political-institutional questions: Who, at the end of the day, will be in command of the combined newsroom and Reader Experience personnel? Will it be the editorial leadership or the business leadership?

That question can’t be finessed, fudged, blurred, or deferred. It must be answered forthrightly, because if it isn’t, the very purpose of a serious news organization becomes unclear.

And that leads to my second big concern. What is the real goal of “innovation” at the New York Times? Is it intended primarily to enable the editorial leaders to use and inculcate the best practices of distribution, with additional staff possessing advanced skills in those practices, in order to support and advance strong journalism? Or is it intended primarily to increase the number of Reader Experiences as measured through analytics and other metrics, at the expense, in the long or short run, of the highest quality of journalism? If the former, I am on board—who wouldn’t be?

Which is why the political question must be answered first. If not, and if the new and enhanced newsroom ends up being run by the business side, then decisions will be made that will slowly erode the quality of the journalistic content. If content packagers and social media community managers answer ultimately to publishing executives and not to editors, then they will be able to demand the kind of content—whimsical features, for example, rather than hard reporting—that tends to trend most strongly. The sad fact is that cute cat stories always sell better than revelations about city hall. The number of hits, likes, or visits will gradually but inevitably determine the editorial agenda.

Overstated? Simplistic? I don’t think so. When the ultimate purpose of a news organization is something that can be evaluated almost exclusively by metrics, then you can be sure you are no longer talking about a news organization. A media company, perhaps, but not a journalistic one.

The report calls for some breaching of the editorial-publishing (or church-state) firewall. That sounds suspect to me. What it should call for is the migration of some of those Reader Experience departments and personnel to the editorial side of the firewall. The wall itself must stand. Or the journalism will fall.

Jay Tolson is the Executive Editor of The Hedgehog Review.

The Humanities in Full: Polemics Against the Two-Culture Fallacy

The New Republic does not like the digital humanities. Following Leon Wieseltier’s earlier diatribes, Adam Kirsch recently warned that the digital humanities and their “technology” were taking over English departments. Kirsch posed some reasonable questions: Are the digital humanities a form of technological solutionism? No, notwithstanding the occasionally utopian strand. Are the digital humanities “post-verbal”? With all their graphs, charts, and network visualizations, do they aspire to a discourse of mere pictures and objects? No and no. With all their generously funded projects, are they embracing the “market language of productivity to create yet another menacing metric for the humanities?” A good question that deserves thoughtful responses (here and here).

But Kirsch’s essay isn’t really about the digital humanities. It’s about the humanities more broadly and Kirsch’s truncated and ahistorical vision of what they ought to be. The problem with the digital humanities, he writes, is that they go against the “nature of humanistic work.” And their errant ways

derive from a false analogy between the humanities and the sciences. Humanistic thinking does not proceed by experiments that yield results; it is a matter of mental experiences, provoked by works of art and history, that expand the range of one’s understanding and sympathy. It makes no sense to accelerate the work of thinking by delegating it to a computer when it is precisely the experience of thought that constitutes the substance of a humanistic education. The humanities cannot take place in seconds. This is why the best humanistic scholarship is creative, more akin to poetry and fiction than to chemistry or physics: it draws not just on a body of knowledge, though knowledge is indispensable, but on a scholar’s imagination and sense of reality. Of course this work cannot be done in isolation, any more than a poem can be written in a private language. But just as writing a poem with a computer is no easier than writing one with a pen, so no computer can take on the human part of humanistic work, which is to feel and to think one’s way into different times, places, and minds.

Kirsch pits the technologically unadorned humanities that produce subjective experiences against the technology-dependent sciences that produce mere facts. This simple, and false, dichotomy manages to slight both at once, and to obscure more than it clarifies.

In fact, this humanities-sciences dichotomy is relatively recent. And, as it turns out, the term humanities itself was seldom used before the nineteenth century. The OED lists the first use as 1855, in a reference to music as one of “the humanities.” Google’s Ngram keyword search shows a marked increase in the prevalence of the term around 1840, just as the natural and physical sciences were becoming ascendant in universities.

Today’s distinctions between the digital humanities and the humanities proper have their longer history in these nineteenth-century divisions. For well over a century now, one of the dominant notions of the humanities, in the academy at least, has been one that casts them squarely against the natural sciences. And this conception of the humanities, which has since gained wider influence in the culture at large, was first articulated by the late nineteenth-century German scholar Wilhelm Dilthey. Dilthey distinguished the human sciences [Geisteswissenschaften] from the “natural” sciences [Naturwissenschaften].

The Geisteswissenschaften, claimed Dilthey, studied the inner workings of mental facts – that is, the internal processes of human experience (Kirsch’s beloved mental experiences). For this internal realm, the freedom and autonomy of the subject were central and, thus, the primary objects of inquiry.

The natural sciences, by contrast, studied material processes governed by natural laws and the mechanisms of cause and effect. For Dilthey, humanities scholars don’t count, measure, or seek patterns; they seek to understand what motivates canonical historical figures who produce works of art and other artifacts of culture (Kirsch’s struggle to understand, not just explain, Auerbach’s Mimesis, for example). The human sciences understand phenomena from within; the natural sciences explain them from without.

Dilthey’s efforts to distinguish sharply between the two forms of inquiry were in large part meant to resist the rising influence of the natural sciences in nineteenth-century German universities and, above all, the influence of positivism: the notion that we can have knowledge only of phenomena (the only possible knowledge is of an endless series of facts). Like Dilthey’s, Kirsch’s embrace of a very particular and limited notion of the humanities is reactionary. But whereas Dilthey feared the pervasive and corrosive effects of positivism, Kirsch fears the utopian delusions of technological solutionism.

These simple oppositions—the humanities versus the sciences—confuse more than they enlighten and, in a timeless irony, produce deeply anti-humanistic polemics. They also ignore the historical fact that the humanities, and humanistic inquiry more broadly, have been concerned not only with particular human artifacts (one painting, one poem, one piece of music) but also, as Dutch scholar Rens Bod recently put it, with the patterns and principles that help us make sense of and enjoy those artifacts.

The practices of humanists, the kinds of things they actually do when they engage products of human creativity, have always been bound up with efforts to make connections and generalize. From Lorenzo Valla’s careful and methodical debunking of the Donatio Constantini (the Donation of Constantine) in 1440 to Erich Auerbach’s Mimesis in 1946, humanists of all kinds have relied on particular notions of method, evidence, verification, and argument, just as the natural and physical sciences have relied on intuition and creativity.

We need a history and vision of the humanities capacious enough to see them not as a particular method or set of disciplines but as a disposition, a way of engaging the world. What follows in subsequent posts is a series of short, polemical (in the best sense, I hope) steps toward such a history.