Quit Lit: Do the Humanities Need the University?


There’s a new genre taking shape on blogs, Twitter, and even in the pages of The London Review of Books: Quit Lit. Just last week, Marina Warner, a creative writing professor and member of the Man Booker prize committee, explained her decision to resign her position at the University of Essex. In “Why I Quit,” she describes the bureaucratic disciplines of England’s new Research Assessment Exercises, which tabulate and calculate academic labor with the efficiency and mindlessness usually reserved for an assembly plant (and a low-tech one at that).

In a scene she must have embellished by channeling Kafka U., Warner recounts a meeting with her new dean:

A Tariff of Expectations would be imposed across the university, with 17 targets to be met, and success in doing so assessed twice a year. I received mine from the executive dean for humanities. (I met her only once. She was appointed last year, a young lawyer specialising in housing. When I tried to talk to her about the history of the university, its hopes, its “radical innovation,” she didn’t want to know. I told her why I admired the place, why I felt in tune with Essex and its founding ideas. “That is all changing now,” she said quickly. “That is over.”) My “workload allocation,” which she would “instruct” my head of department to implement, was impossible to reconcile with the commitments which I had been encouraged—urged—to accept.

Confused but, more deeply, defeated by this new regime, Warner resigned. But she continued her work for the Man Booker Prize committee, which, as it turns out, has proven rather clarifying.

Among the scores of novels I am reading for the Man Booker International are many Chinese novels, and the world of Chinese communist corporatism, as ferociously depicted by their authors, keeps reminding me of higher education here, where enforcers rush to carry out the latest orders from their chiefs in an ecstasy of obedience to ideological principles which they do not seem to have examined, let alone discussed with the people they order to follow them, whom they cashier when they won’t knuckle under.

As a genre Quit Lit has a few organizing features. Its form tends to be personal and aggrieved. The university, like those vague but all-powerful institutions in Kafka’s texts, has been overtaken by an alien, usually bureaucratic-statist-inhumane power. And its content tends to be not just about the decline of the university but also about the impending demise of the humanities. By turning universities into vocational schools, we are robbing our children of humanistic forms of thought and the good that ensues. (If scientists wrote prose like humanists, maybe they would be writing about the end of the university and the collapse of science. NPR had a go at Quit Lit this past week in their series on the dramatic cuts in basic science funding and the effects they are having on future generations of scientists.)

As with all literary genres, Quit Lit has its predecessors. Before there were Rebecca Schuman and NeinQuarterly’s Eric Jarosinski, there was another German scholar experimenting in the genre, Friedrich Nietzsche. In 1872, just three years after he landed his first, and only, professorship at the University of Basel without even having finished his dissertation, Nietzsche delivered a series of lectures, On the Future of Our Educational Institutions, in the city museum. Before crowds of more than 300 people, Nietzsche staged a dialogue on the future of German universities and culture between two young students and a cantankerous old philosopher and his slow-witted but earnest assistant.

The grousing philosopher lamented the decline of universities into state-sponsored factories that produced pliant citizens and mindless, “castrated” scholars who cared not a bit for life. By the end of the lectures, it’s difficult to say whether Nietzsche thought there was a future at all for German universities. Nietzsche lasted a few more years in his position, resigning only when ill health forced him to. But he left an oeuvre that looked to the university and saw little but ruin.

As Nietzsche was writing, parts of the German university might not have been in decay, but they were in decline, the humanities in particular. Between 1841 and 1881, enrollment in philosophy, philology, and history within “philosophy faculties,” which comprised the core liberal arts fields, declined from 86.4 percent to 62.9 percent, whereas in mathematics and the natural sciences enrollments increased from 13.6 to 37.1 percent of all students matriculating at German universities. The mood among humanists was often such that they sounded quite a bit like the embattled literature professors of today. In academia, crisis is generally a matter of perception, and even in what now seems like a “golden age” for humanists, there was, in fact, a seismic shift for the humanities.

More recent forms of Quit Lit tend to lack a key feature of Nietzsche’s model, however. Nietzsche never conflated the humanities or humanistic inquiry with the university. For him, humanistic inquiry—and Nietzsche was deeply humanistic as his lifelong commitment to philology attests—transcended the institutional and historically particular shape of universities, which he saw as little more than extensions of a Prussian bureaucratic machine.

In what increasingly seems like a related genre, contemporary academics and intellectuals of all sorts have ostensibly been defending the humanities. But more often than not they actually defend certain forms of scholarship as they have come to be institutionalized in largely twentieth-century American research universities. Geoffrey Galt Harpham recently produced the most egregious but well-argued example of this tendency with The Humanities and the Dream of America. His basic thesis is that the humanities as they are now practiced were an invention of post–World War II American research universities. Similarly, Peter Brooks’s edited collection The Humanities and Public Life, with its focus on disciplines, scholarship, and the imperatives of the university, inadvertently makes the same move. Both conflate the humanities with their departmental and institutional shapes in universities.

In the measured “yes but” prose of academic speak, Patrícia Vieira gives this spirit of conflation ethical shape in a review entitled “What are the Humanities For?”:

Debates about the “future of the humanities” frequently revolve around the suspicion that the humanities might not have one. Yet despite the direness of this anxiety—an anxiety especially personal for every academic worried about professional choices or mortgage payments—conversations on the topic are often dull, long-faced affairs. Every professor has sat through one or another of these depressing discussions. The conversation proceeds according to a familiar set of pieces: there are passionate apologias of work in philosophy, literature, history, and the arts; veiled criticism of the anti-intellectualism of higher education administrators and society at large; and vague pledges to do more interdisciplinary research and extend a fraternal hand to the social and natural sciences, who remain largely unperturbed by this plight. The whole thing wraps up with the reassuring conviction that, if the humanities go down, they will do so in style (we study the arts, after all), and that truth is on our side, all folded in a fair dosage of indulgent self-pity.

Vieira can’t imagine the future of the humanities beyond the anxieties of professors and the failures of university administrators. All she can muster is a few gentle and inveterately academic admonitions for her authors:

Brooks’s and [Doris] Sommer’s [The Work of Art in the World: Civic Agency and Public Humanities] books coincide in their desire to persuade those skeptical about the importance of the arts and the humanities of their inherent worth. The volumes set out to prove that these disciplines play a crucial role in public life and that they are vital to contemporary culture. Brooks’s collection often falls short of this goal by sliding into fatalistic rhetoric about the doomed future of humanistic scholarship—the very discourse the book attempts to combat—all while ignoring some of the vibrant new research in the field. In contrast, Sommer is overconfident in the power of the arts to tackle thorny socioeconomic and political problems. Both the despondent and celebratory approaches are symptomatic of the beleaguered state of the field, forced to justify its existence based upon technocratic principles that demand immediate results and fast returns. The humanities are constantly compelled to demonstrate practical results or hopelessly admit to lacking a concrete and immediate function, straitjacketed into foreign modes of valuation lifted from the empirical sciences. Neither a dying set of disciplines nor a panacea for social ills, the humanities remain a central form of human enquiry, in that they shed light on and question the tacit assumptions upon which our societies are based, outline the history of these values, and identify alternatives to the status quo.

Despite her attempts to cast the humanities as a form of “human” inquiry, Vieira is writing about a beleaguered and exhausted profession. There are only professors and their disciplines here. And they both are trapped, as Nietzsche would say, in a “castrated” passive voice: “The humanities are compelled ….” There are no agents in this drama, just put-upon, passive professors.

I am not suggesting that we should give up on universities. Universities, especially modern research universities, have long helped sustain and cultivate the practices and virtues central to the humanities. But just as German universities were becoming international paradigms, emulated from Baltimore to Beijing, Nietzsche made a fateful diagnosis. Those practices and virtues could ossify and wither in the arcane and self-justifying bowels of the modern, bureaucratic university. “Human inquiry,” in contrast, would live on.

We may well benefit from an exercise in imagination. Could the humanities survive the collapse of the university? I think so.


Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”
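
To make the mechanism concrete, here is a minimal sketch, in Python, of how sentiment-weighted feed filtering might work. It is an illustration only, not the researchers’ or Facebook’s actual code; the helper names, word lists, and omission rate are all invented:

```python
# Minimal sketch of sentiment-weighted feed filtering (illustrative only;
# not Facebook's actual News Feed code). All names and values are invented.
import random

def sentiment_score(post_text):
    # Hypothetical stand-in for the study's actual sentiment classifier:
    # here, simply positive words minus negative words.
    positive = {"great", "happy", "love", "wonderful"}
    negative = {"sad", "terrible", "hate", "awful"}
    words = post_text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def filtered_feed(posts, condition, omission_rate=0.3):
    """Randomly withhold a share of positive or negative posts,
    depending on which experimental condition a user is assigned to."""
    feed = []
    for post in posts:
        score = sentiment_score(post)
        if condition == "reduce_positive" and score > 0 and random.random() < omission_rate:
            continue  # withhold a positive post
        if condition == "reduce_negative" and score < 0 and random.random() < omission_rate:
            continue  # withhold a negative post
        feed.append(post)
    return feed

posts = ["What a wonderful day", "Traffic was terrible", "Lunch was fine"]
print(filtered_feed(posts, condition="reduce_positive"))
```

The study’s claim was that even this kind of small, probabilistic withholding of posts, applied at scale, measurably shifted what users themselves went on to write.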

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital businesses with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order to better influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Be they the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways and to ends about which we are completely unaware and we have increasingly less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception: here and here.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight, they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.
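
A toy example may help make Weingart’s toggling between scales tangible. The sketch below is my own, not Weingart’s; it uses the networkx library and its built-in karate club graph as a stand-in for any social or citation network, computing aggregate measures before returning to a single individual:

```python
# A toy illustration of toggling between "distant" and "close" views of
# the same network, using the networkx library.
import networkx as nx

G = nx.karate_club_graph()  # stand-in for any social or citation network

# Distant reading: aggregate, macro-level measures.
print("nodes:", G.number_of_nodes())
print("density:", round(nx.density(G), 3))
centrality = nx.degree_centrality(G)

# Close reading: return to one individual inside the aggregate.
most_central = max(centrality, key=centrality.get)
print("most central node:", most_central)
print("its immediate neighbors:", sorted(G.neighbors(most_central)))
```

The aggregate numbers are the “distant reading”; listing one node’s neighbors is the reintroduction of the individual into the aggregate.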

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable, natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.

The Author in the Age of Digital Distribution


In the Amazon v. Hachette battle, authors seem to have assumed the role of largely passive onlookers, idling nervously on the sidelines until the two giants come to an agreement on the price of ebooks. On the Amazon front, the myth that the online retail giant is a warm, author-friendly company was dealt a further blow when J. K. Rowling became a casualty in the battle. As for Hachette, the determination to sell ebooks for $11.99 appears to be an increasingly quixotic stand against the tides of the modern literary marketplace.

But what if authors were able to gain greater agency in deciding where, how, and at what price their books were sold? What if authors didn’t need to agree to Amazon’s exclusive, one-size-fits-all royalty and sales agreement or sign away their rights to publishers? We might not be too far away from a shared open-source platform on which authors could sell their own work, cutting Amazon out of the equation and reducing the negotiating power of publishers. The recent protest letter signed by 900 authors shows that the time is ripe for a new, author-centric paradigm.

Amazon isn’t solely to blame for the plummeting prices assigned to books. The distribution of books has become so efficient that its profitability has been drastically reduced—the book market has undercut itself. At the same time, the appeal of social and visual media has contributed to the rather sluggish status of literary reading. In this media-saturated climate, the most successful authors are the ones who can ruthlessly grab and hold readers while triumphing across media. The Hunger Games, for instance, has a plot based on reality television and televised spectacle—a text that begs to be filmed. Authors have never faced so much competition in the form of free media, nor have they ever encountered such low barriers to distribution of their work. Rather than the death of the author, this ethos dictates the birth of scaled, income-based, and potentially author-driven digital pricing and distribution models.

We can hate Amazon as much as we want, but the company has turned publishing on its head. Although most self-published authors earn very little, there are now few barriers to entry and authors can earn, according to Amazon Kindle Direct Publishing, “up to seventy percent in royalties.” Yet, Amazon offers only an illusion of authorial control. In the background, Amazon programmers ensure that the “results generated by Amazon’s search engine are partly determined by promotional fees.” Furthermore, Amazon’s Kindle Direct Publishing agreement restricts authors from selling their digital books anywhere else.

However, many writers and artists are bucking Amazon’s monopoly. Tech writer Gwen Bell has been selling her writing directly from her website for years. Deadmau5 is the latest artist to go where Louis C.K. in 2011 and Stephen King back in 2000 went before: He announced on Reddit his intention to release his music through his own digital subscription service with different price points. Even the reclusive comic book artist Alan Moore has raised more than £60,000 through Kickstarter and proclaimed that crowdfunding will shape the future. The ability to demonstrate the monetary demand for a book even before it is published puts a good deal of power in the hands of authors, who may be able to bargain harder to control their own rights.

Open-source, distributed peer-to-peer networks and free software can easily link together authors’ individual sites and offerings. In an author-centric model, editorial work, marketing, and the stamp of the respected publisher would be secondary to the agency and established fanbase of individual authors, who would boast greater control of their works and proceeds. Rather than publishers and Amazon’s search algorithms acting as the filters, listings could be based entirely on user ratings in a manner similar to sites such as Reddit and Buzzfeed. In fact, we could even take this direct-sales model a step further and imagine a future in which books are crowd-edited or collaboratively written, with the author changing the text in response to readers’ comments.
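
As a rough sketch of what ratings-only listings might look like, consider the snippet below. It is a loose simplification in the spirit of Reddit-style “hot” ranking, not Reddit’s or Buzzfeed’s actual algorithm, and the titles and vote counts are invented:

```python
# A rough, invented sketch of ratings-only listing: newer, better-rated
# titles float to the top; no publisher or search-placement fees enter
# the score.
import math
import time

def hot_score(upvotes, downvotes, posted_at, now=None):
    now = now or time.time()
    net = upvotes - downvotes
    order = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_hours = (now - posted_at) / 3600
    return sign * order - age_hours / 12  # older titles decay steadily

books = [
    {"title": "Debut novella", "up": 40, "down": 5, "posted": time.time() - 6 * 3600},
    {"title": "Backlist favorite", "up": 900, "down": 50, "posted": time.time() - 72 * 3600},
]
for b in sorted(books, key=lambda b: hot_score(b["up"], b["down"], b["posted"]), reverse=True):
    print(b["title"])
```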

Whether a work is distributed by a company or an artist, pricing models must be adjusted to the reality of digital distribution. More specifically, each product should have a variety of price points. For example, I live in Morocco, a “developing” country, and in my city, Rabat, there is an enormous market for pirated DVDs. One can walk through the old market and buy any movie or TV series for 6 dirhams, or about 73 cents. That price is appropriate for a country in which most citizens can’t afford to buy a full-priced DVD or go to the movie theater. If movie distribution companies or individual artists adjusted their pricing models, then they could compete in and capture developing markets. We have entered a Brave New World in which one price cannot serve all, and sales points need to be calculated and contextually negotiated. And even beyond flexible price points, there is the possibility of ongoing “public patronage” in which the public “subscribes” to an individual artist. Surely, it is only a matter of time before sites like Gittip, which allows users to donate a minimum of 25 cents a week to members of the technorati, expand to cover authors and artists.
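
A tiered, region-aware pricing model could be as simple as a lookup table keyed to local purchasing power. The sketch below is purely illustrative: the price points and the exchange rate of roughly 8.2 dirhams to the dollar are assumptions, chosen to line up with the 6-dirham (about 73-cent) street price mentioned above:

```python
# A hedged sketch of region-aware price points. The numbers are
# illustrative assumptions, not anyone's actual catalog prices.
PRICE_POINTS_USD = {
    "US": 9.99,
    "UK": 8.99,
    "MA": 0.99,   # Morocco: priced against the 6 MAD (~$0.73) pirated copy
    "default": 4.99,
}

def local_price(country_code):
    """Return the list price for a buyer's country, falling back to a default."""
    return PRICE_POINTS_USD.get(country_code, PRICE_POINTS_USD["default"])

print(local_price("MA"))   # 0.99
print(round(6 / 8.2, 2))   # ~0.73: the pirated-DVD benchmark, at ~8.2 MAD/USD
```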

When, more than three quarters of a century ago, social critic Walter Benjamin wrote The Work of Art in the Age of Mechanical Reproduction, he claimed that reproducing a work would make it lose its uniqueness and context within an artistic tradition, its “aura.” Whether a digitally reproduced work of art or writing lacks uniqueness and resonance is perhaps debatable, but what is clear is that the process of digital distribution has generated new opportunities for the creator and her works. Concerning the question of artistic tradition, the internet and digital distribution have brought us even closer to artists and the context of their work—we have access to their daily lives through Twitter, and we can often Tweet questions to them or participate in a Reddit Ask Me Anything thread. While companies and managers can do a lot to position new talent, many rising stars are crowd-propelled in a way that was unimaginable a decade ago. We are in an age in which someone like pop singer Rebecca Black can be transformed into a celebrity, praised, attacked, and then abandoned within a few months. In fact, reality television and celebrity gossip blogs are opposite sides of the same impulse—to elevate reality and pull down celebrity until there is no distinction. Just as Benjamin foresaw, “At any moment the reader is ready to turn into a writer,” or the patron into an artist.

Authors no longer need publishers to distribute their work, to gauge readers’ interest, or to build a fan base, and many authors have already decided that they don’t need Amazon. This is not to say that authors don’t need publishers or a strong online distribution platform. Rather, the few authors who can guarantee their sales numbers have a lot more power when it comes to releasing or holding onto their rights. One example is Hugh Howey, a self-published author who turned down a seven-figure deal with Simon and Schuster in order to retain control of his digital rights (he instead “settled” for six figures). In this age of increased competition and flexible pricing, authors might negotiate more intensely with their readers and their publishers to maintain control over the pricing and rights for their work.

But Hugh Howey might well be the exception. What if, as Scott Turow fears, authors cannot adjust to the challenges of online distribution and piracy, leaving us with a system that rewards a writerly 1 percent of a few winners and lots of losers? One possible consequence is that idealistic, imaginative, and socially engaged members of our generation will feel compelled to make a more direct and practical impact on the world—rather than writing about social inequality, they might be “forced” to take jobs as policy-makers, for example. In fact, with the collaborative and interdisciplinary mentality behind many crowdfunded projects, such possibilities have already emerged. Art expressing itself in the world: That might not be such a bad outcome, after all.

Sarah Ruth Jacobs is a founding editor of the open-access Journal of Interactive Technology and Pedagogy and a doctoral candidate in American literature at the Graduate Center of the City University of New York.

Photograph: Wikipedia Commons

Twitter, Racism, and the “Safe” Campus

Anyone following journalistic coverage of academia lately is likely to have heard of the divisive case of the “de-hiring” of Steven Salaita at my home institution, the University of Illinois. Reports have appeared in the New York Times and the Chronicle of Higher Education, although blogs have offered the more substantive coverage. I’ll leave it to other sites to relate the longer story, but the short version is that in early August, Chancellor Phyllis Wise notified Indigenous Studies scholar Steven Salaita that despite his accepting an offer of a tenured position in American Indian Studies last year as the result of a search, she would not be recommending his hire for final approval by the Board of Trustees.

Such approval is typically a pro forma step in the hiring process, as demonstrated by the fact that it usually happens after the start of a new faculty member’s first semester of employment. In this case, however, the Chancellor informed Salaita via letter that she felt the Board was “unlikely” to approve his hire. More recently, she has stated that she felt the “humane” thing to do, knowing the Board’s disapproval, was to end his candidacy before the start of the semester, rather than let him move to Illinois and begin teaching, only to be retroactively fired after the Board’s September meeting.

Journalistic coverage has revealed that the Chancellor’s decision came on the heels of a flurry of correspondence between campus leaders, trustees, and regular donors. At issue in these exchanges is the question of whether Salaita’s strident tweets on the recent Gaza attacks revealed the scholar of Palestine and Indigenous Studies as anti-Semitic, or at least “uncivil” and therefore unfit to teach. With appeals to protecting campus “civility,” the Chancellor decided to rescind Salaita’s offer, leaving the professor jobless, his students without an instructor, and a campus in turmoil over the implications for academic freedom and shared governance.

The University of Illinois is currently as torn apart as I’ve seen it (and that’s saying a lot given our past scandals). Thousands of faculty elsewhere have boycotted the institution, and there are some signs that we might be on the road to censure by professional organizations. Boycotts were perhaps especially probable given that Salaita himself is one of the leaders of the Boycott, Divestment and Sanctions Movement against Israel. (Indeed, his efforts on that work earned him the attention of bloggers who likely first alerted campus leaders to his Twitter feed.)

There are many dimensions of this story to sort through—donor influence, political corruption, Zionism, racism, and definitions of free speech, for starters—and it also comes at a time when many faculty are moving to unionize. Somewhere near the center is the matter of social media, and how we use it.

Twitter drew attention on this campus last year, when students directed racist tweets at Chancellor Wise in response to a snow day decision and the campus answered with a new “Inclusive Illinois” campaign. What is the proper context for even citing a tweet, let alone understanding the import or effects of 140 characters delivered as part of a longer, partially public, partially private exchange?

Many are approaching this matter through lengthy legal arguments and examination of precedents. This may be exactly what Salaita needs to claim wrongdoing in court, or what the faculty need to claim breach of shared governance. When I sat down to write the Trustees, however, I wanted to boil the matter down still more.

The most generous description of why the Chancellor and Trustees de-hired Salaita is that they saw a threat to the well-being of the campus as a space supportive of inclusion and diversity. I know of many who are very surprised to find themselves, as supporters of the Chancellor’s decision, on the receiving end of accusations of racism. This whole episode is but the latest in a longer history of inadequate responses to white racism on this campus—particularly in light of our racist mascot “Chief” Illiniwek, banned by the NCAA and the Chancellor, but kept alive by fans and continually present on campus.

My letter to the Trustees thus took the form of a slideshow (one of my favorite forms), six slides to explain how their efforts to make the campus “safe” in fact contribute to the larger problems of racism. Below is my “PowerPoint on power.”

 

Six Questions on the Salaita Case from Complex Fields

Kevin Hamilton is Associate Professor of Art and Design at the University of Illinois, Urbana-Champaign. Along with regular Infernal Machine contributor Ned O’Gorman, he is currently at work on a history of film in American nuclear weapons development.

Apple Watch and the Quantified Self

Today Apple unveiled its latest technological creation, the Apple Watch, a wearable computer that tracks not only time but your every step, heartbeat, and calorie. With its latest product, Apple joins a growing field of devices and apps, such as Fitbit, Basis, and My Fitness Pal, that track and record our activities and biostatistics. Given Apple’s commercial influence, the Apple Watch may well turn the nascent Quantified Self (QS) movement into a cultural mainstay delivering “self knowledge through numbers.”

Apple Watch

Most QS practices track health-related activities such as calorie intake, exercise, and sleep patterns, but they are increasingly used to document and track experiences of grief, exploration, and productivity. And tracking apps and devices are even making their way into unexpected areas of life experience. Attempts to measure the soul, data point by data point, for example, are increasingly common. Just last January a Menlo Park pastor teamed up with a University of Connecticut sociologist to create SoulPulse, which, as Casey N. Cep explains, is

a technology project that captures real-time data on the spirituality of Americans. SoulPulse attempts to quantify the soul, an unbodied version of what FitBit, the exercise-tracking device, has done for the body. After filling in a brief intake survey on your age, race, ethnicity, education, income, and religious affiliation, SoulPulse contacts you twice a day with questions about your physical health, spiritual disciplines, and religious experiences. Each of the surveys takes less than five minutes to complete.

SoulPulse encourages users to learn about their “spirituality” through the power of big data and digital automation. This may sound crazy, but what’s the difference between tracking your daily prayer life with an app and doing so with another set of repeatable instructions, such as the Benedictine Rule and its set of daily readings and reminders to ponder God?

Many aspects of the QS movement are anything but new. Western cultures have long maintained practices that document behaviors and experiences in order to discipline the self. Capitalism and quantifying the self have been intimately linked for some time. Early accounting practices allowed businessmen to understand the consequences of their behavior so that it could be modified in the future. Merchants developed account logs that let them track the results of their business transactions: Perhaps they had purchased too much grain and it spoiled before it could be sold. In the following year, the same merchant could alter his practice based on this cataloged information. And Frederick W. Taylor’s scientific management theories relied on precise measurements of workers’ efficiency.

And more in the tradition of St. Benedict, people have long kept track of their spiritual lives. Benjamin Franklin dutifully recorded his success in adhering to a list of thirteen virtues each day. Diaries and journals have long been witness not just to bad poetry but to detailed lists of eating and sleeping habits. Weight Watchers, founded in 1963, turned such practices into a business with its point system.

Despite such similarities, tracking devices such as Apple Watch are not the same as eighteenth-century diaries. The former have the potential to revolutionize the health sector and facilitate better care, but what happens when they don’t just give away our desires on Facebook (I like this!) but open up a one-way data stream on our bodies? How long will it take for all that personal data to make its way to our insurance companies? (The now-common annual biometric screenings will seem quaint by comparison.)

Self-reflection and personal development are broad cultural values. But what happens to us when we focus on aspects of ourselves that are easily recorded and converted into numbers? QS enthusiasts advocate for the expansion of tracking devices from the private sphere into the work environment, where they might provide insights on employee selection, promotion, and productivity. How will tracking social and personal behavior, such as how many times one smiles during the day, alter work environments and those who inhabit them?

Digital practices and techniques for tracking and disciplining the self are different from their analogue and print predecessors for several reasons. First, what they can track has expanded. Benjamin Franklin most likely didn’t know the rate of his perspiration. Second, the precision with which data is measured and recorded is continually increasing. Similarly, tracking devices and apps are increasingly frictionless: They do their job with minimal interruption and effort on the part of the user. Finally, the digital format of the data represents a marked difference from records of the past. Many of these tracking devices easily connect to apps and programs that analyze the data, dictating to the individual a pre-programmed assessment of success or failure. The digital nature of the information also makes it easily available and transferable.
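
That “pre-programmed assessment of success or failure” can be pictured as nothing more than a set of thresholds applied to whatever the device records. The sketch below is hypothetical; the goals and field names are invented, not any vendor’s actual defaults:

```python
# A minimal sketch of a pre-programmed assessment: the device records,
# the program decides what counts as success. Thresholds are invented.
DAILY_GOALS = {"steps": 10000, "sleep_hours": 7, "active_minutes": 30}

def assess(day_record):
    """Compare one day's tracked values against fixed goals."""
    report = {}
    for metric, goal in DAILY_GOALS.items():
        value = day_record.get(metric, 0)
        report[metric] = "goal met" if value >= goal else f"{goal - value} short"
    return report

print(assess({"steps": 8200, "sleep_hours": 7.5, "active_minutes": 20}))
```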

These new developments and the manufacture and dissemination of these technologies and apps through popular and trusted brands such as Apple are likely to expand the degree to which individuals come to imagine themselves, their bodies, and their habits through and as numbers. As we continue into our quantified future, will these new digital practices alter what it means to be a good person, a successful person, or an efficient person? Will we be able to juke the numbers? Just because the technology is intended to track behavior and facilitate modification of that behavior doesn’t mean that it won’t be put to other purposes. What will we make of our new digital tracking practices and the self that we come to know through numbers?

Claire Maiers is a graduate student in the Department of Sociology at the University of Virginia.

Living With Ferguson

In April 1964, Malcolm X gave what would be one of the best-known speeches of the civil rights era, “The Ballot or the Bullet.” Speaking of the plight of blacks in America, Malcolm X asked an audience at Cory Methodist Church in Cleveland, Ohio, “Where do we go from here? In my little humble way of understanding it, it points either to the ballot or the bullet.”

I have thought a lot about Malcolm X’s proposition since Michael Brown was shot dead in Ferguson, Missouri, not far from where I grew up.

Malcolm X in 1964 by Herman Hiller (photo from Wikimedia Commons)

To this day, Malcolm X’s speech is all too often described as a call to violence and little more. But it was actually one of the more coolly pragmatic and tactical arguments of the era. It was as much a call to “the ballot” as it was to “the bullet.” Indeed, based on a simple word count, Malcolm X seemed more interested in the power of the ballot in 1964 than he was in the power of the bullet. Unlike the bullet, the ballot offered a straightforward electoral strategy:

These 22 million victims are waking up. Their eyes are coming open. They’re beginning to see what they used to only look at. They’re becoming politically mature. They are realizing that there are new political trends from coast to coast. As they see these new political trends, it’s possible for them to see that every time there’s an election the races are so close that they have to have a recount. They had to recount in Massachusetts to see who was going to be governor, it was so close. It was the same way in Rhode Island, in Minnesota, and in many other parts of the country. And the same with Kennedy and Nixon when they ran for president. It was so close they had to count all over again. Well, what does this mean? It means that when white people are evenly divided, and black people have a bloc of votes of their own, it is left up to them to determine who’s going to sit in the White House and who’s going to be in the dog house.

And so “The Ballot or the Bullet” advocated hitting, straight on, the racist “American system” (Malcolm X’s just phrase) through traditional political means.

To be sure, if the ballot failed, the bullet, he suggested, would be necessary—but as the political equivalent of the ballot, rather than its extreme alternative. In Malcolm X’s speech, both the ballot and the bullet are blunt political technologies capable of changing the “American system.” “The ballot or the bullet. If you’re afraid to use an expression like that, you should get out of the country.”

Of course, there is irony here, but Malcolm X was on to something. Part of American optimism is this belief that the “system” can be changed with the right “inputs,” be they votes or violence. In the recent Ferguson protests, we saw this play out. A story on National Public Radio about white residents of Ferguson featured a man complaining, “There are so many people here that don’t vote, but they think they have the rights to everything in the world. If you don’t vote, you aren’t given all the gifts of our society.” There were also more than a few protestors at Ferguson ready to make hell with guns and fire and so take on the American system of white supremacy.

But here’s the problem with the ballot and the bullet as the only two options in a struggle for justice: What we call “racism,” what we call “oppression,” what we call “segregation,” what we call “white supremacy”—these are all systemic and historic injustices, the large-scale realities of the “American system.” As we are seeing once again in Ferguson, the use of force and the right to vote are part of an American system that has, to date, regularly made people of color second-class citizens, if citizens at all.

Despite an American optimism that encourages us to believe otherwise, neither the ballot nor the bullet is capable of targeting systemic and historic injustices. For both ballots and bullets are themselves components of the system, and—more broadly—of a systemic approach to political problem-solving that operates only in terms of inputs and outputs (where ballots and bullets are inputs supposedly capable of creating new outputs). In fact, in another irony, some white Americans would today agree with Malcolm X, tactically speaking. For some white Americans, “gun rights” have joined “voting rights” to represent the be all and end all of American democracy. If you don’t like it, vote (making sure you show your ID). If your vote is ineffective, go buy a gun.

The problem, however, is not just the ways electoral politics and force are implicated in systemic injustices. It is also one of scale and scope: In approaching systemic and historic injustice, we approach the wrongs of what St. Augustine long ago, in a helpful distinction, referred to as the ills of the  “world.” By the “world,” Augustine meant not “inevitable” or “given,” but rather that accumulation over time and space of injustices—what we would today call systemic and historic injustices. The “civitas,” or polity, Augustine argued, lacks any direct political means by which to address injustices of such scope and scale. As a St. Louis friend recently lamented by email, “It is so disheartening to live in St. Louis, to be immersed in 24–7 coverage of the unrest and outrage in Ferguson, and yet to know that the structural and systemic problems that create the context for incidents like this seem so intractable.”

Malcolm X, in “The Ballot or the Bullet” at least, does not seem all that far off from Augustine’s position. Although he calls for the ballot or the bullet, he does so with pessimism and irony. The sort of equality that seems most within reach is that old “American” one of life for life, tooth for tooth, eye for eye: “It’ll be Molotov cocktails this month, hand grenades next month, and something else next month. It’ll be ballots, or it’ll be bullets. It’ll be liberty, or it’ll be death. The only difference about this kind of death — it’ll be reciprocal.”

The gap between local political solutions and the systemic injustice of history is one that American optimism tends to ignore (to its peril). I have no solutions for bridging the gap, at least not directly. One of the most common solutions in the imaginary of power-hawks and revolutionaries alike has been catastrophe, by which — supposedly — history and system can be overturned and we can begin anew. But that’s a fool’s errand. Catastrophe is not a recipe for a more just order, but instead for the reordering of injustices.

A protestor in Ferguson, August 15, 2014 (photo from Wikimedia Commons)

But here’s a hopeful paradox to fend off the pessimism at least a little: The ballot and the bullet are component parts of systems of injustice, and yet the ballot and bullet have become, in certain moments, representative acts of public judgment upon the very same systems of injustice in which they participate (and quite often without any intention on the part of the “voter” or “shooter.”)

Ferguson is a case in point. What Officer Darren Wilson did to Michael Brown itself became a representative act of public judgment against blacks on behalf of a white policing system. The arrival of protestors on the streets, who named and decried white supremacy and police violence, also became a representative act of public judgment. In turn, the arrival of police in armored vehicles with combat weapons, representing the policing system not only as predominantly white but also as deeply militarized, became both an act of public judgment on behalf of the system and itself subject to other acts of public judgment . . . not the least in Trooper Ron Johnson’s walking with the protestors, in marked contrast to the tactics of the local police. Far from a mere police “calming tactic,” Johnson’s walk was above all an act of public judgment upon a predominantly white, militarized policing system.

Each of these events was or became a public act, and each an act of judgment. What made Ferguson so remarkable — and what makes it remarkable still — is the succession of such public acts of judgment, seemingly running out of control. This is also what made Ferguson so frightening to so many. But they were, I think, less “out of control” than they were regulated by a system of publicity (a word we need to recover from the publicists) operating independently of (though certainly not entirely apart from) the “Inputs” of America’s racially tilted system. That is, these representative acts of public judgment were meaningful in ways that transcend the unjust system. The world, as the protestor in the photograph above declared, was watching, and that was significant.

Although I am loath to get into the well-worn business of contrasting Malcolm X with Martin Luther King, Jr., I do think that the particular political virtue of nonviolent tactics by MLK and others was found, and still is found, in their creation of situations conducive to these representative acts of public judgment (which need not always be nonviolent). It is possible, within the sphere of publicity, to offer acts of judgment that “the system” cannot contain or control.

I do not know if the representative acts of public judgment that Ferguson gave us will have systemic effects for the better. That is, I don’t know if they will lead to structural and systemic changes that will mitigate injustices. There is good reason to remain pessimistic. Ferguson did, however, give us a rare public window into the historic racist injustices of the “American system,” and offered its judgments, judgments with which we will have to now live. And living with such judgments, I think, can only be for the better.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

Algorithms Rule


The Internet killed expertise. Or, so claims public-policy specialist Tom Nichols in a recent essay that laments the demise of a cultural fixture.

I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers—in other words, between those of any achievement in an area and those with none at all. By this, I do not mean the death of actual expertise, the knowledge of specific things that sets some people apart from others in various areas. There will always be doctors, lawyers, engineers, and other specialists in various fields. Rather, what I fear has died is any acknowledgment of expertise as anything that should alter our thoughts or change the way we live.

For Nichols, technologies such as Google and Wikipedia betoken a loss of epistemic authority, that is, what counts as authoritative knowledge in our digital age. What legitimates one form of knowledge over another? Which sources of knowledge are to be trusted? Which not? What practices and scholarly habits, techniques, and institutions render knowledge authoritative or worthy? In our digital age, these questions seem almost quaint, throwbacks to a less free and democratic age.

Nichols isn’t alone in divining the collapse of expertise and authority in our digital age. But other, more sanguine observers celebrate it and the liberating promises of digital technologies. “Neither the Internet nor the WWW,” writes Cathy Davidson, a professor at the City University of New York Graduate Center, “has a center, an authority, a hierarchy, or even much of a filter on the largest structural level.” With the advent of digitally-supported learning, “conventional modes of authority break down.” Digital technologies will liberate us from the constraints of traditional forms of epistemic authority. There will be no filters in the digital future to come.

Davidson’s messianic hopes as well as Nichols’s cultural despair mistakenly suppose that there can somehow be a vacuum of epistemic authority. But, in truth, forms and functions of epistemic authority, be they the disciplinary order of the research university or Wikipedia’s fundamental principles or “Five Pillars,” are themselves filtering technologies, helping us to orient ourselves amid a surfeit of information. They help us discern and attend to what is worthwhile. Google searches point us in the direction of some resources and not others. Technologies are normative, evaluative structures to make information accessible, manageable, and, ultimately, meaningful. It is not a question, then, of the presence or absence of epistemic authority; it is about better or worse forms of epistemic authority. Expertise and cultural authority are still with us. But now it might be more spectral, embodied not in the university don but in the black-boxed algorithm.

If the Internet and the World Wide Web lack, as Davidson puts it, a “centralized authority” and a “filter,” they do so only on the most abstract level. Our daily interactions with the Web are made possible by a host of technological constraints and filters. People access and engage information through technologies that allow them to select, filter, and delimit. Web browsers, hyperlinks, blogs, online newspapers, and the computational algorithms of Facebook, Google, and financial institutions help us turn terabytes of data into something more scalable, that is, something that can be made useful to an embodied person. These now-ubiquitous technologies help us to sort, to Google a needle in the haystack—and in so doing, they have become central media for the way we experience the world.

We are living in an age of algorithmic authority. Algorithms filter our music choices, track our purchasing decisions, find our airline tickets, and help us withdraw money from an ATM. They are ubiquitous. They are forming who we are and who we want to become. But we are only beginning to ask about our algorithmic selves. How can we learn, from the outside, about these algorithms and the ways they increasingly organize our very selves?
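
To see how quietly such authority operates, consider a toy ranking function of my own devising, not any company’s. A handful of hidden weights decides what surfaces first, and adjusting them silently changes what a user sees:

```python
# A toy, invented ranking function: a few hidden weights decide what
# surfaces first; nothing in the output reveals them.
WEIGHTS = {"relevance": 0.5, "recency": 0.2, "engagement": 0.3}

def score(item):
    """Weighted sum of an item's (already normalized) signals."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

items = [
    {"title": "A", "relevance": 0.9, "recency": 0.1, "engagement": 0.2},
    {"title": "B", "relevance": 0.4, "recency": 0.9, "engagement": 0.9},
]
for item in sorted(items, key=score, reverse=True):
    print(item["title"], round(score(item), 2))
```

Nothing in the ranked output discloses the weights, which is precisely what makes this form of authority latent rather than absent.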

Authority hasn’t vanished. It has just assumed different, more latent forms. As Evgeny Morozov puts it,

The reason to fear Facebook and its ilk is not that they violate our privacy. It is that they define the parameters of the grey and mostly invisible technological infrastructure that shapes our identity.

We can’t free ourselves from our technologies; digital detoxes are palliative exercises. But we can try to get to know our new algorithmic selves.

Credit: Photo montage with algorithm and Caspar David Friedrich’s Wanderer Above the Sea of Fog (1818)

Deans Care About Books

“Doesn’t Matt care about publishing books anymore?” That’s what an editor of a well-established humanities journal recently asked one of my press colleagues. The editor had just returned from a meeting with me, where she had expressed interest in publishing “curated” collections of articles from back issues of the journal. It struck me as a wonderful idea.

 

“Why make these print books?” I asked. “What do you mean?” she replied. I explained that the articles already existed in digital form in Project MUSE and could easily be collected there on the same page. Moreover, anyone working for a university press knows that most people read journal articles online, not in print. (Many institutions, in fact, only subscribe to the digital editions of scholarly journals.) Why not simply bundle the digital versions of the articles and publish them together online?

My explanation didn’t relieve her puzzlement. She explained that the editor of the collections and the authors of the articles wouldn’t get promotion and tenure credit if the collections were published only online: “Deans care about books.”

This exchange reveals a troubling and unsustainable view, shared by scholars and deans, of the function of university presses in the world of scholarship. It has two elements. First, university presses have a responsibility to credentialize scholars. Second, presses discharge this responsibility by publishing scholarship in the form of print books.

For the sake of argument, I will leave the first assumption unquestioned. (For a compelling argument against the practice of deans relying on university presses to evaluate scholars, see Lindsay Waters’s Enemies of Promise: Publishing, Perishing, and the Eclipse of Scholarship.) It’s the second that concerns me here. University presses largely accept their credentialing function in the scholarly world. The economic realities of publishing scholarship, however, lead most of them to reject the idea that print is the proper, much less the only, format for all scholarship that makes it through the review process. By clinging to this second idea—the idea that humanities and humanistic social science scholarship must take the form of a print book—scholars and deans threaten the future of university presses and erode their ability to evaluate and distribute high-quality scholarship.

The only sure ticket to tenure and promotion is having your scholarship published by a university press. The reason is that having a university press book on one’s CV serves a signaling function. It tells deans and fellow scholars that your work withstood the rigors of peer review and the evaluation by press editors and faculty boards. This, in turn, signals that your work is good, that, as a scholar, you do your job well. There’s an alignment here between the interests of university presses and the interests of university deans. The presses want to publish high-quality scholarship, and the deans want to employ and promote scholars who are good at their jobs. The process required to produce the first provides evidence for the second.

These interests align, however, only up to a point. The decisions that track the scholarly quality of a project—those involved in discharging the credentialing responsibility—are not the only decisions a press makes when it comes to publishing a project. The work of determining the quality of scholarship and signaling this quality is done when a press issues a contract for a project and stands behind the decision by publishing it. In between those two moments, a press must make many production decisions—about the size of a print run, whether a book should be sold print-on-demand, whether a book should be cloth or paperback, whether images should be in color—that have nothing to do with the project’s quality. These decisions are responsive to practical matters such as how much it will cost to produce the book, how many copies a press thinks it can sell, who the press thinks the audience is, or even (perhaps especially) how well the press gets along with Jeff Bezos. They’re about selling books, not evaluating them. (For a good illustration of the gap between sales and quality, consider: the press that published Nietzsche for Dummies probably sold a lot of books.)

Chief among production decisions is the one about publication format, whether to publish a project in print (cloth or paperback) or digitally (or, more commonly, both). To see the significance of this decision, consider what would have happened had the editor followed the route I suggested and published her collections online. It would involve asking the digital publisher to put all of the (already copyedited, designed, typeset) selected articles on the same webpage with a new introduction by the editor. (Obviously it would be a bit more complicated than that, but you get my point.)

Compare this to what would happen if they were published in print. The articles would have to be downloaded and printed out. Then, production staff would scan the original articles to turn them into a manuscript. Copy editors, designers, printers, distributors, and all of the other people involved in producing these collections would then do their parts. All of this to make the content available in a format nobody seems to want so that somebody can put another line on her CV that will be credited by her dean. (Not all decisions about how to publish a work are like this one, of course. This case is unique in that the material already existed digitally. But the point is that, even in such a case, the push for print remains.)

In case it needs spelling out, the enormous difference between these two paths to publication corresponds to a very different price tag for the press. This is why, as I said at the outset, by clinging to the outdated notion that scholarship must be published in print, deans and scholars hurt university presses. They tie the legitimate responsibility of determining and distributing quality scholarship to a costly, inefficient, inflexible, and unsustainable publishing model. By insisting that print is a necessary condition for scholarly quality, deans and scholars make it more difficult for university presses to stay in business, thereby making it more difficult for them to publish print books! At the same time, scholars insist on having their own work published in print while they increasingly engage the work of others online. And deans demand that scholars publish print books while not giving their libraries enough funds to buy them. So they insist on print and undermine the demand for it.

So, to answer the editor’s question: I do care about publishing books. Like all scholars and publishers, I love print books and deeply appreciate them as objects (as a quick glance at my office would confirm). I am not advocating for the end of print. But I also care about the future of scholarly publishing. In order to help secure a healthy future for both university presses and print books, scholars and deans must embrace the digital as a legitimate and credit-worthy format for quality scholarship.

Matthew McAdam is Humanities Editor at Johns Hopkins University Press.

 

 

Twilight of an Idol: College, Purpose, and Zombies

In “Don’t Send Your Kid to the Ivy League,” William Deresiewicz lambasts a pitiful American elite education system that “manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity and a stunted sense of purpose.” The entire system of elite education, he argues, reproduces an American upper-middle class and its distorted values, aspirations, and entitlement. Stanford and Swarthmore “are turning our kids into zombies.”

With his recent article in the New Republic and his forthcoming book (Excellent Sheep: The Miseducation of the American Elite and The Way to a Meaningful Life, Free Press), Deresiewicz is one of a growing number of pundits lamenting the loss of an American institution: college. “Is the only purpose of an education,” sneers Deresiewicz, “to enable you to get a job? What, in short, is college for?”

Andrew Delbanco recently asked the same question in College: What It Was, Is, and Should Be. So, too, did Mark Edmundson in Why Teach? As journalists, business professors, and university trustees chant “disruption,” college professors and their public-intellectual kin seem intent on defending their institutions and vocations with appeals to a collegiate ideal. In response to declining state support for higher education and increasing public skepticism about its economic value, college is making a return. But what are Deresiewicz, Delbanco, and Edmundson, not to mention the countless faculty committees busy reimagining undergraduate education, talking about when they conjure up the “college experience”?

Princeton University’s Firestone Library and statue of John Witherspoon, sixth president and signer of the Declaration of Independence; Emile Wamsteker/Bloomberg via Getty Images

They are, I think, mostly talking about self-transformation. College may teach you how to think and even give you some skills, but ultimately, as Deresiewicz puts it, college helps you build a self. College is a four-year respite, before an impending life of professionalism, for self-discovery. “Students are pressured and programmed, trained to live from task to task, relentlessly rehearsed and tested until winners are culled from the rest,” writes Delbanco, and so they scarcely have time to practice the art of being in college, the art of “loafing.” Citing Walt Whitman, Delbanco describes college as a time when “I lean and loaf at my ease observing a spear of summer grass.” Similarly, Mark Edmundson writes of college as a “mythic force,” a “rite of passage.” In Why Teach?, Edmundson sees college, the English major, and teaching as opportunities for self-transformation and “soul making.” And this is an experience that Edmundson, Delbanco, and Deresiewicz want to democratize and make available to as many students as possible.

But University of Chicago undergraduate and Slate intern Osita Nwanevu isn’t buying it. In a response to Deresiewicz’s article, Nwanevu dismisses the entire notion that college is a singular opportunity for self-discovery.

Every ambitious student who believes that college is their opportunity to shape themselves will do whatever it takes to get into the very best, most exclusive school they can. When their experiences underwhelm, as many necessarily will, they will indeed leave college ‘anxious, timid, and lost,’ believing that they’ve missed out on a chance at intellectual development. Deresiewicz has simply traded careerism for another exalted goal, with similar results. [. . .] To believe that a college—Ivy or otherwise—can confer intellectual benefits in four years that you won’t be able to attain at some point over the course of the next 60 is to believe in magic.

What’s so special about four years of college? How did college come to be the defining experience of the American upper middle class? How did Harvard and Amherst, not to mention the liberal arts degree in English, come by their monopoly on an authentic self? Did Walt Whitman even go to college?

After the recent spate of books, articles, and faculty reports extolling and idealizing the transformative potential of a college experience, Nwanevu’s incredulity is refreshing. College has come to bear an impossible burden, both individually and socially. Its most confident advocates treat it like a stand-alone ethical resource, capable of funding and guiding the self-transformations of America’s elite. Deresiewicz laments contemporary college students’ lack of desire to do good or find the “higher meaning” of college buzzwords like leadership or service. And faculty, he claims, don’t have time for such meaningful pursuits; they’ve got research to do.

Deresiewicz is ultimately concerned about the ethical failures of American colleges. But he never mentions the particular ethical resources or traditions that make such self-transformation possible. And he never considers whether a transformation of the self is sufficient. Can such a collegiate celebration of the self resist the fragmenting and stultifying effects of the upper-middle-class American culture he decries—its consumerism, its anti-democratic tendencies, its solipsism? For Deresiewicz, college is less an institution devoted to a common end than it is a self-help retreat, replete with poetry classes and career services.

This is a common problem for the recent defenders of college. They invoke a collegiate ideal without considering the normative and ethical resources to which it used to be tied or the larger social ends that such an education was intended to serve. Perhaps inadvertently, Deresiewicz acknowledges this in a line of candor: “Religious colleges—even obscure, regional schools that no one has ever heard of on the coasts—often do a much better job” transforming selves. Until the end of the nineteenth century, American colleges such as Princeton, Yale, and Harvard continued to train Protestant clergy. They were explicitly religious institutions organized around particular ethical traditions. As those and many other former colleges became universities at the end of the nineteenth century, however, these once explicitly Christian institutions became generally nonsectarian Christian ones, devoted to broad, often vague public goods such as freedom, democracy, and economic and technological progress. The university, as University of Chicago president William Rainey Harper put it in 1899, was the “prophet” and “priest” of democracy, the keeper “of holy mysteries, of sacred and significant traditions.”

In the Harvard Report of 1945, General Education in a Free Society, some of the most respected scholars in the country acknowledged that American education was in “supreme need . . . of a unifying purpose and idea.” But religion wasn’t a possibility. “Given the American scene with its varieties of faith and even of unfaith,” Harvard faculty considered an explicitly religious basis for the undergraduate curriculum impossible.

Not much has changed since 1945. There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.

At best, a college education today might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well.

Humanities in the Face of Catastrophe

Earlier this month, my colleague Bethany Nowviskie asked a group of digital humanities scholars gathered in Lausanne, Switzerland, to consider their work—its shape, its fate, its ends—within the context of climate change and the possibility of our own extinction as a species. (How’s that for a dinner-time keynote? Bourbon, please.)

The premise of Nowviskie’s talk, “Digital Humanities in the Anthropocene,” was the fact of climate change and the irrevocable consequences that it could have for life on our planet. What would it mean, she asked, to be a humanities scholar in the anthropocene, a geological epoch defined by human impact on the natural world? What would it mean to practice the humanities within a geological scale of time?

Whatever the fate of the “anthropocene” as a term (its existence and even its inception are still being debated among geologists), the scientists, activists, and scholars who invoke it treat human activity and practices as inseparable from nature. Whether they intend to or not, they thereby challenge basic ideas about the human, culture, and agency that have sustained the humanities for centuries.

The very notion of the anthropocene, with its suggestion that humans could cause geological-level change, undermines, as Dipesh Chakrabarty puts it, the “age-old humanist distinction between natural history and human history.” From Vico’s famous claim that humans could know only what they have created, while nature remains God’s inscrutable work, to Kant’s understanding of cosmopolitan history as purposive action, Western thinkers have long distinguished between human and natural history. But granting humans a geological agency, however aggregative and delayed, undermines this distinction. What becomes of history and culture when they can no longer be thought of as simply human?

For one thing, we can better conceive of our cultural and historical labor as bound to the contingency, finitude, and flux of nature, and, thus, cultivate a more acute awareness of how extinguishable our labor can be. Here’s Nowviskie speaking to her digital humanities colleagues:

Tonight, I’ll ask you to take to heart the notion that, alongside the myriad joyful, playful scholarly, and intellectual concerns that motivate us in the digital humanities—or, rather, resting beneath them all, as a kind of substrate—there lies the seriousness of one core problem. The problem is that of extinction—of multiple extinctions; heart-breaking extinctions; boring, quotidian, barely-noticed extinctions—both the absences that echo through centuries, and the disposable erosions of our lossy everyday. We edit to guess at a poet’s papers, long since burned in the hearth. We scrape through stratigraphic layers of earth to uncover ways of life forgotten, and piece together potsherds to make our theories about them hold water. Some of us model how languages change over time, and train ourselves to read the hands that won’t be written anymore. Others promulgate standards to ward against isolation and loss. With great labor and attention, we migrate complex systems forward. We redesign our websites and our tools—or abandon them, or (more rarely) we consciously archive and shut them down. DHers [digital humanities scholars] peer with microscopes and macroscopes, looking into things we cannot see. And even while we delight in building the shiny and the new—and come to meetings like this to celebrate and share and advance that work—we know that someone, sooner or later, curates bits against our ruins.

Humanities labor represents, continues Nowviskie, a deeply human striving to communicate “across millennia” and “a hope against hope that we will leave material traces,” in stone, manuscript, print, or digital form.

This is where the strivings of humanities scholars and geologists of the anthropocene intersect. They both rummage about nature for traces of the human. And while for some, such ceaseless searching is driven by an indomitable humanism, a confidence in a human spirit that will always live on, others are haunted by the possibility of our extinction and the end, as Nowviskie puts it, “of so much worldly striving.” But those of us who are in search of a fragmented and sundered human are also motivated by this prospect of extinction.

When Nowviskie exhorts us to “dwell with extinction,” she echoes a humanistic disposition that has long been defined by the specter of loss. Since at least the studia humanitatis of the early modern period, scholars have practiced the humanities in anticipation of catastrophe. Theirs, however, was not a crisis of the guild, anxieties about budget cuts and declining humanities enrollments, but a fear of the radical and irreparable loss of the human record.

Many of the great encyclopedic works of the early modern European scholars were defined, as Ann Blair describes them, by the “stock-piling” of textual information intended to create a “treasury of material.” The early modern masterpieces of erudition—such as Conrad Gesner’s Bibliotheca universalis (1545) or Johann H. Alsted’s Encyclopaedia septem tomis distincta (1630)—were motivated not simply by information-lust but by a deeply cultural conception of the ends of scholarship, a desire to protect ancient learning and what humanists considered to be its integrity and authority. These early humanists, writes Blair, “hoped to safeguard the material they collected against a repetition of the traumatic loss of ancient learning of which they were keenly aware.” This loss had rendered Greek and Roman antiquity inaccessible until its gradual recovery in the fifteenth and sixteenth centuries. Humanist scholars saw encyclopedias and related reference works that collected ancient learning as guarantees that knowledge could be quickly reassembled should all books be lost again. They were busy collecting and curating because they feared another catastrophe.

Similarly, Denis Diderot described the eighteenth-century Encyclopédie as insurance against disaster:

The most glorious moment for a work of this sort would be that which might come immediately in the wake of some catastrophe so great as to suspend the progress of science, interrupt the labors of craftsmen, and plunge a portion of our hemisphere into darkness once again. What gratitude would not be lavished by the generation that came after this time of troubles upon those men who had discerned the approach of disaster from afar, who had taken measures to ward off its worst ravages by collecting in a safe place the knowledge of all past ages!

But even as Diderot promised his contributors future glory, he also acknowledged how fleeting the whole endeavor could be. The Encyclopédie, he lamented, would be irrelevant as soon as it was printed. The traces of the human that the “society of gentlemen” had so diligently collected would be out of date by the time the volumes appeared. Even print could not stop time.

In the nineteenth century, the famous German classicist August Böckh claimed that all this striving to collect and protect the material traces of the human was exemplified in the science of philology, which he defined as the Erkenntnis des Erkannten, or knowledge of what is and has been known. Our current knowledge is only as good as our past knowledge. And working with the fragmented documents of that past gave philologists an acute sense of how fragile our knowledge and history of ourselves were. The philologist’s attention to and care for the material nature of human culture—its embodiment in documents and texts of all sorts and qualities—was cultivated by a consciousness of how fragile it all was. The only thing that bound the species together was a documentary record always under threat.

Today that documentary record is undergoing a transition, unprecedented in its speed and extent, from printed to digital forms. Some scholars, archivists, and librarians warn that this move could prove catastrophic. Things of our past may be lost and forgotten. But if the long history of the humanities teaches us anything, it is that humanistic work has always been practiced in the face of catastrophe.

In our digital age, the vocation and the disposition of the humanities remains intact. “The work of the humanist scholar,” writes Jerome McGann, “is still to preserve, to monitor, to investigate, and to augment our cultural life and inheritance.” And it’s this disposition that thinking about the humanities in the anthropocene may help us recover.

 

Photograph: The dove and the raven released by Noah, with drowning people and animals in the water beneath the ark. From the Holkham Bible Picture Book, Add MS 47682, England: second quarter of the 14th century, Parchment codex, The British Library, London.