The Arts and Humanities Save Lives!

There was a time, long ago, when it was taken for granted that what we now call the “humanities” was integral not just to human enjoyment but to human survival.

It was self-evident that what the Greeks called logos—language and language-based reasoning—is not only what made the human animal different from all the other animal species of Earth, but that language was essential to the human capacity to survive, and thrive, in a too-often hostile and harsh environment. Rabbits have speed and agility; bears have teeth and claws; eagles have wings; humans have language. As one ancient Athenian put it:

For in the other powers which we possess we are in no respect superior to other living creatures; nay, we are inferior to many in swiftness and in strength and in other resources; but, because there has been implanted in us the power to persuade each other and to make clear to each other whatever we desire, not only have we escaped the life of wild beasts, but we have come together and founded cities and made laws and invented arts; and, generally speaking, there is no institution devised by man which the power of speech has not helped us to establish. For this it is which has laid down laws concerning things just and unjust, and things base and honorable; and if it were not for these ordinances we should not be able to live with one another. It is by this also that we confute the bad and extol the good. (Isocrates, Nicocles, Antidosis 3.5–7)

Above all, logos was for the Greeks the source of wisdom, which was assumed to be essential to human survival. Similar ideas can also be found in ancient Hebraic, Confucian, Arab, and other cultures.

We have come a long way from this “naive” view. Ours, as the cliché goes, is a technological age, and science and engineering are now assumed to be the keys to human survival.

Recently, my own university’s chancellor, Phyllis Wise, reminded us of technology’s vast sway. In a blog post featured on LinkedIn, and now republished on the University of Illinois’s website, she argued that the United States needs to fix what she and her fellow scientists and engineers call the “innovation deficit”—the gap between current levels of federal funding of science and engineering and what the country needs “to remain the world’s innovation leader.”

Once we fix this “innovation deficit,” she said, “we will be well on our way to solving some of the world’s biggest problems.” Scientists and engineers, she explained in a series of examples, would be able to heal diseases, clean up the environment, and “ensure our national security” by providing new technologies to the Department of Defense. By designating “health and education,” “environment,” and “national security” as the three main areas to which science and engineering can substantially contribute, Chancellor Wise cast human life as the chief beneficiary of science and technology research.

Of course, not all technological innovations are lifesaving. As Chancellor Wise concluded, some things are about “quality of life” rather than survival:

Finally, there’s this: everything from plasma TVs to the first internet browser to YouTube to whipped cream in a can—these innovations can all be credited to a university (ours, in the case of these examples). And while these innovations may not be lifesaving, they certainly contribute to our quality of life. Add to these examples the vibrant scholarly work of our historians, artists, musicians, journalists, and others who work to enrich our understanding of the world and of other people in a powerful way, and it’s clear that federal funding for research is a critical part of modern life.

This seems all self-evident enough, but is it?

Professional hurt feelings aside, it is incorrect to categorize the arts and humanities as concerned with “quality of life” rather than—as Wise’s post clearly implies—the sustenance of human life. The value of “historians, artists, musicians, journalists, and others” is not equivalent to the invention of canned whipped cream!

Today the world’s biggest problems have indeed grown big enough to concern the very survival of the human species: environmental catastrophe, genocidal weapons, and fragile technological and economic systems each put the species—not just individuals—at risk. But the solutions to these problems, inasmuch as they can be achieved, will be essentially, and not merely accidentally, social and political in nature.

Consider a national problem before us now that very much concerns human life. Two weeks ago, after the grand jury in Ferguson decided not to indict Officer Darren Wilson, President Obama and many others came out arguing for a technological fix: cop cameras. With this technology, they argued, we could erase ambiguities and probabilities from the arbitration of police conduct and be able to know with more certainty the sequence of events in incidents like the slaying of Michael Brown. Meanwhile, police officers themselves would certainly feel the presence of these cameras and be compelled to exercise more restraint when interacting with the public.

If this were so, Chancellor Wise might be right: Science and technology could fix some of the world’s biggest problems.

But as we learned last week in another grand jury decision concerning the police killing of another black man—this time, Eric Garner of Staten Island—technology means very little in a culture of police impunity before people of color. Garner’s killing took place in the full view of a camera. The footage is widely available online and shows an unarmed man who posed no imminent threat to police or bystanders being put in a chokehold by a police officer and slowly suffocated. As Sean Davis at the Federalist writes, “The grand jury’s decision not to bring any charges against the officer who killed Garner is inexplicable. It defies reason. It makes no sense…. All we have to do is watch the video and believe our own eyes.”

Cameras, microchip sensors, drones: No matter what the gadget, there is no technological fix for racism, or for more subtle problems such as prosecutorial discretion. There is no science that can save us from the historically embedded habits and the wider structures that cause us, seemingly instinctively, to value the lives of some more than others based on skin color, gender, or any number of other social markers of the Other. And the only solutions for structural problems within the law are both better law and better practice of the law.

These problems require citizens capable of reflecting on matters like discrimination and the law, and leaders who understand that the world’s problems can’t be fixed simply through technology. The world’s largest problems are not equivalent to the problem of gravity. If they were, perhaps science and technology could solve them. We’d just need more well-funded Newtons and Einsteins. Rather, we have problems that are inherently political and/or social in nature and that require political and/or social solutions. Moreover, it should be obvious by now that scientific and technological “fixes” often create new problems of their own (e.g., industrialism’s creation of global warming, the genocidal killing machines of modern warfare, and the resistant bacteria bred by antibiotics).

So while it seems silly to say it, it needs to be said, in light of the legitimate value political and academic leaders are putting on life: The arts and humanities save lives!

If for the Greeks logos was the means by which humans established cities, today it is the language, visual, plastic, and other arts that offer the sorts of practices and epistemologies capable of better reckoning with the world’s problems, whether they be the problem of racism (which keeps technologies from being put to just use), the challenge of climate change (which requires above all political solutions for any technological fixes to be implemented), or the existence of economic inequality and poverty (which simply cannot be addressed adequately by any means apart from good policy).

I am not claiming that the arts and humanities are salvific. There is no need for hyperbole here, no need for triumphalism. Rather, we need only to reckon with the world’s problems as they are to see that the arts and humanities, inasmuch as they equip us to do political and social work well, can save far more lives than any gizmo. If saving lives is our metric, the biggest deficit we face is not an “innovation deficit” but a “wisdom deficit.”

Nor is the issue here the “humanities” versus the “sciences,” but rather the obvious point that science and technology are good only in so far as they are used well, and that their good use is a matter of principle and prudence, not causality and efficiency.

Our greatest problems are social and political problems. They call for social and political solutions. The arts and humanities train us in the sort of skills and sagacity integral to social and political solutions. The arts and humanities save lives!

Chancellor Wise and other leaders of the academy-in-crisis: in addition to the “innovation deficit,” will you consider solutions to the “wisdom deficit” we now face?

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


The Chokehold That Is Prosecutorial Discretion

A protest in Grand Central Station in New York after the grand jury’s verdict in the Eric Garner case.

The news yesterday that a second grand jury in two weeks had failed to indict a white police officer for causing the death of an unarmed black man has caused an understandable uproar. Where should we focus the outrage in a justice system that is failing minorities? Let me suggest one candidate: prosecutorial discretion.

As an assistant federal public defender, I am used to experiencing the fruits of a system designed around the whims of the prosecution. First, what is prosecutorial discretion? Plenty of literature gives content to this phrase, but it boils down to the idea that a prosecutor can do whatever a prosecutor wants to do, as evidenced by the fact that 97% of all cases in federal court result in guilty pleas prior to, and instead of, a jury trial. The majority of these settlements are the result of plea agreements in which the prosecutor, not a judge, decides the charges of which a defendant will be convicted, whether to require a plea to factors that trigger statutory mandatory minimum sentences, and often the sentencing range, all with no input from the judicial branch, let alone a jury of the defendant’s peers. The arcane cruelty of federal criminal laws, particularly in the area of drugs and guns, means that penalties are so severe that most defendants have little choice but to plead guilty. Severe sentencing and over-criminalization combine to make a dangerous cocktail in which almost everything is a crime, or multiple crimes.

So what does this have to do with Ferguson? And now the Staten Island chokehold case? Without question, these two grand juries declined to indict because the prosecutors, in their discretion, did not want indictments. As has been detailed elsewhere, when a prosecutor wants an indictment, a prosecutor usually gets one. The fact that both of these grand jury presentments took place over months (instead of hours, as is the norm) and involved testimony on behalf of the officers (each police officer testified on his own behalf, which is, again, not the norm) shows just how unusual these cases were. And just how little the prosecutors must have wanted to indict.

And why might these prosecutors not have wanted these grand juries to return indictments? Perhaps because of legitimate doubts over the culpability of the officers under the laws of excessive force as written. Or perhaps because prosecutors rely on law enforcement officers every day to investigate cases and make out those cases in court, and were thus loath to alienate their usual allies. We may never know, and under the law we have no right to know.

Prosecutors make decisions not to indict all the time and the only person who might even know about it is the victim, or alleged victim, of some criminal action. No one has a right to prosecution. The recent conversations regarding unprosecuted collegiate sexual assault sparked in part by the controversial Rolling Stone article about my alma mater, the University of Virginia, implicate this fundamental fact. Every victim stands at the whim of the prosecutor to determine whether or not a case even starts down the path of seeking justice from a criminal court.

Bill Stuntz, former Harvard Law professor, described our current system and its disproportionate effects on black Americans well in The Collapse of American Criminal Justice (Harvard, 2011):

As unenforced speed limits delegate power to state troopers patrolling the highways, so too American criminal law delegates power to the prosecutors who enforce it. That discretionary power is exercised differently in poor city neighborhoods than in wealthier urban and suburban communities. Far from hindering such discrimination, current law makes discriminating easy. That sad conclusion has its roots in a sad portion of America’s legal history. When the Fourteenth Amendment’s guarantee of the “equal protection of the laws” was enacted, one of its chief goals was to ensure that criminal law meant one law alike for blacks and whites—that both ex-slaves and ex-slaveowners would be held to the same legal standards, and that crime victims among both groups received roughly the same measure of legal protection. That understanding of equal protection did not survive Reconstruction’s collapse. Today, the equal protection guarantee is all but meaningless when applied to criminal law enforcement, one reason why both drug enforcement and enforcement of laws banning violent felonies are so different in black communities than in white ones.

The Ferguson and Staten Island cases may be targets of our outrage for many reasons, but one should certainly be that prosecutorial discretion has been exposed for what it is—entirely out of our hands.

Lisa Lorish is an assistant federal public defender in the Western District of Virginia and a graduate of the University of Virginia School of Law.

. . . . . . . .


Universitybot Responds: Gang Rape as “Sexual Misconduct”

University of Virginia, 11/20/14

Above and below: Fresh graffiti at the Phi Kappa Psi fraternity house, University of Virginia, November 20, 2014. Photos: Gregory Thompson

Last week I wrote a post titled “Who Needs Captains of Erudition?” Long before “corporatization” had become synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist, dismissed American universities as little more than “competitive businesses.” These enterprises were run by university presidents who had become mere managers of systems, chiefs of concerns, “captains of erudition.”

When I read President Sullivan’s response to a Rolling Stone article that recounted a University of Virginia woman’s personal story of gang rape and the trauma that followed, all I could hear was the defensive, disengaged chatter of one of Veblen’s captains:

“I am writing in response to a Rolling Stone magazine article that negatively depicts the University of Virginia and its handling of sexual misconduct cases.”

“Negatively depicts”? If one phrase embodies the corporatization of the university that might well be it. The contemporary university’s assessment metrics, use of adjunct labor, obsession with economic efficiency, and capitulation to the sovereignty of the student as consumer are just consequences of a deeper failure of moral imagination. The primary concern is with public perceptions. Never mind that a young woman felt that her only option was to talk to a Rolling Stone reporter. This is the language of an institution committed to nothing but its own mechanisms. There is no evidence of the virtues to which we here at the University of Virginia lay claim—empathy, civic concern, leadership, and curiosity.


Sullivan’s statement was a missive from the bureaucratic bowels of an accounting machine. It was surely manufactured by public relations specialists and lawyers whose interests are simply fiduciary, concerned only with legal liability and fundraising. There are no people, just “interests”; no judgments, just “initiatives”; no moral failures, just “issues.” There were, as one of my colleagues put it, no rapes, no victims, no women, no perpetrators—just “issues related to sexual misconduct.” And the only response is more policies, more initiatives, more accounting.

The captains of erudition are firmly at the helm at the modern American university. With their phalanx of managers, they are guiding us into seas of indistinction, into a future where the university is just another modern bureaucracy without ends, without purpose. And the faculty is asleep on the deck.

. . . . . . . .


Who Needs Captains of Erudition?

Long before “corporatization” became synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist who coined the term “conspicuous consumption,” dismissed American universities as little more than “competitive businesses.” In The Higher Learning in America (1918), published some forty years after Johns Hopkins was founded as America’s first research university, he described the contemporary university as a “business house dealing in merchantable knowledge, placed under the governing hand of a captain of erudition, whose office it is to turn the means in hand to account in the largest feasible output.” The modern American university president wasn’t a scholar, an intellectual, a scientist, or even much of a leader. He was the manager of systems, the chief of a concern, the captain of erudition.

Thorstein Veblen, by Edwin B. Child, 1934. Courtesy of Yale University Art Gallery, Gift of Associates of the Sitter. A protégé of J. Laurence Laughlin, the first head of political economy, Veblen began his uneasy passage through the University in 1892.


Botstein and Bard

Leon Botstein, the charismatic conductor of the American Symphony Orchestra and president of Bard College, is no captain of erudition. “Botstein’s voice,” writes Alice Gregory in the New Yorker,

telegraphs a wizardly moral authority. Everyone responds to it, but parents, primed to be proud of their children, are especially susceptible. ‘We live in a time where people don’t really believe in education. That doubt is something we struggle with,’ he said. ‘Your enthusiasm, your determination, your idealism about education gives back to us a reminder of why we should fight for what we do.’

For Botstein, the “quantification of American higher education,” introduced by university administrators who just want to keep their jobs and facilitated by spineless faculty who have given up on the liberal arts, is a moral affront.

Botstein’s earnest and tireless defense of an ideal, however, might just doom this small, liberal arts college, 90 minutes north of New York City. Bard, where all those black-clad kids who read Sartre in high school wound up, is the singular creation of Botstein’s will and personality. But in December 2013, Moody’s Investors Service lowered its credit outlook to “negative.” And now some of its trustees are worried. Susan Weber, a trustee and donor, said:

Everyone says, ‘Oh, he’s the most amazing fund-raiser.’ Well, I wish that were so, because we wouldn’t be so underfunded if he were that amazing. I think he’s good at it—he works hard at it—but his real strength is building an institution.

“But”? If one word can be said to embody the confusion over the purposes of higher education, that “but” might be it.

Botstein built an institution with a vision, but only a captain of erudition can, it seems, sustain it.

Weber’s resigned admission of what Bard needs after Botstein has become the assumption of many university boards. University presidents shouldn’t lead national debates or make moral claims; they should alleviate political pressures and mollify the idiosyncrasies of donors. Ours is the age of the competent commander-in-chief—we need accountants, not idealists.

Veblen’s Prescience—in Our Own Backyard

On June 10, 2012, my colleagues and I at the University of Virginia (UVa) learned that Veblen had been all too prescient. Helen Dragas, Rector of UVa’s Board of Visitors, briefly and matter-of-factly informed us that our president had been fired:

On behalf of the Board of Visitors, we are writing to tell you that the Board and President Teresa Sullivan today mutually agreed that she will step down as president of the University of Virginia effective August 15, 2012. For the past year the Board has had ongoing discussions about the importance of developing, articulating and acting on a clear and concrete strategic vision. The Board believes that in the rapidly changing and highly pressurized external environment in both health care and in academia, the University needs to remain at the forefront of change.

Over the following weeks, my colleagues and I, joined by an international audience, speculated about these unspecified “philosophical differences” between President Sullivan and the Board of Visitors; we wondered about the “clear and concrete strategic vision” for which the Rector called. Hadn’t we already been subjected to years of strategic planning?

After ten days of increasing frustration and concern from faculty, students, and alumni, Dragas sent a second email. This one listed a number of “challenges” facing UVa that, Dragas implied, Sullivan had no plan to address: the long-term decline in state funding for public universities, the disruptive effects of new technologies, rising tuition costs, increasing enrollments and an aging faculty (with no money to replace it), increasing demands for faculty and curricular assessment—not to mention the increasingly expanded roles that the contemporary university plays of health-care provider, entertainment center, sports venture, industrial and government research center, and, by the way, educator. In short, the university faced a whole host of challenges, none of which were unique to UVa.


UVa President Teresa Sullivan speaks on the steps of the Rotunda after addressing a closed session of the Board of Visitors, June 2012; photo © Norm Shafer

But between June 10 and Sullivan’s ultimate reinstatement on June 26, something else happened on Grounds, something that most stories and accounts of the summer’s events missed in their efforts to chronicle the process. It not only surprised me then; I still struggle to make sense of it. (Talbot Brewer also tried to make sense of this series of events in the summer issue of The Hedgehog Review.)

For about two weeks, UVa faculty members paid scant attention to the myriad problems that the Rector identified; they didn’t demand political intervention; they didn’t split up into conservative and liberal corners and revive culture-war arguments (the liberal faculty against the conservative administration). For two weeks, my colleagues condemned the Board of Visitors’ actions by making explicitly ethical arguments, arguments grounded in claims about the moral purposes of the university: what the university was and ought to be. Some colleagues defended and invoked an honor code with which we usually engage, if at all, only ironically. Others celebrated founder Thomas Jefferson’s commitment to higher education as a public and democratic good, but without the ironic winks that usually accompany such discussions. There was even an impassioned defense of peer review as an ethical practice. Whatever their particular content, the arguments led to a broad consensus: This wasn’t right, this wasn’t how a university ought to be run.

With our backs to the wall and overcome by the sense that our university was imperiled, we faculty members made arguments that were not, in the first instance, financial, technological, or political. We made normative claims about what a university ought to be. That is, the arguments that my colleagues mustered focused on the moral character and purposes of the university. Faculty were engaged and motivated by a general and rather vague sense that the moral authority of the university had been threatened.

Can We Afford Our Future?

My colleague Siva Vaidhyanathan has continued to make these arguments. Recently, while writing of another attempt to oust a public university president, this time at the University of Texas, Vaidhyanathan defended the increasingly beleaguered notion of the university as a public good:

The tuition increases and the realization that the payoffs from universities are deferred and unquantifiable pushed legislators and “reformers” to demand accountability and radical administrative transformations. This has only served to make it harder for faculty to teach and conduct research. It has made the richest nation in the history of the world act like it can’t afford to believe in its own future, respect its own culture, or foster the experimentation and knowledge that might serve the entire planet.

The university is more than an “inefficient and outdated information delivery system.” It is a public good because it advances, conserves, refines, and shares knowledge for the world. And it does so most basically by forming people who believe that knowledge is a public good.

Leon Botstein may at times be bombastic. And he is always, without question, idealistic. At a moment when the very purposes and values of universities are being reshaped in the name of efficiency and disruption, we don’t need captains of erudition. We need leaders who embody the true ethos of our institutions.

. . . . . . . .


Here Comes the Potentiated Self

In a series of pithy posts, Joshua Glenn, the brand analyst with a penchant for Roland Barthes, has been cataloguing cultural codes. Each code, embodied in advertisements or pop-cultural imagery, is a single, meaning-laden node in the complex, often imperceptible matrix “structuring our perception of the everyday world.” Glenn’s codes range from the child-adult, “a holy fool who speaks truth to power,” to the cool engineer, a visionary designer who sees “into things more acutely and penetratingly than ordinary mortals.”

An example of wired self-potentiation from Joshua Glenn’s Code-X (2), http://hilobrow.com/2014/04/30/code-x-2/

But one code seems particularly of our moment: wired self-potentiation. This code, central to the advertising campaigns of technology companies, celebrates a new, digitally enabled self. For the networked person of today, extended beyond time and space by smartphones and gadgets, “multitasking [is] re-imagined as existential branching-out. Breaking the mold. Demonstrating vitality, multiplicity, and proactive refusal to conform to stereotyped expectations. All thanks to networked technology.” This is the potentiated self, the self raised to a higher power.

The idea of technologically enabled self-improvement is widespread. James Surowiecki recently described a “performance revolution” in sports, in which athletes aren’t just practicing harder but smarter, “using science and technology to enhance the way they train and perform.” Long hours in the gym or on the pitch won’t cut it anymore. Today’s elite athletes are monitored, analyzed, and reshaped by a matrix of biometric sensors and training regimes, all supervised by a phalanx of sports experts and coaches. Training methods for contemporary MLB, NFL, and NBA players are data-driven, networked systems designed to perfect not just athletes but the exercises and disciplines that make them better.

But if the improved, technologically enhanced training regimes of professional sports focus on improving people, the Internet of Things (IoT), another vision of the connected, networked age, seems altogether different. The Internet of Things, as one of its biggest proponents Jeremy Rifkin claims,

will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.

The Internet of Things is all about connectivity. In this digital network everything, whether you or your thermostat, is a node and, thus, just another source of data. The Internet of Things, as Sue Halpern writes in the New York Review of Books, is

about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.” Lots and lots of it. “The more you tell the world about yourself, the more the world can give you what you want,” says Sam Lessin, the head of Facebook’s Identity Product Group.

In Rifkin’s vision of the Internet of Things, humans are just elements of a network organized around the endless circulation of information. In such a system, the networked self is little more than the self as networked, as a node in a complex system. This is one vision of the networked, potentiated self, a notion that Glenn takes from the German poet and philosopher, Friedrich von Hardenberg, better known as Novalis. But Novalis, despite his image as the frail, romantic poet who died of a broken heart, would have had more sympathy for Surowiecki’s jocks than Rifkin’s automated systems.

In 1798, Novalis wrote a short dialogue in which two figures, named simply A and B, debated the effects of the proliferation of print. Interlocutor A lamented the modern “book plague,” especially those marks of black that fill their pages: “What burden are these letters!” The modern human being, he complains, is characterized by his “fatal habituation to the printed nature.” There was so much print that modern readers had begun to mistake it for nature itself.

Interlocutor B wasn’t nearly as worried about book plagues and floods of ink, however. The key to dealing with media surplus lay not in acquiring better tools but in becoming a better reader. Like any art, “reading” required “practice” and, as Novalis put it, “practice makes perfect, even in the reading of books.” Technologies like print were good only insofar as they were engaged as elements of a human discipline, repetitive exercises tied to a person acting according to settled purposes.

For Novalis, a potentiated self would result not from technologies unmoored from human purposes but from the perfection of excellent habits. “The world must be romanticised,” he wrote. “Romanticising is nothing but a qualitative intensification. Through this operation the lower self becomes identified with a better self. Just as we ourselves are a sequence of such qualitative powers. [. . .] By giving the everyday a higher meaning, the habitual a mysterious appearance, the known the dignity of the unknown, the finite a semblance of the infinite, I romanticise it.”

Raising the self to a higher power is grounded not in the liberating promises of technology, then, but in the more mundane pursuit of excellent habits. It’s about, as Surowiecki puts it, “getting better at getting better.” Sometimes data can help. But only with coaches and teachers skilled enough to help us make sense of it—and to help us learn how to practice.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

John Searle and the Threat of Artificial Intelligence

 

John Searle wants to reassure us. The University of California, Berkeley, philosopher, a specialist on mind, “intentionality,” consciousness, and other concepts integral to the distinct sort of being that we refer to as human being, wants us to know that we need not fear “super intelligent computers intentionally setting out on their own to destroy us.” Nor, it turns out, should we grow giddy about the imminent passage of human being into the “infosphere,” where “everything is information and . . . computers are much better at it.”

In this month’s issue of The New York Review of Books (October 9, 2014), Searle offers an extensive rebuttal of the theses offered in two recent books: Nick Bostrom’s Superintelligence, which warns of the impending rise of “machine brains” superior to human ones, and Luciano Floridi’s The 4th Revolution, which announces the metaphysical emergence of the “infosphere.” Searle persuasively argues that both authors fail to account adequately for the vital role of consciousness in human being, and therefore miss in a fundamental sense what is really entailed in the creation of “artificial intelligence” and indeed “information.”

Searle offers several worthwhile distinctions, which he’s explored elsewhere, especially distinctions between “observer independent” and “observer relative” features of reality. The former exist “regardless of what we think”—mountains and molecules, for example. The latter “depen[d] on our attitudes” for their real existence—money and marriage, for example. Searle suggests that “information” falls in the latter category, as does “computation” when it is the product of machines. For only conscious agents, he argues, can have, create, or otherwise interact with “information” and “computation.” There is, then, no such thing as purely artificial information or intelligence, for there is no such thing as artificial consciousness. Conscious beings—something machines are not—must cooperate with the artificial mechanisms of information and computation in order for them to function in any way remotely as “intelligent” machines.

Or so it will be for the foreseeable future. It is possible, he surmises, that perhaps someday we will be able to make machines that “duplicate” the human brain, including consciousness. And it is at the point of this speculative possibility that Searle’s argument becomes both more interesting and more problematic, because it probes—somewhat indirectly, but powerfully nonetheless—the significance of the “artificial,” a category in which we can place “art,” “artifice,” and certainly “technology.”

A bit of background on the artificial might be helpful here. In ancient Greece, a story circulated about the creation of human beings by the gods that began uncomfortably with humans being left “naked and shoeless” and thus in grave danger before the elements. And so it was until Prometheus gave humans fire and the mechanical arts by which to sustain and preserve their lives. The “artificial,” we might say, saved human life.

But the Greeks were as capable of worrying about the artificial as they were of celebrating it: Most famously, Plato worried about simulacra, those copies of copies that did nothing but deceive and mislead humans in their quest for order and justice.

The Edenic account in the Hebrew scriptures is different from the Greek one in that it presumes the goodness of being naked and shoeless—until the great Fall, at which point artificial coverings were made to cover human nakedness in a gesture of divine mercy and judgment.

I could offer other examples of various ideas and arguments about the status and significance of the artificial in human life. Questions about the relationship between art and nature, or the artificial versus the real, are longstanding and taken up in many cultural traditions.

But what interests me here is Searle’s account, which is fascinatingly emblematic of our own age. Whereas these older accounts were concerned with the relationship between nature and art, Searle is concerned most crucially with what it takes to make something really or truly an artificial version of some non-artificial entity. What does it mean, Searle asks, to really “artificialize” (my own word, if such a word exists) something? “Artificial intelligence” as we now know it, argues Searle, may be artificial, but it is not really intelligence. So what would a truly artificial intelligence look like?

An artificial heart, by contrast, seems to be for Searle really an artificial heart. Why? Searle bases this distinction on that between “a simulation or model” and “duplication.” He writes:

Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both real and artificial hearts are physical pumps, unlike the computer model or simulation.

There is a strange literalism at work in Searle’s approach—or, better, an artificial essentialism. Causal processes are for Searle the essence of “reality”; the heart’s essence, it seems, is found for him in a basic causal function, pumping. In order to create a real artificial heart, that basic causal function needs to be literally, materially duplicated, or re-produced. Then we have, in a paradoxical formula, a real artificial heart.

But, I must ask, can that artificial heart skip a beat in a moment of terror or wonder? Such heart-stopping moments, too, can be understood in cause-and-effect terms. Can an artificial heart grow stronger with exercise, or weaker with poor nutrition, also causal phenomena? Can an artificial heart, to be a bit hyperbolic, be eaten and subject to the causal processes of digestion? If not, then clearly the artificial heart is not a “real artificial heart” in every respect, but only in one respect, albeit a very important one.

My point is that “duplication” is a poor measure of the “really artificial,” for it is in the very nature and substance of the “artificial” itself to have a relative and partial relationship to that which it is imitating, copying, or “duplicating.” The artificial heart duplicates some aspects of the natural heart, but not all aspects. And the same thing can be said about computerized artificial intelligence: Of course it is true that, as Searle writes, computers lack altogether a “psychological reality,” and are nothing but well-designed, highly functional circuit systems. Nevertheless, in certain circumstances they behave outwardly in a manner that we recognize as “intelligent.” This resemblance may be far less intense than the duplication of the “causal mechanism” of the human heart, but it is a resemblance nevertheless.

If the philosopher’s quest is to find the line at which point “artificial X” crosses a line to become a truly artificial X, I am afraid it may be a frustrating task, for sharp conceptual distinctions are not going to do it. Rather, we are better off thinking in terms of a continuum, on which perhaps “concepts” themselves might sit at one end, “models” somewhere in the middle, and “duplications” at the other end.

Searle, as I said, wants to reassure us: Computers are not going to take over the world, and we have not entered a new Aquarian age of the “infosphere.”

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

I don’t find this very reassuring, however. A greater danger than a future filled with “really artificial intelligence” is already squarely with us: We often behave as though computers believe, desire, and move. We ascribe human agency to them. And in this present reality, not an apocalyptic future, what is “really artificial” matters little. Rather, what we need is better reflection on the meaning of the artificial in our lives together.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America Since the Kennedy Assassination.

. . . . . . . .


So Who Edits Google?

Last Tuesday the New York Times launched Watching, a new homepage feature that tracks developing news from the Times and “across the Web.” The newsfeed features prominently on the upper right-hand side of the homepage and runs 24/7. Like other web-based news feeds, Watching, writes Times reporter Marcus Mabry, will

help readers wade through the flood of news and other information that moves faster than ever before. Watching will feature news of the moment, such as early outlines of developing stories on NYTimes.com, and aggregate the most newsworthy and trusted reporting from around the web. It will also feature tweets from Times reporters and others, as well as photos and YouTube videos.


Like other contemporary technologies—search engines, Facebook news feeds, and Twitter updates—Watching is designed to help us filter and search amidst the dross of the Web.

But Watching, or so the editors at the Times want us to believe, is different: “Times editors—not algorithms—will make the choices.” Unlike the inhuman and inscrutable mechanisms powering other feeds, Watching will keep us in the sure, expert hands of the Times’s human editors.

The Times’s decision to launch a newsfeed feature is, however belated, understandable. Running a website as though it were simply a digital version of a print paper (just scroll down to get a below-the-fold feel) seems a losing proposition. The decision to stress Watching’s reliance on human rather than computational judgment is also understandable.

While Watching will have the fastest metabolism of The Times’s digital report, it will have the same standards as the rest of our journalism. Posts will undergo the same scrutiny as any other pieces on the NYTimes.com home page.

The Watching team, led by me [Mabry] and the reporter Jennifer Preston, will be in constant contact with The Times’s news desks.

As the Times continues to develop a digital strategy, it wants to protect and exploit its cultural advantages by distinguishing itself from new forms of information distribution. And appealing to widespread assumptions about algorithms might seem like a winning strategy. Algorithms, after all, don’t make judgments; they simply compute. Real editorial decisions are made in newsrooms by news people sitting at news desks. At least that’s what the Times wants us to believe. Their all-too-human editors will, after all, be curating and culling news from the Web, an environment populated with algorithms of all sorts.

With Watching, then, the Times pits two forms of journalistic authority against one another, the editorial and the algorithmic. And journalism becomes a proxy for broader questions and anxieties not only about how information is distributed in our digital age but also about how it is authorized. What kinds of information can we trust? What sources can we rely on? What, in short, is the difference between editorial and algorithmic objectivity, these two ways of giving an account of a world common to all?

Man vs. Machine 

Editorial authority is based on institutional practices and norms as embodied, for example, in the two human editors responsible for the Times’s new Watching feature. The authority and value of the newsfeed is based on their curatorial acumen and journalistic integrity. The circulation and evaluation of knowledge—what is, and is not, important enough to make it to Watching’s feed—runs through them and their team of reporters.

Algorithmic authority, by contrast, is based, at least as advocates such as Google describe it, on the exclusion of such subjective forms of knowledge, that is, forms of knowledge embodied in the person of the expert. Its cultural authority relies, as Tarleton Gillespie puts it, on a “technically inflected promise of mechanical neutrality. Whatever choices are made are presented both as distant from the intervention of human hands, and as submerged inside of the cold workings of the machine.”

Algorithmic objectivity, however, is not a natural feature of algorithms. It is a carefully curated cultural good for which neutrality and suppression of the subjective are primary virtues. Google, for instance, recognizes that the efficiency and technical power of its search engine are meaningless, not to mention profitless, without the cultural authority granted it by users.

This authority rests primarily on a certain notion of objectivity. In “Ten Things We Know to be True,” Google promises: “We never manipulate rankings to put our partners higher in our search results and no one can buy better PageRank. Our users trust our objectivity and no short-term gain could ever justify breaching that trust.” Google’s “objectivity” is inseparable from the assumption that the iterative and computational character of its algorithms, such as PageRank, eradicates subjective error, prejudice, and poor judgment. Algorithms transcend personal predilections and give us access to the real.

Google’s algorithms, however, are not just a computational technology; they are communicative technology. They bring together the millions of individual decisions that users make every minute. The authority of PageRank, for instance, rests on the claim that it accurately reflects users’ decisions of what has value and what doesn’t. As Google says,

Google search works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. We assess the importance of every web page using more than 200 signals and a variety of techniques, including our patented PageRank™ algorithm, which analyzes which sites have been “voted” to be the best sources of information by other pages across the web.
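Set the 200 signals aside for a moment: the core “voting” logic Google describes can be sketched in a few lines of code. What follows is only a toy illustration of the published PageRank idea—the names, the example “web,” and the simplifications are mine, and Google’s production system is of course far more elaborate:

```python
DAMPING = 0.85  # probability a "random surfer" follows a link rather than jumping anywhere

def pagerank(links, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # each page splits its "vote" among the pages it links to,
                # so a vote from a highly ranked page counts for more
                new_rank[target] += DAMPING * rank[page] / len(outgoing)
        rank = new_rank
    return rank

web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(web)
# "c" receives votes from both "a" and "b", so it ends up ranked highest
```

The point of the sketch is that nothing in it is neutral in any deep sense: the damping factor, the iteration count, and the decision to treat a link as a “vote” at all are design choices made by people.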

Algorithms facilitate a knowledge that exceeds the self. They give voice, insists Google, to the desires of a community of users. Algorithms help us to see ourselves in the scattered and disparate patchwork that is the web. But they do so at such scales that the individual user is little more than an algorithmic self, a flattened data point among millions.

In “The Relevance of Algorithms,” Tarleton Gillespie summarizes the differences between algorithmic and editorial authority:

The editorial logic depends on the subjective choices of experts, themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.

Editorial objectivity relies on the norms and practices of culturally authoritative institutions, whereas algorithmic authority relies on the cultural authority of mechanical neutrality. And yet, these cultural norms and notions of objectivity are not inherent to any particular technology, print or digital. Both editorial and algorithmic authority are cultural logics, as Gillespie says, that help us figure out what counts as knowledge and what doesn’t. Proponents of both, the Times’s editors and Google spokespeople, pursue a similar end. They both want to identify and organize “relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation.” But they have different norms, practices, and epistemic assumptions about how to do so.

Determining What Counts as Knowledge

The norms governing and authorizing editorial objectivity, for example, have emerged over centuries of print and in debates about how information should be produced and distributed. These norms were never inherent in the medium of the printed newspaper—they were hard won.

Over the course of the eighteenth-century German Enlightenment, the underlying assumption of intellectuals was that the level of a society’s enlightenment was a function of its access to information. More print equaled more enlightenment. Information, so it seemed, wanted to be free, and print—be it in the form of books, newspapers, or journals—facilitated this unrestrained circulation.

In the final decades of the eighteenth century, however, German intellectuals began to doubt whether the unregulated flow of information in print necessarily led to a more enlightened society. Members of Berlin’s famed Wednesday Society, which included leading Prussian bureaucrats and editors, debated whether “each and every bit” of information should be immediately published. While some members argued for the “unrestricted freedom” of information, others argued that information should be carefully filtered and selected before it was distributed.

The members of the Wednesday Society were not simply debating how best to maintain their cultural privilege and power and enforce a system of state-sponsored censorship. They were also struggling to understand how information and knowledge should be distributed in an age in which print had begun to proliferate at an unprecedented rate. Were authors responsible for their ideas once in print? While all information might, in the abstract, seem worth distributing as broadly as possible, could some ideas be pernicious or dangerous? Was an absolutely unrestricted dissemination of information, whatever the source or quality, obviously the best option? Information will always be filtered and selected, so what norms should govern these practices and technologies of communication?

When the Times juxtaposes its editors with the Web’s algorithms, it appeals to this longer history of journalistic filters and selection technologies of which editorial authority is a key feature. But to cast algorithmic authority as inherently non-human and mechanistic is both false and dangerous.

Algorithms are technologies crafted by particular people with distinct intentions. Algorithms want, as Nicholas Carr puts it, what “the people who write them want.” It’s just that these desires, judgments, and decisions are so deeply nested within the algorithm itself that we usually can’t see beneath the beguiling simplicity of Google’s homepage. The algorithm, however mechanical we imagine it, can’t produce truth or knowledge, as Bethany Nowviskie puts it, independently of its human designers and users. Judgment is no less critical to the selective criteria of an algorithm than it is to a page of a good newspaper. But when we read a newspaper, even in its online form, we do so while applying a set of norms and expectations that have been formulated and institutionalized over centuries. We are only now beginning to understand why the unstated norms that shape the design and use of computational algorithms need to be made more explicit—and therefore subject to critical public debate. For now, Google and Facebook don’t have mastheads or editorial pages. The names of the people who make judgments about what an algorithm does are hidden. All we have is the veneer of mechanical neutrality and the invocation of an objectivity that operates beyond the human.

. . . . . . . .


Virtual Assembly and the Legal Limits of Digital Dualism

 

What are the implications of the First Amendment’s right of assembly to the lives we increasingly lead online? I have spent a fair amount of time over the past few years exploring the meaning and significance of “the right of the people peaceably to assemble,” but I have only recently begun to consider its applications in an online context. My first effort to think about “virtual assembly” was in an article published last year in the Cornell Law Review from which the following reflections are drawn.

The Scope of the Assembly Right

While many people may think of the right of assembly as covering discrete, time-bound gatherings like a school celebration or a political protest, its scope is far broader. Protecting the moment of expression often requires protecting the group that makes that expression possible. Put differently, the verb “assemble” in the text of the First Amendment presupposes a noun—an assembly. And while some assemblies occur spontaneously, most do not. Most assemblies emerge from people who come together long before they hold protests or conventions. Our “pre-political” and “pre-expressive” groups enable our ordinary activities to develop into extraordinary ones. And those groups are among the most vulnerable to government monitoring and interference.

These observations extend to our online groups, differently embodied and displaced though they are. In fact, the virtual dimensions of assembly may yield insights for how we understand more traditional assemblies and the legal protections that we assign to them. Consider, for example, how our online groups may serve as early indicators of possible infringements of the right of assembly. One of the reasons that government officials usually ignore our most informal offline groups is that these groups often lack visible boundaries—there are often no membership lists or widely available indicia of the individuals who constitute “the group.”

But the boundaries are more pronounced online. A Facebook group lets us know who is in and who is out. Even the most informal Facebook group signals a more concrete form of exclusion and embrace than its offline counterpart. Our private boundary lines are more publicly visible online, and that visibility is more likely to attract the attention of government officials.

Virtual Assembly as Community

Despite their more pronounced boundaries, I was initially skeptical of the deeper significance of online groups. I am not on Facebook or Instagram, and I only recently succumbed to the professional pressure of Twitter. I knew that online connections could facilitate otherwise improbable interactions, and that they could simplify logistics and efficiencies for everything from flash mobs to political protests. But I was less certain about other questions, such as whether meaningful relationships and ways of life could really flourish online.

My thinking began to change when I came across Howard Rheingold’s work. Rheingold started writing about online communities in the early 1990s, which in virtual time places him somewhere in the vicinity of Plato. But he wrote movingly—and convincingly—of the deep social connections and communities fostered by online interactions. Other writers described the emotional shelters that online groups created for pregnant women, cancer patients, drug addicts, and numerous other affinity groups. I discovered online gaming communities and online churches, and not all of them seemed flaky.

There are, of course, plenty of shallow and harmful online groups. But as Rheingold noted in 1993, “it is dangerous to mindlessly invalidate the experiences of a person for whom Internet communication is not a luxury but a lifeline.”

These kinds of observations helped frame my approach to the project of virtual assembly, and to the legal, cultural, and normative questions that followed. I learned a great deal about the ways in which online groups contribute to important First Amendment interests like identity formation, self-governance, and dissent.

I also learned that the dynamic line between virtual and non-virtual groups often frustrates efforts to draw legal and cultural distinctions between them. Many of us use online connections to sustain relationships that begin offline.  And many relationships that begin online move offline. Online dating services lead to offline relationships. Business relationships initially formed through social networking sites lead to in-person meetings and partnerships. Lawyers who find clients in online worlds represent them in offline proceedings. (And, of course, lots of bad things can also happen when online relationships migrate offline.)

The connections between our online and offline groups suggest that we need to think carefully about the constitutional protections for those groups. Protecting values like identity formation, self-governance, and dissent depends on protecting the boundaries of these groups. But these protections are not cost free. Resistance to monitoring can complicate legitimate law-enforcement efforts. Exclusive groups can weaken antidiscrimination norms and hinder equality of opportunity. Deep attachments that we form with others can complicate our psychological ability to exit from groups. (As one example, I’m discovering the nontrivial psychological pressure that prevents me from “unfollowing” someone on Twitter.) We have been weighing these costs and benefits in our offline groups for some time, and it will be important to extend these considerations online as well. The online context might call for modified doctrinal applications. It might also cause us to rethink our existing offline frameworks.

Looking Back as We Look Ahead

The law is an imperfect and limited resource, online and off. Many of the looming questions may well be resolved by institutional and technological design rather than by legal doctrine. But recognizing that some ideals are largely beyond judicial competence should not cause us to cut short our normative aspirations for law. In the case of our online groups, we can meet some of the challenges that lie ahead by looking back: to the right of the people peaceably to assemble.

John Inazu is an Associate Professor of Law at Washington University in St. Louis and the author of Liberty’s Refuge: The Forgotten Freedom of Assembly (Yale 2012).

. . . . . . . .


Cultural Critics vs. Social Scientists—They Both Stink

The past few weeks have seen some heady attempts at generalization: first Sam Tanenhaus’s piece on “Generation Nice” and then A.O. Scott on the “Death of Adulthood.” (Potential correlation there?)

The Quant & The Connoisseur logo

The subsequent critiques of both were withering. Tanenhaus’s article proved to be laden with errors, resulting in hilarious retractions by the Times editorial staff. In response to the proof of the millennials’ niceness, the editors wrote:

An article last Sunday about the millennial generation’s civic-mindedness included several errors…. Applications to the Peace Corps recently have been in decline with a 34 percent decrease from the peak in 2009, and applications to Teach for America decreased slightly last year; neither organization has seen “record numbers of new college graduates” applying for jobs.

Well done. And the unretracted rest apparently relied on citations of studies that cited studies that cited…an ad agency?!

As for Scott (one of Q&C’s favorite film critics), his reflections on adulthood’s imminent, if not already occurring, death come from having watched a lot of TV. “Something profound has been happening in our television over the past decade,” Scott announces with a foreboding sense of doom (it’s gotten worse?). And then, in an alliterative jingle that would make even the best nineteenth-century speechwriter wriggle, “It is the era not just of mad men, but also of sad men and, above all, bad men.”

So there you have it: A few shows (Mad Men, The Sopranos, Breaking Bad) have chronicled the decline of white patriarchy, which is a good stand-in for the decline of adulthood, which in turn is a good stand-in for a major shift in “American Culture.” Imagining that all of adulthood, and masculinity in particular (Scott’s real aim), was coming to an end because of a few televisual fantasies of bad dads like Don Draper ignored, as David Marcus pointed out, a whole lot of other stuff on TV that most people actually watch, like, say, football (or NCIS or NCIS: LA). Masculinity is doing just fine there (by which I mean on display, not as in, oh-so-admirable).

One would think at this point the answer is Big Data to the rescue. Instead of making whopping generalizations based on a few selective examples, turning culture into data can give us a much better view of the “big picture” (preferably as a picture: through that most ubiquitous of contemporary genres, the infographic). If we look broadly, what is “television” telling us and how would we segment it into different groups, for surely it is not telling all of us the same thing?

The problem is, as Marcus pointed out, it’s not as though the social scientists who traffic in cultural data mining have done much better. Turning culture into data is not a seamless process, nor is its interpretation. While we all know this, we seem unable to heed this advice given the opportunity for a juicy headline—in other words, given the chance to tell a story. Narrative trumps reason in fascinating ways.

The point is not, oh forget it, let’s just let Tanenhaus make it up after all. A good story is a good story and you can’t count culture anyway. The point is we need a lot more work on the work of translating culture into data before we go ahead and start calculating and interpreting. What would be a representative sample of “TV” or “pop-culture”? How would you measure depictions of adulthood or “masculinity” (either as positively or negatively coded)? What is your control set, i.e., what are you comparing this against? And so on.

The real answer is we need to think more about the process of cultural modeling. How do we model a cultural subset through a data set (a generation, for example, or contemporary television), and how do we model a cultural practice or concept through a particular measurement? These aren’t easy questions, but they are the prerequisite for correcting against journalistic just-so stories of cultural criticism.

This is the time for the humanists to jump into the fray, not to put our heads in the sand and say, “You can’t count that!”  The challenge is to think of counting culture in more sophisticated ways and so avoid the mythologizing that passes as cultural criticism these days.

 

. . . . . . . .


Quit Lit: Do the Humanities Need the University?


There’s a new genre taking shape on blogs, Twitter, and even in the pages of The London Review of Books: Quit Lit. Just last week, Marina Warner, a creative writing professor and member of the Man Booker Prize committee, explained her decision to resign her position at the University of Essex. In “Why I Quit,” she describes the bureaucratic disciplines of England’s new Research Assessment Exercises, which tabulate and calculate academic labor with the efficiency and mindlessness usually reserved for an assembly plant (and a low-tech one at that).

In a scene she must have embellished by channeling Kafka U., Warner recounts a meeting with her new dean:

A Tariff of Expectations would be imposed across the university, with 17 targets to be met, and success in doing so assessed twice a year. I received mine from the executive dean for humanities. (I met her only once. She was appointed last year, a young lawyer specialising in housing. When I tried to talk to her about the history of the university, its hopes, its “radical innovation,” she didn’t want to know. I told her why I admired the place, why I felt in tune with Essex and its founding ideas. “That is all changing now,” she said quickly. “That is over.”) My “workload allocation,” which she would “instruct” my head of department to implement, was impossible to reconcile with the commitments which I had been encouraged—urged—to accept.

Confused but, more deeply, defeated by this new regime, Warner resigned. But she continued her work for the Man Booker Prize committee, which, as it turns out, has proven rather clarifying.

Among the scores of novels I am reading for the Man Booker International are many Chinese novels, and the world of Chinese communist corporatism, as ferociously depicted by their authors, keeps reminding me of higher education here, where enforcers rush to carry out the latest orders from their chiefs in an ecstasy of obedience to ideological principles which they do not seem to have examined, let alone discussed with the people they order to follow them, whom they cashier when they won’t knuckle under.

As a genre, Quit Lit has a few organizing features. Its form tends to be personal and aggrieved. The university, like those vague but all-powerful institutions in Kafka’s texts, has been overtaken by an alien, usually bureaucratic-statist-inhumane power. And its content tends to be not just about the decline of the university but also about the impending demise of the humanities. By turning universities into vocational schools, we are robbing our children of humanistic forms of thought and the good that ensues. (If scientists wrote prose like humanists, maybe they would be writing about the end of the university and the collapse of science. NPR had a go at Quit Lit this past week in their series on the dramatic cuts in basic science funding and the effects those cuts are having on future generations of scientists.)

As with all literary genres, Quit Lit has its predecessors. Before there were Rebecca Schuman and NeinQuarterly’s Eric Jarosinski, there was another German scholar experimenting in the genre, Friedrich Nietzsche. In 1872, just three years after he landed his first, and only, professorship at the University of Basel without even having finished his dissertation, Nietzsche delivered a series of lectures, On the Future of Our Educational Institutions, in the city museum. Before crowds of more than 300 people, Nietzsche staged a dialogue on the future of German universities and culture between two young students and a cantankerous old philosopher and his slow-witted but earnest assistant.

The grousing philosopher lamented the decline of universities into state-sponsored factories that produced pliant citizens and mindless, “castrated” scholars who cared not a bit for life. By the end of the lectures, it’s difficult to say whether Nietzsche thought there was a future at all for German universities. Nietzsche lasted a few more years in his position, resigning only when ill health forced him to. But he left an oeuvre that looked to the university and saw little but ruin.

As Nietzsche was writing, parts of the German university might not have been in decay, but they were in decline, the humanities in particular. Between 1841 and 1881, enrollment in philosophy, philology, and history within “philosophy faculties,” which comprised the core liberal arts fields, declined from 86.4 percent to 62.9 percent, whereas in mathematics and the natural sciences enrollments increased from 13.6 to 37.1 percent of all students matriculating at German universities. The mood among humanists was often such that they sounded quite a bit like the embattled literature professors of today. In academia, crisis is generally a matter of perception, and even in what now seems like a “golden age” for humanists, there was, in fact, a seismic shift for the humanities.

More recent forms of Quit Lit tend to lack a key feature of Nietzsche’s model, however. Nietzsche never conflated the humanities or humanistic inquiry with the university. For him, humanistic inquiry—and Nietzsche was deeply humanistic as his lifelong commitment to philology attests—transcended the institutional and historically particular shape of universities, which he saw as little more than extensions of a Prussian bureaucratic machine.

In what increasingly seems like a related genre, contemporary academics and intellectuals of all sorts have ostensibly been defending the humanities. But more often than not they actually defend certain forms of scholarship as they have come to be institutionalized in largely twentieth-century American research universities. Geoffrey Galt Harpham recently produced the most egregious but well-argued example of this tendency with The Humanities and the Dream of America. His basic thesis is that the humanities as they are now practiced were an invention of post–World War II American research universities. Similarly, Peter Brooks’s edited collection The Humanities and Public Life, with its focus on disciplines, scholarship, and the imperatives of the university, inadvertently echoes the same assumption. Both conflate the humanities with their departmental and institutional shapes in universities.

In the measured “yes but” prose of academic speak, Patrícia Vieira gives this spirit of conflation ethical shape in a review entitled “What Are the Humanities For?”:

Debates about the “future of the humanities” frequently revolve around the suspicion that the humanities might not have one. Yet despite the direness of this anxiety—an anxiety especially personal for every academic worried about professional choices or mortgage payments—conversations on the topic are often dull, long-faced affairs. Every professor has sat through one or another of these depressing discussions. The conversation proceeds according to a familiar set of pieces: there are passionate apologias of work in philosophy, literature, history, and the arts; veiled criticism of the anti-intellectualism of higher education administrators and society at large; and vague pledges to do more interdisciplinary research and extend a fraternal hand to the social and natural sciences, who remain largely unperturbed by this plight. The whole thing wraps up with the reassuring conviction that, if the humanities go down, they will do so in style (we study the arts, after all), and that truth is on our side, all folded in a fair dosage of indulgent self-pity.

Vieira can’t imagine the future of the humanities beyond the anxieties of professors and the failures of university administrators. All she can muster is a few gentle and inveterately academic admonitions for her authors:

Brooks’s and [Doris] Sommer’s [The Work of Art in the World: Civic Agency and Public Humanities] books coincide in their desire to persuade those skeptical about the importance of the arts and the humanities of their inherent worth. The volumes set out to prove that these disciplines play a crucial role in public life and that they are vital to contemporary culture. Brooks’s collection often falls short of this goal by sliding into fatalistic rhetoric about the doomed future of humanistic scholarship—the very discourse the book attempts to combat—all while ignoring some of the vibrant new research in the field. In contrast, Sommer is overconfident in the power of the arts to tackle thorny socioeconomic and political problems. Both the despondent and celebratory approaches are symptomatic of the beleaguered state of the field, forced to justify its existence based upon technocratic principles that demand immediate results and fast returns. The humanities are constantly compelled to demonstrate practical results or hopelessly admit to lacking a concrete and immediate function, straitjacketed into foreign modes of valuation lifted from the empirical sciences. Neither a dying set of disciplines nor a panacea for social ills, the humanities remain a central form of human enquiry, in that they shed light on and question the tacit assumptions upon which our societies are based, outline the history of these values, and identify alternatives to the status quo.

Despite her attempts to cast the humanities as a form of “human” inquiry, Vieira is writing about a beleaguered and exhausted profession. There are only professors and their disciplines here. And both are trapped, as Nietzsche would say, in a “castrated” passive voice: “The humanities are constantly compelled….” There are no agents in this drama, just put-upon, passive professors.

I am not suggesting that we should give up on universities. Universities, especially modern research universities, have long helped sustain and cultivate the practices and virtues central to the humanities. But just as German universities were becoming international paradigms, emulated from Baltimore to Beijing, Nietzsche made a fateful diagnosis. Those practices and virtues could ossify and wither in the arcane and self-justifying bowels of the modern, bureaucratic university. “Human inquiry,” in contrast, would live on.

We may well benefit from an exercise in imagination. Could the humanities survive the collapse of the university? I think so.