Monthly Archives: January 2014

The Best Case for the Humanities

Many years ago, a friend of mine was asked what she planned to do with her English degree after she graduated from university. Her reply was terse but only partly ironic: “I plan to read novels.” I continue to find hers the best of all possible replies to that dreaded question, and not only because I know this person went on to have a successful life that included the practice of law, a happy marriage, many friends, and much richly rewarding reading. It would be going too far to say that her success and happiness resulted from her long engagement with good novels and other works of literature and thought. After all, we’ve all known avid readers of literature whose lives have played out disastrously, sometimes in seeming imitation of a tragic fictional plot or character. But in this friend’s case, a sense and sensibility sharpened by steady attention to demanding books gave added richness to her life, even what I would be tempted to call added meaning. Who she was, who she became, and who she still is all strike me as the strongest justification of a lifelong engagement with what we capaciously call the humanities.

All that is prelude to a subject broached in one of my past THR blogs on the declining status of the humanities in the academy. There I discussed what I and others see as failure on the part of most humanists to make a compelling case for what they do. I stand corrected. Several outstanding humanists (mainly scholars of literature) contributing to the winter 2014 issue of Daedalus have done just that. And they have done so in the most compelling of ways. Following the directions of guest editor Denis Donoghue, a redoubtably shrewd specialist on Irish, British, and American literature, each contributor to “What Humanists Do” was asked to pick and discuss a text that has meant much to him or her. (Donoghue, exercising editorial prerogative, examines the texts of several formidable humanist-critics, including F. R. Leavis, Northrop Frye, and Jacques Derrida, but gives most attention to T. S. Eliot and one of his typically astringent critical judgments on how literature, in a very indirect way, gives us knowledge of life.)

J. Hillis Miller, now emeritus professor at the University of California, Irvine, begins his contribution with a few wry comments on Donoghue’s assignment.  First, he notes,

Such an issue of Daedalus would not be needed if the social utility of what humanists do were not the subject of widespread doubt. That utility used to be taken for granted. It is hardly necessary to rehearse the evidence for this doubt.

Rehearsing some of that evidence, Miller proceeds to describe the depressing reality of what too many humanists actually do these days. To begin with, many who find any employment at all end up as adjuncts, “typically teaching three or four composition courses a semester, often at several different colleges, for a poverty wage and often no benefits.” But even those lucky few who land on tenure track quickly find they have precious little time to devote to the careful reading and teaching of texts:

From graduate school until achieving status as a senior professor, literary scholars, like those in most academic fields, spend a great deal of time these days sending and answering email messages; serving on time-consuming departmental and university- or college-wide committees; writing seemingly innumerable letters of recommendation; serving as a departmental or program administrator….

And on and on the list of duties and distractions goes, making it abundantly clear why Miller framed the title of his essay as a question, “What Ought Humanists To Do?” He then gets down to the business at hand, using a close reading of two poems, one by Tennyson and one by Yeats, to discuss the vexingly hard work of making sense of figurative language. And the intrinsic worth of such an exercise? Well, if literature is figuration at a high level, and if all language is largely (if not entirely) figurative, then, Miller believes, those who “learn about the real world by reading literature” are well armed against the dangers of “taking figures of speech literally.”

That, in a way, is what many of the better essays in this volume illustrate: that dedicated, deep attentiveness to works of humanistic excellence profoundly alters us in relation to ourselves and to others. It does so, among other ways, by challenging the smugness of our natural egoism. Here is Patricia Meyer Spacks, a professor emerita at the University of Virginia, on George Eliot’s masterpiece, Middlemarch:

We rise, ideally, from moral stupidity to moral clarity, Middlemarch tells us. Moral learning consists in the perception and development of relationships and the experience of their obligations. The study of literature, which renders relationship in all its multitudinous and complicated aspects, contributes to such learning—not by providing precepts; often by making problems of responsibility more perplexing than ever….

Scott Russell Sanders’ lapidary essay on his 50-year-long relationship with Henry David Thoreau’s Walden shows not only how we read a great book but also how it reads us, pushing us to look unflinchingly at our ideals, our ambitions, our achievements, and our shortcomings.

Besieged as we are by advertisements and the cult of consumerism, racing to keep up with our gadgets, rushing from one sensation to the next, we need more than ever to ask the questions posed in Walden: What is life for? What are the necessities of a good life? How much is enough? Do we own our devices or do they own us? What is our place in nature? How do we balance individual freedom with social responsibility? How should we spend our days? Whether or not Walden speaks to your condition, I tell my students, there are other books that will do so, giving voice to what you have felt but have not been able to say, asking your deepest questions, stirring you to more intense life.

Is this vision of a transformative engagement with great works of literature, art, or thought sufficient justification of a humanistic education? Is it the best justification? I wonder how many leaders of institutions of higher learning would be willing and able to affirm that it is both.


Compared to What?

Rows of people at the movies on their phones

(credit: iStock)

Rutgers University professor Keith Hampton, profiled in a recent New York Times Magazine article,  challenges the claims of fellow social scientists such as MIT’s Sherry Turkle that digital technologies are driving us apart:

Hampton found that, rather than isolating people, technology made them more connected. “It turns out the wired folk — they recognized like three times as many of their neighbors when asked,” Hampton said. Not only that, he said, they spoke with neighbors on the phone five times as often and attended more community events. Altogether, they were much more successful at addressing local problems, like speeding cars and a small spate of burglaries. They also used their Listserv to coordinate offline events, even sign-ups for a bowling league. Hampton was one of the first scholars to marshal evidence that the web might make people less atomized rather than more. Not only were people not opting out of bowling leagues — Robert Putnam’s famous metric for community engagement — for more screen time; they were also using their computers to opt in.

For Hampton, what debates and research about the effects of digital technologies on our lives so often lack is historical perspective.

“We’re really bad at looking back in time,” Hampton said, speaking of his fellow sociologists. “You overly idealize the past. It happens today when we talk about technology. We say: ‘Oh, technology, making us isolated. We’re disengaged.’ Compared to what? You know, this kind of idealized notion of what community and social interactions were like.” He crudely summarized his former M.I.T. colleague Sherry Turkle’s book “Alone Together.” “She said: ‘You know, today, people standing at a train station, they’re all talking on their cellphones. Public spaces aren’t communal anymore. No one interacts in public spaces.’ I’m like: ‘How do you know that? We don’t know that. Compared to what? Like, three years ago?’”

Although the merits of Hampton’s particular study can be debated, he makes an important point when he asks simply, “compared to what?” Those who make arguments about technology’s deleterious effects on our ability to converse with one another, to pay attention, or to read closely usually presume some way that we ought to talk to each other, that we ought to attend to a given object or event, or that we ought to read.

And maybe these critics are right; perhaps we ought to carry on in the ways they presume we should. But appealing to history to make these normative claims is a much trickier move. History is fraught and full of bad conversation, distraction, and poor reading.


Human Freedom and the Art of Nudging

Can behavioral economics improve public policy? Should it? Making the case for both in their 2008 best-selling book, Nudge, Richard Thaler and Cass Sunstein popularized the idea that small tweaks to policies can make those policies more alert to innate human habits and behavior. Such tweaks can thus “nudge” individuals to make choices that are more beneficial to themselves, and to society—stopping smoking during pregnancy, say, or saving more for retirement, or purchasing appliances that are more energy efficient. When President Barack Obama appointed Sunstein as the head of the Office of Information and Regulatory Affairs in 2009 to tinker with existing regulatory policies, it was seen as a sign that Obama would work to nudge the government toward more nudging. And indeed, the White House is now forming a team to explore how behavioral economics may contribute to “evidence-based policy-making” in the future.

Nudging has not found such a warm welcome in all quarters, though. Richard Williams, director of policy research at George Mason University’s Mercatus Center, recently argued that the Obama administration’s growing interest in using behavioral economics signifies a renewed and creeping enthusiasm for massive government oversight and coercion of its citizens. Like many libertarians before him, he traces this critique back to an overall anxiety that a “nanny state” will interfere with our lives and take away our natural freedom.

Some of this, to be sure, typifies the kind of ideological advocacy that prevails in so many think tanks today. But we should take people at their word when they make arguments, and the fact that Williams’ anxiety centers on behavioral economics is, if not philosophically sophisticated, certainly interesting. At the least, it reveals something about the confused nature of many contemporary political invocations of the ideal of freedom.

Libertarian anxieties about the “nanny state” tend to focus on governmental incursions into freedom, usually identified with new legislation: Don’t tell me I can’t do what I want!, the thinking goes. Williams updates this concern to address the more subtle form that policy “tweaks” in light of behavioral economics might take: And don’t make me want what I don’t want!

This new fear is not just that government will limit the exercise of our agency but that it might also shape it in some way. Thus the complaint that a government that uses behavioral economics to tailor its policies will “treat you like a child.” What this assumes is that you are naturally an adult, someone who is in complete control of yourself, including your desires—absent government “nudging,” your selection when buying a car, to use Williams’ example, will be wholly innocent of influence from forces outside your own bare (and perhaps given) preferences. On this account, behavioral economics is not only a form of tyranny; it is also a form of creepy mind control.

Cereal aisle at a grocery store

Behavioral economics actually allows more freedom than does government regulation, since in a world that takes account of behavioral economics, the sugar cereal is not illegal; it may simply be placed on the highest shelf. (iStock)

But this anxiety rests upon a flawed and misleading picture of the human person, especially with regard to how desires are shaped. The fact is, our agency is always being shaped by external factors. We shouldn’t have needed behavioral economics to show us that we are not as rational, or as fully in control of our choices, as we’d like to believe. The homo economicus ideal of the rational utility-maximizing individual, impervious to outside influence, whose solitary choices and subjective preferences essentially construct his or her self, would have been laughed out of court by Plato, or Aristotle, or the Stoics, or Augustine, or Aquinas, or even Hume or Kant, had anyone been so clueless as to propose it to them. Modern thinkers as diverse as Nietzsche, Freud, and Bonhoeffer have also exposed the inadequacy of this picture of freedom. Even today, it doesn’t take a scientist to prove that such an account cannot make sense of the reality of our own lives. Not one of us grows to adulthood without being shaped by forces beyond ourselves, including our parents, our peers, our schoolteachers, and our cultural context.

And therein lies the rub: we are not simply “self-made” men and women, and our consciously held preferences, beliefs, and intentions never tell the whole story. Rather, we are powerfully shaped by our relationships and cultural habits, long before we come to self-aware possession of our ability to desire, deliberate, and act. The spectrum of criteria we use for making decisions is never simply up to us, but is always already shaped by forces beyond our own individual agency.

Behavioral economics didn’t make this discovery; it merely points out that it is occasionally possible to predict particular ways in which this is true, and that various non-rational factors shape our economic decisions. It identifies some of the surprisingly patterned ways our brains interact with reality: we consistently overvalue some things, we consistently undervalue others, and we are more likely to buy the box of cereal displayed at eye level, no matter what our stated preferences.

One of the key insights of behavioral economics, then, is that when we face choices, the way we decide what to do is at least partially shaped by how those choices are presented to us. Furthermore, it makes clear that some shaping of our decision process by the presentation of those choices is unavoidable, for the choices are inevitably presented to us in some structure.

In other words, it shows us that our “choice architecture,” the structure of options through which we make our decisions, is a quite important part of our agency. And this choice architecture is something that humans decide on, deliberately or not. It is, that is to say, contingent, and determined by human decisions. For there is always some box of cereal at eye level, whether that’s because the government decided to promote something, or, more realistically, because Kellogg’s pays for the prime shelf space.

So if, on the one hand, behavioral economics teaches us that we are less free than we might like to think—because our agency is inescapably influenced by forces outside ourselves—it also teaches us, on the other hand, that we are more free than we currently acknowledge, because we can change those environmental forces.

The issue, then, is not between freedom and tyranny. The issue is whether we will choose to shape those forces consciously and deliberately, or instead let them be determined by purely economic factors, as is currently the case with the eye-level Kellogg’s cereal. (And if you don’t think that companies and ad agencies are using behavioral economics right now to fine-tune their appeals to you, we have some prime swampland in Florida to sell you.) That is, the choice is not between a paternalistic “bureaucrat in Washington DC” and “you,” or between being “nudged” or manipulated by someone else and having your own innocent agency; the choice is between having the nudger be responsive to political leaders whom you put in power and having the nudger be, say, some advertising executive over whose decisions you never have any say.

Behavioral economics is thus not against freedom. Rather, as a tool it may make us more free, by making us more conscious of the nature of our freedom and prompting us to recognize a certain responsibility for determining the architecture that influences our choices.

In so offering itself, behavioral economics becomes an unlikely ally to those in the humanities who want not only to question the model of the rational, choosing self that has dominated economics for so long but also to question the naturalization of worldviews that have relied upon this unrealistic account of freedom. That is to say, this ideal has functioned as an invisible “nudge” of its own by, among other things, legitimating the political worldview inhabited by many libertarian concerns.

Consider Williams again. His description of the political sphere as one in which “traditionally consumers empower governments to protect them,” for example, expresses a historically located and contestable consumerist account of politics, one less about the collective pursuit of substantive common goods than about preserving maximally productive markets with minimal externalities. (To be clear, there is nothing “traditional” about this particular account of government; it would be profoundly alien, if not antithetical, to the beliefs of founders such as John Adams, Thomas Jefferson, and James Madison.) More generally, this picture of politics is often rooted in the view that libertarian economics is an “obvious” reality that the world presents to us, that freedom of consumer choice (the sheer presence of it—never mind the choice between what, or under what circumstances) is as substantial as freedom really ever needs to be, and that outcomes of procedurally free markets are at least “natural,” if not socially desirable.

Now, this whole line of thought is dubious. If it were true, libertarianism would be attractive across societies as their natural default position. Yet it has rarely emerged except when it has been the beneficiary of enormous financial subvention by wealthy proponents. As a worldview, libertarianism is the product of tremendous amounts of public shaping, over decades, meshing with a deep if hidden history of unacknowledged privilege and resentment at those who are “on the dole” at our (really quite minimal) expense.

This sort of talk, that is to say, reveals a political imagination that has been “nudged” into its particular formation by history and culture, not one that has developed in some sort of isolated, ex nihilo way. Although we should take this perspective seriously, we should not do so by granting it the privilege of being the common sense view of reality.

Williams is right to point out that government use of behavioral economics is a topic worthy of public reflection. We certainly ought to have input on the ends toward which our decisions are being shaped. But he’s wrong that this development poses a new and nefarious threat to freedom. If anything, as we’ve said, it allows more freedom than does government regulation, since in a world that takes account of behavioral economics, the sugar cereal is not illegal, but it may be more expensive (because, perhaps, the future health care costs for those who choose to eat it are somewhat factored in), and it may be placed on the highest shelf. Moreover, the public use of behavioral economics allows more reflective control over the directions in which we are being nudged than does pretending that we are not being nudged constantly, from all angles, and for profit or other motives.

If reflecting on this state of affairs brings up uncomfortable questions about whether there is such a thing as unconditioned consumer agency at all, and about how markets work in the first place (are “slotting fees” a good idea? do they make cereal markets less “free”?), so much the better.

Charles Mathewes is the Carolyn M. Barbour Professor of Religious Studies at the University of Virginia and a faculty fellow at the Institute for Advanced Studies in Culture. Christina McRorie is a doctoral candidate in religious studies and a doctoral fellow at the Institute for Advanced Studies in Culture.


Lyndon Johnson’s War

President Johnson’s 1964 State of the Union address. (Credit: LBJ Library photo by Cecil Stoughton)

Few would dispute that America’s war on poverty—declared 50 years ago by President Lyndon B. Johnson in his State of the Union Address—is still a long way from over. With 15 percent of Americans today living below the poverty line, only four percentage points fewer than when Johnson launched his campaign, many might even agree with Ronald Reagan’s stinging assessment that “poverty won.”

The stricter nonpartisan truth is that no war on poverty could have been won any more than it could have been lost. This is not to deny that many of Johnson’s Great Society programs—from expanded food stamps and Medicaid to Head Start and job-promoting tax cuts—did much to improve the quality of poor Americans’ lives. They even made many of those lives possible, and that is no small accomplishment.

Yet for all that he achieved, Johnson did his cause no good when he framed it as a war. To be sure, war metaphors are a common staple of political rhetoric, used to mobilize popular support for worthy campaigns, from J. Edgar Hoover’s war on crime to George W. Bush’s war on terror. But their use often has unintended consequences, as scholars and pundits have frequently observed. The war metaphor creates the expectation of an eventual conclusion, ideally a victory, something that is almost impossible to achieve in dealing with intractable social and existential conditions. It also implies the existence of a clearly defined enemy, but complex systemic problems have elusive and problematic bogeymen. (If our capitalist, free-enterprise system, with its inevitable winners and losers, is partly the problem, do we really want to “defeat” it?) And by implying a clear-cut struggle, the metaphor can demonize those who differ with proposed strategies for victory. Oversimplification, disappointed expectations, frustration, political polarization, and a general weariness have been just a few of the unfortunate outcomes of this protracted metaphorical war.

President Lyndon B. Johnson greets a resident of Appalachia during his Poverty Tour of Appalachia. (Credit: LBJ Library photo by Cecil Stoughton)

But arguably the most perverse consequence of Johnson’s rhetoric has been the gradual, almost imperceptible stigmatization of the very people the war was intended to help. If 50 years of fighting haven’t eradicated the problem of poverty, then, many people conclude, isn’t it possible that poor people themselves are the problem? However simplistic, the logic of that conclusion comports all too well with a range of stereotypes, misconceptions, half-truths, and prejudices: The poor are different. Their characters are deficient. Helping them only makes things worse.

At the very least, this blaming-the-victim syndrome has eroded the confidence of poor Americans, convincing many that they are failures. More broadly, it has contributed to something like the disappearance of the poor, in both figurative and concrete ways. Figuratively, people struggling at the bottom of the economic ladder became faceless as they were subsumed under the category of poverty, losing their individuality, distinctiveness, and humanity in the process. More concretely, they increasingly disappeared into their own distinct worlds, growing up, attending schools, working, suffering illnesses, and dying in places that are cut off and separate from those of the better off.

Such a separation, for both tangible and intangible reasons, makes it even harder for the least well off to improve their condition. They live in neighborhoods and communities with schools and other public accommodations that lag well behind those found in the wealthier precincts, which in itself hugely complicates the daily business of getting on. But the lack of face-to-face contact also results in a growing values divide, one that conservative social analyst Charles Murray takes pains to describe in his latest book, Coming Apart (2012).

While Murray focuses on the harm this does to the poor, it also has a debilitating effect on the middle classes and indeed on all Americans. It does so by eroding their sense of the common good, of social solidarity and trust, the absence of which allows a brutal sort of zero-sum thinking to prevail. We see the effects of this declining solidarity in the most obvious ways. It is widely acknowledged, for example, that growing income inequality makes many of the middle class fearful of falling into poverty themselves. But so far that rising concern has failed to produce the political will to mitigate the worst effects of inequality on those who earn the least, even by such modest measures as boosting the federal minimum wage to the inflation-adjusted level of 1964.

The poor are different in only one respect: They have less money. Poverty will not go away. Nor will any war defeat it. But the plight of poor Americans will be less crushing, and less hurtfully defining, if they are seen as part of a shared body, as equals, as deserving of decent and respectful consideration as any other part of that body. The best way to honor Johnson’s idealism is to declare his war over and, then, to rededicate ourselves to forging a true “one out of many.”
