The Hedgehog Review: Vol. 18 No. 3 (Fall 2016)
From the Editor
Exactly forty years ago, the social critic Daniel Bell published The Cultural Contradictions of Capitalism, an enduringly astute reading of a fateful paradox, or double bind, at the heart of modern capitalist societies. Bell argued that capitalism over time enfeebled capitalists by turning them into creatures concerned above all with personal gratification, a satisfaction they would find largely through the consumption of goods produced by an ever-expanding capitalist economy. The problem, as Bell saw it, was that the hedonic, self-gratifying impulse, encouraged by the increasingly cunning arts of advertising, would progressively undercut the very work ethic, disciplines, and virtues considered essential to the rise and maintenance of the capitalist regime. Overly simplistic? Many scholars and critics have said so. They have challenged Bell’s thesis on many of its key points, including its claim that such cultural enfeeblement would necessarily bring down the economic engine itself. (The fact that insatiable consumers continue to spur economic growth around the world does raise a question.) Yet whatever its alleged shortcomings, Bell’s provocative thesis cannot be ignored. Anyone seeking to make sense of one of the dominant institutions of the late modern world must give it its due.
Capitalism is not the only modern institution entangled in cultural contradictions. Science has been just as central to the rise of modernity—not least because of its supporting role in the rise of the capitalist economy. The practices of induction, observation, experimentation, theory testing, and falsification, particularly as these became codified (and sometimes simplistically labeled “the scientific method”) within professional associations and institutions devoted to the advancement and promulgation of scientific knowledge, and particularly as such knowledge was applied to a seemingly endless number of practical uses and technologies—all of these have added up to a formidable force behind the shaping of the modern world. The dominion of science is so wide and deep that its role in society and culture is sometimes difficult to identify or delimit. Its products suffuse our world, even as its epistemology shapes our understandings of it. “Invisible science,” as historian and sociologist Steven Shapin calls it in his essay in this issue, is so ubiquitous that “it would be tempting to say we don’t see ‘science everywhere’ in the same way that we don’t see the nose in front of our face, but it would be better to say that science is just the face of modernity. It’s what we see when we look in the mirror.”
But ubiquity can breed suspicion. One concern is that as the power of science grows, its dominion extends even into areas of our culture where its proclaimed authority is dubious. Absolute power corrupts absolutely, Lord Acton cautioned, and when the methods and criteria associated with modern science are held up as the only ones to which people should turn in determining what is true or good, or even in setting guidelines for the ordering of their lives, whether personal or political, then we may have a problem. We may indeed be witnessing a variation on what is often called scientism, the claim that science alone can arrive at truth about the world and reality.
Such a claim can be made overtly or subtly. The positivists of the nineteenth century—scientists and engineers as well as philosophers and social theorists—were enthralled by the visible achievements of modern science. They loudly championed assorted programs of thought and action to vindicate their confidence in the power of science to supplant all previous forms of knowledge, whether theological or metaphysical, and to support the progressive advance of human civilization and arrive at universal truths. As Auguste Comte, the high priest of positivism, understood it, science was the foundation of a new “Religion of Humanity.” This belief was something novel in the centuries-old evolution of modern science, during which most scientific luminaries, from Bacon to Descartes to Newton, understood their work as being fundamentally compatible with, if not directly supportive of, various theological and metaphysical conceptions of truth and reality (even if their work was inadvertently undercutting such conceptions). The “new science” was merely an explanation of the material, mechanical, and even mathematical workings of the natural but divinely created order. By contrast, the positivist moment was the first time, but certainly not the last, that true believers stepped forth to declare the absolute sovereignty of science.
That hubristic confidence, Lorraine Daston explains in her elegantly reasoned essay, “When Science Went Modern,” began to collapse within the Temple of Science itself, as various scientists and savants, including a number of positivists, came to terms with an increasingly evident reality: “The most unsettling shift in scientific self-understanding—about what science was and where it was going—began in the middle decades of the nineteenth century, reaching its climax circa 1900. It was around that time that scientists began to wonder uneasily about whether scientific progress was compatible with scientific truth. If advances in knowledge were never-ending, could any scientific theory or empirical result count as real knowledge—true forever and always? Or was science, like the monarchies of Europe’s anciens régimes and the boundaries of its states and principalities, doomed to perpetual revision and revolution?”
That scientific knowledge could be contingent or subject to endless revision, that it could admit of gaps and contradictions, raised doubts about the most expansive claims of science. Yet at a deeper cultural level, the implicit metaphysical foundations of modern science (including the core faith in reductive explanation) continued to shape the worldview of most denizens of modernity, disposing them to see the world and reality in terms of scientific knowledge. The scientistic impulse within science would also reassert itself in certain programmatic or theoretical efforts, from sociobiology and evolutionary psychology to the scientific study of human consciousness. One such program, involving and blending several theoretical and methodological strands of modern science, is the attempt to establish a credible science of morality. As James Davison Hunter and Paul Nedelisky show in their essay “Where the New Science of Morality Goes Wrong,” this effort joins a highly pedigreed intellectual endeavor dating from the sixteenth century, but recently emboldened by theoretical and technological advances that give its proponents reasons to “believe that we are at the start of a new age, when the power of science will dispel myths surrounding morality and moral difference and establish a truly rational foundation for ethical truth.” But do these reasons bear up under close scrutiny? Is it even clear what the new moral scientists are claiming to show? Hunter and Nedelisky suggest that the new science of morality faces a dilemma. It must “meet at least two challenges: the challenge of definition and the challenge of demonstration.” The dilemma has emerged, they argue, “because a science of morality can meet at most only one of these two challenges; grasping one horn requires releasing the other.”
In such instances in which the truth claims of science become both grandiose and murky, modern science finds itself implicated in cultural contradictions at least as profound as those involving capitalism. Today, when we hear conflicting scientific pronouncements being issued on almost any subject (climate change, diet, vaccination), when talk about the unreliability of peer review grows more voluble, when more and more seemingly firm findings prove to be unreplicable, when the practice of fundamental research is challenged by economic pressures for immediate and practical results (see Paul Scherz’s bracing critique, “Trivial Pursuits: The Decline of Scientific Research”), it is not hard to see why science, and particularly scientific authority, has become the focus of heated contestation and debate. As Ari N. Schulman observes in “Science Anxiety,” “We seem to be facing a slow-brewing crisis of scientific authority even as we hear ever more eager paeans to science. Although these defiant and deferential attitudes might seem at odds, they are each dysfunctional relations toward scientific authority, mutually reinforcing and commonly opposed to the empowering independence science is supposed to sustain. Both attitudes suggest a kind of infantilization.”
At the very least, the almost reflexive way in which we turn to science to support political agendas or social programs suggests a deep problem within our culture: a reluctance to acknowledge that questions of value and ultimate worth might not be resolved by appeal to the imaginary referee we call science. For all its accomplishments, science cannot displace humankind’s various repositories of wisdom—literary, historical, philosophical, religious—which draw on experiences that defy evaluation by the methods and techniques of the laboratory. To expect science to do more than it reasonably can may lead to an even more widespread distrust of what it demonstrably has done.