
John Searle and the Threat of Artificial Intelligence

 

John Searle wants to reassure us. The University of California, Berkeley, philosopher, a specialist on mind, “intentionality,” consciousness and other concepts integral to the distinct sort of being that we refer to as human being, wants us to know that we need not fear “super intelligent computers intentionally setting out on their own to destroy us.” Nor, it turns out, should we grow giddy about the imminent passage of human being into the “infosphere,” where “everything is information and . . . computers are much better at it.”

In this month’s issue of The New York Review of Books (October 9, 2014), Searle offers an extensive rebuttal of the theses advanced in two recent books: Nick Bostrom’s Superintelligence, which warns of the impending rise of “machine brains” superior to human ones, and Luciano Floridi’s The 4th Revolution, which announces the metaphysical emergence of the “infosphere.” Searle persuasively argues that both authors fail to account adequately for the vital role of consciousness in human being, and therefore miss in a fundamental sense what is really entailed in the creation of “artificial intelligence” and indeed “information.”

Searle offers several worthwhile distinctions, which he’s explored elsewhere, especially distinctions between “observer independent” and “observer relative” features of reality. The former exist “regardless of what we think”—mountains and molecules, for example. The latter “depen[d] on our attitudes” for their real existence—money and marriage, for example. Searle suggests that “information” falls in the latter category, as does “computation” when it is the product of machines. For only conscious agents, he argues, can have, create, or otherwise interact with “information” and “computation.” There is, then, no such thing as purely artificial information or intelligence, for there is no such thing as artificial consciousness. Conscious beings—something machines are not—must cooperate with the artificial mechanisms of information and computation in order for them to function in any way remotely as “intelligent” machines.

Or so it will be for the foreseeable future. It is possible, he surmises, that perhaps someday we will be able to make machines that “duplicate” the human brain, including consciousness. And it is at the point of this speculative possibility that Searle’s argument becomes both more interesting and more problematic, because it probes—somewhat indirectly, but powerfully nonetheless—the significance of the “artificial,” a category in which we can put “art,” “artifice,” and certainly “technology.”

A bit of background on the artificial might be helpful here. In ancient Greece, a story circulated about the creation of human beings by the gods that began uncomfortably with humans being left “naked and shoeless” and thus in grave danger before the elements. And so it was until Prometheus gave humans fire and the mechanical arts by which to sustain and preserve their lives. The “artificial,” we might say, saved human life.

But the Greeks were as capable of worrying about the artificial as they were of celebrating it: Most famously, Plato worried about simulacra, those copies of copies that did nothing but deceive and mislead humans in their quest for order and justice.

The Edenic account in the Hebrew scriptures is different from the Greek one in that it presumes the goodness of being naked and shoeless—until the great Fall, at which point artificial coverings were made to cover human nakedness in a gesture of divine mercy and judgment.

I could offer other examples of various ideas and arguments about the status and significance of the artificial in human life. Questions about the relationship between art and nature, or the artificial versus the real, are longstanding and taken up in many cultural traditions.

But what interests me here is Searle’s account, which is fascinatingly emblematic of our own age. Whereas these older accounts were concerned with the relationship between nature and art, Searle is concerned most crucially with what it takes to make something really or truly an artificial version of some non-artificial entity. What does it mean, Searle asks, to really “artificialize” (my own word, if such a word exists) something? “Artificial intelligence” as we now know it, argues Searle, may be artificial, but it is not really intelligence. So what would a truly artificial intelligence look like?

An artificial heart, by contrast, seems for Searle to be really an artificial heart. Why? Searle bases this distinction on that between “a simulation or model” and “duplication.” He writes:

Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both real and artificial hearts are physical pumps, unlike the computer model or simulation.

There is a strange literalism at work in Searle’s approach—or, better, an artificial essentialism. Causal processes are for Searle the essence of “reality”; the heart’s essence, it seems, is found for him in a basic causal function, pumping. In order to create a real artificial heart, that basic causal function needs to be literally, materially duplicated, or re-produced. Then we have, in a paradoxical formula, a real artificial heart.

But, I must ask, can that artificial heart skip a beat in a moment of terror or wonder? Such heart-stopping moments, too, can be understood in cause-and-effect terms. Can an artificial heart grow stronger with exercise, or weaker with poor nutrition, also causal phenomena? Can an artificial heart, to be a bit hyperbolic, be eaten and subject to the causal processes of digestion? If not, then clearly the artificial heart is not a “real artificial heart” in every respect, but only in one respect, albeit a very important one.

My point is that “duplication” is a poor measure of the “really artificial,” for it is in the very nature and substance of the “artificial” itself to have a relative and partial relationship to that which it is imitating, copying, or “duplicating.” The artificial heart duplicates some aspects of the natural heart, but not all aspects. And the same thing can be said about computerized artificial intelligence: Of course it is true that, as Searle writes, computers lack altogether a “psychological reality,” and are nothing but well-designed, highly functional circuit systems. Nevertheless, in certain circumstances they behave outwardly in a manner that we recognize as “intelligent.” This resemblance may be far less intense than the duplication of the “causal mechanism” of the human heart, but it is a resemblance nevertheless.

If the philosopher’s quest is to find the point at which “artificial X” crosses over to become a truly artificial X, I am afraid it may be a frustrating task, for sharp conceptual distinctions are not going to do it. Rather, we are better off thinking in terms of a continuum, on which perhaps “concepts” themselves might sit at one end, “models” somewhere in the middle, and “duplications” at the other end.

Searle, as I said, wants to reassure us: Computers are not going to take over the world, and we have not entered a new Aquarian age of the “infosphere.”

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

I don’t find this very reassuring, however. A greater danger than a future filled with “really artificial intelligence” is already squarely with us: We often behave as though computers believe, desire, and move. We ascribe human agency to them. And in this present reality, not an apocalyptic future, what is “really artificial” matters little. Rather, what we need is better reflection on the meaning of the artificial in our lives together.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America Since the Kennedy Assassination.


So Who Edits Google?

Last Tuesday the New York Times launched Watching, a new homepage feature that tracks developing news from the Times and “across the Web.” The newsfeed features prominently on the upper right-hand side of the homepage and runs 24/7. Like other web-based news feeds, Watching, writes Times reporter Marcus Mabry, will

help readers wade through the flood of news and other information that moves faster than ever before. Watching will feature news of the moment, such as early outlines of developing stories on NYTimes.com, and aggregate the most newsworthy and trusted reporting from around the web. It will also feature tweets from Times reporters and others, as well as photos and YouTube videos.

[Image: screenshot of the Times’s Watching newsfeed]

Like other contemporary technologies—search engines, Facebook news feeds, and Twitter updates—Watching is designed to help us filter and search amidst the dross of the Web.

But Watching, or so the editors at the Times want us to believe, is different: “Times editors—not algorithms—will make the choices.” Unlike the inhuman and inscrutable mechanisms powering other feeds, Watching will keep us in the sure, expert hands of the Times’s human editors.

The Times’s decision to launch a newsfeed feature is, however belated, understandable. Running a website as though it were simply a digital version of a print paper (just scroll down to get a below-the-fold feel) seems a losing proposition. The decision to emphasize Watching’s reliance on human, as opposed to computational, judgment is also understandable.

While Watching will have the fastest metabolism of The Times’s digital report, it will have the same standards as the rest of our journalism. Posts will undergo the same scrutiny as any other pieces on the NYTimes.com home page.

The Watching team, led by me [Mabry] and the reporter Jennifer Preston, will be in constant contact with The Times’s news desks.

As the Times continues to develop a digital strategy, it wants to protect and exploit its cultural advantages by distinguishing itself from new forms of information distribution. And appealing to widespread assumptions about algorithms might seem like a winning strategy. Algorithms, after all, don’t make judgments; they simply compute. Real editorial decisions are made in newsrooms by news people sitting at news desks. At least that’s what the Times wants us to believe. Its all-too-human editors will, after all, be curating and culling news from the Web, an environment populated with algorithms of all sorts.

With Watching, then, the Times pits two forms of journalistic authority against one another, the editorial and the algorithmic. And journalism becomes a proxy for broader questions and anxieties not only about how information is distributed in our digital age but also about how it is authorized. What kinds of information can we trust? What sources can we rely on? What, in short, is the difference between editorial and algorithmic objectivity, these two ways of giving an account of a world common to all?

Man vs. Machine 

Editorial authority is based on institutional practices and norms as embodied, for example, in the two human editors responsible for the Times’s new Watching feature. The authority and value of the newsfeed are based on their curatorial acumen and journalistic integrity. The circulation and evaluation of knowledge—what is, and is not, important enough to make it to Watching’s feed—run through them and their team of reporters.

Algorithmic authority, by contrast, is based, at least as advocates such as Google describe it, on the exclusion of such subjective forms of knowledge, that is, forms of knowledge embodied in the person of the expert. Its cultural authority relies, as Tarleton Gillespie puts it, on a “technically inflected promise of mechanical neutrality. Whatever choices are made are presented both as distant from the intervention of human hands, and as submerged inside of the cold workings of the machine.”

Algorithmic objectivity, however, is not a natural feature of algorithms. It is a carefully curated cultural good for which neutrality and suppression of the subjective are primary virtues. Google, for instance, recognizes that the efficiency and technical power of its search engine are meaningless, not to mention profitless, without the cultural authority granted it by users.

This authority rests primarily on a certain notion of objectivity. In “Ten Things We Know to be True,” Google promises: “We never manipulate rankings to put our partners higher in our search results and no one can buy better PageRank. Our users trust our objectivity and no short-term gain could ever justify breaching that trust.” Google’s “objectivity” is inseparable from the assumption that the iterative and computational character of its algorithms, such as PageRank, eradicates subjective error, prejudice, and poor judgment. Algorithms transcend personal predilections and give us access to the real.

Google’s algorithms, however, are not just a computational technology; they are also a communicative technology. They bring together the millions of individual decisions that users make every minute. The authority of PageRank, for instance, rests on the claim that it accurately reflects users’ decisions about what has value and what doesn’t. As Google says,

Google search works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. We assess the importance of every web page using more than 200 signals and a variety of techniques, including our patented PageRank™ algorithm, which analyzes which sites have been “voted” to be the best sources of information by other pages across the web.
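
To make the “links as votes” idea concrete, here is a minimal sketch of the kind of computation Google is describing. It is an illustration only, not Google’s actual implementation: the tiny link graph, the damping factor, and the fixed iteration count are all assumptions made for the sake of the example.

```python
# Minimal sketch of the "links as votes" idea behind PageRank.
# The link graph, damping factor, and iteration count below are
# illustrative assumptions, not Google's actual data or code.

links = {
    "a": ["b", "c"],  # page "a" links to ("votes for") pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}  # start everyone equal
    for _ in range(iterations):
        new_ranks = {}
        for page in pages:
            # A page's new rank is the damped sum of the rank each
            # inbound linker passes along, split among its outlinks.
            inbound = sum(
                ranks[src] / len(dests)
                for src, dests in links.items()
                if page in dests
            )
            new_ranks[page] = (1 - damping) / n + damping * inbound
        ranks = new_ranks
    return ranks

print(pagerank(links))  # pages with more inbound "votes" rank higher
```

Even in this toy version, the human judgments are plain to see: someone decided that a link counts as a vote, and that the damping factor should be 0.85. In the real thing, such decisions are simply nested, invisibly, inside the procedure.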

Algorithms facilitate a knowledge that exceeds the self. They give voice, insists Google, to the desires of a community of users. Algorithms help us to see ourselves in the scattered and disparate patchwork that is the web. But they do so at such scales that the individual user is little more than an algorithmic self, a flattened data point among millions.

In “The Relevance of Algorithms,” Tarleton Gillespie summarizes the differences between algorithmic and editorial authority:

The editorial logic depends on the subjective choices of experts, themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.

Editorial objectivity relies on the norms and practices of culturally authoritative institutions, whereas algorithmic authority relies on the cultural authority of mechanical neutrality. And yet, these cultural norms and notions of objectivity are not inherent to any particular technology, print or digital. Both editorial and algorithmic authority are cultural logics, as Gillespie says, that help us figure out what counts as knowledge and what doesn’t. Proponents of both, Times editors and Google spokespeople, pursue a similar end. They both want to identify and organize “relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation.” But they have different norms, practices, and epistemic assumptions about how to do so.

Determining What Counts as Knowledge

The norms governing and authorizing editorial objectivity, for example, have emerged over centuries of print and in debates about how information should be produced and distributed. These norms were never inherent in the medium of the printed newspaper—they were hard won.

Over the course of the eighteenth-century German Enlightenment, the underlying assumption of intellectuals was that the level of a society’s enlightenment was a function of its access to information. More print equaled more enlightenment. Information, so it seemed, wanted to be free, and print—be it in the form of books, newspapers, or journals—facilitated this unrestrained circulation.

In the final decades of the eighteenth century, however, German intellectuals began to doubt whether the unregulated flow of information in print necessarily led to a more enlightened society. Members of Berlin’s famed Wednesday Society, which included leading Prussian bureaucrats and editors, debated whether “each and every bit” of information should be immediately published. While some members argued for the “unrestricted freedom” of information, others argued that information should be carefully filtered and selected before it was distributed.

The members of the Wednesday Society were not simply debating how best to maintain their cultural privilege and power and enforce a system of state-sponsored censorship. They were also struggling to understand how information and knowledge should be distributed in an age in which print had begun to proliferate at an unprecedented rate. Were authors responsible for their ideas once in print? While all information might, in the abstract, seem worth distributing as broadly as possible, could some ideas be pernicious or dangerous? Was an absolutely unrestricted dissemination of information, whatever the source or quality, obviously the best option? Information will always be filtered and selected, so what norms should govern these practices and technologies of communication?

When the Times juxtaposes its editors with the Web’s algorithms, it appeals to this longer history of journalistic filters and selection technologies of which editorial authority is a key feature. But to cast algorithmic authority as inherently non-human and mechanistic is both false and dangerous.

Algorithms are technologies crafted by particular people with distinct intentions. Algorithms want, as Nicholas Carr puts it, what “the people who write them want.” It’s just that these desires, judgments, and decisions are so deeply nested within the algorithm itself that we usually can’t see beneath the beguiling simplicity of Google’s homepage. The algorithm, however mechanical we imagine it, can’t produce truth or knowledge, as Bethany Nowviskie puts it, independently of its human designers and users. Judgment is no less critical to the selective criteria of an algorithm than it is to a page of a good newspaper. But when we read a newspaper, even in its online form, we do so while applying a set of norms and expectations that have been formulated and institutionalized over centuries. We are only now beginning to understand why the unstated norms that shape the design and use of computational algorithms need to be made more explicit—and therefore subject to critical public debate. For now, Google and Facebook don’t have mastheads or editorial pages. The names of the people who make judgments about what an algorithm does are hidden. All we have is the veneer of mechanical neutrality and the invocation of an objectivity that operates beyond the human.
