Last Tuesday the New York Times launched Watching, a new homepage feature that tracks developing news from the Times and “across the Web.” The newsfeed features prominently on the upper right-hand side of the homepage and runs 24/7. Like other web-based news feeds, Watching, writes Times reporter Marcus Mabry, will
help readers wade through the flood of news and other information that moves faster than ever before. Watching will feature news of the moment, such as early outlines of developing stories on NYTimes.com, and aggregate the most newsworthy and trusted reporting from around the web. It will also feature tweets from Times reporters and others, as well as photos and YouTube videos.
Like other contemporary technologies—search engines, Facebook news feeds, and Twitter updates—Watching is designed to help us filter and search amidst the dross of the Web.
But Watching, or so the editors at the Times want us to believe, is different: “Times editors—not algorithms—will make the choices.” Unlike the inhuman and inscrutable mechanisms powering other feeds, Watching will keep us in the sure, expert hands of the Times’s human editors.
The Times’s decision to launch a newsfeed feature is, however belated, understandable. Running a website as though it were simply a digital version of a print paper (just scroll down for a below-the-fold feel) seems a losing proposition. The decision to contrast Watching’s reliance on human judgment with the computational judgment of other feeds is also understandable.
While Watching will have the fastest metabolism of The Times’s digital report, it will have the same standards as the rest of our journalism. Posts will undergo the same scrutiny as any other pieces on the NYTimes.com home page.
The Watching team, led by me [Mabry] and the reporter Jennifer Preston, will be in constant contact with The Times’s news desks.
As the Times continues to develop a digital strategy, it wants to protect and exploit its cultural advantages by distinguishing itself from new forms of information distribution. And appealing to widespread assumptions about algorithms might seem like a winning strategy. Algorithms, after all, don’t make judgments; they simply compute. Real editorial decisions are made in newsrooms by news people sitting at news desks. At least that’s what the Times wants us to believe. Their all-too-human editors will, after all, be curating and culling news from the Web, an environment populated with algorithms of all sorts.
With Watching, then, the Times pits two forms of journalistic authority against one another, the editorial and the algorithmic. And journalism becomes a proxy for broader questions and anxieties not only about how information is distributed in our digital age but also about how it is authorized. What kinds of information can we trust? What sources can we rely on? What, in short, is the difference between editorial and algorithmic objectivity, these two ways of giving an account of a world common to all?
Man vs. Machine
Editorial authority is based on institutional practices and norms as embodied, for example, in the two human editors responsible for the Times’s new Watching feature. The authority and value of the newsfeed are based on their curatorial acumen and journalistic integrity. The circulation and evaluation of knowledge—what is, and is not, important enough to make it to Watching’s feed—runs through them and their team of reporters.
Algorithmic authority, by contrast, is based, at least as advocates such as Google describe it, on the exclusion of such subjective forms of knowledge, that is, forms of knowledge embodied in the person of the expert. Its cultural authority relies, as Tarleton Gillespie puts it, on a “technically inflected promise of mechanical neutrality. Whatever choices are made are presented both as distant from the intervention of human hands, and as submerged inside of the cold workings of the machine.”
Algorithmic objectivity, however, is not a natural feature of algorithms. It is a carefully curated cultural good for which neutrality and suppression of the subjective are primary virtues. Google, for instance, recognizes that the efficiency and technical power of its search engine are meaningless, not to mention profitless, without the cultural authority granted it by users.
This authority rests primarily on a certain notion of objectivity. In “Ten Things We Know to be True,” Google promises: “We never manipulate rankings to put our partners higher in our search results and no one can buy better PageRank. Our users trust our objectivity and no short-term gain could ever justify breaching that trust.” Google’s “objectivity” is inseparable from the assumption that the iterative and computational character of its algorithms, such as PageRank, eradicates subjective error, prejudice, and poor judgment. Algorithms transcend personal predilections and give us access to the real.
Google’s algorithms, however, are not just computational technologies; they are communicative technologies. They bring together the millions of individual decisions that users make every minute. The authority of PageRank, for instance, rests on the claim that it accurately reflects users’ decisions about what has value and what doesn’t. As Google says,
Google search works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. We assess the importance of every web page using more than 200 signals and a variety of techniques, including our patented PageRank™ algorithm, which analyzes which sites have been “voted” to be the best sources of information by other pages across the web.
Algorithms facilitate a knowledge that exceeds the self. They give voice, insists Google, to the desires of a community of users. Algorithms help us to see ourselves in the scattered and disparate patchwork that is the web. But they do so at such scales that the individual user is little more than an algorithmic self, a flattened data point among millions.
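The “voting” logic Google describes can be illustrated with a simple power iteration over a hypothetical toy graph. This is only a sketch of the core idea; the page names, damping factor, and iteration count are illustrative assumptions, and Google’s actual ranking combines this with hundreds of other signals.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank scores by power iteration.

    links maps each page to the list of pages it links to.
    Each page distributes its current rank as "votes" to the
    pages it links to; damping models a user who occasionally
    jumps to a random page instead of following a link.
    """
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}  # start with equal rank
    for _ in range(iterations):
        new_ranks = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # a dangling page shares its rank with everyone
                for p in pages:
                    new_ranks[p] += damping * ranks[page] / n
            else:
                share = damping * ranks[page] / len(outgoing)
                for target in outgoing:
                    new_ranks[target] += share
        ranks = new_ranks
    return ranks

# A hypothetical four-page web: "a" is linked to ("voted" for)
# by three other pages, so it ends up with the highest rank.
toy_web = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
scores = pagerank(toy_web)
top_page = max(scores, key=scores.get)
```

The point the sketch makes is the one the essay presses on: even this “mechanical” procedure embeds human choices, from the damping value to the decision that a link counts as a vote at all.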
In “The Relevance of Algorithms,” Tarleton Gillespie summarizes the differences between algorithmic and editorial authority:
The editorial logic depends on the subjective choices of experts, themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.
Editorial objectivity relies on the norms and practices of culturally authoritative institutions, whereas algorithmic authority relies on the cultural authority of mechanical neutrality. And yet, these cultural norms and notions of objectivity are not inherent to any particular technology, print or digital. Both editorial and algorithmic authority are cultural logics, as Gillespie says, that help us figure out what counts as knowledge and what doesn’t. Proponents of both, the Times’s editors and Google’s spokespeople, pursue a similar end. They both want to identify and organize “relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation.” But they have different norms, practices, and epistemic assumptions about how to do so.
Determining What Counts as Knowledge
The norms governing and authorizing editorial objectivity, for example, have emerged over centuries of print and in debates about how information should be produced and distributed. These norms were never inherent in the medium of the printed newspaper—they were hard won.
Over the course of the eighteenth-century German Enlightenment, the underlying assumption of intellectuals was that the level of a society’s enlightenment was a function of its access to information. More print equaled more enlightenment. Information, so it seemed, wanted to be free, and print—be it in the form of books, newspapers, or journals—facilitated this unrestrained circulation.
In the final decades of the eighteenth century, however, German intellectuals began to doubt whether the unregulated flow of information in print necessarily led to a more enlightened society. Members of Berlin’s famed Wednesday Society, which included leading Prussian bureaucrats and editors, debated whether “each and every bit” of information should be immediately published. While some members argued for the “unrestricted freedom” of information, others argued that information should be carefully filtered and selected before it was distributed.
The members of the Wednesday Society were not simply debating how best to maintain their cultural privilege and power and enforce a system of state-sponsored censorship. They were also struggling to understand how information and knowledge should be distributed in an age in which print had begun to proliferate at an unprecedented rate. Were authors responsible for their ideas once in print? While all information might, in the abstract, seem worth distributing as broadly as possible, could some ideas be pernicious or dangerous? Was an absolutely unrestricted dissemination of information, whatever the source or quality, obviously the best option? Information will always be filtered and selected, so what norms should govern these practices and technologies of communication?
When the Times juxtaposes its editors with the Web’s algorithms, it appeals to this longer history of journalistic filters and selection technologies of which editorial authority is a key feature. But to cast algorithmic authority as inherently non-human and mechanistic is both false and dangerous.
Algorithms are technologies crafted by particular people with distinct intentions. Algorithms want, as Nicholas Carr puts it, what “the people who write them want.” It’s just that these desires, judgments, and decisions are so deeply nested within the algorithm itself that we usually can’t see beneath the beguiling simplicity of Google’s homepage. The algorithm, however mechanical we imagine it, can’t produce truth or knowledge, as Bethany Nowviskie puts it, independently of its human designers and users. Judgment is no less critical to the selective criteria of an algorithm than it is to a page of a good newspaper.

But when we read a newspaper, even in its online form, we do so while applying a set of norms and expectations that have been formulated and institutionalized over centuries. We are only now beginning to understand why the unstated norms that shape the design and use of computational algorithms need to be made more explicit—and therefore subject to critical public debate. For now, Google and Facebook don’t have mastheads or editorial pages. The names of the people who make judgments about what an algorithm does are hidden. All we have is the veneer of mechanical neutrality and the invocation of an objectivity that operates beyond the human.
. . . . . . . .