So Who Edits Google?

Last Tuesday the New York Times launched Watching, a new homepage feature that tracks developing news from the Times and “across the Web.” The newsfeed features prominently on the upper right-hand side of the homepage and runs 24/7. Like other web-based news feeds, Watching, writes Times reporter Marcus Mabry, will

help readers wade through the flood of news and other information that moves faster than ever before. Watching will feature news of the moment, such as early outlines of developing stories on NYTimes.com, and aggregate the most newsworthy and trusted reporting from around the web. It will also feature tweets from Times reporters and others, as well as photos and YouTube videos.

Like other contemporary technologies—search engines, Facebook news feeds, and Twitter updates—Watching is designed to help us filter and search amidst the dross of the Web.

But Watching, or so the editors at the Times want us to believe, is different: “Times editors—not algorithms—will make the choices.” Unlike the inhuman and inscrutable mechanisms powering other feeds, Watching will keep us in the sure, expert hands of the Times’s human editors.

The Times’s decision to launch a newsfeed feature is, however belated, understandable. Running a website as though it were simply a digital version of a print paper (just scroll down to get that below-the-fold feel) seems a losing proposition. The decision to emphasize Watching’s reliance on human rather than computational judgment is also understandable.

While Watching will have the fastest metabolism of The Times’s digital report, it will have the same standards as the rest of our journalism. Posts will undergo the same scrutiny as any other pieces on the NYTimes.com home page.

The Watching team, led by me [Mabry] and the reporter Jennifer Preston, will be in constant contact with The Times’s news desks.

As the Times continues to develop a digital strategy, it wants to protect and exploit its cultural advantages by distinguishing itself from new forms of information distribution. And appealing to widespread assumptions about algorithms might seem like a winning strategy. Algorithms, after all, don’t make judgments; they simply compute. Real editorial decisions are made in newsrooms by news people sitting at news desks. At least that’s what the Times wants us to believe. Their all-too-human editors will, after all, be curating and culling news from the Web, an environment populated with algorithms of all sorts.

With Watching, then, the Times pits two forms of journalistic authority against one another, the editorial and the algorithmic. And journalism becomes a proxy for broader questions and anxieties not only about how information is distributed in our digital age but also about how it is authorized. What kinds of information can we trust? What sources can we rely on? What, in short, is the difference between editorial and algorithmic objectivity, these two ways of giving an account of a world common to all?

Man vs. Machine 

Editorial authority is based on institutional practices and norms as embodied, for example, in the two human editors responsible for the Times’s new Watching feature. The authority and value of the newsfeed are based on their curatorial acumen and journalistic integrity. The circulation and evaluation of knowledge—what is, and is not, important enough to make it to Watching’s feed—runs through them and their team of reporters.

Algorithmic authority, by contrast, is based, at least as advocates such as Google describe it, on the exclusion of such subjective forms of knowledge, that is, forms of knowledge embodied in the person of the expert. Its cultural authority relies, as Tarleton Gillespie puts it, on a “technically inflected promise of mechanical neutrality. Whatever choices are made are presented both as distant from the intervention of human hands, and as submerged inside of the cold workings of the machine.”

Algorithmic objectivity, however, is not a natural feature of algorithms. It is a carefully curated cultural good for which neutrality and suppression of the subjective are primary virtues. Google, for instance, recognizes that the efficiency and technical power of its search engine are meaningless, not to mention profitless, without the cultural authority granted it by users.

This authority rests primarily on a certain notion of objectivity. In “Ten Things We Know to be True,” Google promises: “We never manipulate rankings to put our partners higher in our search results and no one can buy better PageRank. Our users trust our objectivity and no short-term gain could ever justify breaching that trust.” Google’s “objectivity” is inseparable from the assumption that the iterative and computational character of its algorithms, such as PageRank, eradicates subjective error, prejudice, and poor judgment. Algorithms transcend personal predilections and give us access to the real.

Google’s algorithms, however, are not just a computational technology; they are a communicative technology. They bring together the millions of individual decisions that users make every minute. The authority of PageRank, for instance, rests on the claim that it accurately reflects users’ decisions of what has value and what doesn’t. As Google says,

Google search works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. We assess the importance of every web page using more than 200 signals and a variety of techniques, including our patented PageRank™ algorithm, which analyzes which sites have been “voted” to be the best sources of information by other pages across the web.

Algorithms facilitate a knowledge that exceeds the self. They give voice, insists Google, to the desires of a community of users. Algorithms help us to see ourselves in the scattered and disparate patchwork that is the web. But they do so at such scales that the individual user is little more than an algorithmic self, a flattened data point among millions.
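The “voting” logic Google describes can be sketched in a few lines of code. What follows is a toy illustration of the general power-iteration idea behind PageRank-style ranking, not Google’s actual implementation (which, as the company notes, draws on more than 200 signals); the example graph and parameter values are invented for illustration.

```python
# Toy sketch of PageRank-style "voting": each page's rank is repeatedly
# redistributed along its outbound links, so pages that receive more
# inbound links ("votes") accumulate higher rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        # each page keeps a small "teleport" share regardless of links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page's rank is split evenly among the pages it votes for
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page with no outlinks: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A tiny invented link graph: pages a, b, and d all "vote" for c.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
ranks = pagerank(graph)
# page "c", with the most inbound links, ends up with the highest rank
```

The point of the sketch is the one the essay makes: the algorithm does not judge content at all. It simply aggregates the link-placing decisions of others, which is precisely why Google can present its results as both communal and mechanically neutral.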

In “The Relevance of Algorithms,” Tarleton Gillespie summarizes the differences between algorithmic and editorial authority:

The editorial logic depends on the subjective choices of experts, themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.

Editorial objectivity relies on the norms and practices of culturally authoritative institutions, whereas algorithmic authority relies on the cultural authority of mechanical neutrality. And yet, these cultural norms and notions of objectivity are not inherent to any particular technology, print or digital. Both editorial and algorithmic authority are cultural logics, as Gillespie says, that help us figure out what counts as knowledge and what doesn’t. Proponents of both, Times’s editors and Google spokespeople, pursue a similar end. They both want to identify and organize “relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation.” But they have different norms, practices, and epistemic assumptions about how to do so.

Determining What Counts as Knowledge

The norms governing and authorizing editorial objectivity, for example, have emerged over centuries of print and in debates about how information should be produced and distributed. These norms were never inherent in the medium of the printed newspaper—they were hard won.

Over the course of the eighteenth-century German Enlightenment, the underlying assumption of intellectuals was that the level of a society’s enlightenment was a function of its access to information. More print equaled more enlightenment. Information, so it seemed, wanted to be free, and print—be it in the form of books, newspapers, or journals—facilitated this unrestrained circulation.

In the final decades of the eighteenth century, however, German intellectuals began to doubt whether the unregulated flow of information in print necessarily led to a more enlightened society. Members of Berlin’s famed Wednesday Society, which included leading Prussian bureaucrats and editors, debated whether “each and every bit” of information should be immediately published. While some members argued for the “unrestricted freedom” of information, others argued that information should be carefully filtered and selected before it was distributed.

The members of the Wednesday Society were not simply debating how best to maintain their cultural privilege and power and enforce a system of state-sponsored censorship. They were also struggling to understand how information and knowledge should be distributed in an age in which print had begun to proliferate at an unprecedented rate. Were authors responsible for their ideas once in print? While all information might, in the abstract, seem worth distributing as broadly as possible, could some ideas be pernicious or dangerous? Was an absolutely unrestricted dissemination of information, whatever the source or quality, obviously the best option? Information will always be filtered and selected, so what norms should govern these practices and technologies of communication?

When the Times juxtaposes its editors with the Web’s algorithms, it appeals to this longer history of journalistic filters and selection technologies of which editorial authority is a key feature. But to cast algorithmic authority as inherently non-human and mechanistic is both false and dangerous.

Algorithms are technologies crafted by particular people with distinct intentions. Algorithms want, as Nicholas Carr puts it, what “the people who write them want.” It’s just that these desires, judgments, and decisions are so deeply nested within the algorithm itself that we usually can’t see beneath the beguiling simplicity of Google’s homepage. The algorithm, however mechanical we imagine it, can’t produce truth or knowledge, as Bethany Nowviskie puts it, independently of its human designers and users. Judgment is no less critical to the selective criteria of an algorithm than it is to a page of a good newspaper. But when we read a newspaper, even in its online form, we do so while applying a set of norms and expectations that have been formulated and institutionalized over centuries. We are only now beginning to understand why the unstated norms that shape the design and use of computational algorithms need to be made more explicit—and therefore subject to critical public debate. For now, Google and Facebook don’t have mastheads or editorial pages. The names of the people who make judgments about what an algorithm does are hidden. All we have is the veneer of mechanical neutrality and the invocation of an objectivity that operates beyond the human.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”
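The study’s basic measurement can be sketched in miniature: compare the rate of emotion words in posts written under the two feed conditions. The actual study used the LIWC word lists over millions of posts; the tiny word lists and sample posts below are invented purely for illustration.

```python
# Toy sketch of the emotional-contagion measurement: count what fraction of
# the words users post fall in a "positive" lexicon, and compare the two
# experimental conditions. Word lists and posts here are invented examples.

POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "terrible", "hate"}

def emotion_rate(posts, lexicon):
    """Fraction of all words across the posts that appear in the lexicon."""
    words = [w for post in posts for w in post.lower().split()]
    if not words:
        return 0.0
    return sum(w in lexicon for w in words) / len(words)

# Posts by two invented experimental groups
saw_more_positive = ["what a wonderful day", "love this happy song"]
saw_more_negative = ["feeling sad today", "what an awful terrible commute"]

effect = (emotion_rate(saw_more_positive, POSITIVE)
          - emotion_rate(saw_more_negative, POSITIVE))
# A positive difference is the pattern the researchers reported: users shown
# more positive posts went on to write more positive words themselves.
```

Even this toy version makes the critics’ point visible: the “experiment” is nothing more than silently altering the inputs to people’s feeds and then mining their subsequent words for an effect.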

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital businesses with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order to better influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Be they the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways and to ends of which we are completely unaware and which we have increasingly less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight, they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable, natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.

. . . . . . . .


You Must Unplug Your Life!

Maybe you missed it because you left your iPhone at home, but the fifth annual National Day of Unplugging was celebrated by certain digital malcontents on March 7th. The event’s organizer, Reboot, exhorts us “to unplug and reconnect in real life” for one day from sundown to sundown:

We increasingly miss out on the important moments of our lives as we pass the hours with our noses buried in our iPhones and BlackBerry’s, chronicling our every move through Facebook and Twitter and shielding ourselves from the outside world with the bubble of “silence” that our earphones create.

If you recognize that in yourself—or your friends, families or colleagues—join us for the National Day of Unplugging, sign the Unplug pledge and start living a different life: connect with the people in your street, neighborhood and city, have an uninterrupted meal or read a book to your child.

Reboot wants to change your life. You don’t just need to put down your iPad, you need to remake yourself. This broader injunction to self-reinvention reveals just what “unplugging” and “detoxing” are all about and helps us better understand the handwringing about too much information and digital distraction that has been a constant of cultural criticism of late.

The National Day of Unplugging grew out of the group’s Sabbath Manifesto, a ten-point list of edifying imperatives that will, if heeded, make you a better person. Among the many injunctions are “Connect with loved ones,” “Nurture your health,” “Get outside,” and, my favorite, “Drink wine”—all laudable activities that I would encourage you to try out if you haven’t. But the one that tops the list is “Avoid technology.” And this one makes less sense to me. How are we to “avoid” technologies? And why exactly?

Avoiding technology may sound like a noble feat of asceticism, but it’s neither possible nor desirable. Technologies are part of us. They aren’t just fungible tools that we can set aside when we want to be more human. They help constitute what it is to be human. To pretend otherwise is naive and self-defeating. Unplugging from our digital devices, as Casey N. Cep points out,

doesn’t stop us from experiencing our lives through their lenses, frames, and formats. We are only ever tourists in the land of no technology, our visas valid for a day or a week or a year, and we travel there with the same eyes and ears that we use in our digital homeland. That is why so many of those who unplug return so quickly to speak about their sojourns. The ostentatious announcements of leave-taking (“I’m #digitaldetoxing for a few days, so you won’t see any tweets from me!” “Leaving Facebook for a while to be in the world!”) are inevitably followed by vainglorious returns, excited exclamations having turned into desperate questions (“Sorry to be away from Twitter. #Digitaldetox for three WHOLE days. Miss me?” “Back online. What did I miss?”).

The idea of “unplugging” assumes that a brief hiatus from your favorite device or app will have a cleansing effect. But who among us plans on living without our technologies? We all make plans to return under the illusion that we’ve actually done something good for ourselves, that we’ve changed our lives by turning off our iPhone. These earnest efforts at digital detoxing distract from just how enmeshed we are in our technologies. The dream of a world without technologies, however short-lived, is not sustainable. We need practices and norms to help guide us through this era of technological change. We need repetition: practice, practice, practice! One day a year won’t do it.

. . . . . . . .
