
Universitybot Responds: Gang Rape as “Sexual Misconduct”

University of Virginia, 11/20/14

Above and below: Fresh graffiti at the Phi Kappa Psi fraternity house, University of Virginia, November 20, 2014. Photos: Gregory Thompson

Last week I wrote a post titled “Who Needs Captains of Erudition?” Long before “corporatization” had become synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist, dismissed American universities as little more than “competitive businesses.” These enterprises were run by university presidents, who had become little more than managers of systems, chiefs of concerns, “captains of erudition.”

When I read President Sullivan’s response to a Rolling Stone article that recounted a University of Virginia woman’s personal story of gang rape and the trauma that followed, all I could hear was the defensive, disengaged chatter of one of Veblen’s captains:

“I am writing in response to a Rolling Stone magazine article that negatively depicts the University of Virginia and its handling of sexual misconduct cases.”

“Negatively depicts”? If one phrase embodies the corporatization of the university that might well be it. The contemporary university’s assessment metrics, use of adjunct labor, obsession with economic efficiency, and capitulation to the sovereignty of the student as consumer are just consequences of a deeper failure of moral imagination. The primary concern is with public perceptions. Never mind that a young woman felt that her only option was to talk to a Rolling Stone reporter. This is the language of an institution committed to nothing but its own mechanisms. There is no evidence of the virtues to which we here at the University of Virginia lay claim—empathy, civic concern, leadership, and curiosity.

University of Virginia, 11/20/14

Sullivan’s statement was a missive from the bureaucratic bowels of an accounting machine. It was surely manufactured by public relations specialists and lawyers whose interests are simply fiduciary, concerned only with legal liability and fundraising. There are no people, just “interests”; no judgments, just “initiatives”; no moral failures, just “issues.” There were, as one of my colleagues put it, no rapes, no victims, no women, no perpetrators—just “issues related to sexual misconduct.” And the only response is more policies, more initiatives, more accounting.

The captains of erudition are firmly at the helm at the modern American university. With their phalanx of managers, they are guiding us into seas of indistinction, into a future where the university is just another modern bureaucracy without ends, without purpose. And the faculty is asleep on the deck.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


Who Needs Captains of Erudition?

Long before “corporatization” became synonymous with higher education, Thorstein Veblen, the early twentieth-century American sociologist who coined the term “conspicuous consumption,” dismissed American universities as little more than “competitive businesses.” In The Higher Learning in America (1918), published fewer than forty years after Johns Hopkins was founded as America’s first research university, he described the contemporary university as a “business house dealing in merchantable knowledge, placed under the governing hand of a captain of erudition, whose office it is to turn the means in hand to account in the largest feasible output.” The modern American university president wasn’t a scholar, an intellectual, a scientist, or even much of a leader. He was the manager of systems, the chief of a concern, the captain of erudition.

Thorstein Veblen, by Edwin B. Child, 1934. Courtesy of Yale University Art Gallery, Gift of Associates of the Sitter. A protégé of J. Laurence Laughlin, the first head of political economy, Veblen began his uneasy passage through the University in 1892.

Botstein and Bard

Leon Botstein, the charismatic conductor of the American Symphony Orchestra and president of Bard College, is no captain of erudition. “Botstein’s voice,” writes Alice Gregory in the New Yorker,

telegraphs a wizardly moral authority. Everyone responds to it, but parents, primed to be proud of their children, are especially susceptible. ‘We live in a time where people don’t really believe in education. That doubt is something we struggle with,’ he said. ‘Your enthusiasm, your determination, your idealism about education gives back to us a reminder of why we should fight for what we do.’

For Botstein, the “quantification of American higher education,” introduced by university administrators who just want to keep their jobs and facilitated by spineless faculty who have given up on the liberal arts, is a moral affront.

Botstein’s earnest and tireless defense of an ideal, however, might just doom this small liberal arts college, 90 minutes north of New York City. Bard, where all those black-clad kids who read Sartre in high school wound up, is the singular creation of Botstein’s will and personality. But in December 2013, Moody’s Investors Service lowered Bard’s credit outlook to “negative.” And now some of its trustees are worried. Susan Weber, a trustee and donor, said:

Everyone says, ‘Oh, he’s the most amazing fund-raiser.’ Well, I wish that were so, because we wouldn’t be so underfunded if he were that amazing. I think he’s good at it—he works hard at it—but his real strength is building an institution.

“But”? If one word can be said to embody the confusion over the purposes of higher education, that “but” might be it.

Botstein built an institution with a vision, but only a captain of erudition can, it seems, sustain it.

Weber’s resigned admission of what Bard needs after Botstein has become the assumption of many university boards. University presidents shouldn’t lead national debates or make moral claims; they should alleviate political pressures and accommodate the idiosyncrasies of donors. Ours is the age of the competent commander-in-chief—we need accountants, not idealists.

Veblen’s Prescience—in Our Own Backyard

On June 10, 2012, my colleagues and I at the University of Virginia (UVa) learned that Veblen had been all too prescient. Helen Dragas, Rector of UVa’s Board of Visitors, briefly and matter-of-factly informed us that our president had been fired:

On behalf of the Board of Visitors, we are writing to tell you that the Board and President Teresa Sullivan today mutually agreed that she will step down as president of the University of Virginia effective August 15, 2012. For the past year the Board has had ongoing discussions about the importance of developing, articulating and acting on a clear and concrete strategic vision. The Board believes that in the rapidly changing and highly pressurized external environment in both health care and in academia, the University needs to remain at the forefront of change.

Over the following weeks, my colleagues and I, joined by an international audience, speculated about the unspecified “philosophical differences” between President Sullivan and the Board of Visitors; we wondered about the “clear and concrete strategic vision” for which the Rector called. Hadn’t we already been subjected to years of strategic planning?

After ten days of increasing frustration and concern from faculty, students, and alumni, Dragas sent a second email. This one listed a number of “challenges” facing UVa with which, she implied, Sullivan had no plan to deal: the long-term decline in state funding for public universities, the disruptive effects of new technologies, rising tuition costs, increasing enrollments, an aging faculty (with no money to replace it), and increasing demands for faculty and curricular assessment—not to mention the increasingly expanded roles that the contemporary university plays: health-care provider, entertainment center, sports venture, industrial and government research center, and, by the way, educator. In short, the university faced a whole host of challenges, none of which were unique to UVa.


UVa President Teresa Sullivan speaks on the steps of the Rotunda after addressing a closed session of the Board of Visitors, June 2012; photo © Norm Shafer

But between June 10 and Sullivan’s ultimate reinstatement on June 26, something else happened on Grounds, something that most stories and accounts of the summer’s events missed in their efforts to chronicle the process. Not only did it surprise me; I still struggle to make sense of it. (Talbot Brewer also tried to make sense of this series of events in the summer issue of The Hedgehog Review.)

For about two weeks, UVa faculty members paid scant attention to the myriad problems that the Rector identified; they didn’t demand political intervention; they didn’t split up into conservative and liberal corners and revive culture-war arguments (the liberal faculty against the conservative administration). For two weeks, my colleagues condemned the Board of Visitors’ actions by making explicitly ethical arguments, arguments grounded in claims about the moral purposes of the university: What the university was and ought to be. Some colleagues defended and invoked an honor code with which we usually engage, if at all, only ironically. Others celebrated founder Thomas Jefferson’s commitment to higher education as a public and democratic good, but without the ironic winks that usually accompany such discussions. There was even an impassioned defense of peer review as an ethical practice. Whatever their particular content, the arguments led to a broad consensus: This wasn’t right, this wasn’t how a university ought to be run.

With our backs to the wall and overcome by the sense that our university was imperiled, we faculty members made arguments that were not, in the first instance, financial, technological, or political. We made normative claims about what a university ought to be. That is, the arguments that my colleagues mustered focused on the moral character and purposes of the university. Faculty were engaged and motivated by a general and rather vague sense that the moral authority of the university had been threatened.

Can We Afford Our Future?

My colleague Siva Vaidhyanathan has continued to make these arguments. Recently, while writing of another attempt to oust a public university president, this time at the University of Texas, Vaidhyanathan defended the increasingly beleaguered notion of the university as a public good:

The tuition increases and the realization that the payoffs from universities are deferred and unquantifiable pushed legislators and “reformers” to demand accountability and radical administrative transformations. This has only served to make it harder for faculty to teach and conduct research. It has made the richest nation in the history of the world act like it can’t afford to believe in its own future, respect its own culture, or foster the experimentation and knowledge that might serve the entire planet.

The university is more than an “inefficient and outdated information delivery system.” It is a public good because it advances, conserves, refines, and shares knowledge for the world. And it does so most basically by forming people who believe that knowledge is a public good.

Leon Botstein may at times be bombastic. And he is always, without question, idealistic. At a moment when the very purposes and values of universities are being reshaped in the name of efficiency and disruption, we don’t need captains of erudition. We need leaders who embody the true ethos of our institutions.

. . . . . . . .


Here Comes the Potentiated Self

In a series of pithy posts, Joshua Glenn, the brand analyst with a penchant for Roland Barthes, has been cataloguing cultural codes. Each code, embodied in advertisements or pop-cultural imagery, is a single, meaning-laden node in the complex, often imperceptible matrix “structuring our perception of the everyday world.” Glenn’s codes range from the child-adult, “a holy fool who speaks truth to power,” to the cool engineer, a visionary designer who sees “into things more acutely and penetratingly than ordinary mortals.”

An example of wired self-potentiation from Joshua Glenn’s Code-X (2).

But one code seems particularly of our moment: wired self-potentiation. This code, central to the advertising campaigns of technology companies, celebrates a new, digitally enabled self. For the networked person of today, extended beyond time and space with their smartphones and gadgets, “multitasking [is] re-imagined as existential branching-out. Breaking the mold. Demonstrating vitality, multiplicity, and proactive refusal to conform to stereotyped expectations. All thanks to networked technology.” This is the potentiated self, the self raised to a higher power.

The idea of technologically enabled self-improvement is widespread. James Surowiecki recently described a “performance revolution” in sports, in which athletes aren’t just practicing harder but smarter, “using science and technology to enhance the way they train and perform.” Long hours in the gym or on the pitch won’t cut it anymore. Today’s elite athletes are monitored, analyzed, and reshaped by a matrix of biometric sensors and training regimes, all supervised by a phalanx of sports experts and coaches. Training methods for contemporary MLB, NFL, and NBA players are data-driven, networked systems designed to perfect not just athletes but the exercises and disciplines that make them better.

But if the improved, technologically enhanced training regimes of professional sports focus on improving people, the Internet of Things (IoT), another vision of the connected, networked age, seems altogether different. The Internet of Things, as one of its biggest proponents, Jeremy Rifkin, claims,

will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.

The Internet of Things is all about connectivity. In this digital network everything, whether you or your thermostat, is a node and, thus, just another source of data. The Internet of Things, as Sue Halpern writes in the New York Review of Books, is

about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.” Lots and lots of it. “The more you tell the world about yourself, the more the world can give you what you want,” says Sam Lessin, the head of Facebook’s Identity Product Group.

In Rifkin’s vision of the Internet of Things, humans are just elements of a network organized around the endless circulation of information. In such a system, the networked self is little more than the self as networked, as a node in a complex system. This is one vision of the networked, potentiated self, a notion that Glenn takes from the German poet and philosopher Friedrich von Hardenberg, better known as Novalis. But Novalis, despite his image as the frail, romantic poet who died of a broken heart, would have had more sympathy for Surowiecki’s jocks than for Rifkin’s automated systems.

In 1798, Novalis wrote a short dialogue in which two figures, named simply A and B, debated the effects of the proliferation of print. Interlocutor A lamented the modern “book plague,” especially those marks of black that fill their pages: “What burden are these letters!” The modern human being, he complains, is characterized by his “fatal habituation to the printed nature.” There was so much print that modern readers had begun to mistake it for nature itself.

Interlocutor B wasn’t nearly as worried about book plagues and floods of ink, however. The key to dealing with media surplus lay not in acquiring better tools but in becoming a better reader. Like any art, “reading” required “practice” and, as Novalis put it, “practice makes perfect, even in the reading of books.” Technologies like print were good only insofar as they were engaged as elements of a human discipline, repetitive exercises tied to a person acting according to settled purposes.

For Novalis, a potentiated self would result not from technologies unmoored from human purposes but from the perfection of excellent habits. “The world must be romanticised,” he wrote. “Romanticising is nothing but a qualitative intensification. Through this operation the lower self becomes identified with a better self. Just as we ourselves are a sequence of such qualitative powers. [. . .] By giving the everyday a higher meaning, the habitual a mysterious appearance, the known the dignity of the unknown, the finite a semblance of the infinite, I romanticise it.”

Raising the self to a higher power is grounded not in the liberating promises of technology, then, but in the more mundane pursuit of excellent habits. It’s about, as Surowiecki puts it, “getting better at getting better.” Sometimes data can help. But only with coaches and teachers skilled enough to help us make sense of it—and to help us learn how to practice.

. . . . . . . .


So Who Edits Google?

Last Tuesday the New York Times launched Watching, a new homepage feature that tracks developing news from the Times and “across the Web.” The newsfeed features prominently on the upper right-hand side of the homepage and runs 24/7. Like other web-based news feeds, Watching, writes Times reporter Marcus Mabry, will

help readers wade through the flood of news and other information that moves faster than ever before. Watching will feature news of the moment, such as early outlines of developing stories on, and aggregate the most newsworthy and trusted reporting from around the web. It will also feature tweets from Times reporters and others, as well as photos and YouTube videos.

Like other contemporary technologies—search engines, Facebook news feeds, and Twitter updates—Watching is designed to help us filter and search amidst the dross of the Web.

But Watching, or so the editors at the Times want us to believe, is different: “Times editors—not algorithms—will make the choices.” Unlike the inhuman and inscrutable mechanisms powering other feeds, Watching will keep us in the sure, expert hands of the Times’s human editors.

The Times’s decision to launch a newsfeed feature is, however belated, understandable. Running a website as though it were simply a digital version of a print paper (just scroll down to get a below-the-fold feel) seems a losing proposition. The decision to juxtapose Watching’s reliance on human, as opposed to computational, judgment is also understandable.

While Watching will have the fastest metabolism of The Times’s digital report, it will have the same standards as the rest of our journalism. Posts will undergo the same scrutiny as any other pieces on the home page.

The Watching team, led by me [Mabry] and the reporter Jennifer Preston, will be in constant contact with The Times’s news desks.

As the Times continues to develop a digital strategy, it wants to protect and exploit its cultural advantages by distinguishing itself from new forms of information distribution. And appealing to widespread assumptions about algorithms might seem like a winning strategy. Algorithms, after all, don’t make judgments; they simply compute. Real editorial decisions are made in newsrooms by news people sitting at news desks. At least that’s what the Times wants us to believe. Its all-too-human editors will, of course, be curating and culling news from the Web, an environment populated with algorithms of all sorts.

With Watching, then, the Times pits two forms of journalistic authority against one another, the editorial and the algorithmic. And journalism becomes a proxy for broader questions and anxieties not only about how information is distributed in our digital age but also about how it is authorized. What kinds of information can we trust? What sources can we rely on? What, in short, is the difference between editorial and algorithmic objectivity, these two ways of giving an account of a world common to all?

Man vs. Machine 

Editorial authority is based on institutional practices and norms as embodied, for example, in the two human editors responsible for the Times’s new Watching feature. The authority and value of the newsfeed is based on their curatorial acumen and journalistic integrity. The circulation and evaluation of knowledge—what is, and is not, important enough to make it to Watching’s feed—runs through them and their team of reporters.

Algorithmic authority, by contrast, is based, at least as advocates such as Google describe it, on the exclusion of such subjective forms of knowledge, that is, forms of knowledge embodied in the person of the expert. Its cultural authority relies, as Tarleton Gillespie puts it, on a “technically inflected promise of mechanical neutrality. Whatever choices are made are presented both as distant from the intervention of human hands, and as submerged inside of the cold workings of the machine.”

Algorithmic objectivity, however, is not a natural feature of algorithms. It is a carefully curated cultural good for which neutrality and suppression of the subjective are primary virtues. Google, for instance, recognizes that the efficiency and technical power of its search engine are meaningless, not to mention profitless, without the cultural authority granted it by users.

This authority rests primarily on a certain notion of objectivity. In “Ten Things We Know to be True,” Google promises: “We never manipulate rankings to put our partners higher in our search results and no one can buy better PageRank. Our users trust our objectivity and no short-term gain could ever justify breaching that trust.” Google’s “objectivity” is inseparable from the assumption that the iterative and computational character of its algorithms, such as PageRank, eradicates subjective error, prejudice, and poor judgment. Algorithms transcend personal predilections and give us access to the real.

Google’s algorithms, however, are not just a computational technology; they are a communicative technology. They bring together the millions of individual decisions that users make every minute. The authority of PageRank, for instance, rests on the claim that it accurately reflects users’ decisions of what has value and what doesn’t. As Google says,

Google search works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. We assess the importance of every web page using more than 200 signals and a variety of techniques, including our patented PageRank™ algorithm, which analyzes which sites have been “voted” to be the best sources of information by other pages across the web.

Algorithms facilitate a knowledge that exceeds the self. They give voice, insists Google, to the desires of a community of users. Algorithms help us to see ourselves in the scattered and disparate patchwork that is the web. But they do so at such scales that the individual user is little more than an algorithmic self, a flattened data point among millions.
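The “voting” that Google describes can be made concrete with a toy computation. The sketch below (a hypothetical three-page web; the real system, by Google’s own account, weighs more than 200 signals) shows how repeatedly redistributing each page’s score across its outbound links turns millions of individual linking decisions into a single ranking:

```python
# Toy sketch of the link-"voting" idea behind PageRank. Each page's score
# is split among the pages it links to, and the process is iterated until
# the scores settle. This is an illustration, not Google's implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with equal scores
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new_rank = {p: (1 - damping) / n for p in pages}
        # ...and distributes the rest of its score as "votes" to its links
        for page, outbound in links.items():
            if not outbound:
                continue
            share = damping * rank[page] / len(outbound)
            for target in outbound:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical web: A links to B; B links to C; C links to A and B.
graph = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # B — it collects the most link "votes"
```

The point of the sketch is the one the essay makes: no person ranks page B highly; B’s authority emerges mechanically from the aggregated linking choices of everyone else.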

In “The Relevance of Algorithms,” Tarleton Gillespie summarizes the differences between algorithmic and editorial authority:

The editorial logic depends on the subjective choices of experts, themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces.

Editorial objectivity relies on the norms and practices of culturally authoritative institutions, whereas algorithmic authority relies on the cultural authority of mechanical neutrality. And yet, these cultural norms and notions of objectivity are not inherent to any particular technology, print or digital. Both editorial and algorithmic authority are cultural logics, as Gillespie says, that help us figure out what counts as knowledge and what doesn’t. Proponents of both, Times’s editors and Google spokespeople, pursue a similar end. They both want to identify and organize “relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation.” But they have different norms, practices, and epistemic assumptions about how to do so.

Determining What Counts as Knowledge

The norms governing and authorizing editorial objectivity, for example, have emerged over centuries of print and in debates about how information should be produced and distributed. These norms were never inherent in the medium of the printed newspaper—they were hard won.

Over the course of the eighteenth-century German Enlightenment, the underlying assumption of intellectuals was that the level of a society’s enlightenment was a function of its access to information. More print equaled more enlightenment. Information, so it seemed, wanted to be free, and print—be it in the form of books, newspapers, or journals—facilitated this unrestrained circulation.

In the final decades of the eighteenth century, however, German intellectuals began to doubt whether the unregulated flow of information in print necessarily led to a more enlightened society. Members of Berlin’s famed Wednesday Society, which included leading Prussian bureaucrats and editors, debated whether “each and every bit” of information should be immediately published. While some members argued for the “unrestricted freedom” of information, others argued that information should be carefully filtered and selected before it was distributed.

The members of the Wednesday Society were not simply debating how best to maintain their cultural privilege and power and enforce a system of state-sponsored censorship. They were also struggling to understand how information and knowledge should be distributed in an age in which print had begun to proliferate at an unprecedented rate. Were authors responsible for their ideas once in print? While all information might, in the abstract, seem worth distributing as broadly as possible, could some ideas be pernicious or dangerous? Was an absolutely unrestricted dissemination of information, whatever the source or quality, obviously the best option? Information will always be filtered and selected, so what norms should govern these practices and technologies of communication?

When the Times juxtaposes its editors with the Web’s algorithms, it appeals to this longer history of journalistic filters and selection technologies of which editorial authority is a key feature. But to cast algorithmic authority as inherently non-human and mechanistic is both false and dangerous.

Algorithms are technologies crafted by particular people with distinct intentions. Algorithms want, as Nicholas Carr puts it, what “the people who write them want.” It’s just that these desires, judgments, and decisions are so deeply nested within the algorithm itself that we usually can’t see beneath the beguiling simplicity of Google’s homepage. The algorithm, however mechanical we imagine it, can’t produce truth or knowledge, as Bethany Nowviskie puts it, independently of its human designers and users. Judgment is no less critical to the selective criteria of an algorithm than it is to a page of a good newspaper. But when we read a newspaper, even in its online form, we do so while applying a set of norms and expectations that have been formulated and institutionalized over centuries. We are only now beginning to understand why the unstated norms that shape the design and use of computational algorithms need to be made more explicit—and therefore subject to critical public debate. For now, Google and Facebook don’t have mastheads or editorial pages. The names of the people who make judgments about what an algorithm does are hidden. All we have is the veneer of mechanical neutrality and the invocation of an objectivity that operates beyond the human.

. . . . . . . .


Virtual Assembly and the Legal Limits of Digital Dualism


What are the implications of the First Amendment’s right of assembly for the lives we increasingly lead online? I have spent a fair amount of time over the past few years exploring the meaning and significance of “the right of the people peaceably to assemble,” but I have only recently begun to consider its applications in an online context. My first effort to think about “virtual assembly” was in an article published last year in the Cornell Law Review, from which the following reflections are drawn.

The Scope of the Assembly Right

While many people may think of the right of assembly as covering temporal gatherings like a school celebration or a political protest, its scope is far broader. Protecting the moment of expression often requires protecting the group that makes that expression possible. Put differently, the verb “assemble” in the text of the First Amendment presupposes a noun—an assembly. And while some assemblies occur spontaneously, most do not. Most assemblies emerge from people who come together long before they hold protests or conventions. Our “pre-political” and “pre-expressive” groups enable our ordinary activities to develop into extraordinary ones. And those groups are among the most vulnerable to government monitoring and interference.

These observations extend to our online groups, differently embodied and displaced though they are. In fact, the virtual dimensions of assembly may yield insights for how we understand more traditional assemblies and the legal protections that we assign to them. Consider, for example, how our online groups may serve as early indicators of possible infringements of the right of assembly. One of the reasons that government officials usually ignore our most informal offline groups is that these groups often lack visible boundaries—there are often no membership lists or widely available indicia of the individuals who constitute “the group.”

But the boundaries are more pronounced online. A Facebook group lets us know who is in and who is out. Even the most informal Facebook group signals a more concrete form of exclusion and embrace than its offline counterpart. Our private boundary lines are more publicly visible online, and that visibility is more likely to attract the attention of government officials.

Virtual Assembly as Community

Despite their more pronounced boundaries, I was initially skeptical of the deeper significance of online groups. I am not on Facebook or Instagram, and I only recently succumbed to the professional pressure of Twitter. I knew that online connections could facilitate otherwise improbable interactions, and that they could simplify logistics and improve efficiency for everything from flash mobs to political protests. But I was less certain about other questions, such as whether meaningful relationships and ways of life could really flourish online.

My thinking began to change when I came across Howard Rheingold’s work. Rheingold started writing about online communities in the early 1990s, which in virtual time places him somewhere in the vicinity of Plato. But he wrote movingly—and convincingly—of the deep social connections and communities fostered by online interactions. Other writers described the emotional shelters that online groups created for pregnant women, cancer patients, drug addicts, and numerous other affinity groups. I discovered online gaming communities and online churches, and not all of them seemed flaky.

There are, of course, plenty of shallow and harmful online groups. But as Rheingold noted in 1993, “it is dangerous to mindlessly invalidate the experiences of a person for whom Internet communication is not a luxury but a lifeline.”

These kinds of observations helped frame my approach to the project of virtual assembly, and to the legal, cultural, and normative questions that followed. I learned a great deal about the ways in which online groups contribute to important First Amendment interests like identity formation, self-governance, and dissent.

I also learned that the dynamic line between virtual and non-virtual groups often frustrates efforts to draw legal and cultural distinctions between them. Many of us use online connections to sustain relationships that begin offline. And many relationships that begin online move offline. Online dating services lead to offline relationships. Business relationships initially formed through social networking sites lead to in-person meetings and partnerships. Lawyers who find clients in online worlds represent them in offline proceedings. (And, of course, lots of bad things can also happen when online relationships migrate offline.)

The connections between our online and offline groups suggest that we need to think carefully about the constitutional protections for those groups. Protecting values like identity formation, self-governance, and dissent depends on protecting the boundaries of these groups. But these protections are not cost free. Resistance to monitoring can complicate legitimate law-enforcement efforts. Exclusive groups can weaken antidiscrimination norms and hinder equality of opportunity. Deep attachments that we form with others can complicate our psychological ability to exit from groups. (As one example, I’m discovering the nontrivial psychological pressure that prevents me from “unfollowing” someone on Twitter.) We have been weighing these costs and benefits in our offline groups for some time, and it will be important to extend these considerations online as well. The online context might call for modified doctrinal applications. It might also cause us to rethink our existing offline frameworks.

Looking Back as We Look Ahead

The law is an imperfect and limited resource, online and off. Many of the looming questions may well be resolved by institutional and technological design rather than by legal doctrine. But recognizing that some ideals are largely beyond judicial competence should not cause us to cut short our normative aspirations for law. In the case of our online groups, we can meet some of the challenges that lie ahead by looking back: to the right of the people peaceably to assemble.

John Inazu is an Associate Professor of Law at Washington University in St. Louis and the author of Liberty’s Refuge: The Forgotten Freedom of Assembly (Yale 2012).

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

Quit Lit: Do the Humanities Need the University?


There’s a new genre taking shape on blogs, Twitter, and even in the pages of The London Review of Books: Quit Lit. Just last week, Marina Warner, a creative writing professor and member of the Man Booker International Prize committee, explained her decision to resign her position at the University of Essex. In “Why I Quit,” she describes the bureaucratic disciplines of England’s new Research Assessment Exercises, which tabulate and calculate academic labor with the efficiency and mindlessness usually reserved for an assembly plant (and a low-tech one at that).

In a scene she must have embellished by channeling Kafka U., Warner recounts a meeting with her new dean:

A Tariff of Expectations would be imposed across the university, with 17 targets to be met, and success in doing so assessed twice a year. I received mine from the executive dean for humanities. (I met her only once. She was appointed last year, a young lawyer specialising in housing. When I tried to talk to her about the history of the university, its hopes, its “radical innovation,” she didn’t want to know. I told her why I admired the place, why I felt in tune with Essex and its founding ideas. “That is all changing now,” she said quickly. “That is over.”) My “workload allocation,” which she would “instruct” my head of department to implement, was impossible to reconcile with the commitments which I had been encouraged—urged—to accept.

Confused but, more deeply, defeated by this new regime, Warner resigned. But she continued her work for the Man Booker Prize committee which, as it turns out, has proven rather clarifying.

Among the scores of novels I am reading for the Man Booker International are many Chinese novels, and the world of Chinese communist corporatism, as ferociously depicted by their authors, keeps reminding me of higher education here, where enforcers rush to carry out the latest orders from their chiefs in an ecstasy of obedience to ideological principles which they do not seem to have examined, let alone discussed with the people they order to follow them, whom they cashier when they won’t knuckle under.

As a genre Quit Lit has a few organizing features. Its form tends to be personal and aggrieved. The university, like those vague but all-powerful institutions in Kafka’s texts, has been overtaken by an alien, usually bureaucratic-statist-inhumane power. And its content tends to be not just about the decline of the university but also about the impending demise of the humanities. By turning universities into vocational schools, we are robbing our children of humanistic forms of thought and the good that ensues. (If scientists wrote prose like humanists, maybe they would be writing about the end of the university and the collapse of science. NPR had a go at Quit Lit this past week in their series on the dramatic cuts in basic science funding and the effects they are having on future generations of scientists.)

As with all literary genres, Quit Lit has its predecessors. Before there were Rebecca Schuman and NeinQuarterly’s Eric Jarosinski, there was another German scholar experimenting in the genre, Friedrich Nietzsche. In 1872, just three years after he landed his first, and only, professorship at the University of Basel without even having finished his dissertation, Nietzsche delivered a series of lectures, On the Future of Our Educational Institutions, in the city museum. Before crowds of more than 300 people, Nietzsche staged a dialogue on the future of German universities and culture between two young students and a cantankerous old philosopher and his slow-witted but earnest assistant.

The grousing philosopher lamented the decline of universities into state-sponsored factories that produced pliant citizens and mindless, “castrated” scholars who cared not a bit for life. By the end of the lectures, it’s difficult to say whether Nietzsche thought there was a future at all for German universities. Nietzsche lasted a few more years in his position, resigning only when ill health forced him to. But he left an oeuvre that looked to the university and saw little but ruin.

As Nietzsche was writing, parts of the German university might not have been in decay, but they were in decline, the humanities in particular. Between 1841 and 1881, enrollment in philosophy, philology, and history within “philosophy faculties,” which comprised the core liberal arts fields, declined from 86.4 percent to 62.9 percent, whereas in mathematics and the natural sciences enrollments increased from 13.6 to 37.1 percent of all students matriculating at German universities. The mood among humanists was often such that they sounded quite a bit like the embattled literature professors of today. In academia, crisis is generally a matter of perception, and even in what now seems like a “golden age” for humanists, there was, in fact, a seismic shift for the humanities.

More recent forms of Quit Lit tend to lack a key feature of Nietzsche’s model, however. Nietzsche never conflated the humanities or humanistic inquiry with the university. For him, humanistic inquiry—and Nietzsche was deeply humanistic as his lifelong commitment to philology attests—transcended the institutional and historically particular shape of universities, which he saw as little more than extensions of a Prussian bureaucratic machine.

In what increasingly seems like a related genre, contemporary academics and intellectuals of all sorts have ostensibly been defending the humanities. But more often than not they actually defend certain forms of scholarship as they have come to be institutionalized in largely twentieth-century American research universities. Geoffrey Galt Harpham recently produced the most egregious but well-argued example of this tendency with The Humanities and the Dream of America. His basic thesis is that the humanities as they are now practiced were an invention of post–World War II American research universities. Similarly, Peter Brooks’s edited collection The Humanities and Public Life, with its focus on disciplines, scholarship, and the imperatives of the university, inadvertently echoes the same tendency. Both conflate the humanities with their departmental and institutional shapes in universities.

In the measured “yes but” prose of academic speak, Patrícia Vieira gives this spirit of conflation ethical shape in a review entitled “What are the Humanities For?”:

Debates about the “future of the humanities” frequently revolve around the suspicion that the humanities might not have one. Yet despite the direness of this anxiety—an anxiety especially personal for every academic worried about professional choices or mortgage payments—conversations on the topic are often dull, long-faced affairs. Every professor has sat through one or another of these depressing discussions. The conversation proceeds according to a familiar set of pieces: there are passionate apologias of work in philosophy, literature, history, and the arts; veiled criticism of the anti-intellectualism of higher education administrators and society at large; and vague pledges to do more interdisciplinary research and extend a fraternal hand to the social and natural sciences, who remain largely unperturbed by this plight. The whole thing wraps up with the reassuring conviction that, if the humanities go down, they will do so in style (we study the arts, after all), and that truth is on our side, all folded in a fair dosage of indulgent self-pity.

Vieira can’t imagine the future of the humanities beyond the anxieties of professors and the failures of university administrators. All she can muster is a few gentle and inveterately academic admonitions for her authors:

Brooks’s and [Doris] Sommer’s [The Work of Art in the World: Civic Agency and Public Humanities] books coincide in their desire to persuade those skeptical about the importance of the arts and the humanities of their inherent worth. The volumes set out to prove that these disciplines play a crucial role in public life and that they are vital to contemporary culture. Brooks’s collection often falls short of this goal by sliding into fatalistic rhetoric about the doomed future of humanistic scholarship—the very discourse the book attempts to combat—all while ignoring some of the vibrant new research in the field. In contrast, Sommer is overconfident in the power of the arts to tackle thorny socioeconomic and political problems. Both the despondent and celebratory approaches are symptomatic of the beleaguered state of the field, forced to justify its existence based upon technocratic principles that demand immediate results and fast returns. The humanities are constantly compelled to demonstrate practical results or hopelessly admit to lacking a concrete and immediate function, straitjacketed into foreign modes of valuation lifted from the empirical sciences. Neither a dying set of disciplines nor a panacea for social ills, the humanities remain a central form of human enquiry, in that they shed light on and question the tacit assumptions upon which our societies are based, outline the history of these values, and identify alternatives to the status quo.

Despite her attempts to cast the humanities as a form of “human” inquiry, Vieira is writing about a beleaguered and exhausted profession. There are only professors and their disciplines here. And they both are trapped, as Nietzsche would say, in a “castrated” passive voice: “The humanities are compelled ….” There are no agents in this drama, just put-upon, passive professors.

I am not suggesting that we should give up on universities. Universities, especially modern research universities, have long helped sustain and cultivate the practices and virtues central to the humanities. But just as German universities were becoming international paradigms, emulated from Baltimore to Beijing, Nietzsche made a fateful diagnosis. Those practices and virtues could ossify and wither in the arcane and self-justifying bowels of the modern, bureaucratic university. “Human inquiry,” in contrast, would live on.

We may well benefit from an exercise in imagination. Could the humanities survive the collapse of the university? I think so.


Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital business with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order better to influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Whether the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways and to ends of which we are completely unaware and which we have increasingly less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception: here and here.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight; they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.


The Author in the Age of Digital Distribution


In the Amazon v. Hachette battle, authors seem to have assumed the role of largely passive onlookers, idling nervously on the sidelines until the two giants come to an agreement on the price of ebooks. On the Amazon front, the myth that the online retail giant is a warm, author-friendly company was dealt a further blow when J. K. Rowling became a casualty in the battle. As for Hachette, the determination to sell ebooks for $11.99 appears to be an increasingly quixotic stand against the tides of the modern literary marketplace.

But what if authors were able to gain greater agency in deciding where, how, and at what price their books were sold? What if authors didn’t need to agree to Amazon’s exclusive, one-size-fits-all royalty and sales agreement or sign away their rights to publishers? We might not be too far away from a shared open-source platform on which authors could sell their own work, cutting Amazon out of the equation and reducing the negotiating power of publishers. The recent protest letter signed by 900 authors shows that the time is ripe for a new, author-centric paradigm.

Amazon isn’t solely to blame for the plummeting prices of books. The distribution of books has become so efficient that its profitability has been drastically reduced—the book market has undercut itself. At the same time, the appeal of social and visual media has contributed to the rather sluggish status of literary reading. In this media-saturated climate, the most successful authors are the ones who can ruthlessly grab and hold readers while triumphing across media. The Hunger Games, for instance, has a plot that is based on reality television and televised spectacle, a text that begs to be filmed. Authors have never faced so much competition in the form of free media, nor have they ever encountered such low barriers to distribution of their work. Rather than the death of the author, this ethos dictates the birth of scaled, income-based, and potentially author-driven digital pricing and distribution models.

We can hate Amazon as much as we want, but the company has turned publishing on its head. Although most self-published authors earn very little, there are now few barriers to entry and authors can earn, according to Amazon Kindle Direct Publishing, “up to seventy percent in royalties.” Yet, Amazon offers only an illusion of authorial control. In the background, Amazon programmers ensure that the “results generated by Amazon’s search engine are partly determined by promotional fees.” Furthermore, Amazon’s Kindle Direct Publishing agreement restricts authors from selling their digital books anywhere else.

However, many writers and artists are bucking Amazon’s monopoly. Tech writer Gwen Bell has been selling her writing directly from her website for years. Deadmau5 is the latest artist to go where Louis C.K. in 2011 and Stephen King back in 2000 went before: He announced on Reddit his intention to release his music through his own digital subscription service with different price points. Even the reclusive comic book artist Alan Moore has raised more than £60,000 through Kickstarter and proclaimed that crowdfunding will shape the future. The ability to demonstrate the monetary demand for a book even before it is published puts a good deal of power in the hands of authors, who may be able to bargain harder to control their own rights.

Open-source, distributed peer-to-peer networks and free software can easily link together authors’ individual sites and offerings. In an author-centric model, editorial work, marketing, and the stamp of the respected publisher would be secondary to the agency and established fanbase of individual authors, who would boast greater control of their works and proceeds. Rather than publishers and Amazon’s search algorithms acting as the filters, listings could be based entirely on user ratings in a manner similar to sites such as Reddit and Buzzfeed. In fact, we could even take this direct-sales model a step further and imagine a future in which books are crowd-edited or collaboratively written, with the author changing the text in response to readers’ comments.

Whether a work is distributed by a company or an artist, pricing models must be adjusted to the reality of digital distribution. More specifically, each product should have a variety of price points. For example, I live in Morocco, a “developing” country, and in my city, Rabat, there is an enormous market for pirated DVDs. One can walk through the old market and buy any movie or TV series for 6 dirhams, or about 73 cents. That price is appropriate for a country in which most citizens can’t afford to buy a full-priced DVD or go to the movie theater. If movie distribution companies or individual artists adjusted their pricing models, then they could compete in and capture developing markets. We have entered a Brave New World in which one price cannot serve all, and sales points need to be calculated and contextually negotiated. And even beyond flexible price points, there is the possibility of ongoing “public patronage” in which the public “subscribes” to an individual artist. Surely, it is only a matter of time before sites like Gittip, which allows users to donate a minimum of 25 cents a week to members of the technorati, expand to cover authors and artists.

When, more than three quarters of a century ago, social critic Walter Benjamin wrote The Work of Art in the Age of Mechanical Reproduction, he claimed that reproducing a work would make it lose its uniqueness and context within an artistic tradition, its “aura.” Whether a digitally reproduced work of art or writing lacks uniqueness and resonance is perhaps debatable, but what is clear is that the process of digital distribution has generated new opportunities for the creator and her works. Concerning the question of artistic tradition, the internet and digital distribution have brought us even closer to artists and the context of their work—we have access to their daily lives through Twitter, and we can often Tweet questions to them or participate in a Reddit Ask Me Anything thread. While companies and managers can do a lot to position new talent, many rising stars are crowd-propelled in a way that was unimaginable a decade ago. We are in an age in which someone like pop singer Rebecca Black can be transformed into a celebrity, praised, attacked, and then abandoned within a few months. In fact, reality television and celebrity gossip blogs are opposite sides of the same impulse—to elevate reality and pull down celebrity until there is no distinction. Just as Benjamin foresaw, “At any moment the reader is ready to turn into a writer,” or the patron into an artist.

Authors no longer need publishers to distribute their work, to gauge readers’ interest, or to build a fan base, and many authors have already decided that they don’t need Amazon. This is not to say that authors don’t need publishers or a strong online distribution platform. Rather, the few authors who can guarantee their sales numbers have a lot more power when it comes to releasing or holding onto their rights. One example is Hugh Howey, a self-published author who turned down a seven-figure deal with Simon and Schuster in order to retain control of his digital rights (he instead “settled” for six figures). In this age of increased competition and pricing pressure, authors might negotiate more intensely with their readers and their publishers to maintain control over the pricing and rights for their work.

But Hugh Howey might well be the exception. What if, as Scott Turow fears, authors cannot adjust to the challenges of online distribution and piracy, leaving us with a system that rewards a writerly 1 percent—a few winners and a lot of losers? One possible consequence is that idealistic, imaginative, and socially engaged members of our generation will feel compelled to make a more direct and practical impact on the world—rather than writing about social inequality, they might be “forced” to take jobs as policy-makers, for example. In fact, with the collaborative and interdisciplinary mentality behind many crowdfunded projects, such possibilities have already emerged. Art expressing itself in the world: That might not be such a bad outcome, after all.

Sarah Ruth Jacobs is a founding editor of the open-access Journal of Interactive Technology and Pedagogy and a doctoral candidate in American literature at the Graduate Center of the City University of New York.

Photograph: Wikimedia Commons

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

Twitter, Racism, and the “Safe” Campus

Anyone following journalistic coverage of academia lately is likely to have heard of the divisive case of the “de-hiring” of Steven Salaita at my home institution, the University of Illinois. Reports have appeared in the New York Times and the Chronicle of Higher Education, although blogs have offered the more substantive coverage. I’ll leave it to other sites to relate the longer story, but the short version is that in early August, Chancellor Phyllis Wise notified Indigenous Studies scholar Steven Salaita that despite his accepting an offer of a tenured position in American Indian Studies last year as the result of a search, she would not be recommending his hire for final approval by the Board of Trustees.

Such approval is typically a pro forma step in the hiring process, as demonstrated by the fact that it usually happens after the start of a new faculty member’s first semester of employment. In this case, however, the Chancellor informed Salaita via letter that she felt the Board was “unlikely” to approve his hire. More recently, she has stated that she felt the “humane” thing to do, knowing the Board’s disapproval, was to end his candidacy before the start of the semester, rather than let him move to Illinois and begin teaching, only to be retroactively fired after the Board’s September meeting.

Journalistic coverage has revealed that the Chancellor’s decision came on the heels of a flurry of correspondence between campus leaders, trustees, and regular donors. At issue in these exchanges is the question of whether Salaita’s strident tweets on the recent Gaza attacks revealed the scholar of Palestine and Indigenous Studies as anti-Semitic, or at least “uncivil” and therefore unfit to teach. With appeals to protecting campus “civility,” the Chancellor decided to rescind Salaita’s offer, leaving the professor jobless, his students without an instructor, and a campus in turmoil over the implications for academic freedom and shared governance.

The University of Illinois is currently as torn apart as I’ve seen it (and that’s saying a lot given our past scandals). Thousands of faculty elsewhere have boycotted the institution, and there are some signs that we might be on the road to censure by professional organizations. Boycotts were perhaps especially probable given that Salaita himself is one of the leaders of the Boycott, Divestment and Sanctions Movement against Israel. (Indeed, his efforts on that work earned him the attention of bloggers who likely first alerted campus leaders to his Twitter feed.)

There are many dimensions of this story to sort through—donor influence, political corruption, Zionism, racism, and definitions of free speech, for starters—and it also comes at a time when many faculty are moving to unionize. Somewhere near the center is the matter of social media, and how we use it.

Twitter drew attention on this campus last year, when students directed racist tweets at Chancellor Wise in response to a snow day decision, and the campus responded with a new “Inclusive Illinois” campaign. What is the proper context for even citing a tweet, let alone understanding the import or effects of 140 characters delivered as part of a longer, partially public, partially private exchange?

Many are approaching this matter through lengthy legal arguments and examination of precedents. This may be exactly what Salaita needs to claim wrongdoing in court, or what the faculty need to claim breach of shared governance. When I sat down to write the Trustees, however, I wanted to boil the matter down still more.

The most generous description of why the Chancellor and Trustees de-hired Salaita is that they saw a threat to the well-being of the campus as a space supportive of inclusion and diversity. I know of many who are very surprised to find themselves, as supporters of the Chancellor’s decision, on the receiving end of accusations of racism. This whole episode is but the latest in a longer history of inadequate responses to white racism on this campus—particularly in light of our racist mascot “Chief” Illiniwek, banned by the NCAA and the Chancellor, but kept alive by fans and continually present on campus.

My letter to the Trustees thus took the form of a slideshow (one of my favorite forms), six slides to explain how their efforts to make the campus “safe” in fact contribute to the larger problems of racism. Below is my “PowerPoint on power.”


Six Questions on the Salaita Case from Complex Fields

Kevin Hamilton is Associate Professor of Art and Design at the University of Illinois, Urbana-Champaign. Along with regular Infernal Machine contributor Ned O’Gorman, he is currently at work on a history of film in American nuclear weapons development.


Apple Watch and the Quantified Self

Today Apple unveiled its latest technological creation, the Apple Watch, a wearable computer that tracks not only time but your every step, heartbeat, and calorie. With its latest product, Apple contributes to the growing availability of devices and apps, such as Fitbit, Basis, and MyFitnessPal, that track and record our activities and biostatistics. Given Apple’s commercial influence, the Apple Watch may well turn the nascent Quantified Self (QS) movement into a cultural mainstay delivering “self knowledge through numbers.”

Apple Watch

Most QS practices track health-related activities such as calorie intake, exercise, and sleep patterns, but they are increasingly used to document and track experiences of grief, exploration, and productivity. And tracking apps and devices are even making their way into unexpected areas of life experience. Attempts to measure the soul, data point by data point, for example, are increasingly common. Just last January a Menlo Park pastor teamed up with a University of Connecticut sociologist to create SoulPulse, which, as Casey N. Cep explains, is

 a technology project that captures real-time data on the spirituality of Americans. SoulPulse attempts to quantify the soul, an unbodied version of what FitBit, the exercise-tracking device, has done for the body. After filling in a brief intake survey on your age, race, ethnicity, education, income, and religious affiliation, SoulPulse contacts you twice a day with questions about your physical health, spiritual disciplines, and religious experiences. Each of the surveys takes less than five minutes to complete.

SoulPulse encourages users to learn about their “spirituality” through the power of big data and digital automation. This may sound crazy, but what’s the difference between tracking your daily prayer life with an app and doing so with another set of repeatable instructions, such as the Benedictine Rule and its set of daily readings and reminders to ponder God?

Many aspects of the QS movement are anything but new. Western cultures have long maintained practices of documenting behaviors and experiences in order to discipline the self. Capitalism and quantifying the self have been intimately linked for some time. Early merchants developed account logs that allowed them to track the results of their business transactions and to understand the consequences of their behavior so that it could be modified in the future. Perhaps a merchant had purchased too much grain and it spoiled before it could be sold; in the following year, he could alter his practice based on this cataloged information. And Frederick W. Taylor’s scientific management theories relied on precise measurements of workers’ efficiency.

And more in the tradition of St. Benedict, people have long kept track of their spiritual lives. Benjamin Franklin dutifully recorded his success in adhering to a list of thirteen virtues each day. Diaries and journals have long been witness not just to bad poetry but to detailed lists of eating and sleeping habits. Weight Watchers, founded in 1963, turned such practices into a business with its point system.

Despite such similarities, tracking devices such as Apple Watch are not the same as eighteenth-century diaries. The former have the potential to revolutionize the health sector and facilitate better care, but what happens when they don’t just give away our desires on Facebook (I like this!) but open up a one-way data stream on our bodies? How long will it take for all that personal data to make its way to our insurance companies? (The now-common annual biometric screenings will seem quaint by comparison.)

Self-reflection and personal development are broad cultural values. But what happens to us when we focus on aspects of ourselves that are easily recorded and converted into numbers? QS enthusiasts advocate for the expansion of tracking devices from the private sphere into the work environment, where they might provide insights on employee selection, promotion, and productivity. How will tracking social and personal behavior, such as how many times one smiles during the day, alter work environments and those who inhabit them?

Digital practices and techniques for tracking and disciplining the self differ from their analogue and print predecessors for several reasons. First, what they can track has expanded. Benjamin Franklin most likely didn’t know the rate of his perspiration. Second, the precision with which data is measured and recorded is continually increasing. Third, tracking devices and apps are increasingly frictionless: They do their job with minimal interruption and effort on the part of the user. Finally, the digital format of the data represents a marked difference from records of the past. Many of these tracking devices easily connect to apps and programs that analyze the data, dictating to the individual a pre-programmed assessment of success or failure. The digital nature of the information also makes it easily available and transferable.

These new developments, and the manufacture and dissemination of these technologies and apps through popular and trusted brands such as Apple, are likely to expand the degree to which individuals come to imagine themselves, their bodies, and their habits through and as numbers. As we continue into our quantified future, will these new digital practices alter what it means to be a good person, a successful person, or an efficient person? Will we be able to juke the numbers? Just because the technology is intended to track behavior and facilitate modification of that behavior doesn’t mean that it won’t be put to other purposes. What will we make of our new digital tracking practices and the self that we come to know through numbers?

Claire Maiers is a graduate student in the Department of Sociology at the University of Virginia.
