Monthly Archives: September 2014

Virtual Assembly and the Legal Limits of Digital Dualism


What are the implications of the First Amendment’s right of assembly for the lives we increasingly lead online? I have spent a fair amount of time over the past few years exploring the meaning and significance of “the right of the people peaceably to assemble,” but I have only recently begun to consider its applications in an online context. My first effort to think about “virtual assembly” was in an article published last year in the Cornell Law Review, from which the following reflections are drawn.

The Scope of the Assembly Right

While many people may think of the right of assembly as covering temporal gatherings like a school celebration or a political protest, its scope is far broader. Protecting the moment of expression often requires protecting the group that makes that expression possible. Put differently, the verb “assemble” in the text of the First Amendment presupposes a noun—an assembly. And while some assemblies occur spontaneously, most do not. Most assemblies emerge from people who come together long before they hold protests or conventions. Our “pre-political” and “pre-expressive” groups enable our ordinary activities to develop into extraordinary ones. And those groups are among the most vulnerable to government monitoring and interference.

These observations extend to our online groups, differently embodied and displaced though they are. In fact, the virtual dimensions of assembly may yield insights for how we understand more traditional assemblies and the legal protections that we assign to them. Consider, for example, how our online groups may serve as early indicators of possible infringements of the right of assembly. One of the reasons that government officials usually ignore our most informal offline groups is that these groups often lack visible boundaries—there are often no membership lists or widely available indicia of the individuals who constitute “the group.”

But the boundaries are more pronounced online. A Facebook group lets us know who is in and who is out. Even the most informal Facebook group signals a more concrete form of exclusion and embrace than its offline counterpart. Our private boundary lines are more publicly visible online, and that visibility is more likely to attract the attention of government officials.

Virtual Assembly as Community

Despite their more pronounced boundaries, I was initially skeptical of the deeper significance of online groups. I am not on Facebook or Instagram, and I only recently succumbed to the professional pressure of Twitter. I knew that online connections could facilitate otherwise improbable interactions, and that they could streamline logistics for everything from flash mobs to political protests. But I was less certain about other questions, such as whether meaningful relationships and ways of life could really flourish online.

My thinking began to change when I came across Howard Rheingold’s work. Rheingold started writing about online communities in the early 1990s, which in virtual time places him somewhere in the vicinity of Plato. But he wrote movingly—and convincingly—of the deep social connections and communities fostered by online interactions. Other writers described the emotional shelters that online groups created for pregnant women, cancer patients, drug addicts, and numerous other affinity groups. I discovered online gaming communities and online churches, and not all of them seemed flaky.

There are, of course, plenty of shallow and harmful online groups. But as Rheingold noted in 1993, “it is dangerous to mindlessly invalidate the experiences of a person for whom Internet communication is not a luxury but a lifeline.”

These kinds of observations helped frame my approach to the project of virtual assembly, and to the legal, cultural, and normative questions that followed. I learned a great deal about the ways in which online groups contribute to important First Amendment interests like identity formation, self-governance, and dissent.

I also learned that the dynamic line between virtual and non-virtual groups often frustrates efforts to draw legal and cultural distinctions between them. Many of us use online connections to sustain relationships that begin offline. And many relationships that begin online move offline. Online dating services lead to offline relationships. Business relationships initially formed through social networking sites lead to in-person meetings and partnerships. Lawyers who find clients in online worlds represent them in offline proceedings. (And, of course, lots of bad things can also happen when online relationships migrate offline.)

The connections between our online and offline groups suggest that we need to think carefully about the constitutional protections for those groups. Protecting values like identity formation, self-governance, and dissent depends on protecting the boundaries of these groups. But these protections are not cost free. Resistance to monitoring can complicate legitimate law-enforcement efforts. Exclusive groups can weaken antidiscrimination norms and hinder equality of opportunity. Deep attachments that we form with others can complicate our psychological ability to exit from groups. (As one example, I’m discovering the nontrivial psychological pressure that prevents me from “unfollowing” someone on Twitter.) We have been weighing these costs and benefits in our offline groups for some time, and it will be important to extend these considerations online as well. The online context might call for modified doctrinal applications. It might also cause us to rethink our existing offline frameworks.

Looking Back as We Look Ahead

The law is an imperfect and limited resource, online and off. Many of the looming questions may well be resolved by institutional and technological design rather than by legal doctrine. But recognizing that some ideals are largely beyond judicial competence should not cause us to cut short our normative aspirations for law. In the case of our online groups, we can meet some of the challenges that lie ahead by looking back: to the right of the people peaceably to assemble.

John Inazu is an Associate Professor of Law at Washington University in St. Louis and the author of Liberty’s Refuge: The Forgotten Freedom of Assembly (Yale 2012).

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


Cultural Critics vs. Social Scientists—They Both Stink

The past few weeks have seen some heady attempts at generalization: first Sam Tanenhaus’s piece on “Generation Nice” and then A.O. Scott on the “Death of Adulthood.” (Potential correlation there?)


The subsequent critiques of both were withering. Tanenhaus’s article proved to be laden with errors, resulting in hilarious corrections by the Times editorial staff. In response to the proof of the millennials’ niceness, the editors wrote:

An article last Sunday about the millennial generation’s civic-mindedness included several errors…. Applications to the Peace Corps recently have been in decline with a 34 percent decrease from the peak in 2009, and applications to Teach for America decreased slightly last year; neither organization has seen “record numbers of new college graduates” applying for jobs.

Well done. And the uncorrected rest apparently relied on citations of studies that cited studies that cited…an ad agency?!

As for Scott (one of Q&C’s favorite film critics), his reflections on adulthood’s imminent, if not already occurring, death come from having watched a lot of TV. “Something profound has been happening in our television over the past decade,” Scott announces with a foreboding sense of doom (it’s gotten worse?). And then, in an alliterative jingle that would make even the best nineteenth-century speechwriter wriggle, “It is the era not just of mad men, but also of sad men and, above all, bad men.”

So there you have it: A few shows (Mad Men, The Sopranos, Breaking Bad) have chronicled the decline of white patriarchy, which is a good stand-in for the decline of adulthood, which in turn is a good stand-in for a major shift in “American Culture.” Imagining that all of adulthood, and masculinity in particular (Scott’s real aim), was coming to an end because of a few televisual fantasies of bad dads like Don Draper ignored, as David Marcus pointed out, a whole lot of other stuff on TV that most people actually watch, like, say, football (or NCIS or NCIS: LA). Masculinity is doing just fine there (by which I mean on display, not as in, oh-so-admirable).

One would think at this point the answer is Big Data to the rescue. Instead of making whopping generalizations based on a few selective examples, turning culture into data can give us a much better view of the “big picture” (preferably as a picture: through that most ubiquitous of contemporary genres, the infographic). If we look broadly, what is “television” telling us and how would we segment it into different groups, for surely it is not telling all of us the same thing?

The problem is, as Marcus pointed out, it’s not as though the social scientists who traffic in cultural data mining have done much better. Turning culture into data is not a seamless process, nor is its interpretation. While we all know this, we seem unable to heed this advice given the opportunity for a juicy headline—in other words, given the chance to tell a story. Narrative trumps reason in fascinating ways.

The point is not, oh forget it, let’s just let Tanenhaus make it up after all. A good story is a good story and you can’t count culture anyway. The point is we need a lot more work on the work of translating culture into data before we go ahead and start calculating and interpreting. What would be a representative sample of “TV” or “pop-culture”? How would you measure depictions of adulthood or “masculinity” (either as positively or negatively coded)? What is your control set, i.e., what are you comparing this against? And so on.

The real answer is we need to think more about the process of cultural modeling. How do we model a cultural subset through a data set (a generation, for example, or contemporary television), and how do we model a cultural practice or concept through a particular measurement? These aren’t easy questions, but they are the prerequisite for correcting against journalistic just-so stories of cultural criticism.

This is the time for the humanists to jump into the fray, not to put our heads in the sand and say, “You can’t count that!”  The challenge is to think of counting culture in more sophisticated ways and so avoid the mythologizing that passes as cultural criticism these days.



Quit Lit: Do the Humanities Need the University?


There’s a new genre taking shape on blogs, Twitter, and even in the pages of The London Review of Books: Quit Lit. Just last week, Marina Warner, a creative writing professor and member of the Man Booker prize committee, explained her decision to resign her position at the University of Essex. In “Why I Quit,” she describes the bureaucratic disciplines of England’s new Research Assessment Exercises, which tabulate and calculate academic labor with the efficiency and mindlessness usually reserved for an assembly plant (and a low-tech one at that).

In a scene she must have embellished by channeling Kafka U., Warner recounts a meeting with her new dean:

A Tariff of Expectations would be imposed across the university, with 17 targets to be met, and success in doing so assessed twice a year. I received mine from the executive dean for humanities. (I met her only once. She was appointed last year, a young lawyer specialising in housing. When I tried to talk to her about the history of the university, its hopes, its “radical innovation,” she didn’t want to know. I told her why I admired the place, why I felt in tune with Essex and its founding ideas. “That is all changing now,” she said quickly. “That is over.”) My “workload allocation,” which she would “instruct” my head of department to implement, was impossible to reconcile with the commitments which I had been encouraged—urged—to accept.

Confused but, more deeply, defeated by this new regime, Warner resigned. But she continued her work for the Man Booker Prize committee, which, as it turns out, has proven rather clarifying.

Among the scores of novels I am reading for the Man Booker International are many Chinese novels, and the world of Chinese communist corporatism, as ferociously depicted by their authors, keeps reminding me of higher education here, where enforcers rush to carry out the latest orders from their chiefs in an ecstasy of obedience to ideological principles which they do not seem to have examined, let alone discussed with the people they order to follow them, whom they cashier when they won’t knuckle under.

As a genre, Quit Lit has a few organizing features. Its form tends to be personal and aggrieved. The university, like those vague but all-powerful institutions in Kafka’s texts, has been overtaken by an alien, usually bureaucratic-statist-inhumane power. And its content tends to be not just about the decline of the university but also about the impending demise of the humanities. By turning universities into vocational schools, we are robbing our children of humanistic forms of thought and the good that ensues. (If scientists wrote prose like humanists, maybe they would be writing about the end of the university and the collapse of science. NPR had a go at Quit Lit this past week in its series on the dramatic cuts in basic science funding and the effects they are having on future generations of scientists.)

As with all literary genres, Quit Lit has its predecessors. Before there were Rebecca Schuman and NeinQuarterly’s Eric Jarosinski, there was another German scholar experimenting in the genre, Friedrich Nietzsche. In 1872, just three years after he landed his first, and only, professorship at the University of Basel without even having finished his dissertation, Nietzsche delivered a series of lectures, On the Future of Our Educational Institutions, in the city museum. Before crowds of more than 300 people, Nietzsche staged a dialogue on the future of German universities and culture between two young students and a cantankerous old philosopher and his slow-witted but earnest assistant.

The grousing philosopher lamented the decline of universities into state-sponsored factories that produced pliant citizens and mindless, “castrated” scholars who cared not a bit for life. By the end of the lectures, it’s difficult to say whether Nietzsche thought there was a future at all for German universities. Nietzsche lasted a few more years in his position, resigning only when ill health forced him to. But he left an oeuvre that looked to the university and saw little but ruin.

As Nietzsche was writing, parts of the German university might not have been in decay, but they were in decline, the humanities in particular. Between 1841 and 1881, enrollment in philosophy, philology, and history within “philosophy faculties,” which comprised the core liberal arts fields, declined from 86.4 percent to 62.9 percent, whereas in mathematics and the natural sciences enrollments increased from 13.6 to 37.1 percent of all students matriculating at German universities. The mood among humanists was often such that they sounded quite a bit like the embattled literature professors of today. In academia, crisis is generally a matter of perception, and even in what now seems like a “golden age” for humanists, there was, in fact, a seismic shift for the humanities.

More recent forms of Quit Lit tend to lack a key feature of Nietzsche’s model, however. Nietzsche never conflated the humanities or humanistic inquiry with the university. For him, humanistic inquiry—and Nietzsche was deeply humanistic as his lifelong commitment to philology attests—transcended the institutional and historically particular shape of universities, which he saw as little more than extensions of a Prussian bureaucratic machine.

In what increasingly seems like a related genre, contemporary academics and intellectuals of all sorts have ostensibly been defending the humanities. But more often than not they actually defend certain forms of scholarship as they have come to be institutionalized in largely twentieth-century American research universities. Geoffrey Galt Harpham recently produced the most egregious but well-argued example of this tendency with The Humanities and the Dream of America. His basic thesis is that the humanities as they are now practiced were an invention of post–World War II American research universities. Similarly, Peter Brooks’s edited collection The Humanities and Public Life, with its focus on disciplines, scholarship, and the imperatives of the university, inadvertently echoes the same tendency. Both conflate the humanities with their departmental and institutional shapes in universities.

In the measured “yes but” prose of academic speak, Patrícia Vieira gives this spirit of conflation ethical shape in a review entitled “What are the Humanities For?”:

Debates about the “future of the humanities” frequently revolve around the suspicion that the humanities might not have one. Yet despite the direness of this anxiety—an anxiety especially personal for every academic worried about professional choices or mortgage payments—conversations on the topic are often dull, long-faced affairs. Every professor has sat through one or another of these depressing discussions. The conversation proceeds according to a familiar set of pieces: there are passionate apologias of work in philosophy, literature, history, and the arts; veiled criticism of the anti-intellectualism of higher education administrators and society at large; and vague pledges to do more interdisciplinary research and extend a fraternal hand to the social and natural sciences, who remain largely unperturbed by this plight. The whole thing wraps up with the reassuring conviction that, if the humanities go down, they will do so in style (we study the arts, after all), and that truth is on our side, all folded in a fair dosage of indulgent self-pity.

Vieira can’t imagine the future of the humanities beyond the anxieties of professors and the failures of university administrators. All she can muster is a few gentle and inveterately academic admonitions for her authors:

Brooks’s and [Doris] Sommer’s [The Work of Art in the World: Civic Agency and Public Humanities] books coincide in their desire to persuade those skeptical about the importance of the arts and the humanities of their inherent worth. The volumes set out to prove that these disciplines play a crucial role in public life and that they are vital to contemporary culture. Brooks’s collection often falls short of this goal by sliding into fatalistic rhetoric about the doomed future of humanistic scholarship—the very discourse the book attempts to combat—all while ignoring some of the vibrant new research in the field. In contrast, Sommer is overconfident in the power of the arts to tackle thorny socioeconomic and political problems. Both the despondent and celebratory approaches are symptomatic of the beleaguered state of the field, forced to justify its existence based upon technocratic principles that demand immediate results and fast returns. The humanities are constantly compelled to demonstrate practical results or hopelessly admit to lacking a concrete and immediate function, straitjacketed into foreign modes of valuation lifted from the empirical sciences. Neither a dying set of disciplines nor a panacea for social ills, the humanities remain a central form of human enquiry, in that they shed light on and question the tacit assumptions upon which our societies are based, outline the history of these values, and identify alternatives to the status quo.

Despite her attempts to cast the humanities as a form of “human” inquiry, Vieira is writing about a beleaguered and exhausted profession. There are only professors and their disciplines here. And they both are trapped, as Nietzsche would say, in a “castrated” passive voice: “The humanities are compelled ….” There are no agents in this drama, just put-upon, passive professors.

I am not suggesting that we should give up on universities. Universities, especially modern research universities, have long helped sustain and cultivate the practices and virtues central to the humanities. But just as German universities were becoming international paradigms, emulated from Baltimore to Beijing, Nietzsche made a fateful diagnosis. Those practices and virtues could ossify and wither in the arcane and self-justifying bowels of the modern, bureaucratic university. “Human inquiry,” in contrast, would live on.

We may well benefit from an exercise in imagination. Could the humanities survive the collapse of the university? I think so.


Big Data, Small Data, and the Ethics of Scale

This past summer, two Cornell University scholars and a researcher from Facebook’s Data Science unit published a paper on what they termed “emotional contagion.” They claimed to show that Facebook’s news feed algorithm, the complex set of instructions that determines what shows up where in a news feed, could influence users’ emotional states. Using a massive data set of 689,003 Facebook accounts, they manipulated users’ news feeds so that some people saw more positive posts and others more negative posts. Over time, they detected a slight change in what users themselves posted: Those who saw more positive posts posted more positive posts of their own, while those who saw more negative posts posted more negative ones. Emotional contagion, they concluded, could spread among people without any direct interaction and “without their awareness.”
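The logic of the design, stripped to its bones, can be sketched in a few lines of Python. (The word lists, posts, and scoring rule below are hypothetical illustrations of the general approach, not the researchers’ actual methods or data.)

```python
# Toy sketch of an emotional-contagion design: score the sentiment of
# posts written by two groups of hypothetical users whose feeds were
# filtered differently, then compare the group averages.

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post):
    """Fraction of positive words minus fraction of negative words."""
    words = post.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def group_mean(posts):
    """Average sentiment across a group's posts."""
    return sum(sentiment(p) for p in posts) / len(posts)

# Hypothetical posts by users whose feeds showed more positive content...
saw_positive = ["what a wonderful day", "love this great weather"]
# ...and by users whose feeds showed more negative content.
saw_negative = ["feeling sad today", "awful terrible news"]

print(group_mean(saw_positive) > group_mean(saw_negative))  # True
```

The study’s actual pipeline was far more elaborate, but the shape is the same: an unseen manipulation of inputs, followed by a statistical comparison of the emotional tone of outputs.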

Some critics lambasted Facebook for its failure to notify users that they were going to be part of a giant experiment on their emotions, but others simply thought it was cool. (My Infernal Machine colleague Ned O’Gorman has already outlined the debate.) Sheryl Sandberg, Facebook’s COO, just seemed confused. What’s all the fuss about, she wondered. This latest experiment “was part of ongoing research companies do to test different products.” Facebook wasn’t experimenting with people; it was improving its product. That’s what businesses do, especially digital business with access to so much free data. They serve their customers by better understanding their needs and desires. Some might call it manipulation. Facebook calls it marketing.

But, as technology writer Nicholas Carr points out, new digital technologies and the internet have ushered in a new era of market manipulation.

Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Businesses of all sorts, from publishers to grocery stores, have long tracked the habits and predilections of their customers in order better to influence what and how much they consume. And cultural critics have always debated the propriety of such practices.

Eighteenth-century German scholars debated the intellectual integrity of publishers who deigned to treat books not only as sacred vessels of Enlightenment, but also as commodities to be fashioned and peddled to a generally unenlightened public. Friedrich Nicolai, one of late eighteenth-century Prussia’s leading publishers, described the open secrets of the Enlightenment book trade:

Try to write what everyone is talking about . . . If an Empress Catherine has died, or a Countess Lichtenau fallen out of favor, describe the secret circumstances of her life, even if you know nothing of them. Even if all your accounts are false, no one will doubt their veracity, your book will pass from hand to hand, it will be printed four times in three weeks, especially if you take care to invent a multitude of scandalous anecdotes.

The tastes and whims of readers could be formed and manipulated by a publishing trade that was in the business not only of sharing knowledge but also of producing books that provoked emotional responses and prompted purchases. And it did so in such obvious and pandering ways that its manipulative tactics were publicly debated. Immanuel Kant mocked Nicolai and his fellow publishers as industrialists who traded in commodities, not knowledge. But Kant did so in public, in print.

These previous forms of market manipulation were qualitatively different from those of our digital age. Be they the practices of eighteenth-century publishing or mid-twentieth-century television production, these forms of manipulation, claims Carr, were more public and susceptible to public scrutiny, and as long as they were “visible, we could evaluate them and resist them.” But in an age in which our online and offline lives are so thoroughly intertwined, the data of our lives—what we consume, how we communicate, how we socialize, how we live—can be manipulated in ways and to ends about which we are completely unaware and we have increasingly less capacity to evaluate.

Sheryl Sandberg would have us believe that Facebook and Google are neutral tools that merely process and organize information into an accessible format. But Facebook and Google are also companies interested in making money. And their primary technologies, their algorithms, should not be extracted from the broader environment in which they were created and are constantly tweaked by particular human beings for particular ends. They are pervasive and shape who we are and who we want to become, both individually and socially. We need to understand how to live alongside them.

These are precisely the types of questions and concerns that a humanities of the twenty-first century can and should address. We need forms of inquiry that take the possibilities and limits of digital technologies seriously. The digital humanities would seem like an obvious community to which to turn for a set of practices, methods, and techniques for thinking about our digital lives, both historically and conceptually. But, to date, most scholars engaged in the digital humanities have not explicitly addressed the ethical ends and motivations of their work. (Bethany Nowviskie’s work is one exemplary exception: here and here.)

This hesitance has set them up for some broad attacks. The recent diatribes against the digital humanities have not only peddled ignorance and lazy thinking as insight; they have also, perhaps more perniciously, managed to cast scholars interested in such methods and technologies as morally suspect. In his ill-informed New Republic article, Adam Kirsch portrayed digital humanities scholars as morally truncated technicians, obsessed with method and either uninterested in or incapable of ethical reflection. The digital humanities, Kirsch would have us believe, is the latest incarnation of the Enlightenment of Adorno and Horkheimer—a type of thinking interested only in technical mastery and unconcerned about the ends to which knowledge might be put.

Most of the responses to Kirsch and his ilk, my own included, didn’t dispute these more implicit suggestions. We conceded questions of value and purpose to the bumbling critics, as though to suggest that the defenders of a vague and ahistorical form of humanistic inquiry had a monopoly on such questions. We conceded, after a fashion, the language of ethics to Kirsch’s image of a purified humanities, one that works without technologies and with insight alone. We responded with arguments about method (“You don’t know what digital humanities scholars actually do.”) or history (“The humanities have always been interested in patterns.”).

In a keynote address last week, however, Scott Weingart encouraged humanities scholars engaged in computational analysis and other digital projects to think more clearly about the ethical nature of the work they are already doing. Echoing some of Carr’s concerns, he writes:

We are at the cusp of a new era. The mix of big data, social networks, media companies, content creators, government surveillance, corporate advertising, and ubiquitous computing is a perfect storm for intense influence both subtle and far-reaching. Algorithmic nudging has the power to sell products, win elections, topple governments, and oppress a people, depending on how it is wielded and by whom. We have seen this work from the bottom-up, in Occupy Wall Street, the Revolutions in the Middle East, and the ALS Ice-Bucket Challenge, and from the top-down in recent presidential campaigns, Facebook studies, and coordinated efforts to preserve net neutrality. And these have been works of non-experts: people new to this technology, scrambling in the dark to develop the methods as they are deployed. As we begin to learn more about network-based control and influence, these examples will multiply in number and audacity.

In light of these new scales of analysis and the new forms of agency they help create, Weingart encourages scholars, particularly those engaged in network and macroanalysis, to pay attention to the ways in which they mix the impersonal and individual, the individual and the universal. “By zooming in and out, from the distant to the close,” he writes, digital humanities scholars toggle back and forth between big and small data. Facebook, Google, and the NSA operate primarily at a macro level at which averages and aggregates are visible but not individuals. But that’s not how networks work. Networks are a messy, complex interaction of the micro and macro. They are products of the entire scale of knowledge, data, and being. Social networks and the ideas, actions, and interactions that comprise them emerge between the particular and the universal. What often distinguishes “the digital humanities from its analog counterpart,” writes Weingart, “is the distant reading, the macroanalysis.” But what binds humanities scholars of all sorts together is an “unwillingness to stray too far from the source. We intersperse the distant with the close, attempting to reintroduce the individual into the aggregate.” In this sense, scholars interested in a digital humanities are particularly well suited to challenge basic but dangerous misconceptions about the institutions and technologies that shape our world.

If we think of Facebook and Google and the computations in which we are enmeshed merely as information-processing machines, we concede our world to one end of the scale, a world of abstracted big data and all-powerful algorithms. We forget that the internet, like any technology, is both a material infrastructure and, as Ian Bogost has put it, something we do. Every time we like a post on Facebook, search Google, or join the network at a local coffee shop, we participate in this massive, complex world of things and actions. We help form our technological world. So maybe it’s time we learn more about this world and remember that algorithms aren’t immutable, natural laws. They are, as Nowviskie puts it, rules and instructions that can manipulate and be manipulated. They are part of our world, bound to us just as we are now to them.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

The Author in the Age of Digital Distribution


In the Amazon v. Hachette battle, authors seem to have assumed the role of largely passive onlookers, idling nervously on the sidelines until the two giants come to an agreement on the price of ebooks. On the Amazon front, the myth that the online retail giant is a warm, author-friendly company was dealt a further blow when J. K. Rowling became a casualty in the battle. As for Hachette, the determination to sell ebooks for $11.99 appears to be an increasingly quixotic stand against the tides of the modern literary marketplace.

But what if authors were able to gain greater agency in deciding where, how, and at what price their books were sold? What if authors didn’t need to agree to Amazon’s exclusive, one-size-fits-all royalty and sales agreement or sign away their rights to publishers? We might not be too far away from a shared open-source platform on which authors could sell their own work, cutting Amazon out of the equation and reducing the negotiating power of publishers. The recent protest letter signed by 900 authors shows that the time is ripe for a new, author-centric paradigm.

Amazon isn’t solely to blame for the plummeting prices assigned to books. The distribution of books has become so efficient that its profitability has been drastically reduced—the book market has undercut itself. At the same time, the appeal of social and visual media has contributed to the rather sluggish status of literary reading. In this media-saturated climate, the most successful authors are the ones who can ruthlessly grab and hold readers while triumphing across media. The Hunger Games, for instance, has a plot based on reality television and televised spectacle: a text that begs to be filmed. Authors have never faced so much competition in the form of free media, nor have they ever encountered such low barriers to distribution of their work. Rather than the death of the author, this ethos dictates the birth of scaled, income-based, and potentially author-driven digital pricing and distribution models.

We can hate Amazon as much as we want, but the company has turned publishing on its head. Although most self-published authors earn very little, there are now few barriers to entry and authors can earn, according to Amazon Kindle Direct Publishing, “up to seventy percent in royalties.” Yet, Amazon offers only an illusion of authorial control. In the background, Amazon programmers ensure that the “results generated by Amazon’s search engine are partly determined by promotional fees.” Furthermore, Amazon’s Kindle Direct Publishing agreement restricts authors from selling their digital books anywhere else.

However, many writers and artists are bucking Amazon’s monopoly. Tech writer Gwen Bell has been selling her writing directly from her website for years. Deadmau5 is the latest artist to go where Louis C.K. in 2011 and Stephen King back in 2000 went before: He announced on Reddit his intention to release his music through his own digital subscription service with different price points. Even the reclusive comic book artist Alan Moore has raised more than £60,000 through Kickstarter and proclaimed that crowdfunding will shape the future. The ability to demonstrate the monetary demand for a book even before it is published puts a good deal of power in the hands of authors, who may be able to bargain harder to control their own rights.

Open-source, distributed peer-to-peer networks and free software can easily link together authors’ individual sites and offerings. In an author-centric model, editorial work, marketing, and the stamp of the respected publisher would be secondary to the agency and established fanbase of individual authors, who would boast greater control of their works and proceeds. Rather than publishers and Amazon’s search algorithms acting as the filters, listings could be based entirely on user ratings in a manner similar to sites such as Reddit and Buzzfeed. In fact, we could even take this direct-sales model a step further and imagine a future in which books are crowd-edited or collaboratively written, with the author changing the text in response to readers’ comments.

Whether a work is distributed by a company or an artist, pricing models must be adjusted to the reality of digital distribution. More specifically, each product should have a variety of price points. For example, I live in Morocco, a “developing” country, and in my city, Rabat, there is an enormous market for pirated DVDs. One can walk through the old market and buy any movie or TV series for 6 dirhams, or about 73 cents. That price is appropriate for a country in which most citizens can’t afford to buy a full-priced DVD or go to the movie theater. If movie distribution companies or individual artists adjusted their pricing models, then they could compete in and capture developing markets. We have entered a Brave New World in which one price cannot serve all, and sales points need to be calculated and contextually negotiated. And even beyond flexible price points, there is the possibility of ongoing “public patronage” in which the public “subscribes” to an individual artist. Surely, it is only a matter of time before sites like Gittip, which allows users to donate a minimum of 25 cents a week to members of the technorati, expand to cover authors and artists.

When, more than three quarters of a century ago, social critic Walter Benjamin wrote The Work of Art in the Age of Mechanical Reproduction, he claimed that reproducing a work would make it lose its uniqueness and context within an artistic tradition, its “aura.” Whether a digitally reproduced work of art or writing lacks uniqueness and resonance is perhaps debatable, but what is clear is that the process of digital distribution has generated new opportunities for the creator and her works. Concerning the question of artistic tradition, the internet and digital distribution have brought us even closer to artists and the context of their work—we have access to their daily lives through Twitter, and we can often tweet questions to them or participate in a Reddit Ask Me Anything thread. While companies and managers can do a lot to position new talent, many rising stars are crowd-propelled in a way that was unimaginable a decade ago. We are in an age in which someone like pop singer Rebecca Black can be transformed into a celebrity, praised, attacked, and then abandoned within a few months. In fact, reality television and celebrity gossip blogs are opposite sides of the same impulse—to elevate reality and pull down celebrity until there is no distinction. Just as Benjamin foresaw, “At any moment the reader is ready to turn into a writer,” or the patron into an artist.

Authors no longer need publishers to distribute their work, to gauge readers’ interest, or to build a fan base, and many authors have already decided that they don’t need Amazon. This is not to say that authors don’t need publishers or a strong online distribution platform. Rather, the few authors who can guarantee their sales numbers have a lot more power when it comes to releasing or holding onto their rights. One example is Hugh Howey, a self-published author who turned down a seven-figure deal with Simon and Schuster in order to retain control of his digital rights (he instead “settled” for six figures). In this age of increased competition and pricing, authors might negotiate more intensely with their readers and their publishers to maintain control over the pricing and rights for their work.

But Hugh Howey might well be the exception. What if, as Scott Turow fears, authors cannot adjust to the challenges of online distribution and piracy, leaving us with a system that rewards a writerly 1 percent of a few winners and lots of losers? One possible consequence is that idealistic, imaginative, and socially engaged members of our generation will feel compelled to make a more direct and practical impact on the world—rather than writing about social inequality, they might be “forced” to take jobs as policy-makers, for example. In fact, with the collaborative and interdisciplinary mentality behind many crowdfunded projects, such possibilities have already emerged. Art expressing itself in the world: That might not be such a bad outcome, after all.

Sarah Ruth Jacobs is a founding editor of the open-access Journal of Interactive Technology and Pedagogy and a doctoral candidate in American literature at the Graduate Center of the City University of New York.

Photograph: Wikimedia Commons

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

Twitter, Racism, and the “Safe” Campus

Anyone following journalistic coverage of academia lately is likely to have heard of the divisive case of the “de-hiring” of Steven Salaita at my home institution, the University of Illinois. Reports have appeared in the New York Times and the Chronicle of Higher Education, although blogs have offered the more substantive coverage. I’ll leave it to other sites to relate the longer story, but the short version is that in early August, Chancellor Phyllis Wise notified Indigenous Studies scholar Steven Salaita that despite his accepting an offer of a tenured position in American Indian Studies last year as the result of a search, she would not be recommending his hire for final approval by the Board of Trustees.

Such approval is typically a pro forma step in the hiring process, as demonstrated by the fact that it usually happens after the start of a new faculty member’s first semester of employment. In this case, however, the Chancellor informed Salaita via letter that she felt the Board was “unlikely” to approve his hire. More recently, she has stated that she felt the “humane” thing to do, knowing the Board’s disapproval, was to end his candidacy before the start of the semester, rather than let him move to Illinois and begin teaching, only to be retroactively fired after the Board’s September meeting.

Journalistic coverage has revealed that the Chancellor’s decision came on the heels of a flurry of correspondence between campus leaders, trustees, and regular donors. At issue in these exchanges is the question of whether Salaita’s strident tweets on the recent Gaza attacks revealed the scholar of Palestine and Indigenous Studies as anti-Semitic, or at least “uncivil” and therefore unfit to teach. With appeals to protecting campus “civility,” the Chancellor decided to rescind Salaita’s offer, leaving the professor jobless, his students without an instructor, and a campus in turmoil over the implications for academic freedom and shared governance.

The University of Illinois is currently as torn apart as I’ve seen it (and that’s saying a lot given our past scandals). Thousands of faculty elsewhere have boycotted the institution, and there are some signs that we might be on the road to censure by professional organizations. Boycotts were perhaps especially probable given that Salaita himself is one of the leaders of the Boycott, Divestment and Sanctions Movement against Israel. (Indeed, his efforts on that work earned him the attention of bloggers who likely first alerted campus leaders to his Twitter feed.)

There are many dimensions of this story to sort through—donor influence, political corruption, Zionism, racism, and definitions of free speech, for starters—and it also comes at a time when many faculty are moving to unionize. Somewhere near the center is the matter of social media, and how we use it.

Twitter drew attention on this campus last year, when students posted racist tweets against Chancellor Wise in response to a snow day decision, and the campus responded with a new “Inclusive Illinois” campaign. What is the proper context for even citing a tweet, let alone understanding the import or effects of 140 characters delivered as part of a longer, partially public, partially private exchange?

Many are approaching this matter through lengthy legal arguments and examination of precedents. This may be exactly what Salaita needs to claim wrongdoing in court, or what the faculty need to claim breach of shared governance. When I sat down to write the Trustees, however, I wanted to boil the matter down still more.

The most generous description of why the Chancellor and Trustees de-hired Salaita is that they saw a threat to the well-being of the campus as a space supportive of inclusion and diversity. I know of many who are very surprised to find themselves, as supporters of the Chancellor’s decision, on the receiving end of accusations of racism. This whole episode is but the latest in a longer history of inadequate responses to white racism on this campus—particularly in light of our racist mascot “Chief” Illiniwek, banned by the NCAA and the Chancellor, but kept alive by fans and continually present on campus.

My letter to the Trustees thus took the form of a slideshow (one of my favorite forms), six slides to explain how their efforts to make the campus “safe” in fact contribute to the larger problems of racism. Below is my “PowerPoint on power.”


Six Questions on the Salaita Case from Complex Fields

Kevin Hamilton is Associate Professor of Art and Design at the University of Illinois, Urbana-Champaign. Along with regular Infernal Machine contributor Ned O’Gorman, he is currently at work on a history of film in American nuclear weapons development.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

Apple Watch and the Quantified Self

Today Apple unveiled its latest technological creation, the Apple Watch, a wearable computer that tracks not only time but your every step, heartbeat, and calorie. With its latest product, Apple joins the growing field of devices and apps, such as Fitbit, Basis, and My Fitness Pal, that track and record our activities and biostatistics. Given Apple’s commercial influence, the Apple Watch may well turn the nascent Quantified Self (QS) movement into a cultural mainstay delivering “self knowledge through numbers.”

Apple Watch

Most QS practices track health-related activities such as calorie intake, exercise, and sleep patterns, but they are increasingly used to document and track experiences of grief, exploration, and productivity. And tracking apps and devices are even making their way into unexpected areas of life experience. Attempts to measure the soul, data point by data point, for example, are increasingly common. Just last January a Menlo Park pastor teamed up with a University of Connecticut sociologist to create SoulPulse, which, as Casey N. Cep explains, is

 a technology project that captures real-time data on the spirituality of Americans. SoulPulse attempts to quantify the soul, an unbodied version of what FitBit, the exercise-tracking device, has done for the body. After filling in a brief intake survey on your age, race, ethnicity, education, income, and religious affiliation, SoulPulse contacts you twice a day with questions about your physical health, spiritual disciplines, and religious experiences. Each of the surveys takes less than five minutes to complete.

SoulPulse encourages users to learn about their “spirituality” through the power of big data and digital automation. This may sound crazy, but what’s the difference between tracking your daily prayer life with an app and doing so with another set of repeatable instructions, such as the Benedictine Rule and its set of daily readings and reminders to ponder God?

Many aspects of the QS movement are anything but new. Western cultures have long maintained practices that document behaviors and experiences in order to discipline ourselves. Capitalism and quantifying the self have been intimately linked for some time. Early accounting practices allowed merchants to track the results of their business transactions, to understand the consequences of their behavior, and to modify it in the future. Perhaps a merchant had purchased too much grain and it spoiled before it could be sold. In the following year, the same merchant could alter his practice based on this cataloged information. And Frederick W. Taylor’s scientific management theories relied on precise measurements of workers’ efficiency.

And more in the tradition of St. Benedict, people have long kept track of their spiritual lives. Benjamin Franklin dutifully recorded his success in adhering to a list of thirteen virtues each day. Diaries and journals have long been witness not just to bad poetry but to detailed lists of eating and sleeping habits. Weight Watchers, founded in 1963, turned such practices into a business with its point system.

Despite such similarities, tracking devices such as Apple Watch are not the same as eighteenth-century diaries. The former have the potential to revolutionize the health sector and facilitate better care, but what happens when they don’t just give away our desires on Facebook (I like this!) but open up a one-way data stream on our bodies? How long will it take for all that personal data to make its way to our insurance companies? (The now-common annual biometric screenings will seem quaint by comparison.)

Self-reflection and personal development are broad cultural values. But what happens to us when we focus on aspects of ourselves that are easily recorded and converted into numbers? QS enthusiasts advocate for the expansion of tracking devices from the private sphere into the work environment, where they might provide insights on employee selection, promotion, and productivity. How will tracking social and personal behavior, such as how many times one smiles during the day, alter work environments and those who inhabit them?

Digital practices and techniques for tracking and disciplining the self differ from their analogue and print predecessors for several reasons. First, what they can track has expanded. Benjamin Franklin most likely didn’t know the rate of his perspiration. Second, the precision with which data is measured and recorded is continually increasing. Third, tracking devices and apps are increasingly frictionless: They do their job with minimal interruption and effort on the part of the user. Finally, the digital format of the data represents a marked difference from records of the past. Many of these tracking devices easily connect to apps and programs that analyze the data, dictating to the individual a pre-programmed assessment of success or failure. The digital nature of the information also makes it easily available and transferable.

These new developments, and the manufacture and dissemination of these technologies and apps through popular and trusted brands such as Apple, are likely to expand the degree to which individuals come to imagine themselves, their bodies, and their habits through and as numbers. As we continue into our quantified future, will these new digital practices alter what it means to be a good person, a successful person, or an efficient person? Will we be able to juke the numbers? Just because the technology is intended to track behavior and facilitate modification of that behavior doesn’t mean that it won’t be put to other purposes. What will we make of our new digital tracking practices and the self that we come to know through numbers?

Claire Maiers is a graduate student in the Department of Sociology at the University of Virginia.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.

Living With Ferguson

In April 1964, Malcolm X gave what would become one of the best-known speeches of the civil rights era, “The Ballot or the Bullet.” Speaking of the plight of blacks in America, Malcolm X asked an audience at Cory Methodist Church in Cleveland, Ohio, “Where do we go from here? In my little humble way of understanding it, it points either to the ballot or the bullet.”

I have thought a lot about Malcolm X’s proposition since Michael Brown was shot dead in Ferguson, Missouri, not far from where I grew up.

Malcolm X in 1964 by Herman Hiller (photo from Wikimedia Commons)

To this day, Malcolm X’s speech is all too often described as a call to violence and little more. But it was actually one of the more coolly pragmatic and tactical arguments of the era. It was as much a call to “the ballot” as it was to “the bullet.” Indeed, based on a simple word count, Malcolm X seemed more interested in the power of the ballot in 1964 than he was in the power of the bullet. Unlike the bullet, the ballot offered a straightforward electoral strategy:

These 22 million victims are waking up. Their eyes are coming open. They’re beginning to see what they used to only look at. They’re becoming politically mature. They are realizing that there are new political trends from coast to coast. As they see these new political trends, it’s possible for them to see that every time there’s an election the races are so close that they have to have a recount. They had to recount in Massachusetts to see who was going to be governor, it was so close. It was the same way in Rhode Island, in Minnesota, and in many other parts of the country. And the same with Kennedy and Nixon when they ran for president. It was so close they had to count all over again. Well, what does this mean? It means that when white people are evenly divided, and black people have a bloc of votes of their own, it is left up to them to determine who’s going to sit in the White House and who’s going to be in the dog house.

And so “The Ballot or the Bullet” advocated hitting, straight on, the racist “American system” (Malcolm X’s just phrase) through traditional political means.

To be sure, if the ballot failed, the bullet, he suggested, would be necessary—but as the political equivalent of the ballot, rather than its extreme alternative. In Malcolm X’s speech, both the ballot and the bullet are blunt political technologies capable of changing the “American system.” “The ballot or the bullet. If you’re afraid to use an expression like that, you should get out of the country.”

Of course, there is irony here, but Malcolm X was on to something. Part of American optimism is this belief that the “system” can be changed with the right “inputs,” be they votes or violence. In the recent Ferguson protests, we saw this play out. A story on National Public Radio about white residents of Ferguson featured a man complaining, “There are so many people here that don’t vote, but they think they have the rights to everything in the world. If you don’t vote, you aren’t given all the gifts of our society.” There were also more than a few protestors at Ferguson ready to make hell with guns and fire and so take on the American system of white supremacy.

But here’s the problem with the ballot and the bullet as the only two options in a struggle for justice: What we call “racism,” what we call “oppression,” what we call “segregation,” what we call “white supremacy”—these are all systemic and historic injustices, the large-scale realities of the “American system.” As we are seeing once again in Ferguson, the use of force and the right to vote are part of an American system that has, to date, regularly made people of color second-class citizens, if citizens at all.

Despite an American optimism that encourages us to believe otherwise, neither the ballot nor the bullet is capable of targeting systemic and historic injustices. For both ballots and bullets are themselves components of the system, and—more broadly—of a systemic approach to political problem-solving that operates only in terms of inputs and outputs (where ballots and bullets are inputs supposedly capable of creating new outputs). In fact, in another irony, some white Americans would today agree with Malcolm X, tactically speaking. For some white Americans, “gun rights” have joined “voting rights” to represent the be all and end all of American democracy. If you don’t like it, vote (making sure you show your ID). If your vote is ineffective, go buy a gun.

The problem, however, is not just the ways electoral politics and force are implicated in systemic injustices. It is also one of scale and scope: In approaching systemic and historic injustice, we approach the wrongs of what St. Augustine long ago, in a helpful distinction, referred to as the ills of the “world.” By the “world,” Augustine meant not “inevitable” or “given,” but rather that accumulation over time and space of injustices—what we would today call systemic and historic injustices. The “civitas,” or polity, Augustine argued, lacks any direct political means by which to address injustices of such scope and scale. As a St. Louis friend recently lamented by email, “It is so disheartening to live in St. Louis, to be immersed in 24–7 coverage of the unrest and outrage in Ferguson, and yet to know that the structural and systemic problems that create the context for incidents like this seem so intractable.”

Malcolm X, in “The Ballot or the Bullet” at least, does not seem all that far off from Augustine’s position. Although he calls for the ballot or the bullet, he does so with pessimism and irony. The sort of equality that seems most within reach is that old “American” one of life for life, tooth for tooth, eye for eye: “It’ll be Molotov cocktails this month, hand grenades next month, and something else next month. It’ll be ballots, or it’ll be bullets. It’ll be liberty, or it’ll be death. The only difference about this kind of death — it’ll be reciprocal.”

The gap between local political solutions and the systemic injustice of history is one that American optimism tends to ignore (to its peril). I have no solutions for bridging the gap, at least not directly. One of the most common solutions in the imaginary of power-hawks and revolutionaries alike has been catastrophe, by which — supposedly — history and system can be overturned and we can begin anew. But that’s a fool’s errand. Catastrophe is not a recipe for a more just order, but instead for the reordering of injustices.

A protestor in Ferguson, August 15, 2014 (photo from Wikimedia Commons)

But here’s a hopeful paradox to fend off the pessimism at least a little: The ballot and the bullet are component parts of systems of injustice, and yet the ballot and bullet have become, in certain moments, representative acts of public judgment upon the very same systems of injustice in which they participate (and quite often without any intention on the part of the “voter” or “shooter.”)

Ferguson is a case in point. What Officer Darren Wilson did to Michael Brown itself became a representative act of public judgment against blacks on behalf of a white policing system. The arrival of protestors on the streets, who named and decried white supremacy and police violence, also became a representative act of public judgment. In turn, the arrival of police in armored vehicles with combat weapons, representing the policing system not only as predominantly white but also as deeply militarized, became both an act of public judgment on behalf of the system and itself subject to other acts of public judgment . . . not the least in Trooper Ron Johnson’s walking with the protestors, in marked contrast to the tactics of the local police. Far from a mere police “calming tactic,” Johnson’s walk was above all an act of public judgment upon a predominantly white, militarized policing system.

Each of these events was or became a public act, and each an act of judgment. What made Ferguson so remarkable — and what makes it remarkable still — is the succession of such public acts of judgment, seemingly running out of control. This is also what made Ferguson so frightening to so many. But they were, I think, less “out of control” than they were regulated by a system of publicity (a word we need to recover from the publicists) operating independently of (though certainly not entirely apart from) the “inputs” of America’s racially tilted system. That is, these representative acts of public judgment were meaningful in ways that transcend the unjust system. The world, as the protestor in the photograph above declared, was watching, and that was significant.

Although I am loath to get into the well-worn business of contrasting Malcolm X with Martin Luther King, Jr., I do think that the particular political virtue of nonviolent tactics by MLK and others was found, and still is found, in their creation of situations conducive to these representative acts of public judgment (which need not always be nonviolent). It is possible, within the sphere of publicity, to offer acts of judgment that “the system” cannot contain or control.

I do not know if the representative acts of public judgment that Ferguson gave us will have systemic effects for the better. That is, I don’t know if they will lead to structural and systemic changes that will mitigate injustices. There is good reason to remain pessimistic. Ferguson did, however, give us a rare public window into the historic racist injustices of the “American system,” and offered its judgments, judgments with which we will have to now live. And living with such judgments, I think, can only be for the better.

Ned O’Gorman, associate professor of communication at the University of Illinois, Urbana-Champaign, is the author of Spirits of the Cold War: Contesting Worldviews in the Classical Age of American Security Strategy and the forthcoming The Iconoclastic Imagination: Image, Catastrophe, and Economy in America since the Kennedy Assassination.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.