
Shame: An Argument for Preserving “Those” Monuments

Two of the Clark Mills equestrian statues of Andrew Jackson, Lafayette Park, Washington, DC (left) and Jackson Square, New Orleans (right); photos: Leann Davis Alspaugh

On July 4, around 8 am, the French Quarter was wild with heat. I walked up St. Peter’s and took a left on Bourbon, where street cleaners hosed off the previous evening’s bacchanalia of regret. At Canal, I went left and by the time I reached St. Charles my glasses were fogged with humidity. I crossed Poydras and went to Camp Street. From there, I went right and my pulse quickened, anticipating the famous absence I’d traveled here to witness. I was making this walk well after the press had left town and well before white supremacists terrorized Charlottesville, Virginia, to experience the empty plinth where a statue of General Robert E. Lee once stood.

But then my geography proved rusty. I was expecting to see the conspicuous display of emptiness about two blocks straight ahead. My body tensed in anticipation. But crossing Andrew Higgins Street, I looked right to make sure all was clear, and it was in that nanosecond that I unexpectedly got a direct view of the nothingness that was indeed something and—a reaction I don’t typically have—I gasped.

The image moved me: Robert E. Lee, that icon of the Confederacy, that bronze symbol that once lorded several stories over New Orleans, was, after 132 years, gone, relegated (for now) to municipal storage. And there I stood, a white person who, by virtue of my whiteness, benefits daily from the legacy of slavery, and took in this poignantly empty column, feeling the power of history in a way I’d never before felt it.

Weeks earlier, with rare eloquence, Mitch Landrieu, the mayor of New Orleans, drove home the emotion in a remarkable speech. The Times-Picayune called it “one of the most honest speeches on race” delivered by “a white southern politician.” Landrieu, in the aftermath of the statue’s removal from Lee Circle, explained to a city that’s 62 percent black: “These statues are not just stone and metal. They are not just innocent remembrances of a benign history. These monuments purposefully celebrate a fictional, sanitized Confederacy; ignoring the death, ignoring the enslavement, and the terror that it actually stood for.” A lot of people said it, and I agreed: Amen.

And so there it was: a seamless convergence of media, morality, and message. The removal of a city’s offensive Confederate-themed statues, a speech that will be anthologized, the humility of a public figure, a frank look at the reality of racism, and now this eerie lone column, a stark and unifying exclamation point on a Southern landscape. And yet, in spite of myself, something in my gut told me that General Lee should have stayed.

The Problem with Jackson

Before leaving the French Quarter for Lee Circle, I spent a few moments in Jackson Square contemplating the lone statue of Andrew Jackson. As an historian, I knew Jackson fairly well. I knew he was a slaveholder. I knew he was a man who built his identity around killing Indians. I knew that his reputation as an ethnic cleanser helped get him ousted from the twenty-dollar bill.

Knowing all this, I wondered how this swaggering crusader for racial purity still sat lionized atop his rearing horse, tipping his hat to the city he saved at the Battle of New Orleans, the city that, as it purged its obvious symbols of the Confederacy, refused—as Landrieu did—to include in that purge a figure who helped make the Confederacy possible.

There’s no question that removing a Confederate-era statue—a monument put in place to remind blacks that they would never have equal rights—is a symbolic expression of justice. My own reaction to Lee’s absence proved it. But the persistence of Jackson led me to realize something was wrong. It made me wonder if there might be something too easy in the symbolism of Lee’s removal, an ease that exonerated white progressives from doing something far more challenging and consequential for the cause of racial justice than tearing down statues, spitting on them, and sending out virtue signals on Instagram.

After my Jackson-to-Lee walk, I met with Richard Marksbury at a coffee shop near Tulane University. Marksbury, sixty-six and white, is a cultural anthropologist who directs the university’s Asian Studies Program. Of all the arguments marshaled against the statue removals, Marksbury’s stood out for their rigor and for the manner in which he delivered them—not as a caveat-generating academic, but as an activist affiliated with the all-volunteer Monumental Task Committee, a group founded in 1989 to “restore, repair, and forever maintain all the monuments located in the city.”

Marksbury’s case was this: The white citizenry of New Orleans agreed in 1884 to celebrate Robert E. Lee by erecting a monument to his legacy. Even if that choice was, in Landrieu’s words, on “the wrong side of history and humanity,” it was made without ambiguity by racists interested in furthering the myth of the Lost Cause. That fact alone—history left the monument there as a kind of primary source for us to interpret—legitimates its right to stay put. “If something is there for 130 years,” Marksbury said, “it’s just part of the landscape.”

I thought, no—not valid. The notion that a memorial should be preserved because, at some point in time, an empowered group of citizens deemed an evil ideology worthy of memorializing only seems reasonable if history is apolitical, unemotional, and entirely relegated to the past. But history is none of those things. Infused in the heated politics of daily life, history is what left me in shock in the shadow of Lee’s empty pedestal. History is what turned Charlottesville into a war zone. History burns those who get close.

But Marksbury, if only in an indirect way, had a point. He directed my attention to Audubon Park. There, he explained, “you will find a statue of the Buffalo Soldiers.” He said, “Do you know what those soldiers did to the Native Americans? They mutilated them. So, what about the feelings of Native Americans? If you’re going to take down Robert E. Lee, you’ve got to take down the Buffalo Soldiers.”

And as for Jackson, he noted that when Take ’Em Down Nola—the organization dedicated to removing New Orleans’s racially offensive monuments—demonstrated for Jackson’s removal, they were absolutely right to do so. “Landrieu,” he said, “could have appealed to the emotions of the Native American community.” But he “remained silent.” It was a silence that kept ringing in my ears.

Sloppy History

Marksbury’s argument does not condemn the removal of Confederate-themed monuments. It condemns inconsistency. One can argue that the NOLA removals were history in the making and that, in time, the moral logic underlying that approach would be equally applied to other symbols of racism—including Andrew Jackson and many others. That would be good (if extremely ambitious) history. But that’s not what was happening in New Orleans. The mayor and city council removed Lee and other Confederates while explicitly refusing to touch the image of Jackson. It was sloppy history.

Politicians can get away with that. But professional historians cannot. When I exchanged emails with Victoria Bynum, author of several books on the myth of the Lost Cause as well as The Free State of Jones: Mississippi’s Longest Civil War (which inspired a 2016 Hollywood movie), she was adamant that the public expression of history be scrupulously accurate and consistent. “I so fervently want the true history of the Civil War understood at the popular level,” she wrote. “And it saddens me that so many Americans, and not just Southerners, actually believe that the Civil War was not caused by slavery.”

Of course, she’s right. But was removing statues of Confederate generals the right way to achieve historical accuracy in public space? (Bynum, for the record, suggested the monuments go into a museum.) Again, it could be. If we honestly intended to take the logic underlying Lee’s removal to its necessary extreme, then we might get on with the massive project of de-anthologizing the public landscape of all racist vestiges. Or, acknowledging the difficulty of consistency on this point, we might instead rethink the logic behind statue removals altogether.

From the Bottom Up

One transformation that has touched the entire historical profession over the past two generations is the idea that we should do history “from the bottom up.” What kind of history was done in New Orleans when the statues came down? In a sense, it was top down. You had a white man who, largely through his own initiative and the power of his position as mayor, swept historical markers from their pedestals. Landrieu’s speech was grand. But shouldn’t skepticism be stoked when a May 26, 2017, editorial predicts that “as Abraham Lincoln’s remarkable 1860 Cooper Union Speech about slavery propelled the little-known Illinois lawyer toward the Republican Party’s presidential nomination, so might Landrieu’s Gallier Hall speech prompt Democrats to give the Louisiana mayor a closer look”? We should ask: Who tangibly benefits when Lee goes missing and General Jackson—of Battle of New Orleans fame—stays put?

Three other Confederate monuments came down around the same time as the Lee statue, and some lesser-known citizens suggested a bottom-up approach of their own. News reports called their behavior criminal acts of vandalism. But one might more charitably label them interpretations of public history made by the disenfranchised. At the base of the Robert E. Lee monument, someone spray-painted the phrase “white supremacy is a LIE” in sharp black letters. There we go, I thought.

Such a brutally accurate interpretation—obviously illegal and, if allowed to run amok, pointless—was, in its singularity of expression and incisive moral commentary, a far greater challenge to the myth of the Lost Cause than the nothingness that now rests on the pedestal. Plus, the motives in this case were clear—to bring truth to the monument—and nobody’s political prospects were improved in the process.

With that tag, truth spoke to power because the embarrassing emblem of that horrible power remained in place to be witnessed and interpreted. Certainly, we can take a cue from the vandals and find ways to demonize these relics with appropriate levels of scorn—new explanatory plaques come to mind—rather than sending them crashing once and for all to the pavement. And—more to the point—certainly there could be greater benefits for racial justice and historical understanding by engaging in ongoing interpretations of what these monuments mean in the here and now.

Forgetting How to Feel Shame

While riding in an Uber in New Orleans, I passed several streets named after slaveholders (or those who condoned slaveholding)—Henry Clay, Zachary Taylor, Thomas Jefferson, Napoleon, Washington. Prompted by this observation, I asked my driver, an African American business owner in his forties, what he thought about the statue removals. He paused and looked at me hard in the rearview mirror. “Taking those statues down was a bad idea because they reminded white people what was done to us.” Then he added: “We are not educated.”

It took me a moment to realize what he meant by “we” and “educated,” but what he was saying was that white people don’t know how to feel shame. We haven’t been taught how to confront the troubled history and legacy of slavery in a way that demands our sustained discomfort and puts us at risk in public space. True, by wishing the statues away, we justifiably honor the crushed feelings African Americans experience when living amidst monuments that once honored slavery. But less justifiably, by wishing these statues away we also ease the guilt of progressive whites who, for altogether different reasons, also hate looking up to Lee, Jackson, and, dare one say it, Mr. Jefferson.

Don’t worry about me, my Uber driver was saying. Worry about you. He wanted, in essence, whites to swallow a healthy dose of shame, and to bring that struggle to bear on our thinking about racial justice. However paradoxically, the white supremacist thugs who marched through Charlottesville only intensified the imperative. They further demanded that the rest of us, as we witness (and die from) their violent hatred, connect the awful racism of the past to that of the present through a bridge paved with shame, the kind of shame that, from the bottom up, can overwhelm the utter lack of it that currently swaggers at the top of American politics.

If that becomes the goal we choose to pursue with our remaining Confederate monuments—and I cannot think of a better way to use public history—then we might take a note from the New Orleans vandals and begin to add to, rather than subtract from, the existing textual landscape.

That is exactly what the civil rights lawyer, MacArthur Foundation fellow, and founder of the Equal Justice Initiative (EJI), Bryan Stevenson, is doing in Montgomery, Alabama. EJI marked Montgomery with a series of historical plaques acknowledging the warehouses used in the city’s slave trade. This effort, in addition to EJI’s current project to build a national memorial dedicated to lynching victims, defies the city’s antiquated markers to the Confederacy (of which there are more than fifty). And what do you think Stevenson wants whites to feel when staring at lists of the lynched? Not a sense of ease. Not a sense of relief.

Before justice and history merge on the landscape, they will first have to merge in our hearts. Without shame, this cannot happen. Taking on shame is a process that will inevitably ask whites not only to feel that emotion, but also to live in it, and to harness it for the cause of righteousness. And if that’s what we’re in for, if that’s what must happen for us to inch toward true racial reconciliation, then moving Confederate monuments out of sight becomes less an act of racial justice than yet another expression of the same white privilege that got us into this mess to begin with.

James McWilliams is a professor of history at Texas State University and the author of A Revolution in Eating: How the Quest for Food Shaped America and Just Food: Where Locavores Get It Wrong and How We Can Truly Eat Responsibly.

 

. . . . . . . .


Missing Michael Cromartie

Flyer from Protestants and Other Americans United for Separation of Church and State (1960). Via Wikimedia Commons.

Michael Cromartie was a rare figure in public life. An evangelical Christian, he devoted much of his work at the Washington-based Ethics & Public Policy Center to shedding light on issues that too often fueled the angriest culture-war disagreements over the place of religion in the public square. Until his recent death after a long struggle with cancer, he was rightly hailed as a bridge builder between journalism and religion. Twice annually, he hosted the Faith Angle Forum, which, as Ross Douthat explained in a eulogistic column for the New York Times, invited “prominent journalists, members of one of America’s most secular professions, into extended conversation with religious leaders, theologians and historians, the best and brightest students and practitioners of varied faiths.” In a tribute on the website Real Clear Politics, journalist Carl Cannon wrote that “Cromartie did more to ensure that American political journalism is imbued with religious tolerance, biblical literacy, historical insight, and an ecumenical spirit than any person alive.”

I found myself missing Cromartie as I watched (and participated in) the reaction to New York Times reporter Laurie Goodstein’s description of the religious community of Professor Amy Barrett, nominated by President Trump to the U.S. Court of Appeals for the Seventh Circuit. (Barrett’s hearing before the Senate Judiciary Committee garnered some attention after Senator Dianne Feinstein opined: “The dogma lives loudly within you.”)

Goodstein’s article has many problems, but what made me think of Cromartie was what the article and some responses to it revealed about the deep misunderstandings and biases of some of America’s more prominent religion journalists about some of the most basic practices of millions of American religious believers. These kinds of misunderstandings are all the more troubling at a time when the words and actions of our president have exacerbated divisions in our nation.

. . . . . . . .


Once and Always a Criminal?

Andrew Falk, left, a senior fellow at the Sagamore Institute, works with Michelle Jones on a housing policy proposal at the Indiana Women’s Prison. Andrew Spear for The Marshall Project

 



A convicted murderer is accepted into Harvard University’s graduate history program, only to have university officials override the admissions decision for fear of what news reports might say, among other stated and unstated concerns. If you haven’t read this compelling story—reported by Eli Hager at The Marshall Project, a nonprofit group focused on criminal justice, and published in the New York Times—you should. It is a forceful reminder of how we fail to think adequately about the ends and means of justice.

Michelle Jones, who is starting work on her Ph.D. at New York University this fall, was released from prison last August after serving twenty years of a fifty-year sentence for murdering her four-year-old son. The story of how she managed to become a published scholar of American history while incarcerated at an Indiana state prison—with no access to the Internet—is impressive in its own right. According to Hager, not only did Jones, now 45, lead “a team of inmates that pored through reams of photocopied documents from the [Indiana State Archives] to produce the Indiana Historical Society’s best research project last year. As prisoner number 970554, Ms. Jones also wrote several dance compositions and historical plays, one of which is slated to open at an Indianapolis theater in December.”

The details of why Harvard overrode the history department’s decision to admit Jones (one of eighteen selected from more than 300 applicants) are not entirely clear. However, Hager uncovered a memo from two American studies professors who examined Jones’s acceptance (she was a top alternate) and “questioned whether she had minimized her crime ‘to the point of misrepresentation.’” One of the professors, John Stauffer, further noted that “frankly, we knew that anyone could just punch her crime into Google, and Fox News would probably say that P.C. liberal Harvard gave 200 grand of funding to a child murderer, who also happened to be a minority. I mean, c’mon.”

C’mon indeed. Probably unintentionally, Stauffer voiced one of the unspoken presumptions of America’s criminal justice system: once a criminal, always a criminal. This presumption too often becomes a self-fulfilling prophecy because of the collateral consequences those with criminal convictions face after release from incarceration, including restrictions on access to employment, student loans, public housing, and other federal benefits.

But there is more for us to consider.

The one thing we know for sure is that Jones’s possible minimization and “misrepresentation” of her crime was the cause for concern. Her crime was unquestionably a terrible one. After she became pregnant at fourteen as a result of what she called nonconsensual sex with a high-school senior, her mother beat her in the stomach with a board, and she was placed in a series of group homes and foster families. This damaged and completely unprepared mother ultimately confessed to beating her four-year-old son and leaving him alone for days in their apartment, eventually returning to find him dead. Jones was twenty when she committed this horrible crime, which a personal statement accompanying her Harvard application described as the result of a “psychological breakdown after years of abandonment and domestic violence.”

Her statement that she killed her son partly because of her own trauma and psychological breakdown speaks to an unresolved tension in our thinking about crime. On one hand, we need to believe that there are reasons why people commit crimes; otherwise, we fear that anyone could become a victim or a perpetrator of violence at any moment. At the same time, we can give only so much credence, or even thought, to the explanation of why a crime was committed; too much understanding might cause us to question our criminal justice system’s reliance on incarceration as the most efficacious response to crime.

The belief that offenders should accept responsibility and repent of their wrongdoing is so baked into our criminal justice system that it rewards offenders by taking months and sometimes years off their sentences if they say the magic words. As a federal public defender, I have counseled clients about their allocution to the court before they are sentenced. After all the lawyers have spoken, what should defendants say in those final moments before the gavel drops? The general advice is always to avoid blaming anyone else—or even pointing to conditions beyond one’s control—because the American myth of self-reliance and autonomy requires the defendant to bear the full weight of the offense in that moment. And when defendants offer their mea culpas, they give us all permission to think that justice has been well and fairly served.

After Jones spent many of the best years of her life in prison, why should it matter now how she describes something she did a little more than two decades ago? It matters because we need to ask ourselves whether the actions of someone as bright and capable as Jones were largely the result of forces beyond her reasonable control, namely intense trauma and extreme psychological duress.

And if we conclude, reasonably, that they were, we might further ask if fifty years of incarceration was an appropriate sentence in the first place. Would fifteen years have sufficed? Would psychiatric hospitalization have been a better response? Undeniably, Jones’s crime was horrific, but she fulfilled her end of the bargain, doing everything that the sentencing court and the Indiana Department of Correction asked of her, and then some. The Sentencing Project estimates that 161,957 people were serving life sentences as of 2016. The other two million adults currently incarcerated in the United States will be released someday. If Jones is not entitled to have her debt declared repaid in full, what hope can we extend to them?

Lisa Lorish is an assistant federal public defender in the Western District of Virginia and a graduate of the University of Virginia School of Law.

. . . . . . . .


Terror, Photographed

Two people escape an Oslo office building after the 2011 terrorist attack there; Morten Holm/AFP.


Terror trades in images—it needs spectators to feed itself. It is commonplace to say that atrocities like the terrorist attacks in Manchester, London, Brussels, or Boston are “unspeakable”—whether to indicate that words are inadequate to describe such an act or that using words in this way is somehow, in itself, a form of violence. While these attacks may be unspeakable, they were most certainly not un-picturable: On the contrary, they generated a great number of images. These images mobilize shock, disbelief, and repulsion, as well as gratuitous voyeurism. Becoming prime mediators between the targeted local communities and global audiences, they deploy a visual force that releases the impact of terror to the world at large.

Although piercing, images of terror are becoming more and more disconnected from the contexts in which the attacks take place—all too often, photographs of mayhem, wounded bystanders, and destroyed buildings could have been taken almost anywhere in the world. No longer novel, photographs of terror now seem to create a sense of déjà vu or anxious anticipation. Many of us—myself included—have projected the photographed scenes against the background of our own cities. Still, a few images stand out: Davina Douglass pressing a gauze mask to her face after being rescued in the aftermath of the 2005 London tube bombing; Tarana Akbari in a green tunic screaming in horror just minutes after a 2011 suicide bombing in Kabul; or dazed and bloodied Omran Daqneesh in the back of an Aleppo ambulance after a 2016 airstrike. Is any one of these as indelible as, for example, nine-year-old Kim Phúc running naked and napalmed down a Vietnam highway in 1972? That remains to be seen, but all of these demonstrate the essential qualities of terrorism photographs: their emphatic, graphic reality and unavoidably exploitative nature.

The Intersection of Photography and Terror

Any consideration of the impact of terrorism’s imagery must also examine the entanglement of photography and terror. Italian philosopher Adriana Cavarero defines terrorism as an act of violence that destroys any notion of safety, integrity, or distinctiveness for individuals or society as a whole. In fact, she argues in her 2011 book Horrorism that words such as “terrorism” and “war” have become obsolete. Terrorist acts, says Cavarero, whether bombing or stabbing, are as ubiquitous as they are random: Every person, innocent or guilty, armed or defenseless, civilian or soldier, believer or atheist, is a potential target for obliteration. Terror disperses violence from its designated territories (battlefields, camps) into civilian settings (a pop concert, a restaurant, a Christmas fair), making the quotidian into a war zone. As for photographs of terror, Cavarero follows Susan Sontag’s lead in considering such images as the eroticization of horror.

But terror is also the emotional response that the act of violence engenders among bystanders, a mixture of fear, angst, disgust, and disbelief. To respond to terrorism (or horrorism as Cavarero would have it) is to experience a visceral and brutal check to our usually unexamined feelings of personal and public safety, our comfortable integration in the world, our very concept of existence. It is precisely the generation of this affective state among large populations, rather than the death and dismemberment of individuals, that is the ultimate goal of terrorists. From ISIS to white supremacy, terror’s power is in its emotional undoing.

Photographs of terrorist attacks operate to document the results of terror as an act of violence on helpless civilians and to instill a feeling of terror in viewers—viewers often caught unawares by these images and thus unwittingly coerced into a state of anxiety and fear. Further, photographs of terror create a sense of loss of time and place, a de-temporalization as lived experience comes to a halt. We become temporarily paralyzed—a reaction, it should be noted, that is fundamental to the experience of photography itself. Photographs stop a moment in time and arrest our sense of the ongoing movement that characterizes the body and the world around it. What’s more, photography interrupts our sense of interiority, disrupting with its insistence on motionlessness our feeling of being anchored in a temporal world. When the camera captures a moment of violence, it freezes the act of terror, making it unending, even eternal, forever perpetuating our emotional reaction to it.

In the Moment and After

Images of terrorism come in three forms. First, there are images of the immediate aftermath of the attack: civilians—often the most vulnerable, women and children—staggering out of the danger zone, faces bewildered or distorted by anguish, clothing torn and bloody. These victims have seen and experienced the carnage. “In the moment” imagery also includes first responders and aerial shots taken by police helicopters or drones. The latter can be particularly disquieting as they often reveal bodies of the wounded or the dead, who may be the attackers themselves. Formally, these images tend to be visually arresting: vivid colors, dynamic compositions, chaotic scenes filled with authentic, unstudied human expressions. The element of immediacy and a lack of finish also contribute to a sense of dread. What we can see within the frame is bad enough—what horrors are taking place just out of sight?

Then come the images of mourning, grieving, and commemorating—photographs that invite viewers to become part of the visual script for grief. Streetscapes changed into shrines, a sea of snapshots, handwritten messages, flowers, and teddy bears. There are candlelight vigils and stern police officers patrolling the streets. These photographs offer no grand gestures, no sweeping emotions, no spectacle. Rather, they speak to the more mundane work that mourning performs over time: the personal, political, or cultural effort to make sense of the event and move beyond its senselessness. These images spark solidarity and identification, appealing to the feeling of shared mourning. They also run the risk of trivializing the act of terror and the depth of human response by replacing scenes of violence with those of sentimentality.

Accordingly, we have grown accustomed to visual performances of institutionalized grief: heads of state and high officials—Queen Elizabeth, London mayor Sadiq Khan, Prime Minister Theresa May—making somber hospital visits and condemnatory speeches. After the May 22, 2017, bombing at an Ariana Grande concert in Manchester came a novel expression of performed grief: Grande’s June 4 benefit concert. Except for images of the singer sporting her One Love Manchester sweatshirt (merch available online), photographs of the event look exactly like any one of hundreds of other carefully choreographed and highly lucrative pop star concerts. The visual medium in this instance has brought us quite a distance from an act of terror to scenes of entertainment and capitalism seen through a screen of philanthropy.

The commemorative power of photographs reaches new potency in the world of social media. Twitter and Facebook were deluged with profile images of concertgoers as they looked before they became victims—who can forget the sweet-faced Saffie Rose Roussos, an eight-year-old victim, or Georgina Callander, a bespectacled eighteen-year-old who was shown in an older photograph with her arms around Grande’s waist? All of these private images became gateways to lives that would have otherwise remained unknown to us. Rather than being manifest and visible in these images, terror remains stored, contained, releasing an insidious emotional reaction for which we are unprepared. In the social media environment where self-celebratory rhetoric mixes with conflict reporting and funny cat videos, the snapshot portraits of victims became an affective portal for our voracious and indiscriminate quest for media consumption.

In Pursuit of Clickbait

Photographs of terrorist events also have other troubling aspects. More than other forms of photography, portraits seem to reflect the Western bias in the media coverage of terrorist attacks. Victims are presented as precarious and grief-worthy. After the concert bombing, the Greater Manchester Police Twitter account, for example, was transformed into a kind of digital shrine of personal images as a tribute to lost lives. At the same time, private images of the victims of attacks in Kabul and Baghdad, which happened around the same time as the UK attacks, are hardly to be found in the Western media. Occasionally, we do encounter memorable photographs of non-Western child victims (Omran Daqneesh, refugee Aylan Kurdi, or Kim Phúc), but countless others remain invisible and nameless.

In addition, especially since the Manchester bombing, Facebook and Twitter have been filled with images of fake victims, images stolen and posted alongside messages pretending to beg for help in finding “loved ones.” Generating thousands of shares and re-tweets by well-intentioned bystanders, these hoaxes extend the emotional impact of terror by amplifying and confusing the event for no other purpose than sowing chaos and eroding our ability to make meaning from meaninglessness. With the increasing prevalence of trolling and accusations of fake news, the credibility of media imagery grows weaker and the possibility of sensationalism, exploitation, and anonymous malice increases. Photographs of terror, loss, and death and the feelings they spark become so much clickbait. These manipulations expose our disenchantment with the medium of photography, once thought to be an unimpeachable source of objectivity and truth.

Photographs of terror remain a powerful tool. More than simply capturing evidence of violence done to people and places, they provide concrete evidence of the unraveling of human identity and communities. Intentionally or not, they also reinforce the concept of violence as a norm, accommodate and expand our appetite for manipulable visual media, and forge affective (or potentially abusive) connections between individuals separated in space and time. In the aftermath of a terrorist attack, photographs deepen solidarity and care for one another, but they can also intensify fear and xenophobia, invite voyeurism, and expose vulnerability. They may coerce viewers toward specific narratives, generate unwelcome emotions, manipulate people to spend money, or indoctrinate certain outlooks or ideologies. In a media-saturated world, one in which photography flourishes, it is critical to recognize its dynamics, endurance, and significance.

Dr. Marta Zarzycka teaches at the Center for Women’s and Gender Studies at the University of Texas at Austin. She is the author of Gendered Tropes in War Photography: Mothers, Mourners, Soldiers (Routledge) and of essays on photography in the Los Angeles Review of Books, Lens Culture, and Huck Magazine.

. . . . . . . .


The Groot Gang: Superheroes, Politics, and Art

Image from a film by Louis Feuillade. Via Wikimedia Commons.

In Guardians of the Galaxy Vol. 2, good guys die by disintegration. They flake apart; their death leaves confetti everywhere. This residue—sparkly, expensive-looking, soon gone—resembles the way the film exists in the memory.

As for the bad guys: They die, as in all Marvel movies, by extreme, cartoonish violence, of the sort one is supposed to find cutely amoral. In this case, it’s a glowing flying space arrow (don’t ask) that a character controls by whistling (don’t ask) and that carves beautiful arabesques on the screen as it disposes of many dozens of henchmen. The crowd around me laughed, just as they laughed last year, when Ryan Reynolds’s Deadpool killed eleven goons while dodging twelve bullets, or nine years ago, when Robert Downey Jr.’s Iron Man flattened those hostage-takers with the shoulder-mounted rockets. Superhero films resemble slasher movies, these days, in the cleverness and dexterity of their kills. In Guardians 2—as in the first film, which featured a space-jailbreak that presumably left hundreds dead—the audience is expected to go along with this violence, and largely does, because of the excellence of the heroes’ repartee. They’re bounty hunters and killers, but they’re cute, and one of them is a tree.

The amoral turn in superhero cinema—you can trace it to Iron Man, with Sam Raimi’s Darkman (1990) as a fascinating precursor—is really a turning back. Historians generally attribute the distinction of “first superhero” to Superman, but this requires willful blindness to the great silent crime serials of Louis Feuillade—the Fantomas series (1913–14), Les Vampires (1915–16)—or their imitators: 1926’s The Bat, based on Mary Roberts Rinehart’s play; Fritz Lang’s Spies (1928). Les Vampires in particular, with its elaborately costumed, endlessly clever, undeniably sexy conspirators, in turn drew on the activities of the Bonnot Gang, an anarchist sect known for expropriating (though they never got around to redistributing) the goods of wealthy Parisians. Just as the first detective was a thief—Eugene Vidocq, a nineteenth-century thief-turned-fence-turned-informer, invented criminology and opened the first private detective agency—the first superheroes were supervillains.

. . . . . . . .


What Is Innocence Worth?


In its recent Nelson v. Colorado decision, the Supreme Court affirmed what might have seemed to require no formal affirmation—namely, that a person whose criminal conviction is overturned on appeal is entitled to the return of any fees, court costs, or restitution paid to the state as a result of the conviction. Previously, the state of Colorado required an exonerated defendant to file a separate civil suit and prove actual innocence by clear and convincing evidence before funds would be repaid. Having a conviction overturned on a mere legal technicality would not suffice for financial recovery. The central question in the case—which was decided six to one in favor of the petitioners, with Justice Clarence Thomas dissenting—concerned due process.

While it was notable that the Supreme Court took up such a seemingly self-evident case, the Court did not address the question of compensation for periods of wrongful incarceration. Justice Ginsburg, writing for the majority, explained that the “[petitioners] seek restoration of funds they paid to the State, not compensation for temporary deprivation of those funds. Petitioners seek only their money back, not interest on those funds for the period the funds were in the State’s custody.” Justice Ginsburg continued: “Just as the restoration of liberty on reversal of a conviction is not compensation, neither is the return of money taken by the State on account of the conviction.” She made it clear what compensation is and what it is not: The return of something wrongfully taken is not compensation, and neither is release from a prison in which one was held for no lawful reason in the first place. Compensation is something more—an award for loss, suffering, or an injury.

. . . . . . . .


Princeton Seminary, Presbyterian Pastors, and Purpose

Princeton Theological Seminary library. Billy via Flickr.

Last week, Princeton Theological Seminary announced it was rescinding its decision to bestow an award upon Presbyterian pastor and author Tim Keller. The seminary’s president explained that Keller’s leadership role in the conservative Presbyterian Church in America was at odds with the school’s mission. Keller’s denomination, unlike the seminary’s own Presbyterian Church (USA), “prevents women and LGBTQ+ persons from full participation in the ordained Ministry of Word and Sacrament.” He also emphasized that the school’s reversal in no way undermined its commitment to open dialogue—the award comes with a lecture, which Keller was still invited to deliver: “We are a school that can welcome a church leader to address one of its centers about his subject, even if we strongly disagree with his theology on ordination to ministry.”

Keller is in some ways an unlikely candidate for headline-generating controversy. He is better known for writing readable books, ministering to Millennials in New York City, and engaging in dialogues with atheists on college campuses. Last year, he and I coauthored an article that argues, among other things, that we can and must figure out a way to live peaceably in the midst of our deep differences, and that we can treat each other charitably across those differences. Those ideas emerge out of the intersection of Keller’s approach to pluralism as a pastor and my academic framework of confident pluralism.

One of the core commitments of confident pluralism is that the First Amendment should permit private associations—including private institutions of higher education—to follow their own norms absent extraordinarily compelling governmental interests. Since interests of such magnitude are not implicated here, Princeton Seminary can do whatever it wants. It could give or not give the award to Keller. It could—as it did—offer and then rescind the award for just about any reason. It could—as it did not—disinvite Keller from delivering his lecture. Still, this whole episode raises questions, not only about the purpose of Princeton Theological Seminary, but also about whether the school has adequately articulated its sense of purpose.

. . . . . . . .


Beyond the Legality of Executive Orders

A young Japanese American waits to be taken to an assembly center. Via Wikimedia Commons.

This Sunday marks the seventy-fifth anniversary of Executive Order 9066. Signed by President Franklin D. Roosevelt on February 19, 1942, the order authorized the secretary of war and military commanders to establish “exclusion zones,” which ultimately led to the internment of 110,000 Japanese Americans. The Supreme Court upheld the constitutionality of these actions in a series of decisions culminating in Korematsu v. United States.

We are now in the middle of a heated national debate over another executive order: “Protecting the Nation From Foreign Terrorist Entry Into the United States,” signed by President Donald J. Trump. The two orders are not the same in scope or consequence. But they do bear some similarities. Neither Executive Order 9066 nor Trump’s immigration order singles out a group of people by name. Yet both orders make possible discriminatory action.

As much as I disagree with its substance and symbolism, many of the constitutional arguments raised against Trump’s executive order strike me as unpersuasive. The order does not flagrantly overstep the bounds of executive power as they are currently understood; nor is the purported Establishment Clause challenge as obvious as some commentators have suggested. (I find Michael McConnell’s analysis of the Ninth Circuit’s opinion closest to the mark.)

But whether or not an executive order is constitutional is not the only question that can be raised about it, or even necessarily the most important. The actions of our president—particularly those formalized and ritualized as executive orders—have expressive as well as legal consequences. They tell us something about who we are and who we should be as a people. From this perspective, the historical connection to Executive Order 9066 reminds us of the dangers of fear and the human toll that can too easily result from that fear.

. . . . . . . .
