Tag Archives: technology

The Talking Cure

Sherry Turkle, 2009. Jeanbaptisteparis via Flickr.

This week, Sherry Turkle will publish Reclaiming Conversation: The Power of Talk in a Digital Age. It develops themes that she began to explore in her astute and disturbing Alone Together (see the conversation with Turkle in THR about Alone Together here). An essay adapted from her new book that appeared in the New York Times on Sunday provides a nice summary of some of her critical observations on the personal and social impact of how we use our devices and her recommendations for how we might reclaim conversation, a “talking cure” for the “failing connections of our digital world.”

This is her central point:

We’ve gotten used to being connected all the time, but we have found ways around conversation—at least from conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation—where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another—that empathy and intimacy flourish. In these conversations, we learn who we are.

She argues, not unreasonably, that smartphones, and connectivity more generally, disrupt such “empathic conversation” because we can’t be fully present to someone while we’re attending to something else; thus phones diminish our ability to know and understand each other and undercut the quality of the conversations we do have. These conversations must necessarily be kept “relatively light, on topics where people feel they can drop in and out.”

On an even more basic level, she believes that our connectivity diminishes our ability to “converse” with ourselves, because the “capacity for solitude” is also undercut by picking up the phone and checking our email. For Turkle, self-reflection in solitude is how “we find ourselves” and gain the capacity to “really hear what other people have to say.” While she observes that conversation with others “leads us to become better at inner dialogue,” her emphasis is on the importance of slowing down and reclaiming solitude as a “start toward reclaiming conversation.”

Turkle notes that we have turned “time alone into a problem that needs to be solved with technology,” and she briefly cites the research of University of Virginia psychologist Timothy Wilson and his colleagues, who did a series of studies on how people experience solitude. In a paper published last year in the journal Science, they

found that participants typically did not enjoy spending 6 to 15 minutes in a room by themselves with nothing to do but think, that they enjoyed doing mundane external activities much more, and that many preferred to administer electric shocks to themselves instead of being left alone with their thoughts.

However, Wilson et al. do not argue that we have turned time alone into a problem or that technologies are themselves making solitude more difficult. In addition to their studies with college students, they did another with community participants, ages eighteen to seventy-seven (the median age was forty-eight), recruited at a farmers’ market and a local church. They report that

the results were similar to those found with college students. There was no evidence that enjoyment of the thinking period was related to participants’ age, education, income, or the frequency with which they used smart phones or social media.

Their claim is that solitude is inherently difficult. This point does not contradict Turkle’s, but it does suggest that our devices are hazardous because they discourage us from doing something—inner dialogue—that is already challenging. Our efforts to “reclaim solitude” will involve an element of asceticism.

Further, if “the untutored mind does not like to be alone with itself,” as Wilson and colleagues argue, then perhaps a better starting point for energizing what Turkle calls the “virtuous circle that links conversation to the capacity for self-reflection” is with conversation rather than solitude. That is, perhaps at the individual level the joys of friendship can be more easily acknowledged and cultivated than those of the inner life, better motivating us toward a more judicious use of our devices and an increased capacity to “listen to ourselves.”

Joseph E. Davis is publisher of The Hedgehog Review.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.


The Prestige

Caricature: Le Mesmerisme Confondu by A. Schuller, from Die Karikatur und Satire in der Medizin by Eugen Holländer; Wellcome Library, London, via Wikimedia Commons.

Amid dim lights and soft music, he eased the nervous maladies of well-bred psychosomatics across Europe—until Enlightenment rationality brought him down. Although the German physician Franz Anton Mesmer, who died two hundred years ago yesterday, and his theory of animal magnetism have given way to modern medicine, we still have the concept of mesmerism, or what we know today as hypnotherapy.

Looking for something gentler than bleeding and purgatives, Mesmer turned theories about gravity, bodily fluids, and the earth’s magnetic fields into a new approach that centered on the power of suggestion and hypnosis. The good doctor took his show on the road, gaining so many clients in Paris that he began holding séances with groups of patients gathered around a baquet, an oak vessel filled with iron filings and broken glass from which protruding rods sprayed “magnetized water.” Eventually, Louis XVI, irked by complaints from the medical establishment, appointed a commission to investigate Mesmer (whose patients included Marie Antoinette). The committee, composed of the chemist Antoine Lavoisier, the astronomer Jean-Sylvain Bailly (later mayor of Paris), Dr. Joseph Guillotin, and Benjamin Franklin, determined that Mesmer’s methods were unscientific and his cures dubious. Discredited, Mesmer left France and died a few years later in Switzerland. (Lavoisier and Bailly would later perish under the blade of the beheading machine that bears Guillotin’s name.)

Mesmerism had a curious resurgence in Victorian England—Charles Dickens was said to be an adept—and figures like Rasputin and Hitler were said to have used mesmerism, or, more accurately, hypnosis, to exercise control over their subjects. But in today’s discussion of Big Data and algorithms, we see another application of mesmerism. Legal expert Frank Pasquale, in his essay “The Algorithmic Self” from our spring issue, suggests that in our engagement with technology, we have become susceptible to advertisers and social-media developers who “deploy opaque algorithms to monitor and manipulate behavior.” Spellbound by ever greater access to information, convenience, and connectivity, we have become mesmerized by technology itself.

As we interact with these technologies, we supply engineers, analysts, and computer banks with new data about who we are. On learning how these data samples are used, we, in turn, evolve into more savvy consumers of technology. The result, says Pasquale, is that we begin to adopt algorithmic identities calculated to present the face that we want the world to see rather than the one that truly reflects who we are. We pass ourselves off as one sort of person to our friends on Facebook, for example, and another altogether to potential headhunters on LinkedIn. This is not, strictly speaking, duplicity, but rather a Pavlovian response. As Pasquale says: “The new technologies of connection are not merely instrumental to, but constitutive of, our ends. They change how we think and reinforce certain character traits.”

The moral implications of this new “modulated selfhood” are significant. As a physician, Mesmer was known for his kindness and devotion to his patients. He soothed them with soft lighting and overlooked their hysterical antics around the baquet. But these patients were not coerced into treatment. They freely allowed him to operate on their psyches, willingly entering into a potentially dangerous situation, one that alienated them from their natural personalities and moral states. Moreover, by submitting to Mesmer’s domination, his patients relinquished their liberty, running the risk, at the least, of embarrassing themselves or, at most, of throwing their entire cerebral mechanism out of gear.

Mesmer’s opportunism illustrates the morally detestable aspect of hypnotism. The abdication of rights by susceptible subjects is something that Frank Pasquale sees happening today as we wittingly and unwittingly contribute to the construction of our algorithmic selves. Paraphrasing the golden rule, Pasquale observes, “As we are treated algorithmically, we are conditioned to treat others similarly.” The result is a shift in ethical perspective in which the technology, rather than our moral obligations to those around us, takes first priority.

I am reminded of the three parts of a magician’s illusion: the Pledge, where the audience is presented with an ordinary object; the Turn, where the object is turned into something extraordinary; and the Prestige, where the object is brought back. We might see Mesmer and his charlatanism as a mere warm-up act for the grand technological illusion now being presented for our delectation. We are presented with a time-saving tool and, through the mysterious forces of HTML coding, magnets, and electrical impulses, we discover in the object the means to transform ourselves into something else—who wouldn’t want a transcendent self? Then comes the Prestige, when our modulated, algorithmic self is revealed as valueless and unrecognizable.

Anybody know where I can buy some magnetized water?

Leann Davis Alspaugh is managing editor of The Hedgehog Review.

. . . . . . . .


Beethoven and the Beef Jerky Maker

Klaus Kammerichs, Beethon (1986), concrete, after Joseph Karl Stieler’s 1819 portrait of Beethoven; Beethovenhalle, Bonn. Photo by Hans Weingartz (own work) [CC BY-SA 2.0 de], via Wikimedia Commons.

A few years ago, a Boston-based recording studio was entrusted with an especially challenging remastering project. As box after box of reel-to-reel tapes, the classical music archives of RCA’s Living Stereo label, began arriving, the studio engineers had to figure out how to get usable material off the fragile tapes and transfer it to compact discs. The tapes had been stored properly, but the glue that held the magnetic medium on the acetate tapes had become sticky. Unspooling the tapes in order to thread them through a playback machine could destroy them entirely.

Introduced in the 1950s, the Living Stereo label brought stereo recordings in the form of long-playing records (LPs) into the mainstream. With a postwar boom in home record players capable of stereo sound reproduction, music lovers could enjoy affordable LPs featuring some of the world’s greatest orchestras in the comfort of their own living rooms. Living Stereo, along with rival CBS Masterworks, changed more than just how people listened to music. Heard on radio programs and used as music-education tools, these recordings set new musical standards for amateurs and professionals all over the world. Conductors like Charles Munch and violinists like Jascha Heifetz became household names whose interpretations have had an indelible impact on music ever since.

In the early 2000s, many record companies began to dig deep into their back catalogs, looking for a way to monetize their holdings. Transferring the Living Stereo archives to CD would give audiophiles greater access to historic recordings and provide a profitable boost to the record companies. But none of that would happen if the reel-to-reel tapes couldn’t be rescued. Finding a machine to play the tapes on wasn’t a problem—the studio in Boston is a veritable museum of audio technology. But even unspooling the sticky tapes carefully by hand could do irreparable damage. The solution turned out to be a humble beef jerky maker. The appliance’s round shape is exactly the size of a tape reel, and its dehydration settings reach just the right temperature to “cook” a tape without damaging it. The studio went on to transfer scores of Living Stereo tapes, bringing Beethoven, Brahms, Bartók, and a host of great orchestras back from acetate oblivion.

. . . . . . . .


Where Moth and Rust Doth Corrupt

Vinton Cerf, 2007. Photo by Joi Ito (Flickr) [CC BY 2.0], via Wikimedia Commons.

“We are nonchalantly throwing all of our data,” said Dr. Vinton Cerf, a vice president of Google, in a recent address to the American Association for the Advancement of Science, “into what could become an information black hole without realizing it…. In our zeal to get excited about digitizing we digitize photographs thinking it’s going to make them last longer, and we might turn out to be wrong.” And if we do turn out to be wrong, Cerf continues, then we will probably leave few historical records behind us. Our excessively documented age will simply disappear.

Cerf’s comments might come as a surprise to some. When we express fears about our digital records, much of our concern focuses on permanence—the idea that, for instance, an unkind or embarrassing action could linger on and haunt us long beyond any reasonable amount of time. Looking at the recent hiring-and-firing of Ethan Czahor, Jeb Bush’s tech officer, many draw the obvious moral that it is unwise to tweet. “Everyone of Czahor’s generation and younger,” says Caitlin Dewey of The Washington Post, “will come with some kind of digital dirt, some ancient tweet to be deleted or beer-pong portrait to untag.” This is, of course, true.

But it’s also true that those tweets don’t stay accessible as long as we might think. Without requesting my archives or searching for specific terms, I can’t access my own tweets past a certain point (last I checked, mid-December). For those who use Twitter, as I sometimes do, as a way of taking notes, this discovery can be very unwelcome. The archive still exists, of course. I can get to it if I want. But as Cerf points out, in three or four years that archive, downloaded to my computer, may exist in a form I’m unable to open; and Twitter may no longer exist to provide it in a format that I can use.

These tweets are no great loss to anyone, of course, not even to me, and in their contents they do not even rise to the level of the mildly scandalous. All the same, it came as a shock to realize how many small notes of mine were functionally inaccessible to me. In fact, thinking into the future with our current technology is a difficult task in many respects. You can, if you so desire, write an email to the future, which is by itself a fairly draining spiritual exercise. But are you really so confident that you know the email address you’ll use in the future, let alone that you’ll be using email at all?


. . . . . . . .


After Strange Gods: Peter Thiel and the Cult of Startupism

Peter Thiel at TechCrunch50 (2008). Image from Wikimedia Commons.

In the humanities departments of the university where I live and work, the word “corporate” is an epithet of disdain, and “entrepreneurship” is code for “corporate.” My fellow humanists tolerate the business school because it provides fuel for the English composition classes that keep us tenured radicals employed.

A confession: I used to share that outlook myself. But the experience of working alongside actual entrepreneurs and CEOs of various stripes shattered my comfortable assumptions. Not only did I find that entrepreneurs are willing to take risks that I would never hazard; I also learned that many are keenly interested in the world of ideas, theory, and “big picture” thinking. Indeed, such philosophically inclined entrepreneurs excel at practical wisdom—what Aristotle called phronesis—precisely because their imaginations have been nourished by contemplation. They are philosophers of a kind I will never be.

Prominent among these philosopher-entrepreneurs is Peter Thiel. A co-founder of PayPal and Palantir, he has become a Silicon Valley guru, the contrarians’ contrarian. His new book, Zero to One: Notes on Startups, or How to Build the Future, began as notes taken by an admiring student, Blake Masters, who took Thiel’s course at Stanford. It is an ambitious book—it could even be described as Machiavelli’s The Prince re-imagined for our startup age—and it puts Thiel’s command of the philosophical canon on prominent display. How many other business books at the airport bookstore draw on Hegel, Nietzsche, Aristotle, John Rawls, and René Girard?

Zero to One is aphoristic, biting, forthright, and at times, in the spirit of Machiavelli, ruthless. Thiel unapologetically commends the pursuit of monopoly (“the more we compete, the less we gain”), and then counsels noble lies to hide its achievement. He casts aspersions on the bureaucracies of existing organizations: “Accountable to nobody,” he writes, “the DMV is misaligned with everybody.” And he calls out bad ideas, particularly those coupled with shoddy execution. His take-down of failed federal investment in clean technology is well worth the cost of the book.

Thiel’s intellectual reach is anything but modest. He offers a sociology of creativity, a grand theory of human civilization, and even a sort of theology of culture, though it is not quite clear whom he casts as God. Indeed, it’s over the grandiosity and hubris of Thiel’s claims that I find myself parting ways with the more fawning reviews of his book. I realize that creative risk-taking requires a healthy dose of self-confidence that can often come across as arrogance. What worries me, though, is not his confident dispensing of practical wisdom but the hubristic evangelizing for what might be called startupism.

A cult of creative innovation, startupism has four notable features, beginning with the outsized role it accords to human creativity. As early as page two of the book, Thiel tells us that “humans are distinguished from other species by our abilities to work miracles. We call these miracles technology.” His emphasis on the creative power of human making is laudable and timely, though not particularly new. (Thiel should add Giambattista Vico to his reading list.) What’s unique to startupism are the “miraculous,” god-like powers Thiel attributes to us mortals: “Humans don’t decide what to build by making choices from some cosmic catalogue of options given in advance; instead, by creating new technologies, we rewrite the plan of the world.” We command fate. “A startup is the largest endeavor over which you can have definite mastery. You can have agency not just over your own life, but over a small and important part of the world. It begins by rejecting the unjust tyranny of Chance. You are not a lottery ticket.”

Second, the creativity celebrated by startupism blurs the old distinction between Creator and creature. What Thiel calls “vertical” or “intensive” progress isn’t 1+1 development; truly creative, intensive progress is a qualitative advance from 0 to 1. I believe the Latin for that is creation ex nihilo. (And “[t]he single word for vertical, 0 to 1 progress” is…you guessed it…“technology.”)

Third, as you might expect, startupism has its own ecclesia: the new organization founded by a noble remnant who have distanced themselves from the behemoths of existing institutions. “New technology,” Thiel observes, “tends to come from new ventures” that we call startups. These are launched by tiny tribes that Thiel compares to the Founding Fathers and the British Royal Society. “[S]mall groups of people bound together by a sense of mission have changed the world for the better,” he explains, because “it’s hard to develop new things in big organizations, and it’s even harder to do it by yourself.” We shouldn’t be surprised, then, that “the best startups might be considered slightly less extreme kinds of cults.” The successful startup will have to be a total, all-encompassing institution: our family, our home, our cultus.

Finally, in startupism, the founder is savior. Granted, Thiel—following Girard—is going to talk about this in terms of scapegoating in a long, meandering chapter that aims to associate successful Silicon Valley geeks with pop stars and other people we like to look at. But it’s not just that founders are heroes in their companies. The scope of their impact is much wider: “Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.” But to get there, Thiel says, “we need founders.” No founders; no progress. Steve Jobs, hear our prayer.

Thiel offers genuine, authoritative insight into entrepreneurship and the dynamics of a startup organization. When he tacitly suggests that society derives its crucial and even salvific dynamism from the startup, I become both skeptical and nervous. Can startups contribute to the common good? Without question. Are startups going to save us? Not a chance.

Thiel’s hubris stems from a certain parochialism. Startupism is a Bay Area mythology whose plausibility diminishes by the time you hit, say, Sacramento. The confident narrative of progress, the narrow identification of progress with technology, and the tales of 0 to 1 creationism are the products of an echo chamber. This chamber fosters hubris among the faithful precisely because it shuts out competing voices that might remind them of the deeper and wider institutional, intellectual, and even spiritual resources on which they depend and draw. We are makers, without question, but we are also heirs. We can imagine a different future, but we have to deal with a past that was created by others before us.

Thiel, and the New Creators like him (and get ready for a slew of parroting Little Creators coming in their wake), have drunk their own Kool-Aid and believed their own PR. It’s why all the sentences that begin “At PayPal…” grow tiresome and make you wonder why someone who developed a new mode of currency exchange thinks he brought about the new heaven and the new earth ex nihilo. One can applaud Thiel’s elucidation of creativity and innovation while deploring the (idolatrous) theology in which he embeds it. We need startups. We can do without startupism.

James K. A. Smith is a professor of philosophy at Calvin College, where he holds the Byker Chair in Applied Reformed Theology and Worldview. He is the author of Who’s Afraid of Postmodernism? and How (Not) to Be Secular: Reading Charles Taylor, among other books. Smith is also the editor of Comment magazine.

. . . . . . . .


Self-Knowledge in the Age of the Digital Panopticon

A short piece in the British thought journal Prospect, “Quantified Self: The Algorithm of Life,” reminds me once again that satire of the dystopian variety can barely keep up with what the real world throws at us every day.

Drawing of Jeremy Bentham’s Panopticon (Wikimedia Commons)

I have in mind the recent novel by Dave Eggers, The Circle, which follows the career of a bright young thing who becomes a rising star in the ultimate tech company, a sort of hybrid of Google, Facebook, and half a dozen other cutting-edge cool shops. The Circle, as the eponymous firm is called, puts all your social media and personal digital data together and allows you to access them under your own uber-password. It provides one-stop shopping for those who want to be connected to everything, even while seeing to it that those connections (and, of course, the personal data) are assiduously monitored, stored, and exploited to bring you even more of what you don’t yet know you want.

But a digital panopticon fulfilling Jeremy Bentham’s vision of the relentlessly surveilled life is so close to being realized in the real world that it almost puts Eggers’s fictive imaginings to shame. Josh Cohen, in his Prospect piece, describes a tech-driven movement of self-quantification that comports perfectly with the world of the Circle, even bringing to it a higher, finer resolution:

Quantified Self (QS) is a growing global movement selling a new form of wisdom, encapsulated in the slogan “self-knowledge through numbers”. Rooted in the American tech scene, it encourages people to monitor all aspects of their physical, emotional, cognitive, social, domestic and working lives. The wearable cameras that enable you to broadcast your life minute by minute; the Nano-sensors that can be installed in any region of the body to track vital functions from blood pressure to cholesterol intake, the voice recorders that pick up the sound of your sleeping self or your baby’s babble—together, these devices can provide you with the means to regain control over your fugitive life.

This vision has traction at a time when our daily lives, as the Snowden leaks have revealed, are being lived in the shadow of state agencies, private corporations and terrorist networks—overwhelming yet invisible forces that leave us feeling powerless to maintain boundaries around our private selves. In a world where our personal data appears vulnerable to intrusion and exploitation, a movement that effectively encourages you to become your own spy is bound to resonate. Surveillance technologies will put us back in the centre of the lives from which they’d displaced us. Our authoritative command of our physiological and behavioural “numbers” can assure us that after all, no one knows us better than we do.

No, Cohen does not believe this. He knows that there is no end to the monitoring and the new devices, no end to “more data accumulated and shared.” But he is concerned less with the loss of privacy than with the misguided search for self-knowledge through ever more and finer quantitative measurements. He eloquently describes the panic that may set in even at the outset of such a quantitative quest, a panic that, if not suppressed by further technological measures, may lead to a valuable insight: “Perhaps the self you really want to know, and that always eludes you, is the one that can’t be quantified.”

True, of course. But perhaps the saddest thing about Cohen’s lament is how it registers on the mind of even one who agrees: How quaint Cohen sounds. How truly quaint.

. . . . . . . .


Compared to What?

Rows of people at the movies on their phones (credit: iStock)

Rutgers University professor Keith Hampton, profiled in a recent New York Times Magazine article, challenges the claims of fellow social scientists such as MIT’s Sherry Turkle that digital technologies are driving us apart:

Hampton found that, rather than isolating people, technology made them more connected. “It turns out the wired folk — they recognized like three times as many of their neighbors when asked,” Hampton said. Not only that, he said, they spoke with neighbors on the phone five times as often and attended more community events. Altogether, they were much more successful at addressing local problems, like speeding cars and a small spate of burglaries. They also used their Listserv to coordinate offline events, even sign-ups for a bowling league. Hampton was one of the first scholars to marshal evidence that the web might make people less atomized rather than more. Not only were people not opting out of bowling leagues — Robert Putnam’s famous metric for community engagement — for more screen time; they were also using their computers to opt in.

For Hampton, what debates and research about the effects of digital technologies on our lives so often lack is historical perspective.

“We’re really bad at looking back in time,” Hampton said, speaking of his fellow sociologists. “You overly idealize the past. It happens today when we talk about technology. We say: ‘Oh, technology, making us isolated. We’re disengaged.’ Compared to what? You know, this kind of idealized notion of what community and social interactions were like.” He crudely summarized his former M.I.T. colleague Sherry Turkle’s book “Alone Together.” “She said: ‘You know, today, people standing at a train station, they’re all talking on their cellphones. Public spaces aren’t communal anymore. No one interacts in public spaces.’ I’m like: ‘How do you know that? We don’t know that. Compared to what? Like, three years ago?’”

Although the merits of Hampton’s particular study can be debated, he makes an important point when he asks simply, “compared to what?” Those who make arguments about technology’s deleterious effects on our ability to converse with one another, to pay attention, or to read closely usually presume some way that we ought to talk to each other, that we ought to attend to a given object or event, or that we ought to read.

And maybe these critics are right; perhaps we ought to carry on in the ways they presume we should. But appealing to history to make these normative claims is a much trickier move. History is full of bad conversation, distraction, and poor readers.

. . . . . . . .


Christmas and the New Cult of Images

Gossaert 2013 stamp

Christmas is a complicated affair. Never mind the dizzying logistics of 15 billion packages, cards, and letters to be delivered by the U.S. Postal Service this season—many going to those who have so much, fewer to those who have little, and most of them destined for landfills or the bottom of the sea—a thought that brings on a sense of aesthetic fatigue at the environmental consequences of the gift-giving economy, not to mention its deeper ethical quandaries. The simple question of what stamp to choose has also become increasingly complex.

As I was recently and publicly reminded by a postal clerk, with a knowing wink to the other customers waiting in line, it’s no longer acceptable to ask for Christmas stamps. “Holiday” is the preferred term.

Two of this year’s seasonal stamps, though modest in size, reveal much about why images—what we choose to represent by them and the meanings they might unfold—continue to matter in a culture thoroughly saturated by them.

Global Wreath 2013 stamp

Consider the differences between Jan Gossaert’s richly rendered, jewel-like Virgin and Child of 1531 and the “Global Wreath,” the first global holiday stamp, newly commissioned for 2013:

The wreath that graces the stamp art—created especially for the project—has a base made of a wire metal frame folded around Styrofoam, which was spray-painted green. The designer attached evergreen twigs onto picks—small sticks with one sharpened end—and then inserted them into the base, rotating the picks to make the wreath full and lush, a process that took more than eight hours.

I understand, and have some sympathy for, the logic behind the Global Wreath, which is more than a celebration of Styrofoam-and-toothpick craft. As wars of religion continue to rage, with images increasingly at the center, benign symbols like the Global Wreath, nearly emptied of historic or religious content, are carefully constructed to avoid such tensions.

But I chose the Gossaert, not without a certain feeling of embarrassment and the need to justify my apparent lack of global spirit, both to the clerk and to the suddenly attentive audience awaiting my decision.

“I’m an art historian,” I said, attempting to use my profession as a buffer. “I simply want the most beautiful image, a real work of art.”

Yes, the Gossaert is a real work of art: the deeply saturated colors, subtle volumes, and nearly palpable surfaces of its figures and cloth, rendered through a masterful application of innumerable layers of paint. It’s an image that overtly performs and announces a dedication to the highest principles of painting by an individual artist, in contrast to the anonymous Styrofoam project that constitutes the Global Wreath.

There’s also a rich symbolism to the painting, an iconography of allusive meanings that slowly unfolds for the viewer who makes the effort to understand the theological significance of the Virgin’s somber, contemplative look, the darkening mood of the sky, the artful exposure of the Child’s genitalia or the cluster of berries he holds in his hands.

But what makes Gossaert’s painting of 1531 deeply moving, and worthy of further reflection, is its fragile synthesis of the humanistic ideals of Renaissance painting and the image as a site of devotional contemplation, a synthesis that was to be shattered by the wars of religion in the Reformation that reshaped the religious and political landscape of modern Europe.

As Reformation scholar James Simpson reminds us, the violent iconoclasm of the sixteenth century, and the resultant shift of contested, religious images into the neutral space of art and museum, had greater consequences than the stripping of Protestant churches and the division of faiths. Our hard-won modernity, the Enlightenment legacy of rational inquiry freed from the superstitious grip of idolatry, is integrally bound up with the destruction of images.

Yet despite the best efforts of the Reformation and Enlightenment to abolish the idols of images and of the mind, they have returned with renewed force in contemporary culture, reconstituted in powerfully seductive, proliferating forms that are increasingly hard to resist.

From iPads to iPhones, image-rendering screens are an indispensable part of modern life, reconfiguring individual and social behavior in complex ways we are only beginning to grasp, rebooting traditional conceptions of what it means to be human in a post-human world. Given the timely introduction of the iPad mini for the holidays, with expected sales upwards of tens of millions, Christmas day around the globe promises to be a virtual affair, as a new generation of iconophiles pays homage to these artful and miraculous machines.

Should we be worried that the iconic image of the holidays is no longer one of communal gathering around a hearth or table but a scene more reminiscent of Plato’s cave, where individuals are held captive by an illusory succession of flickering images—newly instantiated in even more illusionistic and captivating digital form?

Among modernity’s unintended effects—one that isn’t captured by historian Brad Gregory’s capacious, and controversial, critique—is the ascendancy of a new cult of images: endlessly distracting technological images that cultivate consumerism rather than contemplation and, more alarmingly, at times supplant our connections to the real persons, places, and world around us with virtual ones.

Fortunately, we need not be philosopher-kings to occasionally find our way out of our technological confinement. Nor do we need to resort to the wholesale iconoclasm of our intellectual ancestors, though there is nothing so refreshing as occasionally closing the browser, stepping outside into the bright light of day, and noting the suddenly enhanced contrast between the sensuously replete, fully dimensional, physical world, and the digital world of the internet.

We share a life with images. There is an intimate relation between the images that form the basis of perception and the internal images of the imagination, a relation that profoundly shapes the most fundamental aspects of our being as knowing, feeling, and thinking persons. For philosopher Alva Noë, pictures, and how they bring aspects of reality into presence, are crucial to understanding perception in general. The world opens up to us in relation to what we are able to bring into view.

Picturing things, taking a view, is what makes us human; art is making sense and giving shape to that sense, the German painter Gerhard Richter once wrote. To the degree that we attempt to understand a work of art, it holds the potential to transform how and what we are able to see, enabling us to see more and further, and to become more intelligent in the process. As Krzysztof Ziarek argues in “The End of Art As Its Future” (The Hedgehog Review, Summer 2004), art, in its transformative, poetic possibilities, also enables us to see the limits of technology as the dominant interface with the world.

It may be a virtue of our newly complex, technological condition that the continuing value of the old cult of images, whether of art or literature, is brought into clearer view, precisely at the moment when it would seem to have been lost.


Anna Marazuela Kim is an Associate Fellow at the Institute for Advanced Studies in Culture at the University of Virginia and a member of The Iconoclasms Network, an international team of scholars and curators whose research contributed to the exhibition Art Under Attack: Histories of British Iconoclasm at Tate Britain in London.

. . . . . . . .

Like The Hedgehog Review on Facebook, follow us on Twitter, and subscribe to our posts via RSS.