The Hedgehog Review: Vol. 20 No. 3 (Fall 2018)

From Memory to Innovation: The Vowel Revolution in the Making of the Modern Mind

Colin Wells


On December 4, 1935, the Harvard Crimson carried a brief story under the headline “Parry, Greek and Latin Professor, Killed Yesterday.” The subject of the story, Milman Parry, was a thirty-three-year-old assistant professor of classics at Harvard. His death, the campus newspaper reported, “was the result of an accidental shooting.… Parry, visiting his mother-in-law in Los Angeles, was unpacking a suitcase in his hotel bedroom when a revolver mixed in with his clothing went off, mortally wounding him.” Such phrasing was commonly used at the time to mask suicide or, conceivably, an accident at the hands of a child or other family member. Parry’s death remains a blank.

As an undergraduate studying classics at UCLA in the mid-1980s, I grew familiar with Parry’s name. Later, at Oxford, I attended the lectures of celebrated scholars like Jasper Griffin and Hugh Lloyd-Jones and heard his name there, too. Parry seemed fresh and radical to them, although he had died more than half a century earlier. They spoke of him the way Al Pacino or Dustin Hoffman might speak of James Dean. Indeed, a couple of decades after Parry’s death, the eminent British classicist H.T. Wade-Gery called him “the Darwin of Homeric scholarship.” Not bad, I suppose, for leaving the field so early. What, precisely, did Parry do that was so different?

In life, Parry was an adventurous romantic with a dashing mustache and an equally dashing prose style. Just before his death, he had journeyed to Yugoslavia with crates of the newest sound equipment to record performances by guslari, the unlettered native bards who sang epic tales dramatizing the long-past deeds of legendary heroes. Parry’s goal was to prove that the Homeric epics of the Iliad and the Odyssey were produced not by an author, brilliant or otherwise, but by an entire culture—a culture, moreover, without writing at all. He recorded hundreds of hours of oral verse, demonstrating its detailed similarities to Homer’s poems. In short, Parry discovered oral culture, forever revolutionizing anthropology and much, much more. Although at first contested, Parry’s orality thesis came to be accepted as fundamentally valid.1

But orality is just the half of it. Building on Parry’s work, his contemporary and fellow classicist Eric Havelock argued, long after Parry’s death, that only the ancient Greek invention of the alphabet enabled humankind to shift away from oral culture, unleashing the intellect (including science and other forms of rational inquiry), opening the door to the spread of new ideas, and giving rise to the first clearly articulated abstract thought. Most controversially, Havelock observed that other writing systems—Egyptian, Hebrew, Arabic, Indian, and Chinese among them—have never been able to equal the alphabet for both learnability and readability. Not only was the alphabet the first writing that allowed us to produce new ideas and spread them widely, he pointed out; it’s the only kind of writing that has ever allowed us to do so.

Havelock’s name is far less widely known than Parry’s. Condemned to academic purgatory, he remains under an ideological cloud, his alphabetic thesis shunned by many who wrongly take it as a putdown of non-Western cultures and peoples. There are signs this is changing, but change is slow.

Far from being a reactionary, the British-born Havelock was a dreamy social-justice warrior avant la lettre. As a young socialist professor at the University of Toronto, he provoked the administration with his union activism. Later, in his writing at Harvard and Yale, he positively gushed over the sophistication and creative power of oral culture. He and his critics were never really on the same page, to borrow an apt cliché.

“You Need the Hammer”

To understand why the academic establishment received Havelock so poorly, consider the story of another curious young researcher, who made a journey to the remotest parts of Soviet Central Asia two years before Parry’s trip to Yugoslavia. Alexander Luria was a close contemporary of Parry and Havelock; all three were born within a year of each other. Luria was not a classicist, or even a scholar in the humanities, but an up-and-coming Russian psychologist who worked at the Institute of Experimental Psychology in Moscow, where he began collaborating with the celebrated Lev Vygotsky.

Luria and Vygotsky shared an interest in how outside influences such as culture shape thinking. At the time, the Soviet Union under Joseph Stalin was undertaking a vigorous literacy push as part of its larger campaign to “collectivize” the peasants. Vygotsky suggested that Uzbekistan and Kirghizia would make a good laboratory for groundbreaking work on culture and cognition. Luria headed to these two Central Asian Soviet republics in 1931 and began interviewing peasants, some with no schooling or literacy, some with a little, some with more. Like Parry, he was a careful and thorough field researcher.

One of Luria’s tests involved showing people pictures of common objects and asking them to sort the objects into groups. When shown a saw, a hammer, a hatchet, and a log, the literate subjects would group the saw, hammer, and hatchet together as tools and set the log to one side. The unschooled subjects consistently rejected this approach as impractical, even stupid. “They all fit here!” one man exclaimed. “The saw has to saw the log, the hammer has to hammer it, and the hatchet has to chop it. And if you want to chop the log up really good, you need the hammer. You can’t take any of these things away. There isn’t any you don’t need!”2

In this early work, Luria demonstrated that people in oral cultures think concretely, not abstractly. They create stories around things according to their utility—“If you want to chop the log up really good, you need the hammer!”—rather than classify them according to their inherent qualities or characteristics. A circle drawn on a card is described not as a circle but as a plate. A square is a window. A rectangle is a door.

This is not about superiority, civilization, intelligence, rationality, or anything like that. It’s about technology. A hammer isn’t superior to a shoe, but it is better at pounding nails (or driving a hatchet into a log). This is the hardest part to grasp, but it’s what separates Havelock’s thesis from earlier, triumphalist accounts of “Western” exceptionality. Immanuel Kant set the tone: Abstraction is something to be achieved, reflecting an inherent ability—or lack thereof—linked to race (as in Kant, and in writers right up into the twentieth century) or culture (as in more recent accounts). “Lower” races, such thinking went, aren’t “able” to think abstractly, but surely they would if they could, because abstract thinking is so self-evidently superior. Those who swap culture for race do little better, clinging determinedly to the false equation of abstraction with superiority and merely trading racial superiority for the cultural kind. But Luria’s subjects were quite capable of thinking abstractly. They just found it fruitless to do so. Like the man quoted above, they got exasperated with it.

Havelock understood this. Luria’s work was published only in the 1970s, but long before that, Havelock explained how abstraction is simply not efficient for people in oral cultures, and how concrete, practical thinking works better for them.3 The best way to hold information in memory is to make a story out of it (generally, though not necessarily, with an assist from rhythm), but stories need characters who do things. As an abstract idea, beauty is elusive and useless in a story; as a character, Beauty is readily grasped and therefore useful, especially when paired with a Beast. From the perspective of people who need to hold their entire culture in their living memory, concrete narrative thinking is more intelligent, not less.

Havelock saw that orality and literacy each come with their own values and biases. Orality fosters a culture oriented toward what things do, while literacy fosters one oriented toward what things are. Doing versus being, story versus exposition, tradition versus originality: Never the twain shall meet. In a mirror image of Luria’s subjects, then, highly literate people (from uptight German philosophers to woke academics and journalists) tend to privilege the abstract over the “merely” practical. Such individuals see personification in myth, for example, as an aesthetic choice—a “literary device”—because they’ve lost sight of its origins in human need. This is what separates Milton, or for that matter Virgil, from Homer.

So everyone’s thinking is indeed shaped by culture—which in turn is shaped by the technology through which it is shared. “The medium is the message,” Marshall McLuhan, who acknowledged Havelock’s influence, wrote in the 1960s. McLuhan’s epigram is now so familiar it verges on triteness. That may actually work against it, by concealing the surprisingly deep level on which it has played out.

We readily accept that our technologies do stuff for us. Plato, the founding father of abstraction, recognized that it’s more unsettling to ask what they do to us.

Devices of Memory

At a construction site somewhere in the central Ohio Valley, a senior architect stands lost in thought. The project that has him so fully absorbed is in its early stages, and is very complicated, but the architect has all the latest technology at his disposal, including portable devices like the handheld tablet he’s studying intently. Back and forth he glances, checking the landscape here, assessing the possibilities there, retrieving a key piece of information with his tablet, envisioning the finished structure.

If we peered over the architect’s shoulder, we’d see that the surface of his tablet is covered with a display of complex geometric designs along with a smattering of more recognizable icons. These are the tools of his trade—symbolic representations of the highly technical information he has spent a lifetime studying and mastering. That much, at least, would seem familiar to us.

The surface of the architect’s tablet doesn’t glow brightly, however. Nor does it change, with old shapes and colors giving way to new ones. It is made of stone, and the elaborate designs on its surface are engraved. The recognizable icons are stylized depictions of animals: the beaked outline of a bird’s head, the humped shape of a bison. It is sometime around 800 BCE, and our architect is a high-ranking member of the Adena culture, Native American mound builders who were just then shifting from a nomadic hunter-gatherer lifestyle to farming in settled communities.

This portrayal is speculative, but grounded in fact. Stone tablets—portable handheld devices—were used by people in the Adena culture, and our Ohio Valley architect uses his in the way that similar devices are known to be used by highly trained individuals in other oral cultures. Like our portable devices—like so much of our prized “modern” technology—this technology is used to retrieve information. Despite the similar end result, however, we could no more use the architect’s device than he could use one of ours. The disparity goes beyond any difference of language. It’s not just because we speak, say, English, French, or Spanish, and he speaks a long-lost Native American tongue. Even if we knew his language, there’s no way we could somehow translate his tablet and make the information he’s retrieving accessible to us. The two tablets have completely different functions. Yet both are designed to summon information.

The difference—and the key to this puzzle—is that our devices actually give us the information we’re seeking, often information we haven’t encountered before. But when the Adena architect is looking at his device, he isn’t receiving information—he’s remembering it. His tablet is a memory device, not an information storage or communication device. It would be useless to anyone who didn’t have the right memories already.

Trained memory can be surprisingly reliable, as has long been recognized. Ancient Greek and Roman orators used the method of “loci,” or places, to memorize lengthy, complicated speeches and other challenging material. More recent practitioners knew the same technique as the “memory palace.” Modern memory champions use it to memorize a deck of cards in under two minutes. It is an immensely powerful tool, and it seems to arise naturally from the way our minds work.

One recent investigator, the Australian science writer Lynne Kelly, has connected this memory training with archeology and anthropology, transforming it from sideshow curiosity to main event in a single bold move. In her book The Memory Code (2017), Kelly argues persuasively that it has always been a natural feature of oral cultures, allowing them to organize and retain vast amounts of highly detailed information over many generations.4 The intimate behavior of hundreds of species of animals. The qualities of similarly large numbers of plants, including tiny distinctions among them. How to navigate uncharted deserts and oceans. When to plant and when to harvest. Features of the earth and the movements of the heavens.

Throughout our history, Kelly suggests, we have buttressed our memories by coding complex information into journeys, using both natural and artificial settings. Each journey is a story in song, each piece of information a character in the story that corresponds to a specific place on the journey. Every character has a story that is part of a larger story. The songlines of indigenous Australians, the haunting statues of Easter Island, the mysterious giant figures inscribed into the plains at Nazca, the intricate pueblos of Chaco Canyon, the elaborately carved totem poles of Native American peoples in the Pacific Northwest, even the massive slabs of Stonehenge and Avebury—these and many other puzzling archeological artifacts work perfectly as memory devices, Kelly writes, and are best understood as such. The knotted strings of the Incas, called quipu, are among the most effective, Kelly attests from personal experience with them.

One of the most elegant aspects of these memory devices is that they are so flexible and scalable. The work surface can be as vast as the Australian outback, as large as a ceremonial mound, or as small as a handheld tablet. Some can be abstract geometric designs on pottery or jewelry. Some can even be imaginary. We dismiss oral tradition as “mythology,” but it’s more like a monumental organic and technological encyclopedia at one’s fingertips (and feet).

Just a tiny remnant of this knowledge remains, as a few tattered cultures struggle to keep the old ways in an increasingly literate “global village” (the phrase is Marshall McLuhan’s). Viewed from this perspective, the story of literacy is the story of how humankind has lost something unimaginably precious.

The Stonecutter’s Gift

Around the same time our Native American architect was pondering his work site in the Ohio Valley, a Greek stonecutter was doing some pondering of his own roughly six thousand miles to the east, on the Mediterranean island of Cyprus. Like the architect, our stonecutter is a speculative figure, but one grounded in evidence and supported by recent research.

We meet him in his studio. He’s smoothing his hands over the rough base of a small limestone sculpture, which still holds a hint of the morning’s coolness. The stonecutter takes a breath and recites a brief scrap of poetry, a single line of verse that flows from his mouth in a smooth, unbroken stream of sound. He takes another breath and repeats the first word of the line, and then the first sound of the first word—a single quiet note, Ahhhhh.

As he has done many times already that day, he touches his throat with his fingertips, feeling the column of vibrating air rising through his larynx, modulating the sounds by changing the shape of his open mouth. He does not block the column of air with his tongue, lips, teeth, or palate. He just lets it flow out of him smoothly. Ahhhhh becomes Ehhhhh, then Ihhhhh, then Ohhhhh, then Uuuuuu, and back to Ahhhhh.

With practiced deliberation, he lays the point of his chisel against the squared-off stone of the base. He takes one more breath, visualizing the finished sculpture sitting proudly in its temple. Then he raises his mallet and gives the chisel a gentle tap, continuing to chant that note to himself as he does so: Ahhhhh. The sound we think of as A—which, after a few minutes, is also the shape of the first letter that appears in the stone as the sculptor begins his work.

This depiction of the alphabet’s first use is based on research by Kevin Robb, a classics professor at the University of Southern California who received his doctorate under Havelock at Yale, and who has spent his career studying the origins of the alphabet. Robb proposes Cyprus as the site because it had prosperous bilingual towns in which Greeks and Phoenicians lived close together, and because the Greek alphabet was based on the earlier writing of the Phoenicians.

Phoenician writing consisted of some twenty-two consonants. This was a big advance on previous syllabic systems, which were unwieldy both to read and to write. They had hundreds of signs to memorize, and still couldn’t cover all the sounds in the languages they transcribed. Reading them required guesswork. Phoenician script was much easier to write; aside from the Greeks, many other peoples would base their writing on it, including the Hebrews and, much later, the Arabs.

Unlike the others who took up Phoenician writing, the stonecutter added a twist of his own: vowels. Surprisingly, the alphabet was invented only once. One way or another, every true alphabet in the world—that is, every writing system with both consonants and vowels—is based on the stonecutter’s original model, which became the Greek alphabet.5

I’ve called him a stonecutter, but he could just as well have been a smith or a potter. Whatever the specific craft, Robb suggests that it was probably a single individual working with a literate—and rather patient—Phoenician who came up with the alphabet all at once, possibly in a single day. And, if Robb is correct, the goal was not to keep track of trade, or to compose long literary masterpieces, as others have proposed. Rather, it was to record one or two brief lines of prayer, composed for the occasion, to mark a religious offering. Such offerings—figurines and other worked objects—indeed bear some of the earliest surviving examples of Greek writing.

The wealthy Phoenicians dominated sea trade along the southern coast of the Mediterranean Sea. The upstart Greeks were jostling with them and would eventually carve out a competing trade empire along the Mediterranean’s northern shores as far west as Spain. Greeks and Phoenicians shared temples in places like Cyprus, and the stonecutter would have seen his Phoenician colleague adding written prayers to the offerings placed by prosperous Phoenicians. Robb imagines the stonecutter’s excitement on realizing that those marks would fix the prayer in place, in effect repeating it continuously and thus increasing its efficacy many times over. Its raw power would have seemed magical, as writing does to those in oral cultures. “I need to learn that!” we can hear him thinking.

His enthusiasm would have carried him through the grind of repeating the letters over and over in order until he got them right, perhaps chanting them to himself as young children do to this day, with much the same letters in much the same order. As Robb observes, this is where our stonecutter’s Phoenician friend would have needed to be a patient teacher. Aleph, bet, gimel, dalet…. In Phoenician, these words mean “ox, house, camel, door.…” The sound (but not the meaning) of these and many other letter names would be preserved in Greek: alpha, beta, gamma, delta.

The letter aleph was written sideways, with the cross-stroke ears poking out, so that one can more easily make out the bovine head. In early inscriptions, the Greeks would follow the Phoenicians in the sideways A, as in other habits, such as writing right to left, the way writers of Hebrew and Arabic still do today. They would also experiment with writing lines in alternating directions. “Ox-turning,” it was called, because it followed a course like that of an ox plowing a field. (Oxen again!) It would take a while for the oxen to leave the field, but soon enough A’s would stand recognizably upright and writing would run properly—for us—left to right.

The same intensity of purpose that got him through learning the letters in order would also have spurred our stonecutter to take several of the consonants—“leftover” ones that represented sounds occurring in Phoenician but not in Greek—and use them for vowel sounds instead. We’ve already met aleph, which originally sounded something like a loud gulp, and which came into Greek as alpha (A). Others included he, which became epsilon (E); yod, which became iota (I); waw, which was reborn as upsilon (Y); and ayin, which became omicron (O).

All this inventiveness, including the efficient repurposing of unused materials, feels elegantly workmanlike. The person responsible for the offering composed the prayer. For fun, I’ve imagined his name beginning with A, as in “Amphalkes set this up…” (as begins an Athenian grave marker set up by one Amphalkes to memorialize his two deceased sons in the fifth century BCE). But the craftsman employed to create the offering itself seems likely to have been the one who invented the new writing. Hence Robb’s suggestion of a stonecutter, a smith, or a potter.

It’s hard, perhaps, to see how the stonecutter’s brainchild could have gone on to make much of a difference. After all, there are lots of consonants and only a handful of vowels. Yet small differences in ingredients can lead to sweeping transformations in outcome. The stonecutter’s invention didn’t merely add more of the same; it wasn’t like adding more flour to a batch of dough. Instead, the vowels were the tiny pinch of yeast that brought the flour to life: a catalyst, not merely another component. They altered the reading process itself, and that’s what changed everything else.

Phoenician consonantal writing added better-quality flour, but it was still just more flour. The result may have been easier to write than the earlier, clunky syllabic writing, but it was still challenging to read because it remained phonetically incomplete. The reader had to supply the missing vowel sounds: to borrow an English illustration, the consonants bd could stand for bad, bed, bid, bud, or abide. In bread terms, a tough chew. In practical terms, the same process as with earlier writing. Guesswork was unavoidable; the letters were merely hints. You had to figure out what a text might be saying before you could actually read it.

This is a widely acknowledged feature of consonantal writing (and one commonly observed among readers of Arabic even today), but its stubborn role as a stumbling block goes largely unremarked. One obvious result can be seen in the evidence of the past. As Jack Goody and Ian Watt observed in a seminal article that appeared in 1963, it’s clear from what survives that early writing was used almost exclusively as an aid to memory, such as in tax records or storage manifests, or to write down familiar stories and sayings.6 The Law Code of Hammurabi, The Epic of Gilgamesh, Egyptian hieroglyphics, the Hebrew Bible—one way or another, early writing recorded stuff that was already out there in the oral culture. In other words, a memory device.

Vowels reversed the old way of reading. The significance of this reversal—a true revolution in reading—has also gone unnoticed, yet it had the most profound consequences for communication. Now you could read written language first, automatically, even if you had no idea what it meant. Then you could go on to figure out the meaning. You could even reread a difficult and unfamiliar passage as many times as needed, until you understood the meaning, something that had not been possible before without guesswork. In bread terms, that tiny pinch of yeast yielded a more palatable balance of crunch and chew.

Like earlier writing systems, the alphabet could serve as a memory device, to be sure, and a very good one, but it also turned out to be something unprecedented: an innovation device. The alphabet revolution opened the door to the new. The true dividing line in human cultural evolution lies not between the spoken word and the written word, as is commonly assumed. It lies instead between memory and novelty—that is, between two very different devices: the architect’s and the stonecutter’s.

“A Stone Floating in Space”

After the vowel revolution, the world still runs, at first, on memory. “Homer” (or someone else with the same name, as the old joke goes) records his oral epics, perhaps by dictation. Hesiod uses the same epic language and meter to compose his lofty didactic poems, one on farming and the other on the gods. Both belong firmly to the common “oral encyclopedia.”

Soon afterward, however, Archilochus and then Sappho pen their exquisite and intensely personal poems exploring jealousy, desire, and other down-to-earth emotions. One a common soldier, the other an aristocratic lady, they are subversive charmers both. Homer’s Iliad reeks of warrior machismo, but Archilochus drops his shield and turns tail, laughing sourly at himself and us. Steady Penelope, The Good Wife, stands by her ramblin’ man in the Odyssey, but Sappho gets the shakes, sweat dripping down her body, as she drinks in a stunning girl.

Over the next few centuries, we can name hundreds of identifiable individuals in ancient Greek civilization and witness them creating the first literature that wasn’t directly based on oral tradition, along with distinct and still-recognizable literary genres. They are the world’s first philosophers, scientists, historians, playwrights, critics, political scientists, even novelists (although the novel doesn’t really come into its own until the age of print). Seemingly out of nowhere, then, writers and readers. It seems worth emphasizing that nothing even remotely like this was happening anywhere else.

The changes, however, are not confined to what we usually think of as “literature.” The alphabet opens up a whole new kind of knowledge. Uniquely, this new knowledge is handed down by users whose focus is not on preserving their predecessors’ claims but on questioning them—and improving them. The traditions of intellectual culture founded in this way are science and the other branches of rational inquiry, such as philosophy and history. Sappho’s near-contemporary Anaximander was “the first of the Greeks to produce a written account of nature,” according to the fourth-century-CE writer Themistius, and also the first person, as far as we know, to write in prose. Reportedly, he used the new technology to produce an explanatory diagram of the earth floating in space and the first map of the known world. Both were wildly inaccurate (his earth was shaped something like a bongo drum, with flat surfaces at either end), but, as with all gifts, it’s the idea that counts. His contribution was a correction to the traditional idea—held, as far as we know, by every culture Anaximander’s idea had not yet touched—that the earth is a plate with the heavens above.7

So the important thing isn’t really that Anaximander had the radically new idea that the earth was an object floating in space. The important thing is that we have it. Umpteen other people, after all, may have thought of it before Anaximander. But they had no way of passing it on. Anaximander had better timing. He had this idea at the earliest point in history when it could be captured and communicated, accurately and reliably, to someone—to lots of someones—out of earshot. Who could then take it up and work on it. Aristotle, for example, a few centuries later, showed that the floating earth is not a bongo drum but a sphere. Aristarchus, a couple of generations after that, figured out that it rotates on its axis and revolves around the sun. And Eratosthenes, a few more generations on, ascertained its circumference.

The Promiscuous Alphabet

There’s no reason to think smart Greeks were any smarter than, say, smart Chinese, who went on thinking of the earth as a plate for more than two thousand years. But the Greeks had something the Chinese didn’t: an innovation device that let them catch a new idea about the world and send it down the pipeline, where it could be taken up by others and incrementally improved. Or discarded, if it didn’t measure up. Or even discarded, in favor of another idea, then picked up again, as Copernicus picked up Aristarchus after the rediscovery of ancient Greek literature in the West. Messy or neat, that process is called science, and it can’t be done with memory alone.

Like Darwin’s theory, Havelock’s alphabetic thesis is slippery and challenging. It is very hard to get right, and it bears revisiting over and over. But, as with Darwin’s idea, the effort is worthwhile because of the unusual explanatory power of the thesis. Orality and literacy are like two black holes orbiting each other. From inside one of them it’s impossible to peer into the other, yet neither can be adequately explained unless the other is taken into account. And the collision between them sends deep waves rippling through everything. Diverse phenomena such as the emergence of democracy, the displacement of polytheism by monotheistic faith, the chronic failure of today’s Arab societies to modernize, even the subtle complexities of identity politics in the electronic age, are deeply connected by the slow tectonics of orality, alphabetic literacy, and the gravity well of ever-deepening abstraction.

Linguists who reconstruct Proto-Indo-European tell us it had no reflexive pronouns. Proto-Indo-Europeans didn’t spend much time wondering about themselves, apparently, because they didn’t have “themselves” to wonder about. Later, for Homer’s heroes, transactions with the self (autos in Greek) were limited to things like washing it, and excluded anything like pondering it, searching for it, being true to it, or reinventing it. Individual volition came from one of several internal organs (Homeric heroes had a variety to choose from) or from a god or goddess whispering in your ear, never from a deepest self. There was no such thing. Who you were was defined by your social relations—ruler, ally, guest-friend, etc. This parallels what we know of other oral cultures ancient and modern (and can clearly be seen at work in Luria’s subjects).

After the alphabet’s invention, however, Greeks started looking inward. In his 2012 book The Emergence of Reflexivity in Greek Language and Thought, University of Melbourne classicist Edward T. Jeremiah traces the rise of the self from Homer to Plato, who refashioned the Homeric word psyche from a wispy, smokelike shade into what we still think of as the soul, and who found new ways to say what an idea like justice or piety is “in and of itself” (kat’ heauton in Greek). One of Walter Ong’s favorite words was interiority, which sums up the result of this process nicely. It also points forward, as we’ve been looking further and further inward ever since—into things, and into our own selves. Black holes, indeed! The trend feels as if it’s accelerating, alarmingly, although that may be an illusion that comes with the territory—Plato’s complaints about writing were echoed after the Print Revolution and again at the beginning of our digital age.

Havelock insisted that literacy be recognized as more than a matter of individual achievement. It must be seen as a social condition. That seems to call for a new definition of modernity: If modernity is literacy, it’s more than just reading and writing. It’s the mass communication of new ideas. Modernization came about only with the advent of print, but print by itself is not enough.8 Even with the availability of print, other forms of writing have not been able to match the alphabet for readability.

The glaring example: The world now runs on innovation, but Arabic writing remains a memory device. That, not piety, is why Arabic literacy is measured by familiarity with the Quran, and why Spain alone translates more books into Spanish in a single year than have been translated into Arabic over the last thousand years.9 Not much point translating books no one can read. In short, the Arab world has been trying to pound a nail with a shoe, and it’s pretty clear how that’s worked out. This is about more than flatlining GDPs and no democracy and mass emigration and suicide bombers, though it’s about that, certainly. It’s also about cultural despair.

Among the economic powerhouses of the new “global village,” polyglot India is unified by the use of alphabetic English, while China has relied on alphabetic Pinyin for more than fifty years. Joseph Needham, the British Sinologist, famously asked why the Scientific Revolution didn’t happen in China. He gave a long list of Chinese “scientific” accomplishments, but they all represent technological advances, not scientific ones. Havelock pointed out that Chinese writing, in which each sign stands for a whole word, offers no way to handle novelty—if readers don’t know a given sign already, they have no way to look it up. And there are tens of thousands of them to memorize. As with other forms of nonalphabetic writing, literacy in Chinese script never spread beyond a narrow and highly trained elite, in this case the mandarins of the imperial bureaucracy. Only by adopting alphabetic Pinyin after the Chinese Revolution was China’s government able to increase literacy.

“Vowel” comes, by way of the Latin vocalis, from vox, “voice.” This is fitting, because the alphabet gave us original, individual voices to replace the traditional, collective voice of orality. The vowel still gives us our voices. Earlier media critics such as Marshall McLuhan, Walter Ong, and Neil Postman saw telephones, radio, and television as portending a new age of electronic orality. If that age existed at all, it didn’t last long. In just a few short decades, these inventions have been joined by laptops, smartphones, and tablets, all requiring keyboards in one form or another. It is print, and print culture, that has been displaced in an age of electronic literacy, not the promiscuous alphabet, which merely found a new partner. My guess is that the last trade book to be typeset in the traditional way (rather than printed from digital files, the standard practice today) went to press around 1988, the year Havelock died.

As for the alphabet, it’s doing just fine after the breakup, thank you. Just ask Twitter, as zillions of tiny 280-character sledgehammers (formerly 140-character sledgehammers) attack the deepest foundations of print culture, from politics to corporations to publishing itself. As Spock said to Kirk in the latest Star Trek movie, with the Enterprise being ripped to shreds by swarms of rapacious nanobots, “Captain, we are not equipped for this manner of engagement.”

The alphabet’s sleek new partner has enchanted us. It fits in our pockets so we are never without it. We don’t talk on our phones anymore as much as we read and write on them. Likewise, we may watch viral videos on YouTube, but try finding one without entering letters in a search field.

The alphabet has moved on. What about us, trapped in the dizzying rush of abstraction’s event horizon, seemingly forever? The downside of everyone’s having a voice is that no one can tell who is speaking. Attention shifts from content to identity—and its grail cup, authenticity, whose Greek root means “self-accomplishing.” Edward Jeremiah reminds us that for Greeks like Sophocles, balanced on the very knife-edge of abstraction, this was not a favorable development at all, but terrifying and dangerous, the hubristic imposition of autos on the natural—that is, social—order. “With every new category,” Jeremiah writes, “there is a perversion of that category, its distorted reflection, and tragedy explores this darker side as its cost. Indeed, in many ways tragedy seems to mourn the birth of the self and its reflexive acts. It treats this category, and the technology of self-care, with the conflicted and sceptical attitude the luddite shows material technology. This is nowhere clearer than in the case of Oedipus, whose single-minded pursuit of the Delphic exhortation to know himself reaps not self-enlightenment but self-destruction.”10

Literacy has given us much, but like any gift it stole something, too. The social order is no longer “natural.” We are alienated and angst-ridden creatures now, fearful of our own technologies and selves. It would take St. Paul to truly capitalize on this feeling by alphabetically marketing the new panacea he called “faith” (pistis in Greek), but Sophocles anticipated the anxiety itself.11 Perhaps there were others who felt like that before Sophocles’s time, but we have no way of knowing. Their voices are gone with the wind.

Notes

  1. Adam Parry, ed., The Collected Papers of Milman Parry (Oxford, England: Oxford University Press, 1971).
  2. A.R. Luria, Cognitive Development: Its Cultural and Social Foundations (Cambridge, MA: Harvard University Press, 1976), 58. This early work of Luria’s was not published even in Russian until the 1970s. Walter J. Ong’s influential book Orality and Literacy (London, England: Methuen, 1982) featured Luria’s results prominently; it also did much to spread Havelock’s ideas to academics outside the classics field.
  3. Eric A. Havelock, Preface to Plato (Cambridge, MA: Harvard University Press, 1963). See also The Muse Learns to Write: Reflections on Orality and Literacy from Antiquity to the Present (New Haven, CT: Yale University Press, 1986) and The Literate Revolution in Greece and Its Cultural Consequences (Princeton, NJ: Princeton University Press, 1982), the latter being a collection of Havelock’s academic articles.
  4. Lynne Kelly, The Memory Code: The Secrets of Stonehenge, Easter Island, and Other Ancient Monuments (New York, NY: Pegasus Books, 2017). Kelly’s writing inspired my fanciful portrait of the architect. I have taken a few liberties that I hope she would find acceptable.
  5. Kevin Robb, Literacy and Paideia in Ancient Greece (New York, NY: Oxford University Press, 1994). For the purposes of the present article, I have ignored two aspects of consonantal writing that may appear relevant at first glance but do not bear directly on my argument here. The first is the presence of consonantal roots (so-called triradicals) in Semitic languages such as Phoenician, Arabic, and Hebrew. The second is the availability to writers of those languages of workarounds such as vocalic points and matres lectionis (consonants that stand in for vowel sounds), even if they were not commonly used. My argument concerns the dynamic of reading, which I think is the important thing. Neither separately nor together do either the helpful roots or the workarounds change the fact that in order to read a consonantal text, you need to figure out what it is saying first. Nor do they remove what Havelock called “residual ambiguity,” by which he meant the absence of capacity of any consonantal text to yield a definitive reading. I have also passed over an interesting point Robb makes in Literacy and Paideia, which is that Greek meter is based not on stress but on vowel length, which he suggests would have helped compel the deployment of dedicated letter shapes for vowel sounds (along with the fact that as an Indo-European language, Greek relies on vowel sounds for lexical differentiation, unlike Semitic languages).
  6. Jack Goody and Ian Watt, “The Consequences of Literacy,” Comparative Studies in Society and History 5, no. 3 (1963): 304–45.
  7. See Carlo Rovelli, The First Scientist: Anaximander and His Legacy, trans. Marion Lignana Rosenberg (Yardley, PA: Westholme, 2007). A physicist, Rovelli has a better understanding of the significance of new ideas than many contemporary scholars in the humanities. Rovelli also originated the phrase “a stone floating in space,” which appears as a subhead in this essay.
  8. Elizabeth L. Eisenstein, The Printing Revolution in Early Modern Europe, 2nd ed. (Cambridge, England: Cambridge University Press, 2005).
  9. Arab Human Development Report, 2002 (New York, NY: United Nations Human Development Programme, 2002).
  10. Edward T. Jeremiah, The Emergence of Reflexivity in Greek Language and Thought: From Homer to Plato and Beyond (Leiden, Netherlands: Brill, 2012), 147–48.
  11. A fuller discussion may be found in Colin Wells, “How Did God Get Started?” Arion 18, no. 2 (Fall 2010): 111–37, https://www.bu.edu/arion/files/2010/10/Wells_21Sept2010_Layout-1.pdf.

Colin Wells is a writer and independent scholar whose books include Sailing from Byzantium: How a Lost Empire Shaped the World and A Brief History of History: Great Historians and the Epic Quest to Explain the Past.

Reprinted from The Hedgehog Review 20.3 (Fall 2018). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.
