Part One: Histories
Amidst growing attention and calls to action on the role of algorithms in our everyday lives, one idea recurs: “opening the black box.” In such analyses, the “black box” describes a process that happens in secret, for which we only know the inputs and outputs, but not the steps that take place between. How might this metaphor be structuring our approach to thinking about algorithms and their place in our lives, long before we get to the work of accounting for the social and political effects of algorithmic systems?
In this first of four posts, I’ll begin an answer to this question by looking at the history of the “black box” as a way of modeling cognitive or computational processes. In the second post, I’ll offer some cautionary words about reliance on this metaphor in the important work of ensuring just systems. Finally, in the last two posts I’ll look to some alternatives to black-box-opening in our relationships to opaque technological processes.
The black box metaphor began to acquire its shape during changes in labor that took place after World War II. Whereas managers before the war had largely treated work as a series of learned behaviors, the designers of work and work environments after the war began to think less about suiting the laborer to the work, and more about suiting the work to the laborer.
More than a mere Taylorist repeater of actions, the new ideal worker of post-war Human Factors research not only acts but perceives, acting according to learned evaluative routines that correlate sensation to action. The ideal post-war laborer is not a person of a particular physical build, conditioned to perform particular motions, but rather a universalized collection of possible movements, curated and selected according to mathematical principles. Human Factors research turned the human laborer into a control for a system, a proper medium for the transfer and transformation of input.
Key to this new approach was the influence of information theory on approaches to both computing and psychology. In computing, the understanding of signals as information paved the way for a mathematics of binary code, in which the course of electrons through physical gates and switches could translate into algorithms and mathematical functions. In psychology, those who had grown weary of behaviorism’s stimulus-response approaches to explaining and modifying human action saw in Claude Shannon’s approach echoes of the structure of the human brain. These early cognitive scientists saw in thought a kind of algorithm performing consistent functions on ever-changing sense data, zipping through the brain’s neural pathways the way electrons travel through the copper of a computer’s circuits.
And so a new understanding of the operator’s actions emerged alongside a new understanding of a computer’s routines. The first software emerged at the same time that psychologists began to analyze human thought and memory as a collection of mathematical functions performed on sense data. In other words, the black box as we know it emerged as a pair of metaphors: one to describe the computational machine, and one to describe the human mind.
Before these developments, systems of manufacture and control were designed to include the human body as a “control” in the operational sense. The control in any function is a limiter, providing brackets for the acceptable inputs and possible outputs. If a laborer slows down his or her work, the entire process slows. In the new post-Taylorist workflow, in contrast, the control is performed by a computational process rather than an embodied human one. The new computers allowed for the programming of internal black boxes within the machine itself. Information from multiple sensors, as it coursed through these machines, would be analyzed and checked for deviation. The result produced from such analyses would set certain mechanical processes in motion in order to produce a desired end.
Although the worker has been replaced by an algorithm as the system control, she or he is not missing from the scene entirely. Rather, the human operator now performs the function of a control for the control. The machine affords the human operator indications of the proper functioning of the software-based controller. Deviations from designated functions trigger new action from the human operator, according to more advanced algorithms than those required of previous industrial operators. This new human operator must synthesize multiple forms of data—visual, aural, even symbolic data—and then decide on a proper course of action, of input to the machine, according to a trained set of decision-making criteria and standards.
Though operating at more of a distance from the phenomena of mechanical system function, this new, error-detecting human operator plays no less critical a role. His or her mental routines must be just as carefully scripted and trained as the Taylorist laborer’s physical actions, and often via an emerging understanding of the brain as a computer.
The new operator is thus less a part of the system even though he or she is made more in the image of that system. Formerly one organ within a mechanical body, the operator now is modeled as a discrete body, tethered to another, mechanical body, and modeled after that body, for the purposes of safe and consistent system flow. The machine and the operator mirror one another, with the interface as their crucial site of division, the glass of reflection and action.
These changes also reshape sociality through the creation of a new entity that includes all agents. This new entity—the organization—invites design at a complex level that accounts for multiple machinic and human actors. Where each machine used to come with an operator as controller, the two treated as a single entity, the post-war machine comes with an operator as agent, who is necessary to the proper functioning of the machine but separate from it. For large-scale projects, this doubling results in increased complexity, which the organization approaches as yet another information-processing problem.
The organization, this plurality of entities, is coincident with the emergence of the interface. Machines and operators without true interfaces—as in Taylorist scenarios—are not collective in that they are not social. They are merely aggregate. Thus some of the biggest moves in computing research toward the latter half of the twentieth century were those that simultaneously addressed the interface between one operator and her machine, and the structure of all machine-human pairs, organized together into one system—one black box process.
Kevin Hamilton is an artist and researcher at the University of Illinois, Urbana-Champaign, where as an Associate Professor he holds appointments in several academic units across theory, history, and practice of digital media. He is currently at work with Infernal Machine contributor Ned O’Gorman on a history of film in America’s nuclear weapons programs; other recent work includes a collaboration with colleagues at Illinois’ Center for People and Infrastructures on the ethics of algorithms in internet and social media platforms.