1024 Points of Light: Project Whirlwind and the Emergence of Interactive Computer Screens

It is hard to imagine what computers, and computing devices such as tablets and smart phones, would be like without the glowing, colour-rich screens that we use to access information, make phone calls, send text messages, play games, and otherwise engage in innumerable interactions. With the extreme miniaturization of CPUs and memory, and the virtualization of keyboards, many devices appear to be nothing but screen, with perhaps a token button or two along the edges. The digital screen has become an essential component of our digital hardware devices, and it is hard to imagine how it could be otherwise. But for years, many computers went without displays, relying on teletype machines, punched card readers and writers, and printers for input and output purposes. The first displays were ancillary equipment, used largely to display bit values stored in memory at a time when such memory was composed of vacuum tubes. These relied on cathode-ray projection techniques that had been used extensively in the development of radar over the course of the Second World War, and were also being used in experiments to project moving images onto a screen, a technology that would become known as television.

CRT displays only very gradually became instruments through which computers could display text and images. This was not because such displays were incapable of doing so, as radar and television were revealing. Rather, it appears that a lack of imagination on the part of some of the first computer engineers was to blame. Consider the following description taken from a 1949 internal summary report of the display built for MIT's landmark Whirlwind mainframe computer:

The display equipment now in use with WWI is intended primarily for demonstration purposes. It gives a qualitative picture of solutions to problems set up in test storage, and it illustrates a type of output device that can be used when data are desired in graphical rather than numerical form.[footnoteRef:1] [1: Servomechanisms Laboratory, Massachusetts Institute of Technology, Summary Report No. 20: Third Quarter, 1949, 29.]

Whirlwind, in fact, would go on to become the primary catalyst for the widespread adoption of digital displays across all types of computers. Yet the passage above is hardly tinged with any great level of excitement. Rather, the display is cast as a secondary object, used for "qualitative" and "graphical" analysis in an age when numbers reigned supreme. Moreover, it was simply used for "demonstration purposes". This mentality was echoed by Norman Taylor, a former engineer on the Whirlwind project, when speaking at the SIGGRAPH conference many years later. Noting that as late as 1950 the machine was still using a "5 [inch] Tektronix scope," he explained that "we were not trying to build a display here; we were building a computer."[footnoteRef:2] And summary reports from earlier in 1949 (preceding the report cited above) were still referencing CRT displays solely with respect to their role as test equipment for the computer's stored memory.[footnoteRef:3] Despite such inauspicious beginnings, however, by the end of Whirlwind's short life the display was being used for everything from graph plotting to pattern generation to gaming; the rudimentary Bouncing Ball game was one of its most popular programs, and would go on to influence the development of the important game Spacewar! more than a decade later.[footnoteRef:4] [2: Jan Hurst et al., "Retrospectives I: The Early Years in Computer Graphics at MIT, Lincoln Lab, and Harvard," 22.] [3: Servomechanisms Laboratory, Massachusetts Institute of Technology, Summary Report No. 16: January 1949, 12-15.] [4: Jim M. Graetz, "The Origin of Spacewar", 56-67.]

Whirlwind would also provide the architectural foundation for the Air Force's Semi-Automatic Ground Environment (SAGE) early warning system, where CRT displays, and "light guns" that could be used to interact with these displays, were key components.[footnoteRef:5] [5: See, for example, Lev Manovich, The Language of New Media, 100-102.]

If this is all true, why is it that Whirlwind's displays were initially deemed to be so unimportant? I will argue in this article that displays were rarely taken seriously until they became normalized as sites of interactivity. We interact with our digital devices on such a constant, real-time basis that it might be difficult to recognize the peculiarity of our actions, or the incredibly precise engineering necessary to facilitate these actions. Digital computing was originally a laborious process, involving the methodical transfer of code and data from a source medium such as teletype to the electronic components that made up a computer's memory. This was a slow procedure, with checks required at regular intervals to ensure that everything was being read correctly. After all of this information was transferred, and after the computer ran the code and processed the given data, output would typically be printed out. Nothing about this process hints at a future in which information could be exchanged virtually instantaneously between a computing device and its user, with a pixelated screen serving as the medium through which such interactions took place.

Whirlwind was well-suited to host these innovations because it was the first computer of the era for which real-time interactivity was made a priority. Launched in the immediate aftermath of the Second World War and lasting into the earliest years of the Cold War, Project Whirlwind was initially intended to produce a mechanical flight simulator for the United States Navy, not a digital computer. Under the leadership of recent MIT graduate Jay Forrester, however, the project gradually shifted its focus from simulator to computer, provoking much official ire from both MIT and the U.S. Navy. The project also became the site of much experimentation with various electronic components, including, as we have discussed, CRT displays. To justify its continued funding, Forrester promoted Whirlwind as a machine capable of performing real-time calculations of the sort that the flight simulator had required, and suggested other military-specific uses in which real-time data would be necessary. As a site of experimentation and research into real-time computing, then, Project Whirlwind was an ideal environment for many of the first experiments in screen-based interactivity.

This article, then, will focus on these early Whirlwind experiments in display-based interactivity. Treatments of Whirlwind tend to gloss over this period and jump ahead to its role in the development of SAGE. Bouncing Ball occasionally earns a mention in histories of gaming. But there remains much primary source material on Whirlwind (including progress reports, internal memos, and transcripts of presentations and discussions by Whirlwind engineers) that can be leveraged to gain an understanding of how the CRT display came to be such an important medium for users. The following account will be roughly chronological, though most of the major events occurred in the late 1940s, and there is conflicting information with respect to timing in some cases. I will discuss SAGE only briefly in the conclusion; it has been described and analyzed at length elsewhere.

"Captain Among the Synthetics"The prehistory of Project Whirlwind unfolded over the course of the Second World War, and centered around the work of Navy Captain Luis de Florez in the field of "synthetic training devices." The term may be unfamiliar to us, but these "synthetic" devices were essentially rudimentary flight simulators. A New Yorker article from the time describes them as a "collection of peculiar and complicated machinesthat simulate the conditions of aerial combat"; de Florez's notoriety in this field was such that the same article dubbed him the "captain among the synthetics." [footnoteRef:6] Unlike modern simulators, however, these were not high-end electronic systems; rather, they were mechanical devices intended to recreate the cockpit environment for novice flyers. De Florez seems to have made it his mission to develop increasingly sophisticated "synthetic" devices, housed within a facility called the Special Devices Depot. They all operated through what de Florez refers to as a "Flight Engineer's Panel" that is, a simulated cockpit instrument panel that functioned as would be expected from a real airplane in flight; the article notes that the student user "can maneuver the device exactly as he will be expected to maneuver his plane later over the Pacific."[footnoteRef:7] An instructor can adjust the machine so that it presents a variety of emergency situations, and the information displayed on the panel would adjust accordingly. [6: Robert Lewis Taylor, "Captain Among the Synthetics," 32.] [7: Robert Lewis Taylor, "Captain Among the Synthetics," 34.]

Here, then, we can see an important pre-digital form of machine-based interactivity. Instrument panels are themselves interactive interfaces, displaying information such as velocity and altitude while also providing mechanisms for the user/flyer to adjust aspects of this information (though, given the complexities of flight and the quality of aircraft at the time, such agency could be limited, or lead to unexpected results). Regardless, de Florez's control panels served to abstract elements of human flight and reduce them to only those actions that could be read and adjusted from the cockpit. This set up an intriguing feedback loop that pivoted around a machine that, from a certain perspective, behaved rather arbitrarily. It was only useful because students and instructors could conceive of it as an actual cockpit instrument panel, at least to a certain extent. They could then read this panel as if they were in actual flight, and interact with it accordingly.

As already noted, de Florez was always looking to upgrade and refine his "synthetic" machines to create more realistic simulated flying experiences. The 3A2, one of the most advanced of these trainers, even simulated aerial combat. It did so, crucially, by adding a screen to the simulation apparatus. The following citation, while somewhat lengthy, succinctly describes the operation of the 3A2:

The 3A2 consists of a cockpit on a raised platform some twenty feet from a movie screen...the 3A2 goes into action when somebody climbs in and turns on the switches...the sound of an engine begins and, since the cockpit is so set up that it can go through nearly all the motions of a plane aloft, the occupant is, in effect, actually flying...when the switches are thrown, a movie projector gets under way and the student finds himself in the middle of a dogfight...With a hand on the stick and a thumb on his gun switch, he must learn to outfly and outshoot the [Japanese]. His bursts of machine-gun fire show up on the screen as needle points of white light and enable him to correct his position, just as tracers will do later in combat.[footnoteRef:8] [8: Robert Lewis Taylor, "Captain Among the Synthetics," 34.]

Beyond the simulated control panel, then, the user would also need to buy into the notion that they were shooting at real Japanese aircraft that were actually just projected on a screen. This sort of conceptual leap may seem rather trivial now (we are used to computers presenting us with simulated environments, ranging from onscreen "desktops" to three-dimensional gaming worlds), but simulations of any kind were still quite novel in this period.[footnoteRef:9] In these early examples, however, we can see how simulations depend on dynamic streams of information to sustain their verisimilitude: information needs to be presented to the user by the simulation machine in real time, and the machine also has to accept information sent by the user in real time. Associated with these requirements is an important stipulation: this information must be accurate with respect to the machine that is being simulated. It was concerns over this issue that led to the creation of Project Whirlwind. [9: Arguably the most well-known sort of simulation in existence at the time was the war game. War games (the sort that require miniatures, simulated terrain, and dice) had emerged from Prussia in the 19th century, and spread rapidly to other militaries around the world. They have little in common with the simulators we are looking at here, but it is helpful to consider what the term simulation would have meant to many military officers of the time (see Jon Peterson, Playing at the World).]

In the later years of the Second World War, research and development efforts had become so intense that de Florez's group could not keep up with the flow of increasingly sophisticated aircraft being sent into combat.[footnoteRef:10] Each of de Florez's synthetic machines was tailored to simulate only one specific vehicle; they could not be easily reconfigured to accommodate new specifications. Thus the U.S. Navy came to MIT (de Florez's alma mater) proposing to fund a project to build a "protean, versatile, master ground trainer that could be adjusted to simulate the flying behavior of any one of a number of warplanes."[footnoteRef:11] This arrangement was formalized in 1944; the final product was to be called the Airplane Stability and Control Analyzer (ASCA). The analyzer component of this project (that is, the device that was to perform all the necessary calculations to simulate flight) was initially expected to be a mechanical analog computer, not a digital machine. As such, the project was turned over to the Servomechanisms Laboratory at MIT, and was headed up by an assistant director at the lab, Jay Forrester. [10: Redmond and Smith, Project Whirlwind, 1.] [11: Redmond and Smith, Project Whirlwind, 2.]

By all accounts, Forrester was ambitious, perhaps even headstrong, and did much of the early research on the ASCA on his own, or in tandem with a handpicked group of fellow engineers. While agreeing with the initial proposal for an analog analyzer, Forrester gradually came to learn that such a device would be extremely difficult to construct out of mechanical parts.[footnoteRef:12] After learning more about digital computation, and after a visit to the University of Pennsylvania to see the ENIAC and EDVAC digital computing projects, he moved the project in that direction, as this passage from a 1946 summary report on Whirlwind indicates: [12: Redmond and Smith, Project Whirlwind, 14-20.]

When the project was organized in December 1944 plans were based on the use of analogue computation. It became apparent that analogue methods would lead to excessive complexity and that other computing techniques should be studied. High-speed electronic digital computation is now being investigated in an effort to develop components for application to the problems of immediate interest.

By 1948, the flight simulator aspect of the project would be "indefinitely postponed" in order to focus on the digital computer. A summary report from that year blamed the decision in part on "pressure for staff time on other phases of the research program," though that is largely because Forrester was pivoting the entire project towards the development of what would be known as the Whirlwind I computer.[footnoteRef:13] We will get into more detail about this below, but first it is necessary to trace the origins of what would become one of Whirlwind's most important features and the focus of this article: the cathode-ray tube display. [13: Servomechanisms Laboratory, Summary Report No. 9, 12.]

The Cathode-Ray Display

The 1949 summary report cited in the introduction went on to describe the Whirlwind's CRT display as "an oscilloscope having a linear horizontal sweep, a decoder for converting binary numbers into analogous voltages which provide vertical deflections for the cathode-ray beam, gate generating circuits to produce the unblanking pulses, and control circuits to accomplish periodic repetition of the computations."[footnoteRef:14] While we need not consider all the technical details of the device here, it is worth examining the general functionality of the display, as well as relevant events in the history of the CRT up to the point at which Project Whirlwind began. This will provide the context necessary to better understand the innovations introduced by Whirlwind researchers. [14: Servomechanisms Laboratory, Summary Report No. 20, 29.]
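To make the report's description a little more concrete, the sketch below models just the decoding step: a binary register value is mapped onto a proportional deflection voltage for one axis. This is a modern, hypothetical illustration rather than Whirlwind's actual circuitry; the 5-bit register width (which would yield a 32 x 32 grid of 1024 addressable points) and the unit voltage range are assumptions made for the example.

```python
# A minimal sketch (not Whirlwind's actual decoder circuit) of turning a binary
# register value into a proportional deflection voltage, as described in the
# 1949 report. Register width and voltage range are illustrative assumptions.

def deflection_voltage(register_value: int, bits: int = 5, full_scale: float = 1.0) -> float:
    """Map an unsigned binary value onto a beam-deflection voltage for one axis.

    With 5 bits per axis, the screen offers 32 x 32 = 1024 addressable points.
    """
    if not 0 <= register_value < 2 ** bits:
        raise ValueError("value does not fit in the register")
    return full_scale * register_value / (2 ** bits - 1)

# Plotting the point stored as binary 10110 (decimal 22) on the vertical axis:
print(deflection_voltage(0b10110))  # roughly 0.71 of full-scale deflection
```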

At the most fundamental level, a CRT is simply a device that is capable of shooting electrons through a vacuum; as these electrons strike a screen set up on the other side of the vacuum tube, they give off a glow. A number of scientists and inventors (Heinrich Geissler, William Crookes, and J. J. Thomson, to name just a few) made advances in this area, but Ferdinand Braun is generally credited with developing the first true cathode-ray tube.[footnoteRef:15] Yet they all owe partial credit for their respective discoveries to William Watson, an eighteenth-century English scientist who "was particularly interested in the conduction of electricity through rarefied gases and vacuo."[footnoteRef:16] Building a vacuum tube with electrodes at either end, Watson observed "brushes of electrical fire...passing from the periphery of the upper plate to that of the lower," and was therefore the first individual to produce the effect that would evolve into the CRT and its screen.[footnoteRef:17] This is relevant here because it reflects the fact that the science and engineering behind the CRT were not initially devoted to a specific purpose; they were only made instrumental after the fact. This spirit of experimentation would continue throughout the nineteenth and early twentieth centuries, as the CRT played central roles in a diversity of emerging technologies. [15: R. W. Burns, Television: An International History of the Formative Years.] [16: Burns, Television, 111.] [17: William Watson, "An Account of the Phenomena of Electricity in Vacuo with Some Observations Thereupon," 366.]

The oscilloscope used by Whirlwind can trace its ultimate origins back to the work of Braun, who used his CRT to "study the rapid variations of electrical currents" within electrical circuits; oscilloscopes, technically, are designed to "make visible a time-varying electrical state, usually in a graph with time as the horizontal coordinate and signal voltage as the vertical coordinate."[footnoteRef:18] They came to prominence before and during the Second World War as the displays used in the first radar systems, ranging from the simple 'A' scope, which only displayed range along a horizontal sweep, to the more sophisticated 'B' scope, which showed range and azimuth (angle) along a two-dimensional plane.[footnoteRef:19] Yet in all cases the basic functionality remained essentially the same, with the horizontal or vertical sweep of a "cathode-ray beam" deflected to either side of the axis by electrical current. These deflections would show up on the display screen, but they would also quickly fade away. Successive sweeps across the tube could reproduce the fading image, or they could draw a new image, thereby animating the changes in electrical current being received. In the case of radar, this process was leveraged so that the movement of objects picked up by the radar set could be tracked. [18: Frederik Nebeker, Dawn of the Electronic Age: Electrical Technologies in the Shaping of the Modern World, 1914 to 1945, 335.] [19: Henry E. Guerlac, RADAR in World War II, 596-602.]

What we have here, then, is a technology that is designed to capture and display dynamic information. Along with being dynamic, such information would also need to be ephemeral, and of little to no use beyond meeting immediate needs. Presumably, the locations of objects detected by a radar device did not need to be stored and recalled at a later time; or, at least, the benefits of storing such information did not outweigh those gained by using an oscilloscope for the task at hand. If we accept this premise, we can then appreciate the fact that the oscilloscope would hardly have been recognized as a useful tool with respect to recording information output by the earliest digital computers. As is well known among historians of technology, the digital computer was originally designed in the 1940s to serve as a means to mass produce firing tables for the many artillery pieces that were being used in the war effort. The ENIAC was commissioned to address this very need, as was the EDVAC, even if it was recognized by its designers as also having vast potential as a "general-purpose" computing device.[footnoteRef:20] Such tables were meant to be printed and distributed as needed, not flashed on a screen for a brief moment. In order to incorporate a CRT screen as an output device, there needed to be a task that required the display of ephemeral, ever-changing information. [20: Harry Polachek, "Before the ENIAC," 25-30; and Nancy Stern, From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers.]

And there was, in fact, one particular task related to digital computing for which the CRT was recognized early on as a useful tool: testing the stability and integrity of internal memory stores. We see this application described in Whirlwind summary reports; the CRT-based "storage tube reliability tester," for example, is described in one report as follows:

The reliability tester is being designed to use up to five storage tubes and to read spots from one tube to a temporary storage register and then rewrite them on the next tube. Thus, a pattern may be made to cycle from one tube to the next through all five tubes and then back to the first tube, continuing in a loop over periods of hours. Any error in reading or writing will be detectable as a change in the original pattern.[footnoteRef:21] [21: Servomechanisms Laboratory, Summary Report No. 14, 11.]

In order to check the integrity of the "spots" (what we would probably call "dots") written into the storage tubes at each stage of this process, the tester relied on CRTs to read and display them onscreen. In this scenario it was preferable to have the information on each CRT screen fade away, so that each spot could be redrawn on the next pass. In other words, if we are keeping an eye out for changes in the spot pattern, we need a device that can reflect such change. What is happening here is that each CRT is displaying information as it changes (or does not change) in real time. When the proper case presents itself, then, CRTs can become quite useful as peripherals to digital computing machines.
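The logic of the reliability tester can be expressed compactly in modern terms. The sketch below is a loose simulation, not the original test program: the five-tube arrangement comes from the report, but the pattern size, fault rate, and data structures are assumptions made purely for illustration.

```python
# A loose simulation of the storage-tube reliability test described in the
# report: a spot pattern cycles tube-to-tube through a temporary register and
# back to the first tube; any change in the pattern signals a read/write error.
# Pattern size and fault model are illustrative assumptions.
import random

NUM_TUBES, PATTERN_BITS = 5, 64
original = [random.randint(0, 1) for _ in range(PATTERN_BITS)]
tubes = [original[:]] + [[0] * PATTERN_BITS for _ in range(NUM_TUBES - 1)]

def copy_with_possible_fault(source, fault_rate=0.0005):
    """Read a pattern into a temporary register, occasionally flipping a bit."""
    register = source[:]
    for i in range(len(register)):
        if random.random() < fault_rate:
            register[i] ^= 1  # an unreliable read or write
    return register

for cycle in range(10_000):               # "continuing in a loop over periods of hours"
    for t in range(NUM_TUBES):
        tubes[(t + 1) % NUM_TUBES] = copy_with_possible_fault(tubes[t])
    if tubes[0] != original:              # "any error ... will be detectable as a change"
        print(f"error detected on cycle {cycle}")
        break
```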

Light Guns, and Points of Light

Reflecting on the earliest of Whirlwind's displays, and borrowing a favourite phrase of former President George H.W. Bush, Norman Taylor noted that the first Whirlwind CRTs could generate "256 points of light," but later displays upped this number to 1024.[footnoteRef:22] He then says something that is crucial for our purposes. After noting the ways in which CRTs were used to test memory (as described above), he goes on to talk about how this procedure was augmented with a means by which to interact directly with the information onscreen. The problem with the original test was that it was difficult to determine which address in memory each onscreen spot/point represented. Taylor explains the solution to this problem in his conversational style: [22: Hurst et al., "Retrospectives I," 20.]

We were asking how we can identify the address of [each] spot... So Bob Everett, our technical director, said "we can do that easily." All we need is a light gun to put over the spot that stops and we'll get a readout as to which one it is. So he invented the light gun that afternoon and the next day we achieved man-machine interactive control of the display, I believe for the first time.[footnoteRef:23] [23: Hurst et al., "Retrospectives I," 20.]

Whether or not this was the first example of "man-machine" control of a display, the development of the "light gun" marked the arrival of one of the most important early tools for onscreen interactivity. Before we get into why that is, however, it is necessary to digress somewhat in order to confront a historiographical concern. Even though Taylor gives credit to Robert Everett for the development of the light gun, and claims that this all took place in "late '48 or early '49," there appears to be no other evidence that backs him up on all points.[footnoteRef:24] Perhaps even more concerning is the fact that there appear to be no references to a "light gun" in the extant documentation for Whirlwind until autumn 1950; and at that point, the device is discussed only with respect to its use in SAGE (to be discussed below).[footnoteRef:25] Everett's name, moreover, is nowhere to be found in these SAGE-related references. How, then, should we proceed on this issue? [24: Hurst et al., "Retrospectives I," 20.] [25: The earliest of these references may be found in Servomechanisms Laboratory, Bi-weekly, Project 6673, September 29, 1950, 5; subsequent bi-weekly reports often mention it as well.]

If, in fact, Everett did develop the original Whirlwind light gun, this would not be the only time that a Whirlwind side project went unmentioned in the official extant documentation (more on this below). So we should not take the absence of references to Everett as a sign that he was not involved. And, in fact, a source printed decades later does briefly mention him: a 1981 report to the Office of Naval Research on the subject of basic research. Its purpose is explained in the introduction as follows:

This report attempts to show the returns on investment from three distinctly different basic research activities funded at the Massachusetts Institute of Technology (M.I.T.) by the Office of Naval Research (ONR), the first Federal agency to contract broadly for basic research beginning in 1946.[footnoteRef:26] [26: Bruce S. Old Associates, Inc. Return on Investment in Basic Research: Exploring a Methodology, 1.]

One of the projects chosen for this evaluation was Whirlwind. In a section entitled "New Knowledge from Whirlwind," for the year 1950, the report boasts of "the development of a light gun photocell to permit the first communication between the operator and the computer in aircraft intercept exercises (R.R. Everett)."[footnoteRef:27] Everett is clearly being given credit for the light gun here, though at a later date than that claimed by Norman Taylor. Everett himself has also backed up these claims, while admitting that the device could have been invented in several places at roughly the same time.[footnoteRef:28] In the end, however, what really matters is how such technologies are used; a solid case can be made for crediting Everett with the development of the Whirlwind light gun, but there is no need to discuss the matter any further here. [27: Bruce S. Old Associates, Inc. Return on Investment in Basic Research: Exploring a Methodology, 18.] [28: Redmond and Smith, From Whirlwind to MITRE, 82.]

Getting back to the gun itself, then, what Taylor describes in his presentation is an entirely new means (for the time) to act upon a computer's memory stores. Initially, it would seem, all the gun could do was report the address in memory of whatever spot it was pointed at. At some point, however (and Taylor is not very specific on this), it became possible to actually modify the memory at each address. It appears as if this could only be done in one direction; that is, spots could be removed from the screen, but not added (given the capabilities of the light gun, this is the most logical scenario, even though Taylor is once again somewhat vague on the details).[footnoteRef:29] A slide in Taylor's presentation shows the name "MIT" spelled out in dots, which was accomplished by removing most of the dots in the centre of the screen to implicitly create each letter. Such tricks became a popular means to show off Whirlwind's power; as Taylor wryly puts it, "[i]t was clear that displays attracted potential users; computer code did not."[footnoteRef:30] [29: Hurst et al., "Retrospectives I," 20.] [30: Hurst et al., "Retrospectives I," 20.]
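It may help to restate the light-gun procedure Taylor describes in algorithmic terms. The sketch below is a hypothetical reconstruction, not Whirlwind's circuitry: the display refreshes the stored spots one address at a time, and when the photocell held over a spot sees that spot flash, the address currently being drawn is latched. The mapping of addresses to screen positions and the one-way "erase" operation are assumptions made for illustration only.

```python
# A hypothetical reconstruction of the light-gun readout: during the refresh
# sweep, the address being drawn at the moment the photocell fires is latched
# and reported back to the operator.

# memory address -> screen position of the spot it produces (assumed 32 x 32 layout)
spot_positions = {addr: (addr % 32, addr // 32) for addr in range(1024)}

def light_gun_read(aimed_at, lit_addresses):
    """Return the address of the lit spot the gun is held over, if any."""
    for address, position in spot_positions.items():   # the display's refresh sweep
        if address in lit_addresses and position == aimed_at:
            return address                              # photocell fires; latch the address
    return None

def light_gun_erase(aimed_at, lit_addresses):
    """One-directional editing: a selected spot can be cleared, but not set."""
    address = light_gun_read(aimed_at, lit_addresses)
    if address is not None:
        lit_addresses.discard(address)
    return address

lit = {5, 40, 72}
print(light_gun_read((8, 2), lit))  # address 72 is drawn at column 8, row 2
```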

In the previous section, it was noted that testing internal memory was just the sort of scenario for which a CRT was an ideal component. We can now build on that, and state that the manipulation of this memory emerged as an ideal problem to solve via an interactive CRT-based system. As with the flight simulators discussed above, there is a tight bond in this process between the actions of reading information and sending information to the system. With the simulators, most of this information was provided via cockpit readouts, the aerial combat film being the only major exception. Since we are dealing with display screens in the Whirlwind case, it may seem as if analogies could be drawn between this film and the memory manipulation techniques described here. I would claim, however, that the better analogy is to associate these memory techniques with the cockpit readouts from the simulators. These readouts provide quantitative data to the user; similarly, Whirlwind's memory manipulation functionality works with quantified values, both to express location in memory and the value assigned to that location. The film simply provides a backdrop for simulated combat. This is an important point worth emphasizing: while digital displays often present graphical information to the user (as with the memory "spots" described here), this information is wholly grounded in quantified data. The screen serves as a site for interactivity only because it can render digital data pictorially, as well as translate user input into quantifiable information. Film cannot perform such tasks, and is therefore a very different sort of technology as compared to the digital display.

Follow the Bouncing Ball

Apart from the light gun, which we will return to shortly, Whirlwind's engineers were finding other ways to make interesting and interactive display programs. It quickly became apparent that the oscilloscopes they were working with could do more than just help in the testing and setting of memory. Graph plotting was perhaps the most obvious direction to take with respect to generating more interesting output. A 1949 summary report describes this functionality as follows:

One value for each of the functions to be plotted is obtained by a single cycle through a program set up in test storage, and successive points on these curves are derived from repeated cycles through the same program with the independent variable increased by a fixed increment for each cycle...the linear horizontal sweep of an oscilloscope is used to provide deflection along the axis of the independent variable.[footnoteRef:31] [31: Servomechanisms Laboratory, Summary Report No. 20, 29.]
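In modern terms, the procedure amounts to a simple loop. The sketch below is only a schematic restatement of the report's description; the plotted function, step size, and cycle count are placeholders.

```python
# A schematic restatement of the report's plotting procedure: each cycle through
# the program yields one value of the function, and the independent variable is
# stepped by a fixed increment between cycles. The horizontal sweep supplies the
# x deflection. Function, increment, and cycle count are placeholders.
import math

def plot_by_repeated_cycles(f, x0=0.0, dx=0.05, cycles=200):
    points, x = [], x0
    for _ in range(cycles):       # "repeated cycles through the same program"
        y = f(x)                  # one value obtained per cycle through test storage
        points.append((x, y))     # x comes from the linear horizontal sweep
        x += dx                   # "increased by a fixed increment for each cycle"
    return points

curve = plot_by_repeated_cycles(math.sin)
```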

Initially, such efforts were directed towards plotting second-order differential equations, largely related to solving problems in electromagnetism.[footnoteRef:32] Such programs could hardly be deemed to be interactive in any obvious way. But it did not take long for Whirlwind's engineers to build on this early work and create more engaging display-based routines. One such program was known as Bouncing Ball, which was arguably the first interactive computer game ever created.[footnoteRef:33] How important was Bouncing Ball? J. Martin Graetz, one of the creators of the very important and influential game Spacewar!, cited it as a source of inspiration years after the fact: [32: Servomechanisms Laboratory, Summary Report No. 20, 28-29.] [33: Norman Taylor makes this claim, as we shall see, and it has some merit. The computer game often cited as the first, William Higinbotham's Tennis for Two, was created in 1958, nearly a decade later. OXO, a noughts and crosses (or "tic-tac-toe") game for the British EDSAC computer, was created by Alexander Douglas in 1952, a few years after Bouncing Ball.]

When computers were still marvels, people would flock to watch them still at work whenever the opportunity arose. They were usually disappointed. Whirring tapes and clattering card readers can hold one's interest for only so long... On the other hand, something is always happening on a TV screen, which is why people stare at them for hours. On MIT's annual Open House day, for example, people came to stare for hours at Whirlwind's CRT screen. What did they stare at? Bouncing Ball. Bouncing Ball may be the very first computer-CRT demonstration program.[footnoteRef:34] [34: J. M. Graetz, "The Origin of Spacewar," 60.]

Spacewar! was programmed on a PDP-1, a machine designed by Digital Equipment Corporation (DEC) and based on MIT's TX-0 computer, itself a transistorized version of the Whirlwind computer. So Graetz is correct in citing Bouncing Ball as an antecedent to his group's work.

The game itself was quite simple, and leveraged the ability of Whirlwind's display scopes to plot graphs such as those discussed above. Taylor summarizes its history and functionality quite succinctly, as follows:

Charlie Adams, the original programmer, decided that we'd better go beyond static curves. And he invented what we call the Bouncing Ball Program, the solution of three differential equations...You see that the bouncing ball finds a hole in the floor and the trick was to set the frequency such that you hit the hole in the floor. This kept a lot of people interested for quite a while and it was clear that man-machine interaction was here to stay. Anyone could turn the frequency knobs.[footnoteRef:35] [35: Hurst et al., "Retrospectives I," 21.]

Despite its simplicity, however, Bouncing Ball is nothing less than an interactive, screen-based game. With respect to the differential equation plots described above, the user was essentially a passive viewer of onscreen information. Bouncing Ball, on the other hand, integrates the user into the sort of feedback loop we discussed with respect to the cockpit readouts in de Florez's flight trainers: the user is presented with onscreen information, inputs data into the computer in order to change what is onscreen (via the "frequency knobs" described by Taylor), observes the results, and repeats the process if necessary. The oscilloscope screen is an ideal medium through which to play such a game; because points of light drawn on it degraded quickly, it was relatively trivial to "animate" the ball by plotting it at successive locations along its movement path. Once the ball disappeared or stopped moving, the screen could erase everything and reset the initial position of the ball. Within the context of the game, all of this onscreen information was ephemeral, and useful only in the immediate circumstances in which it was produced.
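A toy re-creation in modern code may make this feedback loop clearer. The sketch below is not Adams's original routine (which solved three differential equations on the scope); it simply models a ball falling under gravity and losing energy at each bounce, while the player's "frequency" setting scales its horizontal speed, with success defined as dropping through a hole in the floor. All constants are assumptions chosen for the example.

```python
# A toy model of the Bouncing Ball idea, not Charlie Adams's original program:
# the player turns a "frequency" knob (here, the ball's horizontal speed) and
# tries to make the ball drop through a hole in the floor. All constants are
# illustrative assumptions.

def bouncing_ball(frequency_knob, hole_x=7.5, hole_width=0.5, steps=4000):
    x, y = 0.0, 10.0                 # starting position of the ball
    vx, vy = frequency_knob, 0.0
    dt, g, damping = 0.01, 9.8, 0.7
    for _ in range(steps):
        vy -= g * dt
        x += vx * dt
        y += vy * dt
        if y <= 0:                                   # the ball reaches the floor
            if hole_x <= x <= hole_x + hole_width:
                return True                          # it falls through the hole
            y, vy = 0.0, -vy * damping               # otherwise it bounces, losing energy
    return False

# the "game": anyone could turn the knob and watch the result on the scope
for knob in (1.0, 1.6, 2.0, 2.5):
    print(knob, bouncing_ball(knob))
```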

Unlike the differential equation plots, Bouncing Ball would not translate well if output to a printer or other form of physical media. The game depended on the display's need to continuously refresh the onscreen image, thereby producing the illusion of movement. While perhaps somewhat frivolous, then, Bouncing Ball was just the sort of program that required a display. There were other, similar cases as well; in a 1949 summary report (the same report that described the differential equation programs discussed above) we find an intriguing program called the "target-seeking simulation".[footnoteRef:36] According to the report, the program was designed to "pictorially illustrate" what is described as a "cut-and-try" mathematical problem "which involve[s] a great many operations to obtain a few results."[footnoteRef:37] Yet there was more to this program than a simple graph plot. Firstly, it plotted "a horizontal line whose left end can be shifted right or left by changing toggle switches." It then plotted a parabola with "an initial slope which can be varied by the computer." The report then instructs us to imagine the end of the line as a "target," and the parabola as the "trajectory of a shell," noting that "the various possible configurations" of line and parabola "can be interpreted as undershooting, on target, or overshooting" the target.[footnoteRef:38] The functionality of the program is then described as follows: [36: Servomechanisms Laboratory, Summary Report No. 22, 29-31.] [37: Servomechanisms Laboratory, Massachusetts Institute of Technology, Summary Report No. 21: Fourth Quarter, 1949, 29.] [38: Servomechanisms Laboratory, Summary Report No. 21, 29.]

The program...instructs the computer to measure the error (or distance by which the shell misses the target), to make a change in the initial slope of the parabola in accordance with the error, to plot the new trajectory, measure the new error, and so on. Thus the program directs the computer to simulate a control device.[footnoteRef:39] [39: Servomechanisms Laboratory, Summary Report No. 21, 29.]

Unlike Bouncing Ball, then, this was not a game. But users could still interact with the program similarly to how they did so with Bouncing Ball, only this time they set the position of the actual target. After that, it was the computer that "played" the game, working out the proper trajectory to get the "shell" to land in the right spot.
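The cut-and-try logic is easy to restate as a short loop. The following sketch is only a schematic of the report's description, with an idealized parabola and an error-proportional correction rule; the landing-point formula, the gain, and the other constants are all assumptions made for the sake of illustration.

```python
# A schematic of the "target-seeking" routine: the operator fixes the target,
# and the computer repeatedly plots a parabolic "shell" trajectory, measures
# the miss distance, and adjusts the initial slope in proportion to the error.
# The physics and constants here are illustrative assumptions.

def landing_point(slope, g=1.0):
    """Range of the parabola y = slope*x - (g/2)*x**2 launched from the origin."""
    return 2.0 * slope / g

def seek_target(target_x, slope=0.1, gain=0.2, tolerance=0.01, max_tries=100):
    for attempt in range(1, max_tries + 1):
        error = landing_point(slope) - target_x    # overshoot > 0, undershoot < 0
        if abs(error) <= tolerance:
            return attempt, slope                  # "on target"
        slope -= gain * error                      # change the slope "in accordance with the error"
    return None, slope

# the operator sets the target with toggle switches; the computer then "plays"
print(seek_target(target_x=5.0))
```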

Again, the CRT display was an ideal choice to handle the output from this program, plotting each trajectory and then removing it to make room for the next one. And it is important to recognize that the CRT was being used only for such ideal cases; text-based and numerical output was still printed out. It was not an inevitability, then, that the CRT display would become the primary means through which users would interact with their digital computing devices. What we have seen up to this point is a variety of experimental programs that leverage the display as an interactive medium in different ways. None of them on its own makes the case for an expanded role for the CRT. But the tools and practices developed in the wake of such experimentation pointed the way to an even more substantial role for display-based computing.

Conclusion: After Whirlwind

Well before the flight simulator aspect of the Whirlwind was officially dropped, the project had come under intense scrutiny by both MIT and the Office of Naval Research. Atsushi Akera chronicles the struggles that unfolded over this period in exceptional detail, and there is no need to restate the narrative in its entirety here.[footnoteRef:40] There are two major points, however, that require some emphasis. The first is that, regardless of the "rhetoric" (Akera's term) employed by Forrester to justify the continued funding of Whirlwind, his primary goal seems to have been to develop a powerful digital computer, regardless of its utility; as Akera states it, "Forrester was completely taken by digital computers."[footnoteRef:41] That is not to say that he was entirely dismissive with respect to military applications of his work; rather, he intended to build a machine that was much more powerful than such applications required. The second point is that, in order to justify the continued funding of the project, Forrester and his colleagues increasingly emphasized the role that Whirlwind could play in "control" applications, that is, scenarios in which a given computer could read and display information to the user in real time, and could also accept, process, and transmit user input in real time. In a 1948 report emphasizing the differences between Whirlwind and a digital computing project underway at the Institute for Advanced Study (IAS) at Princeton, Forrester and Everett claim that "MIT believes that the development of digital equipment for military control is important in connection with other military research now in progress...IAS being primarily interested in scientific applications will probably take a more leisurely course."[footnoteRef:42] These statements were made to justify the expenses accrued by Whirlwind up to that point; as Akera notes, Forrester was essentially building an apology via memo: [40: Atsushi Akera, Calculating a Natural World: Scientists, Engineers, and Computers during the Rise of U. S. Cold War Research, 181-220.] [41: Akera, Calculating a Natural World, 210.] [42: Jay W. Forrester, and Robert R. Everett, Comparison Between the Computer Programs at the Institute for Advanced Study and the M.I.T. Servomechanisms Laboratory, 5.]

Forrester used these memos to reconstitute a coherent mission for Project Whirlwind. Fundamentally, Project Whirlwind was now a research program in computer applications, especially in the realm of real-time control systems. However, in order to pursue this research, it was necessary to have a fast and reliable computer. This required large initial expenditures on the physical aspects of computer engineering work.[footnoteRef:43] [43: Akera, Calculating a Natural World, 211.]

These efforts culminated in a funding switchover from the Navy to a new branch of the American military, the Air Force. Whirlwind technology was to serve within the command and control centre of the Semi-Automatic Ground Environment (SAGE), a network of coastal radar stations that served as an early-warning system for a potential Soviet aerial attack.[footnoteRef:44] While the project ended up running on IBM computers, the display-based interactive functionality of Whirlwind served as the model for the workstations that received data from the radar network. As Taylor explains it, the MIT engineers designed the prototype workstation, which they then turned over to IBM to mass produce.[footnoteRef:45] [44: Redmond and Smith, From Whirlwind to MITRE, provides a detailed account of this history.] [45: Hurst et al., "Retrospectives I," 23.]

SAGE was made up of 20 division centres, each with its own computer (with two CPUs as a failsafe measure), and each capable of sending radar data to up to 150 workstations, each with its own CRT display.[footnoteRef:46] As Taylor explains it, the display that they chose for the prototype, a "Charactron" from Convair, was capable of rendering text onscreen, a first for Project Whirlwind.[footnoteRef:47] More importantly, we see the return of the light gun, now given a central role in SAGE-related tasks. Each workstation was given an "assigned geographic area" and received all the radar data from within that space. The light gun could then be used to select specific aircraft that were being tracked onscreen; the user would be given information about that particular craft, which could then be "pushed" to the "SAGE Direction Center's summary board," at which point action of some form could be taken against it.[footnoteRef:48] Clearly this is a much more advanced form of display-based interactivity as compared to memory testing and bouncing balls. But it is also clear that SAGE inherited such functionality from Whirlwind. A project designed to simulate aircraft in real time informed the later development of a system that tracked aircraft in real time. And a device designed initially to measure the variances in electrical signals became a medium through which computers and users could freely exchange information. [46: Sebastian Anthony, "Inside IBM's $67 Billion SAGE, the Largest Computer Ever Built."] [47: Hurst et al., "Retrospectives I," 22-23.] [48: IBM Archives, "SAGE Console."]

Or could they? When we use terms such as "output device" to describe the CRT display, we discursively conceal an important fact: while displays can and do depict elements of a computer's internal memory via graphics or text, they also obscure what is typically a much greater amount of data that is not explicitly sent to the screen by a given program. It might be argued that such data is not relevant to the user, or else the program would show it to them. But who or what gets to decide what information is and is not relevant? The program itself, and the developer who created it, have complete agency over the user in this respect. When a user interacts with a screen-based program, then, they are forced to adopt a specific perspective on a limited set of data elements, and have thus ceded much of their control over such data to running code. As the digital screen has become increasingly ubiquitous, such limitations often go unnoticed. By recognizing that the screen was in no way an inevitable addition to the prototypical digital computing system, however, we can discard essentialist thinking about it. We then might be able to understand more clearly what we have gained and what we have lost in terms of agency over our digital information, and how we might work around the limitations to provide even more meaningful forms of user-computer interactivity.

Bibliography

Akera, Atsushi. Calculating a Natural World: Scientists, Engineers, and Computers during the Rise of U. S. Cold War Research. Cambridge, MA: The MIT Press, 2007.

Anthony, Sebastian. "Inside IBM's $67 Billion SAGE, the Largest Computer Ever Built." ExtremeTech, 2013. Retrieved from http://www.extremetech.com/computing/151980-inside-ibms-67-billion-sage-the-largest-computer-ever-built

Bruce S. Old Associates, Inc. Return on Investment in Basic Research: Exploring a Methodology. Report to Office of Naval Research, Department of the Navy, 1981.

Burns, R. W. Television: An International History of the Formative Years. London: The Institute of Engineering and Technology, 1998.

Forrester, Jay W., and Robert R. Everett. Comparison Between the Computer Programs at the Institute for Advanced Study and the M.I.T. Servomechanisms Laboratory. Cambridge, MA, 1948. Retrieved from http://dome.mit.edu/handle/1721.3/45971

Guerlac, Henry E. RADAR in World War II. Sections A-C. Los Angeles: Tomash Publishers, 1997.

Hurst, Jan, Michael S. Mahoney, Norman H. Taylor, Douglas T. Ross, and Robert M. Fano. "Retrospectives I: The Early Years in Computer Graphics at MIT, Lincoln Lab, and Harvard." In Proceedings from SIGGRAPH: Association for Computing Machinery, Special Interest Group on GRAPHics and Interactive Techniques, Boston, MA, July 31 to August 4, 1989, 19-38. New York: Association for Computing Machinery.

Manovich, Lev. The Language of New Media. Cambridge, MA: The MIT Press, 2001.

Nebeker, Frederik. Dawn of the Electronic Age: Electrical Technologies in the Shaping of the Modern World, 1914 to 1945. Hoboken, NJ: John Wiley & Sons, 2009.

Peterson, Jon. Playing at the World. San Diego: Unreason Press, 2012.

Polachek, Harry. "Before the ENIAC." IEEE Annals of the History of Computing 19, no. 2 (1997): 25-30.

Redmond, Kent C., and Thomas H. Smith. Project Whirlwind: The History of a Pioneer Computer. Bedford, MA: Digital Press, 1980.

Redmond, Kent C., and Thomas H. Smith. From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer. Cambridge, MA: The MIT Press, 2000.

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 9: June 1948. Cambridge, MA, 1948. Retrieved from http://dome.mit.edu/handle/1721.3/40715

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 14: November 1948. Cambridge, MA, 1948. Retrieved from http://dome.mit.edu/handle/1721.3/40723

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 16: January 1949. Cambridge, MA, 1949. Retrieved from http://dome.mit.edu/handle/1721.3/40722

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 20: Third Quarter, 1949. Cambridge, MA, 1949. Retrieved from http://dome.mit.edu/handle/1721.3/45940

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 21: Fourth Quarter, 1949. Cambridge, MA, 1949. Retrieved from http://dome.mit.edu/handle/1721.3/40729

Servomechanisms Laboratory, Massachusetts Institute of Technology. Summary Report No. 22: First Quarter, 1950. Cambridge, MA, 1950. Retrieved from http://dome.mit.edu/handle/1721.3/40735

Servomechanisms Laboratory, Massachusetts Institute of Technology. Bi-weekly, Project 6673, September 29, 1950. Cambridge, MA, 1950. Retrieved from http://dome.mit.edu/handle/1721.3/39684

Stern, Nancy. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, MA: Digital Press, 1981.

Taylor, Robert Lewis. "Captain Among the Synthetics, Part I." New Yorker 20, no. 44 (11 November 1944): 32-43.

Watson, William. "An Account of the Phenomena of Electricity in Vacuo with Some Observations Thereupon." Philosophical Transactions, Giving Some Account of the Present Undertakings, Studies, and Labours of the Ingenious in Many Considerable Parts of the World XLVII (1752): 362-376.