THE MACHINE THAT CHANGED THE WORLD - TAPES F321-328, F399-402 PAUL CERUZZI
Interviewer:
THIS IS A STORY THAT’S GOT A LOT TO DO WITH MACHINES, SO I WANTED TO START YOU OFF EARLY WITH WHAT IS A MACHINE? WHAT DO MOST PEOPLE THINK OF WHEN THEY THINK OF A CONVENTIONAL MACHINE?
Ceruzzi:
Well many people have a sort of classic definition of a machine, which is something that channels the forces of nature in a specific direction to do something that you want it to do. And if you think of common machines like an automobile or an airplane or a washing machine, it's composed of pieces of metal or various parts that work together in a harmonious way to do a very specific thing. And the difficulty I think that people have with computers is that a computer is not a machine like that. The person who designed the computer, or who manufactured the computer, doesn't really know or even care what somebody's going to do with it once they buy it or start using it. In fact you really can't predict what people are going to do with it, because by its nature the computer is so general in purpose, it's a universal machine, and it can do anything that someone has the ingenuity to discover how to do. And that can never be anticipated by the designer.

Interviewer:
NOW, THAT’S JUST GREAT. GOING TO CONVENTIONAL MACHINES OF, SAY, THE NINETEENTH CENTURY - THESE THINGS YOU SAY THEY WERE CONNECTIONS OF METAL. WHAT DID THEY DO? THEY CAPTURED A PARTICULAR PROCESS- WERE THEY DEFINED IN TERMS OF WHAT THEY DID?
Ceruzzi:
Well, the designer of, or inventor of a classical machine always had a very specific notion in mind.
Interviewer:
[Technical Discussion]
Ceruzzi:
In a classical machine the designer or inventor always had a very specific notion in mind of what that machine was going to do when he or she built it. Consider Edison, for example, when he built, or invented, the phonograph. He had a very specific notion in his head of certain functions, or certain ways, that people were going to use this device; that always had to be in the mind of the inventor when he was building it. He had to think of that. The computer just isn't that way. Although, when people first invented what we consider the modern computer, around the 1940s, they had in mind very specific applications pertaining to mathematics or to various aspects of physics, and they designed the machines with that in mind. However, in the course of their design they discovered a very interesting fact, which was that these machines, if designed in a certain way, could do many, many other things, including things that they never anticipated.
Interviewer:
LET’S GO BACK THEN TO THE NINETEENTH CENTURY WHERE THE FIRST GLIMMERINGS OF THIS SORT OF THING TURNED UP. NOW, YOU’VE TALKED ABOUT CONVENTIONAL MACHINES AS EMBODYING PROCESSES AND CHANNELING ENERGY. NOW, A COMPUTER IN THE DAY OF CHARLES BABBAGE WASN’T A MACHINE. IN FACT THERE WERE VERY FEW MACHINES FOR MENTAL PROCESSES, IS THAT RIGHT?
Ceruzzi:
In Babbage's day there were very few machines to do mental activities. Let me back up and maybe we can get a notion of this. A machine in the classical sense takes energy, which is generated by steam or electricity or whatever, and it channels it in a direction that you want it to go in. That requires some kind of mechanism that channels that energy, it requires a source of that energy, and it also requires a control. Now for most of recorded history, or the history of technology, that control was done by the human hand and the human eye, guiding the machine in its various ways. There was no need for anything more automatic than that, because the machines did not really do anything that sophisticated. Only when machines begin to be more sophisticated do people get a sense that hey, maybe we ought to figure out a way to have the machine do some of its own controlling. And in Babbage's day there really wasn't that kind of complexity of mechanism that required that. So people simply didn't think of machines as devices that could do the controlling aspect. They strictly thought of them as devices that channeled energy in some definite way.
Interviewer:
BUT THESE MACHINES IN GENERAL, THEY WORK WITH PHYSICAL MATERIALS. NOW THE IDEA THAT BABBAGE AND OTHER PEOPLE HAD WAS WHY NOT TRY TO MECHANIZE MENTAL PROCESSES. CAN YOU TALK A BIT ABOUT THAT?
Ceruzzi:
The difference engine, which was the first of Babbage's planned machines, and later on the analytical engine, both grew out of a perception that Babbage had of repetitive, tedious work that was being done by human beings. It was mental work, but it was at the same time very repetitive and very tedious. It made them very physically tired to do it, although, strictly speaking, it was not work in the mechanical sense of lifting objects or moving heavy masses around in one form or another. But Babbage felt that that distinction wasn't really important. What was important was that it was repetitive and tedious, and therefore it could lend itself to being mechanized.
Interviewer:
SO HE’S TRYING IN HIS DIFFERENCE ENGINE, WHICH IS A CLASSICAL ENGINE, TO WHAT? TO CAPTURE THE PROCESS OF ARITHMETIC?
Ceruzzi:
The difference engine was intended to print out mathematical tables, which required some calculation, and it was a very straightforward process once you understood what you had to do to make a certain table. And it was a very special purpose machine. That's all it was going to do. It would print out certain tables and that was it. You wouldn't be able to change it to do something else, even if you wanted to. But it would relieve human beings of a lot of the effort that they were putting out to calculate and copy down by hand these long tables of numbers.
Interviewer:
SO THE DIFFERENCE ENGINE IS A, IN SOME SENSE, A CLASSICAL MACHINE. WHAT ABOUT THE ANALYTICAL ENGINE? WHAT’S THE DIFFERENCE?
Ceruzzi:
Well, it would be hard for me to reconstruct what went through Babbage's mind, but I have a feeling that he must have perceived, in the course of making the difference engine, that you could separate out the mechanism of producing the numbers, of actually doing the calculations and perhaps doing the printing, from the notion of organizing the flow of work through that machine. That once you separated them out, you could treat the notion of controlling the machine separately and in an abstract way, and you could then use, and reuse, the mechanism for many, many different problems, and have what in principle is a general purpose machine that can do any number of processes: anything that you can work out in your own mind a way of solving, it will solve that problem. That was the notion of the analytical engine that he came to, although he never reduced it to practice. He never built a working version of the analytical engine. The idea of it is central to what a computer is today. That there is a separation of the actual mechanism that does the arithmetic, or does the processing of information, from the organization or the control, or the general logical, there's a separation of the...
Interviewer:
I WONDER IF YOU COULD SAY IT ANOTHER WAY. I WONDER IF IT’S, UM... HOW WAS THE ANALYTICAL ENGINE DIFFERENT FROM THE CLASSICAL MACHINES OF THE DAY? WAS IT IN SOME SENSE A MACHINE WHICH DIDN’T HAVE A PURPOSE SPECIFIED IN ADVANCE? WHY WAS THAT SO SIGNIFICANT?
Ceruzzi:
The analytical engine consisted of some, some general subsections that did arithmetic or did storage, or printed, or something like that, but the overall control came to it by way of perforations in cards, very similar to the Jacquard loom or punch cards that were used in the 1920s and 1930s. That control was a separate mechanism and it allowed you to think about what you wanted the machine to do separately. And that I think was a real profound difference in the way people thought of machines. Because for the first time in history you had someone designing a machine who didn't really think as he was designing it, all of the different ways that it could be used, what it was good for. It was good for anything and he would worry about that later on, but for the moment he would concentrate on getting the machine to work in a very general sort of way, and then having the specific uses attended to later. What we nowadays call software was really born in that moment when someone thought that you could separate out those two functions of a machine.
Interviewer:
IF YOU COULD SAY THAT LAST BIT AGAIN, BUT IF YOU COULD USE THE WORD PROGRAMMING AS WELL.
Ceruzzi:
I think it was a very profound moment in history when Babbage realized that you could separate out those functions of the machine performing a process, and, I'm sorry... When Babbage...
Interviewer:
SAY THAT WHOLE THING OVER AGAIN.
Ceruzzi:
Okay, I think it was a very profound moment in the history of technology when Babbage realized that you could design a machine in which you separated out the performing of arithmetic from the organization of the flow of work. What today we call programming, although he didn't call it that. But the notion that you could concentrate on one thing at a time: put all your energies into the hardware design to make it work as well as you can, as reliably as you can, and do all those things very well. And then later on, perhaps somebody else, not even yourself, someone who has a very specific problem to solve, would say, "I will write the program, I will develop the instructions that will be fed into this machine, that will then turn it into something that will do something useful for me."
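The separation described here, a fixed mechanism reused for many problems by feeding it different instruction lists, can be sketched in modern terms. This is purely an illustration, not a reconstruction of the analytical engine; the function name `mill` and the instruction format are invented for the example.

```python
# A minimal sketch of Babbage's separation: the "mill" (the fixed mechanism)
# is designed once; the "program" is just data fed to it later.

def mill(program, x):
    """A fixed, general mechanism: executes any list of (op, operand) pairs."""
    acc = x
    for op, operand in program:
        if op == "add":
            acc += operand
        elif op == "mul":
            acc *= operand
    return acc

# Two different "uses" of the same machine, chosen after the hardware exists:
fahrenheit = [("mul", 9 / 5), ("add", 32)]    # Celsius -> Fahrenheit
double_plus_one = [("mul", 2), ("add", 1)]

print(mill(fahrenheit, 100))      # 212.0
print(mill(double_plus_one, 20))  # 41
```

The point of the sketch is that `mill` knows nothing about temperature conversion; the specific use lives entirely in the instruction list supplied afterwards.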
Interviewer:
BABBAGE’S ANALYTICAL ENGINE WAS NEVER BUILT SO I WANT TO MOVE ON 100 YEARS TO THE 1930S. AND START OFF REALLY BY SAYING STILL IN THE MID-1930S A COMPUTER WAS A HUMAN BEING RATHER THAN A MACHINE, IF YOU COULD SAY THAT, AND ALSO TELL ME WHAT A COMPUTER DID.
Ceruzzi:
Well, if you look in a dictionary you'll see the word computer. Very commonly used in the early part of the 20th century, a computer was a person who carried out calculations, very often with the aid of a desk calculator: a mechanical calculator where you pressed buttons and pulled a lever, or, on some kinds of calculators, turned a crank. Made out of mechanical parts. That person was also supplied with a pencil and paper, upon which he or she, and very often they were women, would write down intermediate results. Other pieces of paper would have the sketch of the problem being solved. And they worked very often for scientists, especially astronomers, who would gather lots of data in astronomical observations and then hand this data over to a computer, again a person, along with some instructions as to what to do with that data. The person would then perform the calculations and come out with an answer, the answer being that data reduced in some sense, processed in the sense of taking an average, plotting it on a piece of paper to get some kind of sense of a trend, or something like that. The interesting thing to me is that the digital computer as we now know it was invented by people who had that image in their mind. I'm very much convinced of that. Many of them came from scientific backgrounds, especially physics and astronomy. And they already knew what a computer did, namely a person. And what they ended up doing mechanically was building a machine that replicated the entire room, including the pieces of paper, the pencil, the person, and the mechanical calculator. Babbage incidentally called his processing unit a mill. And you get the sense, because in many of those calculators you actually turned a crank, like a pepper mill, and you cranked out the numbers.
And I think today, when you hear people talk about number crunching, as if you're turning a pepper grinder, it goes back I'm sure to those days when people used these mechanical calculators. Well, to get back to the story, that is precisely what people were thinking of when they sketched out notions of automatic machinery that would replicate that entire room, and once machines were built like that, using various parts, either relays or vacuum tubes or whatever, they called them computers. It seemed very natural, because that was precisely what these machines were replacing. It had a very definite numerical connotation to it. It was not the kind of thing that was familiar to people who did what we call data processing. But that wasn't the background that these people came from. They came from a background where computers were people who were hired in very large numbers.
Interviewer:
AROUND THAT TIME THERE WAS AT LEAST ONE PERSON WHO THOUGHT ABOUT HUMAN COMPUTING, BUT ACTUALLY ABSTRACTED IT, THOUGHT ABOUT IT IN A MORE GENERAL WAY: ALAN TURING. NOW I WONDER IF YOU COULD JUST HAVE A STAB AT TRYING TO GET AT HIS INTEREST AND THE DIRECTION HE WAS COMING FROM.
Ceruzzi:
Alan Turing is a very difficult person to try to understand, and I think if you talk to mathematicians they often throw up their hands in despair, or else they say, if you're not involved in higher mathematics there's no point in even bothering with him. And to a certain extent some of that's true, but I always like to think of Turing as trying to tackle some very tough mathematical problems in sort of down to earth ways. And one of the analogies he used was of a person doing some work. Let's say a human computer working at a desk with pencil and paper and instructions, trying to solve some kind of problem. And traditionally we think of that kind of operation as having two components to it. One of them is the data: let's say the astronomical observations, or some kind of lists of numbers that are your raw material. The pepper in the mill, if you will. The other is the instructions, the equations: it says take the square root of this number, take the sine or the cosine of that number, add the two together, and so forth. We tend to think of them as being very distinct. Turing, however, did a very interesting thing. He broke that process down to a point where it was almost as if the distinction blurred. He said, "What if you were one of these people who had your commands or instructions on one sheet of paper and the data on another sheet of paper, and you start solving the problem and then you're interrupted, the phone rings or something like that. And you say, where was I? I've got to go pick up the phone, let me write down where I was in this stage of the calculation." So you write down on that piece of paper exactly where you were in the middle of this, then you come back, you start working on it again. And you get interrupted again, and so you find yourself with a piece of paper that has a tremendous mixture of what you're supposed to be doing and what you're supposed to do it to.
And the two get jumbled together, to the extreme point where you really can't tell the difference between the two. And Turing, being the theoretician that he was, carried that out to such an extreme that all you had were simple marks, a slash or a stroke, on a square of paper, and he didn't even bother with a rectangular sheet of paper. He said, "Let's take the paper and make it a big long strip. It simplifies things mathematically, even though no human being would ever work that way." And he said, "We don't need to have all these symbols, let's just have one symbol, a stroke." And when you get to that point, you find that in theory you have a very abstract notion of manipulating information. And that abstract notion, which Turing really was developing to solve a theorem of advanced mathematics, turned out to be very powerful, and one which enabled computers, and I'm convinced enables computers today, to have so much of the power they have. Because they have this power to treat both their programs and their data interchangeably, and mix things up, and generally process information. They can do so much more than a simple machine that just crunches numbers, if you will. A number crunching machine may be very good to a mathematician, but it's not going to have the impact on the rest of society that we are now seeing the computer have.
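The strip of paper, the single stroke symbol, and the table of what to do next can be sketched directly. This is an illustrative toy, not Turing's own notation; the rule format and the "append one stroke" machine are invented for the example.

```python
# A minimal sketch of Turing's abstraction: a long strip of squares, one
# symbol (a stroke), and a rule table. This machine scans right past a run
# of strokes and appends one more stroke -- unary "add one".
from collections import defaultdict

def run(rules, tape, state="start"):
    """rules: (state, symbol) -> (write, move, next_state); stops at 'halt'."""
    cells = defaultdict(lambda: " ", enumerate(tape))  # blank squares default to " "
    pos = 0
    while state != "halt":
        write, move, state = rules[(state, cells[pos])]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

# Scan right over the strokes; at the first blank square, write a stroke and halt.
rules = {
    ("start", "|"): ("|", "R", "start"),
    ("start", " "): ("|", "R", "halt"),
}

print(run(rules, "|||"))  # ||||
```

Note that `rules` is itself just data; a machine whose tape holds another machine's rule table is exactly the universal machine discussed below.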
Interviewer:
JUST ONE MORE STAB AT THE, AT THE TURING THING, IN TERMS OF WHY IT WAS SIGNIFICANT, IS HE GIVING A LOGICAL PROOF THAT A MACHINE LIKE THE ONE BABBAGE IMAGINED COULD BE BUILT AND IS HE GOING BEYOND THAT IN SOME WAY? WHAT’S HE ACTUALLY SAYING? WHY IS THIS IDEA OF A UNIVERSAL (?) MACHINE SIGNIFICANT IN THE CONTEXT? [TECHNICAL DISCUSSION]
[END OF TAPE F321]
Ceruzzi:
I think that in practical day to day terms you can get along very well without ever thinking about Alan Turing. However, at some point you really have to think about the nature of the computer as a universal machine, and the fact that it can do a lot more than just number crunching, or very restricted business data processing tasks, or things like that. And if you push it far enough, you eventually will come to Turing's discovery, or his mathematical proof, that you could in fact construct a machine and program it in such a way that it could imitate any other of a certain class of machines. And that doesn't mean every single machine. You can't really build a computer that will do everything, meaning everything. But you can certainly build a computer that will do many, many different things. And that was what Turing really demonstrated in a very rigorous way. And I think that there's a need for that. Computer science is based on some very fundamental principles, and to the extent that we've built this enormous edifice, it's because we do have at least some sense that there's a solid foundation to it. Even though most of the time, in our day to day lives, people who work with computers, who program them, who build them, never even bother themselves with thinking about Turing, he's there. He's there, deep inside every chip; his notion is at work.
Interviewer:
WHEN PEOPLE HEAR THE TERM UNIVERSAL MACHINE, THEY GET A BIT CONFUSED, NOW, WHAT DOES IT MEAN AND WHAT DOESN’T IT MEAN?
Ceruzzi:
Well, when I think of the term universal machine, I can't help thinking of these ads on late night TV for these kitchen appliances that somehow transform themselves into vegetable slicers, and then they do this, and then they do that, and then they'll also sew buttons on your shirt, something like that. And they're always so ludicrous, and I feel like, well, is that what a computer is? And I suppose in a sense, yes it is. Only it's deadly serious: a computer does have this kind of ability to transform itself into different things. If you tried to build a machine without the kind of logical structure that a computer has, you'd end up with these ludicrous things that people try to build. Another good example of this is that every so often someone tries to build an automobile that turns itself into an airplane. You see it all the time. The auto-flyer, or something like that; they've got various names for them. And they don't work. They can be made to work, sort of, but a good automobile is not a good airplane, and a good airplane is not a good automobile. But people keep trying that. Well, the reason they don't work is because they're not built out of very fundamental, basic, logical building blocks the way a computer is. With a computer you can make that work. A very good example is your standard personal computer, which you can use for engineering drawings, you can use for mathematics, for word processing, to sort names and addresses, all kinds of activities. It's the same computer. And the person who buys it decides, "Well, I want to use it for this, I want to use it for that." My daughter uses a computer for drawing; I use the same machine for writing books. Something like that. And it's fine, there's no problem. In fact it does a wonderful job at all of those things. Perhaps not as well as I'd like it to, but it does work.
It's not an awkward or inelegant design, and that to me is what's really unique about the computer. It sets it apart from a lot of the earlier attempts that people have made, and still make, to get machines to do more than one thing.
Interviewer:
JUST BEFORE WE GO ON… WE CAN NOW SEE ITS UNIVERSALITY, BECAUSE WE CAN CONNECT IT TO A TV SCREEN OR A MOUSE. IS IT A MUCH HARDER JOB IF YOU DON’T HAVE ANY PERIPHERALS? YOU KNOW, SEEING IN THE COMPUTERS OF THE 1930S THIS UNIVERSALITY, IS THAT A MUCH MORE DIFFICULT KIND OF QUEST?
Ceruzzi:
Well, the people who first built programmable automatic machinery, like the Mark I at Harvard, or the ENIAC to some extent, thought of them as being general purpose, but in their minds general purpose really meant solving a general class of arithmetic problems. Now to them that was a tremendous breakthrough, because it meant that it took the place of maybe 20 or 30 special purpose arithmetic machines. But it certainly didn't mean, in reality or in the minds of the people who built it, the kind of general purpose abilities that we now take for granted today. But that was the way people thought of it, and I guess it's really been a gradual, evolutionary process of people getting these machines in their hands and then thinking about them and playing with them and learning how they work, and then discovering ways to make them into machines that can do all kinds of different things. And that's been the whole history of the computer ever since the 1950s: people discovering how to make it useful in ways that had not been thought of before.
Interviewer:
SO WE’VE GOT THIS NOTION OF A UNIVERSAL MACHINE, BUT FOR THE PEOPLE WHO HAD TO BUILD IT, YOU KNOW, IN THE TEN YEARS REALLY BETWEEN TURING’S PAPER AND THE END OF THE WAR, THEY HAD A SPECIFIC MOTIVATION, GENERALLY HAVING TO DO WITH CALCULATIONS. I WONDER IF WE COULD TALK A BIT ABOUT THE PARTICULAR PROBLEMS OF ACTUALLY BUILDING SUCH A MACHINE. NOW, BABBAGE WOULD HAVE BUILT HIS MACHINE, IF HE BUILT IT, OUT OF COGS AND WHEELS. WHY WOULD SUCH A SCHEME NOT HAVE ATTRACTED, SAY, ZUSE? WHAT HAD CHANGED IN TERMS OF HARDWARE, AND WHAT DID THE HARDWARE OF THIS MACHINE HAVE TO DO?
Ceruzzi:
Well, to get back to what I was saying about the computer's universality: it comes from its intrinsic design, in that it is made out of essentially very simple basic pieces which are replicated by the thousands, and even the millions today. That's what makes it such a different kind of machine from all the others that we're comfortable with. And it also gives it its power, because it is in the structure that you really get the complexity, or the power, of the computer. But what that means is that you have to have some kind of basic element that is simple and rugged and that you can connect to other elements like it. Mechanical devices, such as the kind that Babbage was working with, can be made to do that, but it is not so easy. Zuse in fact built wonderful machines using mechanical parts, but I think he's the exception to the rule. In general, when you try to connect mechanical pieces to each other, you run out of dimensions in space. You've basically only got three. And if you try to connect eight mechanical pieces to each other, you've got far too many linkages and rods and levers; you just run out of places to put them. That led people very quickly to the notion of using electricity. The very first of these machines used electromagnetic relays, which were invented in the 19th century for the telegraph industry and were used extensively in the telegraph and telephone industries in the early part of the 20th century. They are fundamentally logical devices that do switching. People recognized that this relay could serve as your fundamental building block. All you have to do is get a lot of them together. The vacuum tube was a similar device. It can be used as a switch, although it was not invented for that purpose. It was invented for something else entirely, but it could be used as a switch. And if you got enough of them together, you could wire them, and of course the wires can run very flexibly from one part to another.
And you don't have this problem of how do you organize it in three dimensional space, you just basically put them all in a rack and then you worry about wiring them in various ways, and that's not an easy job, but you can do it.
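The "simple switching element, massively replicated" idea can be illustrated with a toy model: treat a relay as a boolean switch, compose switches into logic, and compose logic into arithmetic. This is a modern illustration, not how relay machines were actually wired; the function names are invented for the example.

```python
# Model a relay as a switch and build upward: switches -> gates -> an adder.

def relay_and(a, b): return a and b   # two switches in series
def relay_or(a, b):  return a or b    # two switches in parallel
def relay_not(a):    return not a     # a normally-closed contact

def half_adder(a, b):
    """One-bit binary addition built entirely from the switch primitives."""
    total = relay_and(relay_or(a, b), relay_not(relay_and(a, b)))  # XOR
    carry = relay_and(a, b)
    return total, carry

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

The same three primitives, replicated and wired differently, suffice for any logic or arithmetic circuit, which is the sense in which the basic element only needs to be simple, rugged, and connectable.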
Interviewer:
I THINK I’M GOING TO COME ON TO THE HARDWARE LATER. [INTERVIEWERS SWITCH] SO IT’S AFTER WORLD WAR II AND THE ENIAC HAS BEEN BUILT, AND YOU WOULD THINK TODAY THAT PEOPLE WOULD SAY, "OH GREAT, WE'VE GOT A COMPUTER." BUT IN FACT THERE WERE SKEPTICS ABOUT THIS MACHINE. NOW WE'RE TAKING THE COMPUTER INTO THE BUSINESS WORLD. CAN YOU TELL ME A LITTLE BIT ABOUT WHEN ECKERT AND MAUCHLY DECIDED TO GO INTO THE COMPUTER BUSINESS? HOW DID SOME PEOPLE FEEL ABOUT THAT AND WHY?
Ceruzzi:
[Technical Discussion] Well, Eckert and Mauchly of course were interested in commercializing their invention. And they met with quite a bit of resistance from a lot of their colleagues. And I think you have to understand who their colleagues were. By and large they were scientists or researchers in a university environment. And as such these people were used to building expensive equipment, doing research with that equipment and perhaps getting useful science or that sort of thing out of it. But they didn't really think of this in terms of a general market of the rest of the world out there. And Eckert and Mauchly were trying to bring it out of the laboratory environment and a lot of people felt that, "Well, there's not that many research environments like this one. Who's going to be interested in this machine? Who's going to buy it? How many physicists are there in the world? And even if you gave each one his own computer, you still wouldn't sell that many and they're very fragile." And all this sort of thing, so there was quite a bit of resistance.
Interviewer:
I WANT TO ASK YOU TO DO IT AGAIN, AND WONDER IF WE CAN GO THROUGH THE LIST OF THE UNRELIABILITY OF THE TUBES...WITH A LITTLE BIT LESS BACKGROUND. SO IT WAS THE TUBES AND THE PROGRAMMING AND THE IDEA THAT IT WAS A MATHEMATICAL MACHINE, SO JUST SORT OF A LIST OF IT. SO DID YOU KNOW, DID PEOPLE DOUBT ECKERT AND MAUCHLY'S IDEA OF GOING INTO BUSINESS?
Ceruzzi:
Well, when Eckert and Mauchly proposed to make a commercial venture out of building and selling electronic computers, a lot of people were very skeptical, and they raised a number of very interesting objections, which were all pretty much quite valid at the time. And it's interesting, because today we're so smug about the fact that they were wrong, but in fact if you take a look at what they were saying, it tells you quite a bit about the nature of the computer itself, as well as the kind of social conditions out of which it emerged. Let me just mention a couple of the objections that people had. The first one was that the machine was made out of large numbers of vacuum tubes, and these tubes were prone to burning out. In fact, if you could keep one of these machines like the ENIAC running for a few hours before a vacuum tube burned out, you considered that great. That was something that was just remarkable, because usually they would burn out every minute or so, and then you'd have to go poking around trying to find it and replace it, and that sort of thing. The vacuum tubes took up a lot of power, so you had an enormous electric bill. You had to have a room with tremendous ventilation and air conditioning to keep the whole thing cool. Very expensive to run these things. Let's consider another objection people had. If you had a machine like the ENIAC, let's say you had one and it was in working order, now you want to get some useful work out of it. Well, you've got to program the thing. And how do you program that? Well, you've got to be very well trained, not only in the problem you're trying to solve, but also in the intricacies of the way that machine works, how the machine is programmed. Basically, by the mid 1950s the computer design had stabilized to a point where programming consisted of feeding in instructions in numerical form in a fairly standardized way, but that didn't mean that it was easy to do.
You still had to keep track of all those little engineering idiosyncrasies of the circuits and the speeds and all the different devices and the memory capacity and all that sort of thing. Plus, while you were doing that, you had to worry about the problem, because that's what you were really going to the computer for in the first place. And a lot of people felt, well, how could we ever sell these machines if we aren't going to have people who have the mathematical training or the engineering training, or some unique combination of both, to actually do that kind of programming? It's just something that very few people in the world have a talent for, and when you run out of those people, you will not sell another machine, because it will just sit there waiting for those people to come around to work on it. Now let me talk about another objection that people had. The computers that emerged at the end of World War II were intended to work with vacuum tubes instead of electromagnetic relays. Now that resulted in anywhere from three hundred to a thousand times the processing speed that you could get from the mechanical calculators, the punch card devices, the relay machines, the sort of desk top calculators that people used. A thousand times faster. Now here was an environment of people who were very comfortable doing work with these punch card machines or mechanical calculators, and there was a balance that had been achieved over the years between the work that the people did and the work that the machines did. All of a sudden now you've got a machine that does its part a thousand times faster. What are the people going to do? They're going to sit around twiddling their thumbs, so the story went, waiting... well, that's not quite right. The other way around [LAUGHTER]. Here you have a machine now, all of a sudden, that's working a thousand times faster than what people had been accustomed to.
What is the machine going to do? It's going to spew out its answer and then just sit there and wait, and wait, and wait, a very expensive machine that you're paying enormous amounts of money for, until the people can come around and give it another problem to solve. And that seemed to be so completely unrealistic that it meant that people would not buy these machines, because it just was not economical. You wouldn't want to have that kind of speed unless you had some way of speeding up the entire process, which meant essentially a rethinking of the entire notion of how you solve the problem. And that wasn't something that people were willing to do in around 1950. It took a while for that to settle in. So that was an objection that people had. And the result was they felt that any attempt to commercialize computers was not going to succeed, because of these very fundamental objections.
Interviewer:
THE THIRD ONE OF THOSE WAS JUST WHAT THE MACHINE WAS GOING TO DO, THAT WAS THE ONLY ONE. IF YOU COULD TAKE ANOTHER STAB AT THAT: WHAT WAS THE MACHINE GOING TO DO? YOU GOT STUCK IN THE MIDDLE OF THAT. IT WAS GOING VERY WELL, THEN THERE WAS THE BIT IN THE MIDDLE WHERE YOU KIND OF HESITATED.
Ceruzzi:
Oh, okay. We're still rolling here? Okay, so you now had a situation where you had this exact same installation of people, problems, data, and machines, only now the equation has changed. The machine is now a thousand times faster. You give it a problem, it solves the problem now in a fraction of a second, in the blink of an eye, and what does it do? It just sits there. It's going to have to sit there for a long time waiting for the people to come around and give it some more information, and that seemed to be such an uneconomical thing to do. Remember that these machines were very expensive, and they were very often intended to be rented. You're paying rent on a machine that's sitting there 90% of its time waiting for a person to come in and give it some work to do. It just seemed so utterly impractical.
[END OF TAPE F322]
Interviewer:
OK WE'RE STARTING WITH THE SIZE OF ENIAC AND MOVING INTO THE THINGS. WHY WERE PEOPLE SKEPTICAL ABOUT COMMERCIAL COMPUTERS?
Ceruzzi:
Okay, there are a number of reasons why people were skeptical. One of them was the sheer size of these machines. Think about the ENIAC: it filled up a large room at the University of Pennsylvania. And if you were going to sell these things, you really had to think about how much room you had to set aside, plus the power consumption, the air conditioning requirements, all of those things that just seemed so impractical compared to the kinds of installations, the punch card equipment or calculating machines, that people were accustomed to. That was one reason. Let's consider another reason: the fact that the machines worked so fast, up to about a thousand times faster than human beings computing with mechanical calculators or with punch card equipment. What that meant was that, once the computer was set up and fed a problem, it would very quickly solve that problem and then sit around, still costing you money but with nothing to do, because it had to sit and wait for human beings to give it another problem to solve. In fact, if you did a basic comparison of processing speeds, you would find that a single computer like the ENIAC, if it could be made to run for a week, would solve all of the problems of all the human computing installations in the entire United States for about six months. So it just seemed like an absurd comparison, I guess you could say. A third reason was the fact that the computers were very difficult to program. You almost required an understanding of very advanced mathematics or logic even to get the computer to add two numbers together. So you had to have that kind of knowledge, plus you had to know the problem you wanted to solve, whatever it might have been. [INTERRUPTION]
Okay, another reason was that you have to consider the social environment in which the computer was built. They were built by scientists, researchers, physicists who were used to building research equipment that might have been very expensive but was also something that very few people had a need for. After all, how many large observatories are there in the world? How many atom smashers are there in the world? How many wind tunnels are there in the world? There are some, but there's not one on everybody's desktop, and there's certainly not one in every small corner of business and academic life. And yet this is precisely what Eckert and Mauchly were proposing to do with the computer. So the people around them just couldn't imagine any kind of climate in which there would be more of a market than that. And then another reason why there was some skepticism was the fact that, in order to program these computers, you really had to understand advanced mathematics. [INTERRUPTION]. So these researchers or scientists had a certain notion not only of who was an intended user or a typical user of this machine, namely very few people in very select, top-level research institutions or universities in the country, but they also had a notion that the machines were good for a very restricted class of mathematical problems and that they were not really suitable for the kind of day-to-day routine accounting work, let's say, that businesses needed. For that kind of work, these people felt that the existing equipment, punch card equipment, was fine. So there was skepticism for that reason too. And then there was an objection that people raised, that the programming of one of these digital computers was extremely difficult.
You had to have an understanding of fairly sophisticated mathematics or electrical engineering to understand how to get the codes into the machine that would then get that machine to do some kind of useful work. So, in other words, you had to wear two hats: you had to know the problem you were going to solve, let's say an inventory problem for a large business, you had to know all the details and the little minutiae of that problem, plus you had to know all about the intricacies of this thing called an electronic computer, what kind of memory it had, how fast a signal travels from here to there. You had to know all that and somehow combine that expert knowledge in some kind of creative way so that you could program the computer. If you couldn't do that, the computer would simply sit there, useless to you. And people felt that such a talent was so rare in the world that it didn't matter if you could mass-produce the computer, you would never find enough people who had that skill to make the machines useful.
Interviewer:
I THINK WE'VE GOT IT.
Ceruzzi:
Maybe I could just talk about… One of the legends that always comes up in discussions of the early days of computing is this statement, and no one is quite sure who made it, that four or five computers would satisfy the needs of the entire world. And we do find people saying something like that, and Eckert and Mauchly ran into this all of the time, because they were going around trying to sell this machine and people would come back and say, - but I heard it said that that's all you need, four or five, how could you possibly make a business out of this? Well, they believed that that was wrong, obviously. And they persevered against tremendous odds. They were not the only ones who saw the fallacy of that argument, but they certainly were at the forefront, pioneers in trying to show that it was wrong. [Inaudible] I don't want to get the anti-Eckert people mad at me. There were other people, like the LEO people in England. There were a lot of people who said that was wrong.
Interviewer:
YEAH WE’RE NOT TRYING TO SAY THEY WERE THE ONLY ONES. HOW WE TALKED ABOUT LAST NIGHT, THE IDEA OF A CONNECTION BETWEEN THESE REPETITIVE, TEDIOUS, NUMERICAL TASKS THAT ENIAC WOULD DO AND DATA PROCESSING, REPETITIVE AND TEDIOUS DATA PROCESSING TASKS. WE NEED TO GIVE OUR VIEWERS A CONNECTION BETWEEN THOSE TWO JOBS AND WHY, YOU KNOW, THIS MACHINE COULD BE… WAS SEEN AS BEING ABLE TO DO THOSE TASKS ALTHOUGH THEY’RE NOT ARITHMETICAL. SO THE QUESTION IS, THAT THE MACHINE HAD BEEN SEEN AS A SCIENTIFIC TOOL. WHAT DID ECKERT AND MAUCHLY SORT OF UNDERSTAND, THAT IT COULD DO, THAT WAS NOT SCIENTIFIC AND THEREFORE COULD BE A COMMERCIAL MACHINE?
Ceruzzi:
Those who tried to commercialize the computer were faced with a very difficult problem, and that's that, in the commercial world, information was processed in a very different way from the way it was done by scientists. Scientists organized their work along the lines of a problem. They would write out an equation, and they would have variables in the equation where you would plug in numbers and perform functions, and at the end you would get an answer. In a business environment, you didn't really do that. You had decks of punched cards where each card would represent a customer or a client or a piece of inventory or something like that, and you would run those decks of punched cards through specialized equipment, like a sorter, a tabulator, a collator or a printing device, in which you would perform one operation on the entire deck of cards. Then you would move the deck to another machine.
[INTERRUPTION]. [MISCELLANEOUS CONVERSATION] .
In a business environment, you did not do all of the calculations at once. You took a deck of punched cards, on which each card would represent the data involving a customer or a client or a piece of your inventory or a product that you manufacture, so the deck of cards represented the data that you wanted to work on. You would feed those decks of cards through specialized machines. Each machine would do one thing: a tabulator would sum up the columns of the various cards; a sorter would sort on certain columns into alphabetical or numerical order; you would have a device that would print out some of the data that was punched onto the cards. Later on, companies introduced multipliers that would multiply two numbers punched in different columns of a card. You would organize the work by physically moving decks of cards from one machine to another, each machine doing a very specific action. That is very different from the way a computer, as it was invented in the 1940s, operated. A computer does everything in a sequential fashion: it will do a multiplication, then an addition, then it will do the printing or something like that, and then it will go on to the next piece of data and do the same to that. And if you were someone who was trying to sell computers to a business environment, you had to convince people that this machine, which operated in a very different way, could in fact do all of the work that your existing punch card installation could do, and not only do it but do it better, do it more economically, do it as reliably, and all these other things. It was quite a tough hill to climb to convince people of that sort of thing.
Interviewer:
ALRIGHT, NOW LET ME ASK YOU TO DO IT A LITTLE BIT DIFFERENTLY BECAUSE WE NEED THE VIEWER TO UNDERSTAND THAT IT COULD DO THE SAME THING. THEN LATER ON YOU COULD SAY, OF COURSE IT DID IT DIFFERENTLY. BUT IT’S A LITTLE CONFUSING I THINK, THE WAY YOU PUT IT BECAUSE WE’RE TRYING TO SAY HEY IT REALLY CAN DO THE SAME THING AND YOU'RE SAYING RIGHT AWAY THAT IT DOES IT DIFFERENTLY. SO IF YOU COULD JUST SWITCH IT AROUND A LITTLE BIT AND SAY THE CONNECTION BETWEEN THE TWO AND THEN HOW IT DOES IT DIFFERENTLY.
Ceruzzi:
Ask me again a little bit.
Interviewer:
OK THE WAY I PHRASED IT, AND MAYBE YOU CAN PUT THIS BACK IN, IS SORT OF, WHAT DID ECKERT… THIS MACHINE WAS SEEN AS A SCIENTIFIC MACHINE, AND WHAT DID ECKERT AND MAUCHLY SEE THAT IT COULD DO?
Ceruzzi:
The computer as it was invented was something that did work for scientists, and Eckert and Mauchly had to convince business customers that this machine could also do the kind of work that they were accustomed to, what we now call data processing. Because Eckert and Mauchly realized, again, that it is a universal machine, and if you write the right kind of program for it, it will in a sense transform itself, this computer will transform itself into that room of punch card equipment that everyone was comfortable with. A computer transforms itself by virtue of the kinds of programs you write for it. And if it's a universal machine, as they insisted it was, and that's why they called it UNIVAC, then it could be made to do the same kind of work that people wanted done in the business environment. And that was the whole selling campaign that they took with them to the various customers around the country, hoping to sell them UNIVACs.
Interviewer:
THAT WAS PERFECT. NOW LET ME GET YOU TO DO ANOTHER PARAGRAPH ON THE TABULATING MACHINES IF YOU WOULD. JUST SORT OF A DESCRIPTIVE PARAGRAPH… TABULATING MACHINES. FIRST THEY WOULD PUNCH, THEN THEY WOULD SORT, COLLATE, TABULATE AND PRINT. JUST SORT OF GIVE A LITTLE…
Ceruzzi:
Okay, let's take a look at a typical punch card installation. It consisted of anywhere from three to about eight or nine specialized pieces of equipment. You would begin by entering data onto a single punch card with a key punching device. So that was probably the beginning of the whole process, punching data onto cards. You could then take the data, now punched into a deck of cards, and bring it to something called a tabulator, which would perform simple addition, summing up all the data punched in a certain column or set of columns in the cards and giving you a sum total. You could then take that deck of cards and put it into a sorting machine, where you would sort according to certain columns in alphabetical order or numerical order or combinations of the two. You could use another machine that would multiply the numbers punched into two columns, or perhaps take the number punched into one card and multiply it against a number punched in another card, and take the result and punch it into another column or another card. Another machine was called the verifier, which could compare the holes punched in one card against those punched in another and alert you to the fact that they differed. This seems like a very inelegant way of doing things, but it had a very important function in that it insured that the data you punched into cards always remained reliable, that you could trust it. And this is one of the key functions of all data processing today: data in a computer, if you design the system properly, is never lost. It can be replicated perfectly for eternity if you wanted to. You don't have to worry about the fact that physically the devices may get old, the paper might tear, the magnetic tape might get erased or something. If you design it properly, it's there forever and you can trust it.
And this is very important, obviously, if you're in a business environment. And finally, you could take the deck to a machine that would print out columns or results of things that were punched onto earlier cards. So you had an installation where you had a number of machines that did very special things, and in a typical installation you had people whose job it was to take decks of cards and bring them from one machine to the other according to whatever problem it was you were trying to solve. If you were trying to compute or print the payroll checks, for example, a very typical application, you would have a certain order that you would go through with decks of cards with the employees' names and their hourly wage rates and that sort of thing. And you would take them to the various machines. Essentially, you were programming that room, just as you would program a computer in the modern sense, the programming being the people carrying decks of cards from one machine to the other.
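[Editor's illustration: the "programming the room" idea Ceruzzi describes can be sketched as a toy program, with each punch card machine modeled as a function applied to a whole deck. The card fields and payroll figures here are hypothetical, not taken from any historical installation.]

```python
# Each card carries one record, like one punched card; each "machine"
# takes a deck and returns a new deck (or a total, for the tabulator).

def sort_deck(deck, column):
    """The sorting machine: order the whole deck by one column."""
    return sorted(deck, key=lambda card: card[column])

def multiply_into(deck, col_a, col_b, result_col):
    """A multiplier: punch the product of two columns into a new column."""
    for card in deck:
        card[result_col] = card[col_a] * card[col_b]
    return deck

def tabulate(deck, column):
    """The tabulator: sum one numeric column across the whole deck."""
    return sum(card[column] for card in deck)

# Hypothetical payroll deck: one card per employee.
deck = [
    {"name": "Baker", "hours": 40, "rate": 2},
    {"name": "Able",  "hours": 35, "rate": 3},
]

# "Programming the room": carry the deck from machine to machine in order.
deck = sort_deck(deck, "name")
deck = multiply_into(deck, "hours", "rate", "pay")
total_payroll = tabulate(deck, "pay")   # 35*3 + 40*2 = 185
```

The sequence of machine visits is the program; a stored-program computer simply folds that whole sequence into code, which is the connection Eckert and Mauchly were selling.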
Interviewer:
DO YOU WANNA DO A PARAGRAPH ON THE CONNECTION BETWEEN THE REPETITIVENESS TOO, WE REALLY HAVEN’T GOTTEN INTO THIS, OF THE TASKS. TELL US ABOUT THE REPETITIVENESS. CAN YOU COMPARE IT TO THE SCIENTIFIC AND THE DATA PROCESSING?
Ceruzzi:
In the scientific world you had a lot of repetitive calculations that you had to do on different pieces of data as you got them from an experiment, let's say, or an astronomical observation. In the business environment you also had repetition, say, in the very typical application of printing payroll checks, where you had to perform the exact same simple calculations on hundreds and hundreds of cards, each one representing a specific employee. So you had repetitiveness in both cases, although it was a little bit different. In the business environment, a few calculations on a lot of data; in the scientific environment, a lot of very involved calculations, sometimes involving trigonometry or taking integrals or something like that, on relatively small amounts of data.
[MISCELLANEOUS CONVERSATION] .
Okay, so although in some sense these were different kinds of activities, the one involving a lot of data with a few operations, the other very complex operations on smaller amounts of data, basically it was the same notion of repetitiveness. And if you could invent a machine that would take over some of that repetitiveness, it was a relatively trivial matter to rework that machine so that it could take over some of these other kinds of repetitiveness. And, in fact, that is one of the keys to any kind of understanding of computing: whenever you see tedium or repetitiveness of any kind, that's a candidate for computation.
Interviewer:
ONE MORE TIME.
Ceruzzi:
So, if you could figure out a way to get a computer to take over the repetitive arithmetic tasks that the scientific computing people were doing, you could also figure out a way to take over the repetitive tasks of the punch card installation, specifically the people carrying decks of cards around from one machine to another.
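[Editor's illustration: the two kinds of repetitiveness Ceruzzi contrasts both reduce to the same loop structure, which is why one machine could take over both. The formulas and figures below are made-up examples, not historical workloads.]

```python
import math

# Scientific computing: an involved calculation on a small amount of data.
observations = [0.1, 0.5, 1.2]                 # a few measured values
results = [math.sin(x) * math.exp(-x) for x in observations]

# Data processing: a trivial calculation on a large amount of data.
hours_worked = [40] * 500                      # one entry per employee
paychecks = [h * 2.5 for h in hours_worked]    # same simple step, 500 times
```

Either way the machine repeats one recipe over a stream of data; only the balance between the complexity of the recipe and the size of the stream differs.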
[END OF TAPE F323]
Interviewer:
SUDDENLY COMPUTERS GOT A LITTLE BIT POPULAR AND ALL OF A SUDDEN THE USERS ARE GETTING VERY DISILLUSIONED AND WHY? MAINLY THE REASON WE TALKED ABOUT LAST NIGHT WAS PROGRAMMING AND IT’S BASICALLY BECAUSE THEY NEEDED MATHEMATICIANS TO DO IT, AND UM, AND… [MISCELLANEOUS CONVERSATION].
Ceruzzi:
Well, you know, if you look at some of the reasons that people were skeptical, it's kind of interesting, because as it turned out, they were able to make vacuum tube machines pretty reliable, although it wasn't easy. And they were able to make them useful for non-scientific applications. But the one thing that kept sticking with people, and in fact is still with us today, is how to make the programming possible, to make it easy enough so that people can program computers without having to know much about how the computer is built or what its intricacies are. It's a problem that was really serious back then and really did, in fact, hold back a lot of the sales of the early UNIVACs, for example, and it is still with us today. I think you can almost document the entire development of computing as a commercial activity in terms of how to make it easy to use for people who don't otherwise want to know how a computer works.
Interviewer:
OK, WE’LL HAVE TO DO THAT AGAIN BECAUSE OF THE NOISE. . .
Ceruzzi:
Okay, now, - shall I start from the beginning with that? It's interesting when you look back at the reasons people were skeptical of selling more than, say, six or seven of these computers. It turned out that they were able to make vacuum tube machines reliable after all. It wasn't easy, but with careful engineering you could do it. It turned out that you could figure out ways to program computers to make them useful for non-scientific applications such as business, what we call data processing, or other activities. But the one thing that the skeptics were right about, and which did, in fact, inhibit the sales of the early UNIVACs and IBM 701s and machines like that, turns out to be something that inhibits the sales of computers today, and that's the programming. To the extent that you have to learn about the intricacies of the way a computer works before you can get that machine to do what you want it to do, it's going to be hard, and it's not going to be something people will enjoy doing or will have the training to do. And you could, - I don't, that's, - I lost it. [MISCELLANEOUS CONVERSATION]
The people who built and designed the very first computers didn't really think very much about the programming issue. They really couldn't, until they had a machine in front of them to program. Once they had a few in front of them, then the question came: well, how do we get people who know how to write the programs, who are not necessarily the same people who are building the thing? And there was a lot of confusion. People thought, - well, you had to be a mathematician. Other people thought, - well, chess players might make good computer programmers. Some people said, - well, a skill in languages, if you spoke a lot of foreign languages, might make you a good computer programmer. And I think generally people didn't really understand what it took to be a good computer programmer, and it was something that people struggled with throughout the 1950s, and I think they still do today. I don't know what else I can say about that. (Give me the question again. Give me some more of the question here. I forgot what the question was.)
Interviewer:
IT’S INTERESTING BECAUSE WHAT WE WERE SAYING EARLIER, BUT YOUR RIGHT… [CUT BACK IN] AND YOU REALLY HAD TO BE SORT OF A MATHEMATICIAN, OR THAT ILK, TO DO IT AND THAT CAUSED A GREAT SHORTAGE IN THE NUMBER OF PEOPLE.
Ceruzzi:
Let me start again. The people who built or invented the first computers didn't really think about programming as a problem. In fact they couldn't think about it at all, really, until they had some working computers in front of them. But as soon as they did, they faced the question of what kind of person has the ability to write programs for these machines. And there was a lot of debate, and there wasn't any clear consensus about it. Although, in general, people believed that you had to have a certain mathematical talent, that in order to write programs you certainly had to know something about binary arithmetic or the intricacies of number systems and the basic mechanisms of adding and subtracting and how that was done by a machine. So people knew that, but beyond that they really didn't understand what it took. They did know that it was not a talent that everybody had. And that was a real problem. There just weren't that many people around who had that talent, and there was no prospect of ever training more of those people. You did have people saying, - well, let's have a complete change in the curriculum of the schools, where we train people to do these things. But many people realized that that wasn't going to work either, that the talent for programming was almost like an art rather than a science, and it could not very easily be conveyed to large numbers of people. Therefore, if you were going to get these computers into large numbers of customers' hands, you had to figure out a way to get the computer itself to take over some of these tasks.
Interviewer:
GREAT, THAT WAS PERFECT. SO NOW WE WANNA KNOW THE BENEFITS OF HIGHER LEVEL LANGUAGES, AND THAT YOU DIDN’T HAVE TO BE A MATHEMATICIAN OR ANY KIND OF SPECIALIST. ALL YOU NEEDED TO DO WAS UNDERSTAND WHAT YOUR PROBLEM WAS; ONCE YOU HAD A HIGHER-LEVEL LANGUAGE IT COULD BE FRAMED IN THE TERMS THAT YOU UNDERSTOOD, THAT YOU WERE USED TO USING. SO YOU JUST HAD TO UNDERSTAND HOW TO WRITE YOUR PROBLEM AND YOU DIDN’T HAVE TO WORRY ABOUT THE MACHINE OR ITS PECULIARITIES OF LANGUAGE. SO CAN YOU TELL ME THE BENEFITS OF HIGHER LEVEL LANGUAGES?
Ceruzzi:
Well, with high-level languages such as Fortran or COBOL, you now had a situation where people could concentrate on the problem they were solving and not have to worry about what happened to that problem once it got inside the machine. One of the very big issues in getting a computer to solve a problem is allocating portions of the memory of the computer to the various parts of the data and the instructions. Memory is always constrained, you never have enough, and you always have to worry about whether you'll run out of memory, or you have to structure your problem in such a way that it's always going to fit and it's there when you need it. And the high-level languages took care of a lot of that housekeeping for you. That was a tremendous benefit, because otherwise, if you had to keep track of all that in your head, it was just extremely tedious and very hard to do. You had to worry about all kinds of little things about the various circuits in the computer, and you very quickly got lost in the tangle of it all. High-level languages took you away from all that. You just didn't have to worry about it at all. You simply said, - here is a variable, let's say it's the hourly wages of an employee, put that into the computer, and the computer will find room for it and will make sure that when you need that data, it's there. And that's a tremendously important thing.
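[Editor's illustration: the housekeeping Ceruzzi mentions can be sketched as a toy symbol table, the bookkeeping a compiler does so the programmer can just name a variable instead of choosing a storage address by hand. The class, names, and memory size are hypothetical, not any real compiler's design.]

```python
class SymbolTable:
    """Toy stand-in for a compiler's storage allocator."""

    def __init__(self, memory_size):
        self.memory_size = memory_size
        self.addresses = {}      # variable name -> assigned address
        self.next_free = 0

    def allocate(self, name):
        """Find room for a named variable; reuse it if already placed."""
        if name not in self.addresses:
            if self.next_free >= self.memory_size:
                raise MemoryError("out of storage")   # the constraint the
            self.addresses[name] = self.next_free     # programmer once
            self.next_free += 1                       # tracked by hand
        return self.addresses[name]

symbols = SymbolTable(memory_size=1000)

# The programmer just writes "WAGES" or "HOURS"; the table finds the room.
wages_addr = symbols.allocate("WAGES")
hours_addr = symbols.allocate("HOURS")
```

In machine-language days this mapping lived in the programmer's head; moving it into the language is exactly the relief described above.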
Interviewer:
AND WHAT DID THAT MEAN TO BUSINESS MEN? HOW DID THAT CHANGE WHAT THEY THOUGHT ABOUT COMPUTERS?
Ceruzzi:
For the business customer, this meant that they could trust the machine a lot more. One of the big changes between the punch card installation and the electronic computer as a replacement for it is that your data is no longer on punch cards that you can pick up and hold in your hand and visibly see the record of; you now have it as invisible spots on magnetic tape or a magnetic drum, and that caused people some anxiety. What if you lost the information? Now, there are all kinds of ways to insure that you don't lose the information, but if you have to worry about allocating storage and overflow and all these other memory problems, as you do when you program a computer in machine language, it makes people nervous. If you can develop a high-level language that takes care of that automatically, that lets the computer essentially take over the job of making sure it's not going to lose information, then you can breathe a lot easier.
Interviewer:
I WONDER IF WE COULD ALSO TALK ABOUT NOT JUST TECHNICALLY THAT THEY TRUSTED THE MACHINES MORE BUT DID THEY START TO SORT OF USE THEM MORE?
Ceruzzi:
It's the case with most inventions that they are perceived as a replacement for what came before. So we had the automobile, which was called the horseless carriage: you replaced the horse-drawn carriage. The radio was called the wireless. The computer was perceived at first as a replacement for the punch card installation. But once you've got the computer in that room, you begin to see all kinds of things that it can do that no punch card installation could do. And there's one thing in particular that a computer is very good at. It can make decisions. It can take data that has been analyzed or processed by some activity that just went on and then make a decision; the computer says to itself, - if this answer comes out this way, I'm going to do that. That is something that a punch card installation can't do, unless you tell the individuals in the room to look out for certain things. You could do that, but you can't do it too much, because they can't really be trusted with making very complex decisions that involve very complex manipulations of data. So the computer, once it gets its foot in the door, and I'm sort of treating it almost like a person here, which I shouldn't be doing, but once you get the computer in the room, people very quickly begin to see ways of using it as a kind of partner in making management decisions in a business, ways that really can extend the range of activities you can do. And it goes right into those realms of science fiction of the automated factory and the automated business where machines would make all the decisions. And if you look at the literature of the 1950s, you see much discussion in magazines like Fortune about how the factory of the future or the business of the future will be all computerized. You begin to see that.
Although doing it is not easy, you begin to see the potential of doing it.
Interviewer:
AND WHAT ABOUT THE HIGHER LEVEL LANGUAGES, DO THEY PLAY ANY PART IN THAT?
Ceruzzi:
The higher-level languages, especially COBOL, which was the most popular for business uses, had a very English-like interface. You simply wrote out sentences that looked like ordinary English, and the computer could understand them and execute them and solve your problem. And when people started using COBOL they began to think, - well, if this machine can understand it when I say, - calculate the payroll, or something like that, maybe it will also understand something about making a management decision: should I invest in this particular venture or not? Now, of course, they were fooled. The computer didn't have that ability, at least at first, but they started thinking that maybe it could, and they began pushing for that kind of activity. And, in fact, that's what's really been happening ever since. Although, you have to understand that there was a period of time, and it's still true to this day, when people had tremendous expectations of these machines being a lot smarter than they are. And time and time again, they become frustrated when the computer reminds them of the fact that it cannot do something unless you have written out a very detailed program to tell it how.
Interviewer:
I LOVE THAT ANSWER. CAN WE DO IT A LITTLE BIT DIFFERENTLY? CAN YOU SAY, HIGHER-LEVEL ENGINES LIKE COBOL BEGAN TO ALLOW BUSINESS PEOPLE TO SEE, YOU KNOW REALLY FOR THE FIRST TIME A LOT OF THEM TO SEE, THAT IF IT DID THIS IT COULD SORT OF DO THIS. THEN COME IN WITH THE, OF COURSE SOMETIMES THEY WERE WRONG. BUT I’D LIKE TO GET THE POSITIVE SIDE SO THE VIEWERS CAN UNDERSTAND AND THEN YOU CAN SAY THE QUALIFIERS.
Ceruzzi:
When people began programming the computers using higher-level languages, especially a language like COBOL, which uses English words for many of the operations, in fact a COBOL program can almost be read as a document written in English, with words like sort or process or print, they began to become more comfortable with the notion of asking, or programming, the computer to do more and more ambitious things. And they felt, well, here is a tool that really has the potential to make very fundamental management decisions involved with the running of this business, which was something you really couldn't do with the punch card installation. And people had very high expectations that the computer would become almost like a partner with the management team, one that would make very fundamental decisions affecting the future of that business, what ventures they would go into, what directions they would proceed in. And they began to understand a lot more of the vision that in the 1940s only a few people had. (I don't know if I can go on much more.)
[MISCELLANEOUS CONVERSATION].
People tend to talk about the history of computing in terms of the generations of devices that were used to build the machines: vacuum tubes, then discrete transistors, then finally integrated circuits. And one of the big changes came around the end of the 1950s. Around 1959, for example, you see the first large-scale computers being built out of transistors instead of vacuum tubes and being marketed to commercial customers. That's also the same year that the integrated circuit was invented, and it too would come into commercial use soon thereafter. People often think about this transition in terms of the fact that the solid state devices, the transistor and the integrated circuit, gave off far less heat, and therefore they were more reliable. Vacuum tubes tended to burn out, and it was very hard to get an arrangement of several thousand of them all working at once. Transistors and integrated circuits were smaller, and that too was important. But what I see is something a little bit more than that, and that's that with the transistor, because they were smaller, it became possible to think about building computers that had many, many times more components than anyone ever thought of in the vacuum tube era. But as soon as they did that, you see, they ran into this problem: how do you wire them all together? You had this incredible tangle of wiring. And that was the real problem. And again, it is one that is still with us today. You have to have a very complex logical structure to get an arrangement of very simple devices to do anything useful. And that's what a computer is, an arrangement of very simple devices. But this logical structure means that you run out of places to put all the wire.
The integrated circuit was invented precisely to handle that problem of getting rid of that tangle of wire, putting the whole thing down on a single, ah, chip of silicon, so that the wires are actually not wires any more but just tiny little strips that are etched photographically. And that broke through the real barrier, which is this barrier of complexity. Some people call it the tyranny of numbers that is fundamental to the computer. It was there right in the days of the ENIAC and it's there today. It's kind of interesting, because we think of this breakthrough, and it was a breakthrough, of the integrated circuit as solving that problem, but it only temporarily solved it, because we're facing it today with the design of these new microprocessors. They're so complex that it takes too long to design them. Ah, the new generation chips that go into personal computers are taking years to design, and that's just getting out of control. Once it takes too long to design, everything gets upset and you can't really have a practical device. So we're again running into the exact same problem that we ran into in the 1950s with these transistors that you had to wire together, or in the 1940s when you had relays or vacuum tube circuits that you had to somehow get in a single room and somehow make work together.
[END OF TAPE F324]
Interviewer:
TELL ME ABOUT THE DRAWBACKS OF THE VACUUM TUBE.
Ceruzzi:
Well, I'll start with the advantages, ah. Vacuum tubes were wonderful devices because they could switch up to a thousand times faster than relays. But they had their disadvantages, the chief one being that they gave off a lot of heat, and that meant that eventually they would burn out. Incidentally, it's one of the reasons why vacuum tubes are plugged into sockets whereas every other part of a computer or electronic circuit is wired in or soldered in or something like that. You had to be able to remove a tube because you knew it was going to burn out eventually. They also tended to take up quite a bit of room. But, ah, the chief problem I think was the fact that they did burn out, which was a consequence of the heat. You had to build some kind of system to cool the machine, to bring the hot air away, to have some kind of air conditioning. You also had to have a way of getting at the vacuum tubes to replace them when they burned out. You also had to have a way of knowing which tube burned out. Imagine a whole rack full of vacuum tubes, hundreds and hundreds and hundreds of them, one of them burns out, how do you know which one it is? You can try testing each one individually, but that could take you all day. So you had to design some kind of circuit into the computer that would somehow alert you to the fact that one of the tubes is burned out and where it is. So it's quite a bit of hassle you have to put up with to take advantage of the high speeds of the tubes. The transistor was very consciously invented at Bell Laboratories by people who wanted to have all the advantages of the high switching speeds but without the heat that is generated by tubes, and that is indeed what a transistor does.
So you could, as happened in the mid and late 1950s, design computers that had the same basic electronic design, the same logical design as the vacuum tube machine, only every place you had a vacuum tube you had a transistor or its equivalent.
Interviewer:
WHAT WERE THE DISADVANTAGES OF THE TRANSISTOR?
Ceruzzi:
The transistor had a very difficult time getting into mass produced computers that you could sell, even though it was invented in the, ah, late 1940s, and when it was invented people realized very quickly that this was a revolutionary invention that was going to really transform electronics. It was very difficult to mass produce transistors that had consistent properties. The vacuum tube industry, by contrast, was an established, conservative industry that knew how to mass produce tubes at a very low cost, around a dollar apiece or maybe even less, that you knew would work pretty well. Now they did have the unreliability problem, but that was something you could work around. Transistors, on the other hand, for a long time, you would get a batch of transistors and they would vary in their characteristics by as much as ten to one, just randomly selecting some out. Some wouldn't work at all. And the worst part about that was that nobody knew why. When they were making them they just didn't know what to do to make these transistors more consistent and reliable, and it took about ten years of very hard work and a lot of very fundamental research into solid state physics before people could figure out how to make batches of transistors that had fairly consistent properties. So that was their primary disadvantage. Another thing that happened with transistors, it's kind of like a saying that people have, and I don't know if I can get it right, but: when you lower the level of water in a river, it exposes all of the rocks. And in a sense the transistor's introduction into computing took away this problem of vacuum tube unreliability. But when it took that away, it revealed a new problem that was even more serious, and that's the problem of the complexity of having to wire all these transistorized circuits to one another.
It was there in the vacuum tube era, but people didn't worry about it so much because they were so worried about the tubes. They were so worried about getting the tubes to work that they didn't worry so much about the complexity of the wiring. But as soon as you had transistors that were very small and didn't use much power, you didn't have to worry about all these other problems of getting the power in and getting the heat taken away and all that. Suddenly you were confronted, face to face, with this problem of just this enormous tangle of wires that you somehow had to get organized and logical. And that just strained people's abilities, not only from the design standpoint but also the manufacturing standpoint: how could you build a computer with all those wires in it?
Interviewer:
WHAT WAS THE SOLUTION TO THE TANGLE OF WIRES PROBLEM?
Ceruzzi:
So when transistorized computers began appearing in the marketplace and people saw what you could do with transistors, they also saw that further progress had to depend on somehow cutting through this problem of the tangle of wires. There's a phrase that people used, the tyranny of numbers: just the sheer combinatorial complexity of having thousands of devices, each of which has to somehow communicate with the others. It just, ah, very quickly grows into a number of connections that exceeds all the known connections in the universe, or something like that. And no one could figure out how we were going to get through this problem. Out of that came a number of research projects aimed at attacking that tyranny of numbers, and from that came essentially two simultaneous inventions of what we now call the integrated circuit. It was invented twice, really: once in California by people at Fairchild Semiconductor, and once in Texas by people at Texas Instruments. In both cases they said: let's get rid of the wires, put the components on a single slab of material and etch the connections in photographically or by some other method so that you don't have those connections. You just cut down the number of connections drastically, and you can now design computers that have a much greater complexity to them, ah, without the increase in connecting wires.
Interviewer:
WHAT WAS THE DIFFERENCE BETWEEN WHAT WAS DONE IN TEXAS AND WHAT WAS DONE IN CALIFORNIA?
Ceruzzi:
The people in Texas, and this was really headed by a man named Jack Kilby, ah, are regarded as the first to conceive of the notion of what we now call the integrated circuit. They had, incidentally, a different name for it. I think it was called solid logic technology, or something like that. They, ah, deposited the individual circuits on a piece of material, and I believe it was germanium, although later on it was silicon, and they had very fine wires attached on the slab from one piece to another. At Fairchild Semiconductor in Silicon Valley, Robert Noyce conceived of the idea of photographically etching the connections into the material rather than having physical wires jumping from one piece to another. So that, I think, is the fundamental difference. Kilby's work was a little bit earlier than Robert Noyce's work. But as far as assigning priority goes, I wouldn't do that. I would say that they were both very much instrumental in bringing this into being. I'm not sure whether that answers the question or not.
Interviewer:
CAN YOU TALK ABOUT MACHINES AGAIN? WHAT IS A CLASSICAL DEFINITION OF A MACHINE? HOW A COMPUTER IS DIFFERENT.
Ceruzzi:
Okay, there is a classical definition of what a machine is that I like to use, and I think of it as a device that takes the chaotic forces of nature and channels them into a particular direction, making it do something that you want it to do. And you think about an automobile or an airplane or a washing machine, or any of the machines that we're familiar with in our daily lives that, ah, perform these functions. When you get to a computer you find that that definition breaks down. It no longer applies, because a computer, yes, indeed, does channel the forces of nature, using electricity, into certain directions according to the design of its builder, but it doesn't have a very specific function. In fact it has no specific function at all prior to being placed in the hands of a user, who then decides what to do with it, and that decision is based on what kinds of programs or software are available. It's only when you add the software to the machine, to the computer, that it becomes one of these machines in the classical sense that does a specific thing, like a word processing program. Ah, another one might do architectural drawing. Another one might do your banking for you, or something like that. It's all the same computer, very often coming right off the assembly line one after another, or it may be the same one at your desk and you just put in a different disc each time. But once you do that, the machine changes itself. It transforms itself into something very specific that does a very specific job for you. And then it still retains that capability of doing something else, anything else, if someone can write the program for it.
Interviewer:
IN THE HISTORY OF CIVILIZATION THE INVENTION OF SUCH A MACHINE IS QUITE AN IMPORTANT PHENOMENON…
Ceruzzi:
It certainly is. Compared to classical machines, it would be very hard to find anything like it. It almost harkens back to the notion of primitive tools, like a knife or a hammer, where you don't really know what you're going to do with a pocket-knife until you've got one and you start finding all kinds of things that you find it useful for. But, ah, a computer is not a simple tool like a knife. It's a very complex machine that has lots of interconnected parts, each of which has to function very precisely and correctly. So it shares the attributes of very simple tools with the most complex of machines. And I think this is one of the reasons why, when you talk to various people, including experts in the field as well as, as sort of the man on the street, you get such different reactions to: what is a computer? Ah, a lot of times people say the computer is nothing but a tool, it's what you do with it that's important, and there's a lot of truth to that. And yet that kind of definition misses the inherent complexity of a computer. Other people will say the computer is this marvelously complex device that does wonderful things, and they're missing the sort of general purpose nature of it. So the truth lies somewhere in between, or it's probably a conglomeration of all of those impressions.
Interviewer:
NOW IN THE SHORT HISTORY OF THE COMPUTER PEOPLE HAVE BEEN VERY CONFUSED ABOUT WHAT IT IS. HOW WOULD YOU CHARACTERIZE WHAT THEY THOUGHT IT WAS? IS IT A VERSION OF THE PROBLEMS WE HAVE UNDERSTANDING ANY NEW TECHNOLOGY JUST AMPLIFIED BECAUSE IT KEEPS TRANSFORMING ITSELF? HOW WOULD YOU DESCRIBE IT? HOW WOULD IT CHANGE FOR INSTANCE?
Ceruzzi:
The computer was invented in the 1940s. Well, the modern, electronic computer was invented in the 1940s by people who essentially had mathematical problems to solve. They built machines that had this general capability and they programmed them to solve their mathematical problems. Now once these machines got out into the world and other people began to become aware of them, including other mathematicians or other scientists, but also people in other areas of life, they found that you could do other things with them. The word itself, computer, has this connotation of mathematics and arithmetic that will always be carried with it, and consequently people always feel that a computer is some kind of mathematical device, even though today you look at the way most people use computers, and they use them in ways that are probably not very mathematical at all. Let's say word processing, or sorting data, or something like that. Although I think it's important to realize that in a sense the word is not a bad word, because deep down inside the computer all of the things that you do when you sit in front of one get translated into mathematical symbols. So somewhere inside that box is mathematics being carried out. You just don't necessarily know that, but it's in there just the same.
Interviewer:
THERE ARE SYMBOLS IN THERE, NOT NECESSARILY MATHEMATICAL SYMBOLS.
Ceruzzi:
They're symbols that are being manipulated according to the rules of symbolic logic, which I would consider a branch of mathematics.
Interviewer:
BEFORE THE WAR, AS WE’VE SAID, THE TERM COMPUTER DESCRIBED A HUMAN BEING AND AFTER THE WAR, A MACHINE. I WONDER IF YOU COULD TALK ABOUT THE PROBLEMS FACING THE ENGINEERS WHO SET OUT TO BUILD IT. WHAT WERE THE ELEMENTS THEY WOULD BUILD IT WITH AND WHY THEY MADE SOME OF THE CHOICES THEY DID.
Ceruzzi:
Well there was, ah, in the 1940s there was considerable debate over the proper element to build a computer out of. And the chief, ah, contenders were electromagnetic relays and vacuum tubes. Both are fairly simple devices, operate on electricity or with electricity, can be wired together in very flexible ways.
Interviewer:
WHY WERE THEY LED TO MAKE THIS THING OUT OF SWITCHES? WHY NOT COG WHEELS LIKE BABBAGE?
Ceruzzi:
Okay, I'm trying to think of what I was saying, ah, earlier. I guess, let me see if I can get this right. Now, we all know that Babbage tried to build a computer out of mechanical parts using the decimal number system, and it seems natural that that's what you would do. And, in fact, people in the 1930s and '40s were also working in that direction as well. But as they got closer to the engineering difficulties of building such a device, they realized that the best way to build this machine was essentially to forget all the ways that you built classical machines, which is to say, very complex specialized sub-systems that you put together. Take an airplane: you've got the wing assembly, you've got the tail assembly, you've got the engine, a few sub-systems that go together. Computer engineers realized at some point that a better way is to decide on a very simple basic element that does only something very simple, and then replicate that element by the thousands, and have the power of the machine come from the structure of the way you arrange those many replicated simple elements. Now that has to do with some very basic notions going back to the way you carry out arithmetic and how you manipulate symbols, which they realized at some point. The question then became really twofold: what's a good choice of the basic element, and then second, how do you structure the organization of those elements? Those were the two questions people were facing through the 1930s into the '40s. I'd say by 1955 or so it was settled.
Interviewer:
ONE MORE GO
Ceruzzi:
So they hit upon the notion of building a computer not as you would build a classical machine, but rather building it out of very simple, basic elements which you replicate by the thousands. Now, what's the reason for doing that? Well, I think it has something to do with the nature of approaching and solving problems mathematically, and remember, they were coming from a mathematical standpoint. Mathematics is a very rich body of knowledge that is built of very, very small basic units: the ten digits, a couple of signs, a couple of symbols, and a few rules. And out of those rules (in fact, if you really wanted to press it, you could get down to just about two or three rules) you can build up this rich body of mathematics, each little piece being built up on the one below it. And people felt very good about that. They felt this is a good way to tackle the problem of building a machine to handle these kinds of mental processes. So they very quickly settled on the notion of the simplest kind of computing element you could get, which is a switch that's either on or off. And that's as simple as you can get. And of course that leads you very quickly to the binary number system. Now, binary numbers are the kinds of things that ordinary people have a lot of trouble with, but remember, these are mathematicians, and I don't think they had that much trouble with a binary number system; they're used to all kinds of symbols.
Interviewer:
NOW YOU SAID BINARY- MOST PEOPLE THINK THERE’S ONLY ONE WAY TO COUNT.
[END OF TAPE F325]
Ceruzzi:
Well, we're very comfortable with decimal numbers because we have ten fingers. And, ah, it seems to be a very useful number base. But some mathematicians, I think it's better to say, don't have any problem with calculating using all kinds of different symbols; basically, a lot of mathematics consists of manipulating symbols in a very abstract sense without any concern over whether they stand for anything or not. Now, that's not to say that those mathematicians are primarily the ones who invented the computer, but still there is that notion that symbols don't necessarily have to be that close to the real world to have some utility, once you devise ways and laws of manipulating them and combining them and recombining them. So binary numbers really came about from an engineering standpoint, because to an engineer nothing is simpler than a switch that has basically two positions: on and off. Compare that to the decimal calculators and the machines that Babbage tried to build, which had ten positions, and you come away with an instant regard for the elegance and the simplicity of what later became called the binary system. But to an engineer it was just simple: let's build the simplest element we can. What is that? It's a switch that's either on or off. And from that came this notion that it's much easier to build a computer out of the simple elements and let the logical design, or the overall architecture of the machine, take care of the translation from that primitive symbol system to one that human beings are more comfortable with, which includes not only the decimal numbers but also the letters of the alphabet and all kinds of things like that.
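The two-position switch Ceruzzi describes is exactly a binary digit. As a modern illustration (not anything from the interview, and the function names are invented for the sketch), here is how a row of on/off switches encodes an ordinary decimal number in Python:

```python
# Each switch is either off (0) or on (1); a row of switches encodes a number.

def to_switches(n, width=8):
    """Return the on/off positions (most significant first) for the number n."""
    return [(n >> i) & 1 for i in reversed(range(width))]

def from_switches(bits):
    """Read a row of switch positions back as an ordinary number."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

print(to_switches(13))              # [0, 0, 0, 0, 1, 1, 0, 1]
print(from_switches([1, 0, 1, 0]))  # 10
```

The point of the sketch is the one Ceruzzi makes: the individual element could not be simpler, and all the richness comes from how many of them you line up and how you interpret the arrangement.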
Interviewer:
OKAY IN BABBAGE’S MACHINE HE HAD NUMBERS AS POSITIONS ON THE WHEELS BUT THE SHAFTS AND WHATEVER CONDUCTED THE OPERATIONS OF ARITHMETIC, HOW IN A DEVICE MADE OUT OF SOMETHING WHICH WAS JUST SWITCHES COULD YOU ACTUALLY EMBODY THE PROCESSES, THE MENTAL PROCESSES, OF ARITHMETIC ITSELF?
Ceruzzi:
Around the turn of the century a number of people, and I could mention several names, Alfred North Whitehead, Bertrand Russell, David Hilbert, mathematicians and other academic thinkers, found that there is a correspondence between the rules of arithmetic and what they call logic, which is simply logical reasoning, the kind of reasoning that probably originated in the world of law, probably in Ancient Greece or Rome: the notion that if something happened, then something else has to happen, that kind of thing. They realized that there is a correspondence, and furthermore, if you break this down to its simplest elements, you can construct machinery that will embody these basic notions of the relationships of symbols to one another in some kind of logical sense. There are mechanical devices that perform logic that many of us are comfortable with already. I'm thinking, for example, of the mechanical devices inside an elevator that prevent you from opening the elevator door until you're actually stopped at a floor. Nowadays that's all done with a little computer chip, but for many years elevators had very primitive mechanical bars that swung down when the door closed and prevented you from doing any kind of dangerous activity. We see this, for example, in railroad switching yards, where you throw one switch to keep a train from hitting another one, or something like that. So it is possible to do that kind of thing mechanically. The problem with all of those other things is that they were all sort of ad hoc solutions to an immediate problem. If you could abstract what they did, though, you would come up with a fairly simple collection of maybe four or five kinds of circuits, where you would wire a few of these binary switches together, and out of that you would get everything that you need to build a computer with, at least in principle.
Now, the engineering problems are very tough but in principle, you've got everything you need right there.
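The "four or five kinds of circuits" Ceruzzi mentions correspond to what we now call logic gates. As a minimal sketch, assuming the usual AND/OR/XOR repertoire rather than any particular historical circuit, here is how two of those gates combine into a circuit that adds single binary digits:

```python
# A handful of basic circuits: each takes switch states (0 or 1), yields one.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two one-bit numbers: XOR gives the sum digit, AND gives the carry."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Chain two half adders so a carry from the previous column is handled."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11
```

Chaining one full adder per column, carry to carry, gives an adder for numbers of any width, which is exactly the "everything you need, at least in principle" being claimed above.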
Interviewer:
IN PRINCIPLE WOULD IT BE TRUE TO SAY THAT YOU COULD DO EVERYTHING, BUT ONE OF THE THINGS YOU’D REALIZE IMMEDIATELY IS IT MIGHT TAKE A NUMBER OF STEPS FOR IT?
Ceruzzi:
There is a very obvious trade-off, though, when you go to these very simple basic elements like binary switches, and that's that even the most rudimentary arithmetic operation that we're all comfortable with, like adding a few numbers together, takes many, many more basic steps. That implies that you would require an enormous amount of time if you wanted to carry out any practical calculation. Now the question of time was a serious one, and I think it led people rather quickly to electronic circuits, especially the vacuum tube circuits, as the technology of choice for computers when people first seriously began to build them.
Interviewer:
JUST GIVE ME SOME EXAMPLE, JUST SORT OF A BALLPARK FOR THE NUMBER OF STEPS FOR A SIMPLE MULTIPLICATION OF TWO NUMBERS. HOW MANY GATES, THAT SORT OF THING…
Ceruzzi:
I would say if you wanted to multiply, ah, two five-digit numbers together, you would need anywhere from 300 to 1,000 simple steps, and maybe 100 to 200 binary gates, or probably even more than that, just to do a simple multiplication. And that's not even taking account of the sign or the decimal point or any of those things. Ah, when you use a pocket calculator, you notice that it suppresses the leading zeros and puts the commas in the right places and the decimal point in the right place. That takes even more. So it's quite a bit.
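Those ballpark step counts can be made a little more concrete with a sketch of binary shift-and-add multiplication, instrumented to count primitive steps. What counts as a "step" here is an assumption of the sketch; each addition below would itself decompose into one full-adder operation per bit, which is how the gate-level totals climb into the hundreds, as the answer above suggests:

```python
def multiply(a, b, bits=17):
    """Multiply by repeated shift-and-add, counting the primitive steps.
    17 bits is enough for five-digit decimal numbers (99999 < 2**17)."""
    product, steps = 0, 0
    for i in range(bits):
        steps += 1                 # examine one bit of the multiplier
        if (b >> i) & 1:
            product += a << i      # one shifted addition
            steps += 1
    return product, steps

p, steps = multiply(90909, 54321)
print(p == 90909 * 54321, steps)   # True 24
```

Multiplying by 17-bit addends means each of those additions is itself about 17 full-adder operations, so the gate-operation count for one multiplication lands in the range of several hundred.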
Interviewer:
NOW WE’RE GOING TO START TALKING ABOUT HARDWARE. NOW AMAZINGLY, DESPITE WHAT YOU’VE BEEN TELLING US ZUSE SET OUT TO BUILD THE FIRST ONES WITH MECHANICAL DEVICES, RIGHT?
Ceruzzi:
Konrad Zuse is an interesting case because he lived in Germany and was essentially out of the mainstream of computing activity, which we tend to associate with the United States and Great Britain. But he arrived at almost exactly the same conclusions as everyone else, at least as far as binary arithmetic goes. He realized, as a mechanical engineer, and that was his training, that mechanical switches could be made much more reliable if they had only two possible positions. He had not heard of Babbage until he went to apply for a patent on his invention, and the patent examiner said: Babbage has already thought of this before you. But of course, by that time Zuse was ahead of Babbage, because Zuse was using binary elements. They were mechanical, and they suffered from this problem of being a little bit slow. And this was a serious problem. They also suffered from the problem that in mechanical devices you are constrained by the three spatial dimensions of length, width, and depth, and everything you build has to somehow fit in those dimensions. Now, with electronic devices you've got the same three dimensions, but you have a lot more flexibility as far as sending wires from one part of the machine to another. You can send a wire anywhere you want and not really have to worry so much over whether you have any kind of close coupling. Zuse had a tough time figuring out how to build a mechanical memory device, but he did, and he succeeded quite well. In fact, he built a machine that had a mechanical memory but a relay processor. Ah, and it was in use right into the 1950s, so it was a successful approach to the problem.
Interviewer:
ZUSE HAD ANOTHER ALTERNATIVE THAT YOU JUST ALLUDED TO, IF YOU COULD TELL US ABOUT THAT. POINT TO IT WHEN YOU DO IT. START AGAIN.
Ceruzzi:
An electromagnetic relay is just a very simple switch.
Interviewer:
[TECHNICAL DISCUSSION]
Ceruzzi:
An electromagnetic relay is really nothing more than a simple switch which switches an electric current on or off. The switch, in turn is controlled by another current. [MISCELLANEOUS CONVERSATION] .
An electromagnetic relay is really nothing more than a simple switch which switches a current on and off. The switch, in turn, is controlled by another electric current, so that the current that is switched here can control another switch, which can control another switch, and so on. In fact, you could have multiple wires coming out of this, ah, relay, which can control a whole bank of other relays. You can also have the relay, in a sense, coupled to itself, so that when it switches, it latches on to a current which keeps it in position. That means you've got a memory now. You can store information. You only store one bit of information in each relay, and that's not very much, but here's the nice part about it: you can mass produce these things. You can make them quite small. This one is a bit of a large one, but you can make them very small. And you can make lots of wires connecting one to another, so it really isn't that hard to get enough relays together, maybe a few thousand, and pretty soon you've got something that can do some pretty good calculating.
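The relay "latching on to a current which keeps it in a position" is one bit of memory. As a minimal simulation, assuming the classic cross-coupled arrangement of two NOR-style switches rather than any specific relay wiring, the feedback loop can be sketched like this:

```python
def nor(a, b):
    """A single switching element: output is on only when both inputs are off."""
    return 1 - (a | b)

class Latch:
    """One bit of memory: each gate's output feeds the other gate's input."""
    def __init__(self):
        self.q, self.q_bar = 0, 1

    def pulse(self, set_=0, reset=0):
        # Iterate the feedback loop a few times until the two gates settle.
        for _ in range(4):
            self.q = nor(reset, self.q_bar)
            self.q_bar = nor(set_, self.q)
        return self.q

bit = Latch()
bit.pulse(set_=1)    # energize: the latch grabs on
print(bit.pulse())   # 1, the controlling current is gone but the bit is held
bit.pulse(reset=1)
print(bit.pulse())   # 0
```

The state survives after the set or reset input is removed precisely because each element's output is wired back to the other's input, which is the "coupled to itself" arrangement described above.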
Interviewer:
WHY WASN’T THAT THE SECRET OF BUILDING COMPUTERS?
Ceruzzi:
As it turns out, the very first devices that actually were capable of doing what Babbage wanted to do, namely carry out general purpose sequences of arithmetic, were built with electromagnetic relays. Ah, they were built by really three people simultaneously: Konrad Zuse in Germany, George Stibitz at Bell Laboratories in New York City, and Howard Aiken at Harvard University in Massachusetts. Each of them had a slightly different approach, but they all basically used the same principle of the electromagnetic relay, ah, in banks of up to several thousand or so, with sequential control of arithmetic operations fed in by a paper tape.
Interviewer:
THEY WERE FASTER THAN THE MECHANICAL ONES BUT WERE THEY FAST ENOUGH?
Ceruzzi:
They were not very fast relative to what a good human being operating a mechanical calculator could do. But because they could carry out not just one operation but a whole sequence of operations at a time, it meant that for a complex problem they were much faster, because they could run right through until you got to the solution of the problem. However, it didn't take long before people realized that if you were going to go much beyond what these three gentlemen did, you were going to require even faster switching speeds. And these three inventors had pretty much pushed the limit of what you could do with relays. But a very interesting thing happened. They designed these machines with a certain logical structure based on very simple switches. And these machines had a tremendous amount of flexibility and power. Other people came along and said: well, if I just substitute a vacuum tube for your relay, using the exact same basic logical design, I've got the same machine. It costs a little bit more, but not much, and I've now got one thousand times the speed and I can do a thousand times more work. Given that kind of accounting, it's not surprising that relays soon fell out of favor.
Interviewer:
YOU MENTIONED VACUUMS, CAN YOU GET ONE OF THOSE?
Ceruzzi:
A vacuum tube consists of three basic elements. Just like a relay, there is an element which takes current in, an element which is the pivot or fulcrum of a lever, in the sense that it switches that current, and then a third element which collects the current. So you have a switch that is controlled by a current, and the output of this switch is another current. So you can then chain sequences of vacuum tubes to one another and have them switch each other, have one feed back on itself and store a digit of information, ah, do all the same kinds of things that a relay can do. People often talk about vacuum tubes as being cumbersome and awkward, but in fact they can be made quite compact. This is a rather large one that comes from the SAGE computer that was built for the United States Air Force in the 1950s. But they can be made compact, and there was a lot of experience in manufacturing these tubes, so people knew how to make them with fairly reliable characteristics.
Interviewer:
WHAT WERE THEY USED FOR?
Ceruzzi:
The vacuum tube's history is very interesting. It was invented, ah, in association with the radio and the telephone industry to amplify signals. The first real practical application was in telephones, for long distance conversations, to amplify signals. And, ah, very soon thereafter people began using them in radios, again to pull very weak signals out of the air and make them audible, and to demodulate them, or decode them, so that we could hear them. The computer, though, uses a tube in a very different way. As I said, it uses it as a switch. It either turns the current on or off. In the telephone business or in the radio business the vacuum tube was used to amplify a signal, so if you had a little signal like this, it would amplify it like that. If you had a bigger one, it would amplify it some more, in what we call an analog fashion. But it turned out not to be that difficult to modify the tube's circuits to make it work as a switch.
Interviewer:
AND HOW FAST COULD IT SWITCH?
Ceruzzi:
Well, ah, how fast was a tube? It depends on a lot of things.
Interviewer:
WITH A RELAY, WHAT IS THE SWITCHING...WHAT'S HAPPENING ?
Ceruzzi:
In a, in an electromagnetic relay, the switching is done physically by a piece of metal that must move from one set of contacts to another. In a vacuum tube the switching is done by electrons which, for all our purposes, can be considered to have no mass at all, no inertia. You can switch them very rapidly simply by turning on or off another current. That, in practice, means that you get switching speeds around a thousand to two thousand times faster than the best relays.
Interviewer:
THE FIRST LARGE SCALE ENGINE TO WORK ELECTRONICALLY WAS COLOSSUS, TELL US, WHAT WAS IT AND DOES IT HAVE A PLACE IN THIS STORY WE'RE TALKING ABOUT. WAS IT A DEMONSTRATION THAT IT WAS POSSIBLE?
Ceruzzi:
We tend to think of computers, and in fact throughout much of this discussion we've been talking about computers, in terms of their ability to do arithmetic, as mathematical machines. As it turns out, the very first device that used electronic tubes to do computing in the more abstract definition was not an arithmetic machine at all but something called the Colossus, a machine built by the British during World War II to decode German messages that were intercepted by radio. This machine did not process numbers, but it took information in the form of messages that were coded and translated into pulses of electricity and, going through essentially binary vacuum tube circuits with some kind of program, processed that information, manipulated those codes, and came out with a new piece of information that was somehow more intelligible. So the Colossus, in some ways, was both more and less than a computer. It was less than a computer because it couldn't even do simple arithmetic, but it was more than most of the other machines of its day because it processed information very rapidly and was able to carry out a very complex and sophisticated program to get to the kind of solution to a very tough problem that people wanted.
Interviewer:
SO IT SHOWED THE LOGICAL NATURE OF THE [INAUDIBLE]…
Ceruzzi:
The Colossus used logical circuits. It used the same kinds of essentially binary switches, which are only very few in number, but by a very ingenious combination of those circuits, plus a very large number of vacuum tubes, a very large number of those circuits, it was able to do something quite sophisticated: namely, take coded messages that had been scrambled in some almost random fashion, and somehow unscramble them.
[END OF TAPE F326]
Interviewer:
THE ENIAC WHICH WAS BUILT, AS WE KNOW, TO DO THESE BALLISTICS CALCULATIONS, WASN’T FINISHED BEFORE THE WAR. HOW DO YOU REGARD IT? WHAT IS ITS SIGNIFICANCE IN THE HISTORY OF COMPUTING?
Ceruzzi:
The ENIAC has really been put on a pedestal as a very pivotal machine, and I think a lot of that is justified. To me the most remarkable thing about the ENIAC was that its designers had the audacity to put 18,000 vacuum tubes into one system and make it work. And I think that was the real tough nut to crack, because the notion of taking very simple switching elements and making complex and very powerful machines out of them was something that many people understood in the abstract, but in order to make it do something practical, you really had to bite the bullet and make it happen. You had to build something that was very, very complex, that had many more vacuum tubes or elements than anything else that people had previously done, and make it work. And that was very hard to do. And there was little experience beforehand with that; there were some things in radar and a few other installations, but nothing like that. And for them to first of all propose doing it, and then pull it off, I think was remarkable. And it really shattered the feeling that had been around ever since Babbage's day that this thing was always impractical, or it would never happen, or that it was just an interesting intellectual idea. It shattered through all that because, as soon as it was finished, it immediately began doing very useful work, and people began lining up trying to get some time on it.
Interviewer:
BY THE TIME OF THE ENIAC, CERTAIN THINGS HAD BECOME CLEAR ABOUT HOW TO BUILD A UNIVERSAL MACHINE. I’M WONDERING IF YOU COULD GO THROUGH THEM, AND THEN GO ON TO THE THING WHICH THE ENIAC REALLY DIDN’T HAVE SORTED OUT AND SOME IDEAS OF WHAT TO DO ABOUT THAT.
Ceruzzi:
The ENIAC was a very successful machine in some ways, but not all of them; in fact one might say that one of the most important things about the ENIAC was that it showed people how not to build a computer, namely by a brute force attack on the basic arithmetic function. People felt that the 18,000 tubes was probably as far as anyone would ever want to go, and that future designs had to be more elegant and take better advantage of some of the inherent logical structure that mathematicians were working on at the time. Now, the ENIAC showed people that vacuum tubes were not only practical but that, in fact, the higher speeds they promised really were worth the trouble. And I think there are some very interesting statistics about how the ENIAC could be made to run for, say, ten hours at a stretch before a tube burned out. In that ten hours it could do more computations than any of these electromechanical machines could do running for weeks and weeks and weeks. So I think it was proven to the world that this is the way to go, admitting, of course, that there are some tough engineering problems involved. However, we must remember that the ENIAC was not a pure binary machine.
Interviewer:
[MISCELLANEOUS CONVERSATION].
Ceruzzi:
The designers of the ENIAC were faced with the problem of having these very high computing speeds but having to somehow feed the machine instructions at high speed as well. The paper tapes or punched tapes that were used in the relay machines, such as those that Zuse built, ran much too slowly. The ENIAC designers went to a system of plug wires, where you essentially rewired the machine each time you set it up for a new problem. That was a very slow and tedious process, but once it was done, the machine could then run very fast. People after the war realized that future computers had to retain the kind of flexibility of programming that the punched tape offered to the relay machines, and yet still do it at the high speeds that the ENIAC had. The answer was to store those instructions in some kind of internal electronic memory, just as you stored the data of a calculation. So what emerged in the late 1940s was the notion of a machine that had a fairly large undifferentiated storage unit that could hold, say, a thousand numbers and instructions. And of course instructions are coded as numbers as well, so it would store both instructions and data in some kind of memory device that you could retrieve data from at very high speed, so that the processing part of the machine would not sit around waiting for its next instruction.
Interviewer:
WHY DON’T YOU SAY THAT AGAIN. LET ME ASK IT A DIFFERENT WAY. HOW IS THE ENIAC DIFFERENT IN PRINCIPAL FROM A MODERN COMPUTER. WHAT I’M SAYING IS THAT A MODERN COMPUTER IF YOU WANT TO CHANGE THE PROGRAM YOU FEED IT IN…. I’M JUST WONDERING WITH THIS THING WHERE THEY HAD TO REBUILD THE MACHINE EACH TIME, SORT OF SAY IT LIKE THAT, IT JUST DIDN'T MAKE SENSE.
Ceruzzi:
The ENIAC is regarded as the very first of the modern electronic computers, but there is one thing that was quite different about it from today's computers, and that was the way it was programmed. With the ENIAC you used plug boards, where you pulled out wires and plugged them into various sockets that set up the machine for a specific problem. In a sense you were literally rebuilding the machine for each problem that you wanted to solve. Now, as it turned out with the ENIAC, once they set it up they could run it for quite a long time using different data, because that was the kind of problem it was built for. But if you wanted to make it a truly general purpose machine, it wasn't very good at that, because it took an awfully long time to pull out all the plugs and then put them back in a new position and make sure you didn't make any mistakes and all that sort of thing. And during that time a very expensive machine would be sitting there idle, not doing anything useful. Sometimes this could take days. As a result people came to the notion that future computers would be electronic like the ENIAC, but they would also have their instructions stored in the memory along with the data, and both could be retrieved at very high speed. That way, when you wanted to change over from one program to another, you simply would load a new set of instructions into the memory and then let the computer do the rest.
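The stored-program arrangement Ceruzzi describes can be sketched as a toy machine in Python. This is an editorial illustration with invented opcodes, not any historical instruction set: one undifferentiated memory holds instructions and data alike, both as plain numbers, and changing programs means loading new numbers rather than rewiring plug boards.

```python
# Hypothetical opcodes for illustration:
# 0 = HALT, 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr
def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]   # fetch the next instruction
        pc += 2
        if op == 0:       # HALT: computation finished
            return memory
        elif op == 1:     # LOAD a memory cell into the accumulator
            acc = memory[arg]
        elif op == 2:     # ADD a memory cell to the accumulator
            acc += memory[arg]
        elif op == 3:     # STORE the accumulator back into memory
            memory[arg] = acc

# Program (cells 0-7) and data (cells 9-11) share one memory:
mem = [1, 9,    # LOAD cell 9
       2, 10,   # ADD cell 10
       3, 11,   # STORE result in cell 11
       0, 0,    # HALT
       0,       # padding
       20, 22, 0]
print(run(mem)[11])   # -> 42
```

To "reprogram" this machine you overwrite the first few cells with different numbers, which is exactly the flexibility the plug-board ENIAC lacked.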
Interviewer:
CLEARLY THAT WAS SORT OF A PRACTICAL ADVANTAGE, YOU COULD SEE WHY PEOPLE DID IT, BUT IT TURNED OUT IN RETROSPECT TO HAVE MUCH MORE FAR REACHING CONSEQUENCES, DIDN'T IT? I MEAN HOW IMPORTANT DO YOU THINK THIS DECISION TO GO TO THE STORED PROGRAM HAS BEEN? IS IT CRUCIAL?
Ceruzzi:
The notion of storing the program internally was really arrived at first for very practical reasons: trying to get the instructions out at high speed so the processor wouldn't have to sit around idle. But once it was implemented and people began to think about it, they realized that it was much more profound than just that. It was not just an engineering expedient; in fact it had something very, very fundamental to do with the nature of what a computer is. A computer is intrinsically a symbol manipulating machine. The symbols that a computer manipulates can be data, in the sense that we're familiar with, or they can be instructions, in the sense that we're familiar with. Inside the computer, all there are are electrical impulses. From an engineering standpoint there's no difference. What I think took people by surprise, and I think it still takes people by surprise, is the fact that there's no real difference in a philosophical sense either, between instructions and data, between commands that tell the machine to do something, and the numbers or symbols or words or whatever that the machine is doing those things to.
Interviewer:
WHY DID THAT MATTER IN A PRACTICAL SENSE?
Ceruzzi:
What this meant in a practical sense was that if you had a stored program computer, you could construct it in such a way that the computer could manipulate not only data in the ordinary sense but also instructions; in other words, treat instructions as data. Which sounds preposterous when you think of it, but in fact that's what it could do. When you have such a machine, you can feed into the computer a program that's written in a very comfortable language, the language that ordinary people, you or I, express our own problems in. The computer will then take those instructions, treat them as data, manipulate those symbols, and turn them into the binary ones and zeros which are really the only kinds of things that the vacuum tube circuits or the transistors or the chips inside can manipulate. That means that you as an individual no longer have to be concerned with the language that the computer speaks, namely this very abstract and arcane arithmetic system, but rather you can concentrate on the problem you want to solve, whether it's an engineering problem, designing a house, writing a book, whatever you want to do. You have a language that you express that problem in. You give that problem to the computer in your language. The computer then does the rest, and you don't have to worry about what it does inside. It takes care of that for you.
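Treating instructions as data, as Ceruzzi describes, is what a translator program does: it reads the text of a program as its input data and manipulates those symbols into numeric codes. The mnemonics and opcode numbers below are invented for illustration; this is a sketch of the idea, not a real assembler.

```python
# Hypothetical mnemonics mapped to invented numeric opcodes.
OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2, "STORE": 3}

def translate(source):
    """Treat instruction text as data: turn human-readable lines
    like 'ADD 10' into the flat list of numbers a machine would run."""
    code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        code += [OPCODES[mnemonic], operand]
    return code

program = """
LOAD 9
ADD 10
STORE 11
HALT
"""
print(translate(program))   # -> [1, 9, 2, 10, 3, 11, 0, 0]
```

The translator and the machine that runs its output can be the same stored-program computer, which is the "why build two?" point Ceruzzi makes a moment later.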
Interviewer:
SO YOU’RE SAYING THAT THE STORED PROGRAM COMPUTER MAKES POSSIBLE A USER FRIENDLY COMPUTER, THE NOTION OF A STORED PROGRAM MAKES POSSIBLE SORT OF EVERYTHING THAT’S HAPPENED SINCE…
Ceruzzi:
The stored program lays open the door toward what we now call user friendly programs, or application software, which is the term that I prefer to use. Application software is written by people who are intimately familiar with the kinds of problems that people really need to solve in their day to day lives. These people write this application software, which you can then purchase or rent or get somehow; in some cases you get it for free. You put it into your computer, and then the computer takes over and does the rest. Now, it can be done without a stored program. It's possible to build a computer that does not have this stored program capability and yet still has the ability to translate from one language to another. However, in practice that is very awkward, because essentially you have to build two machines. You have to build one machine that's the computer to solve the problem, and then you have to build another one that translates your problem into the language of the first machine. Why build two, if you could have the one machine do both? In fact, one machine can do both, because that's what a computer is. It's a universal machine.
Interviewer:
OK NOW, LET ME JUMP AHEAD NOW, WE SPOKE EARLIER THIS MORNING ABOUT HOW MANY OF THE PIONEERS FAILED TO SEE THE USE OF THE COMPUTER. WHY DO YOU THINK, FOR INSTANCE WE SPOKE ABOUT THE SCIENTIFIC PEOPLE NOT SEEING THE USE AND APPLICATION, WHY DO YOU THINK MANY OF THE PEOPLE IN CORPORATE COMPUTING WHO FOUNDED THE INDUSTRY IN THE 50’S WERE UNABLE TO SEE THE DAY COMING LIKE WE HAVE TODAY WITH MILLIONS OF PERSONAL COMPUTERS ON THE DESKTOP? WHY DO YOU THINK THAT WAS A DIFFICULT THING FOR PEOPLE TO GRASP?
Ceruzzi:
The kind of world we live in today is one where there are computers all around us in various forms. We tend to take that for granted; in fact young people grow up never seeing anything else. And what that obscures, I think, is the enormous practical difficulties involved in getting something of this complexity to work. In order for computers to be as pervasive as they are, there are a number of conditions that have to be fulfilled. First of all, they have to be cheap. They can't be too expensive; they're not that cheap, but they're certainly not many, many thousands of dollars. Second of all, they have to be reasonably small, so that they'll fit on a desktop or you can carry them with you. And they also have to be pretty reliable. They can't be breaking down every few hours and having to be repaired. Once you've got that, then you've got the basis for the notion of making programs for them that will solve your problems and be user friendly and all that sort of thing. But you can't even get that far until you satisfy certain very basic conditions. And those conditions were really what were on the minds of the computer pioneers for decades. They worked very hard at trying to solve the very hard problems of reliability and size and cost and weight and power consumption. Perhaps they became so fixated on those problems that they lost the vision of what might happen once you passed a certain threshold and computers became cheap enough and available enough that ordinary people could own them. Once that happened, though, and ordinary people did get them, there was simply no stopping people from using their own ingenuity to figure out ways to put them to uses that no one really had foreseen.
Interviewer:
NOW UNDERPINNING WHAT DID HAPPEN, UM, WAS, WERE PHENOMENAL ADVANCES IN THE PROCESS YOU TALKED, SPOKE ABOUT THIS MORNING OF MINIATURIZATION. I WONDER IF YOU COULD GIVE US SOME IDEA, IS THIS REALLY UNPRECEDENTED IN ENGINEERING HISTORY, COMPARING THE MINIATURIZATION AND IMPROVEMENT IN QUALITY AND SO FORTH, AS COMPARED WITH SOMETHING ELSE. IS THIS DIGITAL MEDIA WITH ITS MILLION [INAUDIBLE] INCREASES REALLY SOMETHING WE HAD TO RIGHT TO EXPECT, OR WHAT?
Ceruzzi:
It's very hard to put into perspective the advances in computing as compared with other advances in technology. A lot of technology in people's minds is associated with the notion of bigger is better. You see things like hydroelectric dams, nuclear power plants, super highways, where a small road is nice but a super highway is better, or something like that. And I think in general you can say that's true about a lot of technology. There are other cases, though, of machines getting smaller. Take for example the mechanical clock, which first appeared in cathedrals or in the town square; it didn't take long before people figured out a way to make it small enough to carry in your pocket, or even eventually on your wrist. And a lot of people went out and purchased watches, not because they needed one, but because if enough people had them, then they found they couldn't get along without them, which is true of a lot of technology. So there are examples in the past of things getting small. We have examples in computing of things like Napier's bones, which are instruments that people carried in their pockets to help them do multiplication. We have examples of very small pocket slide rules, even pocket mechanical calculators. So there has always been a desire for people to have things small, and certainly a desire to have things cheap and mass produced, part of the myth of Henry Ford and his Model T. I don't think any of these examples, though, can compare to the orders of magnitude decrease in size and lowering of cost that has happened in the computer field in the past two or three decades. Now, whether that really is the central issue here, I'm not so sure. Because what has happened is not so much that you take the ENIAC, let's say, and make it small enough to put in your pocket. What you really are doing is taking advantage of that smaller size and lower cost to make ever greater demands on what you would like that machine to do.
I think if someone had an ENIAC today and they gave it to me, I wouldn't want, I'd throw it away, it wouldn't be of any use to me. I would much rather have something that can do a lot of other things. And I can get those things, because people are out there wanting to build them because they know there's a market for those things.
Interviewer:
HOW DOES THAT CALCULATOR YOU’VE GOT THERE COMPARE WITH THE ENIAC?
Ceruzzi:
Okay. These calculators, which first appeared around the 1970s from companies like Hewlett-Packard and Texas Instruments, manipulate decimal numbers, internally of course they're binary, with about ten digit accuracy. They can carry out about a hundred program steps, can do sines and cosines, and can store about twenty to a hundred numbers. They have easily the power of the ENIAC. They're a little bit slower than the ENIAC would have been, but not so much as to make a difference. In fact, I consider these to be sort of the modern day version of the ENIAC, but you can carry it in your pocket.
[END OF TAPE F327]
Ceruzzi:
This chip that I have in my hand is a microprocessor that's used in many personal computers today. In fact, what you're looking at is about 90% packaging; the wires are really what determine the size. The chip itself is probably just a little sliver of a square inside, and this is really the heart of a personal computer, and of many other larger systems as well. It's hard to say what it is, because what we see nowadays is very large systems made up of dozens and dozens of these things. And I think of this in terms of a kind of endless cycle. We talked about how machines like the ENIAC were built out of thousands and thousands of simple elements, namely vacuum tubes. Now we have new computers built out of thousands and thousands of very complex elements, but they're reduced to the simplicity of a very simple chip like this. And it goes on and on, and I don't think there's any end in sight. At least I can't see one.
Interviewer:
NOW IN THE WORLD OF SORT OF PHYSICAL MACHINES YOU RUN UP AGAINST SCALING PROBLEMS VERY QUICKLY- WE SEEM TO HAVE AVOIDED THIS PROBLEM…
Ceruzzi:
The physical limitations of machinery are very evident in things like skyscrapers and bridges. If you try to build a bridge too long, you run into problems of the physical strength of concrete and steel and things like that. There are physical limits to computation. As it turns out, though, we are an awfully long way from ever approaching them. The physical limits of computation people have talked about in terms of the radius of a hydrogen atom, or the time it takes a beam of light to traverse the distance of a hydrogen molecule, or something like that, as a basic time unit. We're a long way from that. Now, there are some very practical problems, and I don't want to disparage how difficult it is to make chips this small, but what's holding back the progress of computing is not the physical limitations, but rather the very day to day tackling of the complexity of design. How do you design a complex device like this? Earlier I said that the power of the computer comes from two basic notions: one, that you start with very, very simple basic binary switching elements, and the second, that you organize those elements in a very complex but highly structured way. It's the organizing, though, that's hard, and I don't think we really have a handle yet on organization, although we have certainly come a long way. But it's still a fundamentally human activity, it involves a lot of creativity, and it is really the brake that keeps computers from going to those physical limits. Which they may do some day, but we have a long way to go.
Interviewer:
NOW ONE OF THE THINGS THAT’S CLEARING GOING TO HAPPEN, IT’S HAPPENING AS WE SPEAK, IS SOMETHING THAT VERY FEW OF THE VISIONARIES EVER CONSIDERED WHICH IS THE CONVERGENCE OF DIGITAL COMMUNICATIONS AND COMPUTERS. I WONDER IF YOU COULD JUST SPEAK A LITTLE TO THAT. WHY THIS IS LIKELY TO BE SIGNIFICANT AND WHERE IT MIGHT LEAD US TO.
Ceruzzi:
All along we've talked about computers as symbol manipulators, as machines that can manipulate symbols and process information. One of the things we haven't really talked about is what you do with that information once you've processed it. Now, you can use the computer as most of us do, as a device that is right in front of you and gives you the information back right away. But certainly sending that information across time and space to other parts of the world, what traditionally is called communication, is part of that same kind of activity. And what we see today is a desire among many people to realize that potential, which has always been there, for the computer to be merged with the already existing telephone network, the telecommunications network that we have in this world. And by that merger to create a kind of synergy where you not only send information from one part of the world to another, but you also actively manipulate it and creatively shape that information as you send it. It's something that's very, very strongly believed in by many people, but to get to the realization of it requires some further technical development, as well as a conceptual kind of revolution that we haven't had yet. [LAUGHTER] What will it take to achieve a true combination of processing information and sending information? We're constantly seeing examples where little chunks of this puzzle get put into place. One of my favorite examples is the fax machine. Look at how quickly fax machines took over, from being a curiosity that had been around almost a hundred years; there were actually fax machines under development in the 19th century. But once people figured out a very nice, cheap, reliable, and simple way of making a fax machine, suddenly we all had to have one. We couldn't live without it. You can't do your business without a fax machine.
And it happened in a space of a couple of years. We see these little bits of the puzzle being put into place, but we haven't seen the big picture yet. And I don't think we know what the big picture is going to look like yet.
Interviewer:
THIS IS KIND OF LIKE THE FINAL WIND-UP, I MEAN, WHY DO YOU THINK THE COMPUTER HAS BEEN SO IMPORTANT? TO ASK THE QUESTION AGAIN, I MEAN WHY IS UNDERSTANDING THIS TECHNOLOGY MORE IMPORTANT THAN SAY UNDERSTANDING THE HISTORY OF THE WASHING MACHINE?
Ceruzzi:
Well, I'm often asked by people why I spend so much time studying the history of this invention, and a lot of times I feel, well, it's just interesting to me. In some ways it's just another one of the many machines that we have as part of our lives. We have automobiles, and telephones, bicycles and everything else, and yet somehow I feel that the computer is more than that, because it addresses something more fundamental about the place of human beings on the earth. Namely, that our own lives are intimately related to learning about and discovering and interacting with the rest of our environment in terms of exchanges of information. Now, that doesn't mean that exchanges of energy aren't taking place; in fact that's very necessary too. But when you look at human society and what we do on earth as we band together in societies, we exchange information. Our culture, which is what really separates us from other living things, is something that's based on information, which is carried by symbols. The invention of symbols is really the key to the whole notion of human culture. And here is a machine that addresses that very fundamentally, and is the tool that allows us to amplify and magnify our ability to manipulate symbols. To me that means it's somehow much more profound than, let's say, a washing machine. Although washing machines are wonderful; I don't see anything wrong with them, and I think they're very reliable and they do their job. Bicycles are wonderful, automobiles are wonderful, they all have their good and bad points and all that sort of thing. But somehow underneath all of them is this more profound notion of what we're doing here as we create these dream machines, I guess you could call them, to be at the junction between our own selves and the universe around us.
Interviewer:
NOW YOU MENTIONED SYMBOLS, THAT WAS VERY GOOD, UM IS THIS THEN IN THE SAME SENSE MAKING MARKS ON AS MEDIUM WHEN PEOPLE WRITE, IS THAT A SYMBOL SYSTEM? IS THIS ANOTHER SYMBOL SYSTEM IN THE LONG TRADITION?
Ceruzzi:
Traditionally, historians have marked the beginnings of civilization as synonymous with the invention of writing, making symbols to mark the storage of grain or something like that. And of course, what's interesting about that is that, starting with a very practical use of symbols to record information, we end up with all the richness that comes with literature and culture and written books and libraries and all those wonderful things that probably were not anticipated by the people who first developed what we now think of as primitive markings. And I think the computer fits in very well with this continuum, because here is a further embellishment of this notion of abstracting information, which is all around us, and developing a way to abstract it through the use of symbols, whether they be marks on a piece of paper or, in this case, electronic marks in a computer chip, which you can then manipulate and process. That further continues this process... [MISCELLANEOUS CONVERSATION] I keep thinking that this notion that we've come to of the stored program principle, where you have a device that treats commands and data as one, and interchanges and mixes things together, and can modify itself and do all kinds of things in an abstract way, is at the heart of the power of what we call the computer today. Because it is a machine that can treat information in very open ended ways and yet still allow you to control it, to have it do something for you, to make sense out of the world for you.
Interviewer:
THAT’S OK. I’VE GOT ONE MORE QUICK QUESTION AND THAT’S GOOD. JUST AS YOU REFERRED TO THE SYMBOLS WHICH RECORDED GRAIN THINGS HAD AN OPEN ENDED FUTURE, THEY LED TO THINGS THAT WEREN’T ANTICIPATED. CAN WE TALK ABOUT THE COMPUTER IN THAT SORT OF WAY? THAT IT WAS INVENTED FOR SOMETHING SPECIFIC RIGHT, AND IS IT OPEN ENDED?
Ceruzzi:
The story that people come back to time and again about the past 30 years of the development of computing is how it has constantly leapfrogged even the wildest projections of some of its inventors, who find themselves pioneers one day but backward the next, because things are moving so much faster than they could dream. And I think this process is continuing. We started out with a machine that could do arithmetic, and then it went from there to being able to sort and process business data, for example, and then it went on to do controlling applications in factories, or for air traffic control, or aircraft, or things like that. Now it's doing pictures. It can manipulate drawings and graphics and images. Tomorrow it's going to do something that will probably surprise us all. I know that much. What it's going to do, I don't know, but it will surprise us, just because it has that inherent capability of surprising us. Because it is not tied to any one particular function. What it does depends only on the ingenuity of people who play around with it and discover these things.
Ceruzzi:
It really consists of a simple switch that can switch current, and the current itself can energize a coil which can switch another relay. You see a coil here which switches this relay and these contacts here, and you can arrange one after another of many of these to produce a complex computer circuit.
Interviewer:
WOULD ONE BE CONNECTED AFTER THAT? IN THE FRONT? THAT’S WHAT YOU DID. WOULD THE OUTPUT OF THAT RELAY GO [INAUDIBLE].
Ceruzzi:
Oh, yes, let me keep going. The interesting thing about the relay is that the output of this switch can be connected to the input of another relay, the coil then can be energized by the contact of a relay elsewhere in the machine and with that you can build up a very complex set of circuits.
Interviewer:
[TECHNICAL DISCUSSION] DO IT AGAIN, AND JUST TELL US A COUPLE OF THINGS ABOUT IT.
Ceruzzi:
The tube consists of a switching element, a resistor, and something that switches. The output of the tube can be connected to the input of another tube, and that is where the power of the tube lies, just like that of a relay.
[END OF TAPE F328]
Interviewer:
SO THIS IS, UM, ECKERT AND MAUCHLY AND YOU KNOW, WHAT THEY SAW. THEY START A BUSINESS AND EVERYBODY THINKS IT'S A SCIENTIFIC MACHINE AND THINK THEY'RE A LITTLE BIT FOOLISH. SO WHAT IS IT THAT ECKERT AND MAUCHLY SAW THAT OTHERS DIDN'T?
Ceruzzi:
Well, the computer was invented in a scientific and engineering environment, and most people considered it therefore as an engineering oriented machine to solve scientific problems, that is, a lot of numerical processing of very complex mathematical equations. Eckert and Mauchly noticed, and I think this was unique to them, that the same machine could also be programmed to solve the routine problems, what we now call data processing, that are found in business. Ah, these kinds of problems are fundamentally not that much different from scientific problems, but they were carried out in a different environment, and most of the original computer pioneers, except for Eckert and Mauchly, just didn't recognize that in fact it's the same kind of problem, and that if you program the computer properly it will solve them.
Interviewer:
WHAT DID ECKERT AND MAUCHLY SEE THAT OTHERS DIDN'T?
Ceruzzi:
Eckert and Mauchly were probably alone among the early computer pioneers to notice that the computer that they were building, the UNIVAC was really capable of carrying out, ah, was capable of solving the kinds of problems that business [INTERRUPTION]. Eckert, Eckert and Mauchly were probably alone among the early computer pioneers in recognizing that computers, ah, which had been invented or were being developed by a number of people at the time, ah, were capable of solving not only scientific problems or problems involving very complex mathematical equations but also the same kinds of problems that businesses had been trying to solve using punch card, ah, accounting machines or tabulators or those kinds of machines, ah, that were not really found in a scientific environment or an engineering environment.
Interviewer:
CAN YOU EMPHASIZE THE TEDIUM?
Ceruzzi:
Ah, Eckert and Mauchly were probably alone among the early computer pioneers in noticing that this machine that they were building, and, ah, other computers that other people were building that were like it, ah, in the fact that it was being designed to solve tedious and repetitious mathematical problems could also be applied to tedious and repetitious problems that business people needed to solve, problems that they had been solving at that time with punch card equipment, accounting machines, and, ah, mechanical calculators.
[MISCELLANEOUS CONVERSATION]
Interviewer:
WHAT DID THEY SEE THAT OTHERS DIDN'T?
Ceruzzi:
Okay, ah, Eckert and Mauchly were probably alone among the early computer pioneers in noticing that the machine that they were building, the computer that they were building which they were intending to, which they were building to solve scientific problems that had a lot of tedious, repetitious evaluating of complex mathematics, this same machine could also be applied to the tedious and repetitious problems that business people had to solve, problems that business people were solving with punch card equipment or accounting machines.
Interviewer:
GOOD, GREAT, TERRIFIC. CAN YOU TELL ME ABOUT THE ADVANTAGES OF THE TRANSISTOR AND THEN THE DISADVANTAGES.
Ceruzzi:
The transistor was looked at, ah, by many computer engineers in the 1950s as the solution to many of their problems. It was much, much smaller than a vacuum tube, for example - perhaps a fiftieth the size - and it weighed about a hundred times less than a vacuum tube. It gave off no heat; ah, it required a fraction of the electrical power that a vacuum tube needed. What that meant for engineers designing computers is that with transistors, if they could get transistors to work in computer circuits, they could now think in terms of designing computers that were much, much more complex and powerful than anybody would have dreamed of designing using vacuum tube technology.
[MISCELLANEOUS CONVERSATION]
Interviewer:
ALRIGHT SO, I WANT YOU TO TELL ME - BUT ONCE THEY DID THAT - AND CAN WE CHANGE THE FRAMING ON THIS ONE? OK. BUT ONCE THEY DID THAT THEY RAN INTO A PROBLEM. SO GREAT, THE TRANSISTOR SOUNDS TERRIFIC.
Ceruzzi:
Okay, but once they began designing these computers of much greater power and complexity, they very quickly came across a new problem, and it's something that's been called the tyranny of numbers. It's simply the fact that with many, many transistors, you just don't have any practical way of wiring them all to one another. The complexity of the design becomes overwhelming and no one really knows how to solve it.
Interviewer:
DO IT ONE MORE TIME FOR SAFETY. IT SOUNDS TERRIFIC.
Ceruzzi:
As soon as engineers began using the transistor to design much more powerful and complex computers, they very quickly came up against another problem, and that's something that people have since called the tyranny of numbers, which is to say that with a very complex arrangement of transistors and other components, you simply run out of places to wire them together; the design becomes much, ah . . . (let me start all over again).
Interviewer:
SAY THEY RAN INTO A NEW PROBLEM.
Ceruzzi:
As soon as, as soon as engineers began designing these computers using transistors, they ran into a new problem. The new problem was one that is sometimes called the tyranny of numbers. That is to say, you're designing a much more complex circuit - the transistor allows you to do that - but when you design this more complex circuit you just don't have any way of managing the sheer complexity of how to wire one to another, and routing the wires around, and making sure that you don't make a mistake, and just finding the room to, to put them together; it just becomes impractical.
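To give a rough sense of the scale problem being described, here is a back-of-the-envelope sketch. The worst-case premise that any component might need a wire to any other is my illustrative assumption; real circuits need far fewer connections, but the growth trend is what overwhelmed designers.

```python
# Illustrative arithmetic for the "tyranny of numbers": under the assumed
# worst case that any of n components might need a connection to any other,
# the number of potential point-to-point wires is n*(n-1)/2, which grows
# quadratically -- and in the 1950s each one was soldered by hand.

def potential_wires(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000, 10_000):
    print(f"{n:>6} components -> {potential_wires(n):>10} potential connections")
```

Multiplying the component count by ten multiplies the potential wiring by roughly a hundred, which is why making components smaller and cheaper did not, by itself, make bigger computers buildable.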
Interviewer:
ONE MORE TIME, JUST A LITTLE BIT SHORTER.
Ceruzzi:
Ah, once engineers began using the transistor to design computers of far greater power than they could design with tubes, they ran into a new problem. The problem this time was that you couldn't, ah, figure out a way to wire all those transistors together; they were just all over the place; there was too, too much complexity in there. They ended up calling this problem the tyranny of numbers.
[MISCELLANEOUS CONVERSATION]
Interviewer:
WAS THIS A BIG PROBLEM?
Ceruzzi:
The tyranny of numbers problem, was one that engineers throughout the computer industry were obsessed with in the 1950s as they began building transistor circuits. They, ah, were struggling with all kinds of ways of trying to get around it, eh, not having too much success. Finally though, two engineers independently came up with ways of solving the tyranny of numbers; the first was Jack Kilby of Texas Instruments in Texas, and the second was Robert Noyce of Fairchild in California.
[MISCELLANEOUS CONVERSATION]
Interviewer:
HOW BIG WAS THIS PROBLEM?
Ceruzzi:
As soon as engineers began designing transistorized computers they, they found the tyranny of numbers problem just, ah, insurmountable. It was just something they were obsessed with trying to solve and they tried all kinds of approaches and most of them just didn't seem to work very well at all. Finally, two engineers independently came up with a solution to this problem; the first was Jack Kilby an engineer at Texas Instruments in Texas, the second was Robert Noyce of Fairchild in California.
Interviewer:
THANK YOU SIR.
SO YOU WERE GOING TO TELL ME HOW THE DEFINITION OF THE COMPUTER HAS CHANGED.
Ceruzzi:
If you look at a dictionary published before the war, you find the definition of computer as "one who computes; a calculator or reckoner; specifically, a person employed to make calculations in an observatory or in surveying." Now, if you take a look at a more recent dictionary, you see how that definition has since changed. In addition to the earlier definition, you now have computer as "an automatic electronic machine for performing simple and complex calculations."
Interviewer:
GO BACK TO THE BEGINNING.
Ceruzzi:
If you look at the definition of the word computer in a dictionary published before the war, you find that a computer was a person. It's defined as "one who computes; a calculator, a reckoner; specifically, a person employed to make calculations in an observatory or in surveying" or whatever. During the war that definition changed, and if you take a look at a dictionary published more recently, you see the definition now as "any of several devices for making rapid calculations; specifically, an automatic electronic machine for performing simple and complex calculations." That definition changed during a ten-year period, from about 1935 to 1945, from a person to a machine.
Interviewer:
[TECHNICAL DISCUSSION] AND, ANYTIME.
Ceruzzi:
If you look at a definition - wait, let me start again. If you look at a dictionary published before the war, you find that computer is defined as a person, specifically a person employed to make calculations in an observatory or in surveying or in other related fields. During about a ten-year period, from about 1935 to 1945, that definition changed. And if you look at a more recent dictionary you see how that change is reflected by [SOUND OF BACKGROUND SCRIBBLING] the current definition, which is a machine: "any of several devices for making rapid calculations; an automatic electronic machine for performing simple and complex calculations." That definition changed from a person to a machine in about a ten-year period from 1935 to 1945.
Interviewer:
AND, GO.
Ceruzzi:
If you look at a dictionary published before the war, you find that the word computer referred to a person, specifically "one who computes; a calculator, a reckoner; specifically, a person employed to make calculations" in astronomy or in surveying or in related fields. During World War II that definition changed to one of a machine, and we see that in a more recently published dictionary, where a computer is now "any of several devices for making rapid calculations; an automatic electronic machine for performing simple and complex calculations." That definition changed in a ten-year period, between 1935 and 1945, from a person to a machine.
Interviewer:
SOME PEOPLE REFER TO THE COMPUTER AS A UNIVERSAL MACHINE. WHAT EXACTLY DOES THAT MEAN?
Ceruzzi:
Well I think, I think we have a general notion of a machine as something that does one thing well. And, ah, most of us have probably seen these, ah, rather silly pictures of, ah, people who tried to make an automobile that could turn itself into an airplane or something like that. It doesn't usually work. Ah, a machine is designed to perform one thing and do it well, ah, until you get to a computer. A computer is, in fact, a universal machine that can do lots of different things and do them all well. And, in fact, you don't really know what it's going to do until you sell it, put it out in the marketplace and let people, ah, play with it and figure out things they can do with it.
Interviewer:
[TECHNICAL DISCUSSION] WHEN BABBAGE CONCEIVED OF THE ANALYTIC ENGINE, THE PROGRAMMABLE MACHINE, WHY IS THIS A SIGNIFICANT MOMENT IN THE HISTORY OF TECHNOLOGY?
Ceruzzi:
When Babbage, ah, conceived of the analytical engine - which differed from his earlier difference engine in the fact that it had a separate programming section, an ability to change its program by the use of punched cards - that was a really fundamental moment in the history of technology, because for the first time you separated out the purpose of a machine from its intrinsic design. What it meant was that you could now program a machine to do a variety of different things and not have to worry about anticipating those things in the design of the machine itself.
Interviewer:
ALL PREVIOUS MACHINES HAD BEEN BUILT WITH A FUNCTION IN MIND?
Ceruzzi:
Prior to Babbage's realization that you could build a machine that was programmable, people had always thought of designing the function - or the control mechanism, or the program, if we could use the modern term - into the intimate details of the machine's design, so that you couldn't really separate it out from any of the other parts of the machine that carried the power or the mechanism or did the work. And what that meant was that machines really were designed to be optimized for one purpose only, whatever it may be, but you really couldn't use a machine for more than one application.
[END OF TAPE F399]
Interviewer:
THE DIFFERENCE BETWEEN CONVENTIONAL MACHINES AND THE MACHINE WHICH DOESN'T HAVE A PRECONCEIVED PURPOSE, CAN YOU DESCRIBE?
Ceruzzi:
I think one of the most interesting things about the analytical engine was that it's the first time that we see a notion that has since become very powerful in modern technology, and that is that you can design a machine in which you do not really know what you're going to use it for beforehand. It has a general capability, and you design in an ability to program that machine. Now Babbage used punched cards, which he borrowed from Jacquard looms, but it was in the design of the analytical engine that Babbage realized that you could separate out these two functions: the actual carrying out of a program, and the programming itself. And it is in the programming that you transform that machine into something that does what you want it to do, into some specific application. That really had never occurred before in the design of machinery, and once it did occur - and of course it's at the absolute heart of what a computer is - you have a machine that has what we call a universal capability, and you have all kinds of possibilities open to you that you can't even foresee, and I think that's the key to the kind of revolutionary developments that are going on in the modern world.
Interviewer:
IT MUST HAVE BEEN A DIFFICULT THING FOR BABBAGE'S COLLEAGUES TO GRASP BECAUSE MACHINES DID NOT HAVE THIS ABILITY TO TRANSFORM THEMSELVES.
Ceruzzi:
In Babbage's day - and I think this, ah, is a belief that has persisted for a long time - there was a notion that a machine, in its classical definition, was something that did one thing very well, and a good machine designer was someone who knew how to optimize a design so that it did a particular thing very well. And the notion of designing a machine that could do more than one thing well was felt as being poor engineering, poor design. Ah, it led to rather ludicrous examples, and perhaps we've all seen these, ah, pictures of people who tried to build automobiles that could turn themselves into airplanes, so you could drive out of your garage and fly away, or something like that. They just don't work very well, because the qualities that make for a good automobile are just not going to be the same as the qualities that make for a good airplane. Ah, with the computer, because it's an information processing machine, that doesn't hold true. A computer can be programmed to do any variety of things and do them all well, and it will be the exact same piece of hardware that's doing them all.
Interviewer:
WE COME TO PEOPLE IN THE 30’S WISHING TO BUILD A COMPUTER, ZUSE, I WANT YOU TO TALK A BIT ABOUT WHY HE ELECTED TO BUILD HIS COMPUTER THE WAY HE DID AND NOT LIKE THE TRADITIONAL COMPUTER, WHAT IS THE ESSENTIAL DIFFERENCE IN TERMS OF THE STRUCTURE? GIVEN THE YOU HAVE PHILOSOPHICALLY SEPARATED OUT THESE TWO FUNCTIONS, HOW DO YOU BUILD IT?
Ceruzzi:
Well when, when Konrad Zuse was, ah, first approaching the problem of building a calculating machine, it's very interesting to see how he eventually solved these problems. He was a mechanical engineer by training. He knew very little of calculating machines, ah, had seen a few but, ah, he didn't really know how they worked or, or anything like that. But as a mechanical engineer he was obsessed with the notion of simplicity and ruggedness and reliability, all of the kinds of things that mechanical engineers know quite well. And he felt that any calculating machine that was going to have enough power to do useful work was going to have to be built out of very simple basic elements. And he very quickly found that the binary system of arithmetic, ah, which translated itself into very simple mechanical elements that were either up or down or on or off switches or something like that was really the only way to go when you were designing a calculating machine.
Interviewer:
THIS IS QUITE DIFFERENT FROM OTHER THINGS AROUND THAT HE MIGHT HAVE BUILT, LIKE AIRPLANES.
Ceruzzi:
That kind of approach to building a calculating machine that Zuse took was quite different from the kinds of approaches you see in more traditional machines. Let's take an airplane for example, where you have a few sub-systems, like the wings, the tail section, the engines; each of those is designed very, very specifically for certain activities, and they don't resemble one another at all, and each of those sub-systems is quite complex. Zuse by contrast chose a very simple basic element that he replicated over and over and over again, by the hundreds and later by the thousands, and he got the function of the machine from the structure that he organized those elements in.
Interviewer:
WOULD YOU SAY THAT AGAIN.
Ceruzzi:
This approach that Zuse took to designing a calculating machine was quite different from a classical engineering design - let's say of an airplane. In an airplane you have maybe four or five sub-systems. You have the power plant, the engines; you have the wings; you have the tail section; you have the fuselage. Each of those is designed very, very specially for a special type of function, and it doesn't look anything like the other sections. Ah, and when they're harmoniously put together you get something that's optimized for flying. But the very fact that you have just a few specialized sub-systems means that that machine can't really be redesigned to do something else. It's a good machine for flying, but that's all it can do. Now contrast that with the computer. Instead of having four or five sub-systems, you have thousands, and each of them is made up of the same basic, simple elements - binary switches, on-off switches, or something like that. The ability for that machine to do something useful comes out of the way that they're put together, the structure that they're organized in. It also means that you have an ability to rearrange that structure, and if you do you can optimize that machine to do a number of different things, and that's where the universality of a computer comes from.
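The idea that function comes from how identical simple elements are arranged can be sketched in code. Here a NAND gate stands in for the binary on-off element; the three arrangements below are my illustrative choices, not Zuse's actual circuits.

```python
# One simple two-input element (NAND), replicated and wired in different
# structures, yields different functions -- a sketch of how capability
# emerges from arrangement rather than from specialized parts.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def and_gate(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(n, n)                    # two NANDs arranged as AND

def or_gate(a: int, b: int) -> int:
    return nand(nand(a, a), nand(b, b))  # three NANDs arranged as OR

def half_adder(a: int, b: int):
    """Five NANDs arranged to add two binary digits: returns (sum, carry)."""
    n = nand(a, b)
    s = nand(nand(a, n), nand(b, n))     # XOR built from four NANDs
    return s, nand(n, n)                 # carry is the AND of the inputs

print(and_gate(1, 1), or_gate(0, 1))     # 1 1
print(half_adder(1, 1))                  # (0, 1), i.e. 1 + 1 = binary 10
```

The same element appears in every arrangement; only the wiring pattern differs, which is exactly the contrast being drawn with the airplane's few specialized sub-systems.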
Interviewer:
WHAT PROBLEMS DO ADULTS HAVE IN VIEWING NEW TECHNOLOGY? HOW DO THEY TEND TO SEE IT?
Ceruzzi:
Technology in general?
Interviewer:
NEW TECHNOLOGY. SO REALLY I'M LOOKING FOR THE THING THAT, YOU KNOW, WE SEE - THE CAR AS THE HORSELESS CARRIAGE. SO WHAT ARE THE LENSES WE HAVE WHEN WE SEE A NEW TECHNOLOGY COMING INTO THE WORLD WHICH PREVENT US FROM SOMEHOW UNDERSTANDING WHAT ITS REAL… [INAUDIBLE].
Ceruzzi:
When people encounter a new technology or a new invention, their first reaction is to fit it into the existing world that they've grown up in and are comfortable with. And they look at the functions of that machine or that invention in terms of existing definitions and the existing order of things. So you have, for example, the first automobiles being perceived as a horseless carriage - a carriage that carries people around the way the other ordinary carriages did, only with no horse. Ah, the first radios were considered wireless - and in fact in England it's still called a wireless - simply a telegraph or a telephone system, but without the wires. Well, as you can see with those two examples, neither perception really, ah, caught the kinds of transformations that the automobile or the radio, or later on television, brought to the world; ah, those transformations were brought about because in fact they weren't just horseless carriages or wireless telegraphs but completely new systems that totally reordered things. Ah, it means that people, especially adults, have a terrible time understanding the proper place of a new machine in the existing order of society. Incidentally, the inventors themselves are sometimes the worst offenders in this regard, where they can't really foresee what their invention is going to lead to.
Interviewer:
SO IS IT THAT PEOPLE ARE LOOKING BACKWARDS? WHAT’S REALLY GOING ON HERE?
Ceruzzi:
When people, ah, try to understand a new technology, they really have no choice but to fit it into a world view that they've built up through years of education and acculturation - a model of the world that they have built that allows them to understand things. Ah, to try to break out of that model and foresee potentially revolutionary changes is very difficult, because you have such an infinite variety in front of you. And, ah, science fiction writers sometimes do a good job, but even there you find that science fiction writers can be, ah, just as horribly wrong as anybody else. There are very few, ah, ways to help us try to understand, ah, how a society is going to absorb a technology - a technology, incidentally, that is in many ways a product of that society's forces. Ah, the complexity is just something that is beyond what most of us can, can understand. What I find, of course, very ironic about all of this is that children who grow up with the technology already part of their world naturally incorporate it into their view and don't have this problem at all. So, if you simply sit around and wait twenty years you have the problem solved, because you have a generation of people who automatically understand where things, ah, belong and how it fits.
Interviewer:
THIS PARTICULAR PROBLEM IS MAGNIFIED WITH THE COMPUTER BECAUSE IT ITSELF CHANGES WHAT IT IS.
Ceruzzi:
Ah, when the computer first appeared in society, people naturally thought of it in terms of the human computers that it was replacing, or the very specific, ah, specialized machines that the computer was taking the place of - let's say in a punch card environment in a business, or something like that. And to a certain extent this worked for a while, but it was a lot like the horseless carriage analogy again, in that it failed to capture the rich, multi-dimensional aspects of the computer. Recall that a computer is a universal machine. It can, in fact, do calculations, and I suppose many of them do today. But it does so many other things. And when people think of the computer in terms of just this horseless carriage, let's say, they fail to understand the kinds of impacts that this technology is having in, in graphics, or retrieval of information, or controlling other machines, or generating music or art or sound - anything like that, all the wonderful things that we see around us today. [SOUND]. I don't know, I was getting lost anyway.
Interviewer:
PICK UP WHERE YOU GOT TO THE EQUIVALENT OF THE HORSELESS CARRIAGE ETC., ETC.
Ceruzzi:
When the, when the computer first appeared in the world, people naturally thought of it in terms of the specialized calculating machines or arithmetic machines or punch card machines that it was replacing. And for a while that worked pretty well, because that was in fact what the first computers were used for. But, ah, as people continued to hold that view - and I think many people continue to hold that view today - it really began to suffer from the same kinds of problems that we saw with thinking of an automobile as a horseless carriage. It really failed to capture the many non-numerical applications that computers do today, and it failed to capture the richness of computers penetrating into all facets of society. This is especially true in recent years, as they get small and become embedded into other machines and begin to do things like draw pictures or make music - all kinds of things that, ah, simply are unexplained by a calculator analogy, which is what many people continue to hold for the computer.
Interviewer:
CHILDREN DON'T HAVE THIS.
Ceruzzi:
Now for a child growing up in this age surrounded by computers in, in all of their manifestations, not just calculators, they understand this at once. There's no problem because they see them and they just incorporate them into their world view and they recognize without anybody having to teach them, the notion of the fact that this machine has general capabilities that are used in all aspects of our society.
Interviewer:
HOW WOULD YOU SAY THAT THE INVENTION OF THE COMPUTER HAS AFFECTED OUR ABILITY TO STORE, ACCESS, AND COMMUNICATE INFORMATION?
Ceruzzi:
There are two parts to that question, I guess. They, ah, ah... first, as the computer - let me see if I can start now. As computers have become more and more common in our society, they take on the qualities less and less of a distinct and separate machine and become more and more part of our general environment, almost like the network of highways around us or something like that. Ah, we don't even think of them as a technology any more. We simply think of the information that they handle - the way they manipulate symbols, or process one form of information and turn it into some other form of information - and we begin to think in terms of what we can do with that information, ah, if we would like to do something with it. (Now, I'm not sure if I can go on; I'm not sure where we go from there.)
Interviewer:
COMPARED TO THE PREVIOUS WAY OF STORING AND ACCESSING INFORMATION, WHAT ADVANTAGES DOES IT OFFER? THE FACT THAT WE CAN NOW PUT THIS INFORMATION INTO ELECTRONIC FORM, ETC., ETC.?
Ceruzzi:
In, in many ways the computer is a natural extension of things like libraries and books and, ah, reference materials and newspapers and television and things like that. It's a way of bringing us information, ah, in a very flexible way, and it just extends that capability. It also is something more than that, and that I think is what's most fascinating about the computer: it not only can bring us information, but it can interact dynamically with us and help us; it can almost, in a sense, anticipate what information we want before we know we want it. And I think that's where it's becoming fundamentally something different than just a natural extension of the reference library or something like that - it is almost, in a sense, taking on the qualities of a university or a college, in that it allows an individual to gain an understanding, or gain a way of navigating through all the information that exists in the world, something that, ah, requires tremendous study and preparation and training at present, and I'm sure will continue to require some of that, but the computer is now taking on that capability. It's not there yet and it has a long way to go, but I think we can see very clear evidence that this is happening, and that we will be getting to a point where we'll almost have the computer as a personal assistant, who will be our knowledge assistant, who will help us understand things - things that we perhaps don't even know how to ask about very coherently, but things that we need to know anyway.
[END OF TAPE F400]
Interviewer:
YOU'VE SPOKEN ABOUT THE COMPUTER INFILTRATING MANY ASPECTS OF SOCIETY. WHAT ANSWER WOULD YOU GIVE TO A SKEPTIC?
Ceruzzi:
Well, if you tried to imagine a world today without computers, I think you can, and I'm sure there are many people who live comfortably without them. But for most of us, living as we do kind of caught up in the web of society, and interacting with other people, and doing kinds of things like traveling to work, or reading the newspaper, or listening to the radio, or doing various office kinds of work, ah, there are just computers involved in almost everything we do - from the moment we wake up with an alarm clock that's controlled by a microprocessor, to getting to work, to using the telephone. Ah, it would be hard to imagine any places that haven't been influenced in some way by these machines. [VOICE] I think one of the things that's happening with computer technology, as a direct consequence of the invention of the chip, which allows them to be made very small, is that they become embedded into other machines and we tend not to even know that they're there. Ah, their functions simply become part of the general functions of some device that we have - let's say an alarm clock, or an automobile, or a telephone, or a home appliance, or even furniture or draperies or lighting in the walls, or anything like that; they become smart machines. I think we see this phrase being used all the time - a smart toaster or something like that, or the military always talks about smart weapons, or a smart office copier. It means it has a computer built into it, except we don't like to use the word computer, because that implies something big, and in fact it's really the chip - but it's got all the functions of the ENIAC and then some in it, so it's a computer, and it's built into this other machine, and they're all over the place. I could never even begin to count them because they're just as common as can be.
Interviewer:
WE'RE ON THE VERGE OF A CONVERGENCE BETWEEN COMPUTERS AND COMMUNICATIONS. WOULD YOU REGARD THIS AS A CRITICAL STAGE IN THE SHORT HISTORY OF THE COMPUTER, THAT IT HAS THIS CAPABILITY NOW OF BEING NETWORKED WITH OTHER COMPUTERS?
Ceruzzi:
There's, there's a feeling that I have that all of the development of computing, from the time of the ENIAC up to the present day, is in a sense a prelude to what will be the real Chapter 1 of the computer age, and that chapter will begin when interconnection among computers is really achieved. At the moment we have rather rudimentary networks. We have some sophisticated networks that are in the hands of a few people, but the general public does not have anywhere near the kinds of access to networked computers, or marriages of computers and communications, that they do to these, ah, embedded microprocessors, for example. If and when we do get that kind of capability in the hands of ordinary people, I think we're going to see quite another transformation of society, easily as great as what we've seen in the past forty years.
Interviewer:
WHY IS IT SUCH A BIG DEAL THAT YOU CAN NETWORK COMPUTERS TOGETHER?
Ceruzzi:
Remember that computers are fundamentally symbol manipulating machines. They create universes - artificial universes, if you will - based on the manipulation of abstract symbols. Now the world is full of abstract symbols that are being generated by all kinds of people, ah, sources - cameras, microphones, transducers of all kinds - gathering information, coding that information, and if you can't exchange that information among the, ah, among the various parts of the world, you're missing something; you're missing a dimension there. And that really will complete the computer, I think: if you could have a way that the processor or the computer that you have in your possession, whether it be on your desk or whether you carry it with you, is somehow linked to all of the other places where there is information being created that you would like to use, or places where you would like to have information that you create be sent to.
Interviewer:
SAY THAT LAST BIT AGAIN. SO YOU SAY, WE'RE MISSING A DIMENSION IF WE'VE JUST GOT STAND ALONE COMPUTERS, PROCESSORS?
Ceruzzi:
I think that, ah, if we, if we, I think that to the extent that we have very powerful computers that we have personal control over, whether we have them at home or on our desk or carry them with us, ah, to the extent that these machines are not connected to one another, I think we're missing something. I think that if, if and when we come to a time when we are able to link these machines and have them communicate with one another in a very convenient and, and comfortable way, we will really complete the technology. It, it's missing a chunk right now. Ah, the connectivity will allow the true potential of symbol processing and after all that's what a computer is, of symbol manipulating, to spread out to all the places where symbols and, and information is created or where people might want to use it, it can be anywhere. And, at the moment, we just don't quite have that capability.
Interviewer:
AT THE MOMENT IF YOUR STAND ALONE COMPUTER HAS A PROBLEM IN IT, IT ONLY AFFECTS YOU. IF YOU HAVE A WORLD WITH LOTS OF COMPUTERS NETWORKED TOGETHER, ARE THERE PROBLEMS YOU SEE EMERGING IF THE SYSTEM SOFTWARE RUNNING IT HAS PROBLEMS, BECAUSE WE BECOME DEPENDENT ON THIS SYSTEM?
Ceruzzi:
I think that, ah, everyone is very much aware of the, ah, problems associated with networks as they exist today in rudimentary form, being poisoned by viruses or malicious programs or programs that contain erroneous data or things like that, ah. If we do indeed come to this world where everyone has a computer that is somehow connected to a global network that people access in a very friendly and convenient way, we will have, indeed very serious problems to contend with of not only malicious corrupting of the network or, or destruction of the network but also just the problem of how to distinguish good data from bad data, we'll have the equivalent of kind of the trashy novel, if you will, ah, driving away good literature from the book stores, ah, which some people feel is a very serious problem and is, is ruining the publishing business today. Ah, I don't personally feel that way but it, I do admit that there is that kind of issue to contend with and something like that will happen. And, ah, how we'll solve that problem, I really don't know, ah, I'm sure that people will be very much aware of it and I hope we do solve it.
Interviewer:
NOW, EVERYTHING THAT COMPUTERS DO DEPENDS ON SOFTWARE. IF SOFTWARE ISN'T WRITTEN PERFECTLY, A MISTAKE COULD HAVE ENORMOUS CONSEQUENCES. IS THIS A FUNDAMENTAL FLAW?
Ceruzzi:
In the academic world in recent years, we've seen the rise of something called software engineering, a term that implies that writing programs for computers is an activity analogous to building a bridge or a highway or a building. There are parallels, but I feel that fundamentally software production, writing programs, is very different from building a bridge. In building a bridge you have to contend with all kinds of possible unknowns, and there is always a chance that you'll miss something and the bridge may fall, but you have a way, grown up through tradition, of over-building it and building in safety factors, so that you're reasonably confident you've got something that will last. You just don't have that with software engineering. It's still true, despite the best efforts of software engineers for at least twenty years now, that a single misplaced hyphen or typographical error can cause an enormous program to simply come tumbling down and, as has happened in the past year, tie up the entire long-distance telephone network or completely disrupt stock transactions on the New York Stock Exchange, the tiniest little software error. If we're ever going to have true software engineering, where this activity, which is so important and is becoming more and more important to our lives, is reliable, we've got to figure out a way to make programs robust enough that they don't come crashing down when you have the tiniest little mistake in them.
Interviewer:
WE CAN'T GO BACK TO A WORLD WITHOUT COMPUTERS THEREFORE WE HAVE TO CONFRONT THAT CHALLENGE AND THAT CHALLENGE IS SOLUBLE. WHAT DO YOU THINK?
Ceruzzi:
It's awfully hard today to feel that there is any chance we could ever go back and return to an age when we weren't so dependent on what appears to be a fragile technology, especially computer software. I'm reluctant to say that, because if I were to say that, it would sort of imply that we have no free will, that we can't make any choices about our own future. I don't believe that. I believe that we do have choices. I also believe, however, that the potential benefits of these systems to our lives are so great that we ought to proceed with both eyes open, if you will, very much aware of the dangers of rampant programming of our world, rampant computerization without any regard to data integrity or software reliability or any of these things, abuses that we have seen cases of. I still think we should go ahead, but let's go ahead and understand it. I would never advocate going back and trying to return to some mythical age when there were no computers around, because it's just not possible.
Interviewer:
THERE'S A CERTAIN IRONY: BABBAGE ORIGINALLY CONCEIVED OF HIS MACHINERY TO RELIEVE PEOPLE OF THE TEDIUM IN WHICH THEY MAKE MISTAKES, REPLACING HUMAN BEINGS. THE MACHINERY TO WHICH IT HAS GIVEN RISE HAS BEEN IMMENSELY VERSATILE, BUT THERE ARE STILL HUMAN BEINGS IN THE LOOP. THE HUMAN BEINGS WRITE SOFTWARE, AND THIS SAME PROBLEM KEEPS RE-EMERGING.
Ceruzzi:
There's a certain irony in the fact that when Babbage developed his Analytical Engine, he was driven by a desire to replace the human beings who were writing out mathematical tables and making typographical errors when they did these things by hand. The irony is that today we have computers doing all kinds of things, but we still have that element of human error, in this case the errors of writing programs that have bugs in them, and these bugs can be disastrous for the computers and, in some cases, disastrous for the people who depend on them for something or another. It's obvious now that this problem, if you want to call it a problem, is not going to go away. I don't really call it a problem. I think that's because we are human, and to be human is to make mistakes; it's just something that's part of our existence. It's a problem only if you see it as a problem and don't also see that there are ways to solve it. Babbage was perhaps one of the first to look for a very practical way, although he didn't realize it in his lifetime, of solving his specific problem of producing mathematical tables, and it eventually became practical. Today I think we have an ability to see through some of the problems of human error, or whatever place the human being has in the loop of all these complex machines, and keep working through all that. I don't see that we have to be really concerned, or that we have to condemn the whole enterprise, simply because we know that human beings will make mistakes. Human beings make mistakes, and that's human nature.
Interviewer:
[Technical Discussion].
Ceruzzi:
Well let me just start. [VOICE]. It's ironic when you look at the history of writing that it began as a very utilitarian method of remembering how much [INTERRUPTION].
Interviewer:
START AGAIN.
Ceruzzi:
It's ironic, when you look at the history of writing, to find that it began as a very utilitarian method of helping people remember how much grain they had, so that they could prepare for a harvest or a wintertime or something like that. And from those very humble, utilitarian beginnings came poetry and literature and all the wonderful things that we associate with writing today. Now bring ourselves up to the invention of the computer, and the very same thing is happening. It was invented for a very utilitarian, prosaic purpose: doing calculations, relieving the tedium and drudgery of cranking out numbers for insurance companies or something like that. And now we begin to see this huge culture that's grown up of people who are discovering all the things you can do by playing with the computer. They don't have any particular thing in mind; the computers are cheap enough that they don't have to worry about the expense, and they're finding all kinds of things in the realm of art and play and games. But out of these activities are coming all kinds of new developments that all of us, even those who are not really computer aficionados or computer literate or anything like that, will just love to use, and our lives will be enriched because of them.
Interviewer:
ARE YOU SAYING THAT WE'RE IN THE PROCESS, THAT IT'S ROOTING ITSELF IN THE CULTURE AT THE MOMENT? IT'S TRYING TO FIND…
Ceruzzi:
As a result of this activity, primarily conducted by young people who have grown up with computers around them, computers which are cheap enough that they don't have to worry about wasting money playing with them, as a result of this activity of using the computer for play, the computer and its many applications are rooting themselves in our culture. It's becoming a part of our environment, just as words in newspapers, magazines, and books, and on billboards and everywhere else, have become rooted in our culture.
Interviewer:
AND SO, IS IT YOUR SPECULATION AS A HISTORIAN THAT WHEN PEOPLE IN 100 OR 200 YEARS LOOK BACK, THEY MIGHT NOT REMEMBER THE COMPUTER AS A MACHINE FOR SCIENCE AND BUSINESS, BUT MAYBE AS SOMETHING DEEPER? WOULD YOU GO THAT FAR?
Ceruzzi:
If we could draw the analogy with the invention of writing, we may see a time in the not too distant future when people will look at the computer's impact on society and totally forget its humble beginnings as a calculating device, a device that assisted mathematicians and physicists. Instead they will think of the computer in terms of the way it has transformed very basic notions of culture and of their daily activities, in all the kinds of richness and complexity that we see, for example, in modern literature or the cinema.
[END OF TAPE F401]
Interviewer:
SO, ARE YOU SAYING THEY MIGHT NOT THINK OF IT IN TERMS OF ITS HUMBLE ORIGINS? INSTEAD, HOW?
Ceruzzi:
I think it's possible, in the not too distant future, that people will think of the computer not at all in terms of its humble origins as a calculator, a device that was invented to assist engineers with some kind of routine engineering problem, but rather as something that enriches their culture in undreamed-of ways, just as literature and art are perceived today in this world. [ROOM TONE].
Interviewer:
[MISCELLANEOUS CONVERSATION] SOFTWARE BUGS ARE INEVITABLE; THEY HAVE BEEN SO FAR. ARE THEY MORE SERIOUS IN A NETWORK THAN IN A STAND-ALONE SITUATION?
Ceruzzi:
As beneficial as networks are, there are some problems associated with them, in particular the problem of software reliability. If a program on your home computer has a bug in it, that may not be so good for you, and it could potentially be disastrous depending on what the program is, but it won't spread beyond your own environment. If you have a bug in a network system that spreads out across the world, or that perhaps controls something vital to human life in a city or a nation, you have much more serious potential for disaster. One single software error could be the equivalent of an earthquake or a hurricane.
Interviewer:
ACTION.
Ceruzzi:
If you look at a dictionary published before the war, you find that a computer was defined as a person, one who computes, a calculator, a reckoner. [INTERRUPTION].
Interviewer:
ACTION.
Ceruzzi:
If you look at a dictionary published before the war, you find that a computer was defined as a person, one who computes, a person employed to make calculations in an observatory or in surveying or something like that. Now, if you take a look at the definition [VOICES] in a dictionary published after the war, you find that it has changed dramatically: now a computer is defined as a machine, specifically an electronic device that performs complex calculations automatically. This definition changed, in the ten-year period between 1935 and 1945, from a person to a machine.
[END OF TAPE F402]