THE MACHINE THAT CHANGED THE WORLD - TAPES F178-F181 MARVIN MINSKY
Interviewer:
[INAUDIBLE QUESTION]
Minsky:
The development of computer science produced a new kind of thinking, a new style of explanation for which there was really no precedent. And the idea was, how do you describe a process? For example, mathematics is good at describing things: geometry is good at describing curves, and it's good at describing mechanics, differential equations for a solar system. But it's not good for describing even a simple machine whose behavior depends on the things that happen as they come in. Mathematically, the mathematical world is, to me, sort of frozen. But once we had the idea of a computer we could start saying: how can I make something that learns? How can I make something that adapts to its environment? How can I make something that stores responses to a million different situations and changes and adapts? And there wasn't any way to think about this before 1950, in our own lifetime. So all of philosophy, to me, is simply bad psychology. The philosophers are struggling with the idea of trying to describe these complicated processes that we're made of in terms of simple things like logic and geometry: static, low-level, crude, coarse, vulgar descriptions compared to the subtlety of any process that has a million parts and a million bits of memory and responds to a million different conditions in different ways. There just was no science of such a thing before 1950.
Interviewer:
SO LITTLE WAS KNOWN ABOUT THE STRUCTURE, THE WORKINGS OF THE BRAIN THAT IT DIDN'T OFFER A ROUTE EITHER.
Minsky:
Even to this day, no one knows very well how the nerve cells in the central nervous system work. There are a few theories, but I think any sensible person would agree that there is only a small chance that the first few theories are right. And so most of what we think we know about the human brain today is by analogy with what we really know from computers.
Interviewer:
NOW, ONE THING SOME PEOPLE HAVE DIFFICULTY WITH IS THIS IDEA THAT YOU COULD STUDY SOMETHING LIKE THE WORKINGS OF THE HUMAN MIND ON A MACHINE WHICH DIDN'T RESEMBLE IN ANY WAY THE ARCHITECTURE OF THE BRAIN. WHAT WAS THE LOGIC, WHAT WAS THE ARGUMENT BEHIND THIS?
Minsky:
I think there are some very different views of the same thing circulating today, because some people say the brain is a biological thing, it's very different from a computer: it's soft, computers are hard; it's cold, people are warm; the shapes of the cells are variable and curved, whereas the shapes of machines are square. But this is missing the point, I think. The brain is different from the rest of biology in a very important way, because in most of biology there are a lot of interactions and things are rather complicated. What was discovered around 1890, it's only a century old, is that each brain cell is more or less separate from the others. It's a separate machine. It's certainly in a bath of chemicals, but those chemicals are very carefully controlled. And so when certain conditions occur around the periphery of a nerve cell, then it suddenly goes bang, and that's absolute. It's very simple and clean, just like the parts in a computer: the signal flows along the fiber, it goes to the end, makes some conditions for another neuron. So you might say that the cells of the nervous system are the closest thing in the universe to the transistors and gates of a computer. So I find it very funny that some people will say, oh, it's so different, and then they mention all the superficial features and they miss the important feature, which is that unlike the liver and the heart and all the other parts of the body, which are sort of continuous and analog, it's the nerve cells which are digital: they're on or off, though certainly they're very complicated digital devices. And they're the thing in the universe closest to computers. So it's funny to emphasize the difference instead of the similarity.
Interviewer:
BUT THERE IS ROOM FOR THE ARGUMENT THAT THE PROCESSES THEMSELVES CAN BE STUDIED INDEPENDENTLY OF ANY IMPLEMENTATIONS.
Minsky:
Well, one of the problems with the brain, and the reason why today, in 1990, we still don't know how a cortical granule cell works: most of the brain is cortex, and it is believed to have a hundred billion or more cells. And what we know about the cells of the brain we get from our knowledge about the crawfish and the sea anemone, because they stay alive for days in cool water, whereas mammalian brains die in five or ten minutes if they're disconnected. And so it seems to me that the idea has grown that you can't study a brain the way you could study a typewriter or an electric circuit. It's very hard to study it because it's so delicate, but that's not a matter of principle, that's a matter of bad luck. If the brain were a thousand times larger you could crawl around and attach clip leads and study it just like any other machine. And so the general belief has grown that there's something different about it. Again, people are confusing essentials with accidents. There's a reason why your brain cells are a billion times smaller than you: in order to be intelligent you have to have a billion brain cells, so you have to be a billion times larger than they are. So I think all animals in the universe, wherever they may be, will have a little trouble at first understanding their own brains, because you need so many cells to be intelligent. But we're developing instruments, and I think in another ten or twenty years we'll be able to connect things to brain cells and find out just how they work. And it won't be so hard after that.
[DIRECTION].
Interviewer:
WHAT SORT OF THINGS DID THE EARLY AI PIONEERS TRY TO DO AND WHAT WAS THIS IDEA OF SEARCH?
Minsky:
Maybe the first important idea about how machines could be smart was to notice that machines were tireless and they were becoming fast. And so, if you had a certain kind of problem which might be hard for a person, it would be easy for a computer to solve it, if the computer could simply try every possibility. Now, only certain problems are small enough for this to work, but I remember once there was a wonderful puzzle called pentominoes. I should have a picture of it. But anyway, it's a rectangle, which I think is 6 x 10, of little squares, like most of a checkerboard, and there are a bunch of little pieces, each of which is made of five squares. There's five in a row, and there's a corner-shaped one, 1, 2, 3, 4, 5, and an L, and a zigzag like a W. I think there are twelve different pentominoes that you can make. And it's a wonderful puzzle for somebody to put all of these pieces into one rectangle. People find it very hard. They fuss around. Some people fuss for hours and can't do it. Well, Stewart Nelson, who was one of the hackers here, once said, well, that's easy. He wrote a little computer program that tried all possibilities and it generated all the solutions in a couple of minutes. So there's an example where the computer wasn't very smart. But it could simply try: after I put this piece in, there are eight ways to put the next one, and six ways to put the next one, and five ways, and it just did all those possible ways. So, if you have a small enough problem and a way to tell when it's solved, then you can use what we call exhaustive search.
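The exhaustive search Minsky describes is easy to sketch in modern Python. This is a minimal illustration, not Stewart Nelson's program: it tiles a tiny board with dominoes instead of the twelve pentominoes, but the strategy of trying every placement and undoing it (backtracking) is the same.

```python
# Exhaustive backtracking search, the technique Minsky describes,
# demonstrated on a small stand-in problem: tiling a board with
# 1x2 dominoes instead of the full pentomino set.

def count_tilings(rows, cols):
    """Count ways to tile a rows x cols board with 1x2 dominoes."""
    board = [[False] * cols for _ in range(rows)]

    def first_empty():
        for r in range(rows):
            for c in range(cols):
                if not board[r][c]:
                    return r, c
        return None

    def search():
        cell = first_empty()
        if cell is None:
            return 1                              # board full: one complete tiling
        r, c = cell
        total = 0
        # try a horizontal domino covering (r, c) and (r, c+1)
        if c + 1 < cols and not board[r][c + 1]:
            board[r][c] = board[r][c + 1] = True
            total += search()
            board[r][c] = board[r][c + 1] = False  # undo (backtrack)
        # try a vertical domino covering (r, c) and (r+1, c)
        if r + 1 < rows and not board[r + 1][c]:
            board[r][c] = board[r + 1][c] = True
            total += search()
            board[r][c] = board[r + 1][c] = False
        return total

    return search()

print(count_tilings(2, 3))   # 3 distinct tilings of a 2x3 board
```

The same try-every-placement loop, with twelve irregular pieces and their rotations, is all a pentomino solver needs; only the branching is larger.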
Interviewer:
DID MANY OF THE EARLY PROBLEMS FALL INTO THAT CATEGORY?
Minsky:
Yes, a few problems fell into that category and it worked very well. But one of the things we wanted to do was get a machine to play chess and checkers, because people like those games. And it turned out that for those games there are too many possibilities. Because if I make a move in chess, there are about 30 things you can do typically, and then the other person can make about 30 replies. So in two layers that's 30 times 30, which is about a thousand. Four layers, it's a million, and at six layers you're up to a billion possibilities. And a computer today could do a billion chess moves in a minute or two, but in those days it would take weeks or more. So you're sort of getting a tree: each limb has 30 branches, and each branch has 30 twigs, and each twig has, that sort of thing. And so you have to prune the tree. So a lot of AI research in the 1950's was finding wonderful new ideas about how to reduce these search trees so that you could solve harder problems in the same time. And I'd say there were ten years of progressive and interesting discoveries about that.
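The arithmetic and the pruning idea above can both be sketched briefly. The alpha-beta search below runs on an artificial random-valued tree standing in for chess (not a real game): it returns the same value exhaustive minimax would, while evaluating only a fraction of the leaf positions.

```python
# Minsky's arithmetic: with ~30 legal moves per position, the game
# tree grows as 30**depth.
import random

for depth in (2, 4, 6):
    print(f"depth {depth}: {30**depth:,} positions")

# Alpha-beta pruning on an artificial tree (random leaf values stand
# in for a chess evaluation). When beta <= alpha, the remaining moves
# at that node cannot change the result and are skipped.
def alphabeta(depth, branching, alpha, beta, maximizing, rng, counter):
    if depth == 0:
        counter[0] += 1                   # count evaluated leaf positions
        return rng.random()
    best = float("-inf") if maximizing else float("inf")
    for _ in range(branching):
        val = alphabeta(depth - 1, branching, alpha, beta,
                        not maximizing, rng, counter)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:                 # prune: opponent won't allow this line
            break
    return best

counter = [0]
alphabeta(6, 5, float("-inf"), float("inf"), True, random.Random(0), counter)
print(f"leaves visited: {counter[0]:,} of {5**6:,}")
```

Even on this toy tree of depth 6 and branching 5, pruning skips a large share of the 15,625 leaves, which is why ideas like this let the same hardware search deeper.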
Interviewer:
SOME OF THESE PROJECTS HAD REALLY QUITE SPECTACULAR SUCCESS, AND THEY PRESENTED THIS PARADOX: THAT THESE WERE AMONG THE THINGS THAT MOST HUMAN BEINGS CONSIDER VERY DIFFICULT TO DO. YOU MENTIONED PREVIOUSLY, SLAGLE'S PROGRAM.
Minsky:
A wonderful example was, and I think he was my second graduate student, Manuel Blum was the first, Jim Slagle decided to write a program which would try to solve the kinds of problems that MIT students do in first-year calculus mathematics. And the problem was doing what we call symbolic integration. And he wrote a program which consisted of more or less a hundred kinds of rules or suggestions, let's say: if you see 1 minus X squared in a math problem, the mathematicians know it's very tempting to say X equals sine Y, a trigonometric thing. (The reason has to do with the fact that in a circle the equation is X squared plus Y squared equals 1.) It's very familiar to mathematicians, very alien to everyone else. But anyway, Slagle wrote down about 20 rules of thumb, suggestions for how to solve a calculus problem, another 20 or 30 rules for how to do high school algebra, because you can't do calculus without algebra. And then a profound set of another dozen or so suggestions about how to tell when a problem is getting too hard and you should try another one. Well, you see, in calculus, to solve one of these things, there are many things you can try; like chess, you could try moving this pawn or that knight. And in calculus you could try using a logarithm here, or a trigonometric function, a sine or a cosine, or just multiplying or dividing. And there are all sorts of alternatives when you're doing the algebra. So he wrote down these, about 100 rules altogether. The machine would try various ones and then would use the special set of rules, I call them rules of fear: if the thing got too complicated, it would say, that's not good, it's too complicated. But if it seemed to be getting simpler, it would follow it further.
The amazing thing, and this was 1960, just a couple of years after we started, was that the thing got an A on the MIT exam, and it was frightening. It was doing as well as the average student or maybe slightly better. It couldn't do some problems, could do others. And the kinds of problems it solved were pretty much like the kind the students do.
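The flavor of Slagle-style rules of thumb can be caricatured in a few lines. This is an invented illustration, not SAINT's actual machinery: real rules operated on parsed expressions inside a goal tree, whereas here they simply pattern-match on a string to suggest a next step.

```python
# A toy table of heuristic "suggestions" in the spirit of Slagle's
# integration rules. The rule set and patterns are invented for
# illustration; the point is that each rule recognizes a feature of
# the problem and proposes a transformation, rather than searching
# exhaustively.
import re

RULES = [
    (r"1\s*-\s*x\*\*2",  "substitute x = sin(y), since sin^2 + cos^2 = 1"),
    (r"1\s*\+\s*x\*\*2", "substitute x = tan(y)"),
    (r"x\*\*2\s*-\s*1",  "substitute x = sec(y)"),
    (r"e\*\*x",          "try substitution u = e**x or integration by parts"),
]

def suggest(expression):
    """Return the heuristic suggestions whose patterns occur in the input."""
    return [advice for pattern, advice in RULES
            if re.search(pattern, expression)]

print(suggest("integral of sqrt(1 - x**2) dx"))
```

A full system would then score each resulting subproblem, abandoning the ones that look "too complicated", which is Minsky's "rules of fear".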
Interviewer:
THEN, A FEW YEARS LATER, YOU HAD ONE WHICH DEALT WITH HIGH SCHOOL ALGEBRA, WHICH WAS A MORE DIFFICULT PROBLEM.
Minsky:
Well, of course, this calculus problem is a very purely formal problem. It's all in equations and mathematical expressions. And although those are frightening to some people, they're simpler than words. Danny Bobrow in 1964 started a program to do the same sort of algebra, except that it would be word problems. Problems like, John is twice as old as Mary was when she was the same age as, you know those; those are things that high school students find very difficult, actually, although the algebra is terribly easy. But the problem is that the words have too many meanings. If you look at mathematical symbols in algebra, maybe it's like a language of only 20 words: plus, minus, X, squared, equals; such a small vocabulary in pure mathematics. But the vocabulary of ordinary language is ten thousand words for a normal person, a hundred thousand for a very articulate person. And so the kinds of things that every little child learns to do, like talk, with ten thousand words, are much, much harder, we found to our surprise, than solving the kinds of things that a Ph.D. mathematician would do in a world of expertise. And this went on for 20 years, from the middle 1950's to the middle 1970's. We found it became easier and easier to get machines to do things that people admired as expertise, but it was very hard to creep downwards from the college level to the adolescent to the child to the infant and see if we could get a machine to learn the kinds of things that everybody considers perfectly natural and simple and obvious.
Interviewer:
YOU SAID IN ONE OF YOUR PAPERS THAT ACTUALLY AI SHOWED UP A TENDENCY TO REGRESS TOWARDS INFANCY.
Minsky:
Yes, a very strange field, because it had this backwards regression. We started in the 1950's: Newell, Simon, and Shaw, in Pittsburgh, wrote a program that did very amusing things with mathematical logic, and it proved theorems in mathematical logic. They were pretty hard; they found a proof that was better than the best one that Russell and Whitehead had found. And Bertrand Russell was sort of impressed. And, you know, the machines were starting to play chess and do calculus and that sort of thing. Everybody was very impressed, because machines were doing hard things. But what we began to see is that the things that people think are hard are actually rather easy, and the things that people think are easy are very hard. We could do the calculus with just a few hundred pieces of program, but to learn language, to recognize faces, to walk, to put your clothes on, and to do the kinds of things we expect every child to do, we still can't do with the robots and the AIs of 1990.
Interviewer:
LOOK AT SOME OF THOSE THINGS THAT TURNED OUT TO BE HARD AND TRY TO UNDERSTAND WHY THEY'RE HARD. WHY SHOULD UNDERSTANDING LANGUAGE, UNDERSTANDING CHILDREN'S STORIES, WHY SHOULD THAT BE A DIFFICULT TASK?
Minsky:
There are several ways to explain this. A humorous way would be that it took about three billion years to go from the first cells to the vertebrates, the fish and the amphibians and the reptiles, and then 400 million years to go from those first animals to the chimpanzee, and then just 4 or 5 million years to go from the chimpanzee to man. So you might expect, in that sense, that the kinds of things that the chimpanzee or the child can do are very hard, and the difference between the chimp and the man, which is playing chess and doing calculus and things like that, the difference between a kid and an adult, would be relatively simple in a sense. First you had to get the basic brain that's able to learn complicated things; the complicated things themselves are nothing. So that's one reason. I think the other reason is, why was it easy to build these expert systems? And this is my own theory: if you look at the expert systems out there today that do such good things, like chess, each one is based on a certain way of representing the world. We call this representation of knowledge, or a model of the world, or something. And these wonderful, high-powered programs each use one way of representing the world and one way of representing knowledge. But in language, each thing we do uses, I suspect, three or four major different kinds of representations and maybe 20 or 30 minor ones. And so everything that an ordinary person does in ordinary life consists of maybe 20 different ways of perceiving and all the relations between them. It's much more complicated than the kind of precise, narrow thing that an expert does. For example, when I see a dog, I recognize it as a physical object, and part of my brain says, oh, that interesting thing weighs about four pounds and it has this color and so forth. And another part says, it seems to want something.
So I have an emotional, not emotional, but I have a social reaction to it in terms of social communication, and maybe a defense mechanism: I have to treat this as a threatening situation. Is it going to bite? When you meet a person, you're discussing a particular topic, you're wondering how you're getting along with them, you're trying to cope with cultural differences. If I meet somebody and they say, where do you live? If they're a foreigner I say, I live in Boston, or I live on the East Coast of the United States. If they're somebody from this area I say, Brookline. But I know that strangers don't know where Brookline is. They might have heard of Cambridge. And so, every time a word comes in, the way I react to it depends on many different other kinds of knowledge. I don't think these problems are unsolvable at all. In fact, in The Society of Mind I proposed some theories of it. But I feel that the research community working on artificial intelligence got so addicted to its success with expert systems that almost everybody in the community is saying, if we just get exactly the right representation we can solve all problems. And I think the reason why it's hard to get a machine to behave like a child is that it's not finding the right representation that's important at all. It's finding six or ten representations and discovering how to manage the relations between them. I don't think that's a very hard problem. But, for some reason, no one works on it. It's outside the scope of what people consider their job.
[END OF TAPE F178]
[BEGINNING OF TAPE F179]
Interviewer:
YEAH, THE, YOUR THOUGHTS ON I WONDER IF YOU COULD TELL ME HOW [INAUDIBLE] MIND
Minsky:
There's a garbled story that Gerry Sussman...
Interviewer:
WAS THAT TRUE OR NOT?
Minsky:
What was true was that we had vision projects, trying to get the machine to see these blocks, and it wasn't working very well. Now, there was a freshman named Gerald Sussman, and I decided that, since the vision project wasn't working very well, everybody must be on the wrong track. So I put him in charge of it for one summer, because he had had many ideas of his own, and I thought there might be a good chance that he, as a beginner, would do better. And he didn't do worse than the others. The legend changed; I've seen it written that Minsky put a graduate student in charge of the project. That just shows how conservative people are. It was actually a freshman, and he's now a professor. A rather good freshman.
Interviewer:
WHY WAS [INAUDIBLE]
Minsky:
Well, we decided we wanted to make a machine that interacted with the real world, and a nice way to do that would be to give it eyes and hands. So I decided we should try to get the computer to be able to see things, and when it sees something it should be able to do something with it, pick it up. And that turned out to be very complicated indeed, because when you try to recognize an object, it's easy enough to get a picture into a computer. We made circuits and used things like television cameras, which were just beginning to be usable. The trouble is that a block or a box keeps being different. Move it this way, it's a different shape, and so you almost never see the same thing twice. Sometimes there are shadows on it. Sometimes it's darker or lighter; different boxes have different surfaces, with writing on them. So even though to you or me or a child the idea of seeing a block seems simple, it's very, very complicated. It's out of focus sometimes; if the light on two sides is just the same intensity, you cannot see the edge. There are plenty of problems. And it turned out that somebody would write a computer program to locate a block, and it would work on three blocks out of ten or five blocks out of ten; it just wouldn't find the others. And another person would write another program to find blocks with a different idea, and it would work on different blocks. It wasn't that you could say each program got a score of 40% or 80% or something. Each different program would be better or worse at different jobs. And what I began to sense was that we should stop looking for a very good vision program, and this concept came to Papert and me around the same time. We were working together. The idea was: all right, let's see if we can get ten pretty good vision programs, and manage them, and see which of them seems to be working in different situations.
So the idea of the society of mind was that in the brain, or in the mind, or in the computer, you shouldn't look for perfection, and you shouldn't go around trying to debug programs and find the best possible way to do something. You should find a lot of different ways and have different resources, and then you should make managers that can decide under which circumstances to use which ones. Now, I still think this is the way to make big programs work better, but no one does it. And so this idea, which is from about the late 1960's, and now it's the early 1990's, in spite of how simple and clearly correct this idea is, it hasn't caught on, and I'm very disappointed in my colleagues and people in the field of AI in general. For some reason they've gotten fixed on the idea, "Let's get it right." And that's wrong. There is no right in the world.
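The manager idea can be sketched concretely. The two "detectors" below are hypothetical stand-ins, each deliberately better in different lighting conditions, and the manager simply keeps the most confident answer rather than searching for one perfect program.

```python
# Several imperfect recognizers plus a manager, in the spirit of the
# society-of-mind idea. The detectors are invented stand-ins (an
# "image" here is just a list of pixel intensities), not real vision
# code; the point is the coordination, not the detectors.

def edge_detector(image):
    # works well on high-contrast scenes
    contrast = max(image) - min(image)
    return ("block", contrast / 255) if contrast > 100 else ("unknown", 0.1)

def region_detector(image):
    # works well on evenly lit scenes
    mean = sum(image) / len(image)
    spread = max(abs(p - mean) for p in image)
    return ("block", 0.9) if spread < 30 else ("unknown", 0.2)

DETECTORS = [edge_detector, region_detector]

def manager(image):
    """Run every detector and keep whichever answer is most confident."""
    return max((d(image) for d in DETECTORS), key=lambda r: r[1])

high_contrast = [0, 255, 0, 255]
evenly_lit = [120, 125, 130, 122]
print(manager(high_contrast))   # the edge detector wins here
print(manager(evenly_lit))      # the region detector wins here
```

Neither detector scores well on both inputs, but the manager does; adding a third detector changes nothing except one entry in the list.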
Interviewer:
I MEAN, JUST BEFORE WE GO ON, THERE'S ONE SENSE IN WHICH YOU SAID BEFORE THAT IT HAD BEEN POSSIBLE TO CAPTURE THE KNOWLEDGE OF AN EXPERT. TAKING SOMETHING LIKE A CHILD STACKING BLOCKS OR LISTENING TO STORIES, WHAT WAS DIFFERENT IN THE KIND OF THINGS THAT CHILDREN KNEW, AS COMPARED TO EXPERTS?
Minsky:
Well, what happens in understanding a simple story? My friend Roger Schank, at Yale, now at Northwestern, had many of his students work on the problem of getting a machine to understand a simple children's story. We did that here and at a few other places. And that's different from doing calculus or playing chess, because in chess there are a few rules, and it's pretty clear what to do. It's very hard to know how to play as well as a human, and nobody's figured it out yet, although the programs now play better than most humans, maybe very much better. But they don't do it the same way, and they're still using a lot of search. But when you understand a story, you come across a word like "boat", and what does "boat" mean? Well, that's a bad word, but when you look at it you know so much. In calculus, if you see a sine function, you only need to have a few rules, at least to do calculus. But for boat you have to know different kinds of boats for different kinds of water. They're kind of...
Interviewer:
TRY ANOTHER WORD BECAUSE THIS IS KIND OF..
Minsky:
Yeah, abandon the damn thing.
Interviewer:
WHAT ABOUT..
Minsky:
Really a terrible word.
Interviewer:
OR I WONDER WHETHER ANOTHER WAY OF APPROACHING THIS WOULD BE THROUGH ONE OF CHARNIAK'S STORIES.
Minsky:
I think a very simple one in linguistics is the word take. John took a trip to Mexico. A very strange word: sometimes take means to obtain a physical object, to take it away from someone else; it's a social thing. Taking a trip, I don't know what that means. Take a look. You see, if you look in the dictionary you'll find forty or fifty different entries, and so here's a set of meanings, a set of processes to be applied to the rest of the words. And you have hundreds of them in your head, because the ones in the dictionary are just families of these. You have to use the other clues to decide which of these hundreds of different mental procedures to apply to the rest of the words, and each of the words has the same thing. But what I'm saying is that to understand even a simple sentence you have to know thousands of different things. And my example before is, to get an "A" in that part of freshman calculus you only have to know a hundred things. Now, I'm not saying that it passed the whole calculus course; I don't want to oversell this thing. But it did a certain large part of it, the formal integration, and that's the part that people considered expert. Well, to get it to understand a simple child's story, you have to know so many hundreds and thousands of things, and they're different kinds of things. In a little child's story, some word will talk about geometric shape or the nature of space. Another one will talk about time. Another thing will engage social relations, and a normal person's fear of the unknown, or greed, or acquisitiveness, or territorial defense. And just to begin to talk to a four-year-old you have to know all those things. What I'm saying is, for a beginner to play chess you have to know a hundred rules, and if you do a little exhaustive search you can avoid the simplest disasters. To talk to a little child, maybe you have to know a hundred thousand things.
So what I'm saying is, these simple things, like understanding a little story, seem to me maybe a thousand times more complicated than at least beginning to approach human competence in narrow expert domains.
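Minsky's point about the word "take" can be caricatured in a few lines: many stored senses, with the other words in the sentence used as clues to pick among them. The sense inventory and clue words below are invented for illustration; a real lexicon would have dozens of senses per word and far richer selection machinery.

```python
# Toy word-sense selection for "take": each sense carries clue words,
# and the sense whose clues best overlap the sentence wins. Both the
# senses and the clue lists are hypothetical examples.

SENSES = {
    "obtain":   {"book", "pen", "money", "object"},
    "travel":   {"trip", "bus", "train", "plane"},
    "perceive": {"look", "glance", "peek"},
    "measure":  {"temperature", "pulse", "reading"},
}

def sense_of_take(sentence):
    """Pick the sense whose clue words overlap the sentence most."""
    words = set(sentence.lower().replace(".", "").split())
    best = max(SENSES, key=lambda s: len(SENSES[s] & words))
    return best if SENSES[best] & words else "unknown"

print(sense_of_take("John took a trip to Mexico"))   # travel
print(sense_of_take("Take a look at this"))          # perceive
```

Even this caricature makes the scale argument vivid: four senses need four clue sets, and every other content word in the sentence needs the same treatment, which is where the "hundred thousand things" come from.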
Interviewer:
NOW, IS IT TRUE TO SAY THAT MOST PEOPLE IN ARTIFICIAL INTELLIGENCE MOVED TO A CONSENSUS AROUND THE KNOWLEDGE-BASED PARADIGM, EVEN IF THEY DIDN'T GO ALONG WITH YOUR SOCIETY OF MIND?
Minsky:
Well, most people in every field end up in a few clusters of establishment. I don't know the statistics, but certainly in the applied area a large proportion of people use rule-based expert systems. In the world of research, a large number of people use languages like Prolog, which make it easy to work with rule-based systems. In America, and maybe most of the world, when it comes to representing knowledge, by far the majority of people, I think, use something related to mathematical logic. The others use frames and scripts and rules, but the most popular ways of doing things are always the ones that were best established twenty years before.
Interviewer:
SO EVERYONE AGREES THAT SORT OF KNOWLEDGE IS THE THING TO REPRESENT, THAT'S THE KEY, BUT YOU'RE SAYING..
Minsky:
Everyone agrees that you can't have an ignorant but brilliant machine be very good at solving problems, because in order to solve a problem you'd better know something about the subject; otherwise you have to make all the evolutionary mistakes. But what they don't agree on is how to represent the knowledge, and I'm afraid that mostly they fight about which representation is best. I feel that we have a dozen pretty good representations, and I wish there were a hundred people working on the managers. Because I'm pretty sure that in the brain, things in the visual cortex are represented one way, maybe by sort of two-dimensional structures, and in the auditory cortex another way. One part may use rule-based stuff, another uses something called semantic nets, other parts use frames, and so on through all the different representations that the researchers in artificial intelligence have developed. I don't know one of the dozen or so popular representations that isn't better at something. And so I suspect that the brain has evolved lots of knowledge representations. The exciting problem is how to coordinate them.
Interviewer:
THE PROBLEM WHICH ALL THESE APPROACHES WERE AIMED AT IS COMMON SENSE KNOWLEDGE, PRESUMABLY. WHAT'S MEANT BY COMMON SENSE KNOWLEDGE?
Minsky:
Ooh, I think the expression common sense knowledge has a couple of flavors, and they are almost contradictory. Maybe the literal meaning is common sense knowledge that everybody shares, and you could trace that back, for example, to childhood. Every child knows what a parent is, except one that doesn't, and we don't care much about exceptions here. So every child knows something about family, and every child knows something about social relations: that if you hit somebody they make an expression suggesting annoyance or pain, and if it's another child it might fight with you. So many thousands of little things. Everyone knows that if you hold something and release your grip, it falls. They don't know about gravity, but they know that; this is common sense. There's no person that you can communicate with who doesn't know the same things you do about space and time and social relations and geometry and language and whatnot. How large is this database that we all share? I suspect it's about ten million items or units, whatever units are. But nobody knows; it depends on your representation of knowledge. Now, there's another thing when we say common sense: common sense reasoning. It's as though there's a kind of thinking which is very simple and obvious and everyone has it. And that's, I think, a bit of an illusion. The kinds of reasoning we find most simple are perhaps the most complicated and highly evolved ones in our brains.
Interviewer:
[INCOMPREHENSIBLE QUESTION] WHAT DO YOU MEAN BY COMMON SENSE REASONING, AS OPPOSED TO COMMON SENSE KNOWLEDGE?
Minsky:
I think an example of common sense reasoning is that if you see something move, then you say, well, either it's an animal and it moved for a reason of its own, or it's a physical object that's inanimate and it must have been pushed or blown or something like that. So we all share kinds of reasoning: when we see something happen, we make explanations. And these, I think, are rather complicated and very important, and nobody knows very much about them.
Interviewer:
MANY OF THE CRITICS OF ARTIFICIAL INTELLIGENCE AT THIS POINT LISTENING TO YOUR DESCRIPTION OF COMMON SENSE KNOWLEDGE OR COMMON SENSE REASONING, MIGHT SAY, OR DID SAY, THAT THESE THINGS CAME FROM GROWING UP WITH A BODY AND THAT THE IDEA OF A DISEMBODIED ARTIFICIAL INTELLIGENCE WAS THEREFORE IN QUESTION BECAUSE OF THIS.
Minsky:
I think there was some criticism that somehow you couldn't separate the brain from the body and the world, but I never could understand what the critics who were talking about that had in mind, because of course, if a machine has to learn something, it has to have some environment from which to learn it. And if the machine is going to be competent in dealing with the physical world, then either it will need a body to experiment with the world, or else it will need a little computer that simulates the kind of physics that you need for a world, like an airplane simulator. But there's one other thing that those critics didn't understand, which is that if you could program into the machine the same knowledge that you would get by experience, without learning, then it would understand it just as well as if it had learned it. And so there's a lot of confusion about the present state of a person. Suppose that you have a normal person and they become paralyzed. Will they still understand the world? They're not interacting with it. So it was nice to have a controllable body for learning through. But it's not philosophically or technically important; it's just convenient, and there are other ways that it could learn. The brain, after all, isn't in the world. It is imprisoned in the skull, in this dark, moist, quiet place, and it's only connected to the world by video cables of a sort.
Interviewer:
THE QUESTION I'VE ASKED SOME OF THESE PHILOSOPHERS IS, IF YOU HAD A BABY THAT WAS BORN BLIND AND PARALYZED, WITH ONE CHANNEL, THE AUDITORY CHANNEL, SO IT BASICALLY LEARNED ABOUT THE WORLD THROUGH LANGUAGE, WOULD SUCH A PERSON BE ABLE TO USE LANGUAGE WITH COMMON SENSE? WHAT'S YOUR OPINION?
Minsky:
It seems to me that the reason people are as smart as they are is that they have several ways of representing knowledge. If you have just a single way to represent knowledge, say as strings of words, the chances are that you might get stuck: you try every way you know of solving a problem, they don't work, and there's nothing else to do. If you have a visual way to represent the world, and an auditory way, and a logical way, and a possessional way, and a political way, and so forth, then whenever you're trying to solve a problem and you get stuck, you can shift to another way. And so the more modalities, the better. It's not that you have more senses; it's that each part of the brain connected to a sense organ has actually evolved a different kind of hardware. And so the person that's born deaf is a little bit handicapped, because they don't have access to one way of dealing with the world. Now, in fact, if they're well educated, they may become better at solving most kinds of problems than hearing people or sighted people, because they can overcompensate. I believe every person has a dozen ways of representing knowledge, and if you're blind you lose a couple of them; you've still got eight left. But if you lose all but one, if you lost all but the sense of touch, then it might be very difficult. Helen Keller is a person who, I think she got meningitis after she had some memories of seeing and hearing, and it's much harder with babies who are born with no senses at all. I don't know if there are any.
Interviewer:
THERE WAS AN EXAMPLE WE FOUND, FROM COLIN SAXE, OF A PERSON BORN BLIND WITH CEREBRAL PALSY, AND VIRTUALLY SHE KNEW MOST...
Minsky:
Blind and deaf?
Interviewer:
NOT DEAF, NO. BUT STILL, IT'S REMARKABLE THAT MOST OF WHAT SHE KNEW ABOUT THE WORLD, UNTIL SHE WAS EIGHTY, WAS READ TO HER OUT OF NOVELS... STILL VERY RESTRICTED, AND SHE COULD TALK.
Minsky:
But when you think about it, if you're reading a novel then you're getting knowledge that has been processed by adults. And you're much better off than if you had to learn it yourself, through a baby's brain.
Interviewer:
NOW THE CYC PROJECT IS AN INTERESTING EXAMPLE, IN WHICH DOUG LENAT'S COMPUTER IS SEVERELY DISABLED IN THE SENSE YOU'VE BEEN TALKING ABOUT, AND HE IS HAVING TO GIVE THAT KNOWLEDGE TO IT.
Minsky:
Well, the Cyc machine is disabled in the most profound sense of all, which is that it doesn't learn, so it wouldn't matter if it could see or hear. It's basically a knowledge base that's not able to acquire knowledge on its own. It's the first attempt to try to put into one machine many different kinds of knowledge, and I would like to see ten other such attempts around the world. It's a shame that we have all our eggs in one basket. Lenat and his group have many wonderful ideas; some of them might be badly wrought.
Interviewer:
HE THINKS, OR HOPES, THAT HIS MACHINE WILL BE ABLE TO ACQUIRE KNOWLEDGE ONCE IT KNOWS SOMETHING, BECAUSE IT'S TRUE THAT IT'S DIFFICULT TO LEARN IF YOU DON'T KNOW ANYTHING.
Minsky:
I think it's hard to learn if you don't know a lot of things and if you don't know how to learn. The trouble is that no one in AI knows how to tell the machine much about how to learn, because there hasn't really been enough research on it. There are quite a number of early-stage machine learning projects around the world, but they're a very small minority of the general investment in artificial intelligence, compared to building practical, performing expert systems. We really don't know very much about machine learning to this day.
[END OF TAPE F179]
[BEGINNING OF TAPE F180]
Minsky:
Yes, and I'm not sure that learning is intractable; it's that the number of people who have tried to make machines learn is pretty small. It hasn't been a high-priority project. In order to learn, I suspect that you have to go through stages. That is, unless you have a certain set of concepts, and processes for using them, it's very hard to understand the next step. It's the way Piaget, the great Swiss psychologist, described the development of children. He didn't have much of a theory of how they learn, actually, except maybe in the early stages of infancy, but he pointed out that before you can understand the idea of conservation of matter or energy or something like that, you already have to have, or he thought you had to have, the idea of distinguishing between actions which are reversible and irreversible. For example, if I take a ball of clay and I flatten it out, it looks much bigger. But the older child knows that the flattening-out process was reversible: you can just roll it up again. And so the child knows that if the operation is reversible, then in some sense the greater extent is not essential; it's just a momentary feature. Gradually the child accumulates a number of ideas which amount to the idea that there's a certain quantity of substance. What Piaget is saying is that you probably can't learn that concept in its full power, with all of its facets, until you've got these other ideas first. So there we're saying that you can't learn the idea of conservation until you know the idea of reversibility. Well, maybe there's another route, maybe there isn't; nobody has thought of a plausible one. But it might be that this holds even for very simple things that we take for granted. For example, how do I learn that if I take an object and release my grasp, it goes down? I have to have the concept of down; I have to have the concept of intentionally releasing it, as opposed to something else.
I have to have the idea of support, in this case support from the top, which is different from support from the bottom but amounts to the same thing. There are so many ideas you need before you can even look at the world and make explanations. And we have people working on what we call explanation-based learning, and to me that's one of the most promising ideas in modern AI research today. You look at a situation and you don't just describe the bits or the pixels of the picture; you describe the objects and their apparent relationships, but the relations come from you. It's not that there's a hand there and a piece of paper; it's that the hand is grasping the paper. That grasping is not there in the world, actually; it's something that comes from my own knowledge of the scene. So maybe for Cyc, or any knowledge-based program, learning is going to be very slow, nearly impossible, until we prime it with just the right sorts of concepts so it can start going rapidly. It may be that a human infant is born with a surprisingly large collection of built-in procedures and mysterious pieces of hardware and reaction schemes that make it easy for the little infant to learn; we just don't know what those are as yet.
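The claim that a relation like "grasping" comes from the observer's knowledge rather than from the pixels can be made concrete with a toy interpreter. The scene, the rule base, and the spatial test below are all invented for illustration:

```python
# A raw 'percept': just objects and positions -- no relations in it.
scene = [
    {"kind": "hand",  "pos": (0, 1)},
    {"kind": "paper", "pos": (0, 0)},
]

# The relations come from the observer's own knowledge, not from the image.
# Each rule: (relation name, the kinds it relates, a spatial test).
RULES = [
    ("grasping", ("hand", "paper"),
     lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]) <= 1),
]

def interpret(scene):
    """Describe the scene in terms of relations supplied by the knower."""
    facts = []
    for rel, (ka, kb), near in RULES:
        for a in scene:
            for b in scene:
                if a["kind"] == ka and b["kind"] == kb and near(a["pos"], b["pos"]):
                    facts.append((rel, a["kind"], b["kind"]))
    return facts
```

The same pixels with an empty rule base yield no facts at all: the description depends on what the interpreter already knows to look for.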
Interviewer:
WOULD YOU SAY THAT WAS THE BIGGEST PROBLEM... THE SOLUTION OF THE COMMON SENSE...
Minsky:
I suspect that once you get a pretty good system for learning, then there are a couple more stages. Maybe the child, or the machine, then starts experimenting with variations on ways to learn. Because after all, some people might take five examples of something before they say, "Well, maybe that's the general rule," and there are more impulsive people who see something happen once and say, "Oh, whenever there's a this, that'll happen." Some people go so far you could call them superstitious. And so each person may tune things, and I think the greatest breakthrough of all will be when the machine gets smart enough that it can invent new ways to learn and try them out, and see which work, and then think and invent better ways to improve itself, and so forth, and just take off. What both Lenat and many other people agree on is that there's some threshold: if you don't get up to that threshold, the machine just won't get better; maybe it'll get worse if you let it learn. There was an example in 1957 when Arthur Samuel made a program that learned from experience to play checkers, and if it played with a good player it got better and better. If it played with a bad player it got worse and worse, and we don't want that. But I suspect that once you get up to a certain threshold you could say, "My goodness, I've been learning from this experience and I'm getting worse; I'll turn it off." That's a very simple piece of knowledge, but if you don't have it you might ruin yourself. And children who make bad friends: that's our greatest fear as parents. It's not our influence on the children, it's who their real friends are, who are going to elevate them or ruin them, because we know our children don't know how to learn to learn; they're going to copy.
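The Samuel anecdote, plus the "turn it off" safeguard, suggests a small mechanism one can sketch: a learner that drifts toward whatever its opponent does, and a guarded variant that notices a losing streak and stops learning. This is a toy model of the idea, not Samuel's actual checkers program:

```python
import random

class NaiveLearner:
    """Toy learner in the spirit of the anecdote: it drifts toward
    its opponent's level of play, whether good or bad."""
    def __init__(self):
        self.skill = 0.5          # probability of making a good move
        self.history = []         # recent win/loss record

    def play_against(self, opponent_skill):
        # Learning step: drift toward the opponent's level.
        self.skill += 0.1 * (opponent_skill - self.skill)
        win = random.random() < self.skill
        self.history.append(win)
        return win

class GuardedLearner(NaiveLearner):
    """Adds the 'simple piece of knowledge': notice when learning
    is making you worse, and switch it off."""
    def __init__(self):
        super().__init__()
        self.learning = True

    def play_against(self, opponent_skill):
        if self.learning:
            self.skill += 0.1 * (opponent_skill - self.skill)
        win = random.random() < self.skill
        self.history.append(win)
        # Meta-check: if the recent record is mostly losses, stop learning.
        recent = self.history[-10:]
        if len(recent) == 10 and sum(recent) < 3:
            self.learning = False
        return win
```

Played against a very bad opponent, the naive learner's skill decays toward zero, while the guarded one freezes its skill as soon as it notices the losing streak.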
Interviewer:
NOW, THE CONNECTIONISTS CLAIM THAT THEIR NETWORKS CAN ADAPT; THEY USE THE TERM LEARNING. MIGHT THEY ARGUE THAT IT WAS AN ARCHITECTURAL PROBLEM, ACTUALLY, THAT MADE IT DIFFICULT?
Minsky:
Well,
Interviewer:
[INAUDIBLE]
Minsky:
It's difficult for anything to learn something unless it's the right machine for it. I built the first neural network learning machine, in fact, before I started to work on symbolic approaches, and I got annoyed with the thing because my particular machine learned very quickly at first and then got slower and slower as it filled up; in order to make a new distinction it would have to forget an old one. It was a rather small machine. And I got the feeling that it just didn't have enough organization to learn hard things. Now, the modern connectionists are at a very strange level of science right now, I would say, because you can see hundreds of papers where somebody says, "Look, I got this machine to learn to pronounce words from spelling; surely this is a very hard problem. Some human programmer took a couple of years to do this by hand, and his program is only a little better than mine," that sort of thing. Well, the trouble is, we don't know how hard that problem is in an absolute sense. If that human programmer managed to write a program, I still don't know what it is he understood in doing that, and I don't understand what the neural net did. As far as I know, until they get some more science we just have to look at these anecdotes; people say, "I got a neural net to do this, I got a neural net to do that." Sometimes you hear somebody say, "I'm trying to get it to do this, but I can't." Of course, people don't publish what it won't do. And so we don't learn much from this, because they're just anecdotes. People in that field are angry at me, because my feeling is, "Yes, if a neural net did that, it shows that probably the problem it was solving was easier than they thought." And they get very angry. But instead of getting angry, of course, what they should do is come up with a theory to show me that that problem was, in some technical sense, hard.
The trouble with the field right now is that there aren't good theories for classifying problems into levels of difficulty. In other parts of computer science there's been some progress on that; there's what we call the theory of algorithmic complexity, but those theories are still not very good. And so the progress of psychology in general, and particularly of connectionism as a science, is going to depend on the invention of better mathematical theories of how difficult problems are. Just as in physics: physics couldn't progress, even after Newton, until we had more theories of the characteristics of different kinds of differential equations.
Interviewer:
OTHERWISE, THEY'LL JUST BE RANDOM...
Minsky:
Right; every now and then somebody'll solve a problem, or not solve a problem, and we won't know whether they were lucky, whether all the problems they were solving were easy, or whether they were making really profound discoveries. So it's a little muddy until you get a theory. But most sciences proceed fifty or a hundred years with the experiments ahead of a good solid theory, so I'm not complaining that much.
Interviewer:
NOW, WHEN YOU BEGAN IN 1956 WITH YOUR COLLEAGUE JOHN MCCARTHY... SINCE THEN, AND I WANT TO INTRODUCE THE SOCIETY OF MIND, YOUR VIEWS HAVE DIVERGED SOMEWHAT. HOW WOULD YOU CHARACTERIZE THE DIFFERENCE... DOES HE STILL BELIEVE YOU CAN GET AT THESE THINGS THROUGH LOGIC, OR...
Minsky:
It's rather tricky to describe just where we agree and disagree. We both agree very much, and always have from the beginning, that in order for a machine to be smart it would have to have common sense knowledge. Where we differed, I think, was on how that common sense knowledge would best be represented, and on what the reasoning processes are that use it. He's maintained that it would be good to have a uniform logical reasoning process, but in order to do that you have to find ways of dealing with exceptions and suppositions and things like that, and he's been working on technical subproblems of that sort for some thirty years. How do you make a logical system where you have an axiom and tolerate a few exceptions? How can you do reasoning of the form, "What if this were not true for a moment; what could I learn from it?" It's very difficult, and my feeling is that there are other ways to reason, by analogy, using frames and defaults, that are more lifelike and more productive, and that you don't have to struggle quite so hard with these logical difficulties if you start with a more flexible system. In the long run, though, if we were using these other, informal kinds of reasoning, it would be nice to have theorists come along and clean them up and say, "Well, in certain places we can replace it by a much more efficient, perfect procedure." I suspect that in most situations that can never be done. But it doesn't matter; so we differ on what problems to work on. McCarthy likes to prove things, to get them settled. If you have a good theorem, it lasts a lifetime; if you have a practical theory, you just never know what its status is from one year to the next.
Interviewer:
TELL ME, YOU'RE SAYING THAT YOU AND SEYMOUR, THROUGHOUT THE '60S AND '70S, DEVELOPED YOUR IDEAS, WHICH LED TO THE BOOK YOU PUBLISHED A FEW YEARS AGO, THE SOCIETY OF MIND. TELL ME A BIT ABOUT THE THEORY OF THE SOCIETY OF MIND.
Minsky:
The society of mind theory is basically that in order to make a machine with the kind of versatility and resourcefulness that we take for granted in people, a good way to do it is to package into that machine a lot of different ways to represent knowledge and a lot of different ways to exploit it. And this leads to a certain difficulty: is there a central place in this mechanical brain that's in charge of everything and knows everything? I think what I show in the book is that there really can't be, because if different kinds of knowledge are represented in different ways, then the parts of the brain, the parts of the machine that are doing all this, really can't communicate with each other very well. And so you get a very different picture of identity. I can't explain it briefly; it's a three-hundred-page book, and in it I think I show all sorts of new ways to explain problems that have bothered psychologists and philosophers for a long time, like: what does it mean for a machine to be conscious? What I argue, very much as Freud did, is that this is not so difficult a problem as people think, because the phenomenon of consciousness is overrated. If you talk to people, they act as though they know what they're thinking and know what's out in the world, and so forth; but in fact you don't know where you got the next sentence that you speak. I don't know where my words are coming from or what made me think of them. So there's a little speech machine which has a little bit of memory of what it did a moment ago, and I don't see any great difficulty in simulating that sort of thing in a computer. The hard part is the maybe four hundred different sub-machines that are computing different aspects of how to solve various problems. There are lists of goals that I have, and machines interpreting those goals.
Maybe one of the goals is expressed verbally, but it's talking about physical things; there's a misunderstanding between this part of my mind and that part of my mind, and it's a big mess. Well, I think that the only way to make sense of the weird phenomena that baffle psychologists and philosophers is to build a machine that works this way. As far as I can see, judging by the failures of, for example, connectionist machines to learn to talk (there's a big difference between learning to understand a sentence and learning to pronounce a word), and of logical machines to learn to solve the simplest common sense problems, and so forth, the way to proceed is to find ways to do everything, build them all together, find ways to manage them, and then study what kind of phenomena you get when you assemble that machine. And my prediction is that with a little work you will find the machine saying that it's conscious, and denying that it's a machine, and having all sorts of beliefs of an unscientific kind that every normal, common sense reasoning person ends up with. The mind is not a centralized thing; it's a whole collection of different parts, and we see that in brain surgery. Somebody has an accident and loses a piece of brain: there's still a person there, but it's not the same person; it's missing some trait. It can't recognize faces. We see injuries where the person can't think of the names of animals. Peculiar kinds of things, and so forth. And if these are small injuries, this apparent person still functions. It's somewhat like the original person; it's missing some things. Sometimes it adapts and rebuilds and finds substitutes. But to me a person is not a person in the normal sense. A person is a wonderful package of interrelated traits and ideologies and things it's learned and pieces of hardware, and it's a wonderful concept, even if it's not realistic.
Interviewer:
ISN'T THE COMPUTER STILL A LIMITING ELEMENT OF THIS TYPE OF RESEARCH? I MEAN, CAN YOU MAKE THESE THINGS AS SMALL INDIVIDUAL SOFTWARE MACHINES? CAN YOU SIMULATE THE WHOLE THING? HAS IT CHANGED? HAS HARDWARE NOW BECOME CRUCIAL AGAIN?
Minsky:
You make so that I don't understand.
Interviewer:
IF WE HAVE GOT SUCH MACHINES IN OUR MINDS, CAN WE REPRESENT THEM AS PIECES OF SOFTWARE AND SIMULATE THEM?
Minsky:
Oh, yes. Computers are now getting so fast that pretty soon you'll be able to buy a little box that computes at a rate of a hundred million operations a second, and by the year two thousand, or twenty ten, it'll be doing ten billion operations a second in a desktop machine. By that time I wouldn't be surprised if that's enough hardware that you could simulate everything a human brain does in some sort of software. Maybe it'll be ten or twenty years after that, maybe sooner; from the view of history, ten or twenty years is a blink in evolutionary time, so we shouldn't be worried about what day it happens. But shortly, within a hundred years, there will be machines this big that have more capacity than the brain. Hans Moravec thinks it's forty years.
Interviewer:
[INAUDIBLE]...THAT WE'RE GOING FOR TOO BIG A PROBLEM WITH THE HUMAN BRAIN?
Minsky:
Oh, I think you should do what I'm doing, namely start with the most interesting aspects of mental activity and try to figure out how they work and simulate parts of them. I think if you start by simulating the early stages of evolution, then you'll spend a long time discovering the obvious [BOTH TALKING]... I would start, as Lenat does, with the phenomenon of natural language and work both ways: take some level of performance which is meaningful and easy to understand and respectable, and work down to say how it could be learned, and work up to say how it could turn into something like an adult. But I wouldn't start at one extreme or the other.
Interviewer:
IF YOU HAD UNLIMITED RESOURCES, HOW WOULD YOU TURN THIS INTO A RESEARCH PROGRAM?
Minsky:
If I had unlimited resources, I would duplicate myself and just stay home and think, and after a long time I'd come out and tell people what I concluded. I'm not serious; I get most of my ideas by arguing with people who don't agree, and then going home and working on the details, and then, when I get stuck, coming out and arguing again. But I'm not interested in a big project because...
Interviewer:
FROM WHAT YOU'VE BEEN SAYING, ONE WAY TO APPROACH IT WOULD BE TO HAVE LOTS AND LOTS OF PEOPLE TRYING DIFFERENT APPROACHES TO IT.
Minsky:
Well, I'd like to have lots of people thinking about how to combine different approaches; it's not the different approaches themselves. Why aren't there more people making a machine that uses three different representations of knowledge and crosses over? That's a very specific kind of research project, and I see no one doing it. To me, that's the missing link.
[END OF TAPE F180]
[BEGINNING OF TAPE F181]
[BACKGROUND DISCUSSION]
Interviewer:
THIS HAS BEEN THROUGHOUT A CONTROVERSIAL FIELD, I WANT TO GET SOME OF YOUR REFLECTIONS ON WHY YOU THINK SOME PEOPLE HAVE BEEN UPSET BY AI WHETHER IT'S FROM WHAT YOU THINK IT'S BEEN ABOUT...
Minsky:
I think the field is controversial because we live in a spiritualist culture. When Pasteur argued that living things were just chemistry, that was unacceptable, because people said there's a real difference between things that are alive and things that are dead; you don't even apply the word dead to rocks, they're not worthy of it. We say alive and inanimate. And so in the 19th century, until Pasteur roughly, this was considered to be a very important distinction. Now in science there's no distinction at all; nobody considers living things to be any different from other things, except that they happen to have certain processes going on. Well, I think the same is happening here: we live in a tradition, from Plato on, which says that there's a mental world, a spiritual world; the body is a mechanical thing, and the mind, or even the soul, is something else. And so AI is challenging that. In a religious culture we would be heretics, to be burned, or whatever. But I don't see that this has anything to do with artificial intelligence itself; it's that to most people in our culture we're saying there are no souls, there are no spirits, and so this is a religious controversy, not a technical one. No technical person of any quality, to me, thinks that there is such a thing as a living thing; there are just things that move around because they have myosin, and the mechanisms are sort of understood pretty well. And we understand that you can't have something that's half alive, because it takes a lot of stuff to keep this thing going and repairing itself and fueling it, because it's a rather crummy structure anyway; it needs a lot of continual repair all the time. So the living things certainly are identifiable, and they're different from the other things, but there's nothing special about them. I think the same is the case with mentality. That is, if you have enough knowledge and enough processes, and enough other processes to keep it in contact with what it's doing, then you get a mind, and I don't see it as something to argue about. But if somebody thinks that we have a spirit and an inherent value which is different from the stuff we're made of, then of course it's a threat. But it's a religious argument, not a technical one.
Interviewer:
BUT SOME PEOPLE WHO WOULDN'T BE RELIGIOUS MIGHT ALSO —
Minsky:
They don't know they're religious.
Interviewer:
THEY DON'T KNOW THEY'RE RELIGIOUS... I WONDERED WHETHER IT WAS A UNIQUENESS...
Minsky:
Religious... to me, religious is the superstitious belief in spirits that don't exist. And so anyone who says there's something in a man that's not in a machine is religious, in the sense that they're saying there's a spiritual quality I can't explain: "No matter what you say, I refuse to believe that I don't have it." That's faith. They're not saying there's anything that it does that, technically, they can show we can't do.
Interviewer:
THERE'S ANOTHER INTERESTING CONFLICT ABOUT WHAT'S HAPPENING IN AI... WHENEVER YOU'VE DONE SOMETHING, THE PROBLEM CLEARLY STILL EXISTS TODAY. PEOPLE WOULD SAY, TAKE MACSYMA. I MEAN, I STUDIED PHYSICS AT UNIVERSITY; IT CAN DO PRETTY MUCH ALL THE MATHS I EVER DID, AND SO FORTH. CLEARLY THAT'S CLEVER, BUT THERE'S A WAY PEOPLE SAY, WELL, THAT'S JUST MATHEMATICS, AND WHAT'S REALLY DIFFICULT...
Minsky:
Well, it's nice you gave that example, because Slagle's was the first program to do formal integration; then Joel Moses, four years later, wrote another one which was somewhat better, and Carl Engelman, Bill Martin, a number of people worked on it; then Bob Caviness and Robert Risch came in. They added more mathematics and it got better; now it's better than any mathematician in the world. And now that it's that good, it's not considered experimental or controversial. So it's out of AI; typically, as a machine gets better and better at something, it gets its own identity.
Interviewer:
HAS THAT BEEN A HARD THING TO TAKE, THAT THE SUCCESSES GET SHUNTED OUT OF AI?
Minsky:
It depends on what you're looking for. In a funny way, physics, for example, is a game for young people, because it's very hard: a new theory of physics comes in, it's more complicated, it wipes out the old one, maybe it's simpler, it's hard to keep up. In AI it's so controversial that it's still easy for me, at my age, to make up new theories. So in a kind of selfish, personal way, it's very enjoyable that there's this hostility. There's still only a handful of us and all these wonderful problems. It's like having all the children's blocks you want, and the other kids don't come and take them away from you. But I think it's too bad that more people don't understand how much more we could do if people would sort of try new ways, and cooperate, and try to combine these methods, instead of always arguing, "I want the best one, my method is better than yours..."
Interviewer:
JUST WINDING UP NOW: WHAT HAS SURPRISED YOU MOST ABOUT THE DEVELOPMENT OF (A) COMPUTERS AND (B) ARTIFICIAL INTELLIGENCE... LOOKING BACK TO THE 1950S?
Minsky:
I think it was a big surprise that the things children do were so hard: the things that seem hard are easy, and the things that seem easy are hard. Other than that, it's hard to dissect, because I never tended to think in terms of how long things would take or how hard they were. It was more: if it's easy, then I don't want to bother with it; if it's too hard, I don't want to work on it now.
Interviewer:
DID YOU IMAGINE THAT COMPUTERS WOULD BECOME UBIQUITOUS?
Minsky:
I don't know if I imagined when. I think everyone was surprised when the machines got twice as fast, twice the memory, twice as cheap, so rapidly. But when I was a little kid I read H.G. Wells and E.E. Smith and Isaac Asimov; it was a great pleasure meeting him and keeping up with him now, because how often do you get to meet your gods? So I read science fiction more than anything else. I don't read ordinary literature at all; I read some technical things and I read science fiction novels, and nothing surprises me except: why doesn't everybody see that this is the right thing and work on it?
Interviewer:
HOW HAS SCIENCE FICTION BEEN AT TREATING...
Minsky:
Science fiction is like any field: most of it's bad, but it's full of a dozen people who I think are the great philosophers of our time: Asimov and Fred Pohl and Arthur Clarke, and now Greg Benford and David Brin and, now that I've started, Vernor Vinge; I feel everyone I don't mention, Harry Harrison, is being left out. But these are the great writers; the publishers have got them in this niche. But when I see Norman Mailer, or someone like that, that's trash. Why, he's no better than Aristophanes; he's writing again about the human condition, and people screwing each other and people betraying and being attracted and infatuated. It's the same old stuff. But in science fiction people say, what if something were actually different? And general literature is, what if things were the same again? It's too boring.
Interviewer:
YOU MENTIONED ASIMOV, BUT WEREN'T YOU A BIT DISAPPOINTED THAT THE ROBOTICS PROVED TO BE SO COMPLICATED?
Minsky:
Well he didn't say when...
Interviewer:
IT PREDATED COMPUTERS, REALLY, DIDN'T IT? ASIMOV, THE MACHINES HE...?
Minsky:
Oh, Robert... [INT. VOICE] I was so entranced by Robert Heinlein's book in 1940 about remote manipulators, and we still don't have those in any quality. So some things are unaccountably slow. I tried for years to get people to build robots with five fingers, just like hands. They said no, it's too hard. After a long time they started making ones with three fingers, and then four. Why don't they just bite the bullet? Because I want a five-fingered one so I can slip into the glove and get an output. And I don't understand people; it's only 20 percent more than four fingers [CHUCKLES], it's not as though it were twice as hard.
Interviewer:
FINAL QUESTION. I WANT TO GET SOME IDEA, WE'RE ASKING PEOPLE THIS: HOW DO YOU THINK, AS A GUESS, FUTURE HISTORIANS WILL RATE THE COMPUTER AS AN INVENTION? IT'S OBVIOUSLY A RAMIFICATION OF LIFE; HOW WOULD YOU RATE IT?
Minsky:
I'd say there were the Dark Ages and then the Enlightenment, and it came in 1950 rather than 1350; they'll just move the transition.
Interviewer:
THE COMPUTER IS...
Minsky:
The computer is when people started understanding processes instead of just static things. And so philosophically that was a great difference. Before 1950 there was no way to describe something that was changing.
Interviewer:
IT BROUGHT AN ENLIGHTENMENT IN?
Minsky:
A new way of thinking: that there were procedures, and that in computer science you make a procedure. You say, here's the procedure, it's on this disk, this little package; I'm going to take this procedure and that one and put them together, and I'm going to attach this one here on the side. So what happened in 1950 was that we could think of processes with the same mental equipment with which we could think of things before. Everybody has known for 10,000 years that you can build something higher by stacking one thing on top of another. Now we know about subroutines and recursions and tail recursions, and there's a hundred words that the average person doesn't know which are just as important as the old words like "beside" and "on top of." So most people don't know what happened in 1950: that man, for the first time, learned to talk. Everybody says we learned speech sometime 30,000 years ago; nobody knows when. But what I'm saying is that a thousand years from now, it'll be 1950 when this animal learned to talk. The stuff before was just emotional utterances, because he couldn't describe processes; he could just describe: there's a thing there.
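The vocabulary Minsky points to, stacking procedures, attaching one "on the side," recursion, maps directly onto ordinary code. A minimal sketch (all the names here are mine, chosen for illustration):

```python
def compose(f, g):
    """'Stack' one procedure on top of another, like building blocks."""
    return lambda x: f(g(x))

def with_tracing(f):
    """'Attach one on the side': wrap a procedure without changing it."""
    def wrapped(x):
        result = f(x)
        print(f"{f.__name__}({x}) -> {result}")
        return result
    return wrapped

def factorial(n):
    """Recursion: a procedure described in terms of itself."""
    return 1 if n <= 1 else n * factorial(n - 1)

def double(x):
    return 2 * x

# A stacked pair of procedures: computes factorial(double(x)).
pipeline = compose(factorial, double)
```

The point of the sketch is that `compose`, `with_tracing`, and `factorial` treat procedures the way earlier language treated objects: things you can name, stack, and attach to one another.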
Interviewer:
CURIOUSLY, MOST PEOPLE... NOW YOU SAY 1950... DIDN'T SEE THIS DIMENSION; THEY SAW THIS THING AS AN ARITHMETIC...
Minsky:
Computer scientists were the worst of our enemies; it was the computer scientists who were telling the public it can only add fast. I had so many friends, artists, and I'd tell them we're going to be able to do this, and they'd say, how will it work? And I would tell my scientist friends it'll do this, and they said bullshit, it's just a fast adding machine, it can't do any of those things. A sort of cute irony: people who knew too much, but not enough.
Interviewer:
AND JUST... THE CONCEPTION WHEN YOU CAME INTO THE FIELD: WHAT WAS A COMPUTER [UNINTELLIGIBLE]...?
Minsky:
See, I had a great advantage, because when I came into the field, as a little college student, I met Warren McCulloch and John von Neumann and these people. It was a different world; it was called cybernetics. And I was just very fortunate that I landed in this. These are the people who, centuries from now, will be the philosophers of our time. How many people know the name Warren McCulloch, the greatest philosopher of the 20th century? He's unknown. But that's my prediction: a hundred years from now they'll say those people were so lucky to have known Warren.
Interviewer:
AND THOSE ARE THE PEOPLE WHO SET YOU ON...
Minsky:
Right, and those are the people who were thinking of processes as stuff. And so when I sort of appeared as a child, I got into that culture, the Macy Conferences, the cybernetics inner circle, and I never fell out of it again.
Interviewer:
THANK YOU VERY MUCH.
[END OF TAPE F181 AND INTERVIEW]