Chapter 9

ROGER SCHANK

"Information Is Surprises"



Marvin Minsky: Roger Schank has pioneered many important ideas about how knowledge might be represented in the human mind. In the early 1970s, he developed a concept of semantics that he called "conceptual dependency," which plays an important role in my book The Society of Mind. He's also developed other paradigms, involving representing knowledge in various types of networks, scripts, and storylike forms.

__________

ROGER SCHANK
is a computer scientist and cognitive psychologist; director of the Institute for the Learning Sciences, at Northwestern University; John Evans Professor of Electrical Engineering and Computer Science, and professor of psychology and of education and social policy; author of fourteen books on creativity, learning, and artificial intelligence, including The Creative Attitude: Learning to Ask and Answer the Right Questions, with Peter Childers (1988), Dynamic Memory (1982), Tell Me a Story (1990), and The Connoisseur's Guide to the Mind (1991).



Roger Schank:
My work is about trying to understand the nature of the human mind. In particular, I'm interested in building models of the human mind on the computer, and especially working on learning, memory, and natural-language processing. I'm interested in how people understand sentences, how they remember things, how they get reminded of one event by another, and how they learn from one experience and use it to help them in other events. Most people in the field associate me with the idea that there are mental structures called "scripts," which help you understand a sequence of events and allow you to make inferences from those events, inferences that essentially guide your plans or behavior through those events.

Information is surprises. We all expect the world to work out in certain ways, but when it does, we're bored. What makes something worth knowing is organized around the concept of expectation failure. Scripts are interesting not when they work but when they fail. When the waiter doesn't come over with the food, you have to figure out why; when the food is bad or the food is extraordinarily good, you want to figure out why. You learn something when things don't turn out the way you expected.
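
A minimal sketch of the expectation-failure idea, in Python. The restaurant script and the event names below are invented for illustration (this is not Schank's actual script-applier code); the point is only that the mismatch, not the routine, is what there is to explain and learn from:

    # Toy "restaurant script": the expected sequence of events.
    RESTAURANT_SCRIPT = ["enter", "be_seated", "order", "food_arrives", "eat", "pay", "leave"]

    def expectation_failures(script, observed):
        """Compare observed events against the script and return the mismatches."""
        failures = []
        for expected, actual in zip(script, observed):
            if actual != expected:
                # The failure is the interesting, learnable part of the episode.
                failures.append((expected, actual))
        return failures

    # The waiter never brings the food: one expectation failure to figure out.
    print(expectation_failures(RESTAURANT_SCRIPT,
                               ["enter", "be_seated", "order", "waiter_disappears"]))
    # [('food_arrives', 'waiter_disappears')]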

The most important thing to understand about the mind is that it's a learning device. We're constantly trying to learn things. When people say they're bored, what they mean is that there's nothing to learn. They get unbored fast when there's something to learn. The important thing about learning is that you can learn only at a level slightly above where you are. You have to be prepared.

My most interesting invention is probably my theory of MOPs and TOPs (memory-organization packets and theme-organization packets), which is basically about how human memory is organized: any experience you have in life is organized by some kind of conceptual index that's a characterization of the important points of the experience. What I've been trying to do is understand how memory constantly reorganizes, and I've been building things called dynamic memories. My most important work is the attempt to get computers to be reminded the way people are reminded.

I also made early contributions to the field of natural-language processing, where I went head to head with the linguists, who were working on essentially syntactical models of natural language. I was interested in conceptual models of natural language. I was interested in the question of how, when you understand a sentence, you extract meaning from that sentence independent of language.

I've gotten into lots of arguments with linguists who thought that the important question about language was its syntactic structure, its formal properties. I'm what is often referred to in the literature as "the scruffy"; I'm interested in all the complicated, un-neat phenomena there are in the human mind. I believe that the mind is a hodge-podge of a range of oddball things that cause us to be intelligent, rather than the opposing view, which is that somehow there are neat, formal principles of intelligence.

An example I used in my book Dynamic Memory is the case of the steak and the haircut. The story is that I was complaining to a friend that my wife didn't cook steak the way I liked it: she always overcooked it. My friend said, "Well, that reminds me of the time I couldn't get my hair cut as short as I wanted it, thirty years ago in England." The question I ask is, How does such reminding happen and why does it happen? The "how" is obvious. What are the connections between the steak and the haircut? If you look at it on a conceptual level, there's an identical index match: we each asked somebody who had agreed to be in a service position to perform that service, and they didn't do it the way we wanted it. There are a number of questions you can ask. First, how do we construct such indices? Obviously, my friend constructed such an index in order to find, in his own mind, the story that had the same label. Second, why do you construct them? And the answer is that you're trying to understand the universe and you need to match incoming events to past experiences. This is something I call "case-based reasoning." The idea that you would then make that match obviously has a purpose. It's not hard to understand what the purpose would be; the purpose is learning. Because how would you learn from new experiences otherwise?

The case-based-reasoning model says you process a new experience by constructing some very abstract label for it, and that label is the index into memory. Many things in memory have been labeled that way; you find them and you make comparisons, almost like a scientist, between the old experience and the new experience, to see what you can learn from the old experience to help you understand the new experience. When you finish that process, you can go back into your mind and add something that will help fix things. For example, I can imagine my friend saying, "Well, I guess that experience I had in England wasn't so unusual; there really are a lot of times when people don't do things because they think it would be too extreme." Sure enough, I go back and check with my wife, and the reason she overcooks the steak is that she thinks I want it too rare.
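
A toy Python sketch of the retrieval step just described: episodes are filed under an abstract index, and a new experience that gets the same label brings the old one back. The index labels and the stored case below are invented for illustration and are not Schank's notation:

    from collections import defaultdict

    memory = defaultdict(list)  # abstract index -> episodes filed under it

    def store(index, episode):
        memory[index].append(episode)

    def remind(index):
        """Retrieve prior episodes filed under the same abstract characterization."""
        return memory[index]

    # The haircut episode, filed years earlier under an abstract label:
    store(("asked-for-a-service", "provider-did-less-than-requested"),
          "barber in England wouldn't cut my hair as short as I asked")

    # The steak episode gets the same label, so the old case comes back:
    print(remind(("asked-for-a-service", "provider-did-less-than-requested")))

Comparing the retrieved case with the new one is where the learning happens, for example by forming the generalization that providers hold back when they think the request is too extreme.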

One of the problems we've had in AI is that in the early years, in the sixties and seventies, you could build programs that seemed pretty exciting. You could get a program to understand a sentence, or translate a sentence. Twenty years later, it's not exciting any more. You've got to build something real, and in order to build real things you have to work with real problems. Understanding how learning might take place when people are telling stories to each other; understanding how somebody might produce a sentence, or how somebody might make an inference, or how somebody might make an explanation: those kinds of things have interested me, whereas in AI your average person was much more interested in the formal properties of vision, say, or building robotic systems, or proving theorems, or things that are more logically based.

What I've learned in twenty years of work on artificial intelligence is that artificial intelligence is very hard. This may sound like a strange thing to say, but there's a sense in which you have only so many years to live, and if we're going to build very intelligent machines it may take a lot longer to do than I personally have left in life. The issue is that machines have to have a tremendous amount of knowledge, a tremendous amount of memory; the software-engineering problems are phenomenal. I'm still interested in AI as a theoretical enterprise (I'm as interested in cognitive science and the mind as I've ever been), but since I'm a computer scientist, I like to build things that work.

One thing that's clear to me about artificial intelligence (and that, curiously, a lot of the world doesn't understand) is that if you're interested in intelligent entities there are no shortcuts. Everyone in the AI business, and everyone who is a viewer of AI, thinks there are going to be shortcuts. I call it the magic-bullet theory: somebody will invent a magic bullet in the garage and put it into a computer, and presto! the computer's going to be intelligent. Journalists believe this. There are workers in AI who believe it, too; they're constantly looking for the magic bullet. But we became intelligent entities by painstakingly learning what we know, grinding it out over time. Learning about the world for a ten-year-old child is an arduous process. When you talk about how to get a machine to be intelligent, what it has to do is slowly accumulate information, and each new piece of information has to be lovingly handled in relation to the pieces already in there. Every step has to follow from every other step; everything has to be put in the right place, after the previous piece of information. If you want to get a machine to be smart, you're going to have to put into it all the facts it may need; this is the only way to give it the necessary information. It's not going to mysteriously acquire such information on its own.

You can build learning machines, and the learning machine could painstakingly try to learn, but how would it learn? It would have to read the New York Times every day. It would have to ask questions. Have conversations. The concept that machines will be intelligent without that is dead wrong. People are set up to be capable of endless information accumulation and indexing. Finding information and connecting it to the next piece of information: that's all anyone is doing.

One of the most interesting issues to me today is education. I want to know how to rebuild the school system. One thing is to look at how people learn, right now, and how the schools work, right now, and see if there's any confluence. In schools today, students are made to read a lot of stuff, and they're lectured on it. Or maybe they see a movie. Then they do endless problems, then they get a multiple-choice test of a hundred questions. The schools are saying, "Memorize all this. We're going to teach you how to memorize. Practice it, we'll drill you on it, and then we're going to test you."

Imagine that this is how I'm going to teach you about food and wine. We're going to read about food and wine, and then I'll show you films about food and wine, and then I'll let you solve problems about the nature of food and wine, like how to decant a bottle of wine, what the optimal color is for a Bordeaux, and so forth. And then I'll give you a test.

Would you learn to appreciate food and wine this way? Would you learn anything about food and wine? The answer is no. Because what you have to do to learn about food and wine is eat and drink. Memorizing all the rules, or discussing the principles of cooking, isn't going to do any good if you don't eat and drink. In fact, it works the other way around. If you eat and drink a lot, I can get you interested in those subjects. Otherwise I can't.

Everything they teach in school is oriented so that they can test it to show that you know it, instead of taking note of the obvious, which is that people learn by doing what people want to do. The more they do, the more curious they get about how to do it better, if they're interested in doing it in the first place. You wouldn't teach a kid to drive by giving him the New York State test manual. If you want to learn how to drive, you have to drive a lot. Most schools do everything but allow kids to experience life. If kids want to learn about what goes on in the real world, they have to go out into the real world, play some role in it, and have that motivate learning. Errors in learning by doing bring out questions, and questions bring out answers.

What kids learn in high school or college is antilearning. By reading Dickens in ninth grade, I learned to hate Dickens. Ten years later, I picked up Dickens and it was interesting, because I was ready to read it. What I learned in high school was something useless: that Dickens is awful. A ninth-grade kid isn't ready for this. Why do they teach it? Because in the nineteenth century that was the literature of the time, and that's when they designed the curriculum still used in practically all schools today.

I don't think there should be a curriculum. What kids should do is follow the interests they have, with an educated advisor available to answer their questions and guide them to topics that follow from the original interest. Wherever you start, you can go somewhere else naturally. The problem is that schools want everyone to be in lockstep: everyone has to learn this on this day and that on that day. School is a wonderful baby-sitter. It lets the parents go to work and keeps the kids from killing each other.

Learning takes place outside of school, not in school, and kids who want to know something have to find out for themselves by asking questions, by finding sources of material, and by discounting anything they learned in school as being irrelevant.

Most teachers feel threatened by questions. Obviously, good teachers love to hear good questions, but the demographics don't allow them to answer all the questions anyway. This is where computers can come in. One-on-one teaching is what matters. In the old days, rich people hired tutors for their kids. The kids had one-on-one teaching, and it worked. Computers are the potential savior of the school system, because they allow one-on-one teaching. Unfortunately, every piece of educational software you see on the market today is stupid, because it was designed to follow the same old curriculum.

At the Institute for the Learning Sciences, at Northwestern, we designed a new computer program to teach biology, in which you get to design your own animal. The National Science Foundation said that this program wouldn't fit into the curriculum, because biology isn't taught in the sixth grade, which is the level at which the program works. Furthermore, since each kid would have a different conversation with the computer, how could tests be given on what was learned?

The real problem is the idea that knowledge is represented as a set of facts. It's not. You might want to know those facts, but it's not the knowing of the facts that's important. It's how you got that knowledge, the things you picked up on the way to getting that knowledge, what motivated the learning of that knowledge. Otherwise what you're learning is just an unrelated set of facts. Knowledge is an integrated phenomenon; every piece of knowledge depends on every other one. School has to be completely redesigned in order to be able to make this happen.

This is where the computer comes in, through computer programs that are knowledgeable and can have conversations with kids about whatever subject the kids want to talk about. Kids can begin to have conversations about biology or history or whatever, and have their interest sustained. What you need are computer programs that can do the kind of one-on-one teaching that a good teacher could do if he or she had the time to do it.

Not long ago, to prepare for a conference, I read Darwin. Doing this reaffirmed my belief in not reading, because if I had read Darwin at any other time in my life I wouldn't have understood him. I was only capable of understanding Darwin in a meaningful way by reading him this time, because I understood something about what his argument was with respect to arguments I was trying to make. I could internalize it. Darwin's very clever. He said all kinds of interesting things that I wouldn't have regarded as relevant twenty years ago.

The issue is reading when you're prepared to read something. For instance, at this moment I'm not thinking about consciousness, so if I read Dan Dennett, he would do one of two things to me. He would cause me to react to his thinking about consciousness, which means that I would forever think about consciousness in his metaphor. This is useless to me, if I want to be creative. Secondly, I would reject his theories out of hand and find the book and the subject not worth thinking about. This also is bad. I don't see the point of reading his book unless at this moment I've thought about consciousness and am prepared to see what he thinks. That's my view of reading. The problem is that intellectuals say to each other, "Oh my God, haven't you read X?" It's academic one-upmanship.

The MIT linguist Noam Chomsky represents everything that's bad about academics. He was my serious enemy. It was such an emotional topic for me twenty years ago that at one point I couldn't even talk about it without getting angry. I'm not sure I'm over that. I don't like his intolerant attitude or what I consider tactics that are nothing less than intellectual dirty tricks. Chomsky was the great hero of linguistics. In his view of language, the core of the issue is syntax. Linguistics is all about the study of syntax. Language should be looked at in terms of Chomsky's notion of its "deep structure." Part of Chomsky's cleverness in referring to deep structure was to use these wonderful words in a way that led everyone to assume he meant something other than what he actually meant.

What Chomsky meant by "deep structure" was that you didn't have to look at the surface structure of a sentence (the nouns and the verbs, and so forth). But what any rational human being would have thought he meant by "deep structure," he emphatically did not mean. You would imagine that a deep structure would refer to the ideas behind the sentence, the meaning behind the sentence. But Chomsky stopped people from working on meaning.

I was sufficiently out of that world so that I could yell and scream and say that meaning is the core of language. I went through every point he ever made, and made fun of each one. He was always an easy target, but he had a cadre of religious academic zealots behind him who essentially would listen to no one else.

Here's an example of an argument I might have had with him in the late sixties. The sentence "John likes books" means that John likes to read. "Oh no," Chomsky might say, "John has a relationship of liking with respect to books, but he might not like to read."

Part of what linguistic understanding is about is understanding meaning: what you can assume to be absolutely true, and what you can assume to be true some of the time, or likely to be true. I call this inference. But Chomsky would say, "No, inference has nothing to do with language, it has to do with memory, and memory has nothing to do with language."

That comment is totally absurd. The psychology of language is what's at issue here. Meaning, inferences, and memory are a very deep part of language. Chomsky explicitly states in his most important book, Aspects of the Theory of Syntax, that memory is not a part of language and that language should be studied in the abstract. Language, for Chomsky, is a formal study, the study of the mathematics of language. I can see someone making arguments about language from a perspective of mathematical theory, but not if you are a founding member of the editorial board of Cognitive Psychology, and not if legions of psychologists are writing articles and conducting experiments based upon your work. Chomsky tried to have it both ways.

In Chomsky's view, the mind should behave according to certain organized principles, otherwise he wouldn't want to study it. I don't share that view. I'll study the mind, and whatever I get is O.K. Let it all be mud. Fine, if that's what it is. There are many scientists who'd like the mind to be scientific. If it isn't scientific (neat and mathematical), they don't want to have to deal with it. Chomsky has always adopted the physicist's philosophy of science, which is that you have hypotheses you check out, and that you could be wrong. This is absolutely antithetical to the AI philosophy of science, which is much more like the way a biologist looks at the world. The biologist's philosophy of science says that human beings are what they are, you find what you find, you try to understand it, categorize it, name it, and organize it. If you build a model and it doesn't work quite right, you have to fix it. It's much more of a "discovery" view of the world, and that's why the AI people and the linguistics people haven't gotten along. AI isn't physics.


Murray Gell-Mann:
I know Roger Schank slightly, and I find that his work has many appealing characteristics. Working with the concept of scripts, he was led into a huge project in education, using computers. As I listened to his description of some of the ideas behind the project, I found myself in sympathy with many of them.

Ever since teaching machines of the most primitive kind were first invented, I have thought that computers, programmed intelligently to function as teaching machines, could be used most effectively for education, because they would allow students to go through the routine parts of learning without using up teachers' time and without subjecting the student to the embarrassment of public viewing of his or her preliminary answers to questions. As we know, there is not really such a thing as education. There is only helping somebody to learn, and the learning process is a complex adaptive system: fooling around, making mistakes, somehow having contact with reality or truth, correcting the mistakes, assuring self-consistency, and so on. You can go through that with a machine without being subjected to ridicule. At the same time, the machine can keep track of your thought processes if necessary. When certain thought processes are in error, the machine can tell you that, so that you can change them. Furthermore, the people relieved of the necessity of doing the routine jobs carried out by the teaching machine can be saved for other duties, ones that really require a human being.

I've always thought that university education, including full-scale lecture courses covering the ground of well-known subjects on which excellent books have been published, is simply an illustration of how the universities have failed to adapt, after five hundred years, to the invention of printing. For those who prefer to learn by listening and watching, videotaped courses by some of the best lecturers in the world are now or may soon be available. Presumably universities will adapt slowly to such modern inventions as well. In medieval times, books were published by having a lector read his manuscript to a roomful of scriptores, who wrote it down. Many of the students at the university (say, in theology) were too poor to buy books produced by this expensive method, and so at the university a theology professor would read his book to the students, who would act as their own scriptores and write down what the teacher said.

With the invention of printing, this system became obsolete, but the universities have still not noticed that, after more than five hundred years. Of course, a lecture can serve very important purposes: It can convey brand-new information, along with the exciting character of that information. A dramatic lecture can serve to present the speaker as a role model to the people in the audience. I have nothing against the occasional lecture. But the idea that at each college and university some professor has to give a series of lectures covering the ground of a subject such as electromagnetic theory seems totally insane to me. If professors really want to assist learning, they can answer questions when students are stuck, assign challenging problems and fascinating reading, and give occasional exciting talks. And of course they can choose textbooks, and if necessary, series of videotaped lectures. In brief, they can serve as resources for students engaged in the complex adaptive learning process.

Marvin Minsky: Roger Schank has pioneered many important ideas about how knowledge might be represented in the human mind. In the early 1970s, he developed a concept of semantics that he called "conceptual dependency," which plays an important role in my book The Society of Mind. He's also developed other paradigms, involving representing knowledge in various types of networks, scripts, and storylike forms. Each of these ideas suggests, in turn, another new theory of memory. In this way, Schank has been enormously productive in the artificial intelligence field. He's changed his focus from year to year, so that in each of several different periods he would train a new generation of students in different theories. Then he would force them to build computer models of those theories, so that the rest of us could see for ourselves what these models could and could not do. Most of the models were based on novel ways to represent the meanings of verbal expressions.

Ironically, Schank has been opposed and almost persecuted by the language theorist Noam Chomsky, who himself generated several families of new ideas. Generally, Chomsky both ridiculed Schank's approach (sometimes by saying curtly that it just wasn't interesting) and completely ignored the significance of Schank's results. I used the word "ironically" because the work of Schank and Chomsky is so strikingly complementary. Chomsky seems almost entirely concerned with the formal syntax of sentences, to the nearly total exclusion of how words are actually used to represent and communicate ideas from one person to another. He thus ignores any models indicating that syntax is only an accessory to language. For example, no one has any trouble in understanding the story implied by the three-word utterance "thief, careless, prison," although it uses no syntax at all. Schank and his students, however, have demonstrated several ways to deal with such intricate meanings. It was quite hard to persuade our colleagues to consider these kinds of theories. Sometimes, it seems, the only way to get their attention is by shocking them. Roger Schank is good at this. His original discussion of conceptual dependency used such examples as "Jack threatened to choke Mary unless she would give him her book." His technical representation of this idea is that Jack transfers into Mary's mind the conceptualization that if she doesn't transfer the possession of the book to him, he'll cut off her windpipe, so that she won't get enough air to live. I once asked Roger why so many of his examples were so bloodthirsty. He replied, "Ah, but notice how clearly you remember them!"
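
Minsky's paraphrase of that representation can be rendered, loosely, as a nested structure of conceptual-dependency primitives. MTRANS (transfer of mental information), ATRANS (transfer of an abstract relationship such as possession), and INGEST (taking a substance, here air, into the body) are genuine CD primitives, but the exact nesting in this Python sketch is only illustrative, not Schank's published notation:

    # A sketch of the threat as a conceptualization that Jack communicates to Mary.
    threat = {
        "act": "MTRANS",              # Jack transfers a conceptualization into Mary's mind
        "actor": "Jack",
        "to": "Mary",
        "object": {                   # the communicated conceptualization: a conditional
            "if_not": {
                "act": "ATRANS",      # Mary transferring possession of the book to Jack
                "actor": "Mary",
                "object": "book",
                "to": "Jack",
            },
            "then": {
                "act": "INGEST",      # Mary taking in air...
                "actor": "Mary",
                "object": "air",
                "enabled": False,     # ...is disabled: Jack cuts off her windpipe
                "caused_by": "Jack",
            },
        },
    }

    print(threat["actor"], threat["act"], "to", threat["to"])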

Francisco Varela: Roger Schank is somebody I don't know personally, but I know what he has to say. In some sense, Schank is another good example of somebody who stands on the opposite side of the fence from me, regarding the understanding of mind. For Schank, there's a fundamental assumption of mind as some kind of logical machine, a rationalist mind. The basic approach I take is that this is just simply a hangover from Western tradition, and that "mind" is fundamentally not rational. It's not a decision-making, software-type process. In that sense, Schank serves as a sparring partner.

Steven Pinker: Roger Schank, I think, is another example of how a scientist's reaction to a theory is "What have you done for me lately?" Roger is more of an engineer than a scientist; he doesn't have a theory of how the human brain learns and uses language based on detailed study of children and adults talking. His goal was to build computer programs that understand, and that's a very different enterprise. There was a rather acrimonious debate between him and Chomsky's followers in the 1970s. But a lot of that energy may have been wasted, because they were talking past each other.

Chomsky was looking at one small part of the problem of understanding human language: namely, how children acquire the grammar of their mother tongue. His answer was one that I agree with: that the brain has, among other things, some circuitry dedicated to learning grammar, and some aspects of the design of grammar are built in. Chomsky argued that this is one of the most interesting questions about language, but he'd be the first to admit that it's only a small part of the scientific problem of how people use language in understanding stories and in conversation, to say nothing of the problem of what the best ways are of building a computer to do these things. Roger had a much more ambitious goal, in terms of engineering: that is, to write programs that could understand stories. He said, "A theory like Chomsky's doesn't help me solve my problem; knowing the universal constraints on grammars of all languages isn't going to help me devise a program that can understand stories in English. Therefore Chomsky was wrong about language."

This was unfortunate. Much of the debate between Chomsky and Schank is another case of the blind men and the elephant. They're asking different questions, so the answers they come up with aren't really contradictory. Chomsky, in my opinion, is right in saying that there's an autonomous mental organ for grammar and that a child can acquire grammar only if the basic design of the grammar of the world's languages is in some sense built in. Roger is right in that actual use of language, in conversation or understanding, involves a lot more than grammar (such as knowledge of how people interact with one another in typical situations), and that therefore to tell the whole story about how conversation works, you can't simply have a theory of grammar but you must embed it in a theory of knowledge about the world and social interactions.

W. Daniel Hillis: The Roger Schank I knew was a thorn in everybody's side, constructively so. The interesting thing about Roger Schank, something he shares with Minsky, is the fact that he's produced an incredible string of students. Anybody who's produced such a great string of students has to be a constructive pain in the ass. He's always taken an adversarial stance in his theories. He doesn't just say, "Here's my theory." He says, "Here's why I'm right and everybody else is an idiot." He's often right.

Daniel C. Dennett: I've always relished Schank's role as a gadfly and as a naysayer, a guerrilla in the realm of cognitive science, always asking big questions, always willing to discard his own earlier efforts and say they were radically incomplete for interesting reasons. Part of Roger's view is that the mind is an amazing collection of gadgets, held together with some very interesting sorts of baling wire. With that sort of view, of course, you can't have a systematic scientific research program, so he doesn't try to. He's an opportunistic explorer of his own ideas. He still gets interesting results. A lot of his effort is spent trying to lead people in what he thinks are the right directions and fomenting whatever revolution he's currently fomenting, rather than trying to work out in a solitary way the final truth about anything. He's a gadfly and a good one.

One of the ideas he's best known for is "scripts": stereotypic situation-types or narrative fragments out of which, he claimed, we construct most if not all of our cognitive prowess. He probably would agree that now his own efforts on behalf of scripts can be seen as the unintended refutation of a superficially promising idea. There was something right about it, but everybody started beating up on the idea, and the more we looked, and the more we saw what was involved if you tried to make it work, the more we could see that scripts by themselves couldn't do the job he first thought they could. Roger himself was probably the most insightful critic of scripts. Good! We learned something that wasn't obvious. People who say that it was obvious from the outset reveal that they haven't thought very seriously about the problem. It wasn't clear what scripts could and couldn't do until Roger forced us to look hard at the idea.



Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright 1995 by John Brockman. All rights reserved.