
by Marvin Minsky [11.5.02]
Topic: UNIVERSE

To say that the universe exists is silly, because it says that the universe is one of the things in the universe. So there's something wrong with questions like, "What caused the Universe to exist?"

MARVIN MINSKY, mathematician and computer scientist, is considered one of the fathers of Artificial Intelligence. He is Toshiba Professor of Media Arts and Sciences at the Massachusetts Institute of Technology; cofounder of MIT's Artificial Intelligence Laboratory; and the author of eight books, including The Society of Mind.

THE EMOTION UNIVERSE

[MARVIN MINSKY:] I was listening to this group talking about universes, and it seems to me there's one possibility that's so simple that people don't discuss it. Certainly a question that occurs in all religions is, "Who created the universe, and why? And what's it for?" But something is wrong with such questions because they make extra hypotheses that don't make sense. When you say that X exists, you're saying that X is in the Universe. It's all right to say, "this glass of water exists" because that's the same as "This glass is in the Universe." But to say that the universe exists is silly, because it says that the universe is one of the things in the universe. So there's something wrong with questions like, "What caused the Universe to exist?"

The only way I can see to make sense of this is to adopt the famous "many-worlds theory" which says that there are many "possible universes" and that there is nothing distinguished or unique about the one that we are in - except that it is the one we are in. In other words, there's no need to think that our world 'exists'; instead, think of it as being like a computer game, and consider the following sequence of 'Theories of It':

(1) Imagine that somewhere there is a computer that simulates a certain World, in which some simulated people evolve. Eventually, when these become smart, one of those persons asks the others, "What caused this particular World to exist, and why are we in it?" But of course that World doesn't 'really exist' because it is only a simulation.

(2) Then it might occur to one of those people that, perhaps, they are part of a simulation. Then that person might go on to ask, "Who wrote the Program that simulates us, and who made the Computer that runs that Program?"

(3) But then someone else could argue that, "Perhaps there is no Computer at all. Only the Program needs to exist - because once that Program is written, then this will determine everything that will happen in that simulation. After all, once the computer and program have been described (along with some set of initial conditions) this will explain the entire World, including all its inhabitants, and everything that will happen to them. So the only real question is what is that program and who wrote it, and why?" (A small illustration of this point appears after item 4 below.)

(4) Finally another one of those 'people' observes, "No one needs to write it at all! It is just one of 'all possible computations'! No one has to write it down. No one even has to think of it! So long as it is 'possible in principle,' then people in that Universe will think and believe that they exist!"
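
Here is a minimal sketch of that point about programs and initial conditions, using a one-dimensional cellular automaton purely as a stand-in for a "simulated world"; the rule number, width, and number of steps are arbitrary choices, not anything from the text above. Once the rule and the starting row are written down, every later row is already fixed - running the loop only prints out what was determined from the start.

```python
# A toy deterministic "world": a one-dimensional cellular automaton.
# The rule (the "program") plus the initial row fix the entire history.
RULE = 110                       # arbitrary choice of rule
WIDTH, STEPS = 32, 8             # arbitrary world size and history length

def step(cells):
    """Compute the next row from each cell and its two neighbours (wrapping)."""
    nxt = []
    for i in range(len(cells)):
        pattern = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % len(cells)]
        nxt.append((RULE >> pattern) & 1)
    return nxt

world = [0] * WIDTH
world[WIDTH // 2] = 1            # the initial condition

for _ in range(STEPS):
    print("".join("#" if c else "." for c in world))
    world = step(world)
```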

So we have to conclude that it doesn't make sense to ask about why this world exists. However, there still remain other good questions to ask, about how this particular Universe works. For example, we know a lot about ourselves - in particular, about how we evolved - and we can see that, for this to occur, the 'program' that produced us must have certain kinds of properties. For example, there cannot be structures that evolve (that is, in the Darwinian way) unless there can be some structures that can make mutated copies of themselves; this means that some things must be stable enough to have some persistent properties. Something like molecules that last long enough, etc.

So this, in turn, tells us something about Physics: a universe that has people like us must obey some conservation-like laws; otherwise nothing would last long enough to support a process of evolution. We couldn't 'exist' in a universe in which things are too frequently vanishing, blowing up, or being created in too many places. In other words, we couldn't exist in a universe that has the wrong kinds of laws. (To be sure, this leaves some disturbing questions about worlds that have no laws at all. This is related to what is sometimes called the "Anthropic Principle." That's the idea that the only worlds in which physicists can ask about what created the universe are the worlds that can support such physicists.)

The Certainty Principle

In older times, when physicists tried to explain Quantum Theory to the public, they talked about what they call the uncertainty principle, and said that the world isn't the way Newton described it. They emphasized 'uncertainty' - that everything is probabilistic and indeterminate. However, they rarely mentioned the fact that it's really just the opposite: it is only because of quantization that we can depend on anything! For example, in classical Newtonian physics, complex systems can't be stable for long. Jerry Sussman and Jack Wisdom once simulated our Solar System, and showed that the large outer planets would be stable for billions of years. But they did not simulate the inner planets - so we have no assurance that our planet is stable. It might be that enough of the energy of the big planets could be transferred to throw our Earth out into space. (They did show that the orbit of Pluto must be chaotic.)
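
To make the flavor of such simulations concrete, here is a minimal sketch of a planar gravitational three-body integration, run twice from almost identical starting states to track how a tiny perturbation grows. It is nothing like the Sussman-Wisdom work itself: the units, masses, orbits, and step count below are invented for illustration.

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, masses):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=20000):
    """Leapfrog (kick-drift-kick) integration; returns the final positions."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos

masses = np.array([1.0, 1e-3, 3e-6])                    # "sun", "big planet", "small planet"
pos = np.array([[0.0, 0.0], [5.2, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 0.44], [0.0, 1.0]])   # roughly circular orbits

# A second copy of the world, with the small planet nudged by one part in a billion.
pos2 = pos.copy()
pos2[2, 0] += 1e-9

end1 = integrate(pos.copy(), vel.copy(), masses)
end2 = integrate(pos2, vel.copy(), masses)
print("separation of the small planet between the two runs:",
      np.linalg.norm(end1[2] - end2[2]))
```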

Yes, quantum theory shows that things are uncertain: if you have a DNA molecule there's a possibility that one of its carbon atoms will suddenly tunnel out and appear in Arcturus. However, at room temperature a molecule of DNA is almost certain to stay in its place for billions of years - because of quantum mechanics - and that is one of the reasons that evolution is possible! For quantum mechanics is the reason why most things don't usually jump around! So this suggests that we should take the anthropic principle seriously, by asking, "Which possible universes could have things that are stable enough to support our kind of evolution?" Apparently, the first cells appeared quickly after the Earth got cool enough; I've heard estimates that this took less than a hundred million years. But then it took another three billion years to get to the kinds of cells that could evolve into animals and plants. This could only happen in possible worlds whose laws support stability. It could not happen in a Newtonian Universe. So this is why the world that we're in needs something like quantum mechanics - to keep things in place! (I discussed this "Certainty Principle" in my chapter in the book Feynman and Computation, A.J.G. Hey, editor, Perseus Books, 1999.)
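
A rough order-of-magnitude check on the tunneling claim above, using the standard WKB estimate; the mass, barrier height, and barrier width that follow are generic chemical-scale assumptions, not data about any particular bond:

\[
P \;\sim\; \exp\!\left(-\frac{2}{\hbar}\int \sqrt{2m\,\bigl(V(x)-E\bigr)}\;dx\right)
\]

For a carbon atom (m on the order of 2 x 10^-26 kg) facing a barrier of a few electron-volts over roughly an angstrom, the exponent comes out to several hundred, so P is far below 10^-100; even multiplied by something like 10^13 vibrational "attempts" per second over billions of years, the expected number of escapes is effectively zero. That is the sense in which quantization keeps the molecule in place.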

Intelligence

Why don't we yet have good theories about what our minds are and how they work? In my view this is because we're only now beginning to have the concepts that we'll need for this. The brain is a very complex machine, far more advanced than today's computers, yet it was not until the 1950s that we began to acquire such simple ideas about (for example) memory - such as the concepts of data structures, cache memories, priority interrupt systems, and such representations of knowledge as 'semantic networks.' Computer science now has many hundreds of such concepts that were simply not available before the 1960s.

Psychology itself did not much develop before the twentieth century. A few thinkers like Aristotle had good ideas about psychology, but progress thereafter was slow; it seems to me that Aristotle's suggestions in the Rhetoric were about as good as those of other thinkers until around 1870. Then came the era of Galton, Wundt, William James and Freud - and we saw the first steps toward ideas about how minds work. But still, in my view, there was little more progress until the Cybernetics of the '40s, the Artificial Intelligence of the '50s and '60s, and the Cognitive Psychology that started to grow in the '70s and '80s.

Why did psychology lag so far behind so many other sciences? In the late 1930s a biologist named Jean Piaget in Switzerland started to observe the behavior of his children. In the next ten years of watching these kids grow up, he wrote down hundreds of little theories about the processes going on in their brains, and wrote about 20 books, all based on observing three children carefully. Although some researchers still nitpick about his conclusions, the general structure seems to have held up, and many of the developments he described seem to happen at about the same rate and the same ages in all the cultures that have been studied. The question isn't, "Was Piaget right or wrong?" but "Why wasn't there someone like Piaget 2000 years ago?" What was it about all previous cultures that no one thought to observe children and try to figure out how they worked? It certainly was not from lack of technology: Piaget didn't need cyclotrons, but only glasses of water and pieces of candy.

Perhaps psychology lagged behind because it tried to imitate the more successful sciences. For example, in the early 20th century there were many attempts to make mathematical theories about psychological subjects - notably learning and pattern recognition. But there's a problem with mathematics. It works well for Physics, I think, because fundamental physics has very few laws - and the kinds of mathematics that developed in the years before computers were good at describing systems based on just a few - say, 4, 5, or 6 - laws, but don't work well for systems based on the order of a dozen laws. Physicists like Newton and Maxwell discovered ways to account for large classes of phenomena based on three or four laws; however, with 20 assumptions, mathematical reasoning becomes impractical. The beautiful subject called Theory of Groups begins with only five assumptions - yet this leads to systems so complex that people have spent their lifetimes on them. Similarly, you can write a computer program with just a few lines of code that no one can thoroughly understand; however, at least we can run the computer to see how it behaves - and sometimes see enough to then make a good theory.

However, there's more to computer science than that. Many people think of computer science as the science of what computers do, but I think of it quite differently: Computer Science is a new collection of ways to describe and think about complicated systems. It comes with a huge library of new, useful concepts about how mental processes might work. For example, most of the ancient theories of memory envisioned knowledge as being like facts in a box. Later theories began to distinguish ideas about short and long-term memories, and conjectured that skills are stored in other ways.

However, Computer Science suggests dozens of plausible ways to store knowledge away - as items in a database, or sets of "if-then" reaction rules, or in the form of semantic networks (in which little fragments of information are connected by links that themselves have properties), or program-like procedural scripts, or neural networks, etc. Neural networks, for example, are wonderful for learning certain things, but almost useless for other kinds of knowledge, because few higher-level processes can 'reflect' on what's inside a neural network. This means that the rest of the brain cannot think and reason about what it's learned - that is, about what was learned in that particular way. In artificial intelligence, we have learned many tricks that make programs faster - but that, in the long run, lead to limitations, because the results of neural-network-style learning are too 'opaque' for other programs to understand.
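
As one illustration of that list, here is a toy semantic network in the sense described above: nodes for little fragments of information, connected by links that carry properties of their own. The node names, relation labels, and confidence field are invented for the example, not anything standard.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    relation: str            # e.g. "is-a", "has-part"
    target: "Node"
    confidence: float = 1.0  # links can carry properties of their own

@dataclass
class Node:
    name: str
    links: list = field(default_factory=list)

    def connect(self, relation, target, confidence=1.0):
        self.links.append(Link(relation, target, confidence))

    def related(self, relation):
        return [link.target.name for link in self.links if link.relation == relation]

canary, bird, wings = Node("canary"), Node("bird"), Node("wings")
canary.connect("is-a", bird)
bird.connect("has-part", wings, confidence=0.9)

print(canary.related("is-a"))   # -> ['bird']
```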

Yet even today, most brain scientists do not seem to know, for example, about cache memory. If you buy a computer today you'll be told that it has a big memory on its slow hard disk, but that it also has a much faster memory called a cache, which remembers the last few things it did in case it needs them again, so it doesn't have to go and look somewhere else for them. And modern machines each use several such schemes - but I've not heard anyone talk about the hippocampus that way. All this suggests that brain scientists have been too conservative; they've not made enough hypotheses - and therefore, most experiments have been trying to distinguish between wrong alternatives.
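
For readers who haven't met the idea, here is a minimal sketch of such a cache: a small, fast store that keeps the most recently used items so that a slow lookup can often be skipped. The capacity, the key name, and the slow_lookup stand-in are all invented for illustration.

```python
from collections import OrderedDict

class LRUCache:
    """Keep the most recently used items in a small, fast store."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key, slow_lookup):
        if key in self.items:
            self.items.move_to_end(key)        # mark as recently used
            return self.items[key]
        value = slow_lookup(key)               # fall back to the slow store
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict the least recently used item
        return value

def slow_lookup(key):
    print(f"  (slow lookup for {key})")
    return f"value of {key}"

cache = LRUCache()
print(cache.get("fact-42", slow_lookup))   # triggers the slow lookup
print(cache.get("fact-42", slow_lookup))   # answered from the cache this time
```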

Reinforcement vs. Credit Assignment

There have been several projects that were aimed toward making some sort of "Baby Machine" that would learn and develop by itself - to eventually become intelligent. However, all such projects, so far, have only progressed to a certain point, and then became weaker or even deteriorated. One problem has been finding adequate ways to represent the knowledge that they were acquiring. Another problem was not having good schemes for what we sometimes call 'credit assignment' - that is, how do you learn things that are relevant, things that are essential rather than accidental? For example, suppose that you find a new way to handle a screwdriver so that the screw remains in line and doesn't fall out. What is it that you learn? It certainly won't suffice merely to learn the exact sequence of motions (because the spatial relations will be different next time) - so you have to learn at some higher level of representation. How do you make the right abstractions? Also, when some experiment works, and you've done ten different things on that path toward success, which of those should you remember, and how should you represent them? How do you figure out which parts of your activity were relevant? Older psychology theories used the simple idea of 'reinforcing' what you did most recently. But that doesn't seem to work so well as the problems at hand get more complex. Clearly, one has to reinforce plans and not actions - which means that good credit assignment has to involve some thinking about the things that you've done. But still, no one has designed and debugged a good architecture for doing such things.
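
Here is a minimal sketch of that contrast, with invented action names and an invented discount factor: one tally reinforces only the most recent action before a success, while the other spreads discounted credit back across the whole plan that led to it. It is meant only to make the distinction concrete, not to stand for anyone's actual architecture.

```python
from collections import defaultdict

# One successful run through an (invented) plan for driving a screw.
plan = ["pick up screwdriver", "align screw", "press gently", "turn slowly"]

last_action_scores = defaultdict(float)   # "reinforce what you did most recently"
plan_scores = defaultdict(float)          # spread credit across the whole plan

# Old-style reinforcement: only the final action before success gets credit.
last_action_scores[plan[-1]] += 1.0

# Plan-level credit assignment: every step gets some credit, discounted
# the further it was from the successful outcome.
gamma = 0.8
for distance, action in enumerate(reversed(plan)):
    plan_scores[action] += gamma ** distance

print(dict(last_action_scores))
print({action: round(score, 2) for action, score in plan_scores.items()})
```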

We need better programming languages and architectures.

I find it strange how little progress we've seen in the design of problem-solving programs - or languages for describing them, or machines for implementing those designs. The first experiments to get programs to simulate human problem-solving started in the early 1950s, just before computers became available to the general public; for example, the work of Newell, Simon, and Shaw using the early machine designed by John von Neumann's group. To do this, they developed the list-processing language IPL. Around 1960, John McCarthy developed a higher-level language, LISP, which made it easier to do such things; now one could write programs that could modify themselves in real time. Unfortunately, the rest of the programming community did not recognize the importance of this, so the world is now dominated by clumsy languages like Fortran, C, and their successors - which describe programs that cannot change themselves. Modern operating systems suffered the same fate, so we see the industry turning to the 35-year-old system called Unix, a fossil retrieved from the ancient past because its competitors became so filled with stuff that no one could understand and modify them. So now we're starting over again, most likely to make the same mistakes again. What's wrong with the computing community?
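
To show the kind of thing meant by a program that modifies itself at run time, here is a minimal sketch - in Python rather than LISP, and with a function invented purely for illustration - of a program that treats its own code as data, rewrites it, and redefines one of its functions while running.

```python
# The function below is invented purely for illustration.
source = "def greet(name):\n    return 'hello, ' + name\n"

namespace = {}
exec(source, namespace)                  # build the function from its own text
print(namespace["greet"]("world"))       # -> hello, world

# The running program edits its own source and redefines the function.
patched = source.replace("'hello, '", "'goodbye, '")
exec(patched, namespace)
print(namespace["greet"]("world"))       # -> goodbye, world
```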

Expertise vs. Common Sense

In the early days of artificial intelligence, we wrote programs to do things that were very advanced. One of the first such programs was able to prove theorems in Euclidean geometry. This was easy because geometry depends only upon a few assumptions: Two points determine a unique line. If there are two lines then they are either parallel or they intersect in just one place. Or, two triangles are the same in all respects if two sides and the angle between them are equal. This is a wonderful subject because you're in a world where the assumptions are very simple, there are only a small number of them, and you use a logic that is very clear. It's a beautiful place, and you can discover wonderful things there.

However, I think that, in retrospect, it may have been a mistake to do so much work on tasks that were so 'advanced.' The result was that, until today, no one paid much attention to the kinds of problems that any child can solve. That geometry program did about as well as a superior high school student could do. Then one of our graduate students wrote a program that solved symbolic problems in integral calculus. Jim Slagle's program did this well enough to get a grade of A in MIT's first-year calculus course. (However, it could only solve symbolic problems, and not the kinds that were expressed in words.) Eventually, the descendants of that program evolved to be better than any human in the world, and this led to the successful commercial mathematical assistant programs called MACSYMA and Mathematica. It's an exciting story - but those programs still could not solve "word problems." However, in the mid-1960s, graduate student Daniel Bobrow wrote a program that could solve problems like "Bill's father's uncle is twice as old as Bill's father. 2 years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age." Most high school students have considerable trouble with that. Bobrow's program was able to convert those English sentences into linear equations, and then solve those equations - but it could not do anything at all with sentences that had other kinds of meanings. We tried to improve that kind of program, but this did not lead to anything good, because those programs did not know enough about how people use commonsense language.
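
For the curious, here is the arithmetic behind that quoted problem (not Bobrow's STUDENT program itself), taking "their ages" to mean all three people: Bill, his father, and the father's uncle.

```python
import numpy as np

# Unknowns: Bill's age b, his father's age f, and the father's uncle's age u.
#   u = 2f                the uncle is twice as old as the father
#   f + 2 = 3(b + 2)      in two years the father is three times Bill's age
#   b + f + u = 92        the sum of the three ages is 92
# Rearranged into A @ [b, f, u] = c:
A = np.array([[0.0, -2.0, 1.0],    # -2f +  u = 0
              [-3.0, 1.0, 0.0],    # -3b +  f = 4
              [1.0,  1.0, 1.0]])   #   b + f + u = 92
c = np.array([0.0, 4.0, 92.0])

b, f, u = np.linalg.solve(A, c)
print(f"Bill is {b:.0f}, his father is {f:.0f}, the uncle is {u:.0f}")
# -> Bill is 8, his father is 28, the uncle is 56
```

The point of Bobrow's program, of course, was getting from the English sentences to those three equations; the solving step is the easy part.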

By 1980 we had thousands of programs, each good at solving some specialized problems - but none of those programs could do the kinds of things that a typical five-year-old can do. A five-year-old can beat you in an argument if you're wrong enough and the kid is right enough. To make a long story short, we've regressed from calculus and geometry and high school algebra and so forth. Now, only in the past few years have a few researchers in AI started to work on the kinds of commonsense problems that every normal child can solve. But although there are perhaps a hundred thousand people writing specialized expert programs, I've found only about a dozen people in the world who aim toward finding ways to make programs deal with the kinds of everyday, commonsense jobs of the sort that almost every child can do.

Marvin Minsky
Mathematician; computer scientist; Professor of Media Arts and Sciences, MIT; cofounder, MIT's Artificial Intelligence Laboratory; author, The Emotion Machine

Mr. Foreman complains that he is being replaced (by "the pressure of information overload") with "a new self that needs to contain less and less of an inner repertory of dense cultural inheritance" because he is connected to "that vast network of information accessed by the mere touch of a button."

I think that this is ridiculous because I don't see any basic change; there always was too much information. Fifty years ago, if you went into any big library, you would have been overwhelmed by the amounts contained in the books therein. Furthermore, that "touch of a button" has improved things in two ways: (1) it has changed the time it takes to find a book from perhaps several minutes to several seconds, and (2) in the past it usually took many minutes, or even hours, to find what you wanted inside that book—but now a computer can help you search through the text, and I see this as nothing but good.

Indeed, it seems to me that only one thing has gone badly wrong. I do not go to libraries any more, because I can find most of what I want by using that wonderful touch of a button! However, the copyright laws have gotten worse—and I think that the best thoughts still are in books because, frequently, in those ancient times, the authors developed their ideas for years before they started to publicly babble. Unfortunately, not much of that stuff from the past fifty years is in the public domain, because of copyrights.

So, in my view, it is not the gods, but Foreman himself who has been pounding on his own head. Perhaps if he had stopped longer to think, he would have written something more sensible. Or on second thought, perhaps he would not—if, in fact, he actually has been replaced.

Douglas Rushkoff
Media Analyst; Documentary Writer; Author, Present Shock

 

I don't think it's the computer itself enabling the pancake people, but the way networked computers give us access to other people. It's not the data—for downloaded data is just an extension of the wealthy gentleman in his library, enriching himself as a "self." What creates the pancake phenomenon is our access to other people, and the corresponding dissolution of our perception of knowledge as an individual's acquisition.

Foreman is hinting at a "renaissance" shift I've been studying for the past few years.

The original Renaissance invented the individual. With the development of perspective in painting came the notion of perspective in everything. The printing press fueled this even further, giving individuals the ability to develop their own understanding of texts. Each man now had his own take on the world, and a person's storehouse of knowledge and arsenal of techniques were the measure of the man.

The more I study the original Renaissance, the more I see our own era as having at least as much renaissance character and potential. Where the Renaissance brought us perspective painting, the current one brings virtual reality and holography. The Renaissance saw humanity circumnavigating the globe; in our own era we've learned to orbit it from space. Calculus emerged in the 17th century, while systems theory and chaos math emerged in the 20th. Our analog to the printing press is the Internet, our equivalent of the sonnet and extended metaphor is hypertext.

Renaissance innovations all involve an increase in our ability to contend with dimension: perspective. Perspective painting allowed us to see three dimensions where there were previously only two. Circumnavigation of the globe changed the world from a flat map to a 3D sphere. Calculus allowed us to relate points to lines and lines to objects; integrals move from x to x-squared, to x-cubed, and so on. The printing press promoted individual perspectives on religion and politics. We all could sit with a text and come up with our own, personal opinions on it. This was no small shift: it's what led to the Protestant wars, after all.

Out of this newfound experience of perspective was born the notion of the individual: the Renaissance Man. Sure, there were individual people before the Renaissance, but they existed mostly as parts of small groups. With literacy and perspective came the abstract notion of the person as a separate entity. This idea of a human being as a "self," with independent will, capacity, and agency, was pure Renaissance—a rebirth and extension of the Ancient Greek idea of personhood. And from it, we got all sorts of great stuff like the autonomy of the individual, agency, and even democracy and the republic. The right to individual freedom is what led to all those revolutions.

But thanks to this new emphasis on the individual, it was also during the first great Renaissance that we developed the modern concept of competition. Authorities became more centralized, and individuals competed for how high they could rise in the system. We like to think of it as a high-minded meritocracy, but the rat race that ensued only strengthened the authority of central command. We learned to compete for resources and credit made artificially scarce by centralized banking and government.

While our renaissance also brings with it a shift in our relationship to dimension, the character of this shift is different. In a hologram, a fractal, or even an Internet web site, perspective is no longer about the individual observer's position; it's about that individual's connection to the whole. Any part of a holographic plate recapitulates the whole image; bringing all the pieces together generates greater resolution. Each detail of a fractal reflects the whole. Web sites live not by their own strength but by the strength of their links. As Internet enthusiasts like to say, the power of a network is not the nodes, it's the connections.

That's why new models for both collaboration and progress have emerged during our renaissance—ones that obviate the need for competition between individuals, and instead value the power of collectivism. The open source development model, shunning the corporate secrets of the competitive marketplace, promotes the free and open exchange of the code underlying the software we use. Anyone and everyone is invited to make improvements and additions, and the resulting projects—like the Firefox browser—are more nimble, stable, and user-friendly. Likewise, the development of complementary currency models, such as Ithaca Hours, allows people to agree together on what their goods and services are worth to one another without involving the Fed. They don't need to compete for currency in order to pay back the central creditor—currency is an enabler of collaborative efforts rather than purely competitive ones.

For while the Renaissance invented the individual and spawned many institutions enabling personal choices and freedoms, our renaissance is instead reinventing the collective in a new context. Originally, the collective was the clan or the tribe—an entity defined no more by what its members had in common with each other than by what they had in opposition to the clan or tribe over the hill.

Networks give us a new understanding of our potential relationships to one another. Membership in one group does not preclude membership in a myriad of others. We are all parts of a multitude of overlapping groups with often paradoxically contradictory priorities. Because we can contend with having more than one perspective at a time, we needn't force them to compete for authority in our hearts and minds—we can hold them all, provisionally. That's the beauty of renaissance: our capacity to contend with multiple dimensions is increased. Things don't have to be just one way or directed by some central authority, alive, dead or channeled. We have the capacity to contend with spontaneous, emergent reality.

We give up the illusion of our power as deriving from some notion of the individual collecting data, and find that having access to data through our network-enabled communities gives us an entirely more living flow of information, one that is appropriate to the ever-changing circumstances surrounding us. Instead of growing high, we grow wide. We become pancake people.

Roger Schank
Psychologist & Computer Scientist; Engines for Education Inc.; Author, Teaching Minds: How Cognitive Science Can Save Our Schools

When I hear people talk about artificial intelligence (AI), I am constantly astounded by how many people use computers but really don't understand them at all. I shouldn't be surprised by most folks' lack of comprehension, I suppose, since the people inside AI often fail to get it as well. I recently attended a high-level meeting in Washington where the AI people and the government people were happily dreaming about what computers will soon be able to do and promising that they would soon make it happen, when they really had no idea what was involved in what they were proposing. So, that being said, let me talk simply about what it would mean and what it would look like for a computer to be intelligent.

Simple point number 1: A smart computer would have to be able to learn.

This seems like an obvious idea. How smart can you be if every experience seems brand new? Each experience should make you smarter, no? If that is the case, then any intelligent entity must be capable of learning from its own experiences, right?

Simple point number 2: A smart computer would need to actually have experiences.

This seems obvious too and follows from simple point number 1. Unfortunately, this one isn't so easy. There are two reasons it isn't so easy. The first is that real experiences are complex, and the typical experience that today's computers might have is pretty narrow. A computer that walked around the moon and considered seriously what it was seeing and decided where to look for new stuff based on what it had just seen would be having an experience. But, while current robots can walk and see to some extent, they aren't figuring out what to do next and why. A person is doing that. The best robots we have can play soccer. They play well enough but really not all that well. They aren't doing a lot of thinking. So there really aren't any computers having much in the way of experiences right now.

Could there be computer experiences in some future time? Sure. What would they look like? They would have to look a lot like human experiences. That is, the computer would have to have some goal it was pursuing, and some interactions caused by that goal that caused it, when it encountered obstacles to the plans it had generated, to modify what it was up to in mid-course and think about a new strategy for achieving that goal. This experience might be conversational in nature, in which case it would need to understand and generate complete natural language, or it might be physical in nature, in which case it would need to be able to get around and see, and know what it was looking at. This stuff is all still way too hard today for any computer. Real experiences, ones that one can learn from, involve complex social interactions in a physical space, all of which is being processed by the intelligent entities involved. Dogs can do this to some extent. No computer can do it today. Tomorrow maybe.

The problem here is with the goal. Why would a computer have a goal it was pursuing? Why do humans have goals they are pursuing? They might be hungry or horny or in need of a job, and that would cause goals to be generated, but none of this fits computers. So, before we begin to worry about whether computers would make mistakes, we need to understand that mistakes come from complex goals not trivially achieved. We learn from the mistakes we make when the goal we have failed at satisfying is important to us and we choose to spend some time thinking about what to do better next time. To put this another way, learning depends upon failure, and failure depends upon having had a goal one cares about achieving and that one is willing to spend time thinking about how to achieve next time using another plan. Two-year-olds do this when they realize saying "cookie" works better than saying "wah" when they want a cookie.
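
Here is a minimal sketch of that toddler example as a learning loop: an agent with a goal tries competing strategies, and it is the failures that push it toward the better one. The strategy names, success odds, and learning rate are invented; this illustrates the idea, not how children actually learn.

```python
import random

# Two candidate strategies for the goal "get a cookie", and the agent's
# current estimate of how well each works (it starts out knowing nothing).
preferences = {"say 'wah'": 0.0, "say 'cookie'": 0.0}
# How the world actually behaves - hidden from the agent.
success_odds = {"say 'wah'": 0.2, "say 'cookie'": 0.9}

for trial in range(500):
    # Usually act on the current best estimate; occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(preferences))
    else:
        action = max(preferences, key=preferences.get)

    got_cookie = random.random() < success_odds[action]

    # The update runs on failures as well as successes; it is the repeated
    # failures of "say 'wah'" that make the agent reconsider its plan.
    outcome = 1.0 if got_cookie else 0.0
    preferences[action] += 0.1 * (outcome - preferences[action])

# With high probability, "say 'cookie'" now has the higher estimated value.
print(preferences)
```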

The second part of the experience point is that one must know one has had an experience, and know the consequences of that experience with respect to one's goals, in order to even think about improving. In other words, a computer that thinks would be conscious of what had happened to it, or would be able to think it was conscious of what had happened to it, which may not be the same thing.

Simple point number 3: Computers that are smart won't look like you and me.

All this leads to the realization that human experience depends a lot on being human. Computers will not be human. Any intelligence they ever achieve will have to come by virtue of their having had many experiences that they have processed and understood and learned from that have helped them better achieve whatever goals they happen to have.

So, to Foreman's question: Computers will not be programmed to make mistakes. They will be programmed to attempt to achieve goals and to learn from experience. They will make mistakes along the way, as does any intelligent entity.

As to Dyson's remarks: "Turing proved that digital computers are able to answer most - but not all - questions that can be asked in unambiguous terms." Did he? I missed that. Maybe he proved that computers could follow instructions, which is neither here nor there. It is difficult to give instructions about how to learn new stuff or get what you want. Google's "allowing people with questions to find answers" is nice but irrelevant. The Encyclopedia Britannica does that as well, and no one makes claims about its intelligence or draws any conclusions whatever from it. And Google is by no means an operating system—I can't even imagine what Dyson means by that, or does he just not know what an operating system is?

People have nothing to fear from smart machines. With the current state of understanding of AI, I suspect they won't even have to see any smart machines any time soon. Foreman's point was about people, after all, and people are being changed by the computer's ubiquity in their lives. I think the change is, like all changes in the nature of man's world, interesting and potentially profound, and probably for the best. People may well be more pancake-like, but the syrup is going to be very tasty.

James J. O'Donnell
Classical Scholar, University Professor, Georgetown University; Author, The Ruin of the Roman Empire; Webmaster, St. Augustine's Website

Can computers achieve everything the human mind can achieve? Can they, in other words, even make fruitful mistakes? That's an ingenious question.

Of course, computers never make mistakes—or rather, a computer's "mistake" is a system failure, a bad chip or a bad disk or a power interruption, resulting in some flamboyant mis-step, but computers can have error-correcting software to rescue them from those. Otherwise, a computer always does the logical thing. Sometimes it's not the thing you wanted or expected, and so it feels like a mistake, but it usually turns out to be a programmer's mistake instead.

It's certainly true that we are hemmed in constantly by technology. The technical wizardry in the graphic representation of reality that generated a long history of representative art is now substantially eclipsed by photography and later techniques of imaging and reproduction. Artists and other humans respond by doing more and more creatively in the zone that is still left un-competed, but if I want to know what George W. Bush looks like, I don't need to wait for a Holbein to track him down. We may reasonably expect to continue to be hemmed in. I have trouble imagining what students will know fifty years from now, when devices in their hands spare them the need to know multiplication tables or spelling or dates of the kings of England. That probably leaves us time and space for other tasks, but the sound of the gadgets chasing us is palpable. What humans will be like, accordingly, in 500 years is just beyond our imagining.

So I'll ask what I think is the limit case question: can a computer be me? That is to say, could there be a mechanical device that embodied my memory, aptitudes, inclinations, concerns, and predilections so efficiently that it could replace me? Could it make my mistakes?

I think I know the answer to that one.

Rebecca Newberger Goldstein
Philosopher, Novelist; Author, Betraying Spinoza; 36 Arguments for the Existence of God: A Work of Fiction

I admit that I'm of two distinct minds on the question posed by Richard Foreman as to whether the technological explosion has led to an expansion or a flattening of our selves. In fact, a few years ago when I was invited to represent the humanities at Princeton University's celebration of the centenary of their graduate studies, I ended up writing a dialogue to express my inner bifurcation. My way of posing the question was to wonder whether the humanities, those "soul-explorations," had any future at all, given that the soul had been all but pounded out of existence, or in any case pounded into a very attenuated sort of existence.

My one character, dubbed Lugubrioso, had a flair for elaborate phraseology that rivaled the Master's, and he turned it to deploring the loss of the inner self's solemn, silent spaces, the hushed corridors where the soul communes with itself, chasing down the subtlest distinctions of fleeting consciousness, catching them in finely wrought nets of words, each one contemplated for both its precise meaning and euphony, its local and global qualities, one's flight after that expressiveness which is thought made surer and fleeter by the knowledge of all the best that had been heretofore thought, the cathedral-like sentences (to change the metaphor) that arose around the struggle to do justice to inexhaustible complexity themselves making of the self a cathedral of consciousness. (Lugubrioso spoke in long sentences.)

He contemplated with shuddering horror the linguistic impoverishment of our technologically abundant lives, arguing that privation of language is both an effect and a cause of privation of thought. Our vocabularies have shrunk and so have we. Our expressive styles have lost all originality and so have we. The passivity of our image-heavy forms of communication—too many pictures, not enough words, Lugubrioso cried out, pointing his ink-stained finger at the popular culture—substitutes an all-too-pleasant anodyne for the rigors of thinking itself, and our weakness for images encourages us to reduce people, too—even our very own selves—to images, which is why we are drunk on celebrityhood and feel ourselves to exist only to the extent that we exist for others.

What is left but image when the self has stopped communing with itself, so that in a sad gloss on Bishop Berkeley's apothegm, our esse has become percipi, our essence is to be perceived? Even the torrents of words posted on "web-related locations" (of the precise nature of which Lugubrioso had kept himself immaculately ignorant) are not words that are meant for permanence; they are pounded out on keyboards at the rate at which they are thought, and will vanish into oblivion just as quickly, quickness and forgetfulness being of the whole essence of the futile affair, the long slow business of matching coherence to complexity unable to keep up, left behind in the dust.

My other character was Rosa, and she pointed out that at the very beginning of this business that Lugubrioso kept referring to, in stentorian tones, as "Western Civilization," Plato deplored the newfangled technology of writing and worried that it tolled the death of thought. A book, Plato complained in the Phaedrus, can't answer for itself. (Rosa found the precise quotation on the web. She found 'stentorian,' too, when she needed it, on her computer's thesaurus. She sort of knew the word, thought it might be "sentorian" or "stentorious" - but she'll know where to find it if she ever needs it again, a mode of knowing that Lugubrioso regards as epistemologically damnable.)

When somebody questions a book, Plato complained, it just keeps repeating the same thing over and over again. It will never, never, be able to address the soul as a living breathing interlocutor can, which is why Plato, committing his thoughts to writing with grave misgivings, adopted the dialogue form, hoping to approximate something of the life of real conversation. Plato's misgivings are now laughable—nobody is laughing harder than Lugubrioso at the thought that books diminish rather than enhance the inner life—and so, too, will later generations laugh at Lugubrioso's lamentations that the cognitive enhancements brought on by computers will make of us less rather than more.

Human nature doesn't change, Rosa tried to reassure Lugubrioso, backing up her claims with the latest theories of evolutionary psychology propounded by Steven Pinker et al. Human nature is inherently expansive and will use whatever tools it develops to grow outward into the world. The complexity suddenly facing us can feel overwhelming, and perhaps such souls as Lugubrioso's will momentarily shrink at how much they must master in order to appropriate this complexity and make it their own. It's that shrinkage that Lugubrioso is feeling, confusing his own inadequacy to take in the new forms of knowing with the inadequacy of the forms themselves. Google doesn't kill people, Rosa admonished him. People kill people.

Lugubrioso had a heart-felt response, but I'll spare you.
