
Edge 322 — July 19, 2010
20,500 words




The Technium, Salon, Die Presse, Boing Boing, Medgadget, Computing


An Idea Whose Time Has Come

In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them.  The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.

One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer.

W. Daniel ("Danny") Hillis

In May, 2004, Edge published W. Daniel "Danny" Hillis's essay "'Aristotle': The Knowledge Web", in which he noted:

...humanity's accumulated store of information will become more accessible, more manageable, and more useful. Anyone who wants to learn will be able to find the best and the most meaningful explanations of what they want to know. Anyone with something to teach will have a way to reach those who want to learn. Teachers will move beyond their present role as dispensers of information and become guides, mentors, facilitators, and authors. The knowledge web will make us all smarter. The knowledge web is an idea whose time has come.

In his essay, Hillis asked the Edge community to begin a conversation, and a number of people who think deeply about such matters participated: Douglas Rushkoff, Marc D. Hauser, Stewart Brand, Jim O'Donnell, Jaron Lanier, Bruce Sterling, Roger Schank, George Dyson, Howard Gardner, Seymour Papert, Freeman Dyson, Esther Dyson, Kai Krause, and Pamela McCorduck.

In 2005, George Dyson noted in his prescient essay Turing's Cathedral:

My visit to Google? Despite the whimsical furniture and other toys, I felt I was entering a 14th-century cathedral — not in the 14th century but in the 12th century, while it was being built. Everyone was busy carving one stone here and another stone there, with some invisible architect getting everything to fit. The mood was playful, yet there was a palpable reverence in the air. "We are not scanning all those books to be read by people," explained one of my hosts after my talk. "We are scanning them to be read by an AI."

When I returned to highway 101, I found myself recollecting the words of Alan Turing, in his seminal paper Computing Machinery and Intelligence, a founding document in the quest for true AI. "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children," Turing had advised. "Rather we are, in either case, instruments of His will providing mansions for the souls that He creates."

In March, 2007, Hillis announced a new company called "Metaweb" and its free database, Freebase.com, and wrote a second Edge essay: "Addendum to 'Aristotle' (The Knowledge Web)." In it, he wrote:

In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them.  The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.

One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer.

Danny Hillis at SciFoo at the Googleplex, July 2007

Last week, buried in the news on a summer Friday afternoon, was the announcement that Google had acquired Metaweb.

It all began with the technological breakthroughs in the realm of massively parallel computers and their associated algorithms. Credit for this goes to Hillis, who is primarily responsible for breaking through the von Neumann bottleneck of the serial computer.

At MIT in the late seventies, Hillis built his "Connection Machine," a computer that makes use of integrated circuits and, in its parallel operations, closely reflects the workings of the human mind. In 1983, he spun off a computer company called Thinking Machines, which built the world's fastest supercomputer by utilizing parallel architecture.

Hillis's computers, which were fast enough to simulate the process of evolution itself, showed that programs of random instructions can, by competing, produce new generations of programs — an approach that led to the creation of his Knowledge Web. Hillis's work demonstrates that when systems are not engineered but instead allowed to evolve — to build themselves — then the resultant whole is greater than the sum of its parts. Simple entities working together produce some complex thing that transcends them; the implications for biology, engineering, and physics have been, and will increasingly be, enormous.
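
The evolutionary dynamic described above, random variation plus competitive selection, can be sketched in a few lines of Python. The bit-string "programs" and the fitness measure below are invented purely for illustration; this is the generic loop, not a reconstruction of Hillis's actual simulations.

```python
import random

random.seed(0)

# A population of random "programs" (here just bit strings) competes on
# a fitness measure; the fitter half survives and reproduces with
# point mutation. Over generations, fitness climbs without any
# explicit design of the winning individuals.
BITS = 32

def fitness(bits):
    return sum(bits)  # crude measure: count of 1-bits

population = [[random.randint(0, 1) for _ in range(BITS)]
              for _ in range(40)]

for generation in range(60):
    # selection: keep the fitter half
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    # reproduction: each survivor yields one mutated child
    children = [[b ^ (random.random() < 0.02) for b in parent]
                for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # markedly fitter than any random starting string
```

Because the best individual is always retained, fitness never decreases; improvement comes entirely from mutation and competition, which is the point of the passage above.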

Philosopher Daniel C. Dennett noted that with the idea of a massively parallel architecture, which would be capable of exploring a different part of the space of possible computations, Hillis opened up a vast area:

What the British mathematician Alan Turing did, with the concept of the Turing machine, was to provide a succinct definition of the entire space of all possible computations. The machine developed by John von Neumann was a mechanical realization of Turing's idea. A von Neumann machine is the computer on your desk — the standard serial computer. In principle, the von Neumann machine — which is, for all practical purposes, a universal Turing machine — can compute any computable function; but if you don't have a billion years to wait around, you can't actually explore interesting parts of that space. The actual space explorable by any one architecture is quite limited. It sends this vanishingly thin thread out into this huge multidimensional space. To explore other parts of that space, you have got to invent other kinds of architecture. Massive parallel architectures are everybody's first, second, and third choices.

What Danny did was to create if not the first then one of the first really practical, really massive, parallel computers. It precipitated a gold rush. We had a new exploration vehicle, which was looking at portions of design space that had never been looked at before. Danny was very good at selling that idea to people in different scientific fields and demonstrating, with some of the early applications, just how powerful and exciting this vehicle was.

Two years ago this month, Hillis instigated an interesting Edge Reality Club conversation cross-referenced with a discussion on the Encyclopedia Britannica website on Nicholas Carr's Atlantic essay "Is Google Making Us Stupid?" (now expanded into Carr's book The Shallows). Hillis wrote:

We evolved in a world where our survival depended on an intimate knowledge of our surroundings. This is still true, but our surroundings have grown. We are now trying to comprehend the global village with minds that were designed to handle a patch of savanna and a close circle of friends. Our problem is not so much that we are stupider, but rather that the world is demanding that we become smarter. Forced to be broad, we sacrifice depth. We skim, we summarize, we skip the fine print and, all too often, we miss the fine point. We know we are drowning, but we do what we can to stay afloat.

As an optimist, I assume that we will eventually invent our way out of our peril, perhaps by building new technologies that make us smarter, or by building new societies that better fit our limitations. In the meantime, we will have to struggle. Herman Melville, as might be expected, put it better: "well enough they know they are in peril; well enough they know the causes of that peril; nevertheless, the sea is the sea, and these drowning men do drown."

We create tools and then we mold ourselves in their image. With his Knowledge Web, Hillis has proposed something new, something different. I can make a case that his "'Aristotle' (The Knowledge Web)" essay is the kind of seminal document, like Turing's Computing Machinery and Intelligence or McCulloch et al.'s What the Frog's Eye Tells the Frog's Brain, that appears only a few times in a century. But now, with the Google announcement, we will all find out, in Internet time, how his ideas play out in the real world.

Now is the time to revisit, in chronological order, Hillis's original 2004 essay, "'Aristotle': The Knowledge Web", the ensuing Reality Club conversation, and his 2007 "Addendum to 'Aristotle'", and to have a conversation regarding what I am taking the liberty of calling "The Hillis Knowledge Web".


For background reading on Hillis and his Knowledge Web, see:

• Part V: "Something Beyond Ourselves" in The Third Culture: Beyond The Cybernetic Revolution (1995)
• "The Genius" in Digerati: Encounters With The Cyber Elite

W. DANIEL (Danny) HILLIS is an inventor, scientist, engineer, author, and intellectual. He pioneered the concept of parallel computers that is now the basis for most supercomputers, as well as the RAID disk array technology used to store large databases. He holds over 100 U.S. patents, covering parallel computers, disk arrays, forgery prevention methods, and various electronic and mechanical devices. He is also the designer of a 10,000-year mechanical clock.

Presently, he is Chairman and Chief Technology Officer of Applied Minds, Inc., a research and development company in Los Angeles, creating a range of new products and services in software, entertainment, electronics, biotechnology and mechanical design. The company also provides advanced technology, creative design and consulting services to a variety of clients.

Previously, Hillis was Vice President, Research and Development at Walt Disney Imagineering, and a Disney Fellow. He developed new technologies and business strategies for Disney's theme parks, television, motion pictures, Internet and consumer products businesses. He also designed new theme park rides, a full-sized walking robot dinosaur, and various micromechanical devices.

W. Daniel Hillis's Edge Bio Page


By W. Daniel Hillis

(first published on Edge on May 6, 2004)

By John Brockman

Part of Danny Hillis's charm is his childlike curiosity and demeanor. The first time we talked was on the telephone one Sunday morning in 1988 when he was at his home in Cambridge. We got into a serious discussion about the relationship of physics to computation. "This is interesting," he said. "I'd like to come to New York and continue the conversation face-to-face." Three hours later, my doorbell rang, and there stood a young man, looking like a clean-cut hippie. He had long hair, wore a plain white T-shirt and jeans, and carried nothing. He lived up to his reputation as the "boy wonder". We talked for hours.

Danny's energies at that time were concentrated on getting processors to work together so that computation takes place among communicating processors, as happens on the Internet. What fired Danny up was the Net's potential to become an organism of intelligent agents interacting with each other, with an intelligence of its own that goes beyond the intelligence of the individual agents. "In a sense," he said, "the Net can become smarter than any of the individual people on the Net or sites on the Net. Parallel processing is the way that kind of emergent phenomenon can happen. The Net right now is only a glimmer of that."

Danny described the Internet of that time simply as a huge document that is stored in a lot of different places and that can be modified by many people at once, but essentially a document in the old sense of the word. In principle, the Internet could be done on paper, but the logistics are much better handled with the computer. "I am interested in the step beyond that," he says, "where what is going on is not just a passive document, but an active computation, where people are using the Net to think of new things that they couldn't think of as individuals, where the Net thinks of new things that the individuals on the Net couldn't think of."

"In the long run, the Internet will arrive at a much richer infrastructure, in which ideas can potentially evolve outside of human minds. You can imagine something happening on the Internet along evolutionary lines, as in the simulations I run on my parallel computers. It already happens in trivial ways, with viruses, but that's just the beginning. I can imagine nontrivial forms of organization evolving on the Internet. Ideas could evolve on the Internet that are much too complicated to hold in any human mind."

The passages quoted above are from my book Digerati, published in 1995, a time when the ideas set forth by Danny were technologically implausible.

In 2000, still on the same track, Danny wrote the prescient paper Aristotle, in which he proposes "The Knowledge Web", again at a time when the technological possibilities did not equal the vision.

But now in 2004, Danny is a grown-up wonder, and we are in the age of Google, of boy wonders Sergey Brin and Larry Page, and the rapidly expanding potential of the Internet as a knowledge web.

In this regard, thanks to funding from the Markle Foundation, Danny has been able to assemble a group of people to begin discussing the implementation of a medical application based on his ideas. Other possibilities for applications are open-ended. What has changed in the past nine years is that the implementation of his ideas is now technologically feasible.

Rather than limit exploration of this set of ideas to a small group of thinkers, Danny has issued an invitation to the participants of Edge — itself an example of a knowledge web, and a very sophisticated one at that — to enter into a discussion on Aristotle.


W. DANIEL (Danny) HILLIS, currently Chairman and Chief Technology Officer of Applied Minds, Inc., is best known for his innovative work in the design and implementation of massively parallel supercomputers. Applied Minds is a research and development company creating a range of new products and services in software, entertainment, electronics, biotechnology and mechanical design. He is the author of The Pattern On The Stone: The Simple Ideas That Make Computers Work.

Danny Hillis's Edge Bio Page

THE REALITY CLUB: Responses by Douglas Rushkoff, Marc D. Hauser, Stewart Brand, Jim O'Donnell, Jaron Lanier, Bruce Sterling, Roger Schank, George Dyson, Howard Gardner, Seymour Papert, Freeman Dyson, Esther Dyson, Kai Krause, Pamela McCorduck



[W. DANIEL HILLIS:] I have always envied Alexander the Great, because he had Aristotle as a personal tutor. In those days, Aristotle knew pretty much everything there was to know. Even better, Aristotle understood the mind of Alexander. He understood which topics interested Alexander, what Alexander knew and did not know, and what kinds of explanations Alexander preferred. Aristotle had been a student of Plato, and he was himself a great teacher. We know from his writings that he was full of examples, explanations, arguments, and stories. Through Aristotle, Alexander had the knowledge of the world at his command.

Of course no one today knows all that is known, in the sense that Aristotle did. Now there is far too much knowledge for that to be possible. The scientific revolution, and the technological revolution that followed it, led to a self-reinforcing explosion of knowledge. The explosion continues. Today not even the most highly trained scientist, the most scholarly historian, or the most competent engineer can hope to have more than a general overview of what is known. Only specialists understand most of the new discoveries in science, and even the specialists have trouble keeping up.

This problem isn't new. In 1945, Vannevar Bush wrote an essay for the Atlantic Monthly about the problem of too much knowledge. He wrote:

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions that he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.

Bush's imagined solution to this problem was something he called Memex. Memex was envisioned as a system for manipulating and annotating microfilm (computers were just then being invented). The system would contain a vast library of scholarly text that could be indexed by associations and personalized to the user. Although Memex was never built, the World Wide Web, which burst onto the scene half a century later, is a rough approximation of it.

As useful as the Web is, it still falls far short of Alexander's tutor or even Bush's Memex. For one thing, the Web knows very little about you (except maybe your credit card number). It has no model of how you learn, or what you do and do not know—or, for that matter, what it does and does not know. The information in the Web is disorganized, inconsistent, and often incorrect. Yet for all its faults, the Web is good enough to give us a hint of what is possible. Its most important attribute is that it is accessible, not only to those who would like to refer to it but also to those who would like to extend it. As any member of the computer generation will explain to you, it is changing the way we learn.

A New Tool for Learning

Let's put aside the World Wide Web for a moment to consider what kind of automated tutor could be created using today's best technology. First, imagine that this tutor program can get to know you over a long period of time. Like a good teacher, it knows what you already understand and what you are ready to learn. It also knows what types of explanations are most meaningful to you. It knows your learning style: whether you prefer pictures or stories, examples or abstractions. Imagine that this tutor has access to a database containing all the world's knowledge. This database is organized according to concepts and ways of understanding them. It contains specific knowledge about how the concepts relate, who believes them and why, and what they are useful for. I will call this database the knowledge web, to distinguish it from the database of linked documents that is the World Wide Web.

For example, one topic in the knowledge web might be Kepler's third law (that the square of a planet's orbital period is proportional to the cube of its distance from the sun). This concept would be connected to examples and demonstrations of the law, experiments showing that it is true, graphical and mathematical descriptions, stories about the history of its discovery, and explanations of the law in terms of other concepts. For instance, there might be a mathematical explanation of the law in terms of angular momentum, using calculus. Such an explanation might be perfect for a calculus-loving student who is familiar with angular momentum. Another student might prefer a picture or an interactive simulation. The database would contain information, presumably learned from experience, about which explanations would work well for which student. It would contain representations of many successful paths to understanding Kepler's law.
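
Kepler's third law is also easy to check numerically, the kind of interactive demonstration such a database might link to. Using standard approximate orbital data (semi-major axis a in astronomical units, period T in years), the ratio T^2 / a^3 comes out close to 1 for every planet:

```python
# Verify Kepler's third law: T^2 / a^3 is the same constant for all
# planets (equal to 1 when T is in years and a is in AU).
# The orbital figures are standard approximate values.
planets = {
    "Mercury": (0.387, 0.241),    # (a in AU, T in years)
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    ratio = T**2 / a**3
    print(f"{name:8s} T^2/a^3 = {ratio:.3f}")   # each prints ~1.000
```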

Given such a database, it is well within the range of current technology to write a program that acts as a tutor by selecting and presenting the appropriate explanations from the database. The automated tutor would not need to create the explanations themselves—human teachers would create the explanations in the knowledge web, and the paths that connect them. The program would merely find the appropriate paths between what a student already knows and what he or she needs to learn. Along the way, the automatic tutor would quiz the student and respond to questions, much as a human tutor does. In the process, it would improve both its model of the student and the information in the database about the success of the explanations.
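
The path-finding step described here is, at its core, a graph search. The sketch below uses a concept graph invented purely for illustration (the topics and links are not from any real curriculum) to find the shortest chain of explanations from any concept a student already knows to the one to be learned:

```python
from collections import deque

# A toy concept graph: each edge points from a concept to concepts
# whose explanations build directly on it. Invented for illustration.
explains = {
    "algebra":          ["functions"],
    "functions":        ["ellipses", "calculus"],
    "ellipses":         ["planetary orbits"],
    "calculus":         ["angular momentum"],
    "angular momentum": ["Kepler's third law"],
    "planetary orbits": ["Kepler's third law"],
}

def explanation_path(known, goal):
    """Breadth-first search from any already-known concept to the goal,
    returning the shortest chain of explanations to present."""
    queue = deque([k] for k in known)
    seen = set(known)
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in explains.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from what the student knows

print(explanation_path({"algebra", "functions"}, "Kepler's third law"))
```

A real tutor would weight the edges by the student's learning style and the recorded success of each explanation, but the shortest-path skeleton is the same.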

For example, imagine yourself in the position of an engineer who is designing a critical component and wants to learn something about fault-tolerant design. This is a fairly specialized topic, and most engineers are not familiar with it; a standard engineering education treats the topic superficially, if at all. Fault-tolerant design is an area normally left to specialists. Unless you happen to have taken a specialized course, you are faced with a few unsatisfactory alternatives. You can call in a specialist as a consultant, but if you don't know much about the field it's difficult to know what kind of specialist you need, or if the time and expense are worth the trouble. You could try reading a textbook on fault-tolerant design, but such a text would probably assume knowledge you may have forgotten or may never have known. Besides, a textbook is likely to be out of date, so you will also have to find the relevant journals to read about recent developments. If you find them, they will almost certainly be written for specialists and will be difficult for you to read and understand. Given these unsatisfactory choices, you will probably just give up. You will go ahead and design the module without benefit of the proper knowledge, and hope for the best.

Learning with Aristotle

Now let's assume that you have access to the automatic tutor. Let's call it Aristotle.

Aristotle would begin by asking you how much time you're willing to devote to this project and the level of detail you want. Then Aristotle would show you a map of what you need to learn. The tutor program does this by comparing what you know to what needs to be known to design fault-tolerant modules. It knows what needs to be known because this is a common problem faced by many engineers, and knowledgeable teachers have identified the key concepts many times. Aristotle knows what you know because it has worked with you for a long time. There may be some things you're familiar with that Aristotle doesn't know you know, but you can point these things out to Aristotle when it shows you the learning plan. Aristotle might take your word for what you know, but it is more likely to quiz you about some of the key concepts, just to make sure.

Aristotle plans its lessons by finding chains of explanations that connect the concepts you need to learn to what you already know. It chooses the explanatory paths that match your favorite style of learning, including enough side paths, interesting examples, and related curiosities to match your level of interest. Whenever possible, Aristotle follows the paths laid down by great teachers in the knowledge web. Aristotle probably also has a model of how you want to be paced: when you have learned enough for one day, when it needs to throw in an interesting side story, etc. Along the way, Aristotle will not only explain things to you but will also ask you questions—both to make you think and to verify for itself that concepts are being learned successfully. When an explanation doesn't work, Aristotle tries another approach, and of course you can always ask questions, request examples, and give Aristotle explicit feedback on how it's doing. Aristotle then uses all these forms of feedback to adjust the lesson, and in the process it learns more about you.

The process of teaching helps Aristotle learn to be a better teacher. If an explanation doesn't work, and consistently raises a particular type of question, then Aristotle records this information in the knowledge web, where it can be used in planning the paths of other students. The feedback eventually makes its way back to the knowledge web's human authors, so that they can use it to improve their explanations.

Like any good tutor, Aristotle allows you to get sidetracked from a lesson plan and follow your interests. If you find a particular example compelling, you may want to know more about it. If some concept you have just learned allows you to appreciate an elegant explanation of some fact you already know, Aristotle may point it out. If you are close to understanding something of critical importance or something that would be of particular interest to you, Aristotle may decide to show it to you even though it is not strictly necessary as part of the lesson. Of course, as Aristotle gets to know you, it will know how much you like this sort of distraction.

Once you have learned the material, and Aristotle has verified that you have learned it, the program will update its database to indicate that you have recently learned it. As you learn more and more, it will continue to connect your recently acquired knowledge to the new concepts you are learning, until you have fully integrated them. Because Aristotle knows which subjects you are and have been interested in, it can consolidate your learning by finding connections that tie these subjects together.

For example, there's a short film of Dr. Richard Feynman explaining a principle of quantum mechanics called Bell's inequality. Most people have little interest in quantum mechanics and no interest at all in understanding Bell's inequality. Most quantum physicists already understand Bell's inequality and would learn little from Feynman's explanation. On the other hand, if you are a student who is just learning quantum mechanics, who has just mastered the necessary prerequisites, Feynman's explanation can be exciting, startling, and enlightening. It not only can explain something new but can also help you make sense of what you have recently learned. The trick is showing the film clip at just the right time. Aristotle can do that.

I used an engineering example to describe Aristotle because most engineering knowledge is a straightforward, factual type of knowledge. Similar techniques would work for learning other subjects—history, mathematics, or the kind of technical information that might normally be conveyed in a training course or a technical manual. Of course, there are types of useful knowledge that a program like Aristotle would not be suited to: Aristotle would not be of much use in learning how to ride a bicycle or tell a joke. It would not replace hands-on experience, nor would it replace the enthusiasm and wisdom of a great teacher. What Aristotle would do is help you gain mastery of factual knowledge—exactly the kind of knowledge that is overwhelming us.

How the Knowledge Web Changes Education

In his book The Diamond Age, the science fiction writer Neal Stephenson describes an automatic tutor called The Primer that grows up with a child. Stephenson's Primer does everything described above and more. It becomes a friend and playmate to the heroine of the novel, and guides not only her intellectual but also her emotional development. Such a Primer is beyond the capabilities of current technology, but even a program as limited as Aristotle would be a step in that direction. Teachers and students both understand that a school is no longer able to "pre-load" its students with the knowledge that they will need for life. Instead, a good school teaches the basics—reading, arithmetic, social skills—and introduces students to subjects that they can learn more about. It gives them an overview of knowledge as a starting point for further learning. A good school education also gives students the skills and the motivation to keep learning on their own.

Teachers know that individual attention helps a child learn and they would like to give their students more of it. Even a young child has special interests, special topics that he or she would like to know more about. A good teacher learns to recognize these individual interests and tries to nurture them, but this takes a lot of time. A program like Aristotle will give teachers a tool to help children follow their passions. It will also enable teachers to evaluate a child's progress and to provide individualized instruction in areas in which the child has gaps. A computer program like Aristotle cannot replace most of what goes on in school, but it can complement what goes on there. It can free teachers from the routine job of broadcasting information and give them more time for individual attention to their students.

A system like Aristotle also enables teachers by giving them a way to publish. Any good teacher knows how to teach certain topics especially well, but there are few easy ways for them to share that information effectively with others. A teacher can write a textbook, or develop a curriculum, but each of those efforts is a major undertaking. There is no simple way for a teacher to publish an isolated idea about how to explain something. If a system like Aristotle existed, then with the proper authoring tools a teacher could publish a single explanation—an effort comparable to creating a Web page. In fact, existing Web pages are a good source of initial content for the knowledge web. As Marshall McLuhan said, "The content of the new medium is the old medium." The initial content of the knowledge web will be the old curriculum materials, textbooks, and explanatory pages that are already on the World Wide Web. The existing materials already contain most of the examples, problems, and explanations that the knowledge web will need.

As students gain access to the best explanations from the best teachers on a given subject, their own teachers will be able to take on the role of coaches and mentors. Freed from the burden of presenting the same information over and over, the teachers will be able to give greater individual attention to their students.

A Better Infrastructure for Publishing

The shared knowledge web will be a collaborative creation in much the same sense as the World Wide Web, but it can include the mechanisms for credit assignment, usage tracking, and annotation that the Web lacks. For instance, the knowledge web will be able to give teachers and authors recognition or even compensation for the use of their materials. Teachers and learners will be able to add annotations and links to explanations—connecting them to other content, suggesting improvements, or rating their accuracy, usefulness, and appropriateness for children of various ages. For instance, with such a system, it would be possible for the learner to accept only knowledge that had been certified as correct by an authority such as Encyclopedia Britannica or the National Academy of Sciences.

All of this raises the possibility of a different kind of economic underpinning for the knowledge web, one that is not possible on the document Web of today. The support infrastructure for payments would allow different parts of the knowledge web to operate in different ways. For instance, public funding might pay for the creation of curriculum materials for elementary school teachers and students, but specialized technical training could be offered on a fee or subscription basis. Companies could pay for encoding the knowledge necessary to train their employees and customers, consultants would be able to publish explanations as advertising for their services, and enthusiasts would offer their wisdom for free. Students could subscribe not only to particular areas of knowledge but to particular types of annotations, such as commentary or seals of approval. Schools and universities could charge for interaction with teachers and certification of the student's knowledge. The system could also become the ultimate hiring tool, since employers could map areas of knowledge that they need in prospective employees.

What Makes the Knowledge Web Different?

One way to think about the knowledge web is to compare it with other publishing systems that support teaching and learning. These include the World Wide Web, Internet news groups, traditional textbooks, and refereed journals. The knowledge web takes important ideas from these systems. These ideas include peer-to-peer publishing, vetting and peer review, linking and annotation, mechanisms for paying authors, and guided learning. Each of these existing media demonstrates the success of one or more of these ideas. They are all incorporated into the knowledge web.

Peer-to-Peer Teaching
One of the reasons why Internet news groups and the World Wide Web have enjoyed such runaway success is that they allow people to communicate with each other directly, without publishers as intermediaries. The great advantage of such a peer-to-peer publishing system is that anyone with something interesting to say has an easy way to say it. The Internet has eliminated the publishing bottleneck and has created a flood of authorship. This basic human desire to share knowledge is what will drive the creation of the knowledge web. The task of recording the world's knowledge is so overwhelming that only peer-to-peer publishing can plausibly accomplish it. Yet the knowledge web is not only a record of knowledge but also a way of imparting it. The knowledge web will do for teaching what the World Wide Web did for publication. Peer-to-peer teaching will allow literally millions of people to help each other learn.

Vetting and Peer Review
One of the downsides of peer-to-peer publishing is quality control. Publishers of textbooks and journals do more than market and distribute; they also edit and select. In the case of peer-reviewed journals, some of the burden of quality control is shifted to the reviewers, but it is still coordinated by the publisher. On the World Wide Web, there is no commonly accepted system of rating and peer review, nor is there a mechanism to support one. The result is chaos. The information you find by searching on the World Wide Web is often irrelevant, badly presented, or just plain wrong. It is difficult to screen out obscene material and propaganda. It is almost impossible to sort the wheat from the chaff.

The knowledge web addresses this problem by supporting an infrastructure for peer review and third-party certification. It supports mechanisms for the labeling, rating, and categorization of material, both by the author and by third parties. The browsing tools will allow information to be filtered, sorted, and labeled according to these annotations. In addition, user feedback tools will be built into the browsing software to help identify material that is particularly good, bad, or controversial.
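The filtering described here amounts to selecting explanations by their third-party annotations. A minimal sketch, in Python, of what such a browsing-tool filter might do; the data model (dicts carrying `certified_by` and `ratings` fields) is invented for illustration and is not part of any existing system:

```python
# Hypothetical sketch: filter explanations by third-party annotations.
# The "certified_by"/"ratings" data model is illustrative only.

def filter_explanations(explanations, trusted_authorities, min_rating=3.0):
    """Keep only explanations certified by a trusted authority
    whose average third-party rating meets a threshold."""
    selected = []
    for exp in explanations:
        certs = set(exp.get("certified_by", []))
        ratings = exp.get("ratings", [])
        avg = sum(ratings) / len(ratings) if ratings else 0.0
        if certs & trusted_authorities and avg >= min_rating:
            selected.append(exp)
    return selected

explanations = [
    {"title": "Fault-tolerant design", "certified_by": ["NAS"], "ratings": [4, 5]},
    {"title": "Perpetual motion", "certified_by": [], "ratings": [5, 5]},
    {"title": "Router basics", "certified_by": ["NAS"], "ratings": [2, 2]},
]
chosen = filter_explanations(explanations, trusted_authorities={"NAS"})
print([e["title"] for e in chosen])  # -> ['Fault-tolerant design']
```

Note that a highly rated but uncertified entry is excluded, which is exactly the "accept only certified knowledge" behavior the essay describes.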

Linking and Annotation
Anyone who has used the World Wide Web understands the importance of linking. In principle, articles in conventional journals also support a kind of linking, in the form of footnotes and references, but these kinds of links are far less convenient to use than the "click throughs" of hypertext. Even the simple message-thread linking of threaded news groups helps make the information more usable. Students accustomed to hypertext find the linear arrangement of textbooks and articles confining and inconvenient. In this respect, the Web is clearly better. The knowledge web will allow an even more generalized form of linking than the World Wide Web. In the knowledge web, not only the author but also third parties will be able to add links and annotations to an explanation.

Ways to Pay Authors
An advantage of textbooks and journals over the World Wide Web is that they support a mechanism for paying the author. The World Wide Web has demonstrated that many authors are willing to publish information without payment, but it does not give them any convenient option to do otherwise. The knowledge web will provide that option by supporting various payment mechanisms, including subscription, pay per play, fee for certification, and usage-based royalties. It will also support and encourage the production of free content.

Much of the content on the knowledge web will probably be free, but there are a number of other economic models that can coexist with this. One of the most obvious is the paid course, in which a student pays tuition for a cluster of services, including access to teachers, curriculum materials, and interaction with other students, and some form of certification at the end. With the knowledge web, many institutions may choose to offer the curriculum material for free—as a form of advertising—and charge for the other services, especially the certification. The knowledge web would also help solve one of the online course providers' greatest problems, which is marketing. The knowledge web would help direct students to the courses that meet their needs.

Another model that may work well is a micropayment system, in which a student pays a fixed subscription fee for access to a wide range of information. Usage statistics would serve as a means to allocate the income among the various authors. This system has the advantage of rewarding authors for usefulness without penalizing students for use. The students' fees would be independent of the amount of use they make of the system. The ASCAP music royalty system and university payments for student access to Encyclopedia Britannica are examples of how such a system might work.
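The allocation rule behind such a system is simple proportional division of a fixed pool. A minimal sketch, with all figures invented for illustration:

```python
# Sketch: divide a fixed subscription pool among authors in
# proportion to how often students used each author's material.
# All figures here are invented for illustration.

def allocate_royalties(pool, usage_counts):
    """Split `pool` among authors proportionally to usage.
    `usage_counts` maps author -> number of accesses."""
    total = sum(usage_counts.values())
    if total == 0:
        return {author: 0.0 for author in usage_counts}
    return {author: pool * count / total
            for author, count in usage_counts.items()}

# e.g. 1,000 subscribers at $10/month -> a $10,000 monthly pool.
payouts = allocate_royalties(10_000.0, {"alice": 600, "bob": 300, "carol": 100})
print(payouts)  # -> {'alice': 6000.0, 'bob': 3000.0, 'carol': 1000.0}
```

The key property the essay identifies falls out of this rule: each student's fee is fixed regardless of usage, so authors are rewarded for usefulness without students being penalized for use.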

Guided Learning
Part of the information in a textbook is about the subject matter, but part of it is also about how to learn the subject matter. A good textbook is full of information on the plan of attack, strategies for learning, practice problems, and suggestions for further study. Part of what makes up a good curriculum is not just the material but the plan for moving through it. A textbook often has an accompanying "teacher guide" that contains more of this type of information. For example, the guide may note that if the student makes a certain type of error, the student is probably missing a particular point and needs it to be re-explained. Some of the best computer-aided instructional material also encodes such information. The knowledge web provides an easy mechanism for a teacher to include this type of information.

Table of Affordances

                             Web      News Groups  Textbooks  Journals
Peer-to-peer publishing      Yes      Yes          No         Limited
Supports linking             Yes      Limited      No         Limited
Ability to add annotations   No       Yes          No         No
Vetting and certification    No       Limited      Yes        Yes
Supports payment model       No       No           Yes        Yes
Supports guided learning     Limited  No           Yes        No

Achieving Critical Mass

Once the knowledge web achieves critical mass, it is easy to see how it can sustain itself. The real question is how it gets started. Presumably, first users will be adults, and the first adopters will be industry and government. Industry spends $60 billion a year on training for employees. It spends even more on customer support, product liability, poor design, and other costs that could be mitigated by better training. This is the first market for the knowledge web. The second market is probably the military, which is the single largest educator of adults. The third market is continuing education for individuals.

Adults are an easier initial market than children for several reasons. Adults are usually learning because they want to or because they need to. They are already motivated to learn. Adults often need specific knowledge for a specific reason. A recent survey of the American workplace showed that 80 percent of workers feel that additional education is important for them to be successful at their job. Adults are much more likely to treat time as a limited resource. They want to learn efficiently. Also, the mechanisms that pay for adult education are often more rational and less politically charged than the mechanisms that pay for the education of children. For-profit companies are likely to adapt quickly to a more efficient process.

Eventually traditional educational institutions will use this new system of learning. It will be used first by colleges and trade schools; then it will make its way into secondary and elementary education. It is important to emphasize that computers will not replace the teachers; rather, they will give teachers a new tool. Instead of spending most of their time broadcasting information to a group, teachers will have more time to help students integrate their knowledge through discussion and through individualized interaction.

There are three technical components necessary for such a system to work: the tutor (browser), the authoring tool, and the knowledge web itself. This last component is the most difficult to create, but fortunately it does not have to be built all at once. Even a small part of it would be useful. Presumably the knowledge web will get its start in a few narrow areas, probably determined by available funding. For instance, it is easy to imagine a scenario in which a part of the knowledge web is initially funded by a supplier to explain the use of its products. Imagine, for example, that Cisco publishes the knowledge of how to configure and maintain its routers in this form. Another scenario is that the government sponsors the creation of the system for a specific application, such as continuing education for schoolteachers or job training for factory workers. A company might pay for the development of training programs for its workers and customers. A foundation might sponsor an initial effort as a way to have an impact on education.

Summary: An Idea Whose Time Has Come

It seems almost inevitable that such a system will eventually be developed, but why is now the time to undertake such a project? After all, people have been dreaming of such a project for half a century, yet Memex, Xanadu, PLATO, WAIS, and numerous other schemes failed to achieve critical mass. Why should this one succeed? For one thing, the infrastructure is now ready for it. The knowledge web requires widespread access to network-connected computers capable of handling graphics, audio, and video. This did not exist until recently. But it is not just the technology that is ready for this idea—people are ready for it. Email, the Web, and video games have all whetted their appetites. The younger generation is more than ready. They expect something better than listening to lessons in class-sized groups.

At the same time that a solution is becoming possible, the problem is reaching a crisis point: the amount of knowledge is becoming overwhelming, and the need for it is increasing. There is a widespread conviction that something radical needs to be done about education—both the education of children and the continuing education of adults. The world is becoming so complicated that schools are no longer able to teach students what they need to know, but industry is not equipped to deal with the problem either. Something needs to change.

With the knowledge web, humanity's accumulated store of information will become more accessible, more manageable, and more useful. Anyone who wants to learn will be able to find the best and the most meaningful explanations of what they want to know. Anyone with something to teach will have a way to reach those who want to learn. Teachers will move beyond their present role as dispensers of information and become guides, mentors, facilitators, and authors. The knowledge web will make us all smarter. The knowledge web is an idea whose time has come.

(First published on Edge on May 6, 2004)

Responses by Douglas Rushkoff, Marc D. Hauser, Stewart Brand, Jim O'Donnell, Jaron Lanier, Bruce Sterling, Roger Schank, George Dyson, Howard Gardner, Seymour Papert, Freeman Dyson, Esther Dyson, Kai Krause, Pamela McCorduck


Danny simply yet fully articulates an idea he has been speaking about, in one way or another, for the past six years. It seems strangely fitting that this thinking is published just around the same time that Google has its IPO.

For the more-is-more, horizontally indistinguishable landscape of the world wide web is reaching a certain point of diminishing returns (why else would Google be going public?) and a new, more dynamically structured organization for human knowledge is in order. While we may not have been able to imagine the need for such a realtime, dynamically composed tutoring matrix, it certainly addresses Bush's desire for a fully accessible knowledge bank more adequately than piling more websites into the text-searchable wasteland.

The trick, of course, is achieving lift-off. My suggestion, rather than approaching the likes of Cisco or another corporate giant looking to train people about their industry, would be to tackle a highly critical and desperate area: a disease like cancer, AIDS, or multiple sclerosis, where armies of "untrained" but highly intelligent victims and families might like to lend their own cognitive processes to finding a cure. The Lorenzo's Oil contingent, if you will. They could prove a perfect test sample for your tutoring hypothesis, allowing them to become more versed in cancer-specific treatment methodologies than most physicians, without having to go to medical school.

And it might be the kind of cause that would generate more immediate acceptance of the applicability of this system to the world's most pressing problems.


Danny Hillis's notion of a web based tutor—a digital school marm—is interesting, like so many of his ideas. But like all forms of guided learning, part of the art form comes from a tutor who can not only teach effectively, but a tutor who deeply understands what another knows. The latter bit is non-trivial. Implicit in Danny's piece is an assumption that we are giving tests that properly extract what another knows.

I think this is the general problem with most education. Teachers fail to think about the different ways in which someone does or does not know something. Let's say you give an exam and student Danny scores 100 and student Marc scores 0. You have certainly given an exam that demonstrates differences in performance, but have you truly demonstrated a difference in their competence? Perhaps Marc freezes during multiple choice tests, or is just terrible at math.

Perhaps Danny has a photographic memory, regurgitates all the right answers, but hasn't a clue about the underlying principles. As everyone knows, test scores provide only one window into a person's knowledge. Different tests extract different kinds of knowledge. Different tests interact with different kinds of test takers. What makes someone vary from test to test?

Some of it is their knowledge — the facts and theories at their fingertips—some of it is their general capacity for reasoning, some of it their specific capacity to reason in a particular domain, and some of it has nothing to do with knowledge or reasoning, but merely the mood of the day, the composition of the class, and whether their boyfriend dumped them or they just ate a hot fudge sundae.

But there is another constraint: knowledge that is innate, part of our hardware. Although we are only beginning to uncover this machinery, the digital school marm will need to engage it. Part of this machinery, when properly harnessed, may well facilitate learning. Conversely, teaching that is insensitive to our innate knowledge may well retard learning.

Consider mathematics. Although our species alone has invented the symbol systems and operations that fill math books in calculus, geometry, and algebra, every human is born with a number sense that consists of two core mechanisms: one system quantifies small numbers [up to about 3 or 4] precisely and a second quantifies large numbers approximately [subject to Weber ratios as opposed to absolute numbers]. Work with brain damaged patients, brain imaging, and primate physiology confirms the anatomical specificity of these systems. Of course, when these mechanisms click into gear, they unconsciously generate judgments of quantity without revealing their mode of operation.

Intriguingly, Gauss made the following comment concerning his own insights into mathematics: "I have had the results for a long time, but I do not yet know how to arrive at them." Similarly, Boole noted that "It is not the essence of mathematics to be conversant with the ideas of number and quantity."

Currently, educators in mathematics have no awareness of these mechanisms, and thus typically implement teaching methods geared toward precision with large numbers. It is an open question, however, whether early math education might effectively tap this innate knowledge in order to not only speed up but enhance the acquisition of mathematical knowledge.

What's the difference between a great teacher and a good teacher? Studies by social psychologists reveal that students make judgments about the caliber of a professor within the first few minutes of the first lecture of the first class, and this judgment accurately predicts the evaluation they give the professor at the end of the course. Tell a good joke, and you may end up being perceived as the best thing since sliced bread. I am not sure I or anyone else has a coherent answer to the great-good distinction [I assume everyone can articulate what constitutes a bad teacher], but it is a topic worthy of study. Part of teaching entails dissemination of information.

But there are many other parts: opening minds, providing tools for thinking and reasoning, providing enjoyment about knowledge, etc. The digital school marm is a great idea, but we may have to confine her to only a few classes.


From massively parallel processing to massively parallel knowledge certainly feels like a natural progression in Danny's work.

Reading the 2000 paper in 2004, I crave two appendices to it from Danny...

1) Appraise the paper and Google (so far) in light of each other. What does the development of Google show was easily doable after all in the original difficult-seeming problem set? On the other hand, what parts of the Aristotle idea has Google clearly not solved yet? And, since Google apparently was not developed in response to the paper, what does their convergence of some uses and mechanisms suggest about the general robustness of the idea?

2) Danny is exceptionally alert to how intellectual tools evolve in the world. What parts of Aristotle does he think will arrive "anyway," and what parts will need special attention or funding? What are potential catalysts or accelerators? Contrariwise, what could stop the whole show? (Paralysis by copyright over-protection comes to mind, for example.) Since Danny's company, Applied Minds, has been working on pieces of Aristotle for a few years now, what can be reported from that experience?


All our paradigms of learning fall far short of what we really do. The paradigms still date back to the hunter-gatherer age of knowledge. We are trained when young to be squirrels, accumulating acorns for future use and building the skills to seek out just the acorn we need when we want it. I'm not sure that was ever an accurate description of how people do in fact learn, but it has been a model of how we teach, one acorn at a time. Bad studying (the sort of thing that happens in desperate hypercaffeinated dorm rooms at 3 a.m.) is a parody of that acorn-management learning system.

What we need are ways to imagine—and then tools to enact—a sampling, sorting, surfing, sifting learning style. There are still some things that can be carefully contained in boxes—at least for purposes of teaching—from which we can then methodically pick all the acorns. Calculus and Latin are probably like this, though it is notable that real teachers in both fields these days are experimenting with at least multiple strategies for getting the acorns out of the box and into the student's real working memory. Even the simple stuff isn't simple any more.

But other examples, like the ones Hillis gives, aren't so easily controlled. To think about fault-tolerant design requires either a high level of abstraction or a clear definition of context: are we talking automobiles, planetary probes, nanotubes, or software? Some principles carry over, but the guy who wants to design the planetary probe isn't going to be happy being led through a software engineering module. And no such module is going to be stable, in today's exploding knowledge universe, for more than about ten minutes tops—maybe more like five.

So Hillis is right: we need multiple new strategies. Tutor software is one, communities of practice and learning are another, search strategies are another. The really good news is that we are already making a lot of progress. Try inputting this string into Google: "cell phone fell in toilet". You can probably imagine why somebody would be curious about that, but I doubt you can imagine the richness of human experience and the usefulness of the information that Google can reveal on this extraordinarily important (for some people, at some urgent moments) topic. That's a community of learning coming into existence unconsciously, as people have similar experiences and share their experience and their learning in the public space of the Web. I haven't tried "cell phone stepped on by rhinoceros", but if that's a common experience at all, I'm sure there will be information of far more immediate quality and usefulness than I'd ever get by calling Nextel and being told that their prompts have changed and that I should listen to all of them before trying to guess which one hides an actual human being, who will tell me that nobody's ever asked that question before. That community didn't really exist, of course, until the search engine found it, on demand from me.

The answer to your obvious question now is, of course, that the soggy thing on a paper towel next to you on the desk while you type the search query isn't a phone any more, but if you rescue the SIM card, you can get a replacement phone and be back in business quickly with the same number and your dialing directory intact. That learning didn't just solve my problem; it taught me something more general about how cell phones work. Which is to say that all our deliberate strategies for better information management and better learning need to remember that it's the apparently low-level enabling techniques that have the biggest impact the quickest.

I don't think Hillis quite imagines that Hillis-soft Corporation will release Hillis-soft Doorways Tutorware v. 3.0 and sell it for $595 (pay for upgrades to get the version with the obvious bugs removed) any time soon. Some of what we can imagine will come from database designers, some from search engine advances, and some from what I imagine to be fiendishly simple and clever open source tools that turn up on some website someday with a name as odd as "Google" or "Yahoo" once seemed. Is Amazon's new A9 search engine going to do the trick? Well, it doesn't blow my socks off yet, but it already has what Hillis's piece (more than five minutes old and so, like all good visionary work, already coming true) says the web doesn't have, the ability to annotate web pages. While we slow down to talk about this stuff, the world goes on changing around us.


If computer folk are about to be flattered yet again with a rush of fresh cash (I'm thinking of that Google IPO, and the implied revitalization of Silicon Valley), even as the world at large might be falling apart, the least we can do is try to solve some of the long standing problems we were supposed to be doing something about all along. Education and the healthcare system come to mind immediately.

So, yes. Let's raise our army and do education right.

As Danny points out, there's a long history of utopian projects that aim to enhance learning through digital connectivity and good user interfaces. To his list I'd add a couple more of my favorite examples, Alan Kay's Squeak and Andy Van Dam's Exploratories. All the projects mentioned so far share the trait that they were built primarily by experts in order to be consumed by learners, but there have also been a variety of successful kid-driven networked learning technology experiments. My two favorites of these are Thinkquest and Whyville.

One point of potential confusion ought to be addressed: Moore's Law has the unfortunate side effect of allowing so many virtual objects, or worse, proposed virtual objects, to come into existence that it is impossible to find enough names for all of them. In the case of "Knowledge Web", there are already a few namesakes. There's James ("Connections") Burke's Knowledge Web, which proposes to organize knowledge historically using loops of human-to-human relationships, and will have a spiffy VR-ish user interface. There's also a Knowledge Web that's associated with the Semantic Web research community, which is led by Tim Berners-Lee.

Danny asks two big questions:

1) What technology strategy should we advocate? 2) What human strategy will not get bogged down in stupid politics?

Here's my take:

1) On technology

Why is Danny still carrying the burden of AI? What is gained by designing learning tools around a supposed artificial tutor? Certainly it's clear that we don't understand how a mind learns as yet. Yes, we're understanding more all the time about scattered details of the learning process, but I'm sure even the most hardcore of AIers would agree that many of our basic assumptions about learning are vulnerable to overhaul as we discover more about the brain and development in the coming decades. The only reason it's even possible to consider designing an artificial tutor is that it would be hard to isolate the effects of such a construction, so it would be hard to know if it really worked. We can't even make a robot car that can drive itself, so what makes it ok to assume that we can make a robot that can "model" a child's brain?

The AI debates are old. Whether or not the ultimate project of AI makes any sense (and I believe it does not), can we at least agree that a program to model what humans know or how humans learn is not something we can plan on in the near future?

While AI-ers like the idea of adding animated paper clips and "Wizards" to user interfaces as part of a march to a holy inauguration of a new life form, what happens in practice is that AI interfaces in training and education become the tools of the worst, dullest bureaucrats who seek to use automation to avoid liability and lower labor costs. Computers in the field are inevitably used to obscure feedback that could improve a process unless the user interface is designed so that it's clear that specific humans are responsible for what happens.

Absolutely every function that can be provided by an AI interface can be provided by an honest interface better. Let's make a "Google" instead of an "Ask Jeeves." Let's make something with a user interface that's honest about what it can do and leave fantasies of future AI to the movie makers.

The viability of an artificial tutor is not the only technology strategy question to be examined. A more subtle question is the degree to which knowledge can be represented by software at this time, in the future, or ever. If the tutor is like a verb, or an actor, knowledge representation is like the corresponding noun. Somehow "content" has to be found and presented in the proper context. Once the concern of theoreticians, automated content interpretation is no longer an academic question. Huge sums are spent, for instance, on attempts to improve software that can automatically classify internet content.

Three non-glamorous examples are spam catching, obscenity blocking, and terrorist interception. The results of the first two have become part of our common experience, and the results are better than useless but short of inspirational. (Does education need to rise to the level of the inspirational to succeed?) My own web pages are blocked in most public school libraries, much to my dismay. Spam-catching filters are worth using, but in my experience are not yet good enough. Whenever I've thought they were working, it turned out I was actually missing out on messages I would have liked to have received. Automation always seems to work best when it is allowed to obscure its own results.

The current crop of academic projects to add an ontology layer to the internet usually rely on the distributed volunteer efforts of large numbers of humans. Such approaches generally start with an ontology a little like the Dewey Decimal System, but different in that on the net a card catalog's categories can be more easily extended, and one can choose unlimited, or even fuzzy, partial categories for a given entry. There are also projects that hope to generate implicit ontologies by extending what Google and other search engines have done with specific words to loose collections of adjacent words. A less rigid model of association might generate a more useful ontology.
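The "implicit ontology from adjacent words" idea can be illustrated with a toy co-occurrence count over a tiny corpus. This is a deliberate simplification, not how any production search engine works; the corpus and function names are invented:

```python
# Toy sketch of an implicit ontology: associate words by how often
# they appear near each other, rather than by a fixed category tree.
from collections import defaultdict
from itertools import combinations

def cooccurrence(sentences):
    """Count, for each unordered word pair, how many sentences contain both."""
    counts = defaultdict(int)
    for sentence in sentences:
        words = set(sentence.lower().split())
        for a, b in combinations(sorted(words), 2):
            counts[(a, b)] += 1
    return counts

def related(word, counts):
    """Words ranked by how often they co-occur with `word`."""
    scores = defaultdict(int)
    for (a, b), n in counts.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

corpus = [
    "routers forward packets",
    "routers drop packets under load",
    "teachers grade essays",
]
counts = cooccurrence(corpus)
print(related("routers", counts)[0])  # -> 'packets'
```

Even at this scale the appeal is visible: "routers" ends up loosely associated with "packets" without anyone having filed either word in a category, which is the fuzzy, extensible quality the essay contrasts with a Dewey-style catalog.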

A fair summary is to say that the whole business of generating context and ontology is still experimental, especially at large scales. Since money is flowing into the search business, there are a lot of people looking at these problems, and I hope the happy side effect of another Silicon Valley wealth convulsion will be accelerated progress.

So, while I'm deeply skeptical of an artificial tutor, I'm somewhat hopeful about improved knowledge representation. What kind of technology strategy should follow from those conclusions? Since I'm skeptical about the prospects for automation, my guess is that the answer can be found by thinking about people and politics, which brings us to the second big question.

2) On humans

Education in America is a mass of deadwood and sludge studded with gems. It's not too hard to find a magical school with devoted teachers to try out new educational technologies and generate promising results. These aren't necessarily the rich schools, either. It seems impossible, however, to preserve any of the best elements of educational technology in large scale deployment. The typical school computer is an aging business-oriented machine running useless software. It can't compete with the video game boxes kids love.

Why? Part of the problem is entrenched political and financial interests. In the state of California, for instance, textbook publishers convinced the legislature to keep non-book curriculum materials out of the requirements, so that they must be conceived as a luxury. If you want to cure a case of excessive cheer, go take a look at some of the current textbooks and see how much they cost. I saw one recently in New York, a big, expensive elementary school math textbook filled with color pictures of diverse people who "love math", but displaying no love of math itself. It was also filled with errors. Look at how cheap video game boxes are and how expensive crummy required textbooks are. While medicine and defense are the first choices for those who think of the government as the world's stupidest but most reliable customer, education comes in as a close third.

There are two plans of attack, bottom-up and top-down. I don't want to dismiss top-down at all. We elite technical people know a lot of elite business and political people and maybe it's possible to start something grand. My sense is that this is what Danny hopes for, and I would do anything to help that effort, even if it included a non-catastrophic level of AI confusion.

There have been a few bottom-up attempts, and I'd like to tell the story of one of them because it's instructive. Thinkquest was a project conceived by Al Weis, who used to run the supercomputer business for IBM and then built much of the core of the Internet's hardware infrastructure as it existed in the 1980s and early 1990s. The idea was to run a contest in which high school age kids from all over the world would compete to build the best curriculum entries in web format. The prizes were scholarships, along with cash awards to the winning kids' teachers and schools. There were many thousands of entries, and the winners were spectacularly good. In one case, a major corporation chose a Thinkquest entry as a central training tool to teach programming. Super high quality content was created in all subjects, in many languages. Results were judged by a net-enabled community of educators from around the world managed by the Internet Society. Special credit was given to development teams who collaborated across language barriers and time zones. A grand prize winning entry one year was created by kids from Japan, South Africa (a Zulu kid), and Poland. They used barely adequate language translation software to coordinate their efforts, but they managed. The library of entries was one of the most sought after destinations on the net during the late 1990s, even though it was not publicized. It had great international buzz in the worldwide net kid underground.

So, the paradigm was to get the brightest kids to write the curriculum for the rest. Communication skills were taught along with the official topics. The contest was very cost effective since there weren't that many prizes, but there were lots of wonderful entries. Even kids who didn't win benefited hugely in gaining skills, visibility, and contacts.

The problem with the Thinkquest model wasn't the cost of the prizes, but the cost of the judging and maintenance. It became harder to manage the vast amount of content and huge number of human relationships. Hard to prevent abuse, fraud, hate speech. Hard to assure fairness. The project was transferred to a larger foundation with greater resources that had a strict policy of maintaining a firewall on all its servers. Thus, while the Thinkquest library was still readable, the authors could no longer update it. (All authors were previously expected to keep their entries current, because untended digital information rapidly loses relevance and value. No one likes stale web content.) The first experiment in global curriculum, and in mass collaboration for student-created curriculum, slowly lost its vitality and turned into a mere contest.

What Thinkquest demonstrated was an almost successful model for distributing the overhead of content creation and classification to a huge number of people instead of artificial agents. It was close enough to being a success that it's worth trying to see if the design can be tweaked to completion. I wonder if it would be possible to reach an agreement within the technical community on something like Thinkquest as a foundation layer, with the possibility of automated content interpretation and creation being an option for the future. Automation is so easy to have illusions about when it comes to something like education, where it's hard to measure efficacy.

In this regard I'm a skeptic that the new wave of testing in American education is doing what it claims.


I don't doubt that people can learn a lot from using Google. Weblogging methods that let one labor one's way through the web in the aggregated footprints of other researchers are very handy, too. I think the promise here is impressive.

However, my skepto-meter goes off when I see the same verbs used for computers that are used for human cognitive processes. I suspect that computation and cognition are profoundly different things and share the same verbs mostly through historical accident.

I find this sentence unconvincing, for instance:

"Aristotle knows what you know because it has worked with you for a long time."

• This suggests a chummy tutor-student relationship that cannot exist. Aristotle might well record the sequence of screens presented to my eyeballs, but Aristotle is never going to "know what I know" because its circuits don't "know" in the same sense that my neurons do.

• Nor does Aristotle "work with me." I "worked." Aristotle shuffled code and consumed some voltage.

• If Aristotle did have such cognitive abilities, then Aristotle could simply teach *itself* all these scholarly achievements, 24-7-365, at broadband speed. Then Aristotle could get a great tech job and go right to work. Thus moving us immediately into a Vingean hard-AI Singularity where, as Bill Joy once put it, the future doesn't need human beings.

Here's a similar elision:

"Aristotle plans its lessons by finding chains of explanations that connect the concepts you need to learn to what you already know. It chooses the explanatory paths that match your favorite style of learning."

• Those are interesting machine activities that could be pretty handy. They may well look like they deserve the verbs "planning" and "choosing," but I can't believe it. That machine activity is far better compared to the vigorous parallel activities that an anthill pursues to discover and dismember dead crickets.

Mind you, anthills are superb at doing this, but there is no ant-planning or ant-choosing ever going on there. "Go to the ant, thou sluggard: consider her ways, and be wise." I like that advice, but ants don't get tenure.

"Aristotle" has never been a student. It has no empathy. "Aristotle" doesn't "know" how it feels or what it means to get bored, or sharpen a pencil, or gaze absently out the window, or throw a spitball. "Aristotle" cannot discipline, it cannot entertain, it cannot enlighten or inspire. So that is not a teacher.

I also have to wonder if this process is an "education," in the sense that the word traditionally had.

There was once said to be no "royal road to geometry." The goal of this "Aristotle" is to build one, just for you. What if that worked? Then every classmate you possessed would have a private, personal means of understanding geometry. Is that an "education", in the socializing, acculturating or character-building sense?

The historical Aristotle poses problems as the perfect model of an ideal teacher, too. He worked for a tyrant.

The historical Alexander did not deftly work out the fault-tolerant engineering of the Gordian knot. He succumbed to a fit of temper, hauled out his sword and hacked the knot in half. Then he overran half the world.

If the human students of this highly deracinated, non-intelligent, non-empathic, non-socializing education system all entered the same career as Alexander, then we would likely be in for a lively time.


I am always ready to learn, but I do not always like being taught.
— Sir Winston Churchill

First let me say that I am glad Danny Hillis is getting interested in education. We will never be able to change education unless the best and brightest care to start thinking about what is wrong and what can be done. The knowledge web idea is a reasonable start, but Hillis misses some key points about what is needed. When we begin to consider how computers can help in education, we need to ask what is wrong with traditional school settings. Is school broken because students lack access to knowledge or to a personal tutor? It could be improved by the construction of a dynamic knowledge base, to be sure, but why would that be the key issue?

The only thing that interferes with my learning is my education.
— Albert Einstein

I have never let my schooling interfere with my education.
— Mark Twain

Education is an admirable thing, but it is well to remember from time to time that nothing worth knowing can be taught.
— Oscar Wilde

At least six things need to be changed in order to create school settings that work for those who have difficulties in traditional school settings.

1. The role of the teacher
2. The contextualization of what is learned
3. The relevance of what is learned to children's goals
4. The emphasis on success vs. failure
5. The measurement criteria
6. The fun

Teachers would be great to have around if they were there only when you needed them. In this, Hillis is right. It is clear why traditional school settings do not operate in this fashion. One cannot expect a classroom of thirty children to each be doing their own thing, asking for the teacher's help as needed. Of course, some schools actually try to do this kind of thing, Montessori Schools for example. But, even those schools give up on this model of education when faced with curricula that must be taught and tests that must be passed. It is the curriculum and the size of the class that binds the teacher, making him or her a provider of information and judge of right answers rather than someone who is there for help as needed. Hillis' tutor is great for the adult student who really cares about physics, but most children are not trying to understand physics, they are trying to understand why someone is making them take physics.

It must be remembered that the purpose of education is not to fill the minds of students with facts... it is to teach them to think, if that is possible, and always to think for themselves.
— Robert Hutchins

People who talk about education have forever been mouthing aphorisms about teaching students to think for themselves. It is the Holy Grail of teaching. Everyone believes it, but very few do much about it. Robert Hutchins transformed the University of Chicago. But he didn't eliminate grades, and therefore he didn't eliminate the teacher as an authority figure whose views must be followed. Hillis' tutor would do that, and therefore it might actually be consulted by students. But most students are trying to figure out how to please the teacher and get a good grade. They are not like Hillis, who, one would assume, is more motivated by truth than by grades. As long as the teacher makes a judgment about the student, many students will try to please him. It is simple human nature. Then original thinking goes out the window. For students who feel they cannot or do not want to please the teacher, school becomes tedious and learning becomes quite difficult.

The authority of those who teach is often an obstacle to those who want to learn.
— Cicero

A teacher who is attempting to teach without inspiring the pupil with a desire to learn is hammering on a cold iron.
— Horace Mann

No matter how good a teacher is, they are still guided by the curriculum in which they are teaching. We must get over the idea that students who fail to learn algebra or chemistry or English literature would succeed if only they had a really good tutor. We need to understand that there are other things to learn besides these subjects, and that a student who turns off to the traditional curriculum, designed in 1892 for a wholly different set of students and circumstances, can still lead a happy and productive life if only we would consider giving him or her a more meaningful and relevant educational experience.

Schools are armored against change. But, in the last few years, a chink has appeared in that armor. It may be possible to exploit it. That chink is the internet. But it is not the knowledge contained on the internet that is the real hope here. The problem is not getting the net to contain better knowledge but getting the net to be the source of education, not a supplement to it.

Courses have been with us for so long that we simply accept that they have the structure, length, and characteristics that they should have and leave it at that. Web courses can be different than what is there now. They can be different for three reasons. (1) Current courses aren't very good and the new medium exposes their weak underbelly. Listening to a lecture on line is a disaster. (2) The length, material covered, and general methodology in courses was derived from practical considerations that are irrelevant in this new medium. Faculty can only teach so many hours per week. Rooms can only accommodate so many people. Buildings open and close at certain times. None of this matters on line. (3) It's on a computer. This new medium has to change the very nature of how things are presented because it can. Just as movies quickly ceased to be filmed plays, on line courses will cease to be on line copies of what was there before.

What should a web course be? For starters, the concept of a course is all wrong. A course is of arbitrary duration. Horses run courses set up at pre-established standardized distances at various racetracks. Students are not horses. They need to accomplish tasks and, having accomplished them, move on to the next task. They needn't run through courses. They need to practice whatever skills they are being taught in a realistic environment. Such an environment is the kind of "course" that should be on the internet.

To put education on line, one needs to think about the experiences that students need to prepare for. Here is where Hillis' idea can be very valuable. As we design new experiences for kids we will need on line tutors who can help guide students through those experiences. Some of these tutors can be pre-packaged and automated. The best and brightest of our teachers can be used as a source for creating such tutors. The best expertise in the world can be captured and made available to a program that is smart enough to figure out who needs what story or suggestion at what time and get it to him. If that is what Hillis means by the knowledge web, I am all for it. It is an idea whose time has come. But it will only be of value if it fits within a new conception of what needs to be taught and who certifies who has learned what. Within the current school system it would be of little value.

New on line curricula need to be built that can utilize existing teachers and existing materials and new materials created for the knowledge web that support new kinds of experiences that students will be living. To put this another way, we can build learn-by-doing curricula by making teachers into Socratic tutors and creating realistic tasks for students to do. Those tasks should come from the real-life situations that they might be called upon to do. In this context the knowledge web would be invaluable as a means of creating an educational system that is relevant to the students within it.

When books were moved to classrooms, they were not simply read to the assembled students. (Actually they were at the beginning, hence the word lecture.) New teaching methodologies evolved that were more appropriate to classrooms and books became supplemental materials for teachers. Different media require different methods.

On line education needs to mean the creation of a complex learning environment where there are mentors available and realistic roles to learn. We need to create such environments as a significant part of the knowledge web.


Some things are learned from teachers and from experience, and some things are learned from encyclopedias, libraries, and books. Danny's initiative can certainly improve our abilities on all fronts. One simple benefit (as demonstrated by Google) is the ability to quickly recover information you learned once but now half-remember and half-forget.

To Danny's supporters, from Aristotle to Vannevar Bush, should be added H. G. Wells. From his 1938 manifesto World Brain:

In a universal organization and clarification of knowledge and ideas, in a closer synthesis of university and educational activities, in the evocation, that is, of what I have here called a World Brain, operating by an enhanced educational system through the whole body of mankind, a World Brain which will replace our multitude of uncoordinated ganglia, our powerless miscellany of universities, research institutions, literatures with a purpose, national educational systems and the like; in that and in that alone, it is maintained, is there any clear hope of a really Competent Receiver for world affairs... We do not want dictators, we do not want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself... a centralized, world organ to 'pull the mind of the world together,' which will be not so much a rival to the universities, as a supplementary and co-ordinating addition to their educational activities--on a planetary scale.

There is no practical obstacle whatever now to the creation of an efficient index to all human knowledge, ideas and achievements, to the creation, that is, of a complete planetary memory for all mankind.... The whole human memory can be, and probably in a short time will be, made accessible to every individual... this new all-human cerebrum... need not be concentrated in any one single place. It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba...

Hillis is on the right track (as are other versions of what he calls the Knowledge Web). These tools make it easier to learn, and, just as importantly, they make it easier to forget things along the way.

As someone once said, "Education is what's left after you have forgotten what you learned."


When Danny Hillis turns his mind to a topic, he is likely to come up with something original and important. His Aristotelian tutor may not be as original as the 10,000 year clock but it is probably more important. I especially like the customized nature of the tutoring; the effort to be reliable, even authoritative; the concern with integrating new knowledge with other knowledge already possessed by the tutee; and the search for self-sustaining funding.

Let me comment, however, on a few areas where Danny is too modest, too ambitious, or somewhat askew:

1. Undue modesty
Danny describes his tutor as dealing with "factual knowledge." Yet many of his own examples go well beyond "facts" and the tutor (as described) ought to be able to deal with concepts, theories, and even areas of judgment.

2. Excessive claims
Danny begins by stating that there is a knowledge explosion and we need to be able to deal with it. Undoubtedly this is true. However, the tutor makes no real progress in this area—it only allows us to learn some things more efficiently, a drop in the bucket of the problem at best. What we really need are heuristics which allow us to decide what not to learn, as well as far more powerful synthesizing capacities than the tutor (as described) has.

3. There are four areas where I would phrase things differently:

a. Resistances
The persistent problem in learning new ideas and practices is the misconceptions and resistances present in a person's current cognition, which make mastering and retaining the new ideas so problematic. I've written extensively about this—clearly the major insight that cognitive science has given to learning and education. Unless the tutor confronts these resistances, and confirms the tenacity of the new learning, it is likely to prove no better than the other current teaching techniques to which Danny alludes—i.e., raising test scores but not enhancing genuine understanding.

b. Initiative and interaction
As described, the tutor makes the decisions about what to present and how to present it. But the most effective learning takes place when individuals themselves raise new questions and have to grapple with them. I think Danny's model places the agency too much in the hands of the tutor.

c. The medium of tutoring
As Danny describes it, the tutor adapts to the ways of cognition of the learner. But any good medium actually instills new habits of thought in the learner. The way that the tutor works will become part of the mental apparatus of the learner. And so assumptions about where agency resides (as in b.) become crucial.

d. Motivation
In discussions of learning by cognitivists and scholars in general, the issue of motivation is taken for granted or receives only lip service. That is because the readers of Edge are actually interested in fault-tolerant design or Kepler's third law. But the sad truth about education—and the 900-pound gorilla in the classroom—is that most individuals lack this motivation—as Plato put it, the challenge in education is to make learners want to do what they have to do. I am confident that Danny could produce interfaces that were more inviting than the stereotypical ones of our own memories of school—but in fact, the tutor has to compete with video games, reality television, and the temptations of the street and the libido. Barring a breakthrough in motivation which no educator has yet achieved (at least beyond early childhood), I fear that Aristotle will work best for those of us who are already intrinsically motivated to learn.

While critical here, I still give Danny two cheers, and no one will cheer a third more loudly than I if he can successfully address the issues raised here.


Danny's presentation of an "idea whose time has come" conflates two ideas. As he knows, since we have discussed it over the years, I absolutely agree with one and am deeply skeptical about the other. I positively endorse the idea that much more, and much deeper, thought should go into developing better ways to get the knowledge one needs when one needs it. This "knowledge access problem" is certainly one whose time has come and nobody would have a better chance at advancing it than Danny. But is he shooting himself in his intellectual foot?

I am leery of his suggestion that the right model is an "automated tutor" capable of knowing more about me than I do myself. My views about the principle underlying the super-tutor idea are doubts rather than convictions: I grant that maybe, one day, someone might make an artificial tutor whose advantages will outweigh its problematic aspects. But I have an unambivalently negative view about whether its time has come. The state of the art in AI, learning theory, epistemology and other relevant areas is far short of what is needed to make an artificial tutor that will be smart enough to warrant the trust that would justify entering the kind of relationship Danny evokes in his reference to Alexander and Aristotle. I see no reason to suppose that a direct attack on designing tutors is the best way to advance fundamental thinking in these areas.

For a short while, back in the sixties, I persuaded myself that one could base an application of AI to education on the principle that "he who can does, he who can't teaches." AI was not yet good enough to deal with any intellectual domain in real depth; but was it possible that a lesser level of competence would be enough to be a good teacher? This might have been the first serious consideration of the concept of "Intelligent CAI." But I quickly became convinced that there was a fallacy: a physics teacher might need a lesser knowledge of physics than a research physicist; but the other kinds of knowledge needed to be a really good teacher were (and still are) no more within the reach of current AI than deep physics. The subsequent history of funds and effort expended on "Intelligent Tutors" bears out my conclusion that while automated tutors are good for routine tasks (such as repairing a vehicle) something else is needed to support the kind of learning Danny is talking about. I think it is relevant that when I seek intellectual help I value domain competence over pedagogic brilliance.

As I see it the "something else" needed to facilitate the process has two inter-dependent sides. An epistemological side about ways of formulating and classifying knowledge and a personal side about learning "learning skills"—becoming expert at using the system for learning whatever it is one wants to know. In the Education world there is a knee-jerk belief I call instructionism: to improve learning, improve teaching. I don't say teaching is bad. But I want to minimize the ratio of teaching to learning. What we need far more than better instruction is better skills and better conditions (epistemological conditions even more urgently than material conditions) for learner-directed learning.

Like some of the greatest world-teachers I love parables as carriers of ideas. So please imagine yourself in a country where arithmetic relies on Roman numerals. People with problems involving quantities are having trouble googling appropriate methods. Two modes of improvement are proposed. One is a mechanical tutor that can figure out from a database of life experiences why this individual has trouble remembering whether to use IIII or IV. The other is to invent Arabic numerals and make this available to everyone irrespective of psychological trivia.

Yes I know. We don't have to choose. But with limited time and resources we might want to.
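Papert's parable can be made concrete with a small sketch (my illustration, not part of his text): even when a machine does the work, Roman numerals force special-case rules like the subtractive convention behind IV versus IIII, while the simplest way to compute with them is to abandon the notation entirely and work in positional (Arabic) form.

```python
# Toy illustration of Papert's Roman-numeral parable: the notation itself,
# not the learner, is the obstacle. Arithmetic becomes easy the moment we
# translate into positional notation.

VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral to an integer, honoring the subtractive rule."""
    total = 0
    for i, ch in enumerate(numeral):
        value = VALUES[ch]
        # A smaller value written before a larger one is subtracted
        # (IV = 4, IX = 9, XC = 90); otherwise values simply add up.
        if i + 1 < len(numeral) and VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

# The easiest "Roman arithmetic": leave Roman notation, add, done.
print(roman_to_int("IV"))                          # 4
print(roman_to_int("MCMXCIV"))                     # 1994
print(roman_to_int("XIV") + roman_to_int("VII"))   # 21
```

The design choice mirrors the parable: rather than building a tutor that explains IIII versus IV case by case, a better representation dissolves the difficulty for everyone at once.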


More than thirty years ago, my daughter Esther was learning French in the Princeton High School and I visited one of her classes. I was surprised to observe that the children in the class spoke French better than the teacher. A few years later, Esther made her first visit to France and stayed a few days with some French friends of mine. My friends could not believe her when she told them that she had never been to France before and had learned all her French at the Princeton High School. How was that possible? It was possible because the school had a Language Lab with tape-recorders and tape-decks. These simple old-fashioned analog devices allowed kids to listen to native French speakers, to record their own efforts to imitate the native speakers, and to improve their command of French by matching the imitation to the original. Used in this way, a dumb tape-recorder and tape-deck could do a better job of teaching French than a human teacher.

During the subsequent thirty years, as digital technology became available and computers proliferated in our schools, the question often occurred to me: why do we not use digital technology to teach other subjects as successfully as we used analog technology to teach French? Many attempts have been made to use computers as teachers. My grandchildren now spend long hours sitting in Computer Labs at school and working their way through packages of educational software at home. The educational software has certainly helped them to learn reading and typing and elementary arithmetic. But they did not learn these skills noticeably faster than their parents who learned them from books such as The Cat in the Hat and Green Eggs and Ham. At a more advanced level, the advent of educational software has signally failed to produce a generation of teen-agers more attuned to scientific and mathematical thinking than their parents. In spite of massive efforts to make science and mathematics attractive with interactive programs and elegant graphics, the majority of our students remain scientifically and mathematically illiterate. The minority who assimilate the intellectual riches that computers have to offer are similar to the minority who in earlier times became addicted to science by building radios or collecting beetles.

What then are we to think of Danny Hillis's Aristotle? Certainly the experiment is worth trying. Aristotle will have enormous advantages compared with any existing educational software. He will be a combination of teacher and psychiatrist, understanding and listening to the student as he teaches. He will be hooked up to a web of other Aristotles and other students, pooling his problems and his wisdom with theirs. He will be learning from experience and constantly improving his skills as a teacher. He will undoubtedly achieve spectacular successes with the minority of students who catch fire and blaze intellectually under his tutelage. But so far as the majority of students are concerned, I am skeptical. I think of my own student, Mr. F. Mr. F. was my first student, and I gave him lavish quantities of time and attention. He was a Chinese immigrant at the University of Cambridge, and he needed to pass Part One of the mathematical tripos examination in order to go further. Part one was an easy exam, popularly known as Little-go. Students who had passed the Higher Certificate exam in mathematics in high-school normally skipped Little-go and studied for Part Two of the tripos. But Mr. F. had not been at an English high-school, and so he had to study for Little-go. I worked with him for many hours each week, struggling through old exam problems and text-books. As was the custom in those days, he paid me ten shillings an hour for my time. He was willing and I was patient, and we worked together as good comrades. I honestly believe that Aristotle could not have been a better teacher than I was. But all our endeavors were in vain. When the day came for Mr. F. to sit for his Little-go exam, he failed to answer a single question. When I listen to the conversation of the kids in my grandchildren's schools today, I still hear not infrequently the voice of Mr. F.

Another question arises when we contemplate the future of Aristotle. What will happen if Aristotle is successful? The historic Aristotle let loose upon the world a young military genius who led his armies over Europe and Asia and Africa, looting cities and destroying kingdoms. We do not know whether the historic Aristotle is to be blamed for the death and destruction wrought by his pupil. But we should at least make sure that our future Aristotles do not instill into their pupils the capacity and the desire to conquer the world. Perhaps it would be safer to go back to analog devices such as tape recorders and tape-decks.


I like this vision, and indeed it is getting closer to becoming possible. It would be wonderful to support (not replace!) teachers in a way that would enable each child to learn at her own pace, in her own way.

But there are so few things that are facts, compared to the large number of things that require interpretation... And the joy of learning is really to figure out new things...

Danny does refer to this to some extent, but only briefly. So I'm not disagreeing with him; I'm simply trying to provoke him to write more about the possibility of discovering new things rather than teaching known ones.

The second question is how to get all this information into the system, peer-reviewed, etc. etc. As Danny points out, that will work best with a variety of business models. I take the paper as an attempt to get this encouraged bottom-up by making more people—teachers, parents, software developers and the like—aware of what is possible. It's going to require a lot of different efforts, a lot of people trying hard to be dominant and contributing a lot as they strive to do so...


I think that this is the most extraordinary collection of talent, of human knowledge, that has ever been gathered together at the White House, with the possible exception of when Thomas Jefferson dined here alone.
— President John F. Kennedy at a dinner honoring American Nobel Prize winners, April 29, 1962

Edge is a place with an immense density of talented, prized and noble thinkers. But for any one topic, it may well have to be a lone individual that will take charge and unfold a plan. I commend Danny for taking the obvious premise and running with it to a wishful place, one that is hopeful, meaningful and.... full of gigantic hurdles!

It is easy to take shots at that, and very relevant reasonable objections have already been brought from the surreal Reality Club. But, it may also be the plight of the single mind, dining alone, to bring these thoughts to a new direction, start something, try something, move forward.

OK: The obvious premise: Education is seriously flawed, so is the school system and the tools within both.

And hence the promise: It is very necessary, important, worthwhile and feasible using the technology of the coming decades to create new structures of information and new ways to create, edit, display and use them.

Coincidentally (or maybe not) I have been working on a set of ideas, theories, and actual tools for the last three years myself in a project codenamed "TD" that are focused on exactly that—the Gordian Knot as I perceive it: We are surrounded by complex information; there need to be new ways to deal with it. New ways to show the data, to sort it, and get real use of it. Extreme simplicity (nothing to do with the corner of graphics toys that some of you may have painted me in). Education is not the immediate target for my work, but a very meaningful area to apply the solutions. I may be suicidal enough to throw TD out in front of the Edge crowd and get reality clubbed over the head soon, but this is not the time for that.

All I wanted to state there is, (nod to Esther Dyson) "it's going to require a lot of different efforts, a lot of people...contributing a lot as they strive to do so". I for one would like to be one of them.

Some further reflections on the piece:

The trouble between premise and promise often lies in entirely unexpected areas, the difficulties may not be technical, or financial, but have to do with "the forces at work", "the nature of humans", subtle properties of the problems that just don't show up on the Gantt charts.

For Danny, there has been a precedent there. The premise: everyone is chasing the next record-breaking von Neumann clockspeed until it hertz. The promise: parallel computing will free us.

This is exactly the puzzle: Danny was right and wrong and right. He was right in the initial premise; he turned out to be wrong with Thinking Machines at the time. ("The forces at work", "the nature of humans", etc., etc. He can very likely give you a great account of the final days of the company and the product...and the "subtle properties of the problems".)

And yet, of course, Danny was right again, in the longer run. If you have not seen it yet, Playstation3 (actually 4) will have 64 cells to achieve Teraflop performance. Whether it's Grid Computing or Reconfigurable Computing, a lot of it is pointing to where Danny wanted to go. He may have been just a tad early.

This point applied to the new venture is: Danny may be right, wrong and right again. Surely he is right in the starting assumption that these areas are hugely deficient and worthy of innovation. And he may be right in the long-range assumption that over a span of a decade or three...everything will have changed. The question is: can he minimize the losses of the middle period, being wrong about how much, how many people, and how long it will take?

And there I agree with many of the voices here: the Hard AI path proposed adds a large set of extra burdens to the already daunting task of building a real Knowledge Web. The beauty of Aristotle would not require an all-knowing-and-understanding Turing winner, but simply the ability to deal with information at all in such a fluid manner. In other words: never mind the Hard AI nature of a tutor that automatically gets any data and presents it appropriately; the data itself would be amazing, if only it existed.

At present, and within The Zeroes decade, there is no such thing. Google may find keywords from billions of pages within fractions of a second...but then it dumps them in a rather silly list which takes hundreds of seconds to make sense of: scroll, page, examine. And much of it is pure content junk; a lot of it is paid-for junk. Truly getting useful results from any search engine is actually a fine art form in itself. (But far be it from me to complain; I have craved these tools forever.) It is not hard to state the obvious: there should be much less emphasis on memorizing facts and figures, and much more on teaching how to find them, how to use all available options and tools.

And it starts even earlier than that, with a basic desire to learn, a willingness and interest in learning. How does one instill that in kids? (I have 3.) Without having a problem, they cannot show any appreciation of a solution. Without having a question, all answers given are just unwanted input for them, filtered away. In one ear, out the other. We all know the harsh reality of that. "It is a miracle that curiosity survives formal education," said Albert, and I couldn't agree more. I had nine years of Latin. What a waste.

We are all here just the choir you are preaching to, we all are deeply in love with knowledge. For us, the Knowledge Web would be a truly miraculous tool.

I can't wait to see what concrete examples he has up his sleeve. Is Aristotle in alpha testing?

I can't understand why people are frightened of new ideas. I'm frightened of the old ones.
— John Cage


Danny's idea has indeed appeared before, and the reason why is that it's what we yearn for, what we desperately need. AI may not quite be ready to take on everything Danny suggests, so why not a set of gradually improving interim systems? By now, most adults know how we learn best, and thus we can supply some of the missing intelligence. We can say "show me a picture" or "provide me with some analogies"; we can ask about the authority of the source. Though I use it far more than I ever dreamed, the current web does not let me interact this way. So why not have adults who want and need to learn new things beta-test an incomplete, imperfect automated tutor? Sign me up!

We may not know everything about how a given mind learns, but after 30 years of serious research in cognitive psychology, we know far more than we once did. Tragically, precious little of this has migrated into mainstream teaching, but that's another story.

Procrastination has allowed me to see other people's responses to Danny's proposal. I'm surprised at how much skepticism it has evoked, even claims that it could probably never be done. My goodness, I think Edge needs to poll a younger group.

By W. Daniel Hillis

(First published on Edge on March 9, 2007)

By John Brockman

In May, 2004, Edge published Danny Hillis's essay in which he proposed Aristotle: The Knowledge Web.

"With the knowledge web," he wrote, "humanity's accumulated store of information will become more accessible, more manageable, and more useful. Anyone who wants to learn will be able to find the best and the most meaningful explanations of what they want to know. Anyone with something to teach will have a way to reach those who want to learn. Teachers will move beyond their present role as dispensers of information and become guides, mentors, facilitators, and authors. The knowledge web will make us all smarter. The knowledge web is an idea whose time has come."

Last week, Hillis announced a new company called Metaweb, and the free database, Freebase.com. The launch was covered by John Markoff in The New York Times ("Start-Up Aims for Database to Automate Web Searching", March 9, 2007) ...

March 9, 2007

Start-Up Aims for Database to Automate Web Searching

By John Markoff

Danny Hillis, left, is a founder of Metaweb Technologies and Robert Cook is the executive vice president for product development.

SAN FRANCISCO, March 8 — A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

He says his latest effort, to be announced Friday, will help develop a realm frequently described as the "semantic Web" — a set of services that will give rise to software agents that automate many functions now performed manually in front of a Web browser.

The idea of a centralized database storing all of the world's digital information is a fundamental shift away from today's World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers where search engines serve as the equivalent of a card catalog.

In contrast, Mr. Hillis envisions a centralized repository that is more like a digital almanac. The new system can be extended freely by those wishing to share their information widely. ...

... In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world's information and make it universally accessible and useful. But its approach sets it apart.

"As wonderful as Google is, there is still much to do," said Esther Dyson, a computer and Internet industry analyst and investor at EDventure, based in New York.

Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, she said.

"In the middle there is something that represents things as they are," she said. "Something that captures the relationships between things."

That addition has long been a vision of researchers in artificial intelligence. The Freebase system will offer a set of controls that will allow both programmers and Web designers to extract information easily from the system.

"It's like a system for building the synapses for the global brain," said Tim O'Reilly, chief executive of O'Reilly Media, a technology publishing firm based in Sebastopol, Calif. ...

Below is Hillis's addendum to his original essay. ...




[W. DANIEL HILLIS:] In the spring of 2000,  while I was writing the Aristotle essay, Jimmy Wales and Larry Sanger began taking a much more practical approach to a similar problem. Their project, called Nupedia, was an attempt to create a carefully edited encyclopedia of the world's knowledge that would be available to anyone, for free. The Nupedia project made some progress, but the going was slow. About a year later, they put a wiki on the web, allowing anyone to contribute feed material to Nupedia. That feeder project was called "Wikipedia". What happened next is a piece of history that should make us turn-of-the-millennium humans all feel proud.

While all this was going on,  I continued to plug away, trying to build a prototype of the more structured, computer-mediated knowledge base that is described in the essay. Comments from my friends, including those posted on Edge, helped me realize that trying to build a tutor and a knowledge base at the same time would be biting off way too much. So, I decided to concentrate on the "Knowledge Web" part of the problem. Even that seemed to be an uphill battle, because the collapse of the dot-com boom dimmed the funding prospects for all things connected. It was not a good time for ambitious ideas.

Or maybe it was a good time. It was during this period, undistracted by the frenzy of a boom, that Google and Wikipedia were able to build spectacularly ambitious tools that made us all smarter. Eventually, their success reminded everyone that the information revolution was just beginning.  There was renewed enthusiasm for ambitious dreams. This time, I was better prepared for it, having met, during the interim, a great product designer (Robert Cook) and a great engineer (John Giannandrea), both of whom shared the dream of building a connected database of human knowledge that could be presented by computers, to humans.

In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them.  The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.

One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer.

John, Robert and I started a project,  then a company, to build that computer-readable database. How successful we will be is yet to be determined, but we are really trying to build it:  a universal database for representing any knowledge that anyone is willing to share. We call the company Metaweb, and the free database, Freebase.com. Of course it has none of the artificial intelligence described in the essay, but it is a database in which each topic is connected to other topics by links that describe their relationship. It is built so that computers can navigate and present it to humans. Still very primitive, a far cry from Neal Stephenson's magical storybook, it is a step, I hope, in the right direction.
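The structure Hillis describes, a database of topics connected by links that describe their relationships, navigable by computers and rendered for humans on demand, can be sketched in miniature. This is a toy illustration only (the topic names, link types, and functions are invented here; this is not Freebase's actual schema or API): knowledge lives in the links, and the human-readable text is generated dynamically rather than stored.

```python
# A toy sketch of a knowledge web: topics connected by typed links.
# Only the links are stored; any presentation is generated on demand.

# The graph: each triple is (topic, relationship, topic).
links = [
    ("Aristotle", "tutored", "Alexander the Great"),
    ("Aristotle", "studied under", "Plato"),
    ("Plato", "founded", "the Academy"),
    ("Aristotle", "wrote", "Nicomachean Ethics"),
]

def related(topic):
    """Navigate the graph: outgoing and incoming links for a topic."""
    out = [(rel, obj) for subj, rel, obj in links if subj == topic]
    inc = [(rel, subj) for subj, rel, obj in links if obj == topic]
    return out, inc

def present(topic):
    """Generate a human-readable view dynamically. The sentences
    below exist nowhere in the database -- only the links do."""
    out, inc = related(topic)
    lines = [f"{topic}:"]
    lines += [f"  {topic} {rel} {obj}." for rel, obj in out]
    lines += [f"  {subj} {rel} {topic}." for rel, subj in inc]
    return "\n".join(lines)

print(present("Aristotle"))
```

The point of the sketch is the separation Hillis insists on: unlike a web page or a Wikipedia article, nothing here confounds the presentation with the information itself, so the same links can be filtered and rendered differently for each viewer.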

July 2010

Kevin Kelly

I was digging through some files the other day and found this document from 1997. It gathers a set of quotes from issues of Wired magazine in its first five years. I don't recall why I created this (or even if I did compile all of them), but I suspect it was for our fifth anniversary issue. I don't think we ever ran any of it. Reading it now it is clear that all predictions of the future are really just predictions of the present.


We as a culture are deeply, hopelessly, insanely in love with gadgetry. And you can't fight love and win.
Jaron Lanier, Wired 1.02, May/June 1993, p. 80

I expect that within the next five years more than one in ten people will wear head-mounted computer displays while traveling in buses, trains, and planes.
Nicholas Negroponte, Wired 1.06, Dec 1993, p. 136

Pretty soon you'll have no more idea of what computer you're using than you have an idea of where your electricity is generated.
Danny Hillis, Wired 2.01, Jan 1994, p. 103

If we're ever going to make a thinking machine, we're going to have to face the problem of being able to build things that are more complex than we can understand.
Danny Hillis, Wired 2.01, Jan 1994, p. 104

The scarce resource will not be stuff, but point of view.
Paul Saffo, Wired 2.03, Mar 1994, p. 73

Roadkill on the information highway will be the billions who will forget there are offramps to destinations other than Hollywood, Las Vegas, the local bingo parlor, or shiny beads from a shopping network.
Alan Kay, Wired 2.05, May 1994, p. 77

Money is just a type of information, a pattern that, once digitized, becomes subject to persistent programmatic hacking by the mathematically skilled.
Kevin Kelly, Wired 2.07, Jul 1994, p. 93

It's hard to predict this stuff. Say you'd been around in 1980, trying to predict the PC revolution. You never would've come and seen me.
Bill Gates, Wired 2.12, Dec 1994, p. 166

In the future, you won't buy artists' works; you'll buy software that makes original pieces of "their" works, or that recreates their way of looking at things.
Brian Eno, Wired 3.05, May 1995, p. 150

We're using tools with unprecedented power, and in the process, we're becoming those tools.
John Brockman, Wired 3.08, Aug 1995, p. 119

If the Boeing 747 obeyed Moore's Law, it would travel a million miles an hour, it would be shrunken down in size, and a trip to New York would cost about five dollars.
Nathan Myhrvold, Wired 3.09, Sep 1995, p. 154

Isn't it odd how parents grieve if their child spends six hours a day on the Net but delight if those same hours are spent reading books?
Nicholas Negroponte, Wired 3.09, Sep 1995, p. 206

Without a deep understanding of the many selves that we express in the virtual, we cannot use our experiences there to enrich the real.
Sherry Turkle, Wired 4.01, Jan 1996, p. 199

We're born, we live for a brief instant, and we die. It's been happening for a long time. Technology is not changing it much -- if at all.
Steve Jobs, Wired 4.02, Feb 1996, p. 106-107 ...


July 7, 2010

Online luminary Clay Shirky explains the new digital literary revolution -- and how the Web will change reading

By Andrew Keen, Barnes & Noble Review

(With additional questions from James Mustich, editor in chief of the Barnes & Noble Review).

According to media columnist Michael Wolff, the name Clay Shirky is "now uttered in technology circles with the kind of reverence with which left-wingers used to say, 'Herbert Marcuse'." Wolff is right. Shirky has emerged as a luminary of the new digital intelligentsia, a daringly eclectic thinker as comfortable discussing 15th-century publishing technology as he is making political sense of 21st-century social media.

In his 2008 book, "Here Comes Everybody," Shirky imagined a world without traditional economic or political organizations. Two years later, Shirky has a new book, "Cognitive Surplus," which imagines something even more daring — a world without television. To celebrate the appearance of the revered futurist's latest volume, we're delighted to share a February discussion between Shirky, Barnes & Noble Review editor in chief James Mustich, and BNR contributor Andrew Keen. What follows is an edited transcript of their conversation about the future of the book, of the reader and the writer, and, most intriguingly, the future of intimacy.

James Mustich: Clay, I was very taken with that post you wrote about the early days of the Gutenberg revolution.

Clay Shirky: Oh, yes. Eisenstein's book.

JM: Right. It had a very insightful historical perspective that's generally lacking in conversations about today's publishing turmoil. You also had an interesting piece at edge.org recently, about how publishing is the new literacy. You said, "It is our misfortune to live through the largest increase in expressive capability in the history of the human race -- a misfortune because surplus always breaks more things than scarcity."

Andrew Keen: This idea of publishing as "the new literacy" sounds like a sexy, kind of Twitter remark, but what actually does that mean?

CS: We have this whole complex of words, "publish," "publisher," "publicity," "publicist," that all refer to either jobs or the work of making things public. Because it used to be incredibly difficult, complicated, and expensive to simply put material into the public sphere, and now it's not. So I'm comparing it to literacy -- literacy used to be reserved for a specialist class prior to the printing press, and, much more importantly, prior to the spread of publishers and the rise of a real publishing industry. ...


July 8, 2010

Why We Talk to Terrorists: response to Supreme Court ruling on "material support" of foreign terrorist groups
Xeni Jardin

In John Brockman's Edge newsletter, an essay by Scott Atran (left) and Robert Axelrod (right), two social scientists who study and interact with violent groups "to find ways out of intractable conflicts." The piece is a response to a recent Supreme Court decision that amounts to a real "chilling effect" for anyone working for peace and reconciliation through dialogue with foreign groups that have a history of armed conflict. Before the ruling, we knew that sending money or guns to any of the four dozen groups currently designated by the secretary of state as terrorist organizations was punishable by up to 15 years in prison. But now the law has been clarified to show that, say, holding conflict resolution workshops with them, or even interviewing one of their officers for an op-ed piece, could merit the same penalty. This NPR News analysis is a good place to start for real-world examples. ...


June 14, 2010


The Economist
has recently featured an interesting article on the behavioral effects that the parasitic protozoan Toxoplasma gondii has on its mammalian hosts. Many of these effects have been recognized for years, and some of us here at Medgadget have been privy to Toxoplasma news, thanks to a friend at Stanford who works with Dr. Robert Sapolsky, a leading researcher in the field. First of all, there is strong evidence that urine from cats infected with Toxoplasma is sexually attractive to rats. Then there seems to be a connection between Toxoplasma gondii and schizophrenia, lack of interest in the novelties of life, and a noted correlation with people getting into more car accidents. It seems that the nature of this parasite's life cycle has created a strange symbiotic, psychological relationship between it and its typical feline and rodent hosts. The Economist provides a handy overview of the latest knowledge around this topic.

If an alien bug invaded the brains of half the population, hijacked their neurochemistry, altered the way they acted and drove some of them crazy, then you might expect a few excitable headlines to appear in the press. Yet something disturbingly like this may actually be happening without the world noticing....

One reason to suspect [that some people have their behaviour permanently changed] is that a country's level of Toxoplasma infection seems to be related to the level of neuroticism displayed by its population. Another is that those infected seem to have poor reaction times and are more likely to be involved in road accidents. A third is that they have short attention spans and little interest in seeking out novelty. A fourth, possibly the most worrying, is that those who suffer from schizophrenia are more likely than those who do not to have been exposed to Toxoplasma.

Nor is any of this truly surprising. For, besides humans, Toxoplasma has two normal hosts: rodents and cats. And what it does to rodents is very odd indeed.

Here's a must-watch video interview with Sapolsky at Edge.org:

Key quote:

Somehow, this damn parasite knows how to make cat urine smell sexually arousing to male rodents...

Read on at The Economist: A game of cat and mouse

Edge: TOXO - A Conversation With Robert Sapolsky...


March 4, 2010

By Tom Young

(Having spent many a column espousing the wonders of the internet, my final column will sound a warning on the dangers.)

The first is anonymity. This can be a curse and a blessing online. Sites such as Wikileaks - which desperately needs funding to stay open - provide a valuable place where information can be put into the public domain anonymously. ...

...But the internet carries an arguably more pervasive and long-term danger than the provision of anonymity and that is the way that it changes and shapes thinking and the way people interact with information.

The online forum edge.org recently tackled this problem. It asked leading scientists, technologists and thinkers: How is the internet changing the way you think? A number of people, including American writer Nicholas Carr and science historian George Dyson, outlined fears that the web is at risk of reducing serious thought rather than promoting it. ...



Edited by John Brockman

"An intellectual treasure trove"
San Francisco Chronicle

Edited by John Brockman

Harper Perennial



Contributors include: RICHARD DAWKINS on cross-species breeding; IAN McEWAN on the remote frontiers of solar energy; FREEMAN DYSON on radiotelepathy; STEVEN PINKER on the perils and potential of direct-to-consumer genomics; SAM HARRIS on mind-reading technology; NASSIM NICHOLAS TALEB on the end of precise knowledge; CHRIS ANDERSON on how the Internet will revolutionize education; IRENE PEPPERBERG on unlocking the secrets of the brain; LISA RANDALL on the power of instantaneous information; BRIAN ENO on the battle between hope and fear; J. CRAIG VENTER on rewriting DNA; FRANK WILCZEK on mastering matter through quantum physics.

"a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality... the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain." (Chicago Sun-Times)

"11 books you must read — Curl up with these reads on days when you just don't want to do anything else: 5. John Brockman's This Will Change Everything: Ideas That Will Shape the Future" (Forbes India)

"Full of ideas wild (neurocosmetics, "resizing ourselves," "intuit[ing] in six dimensions") and more close-to-home ("Basketball and Science Camps," "solar technology"), this volume offers dozens of ingenious ways to think about progress" (Publishers Weekly — Starred Review)

"A stellar cast of intellectuals ... a stunning array of responses...Perfect for: anyone who wants to know what the big thinkers will be chewing on in 2010. " (New Scientist)

"Poring over these pages is like attending a dinner party where every guest is brilliant and captivating and only wants to speak with you—overwhelming, but an experience to savor." (Seed)

* Based on The Edge Annual Question — 2009: "What Will Change Everything?"

Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.