THE REALITY CLUB
Stewart Brand: The sequence is clear. From "the user is a luser" (early programmer joke) to "the user wins" to "the user rules" (e.g., Napster) and "the user creates" (the Web) to, with Gelernter, "the user is the system."
David Ditzel: Gelernter is ahead of us all in peering through the fog that we call the future of technology.
John C. Dvorak: Bill Gates will love reading this stuff. Hating it will be the Ellisons and McNealys of the world whose goal is to de-ball the personal computer and replace it with a thin client running eunuchs.
Freeman Dyson: I suspect that he has a one-sided view of computing. I suspect that cyberspace will also be dominated by tools, as far into the future as we can imagine. The topography of our future cyberspace will be determined more by new tools than by Gelernter's vision.
George Dyson: Let us hope that Gelernter's prophecies continue to be fulfilled. The sooner spines replace icons the better: would you rather work in a library where the books are shelved at eye-level or left lying face-up all over the floor?
Douglas Rushkoff: ...the trick to seeing through today's interfaces (a way of envisioning information architecture that David does effortlessly) involves distinguishing between our modeling systems and the models they build.
Rod Brooks: David Gelernter is no doubt right on about the coming revolution, but as with all revolutions it is hard to predict the details of how it will play out. I suspect he is wrong on the details of cyberbodies and his lifestreams.
Lee Smolin: I have the sense that David's manifesto is a bit like the predictions I read as a child that by the 21st century cars would have evolved wings and we would all be flying to work. The technology of cars has improved a bit since then, but the basic experience of driving is almost exactly the same.
Jaron Lanier: This reminds me of Marx's vision of what should happen after the revolution. He imagined we'd be reading the classics and practicing archery! Idealists always believe there's some more meaningful, less dreary plane of existence that can be found in this life.
David Farber: We are at the edge of a truly dramatic change in technology. For the past decade we have evolved from the view that the network is just a way of connecting computers together, to the current view that the network is the action, to the view often stated (by me and others) that no one cares about the network, only about what they can access and interact with: information and people.
Danny Hillis: David Gelernter is basically right: current-generation computer interfaces are not very good. (Since we are all among friends here, we can say it: they suck.)
Vinod Khosla: Transition strategies here will significantly impact the end state.
John McCarthy: Unfortunately, the making of computer systems and software is dominated by the ideology of the omnipotent programmer (or web site designer) who knows how the user (regarded as a child) should think and reduces the user's control to pointing and clicking. This ideology has left even the most sophisticated users in a helpless position compared to where they were 40 years ago in the late 1950s.
It's a great screed, inspiring and generative. It is a frame of reference worth filling with reality.
For me, Gelernter's manifesto speaks to widespread growing aggravation with the current system and growing impatience with the burgeoning tech possibilities not being addressed at a deep enough level. "About time!" was my gut response.
The sequence is clear. From "the user is a luser" (early programmer joke) to "the user wins" to "the user rules" (e.g., Napster) and "the user creates" (the Web) to, with Gelernter, "the user is the system."
The still unanswered question though is: How does this system fare over time? How does it keep from the self-obsolescing self-erasure endemic to current computer tech? How do the lifestream contrails keep their shape amid ferociously turbulent winds? Those winds are not extraneous to the system; they are how the system grows.
BRAND is founder of the Whole Earth Catalog, cofounder of The Well, cofounder of Global Business Network, and cofounder and president of The Long Now Foundation. He is the original editor of The Whole Earth Catalog and author of The Media Lab: Inventing the Future at MIT, How Buildings Learn, and The Clock of the Long Now: Time and Responsibility.
David Gelernter's manifesto is a humbling document to read, because it points out the generally unrecognized but herein revealed truth that we are only at the beginning of understanding how the evolution of the internet is going to change our lives.
Gelernter is ahead of us all in peering through the fog that we call the future of technology.
DITZEL is CEO, Transmeta Corporation
John C. Dvorak
Finally, someone who knows what they're talking about, and who isn't simply viewed as an embittered cynic, tells it like it is regarding the notion of remote computing, among other dumb ideas. Bill Gates will love reading this stuff. Hating it will be the Ellisons and McNealys of the world, whose goal is to de-ball the personal computer and replace it with a thin client running eunuchs. I also like his slamming the dubious concept of a computer "Desktop" and trashing the idea of file folders and other computer commonplaces promoted by the charismatic Steve Jobs and copied lockstep by Gates and company. Unfortunately, all the points in the manifesto are right but otiose. Trends and fads promoted by strength of personality, whether it be Fascism, rap music, thong bikinis, or the WIMP (windows, icons, mouse, pointer) interface, are not easy to reverse. It's the mechanism of trend reversal that needs study and comment. A laundry list of all that is wrong with computing today is an exercise in futility when hero worship and sheep-like behavior are the norm. This manifesto will amount to nothing in the end. A shame.
DVORAK is the host of Silicon Spin on ZDTV. He is a contributing
editor of PC Magazine, where he has been writing two columns,
including the popular "Inside Track," since 1986.
Thank you very much for sending the Gelernter manifesto, full of wonderful imagery and eloquence. Here are some brief comments.
Gelernter lays out a grand vision of cyberbodies and lifestreams inhabiting the cyberspace of the future. He brings his vision to life with images that every child can understand, the bluebird perching on a branch, the cloud's shadow drifting across the paved courtyard. There will be a place for humans, even for children, in his cyberspace. In his vision of the future, we shall no longer be parking cars in a pint-sized Manhattan parking-lot. We shall be flying free in cyberspace, leaving behind vapor trails of experience and memory for other humans to explore.
Fifty years ago we heard about a different vision of a possible future. We heard that the automobile would soon be obsolete, its mobility diminished by the constantly increasing density of traffic, its destructive effect on the environment no longer tolerable in a civilized society. We heard that the automobile would soon be replaced by the helicopter as the preferred vehicle for personal transportation. We would soon be living in a three dimensional world, with helipads replacing garages beside our homes. The reasons why that vision of a roadless civilization never materialized are obvious. Helicopters remained noisy, accident-prone and expensive, roads and automobiles turned out to be unexpectedly resilient. The vision was beautiful, but the tools to make it real were defective.
Gelernter's vision is also beautiful, and his scornful sweeping of existing computers
and operating systems into the dustbin of history is persuasive. The
chief question that his vision raises is, whether we shall have the
tools to make it real. Gelernter disparages tools. He says, "The real
topic in astronomy is the cosmos, not telescopes. The real topic in
computing is the cybersphere and the cyberstructures in it, not the
computers...". I know more about astronomy than about computing.
I can certify that he has a one-sided view of astronomy. Modern astronomy
is dominated by tools. It is about telescopes and spacecraft as much
as it is about the cosmos that these tools explore. Every time we introduce
a new tool, we see a new cosmos. And I suspect that he has a one-sided
view of computing. I suspect that cyberspace will also be dominated
by tools, as far into the future as we can imagine. The topography of
our future cyberspace will be determined more by new tools than by Gelernter's
vision. Still, he has pointed the way for the next generation of tool
builders to follow. We must hope that they will be more successful than
the builders of helicopters fifty years ago. If the tool-builders can
build tools to match his vision, then our children and grandchildren
might see the Second Coming and live in the world of Gelernter's dreams.
DYSON is professor of physics at the Institute for Advanced Study,
in Princeton. His professional interests are in mathematics and astronomy.
Among his many books are Disturbing the Universe, Infinite in All Directions, Origins of Life, From Eros to Gaia, Imagined Worlds, and The Sun, the Genome, and the Internet.
From: George Dyson
Date: June 12, 2000
Let us hope that Gelernter's prophecies continue to be fulfilled. The sooner spines replace icons the better: would you rather work in a library where the books are shelved at eye-level or left lying face-up all over the floor?
For fifty years, digital computing has rested upon two invariant foundations: the program (as given by Turing) and the address matrix (as given by von Neumann and Bigelow). Who could have imagined, 50 years ago, that we would load millions of lines of 'machine-building' code just to check our mail, or that an international political organization would be charged with supervising the orderly assignment of unambiguous coordinates to every bit of memory connected to the net?
Only a third miracle, dirt-cheap, near-perfect microprocessing, allows a system so inherently intolerant of error and ambiguity to work as well as it does today. Gelernter is right: a revolution is overdue. And underway.
In molecular biology, addressing of data and execution of order codes is accomplished by reference to local templates, not by reference to some absolute or hierarchical system of numerical address. The instructions say "do x with the next copy of y that comes along" without specifying which copy, or where. This ability to take general, organized advantage of local, haphazard processes is exactly the ability that (so far) has distinguished information processing in living organisms from information processing in digital computers. This is not to suggest an overthrow of the address matrix, which is with us to stay. But software that takes advantage of template-based addressing will rapidly gain the upper hand.
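The contrast Dyson draws can be sketched in a few lines of Python (the names and data here are hypothetical, purely illustrative): absolute addressing fetches the contents of one exact coordinate, while template-based addressing binds to the next item matching a pattern, wherever it happens to sit.

```python
import random

# Absolute addressing (von Neumann style): fetch by fixed coordinate.
memory = {0x1F: "mail", 0x2A: "photo", 0x3B: "mail"}
value_at_address = memory[0x2A]  # the instruction names one exact cell

# Template addressing (molecular style): bind to the next match, wherever it is.
# "Do x with the next copy of y that comes along" -- no coordinates involved.
def next_match(stream, template):
    """Return the first item matching the template, ignoring position."""
    for item in stream:
        if template(item):
            return item
    return None

soup = ["photo", "mail", "note", "mail"]  # unordered, haphazard arrivals
random.shuffle(soup)                      # order genuinely does not matter
hit = next_match(soup, lambda item: item == "mail")
assert hit == "mail"  # we got *a* copy of "mail", not a particular one
```

The template lookup succeeds no matter how the soup is shuffled, which is the point: organized advantage taken of a haphazard local process.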
The other foundation, the program, is based on the fact that digital computers are able to solve most but not all problems that can be stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). For fifty years, computers have been getting better and better at providing answers but only to questions that programmers are able to ask.
I am not talking about non-computable problems. Despite the perennial attentions of philosophers, in the day-to-day world such problems remain scarce. There is, however, a third sector to the computational universe: the realm of questions whose answers are, in principle, computable, but that, in practice, we are unable to ask in unambiguous language that computers can understand. This is where brains beat computers. In the real world, most of the time, finding an answer is easier than defining the question. It's easier to draw something that looks like a cat than to describe what, exactly, makes something look like a cat. A child scribbles indiscriminately, and eventually something appears that happens to resemble a cat. A solution finds the problem, not the other way around. The world starts making sense, and the meaningless scribbles are left behind. This is the power of that Mirror World we now perceive as the Internet and the World Wide Web.
"An argument in favor of building a machine with initial randomness is that, if it is large enough, it will contain every network that will ever be required," advised cryptanalyst Irving J. Good, speaking at IBM in 1958. Even a relatively simple network contains solutions, waiting to be discovered, to problems that need not be explicitly defined. The network can and will answer questions that all the programmers in the world would never have time to ask.
DYSON is a leading authority in the field of Russian Aleut kayaks, the subject of his book Baidarka, numerous articles, and a segment of the PBS television show Scientific American Frontiers. His early life and work were portrayed in 1978 by Kenneth Brower in his classic dual biography, The Starship and the Canoe. Now ranging more widely as a historian of technology, Dyson most recently wrote Darwin Among the Machines.
David Gelernter's "The Second Coming" reminds me just how arbitrarily so many of our decisions about how to do computing and networking have been reached. Techniques for sharing super-computing resources or keeping lines of code ready for a compiler have, through their very legacies, become the architectural basis for humanity's shared information space.
It seems to me that the trick to seeing through today's interfaces (a way of envisioning information architecture that David does effortlessly) involves distinguishing between our modeling systems and the models they build. While memory, information, hardware, and software might need to conform to certain realities, the very opacity of our current operating systems (both technological and social) implies an immutability that just isn't real. The only obstacles to this unencumbered perception of memory, information, storage, and interaction are our own prejudices, formed either randomly or by long-obsolete priorities, and kept in place by market forces.
RUSHKOFF, a Professor of Media Culture at New York University's
Interactive Telecommunications Program, is an author, lecturer, and
social theorist. His books include Free Rides, Cyberia: Life in the Trenches of Hyperspace, The GenX Reader (editor), Media Virus! Hidden Agendas in Popular Culture, Ecstasy Club (a novel), Playing the Future, and Coercion: Why We Listen to What "They" Say.
David Gelernter is no doubt right on about the coming revolution, but as with all revolutions it is hard to predict the details of how it will play out. I suspect he is wrong on the details of cyberbodies and his lifestreams: the first because, as framed, it still relies on a physical icon to identify the body, and the second because it is just one metaphor, which many will find inconvenient. In the following paragraphs I'll outline my own versions of what the revolution will bring in these two departments, and no doubt my visions will be as wrong as David's, or more so.
But first the actuality of the revolution. David's criticisms of our current computing environments are eloquently stated, and I think widely shared. A number of projects were started about a year ago, originally through a DARPA-sponsored "Computing Expeditions" program. At CMU the expedition is called "Aura", at Berkeley it is "Endeavour" (named for Cook's ship, and hence the spelling), at the University of Washington/Xerox PARC it is called "Portolano/Workscapes". At MIT, Michael Dertouzos, Anant Agarwal and I are leading "Project Oxygen", dedicated to pervasive human-centered computing. The common theme across all these projects is that human time and attention, not computation speed, bandwidth, or storage, will be the limiting factor in the future.
In the past the human has been forced to climb into the computer's world: first with binary and holes punched in cards, and then later by physically approaching that "square foot or two of glowing colors on a glass panel" and being drawn into its virtual desktop, with metaphors bogged down by copies of physical constraints in real offices. In MIT's Project Oxygen, a joint project of the Laboratory for Computer Science and the Artificial Intelligence Lab, we are trying to drag the computer out into the world of people. Computers are fast enough now to see and hear, and these are the principal modalities which we use to interact with other people. We are making our machines interact with people through these same modalities, using the perceptual capabilities of people rather than forcing them to rely on their cognitive abilities just to handle the interface. Cognitive capabilities should be reserved for the real things that people want to do.
Now for cyberbodies and lifestreams. By making computation people-centric, it should not matter whether I am in your office or mine, whether I pick up your PDA or mine, whether I pick up your cell phone or mine. Wherever I am, the system should adapt to my identity, whether I am carrying a "calling card" or not. It should adapt to me, not to yet another technological decoration that I need to carry around. And it should be automatic and secure as it does this. Just as people can tell my identity through vision and sound, so too can our machines. Furthermore, as computation is cheap, much cheaper these days than special-purpose circuitry (and wherever that is not true yet, it soon will be), there is no need for artifacts to have any particular identity. According to my needs at that instant, the machine in my hand should be able to morph from being a PDA to a cell phone to an MP3/Napster player, just by changing the digital signal processing it is doing. Physics requires a little bit in the way of an aerial, but beyond that demodulation, etc., can be in software. And then the systems should handle bandwidth restrictions behind my back, performing vertical hand-off between protocols as invisibly as today's cell phones perform horizontal hand-off between cells.
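Brooks's morphing handheld is, in effect, what later came to be called software-defined radio plus a swappable processing pipeline. A minimal Python sketch of the idea (class and function names are my own invention, not from any real device): one fixed piece of hardware takes on whatever identity the current software gives it.

```python
# Hypothetical sketch: one generic device, many personalities.
# The hardware stays fixed; only the processing pipeline is swapped.

def demodulate_voice(samples):   # stand-in for a real DSP stage
    return f"voice<{len(samples)} samples>"

def decode_mp3(samples):         # stand-in for an audio decoder
    return f"audio<{len(samples)} samples>"

def render_calendar(samples):    # stand-in for PDA data handling
    return f"calendar<{len(samples)} bytes>"

class MorphingDevice:
    """A fixed aerial feeding whatever pipeline the moment requires."""
    PIPELINES = {
        "phone": demodulate_voice,
        "player": decode_mp3,
        "pda": render_calendar,
    }

    def __init__(self):
        self.mode = "pda"

    def morph(self, mode):
        # Changing identity is just changing software, as Brooks argues.
        self.mode = mode

    def process(self, samples):
        return self.PIPELINES[self.mode](samples)

d = MorphingDevice()
d.morph("phone")
out = d.process([0] * 128)
```

The hardware object never changes; a one-line `morph` call is the whole identity switch, which is the substance of the "no particular identity" claim above.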
Lifestreams are one sort of metaphor. We will not be subject to the tyranny of a single metaphor as we are subject today to the desktop metaphor which Gelernter so masterfully scorns. For a lot of my everyday work I will prefer the metaphor of a personal assistant. I tell it something, and it takes care of the details, watching over me and only interceding when it sees that I need help, pulling in all the necessary information from wherever it is located, perhaps cached ahead of time in anticipation of my needs. After working with me for many years, my human personal assistant knows so many details of my life and interactions that I can entrust her to handle many of my interactions with the world without my ever providing any supervision. I will want a similar relationship with my computation. Others might prefer a geographical metaphor, zooming around through a virtual world, while a few might like the lifestreams metaphor. Once a few of these metaphors get invented and tried out, there will be a deluge of new metaphors as the young hackers attack the interface problem with a vengeance.
RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of IS Robotics, an 85-person robotics company. Dr. Brooks also appeared as one of the four principals in Errol Morris's 1997 movie "Fast, Cheap, and Out of Control" (named after one of his papers in the Journal of the British Interplanetary Society), one of Roger Ebert's ten best films of the year.
David Gelernter has a wonderful imagination and I am a bit afraid to contradict him, as he has obviously spent much more time thinking about the future of computing than I have. I am intrigued by many of the things he proposes. But let me say a word in defense of the present Macintosh system. I do suspect that some computer scientists have forgotten just how revolutionary and useful the Mac operating system is, and may be underestimating the longevity of this particular technology.
It is true that the Macintosh operating system is based on the old-fashioned metaphor of a desktop and filing cabinet. But I find that metaphor very useful. I do think of my computer as a very efficient and useful filing cabinet. I like the fact that the files have names and that I can search for them efficiently in several different ways. I like the hierarchical structure of directories. I like the fact that email is different from ordinary files, and I am happy that it only takes a few keystrokes to turn an email into a file if I need it to be one, or vice versa.
I also like the limited area of the desktop on my powerbook screen. At work I have a Silicon Graphics machine which works a bit more like David wants: one can have many different desktops for different purposes, and each can be much bigger than the screen, even though that is many times the size of the screen on my powerbook. But I find that I don't use any of these added features. It is too hard to remember how to use them, and I find that when I try to I often lose windows and icons which are off the screen. What is good about the desktop is that it is so limited. I can have piles of windows open at once, but I know where they all are. When there are too many I know I have to close some, which forces me to do a bit of cleaning up. It is like having to clean up one's desk when it overflows. Only unlike my real desk, which I can simply ignore, I do have to deal with my desktop and clean it up from time to time to keep working. I find this very useful, as it enforces a minimal level of organization in my work habits.
What David is describing is a computer which would work more like my own mind. But I am not sure I need a computer of this kind. Perhaps I do; I've never had one. But I do already have quite a good associative memory. My guess is that its limitations are built in, as there is an inevitable compromise between the vividness of memory and associations and alertness to the present. I would not want going to my computer to work to be like opening a box of old letters and photographs, or facing the task of throwing away old magazines that I never got to read. With a computer like this I might never get anything done. More than anything, what I like about my computer is that it does not offer me any information that I don't ask for.
What has gotten so distasteful about going on line is the imposition of unwanted information. The web was a lot more useful before pages began to be crowded with advertising and unwanted information. The sites I use most are the ones that offer the fewest possibilities for diversion from what I am seeking. If randomness and unpredictability were built into the experience of computing, it would cease to be a useful tool for me. Not enough has been said about the way that one site can change the working habits of a whole profession by changing the way we communicate with each other. This is true of the xxx.lanl.gov site, which is now the universal tool for publication in physics and math. It is tightly and rigidly structured, and that is what makes it so useful. It is an extremely good filing cabinet, so good that it replaces many filing cabinets in thousands of offices all over the world.
I also don't like the metaphor of organizing my interface with the computer in terms of the flow of real time. Another very good aspect of my computer is that it provides the illusion that time can be frozen. I can work on several projects at once, and each one is exactly where I left it when I go back to it. In the context of a very busy life, full of travel and unexpected demands and developments, my computer provides an oasis in which time advances in each window only when I pay attention to it.
So I don't need a computer to enhance my imagination or associative memory. I need a computer that counteracts the effects of my own too active imagination and too busy schedule. Because of this I know that a computer that works the way my powerbook does is something I will always need. And what makes my powerbook so useful is the fact that it works so differently than I do. The fact that all the files have names and locations in a hierarchical system is part of what makes it so useful. When I want to find a paper I wrote three years ago on quantum geometry, I want to be able to pull up that file right away, not every file I wrote in the last five years about some aspect of quantum geometry. Every once in a while I lose something, and it might be good to have a search machine that worked associatively. But not very often.
I do agree with a lot of what David says. I can imagine lots of improvements on the present Mac operating system. Some of the things he suggests would be very useful. And of course the idea of a kind of cyber-agent who represents me in cyberspace is intriguing and perhaps useful. But I have the sense that David's manifesto is a bit like the predictions I read as a child that by the 21st century cars would have evolved wings and we would all be flying to work. The technology of cars has improved a bit since then, but the basic experience of driving is almost exactly the same. Personally I don't cherish that experience so I prefer living in places where one can get almost everywhere by public transportation. Here in London at the beginning of the 21st century the only people who helicopter to work regularly are a few wealthy businessmen and a few members of the royal family.
SMOLIN is a theoretical physicist; professor of physics and member
of the Center for Gravitational Physics and Geometry at Pennsylvania
State University; author of The Life of the Cosmos.
I'm so delighted that David is still fighting the good fight, an idealist after all these years. Greed and even satisfied wealth have proven to be agents of distraction to all too many cyberdreamers. It's becoming ever more rare to find a young student with even half of David's quotient of fire in his/her soul about the potential for beauty and meaning in digital tools.
So, while I will offer some criticisms below, I hope they will be read as friendly and supportive.
David falls into a common trap that has snagged many a visionary over the years. He thinks about ideal Platonic computers instead of real computers. A billion Platonic computers support a seamless virtual space in which programs fly about unconcerned with which real computer might be visited at a given moment. A billion real computers, in contrast, require ten million human beings to run helpdesks, many thousands more to fight lawsuits over software compatibility, and a few hundred more to track malicious viruses that invade the automated virus-tracking software that never quite worked.
Real computers, unlike ideal computers, are the first machines that require an infinite rather than a finite amount of human labor for their maintenance. Real computers are less likely to allow us to forget them than any other gadget in the history of invention.
Furthermore, in order for a Platonic computer to appear, human good will and good taste will have to precede it. There will have to be no Bill Gates who forces technological sensibility into a retrograde motion in order to gain power.
In order for a Platonic computer to appear, humans will have to understand how to write large programs that interface with the real world in such a way that they are modifiable, secure, immune to becoming the bearers of future legacy headaches, and amenable to decent user interface design. We simply don't know how to write such programs yet. I expect us to learn to do it someday, in the same way I expect us to be able to build anti-gravity devices someday. I am idealistic, but not for progress in any relevant timeframe.
Moore's Law simply doesn't apply to software as it does to hardware. Software uses every opportunity to get worse instead of better. More memory means more bloat. More users means more incentives not to change, which means more legacy highwire Band-Aids. Software is like culture, starting out fresh and becoming decadent.
Having said all that, I love David's vision. Reading it inspired me to dig up a bit of my old ranting about what virtual reality software should look like. As it happens, I was hoping for something very much like Lifestreams back in the mid-80s.*
As I re-read this old material now, about fifteen years later, it seems a little naive. Surely I didn't think I'd play back virtualized memories as if they were on tape, fast-forwarding and reversing. That works for a single movie, but is no more possible for a lifetime than naming all those 10,000 cows. How would I break memories into atomic units so that they could be summarized or re-ordered? Would I just see a little bit of each room I entered? Maybe rooms aren't the right divider markers for memories. I'd have to impose some ontology onto my memories in order to be able to reduce them enough to search through them and manipulate them. I can't deal with my memories in an unreduced form, because I don't have the time. (This is the temporal version of the old Borges story about the map as big as the country it represents.) The fact that my memories must be automatically reduced in order to be usable brings up another problem area in David's vision.
Although it isn't immediately apparent, there's an implicit reliance on Artificial Intelligence in David's manifesto. Somehow the cybertraces that one leaves as one flits about the cyberuniverse, carefree like a butterfly, must at some point be parsed (according to that magic ontology) to be usable in the future. Either there's a sweatshop of third-world workers going over the life experiences of every wired citizen of the industrialized world, or there are computer algorithms doing the job.
Maybe by now my colleagues on this list are sick of my unyielding stance on AI, but I must repeat once again that Artificial Intelligence just stinks. It's a phony effect. You can't get something for nothing; the computer can't add wisdom to the mix. Or if you believe it can, I feel you've reduced yourself in a deep way, morally and esthetically. Think of the Turing Test: How can the judge know if the computer has gotten smart or if the person has gotten stupid? How can you know if those omniscient credit rating algorithms are brilliant, or if you're being an idiot by borrowing money when you don't need to in order to feed the algorithm with data?
Once again, I feel a tension between the ideal and the real. I am sold on the Lifestreams vision, on David's whole package, but I think the experience of using it will be extremely labor intensive, for me and for everybody.
And utterly worth all the trouble.
I must reject the final paragraph of the manifesto, which imagines an aspect of life more meaningful than technology, which we will be free to pursue when we can forget about technology. This reminds me of Marx's vision of what should happen after the revolution. He imagined we'd be reading the classics and practicing archery! Idealists always believe there's some more meaningful, less dreary plane of existence that can be found in this life. All we have to do is fix this hulking mess in front of us and we'll get there.
A lovely belief to hold!
LANIER, a computer scientist and musician, is a pioneer of virtual reality, and founder and former CEO of VPL. He is currently the lead scientist for the National Tele-Immersion Initiative.
Gelernter's manifesto is certainly well written: flowery and eloquently stated. However (why is there always a "however"?), it introduces new terms but not that many ideas that have not often been expressed before.
We are at the edge of a truly dramatic change in technology. For the past decade we have evolved from the view that the network is just a way of connecting computers together, to the current view that the network is the action, to the view often stated (by me and others) that no one cares about the network but only what they can access and interact with: information and people.
We are about to replace our old, slow electro-optical communications systems with all-optical end-to-end systems. This technology offers an enormous increase in bits per second. One strand of fiber can carry more bits per second than the entire current national backbone. This will cause a dramatic change in everything we have now. We will have to re-think our network protocols, the architecture of our computers, and just what we mean by a computer and software. Old ideas will soon go the way of the big mainframe operating systems and computers.
Back to the manifesto. It blends well into this rethinking process that the new technology will force. It would be unfortunate if the result of this reconceptualization ended up with the same old appearance and world model to users. The manifesto is a major step in making sure that does not happen. Let's just realize that the ideas are not new; they reflect the ideas of many people over many years. Now we need an industrial structure that allows these ideas to be developed and marketed!
DAVID FARBER, considered by many to be the grandfather of the Internet, is Chief Technologist, Federal Communications Commission.
David Gelernter is basically right: current generation computer interfaces are not very good. (Since we are all among friends here, we can say it: they suck). The ubiquitous windows desktop is a classic example of "early lock-in", like the QWERTY keyboard and the strange conventions of English spelling. These are both generally acknowledged as unfortunate accidents of history. They are non-optimal, but not quite bad enough to be worth changing. In fact, the standard computer interface incorporates both of these awful interfaces, yet interestingly, Gelernter does not suggest changing them.
Are we at the point where the desktop computer interface will be thrown out and replaced with something better? Is the computer desktop like the Roman alphabet, which we have learned to live with in spite of its quirks, or is it like the Roman system of numerals, which we have pretty much abandoned? As much as I like the idea of starting with a clean slate, I think it is more like the alphabet than the numerals, and it is more likely that the desktop interface will be improved than abandoned. Most of the specific improvements that Gelernter suggests, like content addressing, time-linking and multiple names, can be and are being incorporated into standard interfaces. It won't be elegant, but it will work.
So does this mean that we are doomed to a millennium of Windows 2xxx? I doubt it. As Scott McNealy is fond of pointing out, current PC operating systems are unwieldy "hair balls" of accumulated history. Eventually, someone will start from scratch and build something better. But I would be surprised if they start by throwing out the part that most users are the most comfortable with, which is the metaphor of physical document handling. The replacement, when it emerges, will win by doing a better job of the same thing.
Yet there is also a second type of competition, which is not so much a replacement as an addition. Computers are useful for more than handling documents, and other interfaces will be developed for these other functions. These interfaces are more likely to nurture the emergence of radical new ideas. If David Gelernter really wants to invent a new interface (and he would probably be good at it), he should forget about looking for a better way to handle documents, and start thinking about a computer that handles ideas.
DANIEL HILLIS, former vice president of research and development at The Walt Disney Company, is the co-founder of a startup, Applied Minds. He is the author of the book The Pattern On The Stone: The Simple Ideas That Make Computers Work.
A brief scan leads to the impression that while "the second coming" is inevitable, as with most technologies the path to getting there often changes the end we arrive at. Transition strategies here will significantly impact the end state.
VINOD KHOSLA is a partner in the venture capital firm Kleiner Perkins Caufield & Byers. He was a co-founder of Daisy Systems and founding Chief Executive Officer of Sun Microsystems.
Comments on the Gelernter Manifesto
i. I found a lot wrong with the manifesto, so I'll begin with something I found usable in it. Gelernter grumbles in item 31 that since email messages aren't files they don't have names and can't stand on their own. I also find it a problem, and it occurred to me how to mitigate the problem in my own mail reader which is within my word processor.
Suppose I'm reading a message that I consider significant. Typing a single command inserts a reference to the appropriate page in the message file at the end of a special file of messages, puts in the time, and puts me where I can add an identifying comment. The entry for the email with the manifesto is "Sat Jun 17 12:48:28 2000 /u/jmc/RMAIL.S00==1906 Gelernter Manifesto", giving the time, the location of the message in the mail file and the name I gave the message.
If I later click on that line, I'll be reading the message again.
The purpose of messages having names of some sort is so that the receiver can retrieve a message later. I doubt that such a name can be automatically generated from the message itself, because the subject line, etc. are in the mental space of the sender, not the receiver. The receiver has to somehow give the message a name if he wants to be able to subsequently retrieve it in one step. In this case, I chose "Gelernter Manifesto".
It took 12 minutes to write and debug the message naming facility in the Xemacs editor. The MS-Word users I consulted told me that it would be very difficult to script MS-Word and Windows email systems to do it.
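McCarthy's facility lives inside Xemacs, but the underlying mechanism is simple enough to sketch. Here is a minimal Python version of the same idea; the index file name and the exact entry format are invented for illustration, modeled on the example entry above:

```python
import time

INDEX_FILE = "message_names.txt"  # hypothetical index of named messages


def name_message(mail_file, offset, name):
    """Append a retrievable entry: timestamp, message location, user-chosen name."""
    stamp = time.strftime("%a %b %d %H:%M:%S %Y")
    entry = f"{stamp} {mail_file}=={offset} {name}"
    with open(INDEX_FILE, "a") as f:
        f.write(entry + "\n")
    return entry


def find_message(name):
    """Return (mail_file, offset) for a previously named message, or None."""
    with open(INDEX_FILE) as f:
        for line in f:
            line = line.rstrip("\n")
            if line.endswith(" " + name):
                # strip the name, then take the location field before it
                location = line[: -len(name) - 1].split()[-1]
                mail_file, offset = location.split("==")
                return mail_file, int(offset)
    return None


# Usage, mirroring McCarthy's example entry:
# name_message("/u/jmc/RMAIL.S00", 1906, "Gelernter Manifesto")
# find_message("Gelernter Manifesto")
```

The point of the sketch is McCarthy's own: the receiver, not the sender, supplies the name, and one step (here, one function call) takes you back to the message.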
ii. We all find ourselves repeating essentially the same tasks in using computers. Here's a slogan.
Anything a user can do himself, he should be able to make the computer do for him.
Fully realizing this slogan would be a big step, but even a little helps. It's called letting the user "customize" his environment. Point i above is a small example.
Unfortunately, the making of computer systems and software is dominated by the ideology of the omnipotent programmer (or web site designer) who knows how the user (regarded as a child) should think and reduces the user's control to pointing and clicking. This ideology has left even the most sophisticated users in a helpless position compared to where they were 40 years ago in the late 1950s.
Scripting languages were a start in the direction of giving the user more power, but the present ones aren't much good, and not even programmers use them much to make their own lives simpler. Scripting is particularly awkward for point and click use. Xemacs customization is reasonably convenient, but it isn't contiguous with Xemacs Lisp, a really good programming language.
Linux is a step in the right direction of giving the user control, in that the source of the operating system is available to users, but I doubt that many users change Linux for purely personal convenience.
Back to Gelernter
iii. Most of the Manifesto's metaphors, e.g. "beer from burst barrels" and "scooped out hole in the beach", aren't informative.
iv. In item 4, Gelernter offers
The Orwell law of the future: any new technology that CAN be tried WILL be. Like Adam Smith's invisible hand (leading capitalist economies toward ever increasing wealth), Orwell's Law is an empirical fact of life.
It isn't true, and I don't believe Orwell said it. In the preface to "1984", Orwell wrote that "1984" is a cautionary tale that he didn't expect to happen. In particular, "1984" has the TV that permitted Big Brother's minions to spy on the viewer. I don't think Orwell expected that to be tried, and it hasn't been.
Indeed the reverse is true. Most possible new technologies are never tried.
v. Gelernter, like many other commentators, is glib about the system software and its documentation being bad. Don Norman beat that drum, and Apple hired him to make things better. He and they didn't have much success. A more careful analysis of what causes difficulty and how to fix it is needed.
vi. The problem with file systems and any other tree structures is that tree structures aren't memorable. Someone else's tree structure, e.g. a telephone keypad tree, is often helpful the first time you use it, but it is a pain to go through the tree again and again to reach a particular leaf.
vii. I couldn't figure out what Cybersphere was supposed to mean except that it's grand, and I see that the other commentators didn't either. Computers haven't changed people's lives to the extent that telephones, radio, automobiles and air travel did early in the previous century. Paul Krugman is eloquent on this point in the NY Times for 2000 June 18. Human level artificial intelligence would revolutionize human life, but fewer people in AI are working in that direction than in the 1970s. Erik Mueller documents one aspect of this neglect in his 1999 article http://www.media.mit.edu/~mueller/papers/storyund.html.
viii. I think the idea of doing an Amazon search for a book on your own computer is a bad one, because the computations are trivial, whereas the file accesses to the Amazon database are substantial. To do it on your own computer would require downloading the whole Amazon catalog before you started your search.
ix. Re item 21 thru 26, I don't think changing "desktop" to "information landscape" would have made much difference. The problem of what you can do with a small screen will remain as long as we have small screens. Two foot by 3 foot flat screens with 200 bit per inch resolution will change computer use much more than another factor of 100 in processor speed. We also need the bathtub screen, the beach screen and the bed screen.
x. item 32. Directories reaching out for files is vague and suggests more AI than is currently available.
xi. There's something in "streams of time", but it's vague. One thing that is feasible is for an operating system to make a journal including all the user's key strokes and mouse clicks and identifiable more substantial operations. The journal should be available for the user to inspect, replay bits of, and to offer for expert inspection when something has gone wrong.
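The journal McCarthy proposes can be roughed out in a few lines. This is an illustrative sketch only: the event format and the handler scheme are invented, and a real implementation would hook into the window system rather than be fed events by hand.

```python
import time


class ActionJournal:
    """Append-only log of user actions; supports inspection and partial replay."""

    def __init__(self):
        self.entries = []

    def record(self, kind, detail):
        # kind: e.g. "key", "click", "command"; detail: payload needed to replay it
        self.entries.append({"t": time.time(), "kind": kind, "detail": detail})

    def inspect(self, last_n=10):
        """Return the most recent entries for the user (or an expert) to review."""
        return self.entries[-last_n:]

    def replay(self, start, end, handlers):
        """Re-apply entries[start:end] through caller-supplied handlers."""
        for e in self.entries[start:end]:
            handlers[e["kind"]](e["detail"])


# Usage: rebuild a text buffer from journaled keystrokes.
journal = ActionJournal()
for ch in "hello":
    journal.record("key", ch)

buffer = []
journal.replay(0, 5, {"key": buffer.append})
# buffer now holds the five replayed keystrokes
```

The replay-through-handlers design is what makes "offer it for expert inspection when something has gone wrong" possible: the expert can re-run any slice of the session against instrumented handlers instead of the live system.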
xii. I don't understand the objection to names; they were invented long before computers. In item 37, Natasha and Fifth Avenue are names.
xiii. item 41. "To send email, you put a document on someone else's stream." That suggests that the recipient would read it right away or at least at a time determined by the sender. Present email sits till you get around to it, and that's better.
xiv. Paper will be needed until screens are better. I use paper just as Gelernter suggests. Print the document for reading and then throw it away. I'll do that even at the cost of losing the pretty red ink I've put on my printout of the Manifesto.
McCARTHY is Professor of Computer Science at Stanford University. A pioneer in artificial intelligence, McCarthy invented LISP, the preeminent AI programming language, and first proposed general-purpose time sharing.