

REBOOTING CIVILIZATION

RAY KURZWEIL: ONE HALF OF AN ARGUMENT
A Response to Jaron Lanier's ONE HALF A MANIFESTO
and POSTSCRIPT REGARDING RAY KURZWEIL [8.4.01]


[7.29.01]

In Jaron Lanier's Postscript, which he wrote after he and I spoke in succession at a technology event, Lanier points out that we agree on many things, which indeed we do. So I'll start in that vein as well. First of all, I share the world's esteem for Jaron's pioneering work in virtual reality, including his innovative contemporary work on the "Teleimmersion" initiative and, of course, his coining of the term "virtual reality." I probably have higher regard for virtual reality than Jaron does, but that comes back to our distinct views of the future.

As an aside, I'm not entirely happy with the phrase "virtual reality," as it implies that it's not a real place to be. I consider a telephone conversation to be a form of being together in auditory virtual reality, yet we regard such conversations as real. I have a similar problem with the term "artificial intelligence."

And as a pioneer in what I believe will become a transforming concept in human communication, I know that Jaron shares with me an underlying enthusiasm for the contributions that computer and related communications technologies can make to the quality of life. That is the other half of his manifesto. I appreciate Jaron pointing this out. It's not entirely clear sometimes, for example, that Bill Joy has another half to his manifesto.

And I agree with at least one of Jaron's six objections to what he calls "Cybernetic Totalism." In objection #3, he takes issue with those who maintain "that subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect." The reason that some people feel this way is precisely because subjective experience cannot be scientifically measured. Although we can measure certain correlates of subjective experience (e.g., correlating certain patterns of objectively measurable neurological activity with objectively verifiable reports of certain subjective experiences), we cannot penetrate to the core of subjective experience through objective measurement. It's the difference between the concept of "objectivity," which is the basis of science, and "subjectivity," which is essentially a synonym for consciousness. There is no device or system we can postulate that could definitively detect subjectivity associated with an entity, at least no such device that does not have philosophical assumptions built into it.

So I accept that Jaron Lanier has subjective experiences, and I can even imagine (and empathize with!) his feelings of frustration at the dictums of "cybernetic totalists" such as myself (not that I accept this characterization) as he wrote his half manifesto. Like Jaron, I even accept the subjective experience of those who maintain that there is no such thing as subjective experience. Of course, most people do accept that other people are conscious, but this shared human assumption breaks down as we go outside of human experience, e.g., the debates regarding animal rights (which have everything to do with whether animals are conscious or just quasi-machines that operate by "instinct"), as well as the debates regarding the notion that a nonbiological entity could conceivably be conscious.

Consider that we are unable to truly experience the subjective experiences of others. We hear their reports about their experiences, and we may even feel empathy in response to the behavior that results from their internal states. We are, however, only exposed to the behavior of others and, therefore, can only imagine their subjective experience. So one can construct a perfectly consistent, and scientific, worldview that omits the existence of consciousness. And because there is fundamentally no scientific way to measure the consciousness or subjective experience of another entity, some observers come to the conclusion that it's just an illusion.

My own view is that precisely because we cannot resolve issues of consciousness entirely through objective measurement and analysis, i.e., science, there is a critical role for philosophy, which we sometimes call religion. I would agree with Jaron that consciousness is the most important ontological question. After all, if we truly imagine a world in which there is no subjective experience, i.e., a world in which there is swirling stuff but no conscious entity to experience it, then that world may as well not exist. In some philosophical traditions (i.e., some interpretations of quantum mechanics, some schools of Buddhist thought), that is exactly how such a world is regarded.

I like Jaron's term "circle of empathy," which makes it clear that the circle of reality that I consider to be "me" is not clear-cut. Our circle of empathy is certainly not simply our body, as we have limited identification with, say, our toes, and even less with the contents of our large intestines. Even with regard to our brains, we are aware of only a small portion of what goes on in them, and often consider thoughts and dreams that suddenly intrude on our awareness to have come from some foreign place. We do often include loved ones who may be physically distant within our circle of empathy. Thus the aspect of the Universe that I consider to be "myself" is not at all clear-cut, and some philosophies do emphasize the extent to which there is inherently no such boundary.

Having stated a few ways in which Jaron and I agree with each other's perspective, I will say that his "Half of a Manifesto" mischaracterizes many of the views he objects to. Certainly that's true with regard to his characterization of my own thesis. In particular, he appears to have only picked up on half of what I said in my talk, because the other half addresses at least some of the issues he raises. Moreover, many of Jaron's arguments aren't really arguments at all, but an amalgamation of mentally filed anecdotes and engineering frustrations. The fact that Time Magazine got a prediction wrong in 1966, as Jaron reports, is not a compelling argument that all discussions of trends are misguided. Nor is the fact that dinosaurs did not continue to increase in size indefinitely a demonstration that every trend quickly dies out. The size of dinosaurs is irrelevant; a larger size may or may not impart an advantage, whereas an increase in the price-performance and/or bandwidth of a technology clearly does impart an advantage. It would be hard to make the case that a technology with a lower price-performance had inherent advantages, whereas it is certainly possible that a smaller and therefore more agile animal may have advantages.

Jaron Lanier has what my colleague Lucas Hendrich calls the "engineer's pessimism." Often an engineer or scientist who is immersed in the difficulties of a contemporary challenge fails to appreciate the ultimate long-term implications of their own work and, in particular, of the larger field in which they operate. Consider the biochemists in 1985 who were skeptical of the announcement of the goal of sequencing the entire genome in a mere 15 years. These scientists had just spent an entire year sequencing a mere one ten-thousandth of the genome, so even with reasonable anticipated advances, it seemed to them that it would take hundreds of years, if not longer, before the entire genome could be sequenced. Or consider the skepticism expressed in the mid-1980s that the Internet would ever be a significant phenomenon, given that it included only tens of thousands of nodes. The fact that the number of nodes was doubling every year and there were, therefore, likely to be tens of millions of nodes ten years later was not appreciated by those who struggled with "state-of-the-art" technology in 1985, which permitted adding only a few thousand nodes throughout the world in a year.

In his "Postscript regarding Ray Kurzweil," Jaron asks the rhetorical question "about Ray's exponential theory of history ... [is he] stacking the deck by choosing points that fit the curves he wants to find?" I can assure Jaron that the more points we add to the dozens of exponential graphs I presented to him and the rest of the audience in Atlanta, the clearer the exponential trends become. Does he really imagine that there is some circa-1901 calculating device that has better price-performance than our circa-2001 devices? Or even a 1995 device that is competitive with a 2001 device? In fact, what we see as more points (representing specific devices) are collected is a cascade of "S-curves," in which each S-curve represents a specific technological paradigm. Each S-curve (which looks like an "S" in which the top portion is stretched out to the right) starts out with gradual and then extreme exponential growth, subsequently leveling off as the potential of that paradigm is exhausted. But what turns each S-curve into an ongoing exponential is the shift to another paradigm, and thus to another S-curve, i.e., innovation. The pressure to explore and discover a new paradigm increases as the limits of the current paradigm become apparent.
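To make the composition of S-curves concrete, here is a minimal Python sketch with purely illustrative parameters (the paradigm start times and ceilings below are invented for the example, not measured data): each paradigm follows a logistic curve, and because each successive paradigm saturates at a far higher level, their sum keeps climbing roughly exponentially.

import math

def s_curve(t, start, ceiling, rate=1.0):
    # Logistic S-curve: near zero before `start`, saturating at `ceiling`.
    return ceiling / (1.0 + math.exp(-rate * (t - start - 5)))

# Hypothetical paradigms: each arrives a decade later but tops out ~30x higher.
paradigms = [(0, 1.0), (10, 30.0), (20, 900.0), (30, 27000.0)]

for t in range(0, 41, 5):
    total = sum(s_curve(t, start, ceiling) for start, ceiling in paradigms)
    print(f"year {t:2d}: combined capability ~ {total:10.1f}")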

When it became impossible to shrink vacuum tubes any further and maintain the requisite vacuum, transistors came along, which are not merely small vacuum tubes. We've been through five paradigms in computing in the past century (electromechanical calculators, relay-based computers, vacuum-tube-based computing, discrete transistors, and then integrated circuits, on which Moore's law is based). As the limits of flat integrated circuits are now within sight (one to one and a half decades away), there are already dozens of projects underway to pioneer the sixth paradigm of computing, which is computing in three dimensions, several of which have demonstrated small-scale working systems.

It is specifically the processing and movement of information that is growing exponentially. So one reason that an area such as transportation is resting at the top of an S-curve is that many if not most of the purposes of transportation have been satisfied by exponentially growing communication technologies. My own organization has colleagues in different parts of the country, and most of our needs that in times past would have required a person or a package to be transported can be met through the increasingly viable virtual meetings made possible by a panoply of communication technologies, some of which Jaron is himself working to advance. Having said that, I do believe we will see new paradigms in transportation. However, with increasingly realistic, high resolution full-immersion forms of virtual reality continuing to emerge, our needs to be together will increasingly be met through computation and communication.

Jaron's concept of "lock-in" is not the primary obstacle to advancing transportation. If the existence of a complex support system necessarily caused lock-in, then why don't we see lock-in preventing ongoing expansion of every aspect of the Internet? After all, the Internet certainly requires an enormous and complex infrastructure. The primary reason that transportation is under little pressure for a paradigm-shift is that the underlying need for transportation has been increasingly met through communication technologies that are expanding exponentially.

One of Jaron's primary themes is to distinguish between quantitative and qualitative trends, saying in essence that perhaps certain brute-force capabilities such as memory capacity, processor speed, and communications bandwidth are expanding exponentially, but the qualitative aspects are not. In support of this, Jaron complains of a multiplicity of software frustrations (many, incidentally, having to do with Windows) that plague both users and, in particular, software developers like himself. This is the hardware-versus-software challenge, and it is an important one. Jaron does not mention my primary thesis at all, which has to do with the software of intelligence. Jaron characterizes my position, and that of other so-called "cybernetic totalists," as being that we'll just figure it out in some unspecified way, what he refers to as a software "Deus ex Machina." In fact, I have a specific and detailed scenario for achieving the software of intelligence, which concerns the reverse engineering of the human brain, an undertaking that is much further along than most people realize. I'll return to this in a moment, but first I would like to address some other basic misconceptions about the so-called lack of progress in software.

Jaron calls software inherently "unwieldy" and "brittle" and writes at great length on a variety of frustrations that he encounters in the world of software. He writes that "getting computers to perform specific tasks of significant complexity in a reliable but modifiable way, without crashes or security breaches, is essentially impossible." I certainly don't want to put myself in the position of defending all software (any more than I would care to characterize all people as wonderful). But it's not the case that complex software is necessarily brittle and prone to catastrophic breakdown. There are many examples of complex mission-critical software that operates with few if any breakdowns, for example the sophisticated software that controls an increasing fraction of airplane landings, or the software that monitors patients in critical-care facilities. I am not aware of any airplane crashes that have been caused by automated landing software; the same, however, cannot be said for human reliability.

Jaron says that "Computer user interfaces tend to respond more slowly to user interface events, such as a keypress, than they did fifteen years ago ... What's gone wrong?" To this I would invite Jaron to try using an old computer today. Even if we put aside the difficulty of setting one up today (which is a different issue), Jaron has forgotten just how unresponsive, unwieldy, and limited those machines were. Try getting some real work done to today's standards with a fifteen-year-old personal computer. It's simply not true that the old software was better in any qualitative or quantitative sense. If you believe that, then go use it.

Although it's always possible to find poor-quality design, the primary reason for user interface response delays is user demand for more sophisticated functionality. If users were willing to freeze the functionality of their software, then the ongoing exponential growth of computing speed and memory would quickly eliminate software response delays. But they're not. So functionality always stays on the edge of what's feasible (personally, I'm waiting for the Teleimmersion upgrade to my videoconferencing software).

This romancing of software from years or decades ago is comparable to people's idyllic view of life hundreds of years ago, when we were unencumbered by the frustrations of machines. Life was unencumbered, perhaps, but it was also short (e.g., life expectancy was less than half of today's), labor-intensive (e.g., just preparing the evening meal took many hours of hard labor), poverty-filled, and disease- and disaster-prone.

With regard to the price-performance of software, the comparisons in virtually every area are dramatic. For example, in 1985, $5,000 bought you a speech recognition software package that provided a 1,000-word vocabulary, no continuous-speech capability, required three hours of training, and had relatively poor accuracy. Today, for only $50, you can purchase a speech recognition software package with a 100,000-word vocabulary and continuous-speech capability that requires only five minutes of training, has dramatically improved accuracy, offers natural-language understanding (for editing commands and other purposes), and has many other features.
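A back-of-envelope calculation on just these figures (setting aside the gains in accuracy, training time, and features) shows the scale of the improvement:

# Figures taken from the comparison above.
price_1985, vocab_1985 = 5000.0, 1000
price_2001, vocab_2001 = 50.0, 100000

cost_per_word_1985 = price_1985 / vocab_1985   # $5.00 per vocabulary word
cost_per_word_2001 = price_2001 / vocab_2001   # $0.0005 per vocabulary word

print(f"improvement in cost per vocabulary word: {cost_per_word_1985 / cost_per_word_2001:,.0f}x")   # 10,000x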

How about software development itself? I've been developing software myself for forty years, so I have some perspective on this. The growth in software development productivity has a lower exponent, but it is nonetheless exponential. The development tools, class libraries, and support systems available today are dramatically more effective than those of decades ago. Today I have small teams of just three or four people who achieve in a few months objectives comparable to what a team of a dozen or more people could accomplish in a year or more 25 years ago. I estimate the doubling time of software productivity to be approximately six years, which is slower than the doubling time for processor price-performance, which today is approximately one year, but it is exponential growth nonetheless.
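As a rough sanity check on these estimates (using only the six-year doubling time stated above; the team-size comparison is an approximation):

years = 25
software_doubling_years = 6.0                       # doubling time estimated above
software_gain = 2 ** (years / software_doubling_years)
print(f"implied productivity gain over {years} years: ~{software_gain:.0f}x")   # ~18x

# For comparison, the team anecdote above: roughly 12 people x 12 months then,
# versus 3-4 people x a few months now, i.e. on the order of a 10-15x gain.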

The most important point to be made here is that there is a specific game plan for achieving human-level intelligence in a machine. I agree that achieving the requisite hardware capacity is a necessary but not sufficient condition. As I mentioned above, we have a resource for understanding how to program the methods of human intelligence given hardware that is up to the task, and that resource is the human brain itself.

Here again, if you speak to some of the neurobiologists who are diligently creating detailed mathematical models of the hundreds of types of neurons found in the brain, or who are modeling the patterns of connections found in different regions, you will in at least a few cases encounter the same sort of engineer's or scientist's myopia that results from being immersed in the specifics of one aspect of a large challenge. However, having tracked the progress being made in accumulating all of the (yes, exponentially increasing) knowledge about the human brain and its algorithms, I believe that it is a conservative scenario to expect that within thirty years we will have detailed models of the several hundred information-processing organs we collectively call the human brain.

For example, Lloyd Watts has successfully synthesized (that is, assembled and integrated) detailed models of the neurons and interconnections in more than a dozen regions of the brain having to do with auditory processing. He has a detailed model of the information transformations that take place in these regions and of how this information is encoded, and he has implemented these models in software. The performance of Watts' software matches the intricacies that have been revealed in subtle experiments on human hearing and auditory discrimination. Most interestingly, using Watts' models as the front end in speech recognition has demonstrated the ability to pick out one speaker against a backdrop of background sounds, an impressive feat that humans are capable of and that, up until Watts' work, had not been feasible in automated speech recognition systems.

The brain is not one big neural net. It consists of hundreds of regions, each of which is organized differently, with different types of neurons, different types of signaling, and different patterns of interconnection. By and large, its algorithms are not the sequential, logical methods that are commonly used in digital computing. The brain tends to use self-organizing, chaotic, holographic (i.e., information is not stored in one place but distributed throughout a region), massively parallel, and digitally controlled analog methods. However, we have demonstrated in a wide range of projects the ability to understand these methods and to extract them from the rapidly escalating knowledge of the brain and its organization.

The speed, cost-effectiveness, and bandwidth of human brain scanning are all growing exponentially, doubling every year. Our knowledge of human neuron models is also rapidly growing. The size of the neuron clusters that we have successfully recreated in terms of functional equivalence is also scaling up exponentially.

I am not saying that this process of reverse engineering the human brain is the only route to "strong" AI. It is, however, a critical source of knowledge that is feeding into our overall research activities where these methods are integrated with other approaches.

Also, it is not the case that the complexity of software, and therefore its "brittleness," needs to scale up dramatically in order to emulate the human brain, even when we get to emulating its full functionality. My own area of technical interest is pattern recognition, and the methods we typically use are self-organizing methods such as neural nets, Markov models, and genetic algorithms. When set up in the right way, these methods can often display subtle and complex behaviors that are not predictable by the designer who puts them into practice. I'm not saying that such self-organizing methods are an easy shortcut to creating complex and intelligent behavior, but they do represent one important way in which the complexity of a system can be increased without the brittleness of explicitly programmed logical systems.
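For readers unfamiliar with these methods, here is a deliberately tiny genetic-algorithm sketch (a toy bit-matching problem chosen for brevity, not drawn from my own pattern-recognition work): the designer specifies only a fitness measure and the mechanics of variation and selection, and the solutions emerge rather than being explicitly programmed.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # toy goal

def fitness(genome):
    # Count matching bits; this measure is all the "designer" specifies.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(60)]
for _ in range(100):                                         # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                                # selection
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(40)]                         # variation
    population = parents + offspring

best = max(population, key=fitness)
print("best pattern found:", best, "score:", fitness(best), "of", len(TARGET))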

Consider that the brain itself is created from a genome containing only 23 million bytes of useful information (that's what's left of the 800-million-byte genome when you eliminate all the redundancies, e.g., the sequence called "Alu," which is repeated hundreds of thousands of times). Twenty-three million bytes is not that much information (it's less than Microsoft Word). How is it, then, that the human brain, with its 100 trillion connections, can result from a genome that is so small? I have estimated that the interconnection data alone needed to characterize the human brain is a million times greater than the information in the genome.
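As a rough order-of-magnitude check on that estimate (the genome and connection counts are the figures cited above; the bytes-per-connection value is an assumption for illustration, and with it the ratio comes out even larger than a million-fold, which only strengthens the point):

genome_bytes   = 23e6     # ~23 million bytes of non-redundant genome (figure above)
connections    = 100e12   # ~100 trillion interneuronal connections (figure above)
bytes_per_link = 5        # assumption: ~5 bytes to address a target neuron

connectome_bytes = connections * bytes_per_link
print(f"explicit wiring list: ~{connectome_bytes:.0e} bytes")
print(f"ratio to genome:      ~{connectome_bytes / genome_bytes:.0e}x")
# On the order of 1e7x -- far too much for the genome to be an explicit wiring diagram.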

The answer is that the genome specifies a set of processes, each of which utilizes chaotic methods (i.e., initial randomness, then self-organization) to increase the amount of information represented. It is known, for example, that the wiring of the interconnections follows a plan that includes a great deal of randomness. As the individual person encounters her environment, the connections and the neurotransmitter level patterns self-organize to better represent the world, but the initial design is specified by a program that is not extreme in its complexity.

It is not my position that we will program human intelligence link by link, as in some huge CYC-like expert system. Nor is it the case that we will simply set up a huge genetic (i.e., evolutionary) algorithm and have intelligence at human levels automatically evolve itself. Jaron worries, correctly, that any such approach would inevitably get stuck in some local minimum. He also interestingly points out how biological evolution "missed the wheel." Actually, that's not entirely accurate. There are small wheel-like structures at the protein level, although it's true that their primary function is not vehicle transportation. Wheels are not very useful, of course, without roads. However, biological evolution did create a species that created wheels (and roads), so it did succeed in creating a lot of wheels, albeit indirectly (and there's nothing wrong with indirect methods; we use them in engineering all the time).

With regard to creating human levels of intelligence in our machines, we will integrate the insights and models gained from reverse engineering the human brain, which will involve hundreds of regions, each with different methods, many of which do involve self-organizing paradigms at different levels. The feasibility of this reverse engineering project and of implementing the revealed methods has already been clearly demonstrated. I don't have room in this response to describe the methodology and status of brain reverse engineering in detail, but I will point out that the concept is not necessarily limited to neuromorphic modeling of each neuron. We can model substantial neural clusters by implementing parallel algorithms that are functionally equivalent. This often results in substantially reduced computational requirements, as has been shown by Lloyd Watts and Carver Mead.

Jaron writes that "if there ever was a complex, chaotic phenomenon, we are it." I agree with that, but don't see this as an obstacle. My own area of interest is chaotic computing, which is how we do pattern recognition, which in turn is the heart of human intelligence. Chaos is part of the process of pattern recognition, it drives the process, and there is no reason that we cannot harness these methods in our machines just as they are utilized in our brains.

Jaron writes that "evolution has evolved, introducing sex, for instance, but evolution has never found a way to be any speed but very slow." But he is ignoring the essential nature of an evolutionary process, which is that it accelerates because each stage introduces more powerful methods for creating the next stage. Biological evolution started out extremely slowly, and the first halting steps took billions of years. The design of the principal body plans was faster, requiring only tens of millions of years. The process of biological evolution has accelerated, with each stage faster than the stage before it. Later key steps, such as the emergence of Homo sapiens, took only hundreds of thousands of years. Human technology, which is evolution continued indirectly (created by a species that was itself created by evolution), continued this acceleration. Its first steps took tens of thousands of years, already outpacing biological evolution, and it has accelerated from there. The World Wide Web emerged in only a few years, distinctly faster than, say, the Cambrian explosion.

Jaron complains that "surprisingly few of the most essential algorithms have overheads that scale at a merely linear rate." Without taking up several pages to analyze this statement in detail, I will point out that the brain does what it does in real time, using interneuronal connections (where most of our thinking takes place) that operate at least ten million times slower than contemporary electronic circuits. We can observe the brain's massively parallel methods in detail, ultimately scan and understand all of its tens of trillions of connections, and replicate its methods. As I've mentioned, we're well down that path.
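The "ten million times slower" comparison is straightforward arithmetic; in round, assumed numbers (a few hundred neural transactions per second versus gigahertz electronics):

neuron_ops_per_sec = 200      # assumption: ~200 transactions/sec per connection
electronic_hz      = 2e9      # assumption: ~2 GHz circa-2001 digital logic
print(f"serial speed ratio: ~{electronic_hz / neuron_ops_per_sec:.0e}x")   # ~1e7
# The brain compensates with massive parallelism: on the order of 1e14
# connections operating concurrently rather than a few fast serial units.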

To correct a few of Jaron's statements regarding (my) time frames: it's not my position that the "singularity" will "arrive a quarter of the way into the new century" or that a "new criticality" will be "achieved in about the year 2020." Just so that the record is straight, my view is that we will have the requisite hardware capability to emulate the human brain in $1,000 worth of computation (which won't be organized in the rectangular forms we see today, such as notebooks and palmtops, but rather embedded in our environment) by 2020. The software will take longer, to around 2030. The "singularity" has divergent definitions, but for our purposes here we can consider it to be a time when nonbiological forms of intelligence dominate purely biological forms, while being derivative of them. This takes us beyond 2030, to perhaps 2040 or 2050.

Jaron calls this an "immanent doom" and "an eschatological cataclysm," as if it were clear on its face that such a development were undesirable. I view these developments as simply the continuation of the evolutionary process and neither utopian nor dystopian. It's true, on the one hand, that nanotechnology and strong AI, and particularly the two together, have the potential to solve age-old problems of poverty and human suffering, not to mention clean up the messes we're creating today with some of our more primitive technologies. On the other hand, there will be profound new problems and dangers that will emerge as well. I have always considered technology to be a double-edged sword. It amplifies both our creative and destructive natures, and we don't have to look further than today to see that.

However, on balance, I view the progression of evolution as a good thing, indeed as a spiritual direction. What we see in evolution is a progression towards greater intelligence, greater creativity, greater beauty, greater subtlety (i.e., the emergence of entities with emotion such as the ability to love, therefore greater love). And "God" has been described as an ideal of an infinite level of these same attributes. Evolution, even in its exponential growth, never reaches infinite levels, but it's moving rapidly in that direction. So we could say that this evolutionary process is moving in a spiritual direction.

However, the story of the twenty-first century has not yet been written. So it's not my view that any particular story is inevitable, only that evolution, which has been inherently accelerating since the dawn of biological evolution, will continue its exponential pace.

Jaron writes that "the whole enterprise of Artificial Intelligence is based on an intellectual mistake." Until such time as computers at least match human intelligence in every dimension, it will always remain possible for skeptics to say the glass is half empty. Every new achievement of AI can be dismissed by pointing out that other goals have not yet been accomplished. Indeed, this is the frustration of the AI practitioner: once an AI goal is achieved, it is no longer considered AI and becomes just a useful technique. AI is inherently the set of problems we have not yet solved.

Yet machines are indeed growing in intelligence, and the range of tasks that machines can accomplish that previously required intelligent human attention is rapidly growing. There are hundreds of examples of narrow AI today (e.g., computers evaluating electrocardiograms and blood cell images, making medical diagnoses, guiding cruise missiles, making financial investment decisions, not to mention intelligently routing emails and cell phone connections), and the domains are becoming broader. Until such time as the entire range of human intellectual capability is emulated, it will always be possible to minimize what machines are capable of doing.

I will point out that once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle, human levels of pattern recognition with the natural advantages of machine intelligence. For example, machines can instantly share knowledge, whereas we don't have quick downloading ports for our patterns of interneuronal connections and neurotransmitter concentrations. Machines are much faster (as I mentioned, contemporary electronics is already ten million times faster than the electrochemical information processing used in our brains) and have much more prodigious and accurate memories.

Jaron refers to the annual "Turing test" contest run by Hugh Loebner, and maintains that "we have caused the Turing test to be passed." These are misconceptions. I used to be on the prize committee of this contest until a political conflict caused most of the prize committee members to quit. Be that as it may, this contest is not really a Turing test, as we're not yet at that stage. It's a "narrow Turing test," which deals with domain-specific dialogues, not the unrestricted dialogue that Turing envisioned. With regard to the Turing test as Turing described it, it is generally accepted that it has not yet been passed.

Returning to Jaron's nice phrase "circle of empathy," he writes that his "personal choice is to not place computers inside the circle." But would he put neurons inside that circle? We've already shown that a neuron, or even a substantial cluster of neurons, can be emulated in great detail and accuracy by computers. So where on that slippery slope does Jaron find a stable footing? As Rodney Brooks says in his September 25, 2000 commentary on Jaron's "Half of a Manifesto," Jaron "turns out to be a closet Searlean." He just assumes that a computer cannot be as subtle, or as conscious, as the hundreds of neural regions we call the human brain. Like Searle, Jaron simply assumes his conclusion. (For a more complete discussion of Searle and his theories, see my essay "Locked in his Chinese Room, Response to John Searle" in the forthcoming book Are We Spiritual Machines?: Ray Kurzweil vs. the Critics of Strong AI, Discovery Institute Press, 2001. The entire book will be posted on http://www.KurzweilAI.net.)

Near the end of Jaron's essay, he worries about the "terrifying" possibility that through these technologies the rich may obtain certain opportunities that the rest of humankind does not have access to. This, of course, would be nothing new, but I would point out that because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to become almost free. Look at the extraordinary amount of high-quality information available at no cost on the Web today, which did not exist at all just a few years ago. And if one wants to point out that only a small fraction of the world today has Web access, keep in mind that the explosion of the Web is still in its infancy.

At the end of his "Half of a Manifesto," Jaron writes that "the ideology of cybernetic totalist intellectuals [may] be amplified from novelty into a force that could cause suffering for millions of people." I don't believe this fearful conclusion follows from Jaron's half of an argument. The bottom line is that technology is power and this power is rapidly increasing. Technology may result in suffering or liberation, and we've certainly seen both in the twentieth century. I would argue that we've seen more of the latter, but nevertheless neither Jaron nor I wish to see the amplification of destructiveness that we have witnessed in the past one hundred years. As I mentioned above, the story of the twenty-first century has not yet been written. I think Jaron would agree with me that our destiny is in our hands. However, I regard "our hands" to include our technology, which is properly part of the human-machine civilization.


