Polythetics and the Boeing 737 MAX

Timothy Taylor [7.16.19]

A 737-badged Boeing aircraft was first certified for flight by the US Federal Aviation Administration in 1967. The aircraft was 28.6 m long and carried up to 103 passengers; in 2019, the distant descendant of that aircraft model, the 737 MAX-10, was 43.8 m long and carried 230 passengers. In between, there have been all sorts of civilian and military variants, and the plane (‘the plane’) was immensely successful (so that in 2005 one quarter of all large commercial airliners worldwide carried the 737 badge). However, certain decisions, made at the very outset, constrained how aircraft of this kind could evolve. Now, I realize that by talking about descent (in a genealogical sense) and evolution (in the sense of gradual change over time), I am already potentially getting caught up in a biological metaphor—almost as if I thought 737s got together and had babies, each generation similar to but different from themselves. Manufacturing firms that make cars, or aircraft, or computers use the terms ‘generation’, ‘next generation’ and so on to describe salient step changes in parts of a design chain that has both continuities and discontinuities. But how do we measure these changes, and who decides (at Boeing or elsewhere) which changes are radically discontinuous? When does one artefact type become another?

TIMOTHY TAYLOR is a professor of the prehistory of humanity at the University of Vienna, and author of The Artificial Ape. Timothy Taylor's Edge Bio Page

[ED NOTE: Tim Taylor's piece is the third offering in our 2019 initiative, "The Edge Original Essay," in which we are commissioning recognized authors to write a new and original piece exclusively for publication by Edge. The first two pieces were "Childhood's End: The Digital Revolution Isn't Over But Has Turned Into Something Else" by George Dyson and "Biological and Cultural Evolution: Six Characters in Search of an Author" by Freeman Dyson. —JB]

Humans: Doing More With Less

Tom Griffiths [7.16.19]

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans surrounding it—which it thinks of as cute little pets—are trying to achieve, so that it can act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to infer, for example, that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer. It doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.

As a parent of two small children, I face this problem all the time: trying to figure out what my kids want—kids who are operating in an entirely different mode of computation—and having to build a kind of internal model of how a toddler’s mind works, such that it’s possible to unravel that and work out the particular motivation behind the very strange pattern of actions they’re taking.

Both from the perspective of understanding human cognition and from the perspective of being able to build AI systems that can understand human cognition, it’s desirable for us to have a better model of how rational agents should act when those rational agents have limited cognitive resources. That’s something I’ve been working on for the last few years. We have an approach to thinking about this that we call resource rationality, and it’s closely related to ideas being proposed in the artificial intelligence literature, such as the notion of bounded optimality proposed by Stuart Russell.
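The flavor of the analysis can be put in a few lines of code. The sketch below is only an illustration, not Griffiths's or Russell's model: the strategies, success probabilities, computation costs, and reward are invented numbers. A resource-rational agent picks not the most accurate computation but the one whose expected payoff, net of the cost of thinking, is highest.

```python
# Illustrative resource-rational strategy selection (all numbers hypothetical).
# A bounded agent maximizes expected reward minus the cost of computation,
# so the "best" strategy need not be the most accurate one.

STRATEGIES = {
    # name: (probability of solving the task, cost of the computation)
    "exhaustive_search": (0.99, 8.0),  # near-perfect, but expensive to run
    "heuristic":         (0.80, 1.0),  # decent and cheap
    "guess":             (0.50, 0.1),  # almost free, usually wrong
}

REWARD = 10.0  # payoff for solving the task


def net_value(p_correct: float, cost: float) -> float:
    """Expected reward minus the cost of thinking."""
    return p_correct * REWARD - cost


for name, (p, c) in STRATEGIES.items():
    print(f"{name:17s} net value = {net_value(p, c):5.2f}")

best = max(STRATEGIES, key=lambda s: net_value(*STRATEGIES[s]))
print("resource-rational choice:", best)  # "heuristic", with these numbers
```

With an unbounded agent (zero computation cost), exhaustive search would win; the cost term is what makes the cheap heuristic the rational choice.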

TOM GRIFFITHS is the Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page

[ED. NOTE: As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.]

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan, Andy Clark.


HUMANS: DOING MORE WITH LESS

TOM GRIFFITHS: I’m going to talk about two problems that seem contradictory, but I’m going to argue that they are intimately related to one another. The first problem is that people are still smarter than machines. This is not necessarily a problem for people; it’s more of a problem for machines. Despite the recent advances in AI, you can point to lots of individual things that people can still do better than computers can. But, more generally, you only have one system that is capable of doing all of those different kinds of things, and that system is the human being.

The current trend in machine learning is one of solving problems by increasing the amount of data and the amount of computation that get thrown at them. If I were showing slides here, I would show you a nice picture that some of the people at OpenAI made, where they took a bunch of the recent milestones in AI, starting from ImageNet classification through things like AlphaGo and AlphaZero, and they plotted out as a function of time how much compute went into each of those things. You’d see there’s a nice increasing line. I would argue that focusing on that trajectory isn’t necessarily going to take us in the direction of getting systems that can do the kinds of things people can do, particularly the generality that characterizes human intelligence.

A Separate Kind of Intelligence

Alison Gopnik [7.10.19]

It looks as if there’s a general relationship between the very fact of childhood and the fact of intelligence. That might be informative if one of the things that we’re trying to do is create artificial intelligences or understand artificial intelligences. In neuroscience, you see this pattern of development where you start out with a very plastic system with lots of local connections, and then you have a tipping point where that turns into a system that has fewer connections, but much stronger, longer-distance ones. It isn’t just a continuous process of development. So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient but not very plastic and flexible.

It’s interesting that that isn’t an architecture that’s typically been used in AI. But it’s an architecture that biology seems to use over and over again to implement intelligent systems. One of the questions you could ask is, how come? Why would you see this relationship? Why would you see this characteristic neural architecture, especially for highly intelligent species?
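A toy version of that developmental trajectory is easy to write down. The sketch below is my illustration, not a model from the talk: the network size, locality band, pruning threshold, and strengthening factor are all arbitrary. It starts with many weak, mostly local connections and then, at a tipping point, prunes the weak links and strengthens the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # number of units (arbitrary)

# Early phase: a plastic system with many weak, mostly local connections.
weights = rng.uniform(0.0, 0.2, size=(n, n))                  # many weak links
is_local = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 3
weights *= is_local                                           # local wiring only

print("before the tipping point:", np.count_nonzero(weights), "connections")

# Tipping point: prune weak connections and strengthen the survivors,
# trading plasticity for efficiency. (The biological story also adds
# long-distance connections, which this sketch leaves out.)
threshold = 0.15
weights = np.where(weights > threshold, weights * 3.0, 0.0)   # fewer, stronger

print("after the tipping point: ", np.count_nonzero(weights), "connections")
```

The point of the caricature is only the shape of the trajectory: the connection count falls while the surviving weights grow, the plastic-to-efficient pattern that, as noted above, hasn't typically been used in AI.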

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page

Collaboration and the Evolution of Disciplines

Robert Axelrod [7.1.19]

The questions that I’ve been interested in more recently are about collaboration and what can make it succeed, and also about the evolution of disciplines themselves. The part of collaboration that is well understood is that if a team has a diversity of tools and backgrounds available to it—the members come from different cultures, they come from different knowledge sets—then that allows the team to search a space and come up with solutions more effectively. Diversity is very good for teamwork, but the problem is that there are clearly barriers to people from diverse backgrounds working together. That part of it is not well understood. The way people usually talk about it is that they have to learn each other’s language and each other’s terminology. So, if you talk to somebody from a different field, they’re likely to use a different word for the same concept.

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Evolution of Cooperation. Robert Axelrod's Edge Bio Page

The Geometry of Thought

Barbara Tversky [6.25.19]

Slowly, the significance of spatial thinking is being recognized, of reasoning with the body acting in space, of reasoning with the world as given, but even more with the things that we create in the world. Babies and other animals have amazing feats of thought, without explicit language. So do we chatterers. Still, spatial thinking is often marginalized, a special interest, like music or smell, not a central one. Yet change seems to be in the zeitgeist, not just in cognitive science, but in philosophy and neuroscience and biology and computer science and mathematics and history and more, boosted by the 2014 Nobel Prize awarded to John O’Keefe and Edvard and May-Britt Moser for the remarkable discoveries of place cells, single cells in the hippocampus that code places in the world, and grid cells next door, one synapse away in the entorhinal cortex, that map the place cells topographically on a neural grid. If it’s in the brain, it must be real. Even more remarkably, it turns out that place cells code events and ideas and that temporal and social and conceptual relations are mapped onto grid cells. Voilà: spatial thinking is the foundation of thought. Not the entire edifice, but the foundation.

The mind simplifies and abstracts. We move from place to place along paths just as our thoughts move from idea to idea along relations. We talk about actions on thoughts the way we talk about actions on objects: we place them on the table, turn them upside down, tear them apart, and pull them together. Our gestures convey those actions on thought directly. We build structures to organize ideas in our minds and things in the world, the categories and hierarchies and one-to-one correspondences and symmetries and recursions.

BARBARA TVERSKY is Professor Emerita of Psychology, Stanford University, and Professor of Psychology and Education, Columbia Teachers College. She is the author of Mind in Motion: How Action Shapes Thought. Barbara Tversky's Edge Bio Page

Questioning the Cranial Paradigm

Caroline A. Jones [6.19.19]

Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic with. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?

CAROLINE A. JONES is a professor of art history in the Department of Architecture at MIT and author, most recently, of The Global Work of Art. Caroline Jones's Edge Bio Page

The Brain Is Full of Maps

Freeman Dyson [6.11.19]

I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience

Andy Clark [6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 
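Read as an algorithm, that description has a simple schematic form. The sketch below is mine, not Clark's formalism: a Gaussian toy case with invented numbers, in which a percept is a prior expectation corrected by a precision-weighted prediction error, so a confident enough prior does the heavy lifting.

```python
# Schematic predictive-processing step (toy Gaussian case, numbers invented):
# perception as a prior expectation corrected by a precision-weighted
# sensory prediction error.

prior_mean = 10.0        # what the system expects to see
prior_precision = 4.0    # confidence in that expectation (1 / variance)

sensory_input = 7.0      # what the senses actually report
sensory_precision = 1.0  # confidence in the signal (noisy senses = low)

# How far the percept moves toward the data depends on the relative
# precisions. With a strong prior, the expectation dominates: "controlled
# hallucination".
gain = sensory_precision / (sensory_precision + prior_precision)
percept = prior_mean + gain * (sensory_input - prior_mean)

print(f"prediction error: {sensory_input - prior_mean:+.1f}")  # -3.0
print(f"percept: {percept:.1f}")  # 9.4: pulled only slightly toward the data
```

Clark's flip falls out of the same equation: drive the gain on the sensory term to zero and nothing corrects the expectation, which is hallucination as uncontrolled perception.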

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page

Mining the Computational Universe

Stephen Wolfram [5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page
