TECHNOLOGY

Polythetics and the Boeing 737 MAX

[7.16.19]

A 737-badged Boeing aircraft was first certified for flight by the US Federal Aviation Administration in 1967. The aircraft was 28.6 m long and carried up to 103 passengers; in 2019, the distant descendant of that aircraft model, the 737 MAX 10, was 43.8 m long and carried 230 passengers. In between, there have been all sorts of civilian and military variants, and the plane (‘the plane’) was immensely successful (so that in 2005 one quarter of all large commercial airliners worldwide carried the 737 badge). However, certain decisions, made at the very outset, constrained how aircraft of this kind could evolve. Now, I realize that by talking about descent (in a genealogical sense) and evolution (in the sense of gradual change over time), I am already potentially getting caught up in a biological metaphor—almost as if I thought 737s got together and had babies, each generation similar to but different from themselves. Manufacturing firms that make cars, or aircraft, or computers use the terms ‘generation’, ‘next generation’ and so on to describe salient step changes in parts of a design chain that has both continuities and discontinuities. But how do we measure these changes, and who decides (at Boeing or elsewhere) which changes are radically discontinuous? When does one artefact type become another?
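One way to make the measurement question concrete is a polythetic test: no single feature is necessary or sufficient for class membership; a variant counts as ‘a 737’ if it shares enough features with other members. The sketch below is purely illustrative (the feature sets, names, and threshold are invented, not real Boeing specifications) and uses Jaccard similarity as one possible overlap measure.

```python
# Hypothetical illustration only: feature sets are invented for the example,
# not real Boeing specifications.

def jaccard(a: set, b: set) -> float:
    """Similarity: shared features / total distinct features."""
    return len(a & b) / len(a | b)

variants = {
    "737-100":    {"low_wing", "twin_engine", "737_cross_section",
                   "short_fuselage", "jt8d_engines"},
    "737-800":    {"low_wing", "twin_engine", "737_cross_section",
                   "long_fuselage", "cfm56_engines", "winglets"},
    "737 MAX 10": {"low_wing", "twin_engine", "737_cross_section",
                   "long_fuselage", "leap_engines", "winglets", "mcas"},
}

# Monothetic question: which features do ALL members share (necessary conditions)?
common = set.intersection(*variants.values())

# Polythetic question: is each member similar ENOUGH to some other member?
THRESHOLD = 0.35
similar = {(a, b): jaccard(variants[a], variants[b])
           for a in variants for b in variants if a < b}
```

With these made-up numbers, the 737-100 and the MAX 10 fall below the threshold against each other (similarity about 0.33), yet each clears it against the 737-800 (about 0.38 and 0.63 respectively): the class holds together by chained family resemblance rather than by any single defining feature, which is exactly what makes the ‘same plane?’ question hard to answer.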

TIMOTHY TAYLOR is a professor of the prehistory of humanity at the University of Vienna, and author of The Artificial Ape.

[ED NOTE: Tim Taylor's piece is the third offering in our 2019 initiative, "The Edge Original Essay," in which we are commissioning recognized authors to write a new and original piece exclusively for publication by Edge. The first two pieces were "Childhood's End: The Digital Revolution Isn't Over But Has Turned Into Something Else" by George Dyson and "Biological and Cultural Evolution: Six Characters in Search of an Author" by Freeman Dyson. —JB]

Mining the Computational Universe

[5.30.19]

https://vimeo.com/299543702

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science.

Collective Awareness

https://vimeo.com/266922416

Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level that take advantage of all the data that computer technologies and the Internet provide us with. We need new technologies of economic prediction that take advantage of the tools we have in the 21st century.  

How Technology Changes Our Concept of the Self

[11.20.18]

https://vimeo.com/273738351

The general project that I’m working on is about the self and technology—what we understand by the self and how it’s changed over time. My sense is that the self is not a universal and purely abstract thing that you’re going to get at through a philosophy of principles. Here’s an example: Sigmund Freud considered his notion of psychic censorship (of painful or forbidden thoughts) to be one of his greatest contributions to his account of who we are. His thoughts about these ideas came early, using as a model the specific techniques that Czarist border guards used to censor the importation of potentially dangerous texts into Russia. Later, Freud began to think of the censoring system in Vienna during World War I—techniques applied to every letter, postcard, telegram, and newspaper—as a way of getting at what the mind does. Another example: Cyberneticians came to a different notion of self, accessible from the outside, identified with feedback systems—an account of the self that emerged from Norbert Wiener’s engineering work on weapons systems during World War II. Now, I see a new notion of the self emerging; we start by modeling artificial intelligence on a conception of who we are, and then begin seeing ourselves ever more in our encounter with AI.

PETER GALISON is the Joseph Pellegrino University Professor of the History of Science and of Physics at Harvard University and Director of the Collection of Historical Scientific Instruments.

The Space of Possible Minds

[5.18.18]

https://vimeo.com/268830612

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but can occur independently, or some subset of these can occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world. We very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind.

Collective Awareness

[10.3.18]

THE REALITY CLUB [New]

Don Ross responds to Doyne Farmer: Despite this healthy state of knowledge about material investment, production, and consumption, we will have more economic crises in the future. In particular, we’ll have a next crisis, on a global scale. It will likely come, again, from financial markets. I’m not persuaded by Farmer’s suggestion that we might get a better handle on this source of risk by running inductions on masses of information about corporate resource allocations. These will be affected, massively, by global financial dynamics, but will likely have little systematic influence on them, even if it is some event in the old-fashioned economy that turns out to furnish a trigger for financial drama. [...]

DON ROSS is professor and head of the School of Sociology, Philosophy, Criminology, Government, and Politics at University College Cork in Ireland; professor of economics at the University of Cape Town, South Africa; and program director for Methodology at the Center for Economic Analysis of Risk at the J. Mack Robinson College of Business, Georgia State University, Atlanta.

How To Be a Systems Thinker

[4.17.18]

Until fairly recently, artificial intelligence didn’t learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things that I wonder about is how we'll be able to teach a machine to know what it doesn’t know that it might need to know in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems, and then it takes even longer for us to realize what we don’t know that we would need to know to solve a particular problem. 

~

The tragedy of the cybernetic revolution, which had two sides, the computer science side and the systems theory side, has been the neglect of the systems theory side. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

MARY CATHERINE BATESON is a writer and cultural anthropologist. In 2004 she retired from her position as Clarence J. Robinson Professor in Anthropology and English at George Mason University, and is now Professor Emerita.
