
The Deep Question
A Talk With Rodney Brooks

Robert Provine, Douglas Rushkoff, Tom de Zengotita, Margaret Wertheim, Tom de Zengotita (2), and Marc D. Hauser on The Deep Question by Rodney Brooks

From: Robert Provine
Date: 12-12-97

Rodney Brooks focuses on an important element missing from most contemporary analyses of cognitive, psychological, and computer systems - the central role of lower-level motor processes. Too often we forget that consciousness, sensing, and learning evolved in the service of guiding movement. Without movement these capacities would never have emerged. Yet how many cognitive and computer scientists ponder the ramifications of this fact? Pure motor systems can be adaptive (imagine an "eating machine" gobbling algae on a pond bottom), yet a cognitive endowment would be useless to a non-moving entity. Of course, the hypothetical eating machine would do even better if it could use sensors to encounter algae more efficiently, or develop strategies based on past experience to further increase its feeding efficiency.

Additional support for a motor-driven evolutionary process comes from comparative and developmental analyses. Motor regions of the central nervous system often develop before they receive input from sensory regions. And the sensory and neuronal components of some marine filter feeders degenerate after they pass from free-swimming larval stages to immobile adulthood.

The process of natural selection works efficiently to sculpt the neurologically driven illusion that we call "physical reality." Natural selection is the engine through which we are linked to the wider world noted by Rodney Brooks. Our body and the brain that propels it are both adequate, if not ideal, matches to the environments that created them. The fragile, hypothetical nature of our "reality" becomes painfully obvious in brain damage, when cognitive capacity not only degrades but often fragments.

Rodney Brooks reminds us not to leave our bodies and computers lost in thought. Breakthroughs in both the computer and neurocognitive domains are likely to come when we move beyond the philosophically driven thinking that shapes so much work in these areas and respond to the phylogenetic evidence supporting action-based systems.

Best wishes,

Robert Provine

From: Douglas Rushkoff
To: John Brockman
Date: 12-12-97

Great interview with Rodney Brooks - particularly your steadfastness in attempting to extract a workable new metaphor for living systems from him, and his equally steadfast determination to refuse you one, at least for the time being.

The very interaction between the two of you seemed to crystallize the two-headed dynamic he's trying to tackle. The bottom-up development of relational toolsets - in living things and robots alike - requires a sort of anti-discipline. One must refuse to surrender to the notion that there's a need for a static, predetermined command line at all. This is scary stuff, and we resist it - on both theoretical and practical levels - because we're deeply afraid of what we would do if we were literally "left to our devices."

On an interpersonal level, it calls to mind theories of transactional and transpersonal therapy, where the patient is never isolated but considered part of the living relationship between himself, his therapist, and his environment. On a cultural level, though, it's even more far-reaching.

I've been combating the idea that human beings, in society, need a singular god or driving ethical template in order to peacefully co-exist. I'd like to believe that "what feels good, is good," so to speak, and that our uninhibited organic responses to stimuli are not a "lower" or dangerous set of behaviors, but a trait that develops only after passing through an externally, artificially, or hierarchically directed coordination.

I'd like to ask Brooks if he's considered the moral and social implications of the bottom-up models he's working with, and whether he believes the rejection of top-down, overarching command sets in models for robotics and biology is somehow analogous to an evolutionary step in which civilization learns to interact cooperatively by employing fewer codes rather than more.

-Douglas Rushkoff

From: Tom de Zengotita
Date: 12-12-97

I come at this from a phenomenological point of view, so I have no idea what the practical considerations are. But I can make a crucial point about consciousness in a simple way, one that moves us far from neural models and computer analogies...

If a mobile robot could be made so it had to replenish itself at intervals, and somehow had to perform the procedure privately, and had to arrange for that privacy in varying circumstances - that would be interesting...

See Sartre on "the look."

From: Margaret Wertheim
To: John Brockman
Date: 12-12-97

Dear John,

I just read the newest Edge piece on Rodney Brooks - which I found extremely interesting. I like very much his approach to robotics and his insistence that intelligence is necessarily an embodied phenomenon. [BTW: the book by Brian Rotman on mathematics that I mentioned last time also insists that numbers have no existence outside of embodied beings. You may be interested to see that the current issue of The Sciences has an article by Rotman about his work on this.] The whole issue of embodiment and cognition is one that I think is central right now. Coincidentally, I have just written a couple of articles on the corollary question of whether a computer intelligence could ever develop a "soul" (one piece is for the Christmas issue of New Scientist), and in both pieces I discuss the Cog project. Rodney Brooks has actually appointed a theological advisor to that project, who has considered just this question - which I found fascinating.

Best wishes,

Margaret Wertheim

From: Tom de Zengotita
Date: 1-5-98

It's been a long time since I browsed the cognitive science/neural modeling literature. It is really exciting to see such an emphasis on bodies and needs. I never had a platform bias against silicon-based consciousness, but I always had a strong intuition that the primary problem will be getting such materials to be "alive." Mobility and vision and limbs are nice, but Cog-like beings need other similar beings to relate to - and especially to exchange with. Consciousness is a reciprocal entity...

See Marcel Mauss' The Gift and its progeny...

Tom de Zengotita

From: Marc D. Hauser
Date: 1-5-98

In the commentary by LeDoux, he states that neuroscience has yet to have its all-encompassing theory, a la Newton or Darwin. Did he say Darwin? Well, some would hold that Darwinian theory is the theory that neuroscience needs in order to work out many of the interesting details. As Fodor articulated in The Elm and the Expert, however, Darwinian theory isn't necessarily the right kind of theory to explain how one goes from mental states as intentional operators to mental states as computational operators, but some, like Pinker, Cosmides and Tooby, think that there is a good chance that the Darwinian design stance will help. They would argue more strongly than I, of course. So, what kind of all-encompassing theory is LeDoux looking for? At what level?

Much discussion has focused on Rod Brooks' interview and on the possibility of robotic souls and a moral perspective. We might start by considering the kinds of pressures that could have led to a moral perspective, and for this, the animal kingdom poses some interesting problems. My own interest is in trying to work out how we go from rule-guided behavior to rule-based societies that place values on the rules. In this sense, the agenda seems to me twofold. What are the core emotions underlying moral societies? And what mechanisms are necessary for one to implement such emotions, control others, and solve the relevant problems that society throws up?

Marc D. Hauser

