Edge 297—August 13, 2009
[ED. NOTE: It's summer, you're kicking back, relaxing on the beach, kayaking off the coast, desperately trying to finish your book before September, and you check your iPhone and find this email with a link to a 27,200-word edition of Edge. "This is too long", you think. "Come on Edge, it's the Web: cut it down, make it pithy. Why do I want to read long, thoughtful pieces when I can make do with a couple of screens and then jump to the next link? And, by the way, where are the links in these pieces? Who needs original work when I can be a part of the link economy? Edge, you must be joking. Nobody reads this way anymore."
Or do they? — JB]
"I should have accepted your invitation. I have been listening to the Master Class on the Web — fascinating. I am learning a lot and I wish I had been there. Thanks for the invite and thanks for putting up the videos. ... Invite me again..."
I watched sessions 1 to 6. This is breathtaking. The Edge Master Class must have been spectacular and frightening. Now DNA and computers are reading each other without human intervention, without a human able to understand it. This is a milestone, and it adds to the whole picture: we don't read, we will be read. What Edge has achieved in collecting these great thinkers is absolutely spectacular. Whenever I find an allusion to great writers or thinkers, I find that they are all at Edge.
What struck me was the incredible power that is developing in bioinformatics and genomics, which so resembles the evolution in computer software and hardware over the past 30 years.
George Church's discussion of the acceleration of the Moore's law doubling time for genetic sequencing rates, for example, was extraordinary, from 1.5 e-foldings to close to 10 e-foldings per year. When both George and Craig independently described their versions of the structure of the minimal genome appropriate for biological functioning and reproduction, I came away with the certainty that artificial lifeforms will be created within the next few years, and that they offer great hope for biologically induced solutions to physical problems, like the potential buildup of greenhouse gases.
At the same time, I came away feeling that the biological threats that come with this emerging knowledge and power are far greater than I had previously imagined, and this issue should be seriously addressed, to the extent it is possible. But ultimately I also came away with a more sober realization of the incredible complexity of the systems being manipulated, and how far we are from actually developing any sort of comprehensive understanding of the fundamental molecular basis of complex life. The simple animation of gene expression and replication at the molecular level demonstrated that the knowledge necessary to fully understand and reproduce biochemical activity in cells is daunting.
Two other comments: (1) I was intrigued by the fact that the human genome has not been fully sequenced, in spite of the hype, and (2) I was amazed at the available phase space for new discovery, especially in forms of microbial life on this planet, as demonstrated by Craig in his voyage around the world, skimming the surface, literally, of the ocean, and of course elsewhere in the universe, as alluded to by George.
Finally, I also began to think that structures on larger than molecular levels may be the key ones to understand for such things as memory, which make the possibilities for copying biological systems seem less like science fiction to me. George Church and I had an interesting discussion about this which piqued my interest, and I intend to follow this up.
We've known for a long time that human children are the best learning machines in the universe. But it has always been like the mystery of the hummingbirds. We know that they fly, but we don't know how they can possibly do it. We could say that babies learn, but we didn't know how.
ALISON GOPNIK, a psychologist at UC-Berkeley, is coauthor of The Scientist in the Crib: Minds, Brains, and How Children Learn, and author of The Philosophical Baby
[ALISON GOPNIK:] The biggest question for me is "How is it possible for children, young human beings, to learn as much as they do as quickly and as effectively as they do?" We've known for a long time that human children are the best learning machines in the universe. But it has always been like the mystery of the hummingbirds. We know that they fly, but we don't know how they can possibly do it. We could say that babies learn, but we didn't know how.
But now there's this really exciting confluence of work in artificial intelligence and machine learning, neuroscience and in developmental psychology, all trying to tackle this question about how children could possibly learn as much as they do.
What's happened is that there are more and more really interesting models coming out of AI and machine learning. Computer scientists and philosophers are starting to understand how scientists or machines or brains could actually do something that looks like powerful inductive learning. The project we've been working on for the past ten years or so is to ask whether children and even young babies implicitly use some of those same really powerful inductive learning techniques.
It's been very exciting because, on the one hand, it helps to explain the thing that's been puzzling developmental psychologists since Piaget. Every week we discover some new amazing thing about what babies and young children know that we didn't realize before. And then we discover some other amazing thing about what they don't yet know. So we've charted a series of changes in children's knowledge -- we know a great deal about what children know when. But the great mystery is how could they possibly learn it? What computations are they performing? And we're starting to answer that.
It's also been illuminating because the developmentalists can help the AI people do a sort of reverse engineering. When you realize that human babies and children are these phenomenal learners, you can ask, okay, what would happen if we actually used what we know about children to help program a computer?
The research starts out from an empirical question and a practical question: How do children learn so much? How could we design computers that learn? But then it turns out that there's a big grand philosophical question behind it. How do any of us learn as much as we do about the world? All we've got are these little vibrations of air in our eardrums and photons hitting the back of our retina. And yet human beings know about objects and people, not to mention quarks and electrons. How do we ever get there? How could our brains, evolved in the Pleistocene, get us from the photons hitting our retinas to quarks and electrons? That's the big, grand philosophical question of knowledge.
Understanding how we learn as children actually ends up providing at least the beginning of an answer to this much bigger philosophical question. The philosophical convergence, which also has a nice moral quality, is that these very, very high-prestige learning systems like scientists and fancy computers at Microsoft turn out to be doing similar things to the very, very low-prestige babies and toddlers and preschoolers. Small children aren't the sort of people that philosophers and psychologists and scientists have been paying much attention to over the last 2,000 years. But just looking at these babies and little kids running around turns out to be really informative about deep philosophical questions.
For example, it turns out that babies and very young children already are doing statistical analyses of data, which is not something that we knew about until the last ten years. This is a really very, very new set of findings. Jenny Saffran, Elissa Newport and Dick Aslin at Rochester started it off when they discovered that infants could detect statistical patterns in nonsense syllables. Now every month there's a new study that shows that babies and young children compute conditional probabilities, that they do Bayesian reasoning, that they can take a random sample and understand the relationship between that sample and the population that it's drawn from. And children don't just detect statistical patterns, they use them to infer the causal structure of the world. They do it in much the same way that sophisticated computers do. Or for that matter, they do it in the same way that every scientist does who looks at a pattern of statistics and doesn't just say oh that's the data pattern, but can then say oh and that data pattern tells us that the world must be this particular way.
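The Saffran-style finding lends itself to a small worked example. The sketch below is purely illustrative (the syllable stream and the made-up "words" are invented, not the actual stimuli): within a nonsense word, each syllable predicts the next with high transitional probability, while transitions across word boundaries are weaker, and that difference is the kind of statistical pattern infants can pick up.

```python
from collections import Counter

# A toy stream built from three invented "words": bi-da-ku, pa-do-ti, go-la-bu.
stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
          "go la bu bi da ku pa do ti").split()

# Count adjacent syllable pairs and how often each syllable starts a pair.
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pairs[(a, b)] / firsts[a]

# Within a word, the transition is certain:
print(transitional_probability("bi", "da"))  # 1.0
# Across a word boundary, it is much weaker:
print(transitional_probability("ku", "pa"))  # 0.666... (ku can be followed by several words)
```

Infants in these studies treat the high-probability transitions as word-internal and the low-probability ones as boundaries, which is exactly the distinction this little computation draws.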
How could we actually ask babies and young children to tell us whether they understand statistics? We know that when we even ask adults to actually explicitly solve a probability problem, they collapse. How could we ask little kids to do it?
The way that we started out was that we built a machine we called the blicket detector. The blicket detector is a little machine that lights up and plays music when you put certain things on it but not others. We can actually control the information that the child gets about the statistics of this machine. We put all sorts of different things on it. Sometimes the box lights up, sometimes it doesn't, sometimes it plays music, sometimes it doesn't. And then we can ask the child things like what would happen if I took the yellow block off? Or which block will make it go best? And we can design it so that, for example, one block makes it go two out of eight times, and one block makes it go two out of three times.
Four-year-olds, who can't add yet, say that the block that makes it go two out of three times is a more powerful block than the one that makes it go two out of eight times. That's an example of the kind of implicit statistics that even two and three and four-year-olds are using when they're trying to just figure out something about how this particular machine goes. And we've used similar experiments to show that children can use Bayesian reasoning, infer complex causal structure, and even infer hidden, invisible causal variables.
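The two-out-of-eight versus two-out-of-three comparison can be made concrete with a few lines of arithmetic. This is just an illustrative tally (the block names are hypothetical), not the lab's analysis code; it computes the simple maximum-likelihood estimate of each block's causal strength, which is enough to rank the blocks the way the four-year-olds do.

```python
# Hypothetical tallies matching the design described above: block A makes
# the machine go 2 out of 8 times, block B makes it go 2 out of 3 times.
trials = {"A": (2, 8), "B": (2, 3)}

def causal_strength(successes, attempts):
    """Maximum-likelihood estimate of P(machine activates | block is on it)."""
    return successes / attempts

estimates = {block: causal_strength(*counts) for block, counts in trials.items()}
best = max(estimates, key=estimates.get)

print(estimates)  # {'A': 0.25, 'B': 0.6666666666666666}
print(best)       # B -- the block the children pick as making it "go best"
```

The children never see these fractions written down, of course; the point is that their choices track the ratios rather than the raw counts (both blocks succeeded exactly twice).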
With even younger babies, Fei Xu showed that nine-month-olds were already paying attention to the statistics of their environment. She would show the baby a box of mostly red ping-pong balls, 80 percent red, 20 percent white. And then a screen would come up in front of the ping-pong balls and someone would take out a number of ping-pong balls from that box. They would pick out five red ping-pong balls or else pick out five white ping-pong balls. Well, of course, neither of those events is impossible. But picking out five white ping-pong balls from an 80 percent red box is much less likely. And even nine-month-olds will look longer when they see the white ping-pong balls coming from the mostly red box than when they see the red ping-pong balls coming from the mostly red box.
Fei did a beautiful control condition. Exactly the same thing happens, except now instead of taking the balls from the box, the experimenter takes the balls from her pocket. When the experimenter takes the balls from her pocket, the baby doesn't know what the population is that the experimenter is sampling. And in that case, the babies don't show any preference for the all red versus all white sample. The babies really seem to have an idea that some random samples from a population are more probable, and some random samples from a population are less probable.
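The intuition behind the looking-time result is simple arithmetic on the sample probabilities. Under the simplifying assumption that the five draws are independent (sampling with replacement), the numbers from the study above work out as follows:

```python
# The box is 80% red, 20% white; the experimenter draws five balls.
p_red, p_white = 0.8, 0.2

p_five_red = p_red ** 5      # ~0.328: an unremarkable sample
p_five_white = p_white ** 5  # ~0.00032: a very surprising sample

print(p_five_white / p_five_red)
# roughly 0.001: the all-white sample is about a thousand times
# less likely, which is why the babies look longer at it
```

Nobody is claiming nine-month-olds compute these powers explicitly; the claim is that their looking behavior is sensitive to exactly this asymmetry in likelihood.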
The important thing is not just that they know this, which is amazing, but that once they know this, then they can use that as a foundation for making all sorts of other inferences. Fei and Henry Wellman and one of my ex-students, Tamar Kushnir, have been doing studies where you show babies the unrepresentative sample ... someone picks out five white balls from a mostly red box. And now there are red balls and white balls on the table and the experimenter puts her hand out and says, "Give me some."
Well, if the sample wasn't representative, then you think, well okay why would she have done that? She must like the white balls. And, in fact, when the sample's not representative, the babies give her the white balls. In other words, not only do the babies recognize this is a random sample or not, but when it isn't random, they say oh this isn't just a random sample, there must be something else going on. And by the time they're 18 months old, they seem to think oh the thing that's going on is that she would rather have white balls than red balls.
Not only does this show that babies are amazing, but it actually gives the babies a mechanism for learning all sorts of new things about the world. We can't ask these kids explicitly about probability and statistics, because they don't yet understand that two plus two equals four. But we can look at what they actually do and use that as a way of figuring out what's going on in their minds. These abilities provide a framework by which the babies can learn all sorts of new things that they're not innately programmed to know. And that helps to explain how all humans can learn so much, since we're all only babies who have been around for a while.
Another thing that it turns out that kids are doing is that they're experimenting. You see this just in their everyday play. They are going out into the world and picking up a toy and pressing the buttons and pulling the strings on it. It looks like random play, but when you look more carefully, it turns out that that apparently random play is actually a set of quite carefully done experiments that let children figure out how it is that that toy works. Laura Schulz at MIT has done a beautiful set of studies on this.
The most important thing for children to figure out is us, other human beings. We can show that when we interact with babies they recognize the contingencies between what we do and they do. Those are the statistics of human love. I smile at you and you smile at me. And children also experiment with people, trying to figure out what the other person is going to do and feel and think. If you think of them as little psychologists, we're the lab rats.
The problem of learning is actually in Turing's original paper that is the foundation of cognitive science. The classic Turing problem is "Could you get a computer to be so sophisticated that you couldn't tell the difference between that computer and a person?" But Turing said that there was an even more profound problem, a more profound Turing test. Could you get a computer, give it the kind of data that every human being gets as a child, and have it learn the kinds of things that a child can learn?
The way that Chomsky solved that problem was to say: oh well, we don't actually learn very much. What happens is that it's all there innately. That's a philosophical answer that has a long tradition going back to Plato and Descartes and so forth. That set the tone for the first period of the cognitive revolution. And that was reinforced when developmentalists like Andrew Meltzoff, Liz Spelke and Renee Baillargeon began finding that babies knew much more than we thought.
Part of the reason why innateness seemed convincing is because the traditional views of learning have been very narrow, like Skinnerian reinforcement or association. Some cognitive scientists, particularly connectionists and neural network theorists, tried to argue that these mechanisms could explain how children learn but it wasn't convincing. Children's knowledge seemed too abstract and coherent, too far removed from the data, to be learned by association. And, of course, Piaget rejected both these alternatives and talked about "constructivism" but that wasn't much more than a name.
Then about 20 years ago, a number of developmentalists working in the Piagetian tradition, including me and Meltzoff and Susan Carey, Henry Wellman and Susan Gelman, started developing the idea that I call the "theory theory". That's the idea that what babies and children are doing is very much like scientific induction and theory change.
The problem with that was that when we went to talk to the philosophers of science and we said, "Okay, how is it that scientists can solve these problems of induction and learn as much as they do about the world?" they said "We have no idea, go ask psychologists". Seeing that what the kids were doing was like what scientists were doing was sort of helpful, but it wasn't a real cognitive science answer.
About 15 years ago, quite independently, a bunch of philosophers of science at Carnegie Mellon, Clark Glymour and his colleagues, and a bunch of computer scientists at UCLA, Judea Pearl and his colleagues, converged on some similar ideas. They independently developed these Bayesian causal graphical models. The models provide a graphical representation of how the world works and then systematically map that representation onto patterns of probability. That was a great formal computational advance.
Once you've got that kind of formal computational system, then you can start designing computers that actually use that system to learn about the causal structure of the world. But you can also start asking, well, do people do the same thing? Clark Glymour and I talked about this for a long time. He would say oh we're actually starting to understand something about how you can solve inductive problems. I'd say gee that sounds a lot like what babies are doing. And he'd say no, no, come on, they're just babies, they couldn't be doing that.
What we started doing empirically about ten years ago is to actually test the idea that children might be using these computational procedures. My lab was the first to do it, but now there is a whole set of great young cognitive scientists working on these ideas. Josh Tenenbaum at MIT and Tom Griffiths at Berkeley have worked on the computational side. On the developmental side Fei Xu who is now at Berkeley, Laura Schulz at MIT, and David Sobel at Brown, among others, have been working on empirical experiments with children. We've had this convergence of philosophers and computer scientists on the one hand, and empirical developmental psychologists on the other hand, and they've been putting these ideas together. It's interesting that the two centers of this work, along with Rochester, have been MIT, the traditional locus of "East Coast" nativism, and Berkeley, the traditional locus of "West Coast" empiricism. The new ideas really cross the traditional divide between those two approaches.
A lot of the ideas are part of what's really a kind of Bayesian revolution that's been happening across cognitive science, in vision science, in neuroscience and cognitive psychology and now in developmental psychology. Ideas about Bayesian inference that originally came from philosophy of science have started to become more and more powerful and influential in cognitive science in general.
Whenever you get a new set of tools unexpected insights pop up. And, surprisingly enough, thinking in this formal computational nerdy way, actually gives us new insights into the value of imagination. This all started by thinking about babies and children as being like little scientists, right? We could actually show that children would develop theories and change them in the way that scientists do. Our picture was ... there's this universe, there's this world that's out there. How do we figure out how that world works?
What I've begun to realize is that there's actually more going on than that. One of the things that makes these causal representations so powerful and useful in AI is that not only do they let you make predictions about the world, but they let you construct counterfactuals. And counterfactuals don't just say what the world is like now. They say here's the way the world could be, other than the way it is now. One of the great insights that Glymour and Pearl had was that, formally, constructing these counterfactual claims was quite different from just making predictions. And causal graphical representations and Bayesian reasoning are a very good combination because you're not just talking about what's here and now, you're saying... here's a possibility, and let me go and test this possibility.
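The formal distinction Glymour and Pearl drew, between predicting from an observation and reasoning about an intervention, can be sketched in a few lines. The model below is a toy invented for illustration (none of its variables or numbers come from the text): because a common cause, the season, links the sprinkler and the rain, seeing the sprinkler on changes your prediction about rain, while forcing it on does not.

```python
import random

random.seed(0)

# Toy structural causal model with a confounder:
#   Season -> Rain, Season -> Sprinkler, Rain/Sprinkler -> WetGrass.
def sample(do_sprinkler=None):
    dry_season = random.random() < 0.5
    rain = random.random() < (0.1 if dry_season else 0.6)
    if do_sprinkler is None:
        # Observation: the sprinkler follows its own mechanism.
        sprinkler = random.random() < (0.7 if dry_season else 0.1)
    else:
        # Intervention: "do" surgery overrides the mechanism entirely.
        sprinkler = do_sprinkler
    return rain, sprinkler, rain or sprinkler

def estimate(event, n=100_000, **do):
    return sum(event(*sample(**do)) for _ in range(n)) / n

# Prediction from observation: P(rain | we happen to see the sprinkler on).
obs = estimate(lambda r, s, w: r and s) / estimate(lambda r, s, w: s)
# Prediction under intervention: P(rain | do(sprinkler on)).
intervened = estimate(lambda r, s, w: r, do_sprinkler=True)

# Seeing the sprinkler on is evidence of a dry season, so rain looks
# less likely (~0.16); forcing it on tells you nothing about rain (~0.35).
print(round(obs, 2), round(intervened, 2))
```

Exact inference on this toy model gives P(rain | sprinkler observed) = 0.1625 versus P(rain | do(sprinkler)) = 0.35, and the Monte Carlo estimates land close to those values. The counterfactual "what if the sprinkler had been on?" is answered by the surgery, not by the conditioning.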
If you think about that from the perspective of human evolution, our great capacity is not just that we learn about the world. The thing that really makes us distinctive is that we can imagine other ways that the world could be. That's really where our enormous evolutionary juice comes from. We understand the world, but that also lets us imagine other ways the world could be, and actually make those other worlds come true. That's what innovation, technology and science are all about.
Think about everything that's in this room right now: there's a right-angle desk and electric light and computers and windowpanes. Every single thing in this room is imaginary from the perspective of the hunter-gatherer. We live in imaginary worlds.
When you think that way, a lot of other things about babies and young children start to make more sense. We know, for instance, that young children have these incredible, vivid, wild imaginations. They live 24/7 in these crazy pretend worlds. They have a zillion different imaginary friends. They turn themselves into Ninjas and mermaids. Nobody's really thought about that as having very much to do with real hard-nosed cognitive psychology. But once you start realizing that the reason why we want to build theories about the world is so that we can imagine other ways the world can be, you could say that not only are these young children the best learners in the world, but they're also the most creative imaginers in the world. That's what they're doing in their pretend play.
About 10 years ago psychologists like Paul Harris and Marjorie Taylor started to show that children aren't confused about fantasy and imagination and reality, which is what psychologists from Freud to Piaget had thought before. They know the difference between imagination and reality really well. It's just they'd rather live in imaginary worlds than in real ones. Who could blame them? In that respect, again, they're a lot like scientists and technologists and innovators.
One of the other really unexpected outcomes of thinking about babies and children in this new way is that you start thinking about consciousness differently. Now of course there's always been this big question ... the capital C question of consciousness. How can a brain have experiences? I'm skeptical about whether we're ever going to get a single answer to the big capital C question. But there are lots of very specific things to say about how particular kinds of consciousness are connected to particular kinds of functional or neural processes.
Edge asked a while ago in the World Question Center, what is something you believe but can't prove? And I thought well, I believe that babies are actually not just conscious but more conscious than we are. But of course that's not something that I could ever prove. Now, having thought about it and researched it for a while, I feel that I can, not quite prove it, but at least I can make a pretty good empirical case for the idea that babies are in some ways more conscious, and certainly differently conscious than we are.
For a long time, developmental psychologists like me had said, well, babies can do all these fantastic amazing things, but they're all unconscious and implicit. A part of me was always skeptical about that, though, just intuitively, having spent so much time with babies. You sit opposite a seven-month-old, and you watch their eyes and you look at their face and you see that wide-eyed expression and you say, goddamn it, of course, she's conscious, she's paying attention.
We know a lot about the neuroscience of attention. When we pay attention to something as adults, we're more open to information about that thing, but the other parts of our brain get inhibited. The metaphor psychologists always use is that it's like a spotlight. It's as if what happens when you pay attention is that you shine a light on one particular part of the world, make that little part of your brain available for information processing, change what you think, and then leave all the rest of it alone.
When you look at both the physiology and the neurology of attention in babies, what you see is that instead of having this narrow, focused, top-down kind of attention, babies are open to all the things that are going on around them in the world. Their attention isn't driven by a decision about what's important. It's driven by how information-rich the world is around them. When you look at their brains, instead of just, as it were, squirting a little bit of neurotransmitter on the part of their brain that they want to learn with, their whole brain is soaked in those neurotransmitters.
The thing that babies are really bad at is inhibition, so we say that babies are bad at paying attention. What we really mean is that they're bad at not paying attention. What we're great at as adults is not paying attention to all the distractions around us, and just paying attention to one thing at a time. Babies are really bad at that. But the result is that their consciousness is like a lantern instead of being like a spotlight.
They're open to all of the experience that's going on around them.
There are certain kinds of states that we're in as adults, like when we go to a new city for the first time, where we recapture that baby information processing. When we do that, we feel as if our consciousness has expanded. We have more vivid memories of the three days in Beijing than we do of all the rest of the months that we spend as walking, talking, teaching, meeting-attending zombies. So we can actually say something about what babies' consciousness is like, and that might tell us some important things about what consciousness itself is like.
I come from a big, close family. Six children. It was a somewhat lunatic artistic intellectual family of the 50s and 60s, back in the golden days of post-war Jewish life. I had this wonderful rich, intellectual and artistic childhood. But I was also the oldest sister of six children, which meant that I was spending a lot of time with babies and young children.
I had the first of my own three babies when I was 23. There's really only been about five minutes in my entire life when I haven't had babies and children around. I always thought from the very beginning that they were the most interesting people there could possibly be. I can remember being in a state of mild indignation, which I've managed to keep up for the rest of my life, about the fact that other people treated babies and children contemptuously or dismissively or neglectfully.
At the same time, from the time I was very young, I knew that I wanted to be a philosopher. I wanted to actually answer, or at least ask, big, deep questions about the world, and I wanted to spend my life talking and arguing. And, in fact, that's what I did as an undergraduate. I was an absolutely straight down the line honors philosophy student as an undergraduate at McGill, president of the Philosophy Students Association etc. I went to Oxford partly because I wanted to do both philosophy and psychology.
But what kept happening to me was that I asked these philosophical questions, and I'd say, well, you know, you could find out. You want to know where language comes from? You could go and look at children, and you could find out how children learn language. Or you want to find out how we understand about the world? You could look at children and find out how they, that is we, come to understand about the world. You want to understand how we come to be moral human beings? You could look at what happens to moral intuition in children. And every time I did that, back in those bad old days, the philosophers around me would look as if I had just eaten my peas with a knife. One of the Oxford philosophers said to me after one of these conversations, "Well, you know, one's seen children about, of course. But one would never actually talk to them." And that wasn't atypical of the attitude of philosophy towards children and childhood back then.
I still think of myself as being fundamentally a philosopher, I'm an affiliate of the philosophy department at Berkeley. I give talks at the American Philosophical Association and publish philosophical papers. It's coincidental that the technique I use to answer those philosophical questions is to look at children and think about children. And I'm not alone in this. Of course, there are still philosophers out there who believe that philosophy doesn't need to look beyond the armchair. But many of the most influential thinkers in philosophy of mind understand the importance of empirical studies of development.
In fact, largely because of Piaget, cognitive development has always been the most philosophical branch of psychology. That's true if you look not just at the work that I do, but the work that people like Andrew Meltzoff or Henry Wellman or Susan Carey or Elizabeth Spelke do or certainly what Piaget did himself. Piaget also thought of himself as a philosopher who was answering philosophical questions by looking at children.
Thinking about development also changes the way we think about evolution. The traditional picture of evolutionary psychology is that our brains evolved in the Pleistocene, and we have these special purpose modules or innate devices for organizing the world. They're all there in our genetic code, and then they just unfold maturationally. That sort of evolutionary psychology picture doesn't fit very well with what most developmental psychologists see when they actually study children.
When you actually study children, you certainly do see a lot of innate structure. But you also see this capacity for learning and transforming and changing what you think about the world and for imagining other ways that the world could be. In fact, one really crucial evolutionary fact about us, is that we have this very, very extended childhood. We have a much longer period of immaturity than any other species does. That's a fundamental evolutionary fact about us, and on the surface a puzzling one. Why make babies so helpless for so long? And why do we have to invest so much time and energy, literally, just to keep them alive?
Well, when you look across lots and lots of different species, birds and rodents and all sorts of critters, you see that a long period of immaturity is correlated with a high degree of flexibility, intelligence and learning. Look at crows and chickens, for example. Crows get on the cover of Science using tools, and chickens end up in the soup pot, right? And crows have a much longer period of immaturity, a much longer period of dependence than chickens.
If you have a strategy of having these very finely shaped innate modules just designed for a particular evolutionary niche, it makes sense to have those in place from the time you're born. But you might have a more powerful strategy. You might not be very well-designed for any particular niche, but instead be able to learn about all the different environments in which you can find yourself, including being able to imagine new environments and create them. That's the human strategy.
But that strategy has one big disadvantage, which is that while you're doing all that learning, you are going to be helpless. You're better off being able to consider, for example, should I attack this mastodon with this kind of tool or that kind of tool? But you don't want to be sitting and considering those possibilities when the mastodon is coming at you.
The way that evolution seems to have solved that problem is to have this kind of cognitive division of labor, so the babies and kids are really the R&D department of the human species. They're the ones that get to do the blue-sky learning, imagining thinking. And the adults are production and marketing. We can not only function effectively but we can continue to function in all these amazing new environments, totally unlike the environment in which we evolved. And we can do so just because we have this protected period when we're children and babies in which we can do all of the learning and imagining. There's really a kind of metamorphosis. It's like the difference between a caterpillar and a butterfly except it's more like the babies are the butterflies that get to flitter around and explore, and we're the caterpillars who are just humping along on our narrow adult path.
Thinking about development not only changes the way you think about learning, but it changes the way that you think about evolution. And again, it's this morally appealing reversal, which you're seeing in a lot of different areas of psychology now. Instead of just focusing on human beings as the competitive hunters and warriors, people are starting to recognize that our capacities for care-giving are also, and in many respects, even more, fundamental in shaping what our human nature is like.
We must stop perpetuating the fiction that existence itself is dictated by the immutable laws of economics. These so-called laws are, in actuality, the economic mechanisms of 13th Century monarchs. Some of us analyzing digital culture and its impact on business must reveal economics as the artificial construction it really is. Although it may be subjected to the scientific method and mathematical scrutiny, it is not a natural science; it is game theory, with a set of underlying assumptions that have little to do with anything resembling genetics, neurology, evolution, or natural systems.
ECONOMICS IS NOT NATURAL SCIENCE
An Edge Original Essay
DOUGLAS RUSHKOFF is a media analyst, documentary filmmaker, and author. His latest book is Life Inc.: How the World Became a Corporation and How to Take It Back.
...How to best transcend the current economic mess? Put Jeff Bezos, Pierre Omidyar, Elon Musk, Tim O'Reilly, Larry Page, Sergey Brin, Nathan Myhrvold, and Danny Hillis in a room somewhere and don't let them out until they have framed a new, massively-distributed financial system, founded on sound, open, peer-to-peer principles, from the start. And don’t call it a bank. Launch a new financial medium that is as open, scale-free, universally accessible, self-improving, and non-proprietary as the Internet, and leave the 13th century behind. ...
ECONOMICS IS NOT NATURAL SCIENCE
The marketplace in which most commerce takes place today is not a pre-existing condition of the universe. It's not nature. It's a game, with very particular rules, set in motion by real people with real purposes. That's why it's so amazing to me that scientists, and people calling themselves scientists, would propose to study the market as if it were some natural system — like the weather, or a coral reef.
It's not. It's a product not of nature but of engineering. And to treat the market as nature, as some product of purely evolutionary forces, is to deny ourselves access to its ongoing redesign. It's as if we woke up in a world where just one operating system was running on all our computers and, worse, we didn't realize that any other operating system ever did or could ever exist. We would simply accept Windows as a given circumstance, and look for ways to adjust our society to its needs rather than the other way around.
It is up to our most rigorous thinkers and writers not to base their work on widely accepted but largely artificial constructs. It is their job to differentiate between the map and the territory — to recognize when a series of false assumptions is corrupting their observations and conclusions. As the great interest in the arguments of Richard Dawkins, Daniel Dennett, Sam Harris, and Christopher Hitchens shows us, there is a growing acceptance and hunger for thinkers who dare to challenge the widespread belief in creation mythologies. That it has become easier to challenge the supremacy of God than to question the supremacy of the market testifies to the way any group can fall victim to a creation myth — especially when they are rewarded for doing so.
Too many technologists, scientists, writers and theorists accept the underlying premise of our corporate-driven marketplace as a precondition of the universe or, worse, as the ultimate beneficiary of their findings. If a "free" economy of the sort depicted by Chris Anderson or Clay Shirky is really on its way, then books themselves are soon to be little more than loss leaders for high-priced corporate lecturing. In such a scheme how could professional writers and theorists possibly escape biasing their works towards the needs of the corporate lecture market? It's as if the value of a theory or perspective rests solely in its applicability to the business sector.
In their ongoing effort to define and defend the functioning of the market through science and systems theory, some of today's brightest thinkers have, perhaps inadvertently, promoted a mythology about commerce, culture, and competition. And it is a mythology as false, dangerous, and ultimately deadly as any religion.
The trend began on the pages of the digital business magazine, Wired, which served to reframe new tech innovations and science discoveries in terms friendly to disoriented speculators. Wired would not fundamentally challenge the market; it would provide bankers and investors with a map to the new territory, including the consultants they'd need to maintain their authority over the economy.
The first and probably most influential among them was Peter Schwartz, who, in 1997, with Peter Leyden, forecast a "long boom" of at least 25 years of prosperity and environmental health fueled by digital technology and, most importantly, the maintenance of open markets. Kevin Kelly foresaw the way digital abundance would challenge scarce markets, and offered clear rules through which the largest companies could still thrive on the phenomenon.
Stewart Brand joined Schwartz and others in cofounding GBN, a futurist consulting firm whose very name, Global Business Network, seemed to cast the emergence of a web economy in a new light. What did it mean that everyone from Gregory Bateson to Brian Eno to Marvin Minsky would now be consulting to the biggest corporations on earth? Would they even be able to control their own messages? Brand did famously say in 1984 that "information wants to be free." But, much less publicized and remembered, he did so only after explaining that "information wants to be expensive, because it's so valuable." Would his and others' work now be parsed for the tidbits most effective at promoting a skewed vision of the new economy? Would the counterculture be able to use its newfound access to the board rooms of the Fortune 500 to hack the business landscape, or had they simply surrendered to the eventual absorption of everything and everyone to an eternal primacy of corporate capitalism? The "scenario plans" that resulted from this work, through which corporations could envision continued domination of their industries, appeared to indicate the latter.
Chris Anderson has analyzed where all this is going, and — rather than offering up a vision of a post-scarcity economy — advised companies to simply leverage the abundant to sell whatever they can keep scarce. Likewise, Tim O'Reilly and John Battelle's new, highly dimensional conception of the net — Web Squared — ultimately offers itself up as a template through which companies can make money by controlling the indexes people use to navigate information space.
Both science and technology are challenging long-held assumptions about top-down control, competition, and scarcity. But our leading thinkers are less likely to provide us with genuinely revolutionary axioms for a more highly evolved marketplace than reactionary responses to the networks, technologies, and discoveries that threaten to expose the marketplace for the arbitrarily designed poker game it is. They are not new rules for a new economy, but new rules for propping up old economic interests in the face of massive decentralization.
The sense of inevitability and pre-destiny shaping these narratives, as well as their ultimate obedience to market dogma, is most dangerous, however, for the way it trickles down to writers and theorists less directly or consciously concerned with market forces. It fosters, both directly and by example, a willingness to apply genetics, neuroscience, or systems theory to the economy, and to do so in a decidedly determinist and often sloppy fashion. Then, the pull of the market itself does the rest of the work, tilting the ideas of many of today's best minds toward the agenda of the highest bidder.
So Steven Johnson ends up leaning, perhaps more than he should, on the corporate-friendly evidence that commercial TV and video games are actually healthy. (Think of how many corporations would hire a speaker who argued that everything bad — like marketing and media — is actually bad for you.) Likewise, Malcolm Gladwell finds himself repeatedly using recent discoveries from neuroscience to argue that higher human cognition is more than trumped by reptilian impulse; we may as well be guided by advertising professionals, since we're just acting mindlessly in response to crude stimuli, anyway. Everything becomes about business — and that's more than okay.
This widespread acceptance of the current economic order as a fact of nature ends up compromising the impact of new findings, and changing the public's relationship to the science going on around them. These authors do not chronicle (or celebrate) the full frontal assault that new technologies and scientific discoveries pose to, say, the monopolization of value creation or the centralization of currency. Instead, they sell corporations a new, science-based algorithm for strategic investing on the new landscape. Higher sales reports and lecture fees serve as positive reinforcement for authors to incorporate the market's bias even more enthusiastically the next time out. Write books that business likes, and you do better business. The cycle is self-perpetuating. But just because it pays the mortgage doesn't make it true.
In fact, thanks to their blind acceptance of a particular theory of the market, most of these concepts end up failing to accurately predict the future. Instead of 25 years of prosperity and eco-health, we got the dotcom bust and global warming. Immersion in media is not really good for us. People are capable of responding to a more complex call to action than the over-simplified and emotional rants of right-wing ideologues. The decentralizing effect of new media has been met by an overwhelming concentration of corporate conglomeration.
These theories fail not because the math or science underlying them is false, but rather because it is being inappropriately applied. Yet too many theorists keep buying into them, desperate for some logical flourish through which the premise of scarcity can somehow fit in, and business audiences won over. In the process, they ignore the genuinely relevant question: whether the economic model, the game rules set in place half a millennium ago by kings with armies, can continue to hold back the genuine market activity of people enabled by computers.
People are beginning to create and exchange value again, and they are coming to realize the market they have taken for granted is not a condition of nature. This is the threat — and no amount of theoretical recontextualization is going to change that — or successfully prevent it.
Making Markets: From Abundance To Artificial Scarcity
The economy in which we operate is not a natural system, but a set of rules developed in the Late Middle Ages in order to prevent the unchecked rise of a merchant class that was creating and exchanging value with impunity. This was what we might today call a peer-to-peer economy, and did not depend on central employers or even central currency.
People brought grain in from the fields, had it weighed at a grain store, and left with a receipt — usually stamped into a thin piece of foil. The foil could be torn into smaller pieces and used as currency in town. Each piece represented a specific amount of grain. The money was quite literally earned into existence — and the total amount in circulation reflected the abundance of the crop.
Now the interesting thing about this money is that it lost value over time. The grain store had to be paid, and some of the grain was lost to rats and spoilage. So each year, the grain store would reissue the money for any grain that hadn't actually been claimed. This meant that the money was biased towards transactions — towards circulation, rather than hoarding. People wanted to spend it. And the more money circulates (to a point) the better and more bountiful the economy. Preventative maintenance on machinery, research and development on new windmills and water wheels, was at a high.
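The bias toward circulation can be made concrete with a toy calculation. This is a hypothetical sketch: the 10% annual reissue fee is an invented figure, not a historical rate, and the mechanism is simplified to its arithmetic core.

```python
# Hypothetical illustration of demurrage: grain-receipt money that is
# reissued each year at a discount (covering storage fees and spoilage),
# compared against money spent immediately. The 10% rate is invented.

def value_after(years, amount=100.0, demurrage=0.10):
    """Face value of hoarded grain-receipt money after annual reissues."""
    for _ in range(years):
        amount *= (1.0 - demurrage)
    return amount

held = value_after(5)     # money hoarded through five annual reissues
spent_now = 100.0         # money spent immediately keeps full value
print(round(held, 2))     # 59.05 — hoarding costs roughly 41% in five years
```

Under any positive demurrage rate, holding the receipt is strictly worse than spending it, which is exactly the transactional bias the essay describes.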
Many towns became so prosperous that they invested in long-term projects, like cathedrals. The "Age of Cathedrals" of this pre-Renaissance period was not funded by the Vatican, but by the bottom-up activity of vibrant local economies. The work week got shorter, people got taller, and life expectancy increased. (Were the Late Middle Ages perfect? No — not by any means. I am not in any way calling for a return to the Middle Ages. But an honest appraisal of the economic mechanisms in place before our own is required if we are ever going to contend with the biases of the system we are currently mistaking for the way it has always and must always be.)
Feudal lords, early kings, and the aristocracy were not participating in this wealth creation. Their families hadn't created value in centuries, and they needed a mechanism through which to maintain their own stature in the face of a rising middle class. The two ideas they came up with are still with us today in essentially the same form, and have become so embedded in commerce that we mistake them for pre-existing laws of economic activity.
The first innovation was to centralize currency. What better way for the already rich to maintain their wealth than to make money scarce? Monarchs forcibly made abundant local currencies illegal, and required people to exchange value through artificially scarce central currencies, instead. Not only was centrally issued money easier to tax, but it gave central banks an easy way to extract value through debasement (removing gold content). The bias of scarce currency, however, was towards hoarding. Those with access to the treasury could accrue wealth by lending or investing passively in value creation by others. Prosperity on the periphery quickly diminished as value was drawn toward the center. Within a few decades of the establishment of central currency in France came local poverty, an end to subsistence farming, and the plague. (The economy we now celebrate as the happy result of these Renaissance innovations only took effect after Europe had lost half of its population.)
As it's currently practiced, the issuance of currency — a public utility, really — is still controlled in much the same manner by central banks. They issue the currency in the form of a loan to a bank, which in turn loans it to a business. Each borrower must pay back more than he has acquired, necessitating competition — and more borrowing. An economy with a strictly enforced central currency must expand at the rate of debt; it is no longer ruled principally by the laws of supply and demand, but the debt structures of its lenders and borrowers. Those who can't grow organically must acquire businesses in order to grow artificially. Even though nearly 80% of mergers and acquisitions fail to create value for either party, the rules of a debt-based economy — and the shareholders it was developed to favor — insist on growth at the expense of long-term value.
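The growth requirement follows from simple compound-interest arithmetic, which can be sketched as follows. All figures here are hypothetical, chosen only to illustrate the mechanism.

```python
# Toy model of a debt-based currency: suppose (hypothetically) that all
# money enters circulation as interest-bearing loans. If principal P is
# the only money issued, the interest owed on top of it can only be
# repaid out of new loans, so total debt must grow every period.

def required_money_supply(principal, rate, years):
    """Money needed to retire the principal plus compound interest."""
    return principal * (1 + rate) ** years

p = 1_000_000                 # total currency issued as loans
owed = required_money_supply(p, 0.05, 10)
shortfall = owed - p          # can only be covered by further borrowing
print(round(owed), round(shortfall))
```

The shortfall (here about 63% of the original issue after ten years at 5%) is why such an economy must expand at the rate of its debt rather than at the rate of its real activity.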
The second great innovation was the chartered monopoly, through which kings could grant exclusive control over a sector or region to a favored company in return for an investment in the enterprise. This gave rise to monopoly markets, such as the British East India Trading Company's exclusive right to trade in the American Colonies. Colonists who grew cotton were not permitted to sell it to other people or, worse, fabricate clothes. These activities would have generated value from the bottom up, in a way that could not have been extracted by a central authority. Instead, colonists were required to sell cotton to the Company, at fixed prices, who shipped it back to England where it was fabricated into clothes by another chartered monopoly, and then shipped back to America for sale to the colonists. It was not more efficient; it was simply more extractive.
The resulting economy encouraged — and often forced — people to accept employment from chartered corporations rather than create value for themselves. When natives of the Indies began making rope to sell to the Dutch East India Trading Company, the Company sought and won laws making rope fabrication in the Indies illegal for anyone except the Company itself. Former rope-makers had to close their workshops, and work instead for lower wages as employees of the company.
We ended up with an economy based in scarcity and competition rather than abundance and collaboration; an economy that requires growth and eschews sustainable business models. It may or may not better reflect the laws of nature — and that is a conversation we really should have — but it is certainly not the result of an entirely natural set of principles in action. It is a system designed by certain people at a certain moment in history, with very specific interests.
Like artists of the Renaissance, who were required to find patrons to support their work, most scientists, mathematicians, theorists, and technologists today must find support from either the public or private sectors to carry on their work. This support is not won by calling attention to the Monopoly board most of us mistake for the real economy. It is won by applying insights to the techniques through which their patrons can better play the game.
This has biased their observations and their conclusions. Like John Nash, who carried out game theory experiments for RAND in the 1950s, these business consultants see competition and self-interest where there is none, and reject all evidence to the contrary. Although he later recanted his conclusions, Nash and his colleagues couldn't believe that their subjects would choose a collaborative course of action when presented with the "prisoner's dilemma," and simply ignored their initial results.
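The dilemma itself is easy to state. The payoff values below are the standard textbook ones, not the figures from the RAND experiments; they are enough to show the gap between what the theory predicts and what the subjects actually did.

```python
# The classic prisoner's dilemma. Each entry is (row player's payoff,
# column player's payoff); higher is better. These are textbook values,
# not the RAND experiment's.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """The purely self-interested reply the theory predicts."""
    return max(("cooperate", "defect"),
               key=lambda m: payoffs[(m, opponent_move)][0])

# Defection is the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation (3, 3) beats the predicted outcome (1, 1).
print(payoffs[("cooperate", "cooperate")], payoffs[("defect", "defect")])
```

Self-interested theory predicts mutual defection; the RAND subjects' willingness to cooperate anyway is precisely the "initial result" the essay says was ignored.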
Likewise, the proponents of today's digital libertarianism exploit any evidence they can find of evolutionary principles that reflect the fundamental competitiveness of human beings and other life forms, while ignoring the much more rigorously gathered evidence of cooperation as a primary human social skill. The late archeologist Glynn Isaac, for one, demonstrated how food sharing, labor distribution, social networking and other collaborative activities are what gave our evolutionary forefathers the ability to survive. Harvard biologist Ian Gilby's research on hunting among bats and chimps demonstrates advanced forms of cooperation, collective action, and sharing of meat disproportional to the risks taken to kill it.
Instead, it is more popular to focus on the self-interested battle for survival of the fittest. Whether or not he intends his work to be used this way, Steven Pinker's arguments about decreasing violence among humans over time are employed by others as evidence of the free market's peaceful influence on civilization. Ray Kurzweil relegates the entire human race to a subordinate role in the much more significant evolution of machines — a dehumanizing stance that dovetails all too well with an industrial marketplace in which most human beings are now relegated to the reactive role of consumers.
In Chris Anderson's vision of the coming "Petabyte Age," no human scientists are even required. That's because the structures that emerge from multi-dimensional data sets will be self-organizing and self-apparent. The emergent properties of natural systems and artificial markets are treated interchangeably. Like Adam Smith's "invisible hand," or Austrian economist Friedrich Hayek's notion of "catallaxy," markets are predestined to reach equilibrium by their very nature. Just like any other complex, natural system.
In short, these economic theories are selecting examples from nature to confirm the properties of a wholly designed marketplace: self-interested actors, inevitable equilibrium, a scarcity of resources, competition for survival. In doing so, they confirm — or at the very least, reinforce — the false idea that the laws of an artificially scarce fiscal scheme are a species' inheritance rather than a social construction enforced with gunpowder. At the very least, the language of science confers undeserved authority on these blindly accepted economic assumptions.
The Net Effect
Worst of all, when a potentially destabilizing and decentralizing medium such as the Internet comes along, this half-true and half-hearted style of inquiry follows the story only until a means to arrest its development is discovered and new strategies can be offered.
The open source ethos, through which anyone who understands the code can effectively redesign a program to his own liking, is repackaged by Jeff Howe as "crowdsourcing" through which corporations can once again harness the tremendous potential of real people acting in concert, for free. Viral media is reinvented by Malcolm Gladwell as "social contagion," or Tim Draper as "viral marketing" — techniques through which mass marketers can once again define human choice as a series of consumer decisions.
The decentralizing bias of new media is thus accepted and interpolated only until the market's intellectual guard can devise a new countermeasure for their patrons to employ on behalf of preserving business as usual.
Meanwhile, the same corporate libertarian think tanks using Richard Dawkins' theories of evolution to falsely justify the chaotic logic of capitalism through their white papers also advise politicians how to exploit the beliefs of fundamentalist Christian creationists in order to garner public support for self-sufficiency as a state of personal grace, and to galvanize suspicion of a welfare state. This is cynical at best.
It doesn't take a genius or a scientist to understand how the rules of the economic game as it is currently played reflect neither human values nor the laws of physics. The market cannot expand infinitely like the redshifts in Hubble's universe. How many other species attempt to store up enough fat during their productive years so that they can simply "retire" on their hoarded resources? How could a metric like the GNP accurately reflect the health of the real economy when toxic spills and disease epidemics alike actually count as short-term booms?
The Internet may be very much like a rhizome, but it is still energized by a currency that is anything but a neutral player. Most Internet business enthusiasts applaud Google's efforts to build open systems the same way their predecessors applauded the World Bank's gift of open markets to developing nations around the world — utterly unaware of (or unwilling to look at) what exactly we are opening our world to.
The net (whether we're talking Web 2.0, Wikipedia, social networks or laptops) offers people the opportunity to build economies based on different rules — commerce that exists outside the economic map we have mistaken for the territory of human interaction.
We can start up and even scale companies with little or no money, making the banks and investment capital on which business once depended obsolete. That's the real reason for the so-called economic crisis: there is less of a market for the debt on which the top-heavy game is based. We can develop local and complementary currencies, barter networks, and other exchange systems independently of a central bank, and carry out secure transactions with our cell phones.
In doing so, we become capable of imagining a marketplace based in something other than scarcity — a requirement if we're ever going to find a way to employ an abundant energy supply. It's not that we don't have the technological means to source renewable energy; it's that we don't have a market concept capable of contending with abundance. As Buckminster Fuller would remind us: these are not problems of nature, they are problems of design.
If science can take on God, it should not fear the market. Both are, after all, creations of man.
We must stop perpetuating the fiction that existence itself is dictated by the immutable laws of economics. These so-called laws are, in actuality, the economic mechanisms of 13th Century monarchs. Some of us analyzing digital culture and its impact on business must reveal economics as the artificial construction it really is. Although it may be subjected to the scientific method and mathematical scrutiny, it is not a natural science; it is game theory, with a set of underlying assumptions that have little to do with anything resembling genetics, neurology, evolution, or natural systems.
The scientific tradition exposed the unpopular astronomical fact that the earth was not at the center of the universe. This stance challenged the social order, and its proponents were met with less than a welcoming reception. Today, science has a similar opportunity: to expose the fallacies underlying our economic model instead of producing short-term strategies for mitigating the effects of inventions and discoveries that threaten this inherited market hallucination.
The economic model has broken, for good. It's time to stop pretending it describes our world.
On Rushkoff's "Economics Is Not Natural Science"
Rushkoff is right: our 21st-century global computing platform is still running a 13th-century banking system, and the resulting performance sucks.
In any hydrodynamic system, the non-dimensional Reynolds Number characterizes the ratio between inertial forces (the result of mass and velocity) to viscous forces (the result of the inherent stickiness of the fluid). When the Reynolds number reaches a certain critical value, the system changes from laminar to turbulent flow. There is an equivalent to the Reynolds Number for an economic system: the ratio between the speed (and amplitude) at which currency is flowing through the system to the viscosity of the financial medium. The Reynolds number of our electronically-mediated economy has recently gone way up, with destabilizing results. The latest problem is that automated programs — the barnacles of the New Economy — are now trading *within* the frequency spectrum of the turbulent boundary layer. If this happens to a ship, it will slow down, and if it happens to an airplane, it will go into a stall. Where's the anti-fouling paint?
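For readers unfamiliar with the physics, the borrowed quantity can be sketched numerically. The fluid values below are realistic for water; any "economic" analogue of density or viscosity would be purely speculative, since there is no accepted way to measure the viscosity of a financial medium.

```python
# A sketch of Dyson's analogy. The hydrodynamic Reynolds number is
# Re = rho * v * L / mu (density x velocity x characteristic length,
# divided by dynamic viscosity). Pipe flow typically turns turbulent
# above Re ~ 4000. Values here are for water, not for any economy.

def reynolds(density, velocity, length, viscosity):
    """Dimensionless ratio of inertial to viscous forces."""
    return density * velocity * length / viscosity

# Water (1000 kg/m^3, viscosity ~1e-3 Pa·s) in a 5 cm pipe at 1 m/s:
print(reynolds(1000.0, 1.0, 0.05, 1.0e-3))   # 50000.0 — well into turbulence
```

In the analogy, faster currency flow (velocity) and a less viscous trading medium (electronic markets) both push the ratio up, which is why Dyson says the economic Reynolds number "has recently gone way up."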
How to best transcend the current economic mess? Put Jeff Bezos, Pierre Omidyar, Elon Musk, Tim O'Reilly, Larry Page, Sergey Brin, Nathan Myhrvold, and Danny Hillis in a room somewhere and don't let them out until they have framed a new, massively-distributed financial system, founded on sound, open, peer-to-peer principles, from the start. And don’t call it a bank. Launch a new financial medium that is as open, scale-free, universally accessible, self-improving, and non-proprietary as the Internet, and leave the 13th century behind.
[ED. Note: Add George Dyson to above list]
[ED. NOTE: The following interview with Harvard biological anthropologist Richard Wrangham was originally published eight years ago on Edge, on February 28, 2001. Given the media interest attending the publication of Wrangham's related book, Catching Fire: How Cooking Made Us Human, we are pleased to bring the piece back for an encore.]
EVOLUTION OF COOKING
An Edge Encore
One of the great thrusts of behavioral biology for the last three or four decades has been that if you change the conditions that an animal is in, then you change the kind of behavior that is elicited. What the genetic control of behavior means is not that instincts inevitably pop out regardless of circumstances; instead, it is that we are created with a series of emotions that are appropriate for a range of circumstances. The particular set of emotions that pop out will vary within species, but they will also vary with context, and once you know them better, then you can arrange the context.... It's much better to anticipate these things, recognize the problem, and design in advance to protect.
RICHARD WRANGHAM is a professor of biology and anthropology at Harvard University who studies chimpanzees, and their behavior, in Uganda. His main interest is in the question of human evolution from a behavioral perspective. His most recent book is Catching Fire: How Cooking Made Us Human.
THE EVOLUTION OF COOKING
RICHARD WRANGHAM: I make my living studying chimpanzees and their behavior in Uganda. I'm really interested in looking at the question of human evolution from a behavioral perspective, and I find that working with chimps is provocative because of the evidence that 5 million, 6 million, maybe even 7 million years ago, the ancestor that gave rise to the Australopithecus, the group of apes that came out into the savannahs, was probably very much like a chimpanzee. Being with chimpanzees in the forests of Uganda, as with the forests anywhere else in Africa, is pretty much like going into a time machine and enables us to think about the basic principles that underlie behavior.
Although humans are enormously different from the apes, the extraordinary thing that has emerged over the last two or three decades (and this is becoming increasingly clear recently) is that in maybe three big ways in particular, humans are more ape-like in their social behavior than you would expect to occur by chance. Moreover, there's something about our relationship to the apes that has carried through in terms of our behavior. To take an example, there are only two mammals that we know of in the world in which males live in groups of their male relatives and occasionally make attacks on individuals in neighboring groups so brutally that they kill them. Those two mammals are humans and chimpanzees. This is very odd and it needs explanation.
EDGE: Why was this not noticed until the last generation or so?
WRANGHAM: Chimpanzees weren't studied at all in the wild until 1960. It took 14 years after that before people were seeing them at the edges of their ranges. It's just difficult to follow them all over the place. It was 1974 when the first brutal attacks were seen, and these led to the extinction of an entire chimpanzee community in Gombe. People monitored that under Jane Goodall's research direction. And then slowly over the years it's been realized that chimps will carry on or will kill individuals in other communities. So now we've had chimp killing going on not only in Gombe and at the site I work at, in Kibale in west Uganda, but chimps have also killed other chimps in Budongo in Uganda and in Mahale in Tanzania. It just takes time for these observations to accumulate.
EDGE: Will the chimps kill others in their own community?
WRANGHAM: Yes, occasionally there's a Julius Caesar-like assassination, which is, of course, really intriguing, because we've got these tremendously important coalitions that go on within chimpanzee communities that determine a male's ability to do what a male is desperately striving to do all the time, which is to become the alpha male. And the question that arises once you see that on occasion these coalitions can lead to what are essentially assassinations is, what makes them stable normally? How is it that you don't get constant erosion of confidence, such that you get one individual isolated and the others all forming coalitions against him? These killings are rare events, but we know a fair amount about them. The most important aspect underlying the similarity is the fact that you can have extraordinary imbalances of power. You can have three or four individuals all jointly attacking another one, which means that it's essentially safe for the attackers. So if they decide that they are in a coalition against a victim, they can dispatch him relatively safely. That means that any animal can do that, and there are other animals that do, although they don't live in groups with their own relatives in the same way as chimpanzees do. Hyenas and female lions are other examples of animals that engage in this type of activity.
We've got three things that are really striking about humans and the great apes in parallel. The violence that chimps and humans show is pretty much unique to those two species. Then you have the extraordinary degree of social tolerance in humans and bonobos, another ape that is equally closely related to humans. And then you have a remarkable degree of eroticism in bonobos compared to humans. These parallels are not easily explained and raise all sorts of provocative questions, given the fact that humans have so many differences from the other apes in terms of our ecology, our language, our intelligence, our millions of years of separation.
EDGE: How long have you been doing field work? How did you get started?
WRANGHAM: I've been studying chimps on and off for over 30 years. I began working at Jane Goodall's site at Gombe, which is the archetypal site and represents to many people what the chimpanzee is. In 1984 I moved to Uganda and started work on a forest chimpanzee population, and began thinking particularly about cultural variation, about the kinds of behavioral traditions that vary between the two sites that I had come to know best. I then used this as a vehicle for trying to understand variations in social behavior among chimpanzees.
The answers are becoming clearer. In my field work I am trying to understand what it is about the ecology that leads to differences in behavior. A real key that has been given extraordinarily little attention is the fact that in some populations, the apes are able to walk and feed at the same time. In others they're not, because there's no food for them as they're walking. This sounds like an extraordinarily trivial difference, but it seems to be enormously important, because if you can walk and feed at the same time, then you can stay in a group with your friends and relations without additional members causing an increased intensity of feeding competition. On the other hand, if you are walking without feeding between the key food patches, then every time an additional chimp comes along and joins your party, the effect is that feeding competition is intensified in these food patches. And there is no amelioration when you're moving between food patches. The long-term effect of this is that it fragments the parties, and it's the fragmented nature of these parties of chimps that don't have the ability to walk and feed at the same time that underlies all of these social differences.
EDGE: What questions are you asking yourself today?
WRANGHAM: This is fine, but long before earth ovens came along we must have learned to cook. And you would think that cooking would be associated with things like evidence in your body of the food being easier to digest, such as smaller teeth, or maybe a reduction in the size of the rib cage as the size of the stomach gets smaller, or maybe the jaw getting smaller. And there's only one time in human evolution that all that happens; that is, 1.9 million years ago with the evolution of the genus Homo. It is there that we must look for evidence that cooking began.
What we've got to think about is the idea that once you have females ready to make a meal by collecting food and cooking it, then they're vulnerable to having their food taken away by the scroungers: the big males who find it easier not to go and collect food themselves or cook it, but just to take it once it's ready. Therefore the females need protective bonds in order to protect themselves from thieving males, and this is the origin of human male-female relationships. The evolution of cooking is a huge topic that is virtually completely neglected. And whatever view you take about cooking, you have to say it's a problem that needs to be addressed.
The second problem is this: in a number of ways, the evolution of humans shows evidence of our behaving and looking as if we had the characteristics of a juvenile animal. For a hundred years or more people have talked about the idea that humans might be a pedomorphic species, a species that has juvenile characteristics in general, but this is too global a way to think about it. Still, it remains the case that much in our behavior, when compared with the behavior of our closest relatives, looks more playful and less aggressive when you're thinking about interactions at a social level within a group. We are also more sexual and more ready to learn, and these sorts of characteristics are generally associated with juvenility.
In a fascinating parallel, the bonobos, the second of our two closest relatives, show all sorts of traits that are pedomorphic. We can see this throughout the head, where the morphology of the skull itself looks like the skull of an early-adolescent or late-juvenile chimpanzee, and much of their behavior looks juvenile-like. They are more playful, they're less sex-differentiated in all sorts of aspects of their behavior, they're more sexual, and so on. And we've yet to really come to grips with where this pedomorphic change has come from and what it means.
We've already got some wonderful examples of similar things occurring in other animals in the context of domestication. When we look at the differences between wolves and dogs, for example, we see amazing parallels to the differences between chimpanzees and bonobos. In each case, for a given size of animal, you have the skull being reduced in size, and the components of the skull being reduced in size, including the jaws and teeth, and the skull looking more like the skull of a juvenile in the other form. The dog's skull looks like that of a juvenile wolf, and the bonobo's skull looks like that of a juvenile chimpanzee. And the behavior of each of them looks like it has strong components of the juvenile of the other species.
This leads to the thought that species can self-domesticate. There is good reason to think that over the course of evolution the bonobos evolved from a chimpanzee-like ancestor as a consequence of being in an environment where aggression was less beneficial to the aggressors, where there was a natural selection against aggression, and where selection favored individuals that were less aggressive. Over time, selection built on those slight variations in the timing of the arrival of the aggressive characteristics in the adult males. So it was constantly pushing back, favoring individuals that retained more juvenile-like behavior and even juvenile-like heads because that's what controls the behavior. Later, what you had was a species that had effectively been tamed, had been self-domesticated.
There is experimental evidence of this process. We have the Russian geneticist Belyaev, for example, who actually took wild foxes and selected purely for tameness over 20 or 30 generations, and at the end of that time observed not only that the descendant foxes were spontaneously as tame as dogs are nowadays, but also that they had a series of characteristics that had come along for the ride, incidental consequences that were not selected for but are just there. You have dramatic morphological ones, like the star mutation, the white spot on the forehead that you see in horses and cows and goats, which is just somehow associated genetically with tameness and probably results from some kind of change in developmental events. There are also other morphological changes, like curly hair, short tails, and lopped ears, which have happened in a number of domesticated animals, apparently because they've been selected for tameness. In addition, you get these smaller brains.
There is a remarkable thing about human evolution. We always tend to think that humans have just had a continuous surge in brain size over the last two million years, but actually over the last thirty thousand years brain size has decreased by 10 to 15 percent. The standard explanation for this is that we became more gracile, thinner boned, and therefore lighter in body weight. And because there tends to be a correlation between body weight and brain weight, then maybe this explains our smaller brains. But I don't see any reason why brain size should be correlated with the amount of meat we carry on our bodies. This gracility is exactly the same pattern we see in the evolution of dogs from wolves, or bonobos from chimpanzees, or domesticated foxes from wild foxes. In all these cases an increasing gracility of the bone is an incidental effect.
I think that we have to start thinking about the idea that humans in the last 30, 40, or 50 thousand years have been domesticating ourselves. If we're following the bonobo or dog pattern, we're moving toward a form of ourselves with more and more juvenile behavior. And the amazing thing once you start thinking in these terms is that you realize that we're still moving fast. Tooth size, for example, is extremely strongly genetically controlled and develops with little environmental influence, and is continuing to decline fast. I think that current evidence is that we're in the middle of an evolutionary event in which tooth size is falling, jaw size is falling, brain size is falling, and it's quite reasonable to imagine that we're continuing to tame ourselves. The way it's happening is the way it's probably happened since we became permanently settled in villages, 20 or 30 thousand years ago, or before.
People who are anti-social, for example, have their breeding opportunities reduced. They may be executed, they may be imprisoned, or they may be punished so badly that they're kept out of the breeding pool. Just as there is selection for tameness in the domestication process of wild animals, or just as in bonobos there was a natural selection against aggressiveness, here there's a sort of social selection against excessively aggressive people within communities. This puts humans in the picture of now undergoing a process of becoming an increasingly peaceful form of a more aggressive ancestor.
EDGE: Why is understanding these episodes in evolutionary history so important for us today?
WRANGHAM: We tend to think of the problems that have given rise to Al Qaeda, for example, as being concerned primarily with economic and political conflict, and obviously those are hugely important. Nevertheless, in order to understand why it is that particular countries and particular people within those countries find Osama bin Laden's wild schemes attractive, we have to think in terms of rather deeper differences among groups and sexes.
Men in the Middle East come from a society in which there is polygyny, one man having many wives, and even though polygyny can never be very widespread within a society because there aren't enough women, it has the enormous effect that women marry upwards. Polygynous marriages are always concentrated in the upper socio-economic strata. This means that in the lower socio-economic strata you have a lot of men with very few women, and they use the typical systems for getting wives that are used in polygynous societies, which include gaining control over women. In a polygynous society, women want to marry into the polygynous strata because that's where all the wealth and the opportunities are to get good food and survival prospects for your kids. Consequently, they allow themselves to be frustrated, to be veiled and put in the burkha, to be given rules that mean they can only stay inside the house and have to blacken their windows. They allow themselves to be totally controlled by men.
So in this society you've got a lot of lower-class men, who have very few reproductive opportunities, who want to control women, and then you introduce them to this westernization that says, "Women, we will educate you, we will free you from the burkha, we will give you opportunities to be mobile, to travel, to flirt, to make your own romantic alliances." That is a very strong threat to the men who are already up against it and whose reproductive future depends on making alliances with other men who are in complete control of their own daughters. So westernization undermines reproductive strategies of men who are already desperate.
This means that in order to develop long-term strategies for reducing the degree of resentment that globalization and westernization are inducing in those countries, we should think about what we can do to reduce polygyny. The countries where Al Qaeda gets the most support are the most polygynous countries: the Afghanistans, the Pakistans, the Saudi Arabias, and so on. But if you take a country like Turkey, which banned polygyny in the 1920s, you see very little support. Single men are dangerous when they face a difficult reproductive future, and when they are presented with a series of economic changes that further reduce their economic futures by liberating women from their own control, then those men become peculiarly open to those wild schemes that Osama bin Laden presents. And those sorts of dangers are liable simply to continue for as long as the reproductive inequities continue in the Middle East.
EDGE: What accounts for the controversy surrounding the publication of your book Demonic Males?
WRANGHAM: Once you use biology to analyze human behavior, it's a bit like going to a psychiatrist and having somebody help you understand where your behavior is coming from. It means that you're in a little bit less internal conflict, that you can understand what you're doing, and you can shape your own behavior better. But the reaction is not always like that. A lot of people find it difficult to live with the idea that we've had a natural history of violence. We've had natural selection in favor of emotions in men that predispose us to enjoy competition, to enjoy subordinating other men, to enjoy even killing other men. These are nasty things to accept, and there are people who have written subsequently to say that it's just inappropriate to write like this, and so they look for ways to undermine the evidence of sex differences or the uniqueness of the human species. I think it's because people are very nervous about the idea that once you see a biological component to our violent behavior, then it may mean that it is inevitable.
One of the great thrusts of behavioral biology for the last three or four decades has been that if you change the conditions that an animal is in, then you change the kind of behavior that is elicited. What the genetic control of behavior means is not that instincts inevitably pop out regardless of circumstances; instead, it is that we are created with a series of emotions that are appropriate for a range of circumstances. The particular set of emotions that pop out will vary within species, but they will also vary with context, and once you know them better, then you can arrange the context.
Once you understand and admit that human males in particular have got these hideous propensities to get carried away with enthusiasm, to have war, rape, or killing sprees, or to get excited about opportunities to be engaged in violent interactions, then you can start recognizing it and doing something about it. It's much better not to have to wait for experience to tell you that it's a good idea to have a standing army to protect yourself against the neighbors, or that you need to make sure that women are not exposed to potential rapists. It's much better to anticipate these things, recognize the problem, and design in advance to protect.
There's still a huge tendency to downplay or just simplify sex differences in behavior and emotions. As we start getting a more realistic sense about the way natural selection has shaped our behavior, we're going to be increasingly aware of the fact that the ways that men and women respond emotionally to different contexts can be very different.
One of the dramatic examples is the extent to which men and women get positive illusions. In general, women tend to have negative illusions about themselves, meaning that they regard themselves as slightly less skilled or competent than they really are. On the other hand, men tend to have positive illusions, meaning they exaggerate their own abilities, compared to the way either others see them or they perform in tests. These things are certainly changeable. They depend a lot on power relations. If you put a woman in a dominant power relation, she tends to get a positive illusion; if you put a man in a subordinate relationship, he tends to get a negative illusion.
Nevertheless these things emerge very predictably and they're dangerous. If you have positive illusions then it means that you think you can fight better than you really can. It looks to me as though natural selection has favored positive illusions in men because, rather like the long canines on a male baboon, they enable men to fight better against other men who really believe in themselves. You have to believe in yourself to be able to fight effectively, because if you don't believe in yourself really well, then others will take advantage of your lack of confidence and your nervousness.
If you understand something about positive illusions, you can look at an engagement in which everybody believes they're going to win, and be a little more cynical about it. You can be a bit more like a lawyer looking at two clients and saying, wait a minute, neither of you has got a case quite as good as you think you have. In the future a more sensitive appreciation for these sorts of emotional predispositions can help us generate a more refined approach to violence prevention.
MONEY, DESIRE, PLEASURE, PAIN
An Edge Original Essay
EMANUEL DERMAN is a professor in Columbia University's Industrial Engineering and Operations Research Department, as well as a partner at Prisma Capital Partners. He is a former managing director and head of the Quantitative Strategies group at Goldman, Sachs & Co. He is the author of My Life As A Quant.
Money is human happiness in the abstract, wrote Schopenhauer grimly in the early 19th century. He, then, who is no longer capable of enjoying human happiness in the concrete, devotes himself utterly to money.
But what is happiness? In The Ethics, written in 1677, Spinoza ambitiously tried to do for the emotions what Euclid did for geometry. Euclid began with “primitives”, his raw material, the elements that everyone understands. In geometry, these were points and lines. He then added axioms, self-evident logical principles that no one would argue with, stating for example that “If equals are added to equals, then the wholes are equal”. Finally, he proceeded to theorems, interesting deductions he could prove from the primitives and the axioms. One of them is Pythagoras’ theorem, which relates triangles to squares: the sum of the squares of the two shorter sides of a right-angled triangle equals the square of the hypotenuse.
Spinoza approached human emotions the way Euclid approached triangles and squares, aiming to understand their inter-relations by means of principles, logic and deduction.
Spinoza’s primitives were pain, pleasure and desire. Everyone who inhabits a human body recognizes these feelings. Just as financial stock options are derivatives that depend on the underlying stock price, so more complex emotions depend on these three primitives: pain, pleasure and desire.
Love or hate, then, is pleasure or pain associated with an external object. Hope is the expectation of future pleasure tinged with doubt. Joy is simply the pleasure we experience when that doubtful expectation materializes. Envy is pain at another’s pleasure. Cruelty is a hybrid of all three primitives: it is the desire to inflict pain on someone we love. And so on to all the other emotions …
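This compositional scheme can be sketched as a toy program. Everything here is illustrative: the `Feeling` type, the unit weights, and the `target` field are my own assumptions for the sketch, not anything in Spinoza or in this essay's formal content.

```python
# A toy sketch of Spinoza's "derivative" emotions, composed from the
# three primitives: pain, pleasure, and desire. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class Feeling:
    pleasure: float = 0.0   # primitive 1
    pain: float = 0.0       # primitive 2
    desire: float = 0.0     # primitive 3
    target: str = "self"    # what the feeling is directed at

def love(obj: str) -> Feeling:
    # pleasure associated with an external object
    return Feeling(pleasure=1.0, target=obj)

def hate(obj: str) -> Feeling:
    # pain associated with an external object
    return Feeling(pain=1.0, target=obj)

def envy(other: str) -> Feeling:
    # pain at another's pleasure
    return Feeling(pain=1.0, target=f"{other}'s pleasure")

def cruelty(loved_one: str) -> Feeling:
    # a hybrid of all three: the desire to inflict pain on someone we love
    return Feeling(pleasure=1.0, pain=1.0, desire=1.0, target=loved_one)

print(envy("a rival"))
```

The point of the sketch is only the structure: each named emotion is a function of the same three underlying quantities, just as the essay describes option prices as functions of one underlying stock price.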
Figure 1 is a simple diagram I constructed to illustrate Spinoza’s scheme. For Spinoza, Good is everything that brings pleasure, and Evil is everything that brings pain. And happiness is Good.
Figure 1: Spinoza’s model of emotions
Schopenhauer’s definition of money as abstract happiness or pleasure or good is correct, but it isn’t the entire story.
In a barter system, where you trade bread for cheese, you are trading completed work, your bread for their cheese. Work is painful and you do it if you desire to survive, the most fundamental of all desires. By the sweat of your brow will you eat bread, said God to Adam and Eve after the Fall. In Spinoza’s scheme I regard work as pain in the service of desire.
Gold coins are crystallized work, the labor of mining. Banknotes backed by gold are crystallized labor and past pain. Money, then, is past pain in the service of desire to survive as well as abstract future pleasure. It combines in one banknote all three of Spinoza’s primitives, as illustrated in Figure 2.
Figure 2: How money fits into Spinoza’s model
Doctors testing comatose patients for signs of life give up when there is no response to pain. Similarly, fiat money incorporates only two of the three primitives, pleasure and desire. Genuine money should always involve the recollected pain involved in creating it.
FIVE PROBLEMS IN THE PHILOSOPHY OF MIND
An Edge Original Essay
STUART A. KAUFFMAN is a professor at the University of Calgary with a shared appointment between biological sciences and physics and astronomy. He is the author of Reinventing the Sacred, The Origins of Order, At Home in the Universe: The Search for the Laws of Self-Organization, and Investigations.
Based on two physical postulates, I approach and hope to resolve five fundamental problems in the philosophy of mind that have plagued us for hundreds of years. Both postulates are testable in principle. If mind depends upon the specific physics of the mind-brain system, mind is, in part, a matter for physicists.
Since Descartes invested the Western mind with res cogitans and res extensa, the seemingly insurmountable philosophic and scientific questions his dualism posed have stalked us. Indeed, a friendly observer of the past 350 years of the philosophy of mind might be forgiven for saying that res cogitans and res extensa, despite all our efforts with Dualism, Materialism, Idealism, and now the Mind Brain Identity Theory, have held us at bay. I say 'at bay' because it is clear that there is no agreement that we have solved the mighty problems of consciousness and mind (1, 2, 3, 4).
In the present essay I propose to broach new ground that I hope may help solve five fundamental problems in the philosophy of mind and the evolution of consciousness: 1) How does mind act on matter? 2) If it cannot, is mind a mere epiphenomenon? 3) Whence free will in the face of causal closure in the brain? More, I hope to make inroads on a fundamental fourth problem: 4) Whence a responsible free will? And there is a further issue I want to discuss: 5) What is the evolutionary usefulness, or selective advantage, of consciousness? Finally, 6) is there any hope that my attempts at 1-5 might shed light on the 'hard problem' of conscious experience, of qualia? The answer to this last question appears, as yet, to be 'No'.
All the above questions are deeply familiar, and the subjects of massive efforts by philosophers (1, 2, 3), neuroscientists (5,6), physicists (7) and others. I propose to state each of these problems, then tackle them with two physical hypotheses: First, the mind is a quantum coherent-reversibly decohering-recohering system in the brain. Thus, following R. Penrose(8), I believe that consciousness is a problem, at least in part, of the physical basis subtending it. While the arguments I advance differ sharply from those of Penrose, and while he was strongly attacked for suggesting a quantum-consciousness connection, he was courageous, and did much to legitimize the 'C' word in serious scientific discussion. In this view I sharply differ from those who hope for an emergence of consciousness in a computational mind (3), whether comprised of chips, neurons, or water buckets.
The second physical hypothesis is scientifically and philosophically radical. The famous Turing-Church-Deutsch, TCD, principle(9, Bassett 2009 P.C.), states that any physical machine can be simulated to arbitrary accuracy on a universal Turing machine. This thesis is profoundly related to reductionism and the long-held belief, since Descartes, Newton, Einstein, Schrodinger, and Weinberg(10), that there is a 'Final Theory of Everything' at the base of physics, which explains all that unfolds in the universe by logical entailment. As we shall see, this view derives from Aristotle's analysis of scientific explanation as deduction: All men are mortal, Socrates is a man, thus, Socrates is mortal. As Robert Rosen rightly points out(11), with Newton, we have eliminated all but one of Aristotle's four causes, formal, final, material and efficient, retaining only efficient cause in science and mathematizing it as deduction. Thus, Newton's equations, in differential form, with initial and boundary conditions are 'solved' for the behavior of the system by integration, which is precisely deduction. This identity of efficient cause with deduction leads directly to the reductionist view held by Weinberg and others. There can be no unentailed events, so emergence is just wrong and there must be a final theory 'down there' from which all derives by entailment. As Weinberg famously says(10), the explanatory arrows all point downward, from societies to people to organs to cells to biochemistry to chemistry to physics and ultimately to particle physics and General Relativity, or perhaps String Theory(12). Turing-Church-Deutsch holds precisely the same view: it is algorithms all the way down, so entailment all the way up. In this view, the universe is a formalizable machine, and we who live in it are TCD machines. Then we, robot-like, can use the inputs from our sensors and calculate all we need to flourish, machines afloat in a machine universe.
But then, unfortunately, there is no selective advantage to conscious experience. Why then, did it evolve?
I will present four lines of reasoning and candidate evidence suggesting that reductionism is very powerful, but powerfully inadequate. I will thus argue that there can be no 'theory of everything' that can explain all that unfolds in the universe by logical entailment, hence that the universe and biosphere in their evolution are not machines, and that the Turing-Church-Deutsch principle does not hold (4, 13). In such a world, the evolutionary advantages of consciousness may be stunning, for if we cannot, in principle, calculate the behavior of a universe, biosphere, animal and human life that is partially lawless yet wonderfully non-random, then there may be a profound advantage to conscious experience. It is one way we can understand a partially lawless, non-random, hence non-calculable, universe, biosphere, and free-willed human life, and flourish in it.
I note at the outset that I think the scientific grounds for a quantum mind are presently weak, that it is, at present, an improbable scientific hypothesis, but that it is definitely not ruled out, as we shall see(4).
This article is organized in the following sections. Section 1 discusses dualism and its standard philosophy-of-mind problems. Section 2 discusses some facts about quantum mechanics needed for my discussion. Section 3 proposes answers to how the mind acts on the brain, answers that appear to follow from assuming the mind-brain system is quantum coherent, reversibly decohering to classicity for all practical purposes, FAPP, and returning to a quantum state. In Section 4, I take a first, inadequate step towards free will: it is free but not responsible. Section 5 sketches a physical theory for a quantum decohering-recohering mind-brain system, rather analogous to other theories which, however, do not consider reversible decoherence and recoherence. Section 6 is about possible steps towards a responsible free will. In Section 7, I consider several reasons why both reductionism and the Turing-Church-Deutsch principle are inadequate, reasons that open the conceptual door toward partial lawlessness, yet non-random becoming. Other scientists seem to be exploring similar ideas, as I describe (14, 15). In Section 8, I use lawlessness yet non-randomness as a hoped-for avenue to a responsible free will. In Section 9, I discuss why the failure of Turing-Church-Deutsch gives a powerful selective advantage to consciousness. If we and the universe are not TCD, then we cannot compute what will happen. Consciousness seems a sufficient evolutionary solution and is thus selectively advantageous. In Section 10, I confess that none of the above helps us understand the hard problem of qualia in themselves.
I hope the ideas in the article open new philosophic and scientific ground for our considerations.
1 Dualism and Its Familiar Problems
Descartes famously supposed mind stuff and material stuff, res cogitans and res extensa. Res extensa was conceived by Descartes as a machine, driven by Aristotle's efficient causes. We have held to the efficient cause view of the material world from Descartes to Newton to the present. As noted, it is the logical basis of reductionism and TCD. With Descartes, res cogitans, experience, hovered somehow in our brain/body and somehow nowhere. The immediate issue that arose for Descartes and all who have followed was: How does mind act on matter?
The standard form of this problem depends upon causal closure in the material world of efficient causes. Any event (classical physical event) must have a sufficient classical physical efficient cause. Thus there can be no first cause, and causal closure is required. Given this view, and the current Mind-Brain Identity theory, the standard concern is that brain events are sufficient causes of later brain events, and there is nothing left over for mind to do to affect the brain. Worse, there is no obvious way the mind, res cogitans on dualism, 'mind' in a mind-brain identity theory, could manage to act on brain.
You may respond: But on the mind-brain identity theory it is not legitimate to then separate 'mind' from 'brain' and ask how the former acts on the latter. They are identical by hypothesis. Yes, we can say the words, but we all experience qualia, inter alia with respect to other minds. How can our experiences act on matter on any view at all, including the Mind-Brain identity theory? As philosopher Michael Silberstein told me (Silberstein, M. 2007 P.C.): "But it will be said of the mind-brain identity theory: separate the mind aspect from the brain aspect. Now how does the mind act on the brain?" Then Silberstein repeated the arguments above about causal closure in brain stuff and nothing for mind to do, nor any way for mind to do it to brain and body.
The response to this apparent impasse is a retreat to epiphenomenalism: Mind does nothing, in fact, it does not act on brain, it is an epiphenomenon of brain. It is fair to say that no one likes this view.
The third problem, assuming classical matter for the brain and causal closure, is free will. How can we have free will if the world's becoming, like Newton's laws, is fully deterministic? Then we cannot have free will in truth. And since all our behaviors are determined, we cannot have morally responsible free will.
One response to this problem now prevalent is an appeal to deterministic chaos in the brain and the thought that only a tiny subset of neurons underpin conscious experience (5,6). Then infinitesimal alterations in initial conditions will lie on divergent trajectories with positive Lyapunov exponents, the butterfly will flap energetically, and we will have the illusion of free will. This view may well be true. But I want to argue that we do not need it.
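The chaos appeal rests on exponential divergence of nearby trajectories. A minimal sketch, using the logistic map at r = 4 as a stand-in for any deterministic chaotic system (a generic textbook example, not a neural model), shows a positive Lyapunov exponent amplifying an infinitesimal difference in initial conditions:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r = 4, where the Lyapunov exponent
# is ln 2 > 0. A generic chaotic system, not a brain model.

def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)   # initial difference of only 1e-9

# Early on the trajectories are indistinguishable; the gap grows
# roughly like exp(n * ln 2), so within a few dozen steps the two
# deterministic histories have completely decorrelated.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
```

Both runs are fully deterministic, yet no finite-precision measurement of the initial state could predict the later one: this is the sense in which chaos can mimic freedom.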
3 Some Quantum Facts
We are all familiar with the basics of quantum mechanics, including the familiar Copenhagen interpretation and Born rule, under which the time dependent Schrodinger equation propagates a wave of 'possibilities' whose amplitudes, when squared, yield the probabilities of a given quantum degree of freedom being measured in a classical apparatus setting. This view of quantum mechanics is, as we all know, fully acausal. There is no cause for the radioactive decay event that kills Schrodinger's cat, just bad luck for the cat. Beyond Copenhagen, we all know the Bohm and Many Worlds interpretations of quantum mechanics, neither of which is widely favored. I will base my discussion on Copenhagen/Born and more recent work.
The central topic of my concern will be 'decoherence' as an account of the emergence of the classical world, or, for purists, the classical world FAPP, for all practical purposes, from the quantum world (17). This has been well established in work by Leggett with a quantum system interacting with a quantum oscillator bath (16). More, decoherence is a well established experimental fact in quantum computing, where it destroys the quantum coherence needed for such computation (Sanders, B. 2008 P.C.).
To be more precise, quantum interference, for example in the two slit experiment, requires that all the phase information in the Schrodinger wave, or the sum over all possible histories in Feynman's formulation, arrive at the detector and interact by constructive or destructive interference. These interactions yield the famous interference effects of quantum mechanics that defy classical explanation.
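The amplitude arithmetic here can be made concrete with a toy calculation. In this sketch, which is entirely illustrative (idealized point slits, unit amplitudes, arbitrary geometry), each slit contributes a complex amplitude whose phase is fixed by its path length, and the detection probability is the squared magnitude of their sum:

```python
# Toy two-slit calculation: the detection probability at screen
# position y is |A1 + A2|^2, where each slit contributes a
# unit-amplitude wave with phase k*r set by its path length r.
# Idealized point slits; all geometry numbers are arbitrary.
import cmath, math

WAVELENGTH = 1.0
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0

def intensity(y):
    """Quantum rule: sum the amplitudes, then square the magnitude."""
    k = 2 * math.pi / WAVELENGTH
    r1 = math.hypot(SCREEN_DISTANCE, y - SLIT_SEPARATION / 2)
    r2 = math.hypot(SCREEN_DISTANCE, y + SLIT_SEPARATION / 2)
    amp = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(amp) ** 2

def classical_intensity(y):
    """Classical rule: add the probabilities themselves -> flat 2."""
    return 1.0 + 1.0

# At the screen's center the two path lengths are equal, so the
# amplitudes add constructively: |1 + 1|^2 = 4, twice the classical
# value; elsewhere the phases can cancel almost completely.
center = intensity(0.0)
```

The fringes, values near 4 alternating with values near 0, are exactly the interference that classical probability addition cannot produce; losing the relative phase to an environment erases them, which is the decoherence story taken up next.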
Decoherence requires considering a quantum or quantum + classical 'system' and its quantum or quantum + classical environment. The central idea is that quantum phase information is lost from the system to the environment, so the system loses the capacity to exhibit characteristically quantum interference phenomena. The system can approach classicity FAPP or, for some physicists, a classical mixed state governed by classical probabilities rather than quantum amplitudes that superpose.
It is essential to the discussion below that quantum decoherence, the loss of phase information, is not a causal process in any sense. Rather phase information, the heart of quantum possibility waves on Copenhagen and Born, is lost acausally from the system to the environment and typically cannot, in any practical way, be recovered.
The central implication of this is that decoherence constitutes the passage from the quantum world of possibilities to the actual classical (FAPP) world of physical events, and there is nothing causal in this passage.
Below I will explain possible physical embodiments of my hypothesis that the mind is quantum coherent, but reversibly, locally passing to decoherence and recoherence repeatedly. At this point I will say, however, that such reversible passage from a coherent 'entangled state' to the decoherent-classical (FAPP) and back is assured by Shor's theorem, which shows that the decohering quantum degrees of freedom of a quantum computer can be made to recohere by the injection of information into the now thermodynamically open system (21). More, Briegel has published two recent papers showing just such reversible passage from quantum entangled to classical and back, repeatedly (22, 23).
Reversibility of the coherent to decoherent-classical to recoherent quantum states is essential to my hypothesis, for I wish the brain to be undergoing such reversible transformations all the time. If we imagine the coherent spatially extended regions of the brain, as discussed below, to be pink, and the decoherent regions to be increasingly grey as decoherence sets in, I imagine a 3 dimensional volume in the brain where each pixel-volume waxes and wanes pink to grey to pink, somewhat like an fMRI temporal image.
4 How Does the Mind 'Act On' the Brain?
This question, which seems deeply difficult to answer for a classical brain, becomes easy to answer in the current framework: The quantum coherent-decohering-recohering mind does not act on the brain causally at all. Rather, by decohering to classical (FAPP) states, the quantum coherent mind has acausal consequences for the classical "meat" of the brain. No causality from res cogitans to res extensa is needed. Mind acausally has consequences for the classical states of the brain.
We may or may not hold a quantum theory of the mind-brain system to be scientifically plausible at this stage. Nevertheless, I claim that decoherence to classicity FAPP is a substantial candidate to answer our 350 year old question of how the mind 'acts on the brain'. It does not act on the brain causally. It decoheres and this alters the classical state(s) of the brain.
Many, notably Dennett (2), in Freedom Evolves, would disagree strongly with the need for such a quantum decoherent account. Whatever the merits of Dennett's views, however, they do not vitiate the possibility that a quantum decohering-recohering mind-brain may answer the question of how mind - acausally - has consequences for physical matter.
Next, how does either the purely quantum mind or the quantum coherent-decohering-recohering mind-brain system act on mind? A first order answer is Schrodinger's equation itself. Mind propagates quantum coherent time dependent Schrodinger waves unitarily. As we will see, this is actually not sufficient for a responsible free will, but it is a start, allowing mind to have acausal consequences for the temporal behavior of mind.
With this, we are freed from a retreat into the mind as purely epiphenomenon. Because we do not have to answer the familiar (classical physics) question of how mind acts efficient causally on brain, the issue of epiphenomenalism does not arise.
5 A Random Free Will
We now have a beginning of an answer to free will, but an inadequate one. If we take mind to be quantum coherent, then to decohere to classicity, and take this decoherence to be identical to the standard interpretation of Copenhagen and Born, where the 'collapse of the wave function' occurs upon classical measurement, then the Schrodinger equation gives the fully acausal, fully random probability of a quantum degree of freedom being measured with a specific value. In the older Copenhagen interpretation, the wave function collapses from all its possible values to a unique classically measured value.
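The Copenhagen/Born recipe is simple enough to write down as a sketch (a standard textbook construction, nothing specific to the mind-brain hypothesis): amplitudes squared give outcome probabilities, and 'collapse' is a single acausal draw from that distribution.

```python
# Minimal Born-rule sketch: a quantum state assigns a complex
# amplitude to each measurement outcome; the probability of observing
# an outcome is the squared magnitude of its amplitude. 'Collapse'
# is then one random draw from that distribution, with no cause for
# which outcome occurs.
import random

def born_probabilities(amplitudes):
    """Map complex amplitudes to outcome probabilities (Born rule)."""
    norm = sum(abs(a) ** 2 for a in amplitudes)
    return [abs(a) ** 2 / norm for a in amplitudes]

def measure(amplitudes, rng=random):
    """One 'collapse': sample an outcome index with Born-rule weights."""
    probs = born_probabilities(amplitudes)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# An equal superposition of two outcomes, like the undecayed/decayed
# atom behind Schrodinger's cat: each is measured half the time.
amps = [1 / 2 ** 0.5, 1j / 2 ** 0.5]
probs = born_probabilities(amps)
```

Note what the sketch makes vivid: the recipe fixes the statistics exactly, yet says nothing at all about why any single run comes out one way rather than the other. That is the acausality the argument leans on.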
Then since this process is acausal, we do not confront in the quantum realm the issue of classical causal closure, so can have a 'free will'. This is a start, but not adequate.
The inadequacy of this start of a theory of free will is that this free will is not responsible. Here is the issue: If the mind causally and deterministically determines the brain and our actions, then we do not have free will. Conversely, if the determination of our actions by an acausal quantum mind is simply randomly probabilistic, then again, we are not responsible for our actions. We just randomly happen to kill the old man in the wheelchair.
This is a very deep problem. Attempting to address it will require most of the rest of this article.
6 A Physical Theory of the Quantum Mind-Brain
I begin with old and new opinions and facts. Had one asked a physicist twenty or even ten years ago if the human brain could exhibit quantum coherent phenomena, the response, after laughter, would have been that thermalization would have destroyed any vestige of quantum coherence, so the answer was 'No'.
It is therefore astonishing and important that the chlorophyll molecule, surrounded by its evolved 'antenna protein', has recently been shown to be quantum coherent for almost a nanosecond. The normal time scale for decoherence is on the order of 10 to the -15 second, a femtosecond. Yet these experiments, carried out at 77K but thought to apply to chlorophyll in plants at ambient temperature, show quantum coherence of an absorbed photon traveling to the reaction center for over 700 femtoseconds, the length of their longest trial (24). No one expected this. The authors believe that the quantum coherence dramatically increases the quantum efficiency of the energy gathering process in photosynthesis. More, they believe that the evolved antenna protein either suppresses decoherence or induces recoherence. No one knows at present. It seems safe to conclude that quantum coherence on the order of a billionth of a second, a nanosecond, is possible and observable at body or ambient temperature. The evolved role of the antenna protein is testable by mutating its sequence.
The time scale of neural activities is a million times slower, in the millisecond range. But light crosses the brain in well under a nanosecond, comparable to the coherence time just described, so if there were a dispersed quantum decohering-recohering mind-brain, reaching the millisecond range is probably within grasp of a quantum theory of the mind-brain system.
The second recent fact, now widely studied by quantum chemists working on proteins, is that quantum coherent electron transfer within and between proteins is possible and almost certainly real. Because two proteins may coordinate two water molecules, and the electron can pass between the proteins by two pathways, in analogy with the two slit experiment, quantum interference can happen (Salahub, D. 2009 P.C.).
The next fact is that calculations of electrical conductivity between neighboring proteins as a function of the distance between them show a plateau between 9 and 14 angstrom separation. The author, David Beratan (26), believes that this plateau reflects quantum coherent electron transfer at this separation, about right to coordinate a few water molecules between the proteins. More, quantum coherent electron transfer occurs within proteins.
Now electrons are only one kind of quantum degree of freedom that may transport within and between nearby complex molecules.
The next fact of importance is that the cell is densely crowded with macromolecules. I do not know the distribution of distances between them, but it is on the order of dozens of angstroms, probably just enough to admit and coordinate the locations of one or more water molecules that can then support quantum coherent electron transport. This is open to experimental investigation, including the effects of osmotic alteration - swelling or shrinking cells by uptake or removal of water - on electron transport in cells. Such shrinkage or swelling could push the separations outside the 9-14 angstrom window needed for quantum coherent electron transport, hence be visible experimentally.
These facts raise the theoretical possibility that a percolating connected web of quantum coherent-decohering-recohering processes could form among and between the rich web of packed molecules in a cell, let alone its membrane surfaces.
Hameroff and Penrose (27) have suggested the microtubules forming the cytoskeleton of cells as loci of coherent quantum behavior. Penrose (8) has suggested that quantum gravity may play a role in the transition to classicity. Others have suggested a variety of molecular bases for extended molecular structures that might support quantum coherent behavior (28, 29). As far as I know, I am the only investigator proposing a quantum coherent-decohering-recohering model of the mind-brain system (4).
In short, we can imagine a physical substrate in cells that could carry a quantum recohering-decohering, pink and grey, process within and between cells.
My own view of the above is that it remains scientifically unlikely, but given the chlorophyll results and quantum chemistry calculations on electron transport, not impossible at all.
7 Possible Steps Towards a Responsible Free Will
I begin with the comment that Aristotle considered four causes, formal, final, material and efficient. In a simple example of a house, the formal cause of the house is the design of the house, the blueprint. The material causes are the bricks and mortar and beams. The final cause is my responsible free willed decision to build the house. The efficient cause is the actual process of building the house.
Aristotle also offered an account of scientific explanation: the syllogism. All men are mortal. Socrates is a man. Therefore Socrates is mortal. Feel the logical force of the conclusion. It underpins our sense that natural law governs the universe rather than merely compactly describing its regularities.
As noted, Rosen (11) points out that with Newton's laws, initial and boundary conditions and differential equations, Aristotle's maxim for scientific explanation as a deduction snaps into place, for integration of Newton's differential equations constitutes precisely deduction. More, as Rosen rightly points out, deduction and integration of differential equations become the complete mathematization of efficient cause. All other Aristotelian causes were banished from science. This banishment, this view that all that happens in the universe is to be explained by deduction, lies at the base of our long love of reductionism, Weinberg's dream of a final theory (10), and current string theory (12). If all explanation is by logical entailment, then we reason that there must be a final theory of everything at the base of physics that logically entails all that unfolds in the universe. The Turing-Church-Deutsch principle is in full harmony with this: It is algorithms computing functions, or deducing from laws, all the way down. As Descartes hoped, we live in a machine universe and are, res extensa, living machines. No need for conscious experience, then; just take in data and compute your world and response to it, like a robot seeking an electric plug to get its battery recharged.
But there are clouds on the reductionist horizon. Physicist Stephen Hawking recently published an article, "Godel and the End of Physics"(30), arguing that it may be the case that no finite set of efficient cause laws will describe the becoming of the universe, including mind. There may be no finite Theory of Everything.
When a looming crisis such as this arises, it may be wise to question our fundamental assumptions. One of these is our sole reliance in physics on efficient cause laws.
I therefore now want to raise four issues that will take some time. First, should we trust the 350 or 2500 year old belief that all that unfolds in the universe is due to efficient causes? Thereafter I will raise a second issue: Does the becoming of the biosphere by Darwinian preadaptations admit of a sufficient efficient cause law description? Third, does the quantum-classical world evolve according to a law? If not, does an abiotic natural selection and blind final cause play a role in physics given a reversible quantum-classical process? Is this process lawless and random, or lawless and non-random? Fourth, in considering whether the quantum-classical world co-evolves according to law, is there reason outside of a reversible quantum-classical process to doubt that the total process is lawful, and if it is not, must it be random, or can it be non-random? Later I will try to find an opening for a responsible free will in a possible efficient cause lawless yet non-random evolution of the quantum-classical world. If true, we will find both the possibility of free will and the non-probabilistic character needed, I hope, for a responsible free will.
8 Reductionism and the Turing-Church-Deutsch Principle are Inadequate
8.1 Blind Final Causes
First I need to discuss the concept of a Darwinian adaptation. Philosopher David Depew recently remarked that an adaptation, once achieved, is a "blind teleology" (31). This is meant in just the same sense as Dawkins's "The Blind Watchmaker" (31.5). Darwin gave us a startling idea: the appearance of design could arise without a designer. Thus, Depew envisions no designer, hence "blind teleology".
Now I ask, can we speak of the opportunity for an adaptation before it occurs? Consider an organism that is not light sensitive, and an offspring with a red cell that is light sensitive and that constitutes an adaptation. I translate 'A is an opportunity for an adaptation' as 'A is possible. A may or may not occur. If A occurs, it will tend to be selected and go to fixation in the population.' Note that 'tend to go to fixation' is a dispositional term, and is not open to reduction by translation into any set of necessary and sufficient actual physical events. Thus, the achievement of the adaptation, in which the red celled organism is selected to fixation, arises by a sequence of perfectly good actual efficient causes. But because we cannot prestate necessary and sufficient efficient causes that achieve an adaptation, we cannot have an efficient cause law for how the adaptation will, in fact, be achieved. Thus, the opportunity for the adaptation itself is not an efficient cause. It is, instead, a blind final cause.
This is an essential conclusion. I give two examples, one economic, one biological. In the 1980s, in North America, there were lots of television stations, programming, television sets and, of course, couch potatoes. In this economic 'niche', could one hope reasonably to make money inventing the television remote channel changer? Of course. And money was made on the invention. Now were the television stations, programming, television sets and couch potatoes efficient causes of the invention of the TV remote? No. These conditions are what I will call 'enabling constraints' or enabling conditions, constituting an economic niche into which the TV remote 'fit' and flourished. This is a case, assuming responsible free will (our central issue, of course), of Aristotle's final cause, and requires consciousness.
But the same issue arises in the evolution of the biosphere. Species form niches into which other species 'fit'. New species evolve and create new niches into which yet more new species evolve and fit. For example rabbits 'make a living' in a 'rabbit niche', even if that niche is hard to define precisely. (So too is the TV remote economic niche hard to define precisely.)
Do we think that the rabbit niche is an efficient cause of the evolution of rabbits? No! The rabbit niche is an enabling constraint, or enabling condition, that enabled rabbits to evolve, be selected and flourish. Here there is no thought of conscious decision as above with the TV remote. Rather, we confront Depew's Blind Teleology and what I want to call 'Blind Final Cause'. This conclusion is essential, for the rabbit niche did not cause the rabbit by efficient causes - the efficient causes were the actual events that tended, the dispositional term again, to lead to the selection of rabbits that made a living in the rabbit niche.
But this conclusion means that our reliance on efficient causes as the sole explanation for the unfolding of the universe, or at least the biosphere that is part of the universe, is wrong. Darwin told us so. The selective conditions constitute the enabling conditions which are the Blind Watchmaker. But in turn, this frees us from the ancient conviction in Western thought that explanation in science can only be in terms of efficient causes - mathematized as deduction, hence reductionism.
One has only to talk to a paleontologist, or better, an historian, to realize that neither seeks to understand the facts of the world, what happened, in terms of laws and deduction. Realizing the fundamental role of blind final cause in the biosphere, let alone full teleological final cause, assuming responsible free will, means that there is no Theory of Everything 'down there', nor is all that unfolds in the universe the deductive consequence of such a Final Weinbergian Theory. It will take a long time, assuming the above is correct, to understand its full implications.
8.2 Darwinian Preadaptations Cannot be Described by Sufficient Efficient Cause Law (4, 13)
Were we to ask Darwin the function of the human heart, he would say it is to pump blood. But we might object that the heart makes heart sounds and moves water in the pericardial sac. Darwin would say that these are not the function of the heart, pumping blood is, because the heart was selected, so exists as a complex organized structure and functional system in the universe, in order to pump blood. It was of selective advantage.
This is the familiar Darwinian Blind Watchmaker adaptation.
But Darwin also noted that a causal property of an organism of no selective use in the current environment might be of use in a new selective environment, hence be selected. Typically a new function will come to exist. These are called 'exaptations' or Darwinian preadaptations. There is no thought of evolutionary foresight here.
I give two biological examples. Some fish have swim bladders; the ratio of air and water in the sac adjusts neutral buoyancy in the water column. Paleontologists believe that swim bladders arose from the lungs of lung fish: water got into some lungs, and there was now a sac with air and water, poised to evolve into a swim bladder. Assume the paleontologists are correct.
Two initial questions arise: Did a new function come to exist in the biosphere? Yes, neutral buoyancy in the water column. Did this affect the future evolution of the biosphere? Of course: new species, proteins, niches.
The second example concerns the three middle ear bones of mammals. These evolved by preadaptation from adjacent jaw bones of early vertebrate ancestors. This example is important because relational degrees of freedom matter. Were one bone in the skull, one in the spine, and one in the jaw, hearing bones would probably not have evolved.
Now I ask the same two questions. Did a new function come to exist in the biosphere? Yes, hearing. Did this alter the further evolution of the biosphere? Yes, new species, proteins, niches.
Now I come to my critical third question: Do you think you could prestate all possible Darwinian preadaptations for all organisms alive now? Well, we don't know all organisms alive now, so I simplify: Could you prestate all possible preadaptations just for humans?
I've now asked thousands of people. We all agree the answer is 'No'. Part of the reason we seem unable to accomplish this task is this: How would we list all possible selective conditions? How would we know we had completed the list? How would we prestate the one or many relational features of one or several organisms that might become preadaptations? We all feel utterly stymied. We have no way even to start on this task, let alone complete it.
I now introduce the 'Adjacent Possible'. Consider 1000 chemical species in a beaker, and call them the Actual. Let them react by a single reaction step. If new species of molecules are formed, call these the Adjacent Possible of the initial Actual. This is perfectly defined, given a minimum stable lifetime of a species and standard reaction conditions.
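The definition is concrete enough to compute in a toy model. In this sketch, everything is illustrative: 'molecules' are strings, the one allowed reaction is concatenation, and 'stability' is just a length cap; none of it is chemistry. The Adjacent Possible is the set of one-step products not already in the Actual:

```python
# Toy Adjacent Possible. 'Molecules' are strings, the single
# reaction is concatenation, and a product counts as 'stable' only
# if it is short enough. Purely illustrative, not chemistry.

def react(a, b, max_len=6):
    """Illustrative one-step reaction: join two molecules, if stable."""
    product = a + b
    return product if len(product) <= max_len else None

def adjacent_possible(actual):
    """All stable one-reaction-step products not already in the Actual."""
    out = set()
    for a in actual:
        for b in actual:
            p = react(a, b)
            if p is not None and p not in actual:
                out.add(p)
    return out

actual = {"ab", "ba", "abba"}
step1 = adjacent_possible(actual)
# Once the first Adjacent Possible becomes Actual, a new and larger
# Adjacent Possible opens up beyond it:
step2 = adjacent_possible(actual | step1)
```

Even in this trivial world, each realized step enlarges what can happen next; the point of the section is that for the biosphere, unlike this toy, we cannot write the analogue of react() in advance.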
I now point to the Adjacent Possible of the Biosphere. Once there were lung fish, swim bladders were in the adjacent possible of the biosphere. Before there were multicelled organisms, swim bladders were not in the adjacent possible of the biosphere.
Now let us see what we have agreed to, unless you think you really can name all human preadaptations. What we have agreed to is that we do not know all the possibilities in the adjacent possible of the biosphere! Not only do we not know what will happen, we do not even know what can happen.
The next point concerns probability statements about the evolution of the biosphere by Darwinian preadaptations. Consider flipping a fair coin 10,000 times. It will come up heads about 5000 times with a binomial distribution. But note that we knew ahead of time all the possible 2 to the 10,000th power outcomes, all heads, all tails and so on. We knew all the possibilities, or the sample space, so could construct a frequency interpretation of probability measure over the space.
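The arithmetic behind the coin example is worth a quick check (standard binomial facts, nothing specific to this essay): the sample space is known in advance and finite, which is exactly what licenses a frequency probability measure.

```python
# The coin example in numbers. The sample space of 10,000 flips is
# known beforehand: exactly 2**10000 equally likely head/tail
# sequences, so a frequency probability measure exists over it.
# The number of heads is Binomial(10000, 1/2).
import math

n = 10_000
sample_space_size = 2 ** n              # every possible sequence
mean_heads = n // 2                     # about 5000 heads expected
sd_heads = math.sqrt(n * 0.5 * 0.5)     # standard deviation of 50

# Even the single most likely count, exactly 5000 heads, is rare:
# P(5000 heads) = C(10000, 5000) / 2**10000, roughly 0.8%.
p_most_likely = math.comb(n, n // 2) / sample_space_size
```

The contrast with preadaptations is exactly here: for the coin we can write down sample_space_size; for the biosphere we cannot, so no such measure can be constructed.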
But we do not know the set of possible Darwinian preadaptations, the sample space, so cannot construct a probability measure.
Laplace had a different version of probability. If confronted by N doors, behind one of which was a treasure, with no further information, the chance that we pick the right door, he said, is 1/N. But note that we know N, the number of doors. We do not know N for the biosphere so cannot construct a probability measure for the evolution of the biosphere by Darwinian preadaptations.
Worse, if a natural law is a compact description of the regularities of a process, can we have a sufficient natural law for the emergence of swim bladders? No. We cannot even state the possibility, let alone the probability, let alone have a description of the regularities of a process. So the becoming of the biosphere by Darwinian preadaptations is partially beyond natural law.
This is a major conclusion: We cannot have sufficient natural law for the evolution of the biosphere by Darwinian preadaptations. Yet such preadaptations are common in the biosphere, let alone the economy, cultural evolution and history. But if this is true, then there can be no final Theory of Everything from which all that unfolds in the universe is logically entailed. With it, the Turing-Church-Deutsch thesis is very strongly weakened. No algorithm will simulate the evolution of the biosphere with all the quantum events that did or might have happened. Nor could we confirm which simulation was correct. And by the above argument, the becoming of the biosphere by Darwinian preadaptations is not entailed by any Theory of Everything.
In its place is a vast creativity in which blind final cause, opportunities for adaptation, and unstatable Darwinian preadaptations partially alter how the biosphere evolves.
It is critical that we have here a process that is partially lawless, yet also is not random! The swim bladder and TV remote succeeded in their contexts. Again, the actual process is not describable by a sufficient natural law, but is also not random. We do not have this concept in our physics or our philosophy. It bears, I think, on a responsible free will. For we have here a partially lawless but non-random becoming. We are no longer trapped by deterministic efficient cause law, including deterministic chaos, versus 'merely random probabilistic' views of mind and brain. The success of the swim bladder and TV remote are not merely random probabilistic chance. We have, for the first time since Descartes, new freedom of intellectual maneuver.
What does this process of biological evolution say to entailment from a theory of everything? No. And what does it say to the TCD thesis? No.
8.3 Reversible Decoherence and Recoherence are Partially Lawless and May Be Subject to Abiotic Natural Selection and Blind Final Cause
I now discuss a controversial topic. I wish to build my case for a quantum coherent-decohering-recohering responsible free will. I base the transition to classicity on decoherence. Is it lawful? I argue no, based on a position advocated by Karl Popper in his The Open Universe (32). Popper uses his argument to support indeterminacy, hence his Open Universe. I too argue for an Open Universe elsewhere, on Popper's grounds and on some of the grounds given above and below (4, 33).
Popper considers the setting of special relativity. An event A has a past light cone and a future light cone, separated by a zone of possible simultaneity. B is an event in the future light cone of A, so it has its own past light cone that includes all of A's past light cone, but parts of B's past light cone are space-like separated from A's past light cone. It follows that at event A, an observer cannot know the parts of B's past light cone outside of A's light cone. Yet the events in this zone, outside of A's past light cone and within B's past light cone, can influence the event B. So if an efficient cause law for B is to be constructed by the observer, that observer cannot construct it prior to event B. For the situated observer at event A, and before event B, no efficient cause law describes the event B; such a law is unknowable and unconstructable by an observer at A and before B.
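Popper's geometry can be written down directly. A minimal sketch in 1+1 dimensions with c = 1 (the coordinates and events are arbitrary illustrations): the region doing the work in the argument is the set of events inside B's past light cone yet space-like separated from A, hence unknowable at A.

```python
# Light-cone bookkeeping for Popper's argument, in 1+1 dimensions
# with units where c = 1. An event is a (t, x) pair; all coordinate
# values below are arbitrary illustrations.

def can_influence(e, apex):
    """True if event e lies in the past light cone of apex (c = 1)."""
    dt = apex[0] - e[0]
    dx = abs(apex[1] - e[1])
    return dt > 0 and dt >= dx

def spacelike(e1, e2):
    """True if no signal at speed <= c can connect e1 and e2."""
    return abs(e1[0] - e2[0]) < abs(e1[1] - e2[1])

A = (0.0, 0.0)
B = (2.0, 0.0)    # B lies in A's future light cone
E = (0.5, 0.9)    # Popper's region: E can influence B, yet is
                  # space-like separated from A, so unknowable at A
```

An observer at A, however well informed, cannot include E in any law constructed before B; yet E can shape what happens at B. That is the lawlessness claim in its sharpest form.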
I now translate this to the decoherence setting. Picture two classical (or quantum) detectors retreating from one another at uniform velocity, the special relativity setting. Now consider a complex organic molecule in a dense mixture of such molecules. A pair of entangled particles is emitted by the organic molecule, event A, and flies off, say at the speed of light. Some time later one or both particles are detected by the two detectors, event B. Then at the event A (and before the B event), of the leaving of the entangled particles from the molecule in question, it is impossible to know what events outside the past light cone of A, but inside the past light cone of B, may influence the B event. But that B event is instantaneously correlated by EPR and may affect the decoherence of molecule A. For example, the shape of the electron cloud and the positions of the nuclei may be affected, falling into one of two alternative decoherent potential wells. Thus, Popper's construction implies that there is no detailed law for decoherence. There is no efficient cause law, or function, mapping from the space-time region including A and stopping before B, but including the retreating detectors, into the future at and after event B. But a law is supposed to be a compact description of the regularities of a process, available, like Newton's laws, before, during and after the events unfold. Then there can be no such law or function.
But what are the moving detectors? Special Relativity becomes important at speeds near that of light, but is relevant at any speed of relative motion. Consider our molecular soup in a cell, crowded with molecules and macromolecules at body temperature, jiggling, folding and unfolding, moving relative to one another as quantum coherent electrons may pass between them. The relative motions are not constant, but Special Relativity still applies. Each event has a past and future light cone and a small but finite zone of possible simultaneity, small because the relative motions are slow. No efficient cause function, or law, I claim, describes detailed decoherence in cells. No law or function maps the space-time region including A, before B occurs, into what happens at B. If there is a lack of law, an absence of a function F that maps from A and its space-time region, including the moving detectors but before B, into a future which includes B, then it appears there can be no theory of everything which entails by deduction beforehand all that happens in the universe, and the TCD thesis is again weakened, and perhaps inadmissible in detail.
Obviously, this is a new line of thought. The critical implication that I hope is true is that a quantum decohering-recohering mind-brain identity will propagate trillions of these slightly lawless events. Then, the lawlessness but non-randomness can avalanche so that the longer term behavior of the brain is both lawless yet non-random, and can serve as a basis for a responsible free will, neither deterministic nor 'just random chance'. I return to this below.
No Law Describes the Details of Decoherence and Recoherence. Both Shor's theorem and Briegel's work imply that recoherence is possible. It may or may not be describable by a law. But if the quantum-classical world is reversible, and decoherence itself is without a detailed law available beforehand and constructible at A, then the total process cannot be lawful. So the total becoming of the quantum-classical world is beyond sufficient natural law. This seems to imply that no Theory of Everything will describe this becoming, and, as D. d'Lambert (D'Lambert, D. 2009 P.C.) pointed out to me, this seems to imply that the quantum measurement problem is insoluble.
With respect to the quantum mind/brain, this means that there is no efficient cause law for its detailed time evolution.
Possible Abiotic Natural Selection and Blind Final Cause at the Quantum-Classical Interface
If quantum to classical is reversible, and if some compositions of classical matter, in their quantum-classical environmental context, are more resistant to returning to the quantum world of mere possibilities, then they will be subjected to an abiotic natural selection in that selective environment, or niche. Thus an abiotic natural selection may apply at the quantum-classical interface in appropriate circumstances where the environment has a strong bearing on the decoherence process. It seems plausible that this is true in cells. If this is correct, the abiotic natural selection, like the Blind Watchmaker, creates environments that are opportunities, blind final causes, for the persistence of any bit of now classical, FAPP, matter. As that bit of matter evolves by adding or subtracting constituents, fitter variants would be expected to be found. Like blind final cause in the biosphere, we cannot prestate all the necessary and sufficient conditions of efficient causes that achieve such adaptations.
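The selection dynamic sketched above can be illustrated with a deliberately crude toy model (my own construction, not anything proposed in the text): classical FAPP configurations differ in how strongly they resist returning to the quantum world; those that persist are 'copied' with slight variation, and mean resistance ratchets upward with no biology involved.

```python
import random

def abiotic_selection(pop_size=200, steps=50, seed=1):
    """Toy abiotic selection: 'resistance' is each configuration's
    per-step chance of remaining classical rather than recohering."""
    rng = random.Random(seed)
    # initial configurations: modest resistance, drawn uniformly
    pop = [rng.uniform(0.1, 0.4) for _ in range(pop_size)]
    for _ in range(steps):
        # a configuration recoheres (vanishes) with probability 1 - resistance
        survivors = [r for r in pop if rng.random() < r]
        if not survivors:  # keep the toy model running if all vanish
            survivors = [rng.uniform(0.1, 0.4)]
        # refill the population by copying survivors with small variation
        pop = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.02)))
               for _ in range(pop_size)]
    return sum(pop) / len(pop)

mean_resistance = abiotic_selection()
print(round(mean_resistance, 2))  # climbs well above the initial mean of ~0.25
```

Nothing here replicates in the biological sense; differential persistence plus variation is enough for the mean resistance to rise, which is the point of the analogy.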
8.4 Quantum Decoherence and the Subsequent Behavior of the Quantum-Classical System are Lawless but not Random
In the standard quantum mechanics of, say, an electron in a classical box, the physicist uses the box as classical boundary conditions and solves for the probability distribution of the properties of the electron in the box. These boundary conditions enter the Hamiltonian of the total system.
Now I raise a new question. Suppose part of a complex quantum system, say the molecular soup in a cell, decoheres to classicity FAPP, and yet this decoherence is somewhat lawless by Popper's arguments above. Then, if we can ever say of the now classical part of the system that it alters the Hamiltonian of the remaining quantum system, a vexed question, we do not know in detail how the Hamiltonian changes, because we do not know in detail how the quantum system decohered, partially lawlessly. In short, a coherent quantum state propagates unitarily, preserving probability. But the decoherence process is dissipative: phase information is lost, and, by Popper's argument above, somewhat lawlessly. How can we know the detailed classical FAPP state, the positions of nuclei, for example, that arises? We cannot, so we cannot recompute the further behavior of the total system. It is somewhat lawless.
Another way of saying this is that, with decoherence, the system falls to a 'mixed' state where all the probabilities are now classical and drawn from some distribution, say of where the nuclei in the molecule are. But my claim is that we cannot know that mixed state probability distribution, for we do not know how decoherence happened. For all we know, the now classical probability distribution of the mixed state could be anything, including sharply peaked over a few alternatives. Again, the becoming of this system has no efficient cause function or law for its temporal evolution. Again this casts doubt on the capacity of a Theory of Everything to deduce by entailment all that unfolds in the universe. And it casts doubt on the Turing-Church-Deutsch principle of algorithms all the way down.
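For readers who want the textbook formalism behind 'mixed state' (standard decoherence theory, not the author's claim about its lawfulness): the off-diagonal coherences of a density matrix are damped away, and the surviving diagonal is the classical probability distribution. A minimal sketch, with amplitudes of my own choosing:

```python
import numpy as np

# A pure superposition: rho = |psi><psi| for psi with Born weights 0.7, 0.3
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)
rho_pure = np.outer(psi, psi.conj())

def decohere(rho, damping):
    """Damp off-diagonal 'coherences' by damping in [0, 1] (1 = full decoherence)."""
    out = rho.copy()
    mask = ~np.eye(rho.shape[0], dtype=bool)  # select off-diagonal entries
    out[mask] *= (1.0 - damping)
    return out

rho_mixed = decohere(rho_pure, damping=1.0)
classical_probs = np.real(np.diag(rho_mixed))
print(classical_probs)  # diagonal survives as classical weights, ~[0.7, 0.3]
```

The text's point sits on top of this formalism: if the damping process itself has no detailed law, the resulting diagonal distribution cannot be derived in advance, and for all we know it could be flat or sharply peaked.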
Remarkably, Conway and Kochen, in the Free Will Theorem (14) and the (Strong) Free Will Theorem (15), on entirely different arguments, reach much the same conclusions. "Some believe that the alternative to determinism is randomness, and go on to say that 'allowing randomness into the world does not really help understand free will'" ... "adding randomness also does not explain the quantum mechanical effects described by our theorem. It is precisely the semi-free (my emphasis) nature of twinned particles, and more generally of entanglement, that shows that something very different from classical stochasticism is at play here. Although the Free Will Theorem suggests to us that determinism is not a viable option, it nevertheless enables us to agree with Einstein that 'God does not play dice with the Universe'. In the present state of knowledge, it is certainly beyond our capabilities to understand the connection between the free decisions of particles and humans, but the free will of neither of these is accounted for by mere randomness (my emphasis) ... The import of the Free Will Theorem is that it is not only current quantum theory, but the world itself that is non-deterministic, so that no future theory can return us to a clockwork universe". Elsewhere (14): "Physical theories since Descartes have described the evolution of a state from an initial arbitrary or 'free state' according to laws that are themselves independent of space and time. We call such theories ... Free State Theories". But "no free state theory can exactly predict the results of twinned spin one experiments (my emphasis)". (In short, no function, F, maps the current state of the system into its future. My comment and emphasis.) "We shall see that it follows from the Free Will Theorem that no free state theory that gives a mechanism for reduction, and a fortiori no hidden variable theory (such as Bohm's), can be made relativistically invariant".
Thus, Conway and Kochen find grounds for lawlessness - no function maps the present to the future given a 'free state', and a non-random 'semi-free' nature of twinned particles. (My comment and emphasis). This too casts doubt on a Theory of Everything explaining all that unfolds by deductive entailment and doubt on the Turing-Church-Deutsch principle.
9 Responsible Free Will
The familiar problem of a responsible free will, to state it again, is this: if mind, or even mind acting on brain, is deterministic, then we have no free will, but perhaps the illusion that we do, for example via chaotic dynamics. A classical mind/brain, I note, also leaves us with the forever unsolved problem of how mind acts on matter. A quantum decohering-recohering mind does have consequences for matter, so it affords a solution to this 350 year old problem.
Conversely, in standard quantum mechanics, on the Copenhagen interpretation and the Born rule, with quantum degrees of freedom, there is only the Schrodinger equation possibility wave, amplitudes squared, and an acausal, fully probabilistic, random chance occurrence of an event, say the radioactive decay that kills Schrodinger's cat, as given by that equation. We obtain a free will, but only a random chance free will. Again there can be no notion of a responsible free will. Obviously this is insufficient.
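The Born rule invoked here is easy to state concretely: amplitudes squared give probabilities, and each individual event is then a bare random draw from them. A minimal sketch (the amplitudes and labels are mine, chosen for illustration):

```python
import random

# Two outcomes for Schrodinger's cat scenario, each with a complex amplitude
amplitudes = {"decayed": 0.6 + 0.0j, "not decayed": 0.0 + 0.8j}

# Born rule: probability = |amplitude|^2
probs = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}
print(probs)  # weights 0.36 and 0.64, summing to (approximately) 1

# Each individual event is a pure chance draw from those weights:
rng = random.Random(0)
outcome = rng.choices(list(probs), weights=list(probs.values()))[0]
print(outcome)  # an acausal random pick; no 'responsible' choice anywhere in it
```

This is exactly the sense in which chance alone yields only a 'random free will': the distribution is lawful, but the individual occurrence is mere dice.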
The discussion above has opened new conceptual avenues. In brief review: blind final cause, acting as enabling constraints or enabling conditions, can play a non-efficient-causal role in the evolution of the biosphere and, if I am right, at the quantum-classical reversible boundary with abiotic natural selection. In short, blind final cause frees us from full reliance on efficient cause and explanation by deduction, yet what happens is both partially lawless and non-random. This is surely true of the evolution of the biosphere. There seems no reason not to consider this lawless but non-random evolution of the quantum-classical boundary in a system as complex as the brain. In the case of blind final cause, biological adaptations in general, economic-technological development, and history, it seems that the process is partially beyond sufficient efficient cause natural law, yet, importantly, very much context dependent and non-random. Both the swim bladder and the TV remote were successfully 'selected' in their environments. We may hope that the same applies to possible abiotic natural selection at the quantum-classical boundary.
But we have an entire second line of consideration, without invoking abiotic natural selection. As just pointed out, the evolution of a quantum-classical reversible system can have no law for its becoming, because we do not know how the mixed state of classical probabilities acquires its distribution via lawless decoherence to classicity FAPP. Alternatively put, we do not know, after such lawless decoherence, how the Hamiltonian of the entire system changes. (I note that some physicists do not like this step at all, so caution is required.)
I comment that there are experimental tests for such lawlessness in two-slit-like experiments, as the complexity of the entities passed in beams through the slits increases. Anton Zeilinger (35) has shown that buckminsterfullerenes interfere. Presumably a stream of rabbits would not. At the complexity of objects where decoherence sets in, it should be possible to test whether that decoherence is fully lawful or yields unstable statistics, perhaps as the interference bands start to fade. Insofar as the lawlessness depends upon Special Relativity, as in Popper's argument, the speed of relative retreating motion of the classical detectors should be positively correlated with signatures of lawlessness.
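The fading of interference bands can be made concrete with a toy fringe model (my own illustration, not an actual experimental protocol): take the two-slit intensity as I(x) = 1 + V cos(2*pi*x/d), where the visibility V is the surviving coherence; V can be read back off a recorded pattern as (I_max - I_min)/(I_max + I_min), so unstable statistics in V would be the proposed signature of lawless decoherence.

```python
import math

def intensity(x, visibility, fringe_spacing=1.0):
    """Idealized two-slit pattern: unit background plus fringes of given visibility."""
    return 1.0 + visibility * math.cos(2 * math.pi * x / fringe_spacing)

def measured_visibility(visibility, samples=1000):
    """Recover fringe visibility from a sampled pattern via (Imax-Imin)/(Imax+Imin)."""
    xs = [i / samples for i in range(samples)]
    ys = [intensity(x, visibility) for x in xs]
    i_max, i_min = max(ys), min(ys)
    return (i_max - i_min) / (i_max + i_min)

# fullerene-like coherence, partial decoherence, and rabbit-like full decoherence
for v in (1.0, 0.5, 0.0):
    print(round(measured_visibility(v), 2))
# prints 1.0, 0.5, 0.0: the coherence parameter is recoverable from the pattern
```

In a real test one would compare the scatter of such visibility estimates across runs, beam densities, and detector speeds against the scatter ordinary quantum statistics predicts.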
More, if lawless decoherence depends upon the complexity of the quantum, or quantum plus classical, environment, then it is reasonable to assume that decoherence by loss of phase information would occur more readily in a 'dense' and complex quantum environment. If so, then at the complexity of objects where decoherence sets in, a dense 'beam' of objects would be expected to show more decoherence, and less lawfulness, than a rarefied beam. Conceivably, evidence for abiotic selection at the quantum-classical boundary could be found.
What we seek, based on a quantum coherent-decohering-recohering theory of mind and brain, is a use of these ideas to escape the familiar philosophic boxes. We now have two routes to lawlessness but non-random behavior at the quantum-classical boundary we can consider, either of which may provide the pathway to a responsible free will, rather than a merely 'random' free will: abiotic natural selection and no way to propagate the unknown mixed state distribution.
What we need is a way for what we can interpret as 'intentions' to shape the decoherence-recoherence process such that the classical happenings are altered, as are the quantum aspects of the total system. One natural role for intentions is as enabling constraints shaping the classical matter. One route is to influence abiotic natural selection through alterations in the quantum environment that select for resistance to return to the quantum world. In short, in the context of abiotic natural selection of classical degrees of freedom resistant to returning to the quantum world, the natural assumption is that the 'environment' of the system is itself a complex mixture of dense quantum and classical events. That environment shapes how a 'system' within it decoheres to classicity for all practical purposes, hence what occurs in the actual physical world. This environment then shapes the abiotic natural selection, which in turn alters further non-lawful but non-random decoherence and selection.
An alternative pathway rests on the lack of lawfulness of the mixed state classical probability distribution. This distribution can be lawless, because it arises by lawless decoherence, yet it may have very useful properties for an intending mind. For example, the probability distribution could become peaked over one or a few alternatives. Mind would then have shaped the becoming of the mind-brain quantum decohering-recohering system in a lawless yet non-random way.
On either of the accounts above, we seem to have the possibility of a responsible free will. This account is obviously only schematic at this stage of development.
10 Why Might Consciousness Be Selectively Advantageous?
This is a very hard problem. For most examples, an unconscious computerized robot would seem to do as well. Humphrey argues that humans are conscious because awareness 'enchants us' and so makes us fitter (36). It is an enchanting idea and may be right.
The fundamental argument that consciousness is not useful, however, rests on both reductionism and the Turing-Church-Deutsch principle. According to that principle, we live in a Cartesian machine universe, fully simulable to arbitrary accuracy on a universal Turing machine, and we too are Cartesian machines. Our sensors can pick up the environment and compute what they will, hopefully having been selected to be a useful set of sensors. But there is no advantage to being aware, to consciousness, to qualia.
What if TCD is, as I have argued, false? What if reductionism itself is false, as I have also argued? Then the universe is not a deductively entailed unfolding in its becoming, and no universal Turing machine in me can capture or simulate all of that partially efficient-cause-lawless but non-random becoming.
But if this is true, if the universe and we are not TCD, if reductionism is false, and all that happens is not entailed by a final theory down there which is 'simulable' to arbitrary accuracy, then there may be an enormous advantage to consciousness. If I am a responsible free willed tiger chasing a responsible free willed gazelle, I can 'see' what the gazelle is choosing freely to do and alter my behavior. But I cannot compute what the free willed gazelle will do.
In short, it seems to me that the putative non-TCD, non-reductionist character of the real universe, other life, animals and us, renders consciousness selectively advantageous. The degree to which consciousness is selectively advantageous depends upon how far we are from TCD and reductionism in the real universe and our lives. No one knows, of course, but this seems a fresh start to the problem of why consciousness evolved.
11 The Hard Problem, Qualia
Does any of the above help? I do not think so, at least not yet. It may be that it points to an avenue that might conceivably help someday, but as ever, we have no idea what consciousness 'is'. I cannot avoid one thought: reductionism is inherently third person, for deduction is mere logical entailment, verifiable by all of us in third person language. And we feel profoundly that 'objective knowledge' must be third person sharable. Is there some kind of clue here? All our knowledge of the world is inherently first person. Something big seems missing. As Strawson noted long ago (37), we can only be in the world as here-now oriented subjects, not objects. How trapped are we by reductionism into a third person 'knowing' view of the world? More, being in the world when we do not always know what can happen cannot be a matter only of reason or knowing. Reason and knowing are then insufficient guides to living our lives. How are we, then, in the world? Perhaps if we try to give up third person language as primary, objective, scientific, and focus on being in the world when we cannot know, that may help with the hard problem.
I have presented the mind-brain identity theory in the context of two physical theories. First, one in which a multiparticle quantum-classical system is capable of decohering reversibly to classicity, or classicity for all practical purposes. This allows mind to have consequences for brain without having to act by efficient cause on brain, and appears to resolve two outstanding problems in the philosophy of mind that have plagued us since Descartes: how mind 'acts on' matter - it does so acausally, via decoherence - and how mind acts on mind - via the quantum decohering-recohering dynamical behavior of the mind-brain identity system. Second, I have discussed both reductionism and the Turing-Church-Deutsch principle and find both inadequate. Part of this is the inadequacy of a purely efficient cause view of the unfolding of the biosphere and perhaps of the quantum-classical boundary, where I suggested, in a Special Relativity setting, that detailed decoherence is lawless: no function maps the present slice of space-time into its future. And I suggested an abiotic natural selection and a complex quantum-classical environment that shapes decoherence to classicity FAPP, where that environment acts as the intention that non-lawfully but non-randomly shapes the consequences of mind for brain and action. In the apparent failure of reductionism and TCD, we have new grounds both for a responsible free will and for an evolutionary advantage in evolving consciousness. We are not, on this view, machines, nor is the becoming of the universe a machine open to deductive inference. All this is quite radical and will need careful scrutiny. But it seems possible to test for lawlessness at the quantum-classical boundary, and if so, this article is both philosophy and genuine science.
This work was partially supported by an iCORE grant and a TEKKES grant.
(1) Dennett, Daniel, Consciousness Explained, Little, Brown and Co. Boston MA. (1991).
(2) Dennett, Daniel, Freedom Evolving, Viking, N.Y. (2003).
(3) Churchland, P. and Sejnowski, T. J., The Computational Brain (Computational Neuroscience), MIT Press, Cambridge MA, (1992).
(4) Kauffman, Stuart A., Reinventing the Sacred, Basic Books, N.Y. 2008.
(5) Crick, Francis, The Astonishing Hypothesis: The Scientific Search for the Soul, Charles Scribner's Sons, N.Y. (1994).
(6) Rees, G., Kreiman, G. and Koch, C., Neural correlates of consciousness in humans, Nature Reviews Neuroscience 3, 261-270, (2002).
(7) Stapp, H., Mind, Matter, and Quantum Mechanics, Springer Verlag, (1993).
(8) Penrose, R., The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics, Oxford University Press, N.Y. (1989).
(9) Deutsch, D., Quantum theory, the Church-Turing Principle, and the universal quantum computer, Proc. R. Soc. Lond. A vol 400. 97-117, (1985).
(10) Weinberg, S., Dreams of a Final Theory: the search for the fundamental laws of Nature, Pantheon Books, N.Y. (1992)
(11) Rosen, R., Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life, Columbia University Press, N.Y. (1991)
(12) Smolin, L., Three Roads to Quantum Gravity, Basic Books, N.Y. (2001).
(13) Kauffman, S. A., Investigations, Oxford University Press, NY (2000).
(14) Conway, J. and Kochen, S., The Free Will Theorem, http://arxiv.org/abs/quant-ph/0604079, (2006)
(15) Conway, J. and Kochen, S., The Strong Free Will Theorem, http://arxiv.org/abs/0807.3286, (2008)
(17) Stamp, P.C.E., The Decoherence Puzzle, Studies in History and Philosophy of Modern Physics, Vol 37, 467-497, (2006).
(18) Leggett, A.J., Macroscopic Realism: What Is It and What Do We Know About It From Experiment? "Quantum Measurement: Beyond Paradox" by Richard A.Healey, (Ed), Geoffrey Hellman (Ed) (Minnesota Studies in the Philosophy of Science) University of Minnesota Press, Chicago, Illinois, (1998)
(21) Shor, P.W., Scheme for reducing decoherence in quantum computer memory, Phys. Rev. A 52:4, R2493-R2496, (1995).
(22) Briegel, H. J. and Popescu, S., Entanglement and intra-molecular cooling in biological systems? A Thermodynamic Perspective, arXiv:0806.4552v1 [quant-ph], (2008).
(23) Cai, J., Popescu, S. and Briegel, H.J., http://arxiv.org/abs/0809.4906, (2008).
(24) Lee. H., Cheng, Y.C. and Fleming, G.R., Coherence Dynamics in Photosynthesis: Protein Protection of Excitonic Coherence, Science 316, 1462-1467, (2007).
(26) Lin, J., Balabin, I. and Beratan, D.N., The nature of aqueous tunneling pathways between electron-transfer proteins, Science, 1310-1313, (2005).
(27) Hameroff, S. and Penrose, R., Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness. In: Toward a Science of Consciousness - The First Tucson Discussions and Debates, Eds. S. Hameroff, A. Kaszniak, A. Scott, MIT Press, Cambridge, MA, (1996).
(28) Tegmark, M., The importance of quantum decoherence in brain processes, Phys. Rev. E 61, 4194-4206, (2000). arXiv:quant-ph/9907009.
(29) Mavromatos, N., Cell Microtubules as Cavities: Quantum Coherence and Energy Transfer?, arXiv:quant-ph/0009089, (2000).
(30) Hawking, S., Godel and the End of Physics, http://www.damtp.cam.ac.uk/strings02/dirac/hawking/, (2009).
(31) Depew, D., Lecture, STOQ-Vatican Conference of Evolution and Darwin, (2009).
(31.5) Dawkins, R., The Blind Watchmaker, Norton and Company, New York (1986)
(32) Popper, K., The Open Universe: An Argument for Indeterminism, Rowman and Littlefield, Lanham, MD, (1956).
(33) Kauffman, S. A., Toward a Post Reductionist Science: The Open Universe, arXiv:0907.2492, (2009).
(35) Zeilinger, A., On the Interpretation and Philosophical Foundations of Quantum Mechanics, http://www.quantum.univie.ac.at/zeilinger/philosop.html, (2001).
(36) Humphrey, N., Getting the Measure of Consciousness, Prog. of Theo. Physics. Sup. 173, 264-269, (2008).
(37) Strawson, P. F., Lectures Oxford University, (1961).