



2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

RODNEY A. BROOKS
Panasonic Professor of Robotics, MIT, and CTO, iRobot Corp; Author, Flesh and Machines

Computation as the Ultimate Metaphor

Our science, including mine, treats living systems as mechanisms at multiple levels of abstraction. As we talk about how one bio-molecule docks with another, our explanations are purely mechanistic, and our science never invokes "and then the soul intercedes and gets them to link up". The underlying assumption of molecular biologists is that their level of mechanistic explanation is ultimately adequate as a foundation for higher-level mechanistic descriptions such as physiology and neuroscience.

Those of us who are computer scientists by training, and I'm afraid many collaterally damaged scientists of other stripes, tend to use computation as the mechanistic level of explanation for how living systems behave and "think". I originally gleefully embraced the computational metaphor.

If we look back over recent centuries we will see the brain described as a hydrodynamic machine, as clockwork, and as a steam engine. When I was a child in the 1950s I read that the human brain was a telephone switching network. Later it became a digital computer, and then a massively parallel digital computer. A few years ago someone put up their hand after a talk I had given at the University of Utah and asked a question I had been waiting for, for a couple of years: "Isn't the human brain just like the World Wide Web?" The brain always seems to be one of the most advanced technologies that we humans currently have.

The metaphors we have used in the past for the brain have not stood the test of time.  I doubt that our current metaphor of the brain as a network of computers doing computations is going to stand for all eternity either.

Note that I do not doubt that there are mechanistic explanations for how we think, and I certainly proceed with my work of trying to build intelligent robots using computation as a primary tool for expressing mechanisms within those robots.

But I have relatively recently come to question computation as the ultimate metaphor, both for understanding living systems and as the only important design tool for engineering intelligent artifacts.

Some of my colleagues have managed to recast Pluto's orbital behavior as the body itself carrying out computations on the forces that apply to it. I think we are perhaps better off using Newtonian mechanics (with a little Einstein thrown in) to understand and predict the orbits of planets and other bodies. It is so much simpler.

Likewise we can think about spike trains as codes and worry about neural coding.  We can think about human memory as data storage and retrieval.  And we can think about walking over rough terrain as computing the optimal place to put down each of our feet.  But I suspect that somewhere down the line we are going to come up with better, less computational metaphors.  The entities we use for metaphors may be more complex but the useful ones will lead to simpler explanations.

Just as the notion of computation is only a short step beyond discrete mathematics, but opens up vast new territories of questions and technologies, these new metaphors might well be just a few steps beyond where we are now in understanding organizational dynamics, but they may have rich and far reaching implications in our abilities to understand the natural world and to engineer new creations.


ROBERT TRIVERS
Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements

The Science of Self-deception Requires a Deep Understanding of Biology

When I first saw the possibility (some 30 years ago) of grounding a science of human self-deception in evolutionary logic (based on its value in furthering deception of others), I imagined joining evolutionary theory with animal behavior and with those parts of psychology worth preserving. The latter I regarded as a formidable hurdle since so much of psychology (depth and social) appeared to be pure crap, or more generously put, without any foundation in reality or logic.

Now, after a couple of years of intensive study of the subject, I am surprised at the number of areas of biology that are important, if not key, to it, yet remain relatively undeveloped by biologists. I am also surprised that many of the important new findings in this regard have been made by psychologists and not biologists.

It was always obvious that when neurophysiology actually became a science (which it did when it learned to measure ongoing mental activity) it would be relevant to deceit and self-deception, and this is becoming more apparent every day. Also, endocrinology could scarcely be irrelevant, and Richard Wrangham has recently argued for an intimate connection between testosterone and self-deception in men, but the connections must be much deeper still. The proper way to conceptualize the endocrine system (as David Haig has pointed out to me) is as a series of signals with varying half-lives that give relevant information to organs downstream, and many such signals may be relevant to deceit and self-deception, and to selves-deception, as defined below.

One thing I never imagined was that the immune system would be a vital component of any science of self-deception, yet two lines of work within psychology make this clear. Richard Davidson and co-workers have shown that relatively positive, up, approach-seeking people are more likely to be left-brain activated (as measured by EEG) and to show stronger immune responses to a novel challenge (a flu vaccine) than are avoidance-oriented, negative-emotion (depression, anxiety), right-brained people. At the same time, James Pennebaker and colleagues have shown that the very act of repressing information from consciousness lowers immune function, while sharing information with others (or even a diary) has the opposite effect. Why should the immune system be so important, and why should it react in this way?

A key variable in my mind is that the immune system is an extremely expensive one—we produce a grapefruit-sized set of tissue every two weeks—and we can thus borrow against it, apparently in part for brain function. But this immediately raises the larger question of how much we can borrow against any given system—yes fat for energy, bone and teeth when necessary (as for a child in utero), muscle when not used and so on—but with what effects? Why immune function and repression?

While genetics is, in principle, important to all of biology, I thought it would be irrelevant to the study of self-deception until way into the distant future. Yet the 1980s produced the striking discovery that the maternal half of our genome could act against the paternal, and vice versa, discoveries beautifully exploited in the 1990s and 2000s by David Haig to produce a range of expected (and demonstrated) internal conflicts which must inevitably interact with self-deception directed toward others. Put differently, internal genetic conflict leads to a quite novel possibility: selves-deception, equally powerful maternal and paternal halves selected to deceive each other (with unknown effects on deception of others).

And consider one of the great mysteries of mental biology. The human brain consumes about 20% of resting metabolic rate come rain or shine, whether depressed or happy, asleep or awake. Why? And why is the brain so quick to die when deprived of this energy? What is the cellular basis for all of this? How exactly does borrowing from other systems, such as immune, interact with this basic metabolic cost? Biologists have been very slow to see the larger picture and to see that fundamental discoveries within psychobiology require a deeper understanding of many fundamental biological processes, especially the logic of energy borrowed from various sources.

Finally, let me express a surprise about psychology. It has led the way in most of the areas mentioned, e.g. immune effects, neurophysiology, brain metabolism. Also, while classical depth psychology (Freud and sundries) can safely be thrown overboard almost in its entirety, social psychology has produced some very clever and hopeful methods, as well as a body of secure results on biased human mentation, from perception, to organization of data, to analysis, to further propagation. Daniel Gilbert gives a well-appreciated lecture in which he likens the human mind to a bad scientist, guilty of everything from biased exposure to data and biased analysis of information to outright forgery. Hidden here is a deeper point. Science progresses precisely because it has a series of anti-deceit-and-self-deception devices built into it, from full description of experiments permitting exact replication, to explicit statement of theory permitting precise counter-arguments, to the preference for exploring alternative working hypotheses, to a statistical apparatus able to weed out the effects of chance, and so on.


LAURENCE C. SMITH
Professor of Geography, UCLA

Rapid climate change

The year 2007 marked three memorable events in climate science: release of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4); a decade of drought in the American West and the arrival of severe drought in the American Southeast; and the disappearance of nearly half of the polar sea ice floating over the Arctic Ocean. The IPCC report (a three-volume, three-thousand-page synthesis of current scientific knowledge written for policymakers) and the American droughts merely hardened my conviction that anthropogenic climate warming is real and just getting going — a view shared, in the case of the IPCC, a few weeks ago by the Nobel Foundation. The sea-ice collapse, however, changed my mind about the assumption that it will be decades before we see the real impacts of the warming. I now believe they will happen much sooner.

Let's put the 2007 sea-ice year into context. In the 1970s, when NASA first began mapping sea ice from microwave satellites, its annual minimum extent (in September, at summer's end) hovered close to 8 million square kilometers, about the area of the conterminous United States minus Ohio. In September 2007 it dropped abruptly to 4.3 million square kilometers, the area of the conterminous United States minus Ohio and all the other twenty-four states east of the Mississippi, as well as North Dakota, Minnesota, Missouri, Arkansas, Louisiana, and Iowa. Canada's Northwest Passage was freed of ice for the first time in human memory. From the Bering Strait, where the U.S. and Russia brush lips, open blue water stretched almost to the North Pole.

What makes the 2007 sea-ice collapse so unnerving is that it happened too soon. The ensemble averages of our most sophisticated climate model predictions, put forth in the IPCC AR4 report and various other model intercomparison studies, don't predict a downward lurch of that magnitude for another fifty years. Even the aggressive models — the National Center for Atmospheric Research (NCAR) CCSM3 and the Centre National de Recherches Meteorologiques (CNRM) CM3 simulations, for example — must whittle ice until 2035 or later before the 2007 conditions can be replicated. Put simply, the models are too slow to match reality. Geophysicists, accustomed to non-linearities and hard to impress after a decade of 'unprecedented' events, are stunned by the totter: apparently, the climate system can move even faster than we thought. This has decidedly recalibrated scientists' attitudes — including my own — to the possibility that even the direst IPCC scenario predictions for the end of this century — global sea levels 10 to 24 inches higher, for example — may be prudish.

What does all this say to us about the future? The first lesson is that rapid climate change — a nonlinearity that occurs when a climate forcing reaches a threshold beyond which little additional forcing is needed to trigger a large impact — is a distinct threat not well captured in our current generation of computer models. This situation will doubtless improve — as the underlying physics of the 2007 ice event and others such as the American Southeast drought are dissected, understood, and codified — but in the meantime, policymakers must work from the IPCC blueprint, which seems almost staid after the events of this summer and fall. The second lesson is that it now seems probable that the northern hemisphere will lose its ice lid far sooner than we ever thought possible. Over the past three years experts have shifted from 2050, to 2035, to 2013 as plausible dates for an ice-free Arctic Ocean — estimates at first guided by models, then revised by reality.

The broader significance of vanishing sea ice extends far beyond suffering polar bears, new shipping routes, or even development of vast Arctic energy reserves. It is absolutely unequivocal that the disappearance of summer sea ice — regardless of exactly which year it arrives — will profoundly alter the northern hemisphere climate, particularly through amplified winter warming of at least twice the global average rate. Its further impacts on the world's precipitation and pressure systems are under study but are likely significant. Effects both positive and negative, from reduced heating-oil consumption to outbreaks of fire and disease, will propagate far southward into the United States, Canada, Russia and Scandinavia. Scientists have expected such things would come eventually — but in 2007 we learned they may already be upon us.


LEE M. SILVER
Professor of Molecular Biology and Public Policy, Woodrow Wilson School, Princeton; Author, Challenging Nature


"If we could just get people to understand the science, they'd agree with us." Not.

In an interview with the New York Times, shortly before he died, Francis Crick told a reporter, "the view of ourselves as [ensouled] 'persons' is just as erroneous as the view that the Sun goes around the Earth. This sort of language will disappear in a few hundred years. In the fullness of time, educated people will believe there is no soul independent of the body, and hence no life after death."

Like the vast majority of academic scientists and philosophers alive today, I accept Crick's philosophical assertion — that when your body dies, you cease to exist — without any reservations. I also used to agree with Crick's psychosocial prognosis — that modern education would inevitably give rise to a populace that rejected the idea of a supernatural soul. But on this point, I have changed my mind.

Underlying Crick's psychosocial claim is a common assumption: the minds of all intelligent people must operate according to the same universal principles of human nature. Of course, anyone who makes this assumption will naturally believe that their own mind-type is the universal one. In the case of Crick and most other molecular biologists, the assumed universal mind-type is highly receptive to the persuasive power of pure logic and rational analysis.

Once upon a time, my own worldview was similarly informed. I was convinced that scientific facts and rational argument alone could win the day with people who were sufficiently intelligent and educated. To my mind, the rejection of rational thought by such people was a sign of disingenuousness to serve political or ideological goals.

My mind began to change one evening in November 2003. I had given a lecture at a small liberal arts college along with a member of The President's Council on Bioethics, whose views on human embryo research are diametrically opposed to my own. Surrounded by students at the wine and cheese reception that followed our lectures, the two of us began an informal debate about the true meaning and significance of changes in gene expression and DNA methylation during embryonic development. Six hours later, long after the last student had crept off to sleep, it was 4:00 am, and we were both still convinced that with just one more round of debate, we'd get the other to capitulate. It didn't happen.

Since this experience, I have purposely engaged other well-educated defenders of the irrational, as well as numerous students at my university, in spontaneous one-on-one debates about a host of contentious biological subjects including evolution, organic farming, homeopathy, cloned animals, "chemicals" in our food, and genetic engineering. Much to my chagrin, even after politics, ideology, economics, and other cultural issues have been put aside, there is often a refusal to accept the scientific implications of rational argumentation.

While its mode of expression may change over cultures and time, irrationality and mysticism seem to be an integral part of normal human nature, even among highly educated people. No matter what scientific and technological advances are made in the future, I now doubt that supernatural beliefs will ever be eradicated from the human species.


GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind

What's Special About Human Language

When I was in graduate school, in the early 1990s, I learned two important things: that the human capacity for language was innate, and that the machinery that allowed human beings to learn language was "special", in the sense of being separate from the rest of the human mind.

Both ideas sounded great at the time. But (as far as I can tell now) only one of them turns out to be true.

I still think that I was right to believe in "innateness", the idea that the human mind arrives, fresh from the factory, with a considerable amount of elaborate machinery. When a human infant emerges from the womb, it has almost all the neurons it will ever have. All of the basic neural structures are already in place, and most or all of the basic neural pathways are established. There is, to be sure, lots of learning yet to come — an infant's brain is more rough draft than final product — but anybody who still imagines the infant human mind to be little more than an empty sponge isn't in touch with the realities of modern genetics and neuroscience. Almost half our genome is dedicated to the development of brain function, and those ten or fifteen thousand brain-related genes choreograph an enormous amount of biological sophistication. Chomsky (whose classes I sat in on while in graduate school) was absolutely right to insist, for all these years, that language has its origins in the built-in structure of the mind.

But now I believe that I was wrong to accept the idea that language was separate from the rest of the human mind. It's always been clear that we can talk about what we think about, but when I was in graduate school it was popular to talk about language as being acquired by a separate "module" or "instinct" from the rest of cognition, by what Chomsky called a "Language Acquisition Device" (or LAD). Its mission in life was to acquire language, and nothing else.

In keeping with the idea of language as the product of a specialized inborn mechanism, we noted how quickly human toddlers acquired language, and how determined they were to do so; all normal human children acquire language, not just a select few raised in privileged environments, and they manage to do so rapidly, learning most of what they need to know in the first few years of life. (The average adult, in contrast, often gives up around the time they have to face their fourth list of irregular verbs.) Combine that with the fact that some children with normal intelligence couldn't learn language and that others with normal language lacked normal cognitive function, and I was convinced. Humans acquired language because they had a built-in module that was uniquely dedicated to that function.

Or so I thought then. By the late 1990s, I started looking beyond the walls of my own field (developmental psycholinguistics) and out towards a whole host of other fields, including genetics, neuroscience, and evolutionary biology.

The idea that most impressed me — and did the most to shake me of the belief that language was separate from the rest of the mind — goes back to Darwin. Not "survival of the fittest" (a phrase actually coined by Herbert Spencer) but his notion, now amply confirmed at the molecular level, that all biology is the product of what he called "descent with modification". Every species, and every biological system, evolves through a combination of inheritance (descent) and change (modification). Nothing, no matter how original it may appear, emerges from scratch.

Language, I ultimately realized, must be no different: it emerged quickly, in the space of a few hundred thousand years, and with comparatively little genetic change. It suddenly dawned on me that the striking fact that our genomes overlap almost 99% with those of chimpanzees must be telling us something: language couldn't possibly have started from scratch. There isn't enough room in the genome, or in our evolutionary history, for it to be plausible that language is completely separate from what came before.

Instead, I have now come to believe, language must be, largely, a recombination of spare parts, a kind of jury-rigged kluge built largely out of cognitive machinery that evolved for other purposes, long before there was such a thing as language. If there's something special about language, it is not the parts from which it is composed, but the way in which they are put together.

Neuroimaging studies seem to bear this out. Whereas we once imagined language to be produced and comprehended almost entirely by two purpose-built regions — Broca's area and Wernicke's area — we now see that many other parts of the brain are involved (e.g. the cerebellum and basal ganglia) and that the classic language areas (i.e. Broca's and Wernicke's) participate in other aspects of mental life (e.g., music and motor control) and have counterparts in other apes.

At the narrowest level, this means that psycholinguists and cognitive neuroscientists need to rethink their theories about what language is. But if there is a broader lesson, it is this: although we humans in many ways differ radically from any other species, our greatest gifts are built upon a genomic bedrock that we share with the many other apes that walk the earth.


LEE SMOLIN
Physicist, Perimeter Institute; Author, The Trouble With Physics

Although I have changed my mind about several ideas and theories, my longest struggle has been with the concept of time. The most obvious and universal aspect of reality, as we experience it, is that it is structured as a succession of moments, each of which comes into being, supplanting what was just present and is now past. But as soon as we describe nature in terms of mathematical equations, the present moment and the flow of time seem to disappear, and time becomes just a number, a reading on an instrument, like any other.

Consequently, many philosophers and physicists argue that time is an illusion, that reality consists of the whole four-dimensional history of the universe, as represented in Einstein's theory of general relativity. Some, like Julian Barbour, go further and argue that, when quantum theory is unified with gravity, time disappears completely. The world is just a vast collection of moments which are represented by the "wave-function of the universe." Time is not real; it is just an "emergent quantity" that is helpful for organizing our observations of the universe when it is big and complex.

Other physicists argue that aspects of time are real, such as the relationships of causality, that record which events were the necessary causes of others. Penrose, Sorkin and Markopoulou have proposed models of quantum spacetime in which everything real reduces to these relationships of causality.

In my own thinking, I first embraced the view that quantum reality is timeless. In our work on loop quantum gravity we were able to take this idea more seriously than people before us could, because we could construct and study exact wave-functions of the universe. Carlo Rovelli, Bianca Dittrich and others worked out in detail how time would "emerge" from the study of the question of what quantities of the theory are observable.

But, somehow, the more this view was worked out in detail the less I was convinced. This was partly due to technical challenges in realizing the emergence of time, and partly because some naïve part of me could never understand conceptually how the basic experience of the passage of time could emerge from a world without time.

So in the late 1990s I embraced the view that time, as causality, is real. This fit best with the next stage of development of loop quantum gravity, which was based on quantum spacetime histories. However, even as we continued to make progress on the technical side of these studies, I found myself worrying that the present moment and the flow of time were still nowhere represented. And I had another motivation, which was to make sense of the idea that laws of nature could evolve in time.

Back in the early 1990s I had formulated a view of laws evolving on a landscape of theories along with the universe they govern. This had been initially ignored, but in the last few years there has been much study of dynamics on landscapes of theories. Most of these studies are framed in the timeless language of the "wavefunction of the universe," in contrast to my original presentation, in which theories evolved in real time. As these studies progressed, it became clear that only those in which time played a role could generate testable predictions — and this made me want to think more deeply about time.

It is becoming clear to me that the mystery of the nature of time is connected with other fundamental questions such as the nature of truth in mathematics and whether there must be timeless laws of nature. Rather than being an illusion, time may be the only aspect of our present understanding of nature that is not temporary and emergent.


A. GARRETT LISI
Independent Theoretical Physicist; Author, "An Exceptionally Simple Theory of Everything"

I Used To Think I Could Change My Mind

As a scientist, I am motivated to build an objective model of reality. Since we always have incomplete information, it is eminently rational to construct a Bayesian network of likelihoods — assigning a probability for each possibility, supported by a chain of priors. When new facts arise, or if new conditional relationships are discovered, these probabilities are adjusted accordingly — our minds should change. When judgment or action is required, it is based on knowledge of these probabilities. This method of logical inference and prediction is the sine qua non of rational thought, and the method all scientists aspire to employ. However, the ambivalence associated with an even probability distribution makes it terribly difficult for an ideal scientist to decide where to go for dinner.
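Lisi's description of belief revision here is the standard Bayesian update rule: each hypothesis carries a prior probability, and new evidence reweights those probabilities according to how likely that evidence is under each hypothesis. A minimal illustrative sketch follows (not from the essay; the function name, hypotheses, and numbers are invented for the example):

    # Minimal sketch of Bayesian updating: posterior is proportional to prior times likelihood.
    def update(priors, likelihoods):
        """Return posterior probabilities for each hypothesis via Bayes' rule.

        priors      -- dict mapping hypothesis -> P(hypothesis)
        likelihoods -- dict mapping hypothesis -> P(new evidence | hypothesis)
        """
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Illustrative numbers only: two dinner options, updated after a friend's
    # lukewarm review, which is more probable if restaurant B is the better choice.
    priors = {"restaurant_A": 0.5, "restaurant_B": 0.5}
    likelihoods = {"restaurant_A": 0.2, "restaurant_B": 0.6}
    print(update(priors, likelihoods))  # {'restaurant_A': 0.25, 'restaurant_B': 0.75}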

Even though I strive to achieve an impartial assessment of probabilities for the purpose of making predictions, I cannot consider my assessments to be unbiased. In fact, I no longer think humans are naturally inclined to work this way. When I casually consider the beliefs I hold, I am not readily able to assign them numerical probabilities. If pressed, I can manufacture these numbers, but this seems more akin to rationalization than rational thought. Also, when I learn something new, I do not immediately erase the information I knew before, even if it is contradictory. Instead, the new model of reality is stacked atop the old. And it is in this sense that a mind doesn't change; vestigial knowledge may fade over a long period of time, but it isn't simply replaced. This model of learning matches a parable from Douglas Adams, relayed by Richard Dawkins:

A man didn't understand how televisions work, and was convinced that there must be lots of little men inside the box, manipulating images at high speed. An engineer explained to him about high frequency modulations of the electromagnetic spectrum, about transmitters and receivers, about amplifiers and cathode ray tubes, about scan lines moving across and down a phosphorescent screen. The man listened to the engineer with careful attention, nodding his head at every step of the argument. At the end he pronounced himself satisfied. He really did now understand how televisions work. "But I expect there are just a few little men in there, aren't there?"

As humans, we are inefficient inference engines — we are attached to our "little men," some dormant and some active. To a degree, these imperfect probability assessments and pet beliefs provide scientists with the emotional conviction necessary to motivate the hard work of science. Without the hope that an improbable line of research may succeed where others have failed, difficult challenges would go unmet. People should be encouraged to take long shots in science, since, with so many possibilities, the probability of something improbable happening is very high. At the same time, this emotional optimism must be tempered by a rational estimation of the chance of success — we must not be so optimistic as to delude ourselves. In science, we must test every step, trying to prove our ideas wrong, because nature is merciless. To have a chance of understanding nature, we must challenge our predispositions. And even if we can't fundamentally change our minds, we can acknowledge that others working in science may make progress along their own lines of research. By accommodating a diverse variety of approaches to any existing problem, the scientific community will progress expeditiously in unlocking nature's secrets.


JOHN BAEZ
Mathematical Physicist

Should I be thinking about quantum gravity?

One of the big problems in physics — perhaps the biggest! — is figuring out how our two current best theories fit together. On the one hand we have the Standard Model, which tries to explain all the forces except gravity, and takes quantum mechanics into account.  On the other hand we have General Relativity, which tries to explain gravity, and does not take quantum mechanics into account. Both theories seem to be more or less on the right track — but until we somehow fit them together, or completely discard one or both, our picture of the world will be deeply schizophrenic.

It seems plausible that as a step in the right direction we should figure out a theory of gravity that takes quantum mechanics into account, but reduces to General Relativity when we ignore quantum effects (which should be small in many situations). This is what people mean by "quantum gravity" — the quest for such a theory.

The most popular approach to quantum gravity is string theory.  Despite decades of hard work by many very smart people, it's far from clear that this theory is successful. It's made no predictions that have been confirmed by experiment.  In fact, it's made few predictions that we have any hope of testing anytime soon!  Finding certain sorts of particles at the big new particle accelerator near Geneva would count as partial confirmation, but string theory says very little about the details of what we should expect. In fact, thanks to the vast "landscape" of string theory models that researchers are uncovering, it keeps getting harder to squeeze specific predictions out of this theory.

When I was a postdoc, back in the 1980s, I decided I wanted to work on quantum gravity. The appeal of this big puzzle seemed irresistible.  String theory was very popular back then, but I was skeptical of it.  I became excited when I learned of an alternative approach pioneered by Ashtekar, Rovelli and Smolin, called loop quantum gravity.

Loop quantum gravity was less ambitious than string theory. Instead of a "theory of everything", it only sought to be a theory of something: namely, a theory of quantum gravity.

So, I jumped aboard this train, and for about a decade I was very happy with the progress we were making. A beautiful picture emerged, in which spacetime resembles a random "foam" at very short distance scales, following the laws of quantum mechanics.

We can write down lots of theories of this general sort. However, we have never yet found one for which we can show that General Relativity emerges as a good approximation at large distance scales — the quantum soap suds approximating a smooth surface when viewed from afar, as it were.

I helped my colleagues Dan Christensen and Greg Egan do a lot of computer simulations to study this problem. Most of our results went completely against what everyone had expected.  But worse, the more work we did, the more I realized I didn't know what questions we should be asking!  It's hard to know what to compute to check that a quantum foam is doing its best to mimic General Relativity.

Around this time, string theorists took note of loop quantum gravity people and other critics — in part thanks to Peter Woit's blog, his book Not Even Wrong, and Lee Smolin's book The Trouble with Physics.  String theorists weren't used to criticism like this.  A kind of "string-loop war" began.  There was a lot of pressure for physicists to take sides for one theory or the other. Tempers ran high.

Jaron Lanier put it this way: "One gets the impression that some physicists have gone for so long without any experimental data that might resolve the quantum-gravity debates that they are going a little crazy."  But even more depressing was that as this debate raged on, cosmologists were making wonderful discoveries left and right, getting precise data about dark energy, dark matter and inflation.  None of this data could resolve the string-loop war! Why?  Because neither of the contending theories could make predictions about the numbers the cosmologists were measuring! Both theories were too flexible.

I realized I didn't have enough confidence in either theory to engage in these heated debates.  I also realized that there were other questions to work on: questions where I could actually tell when I was on the right track, questions where researchers cooperate more and fight less.  So, I eventually decided to quit working on quantum gravity.

It was very painful to do this, since quantum gravity had been my holy grail for decades.  After you've convinced yourself that some problem is the one you want to spend your life working on, it's hard to change your mind.  But when I finally did, it was tremendously liberating.

I wouldn't urge anyone else to quit working on quantum gravity. Someday, someone is going to make real progress.  When this happens, I may even rejoin the subject.  But for now, I'm thinking about other things.  And, I'm making more real progress understanding the universe than I ever did before.


KEN FORD
Retired Physicist & Writer; Coauthor (with John Archibald Wheeler), Geons, Black Holes, and Quantum Foam: A Life in Physics

I used to believe that the ethos of science, the very nature of science, guaranteed the ethical behavior of its practitioners. As a student and a young researcher, I could not conceive of cheating, claiming credit for the work of others, or fabricating data. Among my mentors and my colleagues, I saw no evidence that anyone else believed otherwise. And I didn't know enough of the history of my own subject to be aware of ethical lapses by earlier scientists. There was, I sensed, a wonderful purity to science. Looking back, I have to count naiveté as among my virtues as a scientist.

Now I have changed my mind, and I have changed it because of evidence, which is what we scientists are supposed to do. Various examples of cheating, some of them quite serious, have come to light in the last few decades, and misbehaviors in earlier times have been reported as well. Scientists are, as the saying goes, "only human," which, in my opinion, is neither an excuse nor an adequate explanation. Unfortunately, scientists are now subjected to greater competitive pressures, financial and otherwise, than was typical when I was starting out. Some — a few — succumb.

We do need to teach ethics as essential to the conduct of science, and we need to teach the simple lesson that in science crime doesn't pay. But above all, we need to demonstrate by example that the highest ethical standards should, and often do, come naturally.


JEFFREY EPSTEIN
Science Philanthropist

The question presupposes a well-defined "you", and an implied ability, under "your" control, to change your "mind". The "you", I now believe, is distributed amongst others (family, friends, those in hierarchical structures); suicide bombers, for example, believe their sacrifice is for the other parts of their "you". The question carries with it an intention that I believe is out of one's control. My mind changed as a result of its interaction with its environment. Why? Because it is a part of it.


