


2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

SUSAN BLACKMORE
Psychologist and Skeptic; Author, Consciousness: An Introduction

The Paranormal

Imagine me, if you will, in the Oxford of 1970: a new undergraduate, thrilled by the intellectual atmosphere, the hippy clothes, joss-stick-filled rooms, late nights, early morning lectures, and mind-opening cannabis.

I joined the Society for Psychical Research and became fascinated with occultism, mediumship and the paranormal — ideas that clashed tantalisingly with the physiology and psychology I was studying. Then late one night something very strange happened. I was sitting around with friends, smoking, listening to music, and enjoying the vivid imagery of rushing down a dark tunnel towards a bright light, when my friend spoke. I couldn't reply.

"Where are you Sue?" he asked, and suddenly I seemed to be on the ceiling looking down.

"Astral projection!" I thought and then I (or some imagined flying "I") set off across Oxford, over the country, and way beyond. For more than two hours I fell through strange scenes and mystical states, losing space and time, and ultimately my self. It was an extraordinary and life-changing experience. Everything seemed brighter, more real, and more meaningful than anything in ordinary life, and I longed to understand it.

But I jumped to all the wrong conclusions. Perhaps understandably, I assumed that my spirit had left my body and that this proved all manner of things — life after death, telepathy, clairvoyance, and much, much more. I decided, with splendid, youthful over-confidence, to become a parapsychologist and prove all my closed-minded science lecturers wrong. I found a PhD place, funded myself by teaching, and began to test my memory theory of ESP. And this is where my change of mind — and heart, and everything else — came about.

I did the experiments. I tested telepathy, precognition, and clairvoyance; I got only chance results. I trained fellow students in imagery techniques and tested them again; chance results. I tested twins in pairs; chance results. I worked in play groups and nursery schools with very young children (their naturally telepathic minds are not yet warped by education, you see); chance results. I trained as a Tarot reader and tested the readings; chance results.

Occasionally I got a significant result. Oh the excitement! I responded as I think any scientist should, by checking for errors, recalculating the statistics, and repeating the experiments. But every time I either found the error responsible, or failed to repeat the results. When my enthusiasm waned, or I began to doubt my original beliefs, there was always another corner to turn — always someone saying "But you must try xxx". It was probably three or four years before I ran out of xxxs.

I remember the very moment when something snapped (or should I say "I seem to …" in case it's a false flash-bulb memory). I was lying in the bath trying to fit my latest null results into paranormal theory, when it occurred to me for the very first time that I might have been completely wrong, and my tutors right. Perhaps there were no paranormal phenomena at all.

As far as I can remember, this scary thought took some time to sink in. I did more experiments, and got more chance results. Parapsychologists called me a "psi-inhibitory experimenter", meaning that I didn't get paranormal results because I didn't believe strongly enough. I studied other people's results and found more errors and even outright fraud. By the time my PhD was completed, I had become a sceptic.

Until then, my whole identity had been bound up with the paranormal. I had shunned a sensible PhD place, and ruined my chances of a career in academia (as my tutor at Oxford liked to say). I had hunted ghosts and poltergeists, trained as a witch, attended spiritualist churches, and stared into crystal balls. But all of that had to go.

Once the decision was made it was actually quite easy. Like many big changes in life this one was terrifying in prospect but easy in retrospect. I soon became "rentasceptic", appearing on TV shows to explain how the illusions work, why there is no telepathy, and how to explain near-death experiences by events in the brain.

What remains now is a kind of openness to evidence. However firmly I believe in some theory (on consciousness, memes or whatever); however closely I might be identified with some position or claim, I know that the world won't fall apart if I have to change my mind.


PZ MYERS
Biologist, University of Minnesota; blogger, Pharyngula

I always change my mind about everything, and I never change my mind about anything.

That flexibility is intrinsic to being human — more, to being conscious. We are (or should be) constantly learning new things, absorbing new information, and reacting to new ideas, so of course we are changing our minds. In the most trivial sense, learning and memory involve a constant remodeling of the fine details of the brain, and the only time the circuitry will stop changing is when we're dead. And in a more profound sense, our major ideas change over time: my 5-year-old self, my 15-year-old self, and my 25-year-old self were very different people, with different priorities and different understandings of the world, from my current 50-year-old self. This is simply in the nature of our existence.

In the pursuit of science, however, there is a substantive sense in which we do not change our minds: we have a commitment to following the evidence wherever it leads. We have a kind of overriding metaphysic that says that we should set out to find data that will change our minds about a subject — every good research program has as its goal the execution of observations and experiments that will challenge our assumptions — and about that all-important foundation of the scientific enterprise I have never changed my mind, nor can I, without abandoning science altogether.

In my own personal intellectual history, I began my academic career with a focus on neuroscience; I shifted to developmental neurobiology; I later got caught up in developmental biology as a whole; I am now most interested in the confluence of evolution and development. Have I ever changed my mind? I don't think that I have, in any significant way — I have instead applied a consistent attitude towards a series of problems.

If I embark on a voyage of exploration, and I set as my goals the willingness to follow any lead, to pursue any interesting observation, to overcome any difficulties, and I end up in some unpredicted, exotic locale that might be very different from my predictions prior to setting out, have I changed my destination in any way? I would say not; the sine qua non of science is not the conclusions we reach but the process we use to arrive at them, and that is the unchanging pole star by which we navigate.


GERD GIGERENZER
Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings

The Advent of Health Literacy

In a 2007 radio advertisement, former NYC mayor Rudy Giuliani said, "I had prostate cancer, five, six years ago. My chances of surviving prostate cancer — and thank God I was cured of it — in the United States: 82 percent. My chances of surviving prostate cancer in England: only 44 percent under socialized medicine." Giuliani was lucky to be living in New York, and not in York — true?

In World Brain (1938), H. G. Wells predicted that for an educated citizenship in a modern democracy, statistical thinking would be as indispensable as reading and writing. At the beginning of the 21st century, we have succeeded in teaching millions how to read and write, but many of us still don't know how to reason statistically — how to understand risks and uncertainties in our technological world.

Giuliani is a case in point. One basic concept that everyone should understand is the 5-year survival rate. Giuliani used survival rates from the year 2000, when 49 Britons per 100,000 were diagnosed with prostate cancer, of whom 28 died within 5 years, leaving a 5-year survival rate of about 44 percent. Is it true that his chances of surviving cancer were about twice as high in what Giuliani believes is the best health care system in the world? Not at all. Survival rates are not the same as mortality rates. The U.S. in fact has about the same prostate cancer mortality rate as the U.K. But far more Americans participate in PSA screening (although its effect on mortality reduction has not been proven). As a consequence, more Americans are diagnosed with prostate cancer, which drives the 5-year survival rate up to more than 80 percent, although no life is saved. Screening detects many "silent" prostate cancers that the patient would never have noticed during his lifetime. Americans live longer with the diagnosis, but they do not live longer. Yet many Americans end up incontinent or impotent for the rest of their lives, due to unnecessary, aggressive surgery or radiation therapy, believing that their lives have been saved.
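
To make the arithmetic concrete, here is a minimal sketch in Python. The 49 diagnoses and 28 deaths per 100,000 are the figures cited above; the 120 overdiagnosed "silent" cancers are an invented round number, chosen only to illustrate the mechanism:

```python
# How overdiagnosis inflates 5-year survival while mortality stays flat.
# Only 49 and 28 come from the text; 120 is an assumed, illustrative figure.
population = 100_000
deaths_within_5_years = 28   # cancers that prove fatal with or without screening
overdiagnosed = 120          # "silent" cancers that only screening would find

# Without screening, only symptomatic cancers are diagnosed.
diagnosed = 49
survival = (diagnosed - deaths_within_5_years) / diagnosed
print(f"5-year survival, no screening: {survival:.0%}")   # ~43%

# With screening: the same deadly cancers, plus many harmless ones.
diagnosed = 49 + overdiagnosed
survival = (diagnosed - deaths_within_5_years) / diagnosed
print(f"5-year survival, screening:    {survival:.0%}")   # ~83%

# The mortality rate is identical in both scenarios: 28 per 100,000.
print(f"deaths per 100,000: {deaths_within_5_years}")
```

The survival rate jumps from about 43 percent to about 83 percent even though exactly the same 28 men die.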

Giuliani is not an exception to the prevailing confusion about how to evaluate health statistics. For instance, my research shows that 80% to 90% of German physicians do not understand what a positive screening test — such as PSA, HIV, or mammography — means, and most do not know how to explain the potential benefits and harms to the patient. Patients, however, falsely assume that their doctors know and understand the relevant medical research. In most medical schools, education in understanding health statistics is currently lacking or ineffective.

The fact that statistical illiteracy among physicians, patients, and politicians is still not widely recognized, much less addressed, made me pessimistic about the chances of any improvement. Statistical illiteracy in health matters turns the ideals of informed consent and shared decision-making into science fiction. Yet I have begun to change my mind. Here are a few reasons why I'm more optimistic.

Consider the concept of relative risks. You may have heard that mammography screening reduces breast cancer mortality by 25%! Impressive, isn't it? Many believe that if 100 women participate, the lives of 25 will be saved. But don't be taken in. The number is based on studies showing that out of every 1,000 women who do not participate in mammography screening, 4 will die of breast cancer within about 10 years, whereas among those who participate in screening this number decreases to 3. The difference can be expressed as an absolute risk: one fewer woman out of every 1,000 dies of breast cancer, which is clear and transparent. But it can also be phrased in terms of a relative risk: a 25% benefit. I have asked hundreds of gynecologists to explain what this benefit figure means. The good news is that two-thirds understood that 25% means 1 in 1,000. Yet one-third overestimated the benefit by one or more orders of magnitude. Thus, better training in medical school is still needed.
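
Spelled out in code, the same 4-in-1,000 versus 3-in-1,000 figures yield both numbers (a minimal sketch; no data beyond what is quoted above):

```python
# Relative vs. absolute risk reduction, from the mammography figures above.
deaths_without = 4 / 1000   # breast cancer deaths per woman, without screening
deaths_with = 3 / 1000      # breast cancer deaths per woman, with screening

relative_reduction = (deaths_without - deaths_with) / deaths_without
absolute_reduction = deaths_without - deaths_with

print(f"relative risk reduction: {relative_reduction:.0%}")  # 25%
print(f"absolute risk reduction: {absolute_reduction:.1%}")  # 0.1%, i.e. 1 in 1,000
```

The same benefit can honestly be reported as "25%" or as "0.1%"; only the second form is transparent.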

What makes me optimistic is the reaction of some 1,000 gynecologists I have trained in understanding risks and uncertainties as part of their continuing education. First, learning how to communicate risks was a top item on their wish list. Second, despite the fact that most had little statistical training, they learned quickly. Consider the situation of a woman who tests positive in a screening mammogram and asks her doctor whether she has cancer for certain, or what her chances are. She has a right to the best answer medical science can give: out of ten women who test positive, only one has breast cancer; the other nine cases are false alarms. Most women are never informed of this relevant fact, and react with panic and fear. Mammography is not a very reliable test. Before the training, the majority of gynecologists mistakenly believed that about 9 out of 10 women who test positive have cancer, as opposed to only one! After the training, however, almost all of the physicians understood how to read this kind of health statistic. That's real progress, and I didn't expect so much, so soon.
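
The 1-in-10 answer follows from Bayes' rule, and is easiest to see in natural frequencies. Here is a sketch; the prevalence, sensitivity, and false-positive rate are my own illustrative assumptions, chosen to reproduce the figure in the text, not numbers from the essay:

```python
# Natural-frequency version of Bayes' rule for a positive mammogram.
# All three parameters below are assumed, illustrative values.
women = 1000
prevalence = 0.01             # ~10 of 1,000 women have breast cancer
sensitivity = 0.90            # ~9 of those 10 test positive
false_positive_rate = 0.09    # ~89 of the 990 healthy women also test positive

true_positives = women * prevalence * sensitivity                  # ~9
false_positives = women * (1 - prevalence) * false_positive_rate   # ~89

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive test) = {p_cancer_given_positive:.0%}")  # ~9%, about 1 in 10
```

Framed as "of roughly 98 women who test positive, about 9 have cancer", the statistic is immediately graspable; framed as conditional probabilities, even physicians routinely get it wrong.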

What makes me less optimistic is resistance to transparency in health from government institutions. A few years ago, I presented the program of transparent risk communication to the National Cancer Institute in Bethesda. Two officials took me aside afterwards and lauded the program for its potential to make health care more rational. I asked if they intended to implement it. Their answer was no. Why not? As they explained, transparency in this form was bad news for the government — a benefit of only 0.1% instead of 25% would make poor headlines for the upcoming election! In addition, their board was appointed by the presidential administration, for which transparency in health care is not a priority.

Win some, lose some. But I think the tide is turning. Statistics may still be woefully absent from most school curricula, including medical schools. That could soon change in the realm of health, however, if physicians and patients make common cause, eventually forcing politicians to do their homework.


ANTON ZEILINGER
University of Vienna and Scientific Director, Institute of Quantum Optics and Quantum Information, Austrian Academy of Sciences

I used to think what I am doing is "useless"

When journalists asked me about 20 years ago what the use of my research was, I proudly told them that it had no use whatsoever. I saw an analogy to astronomy or to a Beethoven symphony: we don't do these things, I said, for their use; we do them because they are part of what it means to be human. In the same way, I said, we do basic science, in my case experiments on the foundations of quantum physics. It is part of being human to be curious, to want to know more about the world. There are always some of us who are just curious; they follow their noses and investigate with no idea of what their findings might be useful for. Some of us are even more attracted to a question the more useless it appears. I did my work only because I was attracted both by the mathematical beauty of quantum physics and by the counterintuitive conceptual questions it raises. That is what I kept telling them, right up to the early 1990s.

Then a surprising new development began. The scientific community discovered that the same fundamental phenomena of quantum physics were suddenly relevant for more and more novel ways of transmitting and processing information. We now have the completely new field of quantum information science, whose basic concepts include quantum cryptography, quantum computation, and even quantum teleportation. All of this points us toward a new information technology in which the same strange fundamental phenomena that attracted me to the field in the first place are essential. Quantum randomness makes it possible in quantum cryptography to send messages that are secure against unauthorized third parties. Quantum entanglement, called by Einstein "spooky action at a distance," makes quantum teleportation possible. And quantum computation builds on all the counterintuitive features of the quantum world together. When journalists ask me today what the use of my research is, I proudly tell them of my conviction that we will have a full quantum information technology in the future, though its specific features are still very much to be developed. So, never say that your research is "useless".
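
To illustrate the role quantum randomness plays, here is a toy classical simulation of the key-sifting step of the BB84 quantum cryptography protocol. This is my own sketch, not anything described above; a real implementation uses actual photons, and eavesdropping is detected by comparing a sample of the sifted bits:

```python
# Toy BB84 key sifting: random bits, random bases, keep the matches.
import random

n = 16
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # + rectilinear, x diagonal
bob_bases = [random.choice("+x") for _ in range(n)]

# If Bob measures in Alice's basis he recovers her bit exactly;
# otherwise quantum mechanics hands him a random outcome.
bob_bits = [bit if a == b else random.randint(0, 1)
            for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Publicly comparing bases (never the bits), they keep matching positions.
key_alice = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
key_bob = [bit for bit, a, b in zip(bob_bits, alice_bases, bob_bases) if a == b]

assert key_alice == key_bob   # a shared secret key, on average n/2 bits long
print("shared key:", key_alice)
```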


ESTHER DYSON
Editor, Release 1.0; Trustee, Long Now Foundation; Author, Release 2.0

What have I changed my mind about? Online privacy.

For a long time, I thought that people would rise to the challenge and start effectively protecting their own privacy online, using tools and services that the market would provide. Many companies offered such services, and almost none of them succeeded (at least not with their original business plans). People simply weren't interested: They were both paranoid and careless, and took little trouble to inform themselves. (Of course, if you've ever attempted to read an online privacy statement, you'll understand why.)

But now I've simply changed my mind and realized that the whole question needs reframing, which Facebook et al. are in the process of doing. Users never learned that they have the power to say no to marketers who want their data...but they are getting into the habit of controlling that data themselves, because Facebook is teaching them that this is a natural thing to do.

Yes, Facebook certainly managed to draw attention to the whole "privacy" question with its Beacon tracking tool, but for most Facebook users the big question is how many people they can get to see their feed. They are happy to share their information with friends, and they consider it the most natural thing in the world to distinguish among friends (see new Facebook add-on applications such as Top Friends and Cliquey) and to manage their privacy settings to determine who can see which parts of their profile. So why shouldn't they do the same thing vis-à-vis marketers?

For example, I fly a lot, and I use various applications to let certain friends know where I am and plan to be. I'd be delighted to share that information with certain airlines and hotels if I knew they would send me special offers. (In fact, United Airlines once asked me to send in my frequent flyer statements from up to three competing airlines in exchange for 2000 bonus miles each. I gladly did so, and would have done it for free. I *want* United to know what a good customer I am...and how much more of my business they could win if they offered me even better deals.)

In short, for many users the Web is becoming a mirror, with users in control, rather than a heavily surveilled stage. The question isn't how to protect users' privacy, but rather how to give them better tools to control their own data - not by selling privacy or by getting them to "sell" their data, but by feeding their natural fascination with themselves and allowing them to manage their own presence. What once seemed like an onerous, weird task becomes akin to self-grooming online.

This leaves open a lot of questions, I know, including questions about real, coercive invasions of privacy by government agencies, but I think the in-control users of the future will be better equipped to fight back. Give them a little time and a few bad experiences, and they'll start to make the distinction between an airline selling seats and a government that simply won't allow you to take it off your buddy list.


MARTIN REES
President, The Royal Society; Professor of Cosmology & Astrophysics; Master, Trinity College, University of Cambridge; Author, Our Final Century: The 50/50 Threat to Humanity's Survival

We Should Take the 'Posthuman' Era Seriously

Public discourse on very long-term planning is riddled with inconsistencies. Mostly we discount the future very heavily — investment decisions are expected to pay off within a decade or two. But when we do look further ahead — in discussions of energy policy, global warming and so forth — we underestimate the possible pace of transformational change. In particular, we need to keep our minds open — or at least ajar — to the possibility that humans themselves could change drastically within a few centuries.

Our medieval forebears in Europe had a cosmic perspective that was a million-fold more constricted than ours. Their entire cosmology — from creation to apocalypse — spanned only a few thousand years. Today, the stupendous time spans of the evolutionary past are part of common culture — except among some creationists and fundamentalists. Moreover, we are mindful of immense future potential. It seems absurd to regard humans as the culmination of the evolutionary tree. Any creatures witnessing the Sun's demise 6 billion years hence won't be human — they could be as different from us as we are from slime mould.

But, despite these hugely stretched conceptual horizons, the timescale on which we can sensibly plan, or make confident forecasts, has got shorter rather than longer. Medieval people, despite their constricted cosmology, did not expect drastic changes within a human life; they devotedly added bricks to cathedrals that would take a century to finish. For us, unlike for them, the next century will surely be drastically different from the present. There is a huge disjunction between the ever-shortening timescales of historical and technical change and the near-infinite time spans over which the cosmos itself evolves.

Human-induced changes are occurring with runaway speed. It's hard to predict a mere century from now, because what will happen depends on us — this is the first century in which humans can collectively transform, or even ravage, the entire biosphere. Humanity will soon itself be malleable, to an extent that's qualitatively new in the history of our species. New drugs (and perhaps even implants into our brains) could change human character; the cyberworld has potential that is both exhilarating and frightening. We can't confidently guess lifestyles, attitudes, social structures, or population sizes a century hence. Indeed, it's not even clear for how long our descendants would remain distinctively 'human'. Darwin himself noted that "not one living species will transmit its unaltered likeness to a distant futurity". Our own species will surely change and diversify faster than any predecessor — via human-induced modifications (whether intelligently controlled or unintended), not by natural selection alone. Just how fast this could happen is disputed by experts, but the post-human era may be only centuries away.

These thoughts might seem irrelevant to practical discussions — and best left to speculative academics and cosmologists. I used to think this. But humans are now, individually and collectively, so greatly empowered by rapidly changing technology that we can — by design or as an unintended consequence — engender global changes that resonate for centuries. And, sometimes at least, policy-makers indeed think far ahead.

The global warming induced by fossil fuels burnt in the next fifty years could trigger gradual sea-level rises that continue for a millennium or more. And in assessing sites for radioactive waste disposal, governments impose the requirement that they be secure for ten thousand years.

It's real political progress that these long-term challenges are higher on the international agenda, and that planners seriously worry about what might happen more than a century hence.

But in such planning, we need to be mindful that it may not be people like us who confront the consequences of our actions today. We are custodians of a 'posthuman' future — here on Earth and perhaps beyond — that can't just be left to writers of science fiction.


JANNA LEVIN
Physicist, Columbia University; Author, A Madman Dreams of Turing Machines

I used to take for granted an assumption that the universe is infinite. There are innumerable little things about which I've changed my mind but the size of the universe is literally the biggest physical attribute that has inspired a radical change in my thinking. I won't claim I "believe" the universe is finite, just that I recognize that a finite universe is a realistic possibility for our cosmos.

The general theory of relativity describes local curves in spacetime due to matter and energy. This model of gravity as a warped spacetime has seen countless successes, beginning with a confirmation of an anomaly in the orbit of Mercury and continuing with the predictions of the existence of black holes, the expansion of spacetime, and the creation of the universe in a big bang. However, general relativity says very little about the global shape and size of the universe. Two spaces can have the same curvature locally but very different global properties. A flat space, for instance, can be infinite, but there is another possibility: that it is finite and edgeless, wrapped back onto itself like a doughnut — but still flat. And there are an infinite number of ways of folding spacetime into finite, edgeless shapes, a kind of cosmic origami.
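
As a concrete sketch of that doughnut example (my addition; this is the standard flat-torus construction), take ordinary Euclidean space and identify points that differ by fixed shifts:

$$(x, y, z) \sim (x + L_1, y, z), \qquad (x, y, z) \sim (x, y + L_2, z), \qquad (x, y, z) \sim (x, y, z + L_3).$$

Locally the geometry is exactly that of infinite flat space, with zero curvature everywhere, yet the total volume is the finite $L_1 L_2 L_3$, and a traveler moving in a straight line eventually returns to the starting point.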

I grew up believing the universe was infinite. It was never taught to me, in the sense that no one ever tried to prove to me that the universe was infinite; it just seemed a natural assumption, based on simplicity. That sense of simplicity no longer rings true once we have confronted the fact that there must be a theory of gravity beyond general relativity, one that involves the quantization, the discretization, of spacetime itself. In cosmology we have become accustomed to models of the universe that invoke extra dimensions, all of which are finite, and it seems fair to imagine a universe born with all of its dimensions finite and compact. Then we are left with the mystery of why only three dimensions become so incredibly huge while the others remain curled up and small. We even hope to test models of extra dimensions in imminent laboratory experiments. These ideas are not remote and fantastical. They are testable.

People have said to me they were very surprised (disappointed) that I suggested the universe was finite. The infinite universe, they believed, was full of infinite potential and so philosophically (emotionally) so much richer and more thrilling. I explained that my suggestion of a finite universe was not a moral failing on my part, nor a consequence of diminished imagination. More thrilling was the knowledge that it does not matter what I believe. It does not matter if I prefer an infinite universe or a finite universe. Nature is not designed to satisfy our personal longings. Nature is what she is and it's a privilege merely to be privy to her mathematical codes.

I don't know that the universe is finite and so I don't believe that it is finite. I don't know that the universe is infinite and so I don't believe that it is infinite. I do see, however, that our mathematical reasoning has led to remarkable and sometimes psychologically uncomfortable discoveries. And I do believe that it is a realistic possibility that one day we may discover the shape of the entire universe. If the universe is too vast for us to ever observe the extent of space, we may still discover the size and shape of internal dimensions. From small extra dimensions we might possibly infer the size and shape of the large dimensions. Until then, I won't make up my mind.

 


JARON LANIER
Computer Scientist and Musician; Columnist, Discover Magazine

Here's a happy example of me being wrong. Other researchers interested in Virtual Reality had been proposing as early as twenty years ago that VR would someday be useful for the treatment of psychological disorders such as post-traumatic stress disorder.

I did not agree. In fact, I had strong arguments as to why this ought not to work. There was evidence that the brain created distinct "homuncular personas" for virtual world experiences, and reasons to believe that these personas were tied to increasingly distinct bundles of emotional patterns. Therefore, emotional patterns attached to real world situations would, I surmised, remain attached to those situations. The earliest research on PTSD treatment in VR seemed awfully shaky to me, and I was not very encouraging to younger researchers who were interested in it.

The idea of using VR for PTSD treatment seemed less likely to work than various other therapeutic applications of VR, which were more centered around somatic processes. For instance, VR can be used as an enhanced physical training environment. The first example, from the 1980s, involved juggling. If virtual juggling balls fly more slowly than real balls, then they are easier to juggle. You can then gradually increase the speed, in order to provide a more gradual path for improving skills than would be available in physical reality. (This idea came about initially because it was so hard to make early VR systems go as fast as the reality they were emulating. In the old VPL Research lab, where a lot of VR tools were initially prototyped, we were motivated to be alert for potential virtues hiding within the limitations of the era.) Variations on this strategy have become well established. For instance, patients are learning to use prosthetic limbs more quickly by using VR these days.

Beyond rational argument, I was biased in other ways: The therapeutic use of VR seemed "too cute," and sounded too much like a press release in waiting.

Well, I was wrong. PTSD treatment in VR is now a well-established field with its own conferences, journals publishing well-repeated results, and clinical practitioners. Sadly, the Iraq war has provided all too many patients, and has also motivated increased funding for research in this subfield of VR applications.

One of the reasons I was wrong is that I didn't see that the same tactic we used on juggling balls (of gradually adapting the content and design of a virtual world to the instantaneous state of the user/inhabitant) could be applied in a less somatic way. For instance, in some clinical protocols, a traumatic event is represented in VR with gradually changing levels of realism as part of the course of treatment.

Maybe I was locked into seeing VR through the filters of the limitations of its earliest years. Maybe I was too concerned about the cuteness factor. At any rate, I'm glad there was a diversity of mindsets in the research community so that others could see where I didn't.

I'm concerned that diversity of thought in some of the microclimates of the scientific community is narrowing these days instead of broadening. I blame the nature of certain online tools. Tools like the Wikipedia encourage the false worldview that we already know enough to agree on a single account of reality, and anonymous blog comment rolls can bring out mob-like behaviors in young scientists who use them.

At any rate, one of the consolations of science is that being wrong on occasion lets you know you don't know everything and motivates renewed curiosity. Being aware of being wrong once in a while keeps you young.


DIMITAR SASSELOV
Astrophysicist, Harvard

I change my mind all the time — keeping an open mind in science is a good thing. Most often these are rather unremarkable occasions; most often it is acceptance of something I had been unconvinced or unsure about. But then there is this one time …

October 4th, 1995 was a warm day. Florence was overrun by tourists – and a few scientists from a conference I was attending. The next day one of my older and esteemed colleagues from Geneva was going to announce a curious find – a star that seemed to have a very small companion, as small as a planet like Saturn or Jupiter. Such claims had come and gone in the decades past, but this time the data seemed very good. He was keeping the details to himself until the next day, but when I asked, he told me the orbital period of the new planet. I was incredulous – the period was so short that it was measured in days, not years. I told my wife back in the hotel that night: just 400 days!

I was not a planetary scientist – stars were my specialty – but I knew my planetary basics: a planet like Jupiter could not possibly exist so close to its star and have a period of 400 days. Some of this I had learned as far back as my last year of high school. I did not question it; instead, I questioned my colleague’s claim. He was the first to speak the next day, and he began by showing the orbital period of the new planet – it was 4.2 days! The night before, I must have heard “4.2 days”, but, that number being so incredibly foreign to my preconceptions, my brain had “translated” it into a more “reasonable” 420 days – roughly 400. Deeply held preconceptions can be very powerful.

My Florentine experience took some time to sink in. But when it did, it was sobering and inspiring. It made me curious and motivated to find the answers to questions whose answers, just days before, I had taken for granted. And I ended up helping to develop the new field of extrasolar planet research.


FRANCESCO DE PRETIS
Journalist, La Stampa; Italy Correspondent, Science Magazine

A book on “What is science, really?”

I was on a train back from the seaside. The summer was over, and our philosophy teacher (I was in high school at the time) had assigned us a book to read. The title was something like "The Birth of Modern Science in Europe". I started to leaf through it, without expecting anything special.

Until then, I had a purist vision of science: I supposed that its development was, in some way, a deterministic process; that scientists proceeded linearly, doing their experiments; and that theories arose in the scientific community by common agreement.

Well... my vision of science was dramatically different from the one I encountered some years later! With surprise and astonishment, I discovered that Sir Isaac Newton had an unconcealed passion for alchemy (probably the furthest thing from science I could imagine), that Nicolaus Copernicus wrote to the Pope begging him to accept his theories, and that Galileo and other scientists fought not only against the Roman Church and Aristotle's thought but, perhaps more often, against one another, simply to prevail.

Within two weeks I had finished the book, and my way of thinking changed. I understood that science was not only a pursuit of knowledge but a social process too, with its own rules and tricks: a never-ending tale, like human life itself. I have never forgotten that lesson, and since then my curiosity and passion for science have only grown. That book definitely changed my mind.

