



2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

ROGER HIGHFIELD
Science Editor, The Daily Telegraph; Coauthor, After Dolly

Science as faith

I am a heretic. I have come to question the key assumption behind this survey: "When facts change your mind, that's science." This idea that science is an objective fact-driven pursuit is laudable, seductive and - alas - a mirage.

Science is a never-ending dialogue between theorists and experimenters. But people are central to that dialogue. And people ignore facts. They distort them or select the ones that suit their cause, depending on how they interpret their meaning. Or they don't ask the right questions to obtain the relevant facts.

Contrary to the myth of the ugly fact that can topple a beautiful theory - and against the grain of our lofty expectations - scientists sometimes play fast and loose with data, highlighting what suits them and ignoring what doesn't.

The harsh spotlight of the media often encourages them to strike a confident pose even when the facts don't justify it. I am often struck by how facts are ignored, insufficient or even abused. I back well-designed animal research but am puzzled by how scientists choose to ignore the basic fact that vivisection is so inefficient at generating cures for human disease. Intelligent design is for cretins but, despite the endless proselytizing about the success of Darwin - assuming that evolution is a fact - I could still see it being superseded, rather as Einstein's ideas replaced Newton's law of gravity. I believe in man-made global warming but computer-projected facts that purportedly say what is in store for the Earth in the next century leave me cold.

I support embryo research but was irritated by one oft-cited fact in the recent British debate on the manufacture of animal-human hybrid embryos: "only a tiny fraction" of the hybrid made by the Dolly cloning method (nuclear transfer) contains animal DNA. Given that this fraction resides in the mitochondria, which are central to a range of diseases; given that a single spelling mistake in DNA can be catastrophic; and given that no one really understands what nuclear transfer does, this "fact" was propaganda.

Some of the most exotic and prestigious parts of modern science are unfettered by facts. I have recently written about whether our very ability to study the heavens may have shortened the inferred lifetime of the cosmos, whether there are two dimensions of time, even the prospect that time itself could cease to be in billions of years. The field of cosmology is in desperate need of more facts, as highlighted by the aphorisms made at its expense ("There is speculation, pure speculation and cosmology"; "cosmologists are often in error, never in doubt").

Scientists have to make judgements about the merits of new facts. Ignoring facts in the light of strong intuition is the mark of a great scientist. Take Einstein, for example: when Kaufmann claimed to have experimental facts that refuted special relativity, Einstein stuck to his guns and was proved right. Equally, Einstein's intuition misled him in his last three decades, when he pursued a fruitless quest for a unified field theory that was not helped by his lack of interest in novel facts - the new theoretical ideas, particles and interactions that had emerged during that time.

When it comes to work in progress, in particular, many scientists treat science like a religion - the facts should be made to fit the creed. However, facts are necessary for science but not sufficient. Science is when, in the face of extreme scepticism, enough facts accrue to change lots of minds.

Our rising and now excessive faith in facts alone can be seen in a change in the translation of the motto of the world's oldest academy of science, the Royal Society. Nullius in Verba was once taken as 'on the word of no one' to highlight the extraordinary power that empirical evidence bestowed upon science. The message was that experimental evidence trumped personal authority.

Today the Society talks of the need to 'verify all statements by an appeal to facts determined by experiment'. But whose facts? Was it a well-designed experiment? And are we getting all the relevant facts? The Society should adopt the snappier version that captures its original spirit: 'Take nobody's word for it'.


DANIEL ENGBER
Science Editor, Slate Magazine

It's hard to perform ethical research on animals

Two years ago I watched the Dalai Lama address thousands of laboratory biologists at the Society for Neuroscience meeting in Washington, D.C. At the end of his speech, someone asked about the use of animals in lab research: "That's difficult," replied His Holiness. "Always stress the importance of compassion ... In highly necessary experiments, try to minimize pain."

The first two words of his answer provided most of the moral insight. Universities already have cumbersome animal research protocols in place to eliminate unnecessary suffering, and few lab workers would do anything but try to minimize pain.

When I first entered graduate school, this Western-cum-Buddhist policy seemed like a neat compromise between protecting animals and supporting the advance of knowledge. But after I'd spent several years cutting up mice, birds, kittens, and monkeys, my mind was changed.

Not because I was any less dedicated to the notion of animal research--I still believe it's necessary to sacrifice living things in the name of scientific progress. But I saw how institutional safeguards served to offload the moral burden from the researchers themselves.

Rank-and-file biologists are rarely asked to consider the key ethical questions on which these policies are based. True, the NIH has for almost 20 years required that graduate training institutions offer a course in responsible research conduct. But in the class I took, we received PR advice rather than moral guidance: What's the best way to keep your animal research out of the public eye?

In practice, I found that scientists were far from monolithic in their attitudes towards animal work. (Drosophila researchers had misgivings about the lab across the hall, where technicians perfused the still-beating hearts of mice with chemical fixative; mouse researchers didn't want to implant titanium posts in the skulls of water-starved monkeys.) They weren't animal rights zealots, of course--they had nothing but contempt for the PETA protestors who passed out fliers in front of the lab buildings. But they did have real misgivings about the extent to which biology research might go in its exploitation of living things.

At the same time, very few of us took the time to consider whether or how we might sacrifice fewer animals (or no animals at all). Why bother, when the Institutional Animal Care and Use Committee had already signed off on the research? The hard part of this work isn't convincing an IACUC board to sanction the killing. It's making sure you've exhausted every possible alternative.


AUSTIN DACEY
Philosopher, Center for Inquiry; Author, The Secular Conscience

What Matters

As a teenager growing up in the rural American Midwest, I played in a Christian rock band. We wrote worship songs, with texts on religious themes, and experienced the jubilation and transport of the music as a visitation by the Holy Spirit. Then one day, as my faith was beginning to waver, I wrote a song with an explicitly nonreligious theme. To my surprise I discovered that when I performed it, I was overcome by the same feelings, and it dawned on me that maybe what we had experienced all along was our own spirits, that what had called to us was the power of music itself.

In truth, I wasn't thinking through much at the time. Later, as a graduate student of philosophy, I did start to think a lot about science and ethics, and I began to undergo a parallel shift of outlook. Having embraced a thoroughly naturalistic, materialistic worldview, I wondered: If everything is just matter, how could anything really matter? How could values be among the objective furniture of the universe? It is not as if anyone expected physicists to discover, alongside electrons, protons, and neutrons, a new fundamental moral particle--the moron?--which would show up in high magnitudes whenever people did nice things.

Then there was J. L. Mackie's famous argument from "queerness": objective values would have to be such that merely coming to appreciate them would motivate you to pursue them. But given everything we know about how ordinary natural facts behave (they seem to ask nothing of us), how could there possibly be states of affairs with this strange to-be-pursuedness built into them, and how could we come to appreciate them?

At the same time, I was taken in by the promises found in some early sociobiology that a new evolutionary science of human nature would supplant empty talk about objective values. As Michael Ruse once put it, "morality is just an aid to survival and reproduction," and so "any deeper meaning is illusory." Niceness may seem self-evidently right to us, but things could easily have been the other way around, had nastiness paid off more often for our ancestors.

I have since been convinced that I was looking at all of this in the wrong way. Not only are values a part of nature; we couldn't avoid them if we tried.

There is no doubt that had we evolved differently, we would value different things. However, that alone does not show that values are subjective. After all, hearing is accomplished by psychological mechanisms that evolved under natural selection. But it does not follow that the things we hear are any less real. Rather, the reality of the things around us helps to explain why we have the faculty to detect them. The evolved can put us in touch with the objective.

In fact, we are all intimately familiar with entities which are such that to recognize them is to be moved by them. We call them reasons, where a reason is just a consideration that weighs in favor of an action or belief. As separate lines of research by psychologist Daniel Wegner and psychiatrist George Ainslie (as synthesized and interpreted by Daniel Dennett) strongly suggest, our reasons aren't all "in the head," and we cannot help but heed their call.

At some point in our evolution, the behavioral repertoire of our ancestors became complex enough to involve the review and evaluation of numerous possible courses of action and the formation of intentions on the basis of their projected outcomes. In a word, we got options. However, as an ultrasocial species, for whom survival and reproduction depended on close coordination of behaviors over time, we needed to manage these options in a way that could be communicated to our neighbors. That supervisor and communicator of our mental economy is the self, the more-or-less stable "I" that persists through time and feels like it is the author of action. After all, if you want to be able to make reliable threats or credible promises, you need to keep track of who you are, were, and will be. According to this perspective, reasons are a human organism's way of taking responsibility for some of the happenings in its body and environment. As such, they are inherently public and shareable. Reasons are biological adaptations, every bit as real as our hands, eyes, and ears.

I do not expect (and we do not need) a "science of good and evil." However, scientific evidence can show how it is that things matter objectively. I cannot doubt the power of reasons without presupposing the power of reasons (for doubting). That cannot be said for the power of the Holy Spirit.


SIMON BARON-COHEN
Psychologist, Autism Research Centre, Cambridge University; Author, The Essential Difference

Equality

When I was young I believed in equality as a guiding principle in life. It's not such a bad idea, when you think about it. If we treat everyone else as our equals, no one feels inferior. And as an added bonus, no one feels superior. Whilst it is a wonderfully cosy, warm, feel-good idea, I have changed my mind about equality. Two moments in my thinking about this principle revealed some cracks in the perfect idea. Let me describe how these two moments changed my mind.

The first moment was in thinking about economic equality. Living on a kibbutz was an interesting opportunity to see that if you want everyone to have exactly the same amount of money, exactly the same possessions or exactly the same luxuries, the only way to achieve this is by legislation. In a small community like a kibbutz, or in an Amish community, where all members of the community have the opportunity to decide on their lifestyles collectively and where the legislation is the result of consensual discussion, economic equality might just be possible.

But in the large towns and cities in which most of us live, and with the unbounded opportunities to see how other people live, through travel, television and the web, it is patently untenable to expect complete strangers to accept economic equality if it is forced onto them. So, for small groups of people who know each other and choose to live together, economic equality might be an achievable principle. But for large groups of strangers, I think we have to accept this is an unrealistic principle. Economic equality presumes pre-existing relationships based on trust, mutual respect, and choice, which are hard to achieve when you hardly know your neighbours and feel alienated from how your community is run.

The second moment was in thinking about how to square equality with individual differences. Equality is easy to believe in if you believe everyone is basically the same. The problem is that it is patently obvious that we are not all the same. Once you accept the existence of individual differences, this opens the door to some varieties of difference being better than others. 

Let's take the thorny subject of sex differences. If males have more testosterone than females, and if testosterone causes not only your beard to grow but also your muscles to grow stronger, it is just naïve to hold onto the idea that women and men are going to be starting on a level playing field in competitive sports where strength matters. This is just one example of how individual differences in hormonal levels can play havoc with the idea of biological equality.

Our new research suggests hormones like prenatal testosterone also affect how the mind develops. Higher levels of prenatal testosterone are associated with slower social and language development and reduced empathy. Higher levels of prenatal testosterone are also associated with more autistic traits, stronger interests in systems, and greater attention to detail. A few more drops of this molecule seem to be associated with important differences in how our minds work.

So, biology has little time for equality. This conclusion should come as no surprise, since Darwin's theory of evolution was premised on the existence of individual differences, upon which natural selection operates. In modern Darwinism such individual differences are the result of genetic differences, either mutations or polymorphisms in the DNA sequence. Given how hormones and genes (which are not mutually exclusive, genetic differences being one way in which differences in hormone levels come about) can put us onto very different paths in development, how can we believe in equality in all respects?

The other way in which biology is patently unequal is in the likelihood of developing different medical conditions. Males are sometimes referred to as the weaker sex because they are more likely to develop a whole host of conditions, among them autism (four boys for every one girl) and Asperger Syndrome (nine boys for every one girl). Given these risks, it becomes almost comical to believe in equality.

I still believe in some aspects of the idea of equality, but I can no longer accept the whole package. The question is, is it worth holding on to some elements of the idea if you've given up other elements? Does it make sense to have a partial belief in equality? Do you have to either believe in all of it, or none of it? My mind has been changed from my youthful starting point where I might have hoped that equality could be followed in all areas of life, but I still see value in holding on to some aspects of the principle. Striving to give people equality of social opportunity is still a value system worth defending, even if in the realm of biology, we have to accept equality has no place.


DAVID SLOAN WILSON
Biologist, Binghamton University; Author, Evolution for Everyone

I Missed the Complexity Revolution

In 1975, as a newly minted PhD who had just published my first paper on group selection, I was invited by Science magazine to review a book by Michael Gilpin titled Group Selection in Predator-Prey Communities. Gilpin was one of the first biologists to appreciate the importance of what Stuart Kauffman would call "the sciences of complexity." In his book, he was claiming that complex interactions could make group selection a more important evolutionary force than the vast majority of biologists had concluded on the basis of simpler mathematical models.

Some background: Group selection refers to the evolution of traits that increase the fitness of whole groups, compared to other groups. These traits are often selectively disadvantageous within groups, creating a conflict between levels of selection. Group selection requires the standard ingredients of natural selection: a population of groups that vary in their phenotypic properties in a heritable fashion, with consequences for collective survival and reproduction. Standard population genetics models give the impression that groups are unlikely to vary unless they are initiated by small numbers of individuals with minimal migration among groups during their existence. This kind of reasoning turned group selection into a pariah concept in the 1960s, taught primarily as an example of how not to think. I had become convinced that group selection could be revived for smaller, more ephemeral groups that I called "trait groups." Gilpin was suggesting that group selection could also be revived for larger, geographically isolated groups on the basis of complex interactions.

Gilpin focused on the most famous conjecture about group selection, advanced by V.C. Wynne-Edwards in 1962, that animals evolve to avoid overexploiting their resources. Wynne-Edwards had become an icon for everything that was wrong and naïve about group selection. Gilpin boldly proposed that animals could indeed evolve to "manage" their resources, based on non-linearities inherent in predator-prey interactions. As resource exploitation evolves by within-group selection, there is not a gradual increase in the probability of extinction. Instead, there is a tipping point that suddenly destabilizes the predator-prey interaction, like falling off a cliff. This discontinuity increases the importance of group selection, keeping the predator-prey interaction in the zone of stability.

I didn't get it. To me, Gilpin's model required a house of cards of assumptions, a common criticism leveled against earlier models of group selection. I therefore wrote a tepid review of Gilpin's book. I was probably also influenced by a touch of professional jealousy, since I was myself trying to acquire a reputation for reviving group selection!

I didn't get the complexity revolution until I read James Gleick's Chaos: Making a New Science, which I regard as one of the best books ever written about science for a general audience. Suddenly I realized that as complex systems, higher-level biological units such as groups, communities, ecosystems, and human cultures would almost certainly vary in their phenotypic properties and that some of this phenotypic variation might be heritable. Complexity theory became a central theme in my own research.

As one experimental demonstration, William Swenson (then my graduate student) created a population of microbial ecosystems by adding 1 ml of pond water from a single, well-mixed source to test tubes containing 29 ml of sterilized growth medium. This amount of pond water includes millions of microbes, so the initial variation among the test tubes, based on sampling error, was vanishingly small. Nevertheless, within four days (which amounts to many microbial generations) the test tubes varied greatly in their composition and phenotypic properties, such as the degradation of a toxic compound that was added to each test tube. Moreover, when the test tubes were selected on the basis of their properties to create a new generation of microbial ecosystems, there was a response to selection. We could select whole ecosystems for their phenotypic properties (in our case, to degrade a toxic compound), in exactly the same way that animal and plant breeders are accustomed to selecting individual organisms!

These results are mystifying in terms of models that assume simple interactions but make perfect sense in terms of complex interactions. Most people have heard about the famous "butterfly effect" whereby an infinitesimal change in initial conditions becomes amplified over the course of time for a complex physical system such as the weather. Something similar to the butterfly effect was occurring in our experiment, amplifying infinitesimal initial differences among our test tubes into substantial variation over time. A response to selection in the experiments is proof that variation caused by complex interactions can be heritable.
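
As a purely illustrative sketch (not a model of Swenson's experiment; the specific map and numbers are my own assumptions), a few lines of Python with the chaotic logistic map show how an infinitesimally small initial difference gets amplified over time:

```python
# Illustrative sketch: sensitivity to initial conditions in the chaotic
# logistic map (r = 4.0). Not a model of the microbial-ecosystem experiment,
# just a minimal demonstration of how tiny initial differences are amplified.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # one "test tube"
b = logistic_trajectory(0.200000001)   # a nearly identical neighbour

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: difference = {abs(a[t] - b[t]):.6f}")
# The difference starts at one part in a billion and grows to order 1
# within a few dozen iterations.
```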

Thanks in large part to complexity theory, evolutionary biologists are once again studying evolution as a multi-level process that can evolve adaptations above the level of individual organisms. I welcome this opportunity to credit Michael Gilpin for the original insight.


J. CRAIG VENTER
Human Genome Decoder; Director, The J. Craig Venter Institute; Author, A Life Decoded: My Genome: My Life.

The importance of doing something now about the environment.

Like many, or perhaps most, I wanted to believe that our oceans and atmosphere were basically unlimited sinks with an endless capacity to absorb the waste products of human existence. I wanted to believe that solving the carbon fuel problem was for future generations and that the big concern was the limited supply of oil, not the rate of adding carbon to the atmosphere. The data is irrefutable--carbon dioxide concentrations have been steadily increasing in our atmosphere as a result of human activity since the earliest measurements began. We know that on the order of 4.1 billion tons of carbon are being added to and staying in our atmosphere each year. We know that burning fossil fuels and deforestation are the principal contributors to the increasing carbon dioxide concentrations in our atmosphere. Eleven of the last twelve years rank among the warmest years since 1850. While no one knows for certain the consequences of this continuing unchecked warming, some have argued it could result in catastrophic changes, such as the disruption of the Gulf Stream, which keeps the UK out of an ice age, or even the possibility of the Greenland ice sheet sliding into the Atlantic Ocean. Whether or not these devastating changes occur, we are conducting a dangerous experiment with our planet, one we need to stop.

The developed world, including the United States, England and Europe, contributes disproportionately to the environmental carbon, but the developing world is rapidly catching up. As the world population increases from 6.5 billion people to 9 billion over the next 45 years and countries like India and China continue to industrialize, some estimates indicate that we will be adding over 20 billion tons of carbon a year to the atmosphere. Continued greenhouse gas emissions at or above current rates would cause further warming and induce many changes to the global climate that could be more extreme than those observed to date. This means we can expect more climate change: more ice cap melt, rising sea levels, warmer oceans and therefore greater storms, as well as more droughts and floods, all of which compromise food and fresh water production.

It required close to 100,000 years for the human population to reach 1 billion people on Earth in 1804. In 1960 the world population passed 3 billion, and now we are likely to go from 6.5 billion to 9 billion over the next 45 years. I was born in 1946, when there were only about 2.4 billion of us on the planet; today there are almost three people for each one of us in 1946, and there will soon be four.

Our planet is in crisis, and we need to mobilize all of our intellectual forces to save it. One solution could lie in building a scientifically literate society in order to survive. There are those who like to believe that the future of life on Earth will continue as it has in the past, but unfortunately for humanity, the natural world around us does not care what we believe. But believing that we can do something to change our situation using our knowledge can very much affect the environment in which we live.

NEIL GERSHENFELD
Physicist, MIT; Author, FAB

I've long thought of myself as working at the boundary between physical science and computer science; I now believe that that boundary is a historical accident and does not really exist.

There's a sense in which technological progress has turned it into a tautological statement. It's now possible to store data in atomic nuclei and use electron bonds as logic gates. In such a computer the number of information-bearing degrees of freedom is on the same order as the number of physical ones; it's no longer feasible to account for them independently. This means that computer programs can, and I'd argue must, look more like physical models, including spatial and temporal variables in the density and velocity of information propagation and interaction. That shouldn't be surprising; the canon of computer science emerged a few decades ago to describe the available computing technology, while the canon of physics emerged a few centuries ago to describe the accessible aspects of nature. Computing technology has changed more than nature has; progress in the former is reaching the limits of the latter.

Conversely, it makes less and less sense to define physical theories by the information technology of the last millennium (a pencil and a piece of paper); a computational model is every bit as fundamental as one written with calculus. This is seen in frontiers of research in nonlinear dynamics, quantum field theory, and black hole thermodynamics, which look more and more like massively parallel programming models. However, the organization of research has not yet caught up with this content; many of the pioneers doing the work are in neither Physics nor Computer Science departments, but are scattered around (and off) campus. Rather than trying to distinguish between programming nature and the nature of programming, I think that it makes more sense to recognize not just a technology or theory of information, but a single science.


PAUL SAFFO
Technology Forecaster

The best forecasters will be computers

When I began my career as a forecaster over two decades ago, it was a given that the core of futures research lay beyond the reach of traditional quantitative forecasting and its mathematical tools. This meant that futures researchers would not enjoy the full labor-saving benefits of number-crunching computers, but at least it guaranteed job security. Economists and financial analysts might one day wake up to discover that their computer tools were stealing their jobs, but futurists would not see machines muscling their way into the world of qualitative forecasting anytime soon.

I was mistaken. I now believe that in the not too distant future, the best forecasters will not be people but machines: ever more capable "prediction engines" probing ever deeper into stochastic spaces. Indicators of this trend are everywhere, from the rise of quantitative analysis in the financial sector, to the emergence of computer-based horizon-scanning systems in use by governments around the world, and of course the relentless advance of computer systems along the upward-sweeping curve of Moore's Law.

We already have human-computer hybrids at work in the discovery/forecasting space, from Amazon's Mechanical Turk, to the myriad online prediction markets.  In time, we will recognize that these systems are an intermediate step towards prediction engines in much the same way that human "computers" who once performed the mathematical calculations on complex projects were replaced by general-purpose electronic digital computers.

The eventual appearance of prediction engines will also be enabled by the steady uploading of reality into cyberspace, from the growth of web-based social activities to the steady accretion of sensor data sucked up by an exponentially growing number of devices observing and, increasingly, manipulating the physical world. The result is an unimaginably vast corpus of raw material, grist for the prediction engines as they sift and sort and peer ahead. These prediction engines won't ever exhibit perfect foresight, but as they and the underlying data they work on co-evolve, it is a sure bet that they will do far better than mere humans.


ALISON GOPNIK
Psychologist, UC-Berkeley; Coauthor, The Scientist in the Crib

Imagination is Real

Recently, I've had to change my mind about the very nature of knowledge because of an obvious, but extremely weird fact about children - they pretend all the time. Walk into any preschool and you'll be surrounded by small princesses and superheroes in overalls - three-year-olds literally spend more waking hours in imaginary worlds than in the real one. Why? Learning about the real world has obvious evolutionary advantages and kids do it better than anyone else. But why spend so much time thinking about wildly, flagrantly unreal worlds? The mystery about pretend play is connected to a mystery about adult humans - especially vivid for an English professor's daughter like me. Why do we love obviously false plays and novels and movies?

The greatest success of cognitive science has been our account of the visual system. There's a world out there sending information to our eyes, and our brains are beautifully designed to recover the nature of that world from that information. I've always thought that science, and children's learning, worked the same way. Fundamental capacities for causal inference and learning let scientists, and children, get an accurate picture of the world around them - a theory. Cognition was the way we got the world into our minds.

But fiction doesn't fit that picture - it's easy to see why we want the truth, but why do we work so hard telling lies? I thought that kids' pretend play, and grown-up fiction, must be a sort of spandrel, a side-effect of some other more functional ability. I said as much in a review in Science and got floods of e-mail back from distinguished novel-reading scientists. They were all sure fiction was a Good Thing - me too, of course - but they didn't seem any closer than I was to figuring out why.

So the anomaly of pretend play has been bugging me all this time. But finally, trying to figure it out has made me change my mind about the very nature of cognition itself.

I still think that we're designed to find out about the world, but that's not our most important gift. For human beings the really important evolutionary advantage is our ability to create new worlds. Look around the room you're sitting in. Every object in that room - the right-angled table, the book, the paper, the computer screen, the ceramic cup - was once imaginary. Not a thing in the room existed in the Pleistocene. Every one of them started out as an imaginary fantasy in someone's mind. And that's even more true of people - all the things I am, a scientist, a philosopher, an atheist, a feminist, all those kinds of people started out as imaginary ideas too. I'm not making some relativist post-modern point here; right now the computer and the cup and the scientist and the feminist are as real as anything can be. But that's just what our human minds do best - take the imaginary and make it real. I think now that cognition is also a way we impose our minds on the world.

In fact, I think now that the two abilities - finding the truth about the world and creating new worlds - are two sides of the same coin. Theories, in science or childhood, don't just tell us what's true - they tell us what's possible, and they tell us how to get to those possibilities from where we are now. When children learn and when they pretend, they use their knowledge of the world to create new possibilities. So do we, whether we are doing science or writing novels. I don't think anymore that Science and Fiction are just both Good Things that complement each other. I think they are, quite literally, the same thing.


JORDAN POLLACK
Computer Scientist, Brandeis University

Electronic Mail

I've changed my mind about electronic mail. When I first used email in graduate school in 1980, it was a dream. It was the most marvelous and practical invention of computer science. A text message quickly typed and reliably delivered (or its failure reliably reported) allowed a new kind of asynchronous communication. It was cheaper (free), faster, and much more efficient than mail, phone, or fax, with a roundtrip in minutes. Only your colleagues had your address, but you could find people at other places using "finger". Colleagues started sharing text-formatted data tables, where 50K bytes was a big email message!

Then came attachments. This hack to insert 8-bit binary files, bloated by 33%, inside of 7-bit text email opened a Pandora's box. Suddenly anyone had the right to send any size package for FREE, like a Socialized United Parcel Service. Microsoft Outlook made it "drag 'n drop" easy for bureaucrats to send Word documents. Many computer scientists saw the future and screamed "JUST SEND TEXT" but it was too late. Microsoft kept tweaking its proprietary file formats, forcing anyone with email to upgrade Microsoft Office. (I thought they finally stopped with Office 97, but now I am getting DOCX files, which might as well be in Martian!)
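
The 33% figure is the overhead of base64, the MIME encoding that makes 8-bit binary data safe for 7-bit text email by turning every 3 bytes into 4 ASCII characters. A minimal Python sketch (the 30 KB payload size is just an illustrative assumption) makes the arithmetic concrete:

```python
# Minimal sketch of why attachments "bloat" by roughly 33%: MIME wraps
# 8-bit binary data in base64, encoding every 3 bytes as 4 ASCII characters
# so the result can travel through 7-bit text email.

import base64
import os

payload = os.urandom(30_000)              # stand-in for a 30 KB attachment
encoded = base64.b64encode(payload)

overhead = (len(encoded) - len(payload)) / len(payload)
print(f"raw: {len(payload)} bytes, base64: {len(encoded)} bytes "
      f"(+{overhead:.0%})")               # prints roughly +33%
```

Real MIME messages add line wrapping and headers on top of this, so actual bloat is slightly higher still.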

We faced AOL newbies, mailing lists, free webmail, hotmail spam, RTF mail, chain letters, html mail, musical mail, flash mail, javascript mail, viruses, spybits, faked URLs, phishing, Nigerian cons, Powerpoint arms races, spam-blocking spam, viral videos, Plaxo updates, Facebook friendings, ad nauseam.

The worst part is the legal precedent that your employer "owns" the mail sent out over the network it provides. It is as if they owned the sound waves emitted from your throat over the phone. An idiot judgment leads to two Kafkaesque absurdities:

First, if you send email with an ethnic slur, or receive email with a picture of a naked child or a copyrighted MP3, you can be fired. Use email to organize a Union? Fugget about it! Second, all email sent and received must now be archived as critical business documents to comply with Sarbanes-Oxley. And Homeland Security wants rights to monitor ISP data streams and stores, hoping no warrants are needed for data older than 90 days.

Free Speech in the Information Age isn't your right to post anonymously on a soapbox blog or newspaper story. It means that, if we agree, I should be able to send any data in any file format, with any encryption, from a computer I am using to one you are on, provided we pay for the broadband freight. There is no reason that any government, carrier, or corporation should have any right to store, read, or interpret our digital communications. Show just cause and get a warrant, even if you think an employee is spying or a student is pirating music.

Email is now a nightmare that we have to wake up from. I don't have a solution yet, but I believe the key to re-imagining email is to realize that our computers and phones are "always on" the net. So we can begin with synchronous messaging (both sender and receiver are online) — a cross between file sharing, SMS texting, and instant messaging — and then add grid storage mechanisms for asynchronous delivery, multiple recipients, and reliability.

Until then, call me.

