We've had that idea for a long time but we've recently become much more comfortable with it, living as we do in a world of abstract artifacts, where they now jump promiscuously from medium to medium. It's no longer a big deal to go from the score to the music that you hear live to the recorded version of the music. You can jump back and forth between media very rapidly now. It's become a fact of life. It never used to be like this. It used to be hard work to get things from one form to another. It's not hard work any more, it's automatic. You eliminate the middle man. You no longer have to have the musician to read the score, to produce the music. This removal of all the hard work in translating from one medium to another makes it all the more natural to populate your world with abstractions, because you find it's hard to keep track of what medium they're in. It doesn't matter much any more. You're interested in the abstraction, not the medium. Where'd you get that software? Did you go to a store and buy a physical CD and put it in your computer, or did you just download it off the Web? It's the same software one way or another. It doesn't really matter. This idea of medium neutrality is one of the essential ideas of software, or of algorithms in general. And it's one that we're becoming familiar with, but it's amazing to me how much friction there still is, how much resistance there still is, to this idea.

An algorithm is an abstract process that can be defined over a finite set of fundamental procedures, an instruction set. It is a structured array of such procedures. That's a very generous notion of algorithm—more generous than many mathematicians would like, because I would include by that definition algorithms that may be in some regards defective. Consider your laptop. There's an instruction set for that laptop, consisting of all the different basic things that your laptop's CPU can do; each basic operation has a digital name or code, and every time that bit-sequence occurs, the CPU tries to execute that operation. You can take any bit sequence at all, and feed it to your laptop, as if it were a program. Almost certainly, any sequence that isn't designed to be a program to run on that laptop won't do anything at all — it'll just crash. Still, there's utility in thinking that any sequence of instructions, however buggy, however stupid, however pointless, should be considered an algorithm, because one person's buggy, dumb sequence is another person's useful device for some weird purpose, and we don't want to prejudge that question. (Maybe that "nonsense" was included in order to get the laptop to crash at just the point it crashed!) One can define a more proper algorithm as one which runs without crashing. The only trouble is that if you define algorithm that way, then probably you don't have any on your laptop, because there's almost certainly a way to make almost every program on your laptop crash. You just haven't found it yet. Bug-free software is an ideal that's almost never achieved.
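The point that any bit sequence counts as a program, and that almost all of them crash, can be sketched with a toy machine. Everything below is invented for illustration: the instruction set, the opcode values, and the `Crash` exception are hypothetical, not any real CPU's.

```python
import random

# A toy machine with a tiny instruction set (all names hypothetical).
# Each opcode is one byte; any other byte makes the machine "crash".
PUSH, ADD, MUL, HALT = 0x01, 0x02, 0x03, 0xFF

class Crash(Exception):
    """Raised on an undefined opcode, stack underflow, or missing operand."""

def run(program: bytes) -> int:
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == PUSH:                      # next byte is a literal operand
            if pc >= len(program):
                raise Crash("PUSH with no operand")
            stack.append(program[pc])
            pc += 1
        elif op in (ADD, MUL):
            if len(stack) < 2:
                raise Crash("stack underflow")
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == ADD else a * b)
        elif op == HALT:
            return stack[-1] if stack else 0
        else:
            raise Crash(f"undefined opcode {op:#04x}")
    raise Crash("ran off the end of the program")

# A deliberately designed program: (2 + 3) * 4
good = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
print(run(good))  # 20

# Arbitrary byte sequences are still "programs": they just crash.
random.seed(0)
crashes = 0
for _ in range(1000):
    try:
        run(bytes(random.randrange(256) for _ in range(16)))
    except Crash:
        crashes += 1
print(crashes)  # the vast majority of random sequences crash
```

Note that by the generous definition above, the crashing sequences are still algorithms; it is only the stricter "runs without crashing" definition that rules them out, and even `good` could be crashed by truncating it.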

Looking at the world as if everything is a computational process is becoming fashionable. Here one encounters not an issue of fact, but an issue of strategy. The question isn't, "What's the truth?" The question is, "What's the most fruitful strategy?" You don't want to abandon standards and count everything as computational, because then the idea loses its sense. It doesn't have any grip any more. How do you deal with that? One way is to try to define, in a rigid centralist way, some threshold that has to be passed, and say we're not going to call it computational unless it has properties A, B, C, D, and E. That's fine, you can do that in any number of different ways, and that will save you the embarrassment of having to say that everything is computational. The trouble is that anything you choose as a set of defining conditions is going to be too rigid. There are going to be things that meet those conditions that are not interestingly computational by anybody's standards, and there are things that are going to fail to meet the standards, which nevertheless you see are significantly like the things that you want to consider computational. So how do you deal with that? By ignoring it, by ignoring the issue of definition, that's my suggestion. Same as with life! You don't want to argue about whether viruses are alive or not; in some ways they're alive, in some ways they're not. Some processes are obviously computational. Others are obviously not computational. Where does the computational perspective illuminate? Well, that depends on who's looking at the illumination.

I describe three stances for looking at reality: the physical stance, the design stance, and the intentional stance. The physical stance is where the physicists are, it's matter and motion. The design stance is where you start looking at the software, at the patterns that are maintained, because these are designed things that are fending off their own dissolution. That is to say, they are bulwarks against the second law of thermodynamics. This applies to all living things, and also to all artifacts. Above that is the intentional stance, which is the way we treat that specific set of organisms and artifacts that are themselves rational information processing agents. In some regards you can treat Mother Nature (that is, the whole process of evolution by natural selection) from the intentional stance, as an agent, but we understand that that's a façon de parler, a useful shortcut for getting at features of the design processes that are unfolding over eons of time. Once we get to the intentional stance, we have rational agents, we have minds, creators, authors, inventors, discoverers, and everyday folks, interacting on the basis of their take on the world.

Is there anything above that? Well, in one sense there is. People, or persons, as moral agents, are a specialized subset of the intentional systems. All animals are intentional systems. Parts of you are intentional systems. You're made up of lots of lesser intentional systems — homunculi of sorts — but unless you've got multiple personality disorder, there's only one person there. A person is a moral agent, not just a cognitive agent, not just a rational agent, but a moral agent. And this is the highest level that I can make sense of. And why it exists at all, how it exists, the conditions for its maintenance, are very interesting problems. We can look at game theory as applied to the growth of trees: they compete for sunlight, in a game in which there are winners and losers. But when we look at game theory as applied not just to rational agents, but to people with a moral outlook, we see some important differences. People have free will. Trees don't. It's not an issue for trees in the way it is for people.
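The competition-for-sunlight game can be made concrete with a toy payoff matrix. The numbers below are purely illustrative, not drawn from any real ecological model: each tree either grows tall (costly) or stays short (cheap); if both stay short, both get full light cheaply, but a tall tree shades a short neighbor.

```python
# Illustrative payoffs (row player's payoff listed first).
# If both stay short, both do well cheaply; if both grow tall,
# both pay the cost of height for no extra light.
PAYOFF = {
    ("short", "short"): (3, 3),
    ("short", "tall"):  (0, 4),
    ("tall",  "short"): (4, 0),
    ("tall",  "tall"):  (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes the row player's payoff
    against a fixed opponent move."""
    return max(("short", "tall"),
               key=lambda move: PAYOFF[(move, opponent_move)][0])

# Growing tall is the best response to either move, so both trees
# end up tall: a worse outcome for each than mutual restraint.
print(best_response("short"), best_response("tall"))  # tall tall
```

The point of the sketch is the contrast the text draws: trees are locked into this equilibrium by selection, with no deliberation involved, whereas moral agents playing a structurally similar game can recognize the trap and choose differently.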
