An alternative
might be this: A cybernetic model of a phenomenon can never be the sole
favored model, because we can't even build computers that conform to such
models. Real computers are completely different from the ideal computers
of theory. They break for reasons that are not always analyzable, and
they seem to intrinsically resist many of our endeavors to improve them,
in large part due to legacy and lock-in, among other problems. We imagine
"pure" cybernetic systems but we can only prove we know how to build fairly
dysfunctional ones. We kid ourselves when we think we understand something,
even a computer, merely because we can model or digitize it.
There is also an epistemological problem that bothers me, even though my colleagues by and large are willing to ignore it. I don't think you can measure the function or even the existence of a computer without a cultural context for it. I don't think Martians would necessarily be able to distinguish a Macintosh from a space heater.
The above disputes ultimately turn on a combination of technical arguments about information theory and philosophical positions that largely arise from taste and faith.
So I try to augment my positions with pragmatic considerations, and some of these will begin to appear in my thoughts on...
Belief #2: That people are no more than cybernetic patterns
Every cybernetic totalist fantasy relies on artificial intelligence. It might not immediately be apparent why such fantasies are essential to those who have them. If computers are to become smart enough to design their own successors, initiating a process that will lead to God-like omniscience after a number of ever swifter passages from one generation of computers to the next, someone is going to have to write the software that gets the process going, and humans have given absolutely no evidence of being able to write such software. So the idea is that the computers will somehow become smart on their own and write their own software.