Progress requires the Pareto Optimization of Competitiveness and Informativeness
The simple idea that Nature is "Red in Tooth and Claw" lends a religious fervor to those promoting Competition as the right organizing principle for open-ended innovation, e.g. in Laissez Faire Capitalism, government procurement, Social Darwinism, personnel review, and even high-stakes educational testing.
Through the use of mathematical and computer models of learning, we discovered that competition between learning agents does not lead to open-ended progress. Instead, it leads to boom-bust cycles, winner-take-all monopolies, and oligarchic groups who collude to block progress. Unfortunately, cooperation (collaborative learning, altruism) fails as well, leading to weak systems easy to invade or corrupt.
The exciting new "law" is that progress can be sustained among self-interested agents when both competitiveness and informativeness are rewarded. A chess master who wins every game, like one who loses every game, provides no information about the strengths and weaknesses of other agents, while an informative agent, like a teacher, provides opportunity and motivation for further progress. We predict that this law will be found in Nature, and that it will have ramifications for building new learning organizations.
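A minimal sketch of the idea, assuming competitiveness and informativeness can each be reduced to a scalar score (the agent names and numbers below are purely illustrative): selection keeps the Pareto front over both objectives, so a pure champion, a pure teacher, and a balanced agent all survive, while an agent dominated on both axes does not.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    competitiveness: float  # e.g., win rate against peers
    informativeness: float  # e.g., how well its games distinguish peers

def dominates(a: Agent, b: Agent) -> bool:
    """a Pareto-dominates b: at least as good on both objectives,
    strictly better on at least one."""
    at_least = (a.competitiveness >= b.competitiveness and
                a.informativeness >= b.informativeness)
    strictly = (a.competitiveness > b.competitiveness or
                a.informativeness > b.informativeness)
    return at_least and strictly

def pareto_front(agents: list[Agent]) -> list[Agent]:
    """Agents not dominated by any other agent survive selection."""
    return [a for a in agents
            if not any(dominates(b, a) for b in agents if b is not a)]

agents = [
    Agent("champion", 0.95, 0.10),  # wins everything, reveals little
    Agent("teacher",  0.40, 0.90),  # loses often, but distinguishes peers
    Agent("balanced", 0.70, 0.60),
    Agent("weak",     0.30, 0.20),  # dominated by "balanced"
]
front = pareto_front(agents)
```

Under pure competition only "champion" would be kept; the two-objective front also preserves "teacher" and "balanced", the agents that supply the gradient for further learning.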
A measurement of innovation rate.
There is no accepted measure of the rate at which creative processes, such as art, natural evolution, companies, and computer programs, innovate.
Consider a black box that takes in energy and produces bit-strings. The complexity of a bit-string is not simply its length, because a long string of all 1's or all 0's is quite simple. Kolmogorov measures complexity by the size of the smallest program listing that can generate a string, and Bennett's Logical Depth also accounts for the cost of running the program. But these fail on the Mandelbrot Set, a very beautiful set of patterns arising from a one-line program listing. What of life itself, the result of a simple non-equilibrium chemical process baking for quite a long time? Different algorithmic processes (including fractals, natural evolution, and the human mind) "create" by operating as a "Platonic Scoop," instantiating "ideals" into physical arrangements or memory states.
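The intuition that a long uniform string is simple while random noise is not can be illustrated with a common practical proxy for Kolmogorov complexity (which itself is uncomputable): the length of the string after strong compression. This is a sketch of the intuition, not a measure endorsed by the text.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Proxy for Kolmogorov complexity: bytes remaining after
    maximum-effort zlib compression."""
    return len(zlib.compress(data, level=9))

n = 10_000
uniform = b"1" * n  # long but trivially describable: "print '1' 10,000 times"

random.seed(0)      # seeded so the example is reproducible
noisy = bytes(random.getrandbits(8) for _ in range(n))  # incompressible noise

# uniform compresses to a few dozen bytes; noisy stays near its full length.
```

The limitation the text points out survives in the proxy: the pixels of the Mandelbrot Set compress poorly, even though the program that generates them is one line long.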
So to measure innovation rate (in POLLACKS) we divide P, the product novelty (assigned by an observer with memory), by L, the program listing size, and C, the cost of runtime, space, and energy.
Platonic Density = P / (L × C)
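The formula can be written directly as code. The numbers below are hypothetical, chosen only to show why the ratio favors generative processes: a short fractal generator and a verbatim lookup table producing the same image get the same novelty P, but different L and C.

```python
def platonic_density(novelty: float, listing: float, cost: float) -> float:
    """Innovation rate in pollacks: product novelty P divided by
    program listing size L times runtime/space/energy cost C."""
    return novelty / (listing * cost)

# Hypothetical scores: same observed novelty, different means of production.
fractal = platonic_density(novelty=100.0, listing=1.0, cost=10.0)
lookup  = platonic_density(novelty=100.0, listing=100.0, cost=1.0)
```

The tiny-listing fractal scores an order of magnitude higher than the lookup table, matching the intuition that a one-line Mandelbrot program is more innovative than a stored copy of its output.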
Pollack's Law of Robotics
Start over with Pinball Machines.
Moore's law existed before computers; it is just the economics of scale with zero labor. If enough demand can justify capital investment in fully automated factories, then the price of a good approaches the cost of its raw materials, energy dissipated, and (patent/copyright) monopoly tax. Everyone knows Moore's law has led to ultra-small-cheap integrated circuits. But why don't we have ultra-small-cheap mechanical parts?
Pollack's law of Robotics states that we won't get a Moore's law for electro-mechanical systems until we return to the age of the Pinball Machine and bootstrap the manufacture of general-purpose integrated mechatronics, reducing scale from macro through meso and MEMS. Leaping directly to nano is likely to fail.