The following are Lanier's Laws for Putting Machines in their Place, distilled from comments I've posted on Edge over the years. They are all stolen from earlier laws that predate the appearance of computers by decades or centuries.
Lanier's First Law
Humans change themselves through technology.
Example: Lanier's Law of Eternal Improvement for Virtual Reality: Average human sensory perception will gain acuity over successive generations in tandem with the improving qualities of pervasive media technology.
Lanier's Second Law
Even though human nature is dynamic, you must find a way to think of it as being distinct from the rest of nature.
You can't have a categorical imperative without categories. Or: you can't have a golden rule without gold. You have to draw a Circle of Empathy around yourself and others in order to be moral. If you include too much in the circle, you become incompetent, while if you include too little, you become cruel. This is the "normal form" of the eternal liberal/conservative dichotomy.
Lanier's Third Law
You can't rely completely on the level of rationality humans are able to achieve to decide what to put inside the circle. People are demonstrably insane when it comes to attributing nonhuman sentience, as can be seen at any dog show.
Lanier's Fourth Law
Lanier's Law of AI Unrecognizability.
You can't rely on experiment alone to decide what to put in the circle. A Turing Test-like experiment can't be designed to distinguish whether a computer has gotten smarter or a person interacting with that computer has gotten stupider (usually by lowering or narrowing standards of human excellence in some way).
Lanier's Fifth Law
If you're inclined to put machines inside your circle, you can't rely on metrics of technological sophistication to decide which machines to choose. These metrics have no objectivity.
For just one example, consider Lanier's retelling of Parkinson's Law for the Post-dot-com Era: Software inefficiency and inelegance will always expand to the level made tolerable by Moore's Law. Put another way, Lanier's corollary to Brand's Laws: Whether Small Information wants to be free or expensive, Big Information wants to be meaningless.
Lanier's Sixth Law
When you must make a choice despite almost but not quite total uncertainty, work hard to make your best guess.
Best guess for the Circle of Empathy: the danger of increasing human stupidity is probably greater than the potential reality of machine sentience. Therefore, choose not to place machines in the Circle of Empathy.