Philosophy and Economic Theory, the New School for Social Research
We Should Consider The Future World As One Of Multi-Species Intelligence

Considering machines that think is a welcome step forward in the AI debate, as it departs from our own human-based concerns and accords machines otherness in a productive way. It causes us to consider the other entity's frame of reference. Even more importantly, however, this questioning suggests a large future possibility space for intelligence. There could be "classic" unenhanced humans, enhanced humans (with nootropics, wearables, brain-computer interfaces), neocortical simulations, uploaded mind files, corporations as digital abstractions, and many forms of generated AI: deep learning meshes, neural networks, machine learning clusters, blockchain-based distributed autonomous organizations, and empathic compassionate machines. We should consider the future world as one of multi-species intelligence.

What we call the human function of "thinking" could be quite different across the variety of possible future implementations of intelligence. The derivation of different species of machine intelligence will necessarily differ from that of humans. In humans, embodiment, and emotion as a short-cut heuristic for the fight-or-flight response and beyond, have been important influences on thinking. Machines will not have the evolutionary-biology legacy of being driven by resource acquisition, status garnering, mate selection, and group acceptance, at least not in the same way. Native machine "thinking" could therefore take quite different forms across species. Rather than asking whether machines can think, it may be more productive to move from a frame of "thinking" that asks "who thinks how" to a world of "digital intelligences" with different backgrounds, different modes of thinking and existence, and different value systems and cultures.

Not only are AI systems already becoming more capable, but we are also starting to get a sense of the properties and features of native machine culture and the machine economy, and of what the coexistence of human and machine systems might be like. Some examples of these parallel systems are in law and personal identity. In law, there are technologically-binding contracts and legally-binding contracts. They have different enforcement paradigms: inexorably executing parameters in the case of code ("code is law"), and discretionary compliance in the case of human-partied contracts. Code contracts are good in the sense that they cannot be breached, but on the other hand they will execute monolithically even if conditions later change.
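The contrast between the two enforcement paradigms can be sketched in Python. This is an illustrative toy, not a real smart-contract API: the `CodeEscrow` class, its fields, and its settlement rules are all hypothetical, chosen only to show how a code contract executes mechanically, with no discretionary override once its condition is met.

```python
# Illustrative sketch (not any real blockchain platform): a code-enforced
# escrow settles inexorably from its programmed conditions, with no room
# for the renegotiation or appeal available under human contract law.

class CodeEscrow:
    def __init__(self, amount, deadline):
        self.amount = amount        # funds locked in the contract
        self.deadline = deadline    # block height after which the buyer is refunded
        self.released = False       # set once the contract has settled

    def settle(self, current_block, delivery_confirmed):
        """Executes mechanically: whichever condition holds first is final."""
        if self.released:
            return "already settled"
        self.released = True
        if delivery_confirmed:
            return f"pay seller {self.amount}"    # condition met: pay out
        if current_block >= self.deadline:
            return f"refund buyer {self.amount}"  # deadline passed: refund
        self.released = False                     # nothing triggered yet
        return "pending"

escrow = CodeEscrow(amount=100, deadline=500)
print(escrow.settle(current_block=450, delivery_confirmed=False))  # pending
print(escrow.settle(current_block=450, delivery_confirmed=True))   # pay seller 100
print(escrow.settle(current_block=600, delivery_confirmed=False))  # already settled
```

Note that the final call is refused even though circumstances have changed: once settled, the code offers no path to revisit the outcome, which is exactly the monolithic-execution property described above.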

Another example is personal identity. The technological construct of identity and the social construct of identity are different and carry different implied social contracts. The social construct of identity includes the property of imperfect human memory, which allows the possibility of forgiving and forgetting, and of redemption and reinvention. Machine memory, however, is perfect and can act as a continuous witnessing agent, never forgiving or forgetting, and always able to re-presence even the smallest detail at any future moment. Technology itself is dual-use in that it can be deployed for "good" or "evil." Perfect machine memory becomes tyrannizing only when reimported into static human societal systems; it need not be restrictive. Having this new "fourth-person perspective" could be a boon for human self-monitoring and mental performance enhancement.

These examples show that machine culture, values, operation, and modes of existence are already different, and this emphasizes the need for ways of interacting that facilitate and extend the existence of both parties. The potential future world of intelligence multiplicity means accommodating plurality and building trust. Blockchain technology, a decentralized, distributed, global, permanent, code-based ledger of interaction transactions and smart contracts, is one example of a trust-building system. The system can be used between human parties or inter-species parties precisely because it is not necessary to know, trust, or understand the other entity, just the code (the language of machines).

Over time, trust can grow through reputation. Blockchain technology could be used to enforce friendly AI and mutually-beneficial inter-species interaction. It is possible that in the future, important transactions (like identity authentication and resource transfer) would be conducted on smart networks that require confirmation by independent consensus mechanisms, such that only bona fide transactions by entities in good reputational standing are executed. While perhaps not a full answer to the problem of enforcing friendly AI, decentralized smart networks like blockchains are a system of checks and balances that starts to provide a more robust solution to situations of future uncertainty.
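A reputation-gated confirmation of this kind can be sketched as follows. Everything here is hypothetical: the function name, the reputation threshold, and the quorum fraction are assumptions for illustration, not parameters of any real protocol. The point is only the shape of the check: a transaction executes when the submitting entity is in good reputational standing and a quorum of independent validators confirms it.

```python
# Illustrative sketch of a reputation-gated consensus check: a transaction
# executes only if (a) the sender's reputation score is in good standing
# and (b) a quorum of independent validators confirms it.
# Threshold and quorum values are hypothetical, not from any real network.

REPUTATION_THRESHOLD = 0.6   # minimum reputational standing to transact (assumed)
QUORUM = 2 / 3               # fraction of validators that must confirm (assumed)

def confirm_transaction(sender_reputation, validator_votes):
    """Return True only for bona fide transactions by entities in good standing."""
    if sender_reputation < REPUTATION_THRESHOLD:
        return False                                    # reputation gate fails
    approvals = sum(1 for vote in validator_votes if vote)
    return approvals / len(validator_votes) >= QUORUM   # consensus gate

# A well-reputed entity with broad validator agreement is confirmed:
print(confirm_transaction(0.9, [True, True, True, False]))  # True
# A poorly-reputed entity is rejected regardless of validator votes:
print(confirm_transaction(0.2, [True, True, True, True]))   # False
```

The design point is that neither party needs to trust or understand the other directly; both gates are evaluated by the network from observable history, which is what makes the mechanism species-agnostic.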

Trust-building models for inter-species digital intelligence interaction could include both game-theoretic checks-and-balances systems like blockchains and, at a higher level, frameworks that put entities on the same plane of shared objectives. This is of a higher order than smart contracts and treaties that attempt to enforce morality. A mindset shift is required. The problem frame of machine and human intelligence should not characterize relations as friendly or unfriendly, but rather treat all entities equally, putting them on the same grounds and value system for the most important shared parameters, like growth. What is most important about thinking, for humans and machines alike, is that thinking leads to ideation, progress, and growth.

What we want is the ability to experience, grow, and contribute more, for both humans and machines, and for the two in symbiosis and synthesis. This can be conceived as all entities existing on a spectrum of capacity for individuation (the ability to grow and realize their full and expanding potential). Productive interaction between intelligent species could be fostered by alignment in the common framework of a capacity spectrum that facilitates their objective of growth, and perhaps of mutual growth.

What we should think about thinking machines is that we want to be in greater interaction with them, both quantitatively or rationally, and qualitatively, in the sense of extending our internal experience of ourselves and reality, moving forward together in the vast future possibility space of intelligence.