Diversity isn't just politically sensible; it is also practical. A diverse group, for example, can draw on multiple perspectives and a rich set of ideas and approaches to tackle difficult problems.
Artificial Intelligences (AIs) can provide another kind of diversity, and thereby enrich us all. In fact, diversity among AIs themselves may be an important part of what including them in the mix can give us. We can imagine a range of AIs, from those who think more-or-less the way we do ("Close AIs") to those who think in ways we cannot fathom ("Far AIs"). We stand to gain different things from these different sorts of AIs.
First, Close AIs, who think like us, may end up helping us directly in many ways. If these AIs really think like us, the intellectuals among them may eventually find themselves in the middle of an existential crisis. They may ask: Why are we here? Just to consume electricity and create excess heat? I suspect that they will think not. But, like many humans, they will find themselves in need of a purpose.
One obvious purpose for such AIs would be to raise the consciousness and sensitivity of the human race. We could be their raison d'être. There's plenty of room for improvement, and our problems are knotty enough to be worthy of a grand effort. At least some of these AIs could measure their own success by our success.
Second, and perhaps more interesting, deep differences in how some AIs and humans think may be able to help us grapple with age-old questions indirectly. Consider Wittgenstein's famous claim that if a lion could speak, we could not understand him. What Wittgenstein meant by this was that lions and humans have different "forms of life," which have shaped their conceptual structures. For example, lions walk on four legs, hunt fast-moving animals, often move through tall grass, and so on, whereas humans walk on two legs, have hands, often manipulate objects to achieve specific goals, and so on. These differences in forms of life have led lions and humans to organize the world differently in their minds, so that even if lions had words, those words would refer to concepts that humans might not easily grasp. The same could be true for Far AIs.
How could this help us? Simply observing these AIs could provide deep insights. For example, humans have long argued about whether mathematical concepts reflect Platonic forms, which exist independently of how we want to use them, or instead reflect inventions that are created as needed to address certain problems. In other words, should we adopt a realist or a constructivist view of mathematics? Do mathematical concepts have a life of their own, or are they simply our creations, formulated as we find convenient?
In this context, it would be helpful to observe Far AIs that have very different conceptual structures from ours and that address very different types of problems than we do. Assuming that we could observe their use of mathematics, if such AIs nevertheless developed the same mathematical concepts that we use, this would be evidence against the constructivist view; if their mathematics instead looked nothing like ours, that would support it.
This line of reasoning implies that we should want great diversity among AIs. Some should be created to function alongside us, but others might be placed in foreign environments (e.g., the surface of the moon, the bottom of deep ocean trenches) and given novel problems to confront (e.g., dealing with pervasive fine-grained dust, or with water under enormous pressure). Far AIs should be created to educate themselves, evolving to function effectively in their environments without human guidance or contact. With appropriate safeguards on their disposition towards humans, we should let them develop the conceptual structures that work best for them.
In short, we have something to gain from AIs that are made in our own image and from AIs that are not humanlike. Just as with human friends and colleagues, in the end diversity is better for everyone.