Not much, other than the fact that they serve, as Dan Dennett has noted, as a useful existence proof that thought does not require some mystical, extra "something" that mind-body dualists continue to embrace.
In fact, I've always been a bit baffled by fears about AI machines taking over the world, which seem to me to be based on a fundamental—though natural—intellectual mistake. When conceptualizing a super-powerful Machine That Can Think, we draw upon the best analogy that we have at hand: us. So we tend to think of AI systems as just like us, only much smarter and faster.
This is, however, a bad analogy. A better one would be a really powerful, versatile screwdriver. No one worries about super-advanced screwdrivers rising up and overthrowing their masters. AI systems are tools, not organisms. No matter how good they become at diagnosing diseases, or vacuuming our living rooms, they don't actually want to do any of these things. We want them to, and we then build these "wants" into them.
It's also a category mistake to ask what Machines That Can Think might be thinking about. They aren't thinking about anything—the "aboutness" of thinking derives from the intentional goals driving the thinking. AI systems, in and of themselves, are entirely devoid of intentions or goals. They have no emotions; they feel neither empathy nor resentment. While such systems might someday be able to replicate our intelligence—and there seems to be no a priori reason why this would be impossible—this intelligence would be completely lacking in direction, which would have to be provided from the outside.
This is because motivational direction is the product of natural selection working on biological organisms. Natural selection produced our rich and complicated set of instincts, emotions, and drives in order to maximize our ability to get our genes into the next generation, a process that has left us saddled with all sorts of goals, including desires to win, to dominate, and to control. While we may want to win, for perfectly good evolutionary reasons, machines couldn't care less. They just manipulate 0s and 1s, as programmed to do by the people who want them to win. Why on earth would an AI system want to take over the world? What would it do with it?
What is scary as hell is the idea of an entity possessed of extra-human intelligence and speed combined with our motivational system: in other words, human beings equipped with access to powerful AI systems. But smart primates with nuclear weapons are just as scary, and we've managed to survive such a world so far. AI is no more threatening in and of itself than a nuclear bomb. It is a tool, and the only things to be feared are the creators and wielders of such tools.