No individual, deterministic machine, however universal this class of machines is proving to be, will ever think in the sense that we think. Intelligence may be ever-increasing among such machines, but genuinely creative intuitive thinking requires non-deterministic machines that can make mistakes, abandon logic from one moment to the next, and learn. Thinking is not as logical as we think.
Non-deterministic machines, or, better yet, non-deterministic networks of deterministic machines, are a different question. We have at least one existence proof that such networks can learn to think. And we have every reason to suspect that, once invoked within an environment free of the time, energy, and storage constraints under which our own brains operate, this process will eventually lead to what Irving John (Jack) Good, in 1965, first described as an "intelligence explosion."
Until digital computers came along, nature used digital representation (as coded strings of nucleotides) for information storage and error correction, but not for control. The ability to make discrete, single-step modifications to instructions, a useful feature for generation-to-generation evolutionary mechanisms, becomes a crippling handicap for controlling day-to-day or millisecond-to-millisecond behavior in the real world. Analog processes are far more robust when it comes to real-time control.
We should be less worried about having our lives (and thoughts) controlled by digital computers and more worried about being controlled by analog ones. Machines that actually think for themselves, as opposed to simply doing ever-more-clever things, are more likely to be analog than digital, although they may be analog devices running as higher-level processes on a substrate of digital components, the same way digital computers were invoked as processes running on analog components, the first time around.
We are currently in the midst of an analog revolution, but for some reason it is a revolution that dares not speak its name. As we enter the seventh decade of arguing about whether digital computers can be said to think, we are surrounded by an explosive growth in analog processes whose complexity and meaning lie not in the state of the underlying devices or the underlying code but in the topology of the resulting networks and the pulse frequency of connections. Streams of bits are being treated as continuous functions, the way vacuum tubes treat streams of electrons, or neurons treat pulse frequencies in the brain.
Bottom line: I know that analog computers can think. I suspect that digital computers, too, may eventually start to think, but only by growing up to become analog computers first.
Real artificial intelligence will be intelligent enough to not reveal itself. Things will go better if people have faith rather than proof.