David J. Bruton Centennial Professor of Computer Science, University of Texas at Austin; Author, Quantum Computing Since Democritus
How Widely Should We Draw The Circle?

For fifteen years, popular-science readers have gotten used to breathless claims about commercial quantum computers just around the corner. As far as I can tell, though, 2015 marked a turning point. For the first time, the most hard-nosed experimentalists are talking about integrating 40 or more high-quality quantum bits (“qubits”) into a small programmable quantum computer—not in the remote future, but in the next few years. If built, such a device will probably still be too small to do anything useful, but I honestly don’t care.

The point is, forty qubits are enough to do something that computer scientists are pretty sure would take trillions of steps to simulate using today’s computers. They’ll suffice to disprove the skeptics, to show that nature really does put this immense computing power at our disposal—just as the physics textbooks have implicitly predicted since the late 1920s. (And if quantum computing turns out not to be possible, for some deep reason? To me that’s unlikely, but even more exciting, since it would mean a revolution in physics.)

So then, is imminent quantum supremacy the “most interesting recent [scientific] news”? I can’t say that with any confidence. The trouble is, which news we find interesting depends on how widely we draw the circle around our own hobbyhorses. And some days, quantum computing seems to me to fade into irrelevance next to the precarious state of the earth. Perhaps when people look back a century from now, they’ll say that the most important science news of 2015 was that the West Antarctic Ice Sheet was found to be closer to collapse than even the alarmists predicted. Or, just possibly, they’ll say the most important news was that in 2015, the “AI risk” movement finally went mainstream.

This movement posits that superhuman artificial intelligence is likely to be built within the next century, and that the biggest problem facing humanity today is to ensure that, when the AI arrives, it will be “friendly” to human values (rather than, say, razing the solar system for more computing power to serve its inscrutable ends). I like to tease my AI-risk friends that I’ll be more worried about the impending AI singularity when my Wi-Fi stays working for more than a week. But who knows? At least this scenario, if it panned out, would render the melting glaciers pretty much irrelevant.

Instead of expanding my “circle of interest” to encompass the future of civilization, I could also contract it more tightly, around my fellow theoretical computer scientists. In that case, 2015 was the year that László Babai of the University of Chicago announced the first “provably fast” algorithm for one of the central problems in computing: graph isomorphism. This problem is to determine whether two networks of nodes and links are “isomorphic” (that is, whether they become the same if you relabel the nodes). For networks with N nodes, the best previous algorithm—which Babai also helped to discover, thirty years ago—took a number of steps that grew exponentially with the square root of N.

The new algorithm takes a number of steps that grows exponentially with a power of log(N) (a rate that’s called “quasi-polynomial”). Babai’s breakthrough probably has no applications, since the existing algorithms were already perfectly fast for any networks that would ever arise in practice. But for those who are motivated by an unquenchable thirst to know the ultimate limits of computation, this is arguably the biggest news so far of the twenty-first century.
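To see why the problem is hard in the first place, here is a naive brute-force isomorphism check in Python—my own illustrative sketch, emphatically not Babai's algorithm. It simply tries every one of the N! possible relabelings of the nodes, which is exactly the exponential blow-up that the clever algorithms avoid:

```python
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Naive graph-isomorphism test: try every relabeling of the n nodes.

    edges_a, edges_b: sets of frozenset({u, v}) edges on nodes 0..n-1.
    Takes up to n! steps -- fine for tiny graphs, hopeless beyond that.
    """
    if len(edges_a) != len(edges_b):
        return False
    for perm in permutations(range(n)):
        relabeled = {frozenset((perm[u], perm[v])) for u, v in edges_a}
        if relabeled == edges_b:
            return True
    return False

# A 4-cycle and a relabeled 4-cycle are isomorphic...
cycle_a = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
cycle_b = {frozenset(e) for e in [(0, 2), (2, 1), (1, 3), (3, 0)]}
# ...but a triangle with a pendant node (same number of edges) is not.
tri_pendant = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (0, 3)]}
```

For four nodes there are only 24 relabelings to try, but the count grows as N!, so even N = 20 is already out of reach for this approach.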

Drawing the circle even more tightly, in “quantum query complexity”—a tiny subfield of quantum computing that I cut my teeth on as a student—it was discovered this year that there are Boolean functions that a quantum computer can evaluate with fewer than the square root of the number of input accesses that a classical computer needs, beating the quadratic gap that had stood as the record since 1996. Even if useful quantum computers are built, this result will have zero applications, since the functions that achieve this separation are artificial monstrosities, constructed only to prove the point. But it excited me: it told me that progress is possible, that the seemingly eternal puzzles that drew me into research as a teenager do occasionally get solved. So damned if I’m not going to tell you about it.

At a time when the glaciers are melting, how can I justify getting excited about a new type of computer that will be faster for certain specific problems—let alone about an artificial function for which the new type of computer gives you a slightly bigger advantage? The “obvious” answer is that basic research could give us new tools with which to tackle the woes of civilization, as it’s done many times before. Indeed, we don’t need to go as far as an AI singularity to imagine how.

By letting us simulate quantum physics and chemistry, quantum computers might spark a renaissance in materials science, and allow (for example) the design of higher-efficiency solar panels. For me, though, the point goes beyond that, and has to do with the dignity of the human race. If, in millions of years, aliens come across the ruins of our civilization and dig up our digital archives, I’d like them to know that before humans killed ourselves off, we at least managed to figure out that the graph isomorphism problem is solvable in quasi-polynomial time, and that there exist Boolean functions with super-quadratic quantum speedups. So I’m glad to say that they will know these things, and that now you do too.