Open up Ada Lovelace’s 1843 paper about Charles Babbage’s unbuilt Analytical Engine and, if you are geek enough and can cope with long 19th-century sentences, it is astonishingly readable today.

The Analytical Engine was entirely mechanical. Setting a heavy metal disc with ten teeth stored a digit, a stack of fifty such discs stored a fifty-digit number, and the store, or memory, would have contained 100 such stacks. A basic instruction to add two numbers moved them from the store to the mill, or CPU, where they would be added together, and moved back to a new place in the store to await further use: all mechanically. It was to be programmed with punched cards, representing variables and operations, with further elaborate mechanisms to move the cards around, and reuse groups of them when loops were needed. Babbage estimated that his gigantic machine would take three minutes to multiply two twenty-digit numbers.
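The store-and-mill cycle described above can be sketched in modern code. The names `Store` and `Mill` follow Lovelace's own terms; everything else here, from the class design to the column count, is a modern illustration rather than anything Babbage specified.

```python
# Illustrative sketch of the Analytical Engine's store-and-mill cycle.
# Column count and method names are modern inventions for clarity.

class Store:
    """Memory: 100 columns, each holding one (50-digit) number."""
    def __init__(self, columns=100):
        self.columns = [0] * columns

    def read(self, i):
        return self.columns[i]

    def write(self, i, value):
        self.columns[i] = value


class Mill:
    """Arithmetic unit: operands travel here, the result travels back."""
    def add(self, a, b):
        return a + b


# One 'add' instruction: fetch two operands from the store, combine
# them in the mill, and deposit the result in a new column to await use.
store, mill = Store(), Mill()
store.write(0, 12345)
store.write(1, 67890)
store.write(2, mill.add(store.read(0), store.read(1)))
print(store.read(2))  # 80235
```

The point of the sketch is the separation of concerns: the store knows nothing about arithmetic, and the mill knows nothing about where numbers live, which is exactly the division Lovelace's abstractions make visible.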

The paper is so readable because Lovelace describes the machine, not in terms of elaborate ironmongery, but using abstractions—store, mill, variables, operations and so on. These abstractions, and the relations between them, capture the essence of the machine by identifying the major components and the data that passes between them. They capture, in the language of the day, one of the core problems in computing then and now: exactly what can and cannot be computed with different machines. The paper identifies the elements needed “to reproduce all the operations which intellect performs in order to attain a determinate result, if these operations are themselves capable of being precisely defined” and these—arithmetic, conditional branching and so on—are exactly the elements that Turing needed nearly a hundred years later to prove his results about the power of computation.

You can’t point to a variable or an addition instruction in Babbage’s machine—only to the mechanical activities that represent them. What Lovelace could only tackle with informal explanation was made more precise in the 1960s, when computer scientists such as Oxford’s Dana Scott and Christopher Strachey used separate abstractions to model both the machine and the program running on it, so that precise mathematical reasoning could predict a program’s behavior. These concepts have been further refined as computer scientists like Samson Abramsky seek out more subtle abstractions, using advanced logic and mathematics to capture not only classical computers but quantum computation as well.
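The Scott–Strachey idea, that a program's meaning can be given as a mathematical function rather than by running a machine, can be illustrated with a toy example. The mini-language, tuple encoding, and function name below are invented for this sketch and are far simpler than their actual semantic theory.

```python
# Toy denotational-style semantics: the *program* is a syntax tree,
# the *state* is an environment mapping names to values, and meaning
# is a mathematical function from trees and environments to values.
# This mini-language is an invented illustration, not Scott-Strachey notation.

def meaning(expr, env):
    """Return the value the expression denotes in environment `env`."""
    kind = expr[0]
    if kind == "num":        # literal:     ("num", 3)
        return expr[1]
    if kind == "var":        # variable:    ("var", "x")
        return env[expr[1]]
    if kind == "add":        # sum:         ("add", e1, e2)
        return meaning(expr[1], env) + meaning(expr[2], env)
    if kind == "if":         # conditional: ("if", test, then, else)
        if meaning(expr[1], env):
            return meaning(expr[2], env)
        return meaning(expr[3], env)
    raise ValueError(f"unknown expression kind: {kind}")


# "if x then x + 1 else 0" — its meaning is computed, not executed on hardware.
program = ("if", ("var", "x"),
                 ("add", ("var", "x"), ("num", 1)),
                 ("num", 0))
print(meaning(program, {"x": 4}))  # 5
```

Because `meaning` is just a function, properties of programs become ordinary mathematical statements, which is what makes precise reasoning about behavior possible.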

Identifying a good abstraction for a practical problem is an art as well as a science. It means capturing the building blocks of a problem, and the elements connecting them, with just the right amount of detail, not too little and not too much, abstracting away the intricacies of each block’s internals so the designer need focus only on the elements that interact with other components. Jeannette Wing characterizes these kinds of skills as computational thinking, a concept that can be applied in many situations, not just programming.

Lovelace herself identified the wider power of abstraction and wrote of her ambition to understand the nervous system through developing “a law, or laws, for the mutual actions of the molecules of the brain.” And computer scientists today are indeed extending their techniques to develop suitable abstractions for this purpose.