Alan Turing and science through the algorithmic lens
BLOG: Heidelberg Laureate Forum
The Heidelberg Laureate Forum aims to connect young scholars in mathematics and computer science with illustrious pioneers who have been honoured by awards like the Abel Prize, Fields Medal, Nevanlinna Prize, and Turing Award. I wanted to take a moment to talk about the mathematicians these decorations are named after, and in particular to discuss Alan Turing’s contributions. As Scott Aaronson said on cstheory, asking for Alan Turing’s contributions to computer science “is a lot like asking for Newton’s contributions to physics, or Darwin’s to biology!” He is foundational.
Turing defined the Turing machine. He made the Halting problem explicit, and with it the limits of computation and of our understanding. In his PhD thesis, Turing introduced the central tools of computational complexity: oracles and relativization. He introduced concrete procedures like LU matrix decomposition (1948) and recognized the importance of the numerical stability of algorithms. In 1949, he took the recognition of error further, developing what would become unit testing. In 1952, he was the first to come up with a ‘paper algorithm’ for chess, opening a rich vein of problems for early AI. These are all things we now recognize as computer science, and many of them students cover in their undergraduate degrees.
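The Turing machine itself is simple enough to sketch in a few lines. Below is a minimal simulator; this is a hedged illustration, and the state names, tape encoding, and the bit-inverting example machine are my own assumptions, not Turing’s original formulation:

```python
# A minimal Turing machine simulator. The example machine below
# (an illustrative assumption, not Turing's own) inverts a binary
# string and halts when it reaches the first blank cell.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:  # no applicable rule: halt
            break
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules that flip each bit while moving right.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(invert, "0110"))  # -> 1001
```

The `max_steps` cap is there for a reason: by Turing’s own Halting problem, no general procedure can decide in advance whether an arbitrary machine will ever halt.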
However, I think that Turing’s biggest contribution was that he did not recognize disciplinary boundaries and he viewed all of science through the algorithmic lens.
During WW2, Turing used the idea of computation and electro-mechanical (as opposed to human) computers to help create the Turing–Welchman bombe and other tools and formal techniques for doing cryptanalysis. He started the transformation of cryptology — the art-form — into cryptography — the science — that Claude Shannon completed. Alan Turing viewed cryptology through algorithmic lenses. During his time at Bletchley Park, Turing also contributed to statistics, defining — with his assistant I. J. Good (1953) — the Good–Turing estimator for the probability of encountering an object of a hitherto unseen species, given a set of past observations of objects from different species.
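The core Good–Turing idea can be stated compactly: the probability that the next observation belongs to a hitherto unseen species is estimated as N1/N, where N1 is the number of species observed exactly once and N is the total number of observations. A minimal sketch of the estimator in its simplest form, without the smoothing refinements Good discusses:

```python
from collections import Counter

def good_turing_unseen(observations):
    """Good-Turing estimate of the probability that the next
    observation belongs to a previously unseen species: N1 / N,
    where N1 is the number of species seen exactly once and
    N is the total number of observations."""
    counts = Counter(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(observations)

# 7 sightings; species 'c' and 'd' were each seen once, so N1 = 2, N = 7.
sightings = ["a", "a", "a", "b", "b", "c", "d"]
print(round(good_turing_unseen(sightings), 3))  # -> 0.286
```

The striking feature is that the estimate depends only on the singletons: rare species in the sample stand in for the species not yet seen at all.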
In 1948, Turing followed his interest in the brain to create the first learning artificial neural network. Unfortunately, his manuscript was rejected by the director of the National Physical Laboratory and went unpublished until 1967, well after his death and the publication of other neural networks. Yet it predated both Hebbian learning (1949) and Rosenblatt’s perceptron (1957), which we typically associate with being the first neural networks. Turing foresaw the foundation of connectionism — still a huge paradigm in cognitive science — and of computational neuroscience. Alan Turing viewed the brain through algorithmic lenses.
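To give a flavour of the learning networks mentioned above, here is a minimal perceptron in the spirit of Rosenblatt (1957), learning the logical AND function. This is a sketch for illustration, not Turing’s own “unorganized machines”, and the learning rate and epoch count are arbitrary choices:

```python
# A minimal perceptron (Rosenblatt-style): a weighted threshold unit
# whose weights are nudged toward each training target.

def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_data])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this update rule finds a separating line in finitely many steps.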
In 1950, Turing published his famous “Computing machinery and intelligence” and launched AI. But the paper did not only launch a subfield of computer science; it also had a transformative effect on psychology and cognitive science, where cognition as computation on internal representations still remains the central dogma. Alan Turing viewed the mind through algorithmic lenses. Although he did not have time to push deeper into the social sciences, others have built up an algorithmic view of economics, finance, and other parts of the social sciences.
Finally, in 1952 Turing published The Chemical Basis of Morphogenesis, which has become his most cited work. In it, he asked (and started to answer) the question: how does a spherically symmetric embryo develop into a non-spherically symmetric organism under the action of symmetry-preserving chemical diffusion of morphogens? This work had a great impact on developmental biology, and people continue building on it today in fields as distant as cancer prevention. Shortly before his death, he was continuing this study by working on the basic ideas of what would become artificial life simulations, and on a more discrete, non-differential-equation treatment of biology. I wonder how he would have developed biology if he had had more time. Alan Turing started to view biology through algorithmic lenses.
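To make the setting concrete, here is a sketch of the diffusion half of a reaction-diffusion model: a single morphogen on a ring of cells, stepped with explicit finite differences (the cell count and diffusion rate are illustrative assumptions). Diffusion alone only smooths and preserves symmetry; Turing’s insight was that coupling two morphogens with different diffusion rates can instead amplify tiny asymmetries into stable patterns:

```python
# One morphogen diffusing on a ring of cells. Each step moves a cell's
# concentration toward the average of its two neighbours (discrete
# Laplacian). Total morphogen is conserved, and any spike spreads out.

def diffuse(cells, rate=0.2, steps=1):
    n = len(cells)
    for _ in range(steps):
        cells = [
            cells[i] + rate * (cells[(i - 1) % n] + cells[(i + 1) % n] - 2 * cells[i])
            for i in range(n)
        ]
    return cells

ring = [0.0] * 10
ring[0] = 1.0                  # a single concentration spike
out = diffuse(ring, steps=50)
print(round(sum(out), 6))      # total morphogen is conserved -> 1.0
print(max(out) < 1.0)          # the spike has spread out -> True
```

Adding a second, faster-diffusing inhibitor morphogen and a reaction term to this loop is what turns the smoothing dynamics above into Turing’s pattern-forming instability.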
Turing’s greatest contribution to computer science was showing that we can glean great insight by viewing science through the algorithmic lens. I am eager to see the young scholars at HLF honour his genius by continuing this work.
Good, I. J. (1953). The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4): 237-264.
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. Psychology Press.
Rosenblatt, F. (1957). The perceptron, a perceiving and recognizing automaton. Project Para. Cornell Aeronautical Laboratory.
Turing, A. M. (1948). Rounding-off errors in matrix processes. The Quarterly Journal of Mechanics and Applied Mathematics, 1(1): 287-308.
Turing, A. M. (1949). Checking a large routine. In The early British computer conferences (1989; pp. 70-72). MIT Press.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236): 433-460.
Turing, A. M. (1952). The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 237(641): 37-72.
Congratulations for seeing all the work of Alan Turing as united under a common theme: algorithms. And yes, Alan Turing already had the insight that algorithms can run not only on hardware using electromechanical or electronic switches, but also on biological hardware. As a computer science generalist, he foresaw many different computing architectures and nets of computing elements. Alan Turing was a mathematician and used mathematical thinking to explore computer science. But neither Turing nor any of his followers created a branch of mathematics which covers the central concepts of computer science (what are the central concepts anyway?). Such a branch of mathematics is perhaps not possible, or a thing of the future.
Can you expand a little bit more on these central concepts of computer science? What do you mean by that definition?
I see a computing system as a complex system of interacting parts that evolves its state towards a goal. The goal can be as simple as computing a function or as complex as maintaining a system of invariants under changing conditions or a changing environment.