Introduction: The Quest to Answer All Questions
Humanity’s ingenuity extends far back into the foggy ages of the agricultural revolution, and probably even farther. Since the dawn of Homo sapiens, our species has always been interested in predicting the future. Tracking the seasons, planning crops, estimating an enemy’s forces, building cities: all of these tasks require fairly complex computations that must be reasonably accurate to be useful.
To tackle these problems, humanity invented mathematics, our greatest collective intellectual achievement. Nowadays, it is easy to think of math as a coherent body of knowledge that can be built from the ground up –much like you learn it in school– where every new concept, theorem, or technique is beautifully derived from previous ones. The whole edifice of math seems like a divine gift, a perfect tool for solving the most complex problems.
But this wasn’t always the case. Mathematics as we know it today is a very recent invention. Before the 20th century, we had several chunks of math that weren’t really compatible with each other. So much so, that finding bridges between different mathematical theories (or fields) has been among the most fruitful and challenging work undertaken by some of the most eminent mathematicians of all time.
The first big block of mathematical knowledge we came up with was Euclidean Geometry, developed by the ancient Greeks (and most likely by many other ancient civilizations as well). Euclid, a Greek mathematician, was the first to sit down and systematize the existing knowledge in a single book, the Elements, full of axioms, definitions, theorems, and proofs. From Euclid, we inherited the idea that mathematical proofs are sequences of claims building on previous claims, starting from a handful of axioms and definitions.
The Greeks gave us another big chunk of math: Logic. Aristotle, one of the most important philosophers of all time, came up with what he called “syllogisms”, which are simply rules for sound reasoning. A classic example of an Aristotelian syllogism is “if all men are mortal, and Socrates is a man, then Socrates is mortal”. The most essential such rule, which underpins all mathematical reasoning, is called modus ponens, and it says, roughly: “if A implies B, and A is true, then B is true”. Aristotelian logic was further extended, formalized, and developed into modern mathematical logic in the late 19th and early 20th centuries.
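Written as an inference rule, in the notation modern logicians use, modus ponens puts the premises above the line and the conclusion that follows from them below it:

\[ \frac{A \to B \qquad A}{B} \]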
But not all math came from Ancient Greece. We inherited some of the most important mathematical concepts from the Arab and Asian worlds. From the Arabs we learned Algebra, the art of manipulating symbols such as X and Y in equations to find the value of some unknown quantity. Algebra is much bigger than that, of course, but one of its biggest impacts was the idea of inventing procedures, or algorithms, to solve specific classes of problems, such as systems of equations.
From India, we learned Arithmetic, which deals with numbers. In the Middle Ages, Europeans were using a terrible numerical system inherited from Rome, which made even the most mundane arithmetic a pain (as an example, try multiplying CLXIV by LVII in your head). Indian mathematicians gave us the decimal system, whose positional notation makes it a breeze to add, multiply, and perform all sorts of fancier operations on numbers. They also gave us a wealth of knowledge about numbers and their relations.
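To appreciate the difference, translate those Roman numerals into decimal: CLXIV is 164 and LVII is 57, and positional notation reduces the product to a couple of routine steps.

\[ 164 \times 57 = 164 \times 50 + 164 \times 7 = 8200 + 1148 = 9348 \]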
The final ingredient in our mathematical soup is Calculus, co-invented by Isaac Newton and Gottfried Leibniz. Calculus is the math of changing quantities, and it was crucial in understanding the physical world: how objects move and interact with each other under the influence of various forces. Calculus introduces the mind-bending notion of infinite sums of infinitesimal quantities, and it marks a coming of age in our mathematical journey.
At the beginning of the 20th century, we found ourselves with an unwieldy collection of disparate mathematical theories, many of which had intriguing similarities but were still not entirely compatible. Early successes in bridging whole mathematical subfields –such as Descartes’ invention of analytic geometry, which combined algebra and geometry into a unified body of knowledge and showed that curves and equations are two sides of the same coin– suggested that there was a grand mathematical theory underneath all these different ideas.
And then humanity attempted its greatest intellectual feat so far: to unify all of math under a single theory. It was a bold enterprise that sought not only to make all existing mathematical knowledge compatible, but also to solve the most pressing open questions in each of these fields. In their hubris, mathematicians and logicians thought we could figure it all out, but math was about to give us the biggest modesty check in the history of human intellect.
The result of this journey is the foundation of a new type of science: the science of computation. But to get there, we have to start a bit earlier, with the dream of one of the greatest mathematicians that ever existed.
Leibniz’s Dream
The modern history of Computer Science can be said to begin with Gottfried Leibniz, one of the most famous mathematicians of all time. Leibniz worked on so many fronts that it is hard to overestimate his impact. He made major contributions to math, philosophy, law, history, ethics, and politics.
He is probably best known for co-inventing, or co-discovering, calculus along with Isaac Newton. Most of our modern calculus notation, including the integral and differential symbols, is heavily inspired by Leibniz’s original notation. Leibniz is widely regarded as the last universal genius, but in this story we want to focus on his larger dream about what mathematics could be.
Leibniz was amazed by how much notation could simplify a complex mathematical problem. Take a look at the following example:
“Bob and Alice are siblings. Bob is 5 years older than Alice. Two years ago, he was twice her age. How old are they?”
Before Algebra was invented, the only way to work through this problem was to think hard, or maybe try a few lucky guesses. But with algebraic notation, you just write some equations like the following and then apply some straightforward rules to obtain an answer.
\[ \begin{array}{rcl} b & = & a + 5 \\ b - 2 & = & 2(a-2) \end{array} \]
Any high-school math student can solve this kind of problem today, and most don’t really have the slightest idea of what they’re doing. They just apply the rules, and voila, the answer comes out. The point is not this problem in particular, but the fact that, by using the right notation –algebraic notation, in this case– and applying a set of prescribed rules –an algorithm– you can just plug in some data and the math seems to work by itself, pretty much guaranteeing a correct answer.
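Here is that algorithm at work on the problem above: substitute the first equation into the second and simplify, one mechanical step at a time.

\[ \begin{array}{rcl} (a + 5) - 2 & = & 2(a - 2) \\ a + 3 & = & 2a - 4 \\ a & = & 7, \qquad b = a + 5 = 12 \end{array} \]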
In cases like this, you are the computer, following a set of instructions devised by some smart “programmers”: instructions that require no creativity or intelligence, just the painstaking care to apply every step correctly.
Leibniz saw this and thought: “What if we could devise a language, like algebra or calculus, but one that, instead of manipulating known and unknown magnitudes, operates on known and unknown truth statements in general?”
In his dream, you would write equations relating known truths to statements whose truth you don’t yet know, and by the pure syntactic manipulation of symbols, you could arrive at the truth of those statements.
Leibniz called it the Characteristica Universalis. He imagined it as a universal language for expressing human knowledge, independent of any particular language or cultural constraint, and applicable to all areas of human thought. If this language existed, it would be just a matter of building a large physical device –like a giant windmill, he might have imagined– cramming into it all of current human knowledge, letting it run, and watching it output new theorems about math, philosophy, law, ethics, and so on. In short, Leibniz was asking for an algebra of thoughts, which we today call logic.
He wasn’t the first to consider the possibility of automating reasoning, though. The search for a language that can reliably produce true statements is a major thread in Western philosophy, starting with Aristotle’s syllogisms and continuing through the works of Descartes, Newton, Kant, and basically every rationalist who has ever lived. Around four centuries before Leibniz, the philosopher Ramon Llull had a similar but narrower idea, which he dubbed the Ars Magna: a logical system devised to prove statements about, among other things, God and the Creation.
However, Leibniz was the first to go as far as to consider that all human thought could be systematized in a mathematical language and, even further, to dare to dream about building a machine that could apply a set of rules and derive new knowledge automatically. For this reason, he is widely regarded as the first computer scientist, in an age when Computer Science wasn’t even an idea.
Leibniz’s dream echoes some of the most fundamental ideas in modern Computer Science. He imagined, for example, that prime numbers could play a major part in formalizing thought, an idea that is eerily prescient of Gödel’s numbering. But what I find most resonant with modern Computer Science is the equivalence between a formal language and a computational device, which ultimately became the central notion of computability theory, a topic we will explore in our first chapter.
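As a rough illustration of the prime-number idea (a sketch in Python, not anything Leibniz or Gödel actually wrote), a finite sequence of symbols can be packed into a single integer by assigning one prime to each position; unique factorization then guarantees that the original sequence can always be recovered.

```python
def primes():
    """Yield prime numbers 2, 3, 5, 7, ... by simple trial division."""
    found = []
    candidate = 2
    while True:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1


def godel_number(symbols):
    """Encode a sequence of positive integers as one integer:
    raise the i-th prime to the i-th symbol and multiply everything."""
    number = 1
    for prime, symbol in zip(primes(), symbols):
        number *= prime ** symbol
    return number


def decode(number):
    """Recover the sequence by factoring out each prime in turn."""
    symbols = []
    for prime in primes():
        if number == 1:
            break
        exponent = 0
        while number % prime == 0:
            number //= prime
            exponent += 1
        symbols.append(exponent)
    return symbols


# The sequence [3, 1, 4] becomes 2**3 * 3**1 * 5**4 = 15000, and back again.
print(godel_number([3, 1, 4]))  # 15000
print(decode(15000))            # [3, 1, 4]
```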
Unfortunately, most of Leibniz’s work in this regard remained unpublished until the beginning of the 20th century, by which time most of the developments in logic needed to fulfill his dream were already well on their way, pioneered by logicians like Gottlob Frege, Georg Cantor, and Bertrand Russell.
We will look at that part of the story next.
The Rules of Thought
The exploration of logical thought begins with Aristotle, whose development of syllogisms laid the foundation for rational thinking. In his work Prior Analytics, Aristotle defined a syllogism as a structured argument where a conclusion necessarily follows from two premises. For example, if we assert that “All men are mortal” (major premise) and “Socrates is a man” (minor premise), we can validly conclude that “Socrates is mortal.” This formalization of reasoning was revolutionary, establishing a method for drawing conclusions and clarifying thought that would influence philosophical discourse for centuries.
During the Dark Ages, the progress of logic did not stagnate; rather, it flourished in unexpected ways. Scholars in the Islamic Golden Age preserved and expanded upon Aristotle’s works, introducing new ideas and interpretations that would later permeate European thought. Figures like Al-Farabi and Avicenna enriched logical inquiry, ensuring that Aristotle’s legacy endured even as Europe grappled with its own intellectual challenges. When the Renaissance emerged, it was this rich tapestry of knowledge that would help rekindle interest in classical logic.
George Boole entered this intellectual landscape in the mid-19th century, creating an algebra for logic that transformed how we understand reasoning. Boole’s work did for logic what Descartes had accomplished for geometry: he provided a systematic way to manipulate logical statements through algebraic expressions. By representing logical operations—such as conjunctions and disjunctions—using symbols, Boole enabled a level of precision in reasoning that was previously unattainable. His innovations marked a significant step toward formalizing logic as a mathematical discipline.
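In one standard modern rendering of Boole’s idea (not his exact original notation), truth values are just the numbers 0 and 1, and the connectives become ordinary arithmetic on them: conjunction is multiplication, negation is subtraction from 1, and disjunction can be expressed in terms of both.

\[ \begin{array}{rcl} x \wedge y & = & x \cdot y \\ \neg x & = & 1 - x \\ x \vee y & = & x + y - x \cdot y \end{array} \]

The law Boole himself emphasized, \( x^2 = x \), holds precisely when x is 0 or 1, capturing in algebra the idea that a statement is either true or false.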
Following Boole, Gottlob Frege expanded the scope of logic by moving from propositional logic to predicate logic. Frege’s introduction of quantifiers allowed discussions about properties and relationships among objects, enabling statements about universality (e.g., “All men”) and existence (e.g., “There exists a man”). This extension was crucial for addressing more complex logical structures and paved the way for modern mathematical logic. Frege’s work highlighted the need for a rigorous foundation in logic, setting the stage for future developments in the field.
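To see what the quantifiers buy us, here is the Socrates syllogism rewritten in predicate logic, in the spirit of Frege’s system (though in modern notation rather than his original two-dimensional script):

\[ \forall x \, (\mathrm{Man}(x) \to \mathrm{Mortal}(x)), \quad \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates}) \]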
Finally, Georg Cantor took on one of the most profound challenges in mathematics: the study of infinity. He introduced the concept of transfinite numbers and explored different sizes of infinity, fundamentally altering our understanding of sets. Cantor’s groundbreaking work on set theory allowed him to confront infinity directly, leading to insights that were both revolutionary and unsettling. His struggle with these concepts ultimately took a toll on his mental health, illustrating the profound impact that grappling with such abstract ideas can have on one’s psyche.
The Final Frontier
Here enters David Hilbert, with his program to place all of mathematics on a firm formal footing and, in principle, to solve every mathematical problem.
Crisis in Infinite Math
The crisis in mathematics begins with Russell’s paradox and culminates in Gödel’s incompleteness theorems. We end the prologue here, just before Turing enters the scene.