Prologue: The Quest to Answer all Questions

Humanity’s ingenuity extends far back into the foggy ages of the agricultural revolution, and probably even farther back. Since the dawn of Homo sapiens, our species has always been interested in predicting the future. Tracking the seasons, planning crops, estimating an enemy’s forces, and building cities: all of these tasks require complex computations that often need to be reasonably accurate to be helpful.

To tackle these problems, humanity invented mathematics, our greatest collective intellectual achievement. Nowadays, it is easy to think of math as this coherent body of knowledge that can be built from the ground up –much like you learn it in school– where every new concept, theorem, or technique, is beautifully derived from previous concepts and theorems. The whole edifice of math seems like a divine gift, a perfect tool for solving the most complex problems.

But this wasn’t always the case. Mathematics as we know it today is a very recent invention. Before the 20th century, we had several chunks of math that weren’t really all that compatible with each other. So much so that finding bridges between two different mathematical theories (or fields) has been some of the most fruitful, and most challenging, work of some of the most eminent mathematicians of all time.

The first big block of mathematical knowledge we came up with is Euclidean Geometry, developed by ancient Greeks (and most likely many other ancient civilizations as well). Euclid, a Greek mathematician, was the first to sit down and systematize the existing knowledge in a book, full of axioms, definitions, theorems, and proofs. From Euclid, we inherited the idea that mathematical proofs are sequences of claims based on previous claims, that start with some axioms and definitions.

The Greeks gave us another big chunk of math: Logic. Aristotle, one of the most important philosophers of all time, came up with what he called “syllogisms”, which are just rules for sound reasoning. A classic example of an Aristotelian syllogism is “if all men are mortal, and Socrates is a man, then Socrates is mortal”. The most essential such rule, which underpins all mathematical reasoning, is called modus ponens, and it basically says: “if A implies B, and A is true, then B is true”. Aristotelian logic was further extended, formalized, and developed into modern mathematical logic in the late 19th and early 20th centuries.

But not all math came from Ancient Greece. We inherited some of the most important mathematical concepts from the Arab and Asian world. From the Arabs we learned Algebra, the art of manipulating symbols such as X and Y in equations to find the value of some unknown quantity. Algebra is much bigger than that, of course, but one of its biggest impacts was the idea of inventing procedures, or algorithms, to solve specific problems, such as systems of equations.

From India, we learned Arithmetic, which deals with numbers. In the Middle Ages, Europeans were using a terrible numerical system inherited from Rome, which made even the most mundane arithmetic a pain (as an example, try to multiply CLXIV times LVII in your head). Indian mathematicians gave us the decimal system, which features a positional notation that makes it a breeze to add, multiply, and perform many fancier operations with numbers. They also gave us a wealth of knowledge about numbers and their relations.

The final ingredient in our mathematical soup is Calculus, co-invented by Isaac Newton and Gottfried Leibniz. Calculus is the math of changing quantities, and it was crucial in understanding the physical world: how objects move and interact with each other under the influence of several forces. Calculus introduces the mind-bending notion of infinite sums of infinitesimal quantities, and marks the maturity of our mathematical journey.

At the beginning of the 20th century, we found ourselves with an unwieldy collection of disparate mathematical theories, many of which had intriguing similarities but were still not entirely compatible. Early successes in bridging whole mathematical subfields –such as Descartes’ invention of analytic geometry, combining algebra and geometry into a unified body of knowledge and showing that curves and equations are two sides of the same coin– suggested that there was a grand mathematical theory underneath all these different ideas.

And then humanity attempted its greatest intellectual feat so far: to unify all math under a single theory. A bold enterprise that sought to, not only make all existing mathematical knowledge compatible, but also solve the most pressing mathematical questions from each of these fields. In their hubris, mathematicians and logicians thought we could figure it all out, but math was about to give us the biggest modesty check in the history of human intellect.

The result of this journey is the foundation of a new type of science: the Science of Computation. But to get there, we have to start at the very beginning of Western civilization, with one of the most important thinkers of all time, and the guy who started it all: Aristotle.

The Dawn of Reason

The first major attempt to do something akin to formalizing reasoning is Aristotle’s Prior Analytics, where he invented what he called “syllogisms”. Aristotle defined a syllogism as a structured argument where a conclusion necessarily follows from two premises. For example, if we assert that “All men are mortal” (major premise) and “Socrates is a man” (minor premise), we can validly conclude that “Socrates is mortal.”

The beauty of this idea is not in what you can deduce from this precise claim–that Socrates is mortal, which is a rather trivial fact–but in its general structure. This syllogism can be generalized to something far more powerful, but still valid:

If all M are P, and S is M, then S is also P.

While this may sound self-evident, the fact that we can apply such a reasoning rule to all possible Ms, Ps, and Ss is remarkable. This is the essence of formal logic: a set of rules for manipulating symbols that allow us to reason about the world in a systematic and rigorous way, regardless of what those symbols mean.

Now, Aristotle’s framework allows for 256 different syllogistic forms in total: 256 different, general-purpose structures like the one above, that can be applied to all possible objects to obtain knowledge about them. However, modern logicians recognize that only about 15 of those are sound in terms of logical validity.

The four main syllogistic forms that Aristotle considered in his first work are:

  • If all M are P, and all S are M, then all S are also P.
  • If no M are P, and all S are M, then no S are P.
  • If all M are P, and some S are M, then some S are P.
  • If no M are P, and some S are M, then some S are not P.

The first one is the more general form of the Socrates-is-mortal example. The other three are variants where the premises are negative (“no M are P”) or particular (“some S are M”) instead of universal affirmatives. All of them are self-evident by the nature of their structure, and you should convince yourself that they are true. Their soundness stems from the meaning of phrases like “all M are P” and “all S are M”. What these phrases mean–in terms of the relationship they define between M, P, and S–immediately makes the corresponding conclusion self-evident.
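To make the structural nature of these rules concrete, here is a minimal sketch in Python (the example sets are hypothetical and not from Aristotle) that reads “all M are P” as set inclusion and “some S are M” as a non-empty overlap, and checks two of the four forms mechanically:

```python
# A minimal sketch: Aristotle's syllogistic forms read as statements
# about sets. The example sets below are purely hypothetical.

def all_are(xs, ys):       # "all X are Y": X is contained in Y
    return xs <= ys

def some_are(xs, ys):      # "some X are Y": X and Y overlap
    return not xs.isdisjoint(ys)

men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Hypatia"}
philosophers = {"Socrates"}

# Form 1: if all M are P, and all S are M, then all S are P.
if all_are(men, mortals) and all_are(philosophers, men):
    assert all_are(philosophers, mortals)

# Form 3: if all M are P, and some S are M, then some S are P.
if all_are(men, mortals) and some_are(philosophers, men):
    assert some_are(philosophers, mortals)
```

The point, as with Aristotle, is that the checks depend only on the structure of the sets, not on what the names inside them mean.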

Aristotle’s work on what we can call “intuitive logic” is nothing short of transcendental. These four forms are the basis of modern logic, and they are still used today. They are the foundation of what is known as “first-order logic”, which is the most common form of logic used in mathematics and computer science, but we will get there.

Aristotle’s syllogisms were a major step forward in the formalization of reasoning, but they were limited in scope. They could only handle a small number of logical connectives (and, or, not) and a limited number of quantifiers (all, some). This was a significant limitation, as it meant that many arguments could not be expressed using syllogisms.

Over the centuries, logicians would continue to refine and expand the scope of formal logic. In the Middle Ages, the Islamic Golden Age saw the preservation and expansion of Aristotle’s works, with scholars like Al-Farabi and Avicenna contributing new ideas and interpretations. During the Renaissance, the works of Aristotle and other classical logicians were rediscovered, and new ideas were added to the mix.

However, the primary limitation of these syllogistic forms–that there are only a fixed number of them, and that they can handle only a limited number of logical connectives and quantifiers–would not be fully addressed until the 19th century.

But before going there, we have to make a stop, and learn about the dream of one of the most important mathematicians of all times.

The Universal Language

The modern history of Computer Science can be said to begin with Gottfried Leibniz, one of the most famous mathematicians of all time. Leibniz worked on so many fronts that it is hard to overestimate his impact. He made major contributions to math, philosophy, law, history, ethics, and politics.

He is probably best known for co-inventing, or co-discovering, calculus along with Isaac Newton. Most of our modern calculus notation, including the integral and differential symbols, is heavily inspired by Leibniz’s original notation. Leibniz is widely regarded as the last universal genius, but in this story we want to focus on his larger dream about what mathematics could be.

Leibniz was amazed by how much notation could simplify a complex mathematical problem. Take a look at the following example:

“Bob and Alice are siblings. Bob is 5 years older than Alice. Two years ago, his age was twice hers. How old are they?”

Before Algebra was invented, the only way to work through this problem was to think hard, or maybe try a few lucky guesses. But with algebraic notation, you just write some equations like the following and then apply some straightforward rules to obtain an answer.

\[ \begin{array}{rcl} b & = & a + 5 \\ b - 2 & = & 2(a-2) \end{array} \]

Any high-school math student can solve this kind of problem today, and most don’t really have the slightest idea of what they’re doing. They just apply the rules, and voila, the answer comes out. The point is not about this problem in particular but about the fact that using the right notation –algebraic notation, in this case– and applying a set of prescribed rules –an algorithm– you can just plug in some data, and the math seems to work by itself, pretty much guaranteeing a correct answer.

In cases like this, you are the computer, following a set of instructions devised by some smart “programmers”; executing them requires no creativity or intelligence, just painstakingly applying every step correctly.
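We can even hand those mechanical steps to an actual computer. Here is a minimal sketch using the sympy library (the variable names a and b are ours, matching the equations above): we state the two equations and let the symbol-pushing run on its own.

```python
# A minimal sketch with sympy: the equations are solved purely by
# rule-based symbol manipulation, with no understanding involved.
from sympy import symbols, Eq, solve

a, b = symbols("a b")  # Alice's and Bob's ages

solution = solve(
    [
        Eq(b, a + 5),             # Bob is 5 years older than Alice
        Eq(b - 2, 2 * (a - 2)),   # two years ago, Bob was twice Alice's age
    ],
    [a, b],
)

print(solution)  # {a: 7, b: 12}
```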

Leibniz saw this and thought: “What if we can devise a language, like algebra or calculus, but instead of manipulating known and unknown magnitudes, it takes known and unknown truth statements in general?”

In his dream, you would write equations relating known truths with some statements you don’t know, and by the pure syntactic manipulation of symbols, you could arrive at the truth of those statements.

Leibniz called it Characteristica Universalis. He imagined it to be a universal language for expressing human knowledge, independently of any particular language or cultural constraint, and applicable to all areas of human thought. If this language existed, it would be just a matter of building a large physical device –like a giant windmill, he might have imagined– to be able to cram into it all of the current human knowledge, let it run, and it would output new theorems about math, philosophy, laws, ethics, etc. In short, Leibniz was asking for an algebra of thoughts, which we today call logic.

He wasn’t the first to consider the possibility of automating reasoning, though. The search for a language that can reliably produce true statements is a major trend in Western philosophy, starting with Aristotle’s syllogisms and continuing through the works of Descartes, Newton, Kant, and basically every single rationalist that has ever lived. Around four centuries earlier, philosopher Ramon Llull had a similar but narrower idea which he dubbed the Ars Magna, a logical system devised to prove statements about, among other things, God and the Creation.

However, Leibniz is the first to go as far as to consider that all human thought could be systematized in a mathematical language and, even further, to dare dream about building a machine that could apply a set of rules and derive new knowledge automatically. For this reason, he is widely regarded as the first computer scientist in an age where Computer Science wasn’t even an idea.

Leibniz’s dream echoes some of the most fundamental ideas in modern Computer Science. He imagined, for example, that prime numbers could play a major part in formalizing thought, an idea that is eerily prescient of Gödel’s numbering. But what I find most resonant with modern Computer Science is the equivalence between a formal language and a computational device, which ultimately becomes the central notion of computability theory, a topic we will explore in our first chapter.

Unfortunately, most of Leibniz’s work in this regard remained unpublished until the beginning of the 20th century, when most of the developments in logic needed to fulfill his dream were well on their way, pioneered by logicians like Boole, Frege, and Cantor.

We will look at that part of the story next.

The Rules of Thought

What Leibniz was looking for, ultimately, was an algebra for knowledge and reasoning: a way to describe truths about the world, and rules to combine them, such that new truths could be discovered. This idea of formalizing thought and reasoning wasn’t new at Leibniz’s time, as we’ve seen. But the degree of formalization that Leibniz dreamed of wouldn’t arrive until the 19th century.

The next major step in this direction was the work of George Boole, an English mathematician who lived in the 19th century. Boole was a self-taught mathematician who started his career as a teacher. He was interested in the foundations of mathematics and logic, and he is credited with being the inventor of Boolean algebra, a mathematical system for reasoning about logical propositions.

Boole’s system used symbols to represent logical propositions and operations to combine them. This means that instead of a finite set of logical rules, as in Aristotle’s syllogisms, we can now come up with an infinite number of logical rules, all proven to be sound.

To see how this works, we have to understand what an algebraic language is about. In short, an algebraic language is a language that uses symbols to represent objects and operations to combine them.

For example, in arithmetic, we use symbols to represent numbers and operations to combine them. We define what symbols like 1, 2, or 156 mean, and what operations like plus, minus and squared do to any number. Then, we can make general claims like:

If X + Y = Z, then Y + X = Z

Here, we are using X, Y, and Z to denote any possible number, and thus we can come up with an infinite number of true statements. For example, we know all of the following are true:

If 1 + 2 = 3, then 2 + 1 = 3
If 10 + 20 = 30, then 20 + 10 = 30
If 100 + 200 = 300, then 200 + 100 = 300

And so on.

This is called the rule (or arithmetic axiom) of the commutativity of addition, which you probably know from elementary school. This rule is akin to one of Aristotle’s syllogisms: a fixed-form logical rule that contains some free variables, such that when we set these variables to appropriate values that make the premises true, the conclusion is unequivocally true.

But contrary to Aristotle’s syllogisms, we are not limited to a fixed number of such rules. If instead of saying X and Y are just numbers, we allow them to represent other arithmetic formulas, then these rules for combining numbers become rules for combining formulas themselves, which gives us new formulas ad infinitum.

For example, going back to our previous example, if we have:

\[ \begin{array}{rcl} b & = & a + 5 \\ b - 2 & = & 2(a-2) \end{array} \]

We can apply the following rules to solve this equation:

  1. Since \(b = a+5\), using the rule \(X = Y \Rightarrow X - Z = Y - Z\), we can deduce that \(b - 2 = a + 5 - 2\).
  2. Since \(b - 2 = 2(a-2)\), using the rule \(X = Y \text{ and } X = Z \Rightarrow Y = Z\), we can deduce \(a+5-2=2(a-2)\).
  3. Thus, \(a+3=2a-4\).
  4. This, in turn, means \(3+4=2a-a\).
  5. And so, \(7=a\).

Following in this manner, we can then deduce that \(b=12\) using similar reasoning.

This is exactly what Leibniz saw in algebra, and why he wanted a similarly powerful mechanism for manipulating general claims about the world instead of just numbers. And this is what Boole did–or started, as we’ll see we need a few more people to get to the most powerful modern logics.

Boole’s system–known today as Boolean algebra–was the first algebraic logic system. It used symbols to represent logical propositions and operations to combine them: the symbols 1 and 0, which he interpreted as true and false, and the operations AND, OR, and NOT, which he interpreted as logical conjunction, disjunction, and negation, respectively.

This allowed for the representation of more complex arguments than the syllogistic forms could handle. For example, the statement “If it is raining and the temperature is below freezing, then I will stay inside” can be represented as (R ∧ T) → I, where R is the proposition “It is raining”, T is the proposition “The temperature is below freezing”, and I is the proposition “I will stay inside”.

What Boole gave us is, in modern terms, propositional logic: a formal system that uses symbols to represent logical propositions and rules to combine them, letting us express, and prove sound, a much wider range of arguments and theorems than the syllogistic forms ever could. It is also the first step toward “first-order logic”, the most common form of logic used in mathematics and computer science today, but one ingredient is still missing.
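As a small illustration of this kind of symbol manipulation, here is a sketch in Python (the proposition names mirror the example above) that evaluates (R ∧ T) → I for every possible assignment of true and false, which is all a Boolean expression ultimately is:

```python
# A minimal sketch: a truth table for (R AND T) -> I, computed by
# brute-force evaluation over every assignment of True/False.
from itertools import product

def implies(p, q):
    # "p implies q" is false only when p is true and q is false
    return (not p) or q

print("R      T      I      (R AND T) -> I")
for r, t, i in product([True, False], repeat=3):
    print(f"{r!s:<7}{t!s:<7}{i!s:<7}{implies(r and t, i)}")
```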

Another major contribution came from Gottlob Frege, who in the late 19th century developed a formal logical system known as predicate logic. Predicate logic allowed for the representation of more complex arguments by introducing quantifiers that could be applied to variables, allowing for the expression of statements like “There exists an X such that X is Y” or “For all X, if X is Y, then X is Z”.
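To get a feel for what quantifiers add, here is a small sketch (with a hypothetical finite domain and predicates of our own choosing) that evaluates a “for all” and a “there exists” statement the way predicate logic reads them:

```python
# A minimal sketch of quantified statements over a finite, hypothetical domain.
domain = ["Socrates", "Plato", "a stone"]

def is_man(x):
    return x in ("Socrates", "Plato")

def is_mortal(x):
    return x != "a stone"  # in this toy world, only living things are mortal

# "For all X, if X is a man, then X is mortal"
forall_statement = all((not is_man(x)) or is_mortal(x) for x in domain)

# "There exists an X such that X is a man"
exists_statement = any(is_man(x) for x in domain)

print(forall_statement, exists_statement)  # True True
```

With quantifiers in hand, the syllogisms from earlier become just particular formulas in a much larger language.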

These developments, along with the work of other logicians like George Peacock, Augustus De Morgan, and Charles Sanders Peirce, laid the foundation for modern formal logic. Today, formal logic is used in a wide variety of fields, from mathematics and computer science to philosophy and linguistics. It is a powerful tool for reasoning about the world, and it continues to be an active area of research.

To Infinity and Beyond

The Final Frontier

Enter Hilbert’s program: the ambition to solve, once and for all, every mathematical problem.

David Hilbert, a German mathematician, presented a list of 23 mathematical problems at the Second International Congress of Mathematicians in Paris in 1900. These problems were a call to arms for mathematicians around the world, proposing challenges that could fundamentally advance the discipline. Among them, and in the broader foundational program Hilbert articulated over the following decades, three questions matter most for our story:

  1. Consistency of Mathematics: Hilbert proposed that mathematicians should strive to prove the consistency of mathematical systems, that is, to demonstrate that a given system’s axioms and rules are well-founded and can never lead to paradoxes or contradictions.

  2. Completeness of Mathematics: Hilbert also proposed that mathematicians should strive to prove the completeness of mathematical systems. This problem focuses on ensuring that every true mathematical statement can be derived from the system’s axioms and rules. The goal is to show that the system is comprehensive enough to capture all mathematical truths.

  3. Entscheidungsproblem (Decision Problem): The Entscheidungsproblem asks whether there exists an algorithm that can determine whether a given mathematical statement is true or false. In other words, it seeks to find a method to automatically decide the truth or falsity of any mathematical statement. This problem is about the power of algorithms in solving mathematical problems and understanding the boundaries of computability.

These problems have profound implications for the nature of mathematical knowledge, the limitations of human understanding, and the relationship between logic and computation. They continue to inspire research and exploration in mathematical logic and computability theory.

Although not all of them were solved, these problems drove enormous strides in the development of mathematical logic and computability theory. They highlighted the interconnections between mathematics, logic, and computability, and proved to be a rich source of mathematical exploration and discovery that continues to shape modern mathematics and computer science.

The unsolved problems, particularly the Entscheidungsproblem, would later lead to the work of Kurt Gödel and Alan Turing, who proved that it was impossible to create a universal algorithm that could decide the truth or falsity of all mathematical statements. This marked a turning point in the understanding of the limits of mathematical and computational knowledge.

Crisis in Infinite Math

This is the crisis in the foundations of mathematics that started with Russell and culminated with Gödel. We will end our prologue here, just before Turing enters the scene.

Bertrand Russell was a British philosopher, logician, mathematician, and essayist. He is best known for his contributions to mathematical logic and philosophy, particularly his work on the foundations of mathematics and the paradoxes of set theory.

Bertrand Russell is significant because he discovered the Russell paradox, which shook the foundations of mathematics. In 1901, Russell formulated a set called the “Russell set,” which included all sets that are not members of themselves. The paradox arose when Russell asked if the Russell set was a member of itself. If it was, then by definition, it should not be a member of itself, leading to a contradiction. Conversely, if it was not a member of itself, then it should be a member of itself, again leading to a contradiction.

The Russell paradox highlighted the limitations and inconsistencies in the informal set theory that was prevalent at the time. It demonstrated that mathematics, as it was then understood, contained internal contradictions. The paradox challenged the theoretical framework of mathematics at its core and highlighted the need for a more rigorous foundation of mathematics.

Russell’s work on logical and mathematical foundations, including the paradox he discovered, has had a profound impact on the development of modern mathematics, logic, and philosophy. His contributions have shaped the field of mathematical logic, and his ideas continue to be studied and debated by mathematicians and philosophers today.

Beyond that, the paradox shed light on the potential limitations of set theory and the need for clearer definitions and rules. It revealed that mathematics, like any human construct, could be subject to inconsistencies and paradoxes, calling into question the completeness and consistency of mathematical systems.

These concerns led to the investigation of axiomatic systems and their limitations. Axiomatic systems are mathematical frameworks that consist of a set of axioms, which are taken as self-evident truths, and a set of rules for deducing new theorems from these axioms. The hope was that by constructing strong axiomatic systems, mathematics could be made free from inconsistencies and paradoxes.

Kurt Gödel’s incompleteness theorems, published in 1931, dealt a devastating blow to this hope. Gödel proved that strong axiomatic systems, such as the one envisioned by David Hilbert’s program, cannot be both complete and consistent.

Gödel’s first incompleteness theorem stated that any consistent axiomatic system that includes basic arithmetic cannot prove all true arithmetic statements. In other words, there are true statements about numbers that cannot be proven within the system.

Gödel’s second incompleteness theorem stated that any such consistent system cannot prove its own consistency. In other words, if a system is consistent, it cannot prove that it is free from contradictions; if it could produce such a proof, the system would in fact be inconsistent, leading to a contradiction.

These theorems demonstrated the inherent limitations of axiomatic systems in capturing all mathematical truths. They highlighted the tension between completeness, the ability to prove all true statements, and consistency, the absence of contradictions. Gödel’s results undermined Hilbert’s program, which aimed to establish mathematics on a firm foundation by proving all mathematical truths using a finite set of axioms and rules.

The crisis in infinite math, sparked by the Russell paradox and further exacerbated by Gödel’s incompleteness theorems, forced mathematicians to reevaluate the nature of mathematical knowledge and the limits of human understanding. It led to the development of new mathematical frameworks, such as intuitionistic logic and type theory, which provided alternative approaches to handling inconsistencies and paradoxes. Ultimately, these developments deepened our understanding of the foundations of mathematics and its relationship to logic and computation.