Prologue

The AI dream didn’t kick off with microchips or code.

It began way before, with grand philosophical ambitions and some seriously imaginative leaps, all thanks to incredibly brilliant and diverse minds.

Our journey begins in the 17th century, with Gottfried Wilhelm Leibniz. By that point in his life, Leibniz was already a superstar: a philosopher, mathematician, logician, and diplomat, who invented (or discovered) calculus independently of Newton. He even devised much of the notation we still use today, including the elongated S we write as the integral symbol.

Leibniz was a true polymath, totally immersed in the Enlightenment’s big project of organizing all knowledge and reason. His drive wasn’t just academic; he genuinely believed that logic could settle every human argument. He imagined a world where disagreements weren’t resolved by yelling or endless debates, but by calm, undeniable calculation. Inspired by how algebra and calculus, by means of clever notation, could make even the trickiest problems appear simple, Leibniz fueled his grand dream of universal computation at a time when even the simplest calculating machines were considered a marvel.

What if, he mused, we could formalize all human reasoning in a similar way? He dreamed up a characteristica universalis—a universal language for thought—and a calculus ratiocinator—a mechanical way to reason with it. In this, Leibniz was a rationalist: he believed that all human thought was, at bottom, a grand logical machine. Unknowingly, he was laying the intellectual groundwork for modern formal logic, which paved the way for symbolic AI centuries later.

Fast forward to 19th-century England, and we find Lady Ada Lovelace. The daughter of the famously rebellious poet Lord Byron, Ada was a formidable intellect, tutored in math and science by some of the most prominent thinkers of her time. By the time she met Charles Babbage, the great inventor, he was working on his Analytical Engine, an abstract machine that could, in principle, do anything a modern computer can do. Ada was already known for her sharp mind and remarkable mathematical insight, but she also had a poetic, imaginative side that shaped her view of technology.

While Babbage saw his Analytical Engine mostly as the ultimate number-cruncher, Ada’s mind took flight beyond mere arithmetic. She famously wrote that the Engine “might compose elaborate and scientific pieces of music of any degree of complexity or extent.” She was more than a century ahead of Generative AI, dreaming of the day machines would usher in a new era of synthetic creativity.

As a side note, Charles Babbage would never finish constructing an actual, physical embodiment of his Analytical Engine. He kept imagining improvement upon improvement, never quite settling on something he could actually build and use. It would have been the first true computer, but it forever remained an unfinished project. This serves as a cautionary tale against the all too common syndrome, aptly called the Babbage Syndrome, of intellectualizing ad infinitum without ever testing your ideas in the real world.

A century later, in the mid-20th century, as the dust settled from war and the digital age dawned, came the man who is probably the most important figure in the history of Computer Science at large: the great Alan Turing. By the time his groundbreaking work on machine intelligence came out, Turing was already widely considered among the greatest logicians and mathematicians of his time.

He’s basically the Father of Computer Science, having come up with the abstract model of computation we know as the Turing Machine—the theoretical blueprint for every modern computer—and having proved not only its potential but also its intrinsic limitations. His wartime experience, where he played a key role in breaking the Enigma code, also gave him a very practical grasp of the power of computing. The electromechanical codebreaking machines he helped design at Bletchley Park were a massive milestone, yet they were kept secret for decades after his death.

Turing was a man of quiet brilliance. He wasn’t just curious about what a real machine could do; he was fundamentally wrestling with the very definition of thinking itself. In his famous 1950 paper, “Computing Machinery and Intelligence,” he dared to ask whether machines could really think. He proposed a brilliant, practical yet deeply philosophical way to assess it: what he called the Imitation Game, but what the world came to know as the Turing Test.

If a machine could chat with a human, he suggested, in such a way that the human couldn’t tell whether they were talking to a machine or to another human, then, for all intents and purposes, the machine could be considered to be thinking. This wasn’t just a practical experiment, though; it was a functional definition of thinking that helped spark the computational theory of mind. The implications of his hypothesis remain at the core of the most profound discussions in the Philosophy of Mind, even today.

But crucially, in that same 1950 paper, Turing looked beyond the test and tossed out several ideas for how such an artificial intelligence might actually be achieved. These included the concept of a learning machine, raised like a human child, soaking up knowledge from experience instead of being preprogrammed with everything beforehand; he even hinted at bio-inspired algorithms that mimic how evolution works.

These ideas foreshadowed major pillars of modern AI systems, like neural networks and metaheuristic search algorithms, showing his amazing foresight and his deep understanding of both rationalist and empiricist paths to intelligence. Tragically, he wouldn’t live to see his dream materialize into the massive body of knowledge and practice that is the field of Artificial Intelligence.