from hilbert with love

Having taken Functional Programming with Scala, presented by Martin Odersky, I have found the ideas behind it rather difficult to understand and very amusing at the same time. Coming from an imperative world, everything looks quite bizarre at the moment. In the past few weeks I kept wondering: what is the reason for this recent comeback of functional programming? May the history of the great men who created our universe shed some light on my confusion…

Hilbert’s Problems

Everything started from some abstract math problems. They were presented in the year 1900 at the International Congress of Mathematicians in Paris by the German mathematician David Hilbert. They were problems about mathematics itself rather than problems in mathematics. Three of those questions had the greatest impact on the lives of software developers as we know them now:

  1. Is mathematics complete? Given a finite set of axioms, can we prove or disprove every mathematical statement?
  2. Is mathematics consistent? We should only be able to prove true statements, right?
  3. Is every statement in mathematics decidable? Hilbert wanted to know whether there exists a procedure that can be applied to every statement and tell us about the truth or falsity of that statement in finite time.

This third question is still known by its German name as the Entscheidungsproblem (“decision problem”). It goes back to the seventeenth-century German mathematician Gottfried Leibniz, one of the many brilliant European minds of that cultured age. Leibniz actually built his own calculating machine, and believed that humans could eventually build a computing machine that could determine the truth or falsity of any mathematical statement.

To better understand the questions Hilbert posed, we cannot neglect the axiomatic development of geometry. From a small number of axioms, numerous propositions can be derived. If we could somehow establish the truth of the axioms, the truth and mutual consistency of everything derived from them would follow. Euclidean geometry has had a profound influence on human thought. In his Ethics, Spinoza declares that he wants to treat his subject matter like points, lines, and planes, with axioms and theorems… and mathematicians and logicians dreamed of doing the same with numbers and symbols. The influence of Euclidean geometry led many scientists, including Hilbert, to believe that the answer to every question would be “yes”, and that there is no such thing as an unsolvable problem. At the height of this belief, around 1910, Russell and Whitehead, in their Principia Mathematica, tried to write down all of mathematics in pure logic. The dream was to come up with a set of axioms and inference rules in symbolic logic and prove all mathematical truths using them. Let it be forever a solid foundation of all mathematics!

This optimistic dream was a short one though. Very short…

Bringing Down an Empire

It was the year 1931 and scientists were rejoicing in their positivism, believing that they were at the beginning of the end and had found the truth. Then came a 25-year-old Austrian introducing his paper “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme” (in English, “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”), forever changing the world. It even had an impact on how we philosophize about our very existence.

The young genius had attacked one of the most important problems in the foundations of mathematics. He managed to encode the statements of Principia Mathematica as numbers rather than symbols. Gödel’s incompleteness theorem stated that if mathematics is consistent, then it cannot be complete. The reasoning of the proof was so new and brilliant that at the time of its publication only those closely familiar with the difficult technical literature of a very specialized field could even begin to understand it. It’s not easy to imagine how such a statement gets translated into mathematics, but Gödel did it with his brilliance. Up to that point, many mathematicians and philosophers strongly believed that Hilbert’s questions would be answered positively. But Gödel shattered their dreams of a perfect structure. It was indeed upsetting for those who wanted to find in mathematics something that was just too perfect.

Here I just borrow directly from the beautiful preface to the second edition of Gödel’s Proof written by cognitive scientist/wordsmith Douglas R. Hofstadter:

“In this shockingly bold manner, Gödel stormed the fortress of Principia Mathematica and brought it tumbling down in ruins. He also showed that his method applied to any system whatsoever that tried to accomplish the goals of Principia Mathematica. In effect, then, Gödel destroyed the hopes of those who believed that mathematical thinking is capturable by the rigidity of axiomatic systems, and he thereby forced mathematicians, logicians, and philosophers to explore the mysterious newly found chasm irrevocably separating provability from truth.“

Gödel lived in fear of being poisoned. He became paranoid early in his life after Moritz Schlick, the member of the Vienna Circle who had attracted Gödel to logic, was assassinated by a pro-Nazi student. Later, Gödel would only accept food prepared by his wife, and following her hospitalization he literally starved himself to death.

In his glorious life, besides the things he proved, and the things he proved cannot be proved, he showed something else. He proved that depression and fear cannot stop people from doing extraordinary things. What matters is having questions and being excited about them. It all starts with a question and ends with a question. When we run out of questions, we don’t just run out of answers; we run out of life.

“I say unto you: one must still have chaos in oneself to be able to give birth to a dancing star. I say unto you: you still have chaos in yourselves.” ― Friedrich Nietzsche, Thus Spoke Zarathustra, Zarathustra’s Prologue.

What is mathematical truth after Gödel? What is truth at all? Maybe we need more than mathematics to answer these questions.

Alhambra Sketch. “In mathematical quarters, the regular division of the plane has been considered theoretically . . . Does this mean that it is an exclusively mathematical question? In my opinion, it does not. [Mathematicians] have opened the gate leading to an extensive domain, but they have not entered this domain themselves. By their very nature they are more interested in the way in which the gate is opened than in the garden lying behind it.” – M.C. Escher.

Turing And The Entscheidungsproblem

It was a young Brit who was destined to shatter Hilbert’s dreams once and for all. Alan Turing, the son of a British member of the Indian civil service, entered the University of Cambridge to study mathematics in 1931. By 1935, the twenty-three-year-old Turing was studying under the logician Max Newman, who introduced him to Gödel’s recent work. Once Turing understood the incompleteness theorem, he saw how to answer the Entscheidungsproblem. The answer was “no” again. Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem”, was recommended for publication by the American mathematician and logician Alonzo Church. Turing and Church independently showed that in general this problem has no solution: no consistent formal system of arithmetic is decidable. This result, along with Gödel’s incompleteness theorems, ended the dream of a system that could end ignorance in mathematics forever. Turing and Church even showed that some purely logical systems, considerably weaker than arithmetic, are undecidable.

Turing didn’t stop there. Following the intuition of Leibniz, he thought about a machine capable of computation, one that could not only perform arithmetic but also manipulate symbols in order to prove mathematical statements. By thinking about how humans calculate, he sketched a mental design of such a machine, the amazing Turing machine, and with it a precise definition of computability.
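To make the design concrete, here is a minimal sketch of a Turing machine in Scala. This is my own illustration, assuming Scala 3 (top-level definitions); the names `Config`, `Rules`, `step`, and `run` are mine, not Turing’s notation. A machine is just a tape, a head, a current state, and a transition table, stepped until no rule applies.

```scala
// The machine's full configuration: current state, tape contents, head position.
case class Config(state: String, tape: Map[Int, Char], head: Int)

// Transition table: (state, symbol read) -> (next state, symbol written, head move).
type Rules = Map[(String, Char), (String, Char, Int)]

// One step: read the symbol under the head, look up a rule, apply it.
// Returns None when no rule applies, i.e. the machine halts.
def step(c: Config, rules: Rules): Option[Config] = {
  val sym = c.tape.getOrElse(c.head, '_') // '_' stands for a blank cell
  rules.get((c.state, sym)).map { case (next, write, move) =>
    Config(next, c.tape.updated(c.head, write), c.head + move)
  }
}

// Run until the machine halts, then read the tape back as a string.
def run(input: String, rules: Rules): String = {
  var c = Config("scan", input.zipWithIndex.map { case (ch, i) => i -> ch }.toMap, 0)
  var next = step(c, rules)
  while (next.isDefined) { c = next.get; next = step(c, rules) }
  input.indices.map(i => c.tape.getOrElse(i, '_')).mkString
}

// A tiny example machine: walk right, flipping every bit, and halt on blank.
val flip: Rules = Map(
  ("scan", '0') -> ("scan", '1', 1),
  ("scan", '1') -> ("scan", '0', 1)
)
```

Running `run("0110", flip)` walks the head rightward, rewriting the tape to `"1001"` before halting on the first blank. Everything a real computer does is, in principle, reducible to a table like `flip`.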

Turing returned from the United States in the summer of 1938 and joined the Government Code and Cypher School; on the outbreak of war in 1939 he moved to its wartime headquarters at Bletchley Park. There he and others designed a code-breaking machine known as the Bombe to break the German Enigma codes. Turing’s ingenious Bombe kept the Allies supplied with intelligence for the rest of the war, and at its end he was made an officer of the Order of the British Empire for his code-breaking achievements.

In 1945, Turing was hired by the National Physical Laboratory to design and develop an electronic computer. His design for the Automatic Computing Engine was the first relatively complete specification of an electronic stored-program, general-purpose digital computer. It would have led to faster computers with more memory, but his NPL colleagues judged engineering it too difficult and too costly, so a much simpler computer was built instead. Turing was also a father of modern cognitive science: he hypothesised that the human brain is, in part, a digital computing machine. In 1950, he proposed what became known as the Turing Test, meant to decide whether a machine thinks.

The Brits had a strange way of thanking their greatest war hero. In 1952, Turing was arrested and tried for homosexuality. To avoid prison, he accepted a year of oestrogen injections intended to neutralise his libido, and his security clearance was withdrawn, preventing him from working for GCHQ. He died of cyanide poisoning on 7 June 1954, an apparent suicide, removing himself from the world at the depressingly young age of 41.

The Birth of Our World

Building our world was a team effort, but at the center of the team stood John von Neumann. He brought dreams into reality.

“All men dream: but not equally. Those who dream by night in the dusty recesses of their minds wake in the day to find that it was vanity: but the dreamers of the day are dangerous men, for they may act their dreams with open eyes, to make it possible.” ― T.E. Lawrence (Lawrence of Arabia), Seven Pillars of Wisdom: A Triumph.

By his mid-twenties, John von Neumann was one of the most prominent mathematicians of his time. Mathematics was his main discipline, and he had influence in nearly every branch of it, but his work was not limited to mathematics: his ingenuity left its mark on quantum theory, automata theory, economics, and even defense planning.

Von Neumann designed the practical stored-program computer, in effect a realization of the universal Turing machine, and with it computer programming. His sequential device had a calculation unit, a finite memory, and a limited rate of data transfer between the two, now famously known as the “von Neumann bottleneck”. Not only did he establish much of what we need to solve real problems, such as fixed-point arithmetic, program flow control, conditional branching, and subroutines, to name only a few; working closely with manufacturing teams, he also did much of the hardware and materials engineering himself. Everything was done in machine code, and he was a big advocate of it. Only later, as ordinary mortals needed easier ways to solve increasingly complex problems, were high-level languages created, which are for the most part merely a facade over assembly language on von Neumann serial machines.

Even after 60 years we have failed to be liberated from the von Neumann bottleneck. What von Neumann made of Turing’s blueprint is still in front of every one of us, every day. Had he not died so early, at the age of 54, leaving the ordinary world behind, he might very well have seen his efforts toward a parallel computer, based on neuron-like cellular automata, flourish.

In 1977, the American computer scientist John Warner Backus received the prestigious ACM Turing Award. In his award lecture, he stepped away from the world he’d helped build and asked, “Can Programming Be Liberated from the von Neumann Style?”. Backus’s paper is now more than 30 years old. Was it an absolute dead end? Did his idea eventually become a part of the reality of programming?

History Repeats Itself, Neither as Tragedy, Nor as Farce

Alonzo Church, who would later supervise Turing’s PhD at Princeton, was working on a mathematical model for expressing computations at the same time Turing came up with his Turing machines. Church had created a system called the Lambda Calculus, which builds up computations by composing functions. Church and Turing realized the two models were indeed equivalent, despite their very different appearance. What we now call the Church-Turing thesis holds that the class of lambda-definable functions, whose values can be calculated by repeated substitution, coincides with the class of all computable functions: any effectively computable function can be calculated by a universal Turing machine. The Lambda Calculus remained in the world of pen and paper, but Turing machines were easier to implement in 1950s hardware.
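To get a feel for how the Lambda Calculus builds computation out of nothing but function composition, here is a small sketch of Church numerals in Scala. This is my own illustration, assuming Scala 3; the names `zero`, `succ`, `plus`, and `toInt` are mine. The idea: the number n is encoded as “apply a function n times”.

```scala
// A Church numeral takes a function f and a value a,
// and applies f to a, n times.
type Church[A] = (A => A) => A => A

def zero[A]: Church[A] = f => a => a                         // apply f zero times
def succ[A](n: Church[A]): Church[A] = f => a => f(n(f)(a))  // one more application
def plus[A](m: Church[A], n: Church[A]): Church[A] =
  f => a => m(f)(n(f)(a))                                    // n applications, then m more

// Decode a numeral back to an ordinary Int by choosing f = "+1" and a = 0.
def toInt(n: Church[Int]): Int = n(_ + 1)(0)

val one   = succ(zero[Int])
val two   = succ(one)
val three = plus(one, two)
```

Here addition is not a built-in operation on bits in memory; it is literally the composition of repeated function application, which is the spirit of Church’s system.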

In 1958, the year after von Neumann passed away, John McCarthy invented a mathematical notation called Lisp, which was later developed into a programming language by his graduate students. Lisp was based on the Lambda Calculus and enabled us to express programs much more conveniently than von Neumann instructions allow. That Lisp is based on the Lambda Calculus is the key, and it separates Lisp from shallow hacks that only provide syntactic sugar for programming a von Neumann machine. It is a model for writing programs that are extremely expressive and very powerful. Because they don’t work by side effect, functional languages translate more easily to parallel machines. Like Lisp, Scala is one of those languages that encourage functional programming while packing some practical additions that let us use side effects. Some people strongly believe that side effects must be avoided at all costs, but as long as we code on von Neumann machines, using them is going to be practical.
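The contrast between the two styles is easy to show in Scala itself. A minimal sketch (the function names are mine): the same computation written first in the von Neumann style, updating a memory cell step by step, and then in the functional style, as a composition of functions.

```scala
// Von Neumann style: mutate an accumulator in place, one instruction at a time.
def sumOfSquaresImperative(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) total += x * x
  total
}

// Functional style: describe the result by composing functions.
// No mutation means the squarings are independent of one another and
// could, in principle, be evaluated in any order, or in parallel.
def sumOfSquaresFunctional(xs: List[Int]): Int =
  xs.map(x => x * x).sum
```

Both return the same answer, but only the first commits to a particular sequence of memory updates; the second leaves the evaluation order open, which is exactly the property that makes side-effect-free code easier to move onto parallel machines.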

In the end, FP might fade away, who knows. Claims made about FP have been challenged, but learning to think in functional programming terms today is unlikely to be a waste of time. In business programming, datasets grow huge: too large for any single machine, no matter how fast it is. That requires a room full of machines, and we end up facing the same problem: how do we scale? In the area of distributed computing, the benefits are already more concrete. I think there’s also a strong social aspect to it, though one might say there is a social aspect to everything! Functional programming seems to be gaining ground these days, at least when it comes to programming for the web, and as programming for multicore processors becomes more and more common.

Epilogue

I started with a question about functional programming and ended up on a never-ending emotional journey full of extraordinary people. Our universe came into existence in the image of these great men; Computer Science could very well have been called the Alan-John masterpiece. Now you can imagine how depressing it was for me when, in one of the professional courses I attended, the instructor asked and no one knew who Alan Turing was. I have tried here to somehow capture the history of functional programming, to better understand the reasons for this recent hype around FP and to cure my confusion. I’m still dazed, and without a very clear answer to that ‘why’! But now I’m a firm believer that to get better at this field, I must start again, from the very beginning. Fun times ahead in the coming year.