“I do apologize for not being able to satisfy a lot of people’s expectations. I kind of felt powerless,” [1] said Go grandmaster Lee Sedol after his surprising 1–4 loss to the artificial intelligence AlphaGo.

Machines have conquered most of the games humankind has created, including chess, Scrabble, and even *Jeopardy!*. The ancient game of Go, exponentially more complex than chess, was once considered one of the ultimate tests of machines’ capabilities. Yet with Lee’s loss, that game too has been conquered. Given the rapid advances in artificial intelligence, one cannot help but wonder: is there any limit to what a machine can do?

While machines have become smart enough to defeat humans at sophisticated games, humans have cleverly devised a problem that machines provably cannot solve. Impressively, the problem was constructed more than 80 years ago, even before the birth of digital computers. The star of humanity who came up with this construction was the mathematician Kurt Gödel. Later, Alan Turing, the father of computer science, used Gödel’s techniques to prove an analogous theorem in the context of computation. In its simplest form, the theorem states that there exist problems no machine will ever be able to solve.

To the surprise of many, the problem a machine can’t solve doesn’t take a Ph.D. to understand. Everything a computer does involves executing a program, from opening a web page and displaying photos to playing videos. A program is a set of instructions that a computer can understand and perform. Much like an IKEA assembly booklet, a program specifies each step the computer needs to take. The program halts when the computer finishes performing all the steps. Some programs, though, never halt, because they contain an infinite loop or a logical error. When your computer annoyingly freezes and stops listening to your commands, you’ve probably encountered a program that doesn’t halt.
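To make “halting” concrete, here is a minimal Python sketch (both function names are my own, invented for illustration): one program that halts after a few steps, and one that loops forever.

```python
def halts_after_three_steps():
    total = 0
    for i in range(3):      # three steps, then the program halts
        total += i
    return total

def never_halts():
    while True:             # an infinite loop: this call never returns
        pass

halts_after_three_steps()   # finishes immediately
# never_halts()             # uncommenting this line would freeze the program
```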

The simple question a machine cannot answer involves those programs that freeze. Known as the Halting Problem, the question is so simple that we can state it in a single line:

**Given a program and inputs, answer whether the program halts on those inputs.**

Why can’t a modern machine, which can perform more than 3 billion operations per second and beat the best Go master, solve the Halting Problem? After all, the problem looks simple. Why not just run the program on a computer and check? If the program halts, then we know the answer is “yes.”

With this method, though, we can’t reliably distinguish between a program that never halts and a program that is just taking a long time to finish. If we start running the program today and it doesn’t finish, we can’t say with full confidence that it does not halt, because it may stop tomorrow. And if it doesn’t stop tomorrow, it may still halt the day after. Think about the times you see a spinning gray circle in your browser: you are frustrated, sometimes even torn apart, not knowing whether the page is just taking a long time or whether something has already gone wrong and it will never stop spinning.
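The “just run it and check” strategy can be sketched in a few lines of Python (the name `naive_halts` is hypothetical, for illustration). Notice the asymmetry: this checker can confirm that a program halts, but it can never report “does not halt,” because in that case it is stuck waiting itself.

```python
def naive_halts(program, arg):
    """Try to decide halting by simply running the program."""
    program(arg)     # if this call loops forever, naive_halts never returns
    return True      # reached only when the program has actually halted

# Works fine for programs that halt...
naive_halts(lambda n: n + 1, 41)        # returns True

# ...but for a non-halting program, we would simply wait forever;
# there is no point at which this method can answer "False".
```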

Let’s walk through Turing’s proof that the Halting Problem is unsolvable by exploring the story of Magic Land.

Magic Land is a mysterious ancient kingdom. The kingdom gets its name from its millions and millions of magic boxes, which can transform one item into another. Citizens of Magic Land are obsessed with their magic boxes: they work, relax, and even breathe with the boxes’ help. Each box has a label describing, step by step, how to construct the box. For example, the following magic box, after several minutes of hard work, produces a teddy bear when you give it a singing cat.

Some magic boxes can answer questions about other magic boxes. A “describe the color of the given magic box” magic box can tell you the color of the magic box you give it.

Among all the magic boxes in Magic Land, some are broken. If you give a broken magic box a particular input, it breaks and outputs nothing. Below is a broken box. If you give it a basketball, it will operate correctly and, after some time, output a football. But if you give it a baseball, it will break and output nothing. More importantly, just as with the spinning circle in your browser, you cannot be sure whether the magic box is broken or just taking a long time.

Citizens of Magic Land are deeply troubled by the broken magic boxes in their lives. After all, what is so magic about a broken box? The king of Magic Land decides to run a nationwide campaign to find a solution. If anyone builds a magic box that can distinguish normal boxes from broken ones, the king promises to award that person 1 million Magic Coins and the royal title of “The Magic Person.” The king puts his trusted chief scientist, not coincidentally named Magic Turing, in charge of the endeavor. As shown below, the submission box is what the kingdom is looking for: when given a magic box and an item, the submission box should output either “Yes, the magic box functions normally” or “No, it is broken.”

Motivated by patriotism and the reward, millions of Magic Land citizens start building magic boxes to distinguish normal boxes from broken ones. Not long after, thousands of boxes are submitted, each claiming to have solved the problem.

To test whether these submission boxes actually solve the problem as they claim, Magic Turing comes up with a unique box, named “Turing’s Contradiction Box.” If a submission box cannot correctly predict whether “Turing’s Contradiction Box” is broken or not, the submission box does not solve the problem.

Here is the inside of Turing’s Contradiction Box. It takes in a label, constructs the magic box that the label describes, and feeds both the label and the constructed box into the submission box. If the submission box outputs “No, the box is broken,” then Turing’s Contradiction Box outputs “Yes.” Otherwise, if the submission box outputs “Yes, the box is normal,” then Turing’s Contradiction Box breaks right after seeing that output.

Surprisingly, none of the submitted boxes can correctly predict whether Turing’s Contradiction Box is broken or not. The following figure shows how Magic Turing tests each submission box: he gives the submission box Turing’s Contradiction Box, together with its own label as the item. Let’s go over what would happen by looking inside Turing’s Contradiction Box.

Suppose the submission box says “No, the given box breaks.” Is this the correct answer? Look inside Turing’s Contradiction Box, shown in the figure above: when the submission box outputs “No,” Turing’s Contradiction Box functions normally and outputs “Yes.” Therefore, the submission box gives a wrong answer.

Suppose instead the submission box says “Yes, the given box functions normally on the given item.” Inside Turing’s Contradiction Box (figure above), if the submission box says “Yes,” then Turing’s Contradiction Box breaks. Again, the submission box gives a wrong answer.

In fact, not only does this particular submission box fail; none of the submitted boxes can work, because we can apply the same reasoning to every one of them. No box that determines whether another box is broken can exist, and the kingdom’s endeavor is doomed.

In our world, the story of Magic Land is what Alan Turing proved in 1936. The magic boxes in the story are computer programs. The broken magic boxes are programs that do not halt. The submission box that doesn’t exist is the machine that can answer whether another program will halt on a given input. In other words, machines cannot solve the following simple problem:

**Given a program and inputs, answer whether the program will halt on the input.**
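Turing’s Contradiction Box can be sketched in Python. Everything here is illustrative: `halts` stands for the hypothetical submission box that, by the proof, cannot exist, and passing the function itself stands in for “constructing a box from its label.” Given any claimed halting checker, we build a program that does the opposite of whatever the checker predicts about it:

```python
def make_contradiction(halts):
    """Given a claimed checker halts(program, input) -> bool,
    build a program that defeats it (Turing's Contradiction Box)."""
    def contradiction(label):
        if halts(contradiction, label):
            while True:      # predicted to halt -> loop forever instead
                pass
        else:
            return "halted"  # predicted to loop forever -> halt instead
    return contradiction

# Feed the contradiction program its own description, as Magic Turing did.
# Whatever the checker answers, it is wrong. For example, a checker that
# always says "broken" is immediately refuted:
always_says_broken = lambda program, label: False
c = make_contradiction(always_says_broken)
c(c)                         # the checker said "never halts" -- yet it halted
```

A checker that answers “yes” fares no better: `contradiction` would then loop forever, again contradicting the prediction. Since this works against any `halts` we plug in, no correct checker can exist.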

Although machines have beaten humanity’s best Go players, the Halting Problem marks a limit on machines’ capabilities. Of course, aside from such cleverly constructed yet not-so-realistic questions, machines can still do an awful lot of useful things, so don’t expect them to stop getting smarter anytime soon.

References:
