
Alan Turing and the Power of Negative Thinking

Turing’s diagonalization proof is a version of this game where the questions run through the infinite list of possible algorithms, repeatedly asking, “Can this algorithm solve the problem we’d like to prove uncomputable?”

“It’s sort of ‘infinity questions,’” Williams said.

To win the game, Turing needed to craft a problem where the answer is no for every algorithm. That meant identifying a particular input that makes the first algorithm output the wrong answer, another input that makes the second one fail, and so on. He found those special inputs using a trick similar to one Kurt Gödel had recently used to prove that self-referential assertions like “this statement is unprovable” spelled trouble for the foundations of mathematics.

The key insight was that every algorithm (or program) can be represented as a string of 0s and 1s. That means an algorithm can take the code of another algorithm as an input, just as the error-checking program did in the earlier example. In principle, an algorithm can even take its own code as an input.
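To make that concrete, here is a minimal Python sketch. The count_zeros function is an invented toy, not anything from Turing's proof; the point is only that a program's code is just data, so it can be handed to a program as input, even to the very program it describes.

```python
import inspect

def count_zeros(program_text: str) -> int:
    # A toy "algorithm": count occurrences of the character "0"
    # in whatever program text it is given as input.
    return program_text.count("0")

# A program's code is ultimately just a string of symbols (and, at
# the machine level, of 0s and 1s), so it can be handed to another
# program as ordinary input...
own_code = inspect.getsource(count_zeros)

# ...and nothing stops us from feeding a program its own code.
print(count_zeros(own_code))
```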

With this insight, we can define an uncomputable problem like the one in Turing’s proof: “Given an input string representing the code of an algorithm, output 1 if that algorithm outputs 0 when its own code is the input; otherwise, output 0.” Every algorithm that tries to solve this problem will produce the wrong output on at least one input—namely, the input corresponding to its own code. That means this perverse problem can’t be solved by any algorithm whatsoever.
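A small Python sketch can show the trap springing on one concrete candidate. The helper run, the string solver_code, and the function name algorithm are all hypothetical scaffolding introduced for illustration; the sketch simply asks a candidate solver about its own code and watches it give the wrong answer.

```python
def run(code: str, input_string: str) -> int:
    # Hypothetical helper: execute the program text `code`, which is
    # assumed to define a function called `algorithm`, on the input.
    namespace = {}
    exec(code, namespace)
    return namespace["algorithm"](input_string)

# One candidate solver, written out as program text so that run()
# can execute it. This candidate answers 0 on every input.
solver_code = """
def algorithm(code):
    return 0
"""

# The solver's answer when asked about its own code:
answer = run(solver_code, solver_code)

# The correct answer, by the problem's definition: 1 if the algorithm
# outputs 0 on its own code, otherwise 0.
correct = 1 if run(solver_code, solver_code) == 0 else 0

print(answer, correct)  # prints "0 1": the solver fails on this input
assert answer != correct
```

The same flip defeats any candidate, not just this one: if it answers 0 on its own code, the correct answer is 1, and if it answers anything else, the correct answer is 0, so every candidate is wrong on at least that one input.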

What Negation Can’t Do

Computer scientists weren’t yet through with diagonalization. In 1965, Juris Hartmanis and Richard Stearns adapted Turing’s argument to prove that not all computable problems are created equal—some are intrinsically harder than others. That result launched the field of computational complexity theory, which studies the difficulty of computational problems.

But complexity theory also revealed the limits of Turing’s contrary method. In 1975, Theodore Baker, John Gill, and Robert Solovay proved that many open questions in complexity theory can never be resolved by diagonalization alone. Chief among these is the famous P versus NP problem, which asks whether all problems with easily checkable solutions are also easy to solve with the right ingenious algorithm.

Diagonalization’s blind spots are a direct consequence of the high level of abstraction that makes it so powerful. Turing’s proof didn’t involve any uncomputable problem that might arise in practice—instead, it concocted such a problem on the fly. Other diagonalization proofs are similarly aloof from the real world, so they can’t resolve questions where real-world details matter.

“They handle computation at a distance,” Williams said. “I imagine a guy who is dealing with viruses and accesses them through some glove box.”

The failure of diagonalization was an early indication that solving the P versus NP problem would be a long journey. But despite its limitations, diagonalization remains one of the key tools in complexity theorists’ arsenal. In 2011, Williams used it together with a raft of other techniques to prove that a certain restricted model of computation couldn’t solve some extraordinarily hard problems—a result that had eluded researchers for 25 years. It was a far cry from resolving P versus NP, but it still represented major progress.

If you want to prove that something’s not possible, don’t underestimate the power of just saying no.


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
