Last Tech Tuesday, I introduced the halting problem. As you may recall, the upshot was that there is no general algorithm that will determine whether a Turing Machine will stop for a particular input. Now you might at first think that this is not a big deal. After all, do we really care whether a Turing Machine stops or not? As it turns out, the implications are significant, including philosophical limits on what is knowable (to humans) and the practical impossibility of building secure computer systems.
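As a quick reminder of why no such general algorithm can exist, here is a minimal Python sketch of the classic self-reference argument (a sketch only, not the full argument from last week's post). The function halts is purely hypothetical, which is the whole point: if it existed, the paradox program below would contradict it.

```python
# Hypothetical decider we pretend exists:
# halts(program, program_input) returns True if running program on
# program_input eventually stops, and False if it runs forever.

def paradox(program):
    if halts(program, program):
        # If the decider says "it stops", loop forever instead...
        while True:
            pass
    else:
        # ...and if it says "it runs forever", stop immediately.
        return

# Running paradox on itself leads to a contradiction either way:
# halts(paradox, paradox) == True  -> paradox(paradox) loops forever
# halts(paradox, paradox) == False -> paradox(paradox) stops
# So no general halts function can exist.
```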
To understand why, let’s recall that when I introduced Turing Machines, I mentioned that “to date we have not found a model of computation that is more powerful than a Turing Machine.” Now let’s turn this on its head and ask whether most existing programming languages and computer designs are as powerful as a Turing Machine. The requirements turn out to be super simple: all you really need is conditional branching, i.e. testing a condition and doing one thing if it is met and another if it is not (see my post on control structures), and variables (technically infinitely many variables, but we will see more about that in the future). Most languages have that, and so most languages are what is called Turing complete.
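To make those requirements concrete, here is a tiny Python sketch that uses nothing beyond a variable and conditional branching inside a loop. It implements the well-known Collatz procedure (chosen here purely as an illustration): despite its simplicity, nobody has proven whether it stops for every positive integer.

```python
def mystery(n):
    # Only ingredients: one variable and conditional branching in a loop.
    while n != 1:
        if n % 2 == 0:
            n = n // 2       # even: halve it
        else:
            n = 3 * n + 1    # odd: triple it and add one
    return "stopped"

# mystery(27) does eventually stop (after more than a hundred steps), but
# whether mystery(n) stops for *every* positive integer n is an open
# question (the Collatz conjecture), even though the code is a few lines long.
```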
That in turn implies that the halting problem applies to most programming languages, and thus there is no general algorithm to determine whether a program will stop on a particular input or not. But the problem is way worse than that. Seemingly trivial questions such as “will this program output the number 42?” or “will this program ever execute line 42 of its code?” can be reduced to the halting problem. Put differently, if there were a general algorithm to answer these questions, then it could be used to solve the halting problem. Since we know that the halting problem cannot be solved, the reduction shows that these problems also cannot be decided by a general algorithm.
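Here is a sketch of how such a reduction works, in Python for concreteness. The function outputs_42 is a hypothetical decider that takes a program and answers whether it ever outputs 42; the sketch shows that having it would hand us a solution to the halting problem, which we know cannot exist.

```python
def halts(program, program_input):
    # Build a wrapper that runs the original program to completion
    # and only then prints 42.
    def wrapper():
        program(program_input)   # may or may not finish
        print(42)                # reached exactly when the line above finishes
    # The wrapper outputs 42 if and only if program halts on program_input,
    # so a general answer to "does wrapper ever output 42?" would also be
    # a general answer to "does program halt on program_input?".
    return outputs_42(wrapper)   # hypothetical decider; cannot actually exist
```

The same trick works for questions like “will line 42 ever execute?” or “will this program ever call a dangerous function?”, which is exactly what makes the security implications below so stark.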
Putting Turing completeness together with the ability to reduce other problems to the halting problem, we are faced with a very unsatisfying situation: there really isn’t much we can say about what programs do! One of the dramatic practical implications of this is that computer security is kind of an impossible task. As long as we allow general programs (meaning programs in a Turing complete language, i.e. pretty much any general purpose language), we can’t make sure that they won’t do something nasty once they execute.
Now let’s briefly consider a possible philosophical implication of the halting problem. The universe itself (and even smaller physical processes inside of it) can be interpreted as carrying out computations. The halting problem’s undecidability then suggests that many questions about those computations cannot be decided by a general algorithm either. That in turn puts a question mark behind the belief in the singularity, to the extent that a self-learning machine is still applying a general algorithm.