The current sequence of Tech Tuesday posts is about the theory of computation. We have already seen that a simple model of computation such as the Finite State Machine is very useful but has severe limitations: it cannot even decide a language as simple as all palindromes. We then learned about the Turing Machine as the most powerful known model of computation and saw how a Universal Turing Machine establishes the separation of software from hardware. This leads to a natural question: what are the limitations to what a Turing Machine can compute?
Let’s start with a seemingly simple question. Does a particular Turing Machine stop for a given input, i.e. a given initial state of the tape? This seems trivial at first. After all, a Turing Machine only has a finite set of states, and we saw that this made it possible to answer this very question for any Finite State Machine. But a Turing Machine also has an infinite tape, and that’s where the problem lies. Remember that the transition function for a Turing Machine depends on both its state and the symbol in the current square of the tape.
Even with a relatively trivial machine of half a dozen states or so, the patterns that can develop on the tape as the machine operates can be incredibly complex (and chaotic). It may take years of investigation to come up with a proof as to whether or not a particular machine stops (see for instance the investigation into so-called Busy Beaver machines). So that leads to the obvious question: is there a general way of determining for any Turing Machine whether it will stop? This is known as the “halting problem” and turns out to have far-reaching implications for the boundaries of human knowledge.
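To make this concrete, here is a minimal Turing machine simulator (a sketch I am adding for illustration, not from the original post) running the well-known 2-state busy beaver machine. The transition table is the standard one; the simulator itself is a bare-bones rendition with a step budget, since in general we cannot know in advance whether a machine halts.

```python
def run_tm(delta, start, halt, max_steps=1000):
    """Simulate a Turing machine with transition table `delta`.
    The tape is a dict from position to symbol (blank squares read as 0).
    Returns (steps taken, number of 1s on the tape) when it halts."""
    tape, head, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# The 2-state busy beaver: (state, symbol) -> (write, move, next state)
BB2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}

steps, ones = run_tm(BB2, "A", "H")  # halts after 6 steps with 4 ones written
```

Even this tiny machine wanders back and forth over the tape before halting; for machines with just a few more states, the step counts before halting (when they halt at all) grow beyond anything we can simulate, which is exactly why case-by-case analysis does not scale.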
The answer turns out to be no, and Turing proved this in his 1936 paper. I am not going to attempt to reproduce the proof here but will sketch the basic idea. The extreme simplicity of Turing Machines lets us enumerate all programs, which means we can assign a number (i) to each program. Similarly, every input to a program can be represented by just a number (x). That lets us reduce the halting problem to a hypothetical halting function h(i, x), which is 1 if program i halts on input x and 0 if it doesn’t. So now we need to show that such a function cannot in fact be computed.
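Before the proof, it is worth seeing why the obvious approach fails. A naive version of h (sketched below; the names and the generator-based modeling of "one step of a program" are my own illustrative choices) just runs the program and watches. The catch: simulation can only ever confirm halting, never rule it out.

```python
def naive_h(program, x, max_steps=10_000):
    """`program(x)` is modeled as a generator that yields once per
    simulated step and finishes when the computation halts.
    Returns 1 if we observed a halt; None means "don't know" — the
    program might still halt just past our step budget."""
    gen = program(x)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return 1   # we watched it halt: the answer is certainly 1
    return None        # budget exhausted: genuinely unresolved

def countdown(x):      # a program that plainly halts after x steps
    while x > 0:
        x -= 1
        yield

def forever(x):        # a program that never halts
    while True:
        yield
```

No matter how large we make `max_steps`, `naive_h` can never return a trustworthy 0; the halting problem asks for a function that always answers, and that is what Turing showed cannot exist.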
Let’s consider an arbitrary computable function f(a, b) – meaning a function that takes two inputs and returns an output. From this we can construct a new function g(i) as follows: g(i) is 1 if f(i, i) = 0 and undefined otherwise. Since f(a, b) is computable (by assumption), so is g(i) (this is a bit of hand waving that we will come back to). Computable means that there is a Turing Machine program that computes g. Given the enumerability of programs, the program that computes g has a number, which we will call e.
Now let’s throw the value e into f and compare that to our posited halting function h. It must be the case that either f(e, e) is 0 or it is not. If f(e, e) is 0, then g(e) is 1 by construction. But since g(e) halts, it would have to be that h(e, e) is 1 (which is different from f(e, e)). Now conversely, if f(e, e) is not 0, then g(e) is undefined, again by construction. Undefined here is implemented by an indefinitely looping program, which means that h(e, e) = 0 (again different from f(e, e)). Recall that f was an arbitrary computable function, and yet h (evaluated at e) differs from every such f. That shows that h cannot be computable.
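The construction above can be sketched in code. This is a sketch only: the enumeration of programs and the actual program number e cannot be produced in a few lines, so the comments carry that part of the argument.

```python
def make_g(f):
    """Build g from an arbitrary computable f(a, b):
    g(i) returns 1 when f(i, i) == 0 and runs forever otherwise,
    so 'undefined' is implemented as an infinite loop."""
    def g(i):
        if f(i, i) == 0:
            return 1
        while True:    # deliberately never halts
            pass
    return g

# If e is the program number of g, then in both cases h(e, e) must
# disagree with f(e, e):
#   f(e, e) == 0  ->  g(e) halts (returns 1)  ->  h(e, e) == 1
#   f(e, e) != 0  ->  g(e) loops forever      ->  h(e, e) == 0
# Since f was arbitrary, h differs from every computable function,
# so h itself cannot be computable.

g = make_g(lambda a, b: 0)  # with f identically 0, g always returns 1
```

Note that `make_g` only ever needs to call f on the diagonal, f(i, i), which is where the connection to Cantor's diagonal argument (discussed next) comes from.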
This is reminiscent of Cantor’s diagonalization proof for why there are more real numbers than integers. The diagonalization here becomes possible because we have a model of computation that lets us assign a number to each program. And because we can feed that number back to a program (including to itself) we have a way of formalizing what it means for one program to operate on another. This is the true power of Turing’s formulation – it connects programming deeply and elegantly to mathematics. We can now use one to analyze the other and in the process see the limitations of both. Next Tuesday we will investigate some of the philosophical and practical implications of the non-computability of the halting problem.