And we are back! Sorry for the, ahem, uncertainty about when Uncertainty Wednesday would appear. I decided to work less this year on vacation and that included not blogging. Today we will wrap up the Zoltar Example. In Part 3, I first introduced the idea that even a very simple explanation in the form of the equation B + G = N (reminder: B is the number of bad fortunes, G the number of good ones, and N the total number of fortunes) lets us start to reason about the uncertainty involved. In particular, I discussed why assuming that P(B) = P(G) = 0.5, i.e. that both types of fortunes are equally likely, reflects having *no* prior information.
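To make that recap concrete, here is a minimal sketch in Python (my own illustration; the function name and everything in it are made up for this post) of what the simple explanation amounts to: each fortune comes out good or bad independently, with equal probability when we have no prior information.

```python
import random

def zoltar_no_prior_info(n_fortunes, p_good=0.5):
    """Simulate the simple explanation: each fortune is good (G) or bad (B)
    independently, with p_good = 0.5 reflecting no prior information."""
    return ["G" if random.random() < p_good else "B" for _ in range(n_fortunes)]

fortunes = zoltar_no_prior_info(20)
print("".join(fortunes))                                     # e.g. GBGGBBGB...
print("G:", fortunes.count("G"), "B:", fortunes.count("B"))  # and B + G = N = 20
```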
Now let’s consider observing the machine for some time. Let’s pick an extreme case. We stand there and we see Zoltar issue 20 good fortunes (G) in a row and not a single bad fortune (B). What conclusion should we draw from this?
There are two bookends to the possibilities. One extreme conclusion is that the machine is “broken” (e.g. something got stuck inside) and as a result only ever issues good fortunes. Meaning, no matter how long we continue to stand there, we will only ever see G from here on out.
The other extreme conclusion is that our initial assumption of equal probability is in fact correct and that we have simply observed a run of G’s, so that the next fortune could once again be a bad fortune (B) with 50% probability.
The first of these says that our observations provide a very strong signal about the internal state of the machine (namely we know perfectly what it is). The second one says that we have learned nothing about the machine. As a reminder here it is useful to go back to the framework that I laid out at the very beginning of Uncertainty Wednesday. The internals of the Zoltar Machine are the reality we are interested in. Our equation B + G = N is the current best explanation that we have. And the actually observed numbers of B and G are our observations.
Much of what we will be doing in the coming Uncertainty Wednesdays is to develop a more formal set of mechanics that will allow us to argue quantitatively about the conclusions we should draw from our observation of 20 good fortunes in a row. It will let us find a more precise place between the two extreme possibilities described above.
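To give a feel for where that quantitative argument can land, here is a rough sketch of one standard formalization (Bayesian updating with a Beta prior; this is my choice for illustration, not necessarily the mechanics we will use in the series). Note that under the unchanged 50/50 assumption the run we saw had probability 0.5^20, roughly one in a million, which is exactly why it feels like it should move us at least somewhat.

```python
# One possible formalization, assumed here for illustration: Bayesian updating
# with a Beta prior. Beta(1, 1) is flat, matching "no prior information"; each
# observed G adds 1 to alpha and each observed B adds 1 to beta.
def posterior_mean_p_good(n_good, n_bad, alpha=1.0, beta=1.0):
    """Posterior mean estimate of P(G) after the observations."""
    return (alpha + n_good) / (alpha + beta + n_good + n_bad)

print(posterior_mean_p_good(0, 0))    # 0.5    -> the no-information starting point
print(posterior_mean_p_good(20, 0))   # ~0.95  -> in between the two extremes
print(posterior_mean_p_good(200, 0))  # ~0.995 -> approaches "broken" but never certainty
```

The point is simply that the updated estimate sits strictly between the two bookends: well above 0.5, yet never claiming perfect knowledge that the machine is broken.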
Now I would be remiss if I didn’t point out that the Zoltar example also shows the possibility that we can come up with better explanations of reality over time. For instance, we might disassemble a Zoltar machine and see what gears and electronics it has on the inside to produce its fortunes. Such a deeper look might provide us with a much more precise explanation. For instance, we might discover that the machine calculates the digits of pi and then issues that many G or B fortunes (e.g. 1 G, 4 B, 1 G, 5 B, 9 G, 2 B …).
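Just to illustrate what such a deeper, deterministic explanation would look like, here is the hypothetical pi-digit mechanism in a few lines of code (purely a sketch, with the digits hard-coded). Its output can look random until you know the rule:

```python
# Hypothetical pi-digit mechanism: each digit after the decimal point sets the
# length of the next run, alternating between good (G) and bad (B) fortunes.
PI_DIGITS = [1, 4, 1, 5, 9, 2, 6, 5, 3, 5]  # hard-coded digits of pi for illustration

def pi_machine(digits):
    fortunes = []
    kind = "G"  # start with a run of good fortunes, then alternate
    for d in digits:
        fortunes.extend([kind] * d)
        kind = "B" if kind == "G" else "G"
    return fortunes

print("".join(pi_machine(PI_DIGITS)))  # GBBBBGBBBBBGGGGGGGGGBB...
```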
Or we could strengthen our observations by extending them beyond the fortunes themselves. We could ask people who get fortunes to stay in touch with us. We could then ask them to tell us whether they did in fact have a good day the next day if they received a good fortune (G), and vice versa. Again consider an extreme case: suppose that we see hundreds of people receiving B and G fortunes and then reporting back that their next day corresponded perfectly to the fortune. Clearly we would then start to question our simplistic explanation that the machine just randomly selects a fortune.
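Here is the kind of tally such follow-up reports would allow (the data below is entirely made up for illustration). Under the random-selection explanation we would expect the fortunes to match people’s next days only about half the time, so a match rate near 100% across hundreds of reports would be strong evidence against that explanation:

```python
# Made-up follow-up reports: (fortune issued, whether the next day was in fact good).
reports = [("G", True), ("B", False), ("G", True), ("B", False), ("G", True)]

def match_rate(reports):
    """Fraction of cases where the fortune correctly anticipated the next day."""
    hits = sum(1 for fortune, good_day in reports if (fortune == "G") == good_day)
    return hits / len(reports)

print(match_rate(reports))  # 1.0 here; ~0.5 is what randomly selected fortunes would give
```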
Next Wednesday we will jump into formalizing these ideas. Until then I hope the Zoltar example provided an illustration of the fundamental framework for thinking about uncertainty: in order to make headway you have to be explicit about the interaction between an explanation of reality and observations about that reality.