Last Uncertainty Wednesday, I introduced sensitivity and specificity as measures of how good a test is (or, using the language of our framework, how strong a signal is). We derived the following formula:
P(B | H) = P(B)/P(H) * P(H | B)
which relates P(B | H), i.e. the probability of the state of the world being B conditional on receiving signal H, to P(H | B), i.e. the probability of receiving signal H when the world is B, aka the sensitivity of the test.
Let’s rewrite this slightly to get a better handle on what it means:
P(B | H) = [P(H | B) / P(H)] * P(B)
So now we see that the formula lets us turn an unconditional probability P(B) into a conditional one. Or put differently, it relates the probability of the world being in state B *before* we have observed a signal to the probability *after* we have observed signal H. That’s why P(B) is also sometimes referred to as the “prior” and P(B | H) as the “posterior” probability.
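To make the prior-to-posterior mechanics concrete, here is a minimal Python sketch of the rewritten formula (the function and parameter names are my own labels, not notation from the series):

```python
def posterior(prior, sensitivity, p_signal):
    """Bayes' rule in the rewritten form:
       P(B | H) = [P(H | B) / P(H)] * P(B)

    prior       -- P(B), probability of state B before seeing any signal
    sensitivity -- P(H | B), probability of signal H when the world is B
    p_signal    -- P(H), unconditional probability of observing signal H
    """
    lift = sensitivity / p_signal  # the multiplier applied to the prior
    return lift * prior
```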
Let’s plug in all the numbers we have from the previous posts based on the PSA Test Example:
P(B | H) = [0.51 / 0.091302] * 0.0031 = 5.585858 * 0.0031 = 0.017316
Thankfully this foots with the number for P(B | H) we had found originally, and it now shows clearly that this test gives us about a 5.58x lift, meaning the posterior probability is 5.58 times as high as the prior probability before we received the signal.
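Using the sketch above with the PSA numbers reproduces the same result:

```python
print(posterior(prior=0.0031, sensitivity=0.51, p_signal=0.091302))
# 0.01731616...  matches P(B | H) from the original calculation

print(0.51 / 0.091302)
# 5.5858...      the ~5.58x lift over the prior
```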
We can also see that this “lift” number is linear in the sensitivity: a 20% better test / stronger signal, in the sense of a 20% higher sensitivity, will give us 20% more lift on top of our prior probability.
But we also clearly see that the absolute effect depends equally on P(H), that is, how likely the signal itself is. A less likely signal, meaning a lower P(H), gives us much more lift.
So what makes for a really good test? A really good test has high sensitivity (big numerator) on an unlikely signal (small denominator).
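Both observations are easy to check numerically. Note that for this illustration P(H) is held fixed while the sensitivity varies, which isolates the formula’s behavior (in a real test, changing the sensitivity would also shift P(H)):

```python
base_lift = 0.51 / 0.091302                  # ~5.5859

# 20% higher sensitivity -> exactly 20% more lift (linear in the numerator)
print((0.51 * 1.2) / 0.091302 / base_lift)   # 1.2

# signal half as likely -> twice the lift (inverse in the denominator)
print(0.51 / (0.091302 / 2) / base_lift)     # 2.0
```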