OK, so technically it is Thursday, but as it turns out I had kind of an epic day yesterday and so Uncertainty Wednesday is a day late (thus introducing some uncertainty about when you will get your next dose of Uncertainty Wednesday). Today I will continue with the PSA Test Example. Last time I showed that the probability of having cancer conditional on a positive test is P(B | H) = 0.017316. We also saw that the reason this probability, while significantly higher than the unconditional probability P(B) = 0.0031, is so surprisingly low is the high rate of false positives. A false positive occurs when a person who does not have cancer has an elevated level of PSA.
Now there are many other questions we can ask about the relationship between our signal (the PSA level, which is high H or low L) and the state of the world (healthy A or cancer B). The most immediate one is to go in the other direction. Suppose your test comes back negative, meaning the signal is L. What is the probability that you nonetheless have cancer (B)? So what we are looking for now is P(B | L), the probability of B conditional on L.
Before doing the math, let’s try to develop some intuition. We know that the unconditional probability P(B) = 0.0031. We know that P(B | H) > P(B). What do we expect for P(B | L)?
Well, if our signal is useful, then we should expect P(B | L) < P(B). Let’s see if that’s indeed the case.
Here is the math, plugging in the numbers from the original post and using the formula that expresses the probability of cancer *and* a low PSA, P({BL}), as a fraction of all cases of low PSA, P(L) (which in turn is the sum over all the elementary events in which the signal is L):
P(B | L) = P({BL}) / P(L) = P({BL}) / [P({BL}) + P({AL})] = 0.001519 / (0.001519 + 0.907179) = 0.001519 / 0.908698 = 0.001672
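For anyone who wants to check the arithmetic, here is a minimal Python sketch. The joint probabilities P({BL}) and P({AL}) are the numbers used above; P({BH}) and P({AH}) are filled in here as assumptions so that the four elementary events sum to 1 (they follow from P(B) = 0.0031 and last week's result, but are not quoted directly above).

```python
# Minimal check of the conditional probability arithmetic.
# Joint probabilities of the four elementary events (state, signal).
p_BL = 0.001519  # cancer (B) and low PSA (L) -- used above
p_AL = 0.907179  # healthy (A) and low PSA (L) -- used above
p_BH = 0.001581  # cancer (B) and high PSA (H), i.e. P(B) - P({BL}) (assumed)
p_AH = 0.089721  # healthy (A) and high PSA (H), chosen so all four sum to 1

assert abs((p_BL + p_AL + p_BH + p_AH) - 1.0) < 1e-9

p_L = p_BL + p_AL  # probability of a negative test (signal L)
print(f"P(B | L) = {p_BL / p_L:.6f}")  # 0.001672

# The same formula in the other direction reproduces last week's result.
p_H = p_BH + p_AH  # probability of a positive test (signal H)
print(f"P(B | H) = {p_BH / p_H:.6f}")  # 0.017316
```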
So what does this mean? Well, as expected, P(B | L) is lower than the unconditional probability P(B) = 0.0031, but it isn’t massively lower. P(B | L) is roughly half of P(B). Again, doing the test provided us with additional information, meaning the signal is in fact useful, but it is far from conclusive. There is the potential for false negatives: the test is negative (low PSA, signal L), but the person does have cancer (state B).
Next week we will work through all of this in a different way: we will use absolute numbers instead of probabilities. The math is exactly the same, but many people nonetheless find it more intuitive. After that we will look at “sensitivity” and “specificity,” two widely used measures of the quality of a test.
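As a quick preview of the absolute-numbers view, here is a sketch that scales the same joint probabilities up to counts; the population of 1,000,000 people is a hypothetical chosen for illustration, not a number from the post.

```python
# Same computation with counts instead of probabilities, for a
# hypothetical population of 1,000,000 people (an assumption, not a
# number from the post).
population = 1_000_000

cancer_and_low = round(0.001519 * population)   # 1,519 people: cancer, low PSA
healthy_and_low = round(0.907179 * population)  # 907,179 people: healthy, low PSA

negative_tests = cancer_and_low + healthy_and_low  # 908,698 low PSA results
print(cancer_and_low / negative_tests)             # 0.001672 = P(B | L)
```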